\newcommand{\head}[1]{\subsubsection*{#1}}
\pagestyle{headings}
\markright{Reference sheet: \texttt{natbib}}
\usepackage{shortvrb}
\MakeShortVerb{\|}
\begin{document}
\thispagestyle{plain}
\newcommand{\BibTeX}{\textsc{Bib}\TeX}
\begin{center}{\bfseries\Large
Reference sheet for \texttt{natbib}\ usage}\\
\large(Describing version \fileversion\ from \filedate)
\end{center}
\begin{quote}\slshape
For a more detailed description of the \texttt{natbib}\ package, \LaTeX\ the
source file \texttt{natbib.dtx}.
\end{quote}
\head{Overview}
The \texttt{natbib}\ package is a reimplementation of the \LaTeX\ |\cite| command,
to work with both author--year and numerical citations. It is compatible with
the standard bibliographic style files, such as \texttt{plain.bst}, as well as
with those for \texttt{harvard}, \texttt{apalike}, \texttt{chicago},
\texttt{astron}, \texttt{authordate}, and of course \texttt{natbib}.
\head{Loading}
Load with |\usepackage[|\emph{options}|]{|\texttt{natbib}|}|. See list of
\emph{options} at the end.
\head{Replacement bibliography styles}
I provide three new \texttt{.bst} files to replace the standard \LaTeX\
numerical ones:
\begin{quote}\ttfamily
plainnat.bst \qquad abbrvnat.bst \qquad unsrtnat.bst
\end{quote}
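A minimal, hypothetical document skeleton combining the loading call with one of these replacement styles might look as follows (\texttt{jon90} is the placeholder key used throughout this sheet; \texttt{refs.bib} is an assumed database name):

```latex
\documentclass{article}
\usepackage[round,authoryear]{natbib}% options as listed at the end of this sheet
\bibliographystyle{plainnat}
\begin{document}
\citet{jon90} showed this first \citep[see also][chap.~2]{jon90}.
\bibliography{refs}% assumed database file refs.bib
\end{document}
```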
\head{Basic commands}
The \texttt{natbib}\ package has two basic citation commands, |\citet| and
|\citep| for \emph{textual} and \emph{parenthetical} citations, respectively.
There also exist the starred versions |\citet*| and |\citep*| that print
the full author list, and not just the abbreviated one.
All of these may take one or two optional arguments to add some text before
and after the citation.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citet{jon90}| & Jones et al. (1990)\\
|\citet[chap.~2]{jon90}| & Jones et al. (1990, chap.~2)\\[0.5ex]
|\citep{jon90}| & (Jones et al., 1990)\\
|\citep[chap.~2]{jon90}| & (Jones et al., 1990, chap.~2)\\
|\citep[see][]{jon90}| & (see Jones et al., 1990)\\
|\citep[see][chap.~2]{jon90}| & (see Jones et al., 1990, chap.~2)\\[0.5ex]
|\citet*{jon90}| & Jones, Baker, and Williams (1990)\\
|\citep*{jon90}| & (Jones, Baker, and Williams, 1990)
\end{tabular}
\end{quote}
\head{Multiple citations}
Multiple citations may be made by including more than one
citation key in the |\cite| command argument.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citet{jon90,jam91}| & Jones et al. (1990); James et al. (1991)\\
|\citep{jon90,jam91}| & (Jones et al., 1990; James et al., 1991)\\
|\citep{jon90,jon91}| & (Jones et al., 1990, 1991)\\
|\citep{jon90a,jon90b}| & (Jones et al., 1990a,b)
\end{tabular}
\end{quote}
\head{Numerical mode}
These examples are for author--year citation mode. In numerical mode, the
results are different.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citet{jon90}| & Jones et al. [21]\\
|\citet[chap.~2]{jon90}| & Jones et al. [21, chap.~2]\\[0.5ex]
|\citep{jon90}| & [21]\\
|\citep[chap.~2]{jon90}| & [21, chap.~2]\\
|\citep[see][]{jon90}| & [see 21]\\
|\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex]
|\citep{jon90a,jon90b}| & [21, 32]
\end{tabular}
\end{quote}
\head{Suppressed parentheses}
As an alternative form of citation, |\citealt| is the same as |\citet| but
\emph{without parentheses}. Similarly, |\citealp| is |\citep| without
parentheses. Multiple references, notes, and the starred variants
also exist.
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citealt{jon90}| & Jones et al.\ 1990\\
|\citealt*{jon90}| & Jones, Baker, and Williams 1990\\
|\citealp{jon90}| & Jones et al., 1990\\
|\citealp*{jon90}| & Jones, Baker, and Williams, 1990\\
|\citealp{jon90,jam91}| & Jones et al., 1990; James et al., 1991\\
|\citealp[pg.~32]{jon90}| & Jones et al., 1990, pg.~32\\
|\citetext{priv.\ comm.}| & (priv.\ comm.)
\end{tabular}
\end{quote}
The |\citetext| command
allows arbitrary text to be placed in the current citation parentheses.
This may be used in combination with |\citealp|.
\head{Partial citations}
In author--year schemes, it is sometimes desirable to be able to refer to
the authors without the year, or vice versa. This is provided with the
extra commands
\begin{quote}
\begin{tabular}{l@{\quad$\Rightarrow$\quad}l}
|\citeauthor{jon90}| & Jones et al.\\
|\citeauthor*{jon90}| & Jones, Baker, and Williams\\
|\citeyear{jon90}| & 1990\\
|\citeyearpar{jon90}| & (1990)
\end{tabular}
\end{quote}
\head{Forcing upper cased names}
If the first author's name contains a \textsl{von} part, such as ``della
Robbia'', then |\citet{dRob98}| produces ``della Robbia (1998)'', even at the
beginning of a sentence. One can force the first letter to be in upper case
with the command |\Citet| instead. Other upper case commands also exist.
\begin{quote}
\begin{tabular}{rl@{\quad$\Rightarrow$\quad}l}
when & |\citet{dRob98}| & della Robbia (1998) \\
then & |\Citet{dRob98}| & Della Robbia (1998) \\
& |\Citep{dRob98}| & (Della Robbia, 1998) \\
& |\Citealt{dRob98}| & Della Robbia 1998 \\
& |\Citealp{dRob98}| & Della Robbia, 1998 \\
& |\Citeauthor{dRob98}| & Della Robbia
\end{tabular}
\end{quote}
These commands also exist in starred versions for full author names.
\head{Citation aliasing}
Sometimes one wants to refer to a reference with a special designation,
rather than by the authors, e.g., as Paper~I, Paper~II. Such aliases can be
defined and used, textual and/or parenthetical with:
\begin{quote}
\begin{tabular}{lcl}
|\defcitealias{jon90}{Paper~I}|\\
|\citetalias{jon90}| & $\Rightarrow$ & Paper~I\\
|\citepalias{jon90}| & $\Rightarrow$ & (Paper~I)
\end{tabular}
\end{quote}
These citation commands function much like |\citet| and |\citep|: they may
take multiple keys in the argument, may contain notes, and are marked as
hyperlinks.
\head{Selecting citation style and punctuation}
Use the command |\bibpunct| with one optional and 6 mandatory arguments:
\begin{enumerate}
\item the opening bracket symbol, default = (
\item the closing bracket symbol, default = )
\item the punctuation between multiple citations, default = ;
\item the letter `n' for numerical style, or `s' for numerical superscript
style, any other letter for
author--year, default = author--year;
\item the punctuation that comes between the author names and the year;
\item the punctuation that comes between years or numbers when common author
lists are suppressed (default = ,);
\end{enumerate}
The optional argument is the character preceding a post-note, default is a
comma plus space. In redefining this character, one must include a space if
one is wanted.
Example~1, |\bibpunct{[}{]}{,}{a}{}{;}| changes the output of
\begin{quote}
|\citep{jon90,jon91,jam92}|
\end{quote}
into [Jones et al. 1990; 1991, James et al. 1992].
Example~2, |\bibpunct[; ]{(}{)}{,}{a}{}{;}| changes the output of
\begin{quote}
|\citep[and references therein]{jon90}|
\end{quote}
into (Jones et al. 1990; and references therein).
\head{Other formatting options}
Redefine |\bibsection| to the desired sectioning command for introducing
the list of references. This is normally |\section*| or |\chapter*|.
Define |\bibpreamble| to be any text that is to be printed after the heading but
before the actual list of references.
Define |\bibfont| to be a font declaration, e.g.\ |\small| to apply to
the list of references.
Define |\citenumfont| to be a font declaration or command like |\itshape|
or |\textit|.
Redefine |\bibnumfmt| as a command with an argument to format the numbers in
the list of references. The default definition is |[#1]|.
The indentation after the first line of each reference is given by
|\bibhang|; change this with the |\setlength| command.
The vertical spacing between references is set by |\bibsep|; change this with
the |\setlength| command.
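Taken together, these hooks could be set as in the following sketch (all values purely illustrative):

```latex
\renewcommand{\bibsection}{\section*{References}}% natbib defines this
\newcommand{\bibpreamble}{Works are cited by author and year.\par\medskip}
\newcommand{\bibfont}{\small}
\newcommand{\citenumfont}{\itshape}
\renewcommand{\bibnumfmt}[1]{(#1)}% default prints [#1]
\setlength{\bibhang}{1.5em}
\setlength{\bibsep}{0.5ex plus 0.3ex}
```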
\head{Automatic indexing of citations}
If one wishes to have the citations entered in the \texttt{.idx} indexing
file, it is only necessary to issue |\citeindextrue| at any point in the
document. All following |\cite| commands, of all variations, then insert
the corresponding entry to that file. With |\citeindexfalse|, these
entries will no longer be made.
\head{Use with \texttt{chapterbib} package}
The \texttt{natbib}\ package is compatible with the \texttt{chapterbib} package
which makes it possible to have several bibliographies in one document.
The package makes use of the |\include| command, and each |\include|d file
has its own bibliography.
The order in which the \texttt{chapterbib} and \texttt{natbib}\ packages are loaded
is unimportant.
The \texttt{chapterbib} package provides an option \texttt{sectionbib}
that puts the bibliography in a |\section*| instead of |\chapter*|,
something that makes sense if there is a bibliography in each chapter.
This option will not work when \texttt{natbib}\ is also loaded; instead, add
the option to \texttt{natbib}.
Every |\include|d file must contain its own
|\bibliography| command where the bibliography is to appear. The database
files listed as arguments to this command can be different in each file,
of course. However, what is not so obvious, is that each file must also
contain a |\bibliographystyle| command, \emph{preferably with the same
style argument}.
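A hypothetical two-chapter setup would therefore be structured like this (file and database names are placeholders):

```latex
% main.tex
\documentclass{book}
\usepackage{chapterbib}
\usepackage[authoryear,sectionbib]{natbib}% sectionbib given to natbib, not chapterbib
\begin{document}
\include{chap1}
\include{chap2}
\end{document}

% chap1.tex (chap2.tex is analogous)
\chapter{First topic}
As shown by \citet{jon90} \dots
\bibliographystyle{plainnat}% preferably the same style in every file
\bibliography{refs1}% database names may differ between files
```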
\head{Sorting and compressing citations}
Do not use the \texttt{cite} package with \texttt{natbib}; rather use one of the
options \texttt{sort} or \texttt{sort\&compress}.
These also work with author--year citations, making multiple citations appear
in their order in the reference list.
\head{Long author list on first citation}
Use option \texttt{longnamesfirst} to have first citation automatically give
the full list of authors.
Suppress this for certain citations with |\shortcites{|\emph{key-list}|}|,
given before the first citation.
\head{Local configuration}
Any local recoding or definitions can be put in \texttt{natbib}\texttt{.cfg} which
is read in after the main package file.
\head{Options that can be added to \texttt{\char`\\ usepackage}}
\begin{description}
\item[\ttfamily round] (default) for round parentheses;
\item[\ttfamily square] for square brackets;
\item[\ttfamily curly] for curly braces;
\item[\ttfamily angle] for angle brackets;
\item[\ttfamily colon] (default) to separate multiple citations with
colons;
\item[\ttfamily comma] to use commas as separators;
\item[\ttfamily authoryear] (default) for author--year citations;
\item[\ttfamily numbers] for numerical citations;
\item[\ttfamily super] for superscripted numerical citations, as in
\textsl{Nature};
\item[\ttfamily sort] orders multiple citations into the sequence in
which they appear in the list of references;
\item[\ttfamily sort\&compress] as \texttt{sort} but in addition multiple
numerical citations are compressed if possible (as 3--6, 15);
\item[\ttfamily longnamesfirst] makes the first citation of any reference
the equivalent of the starred variant (full author list) and subsequent
citations normal (abbreviated list);
\item[\ttfamily sectionbib] redefines |\thebibliography| to issue
|\section*| instead of |\chapter*|; valid only for classes with a
|\chapter| command; to be used with the \texttt{chapterbib} package;
\item[\ttfamily nonamebreak] keeps all the authors' names in a citation on
one line; causes overfull hboxes but helps with some \texttt{hyperref}
problems.
\end{description}
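For example, superscripted numerical citations compressed into ranges, as in \textsl{Nature}, could be requested with this hypothetical combination:

```latex
\usepackage[super,sort&compress]{natbib}
```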
\end{document}
\section{Introduction} \label{sec:intro}
\setcounter{footnote}{12}
The Galactic stellar halo is expected to be assembled through a succession of merging events between the Milky Way and dwarf galaxies of various masses in the context of the hierarchical formation paradigm \citep{sz1978, White1991cdm, Kauffmann1993, Springel2006}. Upon interacting with the Galactic gravitational potential well, the constituent stars of these satellites become tidally unbound and, over time, phase-mixed into a smooth halo \citep[e.g.,][]{HelmiWhite1999}. The intermediate stage of this process is characterized by the appearance of stellar streams, spatially elongated structures produced by accreted debris that remains kinematically cohesive \citep{Johnston1998, BullockJohnston2005, Cooper2010, Cooper2013, Pillepich2015halos, Morinaga2019_MWhalos}.
The magnificence of immense stellar streams can be appreciated both in external massive galaxies (e.g., M31/Andromeda, NGC~5128/Centaurus A, and M104/Sombrero; \citealt{Ibata2001streamM31}, \citealt{Crnojevic2016_streamsCenA}, \citealt{MartinezDelgado2021sombrero}) as well as in the Milky Way itself (as illustrated by the ``Field of Streams''; \citealt{Belokurov2006Streams}). Indeed, the textbook example of the above-described process is the Sagittarius (Sgr) stream \citep[e.g.,][]{Mateo1998}, the tidal tails produced by the ongoing destruction of the Sgr dwarf spheroidal (dSph) galaxy \citep{Ibata1994, Ibata1995}.
Over the past couple of decades, the Sgr stream has been mapped across ever-increasing areas of the sky \citep{Mateo1996, Mateo1998, Alard1996sgr, Ibata2001sgrStream, Newberg2003SgrStream, MartinezDelgado2004sgr}. Eventually, wide-area photometric data
allowed us to contemplate the grandiosity of Sgr stream throughout both hemispheres \citep{Majewski2003, Yanny2009sgr}. Furthermore, observations of distant halo tracers \citep[e.g., RR Lyrae stars;][]{Vivas2005sgrRRL} and line-of-sight velocity ($v_{\rm los}$) measurements \citep[][]{Majewski2004sgr} served as constraints for an early generation of $N$-body simulations that attempted to reproduce the phase-space properties of
the stream \citep{HelmiWhite2001sgr, Helmi2004sgr, Law2005sgr, Fellhauer2006sgr, Penarrubia2010sgr}. These works culminated in the landmark model of \citet{LM2010modelSgrStream}, which was capable of reproducing most of Sgr stream's features known at the time.
\defcitealias{Vasiliev2021tango}{V21}
Thanks to the \textit{Gaia} space mission \citep{GaiaMission}, in particular its second data release \citep[DR2;][]{gaiadr2}, precise astrometric data for more than a billion stars in the Milky Way
are now available. This crucial piece of information, i.e., proper motions and parallaxes, has revolutionized the knowledge about Galactic stellar streams \citep[e.g.,][]{Malhan2018streams, PriceWhelan2018gd1, Shipp2019streamsPMs, Riley2020streams, Li2021AtlasAliqaUma}. For instance, it has allowed the blind detection of $\mathcal{O}(10^5)$ high-probability members of Sgr stream \citep{Antoja2020SgrStream, Ibata2020SgrStream, Ramos2022sgr}, dramatically advancing our understanding of its present-day kinematics. Moreover, a misalignment between the stream's track and the motion of its debris has been identified toward the leading arm (Galactic latitude $b > 0^{\circ}$) of Sgr \citep[][hereafter \citetalias{Vasiliev2021tango}]{Vasiliev2021tango}. Such an observation can be reconciled with time-dependent perturbations induced by a massive (${\geq}10^{11}\,M_\odot$) Large Magellanic Cloud (LMC; see also \citealt{Oria2022Sgr_bifurcations} and \citealt{Wang2022arXivSgr}).
\defcitealias{Penarrubia2021sgr}{PP21}
Despite these \textit{Gaia}-led advances, a fundamental difficulty in studies of Sgr stream continues to be the large heliocentric distances of its member stars (${\gtrsim}10$\,kpc as informed by, e.g., the aforementioned \citetalias{Vasiliev2021tango} model).
This challenge is usually tackled via the utilization of stellar standard candles appropriate for the study of old stellar populations \citep[e.g.,][]{Belokurov2014sgr, Hernitschek2017sgr, Ramos2020SgrStream}. For instance, \citet[][hereby \citetalias{Penarrubia2021sgr}]{Penarrubia2021sgr} have recently utilized a sample of blue horizontal-branch and red giant-branch stars
to demonstrate that Sgr stream can be isolated from other halo substructures in angular-momentum space. Others have used similar tracers to detect the signature of Sgr stream in integrals of motion \citep{Yang2019sgrLAMOST, Yuan2019cetus, Yuan2020lms1}.
Although the usage of some specific halo tracers has been crucial for advancing our knowledge of the dynamical status of Sgr stream, it comes with the obvious caveat of limited sample sizes. Also, restricting our analyses to certain spectral types and/or evolutionary stages can introduce biases in our interpretations of entire stellar populations. One way to mitigate these limitations is to leverage both spectroscopic and photometric information from large-scale surveys in order to obtain full spectro-photometric distance estimates \citep{Santiago2016starhorse, Coronado2018lamost, McMillan2018rave, Queiroz2018, Hogg2019specphot, Leung2019distances}. In combination with informative priors from \textit{Gaia}'s parallaxes, this approach allows us to break the dwarf--giant degeneracy, a strategy pedagogically exemplified by \citet{sestito2019}.
\defcitealias{Johnson2020sgr}{J20}
Recently, \citet{Hayes2020} took advantage of such spectro-photometric distances
for stars observed during the Apache Point Observatory Galactic Evolution Experiment \citep[APOGEE;][]{apogee2017} to investigate
abundances in the Sgr system (stream$+$remnant). Significant chemical differences between the leading and trailing ($b < 0^{\circ}$) arms were reported, with the latter being more metal-rich (by ${\sim}0.3$\,dex) than the former (see also \citealt{Monaco2005sgr, Monaco2007sgr}, \citealt{JingLi2016sgr, JingLi2019sgr}, \citealt{Carlin2018sgr}, and \citealt{Ramos2022sgr}), as well as $\rm[Fe/H]$ and [$\alpha$/Fe] gradients \textit{along} the stream itself (corroborating earlier claims by, e.g., \citealt{Bellazzini2006sgr}, \citealt{Chou2007sgr, Chou2010sgr}, \citealt{Keller2010sgr}, \citealt{Shi2012sgr}, and \citealt{Hyde2015sgr}). Moreover, \citet[][referred to as \citetalias{Johnson2020sgr}]{Johnson2020sgr} investigated the stellar population(s) of Sgr stream with data from the Hectochelle in the Halo at High Resolution (H3; \citealt{Conroy2019surveyH3}) survey and spectro-photometric distances derived as in \citet[][see also \citealt{naidu2020}]{Cargile2020minesweeper}. The extended metallicity range (reaching $\rm[Fe/H] \approx -3$) probed by H3, in comparison to APOGEE ($\rm[Fe/H] \gtrsim -2$; see \citealt{Limberg2022gse} for a recent discussion), allowed these authors to uncover a metal-poor, dynamically hot, and spatially diffuse component of Sgr stream, confirming a previous suggestion by \citet{Gibbons2017sgr}.
In this contribution, we explore the phase-space and chemical properties of Sgr stream, but focusing on its very metal-poor (VMP; $\rm[Fe/H] < -2$)\footnote{Following the convention of \citet{beers2005}.} population. However, instead of dividing this substructure into distinct components (dynamically cold/metal-rich vs. dynamically hot/metal-poor) as in, e.g., \citet{Gibbons2017sgr} and \citetalias{Johnson2020sgr}, we seek to quantify the whole evolution of its kinematics as a function of chemistry. For this task, we need a large enough sample of stars covering a wide metallicity range. Hence, we do not rely on high-resolution ($\mathcal{R} \geq 20{,}000$) spectroscopy as was done by \citet[][also \citealt{Hasselquist2019sgr, Hasselquist2021dwarf_gals}]{Hayes2020} and \citetalias{Johnson2020sgr}, but rather turn our attention to low-resolution ($\mathcal{R} \sim 1800$) data from the Sloan Extension for Galactic Understanding and Exploration \citep[SEGUE;][]{yanny2009, Rockosi2022segue} survey. Atmospheric parameters provided by the SEGUE Stellar Parameter Pipeline \citep[SSPP;][]{AllendePrieto2008sspp, lee2008a, lee2008b, Lee2011sspp, Smolinski2011sspp} are combined with \textit{Gaia}'s parallaxes and broad-band photometry from various sources, similar to what was done in \citet{Queiroz2020}, to estimate spectro-photometric distances for ${\sim}175{,}000$ low-metallicity ($-3.5 \lesssim \rm[Fe/H] \leq -0.5$) stars in the SEGUE catalog. The complete description of this effort, including other spectroscopic surveys, is reserved for an accompanying paper (Queiroz et al., submitted).
This work is organized as follows. Section \ref{sec:data} describes the observational data analyzed throughout this work. Section \ref{sec:sgr_stream_segue} is dedicated to investigating the chemodynamical properties of Sgr stream in the SEGUE catalog. Comparisons with the $N$-body model of \citetalias{Vasiliev2021tango} are presented in Section \ref{subsec:model}. We explore $\alpha$-element and carbon abundances in Section \ref{sec:abundances}. Finally, Section \ref{sec:conclusions} is reserved for a brief discussion and our concluding remarks.
\section{Data} \label{sec:data}
\subsection{SEGUE\texorpdfstring{$+$}{+}\textit{Gaia}} \label{subsec:segue+gaia}
The main data set employed in this work is from SEGUE, a sub-project within the Sloan Digital Sky Survey \citep[SDSS;][]{sdssYork}. The SEGUE emphasis on the distant halo is suitable for studying Sgr \citep[and other streams;][]{Newberg2010orphan, Koposov2010gd1} and has, indeed, been extensively explored for this purpose \citep{Yanny2009sgr, Belokurov2014sgr, deBoer2014sgr, deBoer2015sgr, Gibbons2017sgr, Chandra2022echoes, Thomas&Battaglia2022cetus}. The novelty is the availability of complete phase-space information thanks to \textit{Gaia} and newly obtained spectro-photometric distances for SEGUE targets by Queiroz et al. (submitted). Hence, we are in a position to construct a larger sample of confident Sgr stream members than previous efforts.
Stellar atmospheric parameters, namely effective temperatures ($T_{\rm eff}$), surface gravity ($\log g$), and metallicities (in the form of [Fe/H]), as well as $\alpha$-element abundances ([$\alpha$/Fe]) and $v_{\rm los}$ values for SEGUE stars were obtained via application of the SSPP\footnote{Over the years, the SSPP has also been expanded to deliver carbon \citep{carollo2012, lee2013, lee2017, lee2019, Arentsen2022cemp}, nitrogen \citep[][]{Kim2022nitrogen}, and sodium \citep{Koo2022sodiumSSPP} abundances (see Section \ref{sec:carbon}).} routines. The final run of the SSPP to SEGUE spectra was presented alongside SDSS DR9 \citep[][]{SDSS_DR9} and has been included, unchanged, in all subsequent DRs. Recently, \citet{Rockosi2022segue} reevaluated the internal precision of SSPP's atmospheric parameters for DR9; these are no worse than 80\,K, 0.35\,dex, and 0.25\,dex for $T_{\rm eff}$, $\log g$, and [Fe/H], respectively, across the entire metallicity and color ranges explored. For the SEGUE/\texttt{StarHorse} run, we only consider spectra with moderate signal-to-noise ratio ($S/N > 20$ pixel$^{-1}$). In this work, we keep only those stars within $4500 < T_{\rm eff}/{\rm K} < 6500$, which is the optimal interval for the performance of the SSPP. Moreover, we limit our sample to low-metallicity stars ($\rm[Fe/H] \leq -0.5$), which removes most of the contamination from the thin disk, but maintains the majority of Sgr stream members; out of 166 stars analyzed by \citet{Hayes2020}, only 5 (3\%) show $\rm[Fe/H] > -0.5$.
We cross-match (1.5$''$ search radius) the above-described SEGUE low-metallicity sample with \textit{Gaia}'s early DR3 (EDR3; \citealt{GaiaEDR3Summary}), which provides parallaxes and absolute proper motions. In order to ensure the good quality of the data at hand, we only retain those stars whose renormalized unit weight errors are within the recommended range ($\texttt{ruwe} \leq 1.4$; \citealt{Lindegren2020_AstromSol}). Parallax biases and error inflation are handled following \citet{Lindegren2020_PlxBias} and \citet[][]{Fabricius2021}, taking into account magnitudes, colors, and on-sky positions. Stars with largely negative parallax values ($\texttt{parallax} < -5$) are discarded. Also, we emphasize that only those stars with an available parallax measurement are considered to ensure good distance results. Those stars with potentially spurious astrometric solutions are also removed ($\texttt{fidelity\_v2} < 0.5$; \citealt{Rybizki2022fidelity}).
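The sequence of cuts above can be expressed as a single boolean mask; the sketch below uses numpy, and all catalog column names (`snr`, `teff`, `feh`, `ruwe`, `parallax`, `fidelity_v2`) are assumptions for illustration only:

```python
import numpy as np

def quality_mask(cat):
    """Boolean mask combining the SEGUE and Gaia EDR3 cuts described in
    the text; column names are illustrative assumptions."""
    return (
        (cat["snr"] > 20.0)                       # S/N > 20 per pixel
        & (cat["teff"] > 4500.0) & (cat["teff"] < 6500.0)
        & (cat["feh"] <= -0.5)                    # low-metallicity sample
        & (cat["ruwe"] <= 1.4)                    # recommended astrometric range
        & np.isfinite(cat["parallax"])            # parallax must be available
        & (cat["parallax"] > -5.0)                # drop largely negative values
        & (cat["fidelity_v2"] >= 0.5)             # non-spurious solutions only
    )

# Toy catalog: the first star passes every cut, the second fails the S/N cut.
cat = {
    "snr": np.array([35.0, 15.0]),
    "teff": np.array([5200.0, 5200.0]),
    "feh": np.array([-1.8, -1.8]),
    "ruwe": np.array([1.1, 1.1]),
    "parallax": np.array([0.05, 0.05]),
    "fidelity_v2": np.array([0.9, 0.9]),
}
mask = quality_mask(cat)
```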
\begin{figure}[pt!]
\centering
\includegraphics[width=1.0\columnwidth]{segue_starhorse_mp_xyz.png}
\caption{Cartesian Galactocentric projections of the SEGUE/\texttt{StarHorse} low-metallicity sample. Top: $(X,Y)$. Bottom: $(X,Z)$. Spatial bins are color-coded by their mean [Fe/H] values. The attentive reader may notice the footprint of Sgr stream as metal-rich trails at ${|Z|} \gtrsim 20\,{\rm kpc}$.
\label{fig:xyz}}
\end{figure}
\subsection{The SEGUE/\texttt{StarHorse} Run} \label{subsec:starhorse}
\texttt{StarHorse} \citep{Santiago2016starhorse, Queiroz2018} is an isochrone-fitting code capable of delivering stellar parameters, distances, and $V$-band ($\lambda = 542\,{\rm nm}$) extinctions for individual field stars in a Bayesian framework. We applied this method to the
SEGUE$+$\textit{Gaia} EDR3 sample
in order to estimate precise distances that would allow us to study the Sgr stream; at ${\geq}10\,{\rm kpc}$ from the Sun, our derived
uncertainties are at the level of ${\sim}13\%$. The medians of the derived posterior distributions are adopted as our nominal values, while $16$th and $84$th percentiles are taken as uncertainties. The initial mass function utilized is from \citet[][]{Chabrier2003imf}. Further details regarding stellar-evolution models and geometric priors can be found in \citet{Queiroz2018, Queiroz2020} and \citet{Anders2019starhorseGaiaDR2}. A discussion on updated priors is presented in \citet[][]{Anders2022starhorseEDR3}. The full release of \texttt{StarHorse}'s data products for a variety of spectroscopic surveys, including SEGUE for the complete metallicity range, will be made available alongside a parallel effort (Queiroz et al., submitted).
For this \texttt{StarHorse} run, spectroscopic (SEGUE) and astrometric (\textit{Gaia} EDR3) information were combined with multi-wavelength photometry from various large-area surveys. Specifically, the data employed came from the Two Micron All Sky Survey \citep[2MASS;][]{2MASS}, Wide-field Infrared Survey Explorer \citep[WISE;][in particular \citealt{Schlafly2019unWISE}]{WISEsurvey2010}, and Panoramic Survey Telescope and Rapid Response System \citep[Pan-STARRS1 DR1;][with corrections following \citealt{Scolnic2015panstarrs}]{Pan-STARRS2016}. Whenever available, $griz$ magnitudes from SkyMapper DR2 \citep[][with recalibrated zero points by \citealt{Huang2021recalibration}]{SkyMapperDR2}, are also included. SkyMapper's bluer bands ($u$ and $v$; \citealt{Bessell2011skymapper}) are discarded due to a limitation of the extinction law adopted \citep[][]{Schlafly2016extinction}. We refer the reader to \citet{Anders2022starhorseEDR3} for complete details regarding the compiled photometric data.
With the \texttt{StarHorse} output at hand, we restrict our sample to stars with moderate ($<$20\%) fractional Gaussian uncertainties in their estimated distance values. Throughout the remainder of this paper, we refer to this catalog as the ``SEGUE/\texttt{StarHorse} low-metallicity sample'' or close variations of that. Its coverage in a Cartesian Galactocentric projection can be appreciated in Figure \ref{fig:xyz}. By color-coding this plot with the mean [Fe/H] in spatial bins, the footprint of Sgr stream is already perceptible as metal-rich trails at ${|Z|} \gtrsim 20\,{\rm kpc}$. Finally, we refer the interested reader to \citet{Perottoni2022gse} for an initial scientific application of these data. The authors showed that the chemodynamical properties of stars in a pair of halo stellar overdensities \citep[][]{Vivas2001vod, Newberg2002ghosts, Belokurov2007HAC} are indistinguishable from nearby (within 5\,kpc from the Sun) debris of the \textit{Gaia}-Sausage/Enceladus \citep[GSE;][also \citealt{Haywood2018}]{belokurov2018, helmi2018} disrupted dwarf galaxy.
\subsection{Kinematics and Dynamics} \label{subsec:kindyn}
Positions ($\alpha$, $\delta$) and proper motions ($\mu_{\alpha}^{*} = \mu_{\alpha} \cos{\delta}$, $\mu_{\delta}$) on the sky, $v_{\rm los}$ values from SEGUE, and \texttt{StarHorse} distances are converted to Galactocentric Cartesian phase-space coordinates using \texttt{Astropy} Python tools \citep{astropy, astropy2018}.
The adopted distance from the Sun to the Galactic center is 8.2\,kpc \citep[][]{BlandHawthorn2016, EHT2022sgrA*I}. Within \texttt{Astropy}'s standard right-handed frame, $X_\odot = -8.2\,{\rm kpc}$. The local circular velocity is $\mathbf{V_{\rm circ}} = (0.0, 232.8, 0.0)\,{\rm km\,s^{-1}}$. The complete velocity vector of the Sun is $(V_x, V_y, V_z)_\odot = (11.10, 245.04, 7.25)\,{\rm km\,s^{-1}}$, which includes both $\mathbf{V_{\rm circ}}$ and the Sun's peculiar motion with respect to the local standard of rest being $(U,V,W)_\odot = (11.10, 12.24, 7.25)\,{\rm km \ s^{-1}}$ \citep{schon2010}. Lastly, the vertical displacement of the Sun with respect to the Galactic plane is $Z_\odot = 0.0208\,{\rm kpc}$ \citep{Bennett&Bovy2019vertical}.
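As a quick consistency check on these conventions, the adopted total solar velocity is simply the sum of the circular velocity and the solar peculiar motion; a minimal numpy sketch:

```python
import numpy as np

# Right-handed Galactocentric frame adopted in the text: the Sun sits at
# X_sun = -8.2 kpc and Z_sun = 0.0208 kpc.
V_CIRC = np.array([0.0, 232.8, 0.0])      # local circular velocity, km/s
UVW_SUN = np.array([11.10, 12.24, 7.25])  # solar peculiar motion (U, V, W), km/s

# Total solar velocity vector used when converting heliocentric
# velocities into the Galactocentric frame:
V_SUN = V_CIRC + UVW_SUN  # -> (11.10, 245.04, 7.25) km/s
```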
In Section \ref{subsec:selection}, the angular-momentum distribution of our sample will be utilized to select genuine members of the Sgr stream as suggested by previous works \citepalias[][]{Johnson2020sgr, Penarrubia2021sgr}. Hence, we write below how each component of the total angular momentum ($\boldsymbol{L}$) is calculated:
\begin{equation}
\begin{aligned}
L_x &= YV_z - ZV_y \\
L_y &= ZV_x - XV_z \\
L_z &= XV_y - YV_x{,}
\end{aligned}
\end{equation}
where $L = \sqrt{L_x^2 + L_y^2 + L_z^2}$. We recall that, although $\boldsymbol{L}$ is not fully conserved in an axisymmetric potential, with the exception of the $L_z$ component, it has been historically used for the identification of substructure in the Galaxy as it preserves a reasonable amount of clumping over time (see \citealt{Helmi2020} for a review).
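The components above can be computed in one step as a cross product; a minimal numpy sketch with a hypothetical phase-space point (positions in kpc, velocities in km/s):

```python
import numpy as np

def angular_momentum(pos, vel):
    """L = r x v for Galactocentric positions (kpc) and velocities (km/s).
    Returns the (L_x, L_y, L_z) components and the norm L, in kpc km/s."""
    pos = np.atleast_2d(np.asarray(pos, dtype=float))
    vel = np.atleast_2d(np.asarray(vel, dtype=float))
    L = np.cross(pos, vel)  # rows: stars; columns: (L_x, L_y, L_z)
    return L, np.linalg.norm(L, axis=1)

# Hypothetical star at (X, Y, Z) = (5, 10, 20) kpc with
# (V_x, V_y, V_z) = (150, -50, 80) km/s:
L, Ltot = angular_momentum([5.0, 10.0, 20.0], [150.0, -50.0, 80.0])
```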
For the entire SEGUE/\texttt{StarHorse} low-metallicity sample, we also compute other dynamical parameters, such as orbital energies ($E$) and actions ($\boldsymbol{J}$).
Actions are presented in the cylindrical frame, i.e., $\boldsymbol{J} = (J_R, J_\phi, J_z)$.
The azimuthal action is equivalent to the vertical component of angular momentum ($J_\phi \equiv L_z$) and we use these nomenclatures interchangeably. In order to obtain these quantities, orbits are calculated for 10\,Gyr forward with the \texttt{AGAMA} package \citep[][]{agama}. The axisymmetric Galactic potential model of \citet{mcmillan2017} is adopted, which includes thin and thick stellar disks \citep{Gilmore1983thick}, gaseous disks \citep{DehnenBinney1998mass}, flattened bulge \citep{Bissantz2002bulge}, and spherical dark matter (DM) halo \citep{NFW1996halos}. Although the Milky Way contains non-axisymmetric features such as a central rotating bar and spiral arms (see \citealt{freeman2002}, \citealt{BlandHawthorn2016}, and \citealt{Barbuy2018} for reviews), these are not expected to significantly affect our calculations for halo stars. A total of 100 initial conditions were generated for each star with a Monte Carlo approach, accounting for uncertainties in proper motions, $v_{\rm los}$, and distances. The final orbital parameters are taken as the medians of the resulting distributions, with $16$th and $84$th percentiles as uncertainties.
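The Monte Carlo scheme described above (sample the inputs from their Gaussian uncertainties, adopt the median and 16th/84th percentiles of the outputs) can be sketched generically as follows; the toy quantity below is hypothetical and does not reproduce the actual \texttt{AGAMA} orbit integration:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_orbital_params(mean, sigma, compute, n=100):
    """Propagate Gaussian observational uncertainties (proper motions,
    v_los, distance, ...) into a derived orbital quantity by resampling.
    Returns the median and asymmetric 16th/84th-percentile errors."""
    mean = np.asarray(mean, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    samples = rng.normal(mean, sigma, size=(n, mean.size))
    values = np.array([compute(s) for s in samples])
    lo, med, hi = np.percentile(values, [16, 50, 84])
    return med, med - lo, hi - med

# Toy example: a quantity linear in a single uncertain velocity component.
med, elo, ehi = mc_orbital_params([200.0], [10.0], lambda s: 8.0 * s[0])
```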
\begin{figure*}[pt!]
\centering
\includegraphics[width=2.1\columnwidth]{selec_sgr.png}
\caption{Upper row: $J_z > J_R$ (predominantly polar orbits). Bottom row: $J_z < J_R$ (radial/eccentric orbits). Background density maps are produced with the full SEGUE/\texttt{StarHorse} low-metallicity sample. White dots are Galactic GCs compiled by \citet{VasilievBaumgardt2021gcs}. Those GCs associated with Sgr dSph/stream
are displayed as yellow diamonds, with M54 as the star symbol (see text). Left panels: $(L_z, L_y)$. Our Sgr stream selection is shown as the yellow box. The gray dashed line exhibits the \citetalias{Johnson2020sgr} criterion. The yellow cross is the central location of Sgr in this space according to \citetalias{Penarrubia2021sgr}. Yellow contours represent the kinematic/dynamical locus occupied by our Sgr stream members. Middle: $(L_z, E)$. Right: $(L_z, J_z)$.
\label{fig:selection}}
\end{figure*}
\pagebreak
\section{Sgr Stream in SEGUE} \label{sec:sgr_stream_segue}
\subsection{Selection of Members} \label{subsec:selection}
In the context of our goals delineated by the end of Section \ref{sec:intro}, we seek to construct a sample of Sgr stream members that is both ($i$) larger in size and ($ii$) of greater purity than previously considered by \citetalias{Johnson2020sgr} (see also \citealt{naidu2020}), but ($iii$) with a similarly extended metallicity range, reaching the extremely metal-poor ($\rm[Fe/H] < -3$) regime. These authors have shown, by comparison with the well-known \citet{LM2010modelSgrStream} model, that stars from the Sgr stream can be selected to exquisite completeness in the ($L_z, L_y$) plane, which exploits the polar nature of their orbits. For the convenience of the reader, we reproduce their criterion in Figure \ref{fig:selection} (left panels, dashed lines). However, \citetalias{Penarrubia2021sgr} have recently argued that the \citetalias{Johnson2020sgr} criterion also includes ${\approx}21\%$ of interlopers. Hence, we build on these previous efforts and design a new set of criteria capable of yielding better purity while remaining simple and readily reproducible.
First, we inspect the aforementioned ($L_z, L_y$) plane. Inspired by \citet[][their figure 6]{naidu2020}, we split this parameter space into $J_z > J_R$ (predominantly polar orbits; Figure \ref{fig:selection}, top row) and $J_z < J_R$ (radial/eccentric orbits; bottom row). We notice that, in the top left panel of Figure \ref{fig:selection}, there exists an excess of stars toward negative values of $L_y$. This same group of stars can be identified as a high-energy ($E \sim -0.8 \times 10^5\,{\rm km^2\,s^{-2}}$) prograde ($L_z \sim -1500\,{\rm kpc\,km\,s^{-1}}$) population (top middle panel); this is the footprint of Sgr stream \citep[see][]{Malhan2022atlas}. This substructure can also be recognized by its exceptional values of $J_z \gtrsim 2000\,{\rm kpc\,km\,s^{-1}}$ \citep[top right panel;][]{Thomas&Battaglia2022cetus}. We also highlight that this kinematic/dynamical signature of Sgr stream completely vanishes in the bottom panels. In this case, where $J_z < J_R$, the ($L_z,E$) space is dominated by a prominent overdensity around $L_z \sim 0$, accompanied by several globular clusters (GCs; white dots in Figure \ref{fig:selection}), which corresponds to GSE.
Building on these facts, we quantify how effective the $J_z > J_R$ condition is at eliminating potential GSE contaminants from our Sgr stream members. We examine a suite of chemodynamical simulations of Milky Way-mass galaxies with stellar halos produced by a single GSE-like merger, presented in \citet[][]{Amarante2022gsehalos}. Within these models, the fraction of GSE debris that ends up (at redshift $z=0$) on orbits with $J_z > J_R$ is always below 9\%. Therefore, we incorporate this condition into our selection, as it should remove ${>}90\%$ of potential GSE stars.
We further inform ourselves with the \citetalias{Vasiliev2021tango} pure $N$-body model of the Sgr system. We verified that neither stellar nor DM particles are expected to be found closer than ${\sim}6$\,kpc from the Sun. Hence, we eliminated stars with heliocentric distances smaller than this value from our Sgr stream sample. In practice, this cut removes mostly thin/thick-disk stars with low $J_z$ values.
Lastly, we restrict the kinematic locus occupied by Sgr stream in ($L_z, L_y$) in comparison to \citetalias{Johnson2020sgr}. The results of \citetalias{Penarrubia2021sgr} (their figure 1) indicate that the Sgr stream footprint is better defined approximately within $-10 < L_y/(10^3 \,{\rm kpc} \,{\rm km} \,{\rm s}^{-1}) < -3$ and $-4 < L_z/(10^3 \,{\rm kpc} \,{\rm km} \,{\rm s}^{-1}) < +1$. We note that the Galactic fundamental parameters \citep{Drimmel2018sunVel, GRAVITY2019} adopted by these authors are very similar to the ones described in our Section \ref{subsec:kindyn}. An example of non-Sgr substructure that is also allowed by the \citetalias{Johnson2020sgr} criteria, even after our $J_z > J_R$ cut, is the Orphan stream \citep{Belokurov2007Orphan, Newberg2010orphan}, though this contamination appears to be minimal within the footprint of H3 \citep{Naidu2022mzr}. In Appendix \ref{sec:polar_streams}, we discuss the existence of other polar stellar streams \citep[see][]{Malhan2021lms1} that could overlap with Sgr in kinematic/dynamical parameter spaces. Overall, our set of criteria is
robust against these potential sources of contamination.
\begin{figure*}[pt!]
\centering
\includegraphics[width=1.8\columnwidth]{sgr_metal_poor_sample_segue_starhorse.png}
\caption{Top row: metallicity (left) and age (right) distributions. In both panels, blue and red histograms represent the leading and trailing arms of Sgr stream, respectively. The complete SEGUE/\texttt{StarHorse} low-metallicity sample is shown in black.
Bottom row: configuration space in the Galactic Cartesian coordinate system. Left: $(X,Z)$. Right: $(Y,Z)$. Blue and red dots are stars associated with the leading and trailing arms, respectively. The location of the M54 GC, which coincides with the center of Sgr dSph \citep[e.g.,][]{Bellazzini2008m54}, is shown as the yellow star symbol. The background Sgr model (stream$+$surviving core) is from \citetalias[][]{Vasiliev2021tango}, where gray and pink dots are DM and stellar particles, respectively. The Milky Way's center and disk (40\,kpc diameter) are illustrated by the black circle and line, respectively.
\label{fig:sgr_sample}}
\end{figure*}
In this work, the conditions that a star within the SEGUE/\texttt{StarHorse} metal-poor sample must fulfill in order to be considered a genuine member of Sgr stream are listed below:
\begin{itemize}
\vspace{-1mm}
\item $J_z > J_R$;
\vspace{-2mm}
\item ${\rm heliocentric \ distance} > 6\,{\rm kpc}$;
\vspace{-2mm}
\item $-10 < L_y/(10^3 \,{\rm kpc} \,{\rm km} \,{\rm s}^{-1}) < -3$;
\vspace{-2mm}
\item $-4 < L_z/(10^3 \,{\rm kpc} \,{\rm km} \,{\rm s}^{-1}) < +1$.
\end{itemize}
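For reference, the four criteria above amount to a simple boolean mask. The \texttt{numpy} sketch below is illustrative (function and argument names are ours), with actions and angular momenta in kpc\,km\,s$^{-1}$ and distances in kpc:

```python
import numpy as np

def sgr_stream_mask(J_z, J_R, d_helio, L_y, L_z):
    """Boolean mask implementing the four Sgr stream membership cuts.

    J_z, J_R : vertical and radial actions [kpc km/s]
    d_helio  : heliocentric distance [kpc]
    L_y, L_z : angular momentum components [kpc km/s]
    """
    J_z, J_R, d_helio, L_y, L_z = map(
        np.asarray, (J_z, J_R, d_helio, L_y, L_z))
    return ((J_z > J_R)                                   # polar orbits
            & (d_helio > 6.0)                             # distance cut
            & (L_y / 1e3 > -10.0) & (L_y / 1e3 < -3.0)    # L_y window
            & (L_z / 1e3 > -4.0) & (L_z / 1e3 < 1.0))     # L_z window

# Example: a polar, distant, Sgr-like star vs. a disk-like star
mask = sgr_stream_mask(J_z=[2000.0, 100.0], J_R=[500.0, 500.0],
                       d_helio=[20.0, 2.0], L_y=[-6000.0, -6000.0],
                       L_z=[-1500.0, -1500.0])
```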
This selection of the Sgr stream is delineated by the yellow box in the top left panel of Figure \ref{fig:selection}. It is clear that our criteria are more conservative than those of \citetalias{Johnson2020sgr}. Nevertheless, the raw size of our final Sgr stream sample (${\sim}1600$ stars) is twice as large as the one presented by these authors (${\sim}800$ members) despite the sharp cut at $\rm[Fe/H] < -0.5$. Moreover, the metallicity range covered
reaches $\rm[Fe/H] \sim -3$, with ${\gtrsim}200$ VMP stars in the sample (top left panel of Figure \ref{fig:sgr_sample}). This excess of VMP stars found in SEGUE is particularly suitable for us to quantify the
kinematics of the diffuse, dynamically hot component of Sgr stream proposed by \citetalias[][]{Johnson2020sgr}.
Finally, although these Sgr stream candidates were identified from their locus in ($L_z, L_y$), we found them to be spatially cohesive and in agreement with the
\citetalias{Vasiliev2021tango} model in configuration space (bottom row of Figure \ref{fig:sgr_sample}).
With this newly defined set of selection criteria at hand, we verified which known Galactic GCs would be connected to the Sgr system. For this exercise, we examined the orbital properties, computed as in Section \ref{subsec:kindyn}, of 170 GCs from the \textit{Gaia} EDR3-based catalog of \citet{VasilievBaumgardt2021gcs}. We found that a total of seven GCs can be linked to this group, namely NGC 6715/M54, Whiting1, Koposov1, Terzan7, Arp2, Terzan8, and Pal12. We note that M54 has long been recognized to be the nuclear star cluster of Sgr dSph \citep[e.g.,][]{Bellazzini2008m54}.
Furthermore, most of these other GCs had already been attributed to Sgr by several authors \citep{massari2019, Bellazzini2020sgr, Forbes2020, Kruijssen2020kraken, Callingham2022gcs}. Notable absences from this list are NGC 2419 (marginally outside our selection), NGC 4147 and NGC 5634 (potential members of GSE\footnote{NGC 4147 and NGC 5634 have been associated with the Helmi stream \citep{helmi1999} by, e.g., \citet{Callingham2022gcs}.}; \citealt{Limberg2022gse}), and NGC 5824 (recently associated with the Cetus\footnote{The Cetus stream was first described by \citet{Newberg2009cetus}.} accretion event; \citealt{Yuan2019cetus, Yuan2022cetus, Chang2020cetus, Malhan2022atlas}).
\subsection{Leading and Trailing Arms} \label{subsec:arms}
We begin our study of Sgr stream's stellar populations by looking at the metallicity distributions obtained for the leading and trailing arms and the differences between them. Looking at Figure \ref{fig:sgr_sample}, the immediately perceptible feature is the excess of VMP stars in the leading arm. By contrast, the trailing arm presents a significant contribution of metal-rich ($\rm[Fe/H] \gtrsim -1$) stars. This property of Sgr stream, the leading arm being more metal-poor than the trailing one, had already been noticed by several authors (including with high-resolution spectroscopy; see \citealt{Carlin2018sgr} and \citealt{Hayes2020}) and is recovered despite the intentional bias of the SEGUE catalog toward low-metallicity stars (note the excess at $\rm[Fe/H] \lesssim -1$ in the black/all-sample histogram; see \citealt{Bonifacio2021mdf} and \citealt{Whitten2021splus} for discussions). The final median [Fe/H] values we obtained for the leading and trailing arms are $-1.46^{+0.02}_{-0.03}$ and $-1.28^{+0.03}_{-0.05}$, respectively, where upper and lower limits represent bootstrapped ($10^4$ times) 95\% confidence intervals. These metallicity values derived from SEGUE are $\sim$0.3--0.4\,dex lower than the ones obtained from APOGEE data \citep{Hayes2020, Limberg2022gse}, but we stress that this is an artifact of SEGUE's target selection function \citep{Rockosi2022segue}.
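The bootstrapped confidence intervals quoted here can be reproduced with a short percentile-bootstrap routine. The sketch below is generic (not the exact pipeline used for the paper):

```python
import numpy as np

def bootstrap_median(x, n_boot=10_000, ci=95, seed=0):
    """Sample median with a percentile-bootstrap confidence interval."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    # Resample with replacement n_boot times and take the median of each
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    medians = np.median(x[idx], axis=1)
    alpha = (100 - ci) / 2
    lo, hi = np.percentile(medians, [alpha, 100 - alpha])
    return np.median(x), lo, hi

# Example: metallicities drawn around [Fe/H] = -1.46 (illustrative only)
feh = np.random.default_rng(1).normal(-1.46, 0.5, 1000)
med, lo, hi = bootstrap_median(feh)
```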
From \texttt{StarHorse}'s output, we should also, in principle, be able to access information regarding ages for individual stars, as this parameter is a byproduct of the isochrone-fitting procedure (e.g., \citealt{Edvardsson1993isoFit}, \citealt{Jorgensen2005isoFit}, and \citealt{Sanders&Das2018ages}). However, there are some caveats in this approach. First, it becomes increasingly difficult to distinguish between isochrones of different ages toward both the cooler regions of the main sequence and the upper portions of the red giant branch (see figure 2 of \citealt{souza2020} for a didactic visualization). It is still possible to work around this issue by looking at the turnoff and subgiant areas, where isochrones tend to be better segregated \citep[see discussion in][]{Vickers2021agesLAMOST}. Second, even at these evolutionary stages, variations in ages and metallicities have similar effects on the color--magnitude diagram (e.g., \citealt{YaleYonsei2001}, \citealt{YaleYonsei2004}, \citealt{Pietrinferni2004basti1, Pietrinferni2006basti2}, and \citealt{dotter2008}). Hence, spectroscopic [Fe/H] values can be leveraged as informative priors to break this age--metallicity degeneracy. Third, distant non-giant stars are quite faint, which is the case for our Sgr stream sample. This is where SEGUE's exquisite depth, with targets as faint as $g = 19.5$, where $g$ is the SDSS broad band centered at $4800$\,{\AA} \citep{Fukugita1996sdssPhoto}, comes in handy.
In this spirit, we attempt to provide a first
estimate of the typical ages for stars in Sgr stream. Similar to recent efforts \citep[][]{Bonaca2020, Buder2022halo, Xiang&Rix2022ages}, we selected stars in the SEGUE/\texttt{StarHorse} low-metallicity sample near the turnoff and subgiant stages. For the sake of consistency, we utilized stellar parameters derived by \texttt{StarHorse} itself during the isochrone-fitting process as these will be directly correlated with the ages at hand. These turnoff and subgiant stars are mostly contained within $4.5 < \log g_{\rm SH} \lesssim 3.6$ and $T_{\rm eff, SH} \gtrsim 5250\,{\rm K}$, where the subscript ``SH'' indicates values from \texttt{StarHorse} instead of SSPP. A parallel paper describes in detail this (sub)sample with reliable ages (Queiroz et al., submitted).
In any case, for the purpose of this work, we highlight that typical differences between SEGUE's atmospheric parameters and those obtained with \texttt{StarHorse} are at the level of SSPP's internal precision.
We found a total of 56 turnoff or subgiant stars in Sgr stream (31 in the leading arm plus 25 in the trailing one) for which ages are most reliable (top right panel of Figure \ref{fig:sgr_sample}). As expected, these are quite faint ($17.5 < g < 19.5$), which reinforces the value of a deep spectroscopic survey such as SEGUE.
Members of Sgr stream (blue and red histograms representing leading and trailing arms, respectively) appear to be older (11--12\,Gyr) than the bulk of
our sample, which
is mostly composed of thick-disk stars. It is reassuring that the age distribution for the entire SEGUE/\texttt{StarHorse} low-metallicity sample (black) peaks at 10--11\,Gyr, which is, indeed, in agreement with ages derived from asteroseismic data for the chemically defined, i.e., high-$\alpha$, thick-disk population \citep[][]{SilvaAguirre2018age_disks, Miglio2021age_disks}. We quantify this visual interpretation with a kinematically selected thick-disk sample, following $100 < |\mathbf{V}-\mathbf{V}_{\rm circ}|/({\rm km\,s^{-1}}) < 180$ \citep[check][]{Venn2004, Bensby2014, LiZhao2017, Posti2018, Koppelman2020RESSONANCES}, where $\mathbf{V} = (V_x, V_y, V_z)$ is the velocity vector of a given star, with magnitude $V = \sqrt{V_x^2 + V_y^2 + V_z^2}$. Within the SEGUE/\texttt{StarHorse} low-metallicity data (${\sim}7,800$ stars), we found a median age of 10.6\,Gyr for this population.
For Sgr stream specifically, the bootstrapped median age for the leading arm is $11.6^{+0.4}_{-0.2}\,{\rm Gyr}$. For the trailing arm, we found $11.8^{+0.3}_{-0.2}\,{\rm Gyr}$. This translates to $11.7^{+0.3}_{-0.2}\,{\rm Gyr}$ considering all Sgr stream stars. Of course, uncertainties for individual stars are still substantial, usually at the level of ${>}25\%$ (${\sim}3$\,Gyr). Therefore, we hope that it will be possible to test this
scenario, that the Sgr stream is dominated by stars older (by ${\sim}1$\,Gyr) than those from the Galactic thick disk, with data provided by the upcoming generation of spectroscopic surveys, such as 4MOST \citep{4MOST2019}, SDSS-V \citep{Kollmeier2017}, and WEAVE \citep{WEAVE2016}, and building on the statistical isochrone-fitting framework of \texttt{StarHorse}.
\subsection{Evolution of Velocity Dispersion with Metallicity} \label{subsec:vel_disp}
As mentioned in Section \ref{sec:intro}, the motivation for identifying Sgr stream members in the SEGUE/\texttt{StarHorse} catalog was to analyze the evolution of its kinematics deep into the VMP regime. Past efforts that conducted similar exercises include \citet[][]{Gibbons2017sgr} and \citetalias{Johnson2020sgr}. The former was the first to propose the existence of two populations in Sgr stream using SEGUE data itself; its main limitation was the lack of complete phase-space information, which is now available thanks to \textit{Gaia}. Regarding the latter, the caveats were the small number of (${\sim}50$) VMP stars in their sample (from the H3 survey; see also \citealt{naidu2020}) and potential contamination by Milky Way foreground stars \citepalias[][]{Penarrubia2021sgr}. Here, instead of splitting Sgr stream into two components, our approach is to model its velocity distribution (e.g., \citealt{Li2017eridanusII, Li2018tucanaIII}) across different [Fe/H] intervals. The results of this exercise can provide constraints for future chemodynamical simulations attempting to reproduce the Sgr system, as was recently done for GSE \citep[][]{Amarante2022gsehalos}.
In the context of the above-mentioned goal, Figure \ref{fig:vel_disp} displays the distributions of total velocity ($V$) across different metallicity ranges, from VMP (left) to metal-rich (right). The color scheme is the same as Figure \ref{fig:sgr_sample}, i.e., blue/red for leading/trailing arm. From a preliminary visual inspection, one can notice that both histograms become broader at lower [Fe/H] values. In order to
quantify this effect of increasing velocity dispersion ($\sigma_V$) with decreasing [Fe/H], we model these distributions, while also accounting for uncertainties, using a Markov chain Monte Carlo (MCMC) method implemented with the \texttt{emcee} Python package \citep[][]{Foreman-Mackey2013emcee}. As in \citet[][see also \citealt{Wan2020phoenix}]{Li2017eridanusII, Li2018tucanaIII}, the Gaussian $\log$-likelihood function is written as
\begin{equation}
\log{\mathcal{L}} = -\dfrac{1}{2} \sum_{i=1}^{N} \left[ \log{\left( \sigma^2_V + \sigma^2_{V,i} \right)} + \dfrac{ \left( V_i - \langle V \rangle \right)^2 }{ \left( \sigma^2_V + \sigma^2_{V,i} \right)} \right]{,}
\label{eq:likelihood}
\end{equation}
where $V_i$ and $\sigma_{V,i}$ are the total velocity and its respective uncertainty for the $i$th star within a given [Fe/H] bin. We adopt only the following uninformative priors: $0 < \langle V \rangle /({\rm km}\,{\rm s}^{-1}) < 500$ and $\sigma_V > 0$. Lastly, we run the MCMC sampler for 500 steps with 50 walkers, including a burn-in stage of 100 steps. Although some of the $V$ histograms in Figure \ref{fig:vel_disp} show non-Gaussian tails, this exercise is sufficient for the present purpose.
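For concreteness, Equation \ref{eq:likelihood} can be written down and tested on synthetic data. The sketch below recovers $(\langle V \rangle, \sigma_V)$ by direct maximization with \texttt{scipy} rather than with \texttt{emcee}; it is a check of the likelihood implementation, not the paper's sampler:

```python
import numpy as np
from scipy.optimize import minimize

def log_likelihood(theta, V, V_err):
    """Gaussian log-likelihood with intrinsic dispersion (Equation 1)."""
    mean_V, sigma_V = theta
    var = sigma_V**2 + V_err**2
    return -0.5 * np.sum(np.log(var) + (V - mean_V)**2 / var)

# Synthetic sample: 300 stars with <V> = 250 km/s, sigma_V = 60 km/s,
# and heteroscedastic measurement errors of 5-15 km/s
rng = np.random.default_rng(1)
V_err = rng.uniform(5.0, 15.0, 300)
V = rng.normal(250.0, 60.0, 300) + rng.normal(0.0, V_err)

# Maximize the likelihood within the flat priors quoted in the text
res = minimize(lambda t: -log_likelihood(t, V, V_err),
               x0=[200.0, 40.0], bounds=[(0.0, 500.0), (1e-3, None)])
mean_fit, sigma_fit = res.x
```

The same `log_likelihood` can be passed directly to an \texttt{emcee} `EnsembleSampler` to obtain the posterior percentiles used in Table \ref{tab:vel_metal}.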
The results of our calculations with the MCMC strategy are presented in Table \ref{tab:vel_metal}. Lower and upper limits are the 16th and 84th percentiles, respectively, of the posterior distributions. Within the most metal-rich intervals, $-1.5 < \rm[Fe/H] \leq -0.5$, we found no statistically significant (${<}1\sigma$) evidence for $\sigma_V$ variations. However, at $\rm[Fe/H] \leq -1.5$, the $\sigma_V$
increases substantially for both leading and trailing arms. Overall, we verified that, according to present data, the VMP component of Sgr stream (left panel of Figure \ref{fig:vel_disp}) is dynamically hotter than its metal-rich counterpart at the ${\gtrsim}2\sigma$ level. As a sanity check, we also verified that this effect is less prominent (${\sim}1\sigma$) for GSE (green histograms in Figure \ref{fig:vel_disp}; Table \ref{tab:vel_metal}) even with a not-so-pure (at least 18\% contamination; \citealt{Limberg2022gse}) selection \citep[][]{Feuillet2020}, which is to be expected given the advanced stage of phase-mixing of this substructure.
\begin{figure*}[pt!]
\centering
\includegraphics[width=2.\columnwidth]{vel_dist_sgr_final.png}
\caption{Distributions of $V = \sqrt{V_x^2 + V_y^2 + V_z^2}$ in intervals of [Fe/H]. From left to right, we move from the VMP to the metal-rich regime. Blue, red, and green histograms represent the leading arm, trailing arm, and GSE, respectively.
\label{fig:vel_disp}}
\end{figure*}
\renewcommand{\arraystretch}{1.0}
\setlength{\tabcolsep}{1.0em}
\begin{table*}[ht!]
\centering
\caption{Velocity Dispersion for the Leading and Trailing Arms of Sgr, as well as GSE, in bins of [Fe/H]
}
\label{tab:vel_metal}
\begin{tabular}{>{\normalsize}c >{\normalsize}c >{\normalsize}c >{\normalsize}c >{\normalsize}c}
\hline
\hline
Substructure & $\sigma_V$ & \multicolumn{1}{c}{$\sigma_V$} & $\sigma_V$ & $\sigma_V$ \\ %
& (km\,s$^{-1}$) & \multicolumn{1}{c}{(km\,s$^{-1}$)} & (km\,s$^{-1}$) & (km\,s$^{-1}$) \\ %
& $\rm[Fe/H] \leq -2.0$ & \multicolumn{1}{c}{$-2.0 < \rm[Fe/H] \leq -1.5$} & $-1.5 < \rm[Fe/H] \leq -1.0$ & $-1.0 < \rm[Fe/H] \leq -0.5$ \\[1mm]
\hline
Leading & $70^{+4}_{-4}$ $\phantom{0}$(194) & $62^{+3}_{-3}$ $\phantom{0}$(284) & $51^{+2}_{-2}$ $\phantom{0}$(405) & $53^{+5}_{-4}$ $\phantom{00}$(94) \\
Trailing & $44^{+6}_{-5}$ $\phantom{00}$(62) & $38^{+3}_{-3}$ $\phantom{0}$(185) & $32^{+2}_{-5}$ $\phantom{0}$(319) & $28^{+2}_{-2}$ $\phantom{0}$(203) \\
GSE & $87^{+2}_{-2}$ (1153) & $86^{+1}_{-1}$ (4314) & $83^{+1}_{-1}$ (8020) & $82^{+2}_{-2}$ (1322) \\
\hline
\end{tabular}
\end{table*}
Now, we put our results in context with those
in the literature. With the understanding that Sgr stream is comprised of two kinematically distinct populations (\citetalias[][]{Johnson2020sgr}), the increasing $\sigma_V$ as a function of decreasing metallicity can be interpreted as larger fractions of the ``diffuse'' \citepalias[][]{Johnson2020sgr} component contributing to the low-[Fe/H] (dynamically hotter) bins. On the contrary, the ``main'' component, which contains most of the stars
of the substructure, is preferentially associated with the high-[Fe/H] (colder) intervals.
\citetalias{Penarrubia2021sgr} recently argued that the broad velocity distribution for metal-poor stars in Sgr stream could be an artifact of Milky Way contamination in the \citetalias[][]{Johnson2020sgr} Sgr stream data. However, this effect is still clearly present in our $2{\times}$ larger sample with more rigorous selection criteria.
To summarize, in the low-metallicity regime, there appears to be a considerable contribution from \textit{both} ancient and recently formed wraps of the stream. On the other hand, at high metallicities ($\rm[Fe/H] > -1.0$), only the newest wrap is represented.
\begin{figure*}[pt!]
\centering
\includegraphics[width=2.1\columnwidth]{stream_coord_sgr.png}
\caption{Sgr stream in $(\Lambda_{\rm Sgr}, v_{\rm los})$ space. As in Figure \ref{fig:sgr_sample}, blue and red dots are stars associated with the leading and trailing arms, respectively. DM and stellar particles from the \citetalias[][]{Vasiliev2021tango} model are shown as gray and colored dots, respectively. Green dots are attributed to the new ($t_{\rm strip}<2\,{\rm Gyr}$) wrap and orange ones to the old ($t_{\rm strip}>2\,{\rm Gyr}$) wrap. Black dots remain bound to the progenitor until the present day (redshift $z=0$), i.e., the end of the simulation. From top to bottom, we move from the VMP to the metal-rich regime following the same [Fe/H] ranges of Figure \ref{fig:vel_disp} and Table \ref{tab:vel_metal}.
\label{fig:sgr_coords}}
\end{figure*}
\pagebreak
\section{Model Comparisons} \label{subsec:model}
In this section, we interpret the phase-space properties of Sgr stream and how they correlate with chemistry via the comparison of our Sgr sample with the model of \citetalias{Vasiliev2021tango}. Section \ref{subsec:V21model} summarizes the main features of the $N$-body model. Sections \ref{subsec:comps} and \ref{subsec:dsph} compare the observed properties of both trailing and leading arms against the simulation and how these are connected to the initial structure of Sgr, respectively.
\subsection{Model Properties, Assumptions, and Limitations} \label{subsec:V21model}
The \citetalias{Vasiliev2021tango} model is a tailored $N$-body model of the Sgr system designed to match several properties of its tidal tails. In particular, in order to mimic the aforementioned misalignment between the stream track and its proper motions in the leading arm, the authors invoke the presence of an LMC with a total mass of $1.5 \times 10^{11}\,M_\odot$. This value is, indeed, in reasonable agreement with recent estimates of LMC's mass based on perturbations in Galactic stellar streams \citep{Erkal2019massLMC, Shipp2021massLMC, Koposov2022LMCmass}, but is slightly smaller than estimates based on the so-called ``timing argument'' \citep{Penarrubia2016timing} or cosmological abundance matching \citep{Boylan-Kolchin2010abundance, Dooley2017abundance}, both of which give ${\gtrsim}2\times10^{11}\,M_\odot$.
The initial conditions are set to reproduce the present-day positions and velocities of both Sgr and LMC building on earlier results \citep{Vasiliev2020sgr}. However, unlike the LMC and Sgr, the Milky Way is not modeled in a live $N$-body scheme. Hence, it comes with the limitation that \citetalias[][]{Vasiliev2021tango} depend on the Chandrasekhar analytical prescription for dynamical friction \citep[e.g.,][]{MoBoschWhite2010book}. See \citet{Ramos2022sgr} for a discussion on how this approximation might influence the stripping history of the stream.
In the fiducial model, the initial stellar mass of Sgr dSph is $2\times10^8\,M_\odot$ and follows a spherical King density profile \citep{King1962}. Moreover, the system is embedded in an extended, also spherical, DM halo of $3.6 \times 10^9\,M_\odot$. Other key features of \citetalias{Vasiliev2021tango}'s work are the capability of recovering crucial kinematic and structural features of Sgr's remnant \citep[as in][]{Vasiliev2020sgr}, accounting for perturbations introduced by the gravitational field of LMC \citep{Garavito-Camargo2019lmc, Garavito-Camargo2021lmc, Cunningham2020lmc, Petersen2020reflex, PetersenPenarrubia2021, Erkal2021sloshing}, and properly following the mass loss suffered by the system.
Despite the close match between observations and the \citetalias{Vasiliev2021tango} model, there are a few limitations that could affect their results. For instance, the model does not account for the gaseous component, and thus no star formation, which may be relevant for the distribution of the debris, as discussed in \citet{Wang2022arXivSgr} and references therein. An additional caveat is the lack of bifurcations in the modeled stream, as originally observed by \citet{Belokurov2006Streams} and \citet[][see discussions by \citealt{Oria2022Sgr_bifurcations}]{Koposov2012sgr}. Finally, we note that Sgr likely experienced at least one pericentric passage $\gtrsim$6\,Gyr ago, as can be inferred from dynamical perturbations in the Galactic disk \citep[][and see \citealt{Antoja2018spirals}]{Binney2018spiral, Laporte2018sgr, Laporte2019sgr, Bland-Hawthorn2021spiral, McMillan2022spiral} as well as the star-formation histories of both the Milky Way \citep[][]{Ruiz-Lara2020sag} and Sgr itself \citep[][]{Siegel2007SFHsgr, deBoer2015sgr}. Hence, the \citetalias{Vasiliev2021tango} simulation, which starts only 3\,Gyr in the past, cannot cover this earlier interaction.
\pagebreak
\subsection{New and Old Wraps} \label{subsec:comps}
Figure \ref{fig:sgr_coords} shows observational and simulation data in $(\Lambda_{\rm Sgr}, v_{\rm los})$, where $\Lambda_{\rm Sgr}$ is the stream longitude coordinate as defined by \citet[][]{Majewski2003} based on Sgr's orbital plane.
Leading/trailing arm stars are blue/red dots. These are overlaid on the \citetalias[][]{Vasiliev2021tango} model, where gray and colored points are DM and stellar particles, respectively. As in \citetalias[][]{Penarrubia2021sgr}, we split these simulated particles according to their stripping time ($t_{\rm strip}$\footnote{Formally, $t_{\rm strip}$ is defined as the most recent time when a particle left a 5\,kpc-radius sphere around the progenitor \citepalias[][]{Vasiliev2021tango}.}). For the remainder of this paper, we refer to the portion of the (simulated) stream formed more recently ($t_{\rm strip} < 2\,{\rm Gyr}$) as the ``new'' wrap (green). The more ancient ($t_{\rm strip} > 2\,{\rm Gyr}$) component is henceforth the ``old'' wrap (orange). Stellar particles that are still bound to the progenitor by the end of the simulation (redshift $z=0$) are colored black.
As was done previously, we divide our Sgr stream data into the same metallicity intervals as in Section \ref{subsec:vel_disp}. We note that, essentially, our selected members of Sgr stream share all regions of phase space with the \citetalias[][]{Vasiliev2021tango} model particles. Notwithstanding, the bottom-most panel of Figure \ref{fig:sgr_coords} reveals a first interesting feature. Metal-rich stars in the sample are almost exclusively associated with the new wrap, though this is more difficult to immediately assert for the leading arm because of the overlap between new and old portions within $180\degree \lesssim \Lambda_{\rm Sgr} < 300\degree$.
As we move toward lower-metallicity (upper) panels of Figure \ref{fig:sgr_coords}, we see larger fractions of observed Sgr stream stars coinciding with the old wrap in phase space. At the same time, the dense groups of stars overlapping with the new wrap fade away as we reach the VMP regime (top panel of Figure \ref{fig:sgr_coords}). As a direct consequence, stream members are more spread along the $v_{\rm los}$ axis in Figure \ref{fig:sgr_coords}, which then translates into the higher $\sigma_V$ discussed in Section \ref{subsec:vel_disp} for metal-poor/VMP stars. In general, the new wrap is preferentially associated with metal-rich stars, but also extends into the VMP realm. Conversely, the old component contains exclusively metal-poor ($\rm[Fe/H] \lesssim -1$) stars. Therefore, these results suggest that, at low metallicities, Sgr stream is composed of a mixture of old and new wraps, and this mixture drives the increasing $\sigma_V$ quantified in Table \ref{tab:vel_metal}.
\begin{figure*}[pt!]
\centering
\includegraphics[width=2.1\columnwidth]{sgr_init_snapshot.png}
\caption{Left: distributions of $t_{\rm strip}$ for DM (gray) and stellar (pink) particles in the \citetalias[][]{Vasiliev2021tango} model of the Sgr (stream$+$dSph) system. The orbital trajectory of Sgr, in the form of its Galactocentric distance, is presented as the overlapping blue line and circles. Middle: DM and stellar (colored dots) particles in configuration space, where $(X,Y)_{\rm Sgr,0}$ are spatial coordinates in a Sgr-centered system, in the initial snapshot of the same \citetalias[][]{Vasiliev2021tango} model (see text). Right: cumulative distribution functions of galactocentric radii ($r_{\rm Sgr,0}$) of the same model particles, also centered on Sgr in the initial snapshot. Vertical dashed lines mark the positions containing 90\% of the stars of each component. In all panels, the color scheme is the same as Figure \ref{fig:sgr_coords}, where orange represents stars that end up as the old wrap of the stream ($t_{\rm strip} > 2$\,Gyr), green is for the new wrap ($t_{\rm strip} < 2$\,Gyr), and black marks particles that remain bound to Sgr dSph until the present day/end of the simulation ($t_{\rm strip} = 0$).
\label{fig:initial}}
\end{figure*}
\subsection{Sgr dSph Before its Disruption} \label{subsec:dsph}
The dichotomy between metal-rich/cold and metal-poor/hot portions of Sgr stream has been suggested, by \citetalias[][]{Johnson2020sgr}, to be linked to the existence of a stellar halo-like structure in Sgr dSph prior to its infall. This stellar halo would have a larger velocity dispersion, be spatially more extended, and have a lower metallicity than the rest of the Sgr galaxy. As a consequence of its kinematics, this component would be stripped at earlier times. Indeed, we verified that the old wrap of Sgr stream, stripped ${>}2\,{\rm Gyr}$ ago in \citetalias[][]{Vasiliev2021tango}'s model, is mainly associated with metal-poor stars (Figure \ref{fig:sgr_coords}), in conformity with \citetalias[][]{Johnson2020sgr}'s hypothesis. Meanwhile, the majority of the most metal-rich stars can be attributed to the new wrap. In order to check how the present-day properties of Sgr stream are connected to those of its dSph progenitor, hence testing other conjectures of \citetalias[][]{Johnson2020sgr}, we now look at the initial snapshot of \citetalias[][]{Vasiliev2021tango}'s simulation, including the satellite's orbit and disruption history.
To understand the assembly of the stream over time according to the \citetalias[][]{Vasiliev2021tango} model, we plot the distribution of $t_{\rm strip}$ in the left panel of Figure \ref{fig:initial}. The color scheme is the same as Figure \ref{fig:sgr_sample}, with gray and pink representing DM and stars, respectively. The excess at $t_{\rm strip} = 0$ is due to $N$-body particles that remain bound to the progenitor. On top of these histograms, we add the trajectory of Sgr dSph in the simulation (blue line and dots) in terms of its Galactocentric distance. With this visualization, it is clear how intense episodes of stripping (at both $2.0 < t_{\rm strip}/{\rm Gyr} \lesssim 2.7$ and $0.5 \lesssim t_{\rm strip}/{\rm Gyr} \lesssim 1.5$) are intrinsically related to close encounters between Sgr and the Milky Way (at ${\sim}2.5$ and ${\sim}1.2$\,Gyr ago), which give rise to the old and new wraps, respectively, discussed in Section \ref{subsec:comps}. Also, note how most of the material is associated with the recently formed component (new wrap) of the stream.
In order to test the conjecture that the stripped portion of Sgr dSph associated with the formation of the old wrap was already dynamically hotter than the new one prior to the galaxy's disruption, we check the $\sigma_V$, with respect to Sgr, of these components in the initial snapshot of \citetalias[][]{Vasiliev2021tango}'s simulation, which starts 3\,Gyr in the past (redshift $z \sim 0.25$ in \citealt{PlanckCollab2020} cosmology). Indeed, the $\sigma_V$ of stars that end up forming the old wrap, i.e., stripped at earlier times, is higher (${\sim}18$\,km\,s$^{-1}$) in comparison to the $\sigma_V$ of stars from the new component (${\sim}14$\,km\,s$^{-1}$).
Furthermore, the middle panel of Figure \ref{fig:initial} shows the same snapshot in configuration space as $(X,Y)_{\rm Sgr,0}$, a Sgr-centered frame. The orange dots ($t_{\rm strip} > 2$\,Gyr/old wrap) in this plot are less centrally concentrated (90\% of stellar particles within $\sim$4\,kpc) than the green ones ($t_{\rm strip} < 2$\,Gyr/new wrap; 90\% within 3\,kpc). This behavior is clear from the right panel of the same figure that shows the cumulative distributions of galactocentric radii ($r_{\rm Sgr,0}$) in the same system. Also, stars that remain bound until redshift $z=0$ have even lower $\sigma_V$ (${\sim}11$\,km\,s$^{-1}$) and are spatially more concentrated (90\% within ${<}2$\,kpc) than the other components.
From the above-described properties of the \citetalias[][]{Vasiliev2021tango} model, we can infer that the periphery of the simulated Sgr dSph contains a larger fraction of stars that end up as the old wrap (stripped earlier) in comparison to its central regions. Therefore, with the understanding that the old wrap is essentially composed of low-metallicity stars, we reach the conclusion that the core regions of Sgr dSph were more metal-rich than its outskirts prior to its accretion.
We recall that, indeed, previous works reported evidence for a metallicity gradient in Sgr's remaining core \citep[][]{Bellazzini1999sgr, Layden2000sgr, Siegel2007SFHsgr, McDonald2013sgr, Garro2021sgrGCs, Vitali2022SgrPristine}. Nevertheless, fully understanding how these stellar-population variations in the Sgr system relate to its interaction with the Milky Way remains to be seen (for example, via induced star-formation bursts; \citealt{Hasselquist2021dwarf_gals}).
Although our interpretation
favors a scenario where Sgr dSph
had enough time to develop a metallicity gradient before its disruption, quantifying this effect is difficult. One way to approach this would be to kinematically decompose Sgr stream stars, as in \citetalias[][]{Johnson2020sgr}, then rearrange them into spatial distributions following the same density profiles of the different components (early or late stripping) of the \citetalias[][]{Vasiliev2021tango} model. Unfortunately, this strategy is difficult to apply to the SEGUE data because of its selection function. The other way around is also feasible, i.e., \textit{painting} the model with \textit{ad hoc} metallicity gradients and, then, comparing with, for instance, the present-day [Fe/H] variations observed across the Sgr stream (\citealt{Hayes2020} and references therein). We defer this exploration to a forthcoming paper.
\begin{figure*}[pt!]
\centering
\includegraphics[width=2.1\columnwidth]{abundances_sgr.png}
\caption{Left: [$\alpha$/Fe]--[Fe/H]. Green line is the running median of GSE's [$\alpha$/Fe] values in bins of 0.2\,dex in [Fe/H], with the shaded area covering 16th and 84th percentiles. Blue and red lines and shaded regions are the same, but for the leading and trailing arms of Sgr stream, respectively. Middle: [C/Fe]--[Fe/H]. The yellow rectangle marks the considered locus of CEMP stars. Blue/red symbols are leading/trailing arm stars. Candidate ($3\sigma$; see text) CEMP stars are shown as star symbols. Right: $\rm[Fe/H]_{\rm L13}$--$\rm[Fe/H]_{\rm DR9}$, where the ``L13'' and ``DR9'' subscripts refer to [Fe/H] values either from \citet[][]{lee2013} or SEGUE's standard publicly available DR9 catalog (\citealt{SDSS_DR9}). Background density maps represent the full SEGUE/\texttt{StarHorse} low-metallicity sample.
\label{fig:chem}}
\end{figure*}
\section{Chemical Abundances} \label{sec:abundances}
\subsection{\texorpdfstring{$\alpha$}x Elements} \label{sec:alpha}
The strategy adopted in SSPP to estimate $\alpha$-element abundances is to match the observed spectra with synthetic ones within the wavelength range $4500 \leq \lambda/{\rm \textup{\AA}} \leq 5500$ \citep[][]{Lee2011sspp}. This region contains several absorption features of interest, including several \ion{Ti}{1} and \ion{Ti}{2} lines as well as the \ion{Mg}{1} triplet (${\sim}5200$\,{\AA}), but also avoids the CH $G$-band at ${\sim}4300$\,{\AA}. Indeed, \citet[][]{deBoer2014sgr} utilized $[\alpha/{\rm Fe}]$
values made available by SEGUE for stars in Sgr stream to argue that a ``knee'' \citep[e.g.,][]{Matteucci1990} existed at $\rm[Fe/H] \lesssim -1.3$ in the [$\alpha$/Fe]--[Fe/H] diagram \citep[][]{Wallerstein1962gdwarfs, Tinsley1979} for this substructure. However, this result is not supported by contemporaneous high-resolution spectroscopic data, especially from the APOGEE survey \citep[][]{Hayes2020, Limberg2022gse}, but also H3 \citep[][]{Johnson2020sgr, naidu2020}. If the position of Sgr's $\alpha$ knee was truly located at such high [Fe/H], it would imply that Sgr should be even more massive than GSE \citep[e.g.,][]{Monty2020, Horta2022haloSubs}. In fact, the [$\alpha$/Fe] vs. metallicity distribution of Sgr flattens at $\rm[Fe/H] \gtrsim -1$, which is a telltale sign that this dSph galaxy experienced an additional burst of star formation, likely due to its interaction with the Milky Way \citep[][]{Hasselquist2021dwarf_gals}.
Here, we revisit the abundances of $\alpha$ elements for Sgr stream using SEGUE, but with a larger sample with lower contamination than previously considered. In the left panel of Figure \ref{fig:chem}, we see the continuous decrease of [$\alpha$/Fe] as a function of increasing metallicity for both the Sgr stream and GSE, which is expected for standard chemical-evolution prescriptions \citep[e.g.,][]{Matteucci1990}. Most importantly, at a given value of [Fe/H], the median [$\alpha$/Fe] of Sgr stream (both leading and trailing arms) is lower than GSE's. This difference becomes more prominent at $\rm[Fe/H] \gtrsim -1.5$, in agreement with high-resolution spectroscopy results from both H3 \citep[][]{naidu2020, Naidu2022mzr} and APOGEE \citep[][]{Hasselquist2021dwarf_gals, Horta2022haloSubs, Limberg2022gse}.
Overall, similarly to the age distributions presented in
Figure \ref{fig:sgr_sample}, [$\alpha$/Fe] seems to be capable of revealing broad disparities between halo/Milky Way components. Indeed, SEGUE's [$\alpha$/Fe] abundance ratios were used to investigate the chemical thin--thick disk decomposition, as well as the so-called Splash or \textit{heated} disk \citep[][]{DiMatteo2019, AnBeers2020blueprintI, AnBeers2021blueprintII, belokurov2020, Amarante2020splash}, by several authors in the past \citep[][]{Lee2011disk, Bovy2012disk, IvezicBeersJuric2012, Liu2012disk, Han2020disk, Lee2022disk}. However, the low accuracy/precision of [$\alpha$/Fe] in SEGUE still makes it difficult to attribute stars to certain populations on an individual basis.
\subsection{Carbon} \label{sec:carbon}
With the SEGUE low-metallicity data at hand, we also explore carbon abundances. In particular, we are interested in finding carbon-enhanced metal-poor (CEMP; $\rm[C/Fe] > +0.7$ and $\rm[Fe/H] < -1$; see \citealt{beers2005}, \citealt{aoki2007}, and \citealt{placco2014Carbon}) stars in Sgr stream. The reasoning for this is the recent results by \citet[][also \citealt{Hansen2018sgr} and \citealt{Chiti2019sgr}]{Chiti2020sgr}, where these authors found no CEMP star in their sample of Sgr dSph members within $-3.1 < \rm[Fe/H] \lesssim -1.5$. Moreover, we utilize observations of Sgr stream as a shortcut to check for potential differences in CEMP fractions between a dwarf galaxy and the Milky Way's stellar halo \citep[see][]{Venn2012, Kirby2015carbon, Salvadori2015, Chiti2018sculptor} in a homogeneous setting. Given that the CEMP phenomenon, especially at $\rm[Fe/H] \lesssim -2.5$, is connected to nucleosynthesis events associated with the first generations of stars, perhaps Population III \citep[e.g.,][]{Nomoto2013, yoon2016, Chiaki2017}, identifying such objects provides clues about the first chemical-enrichment processes that happened in a galaxy.
Throughout this section, we consider carbon abundances obtained for SEGUE spectra by \citet[][see also \citealt{carollo2012}, \citealt{lee2017, lee2019}, and \citealt{Arentsen2022cemp}]{lee2013} in an independent run of the SSPP. Inconveniently, this catalog also comes with slight variations of the stellar atmospheric parameters in comparison with the public SEGUE DR9 release, which was adopted for the new \texttt{StarHorse} run. Therefore, in order to confidently identify CEMP stars, we first select candidates using only \citet[][]{lee2013} [C/Fe] and [Fe/H] (subscripts ``L13'' in Figure \ref{fig:chem}). Then, we compare $\rm[Fe/H]_{\rm L13}$ with [Fe/H] values from our standard DR9 sample ($\rm[Fe/H]_{\rm DR9}$; right panel of Figure \ref{fig:chem}) to confirm their low-metallicity nature.
The middle panel of Figure \ref{fig:chem} ([C/Fe]--[Fe/H]) exhibits our selection of CEMP candidates, delineated by the yellow box.
Note that we take only those stars at $\rm[Fe/H]_{\rm L13} < -1.5$, for consistency with the metallicity range covered by \citet[][]{Chiti2020sgr}. In any case, expanding this boundary to $\rm[Fe/H]_{\rm L13} < -1$ would only include a couple of additional CEMP candidates. We discovered a total of 39 likely-CEMP stars (33 at $\rm[Fe/H]_{\rm L13} < -2$). With this sample at hand, we looked for those candidates confidently ($3\sigma$ in $\rm[C/Fe]_{\rm L13}$) encompassed by the CEMP criteria. We found 7 such objects, shown as star symbols in Figure \ref{fig:chem}. Although two of these CEMP stars have discrepant metallicity determinations ($\rm[Fe/H]_{\rm L13}$ vs. $\rm[Fe/H]_{\rm DR9}$; right panel of Figure \ref{fig:chem}), we can still confidently assert that there exist CEMP stars in Sgr stream.
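The $3\sigma$ selection described above amounts to a simple cut on the \citet[][]{lee2013} abundances. The sketch below illustrates the idea; the field names are illustrative, not those of the actual catalog.

```python
def cemp_3sigma(stars, cfe_lim=0.7, feh_lim=-1.5):
    """Keep stars whose carbon enhancement exceeds the CEMP threshold
    at the 3-sigma level and whose metallicity is below `feh_lim`.
    Each star is a dict with (illustrative) keys 'cfe', 'cfe_err', 'feh'."""
    return [s for s in stars
            if s['feh'] < feh_lim and s['cfe'] - 3 * s['cfe_err'] > cfe_lim]
```

A star with $\rm[C/Fe] = +0.9 \pm 0.15$ would thus count as a candidate but not as a confident ($3\sigma$) CEMP detection.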
A possible explanation for the lack of CEMP stars in \citeauthor{Chiti2020sgr}'s (\citeyear{Chiti2020sgr}) sample could be their photometric target selection, which was based on SkyMapper DR1 \citep[][]{Wolf2018SkyMapper}. The excess of carbon, hence the pronounced strength of the CH $G$-band, is capable of depressing the continuum out to the wavelength region of the \ion{Ca}{2} K/H lines, where SkyMapper's $v$ filter is centered ($3825\,\textup{\AA}$; see \citealt{DaCosta2019} and references therein), a phenomenon referred to as ``carbon veiling'' \citep[][]{yoon2020}. If confirmed, a scenario where the surviving core of Sgr dSph has a lower CEMP fraction than its outskirts/stream at a given metallicity could be similar to what potentially happens in the Milky Way's bulge and halo \citep[][]{Arentsen2021pigs3}. Either way, the unbiased discovery of additional VMP stars in Sgr \citep[e.g.,][]{Vitali2022SgrPristine} as well as other dSph satellites \citep[e.g.,][]{Skuladottir2021UMPsculptor} will be paramount for us to advance our understanding about the earliest stages of chemical enrichment in these systems.
Finally, we also calculate the fraction of CEMP stars in Sgr stream and compare it with the Milky Way. \citet[][]{Arentsen2022cemp} have recently demonstrated that various observational efforts focused on the discovery and analysis of metal-poor stars via low/medium-resolution (up to $\mathcal{R} \sim 3000$) spectroscopy report inconsistent CEMP fractions among them \citep{lee2013, placco2018, placco2019, aguado2019, Arentsen2020pigs2, Yuan2020dtgs, Limberg2021_Gemini+SOAR, Shank2022dtgs}. However, we reinforce that it is not our goal to provide absolute CEMP fractions \citep[e.g.,][]{rossi2005, lucatello2006, yoon2018}, but rather to use the SEGUE/\texttt{StarHorse} low-metallicity sample to make a homogeneous comparison. For this reason, we do not perform any evolutionary corrections (as in \citealt{placco2014Carbon}) to the carbon abundances of \citet[][]{lee2013}. The overall fraction of CEMP stars in the full sample at $\rm[Fe/H] < -2$, but excluding Sgr, is $19\pm1\%$\footnote{Uncertainties for fractions are given by Wilson score confidence intervals \citep[][]{Wilson1927}. See \citet[][]{Limberg2021_Gemini+SOAR} for details.}. For the whole Sgr stream, leading and trailing arms altogether, this number is $16\pm5\%$. Therefore, we conclude that the SEGUE carbon-abundance data do not provide evidence for variations in the CEMP frequency between Sgr (stream) and the Milky Way.
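The Wilson score interval quoted in the footnote can be computed directly from the counts of CEMP stars. The sketch below follows the standard \citet[][]{Wilson1927} prescription; the sample sizes in the usage note are illustrative, not the paper's actual star counts.

```python
import math

def wilson_interval(k, n, z=1.0):
    """Wilson score interval for a binomial fraction k/n.
    z = 1.0 gives a ~68% (1-sigma) interval; z = 1.96 gives ~95%."""
    p = k / n
    denom = 1.0 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half
```

For example, 8 CEMP stars out of a hypothetical 50 members gives roughly $0.17 \pm 0.05$ at $1\sigma$, of the same order as the quoted $16\pm5\%$; unlike the naive normal approximation, the interval stays within $[0,1]$ even for small counts.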
\section{Conclusions} \label{sec:conclusions}
In this work, we performed a chemodynamical study of Sgr stream, the tidal tails produced by the ongoing disruption of Sgr dSph galaxy. Motivated by recent literature results, we were particularly interested in exploring the VMP regime of this substructure. Our main goals were to quantify the kinematic properties of this population as well as search for CEMP stars. For the task, we leveraged low-resolution spectroscopic and astrometric data from SEGUE DR9 and \textit{Gaia} EDR3, respectively. Moreover, this catalog was combined with broad-band photometry from various sources in order to deliver precise distances for ${\sim}175,000$ low-metallicity ($\rm[Fe/H] \leq -0.5$) stars via Bayesian isochrone-fitting in a new \texttt{StarHorse} run (Figure \ref{fig:xyz}), an effort that is fully described in an accompanying paper (Queiroz et al., submitted). Our main conclusions can be summarized as follows.
\raggedbottom
\begin{itemize}
\item We delineated a new set of selection criteria for the Sgr stream based on angular momenta and actions (Figure \ref{fig:selection}). Despite being more conservative than previous works (e.g., \citetalias[][]{Johnson2020sgr} and \citealt[][]{naidu2020}), we identify ${\sim}1600$ members of Sgr stream, twice as many as these authors found. Out of these, there are ${>}200$ VMP stars.
\item Reassuringly, although the SEGUE target selection inflates the number of metal-poor stars ($\rm[Fe/H] < -1$; Figure \ref{fig:sgr_sample}), we found the leading arm to be more metal-poor, by ${\sim}0.2$\,dex, than the trailing one. This is in agreement with many previous works \citep[notably][]{Hayes2020}.
\item We provided the first age estimates for individual stars in Sgr stream. For the task, we constructed a subsample of 56 turnoff/subgiant stars in this substructure, for which \texttt{StarHorse} ages are most reliable. We found an overall median age of $11.7^{+0.3}_{-0.2}\,{\rm Gyr}$, which is ${\sim}1$\,Gyr older than the bulk of thick-disk stars according to both our own SEGUE/\texttt{StarHorse} data and asteroseismic estimates \citep[][]{Miglio2021age_disks}.
\item We found ($2\sigma$) evidence for increasing velocity dispersion in Sgr stream between its metal-rich and VMP populations. Similar findings had been previously presented by \citetalias[][]{Johnson2020sgr}, but were contested by \citetalias[][]{Penarrubia2021sgr}, who argued that these authors' sample was highly contaminated by Milky Way interlopers. Now, we reassert the former's findings with a $2{\times}$ larger and more rigorously selected sample of Sgr stream members (Figure \ref{fig:vel_disp}/Table \ref{tab:vel_metal}).
\item With the $N$-body model of \citetalias[][]{Vasiliev2021tango}, we found that the new wrap (composed of stars recently stripped; $t_{\rm strip} < 2\,{\rm Gyr}$) of Sgr stream preferentially contains metal-rich ($\rm[Fe/H] > -1.0$) stars. Conversely, the old wrap ($t_{\rm strip} > 2\,{\rm Gyr}$) is exclusively associated with metal-poor stars ($\rm[Fe/H] < -1.0$) in phase space. Hence, the increasing velocity dispersion with decreasing [Fe/H] is driven by the mixture between these components, i.e., larger fractions of the old wrap are found at lower metallicities, while the metal-rich population is only representative of the new wrap (Figure \ref{fig:sgr_coords}).
\item Looking at the initial snapshot of the \citetalias[][]{Vasiliev2021tango} simulation, we found that stars that end up forming the old wrap are dynamically hotter
and less centrally concentrated than those that compose the new wrap. With the understanding that the old wrap contains stars of lower metallicities, this implies that the outskirts of Sgr dSph, prior to disruption, were more metal-poor than its core regions, i.e., internal [Fe/H] variations in the galaxy. The self-consistent reconstruction of such a metallicity gradient and comparisons with surviving dSph galaxies \citep[see][]{Kirby2011MetalGrads} will be the topic of a forthcoming contribution.
\item On the chemical-abundance front, SEGUE data allowed us to verify that the [$\alpha$/Fe] of Sgr stream decreases with increasing [Fe/H]. Most important, at a given metallicity, we ascertained that the median [$\alpha$/Fe] of Sgr stream is lower than GSE's, in conformity with other recent efforts \citep[][]{Hasselquist2021dwarf_gals, Limberg2022gse}.
\item We confidently (${>}3\sigma$) identify CEMP stars in Sgr stream. Also, its CEMP fraction ($16\pm5\%$) is compatible ($1\sigma$) with the overall SEGUE catalog ($19\pm1\%$). Hence, we argue that the apparent lack of CEMP stars in Sgr dSph \citep[][and references therein]{Chiti2020sgr} could be associated with target-selection effects. Nevertheless, carbon-abundance information for larger samples of VMP stars across the whole Sgr system will be necessary to investigate this discrepancy between the stream and the remaining core of this dSph galaxy.
\end{itemize}
This paper emphasizes how powerful the combination between deep spectroscopy and astrometric data can be in our quest to unravel the outer Galactic halo. It also shows how crucial the fully Bayesian approach of \texttt{StarHorse} is for the task of deriving precise parameters (mainly distances) even for faint stars. In fact, the SEGUE/\texttt{StarHorse} catalog provides a glimpse of the scientific potential that will be unlocked by the next generation of wide-field surveys such as 4MOST, SDSS-V, and WEAVE. Finally, we reinforce the importance of tailored $N$-body models as fundamental tools for interpreting the complex debris left behind by disrupted dwarf galaxies in the Milky Way's halo.
\software{\texttt{corner} \citep{corner2016}, \texttt{gala} \citep{gala2017}, \texttt{jupyter} \citep{jupyter2016}, \texttt{matplotlib} \citep{matplotlib}, \texttt{NumPy} \citep{numpy}, \texttt{pandas} \citep{pandasSoftware}, \texttt{SciPy} \citep{scipy}, \texttt{scikit-learn} \citep{scikit-learn}, \texttt{TOPCAT} \citep{TOPCAT2005}.
}
\begin{acknowledgments}
G.L. thanks Alex Ji, Ani Chiti, Felipe Almeida-Fernandes, and Vini Placco for discussions and suggestions that contributed to the final manuscript. G.L. also thanks several authors that provided observational or simulation data, namely Amina Helmi, Emma Dodd, Khyati Malhan, Sergey Koposov, and Zhen Yuan. G.L. is particularly grateful toward Eugene Vasiliev, who readily provided the initial snapshot of the \citetalias[][]{Vasiliev2021tango} simulation as well as assistance with the model. Finally, G.L. thanks all those authors that made their observational and/or simulation data publicly available and are referenced throughout this work. G.L., H.D.P., S.R., J.A., and R.M.S. extend heartfelt thanks to all involved with the ``Brazilian Milky Way group meeting'', namely Eduardo Machado-Pereira, Fabrícia O. Barbosa, H\'elio J. Rocha-Pinto, Lais Borbolato, Leandro Beraldo e Silva, and Yuri Abuchaim.
G.L. acknowledges FAPESP (procs. 2021/10429-0 and 2022/07301-5). H.D.P. thanks FAPESP
(procs. 2018/21250-9 and 2022/04079-0). S.R. thanks support from FAPESP (procs. 2014/18100-4 and 2015/50374-0), CAPES, and CNPq. J.A. acknowledges funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No. 852839). R.M.S. acknowledges CNPq (Proc. 306667/2020-7). A.P.-V. acknowledges the DGAPA-PAPIIT grant IA103122. Y.S.L. acknowledges support from the National Research Foundation (NRF) of Korea grant funded by the Ministry of Science and ICT (NRF-2021R1A2C1008679). Y.S.L. also gratefully acknowledges partial support for his visit to the University of Notre Dame from OISE-1927130: The International Research Network for Nuclear Astrophysics (IReNA), awarded by the US National Science Foundation.
This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is \url{www.sdss.org}. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration.
This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France (\url{https://cds.u-strasbg.fr}). The original description of the VizieR service was published in \citet{VizieR2000}.
\end{acknowledgments}
\section{INTRODUCTION}
\label{sec:intro}
The present research and training capabilities in observational
astronomy in Iran can by no means respond to the growing demand due
to the rapid growth in higher education over the past decade. The
existing observational facilities consist of a number of
small telescopes in various university campus observatories, generally
used for undergraduate and graduate training. A medium-size optical
telescope is thought to be a step to facilitate research in astronomy
and observational cosmology. The geographic location of Iran, 32N
53E, with its relatively dry climate and high-altitude mountains, offers
suitable locations for optical telescopes.
Site selection study for a proposed 2--4 meter class telescope started
a few years before the INO project received administrative approval. The
study, led by S. Nasiri (report in preparation), began with the collection and
analysis of weather data, seismic hazard data, accessibility, and sunny-day
statistics over the central dry regions of the country. A large number of sites
were identified and inspected. When the number of potential sites, mostly
scattered around the central desert, was reduced to a manageable number,
long-term seeing monitoring also started and continued for two years on
4 different sites with altitudes between 2500m and 3000m.
\section{Site characterization}
It has been shown that atmospheric turbulence has a strong
connection to astronomical seeing. In particular, the
Fried parameter, $r_0$, which represents the telescope aperture
diameter for which the diffraction-limited image resolution is equal
to the FWHM of the seeing-limited image, is shown to be determined by
the refractive-index structure constant (Fried 1966), which itself depends
on the temperature structure of the atmosphere (e.g. Marks et al
1996).
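Concretely, $r_0$ translates into a seeing-limited FWHM through the standard relation ${\rm FWHM} \approx 0.98\,\lambda/r_0$. The minimal sketch below assumes a reference wavelength of 500 nm.

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0  # ~206265 arcsec per radian

def seeing_fwhm_arcsec(r0_m, wavelength_m=500e-9):
    """Seeing-limited FWHM implied by the Fried parameter r0,
    using FWHM ~= 0.98 * lambda / r0 (both in meters, result in arcsec)."""
    return 0.98 * wavelength_m / r0_m * RAD_TO_ARCSEC
```

For instance, $r_0 = 15$\,cm corresponds to ${\sim}0.67$ arcsec seeing, comparable to the median values reported in Section 2.3; a smaller $r_0$ (stronger turbulence) yields proportionally worse seeing.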
Site characterization involved measurement of a number of key site
parameters such as the wind speed and direction, sky brightness,
seeing and microthermal variation profile at the two sites,
known as Dinava (3000m) and Gargash (3600m). These two sites are 70km apart.
The key objective of the monitoring was to find the best of the two
sites for the installation of the 3.4m telescope.
\subsection{Wind speed and direction}
Typical weather stations were installed at both sites on 12m masts by
the end of 2008. They allowed the measurement of temperature, wind
speed and direction, barometric pressure, and humidity. Wind data
recording was performed every 10 minutes at an 8m height above the
peak. Two years of measurement indicate that both sites show a
peak wind speed of 4.0--8.0 m/s but, despite a 600m higher altitude, the
wind speed in Gargash is generally lower than in Dinava. West and
south-west are generally the dominant wind directions at both sites.
This is shown in Fig 1.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[height=8cm]{WR.ps} \hspace*{1cm}
\includegraphics[height=8cm]{WH.ps}
\end{tabular}
\end{center}
\caption[example] { \label{fig:wind} Left: Gargash site windrose is shown
which clearly indicates a dominant wind direction and its
intensity. The data covers Jan 2009 - Oct 2010. The Dinava site shows a
similar windrose. Right: The wind speed histogram is shown for Gargash
(top) and Dinava (bottom) during the same period.}
\end{figure}
\subsection{Humidity, clear sky and temperature}
Statistically, there are about 230 sunny days available for the
region. Monitoring the cloud coverage over two years indicates that
around 45\% clear sky is available annually. This increases to above
70\% between June--Oct.
In about 55\% of the nights the relative humidity remains below
60\%. This increases to over 80\% between May--Oct. There is no
measurable difference between the two sites in relative humidity.
Temperature variation ($T_{max}-T_{min}$) during the night (between
twilights) is generally 3 degrees. The temperature changes at a rate
of about 0.15 ($\pm0.3$) degrees Celsius per hour between sunset and
midnight. The Dinava site is generally about 5 degrees Celsius warmer
than the Gargash site.
\subsection{Seeing measurement}
Seeing is one of the most important parameters describing the
atmospheric turbulence. Seeing measurement was carried out using DIMM
systems (eg. Sarazin \& Roddier 1990, Vernin \& Munoz 1995, Tokovinin
2002) which comprised of Orion Ritchey-chretien 8 inches telescopes,
44 mm apertures with a 122 mm separation, installed on metal pillar
located on a lifted concrete platform providing an altitude of 3.5m
above the ground for the telescopes in both sites. The two DIMM
systems installed in Gargash and Dinava were cross calibrated at Dinava site
using the same configuration. This configuration was kept unchanged
for the period of observations June-Oct 2010. A similar method was
adopted by the site selection team (2004-2006) using 11-inch
telescopes, but on conventional telescope tripod. A comparison of
the measured seeing in Dinava and Gargash is shown in Fig 2.
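For reference, the conversion from differential image motion to seeing follows Sarazin \& Roddier (1990). The sketch below uses their longitudinal-variance coefficients together with the aperture geometry quoted above (44\,mm apertures, 122\,mm baseline); it is a schematic outline under those assumptions, not the exact pipeline used here.

```python
def r0_from_dimm(var_l_rad2, D=0.044, d=0.122, lam=500e-9):
    """Fried parameter (m) from the longitudinal differential image-motion
    variance (rad^2), inverting sigma_l^2 = 2 lam^2 r0^(-5/3) *
    (0.179 D^(-1/3) - 0.0968 d^(-1/3)) of Sarazin & Roddier (1990).
    D is the sub-aperture diameter and d the baseline, both in meters."""
    K = 2.0 * lam**2 * (0.179 * D**(-1.0/3.0) - 0.0968 * d**(-1.0/3.0))
    return (K / var_l_rad2) ** (3.0 / 5.0)

def seeing_arcsec(r0_m, lam=500e-9):
    """Seeing FWHM ~= 0.98 lam / r0, converted to arcseconds."""
    return 0.98 * lam / r0_m * 206264.8
```

With this geometry, an rms differential motion of ${\sim}0.4$ arcsec (variance ${\sim}3.7\times10^{-12}$ rad$^2$) corresponds to $r_0 \approx 15$\,cm, i.e. ${\sim}0.67$ arcsec seeing.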
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[height=12cm,angle=-90]{Seeing.ps}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:seeing}
Seeing distribution compared simultaneously between the Gargash (red) and
Dinava (blue) sites in summer 2010, obtained from similar DIMM
systems. The first-quartile seeing in Gargash at 3600m is 0.54
($\pm0.04$) arcsec, compared to 0.60 ($\pm 0.09$) arcsec in Dinava. The
second-quartile (median) seeing is 0.67 arcsec and 0.72 arcsec for Gargash
and Dinava, respectively.}
\end{figure}
\subsection{Microthermal variation measurement and CFD modeling}
The main aim of the microthermal measurements is to determine the
height of the ground-layer turbulence, which allows an optimization of the
cost--height trade-off, driven by the desire to locate the primary mirror
above the turbulent layer. In case of complex peak topography, multiple
measurements further help to better constrain the location of the
telescope/enclosure.
As the characteristic frequency of the temperature variation is of the
order of 10--100 Hz and the amplitude of the variation is of the order of
0.01 of a degree, the sensitivity of the sensors and the data recording
system as well as their response time should be adequately set.
We therefore designed a system to deliver $\sim$1 kHz recording frequency
with a few $\times$ 0.001 degree sensitivity using platinum wire of high
purity and 20 micron diameter.
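The recorded temperature differences relate to optical turbulence through the temperature structure constant. The sketch below shows the standard conversion chain; the sensor separation and ambient conditions in the usage note are illustrative.

```python
def ct2_from_pair(delta_t_variance, separation_m=2.0):
    """Temperature structure constant C_T^2 (K^2 m^(-2/3)) from the variance
    of the temperature difference between two sensors a fixed distance apart,
    assuming Kolmogorov (inertial-range) statistics:
    <(T1 - T2)^2> = C_T^2 * r^(2/3)."""
    return delta_t_variance / separation_m ** (2.0 / 3.0)

def cn2_from_ct2(ct2, pressure_mbar, temperature_K):
    """Refractive-index structure constant via the usual optical relation
    Cn^2 = (79e-6 * P / T^2)^2 * C_T^2, with P in mbar and T in K."""
    return (79e-6 * pressure_mbar / temperature_K**2) ** 2 * ct2
```

For example, a 0.01 K rms temperature difference over a 2 m horizontal baseline at ${\sim}650$ mbar and 270 K gives $C_n^2$ of a few $\times 10^{-17}$ m$^{-2/3}$, within the range typical of surface-layer turbulence.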
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[height=12cm,angle=-90]{MT6.ps}
\end{tabular}
\end{center}
\caption[example]
{ \label{fig:microthermal}
An example microthermal variation profile for one of the masts in Gargash. The median of the variance at each level is obtained for 6 consecutive days in Sept 2010. The lower levels show larger variance during the day and night relative to the upper levels.}
\end{figure}
The microthermal variation measurements were performed at 6 locations (given
the complexity of the peak topography) at the Gargash site and 2 locations at the
Dinava site simultaneously in September--October 2010. The sensors were
placed at 8 levels with a vertical separation of 1.5m. The horizontal
separation of the sensors is 2 meters. A quick analysis of the
results shows that the first mast along the dominant wind
direction (shown in Fig 3) provides a textbook example of the thermal
variation profile. There is a clear difference in the recorded variance
between the levels, observed in both day and night time. More
detailed analysis of the microthermal measurements is in progress.
We have obtained the topographic map of the peak with resolutions of 1
meter and 5 meters for the upper (down to 30 meters below the peak) and lower
(down to 100 meters below the peak) regions, respectively, to be able to perform
Computational Fluid Dynamics (CFD) modeling of the peak under various
wind flow and turbulence conditions. Our initial findings indicate
that the boundary layer is about 15--20 meters from the ground.
\subsection{Sky brightness}
Sky background was measured under photometric conditions in Dinava and
Gargash. We find that the Gargash site is about 0.4 magnitude darker than
the Dinava site, owing to a larger distance from major cities. The V-band
sky brightness in Dinava and Gargash is 21.6 and 22.0 mag, respectively. A
light pollution control project is being planned to preserve the sites
for astronomical observations.
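Because magnitudes are logarithmic, a 0.4 mag difference corresponds to a sizeable change in sky flux; a one-line check:

```python
def flux_ratio(delta_mag):
    """Flux ratio corresponding to a magnitude difference,
    from m1 - m2 = -2.5 * log10(F1 / F2)."""
    return 10.0 ** (0.4 * delta_mag)
```

That is, the Dinava sky (21.6 mag) is roughly 45\% brighter than Gargash (22.0 mag).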
\section{Concluding remarks}
Our studies indicate a relative advantage of the Gargash site
in comparison to the Dinava site. The Gargash site is found to be darker,
to benefit from better astronomical seeing, and, owing to its higher altitude,
to be less affected by dust.
\acknowledgments
The site selection activity was handled by the Institute for Advanced
Studies in Basic Sciences (led by S. Nasiri) between 2001 and 2007. Site
characterization and monitoring reported here was handled by the
Iranian National Observatory Project team and the Institute for
Research in Fundamental Sciences (IPM). I acknowledge the contribution
of individuals, R. Mansouri, A. Ardeberg, S. Arbabi, A. Haghighat,
A. Behnam, A. Molainejad, A. Roozrokh, A. Danesh, A. Jafarzadeh,
R. Ravani, A. Mirhoseini, B. Afzalifar, F. Ghaderi, and the site monitoring
teams.
\section*{REFERENCES}
Fried D.L., 1966, J. Opt. Soc. Am. 56, 1372\\
Marks, R.D., Vernin J., Azouit M., Briggs J.W., Burton M.G., Ashley M.C.B and Manigault J.F., 1996, Astron. Astrophys. Suppl. Ser. 118, 385\\
Sarazin, M.; Roddier, F., 1990, A\&A, 227, 294\\
Tokovinin, A., 2002, PASP, 114, 1156\\
Vernin, J., Munoz-Tunon, C., 1995, PASP, 107, 265\\
\end{document}
\section{Introduction}
Over the last 15 years, approaches to sentiment analysis which concentrated on creating and curating sentiment lexicons \cite{Turney2002,Liu2005} or used n-grams for classification \cite{Pang2002} have been replaced by models that are able to exploit compositionality \cite{Socher2013b,Irsoy2014a} or implicitly learn relations between tokens \cite{Peters2018,Howard2018,Devlin2018}. These neural models push the state of the art to over 90\% accuracy on binary sentence-level sentiment analysis.
Although these methods show a quantitative improvement over previous approaches, they are seldom accompanied by a thorough analysis of the qualitative differences. This has led to the current situation, where we are aware of quantitative, but not qualitative, differences between state-of-the-art sentiment classifiers. It also means that we are not aware of the outstanding conceptual challenges that we still face in sentiment analysis.
In this work, we attempt to discover which conceptual challenges still pose a problem for all state-of-the-art sentiment methods for English. To do so, we train and test three state-of-the-art machine learning classifiers (BERT, ELMo, and a BiLSTM) as well as a bag-of-words classifier on six sentence-level sentiment datasets available for English. We then collect the subset of sentences that all models misclassify and annotate them for 18 linguistic and paralinguistic phenomena, such as negation, sarcasm, modality, or world knowledge. We present this new data as a challenging dataset for future research in sentiment analysis, which enables probing the problems that sentiment classifiers still face in more depth.
Specifically, the contributions of this work are:
\begin{itemize}
\setlength{\itemsep}{5pt}
\setlength{\parskip}{0pt}
\setlength{\parsep}{0pt}
\item the creation of a challenging sentiment dataset from previously available data,
\item the annotation of errors in this dataset for 18 linguistic and paralinguistic phenomena,
\item a thorough analysis of the dataset,
\item and finally presenting a practical use-case demonstrating how the dataset can be
used to probe the particular types of errors made by a new model.
\end{itemize}
The rest of the paper is organized into related work (Section \ref{relatedwork}), a description of the experimental setup (Section \ref{expsetup}), a brief description of the dataset (Section \ref{challengedataset}), an in-depth analysis (Section \ref{datasetanalysis}), a case-study that demonstrates the usefulness of the dataset (Section \ref{casestudy}), and finally a conclusion (Section \ref{conclusion}).
\section{Related Work}
\label{relatedwork}
Neural networks are now ubiquitous in NLP tasks, often giving state-of-the-art results. However, they are known for being ``black boxes'' which are not easily interpretable. Recent interest in interpreting these methods has led to new lines of research which attempt to discover what linguistic phenomena neural networks are able to learn \cite{Linzen2016,Gulordava2018,Conneau2018}, how robust neural networks are to perturbations in input data \cite{Ribeiro2018,Ebrahimi2018,Schluter2018}, and what biases they propagate \cite{Park2018,Zhao2018,Kiritchenko2018}.
Specifically within the task of sentiment analysis, certain linguistic phenomena are known to be challenging. Negation is one of the aspects of language that most clearly affects expressions of sentiment and that has been studied widely within sentiment analysis (see \newcite{Wiegand2010} for an early survey). The difficulties of resolving negation for sentiment analysis include determining negation scope \cite{Hogenboom2011,Lapponi2012,Reitan2015}, and semantic composition \cite{Wilson2005,Choi2008,Kiritchenko2016}.
Verbal polarity shifters have also been studied. \newcite{Schulder2018} annotate verbal shifters at the sense-level. They conclude that, although individual negation words are more frequent in the Amazon Product Review Data corpus, the overall frequency of negation words and shifters is likely similar. This suggests that there is a Zipfian tail of shifters which are not often handled within sentiment analysis.
Furthermore, the linguistic phenomenon of modality has also been shown to be problematic. Both \newcite{Narayanan2009} and \newcite{Liu2014} explore the effect of modality on sentiment classification and find that explicitly modeling certain modalities improves classification results. They advocate for a divide-and-conquer approach, which would address the various realizations of modality individually. \newcite{Benamara2012} perform linguistic experiments using native speakers concerning the effects of both negation and modality on opinions, and similarly find that the type of negation and modality determines the final interpretation of polarity.
The sentiment models inspected in these analyses, however, were lexicon-, word-, and n-gram-based models. It is not clear that neural networks have the same weaknesses, as they have been shown to deal with compositionality and long-distance dependencies to some degree \cite{Socher2013b,Linzen2016}. Additionally, these authors did not attempt to discover from the data what phenomena were present that could affect sentiment. In the current paper we aim to provide a systematic analysis of error types found across a range of datasets, domains, and classifiers.
\section{Experimental Setup}
\label{expsetup}
In these experiments, we test three state-of-the-art models for sentence-level sentiment classification. We choose to focus on sentence-level classification for three reasons: 1) sentence-level classification is a popular and useful task, 2) there is a large amount of high-quality annotated data available, and 3) annotation of linguistic phenomena is easier at sentence-level than document-level. It is also likely that most phenomena that occur at sentence-level, \textit{e.\,g.}\xspace, negation, comparative sentiment, or modality, will transfer to other sentiment tasks.
\subsection{Datasets}
In order to discover a subset of sentences that all state-of-the-art models are unable to correctly predict, we collect six English-language datasets previously annotated for sentence-level sentiment from five domains (news wire, hotel reviews, movie reviews, twitter, and micro-blogs). Table \ref{datasetstats} shows the statistics for each of the datasets.
\begin{table}
\centering\small
\setlength{\tabcolsep}{5pt}
\begin{tabular}{lrrrrrr}
\toprule
& MPQA & OP. & Sem. & SST & Ta. & Th. \\
\cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5}\cmidrule(lr){6-6}\cmidrule(lr){7-7}
$++$ & $-$ & 379 & $-$ & 1,852 & $-$ & $-$ \\
$+$ & 193 & 879 & 3,499 & 3,111 & 923 & 2,727 \\
0 & 527 & $-$ & 4,478 & 2,242 & 1,419 & 1,779 \\
$-$ & 413 & 399 & 1,310 & 3,140 & 1,320 & 1,828 \\
$--$ & $-$ & 74 & $-$ & 1,510 & $-$ & $-$ \\
\cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5}\cmidrule(lr){6-6}\cmidrule(lr){7-7}
total & 1,133 & 1,731 & 9,287 & 11,855 & 3,662 & 6,334 \\
\bottomrule
\end{tabular}
\caption{Statistics for the sentence-level annotations in each dataset.}
\label{datasetstats}
\end{table}
\paragraph{MPQA}
The Multi-Perspective Question Answering (MPQA) Opinion Corpus \cite{Wilson2005} provides contextual polarity annotations for English news documents from the world press. The annotations are private state frames, which include annotations for text anchor, source, target, and attitude type, among others. We extract sentiment-labeled sentences by taking only those sentences that have sentiment annotations. Additionally, we remove sentences that contain both positive and negative sentiment. This leaves a three-class (positive, neutral, negative) sentence-level dataset.
\paragraph{OpeNER}
The Open Polarity Enhanced Named Entity Recognition (OpeNER) sentiment datasets \cite{Agerri2013} contain hotel reviews annotated for 4-class (strong positive, positive, negative, strong negative) sentiment classification. We take the English dataset, where self-attention networks give state-of-the-art results \cite{Ambartsoumian2018}.
\paragraph{SemEval}
The SemEval 2013 tweet classification dataset \cite{Nakov2013} contains tweets collected and annotated for three-class (positive, neutral, negative) sentiment. The state-of-the-art model is a Convolutional Network \cite{Severyn2015}.
\paragraph{Stanford Sentiment Treebank}
The Stanford Sentiment Treebank \citep{Socher2013b} contains 11,855 English sentences from movie reviews which have been annotated at each node of a constituency parse tree. Contextualized word representations combined with a bi-attentive sentiment network currently give state-of-the-art results \cite{Peters2018}.
\paragraph{Täckström dataset}
The Täckström dataset \cite{Tackstrom2011} contains product reviews which have been annotated at both document- and sentence-level for three-class sentiment, although the sentence-level annotations also have a ``not relevant'' label. We keep the sentence-level annotations, which gives 3,662 sentences annotated for three-class sentiment.
\paragraph{Thelwall dataset}
The Thelwall dataset derives from datasets provided with SentiStrength\footnote{The data are available at \url{http://sentistrength.wlv.ac.uk/}} \cite{Thelwall2010}. It contains microblogs annotated for both positive and negative sentiment on a scale from 1 to 5. We map these to single sentiment labels such that sentences which are clearly positive (pos $\geq 3$ and neg $< 3$) are given the positive label,
clearly negative sentences (pos $< 3$ and neg $\geq 3$) the negative label, and clearly neutral sentences ($2 <$ pos $< 3$ and $2 <$ neg $< 3$) the neutral label. We discard all other sentences, which leaves 6,334 annotated sentences.
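The label mapping above can be sketched as a small helper (the function name is our own, and the neutral condition is read as $2 <$ pos $< 3$ and $2 <$ neg $< 3$):

```python
def map_thelwall_label(pos, neg):
    """Map SentiStrength positive/negative strength scores (1-5 scale)
    to a single sentence-level label; returns None for discarded sentences."""
    if pos >= 3 and neg < 3:
        return "positive"          # clearly positive
    if pos < 3 and neg >= 3:
        return "negative"          # clearly negative
    if 2 < pos < 3 and 2 < neg < 3:
        return "neutral"           # clearly neutral
    return None                    # ambiguous: discarded
```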
\subsection{Models}
In order to gain an idea of what errors most models suffer from, we test three state-of-the-art models on the datasets. Additionally, we use a bag-of-words model as it is a strong baseline for text classification. For the \textsc{Single} setup, we train all models on the training and development data for each dataset and test on the corresponding test set, therefore avoiding domain problems.
\paragraph{BERT} The BERT model \cite{Devlin2018} is a bidirectional transformer that is pretrained on two tasks: 1) a cloze-like language modeling task and 2) a binary next-sentence prediction task. It is pretrained on roughly 3.3 billion words from the BooksCorpus \cite{Zhu_2015_ICCV} and English Wikipedia. We fine-tune the available pretrained model\footnote{\url{https://github.com/google-research/bert}} on each sentiment dataset.
\paragraph{ELMo} We use the bi-attentive classification network\footnote{\url{https://s3-us-west-2.amazonaws.com/allennlp/models/sst-5-elmo-biattentive-classification-network-2018.09.04.tar.gz}} from \newcite{Peters2018}. The network uses word embeddings as well as character-based embeddings created by a character-level CNN-BiLSTM network. The word representations are first passed through a feedforward layer and then through a sequence-to-sequence network with biattention. This new representation of the text is combined with the original representation and passed through another sequence-to-sequence network. Finally, max, min, mean, and self-attention pooled representations are created from this last sequence and sent to a maxout layer for classification.
\paragraph{BiLSTM} Bidirectional long short-term memory (BiLSTM) networks have been shown to be strong baselines for sentiment tasks \cite{Tai2015a,Barnes2017}. We implement a single-layered BiLSTM which takes pretrained skip-gram embeddings as input, creates a sentence representation by concatenating the final hidden states of the left and right LSTMs, and passes this representation to a softmax layer for classification. Additionally, dropout serves as a regularizer.
\paragraph{Bag-of-Words classifier} Finally, bag-of-words classifiers are strong baselines for sentiment and when combined with other features can still give state-of-the-art results for sentiment tasks \cite{Mohammad2013}. Therefore, we train a Linear SVM on a bag-of-words representation of the training sentences.
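A minimal sketch of such a bag-of-words classifier follows; for simplicity it uses a multiclass perceptron rather than the linear SVM actually trained, but the unigram feature space is the same:

```python
from collections import Counter

def bow(sentence):
    """Unigram bag-of-words features as a token -> count mapping."""
    return Counter(sentence.lower().split())

def train_perceptron(sentences, labels, epochs=10):
    """Multiclass perceptron over bag-of-words features: one weight
    vector per class, updated on every misclassified sentence."""
    classes = sorted(set(labels))
    weights = {c: Counter() for c in classes}

    def score(c, feats):
        return sum(weights[c][tok] * n for tok, n in feats.items())

    for _ in range(epochs):
        for sent, gold in zip(sentences, labels):
            feats = bow(sent)
            pred = max(classes, key=lambda c: score(c, feats))
            if pred != gold:  # standard perceptron update
                for tok, n in feats.items():
                    weights[gold][tok] += n
                    weights[pred][tok] -= n
    return weights, classes

def predict(weights, classes, sentence):
    feats = bow(sentence)
    return max(classes, key=lambda c: sum(weights[c][t] * n
                                          for t, n in feats.items()))
```

Despite ignoring word order entirely, the bag-of-words baseline outperforms the BiLSTM on four of the six datasets in Table \ref{table:results}.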
\begin{table*}
\centering\small
\begin{tabular}{llcccccc}
\toprule
&& MPQA & OpeNER & SemEval & SST & Täckström & Thelwall\\
\cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5}\cmidrule(lr){6-6}\cmidrule(lr){7-7}\cmidrule(lr){8-8}
\multirow{4}{*}{\rt{Single}}
& BOW & 40.9 & 69.7 & 62.3 & 50.9 & 46.0 & 53.5 \\
& BiLSTM & 48.7 & 71.5 & 58.0 & 37.5 & 45.0 & 52.0 \\
& ELMo & 61.0 & 82.1 & 71.9 & 51.3 & 53.1 & 59.1 \\
& BERT & \textbf{62.3} & \textbf{84.2} & \textbf{75.1} & \textbf{53.0} & \textbf{60.2} & \textbf{63.9} \\
\bottomrule
\end{tabular}
\caption{Accuracy of models on the sentiment datasets, where a different classifier is trained for each dataset.}
\label{table:results}
\end{table*}
\subsection{Model performance}
Table \ref{table:results} shows the accuracy of the models on the six tasks. The two methods that use pretrained language models (ELMo and BERT) perform best, with an average difference of 11.8 percentage points between the language-model-based classifiers and the standard models (BOW and BiLSTM). The fraction of sentences that all models misclassify ranges from 6.4\% on SemEval to 16.3\% on SST (see Table \ref{table:errors}), indicating that the datasets differ in difficulty due to domain and annotation characteristics.
Additional experiments on a \textsc{Merged} setup, where the labels from OpeNER and SST are mapped to the three-class setup, and a single model is trained on the concatenation of the training sets from all datasets, indicate that no clear performance gain is achieved. We therefore prefer to avoid the problem of domain differences and keep only the original results.
\section{Challenging Dataset}
\label{challengedataset}
We create a challenging dataset by collecting the subset of test sentences that \emph{all} of the sentiment systems predicted incorrectly (statistics are shown in Table \ref{table:errors}). After removing sentences with incorrect gold labels, there are a total of 836 sentences in the dataset, with a similar number of positive, neutral, and negative labels and fewer strong labels. This is expected, as only two datasets have strong labels.
Furthermore, the main sources of examples are the SST (361), Thelwall (173), and SemEval (151) datasets, while the Täckström dataset (95), MPQA (33), and OpeNER (23) contribute much less. This is a result of both dataset size and difficulty.
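Constructing the challenge set amounts to intersecting the error sets of all models; a minimal sketch (the data layout is illustrative):

```python
def challenge_subset(gold, predictions):
    """Indices of test sentences that every model misclassifies.

    gold: list of gold labels; predictions: dict mapping a model name
    to a list of predicted labels aligned with gold."""
    return [i for i, y in enumerate(gold)
            if all(preds[i] != y for preds in predictions.values())]
```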
\begin{table*}
\centering\small
\begin{tabular}{lrrrrrr|r}
\toprule
& MPQA & OpeNER & SemEval & SST & Täckström & Thelwall & Total\\
\cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5}\cmidrule(lr){6-6}\cmidrule(lr){7-7}\cmidrule(lr){8-8}
$++$ & $-$ & 8 & $-$ & 87 & $-$ & $-$ & 95 \\
$+$ & 16 & 9 & 59 & 49 & 46 & 9 & 188 \\
0 & 1 & $-$ & 45 & 75 & 31 & 48 & 200 \\
$-$ & 16 & 2 & 47 & 51 & 18 & 116 & 250 \\
$--$ & $-$ & 4 & $-$ & 99 & $-$ & $-$ & 103 \\
\cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5}\cmidrule(lr){6-6}\cmidrule(lr){7-7}\cmidrule(lr){8-8}
Total & 33 & 23 & 151 & 361 & 95 & 173 & \textbf{836}\\
\% of original & 14.5 & 6.6 & 6.4 & 16.3 & 12.9 & 13.6 & 11.7\\
avg. length & 25.0 & 13.4 & 19.0 & 19.9 & 23.4 & 17.5 & 19.7\\
\bottomrule
\end{tabular}
\caption{Statistics of dataset, including the number of sentences from each dataset and for each label, the percentage of the original dataset kept in the dataset, and average length (in tokens) of sentences.}
\label{table:errors}
\end{table*}
\begin{table}[b]
\begin{tabular}{ll}
\toprule
Strong Positive & It was \underline{\textbf{spot on}}. \\
Positive & They're \underline{\textbf{on a roll}}. \\
Neutral & It's a bit \underline{\textbf{hit-or-miss}}.\\
Negative & I'm \underline{\textbf{pulling my hair out}}. \\
Strong Negative & Madonna \underline{\textbf{can't act a lick}}.\\
\bottomrule
\end{tabular}
\caption{Examples of idioms.}
\label{setphrase:examples}
\end{table}
\section{Dataset Analysis}
\label{datasetanalysis}
In order to give a clearer view of the data found in the dataset, we annotate these instances using 19 linguistic and paralinguistic labels. While most of these come from previous attempts to qualitatively analyze sentiment classifiers \cite{HuandLiu2004,Das2007,PangLee2008,Socher2013b,Barnes2018a}, others (incorrect label, no sentiment, morphology) emerged during the error annotation process. We further chose to manually annotate for the polarity of the sentence irrespective of the gold label in order to be able to locate possible annotation errors during our analysis. The annotation scheme and (manually constructed) examples of each label are shown in Table \ref{error_labels}. Note that we did not limit the number of labels that the annotator could assign to each sentence and in principle they should assign all suitable labels during annotation.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.7]{dist_matrix_purple4.pdf}
\caption{Distribution of labels across error categories.}
\label{fig:errordist}
\end{figure}
\begin{table}[t]
\centering
\begin{tabular}{lc}
\toprule
label & \# examples \\
\cmidrule(lr){1-1}\cmidrule(lr){2-2}
incorrect label & \textbf{277} \\
no sentiment & \textbf{214} \\
mixed & \textbf{185}\\
non-standard spelling & \textbf{180}\\
desirable element & \textbf{144}\\
idioms & 132 \\
strong & 122\\
negation & 97\\
world knowledge & 81\\
amplifier & 79\\
comparative & 68\\
sarcasm/irony & 58\\
shifter & 50\\
emoji & 46 \\
modality & 38\\
morphology & 31\\
reducer & 13\\
\bottomrule
\end{tabular}
\caption{Number of labels for each category in annotation study. \textbf{Bold} numbers indicate the five most frequent sources of errors. The total number of labels does not sum to the number of sentences in the dataset, as each sentence can have multiple labels.}
\label{errorcategories}
\end{table}
An initial analysis of the errors shown in Table \ref{errorcategories} and Figure \ref{fig:errordist} reveals that, apart from sentences with incorrect gold labels (277), the most common errors come from the no-sentiment (214), mixed (185), non-standard spelling and hashtags (180), desirable element (144), and strong label (122) categories.
The distribution of errors across labels (strong negative: 106, negative: 299, neutral: 303, positive: 296, strong positive: 109) compared to the gold distribution (strong negative: 294, negative: 1742, neutral: 2249, positive: 2402, strong positive: 475) shows that the strong negative is the most difficult and least common class, while positive is the easiest to classify. In the following we briefly discuss the error categories, also showing examples for each.
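Normalizing the annotated label counts above by the gold distribution makes the per-class difficulty explicit (counts are those reported in the text):

```python
# Polarity label counts over the annotated errors and the gold distribution.
error_counts = {"strong negative": 106, "negative": 299, "neutral": 303,
                "positive": 296, "strong positive": 109}
gold_counts = {"strong negative": 294, "negative": 1742, "neutral": 2249,
               "positive": 2402, "strong positive": 475}

# Fraction of each gold class that all models misclassify.
error_rate = {c: error_counts[c] / gold_counts[c] for c in gold_counts}
hardest = max(error_rate, key=error_rate.get)  # strong negative, ~36%
easiest = min(error_rate, key=error_rate.get)  # positive, ~12%
```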
\begin{table*}[t]
\begin{tabular}{ll}
\toprule
positive & ``It was good.'' \\
negative & ``It was bad.'' \\
negation & ``It was not good.'' \\
strong & ``It was incredible.'' \\
amplifier & ``It was really good.'' \\
reducer & ``It was kind of bad.''\\
desirable element & ``It had a pool.'' \\
comparative & ``It was better than the first hotel.'' \\
shifter & ``They denied him the scholarship'' \\
modality & ``I would have loved the room if it had been bigger.'' \\
world knowledge & ``It was 2 minutes from the beach.'' vs. ``It was 2 hours from the beach.'' \\
morphology & ``It was un-fricking-believable.'' \\
non-standard spelling & ``It was awesoooome.'' \\
idioms & ``It's not my cup of tea.'' \\
sarcasm/irony & ``I love it when people yell at me first thing in the morning.'' \\
emoji & ``:)'' \\
no sentiment & ``The president will hold a talk tomorrow.''\\
mixed & ``The plot was nice, but a little slow.''\\
incorrect label & Any clearly incorrect label. \\
\bottomrule
\end{tabular}
\caption{Categories and examples for error annotation guidelines.}
\label{error_labels}
\end{table*}
\paragraph{Mixed Polarity}
The largest set of errors, with 185 sentences labeled, are what we refer to as ``mixed'' polarity sentences. These are sentences where two differing polarities are expressed, either towards two separate entities, or towards the same entity. While the first can be solved by a more fine-grained approach (aspect-level or targeted sentiment), the second is more difficult and is often considered a category of its own \cite{Shamma2009,Saif2013EvaluationDF,Kenyon-Dean2018}.
An analysis of the mixed category errors reveals that while most of the examples are in the ``neutral'' category (45\%), the other 55\% are annotated as having mostly positive or negative sentiment. This is a confusing situation for both annotators and sentiment classifiers, and a direct product of performing sentence-level classification rather than aspect-level. Nearly a third of the errors contain ``but'' clauses, which could be correctly classified by splitting them.
A more problematic situation is found among nearly 20\% of the examples (34), where the annotator found the original label to be completely incorrect.\footnote{We do not include examples where only the strength of the polarity was considered different, \textit{i.\,e.}\xspace, positive vs. strong positive.}
\paragraph{Non-standard spelling}
Most errors in this category (180 total) are labeled either negative (49\%) or positive (29\%), with almost no strong positive or strong negative, which comes mainly from the fact that the noisier datasets do not contain the strong labels.
Around a third of the examples contain hashtags that clearly express the sentiment of the whole sentence, \textit{e.\,g.}\xspace, ``\#imtiredof this SNOW and COLD weather!!!''. This indicates the need to properly deal with hashtags in order to correctly classify sentiment.
\paragraph{Idioms}
Table \ref{setphrase:examples} presents some examples of sentiment-bearing idioms that are taken from the challenge data set.
In this category, errors (132 sentences labeled) are spread relatively uniformly across labels. Learning these correctly from sentence-level annotations is unlikely, especially because they are seldom found repeatedly, even in a training corpus of decent size. Therefore, incorporating idiomatic information from external data sources may be necessary to improve the classification of sentences within this category.
\paragraph{Strong Labels}
This category (122 total) is particularly difficult for sentiment classifiers for several reasons. First, strong negative sentiment is often expressed in an understated or ironic manner. For example, ``Better at putting you to sleep than a sound machine.''
For strong positive examples in the dataset, there is often difficult vocabulary and morphologically creative uses of language, \textit{e.\,g.}\xspace, ``It is a kickass , dense sci-fi action thriller hybrid that delivers and then some.'', while strong negative examples often contain sarcasm or non-standard spelling, \textit{e.\,g.}\xspace, ``All prints of this film should be sent to and buried on Pluto.''.
\paragraph{Negation}
Negation, which accounts for 97 errors, directly affects the classification of polar sentences \cite{Wiegand2010}. Therefore, we analyze 100 correctly and incorrectly classified sentences containing negation to determine the differences between them.
From our analysis, there is no specific negator that is more difficult to resolve regarding its effect on sentiment classification.
We also perform an analysis of negation scope under the assumption that when a negator occurs farther from its negated element, it is more difficult for the sentiment classifier to resolve the negation correctly. Let $d$ be the distance between the negator $n$ and the relevant sentiment element $se$, such that $d = |ind(se) - ind(n)|$, where the function $ind$ returns the index of a token in the sentence. We find that the incorrectly classified examples have an average $d$ of 2.7, while the correctly classified examples have 2.5. This seems to rule out negation scope as the underlying difference.
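The distance measure can be computed directly from token indices (a simplified surface-token version; the example sentence is our own):

```python
def negation_distance(tokens, negator, sentiment_token):
    """d = |ind(se) - ind(n)|: token distance between a negator and the
    sentiment expression it may scope over."""
    return abs(tokens.index(sentiment_token) - tokens.index(negator))

tokens = "the film was not particularly good".split()
d = negation_distance(tokens, "not", "good")  # d = 2
```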
High-level or clausal negation occurs when the negator negates a full clause, rather than an adjective or noun phrase, \textit{e.\,g.}\xspace, ``I don't think it is a particularly interesting film''. In the dataset this phenomenon is found more prevalently in the incorrectly classified examples (8\%) versus the correctly classified examples (3\%), but does not occur often in absolute terms.
The main source of difference regarding correctly classifying examples involving negation seems to be irrelevant negation. Irrelevant negation refers to cases where a sentence contains a negation but where the sentiment-bearing expression is not within the scope of negation. In our data, there is a strong difference in the distribution of irrelevant negation in correctly and incorrectly classified examples (80\% vs. 25\%, respectively), suggesting that sentiment classifiers learn to ignore most occurrences of negation.
\paragraph{World Knowledge}
Examples from the dataset where world knowledge is necessary to correctly classify a sentence (81 sentences) include comparisons with entities commonly associated with positive or negative polarity, \textit{e.\,g.}\xspace, ``Elicits more groans from the audience than Jar Jar Binks, Scrappy Doo and Scooby Dumb, all wrapped up into one.'', analogies, \textit{e.\,g.}\xspace, ``Adam Sandler is to Gary Cooper what a gnat is to a racehorse.'', or rating scales, \textit{e.\,g.}\xspace, ``10/10 overall''.
This category is also highly correlated with sarcasm and irony. In fact, irony is often defined as ``violating expectations'' \cite{Hao2010}, which presupposes that we possess a world knowledge containing expectations of a situation.
\paragraph{Amplified}
Amplifiers occur mainly in negative and strong positive examples, such as ``It's an awfully derivative story.'' Most of the amplified sentences found in the dataset (71/79) contain amplifiers other than ``very'', such as ``super'', ``incredibly'', or ``so''.
\paragraph{Comparative}
Comparative sentiment, with 68 errors, is known to be difficult \cite{HuandLiu2004,Liu2012}, as it is necessary to determine which entity is on which side of the inequality. Sentences like ``Will probably stay in the shadow of its two older, more accessible Qatsi siblings'' are difficult for sentiment classifiers that do not model this phenomenon explicitly.
\paragraph{Sarcasm/Irony}
Sarcasm and irony (58 errors), which are often treated separately from sentiment analysis \cite{Filatova2012,Barbieri2014}, are present mainly in negative and strong negative examples in the dataset. Correctly capturing sarcasm and irony is necessary to classify some negative and strong negative examples, \textit{e.\,g.}\xspace, ``If Melville is creatively a great whale, this film is canned tuna.''
\paragraph{Shifters}
Shifters (50 errors), such as ``abandon'', ``lessen'', or ``reject'' are less common within the dataset, but normally move positive polarity words towards a more negative sentiment. The most common shifter is the word ``miss'', used as in ``We miss the quirky amazement that used to come along for an integral part of the ride.''
\paragraph{Emoji}
While the models handle most occurrences of emojis well, they falter more on the negative examples (46 errors). More than half of the examples in the dataset present positive emoji with a negative gold label, such as ``Pricess Leia is going to be gutted! :-).''
\paragraph{Modality}
None of the state-of-the-art sentiment systems deals explicitly with modality (38 total errors). While in many of the examples modality does not express a different sentiment than the same sentence without modality, in the dataset there are examples that do, \textit{e.\,g.}\xspace, ``Still, I thought it could have been more.''
\paragraph{Morphology}
While not the most prominent label (31 errors), the examples in the dataset that contain morphological features affecting sentiment are typically strong positive or strong negative. These most often involve creative uses of English morphology, \textit{e.\,g.}\xspace, ``It was fan-freakin-tastic!'' or ``It's hyper-cliched''.
\paragraph{Reducers}
Reducers (13 errors), such as ``kind of'', ``less'', or ``all that'' cooccur with both positive and negative polar words within the dataset, and tend to lead to positive or neutral sentiment, \textit{e.\,g.}\xspace, ``It was a lot less hassle.''
\section{Case Study: Training with phrase-level annotations}
\label{casestudy}
As a case study for the usage of the challenge dataset presented here, we evaluate a model that has access to more compositional information. Besides sentence-level annotations, the SST dataset also contains annotations for each phrase in a constituency tree, which provides considerably more training data: 155,019 annotated phrases vs. 8,544 annotated sentences. It has been claimed that this data allows models to learn more compositionality \cite{Socher2013b}. Therefore, we fine-tune the best performing model (BERT) on this data and test on our dataset. The BERT model trained on phrases achieves 55.1 accuracy on the SST dataset, versus 53.0 for the model trained only on sentence-level annotations.
\begin{table}
\centering
\newcommand{\pointout}[1]{\textbf{#1}}
\begin{tabular}{llll}
\toprule
label & Sent. & Phrases & Rel. Imp.\\
\cmidrule(lr){1-1}\cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}
overall & 23.0 & 31.1 & 10.5\%\\
\cmidrule(lr){1-1}\cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}
positive & 19.0 & \pointout{26.9} & 9.8\% \\
negative & 23.1 & \pointout{35.0} & 15.5\% \\
mixed & 21.2 & \pointout{26.5} & 6.7\% \\
no-sentiment & 37.6 & \pointout{42.6} & 8.1\% \\
non-strd spelling & 40.3 & \pointout{43.5} & 3.8\% \\
desirable & 25.7 & \pointout{28.7} & 4.0\% \\
idioms & 13.7 & \pointout{23.1} & 11.0\% \\
strong & 15.5 & \pointout{23.7} & 9.7\% \\
negation & 23.9 & \pointout{38.6} & \posbox{19.3\%} \\
world know. & 14.9 & \pointout{21.6} & \posbox{19.6\%} \\
amplified & 13.9& \pointout{31.9} & \posbox{20.9\%} \\
comparative & 11.7 & \pointout{13.3} & 1.8\% \\
irony & \pointout{20.8} & 18.8 & \negbox{-2.5\%} \\
shifters & \pointout{33.3} & 24.4 & \negbox{-11.8\%}\\
emoji & 33.3 & \pointout{50.0} & \posbox{25.0\%} \\
modality & 20.0 & \pointout{22.9} & 3.6\%\\
morphology & \pointout{18.5} & \pointout{18.5} & \negbox{0\%}\\
reduced & 7.7 & \pointout{23.1} & \posbox{16.7\%} \\
\bottomrule
\end{tabular}
\caption{Per category accuracy and relative improvement (last column) of BERT model trained on SST sentences (8,544) and SST phrases (155,019).}
\label{table:casestudy}
\end{table}
Table \ref{table:casestudy} shows that the model trained on SST phrases performs much better overall on the challenge dataset than the model trained on SST sentences\footnote{It is important to realize that the SST-sentence model has 0 accuracy on the subset of the challenge dataset taken from SST, but not on the sentences taken from the other datasets.}. Using the error annotations in the challenge dataset, we find that results improve greatly on sentences labeled with negation, world knowledge, amplified, emoji, and reduced, while the phrase-level model performs worse on irony and shifters, and equally on morphology. This analysis indicates that phrase-level annotations primarily help with learning compositional sentiment (negation, amplified, reduced), while other phenomena, such as irony or morphology, do not improve. This confirms that training on phrase-level annotations improves a sentiment model's ability to classify compositional sentiment, while also demonstrating the usefulness of our dataset for introspection.
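Note that the Rel.~Imp. column in Table \ref{table:casestudy} is consistent (up to rounding of the underlying accuracies in a few rows) with relative error reduction rather than an absolute accuracy difference:

```python
def relative_error_reduction(acc_before, acc_after):
    """Reduction of the error rate, relative to the original error rate,
    when accuracy (in percent) moves from acc_before to acc_after."""
    err_before = 100.0 - acc_before
    err_after = 100.0 - acc_after
    return 100.0 * (err_before - err_after) / err_before

overall = relative_error_reduction(23.0, 31.1)   # ~10.5 (overall row)
negation = relative_error_reduction(23.9, 38.6)  # ~19.3 (negation row)
```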
\section{Conclusion and Future Work}
\label{conclusion}
In this paper, we tested three state-of-the-art sentiment classifiers and a baseline bag-of-words classifier on six English sentence-level sentiment datasets. We gathered the sentences that all methods misclassified in order to create a challenge dataset. Additionally, we performed a fine-grained annotation of error types in order to provide insight into the kinds of problems sentiment classifiers have. We will release both the code and the annotated data with the hope that future research will utilize this resource
to probe sentiment classifiers for qualitative differences, rather than rely only on quantitative scores, which often obscure the plentiful challenges that still exist.
Many of the phenomena found in the dataset, \textit{e.\,g.}\xspace, negation or modality, have been discussed in depth in \cite{Liu2012}. However, the dataset that resulted from this work demonstrates that modern neural methods still fail on many examples of these phenomena. Additionally, our dataset enables a quick analysis of qualitative differences between models, probing their performance with respect to the linguistic and paralinguistic categories of errors.
Moreover, many of the findings from this paper are likely to vary to some degree for other languages, due to typological differences as well as differences in available training data. The annotation method proposed in this paper, however, should enable the creation of similar analyses and datasets in other languages.
We expect that this approach to creating a dataset is also easily transferable to other tasks which are affected by linguistic or paralinguistic phenomena, such as hate speech detection or sarcasm detection. It is helpful to have prior knowledge of the phenomena that could affect the task, but a careful error analysis can also lead to insights which can be translated into annotation labels.
Regarding ways of moving forward, there are already many sources of data for the linguistic phenomena we have analyzed in this work, ranging from datasets annotated for negation \cite{Morante2012,Liu:Fan:Web:18}, irony \cite{van-hee-etal-2018-semeval}, emoji \cite{barbieri-etal-2018-semeval}, as well as datasets for idioms \cite{muzny-zettlemoyer-2013-automatic} and their relationship with sentiment \cite{jochim-etal-2018-slide}. We believe that discovering ways to explicitly incorporate this available information into state-of-the-art sentiment models may provide a way to improve current approaches. Multi-task learning \cite{Caruana93multitasklearning} and transfer learning \cite{Peters2018,Devlin2018,Howard2018} have shown promise in this respect, but have not been exploited for improving sentiment classification with regards to these specific phenomena.
\section*{Acknowledgements}
This work has been carried out as part of the SANT project, funded by the Research Council of Norway (grant number 270908).
\section{Discussion \& Conclusion}
\label{sec:conclusion}
In this paper, we compose several techniques for robust and efficient planning together in a framework designed for fast multi-robot{} planning in environments with uncertain moving obstacles, such as humans{}.
Each robot{} generates real-time motion plans while maintaining safety with respect to external disturbances and modeled dynamics via the FaSTrack framework.
To maintain safety with respect to humans{}, robots{} sense humans{}' states and form probabilistic, adaptive predictions over their future trajectories. For efficiency, we model these humans{}' motions as independent, and to maintain robustness, we adapt prediction model confidence online.
Finally, to remain safe with respect to other robots{}, we introduce multi-robot{} cooperation through STP, which relieves the computational complexity of planning in the joint state space of all robots{} by instead allowing robots{} to plan sequentially according to a fixed priority ordering.
We demonstrate our framework in hardware with two quadcopters navigating around two humans{}. We also present a larger simulation of five quadcopters and ten humans{}.
To further demonstrate our framework's robustness, we are interested in exploring (a) non-grid based methods of planning and prediction, (b) the incorporation of sensor uncertainty, (c) optimization for timing and communication delays, and (d) recursive feasibility in planning. We are also interested in testing more sophisticated predictive models for humans{}, and other low-level motion planners.
\section{Implementation and Experimental Results}
\label{sec:demo}
We demonstrate \framework{}'s feasibility in hardware with two robot{}s and two human{}s, and its scalability in simulation with five robot{}s and ten human{}s.
\subsection{Hardware Demonstration}
We implemented the \framework{} framework in C++ and Python, using Robot Operating System (ROS) \cite{quigley2009ros}. All computations for our hardware demonstration were done on a laptop computer (specs: 31.3 GB of memory, 216.4 GB disk, Intel Core i7 @ 2.70GHz x 8). As shown in Fig.~\ref{fig:front_fig}, we used Crazyflie 2.0 quadcopters as our robots, and two human{} volunteers. The position and orientation of robots{} and humans{} were measured at roughly 235 Hz by an OptiTrack infrared motion capture system.
The human{}s were instructed to move towards different places in the lab, while the quadcopters planned collision-free trajectories in three dimensions $(x,y,z)$ using a time-varying implementation of A$^*$.
The quadcopters tracked these trajectories using the precomputed FaSTrack controller designed for a 6D near-hover quadcopter model tracking a 3D point \cite{fridovich2018planning}. Human motion was predicted $2$ s into the future. Fig.~\ref{fig:robot_planning_algorithm} shows several snapshots of this scene over time. Note that the humans must move around each other to reach their goals---this is an unmodeled interaction effect. The predictions become less certain during this interaction, and the quadcopters plan more conservatively, giving the humans a wider berth. The full video of the hardware demonstration can be viewed in our video submission.
\subsection{\framework{} Framework Simulation Analysis}
Due to the relatively small size of our motion capture arena, we demonstrate scalability of the \framework{} framework through a large-scale simulation.
In this simulation, pedestrians are crossing through a $25 \times 20 \textrm{m}^2$ region of the UC Berkeley campus.
We simulate the pedestrians' motion using potential fields \cite{goodrich2002potential}, which ``pull'' each pedestrian toward his or her own goal and ``push'' them away from other pedestrians and robots{}.
These interaction effects between humans{} and robots{} are not incorporated into the state-action functions $\{Q_i\}$, and lead to increased model uncertainty (i.e., higher estimates of $\beta_i$) during such interactions.
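A goal-attractive, agent-repulsive update of this kind can be sketched as follows (gains, influence radius, and maximum speed here are illustrative, not the values used in our simulator):

```python
import math

def potential_field_step(pos, goal, others, dt=0.1, k_att=1.0, k_rep=0.5,
                         influence=2.0, max_speed=1.5):
    """One Euler step for a pedestrian 'pulled' toward its goal and
    'pushed' away from nearby agents (robots or other pedestrians)."""
    # Attractive term: proportional to the vector toward the goal.
    vx = k_att * (goal[0] - pos[0])
    vy = k_att * (goal[1] - pos[1])
    # Repulsive term: inverse-distance push from agents within `influence` m.
    for ox, oy in others:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 1e-6 < d < influence:
            scale = k_rep * (1.0 / d - 1.0 / influence) / d**2
            vx += scale * dx
            vy += scale * dy
    # Clip to a plausible walking speed.
    speed = math.hypot(vx, vy)
    if speed > max_speed:
        vx, vy = vx * max_speed / speed, vy * max_speed / speed
    return (pos[0] + dt * vx, pos[1] + dt * vy)
```

Because the resulting avoidance maneuvers are not encoded in the robots' prediction models, they surface as exactly the kind of unmodeled interaction the confidence adaptation must absorb.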
The fleet of robots must reach their respective goals while maintaining safety with respect to their internal dynamics, humans, and each other. We ran our simulation on a desktop workstation with an Intel i7 Processor and 12 CPUs operating at 3.3 GHz.%
\footnote{The computation appears to be dominated by the prediction step, which we have not yet invested effort in optimizing.}
Our simulation took $98$ seconds for all robots to reach their respective goals. Predictions over human motion took $0.15 \pm 0.06$ seconds to compute for each human. This computation can be done in parallel. Each robot took $0.23 \pm 0.16$ seconds to determine a plan. There was no significant difference in planning time between robots of varying priority.
Robots used time-varying A$^*$ on a $2$-dimensional grid with $1.5$ m resolution, and collision checks were performed at $0.1$ m along each trajectory segment. The resolution for human predictions was $0.25$ m and human motion was predicted $2$ s into the future.
\begin{figure}
\centering
\includegraphics[width=.8\columnwidth]{figs/large_scale_sim_screenshot.pdf}
\caption{Simulation of 5 dynamic robots navigating in a scene with 10 humans. The simulated humans move according to a potential field, which results in unmodeled interaction effects. However, \framework{} enables each robot to still reach its goal safely.}
\label{fig:simulation}
\vspace{-.7cm}
\end{figure}
\section{Robot Planning and Control}
\label{sec:fastrack}
In this section we begin with the canonical problem of planning through a static environment.
Efficient algorithms such as A$^*$ or rapidly-exploring random trees (RRT) \cite{hart1968astar, karaman2011RRTPRM} excel at this task.
These algorithms readily extend to environments with deterministically-moving obstacles by collision-checking in both time and space.
We now introduce robot{} dynamics and allow the environment to have external disturbances such as wind.
Kinematic planners such as A$^*$ and RRT do not consider these factors when creating plans.
In practice, however, these planners are often used to generate an initial trajectory, which may then be smoothed and tracked using a feedback controller such as a linear quadratic regulator (LQR).
During execution, the mismatch between the planning model and the physical system can result in tracking error, which may cause the robot{} to deviate far enough from its plan to collide with an obstacle.
To reduce the chance of collision, one can augment the robot{} by a heuristic error buffer; this induces a ``safety bubble'' around the robot{} used when collision checking. However, heuristically generating this buffer will not guarantee safety.
Several recent approaches address efficient planning while considering model dynamics and maintaining robustness with respect to external disturbances. Majumdar and Tedrake~\cite{majumdar2017funnel} use motion primitives with safety funnels, while Rakovi{\'c}~\cite{rakovic2009set} utilizes robust model-predictive control, and Singh et al.~\cite{Singh2017} leverage contraction theory.
In this paper, we use FaSTrack \cite{herbert2017fastrack, fridovich2018planning}, a modular framework that computes a tracking error bound (TEB) via \emph{offline} reachability analysis.
This TEB can be thought of as a rigorous counterpart
of the error-buffer concept introduced above.
More concretely, the TEB is the set of states capturing the maximum relative distance (i.e. maximum tracking error) that may occur between the physical robot{} and the current state of the planned trajectory.
We compute the TEB by formulating the tracking task as a pursuit-evasion game between the planning algorithm and the physical robot. We then solve this differential game using Hamilton-Jacobi reachability analysis. To ensure robustness, we assume (a) worst-case behavior of the planning algorithm (i.e. being as difficult as possible to track), and (b) that the robot{} is experiencing worst-case, bounded external disturbances.
The computation of the TEB also provides a corresponding error-feedback controller for the robot to always remain inside the TEB.
Thus, FaSTrack wraps efficient motion planners, and adds robustness to modeled system dynamics and external disturbances through the precomputed TEB and error-feedback controller.
Fig. \ref{fig:fastrack_birds_eye} shows a top-down view of a quadcopter using a kinematic planner (A$^*$) to navigate around static obstacles. By employing the error-feedback controller, the quadcopter is guaranteed to remain within the TEB (shown in blue) as it traverses the A$^*$ path.
\subsection{FaSTrack Block}
\textbf{\textit{Requirements:}} To use FaSTrack, one needs a high-fidelity dynamical model of the system used for reference tracking, and a (potentially simpler) dynamic or kinematic model used by the planning algorithm.
Using the relative dynamics between the tracking model and the planning model, the TEB and safety controller may be computed using Hamilton-Jacobi reachability analysis \cite{herbert2017fastrack}, sum-of-squares optimization \cite{singh2018robust}, or approximate dynamic programming \cite{royo2018classification}.
\begin{figure
\centering
\includegraphics[width=.9\columnwidth]{figs/fastrack.pdf}
\caption{FaSTrack Block}
\label{fig:fastrack}
\vspace{-.3cm}
\end{figure}
\begin{figure
\centering
\includegraphics[width=.7\columnwidth]{figs/fastrack_birds_eye.pdf}
\caption{Top-down view of FaSTrack applied to a 6D quadcopter navigating a static environment. Note the simple planned trajectory (changing color over time) and the tracking error bound (TEB) around the quadcopter. This TEB is a 6D set that has been projected down to the position dimensions. Because we assume the quadcopter moves independently in $(x,y,z)$, this projection looks like a box, making collision-checking very straightforward.}
\label{fig:fastrack_birds_eye}
\vspace{-.5cm}
\end{figure}
\textbf{\textit{Implementation:}}
Fig.~\ref{fig:fastrack} describes the online algorithm for FaSTrack after the offline precomputation of the TEB and safety controller. We initialize the
\textbf{planning block}
to start within the TEB centered on the robot{}'s current state. The planner then uses any desired planning algorithm (e.g. A$^*$, or model predictive control) to find a trajectory from this initial state to a desired goal state.
When collision-checking, the planning algorithm must ensure that the tube defined by the Minkowski sum of the TEB and the planned trajectory does not overlap any obstacles in the \textbf{obstacle map}.
The planning block provides the current planned reference state to the \textbf{FaSTrack controller}, which determines the relative state between the tracking model (robot{}) and planned reference (motion plan). The controller then applies the corresponding optimal, safe tracking control via an efficient look-up table.
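Because the projected TEB is a box in this setting (Fig.~\ref{fig:fastrack_birds_eye}), the Minkowski-sum collision check reduces to inflating each obstacle by the TEB half-extents. A minimal 2D sketch, assuming axis-aligned box obstacles (function and variable names are illustrative):

```python
def teb_collision_free(traj, obstacles, teb_half_extents):
    """Check a planned trajectory against axis-aligned box obstacles,
    inflating each obstacle by the (projected, box-shaped) TEB --
    equivalent to checking the Minkowski sum of the TEB and the plan."""
    ex, ey = teb_half_extents  # half-widths of the TEB in x and y
    for (px, py) in traj:
        for (xmin, ymin, xmax, ymax) in obstacles:
            # A planned state is unsafe if it lies inside the inflated box.
            if (xmin - ex <= px <= xmax + ex and
                    ymin - ey <= py <= ymax + ey):
                return False
    return True
```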
\subsection{FaSTrack in the \framework{} Framework}
In the \textbf{robot planning and control} section of Fig.~\ref{fig:framework}, each robot{} uses FaSTrack for robust planning and control.
FaSTrack guarantees that each robot{} remains within its TEB-augmented trajectory.
\section{The \framework{} Framework}
\label{sec:framework}
Fig. \ref{fig:framework} illustrates our overall planning framework, called \framework{}.
We introduce the components of the framework by incrementally addressing the three main challenges identified above.
We first present the \textbf{robot planning and control} block (Section \ref{sec:fastrack}), which is instantiated for each robot{}.
Each robot{} uses a robust controller (e.g. the reachability-based controller of \cite{herbert2017fastrack}) to track motion plans within a precomputed error margin that accounts for modeled dynamics and external disturbances.
In order to generate safe motion plans, each robot{} will ensure that output trajectories are collision-checked with a set of obstacle maps, using the tracking error margin.
These obstacle maps include an \emph{a priori} known set of static obstacles, as well as predictions of the future motion of any humans{}, which are generated by the \textbf{human predictions} block (Section \ref{sec:predictions}).
By generating these predictions, each robot{} is able to remain probabilistically safe with respect to the humans{}.
To ensure tractability for multiple humans, we generate predictions using simplified interaction models, and subsequently adapt them
following a real-time Bayesian approach such as~\cite{fisac2018probabilistically}.
We leverage the property that individual predictions automatically become more uncertain whenever their accuracy degrades, and use this to enable our tractable predictions to be robust to unmodeled interaction effects.
Finally, to guarantee safety with respect to other robots, we carry out \textbf{sequential trajectory planning} (Section \ref{sec:STP}) by adapting the cooperative multi-agent planning scheme~\cite{bansal2017safe}
to function in real time with the robust trajectories from the planning and control block.
The robots{} generate plans according to a pre-specified priority ordering. Each robot{} plans to avoid the most recently generated trajectories from robots{} of higher priority, i.e. robot{} $i$ must generate a plan that is safe with respect to the planned trajectories from robots{} $j, j < i$.
This removes the computational complexity of planning in the joint state space of all robots{} at once.
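The sequential planning loop can be sketched as follows, where `plan_fn` stands in for each robot's FaSTrack-based planner and must return a trajectory that avoids the committed, TEB-augmented trajectories of all higher-priority robots (the toy planner below is purely illustrative):

```python
def sequential_trajectory_planning(robots, plan_fn):
    """Plan for robots in a fixed priority order (robot 0 = highest).
    Each robot plans against the already-committed trajectories of all
    higher-priority robots, avoiding joint-state-space planning."""
    committed = []  # trajectories of higher-priority robots
    for robot in robots:
        committed.append(plan_fn(robot, committed))
    return committed

# Toy illustration: each "trajectory" records which higher-priority
# plans the robot saw while planning.
toy_plan = lambda robot, committed: [robot] + [traj[-1] for traj in committed]
plans = sequential_trajectory_planning(["r0", "r1", "r2"], toy_plan)
```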
\section{Human Predictions}
\label{sec:predictions}
Section~\ref{sec:fastrack} introduced methods for the fast and safe navigation of a single robot{} in an environment with deterministic, moving obstacles.
However, moving obstacles---especially human beings---are not always best modeled as deterministic.
For such ``obstacles,'' robots{} can employ probabilistic predictive models to produce a distribution of states the human{} may occupy in the future. The quality of these predictions and the methods used to plan around them determine the overall safety of the system.
Generating accurate real-time predictions for multiple humans (and, more generally, uncertain agents) is an open problem.
Part of the challenge arises from the combinatorial explosion of interaction effects as the number of agents increases.
Any simplifying assumptions, such as neglecting interaction effects,
will inevitably cause predictions to become inaccurate.
Such inaccuracies could threaten the safety
of plans that rely on these predictions.
Our goal is to compute real-time motion plans that are based on up-to-date predictions of all humans{} in the environment, and at the same time maintain safety when these predictions become inaccurate.
The confidence-aware prediction approach of \cite{fisac2018probabilistically} provides a convenient mechanism for adapting prediction uncertainty online to reflect the degree to which humans{}' actions match an internal model.
This automatic uncertainty adjustment allows us to simplify or even neglect interaction effects between humans{}, because uncertain predictions will automatically result in more conservative plans when the observed behavior departs from internal modeling assumptions.
\subsection{Human Prediction Block}
\textbf{\textit{Requirements:}}
In order to make any sort of collision-avoidance guarantees, we require a prediction algorithm that produces distributions over future states, and rapidly adjusts those predictions such that the actual trajectories followed by humans{} lie within the prediction envelope. There are many approaches to probabilistic trajectory prediction in the literature, e.g. \cite{ding2011human,hawkins2013probabilistic, ziebart2009planning, lasota2015analyzing}.
These methods could be used to produce a probabilistic prediction of the $i$-th human's state $x_i\in\R^{n_i}$ at future times $\tau$, conditioned on observations%
\footnote{For simplicity, we will later assume complete state observability: ${z^t=x^t}$.
}
$z$: $P(x_i^\tau | z^{0:t})$. These observations are random variables and depend upon the full state of all robots{} and humans{} $x$ up to the current time $t$. However, by default these distributions will not necessarily capture the true trajectories of each human{}, especially when the models do not explicitly account for interaction effects. Fisac et al.~\cite{fisac2018probabilistically} provide an efficient mechanism for updating the uncertainty (e.g., the variance) of the distributions $P(x_i^\tau | z^{0:t})$ to satisfy this safety requirement.
\textbf{\textit{Implementation:}}
Fig. \ref{fig:prediction} illustrates the \textbf{human prediction block}. We use a maximum-entropy model
as in \cite{fisac2018probabilistically, finn2016guided, ziebart2008maximum}, in which the dynamics of the $i$-th human{} are affected by actions $u_i^t$ drawn from a Boltzmann probability distribution. This time-dependent distribution over actions implies a distribution over future states.
Given a sensed state $x_i^t$ of human{} $i$ at time $t$, we invert the dynamics model to infer the human{}'s action, $u_i^t$.
Given this action, we perform a Bayesian update on the distribution of two parameters: $\theta_i$, which describes the objective of the human{} (e.g. the set of candidate goal locations), and $\beta_i$, which
governs the variance of the predicted action distributions.
$\beta_i$ can be interpreted as a natural
indicator of \textit{model confidence}, quantifying the model's ability to capture humans{}' current behavior~\cite{fisac2018probabilistically}.
Were we to model actions with a different distribution, e.g. a Gaussian process, the corresponding parameters could be learned from prior data \cite{ziebart2008maximum,ziebart2009planning,finn2016guided}, or inferred online \cite{Sadigh2016information,fisac2018probabilistically} using standard inverse optimal control (inverse reinforcement learning) techniques.
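A minimal sketch of this Bayesian update over a discrete set of candidate $\beta_i$ values, using the Boltzmann action model of \eqref{eq:Boltzmann} (the grids of $\beta$ values and Q-values below are illustrative, not those used in our experiments):

```python
import math

def update_confidence_belief(belief, betas, q_values, action):
    """Bayesian update of a discrete belief over the model-confidence
    parameter beta, given one observed human action.
    `q_values[a]` is Q(x, a); under the Boltzmann model, actions with
    higher Q are more likely when beta is high."""
    posterior = {}
    for b, prior in zip(betas, belief):
        # Likelihood of the observed action under this candidate beta.
        z = sum(math.exp(b * q) for q in q_values.values())
        likelihood = math.exp(b * q_values[action]) / z
        posterior[b] = prior * likelihood
    total = sum(posterior.values())
    return [posterior[b] / total for b in betas]
```

Observing a model-consistent (high-Q) action shifts mass toward high $\beta$ (high confidence), while a model-inconsistent action shifts mass toward low $\beta$, widening subsequent predictions.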
Once distributional parameters are updated, we produce a prediction over the future actions of human{} $i$ through the following Boltzmann distribution:
\begin{equation}\label{eq:Boltzmann}
P(u^t_i \mid x^t; \beta_i, \theta_i) \propto e^{\beta_i Q_i(x^t,u^t_i; \theta_i)}\enspace.
\end{equation}
This model treats each human{} as more likely to choose actions with high expected utility as measured by the (state-action) Q-value
associated to a certain reward function, $r_i(x,u_i; \theta_i)$. In general, this value function may depend upon the joint state $x$ and the human{}'s own action $u_i$, as well as the parameters $\theta_i, \beta_i$.
Finally, combining~\eqref{eq:Boltzmann} with a dynamics model, these predicted actions may be used to generate a distribution over future states. In practice, we represent this distribution as a discrete occupancy grid. One such grid is visualized in Fig.~\ref{fig:humans_birds_eye}.
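The following sketch (grid size, Q-function, horizon, and $\beta$ are all illustrative) propagates the Boltzmann action distribution forward under simple 4-connected grid dynamics to obtain time-indexed occupancy grids:

```python
import math

def predict_occupancy(start, q_fn, beta, steps, grid_size):
    """Propagate a Boltzmann action distribution on a 2D grid to obtain
    occupancy distributions P(x^tau) for tau = 1..steps.
    q_fn(cell, action) gives an (illustrative) Q-value for moving in
    `action` from `cell`; actions are 4-connected grid moves."""
    actions = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
    dist = {start: 1.0}
    grids = []
    for _ in range(steps):
        nxt = {}
        for cell, p in dist.items():
            # Boltzmann distribution over actions at this cell.
            weights = {a: math.exp(beta * q_fn(cell, a)) for a in actions}
            z = sum(weights.values())
            for a, (dx, dy) in actions.items():
                nc = (min(max(cell[0] + dx, 0), grid_size - 1),
                      min(max(cell[1] + dy, 0), grid_size - 1))
                nxt[nc] = nxt.get(nc, 0.0) + p * weights[a] / z
        dist = nxt
        grids.append(dist)
    return grids

# Illustrative Q-function: the human is believed to be heading north.
q_north = lambda cell, action: 1.0 if action == "N" else 0.0
grids = predict_occupancy(start=(2, 0), q_fn=q_north, beta=5.0,
                          steps=2, grid_size=5)
```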
By reasoning about each human{}'s model confidence as a hidden state \cite{fisac2018probabilistically}, our framework dynamically adapts predictions
to the evolving accuracy of the models encoded in the set of state-action functions, $\{Q_i\}$.
Uncertain predictions will force the planner to be more cautious whenever the actions of the humans{} occur with low probability as measured by \eqref{eq:Boltzmann}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figs/prediction.pdf}
\caption{Human Prediction Block}
\label{fig:prediction}
\vspace{-.4cm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.6\columnwidth]{figs/humans_birds_eye.pdf}
\caption{Our environment now has a human{} (red square). The robot models the human{} as likely to move north. Visualized on top of the human{} is the distribution of future states (pink is high, blue is low probability). Since the human{} is walking north and matching the model, the robot{}'s predictions are confident that the human{} will continue northward and remain collision-free.
}
\label{fig:humans_birds_eye}
\vspace{-.6cm}
\end{figure}
\subsection{Human Prediction in the \framework{} Framework}
The predicted future motion of the humans{} is generated as a probability mass function,
represented as a time-indexed set of occupancy grids.
These distributions are interpreted as an obstacle map by the FaSTrack block.
During planning, a state is considered to be unsafe if the total probability mass contained within the TEB centered at that state exceeds a preset threshold, $P_{\mathrm{th}}$. As in \cite{fisac2018probabilistically}, we consider a trajectory to be unsafe if the maximum \emph{marginal} collision probability at any individual state along it exceeds $P_{\mathrm{th}}$.
When there are multiple humans{}, their state at any future time $\tau$ will generally be characterized by a joint probability distribution $P(x_{1}^\tau,...,x_{N}^\tau)$.
Let $\pstate^\tau$ be the planned state of a robot{} at time $\tau$. We write $\mathrm{coll}(\pstate^\tau, x_i^\tau)$ to denote the overlap of the TEB centered at $\pstate^\tau$ with the $i$th human{} at state $x_i^\tau$. Thus, we may formalize the probability of collision with \emph{at least one} human{} as:
\begin{align}\label{eq:joint_coll_probability}
P\big(\text{coll}&(\pstate^\tau,\{x_i^\tau\}_{i=1}^N)\big) = \\
&1 - \prod_{i=1}^N P\big(\neg\text{coll}(\pstate^\tau,x_i^\tau)\mid
\neg\text{coll}(\pstate^\tau,\{x_j^\tau\}_{j=1}^{i-1})\big)
\enspace.\notag
\end{align}
Intuitively, \eqref{eq:joint_coll_probability} states that the probability that the robot is in collision at $s^\tau$ is one minus the probability that the robot is not in collision. We compute the second term by taking the product over the probability that the robot is not in collision with each human, given that the robot is not in collision with all previously accounted for humans. Unfortunately, it is generally intractable to compute the terms in the product in \eqref{eq:joint_coll_probability}.
Fortunately, tractable approximations can be computed by storing only the marginal predicted distribution of each human at every future time step $\tau$, and assuming independence between humans.
This way, each robot{} need only operate with $N$ occupancy grids. The resulting computation is: \vspace{-.2cm}
\begin{equation}\label{eq:marginal_coll_probability}
P\big(\text{coll}(\pstate^\tau,\{x_i^\tau\}_{i=1}^N)\big) \approx
1 - \prod_{i=1}^N \Big(1-P\big(\text{coll}(\pstate^\tau,x_i^\tau)\big)\Big)
\enspace.
\end{equation}
Here we take the product over the probability that the robot is not in collision with each human (one minus probability of collision), and then again take the complement to compute the probability of collision with any human. Note that when the predictive model neglects future interactions between multiple humans{}, \eqref{eq:joint_coll_probability} reduces to \eqref{eq:marginal_coll_probability}.
If model confidence analysis \cite{fisac2018probabilistically} is used in conjunction with such models, we hypothesize that each marginal distribution will naturally become more uncertain when interaction effects are significant.
Once a collision probability is exactly or approximately computed, the planner can reject plans for which, at any time $\tau>t$,
the probability of collision from \eqref{eq:marginal_coll_probability} exceeds $P_{\mathrm{th}}$. Thus, we are able to generate computationally tractable predictions that result in \mbox{$P_{\mathrm{th}}$-safe} planned trajectories for the physical robot.
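A sketch of this computation, with each human's marginal occupancy grid stored as a map from grid cell to probability (the cell discretization and names are illustrative):

```python
def collision_probability(teb_cells, occupancy_grids):
    """Approximate probability of collision with at least one human,
    following the marginal approximation above: sum each human's
    marginal occupancy over the cells covered by the TEB, then treat
    humans as independent."""
    p_no_collision = 1.0
    for grid in occupancy_grids:  # one marginal grid per human, per time
        p_i = min(sum(grid.get(cell, 0.0) for cell in teb_cells), 1.0)
        p_no_collision *= (1.0 - p_i)
    return 1.0 - p_no_collision

def state_is_safe(teb_cells, occupancy_grids, p_th):
    """Planner-side rejection test against the threshold P_th."""
    return collision_probability(teb_cells, occupancy_grids) <= p_th
```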
\section{Introduction}
\label{sec:intro}
As robotic systems are increasingly used for applications such as drone delivery services, semi-automated warehouses, and autonomous cars, safe and efficient robotic navigation around humans is crucial.
Consider the example in Fig.~\ref{fig:front_fig}, inspired by a drone delivery scenario, where two quadcopters must plan a safe trajectory around two humans who are walking through the environment. We would like to guarantee that the robots will reach their goals without ever colliding with each other, any of the humans, or the static surroundings.\footnote{Note that our laboratory setting uses a motion capture system for sensing and state estimation---robustness with respect to sensor uncertainty is an important component that is beyond the scope of this paper.}
This safe motion planning problem faces three main challenges: (1) controlling the nonlinear robot dynamics subject to external disturbances (e.g. wind), (2) planning around multiple humans{} in real time, and (3) avoiding conflicts with other robots' plans. Extensive prior work from control theory, motion planning, and cognitive science has enabled computational tools
for
rigorous safety analysis, faster motion planners for nonlinear systems, and predictive models of human agents.
Individually, these problems are difficult---computing robust control policies, coupled robot plans, and joint predictions of multiple human agents are all computationally demanding at best
and intractable at worst~\cite{Mitchell2005, chen2016general}.
Recent work, however, has made progress in provably-safe real-time motion planning~\cite{herbert2017fastrack, majumdar2017funnel, singh2018robust}, real-time probabilistic prediction of a human agent's motion~\cite{fisac2018probabilistically, ziebart2009planning}, and robust sequential trajectory planning for multi-robot systems~\cite{bansal2017safe, chen2016multi}.
\begin{figure}[t!]
\centering
\includegraphics[width=\columnwidth]{figs/front_fig_small.pdf}
\caption{Hardware demonstration of real-time multi-agent planning while maintaining safety with respect to internal dynamics, external disturbances, and intentional humans{}. The planned trajectories from the quadcopters are visualized, and the tracking error bound is shown as a box around each quadcopter. The probability distribution over each human{}'s future motion is shown in pink in front of that human{}.}
\label{fig:front_fig}
\vspace{-.6cm}
\end{figure}
It remains a challenge to synthesize these into a real-time planning system, primarily due to the difficulty of joint planning and prediction for multiple robots and humans. There has been some work combining subsets of this problem \cite{knepper2012pedestrian, trautman2010unfreezing, kruse2013human}, but the full setup of real-time and robust multi-robot navigation around multiple humans remains underexplored.
Our main contributions in this paper are tractable approaches to joint planning and prediction, while still ensuring efficient, probabilistically-safe motion planning.
\begin{figure*}[!ht]
\centering
\includegraphics[width=\textwidth]{figs/framework_flat.pdf}
\caption{The \framework{} Framework}
\label{fig:framework}
\vspace{-.7cm}
\end{figure*}
We use the reachability-based FaSTrack framework \cite{herbert2017fastrack} for real-time robust motion planning.
To ensure real-time feasibility, robots{} predict human{} motion using a simple model neglecting future interaction effects.
Because this model will be a simplification of true human{} motion, we use confidence-aware predictions \cite{fisac2018probabilistically} that become more conservative whenever humans{} deviate from the assumed model.
Finally, groups of robots{} plan sequentially according to a pre-specified priority ordering \cite{chen2015safe}, which serves to reduce the complexity of the joint planning problem while maintaining safety with respect to each other.
We demonstrate our framework in hardware, and provide a large-scale simulation to showcase scalability.
\section{Problem Statement}
\label{sec:problem}
We consider multiple communicative robots moving to individual goal locations in a shared space with multiple humans. We assume that the robots can share future trajectories with each other, but that human motion must be anticipated. The robots should maintain a safe distance from each other and the humans at all times while working toward their respective goals. This theory is presented in a general form, but for illustration purposes we use a running example of three quadcopters navigating around two humans.
\subsection{Robot Motion Model}
\label{subsec:robot_motion_model}
Consider $N$ robots $R_i, i=1,\ldots,N$, each trying to reach one of $N$ goal sets $\mathcal{T}_i, i=1,\ldots,N$, while avoiding obstacles and collisions with each other. Each robot $i$ has state $\x_{R,i}\in \R^{n_{R,i}}$ and moves with the following dynamics:
\begin{equation} \label{eq:dyn}
\dotx_{R,i} = f_{R,i} (\x_{R,i}, \ctrl_{R,i}),
\end{equation}
\noindent where $\x_{R,i}$ is robot $i$'s state
and $\ctrl_{R,i} \in \R^{m_{R,i}}$ represents the robot's control actions. The dynamics $f_{R,i}(\cdot,\cdot)$ may differ across the team of robots.
Each robot $R_i$ must avoid collision with every other robot $R_j, j\neq i$. We define pairwise robot keep-out sets $\mathcal{D}_{R_i,R_j} \subset \R^{n_{R,i}} \times \R^{n_{R,j}}$, which are the sets of joint robot-robot states to be avoided due to collisions.
The $M$ humans $H_i, i=1,\ldots,M$, are similarly modeled with the following dynamics:
\begin{equation} \label{eq:dyn_human}
\dotx_{H,i} = f_{H,i} (\x_{H,i}, \ctrl_{H,i}).
\end{equation}
For each robot we define a human keep-out set $\mathcal{D}_{R_i,H}$ as the set of joint states between the robot and all humans that are to be avoided, e.g. because they imply physical collision between the robot and at least one human. Therefore, each robot must stay safe with respect to every preceding robot ($\mathcal{D}_{R_i,R_j}$) and all humans ($\mathcal{D}_{R_i,H}$).
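A common concrete instantiation of such keep-out sets is a minimum-separation constraint on positions; a sketch assuming planar positions (the sets above are more general):

```python
import math

def in_keepout(state_i, state_j, r_min):
    """Membership test for a simple pairwise keep-out set D: joint states
    whose positions are closer than a minimum separation r_min.
    States here are (x, y) positions; real keep-out sets may also
    involve velocities or full joint states."""
    return math.hypot(state_i[0] - state_j[0],
                      state_i[1] - state_j[1]) < r_min
```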
\subsection{Human Predictive Model}
We model each human{}'s motion probabilistically: actions are drawn from a Boltzmann distribution whose parameters, including a model-confidence term, are inferred online, as detailed in Section \ref{sec:predictions}. With multiple humans{}, we predict each human{}'s motion independently and rely on online confidence adaptation to remain robust to unmodeled interaction effects.
\section*{Acknowledgments}
The authors would like to thank Joe Menke for his assistance in hardware and motion capture systems, and Daniel Hua and Claire Dong for their help early on.
\balance
\printbibliography
\end{document}
\section{Sequential Trajectory Planning}
\label{sec:STP}
Thus far, we have shown how our framework allows a single robot{} to navigate in real time through an environment with multiple humans{} while remaining safe with probability approximately $P_{\mathrm{th}}$ (i.e., \mbox{$P_{\mathrm{th}}$-safe}) and accounting for internal dynamics, external disturbances, and humans{}.
However, in many applications (such as autonomous driving), the environment may also be occupied by other robots{}.
Finding the optimal set of trajectories for all robots{} in the environment would require solving the planning problem over the joint state space of all robots{}. This very quickly becomes computationally intractable with increasing numbers of robots{}.
Approaches for multi-robot trajectory planning often assume that the other vehicles operate with specific control strategies, such as those involving induced velocity obstacles \cite{wu2012guaranteed, fiorini1998motion, chasparis2005linear, van2008reciprocal} or virtual structures and potential fields to avoid collisions \cite{olfati2002distributed, chuang2007multi, zhou2018agile}.
These assumptions greatly reduce the dimensionality of the problem, but may not hold in general.
Rather than assuming specific control strategies of other robots{}, each robot{} could generate predictions over the future motion of all other robots{}.
Successful results of this type typically assume that the vehicles operate with very simple dynamics, such as
single-integrator dynamics \cite{Zhou2017}, differentially flat systems \cite{lian2002real}, or linear systems \cite{ahmadzadeh2009multi}.
However, when robots{} can communicate with each other, centralized and/or cooperative multi-agent planning methods can improve scalability \cite{lewis2013cooperative, torreno2017cooperative, mylvaganam2017differential}.
One such method is sequential trajectory planning (STP) \cite{chen2015safe}, which coordinates robust multi-agent planning using a sequential priority ordering.
Priority ordering is commonly used in many multi-agent scenarios, particularly for aerospace applications.
In this work, we merge STP with FaSTrack to produce real-time planning for multi-agent systems.
\subsection{Sequential Trajectory Planning}
\textbf{\textit{Requirements:}} To apply STP, robots{} must be able to communicate trajectories and TEBs over a network.
\textbf{\textit{Implementation:}}
STP addresses the computational complexity of coupled motion planning by assigning a priority order to the robots{} and allowing higher-priority robots{} to ignore the planned trajectories of lower-priority robots{}.
The first-priority robot{} uses the \textbf{FaSTrack block} to plan a (time-dependent) trajectory through the environment while avoiding the \textbf{obstacle maps}.
This trajectory is shared across the network with all lower-priority robots{}.
The $i$-th robot{} augments the trajectories from robots{} $1:i-1$ by their respective TEBs, and treats them as time-varying obstacles.
The $i$-th robot{} then determines a safe trajectory that avoids these time-varying tubes as well as the predicted state distributions of humans{}, and publishes this trajectory for robots{} $i+1:N$.
This process continues until all robots{} have computed their trajectory.
Each robot{} replans as quickly as it is able; in our experiments, this took between $50$ and $300$~ms.
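The prioritized replanning loop above is easy to sketch. In the following toy example (Python; a time-expanded breadth-first search on a 2-D grid stands in for the \textbf{FaSTrack block}, and a Manhattan-radius inflation \texttt{teb} stands in for the TEB augmentation; all names are illustrative and not from our implementation), each robot{} plans in priority order and treats the higher-priority trajectories as time-varying obstacles:

```python
from collections import deque

def occupied(cell, t, trajs, teb):
    """True if `cell` at time `t` intersects a higher-priority trajectory inflated by `teb`."""
    for tr in trajs:
        ox, oy = tr[t] if t < len(tr) else tr[-1]   # finished robots park at their goals
        if abs(cell[0] - ox) + abs(cell[1] - oy) <= teb:
            return True
    return False

def plan_single(start, goal, trajs, teb, horizon=20):
    """Time-expanded BFS on the integer grid; moves: wait or a 4-neighbour step."""
    frontier = deque([(start, 0, (start,))])
    seen = {(start, 0)}
    while frontier:
        cell, t, path = frontier.popleft()
        if cell == goal and all(not occupied(goal, s, trajs, teb) for s in range(t, horizon)):
            return path                              # can park at the goal safely
        for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt, nt = (cell[0] + dx, cell[1] + dy), t + 1
            if nt < horizon and (nxt, nt) not in seen and not occupied(nxt, nt, trajs, teb):
                seen.add((nxt, nt))
                frontier.append((nxt, nt, path + (nxt,)))
    raise RuntimeError("no safe trajectory within the horizon")

def stp(starts, goals, teb=0):
    trajs = []                      # robot i treats trajectories 0..i-1 as moving obstacles
    for s, g in zip(starts, goals): # fixed priority order
        trajs.append(plan_single(s, g, trajs, teb))
    return trajs

# Two robots swap ends of a corridor; the lower-priority robot avoids the first.
# (Only same-cell conflicts are checked in this toy; swap conflicts are ignored.)
trajs = stp(starts=[(0, 0), (4, 0)], goals=[(4, 0), (0, 0)], teb=0)
for t in range(20):
    a = trajs[0][t] if t < len(trajs[0]) else trajs[0][-1]
    b = trajs[1][t] if t < len(trajs[1]) else trajs[1][-1]
    assert a != b
```

The sketch keeps the essential structure: lower-priority planning problems see only the already-published higher-priority trajectories, so each planning call stays low-dimensional.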
\subsection{Sequential Trajectory Planning in the \framework{} Framework}
By combining this method with FaSTrack for fast individual planning, the sequential nature of STP does not significantly affect overall planning time. In our implementation, all computations are done on a centralized computer using the Robot Operating System (ROS); however, this method can easily be performed in a decentralized manner. Note that STP does depend upon reliable, low-latency communication between the robots{}. If there are communication delays, techniques such as \cite{desai2017drona} may be used to augment each robot{}'s TEB by a term accounting for the time delay.
\section{Introduction}\label{sec:intro}
In the pantheon of anisotropic integrable quantum spin chains, one
model stands out for its high degree of symmetry: the $\mathop{U_{q} sl(2)}\nolimits$-invariant
open spin-1/2 XXZ quantum spin chain, whose Hamiltonian is given by
\cite{Pasquier:1989kd}
\begin{eqnarray}
H = \sum_{k=1}^{N-1} \left[ \sigma^x_k \sigma^x_{k+1} +
\sigma^y_k \sigma^y_{k+1} + \tfrac{1}{2}( q + q^{-1}) \sigma^z_k
\sigma^z_{k+1}
\right] - \tfrac{1}{2}( q - q^{-1})
\left( \sigma^z_1 - \sigma^z_N \right) \,, \label{Hamiltonian}
\end{eqnarray}
where $N$ is the length of the chain, $\vec\sigma$ are the usual Pauli
spin matrices, and $q=e^{\eta}$ is an arbitrary complex parameter. As
is true for generic quantum integrable models, the Hamiltonian is a
member of a family of commuting operators that can be obtained from a
transfer matrix \cite{Sklyanin:1988yz}; and the eigenvalues of the
transfer matrix can be obtained in terms of admissible solutions
$\{\lambda_{k}\}$ of the corresponding set of Bethe equations
\cite{Alcaraz:1987uk, Sklyanin:1988yz, Pasquier:1989kd} \footnote{In order to reduce the size of
formulas, we denote the hyperbolic sine function ($\sinh$) by
$\mathop{\rm sh}\nolimits$.}
\begin{eqnarray}
\lefteqn{\mathop{\rm sh}\nolimits^{2N}\left(\lambda_{k} + \tfrac{\eta}{2}\right)
\prod_{\scriptstyle{j \ne k}\atop \scriptstyle{j=1}}^M
\mathop{\rm sh}\nolimits(\lambda_{k} - \lambda_{j} - \eta)\mathop{\rm sh}\nolimits(\lambda_{k} + \lambda_{j} - \eta)
} \nonumber \\
&&=\mathop{\rm sh}\nolimits^{2N}\left(\lambda_{k} - \tfrac{\eta}{2}\right)
\prod_{\scriptstyle{j \ne k}\atop \scriptstyle{j=1}}^M
\mathop{\rm sh}\nolimits(\lambda_{k} - \lambda_{j} + \eta)\mathop{\rm sh}\nolimits(\lambda_{k} + \lambda_{j} + \eta) \,, \nonumber \\
&&\qquad k = 1 \,, 2\,, \ldots \,, M \,, \qquad
M = 0\,, 1\,, \ldots\,,
\lfloor\tfrac{N}{2}\rfloor \,,
\label{BAE}
\end{eqnarray}
where $\lfloor k \rfloor$ denotes the largest integer not greater than $k$.
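As a quick numerical sanity check of the conventions in (\ref{Hamiltonian}) (an illustration with \texttt{numpy}, not part of the analysis), one can verify that the all-spins-up reference state is an eigenstate: the hopping terms annihilate it, each of the $N-1$ bonds contributes $\mathop{\rm ch}\nolimits\eta$, and the boundary term vanishes on it, giving the eigenvalue $(N-1)\mathop{\rm ch}\nolimits\eta$ (the $M=0$ case):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, k, N):
    """Embed a single-site operator at site k (0-based) of an N-site chain."""
    mats = [I2] * N
    mats[k] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def hamiltonian(N, eta):
    q = np.exp(eta)
    H = np.zeros((2**N, 2**N), dtype=complex)
    for k in range(N - 1):
        H += site_op(sx, k, N) @ site_op(sx, k + 1, N)
        H += site_op(sy, k, N) @ site_op(sy, k + 1, N)
        H += 0.5 * (q + 1/q) * site_op(sz, k, N) @ site_op(sz, k + 1, N)
    H -= 0.5 * (q - 1/q) * (site_op(sz, 0, N) - site_op(sz, N - 1, N))
    return H

N, eta = 3, 0.3
H = hamiltonian(N, eta)
assert np.allclose(H, H.conj().T)                        # hermitian for real eta
omega = np.zeros(2**N, dtype=complex); omega[0] = 1.0    # reference state, all spins up
assert np.allclose(H @ omega, (N - 1) * np.cosh(eta) * omega)
```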
When the anisotropy parameter $\eta$ takes the values $\eta=i\pi/p$
with integer $p \ge 2$, and therefore $q=e^{\eta}$ is a root of unity,
several interesting new features appear. In particular, the symmetry
of the model is enhanced (for example, an $sl(2)$ symmetry arises from
the so-called divided powers of the quantum group generators); the
Hamiltonian has Jordan
cells~\cite{Dubail:2010zz,Vasseur:2011fi,MorinDuchesne:2011kd}; and
the Bethe equations (\ref{BAE}) admit continuous solutions
\cite{Gainutdinov:2015vba}, in addition to the usual discrete
solutions (the latter phenomenon also occurs for the closed XXZ chain
\cite{Baxter:1972wg, Fabricius:2000yx, Fabricius:2001yy, Baxter:2001sx,
Tarasov:2003xz}).
We have recently found \cite{Gainutdinov:2015vba} significant
numerical evidence that the Bethe equations have precisely the right
number of admissible solutions to describe all the distinct (generalized)
eigenvalues of the model's transfer matrix, even at roots of unity.
We focus here on the related problem of constructing, via the algebraic
Bethe ansatz, all $2^{N}$ (generalized) eigenvectors of the transfer
matrix.
For generic $q$, the construction of these eigenvectors is similar to the one
for the simpler spin-1/2 XXX chain: to each admissible solution of the
Bethe equations, there corresponds a Bethe vector, which is a
highest-weight state of $\mathop{U_{q} sl(2)}\nolimits$ \cite{Pasquier:1989kd, Hou:1991tc,
Mezincescu:1991rb}; and lower-weight states can be obtained by acting
on the Bethe vector with the quantum-group lowering
operator $F$.
However, at roots of unity $q=e^{i\pi/p}$ with integer $p \ge 2 $,
we find that there are two additional features:
\renewcommand{\theenumi}{\roman{enumi}}
\begin{enumerate}
\item Certain eigenvectors must be constructed using the continuous
solutions noted above. These solutions contain $p$ equally-spaced
roots (so-called exact complete $p$-strings), whose centers are
arbitrary, see Prop.~\ref{main-prop-1} for more details. This construction is a generalization of the one
proposed by Tarasov for the closed chain at roots of unity~\cite{Tarasov:2003xz}.
\item We propose that the generalized eigenvectors can be constructed
using similar string configurations of length up to $p-1$, except
the centers tend to infinity. We refer to Prop.~\ref{prop:gen-vect} for more details.
\end{enumerate}
We demonstrate explicitly for several values of $p$ and $N$ that the
complete set of (generalized) eigenvectors can indeed be obtained in this way.
The outline of this paper is as follows. In section
\ref{sec:preliminary} we briefly review results and notations
(specifically, the construction of the transfer matrix, the algebraic Bethe ansatz, and
$\mathop{U_{q} sl(2)}\nolimits$ symmetry) that are used later in the paper. In section
\ref{sec:pstrings} we work out in detail the construction noted in
item i above with the result formulated in Prop.~\ref{main-prop-1}, see in particular Eqs.
(\ref{Tarasovstate}) and (\ref{Tarasovstategen}). In section \ref{sec:generalized} we describe the
construction noted in item ii above with the final result in Prop.~\ref{prop:gen-vect}, see in particular
Eq. (\ref{genvec2}).
These two constructions are then used
in section \ref{sec:p2} to construct all the (generalized)
eigenvectors for the $p=2$ root of unity case with $N=4,5,6$,
as well as selected eigenvectors with $N=7, 9$.
We present all the (generalized)
eigenvectors for various values of $p>2$ and $N$ in section \ref{sec:pgt2}.
We conclude with a brief discussion in section \ref{sec:discuss}.
Some ancillary results are collected in four appendices.
In Appendix \ref{app:proj-mod-base}, we explicitly describe the
action of $\mathop{U_{q} sl(2)}\nolimits$ in tilting modules at roots of unity.
In Appendix \ref{sec:numerics}, we present numerical evidence for the
string solutions used in section
\ref{sec:generalized} for constructing generalized
eigenvectors. In Appendix \ref{sec:special}, we derive a special
off-shell relation (similar to the one found by Izergin and Korepin
\cite{Izergin:1982hy} for repeated Bethe roots), which we use in
Appendix \ref{sec:proof} to derive an off-shell relation
for generalized eigenvectors.
\section{Preliminaries}\label{sec:preliminary}
The transfer matrix and algebraic Bethe ansatz for the model
(\ref{Hamiltonian}) follow from the work of Sklyanin
\cite{Sklyanin:1988yz}, which was already reviewed in
\cite{Gainutdinov:2015vba}. However, we repeat here the main
results, both for the convenience of the reader and also to explain a
useful change in notation (see (\ref{shiftedB}) and subsequent formulas).
\subsection{Transfer matrix}\label{sec:transfer}
The basic ingredients of the transfer matrix are the R-matrix (solution
of the Yang-Baxter equation)
\begin{eqnarray}
R(u) = \left(\begin{array}{cccc}
\mathop{\rm sh}\nolimits(u+\eta) & 0 & 0 & 0\\
0 & \mathop{\rm sh}\nolimits(u) & \mathop{\rm sh}\nolimits(\eta) & 0 \\
0 & \mathop{\rm sh}\nolimits(\eta) & \mathop{\rm sh}\nolimits(u) & 0 \\
0 & 0 & 0 & \mathop{\rm sh}\nolimits(u+\eta)
\end{array} \right) \,,
\end{eqnarray}
and the left and right K-matrices (solutions of the boundary
Yang-Baxter equations) given by the diagonal matrices
\begin{eqnarray}
K^{+}(u)=\mathop{\rm diag}\nolimits(e^{-u-\eta}\,, e^{u+\eta}) \,, \qquad
K^{-}(u)=\mathop{\rm diag}\nolimits(e^{u}\,, e^{-u})\,,
\end{eqnarray}
respectively. The R-matrix is used to construct the monodromy matrices
\begin{eqnarray}
T_{a}(u) = R_{a1}(u) \cdots R_{aN}(u)\,, \qquad
\hat T_{a}(u) = R_{aN}(u) \cdots R_{a1}(u) \,.
\end{eqnarray}
Finally, the transfer matrix $t(u)$ is given by \cite{Sklyanin:1988yz}
\begin{eqnarray}
t(u) = \mathop{\rm tr}\nolimits_{a} K^{+}_{a}(u)\, {\cal U}_{a}(u)
\,,
\label{transfer}
\end{eqnarray}
where
\begin{eqnarray}
{\cal U}_{a}(u) = T_{a}(u)\, K^{-}_{a}(u)\, \hat T_{a}(u) \,.
\end{eqnarray}
The transfer matrix commutes for different values of the spectral parameter
\begin{eqnarray}
\left[ t(u) \,, t(v) \right] = 0 \,,
\label{commutativity}
\end{eqnarray}
and contains the Hamiltonian (\ref{Hamiltonian}): $H \sim t'(0)$, up
to multiplicative and additive constants.
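The construction (\ref{transfer}) and the commutativity (\ref{commutativity}) can be checked directly for small $N$. In the Python sketch below (with \texttt{numpy}; the embedding helper and all names are ours), the auxiliary space is tensor factor $0$ and the quantum spaces are factors $1,\ldots,N$:

```python
import numpy as np

sh = np.sinh

def R(u, eta):
    return np.array([[sh(u+eta), 0,       0,       0],
                     [0,         sh(u),   sh(eta), 0],
                     [0,         sh(eta), sh(u),   0],
                     [0,         0,       0,       sh(u+eta)]], dtype=complex)

def embed_two(M4, i, j, n):
    """Embed a 4x4 matrix acting on tensor factors i and j of n two-dimensional spaces."""
    dtot = 2**n
    op = np.zeros((dtot, dtot), dtype=complex)
    for col in range(dtot):
        bits = [(col >> (n-1-k)) & 1 for k in range(n)]
        for bi in range(2):
            for bj in range(2):
                amp = M4[2*bi + bj, 2*bits[i] + bits[j]]
                if amp != 0:
                    nb = list(bits); nb[i], nb[j] = bi, bj
                    row = int("".join(map(str, nb)), 2)
                    op[row, col] += amp
    return op

def transfer(u, eta, N):
    n = N + 1                                   # auxiliary space = factor 0
    T = np.eye(2**n, dtype=complex)
    for k in range(1, N+1):                     # T_a = R_{a1} ... R_{aN}
        T = T @ embed_two(R(u, eta), 0, k, n)
    That = np.eye(2**n, dtype=complex)
    for k in range(N, 0, -1):                   # hat T_a = R_{aN} ... R_{a1}
        That = That @ embed_two(R(u, eta), 0, k, n)
    Kp = np.kron(np.diag([np.exp(-u-eta), np.exp(u+eta)]), np.eye(2**N))
    Km = np.kron(np.diag([np.exp(u), np.exp(-u)]), np.eye(2**N))
    M = (Kp @ T @ Km @ That).reshape(2, 2**N, 2, 2**N)
    return M[0, :, 0, :] + M[1, :, 1, :]        # trace over the auxiliary space

eta, N = 0.3, 2
t1, t2 = transfer(0.17, eta, N), transfer(0.42, eta, N)
assert np.allclose(t1 @ t2, t2 @ t1)            # [t(u), t(v)] = 0
```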
\subsection{Algebraic Bethe ansatz}\label{sec:ABA}
The $A$, $B$, $C$, and $D$ operators of the algebraic Bethe ansatz
are defined by \cite{Sklyanin:1988yz}
\begin{eqnarray}
{\cal U}_{a}(u)
= \left( \begin{array}{cc}
A(u) & B(u) \\
C(u) & D(u) + \frac{\mathop{\rm sh}\nolimits \eta}{\mathop{\rm sh}\nolimits(2u+\eta)} A(u)
\end{array} \right) \,,
\end{eqnarray}
where $\left[B(u)\,, B(v) \right]=0$.
However, in order to avoid a later shift of the Bethe roots (see
e.g. Eq. (A.24) in \cite{Gainutdinov:2015vba}),
we now introduce a shifted $B$ operator
\begin{eqnarray}
{\cal B}(u) \equiv B(u-\tfrac{\eta}{2})
\label{shiftedB} \,.
\end{eqnarray}
We define the Bethe states using this shifted $B$ operator
\begin{eqnarray}
|\lambda_{1} \ldots \lambda_{M} \rangle = \prod_{k=1}^{M} {\cal
B}(\lambda_{k}) |\Omega\rangle\,,
\label{Bethestate}
\end{eqnarray}
where $|\Omega\rangle$ is the reference state with all spins up
\begin{eqnarray}
|\Omega\rangle = {1\choose 0}^{\otimes N} \,,
\label{reference}
\end{eqnarray}
and $\lambda_{1}\,, \ldots \,, \lambda_{M}$ remain to be specified.
The Bethe states satisfy the off-shell relation
\begin{eqnarray}
t(u) |\lambda_{1} \ldots \lambda_{M} \rangle = \Lambda(u)
|\lambda_{1} \ldots \lambda_{M} \rangle +
\sum_{m=1}^{M} \Lambda^{\lambda_{m}}(u)\, B(u)
\prod_{\scriptstyle{k \ne m}\atop \scriptstyle{k=1}}^M {\cal
B}(\lambda_{k}) |\Omega\rangle
\,,
\label{offshell}
\end{eqnarray}
where $\Lambda(u)$ is given by the T-Q
equation
\begin{eqnarray}
\Lambda(u) =
\frac{\mathop{\rm sh}\nolimits(2u+2\eta)}{\mathop{\rm sh}\nolimits(2u+\eta)}\mathop{\rm sh}\nolimits^{2N}(u+\eta)\frac{Q(u-\eta)}{Q(u)} +
\frac{\mathop{\rm sh}\nolimits(2u)}{\mathop{\rm sh}\nolimits(2u+\eta)}\mathop{\rm sh}\nolimits^{2N}(u)\frac{Q(u+\eta)}{Q(u)} \,,
\label{Lambda}
\end{eqnarray}
with
\begin{eqnarray}
Q(u)
=\prod_{k=1}^{M}\mathop{\rm sh}\nolimits\left(u-\lambda_{k}+\tfrac{\eta}{2}\right)\mathop{\rm sh}\nolimits\left(u+\lambda_{k}+\tfrac{\eta}{2}\right) = Q(-u-\eta)\,.
\label{Q}
\end{eqnarray}
Furthermore,
\begin{eqnarray}
\Lambda^{\lambda_{m}}(u) &=& \mathsf{f}(u,\lambda_{m}-\tfrac{\eta}{2})
\Bigg[\mathop{\rm sh}\nolimits^{2N}(\lambda_{m}+\tfrac{\eta}{2})
\prod_{\scriptstyle{k \ne m}\atop \scriptstyle{k=1}}^M
\frac{\mathop{\rm sh}\nolimits(\lambda_{m}-\lambda_{k}-\eta)
\mathop{\rm sh}\nolimits(\lambda_{m}+\lambda_{k}-\eta)}
{\mathop{\rm sh}\nolimits(\lambda_{m}-\lambda_{k}) \mathop{\rm sh}\nolimits(\lambda_{m}+\lambda_{k})}
\nonumber \\
&& -\mathop{\rm sh}\nolimits^{2N}(\lambda_{m}-\tfrac{\eta}{2})
\prod_{\scriptstyle{k \ne m}\atop \scriptstyle{k=1}}^M
\frac{\mathop{\rm sh}\nolimits(\lambda_{m}-\lambda_{k}+\eta)
\mathop{\rm sh}\nolimits(\lambda_{m}+\lambda_{k}+\eta)}
{\mathop{\rm sh}\nolimits(\lambda_{m}-\lambda_{k})
\mathop{\rm sh}\nolimits(\lambda_{m}+\lambda_{k})}\Bigg] \,,
\label{Lambdam}
\end{eqnarray}
where
\begin{eqnarray}
\mathsf{f}(u,v) = \frac{\mathop{\rm sh}\nolimits(2u+2\eta)\mathop{\rm sh}\nolimits(2v) \mathop{\rm sh}\nolimits \eta}
{ \mathop{\rm sh}\nolimits(u-v) \mathop{\rm sh}\nolimits(u+v+\eta) \mathop{\rm sh}\nolimits(2v+\eta)} \,.
\label{fuv}
\end{eqnarray}
It follows from the off-shell equation (\ref{offshell}) that the Bethe state $|\lambda_{1} \ldots
\lambda_{M} \rangle$ (\ref{Bethestate}) is an eigenstate
of the transfer matrix $t(u)$ (\ref{transfer}) with eigenvalue $\Lambda(u)$ (\ref{Lambda})
if the coefficients $\Lambda^{\lambda_{m}}$ of all the ``unwanted'' terms vanish; that is, according to (\ref{Lambdam}),
if $\lambda_{1}\,, \ldots \,, \lambda_{M}$ satisfy the Bethe
equations (\ref{BAE}). In particular, the
eigenvalues of the Hamiltonian (\ref{Hamiltonian}) are given by
\begin{eqnarray}
E = 2 \mathop{\rm sh}\nolimits^{2}\eta \sum_{k=1}^{M}
\frac{1}{\mathop{\rm sh}\nolimits(\lambda_{k}-\frac{\eta}{2})\,
\mathop{\rm sh}\nolimits(\lambda_{k}+\frac{\eta}{2})} + (N-1)\mathop{\rm ch}\nolimits\eta \,.
\label{energy}
\end{eqnarray}
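As a worked example (our own, not taken from \cite{Gainutdinov:2015vba}): for $N=2$, $M=1$ and real $\eta$, the Bethe equations (\ref{BAE}) reduce to $\mathop{\rm sh}\nolimits^{4}(\lambda+\tfrac{\eta}{2}) = \mathop{\rm sh}\nolimits^{4}(\lambda-\tfrac{\eta}{2})$, which is solved by the admissible root $\lambda = i\arctan(\tanh(\eta/2))$ satisfying (\ref{admissibleXXZb}); the energy (\ref{energy}) then reproduces the lowest eigenvalue $-3\mathop{\rm ch}\nolimits\eta$ of the Hamiltonian (\ref{Hamiltonian}). A numerical check in Python:

```python
import numpy as np

eta = 0.3
lam = 1j * np.arctan(np.tanh(eta/2))      # purely imaginary root, Re = 0, 0 < Im < pi/2

# Bethe equation for N=2, M=1: sh^4(lam + eta/2) = sh^4(lam - eta/2)
assert abs(np.sinh(lam + eta/2)**4 - np.sinh(lam - eta/2)**4) < 1e-12

# energy of the corresponding Bethe state
E = 2*np.sinh(eta)**2 / (np.sinh(lam - eta/2) * np.sinh(lam + eta/2)) + np.cosh(eta)

# exact diagonalization of the N=2 Hamiltonian
q = np.exp(eta)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2)
H = (np.kron(sx, sx) + np.kron(sy, sy) + 0.5*(q + 1/q)*np.kron(sz, sz)
     - 0.5*(q - 1/q)*(np.kron(sz, I2) - np.kron(I2, sz)))
evals = np.linalg.eigvalsh(H)
assert abs(E.imag) < 1e-12 and abs(E.real - evals.min()) < 1e-10
assert abs(E.real + 3*np.cosh(eta)) < 1e-10    # ground-state energy -3 ch(eta)
```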
We can restrict to solutions that are
\textit{admissible} \cite{Gainutdinov:2015vba}: all the $\lambda_{k}$'s are finite and pairwise distinct (no
two are equal), and each $\lambda_{k}$ satisfies
either
\begin{eqnarray}
\Re e(\lambda_{k} ) > 0 \qquad \mbox{ and } \qquad
-\frac{\pi}{2} < \Im m(\lambda_{k} ) \le \frac{\pi}{2}
\label{admissibleXXZa}
\end{eqnarray}
or
\begin{eqnarray}
\Re e(\lambda_{k} ) = 0 \qquad \mbox{ and } \qquad 0 < \Im m(\lambda_{k} ) < \frac{\pi}{2} \,.
\label{admissibleXXZb}
\end{eqnarray}
Moreover, for the root of unity case $\eta=i\pi/p$
with integer $p \ge 2$, we exclude
solutions containing exact complete $p$-strings, see section \ref{sec:pstrings}
below. All the admissible solutions of the Bethe equations
(\ref{BAE}) for small values of $p$ and $N$ are given in
\cite{Gainutdinov:2015vba}.
\subsection{$\mathop{U_{q} sl(2)}\nolimits$ symmetry}\label{sec:qg}
For generic $q$, the quantum group $\mathop{U_{q} sl(2)}\nolimits$ has generators $E\,, F\,, K$
that satisfy the relations
\begin{eqnarray}
K\, E\, K^{-1} = q^{2} E\,, \qquad K\, F\, K^{-1} = q^{-2} F\,,\qquad
\left[ E\,, F \right] = \frac{K - K^{-1}}{q-q^{-1}} \,.
\end{eqnarray}
These generators are represented on the spin chain by (see e.g. \cite{Gainutdinov:2012qy})
\begin{eqnarray}
E &=& \sum_{k=1}^{N} \mathbb{I} \otimes \cdots \otimes \mathbb{I} \otimes
\sigma_{k}^{+} \otimes q^{\sigma^{z}_{k+1}}\otimes \cdots \otimes
q^{\sigma^{z}_{N}} \,, \nonumber \\
F &=& \sum_{k=1}^{N} q^{-\sigma^{z}_{1}} \otimes \cdots \otimes
q^{-\sigma^{z}_{k-1}} \otimes
\sigma_{k}^{-} \otimes \mathbb{I} \otimes \cdots \otimes
\mathbb{I} \,, \nonumber \\
K &=& q^{\sigma^{z}_{1}} \otimes \cdots \otimes q^{\sigma^{z}_{N}} \,.
\end{eqnarray}
The transfer matrix has $\mathop{U_{q} sl(2)}\nolimits$ symmetry \cite{Kulish:1991np}
\begin{eqnarray}
\left[ t(u) \,, E \right] = \left[ t(u) \,, F \right] = \left[ t(u)
\,, K \right] = 0 \,.
\label{qgsymm}
\end{eqnarray}
Moreover, the transfer matrix commutes with $S^{z}$
\begin{eqnarray}
\left[ t(u) \,, S^{z} \right] = 0 \,, \qquad S^{z}=\tfrac{1}{2}\sum_{k=1}^N \sigma^z_k
\,,
\end{eqnarray}
and the Bethe states satisfy
\begin{eqnarray}
S^{z} |\lambda_{1} \ldots \lambda_{M} \rangle = (\tfrac{N}{2}-M) |\lambda_{1} \ldots
\lambda_{M} \rangle \,.
\label{Szeig}
\end{eqnarray}
As reviewed in \cite{Gainutdinov:2015vba}, the Bethe states are
$\mathop{U_{q} sl(2)}\nolimits$ highest-weight states of spin-$j$ representations $V_{j}$
with
\begin{eqnarray}
j=\ffrac{N}{2}-M\,,
\label{spinj}
\end{eqnarray}
and dimension
\begin{eqnarray}
\dim V_{j} = 2j+1=N-2M+1 \,.
\label{dimVj}
\end{eqnarray}
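These statements can be verified numerically; the Python sketch below (with \texttt{numpy}, real $q=e^{\eta}$, $N=3$; an illustration only) checks the $\mathop{U_{q} sl(2)}\nolimits$ relations and the invariance $[H,E]=[H,F]=0$ of the Hamiltonian (\ref{Hamiltonian}), which follows from (\ref{qgsymm}) since $H \sim t'(0)$:

```python
import numpy as np

def chain(ops):
    """Kronecker product of a list of single-site operators."""
    out = ops[0]
    for m in ops[1:]:
        out = np.kron(out, m)
    return out

sp = np.array([[0, 1], [0, 0]], dtype=complex)        # sigma^+
sm = sp.conj().T                                      # sigma^-
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

N, eta = 3, 0.3
q = np.exp(eta)
qz  = np.diag([q, 1/q]).astype(complex)               # q^{ sigma^z}
qzi = np.diag([1/q, q]).astype(complex)               # q^{-sigma^z}

E = sum(chain([I2]*k + [sp] + [qz]*(N-1-k)) for k in range(N))
F = sum(chain([qzi]*k + [sm] + [I2]*(N-1-k)) for k in range(N))
K = chain([qz]*N)
Ki = np.linalg.inv(K)

# quantum-group relations
assert np.allclose(K @ E @ Ki, q**2 * E)
assert np.allclose(K @ F @ Ki, F / q**2)
assert np.allclose(E @ F - F @ E, (K - Ki) / (q - 1/q))

# U_q sl(2) invariance of the Hamiltonian
sx, sy = sp + sm, -1j*(sp - sm)
H = np.zeros((2**N, 2**N), dtype=complex)
for k in range(N - 1):
    for g, c in ((sx, 1), (sy, 1), (sz, 0.5*(q + 1/q))):
        H += c * chain([I2]*k + [g, g] + [I2]*(N-2-k))
H -= 0.5*(q - 1/q) * (chain([sz] + [I2]*(N-1)) - chain([I2]*(N-1) + [sz]))
assert np.allclose(H @ E, E @ H) and np.allclose(H @ F, F @ H)
```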
For the root of unity case $q=e^{i \pi/p}$, the generators satisfy
the additional relations
\begin{eqnarray}
E^{p}= F^{p} = 0\,, \qquad K^{2p}=1 \,.
\end{eqnarray}
Lusztig's ``divided powers''~\cite{Chari:1994pz} are defined by (see e.g. \cite{Bushlanov:2009cv})
\begin{eqnarray}
e=\frac{1}{[p]_{q}!}K^{p}\, E^{p}\,, \qquad
f=\frac{(-1)^{p}}{[p]_{q}!} F^{p}\,, \qquad
h=\tfrac{1}{2}\left[e\,, f\right] \,,
\end{eqnarray}
where
\begin{eqnarray}
[n]_{q} = \frac{q^{n}-q^{-n}}{q-q^{-1}}\,, \qquad [n]_{q}! =
\prod_{k=1}^{n}[k]_{q}\,.
\end{eqnarray}
The generators $e, f, h$ obey the usual $sl(2)$ relations
\begin{eqnarray}
\left[h \,, e \right] = e\,, \qquad \left[h \,, f \right] = - f \,.
\end{eqnarray}
The transfer matrix also has this $sl(2)$ symmetry at roots of unity.
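The additional relations at a root of unity can also be seen numerically; the Python sketch below (with \texttt{numpy}; an illustration only) checks $[p]_{q}=0$, $F^{p}=0$, and $K^{2p}=1$ for $p=3$ on a chain of length $N=4$:

```python
import numpy as np

def chain(ops):
    out = ops[0]
    for m in ops[1:]:
        out = np.kron(out, m)
    return out

sm = np.array([[0, 0], [1, 0]], dtype=complex)   # sigma^-
I2 = np.eye(2, dtype=complex)

p, N = 3, 4
q = np.exp(1j*np.pi/p)                           # q = e^{i pi / p}
qz, qzi = np.diag([q, 1/q]), np.diag([1/q, q])   # q^{±sigma^z}

F = sum(chain([qzi]*k + [sm] + [I2]*(N-1-k)) for k in range(N))
K = chain([qz]*N)

assert abs((q**p - q**(-p)) / (q - 1/q)) < 1e-12          # [p]_q = 0
assert np.allclose(np.linalg.matrix_power(F, p), 0)       # F^p = 0
assert np.allclose(np.linalg.matrix_power(K, 2*p), np.eye(2**N))  # K^{2p} = 1
```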
The space of states of the spin chain is given by the $N$-fold tensor
product of spin-1/2 representations $V_{1/2}$. As already reviewed in
\cite{Gainutdinov:2015vba}, for $q=e^{i \pi/p}$, this vector space
decomposes into a direct sum of tilting $\mathop{U_{q} sl(2)}\nolimits$-modules $T_j$
characterized by spin $j$,
\begin{eqnarray}
\bigl(V_{\frac{1}{2}}\bigr)^{\otimes N} = \bigoplus_{j=0(1/2)}^{N/2} d^0_j T_j,
\label{decomposition}
\end{eqnarray}
where the sum starts from $j=0$ for even $N$ and $j=1/2$ for odd $N$.
The multiplicities $d^0_j$ of these $T_j$ modules are given
by~\cite{Gainutdinov:2012mr}
\begin{eqnarray}
d_{j}^{0} = \sum_{n\ge 0} d_{j+ n p}
- \sum_{n \ge t(j)+1} d_{j+ n p -1 -2(j
\, {\rm mod } \, p)} \,, \qquad (j\ {\rm mod} \ p) \ne p-\frac{1}{2}
\,, \frac{p-1}{2} \,,
\label{dj0}
\end{eqnarray}
where $d_{j}$ is given by
\begin{eqnarray}
d_{j} = {N\choose \frac{N}{2}-j} - {N\choose \frac{N}{2}-j-1} \,,
\qquad \qquad d_{j} = 0 \quad {\rm for }\quad j > \frac{N}{2} \,,
\label{dj}
\end{eqnarray}
and
\begin{eqnarray}
t(j) = \left\{ \begin{array}{ll}
1 & \ {\rm for }\ (j\ {\rm mod} \ p) > \frac{p-1}{2} \,, \\
0 & \ {\rm for }\ (j\ {\rm mod} \ p) < \frac{p-1}{2} \,.
\end{array} \right.
\label{tj}
\end{eqnarray}
If $(j\ {\rm mod} \ p) = p-\frac{1}{2} \,, \frac{p-1}{2}$, then
$d_{j}^{0} =d_{j}$.
The dimensions of the tilting modules are given by \cite{Gainutdinov:2015vba}
\begin{eqnarray}
\dim T_j =
\begin{cases}
2j+1,&\qquad 2j+1\leq p \quad\text{or}\quad s(j)=0,\\
2(2j+1-s(j)),&\qquad \text{otherwise}\,,
\end{cases}
\label{dimTj}
\end{eqnarray}
where we set~\footnote{$(j\ {\rm mod} \ p)$ is the remainder on division of
$j$ by $p$.}
\begin{eqnarray}\label{sj}
s(j)=(2j+1)\;{\rm mod}\;p \ .
\end{eqnarray}
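As a consistency check of (\ref{dj0})--(\ref{tj}) and (\ref{dimTj}), the multiplicities and dimensions must exhaust the state space: $\sum_{j} d^{0}_{j} \dim T_{j} = 2^{N}$ by (\ref{decomposition}). The Python sketch below (restricted to integer $j$, i.e. even $N$) implements the formulas above and verifies this counting:

```python
from math import comb

def d(j, N):
    """d_j: difference of binomials, zero for j > N/2."""
    k = N // 2 - j
    if j > N // 2 or k < 0:
        return 0
    return comb(N, k) - (comb(N, k - 1) if k >= 1 else 0)

def d0(j, N, p):
    """Multiplicity d_j^0 (integer j only, so even N)."""
    m = j % p
    if 2 * m == p - 1:                 # (j mod p) = (p-1)/2  =>  d_j^0 = d_j
        return d(j, N)
    t = 1 if 2 * m > p - 1 else 0      # t(j)
    s1 = sum(d(j + n * p, N) for n in range(N + 1))
    s2 = sum(d(j + n * p - 1 - 2 * m, N) for n in range(t + 1, N + 2))
    return s1 - s2

def dimT(j, p):
    """Dimension of the tilting module T_j."""
    s = (2 * j + 1) % p
    return 2 * j + 1 if (2 * j + 1 <= p or s == 0) else 2 * (2 * j + 1 - s)

for p in (2, 3, 5):
    for N in (4, 6, 8):
        assert sum(d0(j, N, p) * dimT(j, p) for j in range(N // 2 + 1)) == 2 ** N
```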
\subsection{General structure of the tilting modules}\label{sec:Tj-str}
For our analysis, we need an explicit structure and the $\mathop{U_{q} sl(2)}\nolimits$ action on the tilting modules $T_j$ that appear in the decomposition (\ref{decomposition}).
The structure of the tilting
$\mathop{U_{q} sl(2)}\nolimits$-modules was studied in
many works~\cite{Pasquier:1989kd,Martin:1991pk,Chari:1994pz,Read:2007qq,
Gainutdinov:2012mr}.
The tilting $U_q sl(2)$-modules $T_j$
in~\eqref{decomposition} for $2j+1$ less than $p$ or divisible by $p$ are
irreducible and isomorphic to the spin-$j$ modules (or $V_j$ in our
notations).\footnote{The tilting modules $T_j$ with $2j+1<p$ are
the type-II representations in \cite{Pasquier:1989kd}, while all
others are of type~I.}
Otherwise, each $T_j$ is indecomposable but reducible and contains
$V_j$ as a submodule while the quotient $T_j/V_j$ is isomorphic to
$V_{j-s(j)}$, where $s(j)$ is defined in~\eqref{sj}. Both the components $V_{j}$ and $V_{j-s(j)}$ are further
reducible but indecomposable: $V_j$ has the unique submodule isomorphic
to the head (or irreducible quotient) of the $V_{j-s(j)}$ module, and $V_{j-s(j)}$ has the unique submodule isomorphic
to the head of the $V_{j-p}$ module. We denote
the head of $V_j$ by $\Irr{j}$. Then, the sub-quotient structure of $T_j$
in terms of the irreducible modules $\Irr{j}$ can be depicted as
\begin{equation}
\xymatrix@R=22pt@C=1pt{
\mbox{}&\\
&T_j\quad:\
\mbox{}&\\
}
\xymatrix@R=22pt@C=10pt@W=4pt@M=6pt{
&\Irr{j-s(j)}\ar[dl]\ar[dr]&\\
\Irr{j-p}\ar[dr]
&&\mbox{}\;\Irr{j}\;\;\;\;\;\ar[dl]\\
&\Irr{j-s(j)}&
}
\label{Tj-diag}
\end{equation}
where arrows correspond to
irreversible action of $U_q sl(2)$ generators and we set
$\Irr{j}=0$ for $j<0$.
To compute dimensions $\dim \Irr{j}$ of the irreducible subquotients in~\eqref{Tj-diag}, we note the relation $\dim \Irr{j} = 2j+1 - \dim \Irr{j-s(j)}$
that follows from the discussion above~\eqref{Tj-diag}.
It is then easy to check the following formula
for dimensions\footnote{If $s(j)=0$,
then $2j+1$ is divisible by $p$, so the tilting module is
irreducible (of dimension $2j+1$ as noted above), and therefore the sub-quotient
structure is trivial.} by induction in $r\geq0$:
\begin{eqnarray}
\dim \Irr{j} = s(j)(r+1),\qquad \text{where}\quad 2j+1 \equiv rp + s(j).
\label{dimj}
\end{eqnarray}
Note that the
highest-weight vector in the irreducible module $\Irr{j}$ has $S^{z}=j$.
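For integer $j$, the closed form (\ref{dimj}) is easily cross-checked against the recursion $\dim \Irr{j} = 2j+1 - \dim \Irr{j-s(j)}$ (a Python sketch; following the footnote, we set $\dim \Irr{j} = 2j+1$ when $s(j)=0$):

```python
def dim_irr(j, p):
    """Closed form: dim Irr_j = s(j)(r+1), 2j+1 = r p + s(j); irreducible case if s(j)=0."""
    s = (2 * j + 1) % p
    r = (2 * j + 1) // p
    return 2 * j + 1 if s == 0 else s * (r + 1)

# recursion dim Irr_j = 2j+1 - dim Irr_{j-s(j)}, with Irr_j = 0 for j < 0
for p in (2, 3, 5):
    for j in range(12):
        s = (2 * j + 1) % p
        if s:
            prev = dim_irr(j - s, p) if j - s >= 0 else 0
            assert dim_irr(j, p) == 2 * j + 1 - prev
```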
We shall refer to the four irreducible subquotients in
(\ref{Tj-diag}), starting from
the top $\Irr{j-s(j)}$ and going around clockwise, as
the ``top'' ${\bf T}_{j}$, ``right'' ${\bf R}_{j}$, ``bottom'' ${\bf
B}_{j}$, and ``left'' ${\bf L}_{j}$ nodes, respectively.
We refer the interested reader to Appendix~\ref{app:proj-mod-base} for the description of the basis and $\mathop{U_{q} sl(2)}\nolimits$-action in $T_j$.
\section{Bethe states for exact complete $p$-strings}\label{sec:pstrings}
For $\eta=i\pi/p$ with integer $p \ge 2 $ (so that $q=e^{\eta}$ is a
root of unity), the Bethe equations (\ref{BAE}) admit exact solutions
consisting of $p$ $\lambda$'s differing by $\eta$, e.g.
\begin{eqnarray}
\{ \mathop{v}\nolimits \,, \mathop{v}\nolimits + \eta\,, \mathop{v}\nolimits + 2\eta\,, \ldots \,,
\mathop{v}\nolimits + (p-1)\eta \}
\label{exactcompleterstring}
\end{eqnarray}
where $\mathop{v}\nolimits$ is {\em arbitrary}. Such solutions have been noticed in
the context of (quasi) periodic chains \cite{Baxter:1972wg,
Fabricius:2000yx, Fabricius:2001yy, Baxter:2001sx, Tarasov:2003xz},
and were called in \cite{Fabricius:2000yx} ``exact complete
$p$-strings.'' Such solutions do not lead to new eigenvalues of the
transfer matrix, and we therefore do not regard them as
admissible. Nevertheless, Bethe states corresponding to such
solutions are necessary in order to construct the complete
set of states when one or more tilting modules are spectrum
degenerate \cite{Gainutdinov:2015vba}.
The Bethe states (\ref{Bethestate}) corresponding to such solutions are naively null, since
\begin{eqnarray}
\prod_{r=0}^{p-1}{\cal B}(\mathop{v}\nolimits+ r\eta) = 0 \,,
\label{fusionid}
\end{eqnarray}
as already noticed by Tarasov for the (quasi) periodic chain in~\cite{Tarasov:2003xz, Tarasov:1991mf, Tarasov:1992aw}.\footnote{For
the closed chain, the corresponding product of $B$ operators is a
component (top-right corner) of a fused \cite{Kulish:1981gi, Kulish:1981bi} monodromy matrix;
and, for $\eta=i\pi/p$,
this fused monodromy matrix becomes block diagonal, and therefore the
top-right corner becomes zero. (See Proposition 5 parts (i) and (ii)
in \cite{Tarasov:1991mf}, and Lemmas 1.4 and 1.5 in
\cite{Tarasov:1992aw}.) The same logic applies to the open
chain, in view of
the open-chain generalization \cite{Mezincescu:1991ke} of the fusion procedure.}
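The identity (\ref{fusionid}) can be confirmed numerically from the construction of section \ref{sec:ABA}. In the Python sketch below (our own code; the auxiliary space is tensor factor $0$, and ${\cal B}(u) = B(u-\tfrac{\eta}{2})$ is read off as the upper-right auxiliary-space block of ${\cal U}_{a}(u)$), we check it for $p=2$, $N=2$ at a generic string center $v$, together with the commutativity $[{\cal B}(u),{\cal B}(v)]=0$:

```python
import numpy as np

sh = np.sinh

def R(u, eta):
    return np.array([[sh(u+eta), 0,       0,       0],
                     [0,         sh(u),   sh(eta), 0],
                     [0,         sh(eta), sh(u),   0],
                     [0,         0,       0,       sh(u+eta)]], dtype=complex)

def embed_two(M4, i, j, n):
    """Embed a 4x4 matrix acting on tensor factors i and j of n two-dimensional spaces."""
    dtot = 2**n
    op = np.zeros((dtot, dtot), dtype=complex)
    for col in range(dtot):
        bits = [(col >> (n-1-k)) & 1 for k in range(n)]
        for bi in range(2):
            for bj in range(2):
                amp = M4[2*bi + bj, 2*bits[i] + bits[j]]
                if amp != 0:
                    nb = list(bits); nb[i], nb[j] = bi, bj
                    row = int("".join(map(str, nb)), 2)
                    op[row, col] += amp
    return op

def calB(u, eta, N):
    """Shifted operator B(u - eta/2) from the double-row monodromy U_a."""
    u = u - eta/2
    n = N + 1                                    # auxiliary space = factor 0
    T = np.eye(2**n, dtype=complex)
    for k in range(1, N+1):                      # T_a = R_{a1} ... R_{aN}
        T = T @ embed_two(R(u, eta), 0, k, n)
    That = np.eye(2**n, dtype=complex)
    for k in range(N, 0, -1):                    # hat T_a = R_{aN} ... R_{a1}
        That = That @ embed_two(R(u, eta), 0, k, n)
    Km = np.kron(np.diag([np.exp(u), np.exp(-u)]), np.eye(2**N))
    U = (T @ Km @ That).reshape(2, 2**N, 2, 2**N)
    return U[0, :, 1, :]                         # upper-right auxiliary block

p, N = 2, 2
eta = 1j*np.pi/p
v = 0.37 + 0.11j                                 # arbitrary string center
B1, B2 = calB(v, eta, N), calB(v + eta, eta, N)
assert np.allclose(B1 @ B2 - B2 @ B1, 0)         # [B(u), B(v)] = 0
assert np.allclose(B1 @ B2, 0)                   # exact complete p-string: product vanishes
```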
We proceed, following \cite{Tarasov:2003xz} (see also \cite{Fabricius:2001yy}), by regularizing the solution and
taking a suitable limit. Therefore, we now define
\begin{eqnarray}
\eta_{0} \equiv \frac{i\pi}{p}\,, \qquad \mu \equiv \eta-\eta_{0}\,,
\label{eta0}
\end{eqnarray}
and we consider the limit $\mu \rightarrow 0$.
Given a usual Bethe state $|\lambda_{1} \ldots \lambda_{M} \rangle$ (\ref{Bethestate}),
we define the operators\footnote{For simplicity, we assume here that the $\lambda_i$'s are fixed and
do not depend on $\mu$.
In principle, the analysis presented here could be
generalized by not making any assumptions about the $\lambda_i$'s at the outset,
which in fact is the approach taken in \cite{Tarasov:2003xz} for the
closed chain. However, the result of such an analysis is that, in
order to obtain an eigenvector of the transfer matrix, the
$\lambda_i$'s must indeed be solutions of the Bethe equations with $\mu\rightarrow 0$.}
\begin{eqnarray}\label{Bmu-def}
\BTmunew{\mathop{v}\nolimits} =
\frac{1}{\mu} \prod_{r=0}^{p-1}{\cal B}(\mathop{v}\nolimits+ r\eta + \mu
x_{r+1})
\end{eqnarray}
and \footnote{The operator (\ref{BTnew-def}) is well
defined, since $\prod_{r=0}^{p-1}{\cal B}(\mathop{v}\nolimits+ r\eta + \mu x_{r+1})
= O(\mu)$ for $\mu\rightarrow 0$, as follows from (\ref{fusionid}) and the
fact that ${\cal B}(u) = {\cal B}(u)\Big\vert_{\mu=0} + O(\mu)$; and it is
non-zero in general, as the examples we studied show.
}
\begin{eqnarray}\label{BTnew-def}
\BTnew{\mathop{v}\nolimits}=
\lim_{\mu \rightarrow 0}
\BTmunew{\mathop{v}\nolimits} \,,
\end{eqnarray}
as well as the corresponding new states
\begin{eqnarray}
\vecTmu{\mathop{v}\nolimits}{\lambda_{1} \ldots \lambda_{M}} =
\BTmunew{\mathop{v}\nolimits}\,
|\lambda_{1} \ldots \lambda_{M} \rangle
\end{eqnarray}
and
\begin{eqnarray}
\vecT{\mathop{v}\nolimits}{\lambda_{1} \ldots \lambda_{M}} &=&
\BTnew{\mathop{v}\nolimits}\,
|\lambda_{1} \ldots \lambda_{M} \rangle
\nonumber \\
&=& \lim_{\mu \rightarrow 0}
\frac{1}{\mu} \prod_{r=0}^{p-1}{\cal B}(\mathop{v}\nolimits+ r\eta + \mu
x_{r+1})\, |\lambda_{1} \ldots \lambda_{M} \rangle \,,
\label{Tarasovstate}
\end{eqnarray}
where the transfer matrix $t(u)$ and the ${\cal B}$ operators
(including those used in the construction of the Bethe state $|\lambda_{1} \ldots \lambda_{M} \rangle$ of course)
should be understood to be constructed with generic anisotropy $\eta$ instead of $\eta_{0}$, and
$x_{1}, \ldots, x_{p}$ are still to be determined.
To this end, we obtain the
off-shell relation for this state (c.f. (\ref{offshell}))
\begin{eqnarray}
t(u) \vecTmu{\mathop{v}\nolimits}{\lambda_{1} \ldots \lambda_{M}} &=& X(u)
\vecTmu{\mathop{v}\nolimits}{\lambda_{1} \ldots \lambda_{M}} \nonumber\\
&+& \frac{1}{\mu}\sum_{m=1}^{M} Y_{m}\, B(u)
\prod_{r=0}^{p-1}{\cal B}(\mathop{v}\nolimits+ r\eta + \mu x_{r+1})\,
\prod_{\scriptstyle{k \ne m}\atop \scriptstyle{k=1}}^M {\cal
B}(\lambda_{k}) |\Omega\rangle \nonumber\\
&+& \frac{1}{\mu}\sum_{r=0}^{p-1} Z_{r}\, B(u)
\prod_{\scriptstyle{s \ne r}\atop \scriptstyle{s=0}}^{p-1}{\cal
B}(\mathop{v}\nolimits+ s\eta + \mu x_{s+1})\,
\prod_{k=1}^M {\cal B}(\lambda_{k}) |\Omega\rangle\,,
\label{offshell2}
\end{eqnarray}
and the limit $\mu \rightarrow 0$ remains to be performed.
Evidently, there are now two kinds of ``unwanted'' terms.
It is easy to see from (\ref{Lambda}) that $X(u)$, which appears
in the first line of (\ref{offshell2}), is given by
\begin{eqnarray}
X(u) =
\frac{\mathop{\rm sh}\nolimits(2u+2\eta)}{\mathop{\rm sh}\nolimits(2u+\eta)}\mathop{\rm sh}\nolimits^{2N}(u+\eta)\frac{Q(u-\eta)}{Q(u)}{\cal E}^{-}(u) +
\frac{\mathop{\rm sh}\nolimits(2u)}{\mathop{\rm sh}\nolimits(2u+\eta)}\mathop{\rm sh}\nolimits^{2N}(u)\frac{Q(u+\eta)}{Q(u)}{\cal E}^{+}(u) \,,
\end{eqnarray}
where $Q(u)$ is given by (\ref{Q}),
and the ${\cal E}^{\pm}(u)$ are defined by
\begin{eqnarray}
{\cal E}^{\pm}(u) =\frac{{\cal Q}(u\pm\eta)}{{\cal Q}(u)} \,,
\end{eqnarray}
where
\begin{eqnarray}
{\cal Q}(u) &=& \prod_{r=0}^{p-1}
\mathop{\rm sh}\nolimits(u-\mathop{v}\nolimits-(r-\tfrac{1}{2})\eta-\mu
x_{r+1})
\mathop{\rm sh}\nolimits(u+\mathop{v}\nolimits+(r+\tfrac{1}{2})\eta+\mu
x_{r+1}) \nonumber \\
&=& \prod_{r=0}^{p-1}
\mathop{\rm sh}\nolimits(u-\mathop{v}\nolimits-(r-\tfrac{1}{2})\eta)
\mathop{\rm sh}\nolimits(u+\mathop{v}\nolimits+(r+\tfrac{1}{2})\eta) \; +\; O(\mu)\,.
\label{calQ}
\end{eqnarray}
In the second line of (\ref{calQ}), we keep explicitly only the first term in the expansion around $\mu=0$ and neglect contributions that
vanish when $\mu$ vanishes.
We see that ${\cal E}^{\pm}(u) \rightarrow 1$
in the limit $\mu \rightarrow 0$, and therefore $X(u) \rightarrow
\Lambda(u)$.
Similarly, from (\ref{Lambdam}) we find that $Y_{m}$, which appears
in the second line of (\ref{offshell2}), is given~by
\begin{eqnarray}
Y_{m} &=& \mathsf{f}(u,\lambda_{m}-\tfrac{\eta}{2})
\Bigg[\mathop{\rm sh}\nolimits^{2N}(\lambda_{m}+\tfrac{\eta}{2})\, {\cal
E}^{-}(\lambda_{m}-\tfrac{\eta}{2})
\prod_{\scriptstyle{k \ne m}\atop \scriptstyle{k=1}}^M
\frac{\mathop{\rm sh}\nolimits(\lambda_{m}-\lambda_{k}-\eta)
\mathop{\rm sh}\nolimits(\lambda_{m}+\lambda_{k}-\eta)}
{\mathop{\rm sh}\nolimits(\lambda_{m}-\lambda_{k}) \mathop{\rm sh}\nolimits(\lambda_{m}+\lambda_{k})}
\nonumber \\
&& -\mathop{\rm sh}\nolimits^{2N}(\lambda_{m}-\tfrac{\eta}{2})\, {\cal
E}^{+}(\lambda_{m}-\tfrac{\eta}{2})
\prod_{\scriptstyle{k \ne m}\atop \scriptstyle{k=1}}^M
\frac{\mathop{\rm sh}\nolimits(\lambda_{m}-\lambda_{k}+\eta)
\mathop{\rm sh}\nolimits(\lambda_{m}+\lambda_{k}+\eta)}
{\mathop{\rm sh}\nolimits(\lambda_{m}-\lambda_{k})
\mathop{\rm sh}\nolimits(\lambda_{m}+\lambda_{k})} \Bigg] \,,
\label{Lambdam2}
\end{eqnarray}
and therefore $Y_{m} \rightarrow \Lambda^{\lambda_{m}}$ as $\mu
\rightarrow 0$. Hence, the ``unwanted'' terms of the first kind in
(\ref{offshell2}) vanish provided that $\lambda_{1}\,, \ldots \,,
\lambda_{M}$ satisfy the usual Bethe equations (\ref{BAE}) at
$\eta=\eta_0$. (The factor $1/\mu$ in the second line
of~\eqref{offshell2} is canceled by the contribution
from $\prod_{r=0}^{p-1}{\cal B}(\mathop{v}\nolimits+ r\eta + \mu x_{r+1})$ which
vanishes as fast as $O(\mu)$ for $\mu\rightarrow 0$, as we noticed
above.)
Finally, again from (\ref{Lambdam}) we find that $Z_{r}$, which appears
in the third line of (\ref{offshell2}), is given by
\begin{eqnarray}
Z_{r} &=& \mathsf{f}(u,\mathop{v}\nolimits+(r-\tfrac{1}{2})\eta)
\Bigg[\mathop{\rm sh}\nolimits^{2N}(\mathop{v}\nolimits+(r+\tfrac{1}{2})\eta)
\frac{Q(\mathop{v}\nolimits+(r-\tfrac{3}{2})
\eta)}{Q(\mathop{v}\nolimits+(r-\tfrac{1}{2})\eta)} {\cal Z}^{-}_{r} \nonumber \\
&& -\mathop{\rm sh}\nolimits^{2N}(\mathop{v}\nolimits+(r-\tfrac{1}{2})\eta)
\frac{Q(\mathop{v}\nolimits+(r+\tfrac{1}{2}) \eta)}{Q(\mathop{v}\nolimits+(r-\tfrac{1}{2})\eta)} {\cal Z}^{+}_{r}
\Bigg] \,,
\label{Lambdam3}
\end{eqnarray}
where
\begin{eqnarray}
{\cal Z}^{-}_{r} &=& \prod_{\scriptstyle{s \ne r}\atop \scriptstyle{s=0}}^{p-1}
\frac{\mathop{\rm sh}\nolimits((r-s-1)\eta + \mu(x_{r+1}-x_{s+1}))\,
\mathop{\rm sh}\nolimits(2\mathop{v}\nolimits+(r+s-1)\eta)}
{\mathop{\rm sh}\nolimits((r-s)\eta)\,
\mathop{\rm sh}\nolimits(2\mathop{v}\nolimits+(r+s)\eta)}\,, \nonumber \\
{\cal Z}^{+}_{r} &=& \prod_{\scriptstyle{s \ne r}\atop \scriptstyle{s=0}}^{p-1}
\frac{\mathop{\rm sh}\nolimits((r-s+1)\eta + \mu(x_{r+1}-x_{s+1}))\,
\mathop{\rm sh}\nolimits(2\mathop{v}\nolimits+(r+s+1)\eta)}
{\mathop{\rm sh}\nolimits((r-s)\eta)\,
\mathop{\rm sh}\nolimits(2\mathop{v}\nolimits+(r+s)\eta)}\,,
\end{eqnarray}
and we have again neglected contributions that vanish when $\mu$
vanishes.
We find
\begin{eqnarray}
\lim_{\mu\rightarrow 0} \frac{{\cal Z}^{-}_{r}}{\mu} = -(x_{r+1}-x_{r})
\frac{\mathop{\rm sh}\nolimits(2\mathop{v}\nolimits+2r\eta_{0})}{\mathop{\rm sh}\nolimits \eta_{0}\,
\mathop{\rm sh}\nolimits(2\mathop{v}\nolimits+(2r-1)\eta_{0})} \,, \qquad r \ne 0 \,,
\end{eqnarray}
while for $r=0$ the above result continues to hold except with $x_{0}=x_{p} +p$.
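This limit is straightforward to check numerically. The sketch below is illustrative only: it assumes the deformation $\eta = \eta_{0} + \mu$ used earlier in the text, takes generic values for $v$ and the $x_{r}$, and tests the case $r=1$ (the variable names are ours).

```python
import numpy as np

# Numerical check of the mu -> 0 limit of Z^-_r / mu (illustrative sketch).
# Assumption: the anisotropy is deformed as eta = eta0 + mu, as earlier in the text.
p = 3
eta0 = 1j * np.pi / p
v = 0.41 + 0.13j                  # generic boundary parameter
x = np.array([0.7, -0.2, 0.9])    # generic values; x[k] stores x_{k+1}

def Z_minus(r, mu):
    eta = eta0 + mu
    val = 1.0 + 0j
    for s in range(p):
        if s == r:
            continue
        val *= (np.sinh((r - s - 1)*eta + mu*(x[r] - x[s]))
                * np.sinh(2*v + (r + s - 1)*eta)
                / (np.sinh((r - s)*eta) * np.sinh(2*v + (r + s)*eta)))
    return val

r, mu = 1, 1e-6
numeric = Z_minus(r, mu) / mu
# claimed limit: -(x_{r+1} - x_r) sh(2v + 2r eta0) / (sh(eta0) sh(2v + (2r-1) eta0))
exact = (-(x[1] - x[0]) * np.sinh(2*v + 2*r*eta0)
         / (np.sinh(eta0) * np.sinh(2*v + (2*r - 1)*eta0)))
assert abs(numeric - exact) < 1e-4 * abs(exact)
print("Z^-_r / mu limit verified for r =", r)
```

The factor with $s=r-1$ supplies the overall $O(\mu)$, while all other factors stay finite as $\mu \rightarrow 0$; the check confirms the finite prefactor.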
Similarly,
\begin{eqnarray}
\lim_{\mu\rightarrow 0} \frac{{\cal Z}^{+}_{r}}{\mu} = -(x_{r+2}-x_{r+1})
\frac{\mathop{\rm sh}\nolimits(2\mathop{v}\nolimits+2r\eta_{0})}{\mathop{\rm sh}\nolimits \eta_{0}\,
\mathop{\rm sh}\nolimits(2\mathop{v}\nolimits+(2r+1)\eta_{0})} \,, \qquad r \ne p-1 \,,
\end{eqnarray}
while for $r=p-1$ the above result continues to hold except with
$x_{p+1}=x_{1} -p$. We conclude that the ``unwanted'' terms of the
second kind in (\ref{offshell2}) vanish provided that $x_{1},
\ldots, x_{p}$ satisfy
\begin{eqnarray}
\frac{x_{r+1}-x_{r}}{x_{r+2}-x_{r+1}} =
\left(\frac{\mathop{\rm sh}\nolimits(\mathop{v}\nolimits+(r-\frac{1}{2})\eta_{0})}{\mathop{\rm sh}\nolimits(\mathop{v}\nolimits+(r+\frac{1}{2})\eta_{0})}\right)^{2N}
\frac{\mathop{\rm sh}\nolimits(2\mathop{v}\nolimits+(2r-1)\eta_{0})}{\mathop{\rm sh}\nolimits(2\mathop{v}\nolimits+(2r+1)\eta_{0})}
\frac{Q(\mathop{v}\nolimits+(r+\frac{1}{2})
\eta_{0})}{Q(\mathop{v}\nolimits+(r-\frac{3}{2}) \eta_{0})}
\label{ratio1}
\end{eqnarray}
for $r=0, 1, \ldots, p-1$, where
\begin{eqnarray}
x_{0}=x_{p} +p\,, \qquad x_{p+1}=x_{1} -p\,,
\label{BC}
\end{eqnarray}
and $Q(u)$ in (\ref{Q}) is to be evaluated with
$\eta=\eta_{0}$.
In order to solve (\ref{ratio1}) for $x_{1}, \ldots, x_{p}$, we
now make (along the lines of \cite{Tarasov:2003xz}) the following ansatz
\begin{eqnarray}
x_{r} = 1 - r - \frac{G(\mathop{v}\nolimits+r\eta_{0})}{F(\mathop{v}\nolimits)} \,,
\qquad r = 0, \ldots, p+1\,,
\label{xr}
\end{eqnarray}
where $F(u)$ and $G(u)$ are functions with periodicities $\eta_{0}$ and $i \pi$,
respectively,
\begin{eqnarray}
F(u+ \eta_{0}) = F(u) \,, \qquad G(u+ i\pi) = G(u)\,.
\label{periodicity}
\end{eqnarray}
Then the boundary conditions (\ref{BC}) are satisfied, and
\begin{eqnarray}
\frac{x_{r+1}-x_{r}}{x_{r+2}-x_{r+1}} =
\frac{H(\mathop{v}\nolimits+r\eta_{0})}{H(\mathop{v}\nolimits+(r+1)\eta_{0})} \,,
\label{ratio2}
\end{eqnarray}
where
\begin{eqnarray}
H(u) = G(u+\eta_{0}) - G(u) + F(u) \,.
\label{Gcond}
\end{eqnarray}
The conditions (\ref{periodicity}) and (\ref{Gcond}) can be satisfied by
setting
\begin{eqnarray}
F(u) = \frac{1}{p} \sum_{k=0}^{p-1}H(u+ k\eta_{0})\,, \qquad
G(u) = \frac{1}{p} \sum_{k=1}^{p-1}k H(u+ k\eta_{0}) \,.
\label{FuGu}
\end{eqnarray}
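These properties are easy to confirm numerically. In the sketch below, $H$ is an arbitrary $i\pi$-periodic test function (not the solution appearing later in the text); the check verifies the periodicities (\ref{periodicity}), the telescoping identity behind (\ref{Gcond}), the boundary conditions (\ref{BC}), and the ratio (\ref{ratio2}).

```python
import numpy as np

# Illustrative check of (FuGu): any H with period i*pi will do, so we use
# an arbitrary test function rather than the solution from the text.
p = 3
eta0 = 1j * np.pi / p

def H(u):
    return np.sinh(2*u - eta0) * np.exp(np.cosh(2*u))

def F(u):
    return sum(H(u + k*eta0) for k in range(p)) / p

def G(u):
    return sum(k * H(u + k*eta0) for k in range(1, p)) / p

u = 0.37 + 0.21j
assert np.isclose(F(u + eta0), F(u))                 # F has period eta0
assert np.isclose(G(u + 1j*np.pi), G(u))             # G has period i*pi
assert np.isclose(G(u + eta0) - G(u) + F(u), H(u))   # telescoping identity

v = 0.5 - 0.4j
x = {r: 1 - r - G(v + r*eta0)/F(v) for r in range(p + 2)}   # ansatz (xr)
assert np.isclose(x[0], x[p] + p) and np.isclose(x[p+1], x[1] - p)  # (BC)
r = 1
assert np.isclose((x[r+1] - x[r]) / (x[r+2] - x[r+1]),
                  H(v + r*eta0) / H(v + (r+1)*eta0))  # (ratio2)
print("periodicities, boundary conditions and ratio identity verified")
```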
Comparing (\ref{ratio1}) and (\ref{ratio2}), we see that $H(u)$ must
obey the functional relation
\begin{eqnarray}
\frac{H(u)}{H(u+\eta_{0})} = \left(\frac{\mathop{\rm sh}\nolimits(u-\frac{\eta_{0}}{2})}{\mathop{\rm sh}\nolimits(u+\frac{\eta_{0}}{2})}\right)^{2N}
\frac{\mathop{\rm sh}\nolimits(2u-\eta_{0})}{\mathop{\rm sh}\nolimits(2u+\eta_{0})}\frac{Q(u+\frac{\eta_{0}}{2})}{Q(u-\frac{3\eta_{0}}{2})} \,,
\label{Heqn}
\end{eqnarray}
which is satisfied by\footnote{One can multiply this solution by
any function with periodicity $\eta_{0}$ and still obtain a
solution of (\ref{Heqn}); this does not change
the values of the $x_r$. We are not aware of any other solutions of the
functional equation, and expect that this one suffices to construct the complete basis of eigenstates.}
\begin{eqnarray}
H(u) = \frac{\mathop{\rm sh}\nolimits^{2N}(u-\frac{\eta_{0}}{2}) \mathop{\rm sh}\nolimits(2u-\eta_{0})}{Q(u-\frac{\eta_{0}}{2})\, Q(u-\frac{3\eta_{0}}{2})} \,.
\label{Hu}
\end{eqnarray}
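This solution can also be checked numerically (an illustrative sketch; the $\lambda$'s may be taken generic, since (\ref{Heqn}) is an algebraic identity in them and does not require the Bethe equations to hold):

```python
import numpy as np

# Direct numerical check that the candidate H(u) solves the functional
# relation (Heqn). Generic complex "Bethe roots" suffice.
N, p = 4, 3
eta0 = 1j * np.pi / p
lam = [0.3 + 0.2j, -0.15 + 0.55j]     # generic lambda's, M = 2

def Q(u):   # Q-function built from the lambda's, evaluated at eta = eta0
    return np.prod([np.sinh(u - l + eta0/2) * np.sinh(u + l + eta0/2)
                    for l in lam])

def H(u):   # candidate solution (Hu)
    return (np.sinh(u - eta0/2)**(2*N) * np.sinh(2*u - eta0)
            / (Q(u - eta0/2) * Q(u - 3*eta0/2)))

u = 0.29 + 0.11j
lhs = H(u) / H(u + eta0)
rhs = ((np.sinh(u - eta0/2) / np.sinh(u + eta0/2))**(2*N)
       * np.sinh(2*u - eta0) / np.sinh(2*u + eta0)
       * Q(u + eta0/2) / Q(u - 3*eta0/2))
assert np.isclose(lhs, rhs)
print("functional relation verified")
```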
We have therefore proved the following proposition.
\begin{Prop}\label{main-prop-1}
If $|\lambda_{1} \ldots \lambda_{M} \rangle$ is an
eigenstate of the transfer matrix $t(u)$ with eigenvalue $\Lambda(u)$,
then for any $v\in\mathbb{C}$ the corresponding state $\|\mathop{v}\nolimits; \lambda_{1} \ldots \lambda_{M}
\rangle\!\rangle$ constructed in (\ref{Tarasovstate}) using an exact
complete $p$-string, where $x_{r}$ are given by (\ref{xr}),
(\ref{FuGu}) and (\ref{Hu}) using~(\ref{Q}), is also an
eigenstate of the transfer matrix with the same eigenvalue $\Lambda(u)$.
\end{Prop}
By this proposition we see that the operator $\BTnew{\mathop{v}\nolimits}$
in~(\ref{BTnew-def}) maps the specific eigenstate $|\lambda_{1} \ldots
\lambda_{M} \rangle$ defined in (\ref{Bethestate}) to another
eigenstate of $t(u)$. However, acting with $\BTnew{\mathop{v}\nolimits}$ on
other Bethe states does not in general produce eigenstates; said
differently, the operator $\BTnew{\mathop{v}\nolimits}$ does not in general commute
with $t(u)$, since its definition involves the Bethe roots $\lambda_{i}$ via
the function $Q(u)$.
\begin{Rem}\label{rem:Hp2}
For the particular case $p=2$, the $Q(u)$ function obeys
$Q(u+2\eta_{0}) = Q(u)$, and therefore the ratio of $Q(u)$ functions
in (\ref{Heqn}) equals $1$, which implies that $H(u)$ can be chosen
independently of $\{\lambda_{i} \}$, e.g. $H(u) =
\mathop{\rm sh}\nolimits^{2N}(u-\frac{\eta_{0}}{2}) \mathop{\rm sh}\nolimits(2u-\eta_{0})$; and therefore
$\{x_{r} \}$ and thus $\BTnew{v}$ are independent of $\{\lambda_{i} \}$.
This suggests that $\BTnew{v}$ might be a symmetry of $t(u)$, as it maps any Bethe state to another eigenstate with the {\sl same} eigenvalue.
We have verified numerically for $p=2$ and up to $N=6$ that
$\BTnew{\mathop{v}\nolimits}$ indeed commutes with $t(u)$ for any complex numbers $u$ and $v$.
\end{Rem}
Several examples of the construction in Proposition~\ref{main-prop-1}
with $p=2$ can be found in Sec.~\ref{sec:p2}, see e.g. Secs.~\ref{sec:p2N5}, \ref{sec:p2N6}, and~\ref{sec:p2N7}.
For $p>2$, the first appearance of an exact complete
$p$-string is for the case $p=3, N=8, M=0$, see Section D.6 in
\cite{Gainutdinov:2015vba}.
We have constructed the vector
$\vecT{\mathop{v}\nolimits}{-}$ (\ref{Tarasovstate}) numerically for this case, with a
generic value for $v$, and we have verified that it is an eigenvector of
the Hamiltonian with the same eigenvalue as the reference state
(namely, $E=3.5$), yet it is linearly independent of the reference
state. Moreover, it is a highest-weight vector with spin $j=1$, exactly as
required for the right node of the tilting module $T_1$
(recall the structure in~\eqref{Tj-diag} and its description above), which is
spectrum-degenerate with the tilting module $T_4$
containing the reference state.
\begin{Rem} The generalization to the case of more than one exact complete
$p$-string is straightforward: a vector with $m$ such $p$-strings is given by
\begin{eqnarray}
\vecT{\mathop{v}\nolimits_{1}, \ldots, \mathop{v}\nolimits_{m}}{\lambda_{1} \ldots \lambda_{M}} =
\prod_{i=1}^{m} \BTnew{\mathop{v}\nolimits_{i}}|\lambda_{1} \ldots \lambda_{M} \rangle \,,
\label{Tarasovstategen}
\end{eqnarray}
where $\BTnew{\mathop{v}\nolimits_{i}}$ is constructed as in~\eqref{BTnew-def} and with $\{ x_{i,r} \}$ given by
\begin{eqnarray}
x_{i,r} = 1 - r - \frac{G(\mathop{v}\nolimits_{i}+r\eta_{0})}{F(\mathop{v}\nolimits_{i})} \,,
\qquad r = 0, \ldots, p+1\,, \qquad i = 1, \ldots, m \,,
\label{xrgen}
\end{eqnarray}
with the same boundary conditions on $x_{i,r}$.
We note that the $S^z$-eigenvalue of~\eqref{Tarasovstategen} is
$\frac{N}{2}-M + mp$ and thus the operators $\prod_{i=1}^{m}
\BTnew{\mathop{v}\nolimits_{i}}$ describe $t(u)$ degeneracies between
$S^z$-eigenspaces that differ by a multiple of $p$. We stress that
these degeneracies are in addition to those corresponding to the
action of the divided powers of $\mathop{U_{q} sl(2)}\nolimits$, which also change $S^z$ by
$\pm p$. We discuss this new type of degeneracy below. An example
with two exact complete $p$-strings (i.e., $m=2$, with $p=2$) is given
in Sec.~\ref{sec:p2N9}.
\end{Rem}
\section{Generalized Bethe states}\label{sec:generalized}
The usual Bethe states (\ref{Bethestate}) are, by construction,
ordinary eigenvectors of the transfer matrix $t(u)$. In order to
construct generalized eigenvectors (which, as noted in the
Introduction, appear at roots of unity), something different must be
done. We recall that {\em generalized} eigenvectors $|v\rangle$ are
defined as\footnote{The power in (\ref{geneig}) is $2$ because there
are Jordan cells of maximum rank $2$, and here $|v\rangle$ and
$|v'\rangle$ belong to a Jordan cell of rank $2$.}
\begin{eqnarray}
\bigl(t(u)-\Lambda(u)\mathbf{1} \bigr)^2 |v\rangle=0 \,,
\label{geneig}
\end{eqnarray}
or equivalently
\begin{eqnarray}
t(u)\, |v\rangle=\Lambda(u)\, |v\rangle + |v'\rangle \quad \mbox{ and
} \quad t(u)\, |v'\rangle=\Lambda(u)\, |v'\rangle \,.
\label{geneig-2}
\end{eqnarray}
We note that a generalized eigenvector, such as $|v\rangle$ in~\eqref{geneig-2}, is defined only
up to the transformation
\begin{equation}\label{gen-transf}
|v\rangle \to \alpha |v\rangle + \beta |v'\rangle,\qquad \text{for}\quad \alpha,\beta\in\mathbb{C}.
\end{equation}
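For orientation, these definitions can be illustrated on a two-dimensional toy model; the matrix below is a generic rank-$2$ Jordan block (our illustration, not the transfer matrix itself).

```python
import numpy as np

# Toy illustration of (geneig)-(gen-transf): a rank-2 Jordan cell
# in the basis (|v'>, |v>), with t acting as [[Lam, 1], [0, Lam]].
Lam = 2.5
t = np.array([[Lam, 1.0],
              [0.0, Lam]])

vp = np.array([1.0, 0.0])   # ordinary eigenvector |v'>
v  = np.array([0.0, 1.0])   # generalized eigenvector |v>

D = t - Lam * np.eye(2)
assert np.allclose(D @ (D @ v), 0) and not np.allclose(D @ v, 0)  # (geneig)
assert np.allclose(t @ v, Lam * v + vp)                           # (geneig-2)
assert np.allclose(t @ vp, Lam * vp)

# the freedom (gen-transf): alpha|v> + beta|v'> is again a generalized eigenvector
w = 3.0 * v - 1.7 * vp
assert np.allclose(D @ (D @ w), 0) and not np.allclose(D @ w, 0)
print("Jordan-cell relations verified")
```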
Generalized eigenvectors appear only in (direct sums of) the tilting $\mathop{U_{q} sl(2)}\nolimits$-modules $T_j$ with $s(j)$
non-zero, i.e. in the cases where
$T_j$ are indecomposable but reducible, and thus are described by the
diagram in~\eqref{Tj-diag}. This fact is borne out by the
explicit examples in our previous paper~\cite{Gainutdinov:2015vba},
see also~\cite{Dubail:2010zz,Vasseur:2011fi} and the proof for $p=2$ in~\cite{Gainutdinov:2012qy}.
As we will see further from an explicit
construction in this section, it is only the states in the head of
$T_j$ -- the
top sub-quotient $\Irr{j-s(j)}$ in~\eqref{Tj-diag} -- on which the Hamiltonian~\eqref{Hamiltonian}
is non-diagonalizable.
For the case $p=2$, this was already shown in~\cite{Gainutdinov:2012qy} using certain free-fermion operators.
\subsection{Introduction and overview}
An important
clue to a Bethe ansatz construction of the generalized eigenvectors can already be gleaned from the simplest case, namely
a chain with two sites ($N=2$). Indeed, for this case and for generic values of $q$, the
eigenvectors of the Hamiltonian (\ref{Hamiltonian}) are given
by
\begin{eqnarray}
|{\bf v}_{1}\rangle &=& |\Omega\rangle = |\uparrow\uparrow \rangle = (1,0,0,0)^{T} \,, \nonumber \\
|{\bf v}_{2}\rangle &=& F |\Omega\rangle = q^{-1} |\uparrow\downarrow
\rangle + |\downarrow\uparrow\rangle =
(0,q^{-1},1,0)^{T} \,, \nonumber \\
|{\bf v}_{3}\rangle &=& \frac{1}{[2]_{q}}F^{2} |\Omega\rangle = |\downarrow\downarrow \rangle=
(0,0,0,1)^{T} \,, \nonumber \\
|{\bf v}_{4}\rangle &=& -q |\uparrow\downarrow
\rangle + |\downarrow\uparrow\rangle =
(0,-q,1,0)^{T} \,.
\label{eigenvecsbrute}
\end{eqnarray}
The first three vectors, which form a spin-1 representation of $\mathop{U_{q} sl(2)}\nolimits$,
have the same energy eigenvalue $E_{1}=\tfrac{1}{2}[2]_{q}$, while
the fourth vector
(a spin-0 representation) has the energy eigenvalue
$E_{0}=-\tfrac{3}{2}[2]_{q}$.
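These statements are easy to verify numerically. In the sketch below, the explicit $4\times 4$ matrix is our assumption: its normalization and boundary-term sign are inferred from the eigenvalues quoted above rather than copied from (\ref{Hamiltonian}); the eigenvectors themselves are independent of the overall normalization.

```python
import numpy as np

# Check of (eigenvecsbrute) for N = 2 at generic q. The matrix H is an
# assumed normalization of the U_q sl(2)-invariant open XXZ Hamiltonian,
# chosen so that E_1 = [2]_q/2 and E_0 = -3[2]_q/2.
eta = 0.3                          # generic anisotropy, q = e^eta not a root of unity
q = np.exp(eta)
ch, sh = np.cosh(eta), np.sinh(eta)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
H = (np.kron(sx, sx) + np.kron(sy, sy) + ch * np.kron(sz, sz)
     + sh * (np.kron(I2, sz) - np.kron(sz, I2)))   # assumed boundary-term sign

v1 = np.array([1, 0, 0, 0], dtype=complex)
v2 = np.array([0, 1/q, 1, 0], dtype=complex)
v3 = np.array([0, 0, 0, 1], dtype=complex)
v4 = np.array([0, -q, 1, 0], dtype=complex)

E1 = (q + 1/q) / 2        # (1/2)[2]_q
E0 = -3 * (q + 1/q) / 2   # -(3/2)[2]_q
for v in (v1, v2, v3):
    assert np.allclose(H @ v, E1 * v)
assert np.allclose(H @ v4, E0 * v4)
print("spin-1 triplet and spin-0 singlet verified")
```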
For $p=2$ (i.e., $q=e^{i \pi/2} = i$), the vectors $|{\bf
v}_{2}\rangle$ and $|{\bf v}_{4}\rangle$ evidently coincide (and
$E_{1}=E_{0}=0$),
signaling that the
Hamiltonian is no longer diagonalizable. A generalized eigenvector of
the Hamiltonian with generalized eigenvalue $0$
can be constructed from the $q \rightarrow i$ limit of
an appropriate linear combination of these two
vectors, e.g.,
\begin{eqnarray}
|{\bf w}\rangle = \lim_{q \rightarrow i} \frac{1}{[2]_{q}}\bigl( |{\bf v}_{4}\rangle - |{\bf
v}_{2}\rangle\bigr) =-(0,1,0,0)^{T} \,.
\label{genvecp2ex}
\end{eqnarray}
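This Jordan-cell structure can be made concrete in a small numerical check. The $4\times 4$ matrix below is an assumed normalization of the two-site Hamiltonian (inferred from the eigenvalues quoted above, not copied from (\ref{Hamiltonian})); with it one verifies both (\ref{genvecp2ex}) and the rank-$2$ Jordan cell at $q=i$.

```python
import numpy as np

# q -> i: the assumed two-site Hamiltonian at eta = i*pi/2, where [2]_q = 0
# and the matrix develops a rank-2 Jordan cell with eigenvalue 0.
eta = 1j * np.pi / 2
ch, sh = np.cosh(eta), np.sinh(eta)   # ch = 0, sh = i

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)
H = (np.kron(sx, sx) + np.kron(sy, sy) + ch * np.kron(sz, sz)
     + sh * (np.kron(I2, sz) - np.kron(sz, I2)))

w  = -np.array([0, 1, 0, 0], dtype=complex)     # (genvecp2ex)
v2 = np.array([0, -1j, 1, 0], dtype=complex)    # v2 at q = i

assert np.allclose(H @ v2, 0)         # ordinary eigenvector, eigenvalue 0
assert not np.allclose(H @ w, 0)      # w is not an ordinary eigenvector...
assert np.allclose(H @ (H @ w), 0)    # ...but a generalized one: H^2 w = 0
assert np.allclose(H @ w, -2 * v2)    # H w lands on the eigenvector v2

# the combination (genvecp2ex) is in fact exact before the limit:
# (v4 - v2)/[2]_q = -(0,1,0,0) identically for generic q
qg = np.exp(0.2)
diff = (np.array([0, -qg, 1, 0]) - np.array([0, 1/qg, 1, 0])) / (qg + 1/qg)
assert np.allclose(diff, [0, -1, 0, 0])
print("rank-2 Jordan cell at q = i verified")
```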
Let us now consider the corresponding Bethe ansatz description. For
generic $q$, the vector $|{\bf v}_{4}\rangle$ is given by
\begin{eqnarray}
|{\bf v}_{4}\rangle = a(\eta)\, {\cal B}(\nu) |\Omega\rangle \,, \qquad
\nu = \tfrac{1}{2} \log\Big[- \frac{\mathop{\rm sh}\nolimits(\frac{\eta}{2}+\frac{i
\pi}{4})}{\mathop{\rm sh}\nolimits(\frac{\eta}{2}-\frac{i\pi}{4})} \Big] \,,
\label{BAp2}
\end{eqnarray}
where the normalization factor $a(\eta)$ vanishes at $\eta = i\pi/2$.
As $q$ approaches $i$ (i.e.,
$\eta$ approaches $\tfrac{i \pi}{2}$), the Bethe root
$\nu$ in (\ref{BAp2})
goes to infinity. Indeed, setting
$\eta = \tfrac{i \pi}{2} - i \omega^{2}$, we find that
\begin{eqnarray}
\nu = -\log \omega + \tfrac{1}{2} \log 2 + O(\omega^{4})
\label{lambdaforsmallomega}
\end{eqnarray}
for $\omega$ near 0. Expanding the Bethe vector in a series
about $\omega=0$, we observe that
\begin{eqnarray}\label{Bnu-exp}
{\cal B}(\nu) |\Omega\rangle = \tfrac{i}{2}\, \omega^{-4} \left( F |\Omega\rangle\right)\big\vert_{\omega=0} +
O(\omega^{-2}) \,.
\end{eqnarray}
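The expansion (\ref{lambdaforsmallomega}) can be confirmed directly (a small numerical sketch; the variable names are ours):

```python
import numpy as np

# Numerical confirmation of (lambdaforsmallomega): the Bethe root (BAp2)
# behaves as nu = -log(omega) + (1/2) log 2 + O(omega^4).
def nu(omega):
    eta = 1j * np.pi / 2 - 1j * omega**2
    r = -np.sinh(eta/2 + 1j*np.pi/4) / np.sinh(eta/2 - 1j*np.pi/4)
    return 0.5 * np.log(r)

for omega in (1e-1, 1e-2):
    resid = abs(nu(omega) - (-np.log(omega) + 0.5*np.log(2)))
    print(f"omega = {omega:g}:  residual = {resid:.2e}")

# the residual drops like omega^4 (a factor ~1e4 per decade in omega)
assert abs(nu(1e-2) - (-np.log(1e-2) + 0.5*np.log(2))) < 1e-7
```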
We can therefore subtract $\frac{i}{2}\omega^{-2} F |\Omega\rangle$ from $ \omega^{2}{\cal B}(\nu)|\Omega\rangle$ to obtain the final result
\begin{eqnarray}\label{gen-v}
|{\bf v}\rangle &\equiv& \lim_{\omega\rightarrow 0+} \left[ \omega^{2}{\cal
B}(\nu) |\Omega\rangle
- \tfrac{i}{2}\, \omega^{-2} F |\Omega\rangle \right] \nonumber\\
&=& (0,0,-1,0)^{T} =
i|{\bf w}\rangle - |{\bf v}_{2}\rangle\Big\vert_{q=i} \,,
\end{eqnarray}
which is a {\sl generalized} eigenvector of the Hamiltonian.
Note the similarity of the constructions in (\ref{genvecp2ex}) and
(\ref{gen-v}): both involve subtracting from a (generically) highest-weight state a contribution proportional
to $|{\bf v}_2\rangle = F |\Omega\rangle$ and taking the $q\rightarrow i$
limit.
The generalized eigenvector $|{\bf v}\rangle$ is evidently a linear
combination of the generalized eigenvector $|{\bf w}\rangle$ in (\ref{genvecp2ex}) and the eigenvector $|{\bf
v}_{2}\rangle$ in (\ref{eigenvecsbrute}); recall that a generalized eigenvector is defined only up to the transformation~\eqref{gen-transf}.
A construction of generalized Bethe states similar to~\eqref{gen-v} is possible for general values of $N$ and $p$.
We observe from numerical studies
given in App.~\ref{sec:numerics} that, as the anisotropy parameter $\eta$ approaches
$\eta_{0}= i \pi/p$ with integer $p \ge 2$, the Bethe
roots corresponding to a generalized eigenvalue contain a string of
length $p' \in \{ 1, 2, \ldots, p-1\}$, whose center (real part) approaches
infinity.
In more detail, such a string is a set of $p'$ roots differing by $i
\pi/p'$, e.g.
\begin{eqnarray}
\nu_{k}^{\infty} = \nu_{0} + \frac{i \pi}{2p'}(p'-(2k-1)) \,, \qquad
k=1, \ldots, p' \,,
\label{pprimestring1}
\end{eqnarray}
with $\nu_{0} \rightarrow \infty$. As we shall
see below, the value of $p'$ is related to the spin $j$ of the tilting
module $T_{j}$ (the one containing the corresponding generalized eigenvector) by the simple formula
\begin{eqnarray}
p' = s(j) \,,
\label{p'formula}
\end{eqnarray}
where $s(j) \in \{ 1, 2, \ldots, p-1\}$ is defined in (\ref{sj}).
For $p=2$, the only
possibility is $p'=1$, i.e. a single real root tending to infinity, as already
discussed. For $p=3$, the only
possibilities are $p'=1$ and $p'=2$, where the latter consists of the
pair of roots $\nu_{0} \pm i \pi/4$ with $\nu_{0} \rightarrow \infty$.
For $p=4$, we can have $p'=1,2,3$; the $p'=3$
case consists of a triplet of roots $\nu_{0}\,, \nu_{0} \pm i \pi/3$
with $\nu_{0} \rightarrow \infty$, etc.
The corresponding Bethe state
has Bethe roots $\{\nu_{k}^{\infty}\}$
tending to infinity in the limit, and requires a certain subtraction to get a finite vector.
In a nutshell, our construction of generalized eigenvectors in a tilting module~$T_j$ starts with
the spin-$j$ highest-weight state that lives in
the right node denoted by
$\Irr{j}$ in the diagram~\eqref{Tj-diag}. This state can be constructed using the
ordinary ABA approach as in~(\ref{Bethestate}). Then, a generalized eigenstate
living in the top node $\Irr{j-s(j)}$ is constructed by applying a certain
$p'$-string of ${\cal B}(\nu_k)$ operators (with $\nu_k$ as in~\eqref{pprimestring1} but finite $\nu_0$)
on the usual Bethe state in $\Irr{j}$ at generic value of $\eta$, subtracting the image of $F^{p'}$ on the
spin-$j$ highest-weight state and taking the limit $\eta\to \eta_0$.
We give below details
of the construction with our final claim in Prop.~\ref{prop:gen-vect},
while our representation-theoretic interpretation is given in Sec.~\ref{sec:rep-th-descr}.
\subsection{General ABA construction of generalized
eigenstates}\label{genABAconst}
With these observations in mind, let
\begin{eqnarray}
|\vec\lambda\rangle \equiv |\lambda_{1} \ldots \lambda_{M}
\rangle = \prod_{k=1}^{M}{\cal
B}(\lambda_{k}) |\Omega\rangle
\label{v0onshell}
\end{eqnarray}
denote an on-shell Bethe vector, i.e., an
ordinary eigenvector of the transfer matrix
\begin{eqnarray}
t(u) |\vec\lambda\rangle = \Lambda(u) |\vec\lambda\rangle \,,
\label{ordinary}
\end{eqnarray}
where the eigenvalue $\Lambda(u)$ is given by (\ref{Lambda}).
This state is a $\mathop{U_{q} sl(2)}\nolimits$ highest-weight state with spin
$j=N/2-M$, see (\ref{spinj}).
Under the already-mentioned assumption that the top node
$\Irr{j-s(j)}$ of $T_j$ contains generalized eigenstates,
let us construct a generalized eigenvector $\vecG{p'}{\vec\lambda} \equiv \vecG{p'}{ \lambda_{1} \ldots \lambda_{M} }$
whose generalized eigenvalue is also
$\Lambda(u)$, where $p'=s(j)$. To this end, we now set
\begin{eqnarray}
\eta=\eta_{0} - i \omega^{2p'}\,, \qquad \eta_{0} = \frac{i\pi}{p}\,,
\label{eta02}
\end{eqnarray}
and look for a generalized eigenvector as the limit
\begin{eqnarray}
\vecG{p'}{\vec\lambda} = \lim_{\omega\rightarrow 0+} \vecGomega{p'}{\vec\lambda} \,,
\label{genvec}
\end{eqnarray}
where
\begin{eqnarray}
\vecGomega{p'}{\vec\lambda} = \alpha |\vec\nu, \vec\lambda_{\alpha}\rangle + \beta
F^{p'} |\vec\lambda_{\beta}\rangle \,,
\label{vecGommega}
\end{eqnarray}
with
\begin{eqnarray}
|\vec\nu, \vec\lambda_{\alpha}\rangle = \prod_{j=1}^{p'}{\cal B}(\nu_{j})\,
\prod_{k=1}^{M}{\cal B}(\lambda_{\alpha, k}) |\Omega\rangle
\,, \qquad
|\vec\lambda_{\beta}\rangle = \prod_{k=1}^{M}{\cal
B}(\lambda_{\beta, k}) |\Omega\rangle \,.
\label{genvecmore}
\end{eqnarray}
Note that the subscripts $\alpha$ and $\beta$ on $\lambda_{\alpha,
k}$ and $\lambda_{\beta, k}$ are simply labels (i.e., not indices)
that serve to distinguish $\lambda_{\alpha, k}$ from $\lambda_{\beta,
k}$ and from $\lambda_{k}$. Note that $\lambda_k$ is the Bethe solution precisely
at the root of unity, when $\omega=0$, while $\lambda_{\alpha,
k}$ and $\lambda_{\beta, k}$ are a priori different functions of $\omega$.
We assume that,
as $\omega\rightarrow 0+$,
\begin{eqnarray}
\nu_{j} &\rightarrow& \nu_{j}^{\infty} \,,\nonumber \\
\lambda_{\alpha, k} &\rightarrow& \lambda_{k}\,,\nonumber \\
\lambda_{\beta, k} &\rightarrow& \lambda_{k}\,,
\label{limit-ass}
\end{eqnarray}
where $\nu_{j}^{\infty}$ is given in (\ref{pprimestring1}) with $\nu_0$ diverging as
$\nu_{0} = - \log \omega$.
However, the $\{ \nu_{j} \}$, $\{ \lambda_{\alpha, k} \}$, $\{ \lambda_{\beta, k} \}$
as well as the coefficients $\alpha$ and $\beta$ (actually certain powers of $\omega$) are still to be determined.
The ${\cal B}$ operators and the transfer matrix $t(u)$ should again
(as in Section \ref{sec:pstrings})
be understood to be constructed with anisotropy $\eta$ instead of
$\eta_{0}$.
Moreover, $F$ is the $\mathop{U_{q} sl(2)}\nolimits$ generator (see Section~\ref{sec:qg}) and
as an operator it also depends
on $q=e^{\eta}$.
We shall see that the
state $\vecG{p'}{\vec\lambda}$
or the limit~\eqref{genvec} is well defined and has the same transfer-matrix (generalized) eigenvalue as
$|\vec\lambda\rangle$ in (\ref{v0onshell}), and both states
belong to the same tilting module $T_j$, see Rem.~\ref{rem-Jordan-cell} below.
As in the usual ABA construction, the state $\vecG{p'}{\vec\lambda}$
in our construction also has the maximum value of
$S^{z}$ in the irreducible subquotient to which it belongs, namely, the top node $\Irr{j-s(j)}$.
We know from (\ref{Szeig}) and (\ref{genvecmore}) that this state has
$S^{z} = N/2 - M - p' = j - p'$.
On the other hand, we know from the general structure of tilting modules (\ref{Tj-diag}) that $\vecG{p'}{\vec\lambda}$
has $S^{z}= j - s(j)$. It follows that $p'=s(j)$, as already noted in (\ref{p'formula}).
Next, we observe that for $\omega\rightarrow 0$, the vector $|\vec\nu, \vec\lambda_{\alpha}\rangle$ has the power series expansion:
\begin{eqnarray}
|\vec\nu, \vec\lambda_{\alpha}\rangle = c\, \omega^{-2p' N} \left( F^{p'}
|\vec\lambda\rangle \right)\Big\vert_{\omega=0} + O(\omega^{-2p'
(N-1)}) \,,
\label{cformula}
\end{eqnarray}
where $c$ is some numerical factor. For $p'=1$, this
follows from the fact that $B(u) |\Omega\rangle \sim e^{2 N u} F |\Omega\rangle + O(e^{2
(N-1) u})$ for $u \rightarrow \infty$;
hence, for $u \sim -\log \omega$, $B(u) |\Omega\rangle \sim
\omega^{-2N} F |\Omega\rangle + O(\omega^{-2(N-1)})$. For $p'>1$, the result
(\ref{cformula}) is a conjecture, which we have checked in many
examples, see e.g. Secs. \ref{sec:genp3}, \ref{sec:genp4}.
It follows that
\begin{eqnarray}\label{gen-vec-fin}
\omega^{2p'(N-1)}|\vec\nu, \vec\lambda_{\alpha}\rangle
- c\, \omega^{-2p'} F^{p'} |\vec\lambda_{\beta}\rangle = O(\omega^{0})
\end{eqnarray}
for $\omega\rightarrow 0$. We therefore set
\begin{eqnarray}
\alpha = \omega^{2 p' (N-1)}\,, \qquad \beta = -c\, \omega^{-2 p'} \,,
\label{alphabeta}
\end{eqnarray}
which makes $\vecGomega{p'}{\vec\lambda}$ (\ref{vecGommega}) finite
for $\omega\rightarrow 0$.
According to the off-shell relation (\ref{offshell}), the transfer
matrix $t(u)$ has the following action on the off-shell Bethe vector $|\vec\nu,
\vec\lambda_{\alpha}\rangle$:
\begin{eqnarray}
t(u) |\vec\nu, \vec\lambda_{\alpha}\rangle =
\Lambda_{\alpha}(u) |\vec\nu, \vec\lambda_{\alpha}\rangle +
\sum_{i}\Lambda^{\nu_i}(u)\, B(u) | \hat\nu_{i}, \vec\lambda_{\alpha}\rangle
+ \sum_{i}\Lambda^{\lambda_{\alpha, i}}(u)\, B(u) | \vec\nu,
\hat\lambda_{\alpha,i}\rangle \,,
\label{offshellalpha}
\end{eqnarray}
where
a hat over a symbol means that it should be omitted,
i.e.
\begin{eqnarray}
| \hat\nu_{i}, \vec\lambda_{\alpha}\rangle =
\prod_{\scriptstyle{j \ne i}\atop \scriptstyle{j=1}}^{p'} {\cal
B}(\nu_{j}) \prod_{k=1}^{M} {\cal B}(\lambda_{\alpha,
k}) |\Omega\rangle \,, \qquad
| \vec\nu, \hat\lambda_{\alpha,i}\rangle =
\prod_{j=1}^{p'} {\cal B}(\nu_{j})
\prod_{\scriptstyle{k \ne i}\atop \scriptstyle{k=1}}^{M} {\cal B}(\lambda_{\alpha,
k}) |\Omega\rangle
\,,
\end{eqnarray}
and
\begin{eqnarray}
\Lambda_{\alpha}(u) &=&
\frac{\mathop{\rm sh}\nolimits(2u+2\eta)}{\mathop{\rm sh}\nolimits(2u+\eta)}\mathop{\rm sh}\nolimits^{2N}(u+\eta)\frac{Q_{\alpha}(u-\eta)
Q_{\nu}(u-\eta)}{Q_{\alpha}(u) Q_{\nu}(u)} \nonumber \\
&&+ \frac{\mathop{\rm sh}\nolimits(2u)}{\mathop{\rm sh}\nolimits(2u+\eta)}\mathop{\rm sh}\nolimits^{2N}(u)\frac{Q_{\alpha}(u+\eta) Q_{\nu}(u+\eta)}{Q_{\alpha}(u) Q_{\nu}(u)} \,,
\label{Lambdapp}
\end{eqnarray}
and $Q_{\nu}(u)$ and $Q_{\alpha}(u)$ are defined as
\begin{eqnarray}
Q_{\nu}(u) &=&
\prod_{j=1}^{p'}\mathop{\rm sh}\nolimits\left(u-\nu_{j}+\tfrac{\eta}{2}\right)\mathop{\rm sh}\nolimits\left(u+\nu_{j}+\tfrac{\eta}{2}\right) \,, \nonumber \\
Q_{\alpha}(u) &=&
\prod_{k=1}^{M}\mathop{\rm sh}\nolimits\left(u-\lambda_{\alpha,
k}+\tfrac{\eta}{2}\right)\mathop{\rm sh}\nolimits\left(u+\lambda_{\alpha,
k}+\tfrac{\eta}{2}\right) \,.
\end{eqnarray}
Moreover, according to~\eqref{Lambdam}, we have
\begin{eqnarray}
\Lambda^{\nu_i}(u) &=& \mathsf{f}(u,\nu_{i}-\tfrac{\eta}{2})
\Bigg[\mathop{\rm sh}\nolimits^{2N}(\nu_{i}+\tfrac{\eta}{2})
\frac{Q_{\alpha}(\nu_{i}-\tfrac{3\eta}{2})}{Q_{\alpha}(\nu_{i}-\tfrac{\eta}{2})}
\prod_{\scriptstyle{j \ne i}\atop \scriptstyle{j=1}}^{p'}
\frac{\mathop{\rm sh}\nolimits(\nu_{i}-\nu_{j}-\eta)
\mathop{\rm sh}\nolimits(\nu_{i}+\nu_{j}-\eta)}
{\mathop{\rm sh}\nolimits(\nu_{i}-\nu_{j}) \mathop{\rm sh}\nolimits(\nu_{i}+\nu_{j})}
\nonumber \\
&& -\mathop{\rm sh}\nolimits^{2N}(\nu_{i}-\tfrac{\eta}{2})
\frac{Q_{\alpha}(\nu_{i}+\tfrac{\eta}{2})}{Q_{\alpha}(\nu_{i}-\tfrac{\eta}{2})}
\prod_{\scriptstyle{j \ne i}\atop \scriptstyle{j=1}}^{p'}
\frac{\mathop{\rm sh}\nolimits(\nu_{i}-\nu_{j}+\eta)
\mathop{\rm sh}\nolimits(\nu_{i}+\nu_{j}+\eta)}
{\mathop{\rm sh}\nolimits(\nu_{i}-\nu_{j})
\mathop{\rm sh}\nolimits(\nu_{i}+\nu_{j})}\Bigg] \,,
\label{Lambdaip}
\end{eqnarray}
and
\begin{align}
\Lambda^{\lambda_{\alpha, i}}(u) &= \mathsf{f}(u,\lambda_{\alpha, i}-\tfrac{\eta}{2})
\Bigg[\mathop{\rm sh}\nolimits^{2N}(\lambda_{\alpha, i}+\tfrac{\eta}{2})
\frac{Q_{\nu}(\lambda_{\alpha,
i}-\tfrac{3\eta}{2})}{Q_{\nu}(\lambda_{\alpha, i}-\tfrac{\eta}{2})}
\prod_{\scriptstyle{j \ne i}\atop \scriptstyle{j=1}}^{M}
\frac{\mathop{\rm sh}\nolimits(\lambda_{\alpha, i}-\lambda_{\alpha, j}-\eta)
\mathop{\rm sh}\nolimits(\lambda_{\alpha, i}+\lambda_{\alpha, j}-\eta)}
{\mathop{\rm sh}\nolimits(\lambda_{\alpha, i}-\lambda_{\alpha, j}) \mathop{\rm sh}\nolimits(\lambda_{\alpha,
i}+\lambda_{\alpha, j})}
\nonumber \\
& -\mathop{\rm sh}\nolimits^{2N}(\lambda_{\alpha, i}-\tfrac{\eta}{2})
\frac{Q_{\nu}(\lambda_{\alpha,i}+\tfrac{\eta}{2})}{Q_{\nu}(\lambda_{\alpha, i}-\tfrac{\eta}{2})}
\prod_{\scriptstyle{j \ne i}\atop \scriptstyle{j=1}}^{M}
\frac{\mathop{\rm sh}\nolimits(\lambda_{\alpha, i}-\lambda_{\alpha, j}+\eta)
\mathop{\rm sh}\nolimits(\lambda_{\alpha, i}+\lambda_{\alpha, j}+\eta)}
{\mathop{\rm sh}\nolimits(\lambda_{\alpha, i}-\lambda_{\alpha, j})
\mathop{\rm sh}\nolimits(\lambda_{\alpha, i}+\lambda_{\alpha, j})}\Bigg] \,.
\label{Lambdaip2}
\end{align}
\medskip
Similarly, the action of the transfer matrix on the off-shell Bethe vector $|\vec\lambda_{\beta}\rangle$
is given by
\begin{eqnarray}
t(u) |\vec\lambda_{\beta}\rangle =
\Lambda_{\beta}(u) |\vec\lambda_{\beta}\rangle +
\sum_{i}\Lambda^{\lambda_{\beta, i}}(u)\, B(u) |\hat\lambda_{\beta, i}\rangle \,,
\label{offshellbeta}
\end{eqnarray}
where
\begin{eqnarray}
\Lambda_{\beta}(u) =
\frac{\mathop{\rm sh}\nolimits(2u+2\eta)}{\mathop{\rm sh}\nolimits(2u+\eta)}\mathop{\rm sh}\nolimits^{2N}(u+\eta)\frac{Q_{\beta}(u-\eta)}{Q_{\beta}(u)}
+ \frac{\mathop{\rm sh}\nolimits(2u)}{\mathop{\rm sh}\nolimits(2u+\eta)}\mathop{\rm sh}\nolimits^{2N}(u)\frac{Q_{\beta}(u+\eta)}{Q_{\beta}(u)} \,,
\label{Lambdapp2}
\end{eqnarray}
with
\begin{eqnarray}
Q_{\beta}(u)
=\prod_{k=1}^{M}\mathop{\rm sh}\nolimits\left(u-\lambda_{\beta,
k}+\tfrac{\eta}{2}\right)\mathop{\rm sh}\nolimits\left(u+\lambda_{\beta,
k}+\tfrac{\eta}{2}\right) \,,
\end{eqnarray}
and
\begin{eqnarray}
\Lambda^{\lambda_{\beta, i}}(u) &=& \mathsf{f}(u,\lambda_{\beta, i}-\tfrac{\eta}{2})
\Bigg[\mathop{\rm sh}\nolimits^{2N}(\lambda_{\beta, i}+\tfrac{\eta}{2})
\prod_{\scriptstyle{j \ne i}\atop \scriptstyle{j=1}}^{M}
\frac{\mathop{\rm sh}\nolimits(\lambda_{\beta, i}-\lambda_{\beta, j}-\eta)
\mathop{\rm sh}\nolimits(\lambda_{\beta, i}+\lambda_{\beta, j}-\eta)}
{\mathop{\rm sh}\nolimits(\lambda_{\beta, i}-\lambda_{\beta, j}) \mathop{\rm sh}\nolimits(\lambda_{\beta,
i}+\lambda_{\beta, j})}
\nonumber \\
&& -\mathop{\rm sh}\nolimits^{2N}(\lambda_{\beta, i}-\tfrac{\eta}{2})
\prod_{\scriptstyle{j \ne i}\atop \scriptstyle{j=1}}^{M}
\frac{\mathop{\rm sh}\nolimits(\lambda_{\beta, i}-\lambda_{\beta, j}+\eta)
\mathop{\rm sh}\nolimits(\lambda_{\beta, i}+\lambda_{\beta, j}+\eta)}
{\mathop{\rm sh}\nolimits(\lambda_{\beta, i}-\lambda_{\beta, j})
\mathop{\rm sh}\nolimits(\lambda_{\beta, i}+\lambda_{\beta, j})}\Bigg] \,.
\label{Lambdaip3}
\end{eqnarray}
We argue in Appendix \ref{sec:proof} that, in order for
$\vecG{p'}{\vec\lambda}$ (\ref{genvec}) to be a generalized eigenvector of the
transfer matrix, i.e., it obeys~\eqref{geneig}, it suffices to satisfy the following conditions:
\begin{eqnarray}
\lim_{\omega\rightarrow 0+} \beta \left( \Lambda_{\beta}(u)-\Lambda_{\alpha}(u) \right) &\ne& 0 \,,
\label{suffcond1I} \\
\lim_{\omega\rightarrow 0+} \beta \left( \Lambda_{\beta}(u)-\Lambda_{\alpha}(u) \right)^{2} &=& 0 \,,
\label{suffcond2I} \\
\lim_{\omega\rightarrow 0+} \omega^{2N} \beta \Lambda^{\nu_i}(u) &=& 0 \,,
\qquad i = 1,\ldots, p'\,,
\label{suffcond3I} \\
\lim_{\omega\rightarrow 0+} \beta \Lambda^{\lambda_{\alpha, i}}(u) &=& 0 \,,
\qquad i = 1,\ldots, M\,,
\label{suffcond4I} \\
\lim_{\omega\rightarrow 0+} \beta \Lambda^{\lambda_{\beta, i}}(u) &=& 0 \,,
\qquad i = 1,\ldots, M\,,
\label{suffcond5I}
\end{eqnarray}
where $\alpha$ and $\beta$ are given by~\eqref{alphabeta}.
Recalling the expressions (\ref{Lambdaip}) and (\ref{Lambdaip2}) for
$\Lambda^{\nu_i}(u)$ and $\Lambda^{\lambda_{\alpha, i}}(u)$, we see that the conditions
(\ref{suffcond3I})-(\ref{suffcond4I}) require that $\{\vec\nu,
\vec\lambda_{\alpha} \}$ be \textit{approximate} solutions (as $\omega\rightarrow 0$) of the Bethe equations\footnote{These are the usual Bethe equations~(\ref{BAE}) but with more Bethe
roots, since we now have both $\lambda_{\alpha}$'s and~$\nu$'s. The $\nu$'s
appear in the Bethe equations for the $\lambda_{\alpha}$'s through
$Q_{\nu}$ functions and vice-versa.}
\begin{multline}
\mathop{\rm sh}\nolimits^{2N}(\nu_{i}+\tfrac{\eta}{2})
Q_{\alpha}(\nu_{i}-\tfrac{3\eta}{2})
\prod_{\scriptstyle{j \ne i}\atop \scriptstyle{j=1}}^{p'}
\mathop{\rm sh}\nolimits(\nu_{i}-\nu_{j}-\eta)
\mathop{\rm sh}\nolimits(\nu_{i}+\nu_{j}-\eta)
\\
=\mathop{\rm sh}\nolimits^{2N}(\nu_{i}-\tfrac{\eta}{2})
Q_{\alpha}(\nu_{i}+\tfrac{\eta}{2})
\prod_{\scriptstyle{j \ne i}\atop \scriptstyle{j=1}}^{p'}
\mathop{\rm sh}\nolimits(\nu_{i}-\nu_{j}+\eta)
\mathop{\rm sh}\nolimits(\nu_{i}+\nu_{j}+\eta)
\label{BAEnu}
\end{multline}
and
\begin{multline}
\mathop{\rm sh}\nolimits^{2N}(\lambda_{\alpha, i}+\tfrac{\eta}{2})
Q_{\nu}(\lambda_{\alpha,
i}-\tfrac{3\eta}{2})
\prod_{\scriptstyle{j \ne i}\atop \scriptstyle{j=1}}^{M}
\mathop{\rm sh}\nolimits(\lambda_{\alpha, i}-\lambda_{\alpha, j}-\eta)
\mathop{\rm sh}\nolimits(\lambda_{\alpha, i}+\lambda_{\alpha, j}-\eta)
\\
= \mathop{\rm sh}\nolimits^{2N}(\lambda_{\alpha, i}-\tfrac{\eta}{2})
Q_{\nu}(\lambda_{\alpha,i}+\tfrac{\eta}{2})
\prod_{\scriptstyle{j \ne i}\atop \scriptstyle{j=1}}^{M}
\mathop{\rm sh}\nolimits(\lambda_{\alpha, i}-\lambda_{\alpha, j}+\eta)
\mathop{\rm sh}\nolimits(\lambda_{\alpha, i}+\lambda_{\alpha, j}+\eta) \,.
\label{BAElama}
\end{multline}
By `approximate solutions' we mean that the equations are satisfied up to a certain order in $\omega$,
not necessarily to all orders; i.e., we solve equations~\eqref{BAEnu}
and~\eqref{BAElama} in the sense of perturbation theory in the small
parameter $\omega$, to the order needed for (\ref{suffcond3I})-(\ref{suffcond4I}) to be satisfied.
Similarly, condition~\eqref{suffcond5I} requires that
$\vec\lambda_{\beta}$ be an approximate solution of the Bethe
equations corresponding to $\Lambda^{\lambda_{\beta, i}}(u)$
in~(\ref{Lambdaip3}),
\begin{eqnarray}
\lefteqn{\mathop{\rm sh}\nolimits^{2N}(\lambda_{\beta, i}+\tfrac{\eta}{2})
\prod_{\scriptstyle{j \ne i}\atop \scriptstyle{j=1}}^{M}\mathop{\rm sh}\nolimits(\lambda_{\beta, i}-\lambda_{\beta, j}-\eta)
\mathop{\rm sh}\nolimits(\lambda_{\beta, i}+\lambda_{\beta, j}-\eta)}\nonumber\\
&&=\mathop{\rm sh}\nolimits^{2N}(\lambda_{\beta, i}-\tfrac{\eta}{2})
\prod_{\scriptstyle{j \ne i}\atop \scriptstyle{j=1}}^{M}
\mathop{\rm sh}\nolimits(\lambda_{\beta, i}-\lambda_{\beta, j}+\eta)
\mathop{\rm sh}\nolimits(\lambda_{\beta, i}+\lambda_{\beta, j}+\eta) \,.
\label{BAElambeta}
\end{eqnarray}
Let us therefore look for a solution $\{\vec\nu,
\vec\lambda_{\alpha} \}$ of the Bethe
equations \eqref{BAEnu}-\eqref{BAElama} with $M+p'$ Bethe
roots that approaches $\{\vec{\nu}^{\,\infty},
\vec\lambda \}$ as $\omega \rightarrow 0$ (recall our assumption on the limit~\eqref{limit-ass}). We assume that for small
$\omega$ this solution is given by
\begin{eqnarray}
\nu_{j} &=& -\log \omega + \sum_{k\ge 1}a_{j k} \omega^{2(k-1)}+
\frac{i \pi}{2p'}(p'-(2j-1)) \,, \qquad
j=1, \ldots, p' \,, \nonumber \\
\lambda_{\alpha, j} &=& \lambda_{j} + \sum_{k\ge 1}b_{j k} \omega^{2p'k}
\,, \qquad j=1, \ldots, M \,,
\label{pprimestring2}
\end{eqnarray}
where the coefficients $\{ a_{j k}\,, b_{j k} \}$ are independent of $\omega$.
To determine these coefficients, we rewrite the Bethe
equations \eqref{BAEnu}-\eqref{BAElama} in the form
\begin{eqnarray}
{\rm BAE}_{k} = 0\,, \qquad k = 1, \ldots, M+p'\,,
\label{BEgk}
\end{eqnarray}
where ${\rm BAE}_{k}$ is defined as the difference of the left-hand and right-hand
sides.
We insert (\ref{eta02}) and (\ref{pprimestring2}) into
(\ref{BEgk}), perform series expansions about $\omega=0$,
and solve the resulting equations for $\{ a_{j k}\,, b_{j k} \}$, starting from the most singular terms
in the series expansions (the most singular term obviously has a finite
order in $\omega$).
In practice, the conditions (\ref{suffcond3I})-(\ref{suffcond4I})
are satisfied by keeping sufficiently many terms in the expansion (\ref{pprimestring2}).
Similarly, we can find a solution $\vec\lambda_{\beta}$ of the Bethe
equations
(\ref{BAElambeta}) with $M$ Bethe
roots that approaches $\vec\lambda$ as $\omega \rightarrow 0$. We assume that for small
$\omega$ this solution is given by
\begin{eqnarray}
\lambda_{\beta, j} &=& \lambda_{j} + \sum_{k\ge 1}c_{j k} \omega^{2p'k}
\,, \qquad j=1, \ldots, M \,,
\label{pprimestring3}
\end{eqnarray}
and we solve for the coefficients $\{ c_{j k} \}$ in a similar way.
We find in practice that, by keeping sufficiently many terms in the expansion
(\ref{pprimestring3}), the condition (\ref{suffcond5I}) is also satisfied.
In general, $\vec\lambda_{\beta} \ne \vec\lambda_{\alpha}$.
By explicit expansion using
(\ref{pprimestring2}) and (\ref{pprimestring3}), with the same number of terms in the sums as in the previous step,
we then find that $\Lambda_{\beta}(u)-\Lambda_{\alpha}(u)$ (recall the definitions
(\ref{Lambdapp}), (\ref{Lambdapp2})) is of order $\omega^{2 p'}$:
\begin{eqnarray}\label{b-a}
\Lambda_{\beta}(u)-\Lambda_{\alpha}(u) = O(\omega^{2 p'}) \,.
\end{eqnarray}
For the choice of $\beta$ in (\ref{alphabeta}), it follows that both
conditions (\ref{suffcond1I}) and (\ref{suffcond2I}) are also
satisfied.
We have therefore demonstrated the following proposition, assuming that our conjecture~(\ref{cformula}) is true.
\begin{Prop} \label{prop:gen-vect}
For anisotropy $\eta=i \pi/p$ with integer $p \ge 2$,
given a Bethe eigenvector~$|\vec\lambda\rangle$ in~\eqref{v0onshell}
of the transfer matrix $t(u)$ with eigenvalue $\Lambda(u)$
(\ref{ordinary}),
a generalized eigenvector of rank 2 with the same generalized eigenvalue
is given by
\begin{eqnarray}
\vecG{p'}{\vec\lambda} =
\lim_{\omega\rightarrow0+}\left[ \omega^{2 p' (N-1)} |\vec\nu, \vec\lambda_{\alpha}\rangle
-c\, \omega^{-2 p'} F^{p'} |\vec\lambda_{\beta}\rangle \right]\,,
\label{genvec2}
\end{eqnarray}
where $p'$ equals $N-2M +1$ modulo $p$,
the vectors $|\vec\nu, \vec\lambda_{\alpha}\rangle$ and $|\vec\lambda_{\beta}\rangle$ are given by
(\ref{genvecmore}), $c$~is given by (\ref{cformula}), and
$\vec\nu$, $\vec\lambda_{\alpha}$, and $\vec\lambda_{\beta}$ are given by
the series expansions (\ref{pprimestring2}) and (\ref{pprimestring3}),
whose coefficients are determined by the Bethe equations
(\ref{BAEnu}), (\ref{BAElama}), and~(\ref{BAElambeta}) up to a certain order in $\omega$ such that
(\ref{suffcond3I})-(\ref{suffcond5I}) are satisfied.
\end{Prop}
\begin{Rem}\label{rem-Jordan-cell}
In this remark, we address the problem of constructing the whole
Jordan cell for the transfer matrix -- the states $|v'\rangle$ and
$|v\rangle$ in~\eqref{geneig-2} --
or, in other words, of identifying the eigenvector $|v'\rangle$ corresponding to
the generalized eigenvector $|v\rangle=\vecG{p'}{\vec\lambda}$ constructed in~\eqref{genvec2}.
We give two arguments. The first is computational and uses the results of
App.~\ref{sec:proof}, where we stated Cor.~\ref{cor:vv} at the end. It
states that, under the assumptions made in Prop.~\ref{prop:gen-vect},
$|v'\rangle$ is non-zero and equals $\kappa F^{p'}|\vec\lambda\rangle$,
where $\kappa$ is the limit in~\eqref{suffcond1I}.
The second argument is less technical and relies only on counting degeneracies.
First, the state $|v'\rangle$ should have the same $S^z=N/2-M-p'$ as
$|v\rangle=\vecG{p'}{\vec\lambda}$ has. Note further that $|v\rangle$
is in the same tilting module $T_{j=N/2-M}$ as the initial Bethe state
$|\vec\lambda\rangle$ because the two states have the same eigenvalue
$\Lambda(u)$ of the transfer matrix $t(u)$, and the ordinary Bethe
states of the same $M$ value are non-degenerate (with respect to
$t(u)$) at roots of unity~\cite{Gainutdinov:2015vba}.
Indeed, if the
generalized eigenstate $|v\rangle$ belonged to another copy of
$T_{j=N/2-M}$ not containing $|\vec\lambda\rangle$, then by acting on $|v\rangle$
with (the $p'$-th power of) the raising $\mathop{U_{q} sl(2)}\nolimits$ generator $E$
(see the action in App.~\ref{app:proj-mod-base})
we could obtain a highest-weight state,
which is another Bethe state\footnote{or complete $p$-string operators
acting on a Bethe state with $M'$ lower than $M$ by a multiple of $p$.}, say
$|\vec\lambda'\rangle$, with the same $M$ and, by construction, the
same eigenvalue
$\Lambda(u)$ as $|v\rangle$. This
contradicts the
non-degeneracy result in~\cite{Gainutdinov:2015vba}, and thus
$|\vec\lambda'\rangle\sim|\vec\lambda\rangle$.
Further, the weight $S^z=N/2-M-p'$ is only doubly degenerate in
$T_{j=N/2-M}$: only $|v\rangle=\vecG{p'}{\vec\lambda}$ and the vector
$F^{p'}|\vec\lambda\rangle$ at the bottom of $T_j$ have this weight.
We thus have~\eqref{geneig-2} with
\begin{eqnarray}
|v'\rangle \sim F^{p'}|\vec\lambda\rangle.
\end{eqnarray}
We have also checked this result explicitly for the
examples in Secs.~\ref{sec:genp2}\,-\,\ref{sec:genp4}.
\end{Rem}
We note that Prop.~\ref{prop:gen-vect} gives only sufficient
conditions for the existence of the generalized
eigenvectors, together with their construction when the conditions are satisfied.
Their actual existence is clear in the examples we consider below.
We give in
Secs.~\ref{sec:genp2}\,-\,\ref{sec:genp4} explicit examples of constructing
the Jordan cells and generalized eigenvectors for $p=2,3,4$ using the construction in Prop.~\ref{prop:gen-vect}.
Readers more interested in these
examples can skip the next subsection, where we return to representation theory and tilting modules.
\subsection{Representation-theoretic description}\label{sec:rep-th-descr}
We give here a representation-theoretic interpretation of our
construction in Prop.~\ref{prop:gen-vect} by analyzing the
contribution of $V_j$'s to different tilting modules in the
root-of-unity limit. Then, we also discuss the problem of counting
the (generalized) eigenvectors using this analysis.
We begin with the decomposition of the spin chain at
{\em generic} $q$
\begin{eqnarray}
\bigl(V_{\frac{1}{2}}\bigr)^{\otimes N} = \bigoplus_{j=0(1/2)}^{N/2} d_j V_j,
\label{decomposition-gen}
\end{eqnarray}
where the multiplicity $d_j$ of the spin-$j$ representation
$V_{j}$ is defined in~\eqref{dj}. It is instructive to compare this
decomposition with the one~\eqref{decomposition} at roots of unity in
terms of tilting modules $T_j$ with multiplicities $d^0_j\leq d_j$, see the
expression in~\eqref{dj0}. We will consider further only
those values of $j$ for which $2j+1$ modulo $p$
is nonzero (that is, $s(j)$ defined in (\ref{sj}) is nonzero),
i.e., when $T_j$ are indecomposable but reducible and thus contain
generalized eigenvectors,
recall the discussion after~\eqref{gen-transf}. The multiplicity $d^0_j$ is then strictly less than~$d_j$.
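As a quick consistency check on this counting, the generic-$q$ multiplicities can be verified numerically. The short sketch below (Python here purely for illustration) assumes the standard formula $d_j=\binom{N}{N/2-j}-\binom{N}{N/2-j-1}$, which we take to agree with~\eqref{dj}, and checks that the decomposition~\eqref{decomposition-gen} exhausts the $2^N$-dimensional space of states.

```python
from math import comb

def d(N, m):
    # Multiplicity of spin j = N/2 - m in the N-fold tensor power of
    # V_{1/2} at generic q, using the standard formula
    # d_j = C(N, m) - C(N, m - 1) (assumed to agree with Eq. (dj)).
    return comb(N, m) - (comb(N, m - 1) if m > 0 else 0)

def total_dim(N):
    # Sum over spins j = N/2, N/2 - 1, ... of d_j * dim V_j,
    # with dim V_j = 2j + 1 = N - 2m + 1.
    return sum(d(N, m) * (N - 2 * m + 1) for m in range(N // 2 + 1))

for N in (4, 5, 6, 7, 9):
    assert total_dim(N) == 2**N   # decomposition fills the whole space

print([d(6, m) for m in range(4)])   # N = 6 multiplicities for j = 3, 2, 1, 0
```

At roots of unity these $d_j$ copies of $V_j$ reorganize into tilting modules with multiplicities $d^0_j\leq d_j$, as discussed above.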
Each such $T_j$ contains $V_j$ as a proper
submodule. The corresponding spin-$j$ highest-weight state lives in
the node denoted by
$\stackrel{\circ}{\Irr{j}}$ in the left half of
Fig.~\ref{fig:pstring}. This state can be constructed using the
ordinary ABA approach as in (\ref{Bethestate}).
The remaining $d_j-d^0_j=d^0_{j+p-s(j)}$
of the initial number of $V_j$'s are not submodules but
{\sl sub-quotients} of another tilting module, namely $T_{j+p-s(j)}$ (recall
the discussion in Sec.~\ref{sec:Tj-str}).
Being a `sub-quotient' here means that the spin-$j$ states lose the highest-weight property in the root-of-unity limit.
These states are generalized eigenstates of $t(u)$. They live in the node
$\stackrel{\bullet}{\Irr{j}}$ in the right half of
Fig.~\ref{fig:pstring}.
\begin{figure}
\begin{equation*}
\xymatrix@R=39pt@C=0pt@M=0pt@W=0pt{
\mbox{}&\\
&d^0_j \;\; \times\;\\
}
\xymatrix@R=22pt@C=4pt@W=3pt@M=4pt{
&\stackrel{\bullet}{\Irr{j-s}}\ar[dl]\ar[dr]&\\
\Irr{j-p}\ar[dr]
&&\mbox{}\;\stackrel{\circ}{\Irr{j}}\;\;\;\;\;\ar[dl]\\
&\Irr{j-s}&
}
\xymatrix@R=39pt@C=0pt@M=0pt@W=0pt{
\mbox{}&\\
&\bigoplus\quad d^0_{j+p-s} \;\; \times\\
}\!\!\!
\xymatrix@R=22pt@C=4pt@W=3pt@M=4pt{
&\stackrel{\bullet}{\Irr{\,j\,}}\ar[dl]\ar[dr]&&&&\\
{\mbox{}\quad\Irr{j-s}\;}\ar[dr]
&&\stackrel{\circ}{\Irr{j+p-s}}\;\;\ar[dl]\ar@/_2pc/@{-->}[ul]_{\bigl[p'-\text{string}\bigr]_2}&&&
|\Omega\rangle\ar@{-->}[lll]_{\mbox{}\quad\;\;\prod_i {\cal B}(\lambda_i)}\\
&\Irr{\,j\,}&&&&
}
\label{Vj-Tj}
\end{equation*}
\caption{The subquotient structure of the tilting module $d_j^0T_j
\oplus d_{j+p-s}^0T_{j+p-s}$ with the solid arrows corresponding to
the action of $\mathop{U_{q} sl(2)}\nolimits$; here, $s\equiv s(j)$ for brevity and $s(j)$ is
defined in~\eqref{sj}. The spin-$j$
highest-weight states (ordinary
eigenstates) live in the node $\stackrel{\circ}{\Irr{j}}$, on the left,
while the spin-$j$ generalized eigenstates are in
$\stackrel{\bullet}{\Irr{j}}$, on the right part of the diagram,
and are constructed from spin-$(j+p-s)$
highest-weight states: the curved arrow corresponds to the
$p'$-string of ${\cal B}(\nu_k)$ operators, and $[\ldots]_2$ denotes the
subtraction of the action of $F^{p'}$ (the
construction of Prop.~\ref{prop:gen-vect}), with
$p'=p-s$.
The horizontal dashed arrow corresponds to the action of a product of ${\cal B}(\lambda_i)$
operators (the ordinary ABA construction).
}
\label{fig:pstring}
\end{figure}
We therefore expect that $d^0_j$ of the spin-$j$ Bethe states have a
well-defined limit as $q$ approaches a root
of unity and give ordinary
$t(u)$-eigenstates
living in $\stackrel{\circ}{\Irr{j}}$; on the
other hand, we expect irregular behavior of the remaining $d_j-d^0_j$ Bethe
states -- the corresponding Bethe roots~$\nu_k$ go to infinity as in~\eqref{pprimestring1} -- such that an appropriate limit
gives the generalized eigenstates
living in $\stackrel{\bullet}{\Irr{j}}$.
By Prop.~\ref{prop:gen-vect}, we
construct the latter states by applying the
$p'$-string of ${\cal B}(\nu_k)$ operators
on the usual Bethe states living in the node
$\stackrel{\circ}{\Irr{j+p-s}}$ in $T_{j+p-s}$
and subtracting the image of $F^{p'}$ (as in~\eqref{gen-vec-fin}) acting on the
spin-$(j+p-s)$ highest-weight state, which guarantees the absence of divergent terms in the limit.
We sketch this in the right half of
Fig.~\ref{fig:pstring}, where the subtraction is schematically denoted by $[\ldots]_2$. Note that the difference in the highest
$S^z$-eigenvalues
in $\stackrel{\bullet}{\Irr{j}}$ and
in $\stackrel{\circ}{\Irr{j+p-s}}$ is $p-s(j)$.
(Recall that the number $j$ in $\langle j\rangle$
corresponds to the spin $S^z=j$ value of the highest-weight vector.)
Hence, the number $p'$ in the $p'$-string equals $p-s(j)$.
Similarly, we should use a string of length
$p'=s(j)$ to construct generalized eigenstates in
$\stackrel{\bullet}{\Irr{j-s}}$ out of Bethe eigenstates from
$\stackrel{\circ}{\Irr{j}}$, in the left part of
Fig.~\ref{fig:pstring}, as anticipated in (\ref{p'formula}) and in Prop.~\ref{prop:gen-vect}.
We finally give a comment about counting the (generalized) eigenstates.
The
limit of ordinary Bethe states gives as many linearly independent
states as the number of admissible
solutions of the Bethe equations at the root of unity, and we know~\cite{Gainutdinov:2015vba}
that this number can deviate from $d^0_j$ (in general, it is less than $d^0_j$). Taking into account the
deviations $n_j$ studied in~\cite{Gainutdinov:2015vba}, we should thus have $d^0_j-n_j$ linearly independent eigenstates
and $d^0_{j+p-s}-n_{j+p-s}$ linearly independent
generalized eigenstates of spin $j$. To construct the missing
spin-$j$ eigenstates, or highest-weight states in $\stackrel{\circ}{\Irr{j}}$, we should use the exact complete
$p$-strings from Sec.~\ref{sec:pstrings}.
We believe that the same complete
$p$-string construction can be applied to generalized eigenvectors,
recovering the total number $d^0_{j+p-s}$ of spin-$j$ generalized eigenvectors.
\subsection*{Examples}
We now illustrate the general construction (\ref{genvec2}) with
several explicit examples.
\subsection{$p=2$}\label{sec:genp2}
As already noted, for $p=2$ the only possibility is $p'=1$, i.e., an
infinite real root. For even~$N$, and irrespective of the value of $M$,
the small-$\omega$ behavior of this root is given by
\begin{eqnarray}
\nu = -\log \omega + O(\omega^{0}) \,,
\label{nusmallomega}
\end{eqnarray}
as in (\ref{lambdaforsmallomega}) and (\ref{pprimestring2}).
We find that the construction (\ref{genvec2}) produces a generalized
eigenvector irrespective of the values of the $O(\omega^{0})$ and higher-order terms.
Hence, for $p=2$ and even~$N$,
the generalized eigenvector $\vecG{1}{\vec\lambda}$ corresponding to the on-shell Bethe vector $|\vec\lambda\rangle$
with any value of $M$ is given by
\begin{eqnarray}
\vecG{1}{\vec\lambda} = \lim_{\omega\rightarrow0+} \Big[\omega^{2(N-1)} {\cal B}(\nu) -c\,
\omega^{-2} F \Big]|\vec\lambda\rangle\,,
\label{vecG1}
\end{eqnarray}
where $c$ is some ``non-universal'' constant and $\nu$ is given by (\ref{nusmallomega}).
We denote by $\vecG{1}{-}$ the result for
the reference state (no Bethe roots) $|\vec\lambda\rangle = |\Omega\rangle$.
For odd $N$, there is no solution of the form (\ref{nusmallomega}),
which is consistent with the fact
that the Hamiltonian is diagonalizable for odd $N$.
For example, we have explicitly computed (\ref{vecG1})
with $|\vec\lambda\rangle = |\Omega\rangle$
for
$N=4,6,8$ using {\tt Mathematica}, and we have verified that the
result $\vecG{1}{-}$
is a generalized eigenvector of the Hamiltonian (\ref{Hamiltonian}), with
generalized eigenvalue 0:
\begin{eqnarray}
H^{2}\vecG{1}{-} =0 \,, \qquad
H \vecG{1}{-} \sim F |\Omega\rangle \,,
\end{eqnarray}
where we use $\sim$ to denote equality up to some
nonzero numerical factor.
\subsection{$p=3$}\label{sec:genp3}
For $p=3$, both $p'=1$ and $p'=2$ are possible.
\subsubsection{$p'=1$}\label{sec:genp3pp1}
Let us first consider the case $p'=1$, $N=6$ and $M=0$.
Following the procedure explained in (\ref{pprimestring2}) and
immediately below, we find that the corresponding $\nu$ is given by
\begin{eqnarray}
\nu= -\log \omega +\tfrac{1}{4}\log 3 - \tfrac{\sqrt{3}}{12}\omega^{2}
+ O(\omega^{4}) \,,
\end{eqnarray}
and the corresponding vector $\vecG{1}{-}$
is a generalized eigenvector of the Hamiltonian (\ref{Hamiltonian})
with generalized eigenvalue 5/2:
\begin{eqnarray}
(H- \tfrac{5}{2})^{2} \vecG{1}{-} =0 \,, \qquad
(H- \tfrac{5}{2}) \vecG{1}{-} \sim F |\Omega\rangle \,.
\end{eqnarray}
\subsubsection{$p'=2$}\label{sec:genp3pp2}
Let us now consider $p'=2$. An example is the case $N=4$ and $M=0$, for which $\vec\nu$
(\ref{pprimestring2}) is given by
\begin{eqnarray}
\nu_{1}\,, \nu_{2} = \pm \tfrac{i\pi}{4} -\log \omega + \tfrac{1}{8}\log(\tfrac{243}{4}) \mp
\tfrac{2i\sqrt{2}}{3^{5/4}}\omega^{2} -
\tfrac{13\sqrt{3}}{36}\omega^{4}
+ O(\omega^{6}) \,.
\end{eqnarray}
We have explicitly verified that
$\vecG{2}{-}$ is a generalized eigenvector of the Hamiltonian, with
generalized eigenvalue 3/2:
\begin{eqnarray}
(H- \tfrac{3}{2})^{2} \vecG{2}{-} =0 \,, \qquad
(H- \tfrac{3}{2}) \vecG{2}{-} \sim F^2 |\Omega\rangle \,.
\end{eqnarray}
Another example is the case $N=6$ and $M=1$. This is our first
example with $M>0$ (and $p>2$), which makes this case particularly interesting.
There are 4 solutions of
the Bethe equations (\ref{BAE}) with $p=3, N=6, M=1$; let us
focus here on the simplest one, $\lambda=\tfrac{1}{2}\log 2 \approx 0.346574$.
By following the procedure described around
(\ref{pprimestring2})-(\ref{pprimestring3}), we obtain
\begin{eqnarray}
\nu_{1}\,, \nu_{2} &=& \pm \tfrac{i\pi}{4} -\log \omega +
\tfrac{1}{8}\log(108) \mp
\tfrac{19i\sqrt{2}}{16\cdot 3^{3/4}}\omega^{2} - \tfrac{34493}{21888\sqrt{3}}\omega^{4}
+ O(\omega^{6}) \,, \nonumber \\
\lambda_{\alpha} &=& \tfrac{1}{2}\log 2 -
\tfrac{3}{16}\sqrt{3}\omega^{4} + O(\omega^{8})\,, \nonumber \\
\lambda_{\beta} &=& \tfrac{1}{2}\log 2 -
\tfrac{1}{4}\sqrt{3}\omega^{4} + O(\omega^{8})\,.
\label{p3N6M1pp2}
\end{eqnarray}
Note that $\lambda_{\alpha} \ne \lambda_{\beta}$.
We have explicitly verified that the corresponding vector
$\vecG{2}{\lambda}$ (\ref{genvec2}) is a generalized eigenvector of
the Hamiltonian with generalized eigenvalue $-3/2$,
\begin{eqnarray}
(H+ \tfrac{3}{2})^{2} \vecG{2}{\lambda} =0 \,, \qquad
(H+ \tfrac{3}{2}) \vecG{2}{\lambda} \sim F^2|\lambda\rangle \,.
\end{eqnarray}
\subsection{$p=4$}\label{sec:genp4}
For $p=4$, we can have $p'=1, 2, 3$, but we illustrate here only two
of these three possibilities.
\subsubsection{$p'=1$}\label{sec:genp4pp1}
Let us first consider $p'=1$. An example is the case $p=4, N=4, M=0, p'=1$,
for which~$\nu$ from~(\ref{pprimestring2}) is given by
\begin{eqnarray}
\nu= -\log \omega +\tfrac{1}{4}\log 2 - \tfrac{1}{4}\omega^{2}
+ O(\omega^{4}) \,,
\end{eqnarray}
and the corresponding vector $\vecG{1}{-}$
is a generalized eigenvector of the Hamiltonian with generalized eigenvalue
$3\sqrt{2}/2$,
\begin{eqnarray}
(H- \tfrac{3}{2}\sqrt{2})^{2} \vecG{1}{-} =0 \,, \qquad
(H- \tfrac{3}{2}\sqrt{2}) \vecG{1}{-} \sim F |\Omega\rangle \,.
\end{eqnarray}
Another example is the case $p=4, N=6, M=1, p'=1$, which (like the
example in (\ref{p3N6M1pp2})) has $M>0$. There are 5 solutions of
the Bethe equations (\ref{BAE}) with $p=4, N=6, M=1$; let us
focus here on the simplest one, $\lambda=\tfrac{1}{2}{\rm arcsinh}(1) \approx 0.440687$.
We find
\begin{eqnarray}
\nu &=& -\log \omega +
\tfrac{1}{4}\log 2 -
\tfrac{1}{6}\omega^{2} + O(\omega^{4}) \,, \nonumber \\
\lambda_{\alpha} &=& \tfrac{1}{2}{\rm arcsinh}(1) -
\tfrac{5}{12}\sqrt{2}\omega^{2} + O(\omega^{4}) \,, \nonumber \\
\lambda_{\beta} &=& \tfrac{1}{2}{\rm arcsinh}(1) -
\tfrac{1}{2}\sqrt{2}\omega^{2} + O(\omega^{4}) \,.
\end{eqnarray}
We have explicitly verified that the corresponding vector
$\vecG{1}{\lambda}$ (\ref{genvec2}) is a generalized eigenvector of
the Hamiltonian with generalized eigenvalue $\sqrt{2}/2$,
\begin{eqnarray}
(H- \tfrac{1}{2}\sqrt{2})^{2} \vecG{1}{\lambda} =0 \,, \qquad
(H- \tfrac{1}{2}\sqrt{2}) \vecG{1}{\lambda} \sim F |\lambda\rangle \,.
\end{eqnarray}
\subsubsection{$p'=3$}\label{sec:genp4pp3}
Let us now consider $p'=3$. An example is the case $p=4, N=6, M=0,
p'=3$, for which $\vec\nu$ (\ref{pprimestring2}) is given by
\begin{eqnarray}
\nu_{1} &=& \tfrac{i\pi}{3} -\log \omega + \tfrac{1}{12}\log(1352) -
(-\tfrac{1}{13})^{1/3}\omega^{2}-\tfrac{53}{12}(-\tfrac{1}{13})^{2/3}\omega^{4}
-\tfrac{3847}{3744}\omega^{6} + O(\omega^{8})\,, \nonumber \\
\nu_{2} &=& -\log \omega + \tfrac{1}{12}\log(1352) +
(\tfrac{1}{13})^{1/3}\omega^{2}-\tfrac{53}{12}(\tfrac{1}{13})^{2/3}\omega^{4}
-\tfrac{3847}{3744}\omega^{6} + O(\omega^{8})\,, \nonumber \\
\nu_{3} &=& \nu_{1}^{*} \,.
\end{eqnarray}
We have explicitly verified that the corresponding vector $\vecG{3}{-}$ (\ref{genvec2})
is a generalized eigenvector of the Hamiltonian with generalized eigenvalue
$5\sqrt{2}/2$,
\begin{eqnarray}
(H- \tfrac{5}{2}\sqrt{2})^{2} \vecG{3}{-} =0 \,, \qquad
(H- \tfrac{5}{2}\sqrt{2}) \vecG{3}{-} \sim F^{3} |\Omega\rangle \,.
\end{eqnarray}
\section{Complete sets of eigenstates for
$p=2$}\label{sec:p2}
For the case $p=2$, the decomposition of the space of states into tilting modules depends
fundamentally on the parity of $N$:
\subsection*{Even $N$}
For $p=2$ and even $N$, the decomposition (\ref{decomposition}) consists
of tilting modules $T_{j}$ of dimension $4j$, where $j$ is an
integer. Recall the diagram in~\eqref{Tj-diag}: each such module has a
right node (or simple subquotient) ${\bf R}_{j}$ of dimension $j+1$, a bottom
node ${\bf
B}_{j}$ of dimension $j$, a top
node ${\bf T}_{j}$ of dimension $j$, and a left node
${\bf L}_{j}$ of dimension $j-1$ (provided
that $j>1$).
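As a quick consistency check, these node dimensions indeed add up to the stated dimension of $T_j$:
\begin{eqnarray}
\dim {\bf R}_{j} + \dim {\bf B}_{j} + \dim {\bf T}_{j} + \dim {\bf L}_{j}
= (j+1) + j + j + (j-1) = 4j \,. \nonumber
\end{eqnarray}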
We use the basis and the $\mathop{U_{q} sl(2)}\nolimits$-action in $T_j$ from
App.~\ref{app:proj-mod-base} to make the following statements. The
right node consists of the vectors\footnote{The construction of the basis
\eqref{basis-Rp2}-\eqref{basis-Lp2} is just a special case of
the general one~\eqref{basis-R}-\eqref{basis-L}, valid for any $p$, given at the beginning of the next section. We find it convenient to describe the basis here as well.}
\begin{eqnarray}\label{basis-Rp2}
{\bf R}_{j}:\quad |v\rangle\,, f |v\rangle\,, f^{2} |v\rangle\,, \ldots \,, f^{j} |v\rangle\,,
\end{eqnarray}
where $|v\rangle$ can be either a usual Bethe state or a state
constructed from an exact complete 2-string; and $f$ is the $s\ell(2)$
lowering generator from $\mathop{U_{q} sl(2)}\nolimits$.
The bottom node consists of the vectors obtained by acting on the
right node with the $\mathop{U_{q} sl(2)}\nolimits$ lowering generator $F$
\begin{eqnarray}
{\bf B}_{j}:\quad F |v\rangle\,, F f |v\rangle\,, F f^{2} |v\rangle\,, \ldots \,, F f^{j-1} |v\rangle\,.
\end{eqnarray}
The top node consists of the \textit{generalized}
eigenvectors
\begin{eqnarray}
{\bf T}_{j}:\quad \vecG{1}{v}\,, f \vecG{1}{v}\,, f^{2} \vecG{1}{v}\,, \ldots \,, f^{j-1} \vecG{1}{v}\,,
\end{eqnarray}
where $\vecG{1}{v}$ is given by (\ref{vecG1}) with $|\vec\lambda \rangle
= |v\rangle $.
Finally, the left node ${\bf L}_j$ consists of
(ordinary) eigenvectors.
We first introduce states obtained by acting on the
top node with~$F$
\begin{eqnarray}\label{basis-Lp2}
\tilde{{\bf L}}_{j}:\quad F \vecG{1}{v}\,, F f \vecG{1}{v}\,, F f^{2} \vecG{1}{v}\,, \ldots \,, F f^{j-2} \vecG{1}{v}\,.
\end{eqnarray}
Together with~\eqref{basis-Rp2}, they form a basis of the direct sum ${\bf L}_j \oplus {\bf R}_j$: the states in ${\bf L}_j$ are linear combinations of those in $\tilde{{\bf L}}_{j}$ and ${\bf R}_{j}$. For later convenience, we will refer to $\tilde{{\bf L}}_{j}$ instead of ${\bf L}_{j}$; see Sec.~\ref{sec:pgt2} for more details on the general case.
We note that the generalized eigenvectors appear only in the top
node.
\subsection*{Odd $N$}
For $p=2$ and odd $N$, the decomposition (\ref{decomposition}) consists
of {\em irreducible} tilting modules $T_{j} = V_{j}$ of dimension $2j+1$, where $j$ is a half-odd
integer -- indeed, the number $s(j)$ is zero for all these $j$, and all
$T_j$ are then irreducible following the discussion in
Sec.~\ref{sec:Tj-str}. Starting from a
highest-weight vector $|v\rangle$, the remaining vectors of the
multiplet are obtained by applying $F$ and powers of $f$.
For odd $N$ there are only ordinary eigenvectors (i.e., no
generalized eigenvectors), which is in agreement with~\cite{Gainutdinov:2012qy}.
\subsection*{Examples}
We now illustrate the above general framework by exhibiting ABA
constructions of complete sets of $2^{N}$ (generalized) eigenvectors
for the cases $N=4,5,6$. For each of these cases, we have explicitly
verified that the vectors are indeed (generalized)
eigenvectors of the Hamiltonian (\ref{Hamiltonian}) and are linearly
independent. The needed admissible solutions of
the Bethe equations for $p=2$ are given in Appendix C of
\cite{Gainutdinov:2015vba}.
We also consider
selected eigenvectors for the cases $N=7, 9$ in order to further illustrate
the construction in Sec.~\ref{sec:pstrings}.
We emphasize that when one or more modules in the decomposition (\ref{decomposition})
are spectrum-degenerate (which can occur for either odd or even $N$),
it is necessary to use
the construction (\ref{Tarasovstate}), (\ref{Tarasovstategen}) based on exact
complete 2-strings.
\subsection{$N=4$}
For $p=2, N=4$, the decomposition (\ref{decomposition}) is given by
\begin{eqnarray}
2 T_1 \oplus T_2 \,. \nonumber
\end{eqnarray}
The module $T_{2}$ consists of the following 8 vectors:
\begin{eqnarray}
& {\bf R}_{2}:& \quad |v\rangle\,, f |v\rangle\,, f^{2} |v\rangle \,, \nonumber \\
T_{2}:\qquad & {\bf B}_{2}:& \quad F |v\rangle\,, F f |v\rangle\,, \nonumber \\
& {\bf T}_{2}:& \quad \vecG{1}{v}\,, f \vecG{1}{v}\,, \nonumber \\
& \tilde{{\bf L}}_{2}:& \quad F \vecG{1}{v}\,,
\label{N4T2}
\end{eqnarray}
where $|v\rangle = |\Omega\rangle$ is the reference state (\ref{reference}),
and $ \vecG{1}{v}= \vecG{1}{-}$. Each of the two
copies of $T_{1}$ consists of the following 4 vectors:
\begin{eqnarray}
& {\bf R}_{1}:& \quad |v\rangle\,, f |v\rangle\,, \nonumber \\
T_{1}:\qquad& {\bf B}_{1}:& \quad F |v\rangle\,, \nonumber \\
& {\bf T}_{1}:& \quad \vecG{1}{v}\,,
\label{N4T1}
\end{eqnarray}
where $|v\rangle = {\cal B}(\lambda) |\Omega\rangle$,
$\vecG{1}{v}= \vecG{1}{\lambda}$,
and $\lambda$ is an admissible solution
of the Bethe equations with $N=4$ and $M=1$, of which there are two:
$\lambda=0.440687$ and $\lambda=0.440687 + \tfrac{i\pi}{2}$.
All together we thus find $2^{4}=16$ vectors.
\subsection{$N=5$}\label{sec:p2N5}
For $p=2, N=5$, the space of states decomposes into a direct sum
of irreducible representations
\begin{eqnarray}
5 V_{\frac{1}{2}} \oplus 4 V_{\frac{3}{2}} \oplus V_{\frac{5}{2}} \,. \nonumber
\end{eqnarray}
The module $V_{\frac{5}{2}}$, with dimension 6, has the reference state
$|\Omega\rangle$ as its highest-weight state.
As noted in Appendix D of \cite{Gainutdinov:2015vba}, this module is
spectrum-degenerate with one copy of $V_{\frac{1}{2}}$;
the latter has dimension 2, with highest-weight vector $\vecT{v_{1}}{-}$, i.e., an eigenvector constructed
from an exact complete 2-string and no other Bethe roots (\ref{Tarasovstate}),
where $v_{1}$ is an arbitrary number
(for two arbitrary values $v_1$ and $v_1'$, the resulting vectors differ only by a scalar factor).
The other four copies of $V_{\frac{1}{2}}$
also have dimension 2, with highest-weight vectors
${\cal B}(\lambda_{1})\, {\cal
B}(\lambda_{2}) |\Omega\rangle $, where $\{ \lambda_{1},
\lambda_{2}\}$ is an admissible solution of the Bethe equations with
$N=5$ and $M=2$, of which there are four:
\begin{eqnarray}
&&\{0.337138, 0.921365\}\,, \qquad\qquad \{0.337138+ \tfrac{i\pi}{2},
0.921365\}\,,
\nonumber \\
&&\{0.337138, 0.921365+ \tfrac{i\pi}{2}\}\,, \quad
\{0.337138+ \tfrac{i\pi}{2}, 0.921365+ \tfrac{i\pi}{2}\}\,. \nonumber
\end{eqnarray}
Finally, each of the four copies of $V_{\frac{3}{2}}$ has dimension 4
and the highest-weight vector ${\cal B}(\lambda) |\Omega\rangle$, where $\lambda$ is an admissible solution
of the Bethe equations with $N=5$ and $M=1$, of which there are four:
$\lambda=0.337138$, $\lambda=0.337138 + \tfrac{i\pi}{2}$,
$\lambda=0.921365$, $\lambda=0.921365 + \tfrac{i\pi}{2}$.
All together we find $2^{5}=32$ vectors.
\subsection{$N=6$}\label{sec:p2N6}
For $p=2, N=6$, the decomposition (\ref{decomposition}) is given by
\begin{eqnarray}
5 T_1 \oplus 4 T_2 \oplus T_3 \,. \nonumber
\end{eqnarray}
The module $T_{3}$ consists of the following 12 vectors:
\begin{eqnarray}
& {\bf R}_{3}:& \quad |v\rangle\,, f |v\rangle\,, f^{2} |v\rangle \,, f^{3}
|v\rangle \,, \nonumber \\
T_{3}:\qquad & {\bf B}_{3}:& \quad F |v\rangle\,, F f |v\rangle\,, F f^{2} |v\rangle\,, \nonumber \\
& {\bf T}_{3}:& \quad \vecG{1}{v}\,, f \vecG{1}{v}\,, f^{2} \vecG{1}{v}\,, \nonumber \\
& \tilde{{\bf L}}_{3}:& \quad F \vecG{1}{v}\,, F f \vecG{1}{v}\,,
\label{N6T3}
\end{eqnarray}
where $|v\rangle = |\Omega\rangle$ is the reference state. As noted in
Appendix D of \cite{Gainutdinov:2015vba}, this module is
spectrum-degenerate with one copy of $T_{1}$;
the latter has the
basis~(\ref{N4T1})
where $|v\rangle=\vecT{v_{1}}{-}$ is an
eigenvector constructed from an exact complete 2-string and no other
Bethe roots, and $v_{1}$ is an arbitrary number. The
remaining four copies of $T_{1}$ also have the
basis~(\ref{N4T1}),
where
$|v\rangle = {\cal B}(\lambda_{1})\, {\cal
B}(\lambda_{2}) |\Omega\rangle $, and $\{\lambda_{1},
\lambda_{2}\}$ is an admissible solution of the Bethe equations with
$N=6$ and $M=2$, of which there are four:
\begin{eqnarray}
&&\{0.274653, 0.658479\}\,, \qquad\qquad \{0.274653+ \tfrac{i\pi}{2},
0.658479\}\,,
\nonumber \\
&&\{0.274653, 0.658479+ \tfrac{i\pi}{2}\}\,, \quad
\{0.274653+ \tfrac{i\pi}{2}, 0.658479+ \tfrac{i\pi}{2}\}\,. \nonumber
\end{eqnarray}
Finally, each of the four copies of
$T_{2}$ has the
basis~(\ref{N4T2})
where $|v\rangle = {\cal B}(\lambda) |\Omega\rangle$, and $\lambda$ is an admissible solution
of the Bethe equations with $N=6$ and $M=1$, of which there are four:
$\lambda=0.274653$, $\lambda=0.274653 + \tfrac{i\pi}{2}$,
$\lambda=0.658479$, $\lambda=0.658479 + \tfrac{i\pi}{2}$.
All together we find $2^{6}=64$ vectors.
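The dimension bookkeeping in the $N=4,5,6$ examples above can be summarized in a short sketch (Python here purely for illustration), using $\dim T_j = 4j$ for even $N$ and $\dim V_j = 2j+1$ for odd $N$, as stated at the beginning of this section.

```python
def dim_T(j):
    # Tilting module T_j at p = 2, integer j (even N): dimension 4j.
    return 4 * j

def dim_V(two_j):
    # Irreducible V_j; the argument is 2j to allow half-odd-integer spin.
    return two_j + 1

# N = 4: 2 T_1 + T_2
assert 2 * dim_T(1) + dim_T(2) == 2**4
# N = 5: 5 V_{1/2} + 4 V_{3/2} + V_{5/2}
assert 5 * dim_V(1) + 4 * dim_V(3) + dim_V(5) == 2**5
# N = 6: 5 T_1 + 4 T_2 + T_3
assert 5 * dim_T(1) + 4 * dim_T(2) + dim_T(3) == 2**6

print("all decompositions account for the full 2^N states")
```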
\subsection{$N=7$}\label{sec:p2N7}
For $p=2, N=7$, the decomposition (\ref{decomposition}) is given by
\begin{eqnarray}
14 V_{\frac{1}{2}} \oplus 14 V_{\frac{3}{2}} \oplus 6 V_{\frac{5}{2}}
\oplus V_{\frac{7}{2}} \,. \nonumber
\end{eqnarray}
For this case we do not enumerate all the eigenvectors, focusing
instead on those constructed with exact complete 2-strings.
As noted in Appendix D of \cite{Gainutdinov:2015vba}, $V_{\frac{7}{2}}$ is
spectrum-degenerate with {\em two} copies of $V_{\frac{3}{2}}$. The former,
with dimension 8, has the reference state $|\Omega\rangle$ as its highest-weight state.
The latter have dimension 4, with highest-weight vectors $\vecT{\mathop{v}\nolimits_{i}}{-}$, $i =
1,2$, i.e., two eigenvectors constructed
from exact complete 2-strings and no other Bethe roots. We have
explicitly verified that, provided
$\mathop{v}\nolimits_{1} \ne \mathop{v}\nolimits_{2}$ (but otherwise arbitrary), the eigenvectors $\vecT{\mathop{v}\nolimits_{1}}{-}$ and
$\vecT{\mathop{v}\nolimits_{2}}{-}$ are indeed linearly independent.
Moreover, the 6 $V_{\frac{5}{2}}$ are spectrum-degenerate with 6
$V_{\frac{1}{2}}$. The former, with dimension 6, have highest-weight vectors ${\cal B}(\lambda) |\Omega\rangle$, where $\lambda$ is an admissible solution
of the Bethe equations with $N=7$ and $M=1$, of which there are six:
\begin{eqnarray}
&& 0.232336\,, \qquad\qquad 0.525032\,, \qquad\qquad 1.09163\,,
\nonumber \\
&& 0.232336+ \tfrac{i\pi}{2} \,, \qquad 0.525032+ \tfrac{i\pi}{2}\,, \qquad 1.09163+ \tfrac{i\pi}{2}\,. \nonumber
\end{eqnarray}
Each of the corresponding $V_{\frac{1}{2}}$, with dimension 2, has the highest-weight vector $\vecT{v_{1}}{\lambda}$, i.e., an eigenvector constructed
from an exact complete 2-string ($v_{1}$ is arbitrary) and the Bethe
root $\lambda$. These are the first examples of the construction (\ref{Tarasovstate})
that we meet involving a Bethe state other than the reference state.
However, since here $p=2$, the $\{
x_{r} \}$ used in this construction do not depend on
$\lambda$ (as noted in Rem.~\ref{rem:Hp2}).
\subsection{$N=9$}\label{sec:p2N9}
For $p=2, N=9$, the decomposition (\ref{decomposition}) is given by
\begin{eqnarray}
42 V_{\frac{1}{2}} \oplus 48 V_{\frac{3}{2}} \oplus 27 V_{\frac{5}{2}}
\oplus 8V_{\frac{7}{2}} \oplus V_{\frac{9}{2}} \,. \nonumber
\end{eqnarray}
Again for this case we do not enumerate all the eigenvectors, focusing
instead on those constructed with exact complete 2-strings.
As noted in Appendix D of \cite{Gainutdinov:2015vba}, $V_{\frac{9}{2}}$ is
spectrum-degenerate with {\em three} copies of $V_{\frac{5}{2}}$ as well as
with {\em two} copies of $V_{\frac{1}{2}}$. The module $V_{\frac{9}{2}}$,
with dimension 10, has the reference state $|\Omega\rangle$ as its highest weight state.
The $V_{\frac{5}{2}}$ have dimension 6 and have highest-weight vectors $\vecT{\mathop{v}\nolimits_{i}}{-}$ where $i =
1,2,3$, i.e. three eigenvectors constructed
from exact perfect 2-strings and no other Bethe roots. We have
explicitly verified that, provided
$\mathop{v}\nolimits_{1}, \mathop{v}\nolimits_{2}, \mathop{v}\nolimits_{3}$ are pairwise distinct (but otherwise arbitrary), the
three eigenvectors $\vecT{\mathop{v}\nolimits_{i}}{-}$ are indeed linearly
independent. The two $V_{\frac{1}{2}}$, each with dimension 2, are particularly interesting,
since they have highest-weight vectors $\vecT{\mathop{v}\nolimits_{i}, \mathop{v}\nolimits_{j}}{-}$, i.e.
with two exact perfect 2-strings (\ref{Tarasovstategen}). (This is
the first, and in fact only, such example that we meet in this work.)
We have
explicitly verified that there are precisely two such linearly
independent vectors. The modules $V_{\frac{9}{2}} \oplus 3
V_{\frac{5}{2}} \oplus 2 V_{\frac{1}{2}}$ account altogether for the
32 eigenvectors with eigenvalue 0,
as we observed in~\cite{Gainutdinov:2015vba}.
Moreover, each of the 8 $V_{\frac{7}{2}}$ is spectrum-degenerate with
2 copies of $V_{\frac{3}{2}}$. The former, with dimension 8, have highest-weight vectors
${\cal B}(\lambda) |\Omega\rangle$, where $\lambda$ is an admissible solution
of the Bethe equations with $N=9$ and $M=1$, of which there are eight:
\begin{eqnarray}
&& 0.178189\,, \qquad\qquad 0.381455\,, \qquad\qquad 0.658479\,, \qquad\qquad 1.21812\,,
\nonumber \\
&& 0.178189+ \tfrac{i\pi}{2} \,, \qquad 0.381455+ \tfrac{i\pi}{2}\,, \qquad 0.658479+ \tfrac{i\pi}{2}\,, \qquad 1.21812+ \tfrac{i\pi}{2}\,. \nonumber
\end{eqnarray}
The corresponding $V_{\frac{3}{2}}$, with dimension 4, have highest-weight vectors $\vecT{\mathop{v}\nolimits_{i}}{\lambda}$ where $i =
1,2$, i.e. two eigenvectors constructed
from exact perfect 2-strings and the Bethe root $\lambda$. Similarly
to the case $N=7$ (section \ref{sec:p2N7}), we have
explicitly verified that, provided
$\mathop{v}\nolimits_{1} \ne \mathop{v}\nolimits_{2}$ (but otherwise arbitrary), the eigenvectors
$\vecT{\mathop{v}\nolimits_{1}}{\lambda}$ and
$\vecT{\mathop{v}\nolimits_{2}}{\lambda}$ are indeed linearly independent;
and the $\{ x_{r} \}$ do not depend on $\lambda$.
The remaining 24 $V_{\frac{5}{2}}$ (i.e., those that are not
spectrum-degenerate with $V_{\frac{9}{2}}$, as discussed above) are spectrum-degenerate with 24
$V_{\frac{1}{2}}$. The former, with dimension 6, have highest-weight vectors
${\cal B}(\lambda_{1}){\cal B}(\lambda_{2}) |\Omega\rangle$, where
$\{\lambda_{1}, \lambda_{2}\}$ is an admissible solution
of the Bethe equations with $N=9$ and $M=2$, of which there are 24.
The corresponding $V_{\frac{1}{2}}$, with dimension 2, have
highest-weight vectors $\vecT{v_{1}}{\lambda_{1}, \lambda_{2}}$.
\section{Complete sets of eigenstates for $p>2$}\label{sec:pgt2}
We now exhibit ABA constructions of complete sets of $2^{N}$
(generalized) eigenvectors for various values of $p>2$ and $N$.
The decomposition (\ref{decomposition}) consists
of tilting modules $T_{j}$ of dimension $2j+1$ if $s(j)=0$, see~\eqref{sj}, and of dimension $4j+2- 2s(j) = 2pr$ otherwise, where $j$ is an
integer or half-odd integer, and we set
\begin{eqnarray}
2j+1 \equiv rp + s\qquad \text{and} \qquad s\equiv s(j)
\end{eqnarray}
for brevity. Recall the diagram in~\eqref{Tj-diag}: each $T_j$ with non-zero $s(j)$ has a
right node (or simple subquotient) ${\bf R}_{j}$ of dimension $s(r+1)$, a bottom
node ${\bf
B}_{j}$ of dimension $(p-s)r$, a top
node ${\bf T}_{j}$ of dimension $(p-s)r$, and a left node
${\bf L}_{j}$ of dimension $s(r-1)$ (provided
that $r>1$).
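This bookkeeping can be summarized in a short sketch (the function name is ours); note that the four node dimensions always sum to $2pr$:

```python
def node_dims(two_j_plus_1, p):
    """Node dimensions of T_j when s(j) != 0, writing 2j+1 = r*p + s, 0 < s < p."""
    r, s = divmod(two_j_plus_1, p)
    assert 0 < s < p
    # L is absent (dimension 0) when r = 1
    dims = {"R": s*(r + 1), "B": (p - s)*r, "T": (p - s)*r, "L": s*(r - 1)}
    assert sum(dims.values()) == 2*p*r   # total dimension: 4j+2 - 2s(j) = 2pr
    return dims

# e.g. T_3 for p = 3 (2j+1 = 7) and T_2 for p = 4 (2j+1 = 5), as in the examples below
assert node_dims(7, 3) == {"R": 3, "B": 4, "T": 4, "L": 1}
assert node_dims(5, 4) == {"R": 2, "B": 3, "T": 3, "L": 0}
```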
We use the basis~\eqref{left-proj-basis-plus} and $\mathop{U_{q} sl(2)}\nolimits$-action in
$T_j$ in App.~\ref{app:proj-mod-base} to make the following
statements. The right node consists of the vectors
\begin{eqnarray}\label{basis-R}
{\bf R}_{j}:\quad \mathsf{r}_{k,l} = F^{k}f^l |v\rangle\,,\quad 0\le k\le s-1,\quad 0\le l\le r \,,
\end{eqnarray}
where $|v\rangle$ can be either a usual Bethe state or a state
constructed from an exact complete $p$-string
-- in either case a highest-weight vector -- and $f$ is the $s\ell(2)$
lowering
``divided power'' generator from $\mathop{U_{q} sl(2)}\nolimits$.
The bottom node consists of the vectors obtained by acting on the
right node with the $\mathop{U_{q} sl(2)}\nolimits$ lowering generator $F$
\begin{eqnarray}
{\bf B}_{j}:\quad \mathsf{b}_{n,m} = F^{s+n}f^m |v\rangle\,,\quad 0\le
n\le p-s-1,\quad 0\le m\le r-1 \,.
\end{eqnarray}
The top node consists of the \textit{generalized}
eigenvectors
\begin{eqnarray}
{\bf T}_{j}:\quad \mathsf{t}_{n,m} = F^{n}f^m \vecG{s}{v}\,,\quad 0\le n\le p-s-1,\quad 0\le m\le r-1 \,,
\end{eqnarray}
where $\vecG{s}{v}$ is given by (\ref{genvec2}).
Finally, the left node ${\bf L}_j$ consists of the (ordinary) eigenvectors $\mathsf{l}_{n,m}$. To construct the basis $\{\mathsf{l}_{n,m}\}$ in the left node ${\bf L}_j$, we first introduce states obtained by acting on the
top node with $F^{p-s}$:
\begin{eqnarray}\label{basis-L}
\tilde{{\bf L}}_{j}:\quad \tilde{\mathsf{l}}_{n,m} = F^{p-s+n}f^m \vecG{s}{v}\,,\quad 0\le n\le s-1,\quad 0\le m\le r-2 \,.
\end{eqnarray}
Together with~\eqref{basis-R}, they form a basis in the direct sum ${\bf L}_j \oplus {\bf R}_j$.
The vectors $\tilde{\mathsf{l}}_{n,m}$ do not belong to ${\bf L}_j$: each is a linear combination of $\mathsf{l}_{n,m}$ and $\mathsf{r}_{n,m+1}$, namely $\tilde{\mathsf{l}}_{n,m} =\frac{1}{r}(\mathsf{r}_{n,m+1}-\mathsf{l}_{n,m})$; compare with the $F$ action in App.~\ref{app:proj-mod-base}. We will use below the basis elements $\tilde{\mathsf{l}}_{n,m}$ instead of $\mathsf{l}_{n,m}$.
In all the examples below, we have explicitly
checked that the vectors in~\eqref{basis-R}-\eqref{basis-L}
are indeed (generalized)
eigenvectors of the Hamiltonian (\ref{Hamiltonian}) and are linearly
independent, and thus give a basis in $T_j$ as they should. We have also verified
by the explicit
construction of the states that the dimensions of
the nodes in $T_j$ coincide with the values given by \eqref{Tj-diag}
and \eqref{dimj}
and reviewed just above.
We remind the reader that all the needed admissible solutions of the Bethe equations
(\ref{BAE}) are given in
Appendix E in \cite{Gainutdinov:2015vba}.
\subsection{$p=3, N=4$}\label{sec:p3N4}
For $p=3, N=4$, the decomposition (\ref{decomposition}) is given by
\begin{eqnarray}
T_0 \oplus 3 T_1 \oplus T_{2} \,. \nonumber
\end{eqnarray}
The $T_{2}$ (dimension 6) has the following basis,
see~\eqref{basis-R}-\eqref{basis-L} for $r=1$ (i.e. ${\bf L}_2$ is absent) and $s=2$:
\begin{itemize}
\item right node ${\bf R}_{2}$ consisting of 4 ordinary
eigenvectors (namely, $|\Omega\rangle\,, f |\Omega\rangle\,,
F |\Omega\rangle\,, F f |\Omega\rangle$);
\item bottom node ${\bf B}_{2}$ consisting of 1 ordinary eigenvector
($F^{2} |\Omega\rangle$); and
\item top node ${\bf T}_{2}$ consisting of 1 generalized eigenvector
$\vecG{2}{-}$, which is described in
section \ref{sec:genp3pp2}.
\end{itemize}
Each of the three $T_{1}$ is an irreducible representation of
dimension 3, consisting of a highest-weight vector
${\cal B}(\lambda) |\Omega\rangle$ plus two more states obtained by lowering
with $F$.
The three admissible solutions of (\ref{BAE}) with $p=3$, $N=4$, and $M=1$
are $\lambda=0.243868\,, \lambda=0.658479 \,,
\lambda=0.902347 + \tfrac{i\pi}{2}$.
The $T_{0}$ (dimension 1) consists of the vector
${\cal B}(\lambda_{1})\, {\cal B}(\lambda_{2}) |\Omega\rangle$, where
$\{\lambda_{1}\,, \lambda_{2} \} = \{ 0.256013, 0.857073 \}$ is the
admissible solution of (\ref{BAE}) with $p=3$, $N=4$, $M=2$.
All together we find $2^{4}=16$ vectors.
\subsection{$p=3, N=5$}\label{sec:p3N5}
For $p=3, N=5$, the decomposition (\ref{decomposition}) is given by
\begin{eqnarray}
T_{\frac{1}{2}} \oplus 4 T_{\frac{3}{2}} \oplus T_{\frac{5}{2}} \,. \nonumber
\end{eqnarray}
Each of the four $T_{\frac{3}{2}}$ (dimension 6) has the following basis,
see~\eqref{basis-R}-\eqref{basis-L} for $r=1$ (i.e. ${\bf
L}_{\frac{3}{2}}$ is absent) and $s=1$:
\begin{itemize}
\item right node ${\bf R}_{\frac{3}{2}}$ consisting of the two ordinary
eigenvectors $|v\rangle$ and $f |v\rangle$, where $|v\rangle = {\cal
B}(\lambda)\, |\Omega\rangle$;
\item bottom node ${\bf B}_{\frac{3}{2}}$ consisting of the two
ordinary eigenvectors $F |v\rangle$ and $F^{2} |v\rangle$; and
\item top node ${\bf T}_{\frac{3}{2}}$ consisting of the two generalized
eigenvectors $\vecG{1}{\lambda}$ and $F \vecG{1}{\lambda}$.
\end{itemize}
The four admissible solutions of (\ref{BAE}) with $p=3$, $N=5$, and $M=1$
are $\lambda= 0.189841\,, \lambda= 0.447048\,, \lambda=1.08394\,,
\lambda= 0.636889 + \tfrac{i\pi}{2}$.
The $T_{\frac{5}{2}}$ is an irreducible representation of
dimension 6 consisting of a highest-weight vector
$|\Omega\rangle$ plus five more vectors obtained by lowering
with $F$ and/or $f$.
The $T_{\frac{1}{2}}$ is an irreducible representation of dimension 2
consisting of the highest-weight vector
${\cal B}(\lambda_{1})\, {\cal B}(\lambda_{2}) |\Omega\rangle$, plus the vector
obtained by lowering with $F$, where
$\{\lambda_{1}\,, \lambda_{2} \} = \{ 0.201117\,, 0.504773 \}$ is the
admissible solution of (\ref{BAE}) with $p=3$, $N=5$, $M=2$.
All together we find $2^{5}=32$ vectors.
\subsection{$p=3, N=6$}\label{sec:p3N6}
For $p=3, N=6$, the decomposition (\ref{decomposition}) is given by
\begin{eqnarray}
T_0 \oplus 9 T_1 \oplus 4 T_{2} \oplus T_{3} \,. \nonumber
\end{eqnarray}
The $T_{3}$ (dimension 12) has the following basis,
see~\eqref{basis-R}-\eqref{basis-L} for $r=2$ and $s=1$:
\begin{itemize}
\item right node ${\bf R}_{3}$ consisting of 3 ordinary
eigenvectors (namely, $|\Omega\rangle\,, f |\Omega\rangle\,,
f^{2} |\Omega\rangle$);
\item bottom node ${\bf B}_{3}$ consisting of 4 ordinary
eigenvectors (namely, $F |\Omega\rangle$, $Ff |\Omega\rangle$,
$F^{2} |\Omega\rangle$, $F^{2}f |\Omega\rangle$);
\item top node ${\bf T}_{3}$ consisting of 4 generalized eigenvectors
($\vecG{1}{-}$, which is described in section \ref{sec:genp3pp1},
plus 3 more obtained by lowering with $f$ and/or $F$,
namely $f \vecG{1}{-}$, $F \vecG{1}{-}$, $F f \vecG{1}{-}$); and
\item left node,
or rather $\tilde{{\bf L}}_{3}$, consisting of 1 ordinary
eigenvector obtained by lowering the generalized eigenvector
($F^{2}\vecG{1}{-}$).
\end{itemize}
Each of the four $T_{2}$ (dimension 6) has the following basis:
\begin{itemize}
\item right node ${\bf R}_{2}$ consisting of
4 ordinary eigenvectors ($|\lambda\rangle = {\cal B}(\lambda)
|\Omega\rangle$ plus 3 more obtained
by lowering, namely, $f |\lambda\rangle\,, F |\lambda\rangle\,, F f
|\lambda\rangle$);
\item bottom node ${\bf B}_{2}$ consisting of 1 ordinary
eigenvector ($F^{2} |\lambda\rangle$); and
\item top node ${\bf T}_{2}$ consisting of the corresponding generalized eigenvector
$\vecG{2}{\lambda}$, an example
of which is described in section \ref{sec:genp3pp2}.
\end{itemize}
\nobreak
The four admissible solutions of (\ref{BAE}) with $p=3, N=6, M=1$ are
$\lambda=0.155953\,, \lambda=0.346574 \,, \lambda=0.658479 \,,
\lambda=0.502526 +\tfrac{i\pi}{2}$.
Each of the nine $T_{1}$ is an irreducible representation of
dimension 3, consisting of a highest-weight vector
${\cal B}(\lambda_{1})\, {\cal B}(\lambda_{2}) |\Omega\rangle$ plus 2 more
obtained by lowering with $F$. The nine admissible solutions $\{\lambda_{1}\,,
\lambda_{2} \}$ of (\ref{BAE}) with $p=3, N=6, M=2$ are
\begin{eqnarray}
&& \{ 0.36275, 0.765051 \}\,, \{ 0.16097, 0.774681 \}\,, \{
0.706816 \pm 0.526679 i \}\,, \nonumber \\
&& \{ 0.151629, 1.00054 + \tfrac{i \pi}{2} \}\,, \{ 0.331821,
0.969804+ \tfrac{i \pi}{2} \}\,,
\{ 0.47492 + \tfrac{i \pi}{2}, 1.23081 + \tfrac{i \pi}{2}\}\,, \nonumber \\
&& \{ 0.164318, 0.376118\}\,, \{ 0.583386, 0.853782 + \tfrac{i
\pi}{2} \}\,, \{ 0.977905, 0.595372 + \tfrac{i \pi}{2} \}\,. \nonumber
\end{eqnarray}
The $T_{0}$ (which has dimension 1) consists of the vector
${\cal B}(\lambda_{1})\, {\cal B}(\lambda_{2})\, {\cal
B}(\lambda_{3}) |\Omega\rangle$, where
$\{\lambda_{1}\,, \lambda_{2}\,, \lambda_{3} \} = \{ 0.168223,
0.39058, 0.980264\}$ is the admissible solution of (\ref{BAE}) with
$p=3, N=6, M=3$.
All together we thus find $2^{6}=64$ vectors.
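Assuming (\ref{BAE}) takes the standard form for the $U_{q}sl(2)$-invariant open chain, $\left(\frac{\sinh(\lambda_{k}+i\eta/2)}{\sinh(\lambda_{k}-i\eta/2)}\right)^{2N} = \prod_{j\neq k}\frac{\sinh(\lambda_{k}-\lambda_{j}+i\eta)\,\sinh(\lambda_{k}+\lambda_{j}+i\eta)}{\sinh(\lambda_{k}-\lambda_{j}-i\eta)\,\sinh(\lambda_{k}+\lambda_{j}-i\eta)}$ with $\eta=\pi/p$, the nine $M=2$ pairs and the $M=3$ triple listed above can be checked numerically; a sketch under that assumption (function and variable names are ours):

```python
from cmath import sinh, pi

def bae_residuals(roots, p, N):
    """|LHS/RHS - 1| of the assumed Bethe equations, for each root."""
    eta = pi / p
    res = []
    for k, lk in enumerate(roots):
        lhs = (sinh(lk + 1j*eta/2) / sinh(lk - 1j*eta/2))**(2*N)
        rhs = 1 + 0j
        for j, lj in enumerate(roots):
            if j != k:
                rhs *= (sinh(lk - lj + 1j*eta) * sinh(lk + lj + 1j*eta)
                        / (sinh(lk - lj - 1j*eta) * sinh(lk + lj - 1j*eta)))
        res.append(abs(lhs / rhs - 1))
    return res

w = 1j*pi/2
pairs = [(0.36275, 0.765051), (0.16097, 0.774681),
         (0.706816 + 0.526679j, 0.706816 - 0.526679j),
         (0.151629, 1.00054 + w), (0.331821, 0.969804 + w),
         (0.47492 + w, 1.23081 + w), (0.164318, 0.376118),
         (0.583386, 0.853782 + w), (0.977905, 0.595372 + w)]
for pair in pairs:                      # the nine M = 2 solutions
    assert max(bae_residuals(list(pair), p=3, N=6)) < 1e-2
# the M = 3 solution
assert max(bae_residuals([0.168223, 0.39058, 0.980264], p=3, N=6)) < 1e-2
```

The tolerance is loose because the roots are quoted to only five or six digits.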
\subsection{$p=4, N=4$}\label{sec:p4N4}
For $p=4, N=4$, the decomposition (\ref{decomposition}) is given by
\begin{eqnarray}
2 T_0 \oplus 2 T_1 \oplus T_{2} \,. \nonumber
\end{eqnarray}
The $T_{2}$ (dimension 8) has the following basis,
see~\eqref{basis-R}-\eqref{basis-L} for $r=1$ (i.e. ${\bf L}_2$ is absent) and $s=1$:
\begin{itemize}
\item right node ${\bf R}_{2}$ consisting of 2
ordinary eigenvectors (the reference state $|\Omega\rangle$ and $f |\Omega\rangle$);
\item bottom node ${\bf B}_{2}$ consisting of 3 ordinary
eigenvectors (namely, $F |\Omega\rangle\,, F^{2} |\Omega\rangle\,,
F^{3} |\Omega\rangle$); and
\item top node ${\bf T}_{2}$ consisting of 3 generalized eigenvectors
($\vecG{1}{-}$, which is described in section \ref{sec:genp4pp1}, plus 2 more obtained by
lowering with $F$).
\end{itemize}
Each of the two $T_{1}$ is an irreducible representation of
dimension 3, consisting of a highest-weight vector
${\cal B}(\lambda) |\Omega\rangle$ plus 2 more obtained by lowering with $F$.
The two admissible solutions of (\ref{BAE}) with $p=4, N=4, M=1$
are $\lambda=0.173287\,, \lambda=0.440687$.
Each of the two $T_{0}$ (dimension 1) consists of the vector
${\cal B}(\lambda_{1})\, {\cal B}(\lambda_{2}) |\Omega\rangle$, where
$\{\lambda_{1}\,, \lambda_{2} \} = \{ 0.186864, 0.582103 \}\,, \{
0.703959 \pm 0.429694 i \} $ are the two admissible solutions of
(\ref{BAE}) with $p=4, N=4, M=2$.
All together we find $2^{4}=16$ vectors.
\subsection{$p=4, N=6$}\label{sec:p4N6}
For $p=4, N=6$, the decomposition (\ref{decomposition}) is given by
\begin{eqnarray}
4 T_0 \oplus 4 T_1 \oplus 5 T_{2} \oplus T_{3} \,. \nonumber
\end{eqnarray}
The $T_{3}$ (dimension 8) has the following basis,
see~\eqref{basis-R}-\eqref{basis-L} for $r=1$ (i.e. ${\bf L}_3$ is absent) and $s=3$:
\begin{itemize}
\item right node ${\bf R}_{3}$ consisting of 6
ordinary eigenvectors (namely, $|\Omega\rangle\,, f |\Omega\rangle$,\break
$F |\Omega\rangle\,, F^{2} |\Omega\rangle\,, F f |\Omega\rangle\,,
F^{2} f |\Omega\rangle$);
\item bottom node ${\bf B}_{3}$ consisting of 1 ordinary
eigenvector ($F^{3} |\Omega\rangle$); and
\item top node ${\bf T}_{3}$ consisting of 1 generalized eigenvector
$\vecG{3}{-}$, which is described in
section \ref{sec:genp4pp3}.
\end{itemize}
Each of the five $T_{2}$ (dimension 8) has the following basis:
\begin{itemize}
\item right node ${\bf R}_{2}$ consisting of 2 ordinary eigenvectors
($|\lambda\rangle = {\cal B}(\lambda) |\Omega\rangle$ and
$f|\lambda\rangle$);
\item bottom node ${\bf B}_{2}$ consisting of 3 ordinary
eigenvectors ($F |\lambda\rangle\,, F^{2} |\lambda\rangle\,,
F^{3} |\lambda\rangle$); and
\item top node ${\bf T}_{2}$ consisting of 3 generalized
eigenvectors ($\vecG{1}{\lambda}$, an example of which is described in section
\ref{sec:genp4pp1}, plus 2 more obtained by lowering with $F$).
\end{itemize}
The
corresponding five admissible solutions of (\ref{BAE}) with $p=4, N=6, M=1$
are $\lambda=0.111447$, $\lambda=0.243868$,
$\lambda=0.440687$, $\lambda=0.902347$, and
$\lambda=0.769926 +\tfrac{i\pi}{2}$.
Each of the four $T_{1}$ is an irreducible representation of
dimension 3, consisting of the highest-weight vector
${\cal B}(\lambda_{1})\, {\cal B}(\lambda_{2}) |\Omega\rangle$ plus 2 more
obtained by lowering with $F$. The four admissible solutions $\{\lambda_{1}\,,
\lambda_{2} \}$ of (\ref{BAE}) with $p=4, N=6, M=2$ are
\begin{eqnarray}
&& \{ 0.260368, 0.516935 \}\,, \{ 0.11923, 0.269157 \}\,, \nonumber \\
&& \{ 0.116959, 0.523048 \} \,, \{ 0.393822 \pm 0.39281 i \} \,.
\end{eqnarray}
Each of the four $T_{0}$ (dimension 1) consists of the vector
${\cal B}(\lambda_{1})\, {\cal B}(\lambda_{2})\, {\cal
B}(\lambda_{3}) |\Omega\rangle$. The four admissible solutions
$\{\lambda_{1}\,, \lambda_{2}\,, \lambda_{3} \}$ of (\ref{BAE}) with
$p=4, N=6, M=3$ are
\begin{eqnarray}
&& \{ 0.124053, 0.285872, 0.670931 \}\,, \{ 0.116697, 0.77288 \pm
0.427941 i \}\,, \nonumber \\
&& \{ 0.261262, 0.749721 \pm 0.425077 i \}\,, \{ 0.583433, 0.593097
\pm 0.402559 i \}\,.
\end{eqnarray}
All together we thus find $2^{6}=64$ vectors.
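As a closing consistency check on the five examples above, the tilting-module dimensions sum to $2^N$ in each case; a small sketch (treating $2j+1<p$, where $T_j$ is simple, in the same way as $s(j)=0$ is our reading of \eqref{sj}):

```python
def tilting_dim(two_j_plus_1, p):
    r, s = divmod(two_j_plus_1, p)
    # dim T_j = 2j+1 when s(j) = 0 (and when 2j+1 < p, T_j being simple);
    # otherwise dim T_j = 4j+2 - 2s(j) = 2pr
    return two_j_plus_1 if s == 0 or r == 0 else 2*p*r

# the decompositions of the examples above, keyed by (p, N): {2j+1: multiplicity}
cases = {
    (3, 4): {1: 1, 3: 3, 5: 1},
    (3, 5): {2: 1, 4: 4, 6: 1},
    (3, 6): {1: 1, 3: 9, 5: 4, 7: 1},
    (4, 4): {1: 2, 3: 2, 5: 1},
    (4, 6): {1: 4, 3: 4, 5: 5, 7: 1},
}
for (p, N), decomp in cases.items():
    assert sum(m * tilting_dim(d, p) for d, m in decomp.items()) == 2**N
```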
\section{Discussion}\label{sec:discuss}
We have seen that, when $q$ is a root of unity
($q=e^{i\pi/p}$ with integer $p\ge 2$), the
$\mathop{U_{q} sl(2)}\nolimits$-invariant open spin-1/2 XXZ chain has two new types of
eigenvectors: eigenvectors corresponding to continuous solutions of
the Bethe equations (exact complete $p$-strings), and generalized
eigenvectors. We have proposed here general ABA constructions for
these two new types of eigenvectors. The construction for exact
complete $p$-strings (\ref{Tarasovstate}), (\ref{Tarasovstategen}) is
a generalization of the one proposed by Tarasov \cite{Tarasov:2003xz}
for the closed chain, while the construction of generalized
eigenvectors (\ref{genvec2}) is new. We have demonstrated in examples
with various values of $p$ and $N$ that these constructions are indeed
sufficient for obtaining the complete set of (generalized)
eigenvectors of the model.
The model (\ref{Hamiltonian})
at primitive roots of unity is
related to
the unitary $(p-1,p)$ conformal Minimal Models,
by restricting to the first $p-1$ irreducible tilting modules (see e.g. \cite{Pasquier:1989kd}),
as well as to logarithmic conformal field
theories if one keeps all the tilting modules~\cite{Read:2007qq, Gainutdinov:2012mr}.
We expect that our results
can be easily generalized to the case of
rational (non-integer) values of $p$, which is related to
non-unitary
Minimal Models.
Indeed, for rational $p=a/b$, with $a$, $b$ coprime and $a>b$, there
are two different cases $q^a=\pm1$, {\sl i.e.,} $b$ even or odd. For
odd $b$ (or $q^a=-1$, with $a$ odd or even), we obviously have the
same structure of the tilting $\mathop{U_{q} sl(2)}\nolimits$ modules, as the structure depends
only on the conditions on $q$ and is the same as for $b=1$. The
repeated tensor products of the fundamental $\mathop{U_{q} sl(2)}\nolimits$ representations (or
the spin-chains) are decomposed in the same way as well (replacing $p$
by $a$, of course) and thus with the same multiplicities $d_j^0$, and
therefore our construction of the generalized eigenstates should be
the same but using $a$ instead of $p$, i.e., the $p'$ in the
$p'$-string takes values from 1 to $a-1$, etc. For even $b$ (or
$q^a=1$ and odd $a$), a more careful analysis is required.
According to~\cite{Chari:1994pz}, for the case of $q^a=1$, the tilting modules have the same structure as in Sec.~\ref{sec:Tj-str}, where
one should
again replace $p$ by $a$, and the multiplicities in the tensor
products are also identical to what we had here. The only real
difference will be in the values of the Bethe roots, as the spectrum
of the Hamiltonian is different for different choices of $a$ and $b$,
and thus so is the continuum limit.
We also expect that similar
constructions can be used for quantum-group invariant spin chains at
roots of unity with higher spin and/or rank of the quantum-group
symmetry. It would be interesting to consider
similar constructions for supersymmetric ($\mathbb{Z}_2$-graded) spin chains,
such as the $U_q sl(2|1)$-invariant chain \cite{Foerster:1993fp}.
Of course, the algebraic Bethe ansatz would require nesting for rank greater than one,
which would render the corresponding constructions more complicated.
We are currently investigating the symmetry operators -- generators of a non-abelian symmetry of the transfer-matrix $t(u)$ -- responsible for
the higher degeneracies of the model, which are signaled by the
appearance of continuous solutions of the Bethe equations, whose
corresponding eigenvectors are obtained by the construction of section
\ref{sec:pstrings}. It would also be interesting to find a
group-theoretic understanding of the construction in section
\ref{sec:generalized} of generalized eigenvectors,
e.g. within the context
of the quantum affine algebra $U_{q} \widehat{sl}(2)$ or rather its
coideal $q$-Onsager subalgebra at roots of unity \cite{Baseilhac:2016}.
\section*{Acknowledgments}
AMG thanks Hubert Saleur for valuable discussions, and the IPhT in
Saclay for its hospitality.
The work of AMG was supported by DESY and Humboldt fellowships, C.N.R.S. and RFBR-grant 13-01-00386.
RN thanks Rodrigo Pimenta and Vitaly Tarasov
for valuable discussions, and the DESY
theory group for its hospitality.
The work of RN was supported in part by the National Science
Foundation under Grant PHY-1212337, and by a Cooper fellowship.
\section{Introduction}
The recent interesting paper of Hurwitz and Gbur \cite{hurwitz14} deals with nonradiating sources - sources whose fields are limited to a finite region of space and exactly equal to zero outside this region. The topic is in itself old, with roots going back to Ehrenfest (1910). Null-field sources have later been considered elsewhere, for instance in Ref.~\cite{nikolova05}. The interest in this topic is understandable, among other things because of its relationship to cloaking devices.
We will focus on two issues related to Ref.~\cite{hurwitz14}:
{\bf 1.} Consider first Maxwell's equations for monochromatic modes in Minkowski space ($\eta_{00}=-1, c=1$): $ {\rm curl}\, {\bf E}=i\omega {\bf B}, \quad {\rm div}\, {\bf B}=0, \,
{\rm curl} \,{\bf H}=-i\omega {\bf D}, \quad {\rm div }\,{\bf D}=0.$
Using $\bf D=E+P, B=H+M$ we obtain from this ${\rm curl}\, {\rm curl}\, {\bf E}-\omega^2 {\bf E}=\omega^2{\bf P}+i\omega \,{\rm curl}\, \bf M.$ Thus, if
\begin{equation}
-i\omega {\bf P}+{\rm curl}\, {\bf M}=0, \label{1}
\end{equation}
the above equation for $\bf E$ becomes homogeneous. That this really yields a null-field solution, can be verified via Green-function methods \cite{hurwitz14}.
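The cancellation behind (\ref{1}) is elementary: substituting ${\bf P}=({\rm curl}\,{\bf M})/(i\omega)$ gives $\omega^2{\bf P}+i\omega\,{\rm curl}\,{\bf M}=i\omega(-i\omega{\bf P}+{\rm curl}\,{\bf M})=0$. A minimal numerical illustration (the Gaussian magnetization profile and frequency are our own arbitrary choices):

```python
from math import exp

omega = 3.0

def curl_M(x, y):
    # M = (0, 0, m(x, y)) with m = exp(-x^2 - y^2), so curl M = (dm/dy, -dm/dx, 0)
    m = exp(-x**2 - y**2)
    return (-2*y*m, 2*x*m, 0.0)

for (x, y) in [(0.3, -0.7), (1.1, 0.2), (-0.5, 0.9)]:
    C = curl_M(x, y)
    P = tuple(c / (1j*omega) for c in C)          # condition (1): -i*omega*P + curl M = 0
    source = tuple(omega**2 * p + 1j*omega * c    # source term of the equation for E
                   for p, c in zip(P, C))
    assert all(abs(s) < 1e-12 for s in source)
```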
The expression (\ref{1}), when written in a slightly more general form as $(\partial_0 {\bf P}+{\rm curl}\, {\bf M})$, gives the total current density in matter when extraneous currents are omitted. One may ask: can one conclude herefrom that the electromagnetic force density can be written simply as $ (\partial_0 {\bf P}+{\rm curl}\, {\bf M})\times {\bf B}$? This is a very central point in electrodynamics. The argument seems quite natural formally, and has occasionally been presented in the literature, the first instance probably being Poincelot \cite{poincelot}. In our opinion the answer to the question is however no. Space does not permit us to go into detail here, but the extensive investigation given in Ref.~\cite{brevik79} shows that all known experiments in optics are more naturally explained in terms of the Minkowski, or equivalently the Abraham, energy-momentum tensors. Cf. also Ref.~\cite{brevik14}. We emphasize that this conclusion rests primarily on {\it experimental}, not on theoretical, input.
{\bf 2.} Second, we point out that the above theory can conveniently be generalized to the case of curvilinear space when the metric is time-independent and time-orthogonal. Then the four-dimensional metric $g_{\mu\nu}$ reduces to $g_{00}$ and $g_{ik}$, the nondiagonal components being zero.
We introduce the antisymmetric pseudotensor $\epsilon_{ijk}=\gamma^{1/2}\delta_{ijk}, $
with $\gamma={\rm det}(g_{ik}),~ \delta_{123}=1$. There are two field tensors, $F_{\mu\nu}$ and $H^{\mu\nu}$, related to the electromagnetic fields via $F_{0i}=-E_i, \quad F_{ik}=\epsilon_{ikl}B^l,
H^{0i}=(-g_{00})^{-1/2}D^i, \quad H_i=\frac{1}{2}\epsilon_{ikl}H^{kl}. $ As the curvilinear space functions like a ``medium'' with permittivity equal to the permeability: $\varepsilon=\mu=(-g_{00})^{-1/2}$, the constitutive relations take the form ${\bf D}=(-g_{00})^{-1/2}({\bf E+\bf P}), \quad {\bf B}=(-g_{00})^{-1/2}({\bf H +M})$, whereby the field equation for $\bf E$ becomes ${\rm curl}\,[(-g_{00})^{1/2}{\rm curl}\,{\bf E}]-\omega^2(-g_{00})^{-1/2}{\bf E}
=\omega^2 (-g_{00})^{-1/2}{\bf P} +i\omega\, {\rm curl}\,\bf M. $
The null-field condition analogous to Eq.~(\ref{1}) is
\begin{equation}
-i\omega (-g_{00})^{-1/2}{\bf P} + {\rm curl}\,{\bf M} =0. \label{2}
\end{equation}
A few examples can here be considered, the most typical one being the Rindler space corresponding to a constant
acceleration $a$ along the $x$ axis with respect to the inertial background space. (This space even allows a huge Casimir effect at finite temperature \cite{zhao11}.) The metric is $ds^2=-a^2x^2dt^2+d{\bf r}^2$. Putting $a=1$ we obtain in this case the null-field condition $-(i\omega/x){\bf P}+{\rm curl}\, {\bf M}=0.$ Other typical examples are the anisotropic Kasner space, and the Schwarzschild space.
Note: this result concerns fundamental physics primarily. It shows the great validity of the electromagnetic formalism when generalized to curvilinear space, and is in general use in general relativity and cosmology. Under daily-life conditions the transformation technique has found an interesting, and perhaps unexpected, application in cloaking devices, as mentioned.
\section{Introduction}
In this work, we study the extremal functions of following variational problem of Log-Sobolev functional
\begin{equation}\label{eq:intro_log_sobolev_functional}
\lambda(\alpha_1,\alpha_2):=
\inf_{f\in\mathcal{F}}\int_{X}\left(|\nabla f|^2 +\alpha_1 f^2 - \frac{\alpha_2}{2} f^2 \log f^2\right)dm
\end{equation}
where $\alpha_1$ and $\alpha_2$ are in $\mathbb{R}$ and $\mathcal{F}:=\{f\in W^{1,2}: \|f\|_{L^2}=1\}$.
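As a purely illustrative sanity check, not part of the paper's argument, one can minimize a discretized version of \eqref{eq:intro_log_sobolev_functional} by projected gradient descent; all choices below (circle domain, parameters, step size) are our own:

```python
import numpy as np

# Toy projected-gradient minimization of the log-Sobolev functional on a
# discretized circle [0, 2*pi) with Lebesgue measure.
n = 64
h = 2*np.pi / n
alpha1, alpha2 = 1.0, 0.5
rng = np.random.default_rng(1)

def energy(f):
    df = (np.roll(f, -1) - f) / h                  # periodic forward difference
    f2 = f**2
    ent = f2 * np.log(np.maximum(f2, 1e-12))       # f^2 log f^2, safe near f = 0
    return float(np.sum(df**2 + alpha1*f2 - 0.5*alpha2*ent) * h)

def grad(f):                                       # d(energy)/df, up to a factor h
    lap = (np.roll(f, -1) - 2*f + np.roll(f, 1)) / h**2
    f2 = np.maximum(f**2, 1e-12)
    return -2*lap + 2*alpha1*f - alpha2*f*(np.log(f2) + 1)

f = 1.0 + 0.1*rng.standard_normal(n)
f /= np.sqrt(np.sum(f**2) * h)                     # enforce ||f||_{L^2} = 1
e0 = energy(f)
for _ in range(3000):
    f -= 1e-3 * grad(f)
    f /= np.sqrt(np.sum(f**2) * h)                 # project back onto the sphere
assert energy(f) < e0
```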
Of interest are the existence, regularity, and positivity of non-negative extremal functions, as well as analytic results such as Li-Yau or Harnack type estimates.
While those results are well known in the smooth Riemannian setting, it seems natural to ask whether, and in which form, they can be extended to more general non-smooth metric spaces.
While the log-Sobolev inequality has vast applications in different branches of mathematics---see \citet{gross1975logarithmic,otto2000generalization, bakry2014analysis}---studying extremals of functionals such as the log-Sobolev functional is important in its own right.
For example, in \cite{zhang2012extremal}, the author shows that in the noncompact smooth manifold setting, the geometry of the manifold at infinity affects the existence of extremals of the log-Sobolev functional and of Perelman's $W$\--entropy.
Using these results, the author further shows that noncompact shrinking breathers of the Ricci flow are gradient shrinking solitons.
Very recently, in \cite{ohta2020equality} the extremal function of the log-Sobolev inequality is used together with optimal transport and the needle decomposition technique to show rigidity results for the underlying weighted Riemannian manifold.
On the other hand, starting with the works of Sturm \cite{sturm2006geometry,sturm2006geometry2} and Lott and Villani \cite{lott2009ricci}, the synthetic notion of Ricci curvature bounded from below by $K\in\mathbb{R}$ and dimension bounded from above by $N\in[1,\infty]$---referred to as the $\text{CD}(K,N)$ condition---on general metric measure spaces without a smooth structure was introduced and developed greatly in the last decade.
The key property of this notion is that it is compatible with smooth Riemannian manifolds and stable with respect to measured Gromov-Hausdorff convergence, so that it includes Ricci limit spaces and Alexandrov spaces.
Later, to rule out Finsler manifolds, the finer concept $\text{RCD}(K,\infty)$ was introduced by Ambrosio and coauthors in \cite{ambrosio2014metric}, and the finite dimensional counterpart $\text{RCD}(K,N)$ was introduced by Gigli, Erbar and coauthors in \cite{gigli2015differential,erbar2015equivalence}.
Recently, the first and second order differential structure on $\text{RCD}(K,\infty)$ spaces, based on tangent module theory, was developed by Gigli in \cite{gigli2018nonsmooth}, and finer geometric results such as the rectifiability of $\text{RCD}(K,N)$ spaces were also studied in \cite{mondino2019structure}.
With the development of analytic tools on these spaces, geometric analysis on metric measure spaces satisfying synthetic Ricci curvature conditions has also developed quickly.
Examples include Li-Yau-Hamilton type estimates for the heat flow \cite{garofalo2014li,jiang2015li_yau,jiang2016hamilton} and localized gradient and Li-Yau estimates for the heat equation \cite{huang2020localized,zhang2016local}.
One of the breakthroughs is \cite{zhang2016local}, where the authors develop an Omori-Yau type maximum principle on $\text{RCD}^*(K,N)$ spaces and use it to show a pointwise Li-Yau type estimate for locally weak solutions of the heat equation, which need not enjoy the semigroup property.
Motivated by these works, it seems natural to study extremal functions of the log-Sobolev functional \eqref{eq:intro_log_sobolev_functional} on more general metric measure spaces.
In particular, we are interested in whether analytic results that hold in smooth settings, such as Li-Yau type estimates for non-negative extremal functions of the log-Sobolev functional, can be extended to non-smooth metric measure spaces, in particular those satisfying synthetic Ricci curvature conditions.
To do so, one of the key points is to show the existence, boundedness, regularity and positivity of non-negative extremal functions of the log-Sobolev functional on the metric measure space.
Our first main result, Theorem \ref{thm:var_general}, states that the log-Sobolev functional \eqref{eq:intro_log_sobolev_functional} with $\alpha_2>0$ on a metric measure space satisfying $\text{RCD}^*(K,N)$ with $K>0$ and $N$ in $(2,\infty)$ admits non-negative extremal functions which satisfy a certain Euler-Lagrangian equation.
Moreover, we show that all non-negative extremal functions are bounded, Lipschitz continuous and bounded away from 0.
We remark that while the existence and Euler-Lagrangian equation problems are quite standard and similar to the smooth compact cases solved by Rothaus in \cite{rothaus1981logarithmic}, several other problems arise on metric measure spaces.
For instance, the positivity of non-negative extremal functions in \cite[page 114]{rothaus1981logarithmic} is shown by relying heavily on the underlying smooth differential structure of the Riemannian manifold, polar coordinates and the exact asymptotic ratio near the pole, so that the problem can be reduced to a one-dimensional ODE problem.
However, these smooth structures are lost in general metric measure spaces.
For $\text{RCD}^*(K,N)$ spaces, the polar decomposition still works by using the ``needle decomposition'' technique generalized to the $\text{CD}(K,N)$ setting by Cavalletti and Mondino in \cite{cavalletti2017sharp} (see also \cite{cavalletti2020new} for $\text{MCP}(K,N)$); however, a similar asymptotic ratio analysis seems to fail without further assumptions on the underlying metric measure space.
To overcome this difficulty, we make use of a maximum principle type argument for the De Giorgi class on a local domain, which was proved in \cite{kinnunen2001regularity}, to show that non-negative extremal functions are either bounded away from $0$ or vanish on the whole space.
This method works on very general metric measure spaces supporting a locally doubling property and a weak Poincaré inequality, which in particular include $\text{RCD}^*(K,N)$ spaces.
Our second main result is Theorem \ref{thm:estimate_Li_Yau}.
Based on the regularity and positivity results obtained, we recover a Li-Yau type estimate for the logarithmic transform of all non-negative extremal functions of \eqref{eq:intro_log_sobolev_functional}.
More precisely, for any non-negative extremal function $u$, it holds that
\begin{equation}\label{eq:intro_2}
|\nabla v|^2 + (\alpha_2 -\beta K)v
\leq
\frac{N\alpha_2(1-\beta)}{4\beta}\left(1-\frac{\beta((2-\beta)K-\alpha_2)}{2\alpha_2(1-\beta)}\right)^2\quad \text{$m$-a.e.,}
\end{equation}
for any $\beta$ in $(0,1)$ and $v=\log u+(\lambda-\alpha_1)/\alpha_2$.
The same estimate in the smooth Riemannian case was shown in \cite{wang1999harnack}, where the argument relies on the pointwise Bochner formula and on the smoothness of the left side of \eqref{eq:intro_2} at local maximum points.
However, in the $\text{RCD}$ setting, the functions on the left-hand side of \eqref{eq:intro_2} are neither smooth nor pointwise defined, and the pointwise Bochner formula is not available.
To overcome this difficulty, we follow an argument similar to \cite{zhang2016local}, using an Omori-Yau type maximum principle.
Note that, to avoid the sign problem of $|\nabla v|^2 +(\alpha_2-\beta K)v$, we use a different auxiliary function $\phi$ than in \cite[Theorem 1.4]{zhang2016local}, constructed from the distance function so that it has a measure-valued Laplacian.
Our construction is based on the ``good'' cut-off functions from \cite[Lemma 3.1]{mondino2019structure}, which are smoother than the one in \cite{zhang2016local} and guarantee an $L^2$-valued Laplacian.
Furthermore, for our purpose as well as independent interest, we prove a slightly generalized version of the Omori-Yau maximum principle, holding on the whole space for proper $\text{RCD}(K,\infty)$ spaces, compared with the one in \cite{zhang2016local}, which holds on bounded domains of $\text{RCD}^*(K,N)$ spaces.
While most arguments are similar, our proof relies on the so-called ``Sobolev-to-Lip'' property shared by all $\text{RCD}$ spaces rather than on the ``weak maximum principle'' used in \cite{zhang2016local} for $\text{RCD}^*(K,N)$.
Finally, we provide some applications of the regularity results and Li-Yau type estimates.
We show a Harnack type estimate as well as lower and upper bounds for the non-negative extremal functions of \eqref{eq:intro_log_sobolev_functional} depending only on the geometry of the space.
These generalize results in \cite{wang1999harnack} proved in the Riemannian case.
Using the weak Bochner inequality, we also show that all non-negative extremal functions are constant when $0< \alpha_2 \leq KN/(N-1)$, which is well-known in the smooth setting.
The paper is organized as follows: in Section 2, we introduce notations and definitions about metric measure space and $\text{RCD}$ condition as well as some analytic results needed later.
In Section 3, we study the variational problem and show the existence, regularity and positivity of non-negative extremal functions of \eqref{eq:intro_log_sobolev_functional}.
Section 4 is dedicated to the Li-Yau type estimate for the non-negative extremal functions.
In Section 5, we show some applications of previous results.
Finally, in the Appendix, we prove the Omori-Yau type maximum principle for proper $\text{RCD}(K,\infty)$ spaces.
\section{Preliminaries and notation}
We briefly introduce the terminology and notation related to calculus on metric measure spaces.
For more details, we refer readers to \citep{gigli2020lectures, gigli2018nonsmooth}.
Throughout this work, we denote by $(X,d,m)$ a metric measure space where $(X,d)$ is a complete, separable and proper metric space and $m$ is a non-negative Radon measure which is finite on each bounded set.
We denote by $B(x,r)$ and $\bar{B}(x,r)$ the open and closed metric balls centered at $x$ in $X$ with radius $r>0$, respectively.
By $L^p:=L^p(X,m)$ for $1\leq p\leq \infty$ we denote the standard $L^p$ spaces with $L^p$-norm $\|\cdot\|_{p}$.
By $L^p_{loc}$ we denote those measurable functions $f:X \to \mathbb{R}$ such that $f\chi_B$ is in $L^p$ for any bounded subset $B$ of $X$, where $\chi_B$ denotes the indicator function of the set $B$.
Let further $\text{Lip}:=\text{Lip}(X)$, $\text{Lip}_{loc}:=\text{Lip}_{loc}(X)$, and $\text{Lip}_{bs}:=\text{Lip}_{bs}(X)$ be the spaces of real-valued functions $f:X \to \mathbb{R}$ which are Lipschitz, locally Lipschitz, and Lipschitz with bounded support, respectively.
For $f$ in $\text{Lip}_{loc}$, we denote by $\text{lip}(f)$ the local Lipschitz constant, or slope, defined for any $x$ in $X$ as
\begin{equation*}
\text{lip}f(x):=\limsup_{y\rightarrow x}\frac{|f(y)-f(x)|}{d(y,x)},
\end{equation*}
if $x$ is not isolated and $\text{lip}f(x)=0$ if $x$ is isolated.
\subsection{Cheeger Energy, Laplacian and Calculus Tools}
The Cheeger energy is the $L^2$\--lower\--semicontinuous and convex functional $\text{Ch}:L^2\rightarrow [0,\infty]$ defined as
\begin{equation*}
\text{Ch}(f)
:=
\inf\left\{\liminf_{n\rightarrow\infty}\frac{1}{2}\int_{X}\left(\text{lip}(f_n)\right)^2dm \colon (f_n)\subseteq \text{Lip}\cap L^2, \|f_n-f\|_2\rightarrow 0 \right\}.
\end{equation*}
The domain of $\text{Ch}$ is a linear space denoted by $W^{1,2}:=W^{1,2}(X)$ and called the Sobolev space.
For $f$ in $W^{1,2}$, we identify the canonical element $|\nabla f|$, called the minimal relaxed gradient, as the unique element with minimal $L^2$-norm (which is also minimal in the $m$-a.e.\ sense) in the set
\begin{equation*}
\left\{G\in L^2: G = \lim \text{lip} f_n \text{ in }L^2\text{ for some } (f_n)\subseteq \text{Lip} \text{ such that } f_n\rightarrow f \text{ in $L^2$}\right\},
\end{equation*}
which provides the integral representation $\text{Ch}(f)=\frac{1}{2}\int_{X}|\nabla f|^2 dm$.
The Sobolev space equipped with norm $\|f\|^2_{W^{1,2}}:=\|f\|^2_{2}+2\text{Ch}(f)$ is a Banach space and $\text{Lip}_{bs}$ is dense in $W^{1,2}$ with respect to $\|\cdot\|_{W^{1,2}}$.
We further denote by $W^{1,2}_{loc}:=\{f\in L^2_{loc}: \eta f\in W^{1,2} \text{ for any } \eta\in \text{Lip}_{bs}\}$ the space of locally Sobolev functions, and for $f$ in $W^{1,2}_{loc}$ we define the minimal relaxed gradient by $|\nabla f|:=|\nabla (\eta f)|$ $m$-a.e.\ on $\{\eta=1\}$, for any $\eta$ in $\text{Lip}_{bs}$; by locality, this is well-posed.
We say that $(X,d,m)$ is \emph{infinitesimally Hilbertian} if the Cheeger energy is a quadratic form, or equivalently, $W^{1,2}$ is a Hilbert space.
Under this assumption, it can be proved that for any $f$ and $g$ in $W^{1,2}$, the limit
\begin{equation*}
\langle \nabla f, \nabla g \rangle
:=
\lim_{\varepsilon\rightarrow 0}\frac{|\nabla(f+\varepsilon g)|^2-|\nabla f|^2}{2\varepsilon}
\end{equation*}
exists in $L^1$ and defines a bilinear form from $W^{1,2}\times W^{1,2}$ to $L^1$, see \citep{ambrosio2014metric}.
\begin{definition}\label{def:Laplacian}
Let $(X,d,m)$ be infinitesimally Hilbertian.
\begin{itemize}[fullwidth]
\item \textbf{Laplacian:} We say that $f$ in $W^{1,2}$ is in the domain of the Laplacian, denoted by $D(\Delta)$, provided that there exists $h$ in $L^2$ such that
\begin{equation}\label{eq:def_laplacian}
-\int_{X}\langle \nabla f, \nabla g \rangle dm
=
\int_{X} h g dm, \quad \text{for any } g \in W^{1,2}.
\end{equation}
In this case, we denote $\Delta f=h$.
\item \textbf{Measure\--Valued Laplacian:} We say that $f$ in $W^{1,2}_{loc}$ is in the domain of measure-valued Laplacian, denoted by $D(\bm{\Delta})$, provided that there exists a signed Radon measure $\mu$ on $X$ such that,\footnote{Recall that $X$ is assumed to be proper, and therefore any bounded set is compact on which Radon measures are finite.}
\begin{equation}\label{eq:def_measured_laplacian}
-\int_{X} \langle \nabla f, \nabla g \rangle dm
=
\int_{X} g d\mu, \quad \text{for any } g \in \text{Lip}_{bs}.
\end{equation}
In this case, we denote $\bm{\Delta}f=\mu$.
\end{itemize}
\end{definition}
By the separating property of $\text{Lip}_{bs}$ for Radon measures and the infinitesimal Hilbertianity, both $\Delta$ and $\bm{\Delta}$ are well-defined, unique and linear operators.
Moreover, the two definitions are compatible in the following sense: on the one hand, if $f$ is in $W^{1,2}$ with $\bm{\Delta}f=\rho m$ for some $\rho$ in $L^2$, then $f$ is in $D(\Delta)$ and $\Delta f=\rho$.
On the other hand, if $f$ is in $W^{1,2}$ such that $\Delta f\in L^1$, then $f$ is in $D(\bm{\Delta})$ and $\bm{\Delta}f=(\Delta f) m$, see \citep[Proposition 6.2.13]{gigli2020lectures}.
For $f$ in the domain of $\bm{\Delta}$, we write the Lebesgue decomposition with respect to $m$ as
\begin{equation}
\bm{\Delta}f
=
(\bm{\Delta}^{ac}f )\cdot m + \bm{\Delta}^{s}f,
\end{equation}
where $\bm{\Delta}^{ac}f$ is the Radon-Nikodym density and $\bm{\Delta}^sf$ is the singular part of $\bm{\Delta}f$.
For $w$ in $W^{1,2}\cap L^{\infty}$, we define the weighted Laplacian $\Delta_w$ in a similar way but with respect to the reference measure $m_w:=e^{w}\cdot m$ and with test functions in $W^{1,2}(X,m_w)$.
For such $w$, it can be shown that $W^{1,2}$ coincides with $W^{1,2}(X,m_w)$, and the minimal relaxed gradient induced by $m_w$ coincides with the one induced by $m$, see \citep[Lemma 4.11]{ambrosio2014calculus}.
Moreover, it holds that $\Delta_w f=\Delta f+\langle \nabla w, \nabla f \rangle$.
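For the reader's convenience, this identity can be checked by a short integration by parts (a sketch, say for $f$ in $D(\Delta)$ and test functions $g$ in $\text{Lip}_{bs}$, using the chain and Leibniz rules recalled below): since $\nabla(g e^{w}) = e^{w}\nabla g + g e^{w}\nabla w$, we get
\begin{align*}
-\int_{X} \langle \nabla f, \nabla g \rangle\, dm_w
&= -\int_{X} \langle \nabla f, \nabla (g e^{w}) \rangle\, dm + \int_{X} g \langle \nabla f, \nabla w \rangle e^{w}\, dm\\
&= \int_{X} g\, \Delta f\, e^{w}\, dm + \int_{X} g \langle \nabla w, \nabla f \rangle\, dm_w
= \int_{X} g \left( \Delta f + \langle \nabla w, \nabla f \rangle \right) dm_w,
\end{align*}
and comparing with the definition of $\Delta_w$ gives the claimed identity.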
The following lemma recaps the calculus rules; the proofs can be found in \citep{gigli2020lectures, zhang2016local,gigli2013pde}.
\begin{lemma}
Let $(X,d,m)$ be infinitesimally Hilbertian.
Then:
\begin{enumerate}[label=\textit{(\roman*)}]
\item \textbf{Locality:} $|\nabla f|=|\nabla g|$ on $\{f-g=c\}$ for any $f$, $g$ in $W^{1,2}$ and constant $c$.
\item \textbf{Chain rule:} for any $f$ in $W^{1,2}$ and Lipschitz function $\phi:\mathbb{R} \to \mathbb{R}$, it follows that
\begin{equation*}
|\nabla(\phi\circ f)|=|\phi'(f)||\nabla f|
\end{equation*}
In particular, if $\phi$ is a contraction, then $|\nabla(\phi\circ f)|\leq |\nabla f|$.
\item \textbf{Leibniz rule:} for any $f$, $g$ and $h$ in $W^{1,2}\cap L^{\infty}$, it follows that $fg$ is in $W^{1,2}$ and
\begin{equation*}
\langle \nabla(fg), \nabla h \rangle
=
f\langle \nabla g, \nabla h \rangle + g\langle \nabla f, \nabla h \rangle.
\end{equation*}
\item \textbf{Chain rule:} for any $f$ in $D(\Delta)\cap L^{\infty}$ and $C^2$-function $\phi:\mathbb{R}\rightarrow \mathbb{R}$, it follows that $\phi(f)$ is in $D(\Delta)$ and
\begin{equation}\label{eq:Chain_rule}
\Delta \phi (f)
=
\phi'(f) \Delta f + \phi''(f)|\nabla f|^2.
\end{equation}
\item \textbf{Leibniz rule:} if $f$ and $g$ are in $D(\bm{\Delta})\cap L^{\infty}$, $g$ is continuous and $\bm{\Delta}g$ is absolutely continuous with respect to $m$, then $fg$ is in $D(\bm{\Delta})$ and
\begin{equation}\label{eq:Leibniz}
\bm{\Delta}(fg)
=
f\bm{\Delta}g + g\bm{\Delta}f + 2\langle \nabla f, \nabla g \rangle\cdot m.
\end{equation}
\end{enumerate}
\end{lemma}
By the $L^2$-lower semicontinuity and convexity of the Cheeger energy, the heat semigroup $P_tf$ is defined as the gradient flow in $L^2$ of the Cheeger energy starting from $f\in L^2$, based on the classical Brezis\--Komura theory, which provides existence and uniqueness.
Moreover, for any $f\in L^2$, it holds that $t\mapsto P_t f$ is locally absolutely continuous on $(0,\infty)$ and $P_tf$ is in $D(\Delta)$ for all $t>0$ and
\begin{equation}
\frac{d}{dt}P_t f = \Delta P_t f, \quad \text{for almost all } t\in (0,\infty).
\end{equation}
Under the further assumption that $(X,d,m)$ is infinitesimally Hilbertian, the heat semigroup $P_t$ is linear, strongly continuous, contractive and order-preserving in $L^2$.
Moreover, $P_t$ can be extended to a linear, mass-preserving and strongly continuous operator in $L^p$ for any $1\leq p<\infty$, see \citep{ambrosio2014calculus} and further results therein.
Finally, we adapt the definitions and results above to balls $\Omega:=B(x,R)\subseteq X$; we refer to \citep[Section 2.3]{zhang2016local} for a detailed presentation.
The approach is the same, up to the subtlety that one needs to pay attention to the boundary:
We denote by $\text{Lip}_c(\Omega)$ the space of Lipschitz functions on $\Omega$ with compact support in $\Omega$.
For $f$ in $\text{Lip}_{loc}(\Omega)$, we define the $W^{1,2}(\Omega)$-norm as
\begin{equation}
\|f\|^2_{W^{1,2}(\Omega)}
=
\|f\|^2_{L^2(\Omega)} + \|\text{lip}f\|^2_{L^2(\Omega)},
\end{equation}
and the Sobolev space $W^{1,2}(\Omega)$ as the $W^{1,2}(\Omega)$-closure of the set $\{f\in \text{Lip}_{loc}(\Omega): \|f\|_{W^{1,2}(\Omega)}<\infty\}$.
By a procedure similar to the global one, one can define $|\nabla f|$ and the local inner product $\langle \nabla f, \nabla g\rangle$ in $L^1(\Omega)$ for any $f$ and $g$ in $W^{1,2}(\Omega)$.
As for the Laplacian, the definition is modified as follows: a function $f$ in $W^{1,2}(\Omega)$ belongs to $D(\Delta,\Omega)$ provided that there exists $g$ in $L^2(\Omega)$ such that
\begin{equation*}
-\int_{\Omega} \langle \nabla f, \nabla \phi \rangle dm = \int_{\Omega} g \phi dm, \quad \text{for all }\phi \in \text{Lip}_c(\Omega),
\end{equation*}
and we denote $\Delta_{\Omega}f:=g$.
It can be easily checked that for $f$ in $D(\Delta)$, its restriction to $\Omega$ belongs to $D(\Delta,\Omega)$.
\subsection{RCD metric measure spaces}
\citet{gigli2015differential,erbar2015equivalence} introduced the notion of Riemannian curvature-dimension condition $\text{RCD}(K,N)$ as the finite-dimensional counterpart of $\text{RCD}(K,\infty)$, itself introduced in \citep{ambrosio2014metric} as a strengthening of the curvature-dimension condition proposed by \citep{lott2009ricci,sturm2006geometry,sturm2006geometry2} that rules out Finsler cases.
Later, the finer notion called reduced Riemannian curvature-dimension condition $\text{RCD}^*(K,N)$ was proposed in \citep{erbar2015equivalence}; it satisfies the so-called ``local\--to\--global'' property of \citep{bacher2010localization}.
Very recently, \citet{cavalletti2021globalization} showed that $\text{RCD}(K,N)$ and $\text{RCD}^*(K,N)$ are equivalent if the reference measure is finite.
The notion of $\text{RCD}^*(K,N)$ condition can be introduced in several equivalent ways, see \citep{erbar2015equivalence}.
In this work we give a definition including the case where $N=\infty$ from an Eulerian point of view based on abstract $\Gamma$-calculus.
\begin{definition}
We say that a metric measure space $(X,d,m)$ satisfies the $\text{RCD}^*(K,N)$ condition for $K$ in $\mathbb{R}$ and $N$ in $[1,\infty]$ provided that
\begin{enumerate}[label=\textit{(\roman*)}]
\item $m(B(x,r))\leq C \exp(C r^2)$ for some $x$ in $X$, some $C>0$ and all $r>0$;
\item \textbf{Sobolev-to-Lip property:} any $f$ in $W^{1,2}$ with $|\nabla f|$ in $L^{\infty}$ admits a Lipschitz representative $\tilde{f}$ in $\text{Lip}$ such that $f=\tilde{f}$ $m$-a.e.\ and $\text{Lip}(\tilde{f})=\||\nabla f|\|_{\infty}$;
\item $(X,d,m)$ is infinitesimally Hilbertian.
\item \textbf{Weak Bochner inequality:} for any $f$ in $D(\Delta)$ with $\Delta f$ in $W^{1,2}$ and any $g$ in $D(\Delta)$ with $\Delta g$ in $L^\infty$ and $g \geq 0$, it holds that
\begin{equation}\label{eq:weak_Bochner}
\int_{X}\Delta g \frac{|\nabla f|^2}{2}dm
\geq
\int_{X}g \left( \frac{1}{N}(\Delta f)^2 + \langle \nabla f, \nabla \Delta f \rangle + K |\nabla f|^2 \right)dm.
\end{equation}
\end{enumerate}
\end{definition}
From now on, we assume that $(X,d,m)$ is a $\text{RCD}^\ast(K,N)$ space with $N$ in $(1, \infty)$ and $K>0$, which is the framework of the results in this work.
Several important geometric and analytic properties hold in this setting, some of them in greater generality.
First, $X$ satisfies the Bishop\--Gromov inequality, that is, for any $0<r<R$ and $x\in X$,
\begin{equation}
\frac{m(B(x,R))}{v_{K,N}(R)}
\leq
\frac{m(B(x,r))}{v_{K,N}(r)},
\end{equation}
where $v_{K,N}(r)$ is the volume of the ball of radius $r>0$ in the model space of dimension $N$ and Ricci curvature $K$.
In particular, $X$ is globally doubling with constant $2^N$, that is,
\begin{equation}
m(B(x,2r)) \leq 2^N m(B(x,r)),\quad \text{for any } x\in X \text{ and } r>0.
\end{equation}
It also holds that $m(B(x,r))>0$ for any $r>0$ and $x$ in $X$, and therefore the support of the reference measure $m$ is the whole of $X$.
Second, $X$ supports a weak $(1,1)$-Poincaré inequality, see \citep[Theorem 1.1]{rajala2012interpolated}: for any $x$ in $X$, any $r>0$, any continuous function $f:X\rightarrow \mathbb{R}$ and any upper gradient $g$ of $f$, we have
\begin{equation}
\fint_{B(x,r)}\left| f- (f)_{x,r} \right|dm
\leq
C r \fint_{B(x,2r)}g dm,
\end{equation}
where the constant $C$ only depends on $K$, $N$ and $r$, and where $\fint_{\Omega}f dm:=\frac{1}{m(\Omega)}\int_{\Omega}f dm$ and $(f)_{x,r}:=\fint_{B(x,r)}f dm$.
By H\"older inequality, $X$ also supports weak $(1,q)$-Poincaré inequality for any $1<q<2$.
This implies that $X$ is connected, see \citep[Proposition 4.2]{bjorn2011nonlinear}.
In fact, although implicitly contained in the definition, $X$ is geodesic space on the support of reference measure.
Third, the Bonnet-Myers theorem implies that $X$ is compact, $m(X)<\infty$ and $\text{diam}(X)\leq \pi\sqrt{(N-1)/K}$, see \citep[Theorem 4.26]{sturm2006geometry}.
Since a normalization of the measure does not affect the $\text{RCD}^*(K,N)$ condition, it is not restrictive to assume that $m(X)=1$ when $K>0$.
Fourth, in the $\text{RCD}$ setting, for a Sobolev function $f\in W^{1,2}$ it is possible to identify the gradient $\nabla f$, rather than only the modulus of the gradient $|\nabla f|$, as the unique element in the tangent module $L^2(TX)$, which is an $L^2(m)$-normed $L^{\infty}(m)$-module introduced in \citep{gigli2020lectures,gigli2018nonsmooth}.
A second-order calculus on $\text{RCD}$ spaces was also developed there, so that the notion of Hessian $\text{Hess}f$, its pointwise norm $|\text{Hess}f|_{HS}\in L^2$ and the covariant derivative are well-defined.
For the complete theory, we refer readers to \citep{gigli2020lectures,gigli2018nonsmooth}; here we only mention the inclusion $D(\Delta)\subseteq H^{2,2}$.
Consequently, for any $f$ in $D(\Delta)\cap \text{Lip}$, we have $|\nabla f|^2$ in $W^{1,2}$ and
\begin{equation}\label{eq:H_2_2}
|\nabla |\nabla f|^2|
\leq
2 |\text{Hess}f|_{HS}|\nabla f|,
\end{equation}
where $|\text{Hess}(f)|_{HS}\in L^2$ is the pointwise norm of $\text{Hess}f$, see \citep[Theorem 3.3.18]{gigli2018nonsmooth} or \citep[Lemma 3.5]{debin2021quasi}.
\begin{remark}
Although there exist different notions of Sobolev space, such as Newtonian spaces, see \citep{bjorn2011nonlinear,ambrosio2013density}, and different notions of weak upper gradient on metric measure spaces, all these notions are equivalent to each other in the setting of $\text{RCD}^*(K,N)$ spaces.
In particular, the minimal relaxed gradient coincides with the minimal weak upper gradient, which is defined via test plans and geodesics, see \citep[Definition 2.1.8]{gigli2020lectures}, and $W^{1,2}$ is reflexive.
For more details, we refer to \citep{ambrosio2013density}.
\end{remark}
Fifth, we recall a regularity result for the Poisson equation on balls, see \citep[Lemma 3.4]{zhang2016local} and \citep[Theorem 3.1]{koskela2003lipschitz}, the latter proved in the more general setting of a Bakry\--Emery type curvature condition.
\begin{lemma}\label{lemma:Poisson_regularity_Zhu}
Let $(X,d,m)$ be a $\text{RCD}^*(K,N)$ space with $K$ in $\mathbb{R}$ and $N$ in $[1,\infty)$.
Let further $g$ be in $L^{\infty}(B_R)$, where $B_R:=B(x_0,R)$ is a geodesic ball centered at some $x_0$ in $X$ with radius $R>0$.
Assume that $f$ is in $W^{1,2}(B_R)$ and $\Delta_{B_R}f=g$ on $B_R$ in the distributional sense.
Then it holds that $|\nabla f|$ is in $L^{\infty}_{loc}(B_R)$ and
\begin{equation*}
\| |\nabla f|\|_{L^{\infty}(B_{R/2})}
\leq
C(N,K,R)\left(\frac{1}{m(B_R)}\|f\|_{L^1(B_R)} + \|g\|_{L^{\infty}(B_R)}\right).
\end{equation*}
\end{lemma}
We also recall the following results about the Sobolev inequality and the compact Sobolev embedding theorem in the setting of $\text{RCD}^*(K,N)$ spaces with $K>0$ and $N$ in $(2,\infty)$, proved in \citep[Proposition 3.3, Proposition 4.2]{profeta2015sharp}.
\begin{lemma}\label{lemma:Pre_Sobolev_embedding}
Let $(X,d,m)$ be a $\text{RCD}^*(K,N)$ space with $K>0$ and $N$ in $(2,\infty)$.
\begin{enumerate}[label=\textit{(\roman*)}, fullwidth]
\item \textbf{Sobolev inequality:}
There exist constants $A\geq 1$ and $B>0$ depending only on $K$ and $N$, such that for every $f$ in $W^{1,2}$, it holds
\begin{equation*}
\| f \|^2_{2^*}
\leq
A \|f\|^2_2 + B \text{Ch}(f),
\end{equation*}
where $2^*=2N/(N-2)$ is the Sobolev conjugate of $2$.
\item \textbf{Rellich\--Kondrachov:} Let $(f_n)$ be a sequence in $W^{1,2}$ with $\sup_{n}\|f_n\|_{W^{1,2}}<\infty$.
Then there exists $f$ in $W^{1,2}$ and a subsequence $(f_{n_k})$ such that for every $1\leq q<2^*$, it holds that
\begin{equation*}
f_{n_k}\rightarrow f \quad \text{in } L^q(X,m).
\end{equation*}
\end{enumerate}
\end{lemma}
Finally, we mention a key result about the heat semigroup and resolvent of Laplacian.
Recall that the heat semigroup $P_t$ is said to be ultracontractive if for $1\leq p<q\leq \infty$ there exists a constant $C(t)>0$ such that for any $f$ in $L^p$, it holds that
\begin{equation*}
\| P_t f\|_{q} \leq C(t)\|f\|_{p},\quad t>0,
\end{equation*}
and we denote $\|P_t\|_{(p,q)}:=\sup_{\|f\|_{p}\leq 1}\|P_t f\|_{q}$.
The ultracontractive property is equivalent to the Sobolev inequality on the underlying metric measure space, see \citep[Theorem 6.3.1]{bakry2014analysis}.
For $\text{RCD}^{*}(K,N)$ spaces with $K>0$ and $N$ in $(2,\infty)$, since the Sobolev inequality holds, it follows that $P_t$ is ultracontractive.
More precisely, for $1\leq p<q\leq \infty$ and $0<t\leq 1$, it holds that
\begin{equation*}
\| P_t \|_{(p,q)} \leq C t^{-\frac{N}{2}(\frac{1}{p}-\frac{1}{q})},
\end{equation*}
where $C>0$ is a constant depending on $K$ and $N$.
As a consequence, the ultracontractive property of the heat semigroup provides the following useful boundedness result for the resolvent operator of the Laplacian $R_{\lambda}:=(\lambda I - \Delta)^{-1}$, $\lambda>0$, which is needed later; the proof can be found in \citep[Lemma 4.1]{profeta2015sharp} and \citep[Corollary 6.3]{bakry2014analysis}.
\begin{lemma}\label{lemma:ultracontra_resolvent}
In the context of an $\text{RCD}(K,N)$ space with $K>0$ and $N$ in $(2,\infty)$, let $\lambda>0$.
If $1\leq p\leq N/2$, the resolvent operator $R_{\lambda}:L^p\rightarrow L^q$ is bounded for each $1\leq q < pN/(N-2p)$.
If $p>N/2$, then the resolvent operator $R_{\lambda}: L^p \rightarrow L^{\infty}$ is bounded.
\end{lemma}
\section{Existence of positive extremal functions}
From now on, we always assume that $(X,d,m)$ is a $\text{RCD}^*(K,N)$ space with $K>0$ and $N$ in $(2,\infty)$, and $m$ is a Borel probability measure.
We consider the following variational problem
\begin{equation}\label{eq:var_pbm_general}
\inf\left\{\int_{X}\left(|\nabla f|^2 +\alpha_1 f^2 - \frac{\alpha_2}{2} f^2 \log f^2\right)dm : f\in W^{1,2},\|f\|_2=1 \right\},
\end{equation}
where $\alpha_1$ and $\alpha_2$ are in $\mathbb{R}$.
The infimum in \eqref{eq:var_pbm_general} is called the log-Sobolev constant $\lambda:=\lambda(\alpha_1,\alpha_2)$ of $X$ with parameters $\alpha_1$ and $\alpha_2$, and the functional in \eqref{eq:var_pbm_general} is called the log-Sobolev functional.
\begin{definition}\label{def:extremal_function}
Provided that $\lambda=\lambda(\alpha_1,\alpha_2)$ is finite, we call a function $u$ in $W^{1,2}$ an extremal function of the variational problem \eqref{eq:var_pbm_general} if $\|u\|_2=1$ and
\begin{equation}
\lambda
=
\int_{X}\left( |\nabla u|^2 +\alpha_1 u^2 - \frac{\alpha_2}{2} u^2 \log u^2\right)dm.
\end{equation}
\end{definition}
In the following, we state the main result of this section.
It establishes the existence of non-negative extremal functions of the variational problem \eqref{eq:var_pbm_general}.
Moreover, we show that all non-negative extremal functions are actually Lipschitz continuous and bounded away from zero on $X$.
As a corollary, we show that the logarithmic transform of any non-negative extremal function is Lipschitz, belongs to the domain of the Laplacian and satisfies an Euler-Lagrange equation.
Under the further assumption that $\lambda(\alpha_1, \alpha_2)\neq \alpha_1$, the solutions thereof are non-constant.
\begin{theorem}\label{thm:var_general}
Let $\alpha_1$ and $\alpha_2$ be given constants with $\alpha_2>0$.
Then the log Sobolev constant $\lambda=\lambda(\alpha_1,\alpha_2)$ has finite value and the variational problem \eqref{eq:var_pbm_general} admits non-negative extremal functions.
Moreover, any non-negative extremal function $u$ satisfies the following properties:
\begin{enumerate}[label=\textit{(\roman*)}]
\item $u$ is in $D(\Delta)$ and satisfies
\begin{equation}\label{eq:var_pbm_pde}
-\Delta u = \alpha_2 u\log u + (\lambda - \alpha_1)u.
\end{equation}
Furthermore if $\lambda\neq \alpha_1$, then $u$ is non-constant.
\item $u$ is Lipschitz continuous;
\item $u$ is positive.\footnote{Since $X$ is compact, $u>\delta >0$ for some $\delta$.}
\end{enumerate}
\end{theorem}
\begin{corollary}\label{corollary:var_log_transform}
Let $\alpha_1$ and $\alpha_2$ be given constants with $\alpha_2>0$ and $u$ an arbitrary non-negative extremal function of \eqref{eq:var_pbm_general}.
Then $v:=\log u$ is Lipschitz and in $D(\Delta)$ and satisfies
\begin{equation}\label{eq:var_log_transform_pde}
-\Delta v = |\nabla v|^2 +\alpha_2v + \lambda-\alpha_1
\end{equation}
In particular, the equation $-\Delta f=|\nabla f|^2 + \alpha_2 f $ admits Lipschitz weak solutions which are non-constant whenever $\lambda\neq \alpha_1$.
\end{corollary}
\begin{remark}
A classical example of the above variational problem is given by the weak log-Sobolev inequalities, that is, the variational problems
\begin{equation}
\lambda_{\varepsilon}
=
\inf\left\{\frac{2}{\alpha_{\varepsilon}}\int_{X}|\nabla f|^2dm + \int_{X}\left(\varepsilon f^2-f^2\log f^2\right)dm: f\in W^{1,2}, \|f\|_2=1 \right\},
\end{equation}
where $\varepsilon\geq0$ and $\alpha_{\varepsilon}$ is defined as the supremum of those $C>0$ such that
\begin{equation*}
\int_{X}f^2\log f^2dm\leq \int_{X}\left(\frac{2}{C}|\nabla f|^2 + \varepsilon f^2\right) dm
\end{equation*}
for all $f$ in $W^{1,2}$ and $\| f\|_2 = 1$.
The log-Sobolev inequality in $\text{RCD}(K,N)$, see \citep{cavalletti2017sharp}, implies that $\alpha_{\varepsilon}\geq KN/(N-1)$ for all $\varepsilon\geq 0$, and $\alpha_{\varepsilon}<\infty$ when $\varepsilon$ is small enough.
Straightforward inspection shows that if $\alpha_{\varepsilon}$ has finite value, then $\lambda_{\varepsilon}=0$.
In this case, by using Theorem \ref{thm:var_general} and Corollary \ref{corollary:var_log_transform}, we obtain that the weak log-Sobolev inequality admits Lipschitz and positive extremal functions, which are moreover non-constant if $\varepsilon > 0$.
Furthermore, we derive that the PDE $-\Delta f=|\nabla f|^2 +\alpha_{\varepsilon}f$ admits non-constant, Lipschitz and positive weak solutions if $\varepsilon> 0$.
\end{remark}
We first prove all assertions of Theorem \ref{thm:var_general} except positivity.
While the existence result follows the classical methods of \citep{rothaus1981logarithmic} using variational techniques and the compact Sobolev embedding, the boundedness and Lipschitz regularity arguments are different.
Our methods are based on the ultracontractive property of the resolvent operator and on the local regularity result for the Poisson equation on a ball in Lemma \ref{lemma:Poisson_regularity_Zhu}.
In the following proofs, we denote by $C>0$ a universal constant which may vary from one step to the next.
\begin{proof}[First part of the proof of Theorem \ref{thm:var_general}]
\begin{enumerate}[label=\textbf{\textsc{Step \arabic*:}}, fullwidth]
\item We show the existence of non-negative extremal functions of variational problem \eqref{eq:var_pbm_general}.
Let $\mathcal{F}:=\{f\in W^{1,2}: \|f\|_2=1\}$ and $F:\mathcal{F}\rightarrow \mathbb{R}$ be the log Sobolev functional in \eqref{eq:var_pbm_general}, that is,
\begin{equation}
F(f):=F_{\alpha_1,\alpha_2}(f)
=
\int_{X}\left(|\nabla f|^2 + \alpha_1 f^2 - \frac{\alpha_2}{2} f^2\log f^2\right) dm, \quad \text{for }f\in \mathcal{F}.
\end{equation}
We claim that $F$ is coercive on $\mathcal{F}$.
Indeed, choosing $0<\delta\leq 2/(N-2)$, so that $2+2\delta\leq 2^*$, since $\|f\|_2=1$, by Jensen's inequality it follows that
\begin{equation}
\int f^2 \log f^2 dm
=
\frac{1}{\delta}\int \log |f|^{2\delta}(f^2 dm)
\leq
\frac{1+\delta}{\delta}\log \|f\|^2_{2+2\delta}.
\end{equation}
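In detail, since $f^2 m$ is a probability measure and $\log$ is concave, the Jensen step above reads
\begin{equation*}
\frac{1}{\delta}\int_X \log |f|^{2\delta}\,(f^2 dm)
\leq
\frac{1}{\delta}\log \int_X |f|^{2\delta} f^2\, dm
=
\frac{1}{\delta}\log \|f\|^{2+2\delta}_{2+2\delta}
=
\frac{1+\delta}{\delta}\log \|f\|^2_{2+2\delta}.
\end{equation*}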
Note that the Sobolev inequality, see Lemma \ref{lemma:Pre_Sobolev_embedding}, implies that
\begin{equation}
\|f\|^2_{2+2\delta}
\leq
\|f\|^2_{2^*}
\leq
A\|f\|^2_2 + B\text{Ch}(f)
\leq
C(1+\text{Ch}(f)),
\end{equation}
and therefore
\begin{equation}
F(f)
\geq
2\text{Ch}(f) + \alpha_1 - \frac{(1+\delta)\alpha_2}{2\delta}\log C\left(1+ \text{Ch}(f)\right),
\end{equation}
which implies that $F(f)\rightarrow \infty$ for $f$ in $\mathcal{F}$ with $\|f\|_{W^{1,2}}\rightarrow \infty$.
Let $(f_n)\subseteq \mathcal{F}$ be a minimizing sequence which can be assumed non-negative since $\text{Ch}(|f|)\leq \text{Ch}(f)$.
By the coercivity of $F$, it follows that $(f_n)$ is bounded in $W^{1,2}$.
The compact Sobolev embedding implies the existence of a non\--negative $u$ in $W^{1,2}\cap \mathcal{F}$ and a subsequence of $(f_n)$, relabelled as $(f_n)$, such that $f_n\rightarrow u$ strongly in $L^q$ for all $q$ in $[1,2^*)$.
By the $L^2$-lower semicontinuity of the Cheeger energy as well as the $L^{2+\delta}$-continuity of $\int f^2\log f^2 dm$ for small $\delta>0$,\footnote{This can be shown from the mean value theorem and the identity $f^2\log f^2 -g^2\log g^2=2(|f|-|g|)(1+\log \theta^2)\theta$, where $\theta$ is between $|f|$ and $|g|$, see \citep[page 112]{rothaus1981logarithmic}.} it follows that $F$ attains its minimum at $u$.
Furthermore, if $\alpha_1\neq \lambda$, straightforward inspection shows that $u\equiv 1$ cannot be the extremal function since otherwise $\lambda=\alpha_1$.
\item Given any non-negative extremal function $u$, we show that $u$ is in $D(\Delta)$ and satisfies \eqref{eq:var_pbm_pde}, using the classical Euler-Lagrange method.
Let $\phi$ in $\text{Lip}_{bs}$ be an arbitrary Lipschitz function and define the functional $G$ as follows
\begin{equation}
G(t,\beta)
=
F(u+t\phi) + \beta \left(\|u+t\phi\|_2^2 -1\right).
\end{equation}
The mean value theorem yields
\begin{equation}
\left|u^2\log u^2 - (u+t\phi)^2\log (u+t\phi)^2\right|
\leq
2t|\phi|\left|(1+\log \theta^2)\theta \right|
\end{equation}
with a function $\theta$ taking values between $|u|$ and $|u+t\phi|$.
From the inequality $|\theta\log \theta|\leq \max(1/e, C(\delta)\theta^{1+\delta})$ for some small $\delta>0$, together with Lebesgue's dominated convergence theorem, it follows that
\begin{equation}
0
=
\frac{1}{2}\frac{dG(t, \beta)}{dt}\Big|_{t=0}
=
\int \left(\langle \nabla u, \nabla \phi \rangle + \alpha_1 u \phi - \frac{\alpha_2}{2} u\phi \log u^2 - \frac{\alpha_2}{2} u\phi + \beta u \phi\right) dm.
\end{equation}
Since $\text{Lip}_{bs}$ is dense in $W^{1,2}$ and $u\log u$ is in $L^2$ by the Sobolev inequality, it follows that
\begin{equation}\label{eq:var_pbm_2}
\int \langle \nabla u, \nabla \phi \rangle dm
=
\int \left(\frac{\alpha_2}{2} \phi u \log u^2 + \left(\frac{\alpha_2}{2}-\alpha_1-\beta\right)\phi u\right) dm, \quad \text{for all } \phi \in W^{1,2}.
\end{equation}
Plugging $\phi = u$ into \eqref{eq:var_pbm_2} and using the value of the log-Sobolev constant $\lambda$, it follows that $\alpha_2/2-\beta=\lambda$.
By definition of the Laplacian, we deduce that $u$ is in $D(\Delta)$ and that $-\Delta u=\alpha_2 u\log u +(\lambda-\alpha_1)u$.
\item We show that $u$ is in $\text{Lip}$.
We start by showing that $u$ is in $L^{\infty}$.
Let $\beta>0$.
Note that using the resolvent of the Laplacian, \eqref{eq:var_pbm_pde} can be rewritten as
\begin{equation}\label{eq:var_pbm_3_resolvent}
u
=
\left(\beta I - \Delta\right)^{-1}\left( \alpha_2u\log u + (\beta+\lambda -\alpha_1)u\right)
=
R_{\beta}\left(\alpha_2u\log u + \alpha_3 u\right),
\end{equation}
where $I$ is the identity map, $R_{\beta}$ is the resolvent operator, and $\alpha_3=\beta+\lambda-\alpha_1$.
Note that in order to prove that $u$ is in $L^{\infty}$, it is sufficient to show that $g:=\alpha_2u\log u + \alpha_3u$ is in $L^r$ for some $r>N/2$, according to Lemma \ref{lemma:ultracontra_resolvent}.
By the Sobolev inequality, we know that $u$ is in $L^{2^*}$, which implies that $g$ is in $L^{r}$ for all $r\in (2,2^*)$ (recall that $2^\ast = 2N/(N-2)$).
Fix $\delta\geq 2N$ and define $r_1$ by $1/r_1=1/2^*+1/\delta$; then $2<r_1<2^*$.
If $r_1>N/2$, then Lemma \ref{lemma:ultracontra_resolvent} and identity \eqref{eq:var_pbm_3_resolvent} imply that $u\in L^{\infty}$.
If $r_1\leq N/2$, then Lemma \ref{lemma:ultracontra_resolvent} implies that $u\in L^r$ for all $r<r_1N/(N-2r_1)$.
Repeating the previous step, we can obtain that $g\in L^{r_2}$ for some $r_2<r_1N/(N-2r_1)$ such that
\begin{equation}\label{eq:var_pbm_add_1}
\frac{1}{r_2}
=
\frac{1}{r_1} - \frac{2}{N} + \frac{1}{\delta}
=
\frac{1}{2^*}-\frac{2}{N} + \frac{2}{\delta}.
\end{equation}
Define by induction $1/r_k:=1/2^*-2(k-1)/N+k/\delta$ as long as $r_{k-1}\leq N/2$. By the choice of $\delta$, after finitely many iterations $g$ is in $L^{r_k}$ for some $r_k>N/2$, and we deduce that $u$ is in $L^{\infty}$.
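To see that only finitely many iterations are needed, note that each step decreases $1/r_k$ by a fixed positive amount: since $\delta\geq 2N$,
\begin{equation*}
\frac{1}{r_k}-\frac{1}{r_{k+1}}
=
\frac{2}{N}-\frac{1}{\delta}
\geq
\frac{2}{N}-\frac{1}{2N}
=
\frac{3}{2N}
>0,
\end{equation*}
so $1/r_k$ drops below $2/N$, that is $r_k>N/2$, after finitely many steps.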
We now show that $u$ is actually in $\text{Lip}$.
Let $x_0$ be any point in $X$ and $B_R=B(x_0, R)$ be an open ball with $0<R<\text{diam}(X)/3$ and $g:=\alpha_2 u\log u + (\lambda-\alpha_1)u$.
Since $u$ is in $D(\Delta)$ and satisfies identity \eqref{eq:var_pbm_pde}, it follows by definition that $u$ belongs to $W^{1,2}(B_R)$ as well as to $D(\Delta,B_R)$, and that $\Delta_{B_R}u=-g$ holds on $B_R$ in the distributional sense.
Hence, by Lemma \ref{lemma:Poisson_regularity_Zhu}, it follows that $|\nabla u|\in L^{\infty}_{loc}(B_R)$ and
\begin{equation}
\| |\nabla u|\|_{L^{\infty}(B_{R/2})}
\leq
C(N,K,R)\left(\frac{1}{m(B_R)}\|u\|_{L^2} + \|g\|_{L^{\infty}}\right).
\end{equation}
Moreover, since $X$ is compact and $m(B_R)>0$ by the doubling property, $|\nabla u|$ belongs to $L^{\infty}$.
The Sobolev-to-Lipschitz property thus implies that $u$ has a Lipschitz representative, which completes the proof of Theorem \ref{thm:var_general} except for the positivity assertion.
\end{enumerate}
\end{proof}
As for the last assertion of Theorem \ref{thm:var_general}, the positivity of non-negative extremal functions, we address the following auxiliary lemma stating that any non-negative extremal function vanishing at one point must also vanish in a neighborhood of that point.
Our approach is based on a maximum principle type of result for the De Giorgi class proved in \citep{kinnunen2001regularity}.
\begin{lemma}\label{lemma:strict_positive}
Suppose that the hypotheses of Theorem \ref{thm:var_general} hold.
Let $u$ be any non-negative extremal function of variational problem \eqref{eq:var_pbm_general}.
If $u(x_0)=0$ for some $x_0$ in $X$, then $u\equiv 0$ on a neighborhood of $x_0$.
\end{lemma}
\begin{proof}
From the first part of the proof of Theorem \ref{thm:var_general}, any non-negative extremal function of \eqref{eq:var_pbm_general} is Lipschitz continuous.
Furthermore, since $\alpha_2 >0$ and $\lambda$ is finite, the function $g(t):=\alpha_2 t\log t + (\lambda-\alpha_1)t$ satisfies $g(t)\leq 0$ for all small enough $t>0$.
Hence, by continuity of $u$, we can find $r_0>0$ such that $g(u)\leq 0$ on $B(x_0,r_0)$.
We first claim that $-u$ is of De Giorgi class $\text{DG}_2(B(x_0,r_0))$.
In other words, there exists $C>0$ such that for all $k$ in $\mathbb{R}$, all $z$ in $B(x_0,r_0)$, and all $0<\rho< R\leq \text{diam}(X)/3$ with $B(z,R)\subseteq B(x_0,r_0)$, it holds that
\begin{equation}
\int_{A_z(k,\rho)}|\nabla u|^2 dm
\leq \frac{C}{(R-\rho)^2}\int_{A_z(k,R)}(-u-k)^2dm,
\end{equation}
where $A_z(k,r)=\{x\in B(z,r): -u(x)>k\}$.
Indeed, let $\eta$ be a Lipschitz cut-off function such that $\eta=1$ on $B(z,\rho)$ and $\text{supp}(\eta)\subseteq B(z,R)$ with $|\nabla \eta|\leq C/(R-\rho)$ for some $C>0$.
Taking $\phi=\eta (-u-k)_{+}$ as a test function for \eqref{eq:var_pbm_pde}, it follows that
\begin{equation}\label{eq:var_pos_1}
\int \langle \nabla(\eta(-u-k)_+), \nabla u \rangle dm
=
\int \eta (-u-k)_{+}\left(\alpha_2 u\log u + (\lambda-\alpha_1)u\right)dm.
\end{equation}
As for the left-hand side of \eqref{eq:var_pos_1}, the Leibniz rule yields
\begin{multline}\label{eq:var_pos_2}
\int \langle \nabla(\eta(-u-k)_{+}), \nabla u \rangle dm
\geq
\int_{\{-u>k\}} \eta |\nabla u|^2 dm - \int_{\{-u>k\}} (-u-k)_{+} |\nabla \eta||\nabla u | dm\\
\geq
\int_{A_z(k,\rho)}|\nabla u|^2 dm - \frac{C}{(R-\rho)^2}\int_{A_z(k,R)}(-u-k)^2_{+}dm - \int_{A_z(k,R)\setminus A_z(k,\rho)}C|\nabla u|^2dm,
\end{multline}
where we used the locality of the minimal weak upper gradient, $|\nabla(-u-k)|=|\nabla u|$, for the first inequality, and Young's inequality, $|\nabla \eta|\leq C/(R-\rho)$, and $|\nabla \eta|=0$ on $B(z,\rho)$ for the second inequality.
As for the right-hand side of \eqref{eq:var_pos_1}, by the very choice of $B(x_0,r_0)$ and the non-negativity of $u$, it follows that $\alpha_2 u\log u + (\lambda-\alpha_1)u\leq 0$.
Hence
\begin{equation}\label{eq:var_pos_3}
\int \eta(-u-k)_{+}\left(\alpha_2u\log u + (\lambda-\alpha_1)u\right) dm
\leq 0.
\end{equation}
Combining \eqref{eq:var_pos_1}, \eqref{eq:var_pos_2} and \eqref{eq:var_pos_3}, we obtain
\begin{equation}
\int_{A_z(k,\rho)}|\nabla u|^2 dm
\leq
\frac{C}{(R-\rho)^2}\int_{A_z(k,R)} (-u-k)^2_{+}dm + C\int_{A_z(k,R)\setminus A_z(k,\rho)}|\nabla u|^2 dm.
\end{equation}
Adding $C\int_{A_z(k,\rho)}|\nabla u|^2dm$ and then dividing by $(1+C)$ on both sides, it follows that
\begin{equation}
\int_{A_z(k,\rho)}|\nabla u|^2dm
\leq
\frac{C}{(R-\rho)^2}\int_{A_z(k,R)}(-u-k)^2_{+}dm + \theta \int_{A_z(k,R)}|\nabla u|^2 dm,
\end{equation}
where $\theta=C/(1+C)<1$.
Hence, for all $0<\rho<r\leq R$, it holds
\begin{equation}
\int_{A_z(k,\rho)}|\nabla u|^2dm
\leq
\frac{C}{(r-\rho)^2}\int_{A_z(k,R)}(-u-k)^2_{+}dm + \theta \int_{A_z(k,r)}|\nabla u|^2 dm.
\end{equation}
Set $f(r):=\int_{A_z(k,r)}|\nabla u|^2dm$.
Using \citep[Lemma 3.2]{kinnunen2001regularity} (see also \citep[Lemma 3.1]{mariano1983}), we obtain that
\begin{equation}
\int_{A_{z}(k,\rho)}|\nabla u|^2 dm
\leq
\frac{C}{(R-\rho)^2}\int_{A_z(k,R)}(-u-k)^2_{+}dm,
\end{equation}
which implies that $-u$ is of De Giorgi class $\text{DG}_2(B(x_0,r_0))$.
Note that since $X$ is an $\text{RCD}^*(K,N)$ space with $K>0$ and $N\in (2,\infty)$, $X$ is compact, globally doubling, and supports a global weak $(1,1)$-Poincaré inequality, see \citep{rajala2012interpolated}.
Together with Hölder's inequality, $X$ thus supports a global weak $(1,q)$-Poincaré inequality for any $q$ in $(1,2)$.
Since the minimal weak upper gradient in the $\text{RCD}$ setting coincides with the minimal weak upper gradient in the Newtonian setting, see \cite{ambrosio2013density}, the space $X$, together with $u\geq 0$ and $-u\in \text{DG}_2(B(x_0,r_0))$, satisfies the assumptions of \citep[Lemma 6.1 and Lemma 6.2]{kinnunen2001regularity}.
By contradiction, suppose that the assertion of the lemma does not hold.
Fix $0<R<r_0$ and $x$ in $B(x_0,R)$, and set $\tau:=u(x)>0$.
By Lipschitz continuity of $u$, we can find $r>0$ with $B(x,r)\subseteq B(x_0,R)$ such that $u\geq\tau/2$ on $B(x, r)$.
The doubling property yields
\begin{equation}
\frac{m(B(x,r))}{m(B(x_0,R))}
\geq
\left(\frac{r}{R}\right)^N.
\end{equation}
Hence it follows that
\begin{equation}\label{eq:var_pos_final}
m\left(\left\{z\in B(x_0,R): u(z)\geq \tau/2\right\}\right)
\geq
m\left(B(x,r)\right)
\geq
\left(\frac{r}{R}\right)^N m(B(x_0,R)).
\end{equation}
Taking $\gamma:=1-(r/R)^N$, the inequality \eqref{eq:var_pos_final} implies that
\begin{equation}
m\left(\left\{x\in B(x_0,R): u< \tau/2\right\}\right)
\leq
\gamma m(B(x_0,R)).
\end{equation}
Since $0<\gamma <1$, \citep[Lemma 6.2]{kinnunen2001regularity} yields the existence of $\bar{\lambda}=\bar{\lambda}(\gamma)>0$ such that
\begin{equation}
u(x_0)=\inf_{B(x_0, R/2)}u \geq \frac{\bar{\lambda} \tau}{2}>0,
\end{equation}
which is a contradiction.
\end{proof}
We can now address the positivity assertion of Theorem \ref{thm:var_general}.
\begin{proof}[Final part of the proof of Theorem \ref{thm:var_general}]
\begin{enumerate}[label=\textbf{\textsc{Step \arabic*:}}, fullwidth]
\setcounter{enumi}{3}
\item We show the positivity of non-negative extremal function.
From the continuity of $u$, the set $A = \{x\colon u(x)=0\}$ is closed.
By contradiction, suppose that $A$ is non-empty; by Lemma \ref{lemma:strict_positive} it follows that $A$ is also open.
Since $X$ is connected, it follows that $A = X$ and therefore $u\equiv 0$.
This however contradicts the fact that $\|u\|_2 =1$.
\end{enumerate}
\end{proof}
Finally we address the proof of Corollary \ref{corollary:var_log_transform}.
\begin{proof}[Proof of Corollary \ref{corollary:var_log_transform}]
Let $\phi(t)=\log t$ for $t>0$ and let $u$ be an arbitrary non-negative extremal function of variational problem \eqref{eq:var_pbm_general}.
By Theorem \ref{thm:var_general}, we know that $0<c\leq u \leq C$ for some positive constants $c$ and $C$ and that $u$ is Lipschitz.
Since $\phi$ is a $C^2$-function with bounded first and second derivatives on $[c,C]$, it follows by the chain rule that $v:=\phi(u)$ is Lipschitz, belongs to $D(\Delta)$, and
\begin{equation}
\Delta(\phi(u))
=
\phi'(u)\Delta u + \phi''(u)|\nabla u|^2
=
\frac{1}{u}\Delta u - \frac{1}{u^2}|\nabla u|^2.
\end{equation}
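Combining this with \eqref{eq:var_pbm_pde} and the identity $|\nabla v|=|\nabla u|/u$, we find
\begin{equation*}
-\Delta v
=
-\frac{\Delta u}{u} + \frac{|\nabla u|^2}{u^2}
=
\alpha_2 \log u + (\lambda-\alpha_1) + |\nabla v|^2.
\end{equation*}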
By \eqref{eq:var_pbm_pde} we get
\begin{equation}
-\Delta v = |\nabla v|^2 + \alpha_2 v +(\lambda-\alpha_1).
\end{equation}
Set $\tilde{v}:=v + (\lambda-\alpha_1)/\alpha_2$.
By locality of minimal weak upper gradient and Laplacian, we obtain that $\tilde{v}\in D(\Delta)$ satisfies $-\Delta \tilde{v}=|\nabla \tilde{v}|^2 + \alpha_{2}\tilde{v}$.
If $\lambda\neq\alpha_1$, then Theorem \ref{thm:var_general} implies that $u$ is non-constant, which also implies that $\tilde{v}$ is non-constant.
Since $\tilde{v}$ satisfies \eqref{eq:var_log_transform_pde}, we obtain the result.
\end{proof}
\section{Li-Yau type inequality for logarithmic extremal functions}
In this section, we derive a Li-Yau type estimate for the Lipschitz solutions of equation $-\Delta v=|\nabla v|^2 + \alpha v$, whose existence is guaranteed by Corollary \ref{corollary:var_log_transform}.
In particular, based on the regularity and positivity results obtained in previous section, this Li-Yau estimate holds for all logarithmic transform of non-negative extremal functions of \eqref{eq:var_pbm_general}.
\begin{theorem}\label{thm:estimate_Li_Yau}
Let $(X,d,m)$ be a $\text{RCD}^*(K,N)$ space with $K>0$ and $N$ in $(2,\infty)$.
Let $v\in \text{Lip}\cap D(\Delta)$ such that $-\Delta v=|\nabla v|^2 + \alpha v$ for some $\alpha>0$.
Then, for all $0< \beta <1$ it holds
\begin{equation}\label{eq:Li_Yau_result_1}
|\nabla v|^2 + (\alpha -\beta K)v
\leq
\frac{N\alpha(1-\beta)}{4\beta}\left(1-\frac{\beta((2-\beta)K-\alpha)}{2\alpha(1-\beta)}\right)^2,\quad \text{$m$-a.e.}
\end{equation}
\end{theorem}
\begin{corollary}\label{coro:Li_Yau_extremal}
Let $u$ be any non-negative extremal function of \eqref{eq:var_pbm_general} with log-Sobolev constant $\lambda(\alpha_1,\alpha_2)$ and $\alpha_2>0$.
Then $v:=\log u + (\lambda-\alpha_1)/\alpha_2$ satisfies Li-Yau type estimate \eqref{eq:Li_Yau_result_1} with $\alpha=\alpha_2$.
Moreover, if $0<\alpha_2\leq K$, then any non-negative extremal function is constant.
\end{corollary}
The proof of Theorem \ref{thm:estimate_Li_Yau} is divided into three parts: In a first step, we show regularity results for $|\nabla v|^2+(\alpha-\beta K)v$ using the weak Bochner inequality \eqref{eq:weak_Bochner}.
In a second step, following similar computations as in \cite{wang1999harnack}, we derive a lower bound on the absolutely continuous part of $\bm{\Delta}(|\nabla v|^2 +(\alpha-\beta K)v)$, where all inequalities are understood in the $m$-a.e. sense.
In a last step, we make use of a slightly generalized Omori-Yau type maximum principle proved in Appendix \ref{appendix} together with a ``good'' cut-off function inspired by \cite{mondino2019structure} to derive the desired Li-Yau estimate.
\begin{proof}[Proof of Theorem \ref{thm:estimate_Li_Yau}]
Recall that $\text{Test}^{\infty}:=\{f\in \text{Lip}\cap D(\Delta)\cap L^{\infty}: \Delta f\in W^{1,2}\cap L^{\infty}\}$ and $\text{Test}^{\infty}_{+}:=\{f\in \text{Test}^{\infty}: f\geq 0 \text{ $m$-a.e. on $X$}\}$.
\begin{enumerate}[label=\textbf{\textsc{Step \arabic*:}}, fullwidth]
\item We claim that $|\nabla v|^2$ is in $W^{1,2}\cap L^{\infty}$ and $|\nabla v|^2$ is in $D(\bm{\Delta})$ with $\bm{\Delta}^{s}(|\nabla v|^2)\geq 0$.
Indeed, by assumption $v$ is in $D(\Delta)$, hence $v$ belongs to $ H^{2,2}$ and
\begin{equation}
|\nabla |\nabla v|^2|
\leq
2 |\text{Hess} v|_{HS}|\nabla v|,
\end{equation}
with $|\text{Hess}v|_{HS}\in L^2$.
By the fact that $|\nabla v|$ is in $L^{\infty}$, we obtain that $|\nabla v|^2$ is also in $W^{1,2}$.
We now show that $|\nabla v|^2\in D(\bm{\Delta})$.
For any $\phi\in \text{Test}^{\infty}_{+}$, by weak Bochner inequality \eqref{eq:weak_Bochner}, it follows that
\begin{equation}\label{eq:est_2}
\int_X |\nabla v|^2 \Delta \phi dm
\geq
2\int_X \phi \left( \frac{(\Delta v)^2}{N} + \langle \nabla v, \nabla \Delta v \rangle + K |\nabla v|^2\right) d m
=
\int_X \phi d\mu,
\end{equation}
where $\mu=2((\Delta v)^2/N +\langle \nabla v, \nabla \Delta v\rangle + K|\nabla v|^2)m$.
By standard regularization via mollified heat flow, see \citep[Corollary 6.2.17]{gigli2020lectures}, the inequality \eqref{eq:est_2} holds for all $\phi\in \text{Lip}_{bs}^+$.
Then by \citep[Proposition 6.2.16]{gigli2020lectures}, it follows that $|\nabla v|^2\in D(\bm{\Delta})$ and that
\begin{equation}\label{eq:est_3}
\bm{\Delta}\left(|\nabla v|^2\right)
\geq
2\left( \frac{(\Delta v)^2}{N} + \langle \nabla v, \nabla \Delta v\rangle + K|\nabla v|^2 \right)\cdot m.
\end{equation}
In particular we obtain that $\bm{\Delta}^{s}(|\nabla v|^2)\geq 0$.
\item We provide lower bounds for $\bm{\Delta}^{ac}g$ based on inequality \eqref{eq:est_3}, where $g:=|\nabla v|^2 +(\alpha - \beta K)v$ for some $0<\beta<1$ to be determined later.
First note that since $-\Delta v=|\nabla v|^2 +\alpha v$, it follows that $-\Delta v = g + \beta K v$.
Hence, from the first step, we get that $g$ is in $D(\bm{\Delta})$ and $\bm{\Delta}^{s}g=\bm{\Delta}^{s}(|\nabla v|^2)\geq 0$.
Then inequality \eqref{eq:est_3} implies that
\begin{multline}\label{eq:est_4}
\bm{\Delta}^{ac}g
=
\bm{\Delta}^{ac}(|\nabla v|^2 +(\alpha - \beta K)v)\\
\geq
2\frac{(\Delta v)^2}{N} + 2\langle \nabla v, \nabla \Delta v\rangle + 2K|\nabla v|^2 + (\alpha - \beta K)\Delta v.
\end{multline}
Plugging $\Delta v=-g-\beta K v$ into \eqref{eq:est_4}, it follows that
\begin{multline}\label{eq:Li_Yau_est_add_1}
\bm{\Delta}^{ac}g
\geq
2\frac{(g+\beta K v)^2}{N} - 2\langle \nabla v, \nabla (g + \beta K v) \rangle + 2K|\nabla v|^2 - (\alpha - \beta K)(g+\beta K v)\\
=
\frac{2}{N}\left(g^2 + 2\beta K g v + \beta^2K^2 v^2\right) -2\beta K |\nabla v|^2 - 2\langle \nabla v, \nabla g\rangle + 2K|\nabla v|^2\\
- (\alpha - \beta K)g - \beta K (\alpha-\beta K)v.
\end{multline}
Plugging the identity $|\nabla v|^2= g-(\alpha-\beta K)v$ into the right-hand side of \eqref{eq:Li_Yau_est_add_1}, it follows that
\begin{multline}\label{eq:Li_Yau_add_0}
\bm{\Delta}^{ac}g
\geq
\frac{2}{N}g^2 + \left(\frac{4\beta K}{N}v + (2-\beta)K - \alpha\right)g
+ \frac{2\beta^2K^2}{N}v^2 \\
-K(2-\beta)(\alpha-\beta K)v - 2\langle \nabla v, \nabla g \rangle\\
=
\frac{2}{N}\left[ g + \left(\beta K v + \frac{N[(2-\beta)K-\alpha]}{4}\right)\right]^2 - \frac{2}{N}\left( \beta K v + \frac{N[(2-\beta)K-\alpha]}{4}\right)^2\\
+ \frac{2\beta^2K^2}{N}v^2 -K(2-\beta)(\alpha-\beta K)v - 2\langle \nabla v, \nabla g \rangle.
\end{multline}
For $a=N((2-\beta)K-\alpha)$ and $b=N\alpha(1-\beta)/\beta$, so that $\beta b=N\alpha(1-\beta)$ and the coefficient of $v$ collects as
\begin{equation*}
-\frac{2}{N}\cdot\frac{a}{2}\,\beta K v - K(2-\beta)(\alpha-\beta K)v
=
-2\alpha(1-\beta)Kv
=
-\frac{2}{N}K\beta b v,
\end{equation*}
inequality \eqref{eq:Li_Yau_add_0} simplifies to
\begin{equation}\label{eq:est_Bochner_ineq_AC}
\bm{\Delta}^{ac}g
\geq
\frac{2}{N}\left(g + \beta K v + \frac{a}{4}\right)^2 - \frac{2}{N}K\beta b v - \frac{2}{N}\frac{a^2}{16} - 2\langle \nabla v, \nabla g\rangle.
\end{equation}
\item We show that the assumptions of Lemma \ref{lemma:maximum_principle} are satisfied by $g$, based on estimate \eqref{eq:est_Bochner_ineq_AC}.
Let $D:=\text{diam}(X)$ and fix $0<R< D/4$.
Since $X$ is compact and $g\in L^{\infty}$, we can find $x_0$ in $X$ such that:
\begin{equation}
M_1:=\esssup_{B(x_0,R)}g = \esssup_{X}g.
\end{equation}
Define
\begin{equation*}
M_2 := \esssup_{X \setminus B(x_0, R)} g \leq M_1.
\end{equation*}
By our choice of $R$ and the doubling property of $X$, we have $m(B(x_0,R))>0$ and $m(X\setminus B(x_0,R))>0$.
Without loss of generality, we assume that $M_1>0$, otherwise nothing needs to be shown.
We consider different cases for $M_1$ and $M_2$.
\begin{enumerate}[fullwidth]
\item[Case 1:]$M_1>M_2$.
By regularity result in Step 1, we can apply Lemma \ref{lemma:maximum_principle} to $g$.
Taking $w=2v$, which is in $W^{1,2}\cap \text{Lip}_b$, it follows from Lemma \ref{lemma:maximum_principle} that we can find a sequence $(x_j)\subseteq X$ such that $g(x_j)>M_1 - 1/j$ and
\begin{equation}\label{eq:Li_Yau_add_2}
\bm{\Delta}^{ac}g(x_j)+\langle \nabla g, \nabla w \rangle(x_j)\leq 1/j.
\end{equation}
Plugging \eqref{eq:est_Bochner_ineq_AC} into \eqref{eq:Li_Yau_add_2} and letting $h_v= (K\beta bv+a^2/16)^{1/2}$, it follows that
\begin{multline}
g(x_j)
\leq
-\beta K v(x_j) - \frac{a}{4} + h_v(x_j) +
\sqrt{\frac{N}{2j}}\\
=
-\frac{1}{b}h_v(x_j)^2 + h_v(x_j) + \frac{a^2}{16b} - \frac{a}{4} + \sqrt{\frac{N}{2j}}\\
=
-\frac{1}{b}\left(h_v(x_j)-\frac{b}{2}\right)^2 + \frac{b}{4}\left(1-\frac{a}{2b}\right)^2 + \sqrt{\frac{N}{2j}}.
\end{multline}
Letting $j\rightarrow \infty$ and using $g(x_j)>M_1-1/j$, we obtain that
\begin{equation}
\esssup_{X}g
\leq
\frac{b}{4}\left(1-\frac{a}{2b}\right)^2.
\end{equation}
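The completed square used in the last step rests on the elementary identity
\begin{equation*}
-\frac{1}{b}\left(h-\frac{b}{2}\right)^2 + \frac{b}{4}\left(1-\frac{a}{2b}\right)^2
=
-\frac{h^2}{b} + h + \frac{a^2}{16b} - \frac{a}{4},
\end{equation*}
valid for all reals $h$, $a$ and $b>0$, applied with $h=h_v(x_j)$.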
\item[Case 2:]$M_1=M_2$.
Let $\varepsilon\in (0,1/2)$.
By \citep[Lemma 3.1]{mondino2019structure}, we can find a Lipschitz cut-off function $\psi:X\rightarrow \mathbb{R}$ with $\psi\in D(\Delta)$ such that $0\leq \psi\leq 1$ and $\psi\equiv 1$ on $B(x_0,R)$ and $\text{supp}(\psi)\subseteq B(x_0,2R)$, and
\begin{equation}\label{eq:Naber_original_cut_off}
R^2|\Delta \psi| + R|\nabla \psi|
\leq C,
\end{equation}
where $C>0$ is a constant depending only on $K,N,D$.
Let $\phi_{\varepsilon}:X\rightarrow \mathbb{R}$ be defined as $\phi_{\varepsilon}=1-\varepsilon + \varepsilon \psi$ and $G_{\varepsilon}:=\phi_{\varepsilon}\cdot g$.
Note that since $\phi_{\varepsilon}\in D(\Delta)$, it holds that $\bm{\Delta}\phi_{\varepsilon}=\Delta \phi_{\varepsilon}\cdot m$.
Together with the fact that $\phi_{\varepsilon}$ is continuous, by the Leibniz rule for measure-valued Laplacian, it follows that
\begin{equation}
\bm{\Delta}(G_{\varepsilon})
=
\phi_{\varepsilon} \bm{\Delta}g + g\Delta\phi_{\varepsilon}\cdot m + 2\langle \nabla\phi_{\varepsilon}, \nabla g \rangle\cdot m.
\end{equation}
Together with the result in the first step of this proof that $\bm{\Delta}^{s}g\geq 0$, we deduce that
\begin{equation}
\bm{\Delta}^{s}(G_{\varepsilon})
=
\phi_{\varepsilon}\bm{\Delta}^{s}g
\geq 0.
\end{equation}
Furthermore, from \eqref{eq:Naber_original_cut_off}, it follows that
\begin{equation}\label{eq:Naber_cut_off}
\frac{|\nabla \phi_{\varepsilon}|^2}{\phi_{\varepsilon}}
\leq
\varepsilon^2\frac{C}{R^2}
\quad \text{and} \quad
|\Delta \phi_{\varepsilon}|
\leq \varepsilon \frac{C}{R^2}.
\end{equation}
Denote by $H_v$ the right-hand side of \eqref{eq:est_Bochner_ineq_AC} without the last term $-2\langle \nabla v, \nabla g\rangle$.
Then, by using $g=G_{\varepsilon}/\phi_{\varepsilon}$ and inequality \eqref{eq:est_Bochner_ineq_AC}, it follows that
\begin{multline}
\bm{\Delta}^{ac}(G_{\varepsilon})
=
\phi_{\varepsilon}\bm{\Delta}^{ac}g + \frac{G_{\varepsilon}}{\phi_{\varepsilon}}\Delta \phi_{\varepsilon} + 2\left\langle \nabla\phi_{\varepsilon}, \nabla\left(\frac{G_{\varepsilon}}{\phi_{\varepsilon}}\right)\right\rangle\\
\geq
\phi_{\varepsilon}\left(H_v - 2\left\langle \nabla v, \nabla g\right\rangle\right)
+2\left\langle \nabla \phi_{\varepsilon}, \nabla G_{\varepsilon}\right\rangle/\phi_{\varepsilon}
+\frac{G_{\varepsilon}}{\phi_{\varepsilon}}\left( \Delta \phi_{\varepsilon}-2\frac{|\nabla \phi_{\varepsilon}|^2}{\phi_{\varepsilon}}\right)
\end{multline}
Using estimate \eqref{eq:Naber_cut_off} with $\varepsilon^2<\varepsilon$, it follows that
\begin{multline}\label{eq:Li_Yau_add_3}
\bm{\Delta}^{ac}(G_{\varepsilon})
\geq
\phi_{\varepsilon}H_v -2\phi_{\varepsilon}\left( \left\langle \nabla v, \frac{\nabla G_{\varepsilon}}{\phi_{\varepsilon}}\right\rangle - \left\langle \nabla v, \frac{G_{\varepsilon}\nabla \phi_{\varepsilon}}{\phi_{\varepsilon}^2}\right\rangle\right)\\
+ 2\left\langle \nabla \phi_{\varepsilon}, \nabla G_{\varepsilon}\right\rangle/\phi_{\varepsilon} - \varepsilon\|g\|_{\infty}\frac{C}{R^2}\\
\geq
\phi_{\varepsilon}H_v - 2\left\langle \nabla (v - \log \phi_{\varepsilon}), \nabla G_{\varepsilon} \right\rangle + 2\frac{G_{\varepsilon}}{\phi_{\varepsilon}}\langle \nabla v, \nabla \phi_{\varepsilon}\rangle - \varepsilon\|g\|_{\infty}\frac{C}{R^2}\\
\geq
\phi_{\varepsilon}H_v - 2\langle \nabla(v-\log \phi_{\varepsilon}), \nabla G_{\varepsilon}\rangle - \varepsilon\|g\|_{\infty}\| |\nabla v|\|_{\infty}\frac{C}{R} - \varepsilon\|g\|_{\infty}\frac{C}{R^2}.
\end{multline}
Since $\phi_{\varepsilon}=1-\varepsilon$ on $X\setminus B(x_0,2R)$, $\phi_{\varepsilon}=1$ on $B(x_0,R)$, and $\esssup_X g=M_1>0$, by definition of $G_{\varepsilon}$ it follows that
\begin{equation}
\esssup_{X\setminus B(x_0,2R)}G_{\varepsilon}
\leq
(1-\varepsilon)M_1
<
M_1
=
\esssup_{B(x_0,R)}G_{\varepsilon}
=
\esssup_{B(x_0,2R)}G_{\varepsilon}.
\end{equation}
The doubling property and $R<D/4$ imply that $m(B(x_0,2R))>0$ as well as $m(X\setminus B(x_0,2R))>0$, which, together with the fact that $\bm{\Delta}^{s}(G_{\varepsilon})\geq 0$, allows us to apply Lemma \ref{lemma:maximum_principle} to $G_{\varepsilon}$.
Taking $w_{\varepsilon}=2v - 2\log \phi_{\varepsilon}\in W^{1,2}\cap \text{Lip}_b$, by Lemma \ref{lemma:maximum_principle}, it follows that there exists a sequence $(x_{j})\subseteq X$ such that $G_{\varepsilon}(x_{j})>\esssup_{X}G_{\varepsilon}-1/j$ and
\begin{equation}
\bm{\Delta}^{ac}G_{\varepsilon}(x_{j}) + \langle \nabla G_{\varepsilon}, \nabla w_{\varepsilon}\rangle(x_{j})
\leq 1/j.
\end{equation}
Plugging \eqref{eq:Li_Yau_add_3} into the above, we obtain that
\begin{equation}
(1-\varepsilon)H_v(x_j)
\leq
\phi_{\varepsilon}(x_j)H_v(x_j)
\leq
\frac{1}{j} + \varepsilon C_1,
\end{equation}
where $C_1=C(K,N,D)\|g\|_{\infty}\left( \||\nabla v|\|_{\infty}/R + 1/R^2\right)$.
Following a similar argument as in Case 1, we obtain that
\begin{equation}\label{eq:Li_Yau_add_4}
g(x_j)
\leq
\frac{b}{4}\left(1-\frac{a}{2b}\right)^2 + \sqrt{\frac{N(j^{-1}+\varepsilon C_1)}{2(1-\varepsilon)}}.
\end{equation}
Since $\esssup_{X}G_{\varepsilon}=\esssup_{X}g=M_1$, multiplying both sides of \eqref{eq:Li_Yau_add_4} by $\phi_{\varepsilon}(x_j)$ and letting $j\rightarrow \infty$, it follows that
\begin{equation}\label{eq:est_6}
\esssup_{X}g
\leq
\frac{b}{4}\left(1-\frac{a}{2b}\right)^2 + \sqrt{\frac{\varepsilon NC_1}{2(1-\varepsilon)}}.
\end{equation}
Since inequality \eqref{eq:est_6} holds for any $0<\varepsilon <1/2$, we send $\varepsilon$ to $0$ to obtain
\begin{equation}
\esssup_{X}g
\leq
\frac{b}{4}\left(1-\frac{a}{2b}\right)^2.
\end{equation}
\end{enumerate}
Combining Case 1 and Case 2 with the definitions of $a$ and $b$, we obtain inequality \eqref{eq:Li_Yau_result_1}.
\end{enumerate}
\end{proof}
\begin{proof}[Proof of Corollary \ref{coro:Li_Yau_extremal}]
By Theorem \ref{thm:var_general} and Corollary \ref{corollary:var_log_transform}, we know that $v:=\log u + (\lambda-\alpha_1)/\alpha_2$ belongs to $\text{Lip}\cap D(\Delta)$ and satisfies $-\Delta v=|\nabla v|^2 + \alpha_2 v$.
Hence inequality \eqref{eq:Li_Yau_result_1} holds for $v$.
If $\alpha_2\in (0,K]$, take $0<\beta< \alpha_2/K$ and let $\beta\nearrow \alpha_2/K$ in \eqref{eq:Li_Yau_result_1}.
Then it follows from \eqref{eq:Li_Yau_result_1} that $|\nabla \log u|^2=0$.
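The limiting values can be checked explicitly. When $\alpha_2<K$, the bracket in \eqref{eq:Li_Yau_result_1} vanishes at $\beta=\alpha_2/K$, since
\begin{equation*}
1-\frac{\beta\left((2-\beta)K-\alpha_2\right)}{2\alpha_2(1-\beta)}\bigg|_{\beta=\alpha_2/K}
=
1-\frac{(\alpha_2/K)\cdot 2(K-\alpha_2)}{2\alpha_2(K-\alpha_2)/K}
=
0,
\end{equation*}
while for $\alpha_2=K$ the prefactor $N\alpha_2(1-\beta)/(4\beta)$ tends to $0$ as $\beta\nearrow 1$.
In either case the right-hand side of \eqref{eq:Li_Yau_result_1} vanishes in the limit, and the coefficient $\alpha_2-\beta K$ on the left-hand side tends to $0$ as well, leaving $|\nabla \log u|^2\leq 0$ $m$-a.e.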
By the fact that $|\nabla \log u|^2\in W^{1,2}$ and the Sobolev-to-Lipschitz property, it follows that $u$ is constant.
\end{proof}
\section{Applications}
In this section, we present applications of the regularity result in Theorem \ref{thm:var_general} and Li-Yau type estimate in Theorem \ref{thm:estimate_Li_Yau}, which generalize results in \citep{wang1999harnack} in smooth Riemannian setting to the $\text{RCD}^*(K,N)$ setting.
Since any Lipschitz solution of $-\Delta v=|\nabla v|^2 +\alpha v$ is constant when $\alpha\in (0,K]$, as shown in Theorem \ref{thm:estimate_Li_Yau}, we only focus on the case $\alpha>K$.
For notational simplicity, we define the non-negative constants $C_1$ and $C_2$ as follows
\begin{equation}
C_1(\beta):=\alpha - \beta K
\quad\text{and}\quad
C_2(\beta):= \frac{N\alpha(1-\beta)}{4\beta}\left(1-\frac{\beta((2-\beta)K-\alpha)}{2\alpha(1-\beta)}\right)^2,
\end{equation}
where $0<\beta<1$.
Note that $C_1$ is positive.
The first direct consequence is a Harnack-type inequality for the non-negative extremal functions.
\begin{corollary}\label{coro:app_1}
Let $(X,d,m)$ be a $\text{RCD}^*(K,N)$ space with $K>0$ and $N$ in $(2,\infty)$.
Suppose that $v\in \text{Lip}$ satisfies $-\Delta v=|\nabla v|^2 +\alpha v$ for some $\alpha>K$.
Then for any $x,y\in X$, it holds that
\begin{equation}
e^{v(x)}
\leq
e^{(1-\varepsilon)v(y)}\exp\left(\frac{C_1(\beta) d^2(x,y)}{4\varepsilon} + \frac{\varepsilon C_2(\beta)}{C_1(\beta)}\right),
\end{equation}
for any $0<\varepsilon<1$.
\end{corollary}
\begin{proof}
First note that by Theorem \ref{thm:estimate_Li_Yau}, we have
\begin{equation}
|\nabla v |
\leq
\sqrt{C_2(\beta)-C_1(\beta)v},\quad m\text{-a.e.}
\end{equation}
Let $x_0$ and $y_0$ be arbitrary points in $X$, and let $\mu_0=\frac{1}{m(B(x_0,r))}m|_{B(x_0,r)}$ and $\mu_1=\frac{1}{m(B(y_0,r))}m|_{B(y_0,r)}$ for $0<r<\text{diam}(X)/3$.
By \citep[Proposition 1.5]{brue2020constancy} and the references therein, $\text{RCD}^*(K,N)$ spaces have the essentially non-branching property.
As a consequence, there exists a unique $W_2$-geodesic $(\mu_t)_{t\in [0,1]}$ joining $\mu_0$ and $\mu_1$ with $\mu_t\leq Cm$ for any $t$ in $[0,1]$ and some $C>0$, and a test plan $\pi$ in $\mathcal{P}(C([0,1];X))$ such that $(e_t)_{\sharp}\pi=\mu_t$ for any $t$ in $[0,1]$ and $\pi$ is concentrated on the set of geodesics.
Let $\gamma\in \text{supp}(\pi)$ be an arbitrary geodesic.
By the continuity of $v$, let $s_1$ and $s_2$ in $[0,1]$ be points where $v\circ\gamma$ attains its maximum and minimum, respectively, that is
\begin{equation}
v(\gamma_{s_1})=\max_{s\in[0,1]}v(\gamma_s)
\quad\text{and}\quad
v(\gamma_{s_2})=\min_{s\in [0,1]}v(\gamma_s).
\end{equation}
Then by the definition of minimal weak upper gradient, it follows that
\begin{multline}
\left| v(\gamma_{s_1}) - v(\gamma_{s_2}) \right|
\leq
\int_{\min(s_1,s_2)}^{\max(s_1,s_2)}|\nabla v|(\gamma_t)|\dot{\gamma}_t|dt
\leq
d(\gamma_0,\gamma_1)\sqrt{C_2(\beta)-C_1(\beta)v(\gamma_{s_2})}.
\end{multline}
For $0<\varepsilon<1$, together with $v(\gamma_0)\leq v(\gamma_{s_1})$, it follows that
\begin{equation}
v(\gamma_0)
\leq
(1-\varepsilon)v(\gamma_{s_2}) + \varepsilon v(\gamma_{s_2}) + d(\gamma_0,\gamma_1)\sqrt{C_2(\beta)-C_1(\beta)v(\gamma_{s_2})}.
\end{equation}
Setting $a=\sqrt{C_2(\beta)-C_1(\beta)v(\gamma_{s_2})}$, we get
\begin{multline}\label{eq:app_coro1_1}
v(\gamma_0)
\leq
(1-\varepsilon)v(\gamma_{s_2}) - \frac{\varepsilon}{C_1(\beta)}a^2 + d(\gamma_0,\gamma_1)a + \frac{\varepsilon C_2(\beta)}{C_1(\beta)}\\
\leq
(1-\varepsilon)v(\gamma_{1}) + \frac{C_1(\beta) d^2(\gamma_0,\gamma_1)}{4\varepsilon} + \frac{\varepsilon C_2(\beta)}{C_1(\beta)}.
\end{multline}
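The second inequality in \eqref{eq:app_coro1_1} follows by maximizing the concave quadratic in $a$, together with $v(\gamma_{s_2})\leq v(\gamma_1)$:
\begin{equation*}
-\frac{\varepsilon}{C_1(\beta)}a^2 + d(\gamma_0,\gamma_1)\,a
\leq
\frac{C_1(\beta) d^2(\gamma_0,\gamma_1)}{4\varepsilon},
\end{equation*}
the maximum being attained at $a = C_1(\beta)d(\gamma_0,\gamma_1)/(2\varepsilon)$.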
Integrating \eqref{eq:app_coro1_1} with respect to $\pi$ on both sides and noting that $(e_t)_{\sharp}\pi=\mu_t$, it follows that
\begin{equation}
\int v(x)d\mu_0
\leq
\int (1-\varepsilon)v(y) + \frac{C_1(\beta) d^2(x,y)}{4\varepsilon} + \frac{\varepsilon C_2(\beta)}{C_1(\beta)}d\tilde{\pi}(x,y),
\end{equation}
where $\tilde{\pi}=(e_0,e_1)_{\sharp}\pi$ is the optimal transport plan between $\mu_0$ and $\mu_1$.
Since $\mu_0$ and $\mu_1$ converge weakly to $\delta_{x_0}$ and $\delta_{y_0}$ in duality with $C_b$ as $r\rightarrow 0$, it follows that, up to a subsequence, $\tilde{\pi}$ converges weakly to $\delta_{x_0}\times \delta_{y_0}$.
Using $u=\exp(v)$, we obtain that
\begin{equation}
u(x_0)
\leq
u(y_0)^{1-\varepsilon}\exp\left(\frac{C_1(\beta) d^2(x_0,y_0)}{4\varepsilon} + \frac{\varepsilon C_2(\beta)}{C_1(\beta)}\right).
\end{equation}
\end{proof}
\begin{remark}
Note that, using the deep result proved in \cite{cheeger1999} that the minimal weak upper gradient of a Lipschitz function coincides with its local Lipschitz slope in a complete doubling metric space supporting a weak $(1,p)$-Poincaré inequality for some $p>1$, Corollary \ref{coro:app_1} can be shown directly following the lines of \citep[Corollary 2.2]{wang1999harnack} instead of using optimal transport methods.
\end{remark}
The next corollary provides upper and lower bounds on $v$ based on the dimension-free Harnack inequality in $\text{RCD}(K,\infty)$ spaces from \citep{huaiqian2016dimension}.
The proof is essentially the same as in the Riemannian case, see \citep[Corollary 2.4]{wang1999harnack}.
For the sake of completeness, we provide it.
\begin{corollary}\label{coro:app_2}
Let $(X,d,m)$ be a $\text{RCD}^*(K,N)$ space with $K>0$ and $N$ in $(2,\infty)$.
Then for any non-negative extremal function $u$ of \eqref{eq:var_pbm_general} with log Sobolev constant $\lambda(\alpha_1,\alpha_2)$ with $\alpha_2>K$, it holds that
\begin{align}
&\sup_{X}\log u \leq \frac{\lambda-\alpha_1}{\alpha_2} + \alpha_2 \text{diam}(X)^2,\\
&\inf_{X}\log u
\geq
-\frac{27}{16}\alpha_2 \text{diam}(X)^2 - \frac{\lambda-\alpha_1}{\alpha_2}.
\end{align}
In particular, it holds that $|\nabla \log u|^2 \leq C_2(\beta)+\frac{27}{16}C_1(\beta)\alpha_2 \text{diam}(X)^2$ $m$-a.e. for any $0<\beta<1$.
\end{corollary}
\begin{proof}
Let $D:=\text{diam}(X)$ and let $x$ and $y$ be a maximum and a minimum point of $u$ on $X$, respectively, whose existence is guaranteed by the regularity of extremal functions proved in Theorem \ref{thm:var_general}.
As for the upper bound, firstly, by the dimension-free Harnack inequality in $\text{RCD}(K,\infty)$ shown in \citep[Theorem 3.1]{huaiqian2016dimension} for $p>1$, it follows that
\begin{equation}\label{eq:coro_app2_1}
\left(P_t u(x)\right)^p
\leq
P_t(u^p)(z) \exp\left(\frac{pKd^2(x,z)}{2(p-1)(e^{2Kt}-1)}\right),\quad \forall z\in X.
\end{equation}
Since $\|P_t u^2\|_1=\|u^2\|_1=1$ by the mass-preserving property of heat semigroup, we deduce from \eqref{eq:coro_app2_1} by taking $p=2$ that
\begin{equation}\label{eq:coro_app2_2}
\left(P_{t}u(x)\right)^2
\leq
\exp\left(\frac{KD^2}{e^{2Kt}-1}\right).
\end{equation}
Secondly, using the heat equation, equation \eqref{eq:var_pbm_pde}, and the commutation between $P_t$ and $\Delta$, it follows that
\begin{multline}
P_tu(x) - P_su(x)
=
\int_{s}^{t}P_{\tau}\Delta u(x)d\tau
=
-\int_{s}^{t}P_\tau\left(\alpha_2 u\log u +(\lambda-\alpha_1)u\right)(x)d\tau
\end{multline}
for any $0<s<t$.
Since $u$ is positive, it follows that $(u\log u)(z)\leq u(z)\log u(x)$ for any $z$ in $X$, which, by the comparison principle for the heat semigroup, yields
\begin{equation}
P_tu(x) - P_su(x)
\geq
-\alpha_2\log u(x)\int_{s}^{t}P_{\tau}u(x)d\tau
- (\lambda-\alpha_1)\int_{s}^{t}P_{\tau}u(x)d\tau.
\end{equation}
Gronwall's inequality further implies that
\begin{equation}\label{eq:coro_app2_x}
P_tu(x)
\geq
u(x)\exp\left(-\alpha_2 t \log u(x) -(\lambda-\alpha_1)t\right)
=
e^{-(\lambda-\alpha_1)t}u(x)^{1-\alpha_2 t}.
\end{equation}
So, together with \eqref{eq:coro_app2_2}, it follows that for any $0<t <1/\alpha_2$:
\begin{equation}
\log u(x)
\leq
\frac{(\lambda-\alpha_1)t}{1-\alpha_2t}+\frac{KD^2}{2(1-\alpha_2t)(e^{2Kt}-1)}
\leq
\frac{(\lambda-\alpha_1)t}{1-\alpha_2t}+\frac{D^2}{4(1-\alpha_2t)t},
\end{equation}
where we used $K/(e^{2Kt}-1)\leq 1/(2t)$ in the last inequality, which for $t=1/(2\alpha_2)$ yields
\begin{equation}
\log u(x) \leq \frac{\lambda-\alpha_1}{\alpha_2} + \alpha_2 D^2.
\end{equation}
As for the lower bound, from $(u\log u)(z)\geq u(z)\log u(y)$ and arguments similar to those above, it follows that
\begin{equation}\label{eq:coro_app2_y}
P_tu(y)
\leq
e^{-(\lambda-\alpha_1)t}u(y)^{1-\alpha_2 t}.
\end{equation}
Taking $z=y$ in the dimension-free inequality \eqref{eq:coro_app2_1} and plugging in \eqref{eq:coro_app2_x} and \eqref{eq:coro_app2_y}, it follows that
\begin{multline}
e^{-p(\lambda-\alpha_1)t}u(x)^{p-p\alpha_2 t}
\leq
\left(P_tu(x)\right)^p
\leq
P_t(u^p)(y)\exp\left(\frac{pKD^2}{2(p-1)(e^{2Kt}-1)}\right)\\
\leq
u(x)^{p-1}(P_tu)(y)\exp\left(\frac{pKD^2}{2(p-1)(e^{2Kt}-1)}\right)\\
\leq
u(x)^{p-1}e^{-(\lambda-\alpha_1)t}u(y)^{1-\alpha_2t}\exp\left(\frac{pKD^2}{2(p-1)(e^{2Kt}-1)}\right),
\end{multline}
which implies that
\begin{equation}
u(x)^{1-p\alpha_2 t}
\leq
e^{(p-1)(\lambda-\alpha_1)t}u(y)^{1-\alpha_2t}\exp\left(\frac{pKD^2}{2(p-1)(e^{2Kt}-1)}\right).
\end{equation}
Taking $p=1/(\alpha_2t)$ and $t=1/(3\alpha_2)$, we obtain that
\begin{equation}
\log u(y)
\geq
-\frac{27}{16}\alpha_2 D^2 - \frac{\lambda-\alpha_1}{\alpha_2}.
\end{equation}
\end{proof}
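As a sanity check on the algebra behind the constants $27/16$ and $(\lambda-\alpha_1)/\alpha_2$, the following Python snippet (our own verification, with illustrative names, outside the proof proper) confirms in exact rational arithmetic that, after taking logarithms and using $K/(e^{2Kt}-1)\leq 1/(2t)$, the choice $p=1/(\alpha_2 t)$ with $t=1/(3\alpha_2)$, i.e.\ $p=3$, yields precisely the stated lower bound for $\log u(y)$:

```python
from fractions import Fraction as F

def lower_bound(lam_minus_a1, a2, D2):
    # p = 1/(a2*t) with t = 1/(3*a2), i.e. p = 3, so the coefficient
    # 1 - p*a2*t of log u(x) vanishes; solving
    #   0 <= (p-1)*(lam-a1)*t + (1 - a2*t)*log u(y) + p*D2/(4*(p-1)*t)
    # for log u(y) gives the bound below.
    p = F(3)
    t = 1 / (3 * a2)
    assert 1 - p * a2 * t == 0
    return -((p - 1) * lam_minus_a1 * t + p * D2 / (4 * (p - 1) * t)) / (1 - a2 * t)

# the bound agrees with -(lam - a1)/a2 - (27/16) a2 D^2 at sample rational values
for la, a2, D2 in [(F(1), F(1, 2), F(4)), (F(7, 3), F(5), F(9, 2))]:
    assert lower_bound(la, a2, D2) == -la / a2 - F(27, 16) * a2 * D2
```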
As a final application, following similar methods as in \cite[Lemma 5.3]{profeta2015sharp} together with Theorem \ref{thm:var_general}, we recover the classical result from \cite[Theorem 5.7.4]{bakry2014analysis} that any non-negative extremal function with log-Sobolev constant $\lambda(\alpha_1,\alpha_2)$ is constant when $0<\alpha_2\leq KN/(N-1)$.
\begin{corollary}
Let $(X,d,m)$ be an $\text{RCD}^*(K,N)$ space with $K>0$ and $N\in (2,\infty)$.
Then any non-negative extremal function of \eqref{eq:var_pbm_general} with positive log-Sobolev constant $\lambda(\alpha_1, \alpha_2)$ is constant whenever $0< \alpha_2 \leq KN/(N-1)$.
\end{corollary}
\begin{proof}
Let $u$ be an arbitrary non-negative extremal function of \eqref{eq:var_pbm_general} with log-Sobolev constant $\lambda(\alpha_1,\alpha_2)$, where $0<\alpha_2 \leq KN/(N-1)$, and define $v=\log u$.
Let further $a$, $b$, and $d$ be reals to be determined later.
We first estimate $\int e^{bv}(\Delta v)^2dm$ from both the PDE \eqref{eq:var_pbm_pde} and the Bochner inequality, and then derive the desired result.
\begin{enumerate}[label=\textbf{\textsc{Step \arabic*:}}, fullwidth]
\item We first derive an estimate from the PDE \eqref{eq:var_pbm_pde}.
Let $\alpha_3:=\lambda-\alpha_1$ in \eqref{eq:var_pbm_pde}.
By the Lipschitz regularity of $v$ and the regularity of $|\nabla v|^2$ proved in Step 1 of the proof of Theorem \ref{thm:estimate_Li_Yau}, it follows that $\phi:=e^{(b-1)v}\Delta v\in W^{1,2}$ and $\psi:=e^{(b-1)v}|\nabla v|^2\in W^{1,2}$.
On the one hand, applying $\phi$ and then $\psi$ to the right hand side of \eqref{eq:var_pbm_pde}, it follows that
\begin{multline}\label{eq:app3_1}
I:= \int (\alpha_2 v + \alpha_3)e^{v}\phi dm = \int (\alpha_2 v + \alpha_3)e^{v}e^{(b-1)v}\Delta v dm\\
=
-\alpha_2 \int |\nabla v|^2 e^{bv}dm - b\int (\alpha_2 v+\alpha_3)|\nabla v|^2 e^{bv}dm\\
=-\alpha_2 \int |\nabla v|^2 e^{bv}dm - b\int (\alpha_2 v+\alpha_3)e^v\psi dm\\
=-\alpha_2 \int |\nabla v|^2 e^{bv}dm + b\int \Delta(e^v)e^{(b-1)v}|\nabla v|^2 dm\\
=-\alpha_2 \int |\nabla v|^2 e^{bv}dm + b\int |\nabla v|^4 e^{bv}dm +b \int (\Delta v)|\nabla v|^2 e^{bv}dm.
\end{multline}
On the other hand, applying $\phi$ to the left hand side of \eqref{eq:var_pbm_pde}, we get
\begin{equation}
I=
\int -\Delta(e^v) e^{(b-1)v}\Delta v dm
=
-\int (\Delta v)|\nabla v|^2 e^{bv}dm - \int (\Delta v)^2 e^{bv}dm.
\end{equation}
Combining the two expressions for $I$, it follows that
\begin{equation}\label{eq:app3_ineq_1}
\int (\Delta v)^2 e^{bv}dm
=
\alpha_2 \int |\nabla v|^2 e^{bv}dm - b\int |\nabla v|^4 e^{bv}dm -(b+1)\int (\Delta v)|\nabla v|^2 e^{bv}dm.
\end{equation}
\item We derive the estimate from the Bochner inequality \eqref{eq:weak_Bochner}.
Note that $f:=e^{av}$ and $g:=e^{dv}$ satisfy the regularity requirement in \eqref{eq:weak_Bochner}.
So plugging $f$ and $g$ into \eqref{eq:weak_Bochner}, it follows that the left hand side of \eqref{eq:weak_Bochner} can be expressed as
\begin{equation}\label{eq:app3_3}
\frac{1}{2}\int \Delta(g)|\nabla f|^2dm
=
\frac{a^2d}{2}\int e^{(2a+d)v}(\Delta v)|\nabla v|^2 dm + \frac{a^2d^2}{2}\int e^{(2a+d)v}|\nabla v|^4dm,
\end{equation}
and the right hand side of \eqref{eq:weak_Bochner} can be expressed as
\begin{multline}\label{eq:app3_4}
-\frac{N-1}{N}\int g(\Delta f)^2 dm -\int \Delta f \langle \nabla g, \nabla f \rangle dm + K\int g|\nabla f|^2dm\\
=
-a^2\frac{N-1}{N}\int e^{(2a+d)v}(\Delta v)^2 dm
- a^2\left(2a\frac{N-1}{N}+d\right)\int e^{(2a+d)v}(\Delta v)|\nabla v|^2 dm \\
- a^2\left(a^2\frac{N-1}{N}+ad\right)\int e^{(2a+d)v}|\nabla v|^4 dm + a^2 K \int e^{(2a+d)v}|\nabla v|^2 dm.
\end{multline}
From \eqref{eq:app3_3} and \eqref{eq:app3_4} it follows that \eqref{eq:weak_Bochner} reads as follows
\begin{multline}\label{eq:app3_ineq_2}
\int e^{(2a+d)v}(\Delta v)^2 dm
\geq
\frac{KN}{N-1}\int e^{(2a+d)v}|\nabla v|^2 dm\\
-
\left(a^2 + \frac{N}{N-1}ad + \frac{N}{2(N-1)}d^2\right)\int e^{(2a+d)v}|\nabla v|^4 dm\\
- \left(2a + \frac{3N}{2(N-1)}d\right)\int e^{(2a+d)v}(\Delta v)|\nabla v|^2 dm.
\end{multline}
\item We conclude by comparing the coefficients of \eqref{eq:app3_ineq_1} and \eqref{eq:app3_ineq_2} and choosing particular values for $b$, $a$, and $d$.
Let $b$, $a$, $d$, and $\varepsilon>0$ satisfy the following system of equations:
\begin{equation}\label{eq:app3_system}
\left\{
\begin{aligned}
&b = 2a +d\\
&b-\varepsilon = a^2 +\frac{N}{N-1}ad + \frac{N}{2(N-1)}d^2\\
&b+1 = 2a + \frac{3N}{2(N-1)}d.
\end{aligned}
\right.
\end{equation}
This system \eqref{eq:app3_system} admits real-valued solutions if and only if $0<\varepsilon\leq 4N/(N+2)^2$.
Hence, choosing an arbitrary $\varepsilon\in (0, 4N/(N+2)^2]$ and
\begin{align*}
d &= \frac{2(N-1)}{N+2},\\
a &=\frac{2}{N+2} + \sqrt{\frac{4N}{(N+2)^2}-\varepsilon},\\
b &= 2a+d,
\end{align*}
by comparing \eqref{eq:app3_ineq_1} and \eqref{eq:app3_ineq_2}, it follows that
\begin{equation}
\left(\alpha_2 - \frac{KN}{N-1}\right)\int |\nabla v|^2 e^{bv}dm
\geq
\varepsilon \int |\nabla v|^4 e^{bv}dm.
\end{equation}
If $0<\alpha_2\leq KN/(N-1)$, then it follows that $|\nabla v|$ has to be $0$, implying that $u$ is constant.
\end{enumerate}
\end{proof}
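The claim that the displayed values of $d$, $a$, and $b$ solve the system \eqref{eq:app3_system} can be checked mechanically. The following Python snippet (our own verification, with illustrative names) evaluates the residuals of the three equations in exact rational arithmetic at $N=3$, choosing $\varepsilon$ so that the square root in $a$ is rational:

```python
from fractions import Fraction as F

def residuals(N, a, d, eps):
    # residuals of the system b = 2a + d,
    # b - eps = a^2 + N/(N-1) a d + N/(2(N-1)) d^2,
    # b + 1 = 2a + 3N/(2(N-1)) d; all three should vanish
    b = 2 * a + d
    r1 = b - (2 * a + d)
    r2 = (b - eps) - (a**2 + F(N, N - 1) * a * d + F(N, 2 * (N - 1)) * d**2)
    r3 = (b + 1) - (2 * a + F(3 * N, 2 * (N - 1)) * d)
    return (r1, r2, r3)

# N = 3: d = 2(N-1)/(N+2) = 4/5 and a = 2/(N+2) + sqrt(4N/(N+2)^2 - eps).
# With eps = 11/25 the radicand is 12/25 - 11/25 = 1/25, so a = 2/5 + 1/5 = 3/5;
# with eps = 12/25 (the largest admissible value) the radicand is 0 and a = 2/5.
assert residuals(3, F(3, 5), F(4, 5), F(11, 25)) == (0, 0, 0)
assert residuals(3, F(2, 5), F(4, 5), F(12, 25)) == (0, 0, 0)
```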
\begin{appendix}
\section{Omori-Yau Maximum Principle}\label{appendix}
In this appendix, we provide a slightly generalized version of the Omori-Yau type maximum principle on the whole of an $\text{RCD}(K,\infty)$ space with $K\in\mathbb{R}$, which may not satisfy the doubling property.
To this end, we first prove Kato's inequality in the proper $\text{RCD}(K,\infty)$ setting, whose proof follows a similar argument as in \cite{zhang2016local}.
For the sake of completeness, we provide the full argument.
Beforehand, we recall the definition of the weak Laplacian.
We say an operator $L$ on $W^{1,2}_{loc}$ is the weak Laplacian provided that for each $f\in W^{1,2}_{loc}$, $Lf$ is a linear functional acting on $W^{1,2}\cap L^{\infty}$ with bounded support given by:
\begin{equation}
Lf(g):=
-\int \langle \nabla f, \nabla g \rangle dm,\quad \forall g\in W^{1,2}\cap L^{\infty}\text{ with bounded support}.
\end{equation}
For each $h$ in $W^{1,2}_{loc}\cap L^{\infty}$, $h\cdot Lf$ is the linear functional given by $h\cdot Lf(g):=Lf(hg)$ for each $g$ in $W^{1,2}\cap L^{\infty}$ with bounded support.
We say that $Lf$ is a signed Radon measure provided that there exists a signed Radon measure $\mu$ such that $Lf(g)=\int g d\mu$ for all $g\in W^{1,2}\cap L^{\infty}$ with bounded support.
It is clear that, in this case, we have $f\in D(\bm{\Delta})$ and $Lf=\bm{\Delta}f$.
For $w\in W^{1,2}\cap L^{\infty}$ and $m_w:=e^{w}\cdot m$, $L_w$ denotes the weighted weak Laplacian on $W^{1,2}_{loc}$ given by
\begin{equation}
L_wf(g):= - \int \langle \nabla f, \nabla g \rangle dm_w,\quad \text{for all }g\in W^{1,2}\cap L^{\infty}\text{ with bounded support}.
\end{equation}
It is easy to check that $L_wf=e^w\cdot(Lf + \langle \nabla w, \nabla f \rangle m)$.
When $L_wf$ is a signed Radon measure, we denote by $L_wf=(L^{ac}_wf)\cdot m_w + L_w^{s}f$ its Lebesgue decomposition with respect to $m_w$.
Finally, we remark that, using a similar argument as in \cite[Lemma 3.2]{zhang2016local}, we have the following chain rule: for $f\in W^{1,2}_{loc}\cap L^{\infty}$ and $\phi\in C^2(\mathbb{R})$, it holds that $\phi(f)\in W^{1,2}_{loc}\cap L^{\infty}$ and that
\begin{equation}
L\left( \phi(f)\right)
=
\phi'(f)\cdot Lf + \phi''(f)|\nabla f|^2\cdot m.
\end{equation}
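For instance, taking $\phi(t)=t^2$ in this chain rule gives
\begin{equation*}
L\left(f^2\right)
=
2f\cdot Lf + 2|\nabla f|^2\cdot m,
\end{equation*}
which is precisely the identity used, in its weighted form for $L_w$, in the proof of Lemma \ref{lemma:Kato_ineq} below.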
\begin{lemma}\label{lemma:Kato_ineq}
(Kato's inequality)
Let $(X,d,m)$ be a proper $\text{RCD}(K,\infty)$ space and $w$ in $W^{1,2}\cap L^{\infty}$.
Suppose $f$ in $W^{1,2}_{loc}\cap L^{\infty}$ is such that $L_wf$ is a signed Radon measure and that $L^{s}_{w}f\geq 0$.
Then $L_w(f_{+})$ is a signed Radon measure such that
\begin{equation}
L_w(f_+) \geq \chi_{\{f> 0\}}L_{w}^{ac}f \cdot m_w,
\end{equation}
where $f_+:=\max\{f,0\}$ and $f_{-}:=\max\{-f, 0\}$.
\end{lemma}
\begin{proof}
It suffices to prove that $L_w(|f|)$ is a signed Radon measure and that
\begin{equation}\label{eq:Kato_ineq}
L_{w}(|f|)\geq \text{sgn}(f)\cdot L_wf,
\end{equation}
where $\text{sgn}(t)=1$ if $t>0$, $\text{sgn}(t)=-1$ if $t<0$, and $\text{sgn}(t)=0$ if $t=0$.
Indeed, if inequality \eqref{eq:Kato_ineq} holds, then it follows that both $L_w(f_{+})$ and $L_w(f_{-})$ are signed Radon measures, and \eqref{eq:Kato_ineq} implies that
\begin{equation}\label{eq:Kato_ineq_2}
L_w(f_+) + L_w(f_{-})
\geq
\chi_{\{f>0\}}\cdot L_w f - \chi_{\{f<0\}}\cdot L_w f.
\end{equation}
By the locality of the minimal weak upper gradient and the inner regularity of Radon measures, it is immediate to check that $L_w(f_{+})$ is concentrated on the set $\{f\geq 0\}$.
Then inequality \eqref{eq:Kato_ineq_2} with the assumption that $L^{s}_{w}f \geq 0$ implies that
\begin{equation}
L_{w}(f_{+})
\geq
\chi_{\{f>0\}}L_{w}f
=
\chi_{\{f>0\}}\left( L^{ac}_wf\cdot m_w + L^{s}_wf\right)
\geq
\chi_{\{f>0\}}L^{ac}_w f\cdot m_w.
\end{equation}
We are left to show \eqref{eq:Kato_ineq}.
Let $\varepsilon>0$ and $\phi_{\varepsilon}(t):=\sqrt{t^2+\varepsilon^2}-\varepsilon$ and $f_{\varepsilon}:=\phi_{\varepsilon}(f)$.
Since $\phi_{\varepsilon}$ is in $C^2(\mathbb{R})$, $f_{\varepsilon}$ belongs to $W^{1,2}_{loc}\cap L^{\infty}$; moreover, $f_{\varepsilon}\leq |f|$ and
\begin{equation}\label{eq:Kato_ineq_3}
|\nabla f_{\varepsilon}|
=
|\phi'_{\varepsilon}(f)||\nabla f|
=
\frac{|f|}{\sqrt{f^2+\varepsilon^2}}|\nabla f|
\leq
|\nabla f|.
\end{equation}
Further, the chain rule of $L_w$ for $f\in W^{1,2}_{loc}\cap L^{\infty}$ and $\psi(t)=t^2$ yields
\begin{multline}
2f\cdot L_{w}f + 2|\nabla f|^2\cdot m_w
=
L_{w}f^2
=
L_{w}\left((f_{\varepsilon}+\varepsilon)^2 -\varepsilon^2 \right)\\
=
2(f_{\varepsilon}+\varepsilon)L_wf_{\varepsilon} + 2|\nabla f_{\varepsilon}|^2 \cdot m_w.
\end{multline}
So by \eqref{eq:Kato_ineq_3}, it follows that
\begin{equation}\label{eq:Kato_ineq_4}
L_w f_{\varepsilon}
\geq
\frac{f}{f_{\varepsilon}+\varepsilon}\cdot L_w f.
\end{equation}
Now, let $\Omega\subseteq X$ be any arbitrary bounded open subset.
Clearly, $(f_{\varepsilon})$ and $f$ are in $W^{1,2}(\Omega,m_w)$.
Since $|\nabla f_{\varepsilon}|\leq |\nabla f|$ and $0\leq f_{\varepsilon}\leq |f|$, it follows that $(f_{\varepsilon})$ is bounded in $W^{1,2}(\Omega,m_w)$.
Since $W^{1,2}(\Omega,m_w)$ is reflexive, there exists a subsequence $(f_{\varepsilon_j})$ with $\varepsilon_j\searrow 0$ converging weakly in $W^{1,2}(\Omega,m_w)$ to some $g$.
As $f_{\varepsilon_j}\rightarrow |f|$ in $L^2(\Omega,m_w)$, we obtain that $g=|f|$ $m_w$-a.e. on $\Omega$.
Since $f_{\varepsilon_j}(x)+\varepsilon_j \rightarrow |f|(x)$ for all $x$ in $X$ and $|f|/(f_{\varepsilon_j}+\varepsilon_j)\leq 1$, for any non-negative Lipschitz $\phi$ with $\text{supp}(\phi)\subset \Omega$, we obtain that
\begin{multline}\label{eq:Kato_ineq_5}
L_{\Omega}(\phi)
:=
-\int_{\Omega} \langle \nabla \phi, \nabla |f| \rangle dm_w
=
-\lim \int_{\Omega} \langle \nabla \phi, \nabla f_{\varepsilon_j} \rangle dm_w\\
=
\lim L_wf_{\varepsilon_j}(\phi)
\geq
\lim\int_{\Omega}\frac{\phi f}{f_{\varepsilon_j}+\varepsilon_j}dL_wf
=
\int_{\Omega} \phi \text{sgn}(f)dL_wf.
\end{multline}
Following a similar argument as in \citep[(4-28),(4-29) in Theorem 4.14]{cavalletti2020new}, there exists a constant $C_{\Omega}>0$ such that $|L_{\Omega}(\phi)|\leq C_{\Omega}\max|\phi|$ for any Lipschitz function $\phi$ with $\text{supp}(\phi)\subset \Omega$.
Since $X$ is proper, by Riesz representation theorem, there exists a signed Radon measure $\mu_{\Omega}$ on $\Omega$ such that $L_{\Omega}(\phi)=\int_{\Omega}\phi d\mu_{\Omega}$ for each $\phi$ in $\text{Lip}_{c}(\Omega)$ and that $\mu_{\Omega}\geq \text{sgn}(f)L_wf$ on $\Omega$, see the remark before Theorem 1.2 in \citep{zhang2016local} or \citep[Proposition 6.2.16]{gigli2020lectures} for finite signed Radon measures.
Clearly $\mu_{\Omega_1}$ and $\mu_{\Omega_2}$ coincide on $\Omega_1\cap \Omega_2$ for any bounded open subsets $\Omega_1$ and $\Omega_2$.
Hence, there exists a unique signed Radon measure $\nu$ on $X$ such that $\nu|_{\Omega}=\mu_{\Omega}$ for all bounded open domains $\Omega$, and thus we obtain that $L_w|f|$ is a signed Radon measure with $L_w|f| \geq \text{sgn}(f)\cdot L_wf$.
\end{proof}
We now show the Omori-Yau type maximum principle in $\text{RCD}(K,\infty)$ spaces, based on Kato's inequality proved above.
While the proof in \citep{zhang2016local} relies on the doubling property and the weak Poincaré inequality of the underlying metric measure space, together with the weak maximum principle, our proof is based on the ``Sobolev-to-Lip'' property of $\text{RCD}(K,\infty)$ spaces, which in general do not satisfy the doubling property.
\begin{lemma}\label{lemma:maximum_principle}
(Omori-Yau type maximum principle)
Let $(X,d,m)$ be a proper $\text{RCD}(K,\infty)$ space with $K$ in $\mathbb{R}$.
Let further $f\in W^{1,2}\cap L^{\infty} \cap D(\bm{\Delta})$ be such that $\bm{\Delta}^{s}f\geq 0$.
Suppose that $f$ attains a strict maximum in $X$, in the sense that there exists a bounded subset $U\subset X$ satisfying $m(U)>0$ and $m(X\setminus U)>0$ with
\begin{equation}
\esssup_{U}f > \esssup_{X\setminus U}f.
\end{equation}
Then, given any $w$ in $W^{1,2}\cap \text{Lip}_b$, for any $\varepsilon>0$, we have
\begin{equation}\label{eq:MP_1}
m\left( \left\{x\in X \colon f(x)\geq \esssup_{X}f-\varepsilon \text{ and } (\bm{\Delta}^{ac}f)(x) + \langle \nabla f, \nabla w \rangle(x)\leq \varepsilon \right\}\right)>0.
\end{equation}
In particular, there exists a sequence $(x_j)$ in $X$ such that $f(x_j)\geq \esssup_X f -1/j$ and $(\bm{\Delta}^{ac}f)(x_j)+\langle \nabla f,\nabla w\rangle(x_j)\leq 1/j$.
\end{lemma}
\begin{proof}
We adapt the proof in \citep{zhang2016local}.
Let $M:=\esssup_{X}f$.
Suppose by contradiction that there exists $\varepsilon_0>0$ and $w$ in $W^{1,2}\cap \text{Lip}_b$ such that \eqref{eq:MP_1} fails.
Then, possibly shrinking $\varepsilon_0>0$ so that $M-\varepsilon_0>\esssup_{X\setminus U}f$, it follows that $g:=(f-(M-\varepsilon_0))_{+}$ is in $W^{1,2}$ with $g= 0$ $m$-a.e.\ in $X\setminus U$, and that
\begin{equation}
m\left(\left\{x\in X: f(x)>M-\varepsilon_0\text{ and } \bm{\Delta}^{ac}f(x)+\langle \nabla f, \nabla w \rangle(x)\leq \varepsilon_0 \right\}\right)
=0.
\end{equation}
Then it follows that for $m$-a.e. $x$ in $\{y\in X: f(y)>M-\varepsilon_0\}$, we have
\begin{equation}
\bm{\Delta}^{ac}f(x) + \langle \nabla f, \nabla w \rangle(x)>\varepsilon_0.
\end{equation}
Note that since $w\in W^{1,2}\cap \text{Lip}_b$ and $\bm{\Delta}f$ is a signed Radon measure, it follows that $e^w(\bm{\Delta}f + \langle \nabla w, \nabla f\rangle m)$ is a well-defined signed Radon measure.
By the identity of weighted weak Laplacian $L_wf=e^w(Lf + \langle \nabla w, \nabla f \rangle m)$ and $Lf=\bm{\Delta}f$, it follows that
\begin{equation}
L^{ac}_wf \cdot m_w
=
e^{w}\left( L^{ac}f + \langle \nabla w, \nabla f\rangle \right)\cdot m
\geq
e^{-\|w\|_{L^\infty}}\varepsilon_0\cdot m
>0
\end{equation}
on $\{y\in X: f(y)> M-\varepsilon_0\}$.
Moreover, the identity of the weighted weak Laplacian together with $\bm{\Delta}^{s}f\geq 0$ implies $L^{s}_{w}f \geq 0$. Since $f-(M-\varepsilon_0)$ is in $W^{1,2}_{loc}\cap L^{\infty}$, applying Lemma \ref{lemma:Kato_ineq} to $f-(M-\varepsilon_0)$ yields
\begin{equation}\label{eq:maximum_Principle_1}
L_w g
=
L_w\left(f-(M-\varepsilon_0)\right)_+
\geq
\chi_{\{f> M-\varepsilon_0\}}L^{ac}_{w}f\cdot m_w
\geq
0.
\end{equation}
By definition of $L_w$, it follows from \eqref{eq:maximum_Principle_1} that
\begin{equation}
-\int_X \langle \nabla g, \nabla g \rangle dm_w
=
L_wg(g)
\geq 0,
\end{equation}
which implies that $|\nabla g|=0$ $m_w$-a.e.
Since $m$ is equivalent to $m_w$ (as $w\in L^{\infty}$), we get that $|\nabla g|=0$ $m$-a.e.
Together with $g$ being in $W^{1,2}$ and $|\nabla g|$ in $L^{\infty}$, by Sobolev-to-Lip property of $\text{RCD}(K,\infty)$ spaces, it follows that $g$ admits a Lipschitz version $\tilde{g}$ with $\text{Lip}(\tilde{g})\leq \| |\nabla g|\|_{\infty}$.
Then, from $|\nabla g|=0$ $m$-a.e.\ as well as $g=0$ $m$-a.e.\ on $X\setminus U$, it follows that $\tilde{g}$ is constant and $\tilde{g}\equiv 0$.
Hence, $f\leq M-\varepsilon_0$ $m$-a.e., which is a contradiction.
\end{proof}
\end{appendix}
\bibliographystyle{plainnat}
\section{Introduction}\label{sec:introduction}
Retrieving meaningful information from large amounts of data is a complex task, often impossible if those data have to be analyzed in their original domain. For this motivation, compact representations have been studied in different frameworks, ranging from information retrieval (see, e.g., \cite{achlioptas2003database,AndoniLSH2008,Toothpic_TMM}) to signal processing (see, e.g., \cite{Dono06,Cand06b}).
In information retrieval, hash functions are widely used to map data into compact representations, see, e.g., \cite{wan18} and references therein. Traditional hash functions compress arbitrary data to fixed length representations, preserve exact matches, and minimize collisions between different objects. An important purpose of hashing techniques is the evaluation of the similarity between sets of generic objects. This has many applications in information retrieval. An example is the search of near-duplicate documents: this can be performed by finding the number of bag-of-words shared by different documents, which are said to be near-duplicate if this number overcomes a given threshold. A popular technique to measure set similarity is the min-wise hashing (also known as MinHash) proposed by \cite{Broder1997}, and further analysed by \cite{Broder2000,Indyk1999,fast_similiary_sketching,beyond_minhash,exact_weighted_minhash}. MinHash approximately preserves the Jaccard coefficient, which is a popular similarity metric for bag-of-words and similar representations, and is used in a wide range of applications, see, e.g., \cite{Ping_Li_bbit}.
In signal processing, compact signal representations are usually referred to as embeddings, see \cite{Jacques2013,Boufounos2011Secure,adaptive_embedding}. Formally, an embedding is a transformation that maps a set of signals in a high dimensional space to a set in a lower dimensional space, in such a way that the geometry of the set is approximately preserved. The most famous embedding is probably that proposed by \cite{joh84}, which preserves Euclidean distances using random projections.
Hash functions and embeddings bear many similarities. For example, a class of efficient indexing techniques known as locality sensitive hashing (LSH) can be constructed using both traditional hash functions and embeddings, based on random projections, as studied by \cite{AndoniLSH2008}. Hence, it is not surprising that results in one field can be exploited to obtain significant advancements in the other field, and vice versa.
A novel
embedding for the Jaccard coefficient, called SparseHash, was proposed by \cite{sparsehash_icme}. SparseHash builds on the concept of sparsity. A signal $u\in\mathbb{R}^n$ is said to be sparse if it has few non-zero components. The index set of its non-zero components is called {\em support}. SparseHash can efficiently evaluate the similarity between sparse signals, in terms of support overlap. Its rationale is based on recent results by \cite{Bioglio2015,Ravazzi2016,rav18}, which show that the sparsity level of a signal, that is, the size of its support, can be efficiently estimated from compressed linear projections obtained through sparse random matrices.
Sparsity is envisaged also in information retrieval, as several compact representations, e.g., bag-of-words, produce sparse features, see, e.g. \cite{huang2008similarity}. Considering the example of near-duplicate documents, typically each document contains only few bag-of-words with respect to the general vocabulary. Given this observation, we can highlight a duality between sparse signals and sets of generic objects: a set $S\subseteq\Omega=\{1,\ldots,n\}$ can be represented as a signal $u\in\{0,1\}^n$, whose entries are $u_i=1$, if $i\in S$, and $u_i=0$, otherwise, and generally we expect that the number $k$ of ones is much smaller than $n$. Similarity between sets can then be interpreted as the overlap between the supports of sparse signals. In this perspective, SparseHash is a natural alternative to MinHash, which can be considered as benchmark for this kind of problems.
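To make this duality concrete, the following Python snippet (illustrative only; names are ours) represents two sets as binary indicator vectors and checks that support overlap reproduces the Jaccard coefficient:

```python
def jaccard(S_u, S_v):
    """Jaccard coefficient |S_u & S_v| / |S_u | S_v| of two index sets."""
    union = len(S_u | S_v)
    return len(S_u & S_v) / union if union else 1.0

n = 10
S_u, S_v = {1, 3, 5, 7}, {3, 5, 7, 9}   # supports of two sparse signals

# binary indicator vectors u, v in {0,1}^n
u = [1 if i in S_u else 0 for i in range(n)]
v = [1 if i in S_v else 0 for i in range(n)]

# support overlap computed on the vectors agrees with the set-based Jaccard
inter = sum(a * b for a, b in zip(u, v))
union = sum(1 for a, b in zip(u, v) if a or b)
assert inter / union == jaccard(S_u, S_v) == 0.6
```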
This letter extends our preliminary work \cite{sparsehash_icme}.
We propose two metrics to measure similarity in the embedded domain that depend linearly and nonlinearly, respectively, on the original Jaccard coefficient. Moreover, we present a deeper theoretical analysis of performance in terms of estimation of set similarities. We also introduce a new algorithm that implements SparseHash more efficiently, i.e., with the same asymptotic complexity of the MinHash bottom sketch by \cite{bottomk}. However, compared to the bottom sketch, SparseHash has binary measurements rather than real-valued ones, yielding improved compression efficiency.
The letter is organized as follows. In Section \ref{sec:background}, we introduce MinHash and random projections. In Section \ref{sec:proposed}, we illustrate SparseHash and discuss its implementation. In Section \ref{sec:theory}, we provide the theoretical analysis. Section \ref{sec:experimental} is devoted to numerical experiments; conclusions are reported in Section \ref{sec:conclusions}.
\section{Related work}\label{sec:background}
MinHash is a hashing method conceived to preserve the Jaccard coefficient. Specifically, MinHash generates $m\geq 1$ independent hash functions $h_1,\dots, h_m$ which return an integer value for each element of the original set $S$; then it selects the $m$ minimum values $\min_{u\in S}h_1(u),\dots,\min_{u\in S}h_m(u)$.
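A minimal MinHash sketch can be written in a few lines of Python; here the hash family $h_1,\dots,h_m$ is simulated with SHA-256 (an illustrative stand-in, not the construction used in the cited works):

```python
import hashlib

def h(i, x):
    # i-th hash function, simulated by hashing the string "i:x" with SHA-256
    # and keeping 64 bits (stand-in for a proper universal hash family)
    return int.from_bytes(hashlib.sha256(f"{i}:{x}".encode()).digest()[:8], "big")

def minhash_signature(S, m):
    # keep the minimum of each of the m hash functions over the set S
    return [min(h(i, x) for x in S) for i in range(m)]

def estimate_jaccard(sig_u, sig_v):
    # the two minima collide with probability J(S_u, S_v), so the fraction
    # of matching positions is an unbiased estimate of the Jaccard coefficient
    return sum(a == b for a, b in zip(sig_u, sig_v)) / len(sig_u)

S_u, S_v = set(range(0, 10)), set(range(5, 15))   # true Jaccard: 5/15 = 1/3
est = estimate_jaccard(minhash_signature(S_u, 2000), minhash_signature(S_v, 2000))
assert abs(est - 1 / 3) < 0.1
```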
Different variants of MinHash have been devised to obtain more compact storage or lower computational complexity of the hashing operation. In particular, we mention (a) $b$-bit minwise hashing by \cite{Ping_Li_bbit}, which quantizes the hashes over $b$ bits instead of using integers, and (b) bottom-$m$ sketch by \cite{bottomk}, which selects the $m$ smallest values of a single hash function instead of the minima of $m$ independent hash functions; this conceptually substitutes the ``sampling with replacement'' operation performed by MinHash with a ``sampling without replacement''. Clearly, the bottom-$m$ sketch has a lower complexity in computing the hashes.
We mention that techniques have been proposed that aim at extending MinHash beyond Jaccard similarity, by assigning weights to the elements of the set, see \cite{simminhash,weighted_minhash}. This was motivated by bag-of-words representations where the count associated to each element may carry additional information. In the context of hashing functions, BitShred was also proposed by \cite{bitshred}; however, it achieves substantially biased estimates of the Jaccard coefficient when compared to MinHash. Besides, Bloom filters have been proposed by \cite{Bloom}, which are space-efficient representations for set membership queries. They essentially hash each element of a set into positions of a bit array, and they could also be used to estimate the size of the intersection and union between sets. Bloom filters present some challenges like false positive errors, i.e., wrongly indicating that a non-member element is a member of the set (see \cite{luo18}); a comparison with them is postponed to future work.
Concerning random projections, one of the most famous methods for dimensionality reduction has been introduced by \cite{joh84}, which preserves Euclidean distances. Several extensions have been later proposed, that embed the angle between signals (see \cite{Charikar2002,Jacques2013}) or control the maximum distance that is embedded (see \cite{Boufounos2013}). Finally, we notice that sparse random matrices have received some attention for embedding $\ell_2$ or $\ell_1$ distances in \cite{Li:2006:VSR:1150402.1150436}.
\vspace{-0.1cm}
\section{SparseHash}\label{sec:proposed}
In this section, we illustrate SparseHash. In particular, we describe how to efficiently implement it, without generating the projection matrix, and we propose a novel faster approximated implementation. In the following, we denote the support of $u\in\mathbb{R}^{n}$ as
$\mathrm{supp}(u)=\{i\in\{1,\ldots,n\}: u_i\neq0\}$; $k$ is the sparsity level, that is, the cardinality of $\mathrm{supp}(u)$, and $\Sigma_k:=\{u\in\mathbb{R}^n:|\mathrm{supp}(u)|\leq k\}$.
We call random projection an algorithm that projects a vector $u\in\mathbb{R}^n$ onto a lower-dimensional subspace $\mathbb{R}^m$ by multiplying it by a random matrix $A\in\mathbb{R}^{m\times n}$, $m<n$, see \cite{Dono06}. The obtained vector $y=Au$ is referred to as measurement, and $\mathbb{R}^m$ is known as reduced space. The intuitive idea is that a properly designed random mapping projects data points onto a randomly selected subspace approximately preserving distances. Generally, dense random matrices are considered in the literature.
SparseHash consists of computing binary-quantized sparse random projections $y=|\mathrm{sign}(Au)|$, where $A\in\mathbb{R}^{m\times n}$ is a $\gamma$-sparsified random matrix, defined as follows: with probability $1-\gamma$, $A_{ij}=0$; with probability $\gamma$, $A_{ij}$ is generated according to an arbitrary continuous distribution with zero mean and finite variance.
$\gamma$-sparsified random matrices are efficient for sparsity estimation, as proven by \cite{Ravazzi2016}; their use is the core of SparseHash and provides the basis for a rigorous analysis of its efficiency.
SparseHash can be used to evaluate the similarity of sets of generic objects because, as already mentioned, a set $S\subseteq\Omega=\{1,\ldots,n\}$ that contains $k\ll n$ objects can be represented by a sparse signal $u\in\{0,1\}^n$, and set similarity can be interpreted as support overlap.
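The definition above can be sketched directly (a toy Python implementation with illustrative sizes; practical implementations avoid materializing $A$, as discussed in the next subsection):

```python
import random

rng = random.Random(0)
n, m, gamma, k = 1000, 200, 0.01, 50

# gamma-sparsified random matrix: each entry is 0 w.p. 1-gamma and
# standard Gaussian w.p. gamma
A = [[rng.gauss(0, 1) if rng.random() < gamma else 0.0 for _ in range(n)]
     for _ in range(m)]

# binary indicator u of a random support S of size k
S = rng.sample(range(n), k)
u = [0.0] * n
for i in S:
    u[i] = 1.0

# binary-quantized measurements y = |sign(Au)|
y = [1 if sum(row[j] * u[j] for j in range(n)) != 0 else 0 for row in A]

# y_i is nonzero iff row i of A has a nonzero entry on the support of u,
# which happens with probability 1 - (1 - gamma)^k
assert abs(sum(y) / m - (1 - (1 - gamma) ** k)) < 0.2
```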
\subsection{Computation via hashing}\label{sec:cvh}
From a practical viewpoint, computing the measurements in SparseHash does not require to explicitly generate $A$ and perform the matrix-vector product. An efficient implementation is possible by using hash functions, see \cite{universal_hash}, which map an index in $S$ to a uniformly distributed value over the output range of the hash function (e.g., $b=64$ bits integers). Let $f_i$ be a hash function that maps its input to an integer in the range $[0,2^b-1]$, and define the threshold $\tau=\gamma (2^b-1)$, with $\gamma$ as defined in the previous section. A measurement $y_i$ is zero if and only if the hash function returns a value of at least $\tau$ for all the indexes in the support; equivalently, $y_i=1$ if and only if at least one index hashes below $\tau$. With this choice of $\tau$, measurements have the same probability to be zero as in the formulation $y=|\mathrm{sign}(Au)|$ illustrated in the previous paragraph.
Multiple hash functions are used to generate $m$ measurements. This typically involves randomizing a seed of the hash function. Algorithm \ref{alg:sparsehash} summarizes all the steps required to generate the SparseHash measurements $y$.
\begin{algorithm}[h]
\begin{algorithmic}
{\small{
\Inputs{$\gamma$, $S=\mathrm{supp}(u)=\lbrace s_j \rbrace_{j=1}^k$}
\Initialize{$\tau \gets \gamma \left( 2^b-1 \right)$; $~~~y_i \gets 0$, $i=1,\ldots,m$}
\For{$i = 1, \ldots, m$}
\For{$j = 1, \ldots, k$}
\State $h_{ij} \gets f_i(s_j)$
\If{$h_{ij} < \tau$}
\State $y_i \gets 1$
\State \algorithmicbreak
\EndIf
\EndFor
\EndFor
}}
\end{algorithmic}
\caption{Computing SparseHash}
\label{alg:sparsehash}
\end{algorithm}
Algorithm \ref{alg:sparsehash} is equivalent to computing $y=\vert \mathrm{sign}(Au) \vert$ in the sense that it approximately yields the same probability to get a nonzero measurement $y_i$ as a function of the size of the support. The equivalence would be exact if the hash function could generate output values that are truly uniformly distributed and whose range is large enough that the quantization of probabilities is negligible. However, popular functions, e.g., \cite{murmurhash3}, are designed to be as uniform as possible and work using 32 bits or more as output range, which is large enough to be a good approximation. Finally, the procedure is repeatable, i.e., a hash function returns the same output for the same input.
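A direct transcription of Algorithm \ref{alg:sparsehash} in Python, using SHA-256 as a stand-in for the hash family (function and variable names are ours), reads:

```python
import hashlib

b = 64  # output bits of the (simulated) hash functions

def f(i, s):
    # i-th hash function: SHA-256 of "i:s" truncated to b bits, a stand-in
    # for the universal hash functions used in practice
    digest = hashlib.sha256(f"{i}:{s}".encode()).digest()
    return int.from_bytes(digest[:b // 8], "big")

def sparsehash(S, m, gamma):
    # Algorithm 1: y_i = 1 iff f_i maps at least one support index below tau
    tau = gamma * (2**b - 1)
    y = [0] * m
    for i in range(m):
        for s in S:
            if f(i, s) < tau:
                y[i] = 1
                break
    return y

# the fraction of nonzero measurements concentrates around 1 - (1-gamma)^k
y = sparsehash(set(range(50)), 2000, 0.01)
assert abs(sum(y) / len(y) - (1 - 0.99 ** 50)) < 0.07
```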
\begin{algorithm}[p]
\begin{algorithmic}
{\small{
\Inputs{$\gamma$, $S=\mathrm{supp}(u)=\lbrace s_j \rbrace_{j=1}^k$}
\Initialize{$\tau \gets \gamma \left( 2^b-1 \right)$; $~~y_i \gets 0$, $i=1,\ldots,m$}
\For{$j = 1, \ldots, \vert S \vert$}
\State $h_{j} \gets f(s_j)$ \Comment{Compute hashes}
\EndFor
\For{$i = 1, \ldots, m$}
\State Generate random bot[i]\Comment{Generate bottom values}
\EndFor
\State Sort bot
\State head $\gets$ buildTree(bot)
\For{$j = 1, \ldots, k$}
\State ptr$\gets$head
\While{ptr $\neq$ NULL}
\If{$h_j < $ ptr.bottomValue}
\State ptr $\gets$ ptr$.$left
\Else
\If{$h_j < $ ptr.topValue}
\State $i \gets $ptr.measIndex; $~~~y_i \gets 1$
\If{ptr$.$measIndex $\neq m-1$} \Comment{Check right}
\State $p \gets$ ptr.measIndex $+1$
\While{$p < m$ and $h_j \geq $bot[$p$]}
\State $y_p \gets 1$; $~~p \gets p+1$
\EndWhile
\EndIf
\If{ptr.measIndex $\neq 0$} \Comment{Check left}
\State $p \gets$ ptr.measIndex $-1$
\While{$p \geq 0$ and $h_j < $bot[$p$]$+\tau$}
\State $y_p \gets 1$; $~~p \gets p-1$
\EndWhile
\EndIf
\State \algorithmicbreak \Comment{No more measurements can collide}
\Else
\State ptr $\gets$ ptr.right
\EndIf
\EndIf
\EndWhile
\EndFor
}}
\end{algorithmic}
\caption{Computing Fast SparseHash}
\label{alg:fastsparsehash}
\end{algorithm}
\begin{figure}[h]
\centering
\includegraphics[width=0.62\columnwidth]{./tree-crop.pdf}
\caption{Example of Fast SparseHash: a binary tree is generated from $m=7$ random windows, denoted as A to G. The membership of the $k=6$ hashes in the windows is determined by traversing the tree. The corresponding SparseHash measurements are 1101010. The tree structure must have a \textsc{right} and a \textsc{left} pointer, and an integer \textsc{measIndex} in $[0,m-1]$ storing the index of the measurement.}
\label{fig:tree}
\end{figure}
\subsection{Fast SparseHash}\label{sec:fast_sparsehash}
The implementation of SparseHash just described requires computing $\mathcal{O}(km)$ hashes (by hash, we mean the output of the hash function applied to a single entry of the support) and performing $\mathcal{O}(km)$ comparisons with the threshold $\tau$. The complexity is thus equivalent to that of MinHash, which computes $\mathcal{O}(km)$ hashes and performs $\mathcal{O}(km)$ operations to find the $m$ minima. However, there are variants of MinHash, in particular the bottom-$m$ variant by \cite{bottomk}, which reduce the complexity. SparseHash can be modified so as to require only $\mathcal{O}(k)$ hashes and $\mathcal{O}(k \log m)$ comparisons. We now illustrate this variant, which we call Fast SparseHash.
The main idea behind Algorithm \ref{alg:fastsparsehash} is to compute only one hash per support entry and to check if it falls inside one of $m$ randomly drawn windows of width $\tau$ instead of falling below a fixed threshold. A single hash is computed for each of the $k$ elements in the support and $m$ random windows of width $\tau=\gamma(2^b-1)$ are drawn over the output range of the hash function. Notice that the width $\tau$ of the window is exactly the same as in Algorithm \ref{alg:sparsehash}. A naive solution to compute the measurements would consist in setting a measurement to 1 if at least one of the hashes falls inside the window drawn for that measurement. However, this solution would still require $\mathcal{O}(km)$ comparisons. A better solution is to use a binary search tree to store the windows by means of their sorted bottom values so that the value of the measurement can be determined by traversing the tree, yielding a logarithmic complexity in the number of measurements. More precisely, the tree stores the measurement number $i \in \left[ 0,m-1 \right]$, the value of the bottom of the corresponding window as well as pointers to the two children. The tree is created in the following way. The bottom values are sorted in increasing order and their median value is inserted as root of the tree. Then the tree is recursively created following the rule that the left (respectively, right) subtree includes the windows with bottom values smaller (respectively, larger) than the parent. To determine whether the measurements are zero or nonzero, each hash of the set traverses the tree. The hash is first compared to the value of the window bottom stored in the root node to determine if it falls inside that window. If it does, the measurement corresponding to the index stored in that node is set to 1 and the next hash is examined. Otherwise, the right or left children is examined depending if the hash is below or above the bottom of the current window. 
This is repeated until a leaf is reached or a measurement is set to 1. Partially overlapping windows are handled with a local search: if a hash is determined to fall inside a window, the windows whose bottoms are immediately smaller or larger are checked to determine whether the hash also falls inside them. This is repeated until no neighboring window reports the hash falling inside. Algorithm \ref{alg:fastsparsehash} shows the whole procedure to compute measurements with Fast SparseHash, while Figure \ref{fig:tree} shows the process in a pictorial fashion. Table \ref{table:speed} reports an experimental comparison of the runtime of SparseHash and Fast SparseHash for various values of $k$ and $m$: Fast SparseHash is significantly faster and its runtime grows sublinearly with $m$, while that of SparseHash grows linearly.
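As an illustration, the following Python sketch implements the window-based variant just described, replacing the binary search tree with a sorted array and binary search (functionally equivalent, since both locate the rightmost window bottom not exceeding the hash); all names, defaults, and the hash function are our own illustrative choices, not those of a reference implementation:

```python
import random
from bisect import bisect_right

def fast_sparsehash(support, m, tau, hash_range=2**32, seed=0):
    """Sketch of Fast SparseHash: one hash per support entry and m random
    windows of width tau over the hash output range; measurement i is set
    to 1 iff some hash lands in window i.  A sorted array plus binary
    search plays the role of the binary search tree in the text."""
    rng = random.Random(seed)
    # Window bottoms, sorted, each carrying its measurement index.
    windows = sorted((rng.randrange(hash_range), i) for i in range(m))
    bottoms = [b for b, _ in windows]
    y = [0] * m
    for s in support:
        h = hash((seed, s)) % hash_range  # one hash per support element
        # Rightmost window whose bottom does not exceed h ...
        j = bisect_right(bottoms, h) - 1
        # ... then a local search handles partially overlapping windows.
        while j >= 0 and h < bottoms[j] + tau:
            y[windows[j][1]] = 1
            j -= 1
    return y
```

This performs $\mathcal{O}(k \log m)$ comparisons plus one extra step per window hit, matching the complexity discussed above.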
\begin{table}[t]
\caption{Fast SparseHash v. SparseHash - Runtime (sec.)}
\footnotesize
\centering
\begin{tabular}{cc|ccc}
& & \multicolumn{3}{c}{$m$}\\
& & $10^4$ & $10^5$ & $10^6$ \\
\hline
&$10^4$ & \textbf{0.007} / 1.337 & \textbf{0.040} / 12.36 & \textbf{0.194} / 118.8 \\
$k$&$10^5$ & \textbf{0.023} / 11.26 & \textbf{0.061} / 115.5 & \textbf{0.280} / 1119 \\
&$10^6$ & \textbf{0.136} / 126.8 & \textbf{0.205} / 1163 & \textbf{0.476} / 11129
\end{tabular}
\label{table:speed}
\vspace{-0.3cm}
\end{table}
\vspace{-0.2cm}
\section{Analytical results}\label{sec:theory}
In the following, we present two metrics to compute set similarities from measurements in the reduced domain.
Given $u,v\in\mathbb{R}^n$ and their measurements $y,z \in \lbrace 0,1 \rbrace^m$ obtained via SparseHash, we are interested in defining a similarity metric between $y$ and $z$ that approximately embeds the Jaccard coefficient $J(S_u,S_v)=|S_u\cap S_v|/|S_u\cup S_v|$ of $S_u=\mathrm{supp}(u)$ and $S_v=\mathrm{supp}(v)$. To simplify the notation, let $J_{u,v}:=J(S_u,S_v)$.
\subsection{Jaccard coefficient}\label{sec:simsh}
Let $y,z \in \lbrace 0,1 \rbrace^m$. We define
$$\text{sim}_{\cup}(y,z):=\frac{1}{m}\sum_{i=1}^m\mathds{1}(\{y_i=0,z_i=0\}),$$
$$\text{sim}_{\cap}(y,z):=\frac{\sum_{i=1}^m\mathds{1}(\{y_i=0\})\sum_{j=1}^m\mathds{1}(\{z_j=0\})}{m\sum_{i=1}^m\mathds{1}(\{y_i=0,z_i=0\})}$$
where $\mathds{1}(\{A\})$ is the indicator function, which returns 1 when $A$ is true, and $\mathds{1}(\{A,B\})$ returns 1 when both $A$ and $B$ are true. Comparison in the reduced space can then be done with the following similarity index:
\begin{equation}
\label{eq:similarity}
\text{sim}_{\sf{sh}}(y,z):=\frac{\log(\text{sim}_{\cap}(y,z))}{\log(\text{sim}_{\cup}(y,z))}.
\end{equation}
We notice that $\gamma$ must be designed so that this formula has a small probability of being undefined. In the following theorem, we state that $\text{sim}_{\sf{sh}}$ is a random variable that concentrates around the Jaccard coefficient between the supports of the original signals. We emphasize that Theorem \ref{thm: concentration_Jaccard} is a more refined version of Proposition 1 in \cite{sparsehash_icme}. More precisely, a deeper theoretical analysis leads to a new estimation of the performance that is more explicit in terms of the main parameters $m,\gamma,k_{\min},k_{\max}$ and $N$.
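For concreteness, the similarity index \eqref{eq:similarity} can be computed from two binary measurement vectors as in the following Python sketch (function and variable names are ours; it assumes the non-degenerate case in which some, but not all, measurement pairs are jointly zero):

```python
import math

def sim_sh(y, z):
    """Compute sim_sh(y, z) from two binary SparseHash measurement
    vectors (lists of 0/1), following the definitions above.  Assumes
    gamma was designed so that neither logarithm degenerates."""
    m = len(y)
    # Fraction of positions where both measurements are zero: sim_cup.
    both_zero = sum(1 for yi, zi in zip(y, z) if yi == 0 and zi == 0)
    sim_union = both_zero / m
    # Ratio of marginal zero counts to joint zero count: sim_cap.
    sim_inter = (y.count(0) * z.count(0)) / (m * both_zero)
    return math.log(sim_inter) / math.log(sim_union)
```

Identical vectors give $\mathrm{sim}_{\sf sh}=1$, while statistically independent zero patterns drive $\mathrm{sim}_{\cap}$ towards 1 and hence $\mathrm{sim}_{\sf sh}$ towards 0.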
\begin{theorem}\label{thm: concentration_Jaccard}
Let $\mathcal{X}_{N}=\{x_i\in \mathbb{R}^n:|\mathrm{supp}(x_i)|\in[k_{\min},k_{\max}]\}_{i=1}^N$ be a set of $N$ sparse vectors. For any $\epsilon>0, \beta>2$ and any integer $n$, let $m$ be a positive integer such that
$m> 32\frac{\log 4+\beta\log N}{\gamma^2k^2_{\min}\mathrm{e}^{-\gamma k_{\max}}\epsilon^2}$.
Then,
\begin{align}
\label{eq:expected_jaccard}
\mathbb{P}\left[\bigcup_{(u,v)\in\mathcal{X}_N}\left\{|\mathrm{sim}_{\sf{sh}}(Au,Av)- J_{u,v}|>\epsilon\right\}\right]\leq N^{-\beta+2}.
\end{align}
\end{theorem}
\textit{Sketch of the proof.} {\color{black} For brevity, we report only the key steps of the proof; details are provided in the supplementary material}. Let $u\in\Sigma_{k_1}$ and $v\in\Sigma_{k_2}$, and define $\zeta:=(1-\gamma)^{\frac{k_1+k_2}{1+J_{u,v}}}$.
Exploiting Hoeffding's inequality (see \cite{citeulike:3392582}), we can prove the following inequalities:
\begin{equation}\label{e1}
\begin{split}
&\mathbb{P}\left(\left|\mathrm{sim}_{\cup}(Au,Av)-\zeta\right|>\epsilon\right)\leq 2\mathrm{e}^{-2\epsilon^2m}\\
&\mathbb{P}\left(\left|\mathrm{sim}_{\cap}(Au,Av)-\zeta^{J_{u,v}}\right|>\epsilon\right)\leq 6\mathrm{e}^{-m\min\left\{(1-\gamma)^{8k_{\max}}\frac{\epsilon^2}{8},\frac{(1-\gamma)^{4k}}{2}\right\}}.
\end{split}
\end{equation}
By using the fact that for any positive random variable $X$ such that $\mathbb{P}\left(|X-\mu_X|>\epsilon\right)\leq p_X(\epsilon)$ with $\mu_X>0$, it holds that
$\mathbb{P}\left(\left|\log(X)-\log(\mu_X)\right|>\epsilon\right)\leq p_X\left(\epsilon\mu_X\right)$, we can deduce the following inequalities from \eqref{e1}:
\begin{equation}
\begin{split}\label{e4}
&\mathbb{P}\left(\left|\log(\mathrm{sim}_{\cap}(Au,Av))-J_{u,v}\log\zeta\right|>\epsilon\right)\leq 6\mathrm{e}^{-m(1-\gamma)^{12k_{\max}}\frac{\epsilon^2}{8}};\\
&\mathbb{P}\left(\left|\log(\mathrm{sim}_{\cup}(Au,Av))-\log\zeta\right|>\epsilon\right)\leq 2\mathrm{e}^{-2m\epsilon^2(1-\gamma)^{4k_{\max}}}.
\end{split}
\end{equation}
Moreover, we can prove that, given $X_u=\frac{\sum_{i=1}^m\mathds{1}(\{(Au)_i=0\})}{m}$ and $X_v$ defined analogously, \begin{equation}\label{e5}
\mathbb{P}\left(\left|X_u X_v-(1-\gamma)^{k_1+k_2}\right|>\epsilon\right)\leq 4\mathrm{e}^{-2m\min\left\{\frac{\epsilon^2}{9},\frac{(1-\gamma)^{2k_{\max}}}{4}\right\}}.
\end{equation}
Finally, by merging \eqref{e4} and \eqref{e5}, under the given assumption on $m$,
the claim follows easily.\qed

For large $n$, $k$, and $m$, $\gamma k$ represents
the average number of nonzeros in each row of $A$ that align with the support of cardinality $k$. This observation reveals three regimes, corresponding to the scaling of $\gamma$ and $k$: if $\gamma k_{\min}=\Theta(1)$, then $m=O(\log N)$ is sufficient to get the bound in Theorem \ref{thm: concentration_Jaccard}. In sharp contrast, if $\gamma k_{\max}\rightarrow\infty$ or $\gamma k_{\min}\rightarrow 0$, then $m$ must increase to guarantee the concentration with high probability.
\subsection{Hamming distance and LSH}\label{sec:dham}
As with MinHash, signals can be compared in the reduced space using the Hamming distance $d_{\sf{H}}$ between two hash codes.
From the law of large numbers we have the following properties. Given $u\in\Sigma_{k_1}$ and $v\in\Sigma_{k_2}$,
{\small{\begin{equation}\label{eq:ham_formula}
E_{\text{sh}}:=\frac{\mathbb{E}[d_{\sf{H}}(Au,Av)]}{m}=(1-\gamma)^{k_1}+(1-\gamma)^{k_2}-2(1-\gamma)^{\frac{k_1+k_2}{1+J_{u,v}}}
\end{equation}}}
and
$
\widehat{V}:=\mathsf{Var}[d_{\sf{H}}(Au,Av)/m]= (1-E_{\text{sh}})E_{\text{sh}}/{m}.
$
For signals with similar sparsity degree $k_1\approx k_2=k$, by setting $(1-\gamma)^k=1/2$ in order to maximize the entropy of the binary measurements, we obtain:
$E_{\text{sh}}\approx1-2^{\frac{J-1}{1+J}}.$
The characterization of the relationship between the Hamming distance of the hashes and the original Jaccard coefficient derived in \eqref{eq:ham_formula} is important in the context of LSH. In a nutshell, LSH allows us to approximate nearest neighbor database searches with sublinear complexity (see \cite{AndoniLSH2008}), without scanning all the entries in the database. Let $\mathcal{X}\subset\mathbb{R}^n$ be a set of points with distance measure $d_{\mathcal{X}}$, and consider the $(R,c)$-NN problem where one is concerned with retrieving all the neighbors of the query point within a distance $R$, while discarding the points at distances greater than $cR$. An LSH family is defined as follows.
\begin{definition}
Let $p_1> p_2$ and $r_1<r_2$. A family $\mathcal{H}=\{h: \mathcal{X}\rightarrow\mathcal{U}\}$ is called $(r_1,r_2,p_1,p_2)$-sensitive for $d_{\mathcal{X}}$ if for any $x,\xi\in\mathcal{X}$ the following facts hold: if $d_{\mathcal{X}}( x,\xi) \leq r_1$ then $\mathbb{P}\left(h(\xi)=h(x)\right)\geq p_1$; if $d_{\mathcal{X}}( x,\xi) \geq r_2$ then $\mathbb{P}\left(h(\xi)=h(x)\right)\leq p_2$.
\end{definition}
In the $(R,c)$-NN problem, we set $r_1=R$ and $r_2=cR$, and we define a new family of functions
$
\mathcal{F}=\{f: \mathcal{X}\rightarrow \mathcal{U}^m\}
$
such that
$
f(x)=(h_1(x),\ldots,h_{m}(x))^{\top}
$,
where the $h_i\in\mathcal{H}$ are chosen independently and uniformly at random from $\mathcal{H}$. We notice that $f\in\mathcal{F}$ is $(r_1,r_2,p_1^m,p_2^m)$-sensitive for $d_{\mathcal{X}}$.
For fixed $L>0$, we define a new family $\mathcal G$ of hash functions $g$ constructed from $L$ random functions $f_1,\dots, f_L$ from $\mathcal F$. We say that $g(\xi)=g(x)$ if $f_i(\xi) = f_i(x)$ for at least one $i \in \{1,\ldots,L\}$. Since the members of $\mathcal F$ are chosen independently for any $g \in \mathcal G$, $\mathcal G$ is a $(r_1, r_2, 1- (1 - p_1^m)^L, 1 - (1 - p_2^m)^L)$-sensitive family.
During preprocessing, $L$ hash tables, each corresponding to a different hash function $g_i$, are constructed by storing each $x\in\mathcal{X}$ in the bucket $g_i(x)$. Given a query item $\xi$, we first retrieve up to $3L$ data points hashed to the same buckets $g_i(\xi)$, $i=1,\ldots, L$; if there is a point $x^{\star}$ within distance $cR$ from $\xi$ we return `yes' and $x^{\star}$, otherwise we return `no'.
If $ m={\log N}/{\log (1/p_2)}, L=N^{\rho}, \rho=\frac{\log(1/p_1)}{\log(1/p_2)}$
then the algorithm is successful with constant probability and the algorithm has the following properties: (a) preprocessing time is $O(N^{1+\rho} m T)$, where $T$ is the time to evaluate a function $h \in \mathcal H$ on an item; (b) storage is of order $O(NL+Nm)=O(N^{1+\rho}+Nm)$; (c) query time is $O(L(m T+nNp_2^m))=O(N^{\rho}(mT+n))$.
Notice that savings in terms of storage can be achieved when 1-bit measurements are used in place of real-valued measurements.
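The standard parameter choices quoted above can be computed as in the following sketch (function and variable names are ours):

```python
import math

def lsh_parameters(N, p1, p2):
    """Standard LSH parameter choices quoted in the text: hash length m,
    exponent rho, and number of tables L, for collision probabilities
    p1 > p2 (near vs. far points)."""
    assert 1 > p1 > p2 > 0
    rho = math.log(1 / p1) / math.log(1 / p2)     # query exponent
    m = math.ceil(math.log(N) / math.log(1 / p2)) # hash length
    L = math.ceil(N ** rho)                       # number of tables
    return m, rho, L
```

For instance, with $N=10^6$, $p_1=0.9$ and $p_2=0.5$ this gives $m=20$ and a query exponent $\rho\approx 0.15$, so only a small power of $N$ candidates is inspected.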
We now study the performance of LSH in terms of storage and time requirements to respond to a query, using MinHash and SparseHash as embeddings. Such performance metrics are entirely governed by the embedding, more precisely by the function that maps the Jaccard coefficient to the probability of two bits being equal. This function is linear for MinHash and nonlinear for SparseHash. In order to make a fair comparison, we use 1-bit MinHash, so that the storage complexity is equalized between the two approaches. Notice that $m$ is chosen so as to minimize $L$.
If a pair of signals in $\mathcal{X}$ has Jaccard coefficient $J$, then the probability that their hashes computed with SparseHash become a candidate pair is given by:
$P_{\text{sh}}=1-(1-p_{\text{sh}}^{m_{\text{sh}}})^{L_{\text{sh}}}$
with $p_{\text{sh}}=2^{\frac{J-1}{1+J}}$, while
for 1-bit MinHash,
$P_{\text{mh}}=1-(1-p_{\text{mh}}^{m_{\text{mh}}})^{L_{\text{mh}}}$
with $p_{\text{mh}}=(J+1)/2$. The following proposition states that, for equal $m$, SparseHash requires fewer tables, which enables shorter query times and lower storage requirements.
\begin{proposition}\label{prop:lsh_sh_vs_minhash}
If $P_{\text{mh}}=P_{\text{sh}},\ m_{\text{mh}}=m_{\text{sh}}=m
$, then
${L_{\text{sh}}}\leq {L_{\text{mh}}}.$
\end{proposition}
\begin{proof}
We have $$\frac{L_{\text{sh}}}{L_{\text{mh}}}
=\frac{\log\left(1-\left((J+1)/2\right)^{m}\right)}{\log\left(1-2^{m\frac{J-1}{1+J}}\right)}.$$ We then obtain $L_{\text{sh}}\leq L_{\text{mh}}$ if $2^{\frac{J-1}{1+J}}\geq (J+1)/2$, which holds for any $J\in [0,1]$. More details are provided in the supplementary material.
\end{proof}
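The key inequality in the proof can also be verified numerically; the following snippet checks $2^{(J-1)/(1+J)}\geq (J+1)/2$ on a grid over $[0,1]$:

```python
# Numerical check of the inequality behind the proposition:
# p_sh = 2^((J-1)/(1+J)) >= p_mh = (J+1)/2 for all J in [0, 1],
# with equality exactly at the endpoints J = 0 and J = 1.
for i in range(1001):
    J = i / 1000
    p_sh = 2 ** ((J - 1) / (1 + J))
    p_mh = (J + 1) / 2
    assert p_sh >= p_mh - 1e-12
```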
\section{Numerical experiments}\label{sec:experimental}
In this section, we present numerical experiments that validate the theoretical results and show the effective performance of SparseHash, on both synthetic and real datasets. The code for these experiments is available in \cite{sparsehash_code}.
\subsection{Numerical validation}\label{sub:nv}
In \cite{sparsehash_icme}, a numerical validation of the concentration of SparseHash around the true Jaccard coefficient was proposed. We repeat the same experiment here to validate the concentration of the Hamming distance.
\begin{figure}[ht]
\centering
\includegraphics[width=0.58\columnwidth]{./Gh.pdf}
\caption{Numerical validation: Hamming distance (mean and variance).}
\label{fig:hamming}
\end{figure}
We randomly generate a large number of signals with different amounts of support overlap and compute their random projections via $\gamma$-sparsified matrices $A\in\mathbb{R}^{m\times n}$. We set $n=1000$, $m=50$, and sparsity level $k=230$. Mean and variance are evaluated over 500 runs.
$\gamma$ is set as the value that maximizes the entropy of the binary measurements, i.e., that generates zero and nonzero measurements with equal probability. Since $\mathbb{P}(f_i(u)=0)=(1-\gamma)^k$, we set:
$\gamma = 1-2^{-\frac{1}{k}}\approx 3\cdot10^{-3}.$
In Figure \ref{fig:hamming}, we depict the Hamming distance. The dashed cyan line is the theoretical mean $E_{\text{sh}}$ defined in \eqref{eq:ham_formula}, computed from the true Jaccard coefficient. As expected, the experimental mean closely matches the theoretical one.
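This validation can be reproduced in a few lines; the following Monte Carlo sketch (with parameters scaled down from $n=1000$, $k=230$ for speed; all names are ours) draws $\gamma$-sparsified rows restricted to the union of two supports and compares the empirical normalized Hamming distance with the closed form \eqref{eq:ham_formula}:

```python
import random

def simulate_hamming(k1, k2, overlap, gamma, m, rng):
    """Monte Carlo check of E_sh: draw m gamma-sparsified rows restricted
    to the union of the two supports and return the empirical normalized
    Hamming distance between the two binary sketches."""
    union = k1 + k2 - overlap
    ham = 0
    for _ in range(m):
        hits = [rng.random() < gamma for _ in range(union)]
        # Positions are ordered as [only u | shared | only v].
        y = any(hits[:k1])            # measurement row touches supp(u)
        z = any(hits[k1 - overlap:])  # measurement row touches supp(v)
        ham += (y != z)
    return ham / m

rng = random.Random(1)
k = 40
gamma = 1 - 2 ** (-1 / k)  # entropy-maximizing choice: (1 - gamma)^k = 1/2
J = 20 / 60                # overlap 20 out of a union of 60
e_emp = simulate_hamming(k, k, 20, gamma, 20000, rng)
e_th = 2 * (1 - gamma) ** k - 2 * (1 - gamma) ** (2 * k / (1 + J))
```

With the entropy-maximizing $\gamma$, the closed form reduces to $1-2^{(J-1)/(1+J)}\approx 0.293$ for $J=1/3$, and the empirical estimate concentrates around it.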
\subsection{Similar text documents}
\begin{figure*}
\centering
\includegraphics[width=0.681\columnwidth]{./precision_5.pdf}
\includegraphics[width=0.681\columnwidth]{./recall_5.pdf}
\includegraphics[width=0.681\columnwidth]{./roc05.pdf}
\caption{Experiment on similar text documents: precision and recall, threshold 0.5.}
\label{fig:precrec05}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.681\columnwidth]{./precision_6.pdf}
\includegraphics[width=0.681\columnwidth]{./recall_6.pdf}
\includegraphics[width=0.681\columnwidth]{./roc06.pdf}
\caption{Experiment on similar text documents: precision and recall, threshold 0.6.}
\label{fig:precrec06}
\end{figure*}
As discussed in \cite{sparsehash_icme}, the problem of finding near-duplicate or similar documents in an archive of text data is a long-standing one (see \cite{Broder1997,Broder1997clustering,Henzinger2006}). Documents can be represented with bag-of-words models. Given a vocabulary of $n$ words, we can associate a document with a vector $u\in\mathbb{R}^n$, where $u_i$ counts the occurrences of the $i$th word of the vocabulary in the document. Bag-of-words models typically yield sparse signals, as the number of different words appearing in a single document is usually much smaller than the size of the vocabulary. We now repeat the experiment proposed in \cite{Ping_Li_bbit}, where the effects of quantization on MinHash are evaluated. We use the UCI dataset of New York Times articles (see \cite{NYTimes}), composed of about 300000 news articles, with a bag-of-words representation given for each article. The vocabulary contains $n=102660$ words.
The mean number of different words used in each article is $k=232$; we use this value to set $\gamma$ as described in Section \ref{sub:nv}.
We compare the performance of SparseHash (with both Jaccard and Hamming metrics) and 1-bit MinHash, in terms of precision and recall. Specifically, we define as similar the documents with Jaccard coefficient larger than a certain threshold, and we try to detect them. In Figures \ref{fig:precrec05} and \ref{fig:precrec06}, we set the threshold to 0.5 and 0.6, respectively, and show precision and recall as functions of the number of measurements $m$, as well as precision as a function of recall. We see that the precision of SparseHash with the Jaccard metric outperforms 1-bit MinHash, in particular when $m<100$. For larger $m$, both methods are efficient, with precision close to 1. The recall is close to 1 for both methods. The gain obtained by SparseHash is also well visualized in the precision-recall curves, depicted for $m=48$ and $m=96$.
\subsection{Metagenome clustering}
Metagenome clustering is concerned with detecting communities of microorganisms starting from genomic sequences. A genomic sequence can be seen as a long string of A, C, T or G characters representing the four types of nucleotides. In the following, we deal with assembled sequences, which are reconstructed from overlapping partial reads produced by the sequencing instruments. Metagenome clustering is formulated in terms of pairwise distances between sequences. The distance metric of interest is 1-ANI: ANI is the average nucleotide identity, i.e., the percentage of unchanged nucleotides in the two sequences. Since the genomic sequences can be extremely long, dimensionality reduction methods are essential to efficiently compute distances. In \cite{Ondov2016}, the so-called MASH algorithm uses MinHash to this purpose. The authors split a sequence into substrings, called $\kappa$-mers, using a sliding window approach. In their experiments, $\kappa=21$. Each genomic sequence is then represented by the set of its $\kappa$-mers and the Jaccard coefficient between such sets correlates with the expected ANI. By using MinHash to compress the set of $\kappa$-mers, the authors achieve a significant dimensionality reduction.
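The $\kappa$-mer set representation underlying this pipeline is straightforward to sketch (function names are ours):

```python
def kmer_set(sequence, kappa=21):
    """Unique kappa-mers of a genomic sequence, extracted with a sliding
    window of stride 1 (the set representation that MASH-style pipelines
    then compress with MinHash or SparseHash)."""
    return {sequence[i:i + kappa] for i in range(len(sequence) - kappa + 1)}

def jaccard(a, b):
    """Exact Jaccard coefficient of two kappa-mer sets."""
    return len(a & b) / len(a | b)
```

For real sequences the exact Jaccard coefficient above is exactly what becomes too expensive to compute pairwise, motivating the sketching approaches compared below.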
In this section, we test SparseHash for the same purpose and compare it to MinHash. The performance is evaluated in terms of how well the Jaccard coefficient is preserved. The dataset from \cite{human2012structure} is used, which contains $N=747$ sequences of various lengths. A sliding window with stride 1 is used to generate the set of $\kappa$-mers and, by keeping only the unique $\kappa$-mers, the sets present different cardinalities, ranging from $k_{min}=4002$ to $k_{max}=219972647$. These sets are then embedded with MinHash or SparseHash and the full matrix of all pairwise similarities is generated by computing distances on the measurement vectors. Since computing the exact Jaccard values to estimate the quality of the approximations produced by MinHash and SparseHash is computationally infeasible, we approximate them using a large number of SparseHash measurements ($m=5\cdot 10^6$) and MinHash measurements ($m=10^6$) and then average the two estimates. This is done to avoid any bias towards a specific algorithm. We remark that we use the MASH code provided by \cite{Ondov2016} to implement MinHash. Due to the large values of $k$ and $m$, that code uses the bottom-$m$ MinHash sketch to accelerate the computation of the sketches. However, it is not amenable to binarization: each MinHash measurement requires 64 bits, against the 1 bit required by SparseHash. Figure \ref{fig:metagenome} shows the mean square error (MSE) on the matrix of all pairwise Jaccard similarities with respect to the true Jaccard values, as a function of the computation time required to compute all pairwise distances in the embedded space. It can be noticed that SparseHash outperforms MASH thanks to the 1-bit measurements and the faster $\mathrm{sim}_{\sf{sh}}$ and Hamming distance metrics, which can be implemented with bitwise operations.
\begin{figure}
\centering
\includegraphics[width=0.53\columnwidth]{./perf.pdf}
\caption{Metagenome clustering: MSE on the pairwise Jaccard matrix versus computation time.}
\label{fig:metagenome}
\end{figure}
\section{Conclusion}\label{sec:conclusions}
In this letter, we have analyzed SparseHash, a novel embedding technique for dimensionality reduction of sets. Exploiting the concept of sparsity and concentration results, SparseHash is proven to preserve the Jaccard metric. Efficient implementations and numerical experiments show that SparseHash outperforms MinHash in different applications. Future work will include comparisons with other strategies, e.g., Bloom filters.
\bibliographystyle{plain}
\section{Introduction}
In order to reliably interpret current and upcoming measurements at
the LHC, precise QCD predictions for multi-jet final states are
indispensable. These include both fixed-order calculations, as well as
their combination with analytic resummation and/or parton shower event
generators, {\it e.g.}
\cite{Sjostrand:2007gs,Bahr:2008pv,Gleisberg:2003xi}, to sum leading
contributions of QCD corrections to all orders, such as to arrive at a
realistic final state modelling. Fixed-order calculations at leading
and next-to-leading order in the strong coupling are by now highly
automated, and frameworks to automatically resum a large class of
observables have been pioneered as well \cite{Banfi:2003je}. The
combination of NLO QCD corrections with event generators
\cite{Frixione:2002ik,
Nason:2004rx,Nagy:2005aa,Platzer:2011bc,Hoeche:2011fd,Hartgring:2013jma}
is an established research area, and first steps towards combining
analytic resummation and event generators have been undertaken
\cite{Alioli:2012fc}.
The efficient treatment of QCD colour structures is central to both
fixed-order and resummed perturbation theory. Particularly the use of
the colour flow basis has led to tremendously efficient
implementations of tree-level amplitudes
\cite{Maltoni:2002mq,Duhr:2006iq,Gleisberg:2008fv}, which can be used
both for leading order calculations, as well as one-loop corrections
within the context of recent methods requiring only loop integrand
evaluation (see \cite{Reuschle:2013qna} for the exact treatment of the
colour flow basis in the one-loop case). This colour basis is closely
linked to determining initial conditions for parton showering. After
evolving a partonic system through successive parton shower emissions,
while keeping track of the colour structures (in the large-$N$ limit),
colour flows also constitute the initial condition to hadronization
models; this includes the dynamics of how multiple partonic
scatterings are linked to hadronization. Colour reconnection models,
such as those described in \cite{Sandhoff:2005jh,Gieseke:2012ft}, are
exchanging colour between primordial hadronic configurations like
strings or clusters, and have proven to be of utmost phenomenological
relevance in the description of minimum bias and underlying event
data.
Despite its relevance to event generators, the colour flow basis has
typically not been considered in analytic resummation, most probably
for the reason of being not the most simple or minimal basis. While
recent work has focused on obtaining minimal (and even orthogonal)
colour bases \cite{Keppeler:2012ih}, an intuitive connection to the
physical picture is hard to maintain in such approaches. It remains an
open question whether amplitudes can be evaluated in a
similarly efficient way in such bases. Also, in analytic resummation,
a matching to a fixed-order calculation is usually mandatory, and the
use of colour flow bases could allow one to use the full power of
automated matrix element generators within this context. Understanding
soft gluon evolution in the colour flow basis thus seems to be a
highly relevant problem to address, which can also shed light on
colour reconnection models, being so far based on rather simple
phenomenological reasoning.
The purpose of the present work is to study soft gluon evolution in
the colour flow basis. While for a fixed, small number of partons the
exponentiation of the soft gluon anomalous dimension matrix can be
performed either analytically or numerically, the case for a large
number of partons is rapidly becoming intractable. This limitation
thus prevents insight into the soft gluon dynamics of
high-multiplicity systems relevant to both improved parton shower
algorithms \cite{Schofield:2011zi,Platzer:2012qg} as well as colour
reconnection models. We will derive the general structure of the soft
anomalous dimension matrix in the colour flow basis for an arbitrary
number of partons, and tackle its exponentiation by successive
summation of large-$N$ powers in a regime where the kinematic
coefficients $\gamma$ are of comparable size to the inverse of the
number of colours, $\gamma N \sim 1$, leading to a computationally
much more simple problem than the full exponentiation. This strategy
can well be applied to a large number of partons in an efficient way.
This paper is organized as follows: In section~\ref{section:anomdim}
we set our notation and present the general form of the soft gluon
anomalous dimension. In section~\ref{sections:towers} we derive its
exponentiation and show how subsequent towers of large-$N$
contributions can be summed in a systematic
way. Section~\ref{sections:numerics} is devoted to a few numerical
studies testing the accuracy of these approximations in the simple
setting of QCD $2\to 2$ scattering, while
section~\ref{sections:outlook} presents an outlook on possible future
applications before we arrive at conclusions in
section~\ref{sections:conclusions}. A number of appendices is devoted
to calculational details and to the reference formulae needed to
achieve what we will later call a next-to-next-to-next-to-leading
colour (N$^3$LC) resummation.
\section{Notation and Soft Anomalous Dimensions}
\label{section:anomdim}
We consider the soft-gluon evolution of an amplitude $|{\cal
M}_n\rangle$ involving $n$ coloured legs, either in the fundamental
or adjoint representation of $\text{SU}(N)$, with in general $N$
colour charges. The amplitude is a vector in both colour and spin
space, though we shall here mainly be interested in the colour
structure, decomposing the amplitude into a colour basis
$\{|\sigma\rangle\}$,
\begin{equation}
|{\cal M}_n\rangle = \sum_\sigma {\cal M}_{n,\sigma} |\sigma\rangle \ .
\end{equation}
We assume that all momenta of the amplitude are taken to be outgoing,
and will order the fundamental and adjoint representation legs
successively as
$$
\alpha=1_{{\mathbf N}},2_{\bar{{\mathbf N}}},...,(n_q-1)_{{\mathbf N}},(n_q)_{\bar{{\mathbf N}}},
(n_q+1)_{{\mathbf A}},...,(n_q+n_g)_{{\mathbf A}}
$$ for the case of $n_q$ fundamental and anti-fundamental, and $n_g$
adjoint representation legs. We will consider soft gluon evolution of
the amplitude,
\begin{equation}
|{\cal M}'_n\rangle = e^{\mathbf \Gamma} |{\cal M}_n\rangle \ ,
\end{equation}
with the soft anomalous dimension
\begin{equation}
{\mathbf \Gamma} = \sum_{\alpha\ne \beta} \Gamma^{\alpha\beta}\ {\mathbf T}_\alpha\cdot{\mathbf T}_\beta \ ,
\end{equation}
in terms of the usual colour charge products ${\mathbf
T}_\alpha\cdot{\mathbf T}_\beta$. Though sometimes basis independent
results can be obtained for the soft gluon evolution, {\it e.g.}
\cite{Forshaw:2008cq}, one in general sticks to a particular basis of
colour structures in order to obtain a matrix representation of
${\mathbf \Gamma}$ such that the exponentiation can be performed.
We shall here consider the {\it colour flow basis}, by translating all
colour indices into indices transforming either in the fundamental
(${\mathbf N}$) or the anti-fundamental ($\bar{{\mathbf N}}$)
representation. For a thorough derivation of this paradigm, including
a list of Feynman rules and their application to fixed-order
calculation, see for example \cite{Maltoni:2002mq}. Translating the
labelling of physical legs, $\alpha$, to a labelling of corresponding
colour and anti-colour `legs',
\begin{eqnarray}\nonumber
k & \leftrightarrow & \alpha = k_{\mathbf{N}} \\\nonumber
\overline{k-1} & \leftrightarrow & \alpha = k_{\bar{\mathbf{N}}}
\\\nonumber
\left.\begin{array}{c}
\overline{k-n_q/2}\\
k-n_q/2
\end{array}\right\}& \leftrightarrow & \alpha = k_{\mathbf{A}}
\end{eqnarray}
we are able to label the basis tensors in the colour flow basis by
permutations of the anti-colour indices relative to the colour indices,
\begin{equation}
|\sigma\rangle = \left|\begin{array}{ccc} 1 & ... & m\\ \sigma(1) & ... & \sigma(m)\end{array} \right\rangle
= \delta^{i_1}_{i_{\overline{\sigma(1)}}} \cdots \delta^{i_m}_{i_{\overline{\sigma(m)}}} \ ,
\end{equation}
where $m=n_q/2+n_g$\footnote{Notice that we do not impose a limitation
to colour structures as appearing for tree-level
calculations. Indeed, the gluon exchange will generate all possible
structures starting from only tree level ones.}. A pictorial
representation of these basis tensors is given in
figure~\ref{figures:basis}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{basis}
\end{center}
\caption{\label{figures:basis}An illustration of the colour basis
chosen for the case of two colour flows. Connected lines correspond
to Kronecker-$\delta$ symbols in the space of (anti-)fundamental
representation indices.}
\end{figure}
The colour charges (note that ${\mathbf T}_\alpha\cdot {\mathbf
T}_\beta = {\mathbf T}_\beta\cdot {\mathbf T}_\alpha$) translate as
(obvious cases relating colour and anticolour are not shown):
\begin{align}\nonumber
{\mathbf T}_\alpha\cdot {\mathbf T}_\beta =
{\mathbf T}_k\cdot {\mathbf T}_l &\quad & \alpha = k_{\mathbf{N}}, \beta = l_{\mathbf{N}} \\\nonumber
{\mathbf T}_\alpha\cdot {\mathbf T}_\beta =
{\mathbf T}_k\cdot {\mathbf T}_{\bar{l}} &\quad & \alpha = k_{\mathbf{N}}, \beta = (l+1)_{\bar{\mathbf{N}}} \\\nonumber
{\mathbf T}_\alpha\cdot {\mathbf T}_\beta =
{\mathbf T}_k\cdot {\mathbf T}_{\bar{l}} + {\mathbf T}_{k}\cdot {\mathbf T}_{l}
&\quad & \alpha = k_{\mathbf{N}}, \beta = (l+n_q/2)_{\mathbf{A}}\\\nonumber
{\mathbf T}_\alpha\cdot {\mathbf T}_\beta =
{\mathbf T}_{k}\cdot {\mathbf T}_{\bar{l}} + {\mathbf T}_{k}\cdot {\mathbf T}_{l} &\quad&
\alpha = (k+n_q/2)_{\mathbf{A}},\\\label{eqs:chargeTranslation}
+{\mathbf T}_{l}\cdot {\mathbf T}_{\bar{k}} + {\mathbf T}_{\bar{k}}\cdot {\mathbf T}_{\bar{l}}
&\quad & \beta = (l+n_q/2)_{\mathbf{A}}
\end{align}
and the colour flow charge products are expressed
as\footnote{$(t^a)^i {}_j (t^a)^k {}_l = \frac{1}{2}\left(\delta^i
{}_l \delta^k {}_j - (1/N) \delta^i {}_j \delta^k {}_l\right)$}
\begin{equation}
{\mathbf T}_i\cdot {\mathbf T}_j =
\frac{1}{2}\left(\delta^{i'}_j \delta^{j'}_i - \frac{1}{N} \delta^{i'}_i \delta^{j'}_j\right)
\end{equation}
for a system of two ${\mathbf N}$ (and similarly for a system of two
$\bar{{\mathbf N}}$) legs, and by
\begin{equation}
{\mathbf T}_i\cdot {\mathbf T}_{\bar{j}} =
-\frac{1}{2}\left(\delta^{i'} {}_{\bar{j}'} \delta^{\bar{j}} {}_{i} - \frac{1}{N} \delta^{i'} {}_i \delta^{\bar{j}} {}_{\bar{j}'}\right)
\end{equation}
for a ${\mathbf N}\bar{{\mathbf N}}$ correlation\footnote{Note that
appropriate crossing signs have to be included when considering
incoming quarks, {\it i.e.}, a factor of -1 for each correlator
involving an incoming quark or anti-quark as long as the anomalous
dimension coefficients and amplitudes are evaluated in the physical
regime.}. Hence the anomalous dimension reads
\begin{equation}
{\mathbf \Gamma} =
\sum_{i<j} ( \gamma_{ij} {\mathbf T}_i\cdot {\mathbf T}_j +
\gamma_{\bar{i}\bar{j}} {\mathbf T}_{\bar{i}}\cdot {\mathbf T}_{\bar{j}} ) +
\sum_{i, j} \gamma_{i\bar{j}} {\mathbf T}_i\cdot {\mathbf T}_{\bar{j}}\ ,
\end{equation}
where the form of the $\gamma$ can be inferred from
eq.~\ref{eqs:chargeTranslation}, {\it e.g.}\footnote{Note that we did
not assume $\Gamma^{\alpha\beta} = \Gamma^{\beta\alpha}$ in the
first place, as may be due to inclusion of recoil effects or further
contributions along the lines of dipole subtraction terms
\cite{Catani:1996vz}}
\begin{align}
\gamma_{kl} = \Gamma^{\alpha\beta} + \Gamma^{\beta\alpha}
& \quad & \alpha = k_{\mathbf{N}}, \beta = l_{\mathbf{N}} \\\nonumber
\gamma_{k\bar{l}} = \Gamma^{\alpha\beta} + \Gamma^{\beta\alpha}
& \quad & \alpha = k_{\mathbf{N}}, \beta = (l+1)_{\bar{\mathbf{N}}} \\\nonumber
\gamma_{k\bar{k}} = 0
& \quad & \alpha = \beta = (k+n_q/2)_{\mathbf{A}} \ .
\end{align}
Examples of the non-diagonal part of the colour correlators are given
in figure~\ref{figures:correlators}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{correlators}
\end{center}
\caption{\label{figures:correlators}Illustration of the non-diagonal
contributions to colour charge products acting on colour flow basis
tensors. Note that the `singlet' operators are entirely equivalent
to the `swapping' ones.}
\end{figure}
Since the colour flow charge products effectively describe one-gluon
exchange between two colour flow lines, the general form of the matrix
representation of ${\mathbf \Gamma}$ is straightforwardly found to be
given by\footnote{Our notation $[ |...| ]$ indicates that we refer to
the matrix element with respect to the given representation of the
amplitude as a complex vector, and not the quantity $\langle\tau |
{\mathbf \Gamma}|\sigma\rangle$, which coincides with the former
only in an orthonormal basis; this is not the case for the colour
flow basis considered here, nor for most other colour bases.}
\begin{equation}
[ \tau | {\mathbf \Gamma} | \sigma ] = \left(- N \Gamma_\sigma
+ \frac{1}{N} \rho \right) \delta_{\tau\sigma} + \Sigma_{\tau\sigma}\ ,
\end{equation}
where
\begin{equation}
\Gamma_\sigma = \frac{1}{2}\sum_{i} \gamma_{i\overline{\sigma(i)}} \ ,
\end{equation}
\begin{equation}
\rho = \frac{1}{2}\left(\sum_{i,j}
\gamma_{i\bar{j}} -
\sum_{i<j}
(\gamma_{ij} +\gamma_{\bar{i}\bar{j}})
\right) \ ,
\end{equation}
while the off-diagonal elements are given by
\begin{equation}
\label{eqs:sigmadef}
\Sigma_{\tau\sigma} =
\sum_{i,j}\Sigma_{ij\tau(i)\tau(j)} \delta_{\tau(i)\sigma(j)} \delta_{\tau(j)\sigma(i)}
\prod_{k\ne i,j} \delta_{\tau(k)\sigma(k)}
\end{equation}
with
\begin{equation}
\Sigma_{ijkl} = \frac{1}{2}(\gamma_{ij}+\gamma_{\bar{k}\bar{l}}
-\gamma_{i\bar{k}}-\gamma_{j\bar{l}}) \ ,
\end{equation}
{\it i.e.} $\Sigma_{\tau\sigma}$ is non-vanishing only when it connects
two basis tensors whose identifying permutations differ by at most a
transposition (note that the Kronecker $\delta$'s in
eq.~\ref{eqs:sigmadef} enforce exactly one transposition between $\tau$
and $\sigma$, so that the sum consists of a single term).
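The bookkeeping behind this statement can be illustrated with a small
Python sketch (not taken from any code referenced in this paper; the
callables \texttt{g\_cc}, \texttt{g\_aa}, \texttt{g\_ca} standing for
$\gamma_{ij}$, $\gamma_{\bar{k}\bar{l}}$ and $\gamma_{i\bar{k}}$ are
placeholder inputs):

```python
def transposition_diff(tau, sigma):
    """Return the position pair (i, j) on which the permutations tau and
    sigma (given as tuples) differ by a single transposition, or None if
    they are identical or differ by more than a transposition."""
    diff = [k for k in range(len(tau)) if tau[k] != sigma[k]]
    if len(diff) != 2:
        return None
    i, j = diff
    if tau[i] == sigma[j] and tau[j] == sigma[i]:
        return (i, j)
    return None

def Sigma_tau_sigma(tau, sigma, g_cc, g_aa, g_ca):
    """Off-diagonal element Sigma_{tau sigma}: non-vanishing only when
    tau and sigma differ by exactly one transposition.  g_cc, g_aa and
    g_ca are placeholder callables for the gamma coefficients."""
    pair = transposition_diff(tau, sigma)
    if pair is None:
        return 0.0
    i, j = pair
    k, l = tau[i], tau[j]
    # Sigma_{ijkl} = (gamma_{ij} + gamma_{kbar lbar}
    #                 - gamma_{i kbar} - gamma_{j lbar}) / 2
    return 0.5 * (g_cc(i, j) + g_aa(k, l) - g_ca(i, k) - g_ca(j, l))
```

For instance, in a three-flow system the basis tensors labelled by
$(0,1,2)$ and $(1,0,2)$ are connected by $\Sigma$, while $(0,1,2)$ and
the cyclic $(1,2,0)$ are not.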
\section{Summation of Large-$N$ Towers}
\label{sections:towers}
Though the exponentiation of the soft anomalous dimension matrix is
possible either analytically or using standard numerical algorithms
for a fixed (small) number of external legs, a general expression
seems as yet out of reach, due to the rapid growth of the dimension of
colour space with the number of partons. In this section, we will
consider successive approximations to the full exponentiation by
subsequently summing towers of large-$N$ contributions, $\gamma^n
N^{n-k}$. To derive the form of the large-$N$ towers, let us start
from the structure of the soft anomalous dimension matrix,
\begin{equation}
{\mathbf \Gamma}
\equiv N \underline{\Gamma} + \underline{\Sigma} + \frac{1}{N} \rho \underline{1} \ ,
\end{equation}
where we choose an arbitrary ordering of the permutations to identify
these with the indices of the rows and columns of the matrix
representation, such that $\underline{\Gamma}={\rm
diag}(\{-\Gamma_\sigma\})$, and
$\underline{\Sigma}=(\Sigma_{\rho\sigma})$ does not contain any
diagonal elements.
The $n$-th power of the matrix representation then takes the form
\begin{equation}
{\mathbf \Gamma}^n \equiv
\sum_{k=0}^n\sum_{l=k}^n \left(\begin{array}{c}n\\k\end{array}\right) N^{n-l-k}\rho^k \underline{\Sigma}_{n-k,n-l} \ ,
\end{equation}
where $\underline{\Sigma}_{n,l}$ originates from powers of
$N\underline{\Gamma} + \underline{\Sigma}$,
\begin{equation}
\label{eqs:sigmapower}
(N\underline{\Gamma} + \underline{\Sigma})^n = \sum_{l=0}^n N^l \underline{\Sigma}_{n,l} \ ,
\end{equation}
with matrix elements given by (see appendix~\ref{sections:sigmarecursion}):
\begin{multline}
\label{eqs:sigmaelements}
(\underline{\Sigma}_{n,l})_{\tau\sigma} =\\
(-1)^l\sum_{\sigma_0,...,\sigma_{n-l}} \delta_{\tau\sigma_0}\delta_{\sigma_{n-l}\sigma}
\left(\prod_{\alpha=0}^{n-l-1} \Sigma_{\sigma_{\alpha}\sigma_{\alpha+1}}\right)\ \times\\
Q_{n-l,l}\left(\{\sigma_0,...,\sigma_{n-l}\},\Gamma\right) \ .
\end{multline}
Here $\Gamma = \{\Gamma_\sigma\}$ and the details of the polynomials
$Q_{k,l}$ are discussed in appendix~\ref{sections:qpolys}. The
exponentiation of the anomalous dimension matrix is then given by
\begin{multline}
\label{eqs:expsum}
[\tau | e^{\mathbf \Gamma} |\sigma] =
\sum_{l=0}^\infty \frac{(-1)^l}{N^l}\ \times\\\sum_{k=0}^l
\frac{(-\rho)^k}{k!}\sum_{\sigma_0,...,\sigma_{l-k}} \delta_{\tau\sigma_0}\delta_{\sigma_{l-k}\sigma}
\left(\prod_{\alpha=0}^{l-k-1} \Sigma_{\sigma_{\alpha}\sigma_{\alpha+1}}\right)\ \times\\
R(\{\sigma_0,...,\sigma_{l-k}\},\{\Gamma_\sigma\})
\end{multline}
where $R$ is worked out in appendix~\ref{sections:qpolys}.
We are now in the position to define successive summation of large-$N$
contributions. Eq.~\ref{eqs:expsum} suggests defining successive
summations at (next-to)$^d$-leading colour (N$^d$LC) by truncating the
sum at $l=d$. Owing to the properties of the $R$ functions, we find
that this prescription amounts to summing, schematically, the
following contributions (lower order contributions always implied):
\begin{eqnarray} \nonumber
\text{at LC} & : & 1 + \gamma N + \gamma^2 N^2 + ... \\\nonumber
\text{at NLC} & : & \left(\gamma + \frac{\gamma}{N}\right) (1 + \gamma N + \gamma^2 N^2 + ...) \\\nonumber
\text{at NNLC} & : & \left(\gamma + \frac{\gamma}{N}\right)^2 (1 + \gamma N + \gamma^2 N^2 + ...) \ ,
\end{eqnarray}
{\it i.e.} we consider a regime in which $\gamma N = {\cal O}(1)$
requires resummation, while $\gamma\sim 1/N$ and $\gamma/N\sim 1/N^2$
can still be considered small in comparison to the $N$ enhancement of
the ${\cal O}(1)$ towers being resummed. We shall also consider the
case that we have (trivially) exponentiated all contributions stemming
from the $\rho$ contribution to the anomalous dimension matrix. This
resummation, which we will here refer to as N$^d$LC', is obtained by
only considering the $k=0$ terms in eq.~\ref{eqs:expsum}, while
redefining the $\Gamma_\sigma$ appropriately, $\Gamma_\sigma' =
\Gamma_\sigma-\rho/N^2$. Then we sum towers of
$$
1 + \left(\gamma + \frac{\gamma}{N^2}\right) N + \left(\gamma + \frac{\gamma}{N^2}\right)^2 N^2 + ...
$$ with a prefactor of $(N\gamma + \gamma/N)^d$ at N$^d$LC'. Explicit
expressions of the $R$ functions as required through N$^3$LC are given
in appendix~\ref{sections:rexplicit}. Explicitly, at leading colour
(LC), we have
\begin{equation}
[\tau | e^{\mathbf \Gamma} |\sigma] = \delta_{\tau\sigma} e^{-N\Gamma_\sigma} + \text{NLC} \ ,
\end{equation}
whereas at next-to-leading colour (NLC), we have
\begin{multline}
[\tau|e^{\mathbf \Gamma}|\sigma] =
\delta_{\tau\sigma}e^{-N\Gamma_\sigma}\left(1 + \frac{\rho}{N}\right) - \\
\frac{1}{N}\Sigma_{\tau\sigma}\frac{e^{-N\Gamma_\tau} - e^{-N\Gamma_\sigma}}{\Gamma_\tau-\Gamma_\sigma}
+ \text{NNLC} \ .
\end{multline}
Note that the NLC summation is sufficient to recover the anomalous
dimension matrix upon a next-to-leading order expansion,
\begin{equation}
\left.[\tau|e^{\mathbf \Gamma}|\sigma]\right|_{\text{NLC}} = \delta_{\tau\sigma} +
[\tau|{\mathbf \Gamma}|\sigma] + {\cal O}(\gamma^2) \ .
\end{equation}
Also note that the structure of the approximated exponentiations
reflects the same approximation to be applied to the scalar product
matrix of the basis tensors: The basis is orthogonal at LC, at NLC
only scalar products between tensors differing by at most a
transposition need to be considered (and there is no non-vanishing
matrix element of the exponentiated soft anomalous dimension
connecting other tensors to this order), and similar observations apply
to higher order summations.
As a first assessment of the accuracy of the procedure outlined above,
let us consider the case of the evolution of two colour flows. Here,
the soft anomalous dimension matrix in the basis
$\{|12\rangle,|21\rangle\}$ takes the form
\begin{equation}
{\mathbf \Gamma} \equiv \begin{pmatrix} -N \Gamma_{12} + \frac{1}{N}\rho & \Sigma_{1212} \\
\Sigma_{1221} & -N \Gamma_{21} + \frac{1}{N}\rho \end{pmatrix} \ ,
\end{equation}
and its exact exponentiation is given by
\begin{multline}
\label{eqs:exp2by2}
e^{\mathbf \Gamma} \equiv
\frac{e^{\frac{1}{N}\rho} e^{-\frac{N}{2}(\Gamma_{12}+\Gamma_{21})}}{\kappa} \ \times \\
\begin{pmatrix} -\Delta \sinh\frac{\kappa}{2} + \kappa \cosh\frac{\kappa}{2} & 2\Sigma_{1212}\sinh\frac{\kappa}{2} \\
2\Sigma_{1221}\sinh\frac{\kappa}{2} & \Delta \sinh\frac{\kappa}{2} + \kappa \cosh\frac{\kappa}{2} \end{pmatrix}
\end{multline}
where $\Delta=N(\Gamma_{12}-\Gamma_{21})$ and
$\kappa=\sqrt{\Delta^2+4\Sigma_{1212}\Sigma_{1221}}$. Let us for the
moment assume that all $\gamma$ are real (though this is of course not
the general case); considering then a {\it phase space} region for
which $\Delta^2\gg 4\Sigma_{1212}\Sigma_{1221}$, $\kappa\sim
|\Delta|$, we recover the NLC' approximation, {\it i.e.}, there is a
phase space region where {\it purely kinematic reasons} give rise to an
NLC' expansion without any reference to the actual size of $N$
itself. Note that the different treatment of $\rho$, either absorbing
it into a redefinition of the $\Gamma_\sigma$, or treating it as
subleading itself, amounts -- for the case of $q\bar{q}$ singlet -- to
either keeping $C_F=(N^2-1)/(2N)$ exactly or doing a strict large-$N$
limit with $C_F\sim C_A/2$. An observation that these different
prescriptions account for the bulk of subleading-$N$ effects in a
colour-improved parton shower evolution \cite{Platzer:2012qg} has
already been made, though we are far from drawing an ultimate
conclusion here.
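The closed form in eq.~\ref{eqs:exp2by2} can be checked numerically
against a brute-force Taylor-series matrix exponential; the entries
below are purely illustrative placeholder values, not derived from any
physical process:

```python
import cmath

def expm2(M, terms=60):
    """Taylor-series exponential of a 2x2 complex matrix; sufficient
    here since all entries are O(1)."""
    S = [[1, 0], [0, 1]]  # running sum, starts at the identity
    P = [[1, 0], [0, 1]]  # running power M^n / n!
    for n in range(1, terms):
        P = [[sum(P[i][k] * M[k][j] for k in range(2)) / n
              for j in range(2)] for i in range(2)]
        S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    return S

# illustrative placeholder values
N = 3.0
G12, G21 = 0.20, 0.35                # Gamma_{12}, Gamma_{21}
S1212, S1221 = 0.10 + 0.05j, 0.08    # Sigma_{1212}, Sigma_{1221}
rho = 0.15

Gamma = [[-N * G12 + rho / N, S1212],
         [S1221, -N * G21 + rho / N]]

# closed form of the 2x2 exponentiation
D = N * (G12 - G21)
kappa = cmath.sqrt(D ** 2 + 4 * S1212 * S1221)
pre = cmath.exp(rho / N) * cmath.exp(-N / 2 * (G12 + G21)) / kappa
sh, ch = cmath.sinh(kappa / 2), cmath.cosh(kappa / 2)
closed = [[pre * (-D * sh + kappa * ch), pre * 2 * S1212 * sh],
          [pre * 2 * S1221 * sh, pre * (D * sh + kappa * ch)]]

numeric = expm2(Gamma)
err = max(abs(closed[i][j] - numeric[i][j])
          for i in range(2) for j in range(2))
```

The two results agree to machine precision, as expected from writing
${\mathbf \Gamma}=\mu\,\underline{1}+B$ with $B^2=(\kappa/2)^2\,\underline{1}$.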
\section{Numerical Results}
\label{sections:numerics}
In this section we consider numerical results on summing subsequent
large-$N$ towers for the case of QCD $2\to 2$ scattering, $p_1,p_2\to
p_3,p_4$ with a simple assumption on the anomalous dimension matrix,
\begin{eqnarray}
\Gamma^{12} = \Gamma^{34} &=&
\frac{\alpha_s}{4\pi}\left( \frac{1}{2} \ln^2 \frac{s}{\mu^2} - i\pi \ln \frac{s}{\mu^2}\right) \\ \nonumber
\Gamma^{13} = \Gamma^{24} &=&
\frac{\alpha_s}{8\pi} \ln^2 \frac{|t|}{\mu^2} \\ \nonumber
\Gamma^{14} = \Gamma^{23} &=&
\frac{\alpha_s}{8\pi} \ln^2 \frac{|u|}{\mu^2}
\end{eqnarray}
in terms of standard Mandelstam variables $s,t,u$ and some resolution
scale $\mu$. This anomalous dimension corresponds to a jet veto in a
typical parton shower resolution variable, but otherwise should rather
be thought of as a generic example. We refer to
\cite{Kidonakis:1998nf} for a detailed discussion and note that a
colour flow approach for the quark-quark case has already been
considered in \cite{Sotiropoulos:1993rd}. We will explicitly consider
the matrix elements of the exponentiated anomalous dimension. For the
case of processes involving four (anti-) quarks, we can directly
compare to the analytic result in eq.~\ref{eqs:exp2by2}, while for the
other cases we study the convergence of successive approximations
(though exact results could also be obtained in these cases). All
calculations have been carried out with the C++ library
\texttt{CVolver}, which is available on request from the author.
For quark-quark scattering, we display numerical results for the real
and imaginary parts of the evolution matrix $e^{{\mathbf \Gamma}}$ in
figures~\ref{figures:diagonal} and \ref{figures:offdiagonal}.
Generally, we find that NLC summations are required to get a
reasonable approximation to the real part, while NNLC is required for
a similar description of the imaginary parts. At N$^3$LC we find a
sub-permille level agreement of the approximation with the exact
results. In figure~\ref{figures:primes} we compare the difference
between the native summation and the prime prescription, which clearly
improves the approximation of the exact result leading to an accuracy
at N$^2$LC', which is comparable to the N$^3$LC calculation.
\begin{figure}
\begin{center}
\input{diagonal-real}
\input{diagonal-imag}
\end{center}
\caption{\label{figures:diagonal}Real and imaginary parts of a
diagonal evolution matrix element for quark-quark scattering at
$s=100\ {\rm GeV}^2$, $\mu^2=25\ {\rm GeV}^2$ as a function of the
momentum transfer $|t|$, comparing the exact results to various
approximations. This matrix element describes the amplitude to keep
a $t$-channel colour flow $\sigma$.}
\end{figure}
\begin{figure}
\begin{center}
\input{offdiagonal-real}
\input{offdiagonal-imag}
\end{center}
\caption{\label{figures:offdiagonal}Same as
figure~\ref{figures:diagonal} for an off-diagonal matrix
element. The matrix element considered describes the transition from
a $u$-channel colour flow $\tau$ to a $t$-channel one, $\sigma$.}
\end{figure}
\begin{figure}
\begin{center}
\input{diagonal-imag-prime}
\input{offdiagonal-real-prime}
\end{center}
\caption{\label{figures:primes}Comparison of the prime resummation
prescription to the native one for the same parameters as
used in figure~\ref{figures:diagonal}. Typically, a N$^2$LC'
summation reaches a similar accuracy as a N$^3$LC one, both
providing sub-permille agreement with the exact result.}
\end{figure}
For the other configurations contributing to QCD $2\to 2$ scattering
we find a similar pattern of convergence through successive orders.
We note, however, that some of the matrix elements for processes with
an increasing number of colour flows are non-zero only starting from a
sufficiently high order.
\section{Outlook on Possible Applications}
\label{sections:outlook}
The work presented here is relevant to cases where soft gluon
evolution is a required ingredient for precise predictions, but not
feasible in exact form owing to a large number of external legs
present. This, in particular, applies to improved parton shower
algorithms but also to analytic resummation for observables of
multi-jet final states. Looking at the convergence of the N$^d$LC
expansions, which can easily be implemented in an algorithmic way, one
can gain confidence in providing a reliable resummed prediction at
some truncation of the exponentiation. As for the case of parton
showers, the colour flow basis, being itself an ingredient of many highly
efficient matrix element generators, offers unique possibilities to
perform Monte Carlo sums over explicit colour structures or charges,
such that efficient algorithms in this case seem to be within reach.
The requirement to study soft gluon dynamics for a large number of
legs is as well at the heart of the dynamics behind non-global
logarithms \cite{Dasgupta:2001sh}, when considered to more than the
first order in which they appear, and beyond leading colour. Another
application (which, in part, triggered the present work) is to gain
insight into the dynamics of colour reconnection models. A QCD
motivated and feasible colour reconnection model based on summing
large-$N$ towers is subject to ongoing work and will be presented
elsewhere.
Let us finally remark that N$^d$LC calculations in general do not
require matrix exponentiation, but only at most $d$ plain matrix
multiplications. Owing to the respective matrices being very
sparse\footnote{Note that this does not only apply to the colour flow
  basis; similar observations have been made for other choices,
  {\it e.g.} \cite{Sjodahl:2009wx}.}, these can be performed very
efficiently. Indeed, one can imagine performing a Monte Carlo summation
over colour structures by generating subsequent sequences of colour
flows to be considered. The number of possible sequences is very
limited given the fact that the $\Sigma$ matrices only contain
non-vanishing matrix elements for two colour flows which differ at
most by a transposition in the permutations labelling them.
\section{Conclusions}
\label{sections:conclusions}
In this paper we have investigated soft gluon evolution in the colour
flow basis, presenting the structure of the soft anomalous dimension
for any number of legs. We have then focused on systematic summation
of large-$N$ enhanced terms with the aim of providing successive
approximations to the exact exponentiation of the anomalous
dimension. We generally find a good convergence of these approximations
for a simple anomalous dimension in QCD $2\to 2$ scattering. The
present work can be used to perform soft gluon resummation for a large
number of external legs, where the full exponentiation is not feasible
anymore. It also forms the basis for improved parton shower evolution
and may shed light on the dynamics to be considered for colour
reconnection models.
Particularly in conjunction with matrix element generators, making use
of the colour flow basis, very efficient and highly automated
calculations can be performed owing to the algorithmic structure of
N$^d$LC approximations, including Monte Carlo sums over individual
colour structures. The C++ library \texttt{CVolver}
\cite{Platzer:CVolver}, which has been developed within this context,
provides all required tools to do so.
\section*{Acknowledgments}
I am grateful to Malin Sj\"odahl and Mike Seymour for many valuable
discussions and comments on the work presented here. This work has
been supported in part by the Helmholtz Alliance `Physics at the
Terascale'.
\section{Introduction}
Random sequential adsorption of congruent spheres in the $d$-dimensional Euclidean space has been a topic of great interest across the sciences, serving as basic models in condensed matter and quantum physics~\cite{SBVK2014,LFCDJCZ2001,JCZRCL2000,SWM2010}, nanotechnology \cite{CKE2002,DGPLCM2002}, information theory and optimization problems~\cite{S2001,GMS2012,KT2013}.
Random sequential adsorption also arises naturally in experimental settings, ranging from the deposition of nano-scale particles on polymer surfaces, adsorption of proteins on solid surfaces to the creation of logic gates for quantum computing, and
many more applications in domains as diverse as biology, ecology and sociology, see~\cite{CAP07,TS2010,T2013} for extensive surveys.
We use the term random sequential adsorption ($\text{\textsc{\textsc{rsa}}} $) for the dynamic process defined as follows:
At each time epoch, a point appears at a uniformly chosen location in space, and an attempt is made to place a sphere of radius $r$ with the chosen point as its center.
If the new sphere fits in the empty area without overlapping any of the spheres deposited earlier, it is accepted; otherwise the deposition attempt is discarded.
After $n$ deposition attempts, the quantity of interest is the proportion of accepted spheres, or equivalently, the volume covered by the accepted spheres. Fig.~\ref{fig:sfig1} illustrates an instance of this \text{\textsc{\textsc{rsa}}} process in 2D.
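The process is easy to simulate directly. The following naive $O(n^2)$
Python sketch (cell lists would be used for large $n$) deposits spheres
on the unit torus for $d=2$, with the radius fixed so that
$n\pi(2r)^2=c$, the density parameterization used throughout this paper:

```python
import math, random

def rsa_2d(n, c, seed=0):
    """Random sequential adsorption of n disk attempts on the unit
    torus [0,1)^2; the radius satisfies n * pi * (2r)^2 = c.
    Returns the jamming fraction and the accepted centers."""
    rng = random.Random(seed)
    r = 0.5 * math.sqrt(c / (n * math.pi))
    accepted = []
    for _ in range(n):
        x, y = rng.random(), rng.random()
        ok = True
        for (u, v) in accepted:
            dx = abs(x - u); dx = min(dx, 1.0 - dx)  # periodic boundary
            dy = abs(y - v); dy = min(dy, 1.0 - dy)
            if dx * dx + dy * dy < (2.0 * r) ** 2:
                ok = False   # overlaps an earlier sphere: discard
                break
        if ok:
            accepted.append((x, y))
    return len(accepted) / n, accepted
```

By construction, any two accepted centers are at least $2r$ apart on
the torus, and the jamming fraction decreases with the density $c$.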
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=.45]{visual-coupled-c15.pdf}
\caption{\text{\textsc{\textsc{rsa}}} in 2D}
\label{fig:sfig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=.45]{rgg-network-c15.pdf}
\caption{\text{\textsc{\textsc{rgg}}} network}
\label{fig:clique}
\end{subfigure}
\caption{(a) Random sequential adsorption in 2D with density $c=15$.
Dots indicate the centers of accepted (red) and discarded (blue) spheres.
(b) $\text{\textsc{\textsc{rgg}}} (15,2)$ graph with 1000 vertices: Two vertices share an edge if they are less than $2r$ distance apart, where $r$ is such that a vertex has on average $c=15$ neighbors. Notice the many local clusters.
}\label{fig:rsa}
\end{figure}
Equivalently, we may think of the interaction network of the $n$ chosen centers of spheres by drawing an edge between two points if they are at most $2r$ distance apart.
This is because a deposition attempt can block another deposition attempt if and only if the centers are at most $2r$ distance apart.
The obtained random graph is known as the random geometric graph ($\text{\textsc{\textsc{rgg}}} $) \cite{P2003}.
The fraction of accepted spheres can be obtained via the following greedy algorithm for finding independent sets of $\text{\textsc{\textsc{rgg}}} $: Given a graph $G$, initially all vertices are declared inactive. Sequentially activate a uniformly chosen inactive vertex of the graph and block its neighborhood, until all inactive vertices are exhausted.
We refer to the above greedy algorithm as $\text{\textsc{\textsc{rsa}}} $ on the graph $G$.
If $G$ has the same distribution as $\text{\textsc{\textsc{rgg}}} $ on $n$ vertices, then the final set of active vertices has the same distribution as the number of accepted spheres in the continuum after $n$ deposition attempts.
Thus, one can equivalently study $\text{\textsc{\textsc{rsa}}} $ on $\text{\textsc{\textsc{rgg}}} $ to obtain the fraction of accepted spheres when $\text{\textsc{\textsc{rsa}}} $ is applied in the continuum.
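The greedy algorithm translates directly into code. A sketch operating
on an adjacency dictionary (processing a uniformly shuffled order and
skipping blocked vertices is equivalent to picking a uniform inactive
vertex at each step):

```python
import random

def greedy_independent_set(adj, seed=0):
    """RSA on a graph G: activate vertices in uniformly random order,
    skipping any vertex already blocked, and block the neighbourhood
    of every activated vertex.  adj maps each vertex to its set of
    neighbours."""
    rng = random.Random(seed)
    order = list(adj)
    rng.shuffle(order)
    active, blocked = set(), set()
    for v in order:
        if v not in blocked:
            active.add(v)
            blocked |= adj[v]
    return active
```

The returned set is by construction independent and maximal, so its
size divided by the number of vertices is the jamming fraction of the
graph.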
The precise setting in this paper considers $\text{\textsc{\textsc{rsa}}} $
in a finite-volume box $[0,1]^d$ with periodic boundary, filled with `small' spheres of radius $r$ and volume $V_d(r)$ \cite{PY02,SBVK2014,SJK15}.
Since the volume of $[0,1]^d$ is~1, the probability that two randomly chosen vertices share an edge in the interaction network is equal to the volume of a sphere of radius $2r$ given by
$V_d(2r)=\pi^{d/2} (2r)^d/\Gamma(1+d/2)
$.
Thus the average vertex degree in the $\text{\textsc{\textsc{rgg}}} $ is $c=n V_d(2r)$, and
since $c$ is also the average number of overlaps of a sphere with all other attempted spheres, we interchangeably use the terms density and average degree for $c$.
We operate in the sparse regime, where both $r\to 0$ and $n\to \infty$, so that $c\geq 0$ is an arbitrary but fixed constant.
In fact, maintaining a constant density $c$ in the large-network limit is necessary to observe a non-degenerate limit of the fraction of accepted spheres.
In other words, as we will see, the jamming fraction converges to 1 or 0 as $c$ tends to 0 or infinity, respectively.
Thus, in order for $c$ to remain fixed as $n\to\infty$, the radius should scale as a function of $n$ according to
\begin{equation}\label{radius}
r=r(n)=\frac{1}{2}\left[\frac{c \Gamma(1+d/2)}{n\pi^{d/2}}\right]^{1/d}.
\end{equation}
Notice that it is equivalent to consider the deposition of spheres with fixed radii into a box of growing volume.
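As a quick consistency check of eq.~\eqref{radius}, one may verify
numerically that this choice of $r$ reproduces average degree
$nV_d(2r)=c$ in any dimension (a small sketch, not part of any
referenced codebase):

```python
import math

def radius(n, c, d):
    """Radius r(n) of eq. (radius): chosen so that n * V_d(2r) = c."""
    return 0.5 * (c * math.gamma(1 + d / 2)
                  / (n * math.pi ** (d / 2))) ** (1 / d)

def avg_degree(n, r, d):
    """Average RGG degree n * V_d(2r), with V_d the d-ball volume."""
    return n * math.pi ** (d / 2) * (2 * r) ** d / math.gamma(1 + d / 2)
```

For instance, with $n=1000$ and $c=5$, the average degree evaluates to
exactly 5 for every dimension tried.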
We parameterize the $\text{\textsc{\textsc{rgg}}} $ model by the density $c$ and the dimension $d$, and henceforth write this as $\text{\textsc{\textsc{rgg}}} (c,d)$.
A typical instance of $\text{\textsc{\textsc{rgg}}} (5,2)$ with $n = 1000$ vertices is shown in Fig.~\ref{fig:clique}.
Let $J_n(c,d)$ be the fraction of active vertices in the $\text{\textsc{\textsc{rgg}}} (c,d)$ model on $n$ vertices.
While it was proved in~\cite{PY02} that $\lim_{n\to\infty}J_n(c,d)= J(c,d)$ exists, no quantitative characterization of $J(c,d)$ for dimensions $\geq 2$ has been provided to date, and
so far the main methods to study this problem rely on extensive simulations~\cite{DC02,B11,S1967,TUS2006,ZT2013,
TST1991,F1980}.
In this paper we propose a novel approach for the study of the fraction of accepted spheres that considers $\text{\textsc{\textsc{rsa}}} $ on a clustered random graph model, designed to match the local spatial properties of the $\text{\textsc{\textsc{rgg}}} $ model in terms of average degree and clustering.
Contrary to the $\text{\textsc{\textsc{rgg}}} $ model, the proposed random graph model is amenable to rigorous mathematical treatment, including exact analysis of the limiting jamming fraction and its fluctuations.
The paper is structured as follows: Section~\ref{sec:CRG} introduces the clustered random graph and the correspondence with the random geometric graph. Section~\ref{sec:results} presents the main results for the jamming fraction in the mean-field regime.
We also show through extensive simulations that the mean-field approximations are accurate for all densities and dimensions.
Sections~\ref{sec:proof}--\ref{sec:clustering-coeff} contain all the proofs, and we conclude in Section~\ref{sec:discussion} with a discussion.
\section{Clustered random graphs}\label{sec:CRG}
\begin{figure}
\centering
\includegraphics[scale=.6]{CRG-topology.pdf}
\caption{Example topology generated by the $\text{\textsc{\textsc{crg}}} (c,\alpha)$ model.}
\label{fig:crg}
\end{figure}
Random graphs serve to model large networked systems, but are typically unfit for capturing local clustering in the form of relatively many short cycles. This can be resolved by locally adding so-called households or small dense graphs~\cite{BBS13,CL14,T07,N09,KN10,SVV16,SVV16SR,HLS15}.
Vertices in a household have a much denser connectivity to all (or many) other household members, which enforces local clustering.
We now introduce a specific household model, called clustered random graph model ($\text{\textsc{\textsc{crg}}} $), designed for the purpose of analyzing the $\text{\textsc{\textsc{rsa}}} $ problem.
An arbitrary vertex in the \text{\textsc{\textsc{crg}}} model has local or short-distance connections with nearby vertices, and global or long-distance connections with the other vertices. When pairing vertices, the local and global connections are formed according to different statistical rules.
The degree distribution of a typical vertex is taken to be Poisson$(c)$ (approximately) in both the $\text{\textsc{\textsc{rgg}}} $ and $\text{\textsc{\textsc{crg}}} $ model.
Thus a typical vertex, when activated, blocks approximately Poisson$(c)$ other vertices.
In the $\text{\textsc{\textsc{crg}}} (c,\alpha)$ model however, the total mass of connectivity measured in the density parameter $c$, is split into $\alpha c$ to account for direct local blocking and $(1-\alpha)c$ to incorporate the propagation of spatial correlations over longer distances.
The $\text{\textsc{\textsc{crg}}} (c,\alpha)$ model with $n$ vertices is then defined as follows (see Fig.~\ref{fig:crg}):
\begin{itemize}
\item Partition the $n$ vertices into random households of size $1+\mathrm{Poisson}(\alpha c)$. This can be done by sequentially selecting
$1+\mathrm{Poisson}(\alpha c)$ vertices uniformly at random and declaring them a household, repeating this procedure until at some point
the next $1+\mathrm{Poisson}(\alpha c)$ random variable exceeds the number of remaining vertices.
All the remaining vertices are then declared a household too, and the household formation process is completed.
\item Now that all vertices are declared members of some household, the random graph is constructed according to a local and a global rule.
The local rule says that all vertices in the same household get connected by an edge, leading to complete graphs of size 1+Poisson($\alpha c$).
The global rule adds a connection between
any two vertices belonging to two different households with probability $(1-\alpha)c/n$.
\end{itemize}
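The two construction rules can be sketched directly in code (a
straightforward, unoptimized generator; the $O(n^2)$ global-edge loop
is kept for clarity, and the Poisson sampler is Knuth's
multiplication method, adequate for moderate $\alpha c$):

```python
import math, random

def crg(n, c, alpha, seed=0):
    """Sample a CRG(c, alpha) graph: households of size
    1 + Poisson(alpha*c) are complete graphs, and vertices in distinct
    households are connected independently with probability
    (1 - alpha) * c / n.  Returns (adjacency dict, household labels)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication method
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    verts = list(range(n))
    rng.shuffle(verts)
    household, hid = {}, 0
    while verts:
        size = 1 + poisson(alpha * c)
        for v in verts[:size]:   # the final batch absorbs the remainder
            household[v] = hid
        verts = verts[size:]
        hid += 1
    adj = {v: set() for v in range(n)}
    p_global = (1 - alpha) * c / n
    for u in range(n):
        for v in range(u + 1, n):
            if household[u] == household[v] or rng.random() < p_global:
                adj[u].add(v)
                adj[v].add(u)
    return adj, household
```

Every household is a clique by construction, and the empirical average
degree is close to $c$ (exactly Poisson$(c)$ only in the large-$n$
limit, as noted below).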
This creates a class of random networks with average degree $c$ and tunable level of clustering via the free parameter $\alpha$.
With the goal to design a solvable model for the $\text{\textsc{\textsc{rsa}}} $ process, the $\text{\textsc{\textsc{crg}}} (c,\alpha)$ model
has on average $nc/2$ connections, building a random structure that mimics the local spatial structure of the $\text{\textsc{\textsc{rgg}}} (c,d)$ model on $n$ vertices.
Seen as the topology underlying the $\text{\textsc{\textsc{rsa}}} $ problem, the $\text{\textsc{\textsc{crg}}} (c,\alpha)$ model incorporates local clusters of overlapping spheres, which occur naturally in random geometric graphs;
see Fig.~\ref{fig:clique}.
We can now also consider \text{\textsc{\textsc{rsa}}} on the $\text{\textsc{\textsc{crg}}} (c,\alpha)$ model, by using the greedy algorithm that constructs an independent set on the graph by sequentially selecting vertices uniformly at random, and placing them in the independent set unless they are adjacent to some vertex already chosen.
The jamming fraction $J^\star_n(c,\alpha)$ is then the size of the greedy independent set divided by the network size $n$.
From a high-level perspective, we will solve the \text{\textsc{\textsc{rsa}}} problem on the $\text{\textsc{\textsc{crg}}} (c,\alpha)$ model, and translate this solution into an equivalent result for $\text{\textsc{\textsc{rsa}}} $ on the $\text{\textsc{\textsc{rgg}}} (c,d)$.
Our ansatz is that for large enough $n$, a unique relation can be established between dimension $d$ in $\text{\textsc{\textsc{rgg}}} $ and the parameter $\alpha = \alpha_d$ in $\text{\textsc{\textsc{crg}}} $, so that the jamming fractions are comparable, i.e.,~$J_n(c, d)\approx J^\star_n(c, \alpha_d),$
and virtually indistinguishable in the large network limit.
In order to do so, we map the $\text{\textsc{\textsc{crg}}} (c,\alpha)$ model onto the $\text{\textsc{\textsc{rgg}}} (c,d)$ model by imposing two natural conditions.
The first condition matches the average degrees in both topologies, i.e., $c$ is chosen to be equal to $nV_d(2r)$.
The second condition tunes the local clustering.
Let us first describe the clustering in the $\text{\textsc{\textsc{rgg}}} $ model.
Consider two points chosen uniformly at random in a $d$-dimensional hypersphere of radius $2r$.
Then what is the probability that these two points are themselves at most $2r$ distance apart?
From the $\text{\textsc{\textsc{rgg}}} $ perspective, this corresponds to the probability that, conditional on two vertices $u$ and $v$ being neighbors, a uniformly chosen neighbor $w$ of $u$ is also a neighbor of $v$, which is known as the local clustering coefficient \cite{N2010}.
In the $\text{\textsc{\textsc{crg}}} (c,\alpha)$ model, on the other hand, the relevant measure of clustering is $\alpha$, the probability that a randomly chosen neighbor is a neighbor of one of its household members.
We then choose the unique $\alpha$-value that matches the clustering coefficient of the $\text{\textsc{\textsc{rgg}}} $.
Denote this unique value by $\alpha_d$, to express its dependence on the dimension~$d$.
In Section~\ref{sec:clustering-coeff}, we show that
\begin{eq}\label{eq:alpha-choice}
\alpha_d = d\int_{0}^1x^{d-1}I_{1-\frac{x^2}{4}}\Big(\frac{d+1}{2}, \frac{1}{2}\Big)\dif x
\end{eq}
with $I_z(a,b)$ the normalized incomplete beta integral.
Table~\ref{tab1} shows the numerical values of $\alpha_d$ for dimensions 1 to 5.
\begin{table}
\centering
\begin{tabular}{C{.8cm}|C{1.5cm}|C{1.5cm}|C{1.5cm}|C{1.5cm}|C{1.5cm}}
\hline
{\rm $d$} &1 & 2 & 3 & 4 & 5 \\
\hline
$\alpha_d$ &0.750000& 0.586503 & 0.468750 & 0.379755 & 0.310547\\
\hline
\end{tabular}%
\caption{$\alpha_d$ for dimensions 1 to 5.}
\label{tab1}
\end{table}
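Since $\alpha_d$ is, by its geometric definition above, the
probability that two points drawn uniformly in a $d$-ball of radius
$2r$ lie within distance $2r$ of each other, the tabulated values can
also be reproduced by Monte Carlo, without evaluating the incomplete
beta integral (rescaling to a ball of radius 2 and threshold distance
2):

```python
import math, random

def alpha_mc(d, samples=100000, seed=0):
    """Monte Carlo estimate of alpha_d: probability that two uniform
    points in a d-ball of radius 2 are at most distance 2 apart."""
    rng = random.Random(seed)

    def point_in_ball(radius):
        # uniform point in a d-ball: Gaussian direction plus
        # inverse-CDF sampling of the radial coordinate
        g = [rng.gauss(0, 1) for _ in range(d)]
        norm = math.sqrt(sum(x * x for x in g))
        rr = radius * rng.random() ** (1 / d)
        return [rr * x / norm for x in g]

    hits = 0
    for _ in range(samples):
        p, q = point_in_ball(2.0), point_in_ball(2.0)
        if sum((a - b) ** 2 for a, b in zip(p, q)) <= 4.0:
            hits += 1
    return hits / samples
```

With $10^5$ samples the estimates agree with Table~\ref{tab1} to
within a few times $10^{-3}$.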
With the uniquely characterized $\alpha_d$ in \eqref {eq:alpha-choice}, the $\text{\textsc{\textsc{crg}}} (c,\alpha_d)$ model can now serve as a generator of random topologies for guiding the \text{\textsc{\textsc{rsa}}} process.
In contrast to the Euclidean space, $\text{\textsc{\textsc{rsa}}} $ on the $\text{\textsc{\textsc{crg}}} (c,\alpha_d)$ model is analytically solvable, even at later times when the filled space becomes more dense (large $c$).
To do so, we will extend the mean-field techniques recently developed for analyzing \text{\textsc{\textsc{rsa}}} on random graph models~\cite{DLM16,BJL15,BJM13,SJK15}.
The main goal of these works was to find greedy independent sets (or colorings) of large random networks.
All these results, however, were obtained for non-geometric random graphs, typically used as first approximations for sparse interaction networks in the absence of any known geometry.
\section{Main results}\label{sec:results}
\subsection{Limiting jamming fraction}
For the $\text{\textsc{\textsc{crg}}} (c,\alpha)$ model on $n$ vertices, recall that $J^\star_n(c,\alpha)$ denotes the fraction of active vertices at the end of the $\text{\textsc{\textsc{rsa}}} $ process.
We then have the following result, which characterizes the limiting fraction:
\begin{theorem}[Limiting jamming fraction]\label{th:fluid}
For any $c>0$ and $\alpha\in [0,1]$, as $n\to\infty$, $J^\star_n(c,\alpha)$ converges in probability to $J^\star(c,\alpha)$, where $J^\star(c,\alpha)$ is the smallest nonnegative root of the deterministic function $x(t)$ described by the integral equation
\begin{equation}\label{diffeq}
x(t)=1-t-\int_0^t\left(\frac{x(s)\alpha c}{1-(\alpha c+1) s}+(1-\alpha)cx(s)\right)\dif s.
\end{equation}
\end{theorem}
The integral equation~\eqref{diffeq}, equivalent to an ODE upon differentiation, can be understood intuitively in terms of the algorithmic description in Section~\ref{sec:algo} that sequentially explores the graph while activating the allowed vertices.
Rescale time by $n$, so that after rescaling the algorithm has to end before time $t=1$ (because the network size is $n$). Now think of $x(t)$ as the fraction of neutral vertices at time $t$.
Then clearly $x(0) = 1$, and the drift $-t$ says that one vertex activates per time unit.
Upon activation, a vertex on average blocks its $\alpha c$ household members and $(1-\alpha)c$ other vertices outside its household.
At time $t$, the fraction of vertices that are not members of any discovered household equals on average $1 - (1+\alpha c)t$, and all vertices that are not part of any discovered household are potential household members of the newly active vertex (irrespective of whether they are blocked or not).
Since household members are uniformly selected at random, only a fraction $x(t) / (1 - (1+\alpha c)t)$ of the new $\alpha c$ household members will belong to the set of neutral vertices.
Moreover, since all $x(t)n$ vertices are being blocked by the newly active vertex with probability $(1-\alpha)c/n$, on average $(1-\alpha)c x(t)$ neutral vertices will be blocked due to distant connections.
Notice that the graph will be maximally packed when $x(t)$ becomes zero, i.e., when there are no neutral vertices left that can become active. This explains why $J^\star(c,\alpha)$ should be the time $t$ at which $x(t)=0$, i.e., the smallest root of~\eqref{diffeq}.
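The integral equation \eqref{diffeq} is straightforward to evaluate numerically. The sketch below (our own forward-Euler discretization, not code from any package; the function name and step size are illustrative) integrates the equivalent ODE $x'(t)=-1-\alpha c\,x(t)/(1-(\alpha c+1)t)-(1-\alpha)c\,x(t)$ and returns the first time $x(t)$ hits zero, i.e.\ an approximation of $J^\star(c,\alpha)$.

```python
# Forward-Euler sketch of the ODE form of the integral equation:
# x'(t) = -1 - alpha*c*x/(1-(alpha*c+1)t) - (1-alpha)*c*x, x(0) = 1.
# The first root of x approximates the limiting jamming fraction.

def jamming_fraction(c, alpha, dt=1e-6):
    t, x = 0.0, 1.0
    while x > 0.0:
        drift = (1.0 + alpha * c * x / (1.0 - (alpha * c + 1.0) * t)
                 + (1.0 - alpha) * c * x)
        x -= drift * dt
        t += dt
    return t  # approximates J*(c, alpha)

# With c = 0 nothing is ever blocked, so every vertex activates: J* = 1.
print(jamming_fraction(0.0, 0.5))
```

For $c>0$ the returned value stays below $1/(1+\alpha c)$, consistent with the claim $J^\star<1/\mu$ proved in Section~\ref{sec:proof}.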
\begin{figure}
\centering
\includegraphics[scale=0.6]{crg-rgg.pdf}
\caption{Validation of the mean-field limit $J^\star(c,\alpha_2)$ with the simulation results from $\text{\textsc{\textsc{crg}}} (c,\alpha_2)$, and $\text{\textsc{\textsc{rgg}}} (c,2)$ with 1000 vertices for $0\leq c\leq 30$.}
\label{fig:crg-rgg}
\end{figure}
Upon substituting $\alpha=\alpha_d$, $J^\star(c,\alpha_d)=\lim_{n\to\infty}J^\star_n(c,\alpha_d)$ is completely characterized by \eqref{diffeq} and serves as an approximation for the intractable counterpart $J(c,d)$, the limiting jamming fraction for the $\text{\textsc{\textsc{rgg}}} (c,d)$ model.
The choice of $\alpha_d$, as discussed earlier, is given by \eqref{eq:alpha-choice} and shown in Table~\ref{tab1}.
Fig.~\ref{fig:crg-rgg} validates the mean-field limit for the \text{\textsc{\textsc{crg}}} model, and shows the theoretical values $J^\star(c,\alpha_2)$ from Theorem~\ref{th:fluid}, along with the simulated values of $J_n(c,2)$ on the $\text{\textsc{\textsc{rgg}}} (c,2)$ model for values of $c$ ranging from 0~to~30.
Fig.~\ref{fig:dimension-one-to-five} shows further comparisons between $J^\star(c,\alpha_d)$ and $J_n(c,d)$ for dimensions $d=3,4,5$, and densities $0\leq c\leq 30$.
All simulations use $n=1000$ vertices.
\begin{figure}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[scale=.4]{3d-a468.pdf}
\caption{3D}
\label{fig:3d}
\end{subfigure}%
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[scale=.4]{4d-a379.pdf}
\caption{4D}
\label{fig:4d}
\end{subfigure}
\begin{subfigure}{.33\textwidth}
\centering
\includegraphics[scale=.4]{5d-a310.pdf}
\caption{5D}
\label{fig:5d}
\end{subfigure}
\caption{Simulation with 1000 vertices of $\text{\textsc{\textsc{rgg}}} (c,d)$ and the value of $J^\star(c,\alpha_d)$ for $0\leq c\leq 30$ and $d=3,4,5$.
\label{fig:dimension-one-to-five}
}
\end{figure}
The remarkable agreement of the $J^\star(c,\alpha_d)$-curves with the simulated results across all dimensions shows that the integral equation \eqref{diffeq} accurately describes the mean-field large-network behavior of the $\text{\textsc{\textsc{rsa}}} $ process, not only for the \text{\textsc{\textsc{crg}}} model, but also for the $\text{\textsc{\textsc{rgg}}} $ model.
The following result is a direct consequence of Theorem~\ref{th:fluid}, and gives a simple law to describe the asymptotic fraction $J^\star(c,\alpha_d)$ in the large density ($c\to\infty$) regime.
\begin{corollary}\label{cor:J-large-c} As $c\to\infty$, $
J^\star(c,\alpha_d) \sim (1+\alpha_dc)^{-1}.
$
\end{corollary}
Hence, for large enough $c$, $J^\star(c,\alpha_d)\approx(1+\alpha_d c)^{-1}$ serves as an approximation for all dimensions.
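As a numerical illustration of the corollary (our own sketch; the Euler discretization is the same as before, and $\alpha=0.4$ is an arbitrary illustrative value, not one of the tabulated $\alpha_d$), the ratio $J^\star(c,\alpha)(1+\alpha c)$ approaches one as $c$ grows.

```python
# Sketch: the ratio J*(c, alpha) * (1 + alpha*c) should approach 1
# for large c. alpha = 0.4 is chosen arbitrarily for illustration.

def jstar(c, alpha, dt=1e-6):
    t, x = 0.0, 1.0
    while x > 0.0:
        x -= (1.0 + alpha * c * x / (1.0 - (alpha * c + 1.0) * t)
              + (1.0 - alpha) * c * x) * dt
        t += dt
    return t

alpha = 0.4
ratios = {c: jstar(c, alpha) * (1.0 + alpha * c) for c in (10.0, 25.0, 50.0)}
print(ratios)  # approaches 1 as c grows
```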
Due to the accurate prediction provided by the \text{\textsc{\textsc{crg}}} model, the total scaled volume $c{J}(c,d)/2^d$ covered by the deposited
spheres in dimension $d$ can be well approximated.
Indeed, for large $c$, Corollary~\ref{cor:J-large-c} yields $J^\star(c,\alpha_d)\sim 1/(\alpha_dc)$, so that in any dimension~$d$, our model leads to the characterization of the covered volume
\begin{equation}
J^\star(c,\alpha_d)\times\frac{c}{2^d} \sim \frac{1}{2^d\alpha_d}.
\end{equation}
Notice that $\alpha_d\to 0$ as $d\to\infty$.
Thus, the interaction network described by the $\text{\textsc{\textsc{crg}}} (c,\alpha_d)$ model becomes almost like the (pure) mean-field Erd\H{o}s-R\'enyi random graph model, which supports the widely believed conjecture that in high dimensions the interaction network associated with the random geometric graph loses its local clustering property \cite{DGLU11}.
\subsection{Fluctuations of the jamming fraction}
The next theorem characterizes the fluctuations of $J_n^\star(c,\alpha)$ around its mean:
\begin{theorem}[{CLT for jamming fraction}]
\label{th:jam-diffusion}
As $n\to\infty$,
$$\sqrt{n}(J_n^\star(c,\alpha)-J^\star(c,\alpha))\dto Z,$$
where $Z$ has a normal distribution with mean zero and variance $V^\star(c,\alpha)$. Here $J^\star(c,\alpha)$ is given by Theorem~\ref{th:fluid},
and $V^\star(c,\alpha)=\sigma_{xx}(J^\star(c,\alpha))$ with $\sigma_{xx}(t)$ being the unique solution of the following system of differential equations, for $0\leq t<1/\mu$:
\begin{eq}\label{defn:functions0}
\frac{\dif \sigma_{xx}(t)}{\dif t}&=2\sigma_{xx}(t)f(t)+2\sigma_{xy}(t)g(t)+\beta(t),\\
\frac{\dif \sigma_{xy}(t)}{\dif t}&=\sigma_{xy}(t)f(t)+tg(t)\sigma^2+\sqrt{\beta(t)}\sigma\rho(t)
\end{eq}
with
\begin{eq}\label{defn:functions}
y(t)&=1-\mu t,\quad f(t)=-\frac{\mu-1}{y(t)}-\lambda, \quad g(t)=\frac{(\mu-1)x(t)}{y(t)^2},\\
\beta(t)&=\left[\frac{(\mu-1)}{y(t)}+\lambda\right]x(t),\quad
\rho(t)=\frac{\sigma}{\sqrt{\beta(t)}}\frac{x(t)}{y(t)}.
\end{eq}
\end{theorem}
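The variance $V^\star(c,\alpha)$ can be computed by integrating \eqref{defn:functions0} jointly with the fluid limit \eqref{diffeq} up to the root $J^\star$. A minimal Euler sketch (our own discretization; the parameter values $c=10$, $\alpha=0.4$ are illustrative):

```python
# Euler sketch for the CLT constants: integrate x(t), sigma_xx(t),
# sigma_xy(t) jointly until x hits zero; V* = sigma_xx at time J*.

def clt_constants(c, alpha, dt=1e-6):
    mu, lam, s2 = 1.0 + alpha * c, (1.0 - alpha) * c, alpha * c
    t, x, sxx, sxy = 0.0, 1.0, 0.0, 0.0
    while x > 0.0:
        y = 1.0 - mu * t                       # y(t) = 1 - mu*t
        f = -(mu - 1.0) / y - lam              # f(t)
        g = (mu - 1.0) * x / y ** 2            # g(t)
        beta = ((mu - 1.0) / y + lam) * x      # beta(t)
        dsxx = 2.0 * sxx * f + 2.0 * sxy * g + beta
        # sqrt(beta(t)) * sigma * rho(t) simplifies to sigma^2 * x/y
        dsxy = sxy * f + t * g * s2 + s2 * x / y
        dx = -(1.0 + (mu - 1.0) * x / y + lam * x)
        x, sxx, sxy, t = x + dx * dt, sxx + dsxx * dt, sxy + dsxy * dt, t + dt
    return t, sxx                              # (J*, V*)

print(clt_constants(10.0, 0.4))  # illustrative (J*, V*) pair
```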
Fig.~\ref{histogram} confirms that the asymptotic analytical variance given in~\eqref{defn:functions0} and~\eqref{defn:functions} is a sharp approximation for the \text{\textsc{\textsc{crg}}} model with only 1000 vertices.
\begin{figure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=.5]{histogram-c20.pdf}
\caption{Fitted normal curve}
\label{histogram}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=.5]{normal-fit-increasing-c.pdf}
\caption{Effect of density on variance}
\label{figguass1}
\end{subfigure}
\caption{(a) Fitted normal curve for 2000 repetitions of the $\text{\textsc{\textsc{crg}}} (20,\alpha_2)$ model with 1000 vertices. The solid curve represents the normal density with properly scaled theoretical variance $V^\star(c,\alpha_2)$, centered around the sample mean.
(b) Fitted normal curves for the $\text{\textsc{\textsc{crg}}} (c,\alpha_2)$ model for increasing $c$ values 10, 20, and 30. As $c$ increases, the curves become more sub-Poissonian.
}
\end{figure}
Table~\ref{tab2} shows numerical values of $V^\star(c,\alpha_d)$
and compares the analytically obtained values of $J^\star(c,\alpha_d)$ and $V^\star(c,\alpha_d)$ with the simulated mean and variance for the random geometric graph ensemble. The agreement again confirms the appropriateness of the $\text{\textsc{\textsc{crg}}} (c,\alpha_d)$ model for modeling the continuum $\text{\textsc{\textsc{rsa}}} $. Furthermore, $V^\star(c,\alpha_d)$ serves as an approximation for the value of $V(c,d)$, the asymptotic variance of $J(c,d)$ (suitably rescaled).
\begin{table}
\centering
\begin{tabular}{|C{1cm}|C{1cm}|C{1.75cm}|C{1.75cm}|C{1.75cm}|C{1.75cm}|}
\cline{3-6}
\multicolumn{2}{c|}{} &\multicolumn{2}{c}{\text{\textsc{\textsc{rgg}}} }&\multicolumn{2}{|c|}{\text{\textsc{\textsc{crg}}} }\\
\hline
\multicolumn{1}{|c|}{$n$} & $c$ & $J_n(c,2)$ & $V_n(c,2)$ & $J^\star(c,\alpha_2)$ & $V^\star(c,\alpha_2)$\\
\hline
200 & 10 & 0.1618 & 0.0166 & & \\
500 & 10 & 0.1608 & 0.0158 & 0.1454 & 0.0178 \\
1000 & 10 & 0.1623 & 0.0155 & & \\
\hline
200 & 20 & 0.0887 & 0.0062 & & \\
500 & 20 & 0.0892 & 0.0068 & 0.0786 & 0.0057 \\
1000 & 20 & 0.0890 & 0.0067 & & \\
\hline
200 & 30 & 0.0619 & 0.0039 & & \\
500 & 30 & 0.0620 & 0.0041 & 0.0538 & 0.0032 \\
1000 & 30 & 0.0615 & 0.0043 & & \\
\hline
\end{tabular}%
\caption{Comparison between the observed mean and scaled variance $n\mathrm{Var}(J_n(c,2))$ for the $\text{\textsc{\textsc{rgg}}} $ model, and the theoretical mean and variance from Theorem~\ref{th:jam-diffusion} in dimension 2.
The sample means and variances for the $\text{\textsc{\textsc{rgg}}} $ model are calculated over 150 samples.}
\label{tab2}
\end{table}
Fig.~\ref{figguass1} shows the density function of the random variable based on the Gaussian approximation in Theorem~\ref{th:jam-diffusion}.
We observe that both the mean and the fluctuations around the mean decrease with $c$.
Indeed, the variance-to-mean ratio observed for \text{\textsc{\textsc{rsa}}} in the continuum is typically smaller than one, and the jamming fractions are generally believed to be of sub-Poissonian nature, with fluctuations smaller than those of a Poisson distribution;
see for instance the Mandel Q parameter in quantum physics \cite{SJK15}.
So, while a closed-form expression remains out of reach (as for the Mandel Q parameter \cite{SJK15}), our solvable model gives a way to describe approximately the variance-to-mean ratio as $V^\star(c,\alpha_d)/J^\star(c,\alpha_d)$.
\section{Proof of Theorem~\texorpdfstring{\ref{th:fluid}}{3.1}}\label{sec:proof}
In this section we analyze several asymptotic properties of \text{\textsc{\textsc{rsa}}} on the $\text{\textsc{\textsc{crg}}} (c,\alpha)$ model.
In particular, we will prove Theorem~\ref{th:fluid}.
We first introduce an algorithm that sequentially activates the vertices while obeying the hard-core exclusion constraint, and then analyze the exploration algorithm (see~\cite{BJS15, DLM16, BJL15} for similar analyses in various other contexts).
The idea is to keep track of the number of vertices that are not neighbors of already active vertices (termed unexplored vertices), so that when this number becomes zero, no vertex can be activated further.
The number of unexplored vertices can then be decomposed into a drift part, which converges to a deterministic function, and a fluctuation or martingale part, which becomes asymptotically negligible in the mean-field case (Theorem~\ref{th:fluid}) but gives rise to a system of SDEs with variance \eqref{defn:functions0}.
The proof crucially relies on the Functional Laws of Large Numbers (FLLN) and the Functional Central Limit Theorem (FCLT).
The key challenge here is that the process that keeps track of the number of unexplored vertices while the exploration algorithm is running does not yield a Markov process, so we have to introduce another process to make the system Markovian and analyze this two-dimensional system.
For each vertex, the neighboring vertices inside and outside its own household will be referred to as `household neighbors' and `distant neighbors', respectively.
If $H$ denotes the size of a household, then $H\sim 1+\mathrm{Poisson}(\alpha c)$. Therefore, $\expt{H}=1+\alpha c$ and $\mathrm{Var}(H)=\alpha c$.
Furthermore, any two vertices belonging to two different households are
connected by an edge with probability $p_n=(1-\alpha)c/n$, so the number of distant neighbors is a Bin$(n-H-1,p_n)$ random variable, Poisson$((1-\alpha)c)$ in the large $n$ limit.
As mentioned earlier, the total number of neighbors is then asymptotically given by a Poisson$(c)$ random variable.
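The mean-degree bookkeeping above can be checked by elementary arithmetic (our own sketch; the numerical values $n=10^6$, $c=10$, $\alpha=0.4$ are illustrative): the household and distant contributions add up to $c$.

```python
# Mean degree check: E[H-1] + (n - E[H] - 1)*p -> alpha*c + (1-alpha)*c = c.
n, c, alpha = 10 ** 6, 10.0, 0.4
p = (1.0 - alpha) * c / n                        # distant-edge probability
mean_household = alpha * c                       # E[H] - 1 household neighbors
mean_distant = (n - (1.0 + alpha * c) - 1.0) * p # mean of Bin(n - H - 1, p)
mean_degree = mean_household + mean_distant
print(mean_degree)  # ~ c = 10, up to an O(1/n) correction
```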
In this section we fix $c>0$ and $\alpha\in[0,1]$, and simply write $J^\star_n$ and $J^\star$ for $J^\star_n(c,\alpha)$ and $J^\star(c,\alpha)$ respectively.\\
\noindent
{\bf Notation.}
We will use boldfaced letters to denote stochastic processes and vectors.
A sequence of random variables $\{X_n\}_{n\geq 1}$ is said to be $\ensuremath{O_{\scriptscriptstyle\PP}}(f(n))$, or $\ensuremath{o_{\scriptscriptstyle\PP}}(f(n))$, for some function $f:\R\to\R_+$, if the sequence of scaled random variables $\{X_n/f(n)\}_{n\geq 1}$ is tight, or converges to zero in probability, respectively.
We denote by $D_E[0,\infty)$ the set of all \emph{c\`adl\`ag} (right-continuous with left limits) functions from $[0,\infty)$ to a complete, separable metric space $E$, endowed with the Skorohod $J_1$ topology, and
by `$\dto$' and `$\pto$', convergence in distribution and in probability, respectively. In particular, if the sample paths of a stochastic process $\mathbf{X}$ are continuous, we write
$\mathbf{X}_n=\{X_n(t)\}_{t\geq 0}\xrightarrow{\scriptscriptstyle d} \mathbf{X}=\{X(t)\}_{t\geq 0}$, if for any $T\geq 0$,
\begin{equation}
\sup_{t\in [0,T]}|X_n(t)-X(t)|\pto 0\quad\text{as}\quad n\to\infty.
\end{equation}
\subsection{The exploration algorithm}\label{sec:algo}
Instead of fixing a particular realization of the random graph and then studying \text{\textsc{\textsc{rsa}}} on that given graph, we introduce an algorithm which sequentially \emph{activates} the vertices one-by-one, \emph{explores} the neighborhood of the activated vertices, and simultaneously builds the random graph topology on the activated and explored vertices.
The joint distribution of the random graph and the active vertices obtained this way is the same as that obtained by first fixing the random graph and then running \text{\textsc{\textsc{rsa}}} on it.
The idea of exploring in the above fashion simplifies the whole analysis, since the evolution of the system can be described recursively in terms of the previous states, as described below in detail.
Observe that during the process of sequential activation, until the jamming state is reached, each vertex can be
in one of three states: active, blocked, or unexplored (i.e.~vertices with future potential activation).
Furthermore, there can be two types of blocked vertices: (i) blocked due to activation of some household neighbor,
or
(ii) none of the household neighbors is active, but there is an active distant neighbor.
Therefore, at each time $t\geq 0$, categorize the vertices into four sets:
\begin{itemize}
\item A$(t)$: set of all active vertices.
\item U$(t)$: set of all vertices that are not active and that have not been blocked by any vertex in~A$(t)$.
\item BH$(t)$: set of all vertices that belong to a household of some vertex in A$(t)$.
\item BO$(t)$: set of all vertices that do not belong to a household yet, but are blocked due to connections with some vertex in A$(t)$ as a distant neighbor.
\end{itemize}
Note that $\mathrm{BH}(t)\cup\mathrm{BO}(t)$ constitutes the set of all blocked vertices at time $t$, and $\mathrm{BH}(t)\cap\mathrm{BO}(t)=\emptyset$.
Initially, all vertices are unexplored, so that U$(0)= V(G)$, the set of all $n$ vertices.
At time step $t$,
one vertex $v$ is selected from U$(t-1)$ uniformly at random and
is transferred to A$(t)$, i.e., one unexplored vertex becomes active.
We now explore the neighbors of $v$, which can be of two types: the household neighbors, and the distant neighbors.
Further observe that $v$ can have its household neighbors only from the set
$\mathrm{U}(t-1)\cup\mathrm{BO}(t-1)\setminus \{v\}$, since each vertex in $\mathrm{BH}(t-1)$ already belongs
to some household.
Define
$$H(t)\sim \min\Big\{\mathrm{Poisson}(\alpha c), |\mathrm{U}(t-1)\cup\mathrm{BO}(t-1)\setminus \{v\}|\Big\},$$
i.e., draw a Poisson$(\alpha c)$ random variable independently of any other process, and if it is smaller than $|\mathrm{U}(t-1)\cup\mathrm{BO}(t-1)\setminus \{v\}|$, then take it to be the value of $H(t)$ (which plays the role of $H-1$, the number of household neighbors), and otherwise set $H(t)=|\mathrm{U}(t-1)\cup\mathrm{BO}(t-1)\setminus \{v\}|$.
Select $H(t)$ vertices $\{u_1,u_2,\ldots,u_{H}\}$ at random from all vertices in $U(t-1)\cup \mathrm{BO}(t-1)\setminus \{v\}$.
These $H(t)$ vertices together form the household containing $v$, and are moved to $\mathrm{BH}(t)$, irrespective of the set they are selected from.
To explore the distant neighbors, select, one by one, all the vertices in $\mathrm{U}(t-1)\cup \mathrm{BO}(t-1)\cup \mathrm{BH}(t-1)\setminus\{v,u_1,\ldots,u_{H}\}$,
and for every such selected vertex $\bar{u}$, put an edge between $\bar{u}$ and $v$ with probability $p_n$.
Denote the newly created distant neighbors that belonged to $\mathrm{U}(t-1)$
by $\{\bar{u}_1,\ldots,\bar{u}_d\}$, and move these vertices to $\mathrm{BO}(t)$.
In summary, the exploration algorithm yields the following recursion relations:
\begin{align*}
\mathrm{A}(t) &= \mathrm{A}(t-1)\cup\{v\},\\
\mathrm{U}(t) &= \mathrm{U}(t-1)\setminus\{v, u_1,u_2,\ldots,u_{H},\bar{u}_1,\ldots,\bar{u}_d\},\\
\mathrm{BH}(t) &= \mathrm{BH}(t-1)\cup \{u_1,u_2,\ldots,u_{H}\},\\
\mathrm{BO}(t) &= \mathrm{BO}(t-1)\cup \{\bar{u}_1,\ldots,\bar{u}_d\}.
\end{align*}
The algorithm terminates when there is no vertex left in the set U$(t)$ (implying that all vertices are either active or blocked), and outputs the cardinality of A$(t)$ as the number of active vertices in the jammed state.
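The exploration algorithm translates directly into a simulation. The following sketch is our own implementation (set-based bookkeeping and the hand-rolled Poisson sampler are ours, included to keep the snippet self-contained); it runs \text{\textsc{\textsc{rsa}}} on $\text{\textsc{\textsc{crg}}} (c,\alpha)$ and returns the jamming fraction $|\mathrm{A}|/n$.

```python
import math
import random

def poisson(lam, rng):
    # Knuth's product method; adequate for the small means used here.
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k, prod = k + 1, prod * rng.random()
    return k

def explore_crg(n, c, alpha, seed=0):
    """One run of the exploration algorithm; returns |A|/n at jamming."""
    rng = random.Random(seed)
    p = (1.0 - alpha) * c / n        # distant-edge probability
    unexplored = set(range(n))       # U(t)
    blocked_outside = set()          # BO(t): blocked, not yet in a household
    active = 0
    while unexplored:
        v = rng.choice(tuple(unexplored))
        unexplored.remove(v)
        active += 1                  # v moves to A(t)
        # Household neighbours: H(t) uniform picks from U u BO (minus v).
        pool = list(unexplored | blocked_outside)
        rng.shuffle(pool)
        for u in pool[:min(poisson(alpha * c, rng), len(pool))]:
            unexplored.discard(u)    # u moves to BH(t), whichever set it was in
            blocked_outside.discard(u)
        # Distant neighbours: only hits in U change the state (move to BO).
        for u in [u for u in unexplored if rng.random() < p]:
            unexplored.remove(u)
            blocked_outside.add(u)
    return active / n

print(explore_crg(2000, 10.0, 0.4))  # close to the mean-field J*(10, 0.4)
```

Vertices moved to BH are simply dropped from both tracked sets, since the analysis below only needs $|\mathrm{U}(t)|$ and $|\mathrm{U}(t)\cup\mathrm{BO}(t)|$.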
\subsection{State description and martingale decomposition}\label{sec:mart-decompose}
Denote for $t\geq 0$,
\begin{align*}
X_n(t):=|\mathrm{U}(t)|, \quad
Y_n(t):=|\mathrm{U}(t)\cup \mathrm{BO}(t)|.
\end{align*}
Observe that $\{(X_n(t),Y_n(t))\}_{t\geq 0}$ is a Markov chain.
At each time step, one new vertex becomes active, so that
$|\mathrm{A}(t)|=t$, and
the total number of vertices in the jammed state is given by the time step when $X_n(t)$ hits zero, i.e.,
the time step when the exploration algorithm terminates.
Let us now introduce the shorthand notation $\mu = \mathbb{E}[H] = 1+\alpha c$, $\sigma^2 = \var{H}= \alpha c$ and $\lambda = (1-\alpha) c$.
\paragraph*{\bf Dynamics of $X_n$.}
First we make the following observations:
\begin{itemize}
\item $X_n(t)$ decreases by one, when a new vertex $v$ becomes active.
\item The household neighbors of $v$ are selected from the $Y_n(t-1)$ vertices, and $X_n(t)$ decreases by the number of those selected vertices that lie in $\mathrm{U}(t-1)$.
\item $X_n(t)$ decreases by the number of distant neighbors of the newly active vertex that belong to $\mathrm{U}(t-1)$ (since they are transferred to $\mathrm{BO}(t)$).
\end{itemize}
Thus,
\begin{equation}
X_n(t+1)=X_n(t)-\xi_n(t+1)\quad\text{and}\quad X_n(0)=n
\end{equation}
with
\begin{equation}\label{defn:xi}
\xi_n(t+1)=1+\eta_1(t+1)+\eta_2(t+1),
\end{equation}
where conditionally on $(X_n(t),Y_n(t))$,
\begin{equation}\label{defn:eta1}
\eta_1(t+1)\sim \text{Hypergeometric}(X_n(t), Y_n(t), H(t)),
\end{equation}
i.e.,~$\eta_1(t+1)$ has a Hypergeometric distribution with favorable outcomes $X_n(t)$, population size $Y_n(t)$, and sample size $H(t)$.
Further, conditionally on $(X_n(t),Y_n(t),\eta_1(t+1))$,
\begin{equation}\label{defn:eta2}
\eta_2(t+1) \sim \mathrm{Bin}\Big(X_n(t)-1-\eta_1(t+1),\ \frac{\lambda}{n}\Big).
\end{equation}
Therefore, the drift function of the $\mathbf{X}_n$ process satisfies
\begin{equation}\label{eq:drift-X}
\begin{split}
\expt{\xi_n(t+1)| X_n(t), Y_n(t)}&=1+\frac{X_n(t)(\mu-1)}{Y_n(t)}+\left(X_n(t)-1-\frac{X_n(t)(\mu-1)}{Y_n(t)}\right)\frac{\lambda}{n}\\
&= 1+ \frac{X_n(t)(\mu-1)}{Y_n(t)}+ \frac{\lambda X_n(t)}{n} +\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1}),
\end{split}
\end{equation}
where, in the last step, we have used the fact that $X_n(t)\leq Y_n(t)$.
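The drift formula \eqref{eq:drift-X} can be checked against direct simulation of a single transition. Below is our own Monte Carlo sketch; the state values $X=400$, $Y=600$, $n=1000$ are arbitrary illustrative choices, not taken from the paper.

```python
import math
import random

rng = random.Random(1)

def poisson(lam):
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k, prod = k + 1, prod * rng.random()
    return k

def hypergeom(good, total, draws):
    # Sequential sampling without replacement.
    hits = 0
    for _ in range(draws):
        if rng.random() < good / total:
            hits, good = hits + 1, good - 1
        total -= 1
    return hits

n, c, alpha = 1000, 10.0, 0.4
X, Y = 400, 600                          # an arbitrary mid-run state
mu, lam = 1.0 + alpha * c, (1.0 - alpha) * c
samples, acc = 10000, 0.0
for _ in range(samples):
    eta1 = hypergeom(X, Y, poisson(alpha * c))   # household hits in U
    eta2 = sum(rng.random() < lam / n for _ in range(X - 1 - eta1))
    acc += 1 + eta1 + eta2                       # xi_n(t+1)
print(acc / samples, 1.0 + X * (mu - 1.0) / Y + lam * X / n)
```

The empirical one-step mean of $\xi_n$ should match $1+X(\mu-1)/Y+\lambda X/n$ up to Monte Carlo error.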
\paragraph*{\bf Dynamics of $Y_n$.}
The value of $Y_n$ does not change due to the creation of distant neighbors.
At time $t$, it can only decrease due to an activation
of a vertex $v$ (since it is moved to $\mathrm{A}(t)$), and the formation of a household, since all the vertices that make the
household of $v$, were in $\mathrm{U}(t-1)\cup\mathrm{BO}(t-1)$, and are moved to $\mathrm{BH}(t)$.
Thus, at each time step, $Y_n(t)$ decreases on average by an amount $\mu=1+\alpha c$,
the expected household size, except at the final step when the residual number of vertices can be smaller than the household size.
But this will not affect our asymptotic results in any way, and we will ignore it.
Hence,
\begin{equation}
Y_n(t+1)=Y_n(t)-\zeta_n(t+1) \quad\text{and}\quad Y_n(0)=n,
\end{equation}where
\begin{equation}
\expt{\zeta_n(t+1)|X_n(t), Y_n(t)}=\mu.
\end{equation}
\paragraph*{\bf Martingale decomposition.}
Using the Doob-Meyer decomposition \cite[Theorem 4.10]{KS91} of~$\bld{X}_n$, \eqref{eq:drift-X} yields the following martingale decomposition
\begin{equation*}
\begin{split}
X_n(t)
&=n-\sum_{i=1}^t\xi_n(i)=n+M_{n}^{\scriptscriptstyle X}(t)-t-\sum_{i=1}^t\bigg[\frac{X_n(i-1)(\mu-1)}{Y_n(i-1)}
+ \frac{\lambda X_n(i-1)}{n} +\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1})\bigg],
\end{split}
\end{equation*}
where $\bld{M}_n^{\scriptscriptstyle X}=\{M_n^{\scriptscriptstyle X}(t)\}_{t\geq 1}$ is a square-integrable martingale with respect to the usual filtration generated by the exploration algorithm.
Let us now define the scaled processes
$$x_n(t):=\frac{X_n(\floor{nt})}{n}\quad\text{and}\quad y_n(t):=\frac{Y_n(\floor{nt})}{n}.$$
Also define
\begin{equation}\label{eq:delta}
\delta(x,y):=(\mu-1) \frac{x}{y}+\lambda x,\quad\text{for}\quad 0\leq x\leq y,\ y>0.
\end{equation}
Thus, we can write
\begin{equation}\label{eq:mart-decompose}
\begin{split}
x_n(t)&=1+\frac{M_n^{\scriptscriptstyle X}(\floor{nt})}{n}-\frac{\floor{nt}}{n}
-\frac{1}{n}\sum_{i=1}^{\floor{nt}}\delta\bigg(\frac{X_n(i-1)}{n},\frac{Y_n(i-1)}{n}\bigg)+\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1})\\
&=1+\frac{M_n^{\scriptscriptstyle X}(\floor{nt})}{n}-t-\int_0^t\delta(x_n(s),y_n(s))\dif s
+\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1}).
\end{split}
\end{equation}
Similar arguments yield
\begin{eq}\label{eq:repr-yn}
y_n(t)=1+\frac{M_n^{\scriptscriptstyle Y}(\floor{nt})}{n}-\mu t +\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1}),
\end{eq}
where $\bld{M}_n^{\scriptscriptstyle Y} =\{M_n^{\scriptscriptstyle Y}(t)\}_{t\geq 1}$ is a square-integrable martingale with respect to a suitable filtration. We write $\bld{x}_n$ and $\bld{y}_n$ to denote the processes $(x_n(t))_{t\geq 0}$ and $(y_n(t))_{t\geq 0}$ respectively.
\subsection{Quadratic variation and covariation}\label{sec:qvqcv}
To investigate the scaling behavior of the martingales, we will now compute the respective quadratic variation and covariation terms.
For convenience in notation, denote by $\PP_t, \mathbb{E}_t$, $\mathrm{Var}_t$, $\mathrm{Cov}_t$, the conditional probability, expectation, variance and covariance, respectively, conditioned on $(X_n(t), Y_n(t))$.
Notice that, for the martingales $\bld{M}_n^{\scriptscriptstyle X}$ and $\bld{M}_n^{\scriptscriptstyle Y}$, the quadratic variation and covariation terms are given by
\begin{eq}\label{defn:var-covar-mart}
\langle M_n^{\scriptscriptstyle X} \rangle (\floor{nt})&= \sum_{i=1}^{\floor{nt}}\mathrm{Var}_{i-1}(\xi_n(i)),\\
\quad \langle M_n^{\scriptscriptstyle Y} \rangle (\floor{nt}) &= \sum_{i=1}^{\floor{nt}}\mathrm{Var}_{i-1}(\zeta_n(i)),\\
\langle M_n^{\scriptscriptstyle X}, M_n^{\scriptscriptstyle Y} \rangle (\floor{nt})& = \sum_{i=1}^{\floor{nt}}\mathrm{Cov}_{i-1}(\zeta_n(i),\xi_n(i)).
\end{eq}
Thus, the quantities of interest are $\mathrm{Var}_t(\xi_n(t+1))$, $\mathrm{Var}_t(\zeta_n(t+1))$ and $\mathrm{Cov}_t(\xi_n(t+1),\zeta_n(t+1))$, which we derive in the following three claims.
\begin{claim}
For any $t\geq 1$,
\begin{equation}\label{var-zeta}
\mathrm{Var}_t(\zeta_n(t+1)) = \sigma^2.
\end{equation}
\end{claim}
\begin{proof}
The proof is immediate by observing that the random variable denoting the household size has variance~$\sigma^2$.
\end{proof}
\begin{claim}
For any $t\geq 1$,
\begin{eq}\label{var-xi}
\mathrm{Var}_t\left(\xi_n(t+1)\right) = \frac{X_n(t)(\mu-1)}{Y_n(t)}+\frac{\lambda X_n(t)}{n}+\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1}).
\end{eq}
\end{claim}
\begin{proof}
From the definition of $\xi_n$ in \eqref{defn:xi}, the computation of $\mathrm{Var}_t(\xi_n(t+1))$ requires computation of $\mathrm{Var}_t(\eta_1(t+1))$, $\mathrm{Cov}_t(\eta_1(t+1),\eta_2(t+1))$ and $\mathrm{Var}_t(\eta_2(t+1))$. Since $\eta_1$ follows a Hypergeometric distribution,
\begin{equation}
\begin{split}
&\mathbb{E}_t\left(\eta_1(t+1)(\eta_1(t+1)-1)\vert H\right)
=\frac{X_n(t)(X_n(t)-1)(H-1)(H-2)}{Y_n(t)(Y_n(t)-1)}
\end{split}
\end{equation}
and
\begin{eq}\label{var-eta1}
\mathrm{Var}_t\left( \eta_1(t+1) \right)&=\frac{X_n(t)(X_n(t)-1)\expt{(H-1)(H-2)}}{Y_n(t)(Y_n(t)-1)}+\mathbb{E}_t\left(\eta_1(t+1)\right)-\mathbb{E}_t^2\left(\eta_1(t+1)\right)\\
& = \frac{X_n^2(t)}{Y_n^2(t)}(\sigma^2+\mu^2-3\mu+2)+\frac{X_n(t)}{Y_n(t)}(\mu-1) - \frac{X_n^2(t)}{Y_n^2(t)}(\mu-1)^2+\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1})\\
& = \frac{X_n^2(t)}{Y_n^2(t)}(\sigma^2-\mu+1)+\frac{X_n(t)}{Y_n(t)}(\mu-1)+\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1})\\
& = \frac{X_n(t)}{Y_n(t)}(\mu-1)+\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1}),
\end{eq}
since $\sigma^2=\mu-1=\alpha c.$
Also, we have
\begin{eq}
&\mathbb{E}_t\left(\eta_2(t+1)(\eta_2(t+1)-1)\right)\\
&=\big[ (X_n(t)-1)(X_n(t)-2)-\mathbb{E}_t\left(\eta_1(t+1)\right) \left( 2X_n(t)-3\right)
+\mathbb{E}_t\left(\eta_1^2(t+1)\right)\big]\bigg(\frac{\lambda}{n}\bigg)^2\\
& = \frac{\lambda^2 X_n^2(t)}{n^2} + \ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1})
\end{eq}
and therefore
\begin{eq}\label{var-eta2}
\mathrm{Var}_t(\eta_2(t+1)) = \frac{\lambda X_n(t)}{n} +\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1}).
\end{eq}
Further,
\begin{eq}
\mathbb{E}_t\left(\eta_1(t+1)\eta_2(t+1)\right)
&=\mathbb{E}_t\left(\eta_1(t+1)(X_n(t)-1-\eta_1(t+1))\frac{\lambda}{n}\right)\\
&=\frac{\lambda}{n}\left[ (X_n(t)-1)\mathbb{E}_t(\eta_1(t+1))-\mathbb{E}_t(\eta_1^2(t+1))\right]\\
& = \frac{\lambda X_n(t)}{n}\frac{X_n(t)(\mu-1)}{Y_n(t)} +\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1}).
\end{eq}
Now, from \eqref{defn:eta1}, \eqref{defn:eta2},
\begin{equation*}
\mathbb{E}_t(\eta_1(t+1)) = \frac{X_n(t)(\mu-1)}{Y_n(t)}+\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1}) \quad \text{and}\quad \mathbb{E}_t(\eta_2(t+1)) = \frac{\lambda X_n(t)}{n}+\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1}),
\end{equation*}which implies that
\begin{eq}\label{covar-eta12}
\mathrm{Cov}_t(\eta_1(t+1),\eta_2(t+1)) = \ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1}).
\end{eq}
Combining \eqref{var-eta1}, \eqref{var-eta2} and \eqref{covar-eta12}, gives~\eqref{var-xi}.
\end{proof}
\begin{claim}
For any $t\geq 1$,
\begin{eq}\label{covar-zeta-xi}
\mathrm{Cov}_t\left( \zeta_n(t+1),\xi_n(t+1)\right) = \frac{X_n(t)}{Y_n(t)}\sigma^2+\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1}).
\end{eq}
\end{claim}
\begin{proof}
Observe that
\begin{eq}
&\mathbb{E}_t\left(\zeta_n(t+1)\eta_1(t+1)\right)=\mathbb{E}_t\left(\zeta_n(t+1) \mathbb{E}_t\left(\eta_1(t+1)|\zeta_n(t+1)\right)\right)\\
& =\frac{X_n(t)}{Y_n(t)}\mathbb{E}_t\left(\zeta_n(t+1)(\zeta_n(t+1)-1)\right)=\frac{X_n(t)}{Y_n(t)}(\sigma^2+\mu^2-\mu),
\end{eq}
and therefore,
\begin{equation}\label{covar-zeta-eta1}
\mathrm{Cov}_t(\zeta_n(t+1),\eta_1(t+1))= \frac{X_n(t)}{Y_n(t)}\sigma^2.
\end{equation}
Thus,
\begin{eq}
&\exptt{\zeta_n(t+1)\eta_2(t+1)} = \exptt{\zeta_n(t+1)\exptt{\eta_2(t+1)\vert \eta_1(t+1),\zeta_n(t+1)}}\\
& = \exptt{\zeta_n(t+1)(X_n(t)-1-\eta_1(t+1))\frac{\lambda}{n}} = \lambda\mu \frac{X_n(t)}{n}+\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1})
\end{eq}
and hence
\begin{eq}\label{covar-zeta-eta2}
\mathrm{Cov}_t\left( \zeta_n(t+1),\eta_2(t+1)\right) = \ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1}).
\end{eq}
Combining \eqref{covar-zeta-eta1} and \eqref{covar-zeta-eta2} yields~\eqref{covar-zeta-xi}.
\end{proof}
Based on the quadratic variation and covariation results above, the following lemma shows that the martingales, when scaled by $n$, converge to the zero process.
\begin{lemma}\label{lem:qv-order}
For any fixed $T\geq 0$, as $n\to\infty$,
\begin{eq}
\frac{1}{n}\sup_{t\leq T}|\bld{M}_n^{\scriptscriptstyle X}(\floor{nt})| \pto 0, \quad \frac{1}{n}\sup_{t\leq T}|\bld{M}_n^{\scriptscriptstyle Y}(\floor{nt})| \pto 0.
\end{eq}
\end{lemma}
\begin{proof}
Observe that using \eqref{defn:var-covar-mart} along with \eqref{var-zeta} and \eqref{var-xi}, we can claim for any $T\geq 0$,
\begin{eq}
\langle \bld{M}_n^{\scriptscriptstyle X} \rangle (\floor{nT})=\ensuremath{O_{\scriptscriptstyle\PP}}(n),\quad
\langle \bld{M}_n^{\scriptscriptstyle Y} \rangle (\floor{nT})=\ensuremath{O_{\scriptscriptstyle\PP}}(n).
\end{eq}
Thus, from Doob's inequality \cite[Theorem 1.9.1.3]{LS89}, the proof follows.
\end{proof}
\subsection{Convergence of the scaled exploration process}
\label{sec:fluidconv}
Based on the estimates from Sections~\ref{sec:mart-decompose} and~\ref{sec:qvqcv}, we now complete the proof of Theorem~\ref{th:fluid}.
Recall the representations of $\bld{x}_n$, and $\bld{y}_n$ from \eqref{eq:mart-decompose} and \eqref{eq:repr-yn}.
Fix any $ 0 \leq T < 1/\mu$.
Observe that Lemma~\ref{lem:qv-order} immediately yields
\begin{eq}\label{fluid-y}
\sup_{t\leq T} |y_n(t)-y(t)| \pto 0.
\end{eq}
Next note that $\delta (x,y)$, as defined in~\eqref{eq:delta}, is Lipschitz continuous on $[0,1]\times[\epsilon, 1]$ for any $\epsilon >0$ and
we can choose this $\epsilon > 0$ in such a way that $y(t)\geq \epsilon$ for all $t\leq T$ (since $T<1/\mu$).
Therefore, the Lipschitz continuity of $\delta$ implies that there exists a constant $C>0$ such that
\begin{align*}
&\sup_{t\leq T}|\delta(x_n(t),y_n(t))-\delta(x(t),y(t))|
\leq C \Big(\sup_{t\leq T}| x_n(t)-x(t)|+\sup_{t\leq T}| y_n(t)-y(t)|\Big).
\end{align*}Thus,
\begin{align*}
\sup_{t\leq T} |x_n(t)-x(t)| &\leq \sup_{t\leq T}\frac{|M_n^{\scriptscriptstyle X}(\floor{nt})|}{n}
+\int_0^T \sup_{t\leq u} |\delta(x_n(t),y_n(t))
-\delta(x(t),y(t))| \dif u +\ensuremath{o_{\scriptscriptstyle\PP}}(1)\\
&\leq \varepsilon_n + C \int_0^T \sup_{t\leq u} | x_n(t)-x(t)| \dif u,
\end{align*}
where, by Lemma~\ref{lem:qv-order} and \eqref{fluid-y},
\begin{align*}
\varepsilon_n := \sup_{t\leq T}\frac{|M_n^{\scriptscriptstyle X}(\floor{nt})|}{n} + CT \sup_{t\leq T} |y_n(t)-y(t)|+\ensuremath{o_{\scriptscriptstyle\PP}}(1),
\end{align*}
which converges in probability to zero, as $n\to\infty$.
Using Gr\"{o}nwall's inequality~\cite[Theorem~5.1]{EK2009}, we get
\begin{equation}\label{eq:fluid-conv-eqn}
\sup_{t\leq T}|x_n(t)-x(t)|\leq \varepsilon_n \mathrm{e}^{CT}\pto 0.
\end{equation}
Finally, due to Claim~\ref{cl:upperbound} below we note that the smallest root of $x(t)$ is strictly smaller than $1/\mu$.
Also, the convergence in \eqref{eq:fluid-conv-eqn} holds for any $T<1/\mu$.
This concludes the proof of Theorem~\ref{th:fluid}.\qed
The claim below establishes that $J^{\star}<1/\mu$.
\begin{claim}\label{cl:upperbound}
$J^\star<1/\mu$.
\end{claim}
\begin{proof}
Recall that $\mu = (1+\alpha c)$ and $\lambda = (1-\alpha) c$.
Notice that \eqref{diffeq} gives a linear differential equation, and the solution is given by
\begin{eq}\label{eq:ode-sol}
x(t) = \mathrm{e}^{-\lambda t} (1-\mu t)^{\frac{\mu-1}{\mu}}\bigg(1-\int_0^t\mathrm{e}^{\lambda s}(1-\mu s)^{-1+\frac{1}{\mu}}\dif s\bigg), \quad t<\frac{1}{\mu}.
\end{eq}
Thus, the smallest root of the integral equation \eqref{diffeq} defined as $J^\star$ must be the smallest positive solution of
\begin{equation}\label{eq:sol-dif-eqn}
\mathcal{I}(t)=\int_0^t\mathrm{e}^{\lambda s}(1-\mu s)^{-1+\frac{1}{\mu}}\dif s = 1.
\end{equation}
The integrand on the left-hand side of \eqref{eq:sol-dif-eqn} is positive, so $\mathcal{I}$ is continuous and strictly increasing on $[0,1/\mu)$.
Moreover, $\int_0^t(1-\mu s)^{-1+\frac{1}{\mu}}\dif s = 1-(1-\mu t)^{\frac{1}{\mu}}\to 1$ as $t\uparrow 1/\mu$, and since $\mathrm{e}^{\lambda s}> 1$ for $s>0$, it follows that $\lim_{t\uparrow 1/\mu}\mathcal{I}(t)> 1$.
Thus, there must exist a solution of \eqref{eq:sol-dif-eqn} which is strictly smaller than $1/\mu$.
This in turn implies that $J^\star<\mu^{-1}$.
\end{proof}
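The closed-form solution \eqref{eq:ode-sol} can be cross-checked numerically against a direct Euler integration of \eqref{diffeq}. The sketch below is our own (midpoint-rule quadrature for the integral; the evaluation point $t=0.05$ and parameters $c=10$, $\alpha=0.4$ are illustrative); the two evaluations agree.

```python
import math

def x_closed_form(t, c, alpha, m=20000):
    # Midpoint-rule evaluation of the closed form (eq:ode-sol).
    mu, lam = 1.0 + alpha * c, (1.0 - alpha) * c
    ds = t / m
    integral = sum(math.exp(lam * (i + 0.5) * ds)
                   * (1.0 - mu * (i + 0.5) * ds) ** (-1.0 + 1.0 / mu)
                   for i in range(m)) * ds
    return (math.exp(-lam * t) * (1.0 - mu * t) ** ((mu - 1.0) / mu)
            * (1.0 - integral))

def x_euler(t, c, alpha, m=200000):
    # Forward Euler for x'(s) = -1 - (mu-1)x/(1-mu*s) - lam*x, x(0) = 1.
    mu, lam = 1.0 + alpha * c, (1.0 - alpha) * c
    dt, x = t / m, 1.0
    for i in range(m):
        s = i * dt
        x -= (1.0 + (mu - 1.0) * x / (1.0 - mu * s) + lam * x) * dt
    return x

c, alpha, t = 10.0, 0.4, 0.05            # t well below 1/mu = 0.2
print(x_closed_form(t, c, alpha), x_euler(t, c, alpha))
```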
We now complete the proof of Corollary~\ref{cor:J-large-c}.
\begin{proof}[Proof of Corollary~\ref{cor:J-large-c}]
Observe from~\eqref{eq:sol-dif-eqn} that, for $t<\mu^{-1}$, since $\mathrm{e}^{\lambda s}\leq \mathrm{e}^{\lambda/\mu}$ and $(1-\mu s)^{-1+\frac{1}{\mu}}\leq (1-\mu s)^{-1}$ for $0\leq s\leq t$,
\begin{align*}
\mathcal{I}(t)\leq
\mathrm{e}^{\frac{\lambda}{\mu}}\int_0^t \frac{\dif s}{1-\mu s}
= \mathrm{e}^{\frac{\lambda}{\mu}} \bigg[-\frac{1}{\mu} \log(1-\mu s)\bigg]_0^t = -\mathrm{e}^{\frac{\lambda}{\mu}} \frac{1}{\mu}\log(1-\mu t),
\end{align*}
and the last expression equals one when
\begin{eq}
t = \frac{1}{\mu} (1-\mathrm{e}^{-\mu \mathrm{e}^{-\frac{\lambda}{\mu}}}) \sim \frac{1}{\mu} \text{ as } \mu\to\infty.
\end{eq}
Hence $\mathcal{I}(t)<1$ for all $t$ below this value, so that $J^\star \geq \frac{1}{\mu}(1-\mathrm{e}^{-\mu \mathrm{e}^{-\frac{\lambda}{\mu}}})$. Together with Claim~\ref{cl:upperbound}, this yields $J^\star\sim \mu^{-1}=(1+\alpha c)^{-1}$ as $c\to\infty$, completing the proof.
\end{proof}
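Since $\lambda/\mu = (1-\alpha)c/(1+\alpha c) \to (1-\alpha)/\alpha$ as $c\to\infty$, the asymptotics above can be illustrated numerically by holding $\lambda/\mu$ fixed while $\mu$ grows. The ratio $0.5$ below is an illustrative choice, not a value from the paper.

```python
import math

# The lower bound (1/mu)*(1 - exp(-mu * exp(-lam/mu))) approaches 1/mu
# as mu -> infinity; here lam/mu is held at an illustrative fixed ratio 0.5.
ratios = []
for mu in (10.0, 100.0, 1000.0):
    lam = 0.5 * mu
    t_low = (1.0 / mu) * (1.0 - math.exp(-mu * math.exp(-lam / mu)))
    ratios.append(t_low * mu)  # ratio of the bound to 1/mu
print(ratios)  # increases toward 1
```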
\section{Proof of Theorem~\texorpdfstring{\ref{th:jam-diffusion}}{3.2}}\label{sec:diffconv}
Define the diffusion-scaled processes
\begin{eq}\label{eq:diff-scale}
\bar{X}_n(t):=\sqrt{n}(x_n(t)-x(t)), \quad
\bar{Y}_n(t):=\sqrt{n}(y_n(t)-y(t)),
\end{eq}
and the diffusion-scaled martingales
$$\bar{M}_n^{\scriptscriptstyle X} (t):= \frac{M_n^{\scriptscriptstyle X}(\floor{nt})}{\sqrt{n}},\quad\bar{M}_n^{\scriptscriptstyle Y} (t):= \frac{M_n^{\scriptscriptstyle Y}(\floor{nt})}{\sqrt{n}}.$$
Now observe from~\eqref{eq:mart-decompose} that
\begin{align*}
\bar{X}_n(t)
&=\bar{M}_n^{\scriptscriptstyle X}(t)-(\mu-1)\left[ \int_0^t\frac{\bar{X}_n(s)}{y_n(s)}\dif s
+\int_0^tx(s)\sqrt{n}\left(\frac{1}{y_n(s)}-\frac{1}{y(s)}\right)\dif s\right]\\
&\hspace{7cm} -\lambda\int_0^t\bar{X}_n(s)\dif s+\ensuremath{O_{\scriptscriptstyle\PP}}(n^{-1/2})\\
&=\bar{M}_n^{ \scriptscriptstyle X}(t)-(\mu-1)\int_0^t\frac{\bar{X}_n(s)}{y_n(s)}\dif s
+ \int_0^t\frac{x(s)(\mu-1)}{y_n(s)y(s)}\bar{Y}_n(s) \dif s\\
&\hspace{7cm}-\lambda\int_0^t\bar{X}_n(s)\dif s+\ensuremath{o_{\scriptscriptstyle\PP}}(1).
\end{align*}
Therefore, we can write
\begin{eq}\label{diff-xn-simpl}
\bar{X}_n(t) &= \bar{M}_n^{\scriptscriptstyle X}(t) + \int_0^t f_n(s)\bar{X}_n(s)\dif s
+ \int_0^t g_n(s)\bar{Y}_n(s)\dif s + \ensuremath{o_{\scriptscriptstyle\PP}}(1),
\end{eq}
where
\begin{eq}
f_n(t) = -\frac{(\mu-1)}{y_n(t)} -\lambda, \quad g_n(t) = \frac{(\mu-1)x(t)}{y_n(t)y(t)}.
\end{eq}
Furthermore,~\eqref{eq:repr-yn} yields
\begin{eq}\label{diff-yn-simpl}
&\bar{Y}_n(t) = \sqrt{n}(y_n(t)-y(t)) = \bar{M}_n^{\scriptscriptstyle Y}(t) +\ensuremath{o_{\scriptscriptstyle\PP}}(1).
\end{eq}
Based on the quadratic variation and covariation results in Section~\ref{sec:qvqcv}, the following lemma shows that the martingales when scaled by $\sqrt{n}$ converge to a diffusion process described by an SDE.
\begin{lemma}[{Diffusion limit of martingales}]
\label{lem:conv-mart}
As $n\to\infty$, $(\bar{\bld{M}}_n^{\scriptscriptstyle X}, \bar{\bld{M}}_n^{\scriptscriptstyle Y})\dto (\bld{W}_1,\bld{W}_2)$, where the process $(\bld{W}_1,\bld{W}_2)$ is described by the {\rm SDE}
\begin{eq}
\dif W_1(t) = \sqrt{\beta(t)}\left[ \rho(t)\dif B_1(t)+\sqrt{1-\rho(t)^2}\dif B_2(t) \right], \quad \dif W_2(t) = \sigma \dif B_1(t)
\end{eq}with $\bld{B}_1$ and $\bld{B}_2$ two independent standard Brownian motions.
\end{lemma}
\begin{proof}
The idea is to use the martingale functional central limit theorem (cf.~\cite[Theorem 8.1]{PTRW07}), where the convergence of the martingales is characterized by the convergence of their quadratic variation process.
Using Theorem~\ref{th:fluid}, we compute the asymptotics of the quadratic variations and covariation of $\bar{\bld{M}}_n^{\scriptscriptstyle X}$ and $\bar{\bld{M}}_n^{\scriptscriptstyle Y}$.
From \eqref{eq:mart-decompose} and \eqref{var-zeta}, we obtain
\begin{align*}
\langle \bar{M}_n^{\scriptscriptstyle Y} \rangle (t) = \frac{1}{n} \sum_{i=1}^{\floor{nt}}\mathrm{Var}_{i-1}(\zeta_n(i)) \pto \sigma^2 t.
\end{align*}
Again, \eqref{eq:mart-decompose}, \eqref{var-xi} and Theorem~\ref{th:fluid} yield
\begin{align*}
\langle \bar{M}_n^{\scriptscriptstyle X} \rangle (t)
=\frac{1}{n}\sum_{i=1}^{\floor{nt}}\mathrm{Var}_{i-1}(\xi_n(i))
\pto
\int_0^t\left[\frac{(\mu-1)}{y(s)}+\lambda\right] x(s)\dif s
= \int _0^t\beta(s)\dif s.
\end{align*}
Finally, from \eqref{eq:mart-decompose}, \eqref{covar-zeta-xi} and Theorem~\ref{th:fluid} we obtain
\begin{align*}
\langle \bar{M}_n^{\scriptscriptstyle X}, \bar{M}_n^{\scriptscriptstyle Y} \rangle (t)=
\frac{1}{n} \sum_{i=1}^{\floor{nt}}\mathrm{Cov}_{i-1}(\zeta_n(i),\xi_n(i))
\pto \sigma^2\int_0^t\frac{x(s)}{y(s)}\dif s
= \int_0^t \rho(s)\times \sigma \sqrt{\beta(s)}\dif s.
\end{align*}
From the martingale functional central limit theorem,
we get that $(\bar{\bld{M}}_n^{\scriptscriptstyle X}, \bar{\bld{M}}_n^{\scriptscriptstyle Y})\dto (\hat{\bld{W}}_1,\hat{\bld{W}}_2)$, where $(\hat{\bld{W}}_1,\hat{\bld{W}}_2)$ are Brownian motions with zero means and quadratic covariation matrix
$$\begin{bmatrix}
\int _0^t\beta(s)\dif s & \int_0^t \rho(s)\times \sigma \sqrt{\beta(s)}\dif s\\
\int_0^t \rho(s)\times \sigma \sqrt{\beta(s)}\dif s & \sigma^2 t
\end{bmatrix}.
$$
The proof then follows by noting the fact that $(\bld{W}_1,\bld{W}_2)\disteq (\hat{\bld{W}}_1,\hat{\bld{W}}_2).$
\end{proof}
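The covariation structure in the lemma can be sanity-checked by Monte Carlo. The sketch below assumes, purely for simplicity, constant $\beta$, $\rho$, $\sigma$ (in the lemma these are time-dependent functions) and verifies that $W_1 = \sqrt{\beta}(\rho B_1 + \sqrt{1-\rho^2}B_2)$, $W_2=\sigma B_1$ reproduce the variances $\beta T$, $\sigma^2 T$ and covariance $\rho\sigma\sqrt{\beta}\,T$.

```python
import math
import random

# Monte Carlo check of the covariation structure, with *constant*
# beta, rho, sigma assumed for simplicity (illustrative values only).
random.seed(0)
beta, rho, sigma, T = 2.0, 0.6, 1.5, 1.0
N = 200_000
cov = var1 = var2 = 0.0
for _ in range(N):
    b1 = random.gauss(0.0, math.sqrt(T))  # B_1(T)
    b2 = random.gauss(0.0, math.sqrt(T))  # B_2(T), independent of B_1
    w1 = math.sqrt(beta) * (rho * b1 + math.sqrt(1.0 - rho**2) * b2)
    w2 = sigma * b1
    cov += w1 * w2
    var1 += w1 * w1
    var2 += w2 * w2
cov, var1, var2 = cov / N, var1 / N, var2 / N
# Targets: Var W_1(T) = beta*T, Var W_2(T) = sigma^2*T,
#          Cov(W_1(T), W_2(T)) = rho*sigma*sqrt(beta)*T.
print(var1, var2, cov)
```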
Having proved the above convergence of martingales, we now establish weak convergence of the scaled exploration process to a suitable diffusion process.
\begin{proposition}[{Functional CLT of the exploration process}]
\label{prop-diffusion}
As $n\to\infty$, $(\bar{\bld{X}}_n,\bar{\bld{Y}}_n)\dto (\bld{X},\bld{Y})$ where $(\bld{X},\bld{Y})$ is the two-dimensional stochastic process satisfying the SDE
\begin{eq}\label{eq:diffusion-SDE}
\dif X(t)&=\sqrt{\beta(t)}\left[ \rho(t)\dif B_1(t)+\sqrt{1-\rho(t)^2}\dif B_2(t) \right]
+f(t)X(t)\dif t + g(t)Y(t) \dif t,\\
\dif Y(t)&=\sigma\dif B_1(t),
\end{eq}
with $\bld{B}_1$, $\bld{B}_2$ being independent standard Brownian motions, and $f(t)$, $g(t)$ and $\rho(t)$ as defined in~\eqref{defn:functions}.
\end{proposition}
\begin{proof}
First we show that $((\bar{\bld{X}}_n,\bar{\bld{Y}}_n))_{n\geq 1}$ is a stochastically bounded sequence of processes.
Indeed, stochastic boundedness (and in fact weak convergence) of the process $\bar{\bld{Y}}_n$ follows from Lemma~\ref{lem:conv-mart}.
Further observe that for any $T<1/\mu$, by Theorem~\ref{th:fluid},
\begin{eq}\label{eq:limit-fg}
\sup_{t\leq T} |f_n(t)-f(t)| \pto 0, \qquad \sup_{t\leq T} |g_n(t)-g(t)| \pto 0,
\end{eq}
where $f,g$ are defined in \eqref{defn:functions}.
Therefore, for any $T<1/\mu$,
\begin{align*}
\sup_{t\leq T} |\bar{X}_n(t)|&\leq \sup_{t\leq T} |\bar{M}_n^{\scriptscriptstyle X}(t)| +T\sup_{t\leq T}|g_n(t)\bar{Y}_n(t)|
+ \sup_{t\leq T}|f_n(t)| \int_0^T \sup_{u\leq t} |\bar{X}_n(u)| \dif t,
\end{align*}
and again using Gr\"{o}nwall's inequality, it follows that
\begin{align*}
\sup_{t\leq T} |\bar{X}_n(t)| &\leq \Big(\sup_{t\leq T} |\bar{M}_n^{\scriptscriptstyle X}(t)| +T\sup_{t\leq T}|g_n(t)\bar{Y}_n(t)|\Big)
\times \exp\Big(T\sup_{t\leq T}|f_n(t)|\Big).
\end{align*}
Then stochastic boundedness of $(\bar{\bld{X}}_n)_{n\geq 1}$ follows from Lemma~\ref{lem:conv-mart}, \eqref{eq:limit-fg},
and the stochastic boundedness criterion for square-integrable martingales given in~\cite[Lemma 5.8]{PTRW07}.
From stochastic boundedness of the processes we can claim that any sequence $(n_k)_{k\geq 1}$ has a further subsequence $(n_k')_{k\geq 1}\subseteq (n_k)_{k\geq 1}$ such that
\begin{eq}
(\bar{\bld{X}}_{n'_k},\bar{\bld{Y}}_{n'_k})\dto (\bld{X}',\bld{Y}'),
\end{eq}
along that subsequence, where the limit $(\bld{X}',\bld{Y}')$ may depend on the subsequence $(n_k')_{k\geq 1}$.
However, due to the convergence result in Lemma~\ref{lem:conv-mart} and \eqref{eq:limit-fg}, the continuous mapping theorem (see~\cite[Section~3.4]{W02}) implies that the limit $(\bld{X}',\bld{Y}')$ must satisfy \eqref{eq:diffusion-SDE}.
Again, the solution to the SDE in~\eqref{eq:diffusion-SDE} is unique, and
therefore the limit $(\bld{X}',\bld{Y}')$ does not depend on the subsequence $(n_k')_{k\geq 1}$. Thus, the proof is complete.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{th:jam-diffusion}]
First observe that
$$\sqrt{n}(J_n^\star-J^\star)\dto X(J^\star)\quad\text{as}\quad n\to\infty.$$
Indeed, this follows from the hitting time distribution theorem in~\cite[Theorem~4.1]{EK2009}, together with the fact that $x'(J^\star)=-1$.
Now, since $\bld{X}$ is a centered Gaussian process, in order to complete the proof of Theorem~\ref{th:jam-diffusion} we only need to compute $\mathrm{Var}(X(J^\star))$.
We will use the following known result~\cite[Theorem~8.5.5]{A74} to calculate the variance of $X(t)$.
\begin{lemma}[{Expectation and variance of SDE}]
Consider the $d$-dimensional stochastic differential equation given by
\begin{equation}
\dif Z(t) = (A(t)Z(t)+a(t))\dif t+ \sum_{i=1}^db_i(t)\dif B_i(t),
\end{equation}
where $Z(0)=z_0\in \R^d$, the $b_i$'s are $\R^d$-valued functions, and the $B_i$'s are independent standard Brownian motions, $i=1,\ldots, d$. Then given $Z(0)=z_0$, $Z(t)$ has a normal distribution with mean vector $m(t)$ and covariance matrix $V(t)$, where $m(t)$ and $V(t)$ satisfy the recursion relations
\begin{eq}
\frac{\dif}{\dif t} m (t) = A(t)m(t) + a(t),\quad
\frac{\dif}{\dif t} V(t) = A(t)V (t) + V (t)A^{\scriptscriptstyle T} (t) +\sum_{i=1}^d b_i(t)b_i(t)^{\scriptscriptstyle T},
\end{eq}with initial conditions $m(0) = z_0$, and $V (0) = 0$.
\end{lemma}
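A one-dimensional instance of the lemma is easy to check numerically: for $\dif Z = aZ\,\dif t + b\,\dif B$ the variance ODE reads $\dot V = 2aV + b^2$, with closed form $V(t) = \frac{b^2}{2a}(\mathrm{e}^{2at}-1)$. The sketch below (with arbitrary sample values of $a$, $b$) integrates the ODE by forward Euler and compares with the closed form.

```python
import math

# Scalar illustration of the lemma: for dZ = a*Z dt + b dB, the variance
# satisfies V'(t) = 2*a*V(t) + b^2, V(0) = 0, with closed form
# V(t) = (b^2 / (2a)) * (exp(2at) - 1).
a, b, T = -0.7, 1.3, 2.0
n = 200_000
dt = T / n
V = 0.0
for _ in range(n):
    V += (2.0 * a * V + b * b) * dt  # forward Euler on the variance ODE
V_exact = (b * b / (2.0 * a)) * (math.exp(2.0 * a * T) - 1.0)
print(V, V_exact)
```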
In our case, observe from~\eqref{eq:diffusion-SDE} that
\begin{eq}
A(t)=\begin{bmatrix}
f(t) & g(t)\\
0 & 0
\end{bmatrix}, \
a(t)=\begin{bmatrix}
0\\0
\end{bmatrix},\
b_1(t)=
\begin{bmatrix}
\rho(t)\sqrt{\beta(t)}\\
\sigma
\end{bmatrix},\
b_2(t)=
\begin{bmatrix}
\sqrt{1-\rho(t)^2}\sqrt{\beta(t)}\\
0
\end{bmatrix}.
\end{eq}
Denote the variance-covariance matrix of $(X(t),Y(t))$ by
\begin{equation}
V(t) = \begin{bmatrix}
\sigma_{xx}(t) & \sigma_{xy}(t)\\
\sigma_{xy}(t)& \sigma_{yy}(t)
\end{bmatrix}.
\end{equation}
Then
\begin{align*}
\frac{\dif }{\dif t} V(t)
&=
\begin{bmatrix}
\sigma_{xx}(t)f(t)+\sigma_{xy}(t)g(t)& \sigma_{xy}(t)f(t)+\sigma_{yy}(t)g(t)\\
0 & 0
\end{bmatrix}
+
\begin{bmatrix}
\sigma_{xx}(t)f(t)+\sigma_{xy}(t)g(t)& 0\\
\sigma_{xy}(t)f(t)+\sigma_{yy}(t)g(t) & 0
\end{bmatrix}\\
&\hspace{1cm}+
\begin{bmatrix}
\rho(t)^2\beta(t)&\sqrt{\beta(t)}\sigma\rho(t)\\
\sqrt{\beta(t)}\sigma\rho(t)&\sigma^2
\end{bmatrix}
+\begin{bmatrix}
(1-\rho(t)^2)\beta(t)&0\\
0&0
\end{bmatrix}\\
&=\begin{bmatrix}
2\sigma_{xx}(t)f(t)+2\sigma_{xy}(t)g(t)&\sigma_{xy}(t)f(t)+\sigma_{yy}(t)g(t)\\
\sigma_{xy}(t)f(t)+\sigma_{yy}(t)g(t)& 0
\end{bmatrix}
+
\begin{bmatrix}
\beta(t)&\sqrt{\beta(t)}\sigma\rho(t)\\
\sqrt{\beta(t)}\sigma\rho(t)&\sigma^2
\end{bmatrix}.
\end{align*}
Therefore, the variance of $X(t)$ can be obtained from the solution of the recursion equations
\begin{eq}
\frac{\dif \sigma_{xx}(t)}{\dif t}&=2(\sigma_{xx}(t)f(t)+\sigma_{xy}(t)g(t))+\beta(t),\\
\frac{\dif \sigma_{xy}(t)}{\dif t}&=\sigma_{xy}(t)f(t)+\sigma_{yy}(t)g(t)+\sqrt{\beta(t)}\sigma\rho(t),
\end{eq}
and the proof is thus completed by noting that $\sigma_{yy}(t)=\sigma^2t$.
\end{proof}
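The recursion equations for $\sigma_{xx}$ and $\sigma_{xy}$ can be integrated numerically. The sketch below uses forward Euler with $\sigma_{yy}(t)=\sigma^2 t$; the coefficient functions are replaced by placeholder constants (the actual time-dependent $f$, $g$, $\beta$, $\rho$ of \eqref{defn:functions} are not reproduced here), which also makes a closed form for $\sigma_{xy}$ available for comparison.

```python
import math

# Forward-Euler integration of the recursion equations, with sigma_yy(t) =
# sigma^2 * t.  The coefficients f, g, beta, rho below are *placeholder
# constants* chosen for illustration; the paper's functions are
# time-dependent.
f_c, g_c, beta_c, rho_c, sigma = -1.2, 0.4, 2.0, 0.5, 1.0
T, n = 1.0, 200_000
dt = T / n
sxx = sxy = t = 0.0
for _ in range(n):
    syy = sigma * sigma * t
    dsxx = 2.0 * (f_c * sxx + g_c * sxy) + beta_c
    dsxy = f_c * sxy + g_c * syy + math.sqrt(beta_c) * sigma * rho_c
    sxx += dsxx * dt
    sxy += dsxy * dt
    t += dt
# For constant coefficients, sigma_xy has the closed form
# e^{fT} * int_0^T e^{-fs} (g*sigma^2*s + sqrt(beta)*sigma*rho) ds.
c0 = math.sqrt(beta_c) * sigma * rho_c
c1 = g_c * sigma * sigma
sxy_exact = math.exp(f_c * T) * (
    c0 * (1.0 - math.exp(-f_c * T)) / f_c
    + c1 * (-(T / f_c) * math.exp(-f_c * T)
            + (1.0 - math.exp(-f_c * T)) / f_c**2)
)
print(sxy, sxy_exact)
```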
\section{Clustering coefficient of random geometric graphs}
\label{sec:clustering-coeff}
The clustering coefficient for the random geometric graph was derived in \cite{DC02}, along with an asymptotic formula for large dimensions.
Below we give an alternative derivation.
The formula~\eqref{eq:alpha-choice} is more tractable in all dimensions compared with the formula in~\cite{DC02}.
Consider $n$ uniformly chosen points on a $d$-dimensional box $[0,1]^d$ and connect two points $u,v$ by an edge if they are at most $2r$ distance apart.
Fix any three vertex indices $u,v$, and $w$.
We write $u\leftrightarrow v$ to denote that $u$ and $v$ share an edge.
The clustering coefficient for \textsc{rgg}$(c,d)$ on $n$ vertices is then defined by
\begin{equation}
C_n(c,d) := \prob{v \leftrightarrow w|u\leftrightarrow v, u\leftrightarrow w}.
\end{equation}
The following proposition explicitly characterizes the asymptotic value of $C_n(c,d)$ for any density $c$ and dimension $d$.
\begin{proposition}\label{th:cc}
For any fixed $c> 0$, and $d\geq 1$, as $n\to\infty,$
\begin{equation}\label{eq:cc}
C_n(c,d)\to C(d)=d\int_{0}^{1}x^{d-1}I_{1-\frac{x^2}{4}}\Big(\frac{d+1}{2}, \frac{1}{2}\Big) \dif x.
\end{equation}
\end{proposition}
\begin{proof}
Observe that the \textsc{rgg} model can be constructed by throwing points sequentially at uniformly chosen locations independently, and then connecting each new point to the previous vertices that are at most $2r$ distance away.
Since the locations of the vertices are chosen independently, without loss of generality we assume that in the construction of the \textsc{rgg} model, the locations of $u,v,w$ are chosen in this respective order.
Now, the event $\{u\leftrightarrow v,u\leftrightarrow w,v \leftrightarrow w\}$ occurs if and only if $v$ falls within the $2r$ neighborhood of $u$ and $w$ falls within the intersection region of two spheres of radius $2r$, centered at $u$ and $v$, respectively.
Let $B_d(\bld{x},2r)$ denote the $d$-dimensional sphere with radius $2r$, centered at $\bld{x}$, and let $V_d(2r)$ denote its volume.
Since $r$ is sufficiently small that $B_d(\bld{x},2r)\subseteq [0,1]^d$, using translation invariance, we may assume that the location of $u$ is $\bld{0}$.
Let $\bld{v},\bld{w}$ denote the positions in the $d$-dimensional space, of vertices $v$ and $w$, respectively.
Notice that, conditional on the event $\{\bld{v}\in B_d(\bld{0},2r)\}$, the position $\bld{v}$ is uniformly distributed over $B_d(\bld{0},2r)$.
Let $\bld{V}$ be a point chosen uniformly from $B_d(\bld{0},2r)$.
Then the above discussion yields
\begin{align}\label{eq:prob-area}
&C_n(c,d) =\frac{\prob{u\leftrightarrow v,u\leftrightarrow w,v \leftrightarrow w}}{\prob{u\leftrightarrow v, u\leftrightarrow w}}\nonumber\\
%
&= \frac{1}{(V_d(2r))^2}\int_{\bld{v}\in B_d(\bld{0},2r)}\prob{\bld{w}\in B_d(\bld{0},2r) \cap B_d(\bld{v},2r)}\dif \bld{v}\nonumber\\
%
& = \frac{1}{V_d(2r)} \mathbb{E}[|B_d(\bld{0},2r)\cap B_d(\bld{V},2r)|].
\end{align}
We shall use the following lemma to compute the expectation term in \eqref{eq:prob-area}.
\begin{lemma}[{\cite{sphere}}] For any $\bld{x}$ with $\|\bld{x}\| = \rho$, the intersection volume $|B_d(\bld{0},2r)\cap B_d(\bld{x},2r)|$ depends only on $\rho$ and $r$, and is given by
\begin{equation}
|B_d(\bld{0},2r)\cap B_d(\bld{x},2r)| = V_d(2r)\cdot I_{1-\frac{\rho^2}{16r^2}}\Big(\frac{d+1}{2}, \frac{1}{2}\Big),
\end{equation} where $I_z(a,b)$ denotes the normalized incomplete beta integral given by $$ I_z(a,b) = \frac{\int_0^z y^{a-1}(1-y)^{b-1}\dif y}{\int_0^1y^{a-1}(1-y)^{b-1} \dif y}.$$
\end{lemma}
Observe that the Jacobian corresponding to the transformation from the Cartesian coordinates $(x_1,\ldots,x_d)$ to the polar coordinates $(\rho,\theta,\phi_1,\ldots,\phi_{d-2})$ is given by
\begin{equation}
J_d (\rho, \theta,\phi_1,\dots,\phi_{d-2}) = \rho^{d-1} \prod_{j=1}^{d-2} (\sin(\phi_j))^{d-1-j}.
\end{equation}
Thus, \eqref{eq:prob-area} reduces to
\begin{align*}
&C_n(c,d) = \frac{1}{(V_d(2r))^2} \int_{\bld{x}\in B_d(\bld{0},2r)}|B_d(\bld{0},2r)\cap B_d(\bld{x},2r)| \dif \bld{x}= \frac{1}{V_d(2r)} \int_{\|\bld{x}\|\leq 2r}I_{1-\frac{\|\bld{x}\|^2}{16r^2}}\Big(\frac{d+1}{2}, \frac{1}{2}\Big) \dif \bld{x}\\
%
&= \frac{1}{V_d(2r)}\int_{0}^{2r}\int_{0}^{2\pi} \int_{0}^\pi\dots \int_{0}^{\pi} \rho^{d-1}I_{1-\frac{\rho^2}{16r^2}}\Big(\frac{d+1}{2}, \frac{1}{2}\Big)\prod_{j=1}^{d-2} (\sin(\phi_j))^{d-1-j}\prod_{j=1}^{d-2}\dif \phi_j \dif \theta \dif \rho,
\end{align*}
and we obtain,
\begin{align*}
C_n(c,d) = \Big(\int_{0}^{2r}\rho^{d-1}\dif \rho\Big)^{-1}
\int_{0}^{2r}\rho^{d-1}I_{1-\frac{\rho^2}{16r^2}}\Big(\frac{d+1}{2}, \frac{1}{2}\Big)\dif \rho,
\end{align*}
since
$$V_d(2r) = 2\pi \bigg(\prod_{j=1}^{d-2}\int_0^\pi (\sin(\phi_j))^{d-1-j} \dif \phi_j\bigg) \int_0^{2r}\rho^{d-1}\dif \rho.$$
Therefore, substituting $x = \rho/(2r)$ yields
\begin{align}
C_n(c,d) &= \Big(\int_0^1 x^{d-1}\dif x\Big)^{-1}\int_{0}^{1}x^{d-1}I_{1-\frac{x^2}{4}}\big(\frac{d+1}{2}, \frac{1}{2}\big)\dif x = d\int_{0}^{1}x^{d-1}I_{1-\frac{x^2}{4}}\big(\frac{d+1}{2}, \frac{1}{2}\big)\dif x,
\end{align}which proves the result.
\end{proof}
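The integral \eqref{eq:cc} is straightforward to evaluate numerically. A minimal sketch using midpoint quadrature is given below; for $d=1$ the incomplete beta reduces to $I_{1-x^2/4}(1,\tfrac12)=1-x/2$, so $C(1)=3/4$, which the code reproduces (up to quadrature error).

```python
import math

def reg_inc_beta(z, a, b, n=2000):
    """Regularized incomplete beta I_z(a, b), by the midpoint rule
    (midpoint nodes avoid the integrable endpoint singularities)."""
    def integral(upper):
        h = upper / n
        return h * sum(
            (h * (k + 0.5)) ** (a - 1.0) * (1.0 - h * (k + 0.5)) ** (b - 1.0)
            for k in range(n)
        )
    return integral(z) / integral(1.0)

def clustering(d, m=400):
    """C(d) = d * int_0^1 x^{d-1} I_{1-x^2/4}((d+1)/2, 1/2) dx."""
    h = 1.0 / m
    return d * h * sum(
        (h * (k + 0.5)) ** (d - 1)
        * reg_inc_beta(1.0 - (h * (k + 0.5)) ** 2 / 4.0, (d + 1) / 2.0, 0.5)
        for k in range(m)
    )

print(clustering(1))  # closed form in d = 1: C(1) = 3/4
```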
\section{Discussion}\label{sec:discussion}
We introduced a clustered random graph model with tunable local clustering and a sparse superimposed structure.
The level of clustering
was set to suitably match the local clustering in the topology generated by the random geometric graph.
This resulted in a unique parameter $\alpha_d$ that for each dimension $d$ creates a one-to-one mapping between the tractable random network model and the intractable random geometric graph.
In this way, we offer a new perspective for understanding \textsc{rsa} on the continuum space in terms of \textsc{rsa} on random networks with local clustering.
Analysis of the random network model resulted in precise characterizations of the limiting jamming fraction and its fluctuation.
The precise results then served, using the one-to-one mapping, as predictions for the fraction of covered volume for \textsc{rsa} in the Euclidean space.
Based on extensive simulations, we then showed that these predictions were remarkably accurate, irrespective of density or dimension.
In our analysis the random network model serves as a topology generator that replaces the topology generated by the random geometric graph.
While the latter is directly connected with the metric in the Euclidean space, the only spatial information in the topologies generated by the random network model is contained in the matched average degree and clustering.
One could be inclined to think that random topology generators such as the \textsc{crg}$(c,\alpha_d)$ model may not be good enough. Indeed, this random network model reduces all possible interactions among pairs of vertices to only two principal components: the local interactions due to the clustering, and a mean-field distant interaction. There is, however, mounting evidence that such randomized topologies can approximate rigid spatial topologies when the local interactions in both topologies are matched. Apart from this paper, the strongest evidence to date for this line of reasoning is \cite{K2016}, where it was shown that typical ensembles from the latent-space geometric graph model can be modeled by an inhomogeneous random graph model that matches the original graph in terms of the average degree and a measure of clustering.
We should mention that \cite{K2016} is restricted to one-dimensional models and does not deal with $\text{\textsc{\textsc{rsa}}} $, but it shares with this paper the perspective that matching degrees and local clustering can be sufficient for describing spatial settings.
\bibliographystyle{abbrv}
\section{Introduction}
The Universe is currently in an accelerated expansion epoch, as observed through type Ia supernovae (SNIa) \cite{Riess:1998,Perlmutter:1999} and the large-scale structure (LSS) \cite{Abbott:2017wau}. Typically, this phenomenon is attributed to a component known as dark energy (DE), which together with the one named dark matter (DM) constitutes the dark sector, corresponding to about $96\%$ of the Universe \cite{Planck:2018}. The simplest cosmological model that explains this dark sector and is also compatible with the observational data is the so-called $\Lambda$-Cold Dark Matter ($\Lambda$CDM) model. This model proposes a cosmological constant as responsible for the accelerated expansion of the Universe, and a non-relativistic entity without pressure as the dark matter.
However, one of the open problems in the investigation of the dark sector is its division into DM and DE, which has been proven to be merely conventional, since there exists a degeneracy between both components, resulting from the fact that gravity only measures the total energy tensor~\cite{Kunz,Sandvik}. Thus, in the absence of a well confirmed (nongravitational) detection of DM, only the overall properties of the dark sector can be inferred from cosmological data, at the background and
perturbative levels. These results have driven the research toward alternative models which consider a single fluid that behaves as DM, but also presents the effects of an effective negative pressure at some stage of the cosmic evolution. These are called Unified DM (UDM) models, and examples of them are: (generalized) Chaplygin fluids \cite{Chaplygin,Kamenshchik:2001,Bilic:2001,Fabris:2001}, the logotropic dark fluid \cite{Chavanis:2016}, and more recently generalized perfect fluid models \cite{Hova2017, Almada:2018}. Apart from these, there exists the possibility of explaining the accelerated expansion of the Universe at late times as an effect of an effective negative pressure, due to bulk viscosity in the cosmic fluids, an approach first considered in \cite{Padmanabhan:1987, Gron:1990}. Several models based on this approach have been studied and constrained using cosmological data \cite{Xin-He:2009, XuDou:2011, Velten:2011, Calogero:2013, Normann:2016, Cruz_2018}.
A consistent description of the relativistic thermodynamics of non-perfect fluids is provided by the causal framework of the Israel-Stewart (IS) theory \cite{Israel1979}. Due to the high degree of nonlinearity of the differential equations involved, only a few exact solutions have been found, for simple Ans\"{a}tze of the bulk viscosity coefficient $\xi$ as a function of the energy density $\rho$ of the fluid with dissipation. For the choice $\xi =\xi _{0}\rho ^{1/2}$, a cosmological solution of polynomial type $H \approx ( t + const.)^{-1}$ for the Hubble rate was found as an Ansatz in \cite{MCruz:2017, Cruz2017}, which can describe accelerated, decelerated or even phantom-type cosmic expansion.
This solution can also be obtained in a systematic way by applying the factorization method to the dynamics equation for the Hubble rate. The factorization of second-order linear ordinary differential equations (ODEs) is a well-established method to obtain solutions in an algebraic manner. It goes back to the works of Dirac on the spectral problem for the quantum oscillator \cite{dirac1}, with further development due to Schr\"{o}dinger's works on the factorization of the Sturm-Liouville equation \cite{schro1,schro2}. In recent times, however, the factorization technique has been developed and applied to find exact solutions of nonlinear second-order ODEs \cite{berkovich1,cornejo1,rosu1,wang1,rosu3,tiwari:2015,hazra:2012}. The basic concept follows the same pattern already used in linear equations, and it works efficiently for ODEs with polynomial nonlinearities. The method is well adapted to the ODE for the Hubble rate which arises, for instance, in viscous cosmological models \cite{Belinchon:2017,cornejo2:2013}.
The main aim of this work is to constrain this solution using the latest measurements of the Hubble parameter (OHD) and type Ia supernovae (SNIa), reported in \cite{Magana:2018} and \cite{Scolnic:2017caz}, respectively. Although the expansion described by this solution does not present a transition from a decelerated phase to an accelerated one, which is a feature supported by the observational data, both phases can be well modeled separately using the analytical solution obtained, as we will show in our results.
In the case of the non-causal Eckart approach, $\xi _{0}$ can be estimated, for example, directly from the
observational data~\cite{Avelino2013}. Nevertheless, in the case of our solution, the observational constraints lead to allowed regions for $\xi_{0}$ and the parameter $\epsilon$, which is related to the non-adiabatic contribution to the speed of sound in the viscous fluid, as will be discussed in Section II.
Since the above-mentioned parameters are involved in a constraint which is a necessary condition for maintaining thermal equilibrium, we will discuss our results in view of such a constraint.
This paper is organized as follows: in section \ref{dos}, we describe
briefly the causal Israel-Stewart theory, showing the general differential equation to be solved. In section
\ref{sec:Solution}, we solve this differential equation by using the factorization technique. In section IV, we present the constraints for our model using the observational data coming from the direct measurements of the Hubble parameter and SNIa. Finally, in section V, we discuss our results.
\section{Israel-Stewart-Hiscock formalism} \label{dos}
In what follows we shall present briefly the Israel-Stewart-Hiscock
formalism to describe the thermodynamic properties and evolution of
a Universe filled with only one fluid as the main component, which
experiments dissipative process during its cosmic evolution. We
assume that this fluid obeys a barotropic EoS, $p=\omega \rho $,
where $p$ is the barotropic pressure and $0\leq \omega <1$. For a
flat FLRW Universe, the equation of constraint is
\begin{eqnarray}
3H^{2}=\rho. \label{eq:eq0}
\end{eqnarray}
In the ISH framework, the transport equation for the viscous pressure
$\Pi $ is given by \cite{Israel1979}
\begin{equation}
\tau\dot{\Pi}+ \left(1+\frac{1}{2}\tau\Delta\right)\Pi=-3\xi(\rho) , \label{eqforPi}
\end{equation}
where ``dot'' accounts for the derivative with respect to the cosmic
time. $\tau$ is the relaxation time, $\xi (\rho)$ is the bulk
viscosity coefficient, for which we assume the dependence upon the energy density $\rho$, $H$ is the Hubble parameter and $\Delta$ is defined by
\begin{equation}
\Delta = 3H+\frac{\dot{\tau}}{\tau }-\frac{\dot{\xi}}{\xi}-\frac{\dot{T}}{T}, \label{Delta}
\end{equation}
where $T$ is the barotropic temperature, which takes the form
$T=\beta \rho ^{\omega /\left(\omega +1\right)}$ that is the Gibbs
integrability condition when $p=\omega \rho $ and $\beta$ is a
positive parameter. We also have that~\cite{Maartens1996}
\begin{equation}
\frac{\xi}{\left(\rho +p\right)\tau} =
c_{b}^{2},\label{relaxationtime}
\end{equation}
where $c_{b}$ is the speed of bulk viscous perturbations (non-adiabatic contribution to the speed of sound in a dissipative
fluid without heat flux or shear viscosity), $c_{b}^{2}=\epsilon
\left(1-\omega \right)$ and $0<\epsilon \leq 1$, in order to ensure causality, with a dissipative speed of sound lower than or equal to the speed of light. We shall also assume a power-law dependence of $\xi$ on the energy density of the main fluid, i.e., $\xi =\xi _{0}\rho ^{s}$, where $s$ is an arbitrary
parameter and $\xi _{0}$ a positive constant, in order to satisfy the second law of thermodynamics~\cite{Weinberg1971}. This particular election of $\xi(\rho)$ is rather arbitrary, but allows to obtain a differential equation for the Hubble parameter that can be integrated for some particular values of $s$, obtaining well known analytic
solutions. As we will discuss below, the case $s=1/2$ leads to the simplest form of the differential equation involved.
Using the barotropic EoS in Eq.(\ref{relaxationtime}), we obtain the following expression for the relaxation time
\begin{eqnarray}
\tau =\frac{\xi_0}{\epsilon (1-\omega ^{2}) }\rho ^{s-1}, \label{eq:eq3}
\end{eqnarray}
and according to Eq. (\ref{Delta})
\begin{eqnarray}
\Delta =\frac{3H}{\delta ( \omega ) }\left( \delta (
\omega ) -\frac{\dot{H}}{H^2}\right) , \label{eq:eq4}
\end{eqnarray}
where we have defined the $\delta ( \omega )$ parameter by
\begin{eqnarray}
\delta \left( \omega \right) \equiv \frac{3}{4}\left( \frac{1+\omega
}{1/2+\omega } \right). \label{eq:eq5}
\end{eqnarray}
So, for $0\leq \omega <1$, $\delta \left( \omega \right) >0$. Using Eqs. (\ref{eq:eq0}) and (\ref{eq:eq3}) we can write
\begin{eqnarray}
\tau H=\frac{3^{s-1}\xi _{0}}{\epsilon (1-\omega ^{2})}H^{2\left( s-1/2\right)
}. \label{eq:eq7}
\end{eqnarray}
For the particular case $s=1/2$ we obtain that
\begin{eqnarray}
\tau H=\frac{\xi _{0}}{\sqrt{3}\epsilon (1-\omega ^{2})}
. \label{eq:eq8}
\end{eqnarray}
In this case, the necessary condition for keeping the fluid description of the dissipative dark matter component is given by $\tau H < 1$, which leads to the upper limit for $\xi _{0}$
\begin{eqnarray}
\xi _{0} < \sqrt{3}\epsilon (1-\omega ^{2})
. \label{eq:eq9}
\end{eqnarray}
We will discuss this condition later, when a cold dark matter fluid with dissipation, as the main component of a late-time Universe, is constrained by the observational data.
The differential equation for the Hubble parameter can be constructed by using the conservation equation
\begin{eqnarray}
\dot{\rho}+3H\left[ \left( 1+\omega \right) \rho +\Pi \right] =0,
\label{eq:eq18}
\end{eqnarray}
the Eqs. (\ref{eq:eq0}) and (\ref{eqforPi}), and the relation $\xi \left( \rho \right) =\xi
_{0}\rho ^{s}$. So, we can obtain the following
differential equation
\begin{widetext}
\begin{eqnarray}
\left[ \frac{2}{3\left( 1-\omega ^{2}\right) }\left( \frac{3\left(
1+\omega \right)\dot{H}}{H^{2}}+\frac{\ddot{H}}{H^{3}}\right)
-3\right] H^{2\left( s-1/2\right) }+ \frac{1}{3^{s}\xi _{0}}\left[
1+\frac{3^{s-1}\xi _{0}\Delta H^{2\left( s-1\right)}}{2\left(
1-\omega ^{2}\right) } \right] \left[ 3\left( 1+\omega \right)
+\frac{2\dot{H}}{H^{2}}\right] = 0. \label{eq:eq19}
\end{eqnarray}
\end{widetext}
For $s=1/2$, the following Ansatz
\begin{eqnarray}
H\left( t\right) =A\left( t_{s}-t\right) ^{-1}, \label{Ansatz}
\end{eqnarray}
is a solution of Eq.(\ref{eq:eq19}) with a big rip singularity \cite{Cruz2017}, and the Ansatz
\begin{eqnarray}
H\left( t\right) =A\left( t -t_{s}\right) ^{-1}, \label{Ansatz1}
\end{eqnarray}
is also a solution which can describe cosmic evolutions with accelerated, linear and decelerated expansion
\cite{MCruz:2017}. In the next section, we will show that by using the factorization method this Ansatz can be obtained as a particular solution of the differential equation (\ref{eq:eq19}), which gives a deeper understanding of its particularity and its dependence on the initial conditions.
\section{Solving the differential equation for the Hubble rate} \label{sec:Solution}
The nonlinear differential equation for the Hubble function (\ref{eq:eq19})
can be rewritten for $s=1/2$ as follows
\begin{equation}\label{facto1}
\ddot{H} + \frac{\alpha_1}{H} {\Dot H}^2 + \alpha_2 H {\Dot H} + \alpha_3 H^3=0,
\end{equation}
where
\begin{eqnarray}
\alpha_1 & = & -\frac{3}{2\delta},\\
\alpha_2 & = & \frac{3}{2} + 3(1+\omega)-\frac{9}{4\delta}(1+\omega)+\frac{\sqrt{3}\epsilon(1-\omega^2)}{\xi_0},\\
\alpha_3 & = & \frac{9}{4}(1+\omega) + \frac{9}{2}\epsilon (1-\omega^2)\left[ \frac{1+\omega}{\sqrt{3}\xi_0} -1 \right],
\end{eqnarray}
are constant coefficients.
Let us consider the following factorization scheme \cite{cornejo1,rosu1,wang1} to obtain an exact particular solution of the Eq. (\ref{facto1}).
The nonlinear second order differential equation
\begin{equation}\label{facto1a}
\ddot{H} + f(H)\dot{H}^2 + g(H)\dot{H} + j(H)=0,
\end{equation}
where $\dot{H}=\frac{dH}{dt}=D_t H$, can be factorized in the form
\begin{equation}\label{facto2}
[D_t - \phi_1(H)\dot{H} - \phi_2(H)][D_t - \phi_3(H)]H=0.
\end{equation}
where $\phi_i(H)$ ($i=1,2,3$) are factoring functions to be found. Expanding Eq. (\ref{facto2}), one is able to group terms as follows \cite{wang1}
\begin{equation}\label{facto2a}
\ddot{H} -\phi_{1}\dot{H}^2 + \left( \phi_{1}\phi_{3}H - \phi_{2}-\phi_{3}-\frac{d\phi_{3}}{dH}H\right) \dot{H}+\phi_{2}\phi_{3}H =0.
\end{equation}
Then, by comparing Eq. (\ref{facto1a}) with Eq. (\ref{facto2a}), we get the following conditions%
\begin{align}
f\left( H\right) & = -\phi_{1},\label{facto3}\\
g\left( H \right) & = \phi_{1}\phi_{3}H - \phi_{2}-\phi_{3}-\frac{d\phi_{3}}{dH}H,\label{facto4}\\
j\left(H \right) & =\phi_{2}\phi_{3}H.\label{facto5}
\end{align}
Any factorization like (\ref{facto2}) of a scalar ODE in the form given in (\ref{facto1a}) allows one to find a compatible first order ODE \cite{berkovich1}
\begin{equation}
[D_t - \phi_3(H)]H= D_tH - \phi_3(H)H=0,\label{facto5a}
\end{equation}
whose solution provides a particular solution of Eq. (\ref{facto1a}).
We apply now the previous scheme to Eq. (\ref{facto1}). The factoring function is $\phi_1=-\frac{\alpha_{1}}{H}$, since $f(H)$ is explicitly given in Eq. (\ref{facto1}). Also, according to Eq. (\ref{facto5}), the two unknown functions $\phi_2$ and $\phi_3$ are easily obtained by factoring the polynomial expression $j(H)=\alpha_3 H^3$, also given in Eq. (\ref{facto1}).
Then, the functions
\begin{equation}
\phi_{2}=a_{1}^{-1}H,\quad\text{and}\quad
\phi_{3}=a_{1}\alpha_3 H,\label{facto6}
\end{equation}
where $a_{1}(\neq0)$ is an arbitrary constant, are proposed.
The explicit value of $a_{1}$ is obtained by substituting $ g(H)= \alpha_2 H$ and the $\phi_i$ functions into Eq. (\ref{facto4}). Then, we get the constraint equation
\begin{equation}
\alpha_2 H = -(a_1\alpha_1 \alpha_3 + a_1^{-1} +2a_1\alpha_3 )H, \label{facto6a}
\end{equation}
and equating the coefficients of $H$ on both sides provides
\begin{equation}
a_1 = \frac{-\alpha_2 \pm \sqrt{\alpha_2^2 - 4\alpha_3\left(2+\alpha_1\right)}}{2\alpha_3 \left( 2 + \alpha_1 \right) }. \label{facto6b}
\end{equation}
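As a sanity check on the algebra, the root (\ref{facto6b}) can be verified numerically against the coefficient constraint (\ref{facto6a}); the sketch below uses illustrative parameter values, not those of the cosmological model.

```python
import math

def a1_root(alpha1, alpha2, alpha3, sign=+1):
    """Root of a1^2*alpha3*(2+alpha1) + alpha2*a1 + 1 = 0, i.e. Eq. (facto6b)."""
    disc = alpha2 ** 2 - 4.0 * alpha3 * (2.0 + alpha1)
    return (-alpha2 + sign * math.sqrt(disc)) / (2.0 * alpha3 * (2.0 + alpha1))

# Illustrative parameter values (hypothetical, chosen so the discriminant is positive).
alpha1, alpha2, alpha3 = 1.0, -5.0, 1.0
a1 = a1_root(alpha1, alpha2, alpha3, sign=+1)

# The coefficient of H in Eq. (facto6a) must reproduce alpha_2:
lhs = alpha2
rhs = -(a1 * alpha1 * alpha3 + 1.0 / a1 + 2.0 * a1 * alpha3)
assert abs(lhs - rhs) < 1e-9
```

Either sign of the square root satisfies the constraint; the two roots correspond to the two solutions $A_{\pm}$ discussed below.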
Therefore, Eq. (\ref{facto1}) admits the factorization
\begin{equation}
\left[ D_t + \frac{\alpha_1}{H}\dot{H} -a_1^{-1}H\right]\left[D_t -a_1 \alpha_3 H\right]H = 0,\label{facto8}
\end{equation}
with the compatible first order ODE
\begin{equation}
\dot{H} -a_1 \alpha_3 H^2 = 0,\label{facto9}
\end{equation}
whose solution is also a particular solution of Eq. (\ref{facto1}) factorized in the form (\ref{facto8}).
The integration of this equation generates one arbitrary integration constant, which can be written explicitly in terms of an initial condition. If we consider the initial condition $H(t_0)=H_0$, where $H_0$ is the Hubble constant, then we get the following particular solution of Eq. (\ref{eq:eq19}) with $s=1/2$,
\begin{equation}
H(t)= \frac{A_{\pm}}{t-(t_0-\frac{A_{\pm}}{H_0})}, \label{facto10}
\end{equation}
where $A_{\pm}= -\frac{1}{\alpha_3 a_1}$,
or equivalently
\begin{equation}
A_{\pm}= \frac{2\sqrt{3}\epsilon(\omega^2-1)-6\xi_0\pm 2\sqrt{3\epsilon}\sqrt{\epsilon(\omega^2 -1)^2 +6\xi_0^2(1-\omega)}}{3(\omega+1)\left( -3\xi_0+2\epsilon (\omega -1)\left[ \sqrt{3}(1+\omega) - 3\xi_0 \right] \right)},
\label{facto25}
\end{equation}
with the restriction $\xi_0 \neq \frac{2\sqrt{3}\epsilon\left(\omega^2 -1 \right)}{3+6\epsilon\left(\omega-1\right)}$, which prevents $A_{\pm}$ from becoming indeterminate.
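The particular solution (\ref{facto10}) can be checked directly against the compatible first-order ODE (\ref{facto9}); a minimal numerical sketch, with hypothetical parameter values:

```python
import math

# Illustrative values (hypothetical, for checking the algebra only).
alpha3, a1, t0, H0 = 1.0, -0.5, 0.0, 70.0

A = -1.0 / (alpha3 * a1)   # A_pm as defined below Eq. (facto10)
ts = t0 - A / H0           # singularity time t_s

def H(t):
    """Particular solution H(t) = A / (t - t_s), Eq. (facto10)."""
    return A / (t - ts)

# Central finite difference for Hdot, compared with the rhs of Eq. (facto9):
t, h = 1.0, 1e-6
Hdot = (H(t + h) - H(t - h)) / (2.0 * h)
assert abs(Hdot - a1 * alpha3 * H(t) ** 2) < 1e-4
# The initial condition is satisfied:
assert abs(H(t0) - H0) < 1e-9
```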
The above particular solution (\ref{facto10}) can also be written in the form
\begin{equation}
H(t)= \frac{A_{\pm}}{t-t_s}, \label{facto13}
\end{equation}
where $t_s=t_0-\frac{1}{H_0(1+q_0)}$, and $q_0$ is the initial value of the deceleration parameter, but since
\begin{equation}\label{eq:q}
1+q = -\frac{\dot{H}}{H^2} = \frac{1}{A_{\pm}} \,,
\end{equation}
this means that this solution represents an expansion with a constant deceleration parameter. Once $q_0$ is given, a value is obtained for $A_{\pm}$, and a family of possible values for the parameters $\epsilon$, $\omega$ and $\xi_0$ can be evaluated from Eq. (\ref{facto25}). Conversely, once the value of $A_{\pm}$ is given, or constrained from the data, as will be done in the next section, $q_0$ and the ranges of the other parameters can be evaluated.
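Through Eq. (\ref{eq:q}), the constant deceleration parameter follows immediately from $A_{\pm}$; a short check using the joint-analysis central values quoted later in Table \ref{tab:bf_values}:

```python
def q_of_A(A):
    """Constant deceleration parameter from Eq. (eq:q): 1 + q = 1/A."""
    return 1.0 / A - 1.0

# Joint-analysis central values from Table (tab:bf_values):
assert round(q_of_A(1.58), 2) == -0.37   # accelerated phase
assert round(q_of_A(0.84), 2) == 0.19    # decelerated phase
```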
The solution (\ref{facto13}) can also be written in terms of the redshift variable. For the scale factor one obtains
\begin{equation}
\frac{a}{a_0}= \left( \frac{t-t_s}{t_0-t_s} \right)^{A_{\pm}} = \frac{1}{1+z}. \label{facto14}
\end{equation}
Therefore,
\begin{equation}
H(z)= H_0 (1+z)^{1/A_{\pm}} \,, \label{facto15}
\end{equation}
where $H_0 = 100\,h\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$, and $h$ denotes the dimensionless Hubble constant. Notice that this form of the Hubble parameter is defined for both phases of the Universe, the accelerated and the decelerated one. However, we can connect both phases by requiring continuity of the Hubble parameter at $z=z_t$, where $z_t$ is the accelerated-decelerated transition redshift. Then, we obtain
\begin{equation} \label{eq:Hz}
H(z)= \left\{ \begin{array}{lc}
H_0(1+z)^{1/\hat{A}_1}\,, & z \leq z_t\,, \\
\\
H_0 (1+z_t)^{1/\hat{A}_1 - 1/\hat{A}_2}(1+z)^{1/\hat{A}_2}\,, & z > z_t\,. \\
\end{array}
\right.
\end{equation}
In the above expression, $\hat{A}_1$ and $\hat{A}_2$ are the free parameters corresponding to the accelerated and decelerated phases respectively.
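The piecewise Hubble parameter (\ref{eq:Hz}) and its continuity at $z_t$ can be sketched as follows (illustrative $H_0$, with the best-fit central values obtained later):

```python
def H_of_z(z, H0, A1hat, A2hat, zt=0.64):
    """Piecewise Hubble parameter of Eq. (eq:Hz), continuous at z = zt."""
    if z <= zt:
        return H0 * (1.0 + z) ** (1.0 / A1hat)
    return H0 * (1.0 + zt) ** (1.0 / A1hat - 1.0 / A2hat) * (1.0 + z) ** (1.0 / A2hat)

H0, A1hat, A2hat, zt = 70.0, 1.58, 0.84, 0.64
# Continuity across the transition redshift:
assert abs(H_of_z(zt, H0, A1hat, A2hat, zt)
           - H_of_z(zt + 1e-9, H0, A1hat, A2hat, zt)) < 1e-4
# Normalization at z = 0:
assert abs(H_of_z(0.0, H0, A1hat, A2hat, zt) - H0) < 1e-12
```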
\section{Cosmological constraints}
In this section we describe the observational data used and build the $\chi^2$-function used to obtain the confidence regions of the free model parameters.
We employ a Markov chain Monte Carlo (MCMC) analysis based on the emcee module \cite{Emcee:2013}, setting $5000$ chains with $500$ steps each.
The burn-in phase is stopped once the Gelman-Rubin statistic reaches a value of $1.1$ for each free parameter \cite{Gelman:1992}.
Table \ref{tab:priors} presents the priors considered for each parameter. We also set the redshift of the accelerated-decelerated transition to $z_t=0.64$ \cite{Moresco:2016mzx} in Eq. (\ref{eq:Hz}). Then, in order to constrain the model parameters, we use the Hubble parameter measurements, the supernovae data, and their combination.
\begin{table}
\caption{Priors considered for the model parameters.}
\centering
\begin{tabular}{| C{2cm} C{3cm} |}
\hline
Parameter & Prior \\
\hline
$\hat{A}_1$ & Flat in $[1,5]$ \\ [0.7ex]
$\hat{A}_2$ & Flat in $[0,1]$ \\ [0.7ex]
$h$ & Gaus$(0.7324,0.0174)$ \\ [0.7ex]
\hline
\end{tabular}
\label{tab:priors}
\end{table}
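The priors of Table \ref{tab:priors} enter the MCMC analysis through the log-prior; a minimal sketch of how they could be coded (the sampler itself, e.g. emcee's \texttt{EnsembleSampler}, would add the log-likelihood to this):

```python
import math

def log_prior(theta):
    """Priors of Table (tab:priors): flat on A1hat and A2hat, Gaussian on h."""
    A1hat, A2hat, h = theta
    if not (1.0 <= A1hat <= 5.0 and 0.0 <= A2hat <= 1.0):
        return -math.inf
    mu, sigma = 0.7324, 0.0174
    return -0.5 * ((h - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

# The Gaussian prior peaks at h = 0.7324, and points outside the flat
# ranges are rejected outright:
assert log_prior((1.58, 0.84, 0.7324)) > log_prior((1.58, 0.84, 0.70))
assert log_prior((6.0, 0.84, 0.73)) == -math.inf
```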
\subsection{Hubble Observational Data}
The direct way to observe the expansion rate of the Universe is through measurements of the Hubble parameter as a function of the redshift, $H(z)$, known as observational Hubble data (OHD). The latest OHD, obtained by using the differential age (DA) method \cite{Jimenez:2001gg}, are compiled in \cite{Magana:2018} and consist of $51$ Hubble parameter points covering the redshift range $[0,1.97]$. We constrain the free model parameters by minimizing the chi-square function
\begin{equation}\label{eq:chi2_ohd}
\chi^2_{OHD} = \sum_{i} \left( \frac{H_{th}(z_i) - H_{obs}(z_i)}{\sigma_{obs}^i} \right)^2,
\end{equation}
where $H_{th}(z_i)$ and $H_{obs}(z_i) \pm \sigma_{obs}^i$ are the theoretical and observational Hubble parameter at the redshift $z_i$, respectively.
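A direct implementation of Eq. (\ref{eq:chi2_ohd}) might look as follows; the model signature is illustrative:

```python
def chi2_ohd(theta, z_obs, H_obs, sigma_obs, H_model):
    """Eq. (eq:chi2_ohd): sum over OHD points of squared normalized residuals."""
    return sum(
        ((H_model(z, *theta) - H) / s) ** 2
        for z, H, s in zip(z_obs, H_obs, sigma_obs)
    )

# Sanity check with synthetic data drawn from the model itself:
model = lambda z, H0: H0 * (1.0 + z) ** 0.63
zs = [0.1, 0.5, 1.0]
Hs = [model(z, 70.0) for z in zs]
assert chi2_ohd((70.0,), zs, Hs, [1.0] * 3, model) < 1e-18
assert chi2_ohd((71.0,), zs, Hs, [1.0] * 3, model) > 0.0
```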
\subsection{Type Ia Supernovae}
We use the Pantheon dataset \cite{Scolnic:2017caz}, consisting of $1048$ type Ia supernovae (SNIa) in the redshift range $0.01 < z < 2.3$. The comparison between data and model is obtained through the expression
\begin{equation}\label{eq:chi2_sn}
\chi^2_{SNIa} = (m_{th}-m_{obs}) \cdot {\rm Cov}^{-1} \cdot (m_{th}-m_{obs})^{T}\,,
\end{equation}
where $m_{obs}$ is the observational bolometric apparent magnitude and ${\rm Cov}^{-1}$ is the inverse of the covariance matrix. The theoretical estimate $m_{th}$ is computed as
\begin{equation}
m_{th}(z) = \mathcal{M} + 5\, \log_{10}\left[ d_L(z)/10\,\mathrm{pc} \right]\,.
\end{equation}
Here, $\mathcal{M}$ is a nuisance parameter and $d_L(z)$ is the luminosity distance, given by
\begin{equation}
d_L(z) = (1+z)\,c \int_0^z \frac{dz'}{H(z')}\,,
\end{equation}
where $c$ is the speed of light.
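A sketch of the luminosity distance and theoretical magnitude, assuming $H(z)$ in $\mathrm{km\,s^{-1}\,Mpc^{-1}}$ so that $d_L$ comes out in Mpc (the conversion $1\,\mathrm{Mpc}=10^{6}\,\mathrm{pc}$ gives $d_L/10\,\mathrm{pc} = 10^{5}\,d_L[\mathrm{Mpc}]$):

```python
import math

C_KM_S = 299792.458  # speed of light in km/s

def d_L(z, H_of_z, n=2000):
    """Luminosity distance in Mpc: d_L = (1+z) c int_0^z dz'/H(z')."""
    dz = z / n
    zs = [i * dz for i in range(n + 1)]
    f = [1.0 / H_of_z(zz) for zz in zs]
    integral = dz * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])  # trapezoidal rule
    return (1.0 + z) * C_KM_S * integral

def m_th(z, H_of_z, M):
    """Apparent magnitude: M + 5 log10(d_L / 10 pc), with d_L in Mpc."""
    return M + 5.0 * math.log10(d_L(z, H_of_z) * 1.0e5)

# Illustrative Hubble parameter of the accelerated-phase form:
H = lambda z: 70.0 * (1.0 + z) ** 0.63
assert d_L(0.5, H) > d_L(0.1, H) > 0.0       # distance grows with redshift
assert m_th(0.5, H, 0.0) > m_th(0.1, H, 0.0)  # fainter at higher z
```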
\subsection{Joint analysis}
We also perform a joint analysis by defining the merit function
\begin{equation}
\chi^2_{joint} = \chi^2_{OHD} + \chi^2_{SNIa}\,,
\end{equation}
where $\chi^2_{OHD}$ and $\chi^2_{SNIa}$ are given in Eqs. (\ref{eq:chi2_ohd}) and (\ref{eq:chi2_sn}), respectively.
The best fitting parameters are obtained by setting the acceleration-deceleration transition at $z_t=0.64$ \cite{Moresco:2016mzx}. Table \ref{tab:bf_values} presents a summary of the best estimates of the parameters for the dissipative unified dark matter (DUDM) model (see Eq. (\ref{eq:Hz})).
\begin{table*}
\caption{Best fit values of the free parameters of the UDM model.}
\centering
\begin{tabular}{|c c c c c c c c|}
\hline
Data & $\chi^2$ & $\hat{A}_1$ & $\hat{A}_2$ & $h$ & $\mathcal{M}$ & BIC & AIC \\
\hline
OHD & $34.10$ & $1.58^{+0.15}_{-0.12}$ & $0.84^{+0.02}_{-0.02}$ & $0.700^{+0.014}_{-0.014}$ & - & $57.69$ & $40.10$ \\ [0.7ex]
SNIa & $1029.48$ & $1.62^{+0.11}_{-0.10}$ & $0.71^{+0.16}_{-0.14}$ & $0.732^{+0.017}_{-0.017}$& $5.76^{+0.05}_{-0.05}$ & $1085.12$ & $1035.48$ \\ [0.7ex]
OHD+SNIa & $1064.91$ & $1.58^{+0.08}_{-0.07}$ & $0.84^{+0.02}_{-0.02}$ & $0.700^{+0.010}_{-0.010}$& $5.67^{+0.02}_{-0.02}$ & $1120.93$ & $1070.91$ \\ [0.7ex]
\hline
\end{tabular}
\label{tab:bf_values}
\end{table*}
Figure \ref{fig:plotHz} shows the best fit curves over the OHD and SNIa samples in the top and bottom panels, respectively, using the joint analysis. We observe a clear difference in the behaviour of the Hubble parameter between the DUDM and $\Lambda$CDM models at $z<0$ (the future): while $\Lambda$CDM yields a smooth expansion of the Universe, the DUDM model leads to a big-rip-like expansion. From Eq. (\ref{eq:q}) and the joint analysis values, we estimate the deceleration parameter $q = -0.37$ and $0.19$ for the accelerated and decelerated phases, respectively. Notice that $q$ is constant during each phase.
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{plot_data_Hz.pdf} \\
\includegraphics[width=.9\linewidth]{plot_data_SNIa.pdf}
\caption{Joint best fit of the DUDM and $\Lambda$CDM models using the best fitting values of the joint analysis.}
\label{fig:plotHz}
\end{figure}
Figure \ref{fig:Contours} shows the 2D contours at $68$, $95$ and $99.7\,\%$ ($1,2$ and $3 \sigma$) confidence level (CL) and the 1D posterior distributions of the free model parameters. It shows a good agreement between the best fits within $1\sigma$ CL.
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{contour1.pdf}
\caption{2D-contours considering OHD (green), SNIa (gray), and joint analysis (blue) at $68,95,99.7\,\%$ confidence level.}
\label{fig:Contours}
\end{figure}
\section{Discussion}
In the following we will refer to the mathematical expressions given in Eq. (\ref{facto25}) as $A_+$ or $A_-$, while their numerical values will be denoted $\hat{A}_1$ or $\hat{A}_2$.
By using Eq. (\ref{facto25}), which gives $A_\pm$ as a function of the model parameters, we explore the behavior of $\xi_0$ as a function of $\epsilon$. These curves are shown in Figure \ref{fig:xi0vsEps} for several values of $\omega = 0, 0.05,0.1$ (from bottom curve to the top one), and are obtained when we consider the positive (top panel) and negative (bottom panel) sign in Eq. (\ref{facto25}), {\it i.e.}, $A_+$ and $A_-$ respectively. For $A_+$ we find values $\xi_0>0$ in the region $0.5<\epsilon<1$, and $\xi_0<0$ for $0<\epsilon<0.5$. Similarly, when we consider $A_-$, we find positive values of $\xi_0$ in the allowed region $\xi_0<\sqrt{3}\epsilon$ in $0.5<\epsilon<1$ for both epochs. In contrast, we find values of $\xi_0<0$ within $0<\epsilon<0.5$ for both epochs when any sign is considered.
Then, we discard the region of the $\{\epsilon, \xi_0\}$ phase space where $\xi_0<0$, because the second law of thermodynamics would be violated there. It is interesting to note that when we use $A_+$ and $\xi_0>0$, the curves for the decelerated/accelerated epochs are not sensitive to the $\hat{A}_{1,2}$ values (see Table \ref{tab:bf_values}).
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{plot_xi_of_eps_Asign1.pdf}
\includegraphics[width=.9\linewidth]{plot_xi_of_eps_Asignn1.pdf}
\caption{Top (bottom) panel displays the behavior of $\xi_0$ as a function of $\epsilon$ considering the positive (negative) sign of the Eq. (\ref{facto25}). The green (blue) color lines correspond to the $\hat{A}_1$ ($\hat{A}_2$) value. In the top panel, the green and blue lines in the region $\xi_0>0$ are superimposed. For both plots and each color, from bottom to top the green (blue) lines refer to $\omega = 0, 0.05, 0.1$, respectively.}
\label{fig:xi0vsEps}
\end{figure}
Further insight into the previous results can be gained by considering the effective EoS, $\omega_{eff}$, which is defined by
\begin{equation}\label{eoseff}
\omega_{eff} = -1 - \frac{2}{3}\frac{\dot{H}}{H^2} = -1 + \frac{2}{3}\frac{1}{\hat{A}},
\end{equation}
where $\hat{A}$ takes the value $\hat{A}_1$ or $\hat{A}_2$ for the accelerated or decelerated phase, respectively. For the accelerated phase we take $\hat{A}_1= 1.58^{+0.08}_{-0.07}$, corresponding to the value obtained using the OHD+SNIa data. In this case we obtain from Eq. (\ref{eoseff}) that $\omega_{eff}= -0.58^{+0.02}_{-0.02}$,
which means that the dissipative effects drive a quintessence-like behavior. For the same set of data, $\hat{A}_2= 0.84^{+0.02}_{-0.02}$ in the decelerated phase and $\omega_{eff}= -0.21^{+0.02}_{-0.02}$; therefore, in this case, even though dissipative effects are present in the dark matter fluid, they are not enough to drive acceleration. We find a deviation of $6.2\sigma$ from the quintessence region ($\omega<-1/3$).
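Eq. (\ref{eoseff}) maps the fitted $\hat{A}$ values directly onto $\omega_{eff}$:

```python
def w_eff(A_hat):
    """Effective EoS from Eq. (eoseff): w_eff = -1 + 2/(3*A_hat)."""
    return -1.0 + 2.0 / (3.0 * A_hat)

# Joint-analysis central values (Table tab:bf_values):
assert round(w_eff(1.58), 2) == -0.58   # accelerated phase, quintessence-like
assert round(w_eff(0.84), 2) == -0.21   # decelerated phase, still w_eff < 0
```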
Despite the fact that the solution found does not display a smooth transition in the deceleration parameter, it allows us to describe both phases separately through the parameters $\hat{A}_1$ and $\hat{A}_2$, derived from cosmological data.
On the other hand, the condition to keep the fluid description of the dark matter component, which is an essential assumption of the thermodynamical formalism invoked, given by Eq. (\ref{eq:eq9}), provides the upper limit $\xi _{0}< \sqrt{3}\epsilon$ for a pressureless dark matter fluid. By simple inspection of the curves displayed in Figure \ref{fig:xi0vsEps}, it is easy to see that this constraint can be satisfied by both solutions $A_{+}$ and $A_{-}$, in the cases of both decelerated and accelerated expansion. Nevertheless, the constraint is fulfilled only for approximately $\epsilon > 0.82$ for the $A_{+}$ solution, and for $\epsilon > 0.5$ for the $A_{-}$ solution. This indicates that a large non-adiabatic contribution to the speed of sound within the fluid is needed. It is well known that the observed structure formation implies a very low speed of sound, consistent with a cold dark matter component. Therefore, this issue represents a weakness of the model. Moreover, in the case of the solution with accelerated expansion, the thermal equilibrium of the fluid cannot be maintained. Besides, a positive entropy production and the convexity condition, ${d^2}S/d{t^2}< 0$, are only satisfied by the decelerated solution, as was shown in \cite{MCruz:2017}.
The solution analyzed in this work takes the simple form given by Eq. (\ref{facto13}), and is clearly not a general solution of the IS formalism. In fact, only one initial condition is enough to determine it and, since it represents a cosmic expansion with a constant deceleration parameter, the other initial condition, $q_0$, necessary to determine the solution of a second order differential equation in the Hubble parameter, plays no role at all. It is hoped that more general solutions could overcome the difficulties highlighted above.
Finally, we compare the DUDM and $\Lambda$CDM models statistically through the Akaike information criterion (AIC) \cite{AIC:1974, Sugiura:1978} and the Bayesian information criterion (BIC) \cite{schwarz1978}. The AIC and BIC are defined by ${\rm AIC} = \chi^2 + 2k$ and ${\rm BIC} = \chi^2 + 2k\, \ln(N)$, respectively, where $\chi^2$ is the $\chi^2$ function, $k$ is the number of free parameters and $N$ is the data size. The model preferred by the data is the one with the minimum value of these quantities. In order to compare the models, we use the full data sample, OHD$+$SNIa, and obtain the $\chi^2$ for the DUDM model as the sum of the ones obtained in the decelerated and accelerated phases. Then, we estimate $\Delta {\rm AIC} = {\rm AIC}^{DUDM}-{\rm AIC}^{\Lambda CDM} = 7.96$ and $\Delta {\rm BIC} = {\rm BIC}^{DUDM}-{\rm BIC}^{\Lambda CDM} =19.96$, which suggests that $\Lambda$CDM is the model preferred by the OHD$+$SNIa data used. This result is expected, since the DUDM model has more free parameters than $\Lambda$CDM.
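With the definitions as stated in the text (including the $2k\ln N$ form of the BIC used here), the OHD row of Table \ref{tab:bf_values} is reproduced by:

```python
import math

def aic(chi2, k):
    """AIC = chi^2 + 2k."""
    return chi2 + 2.0 * k

def bic(chi2, k, N):
    """BIC as defined in the text, chi^2 + 2k ln N; note that the more
    common convention uses k ln N instead."""
    return chi2 + 2.0 * k * math.log(N)

# OHD row of Table (tab:bf_values): chi2 = 34.10, k = 3 parameters, N = 51 points.
assert round(aic(34.10, 3), 2) == 40.10
assert round(bic(34.10, 3, 51), 2) == 57.69
```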
In summary, we analyze an exact solution of a DUDM model using the most recent cosmological data of the Hubble parameter and SNIa, that cover the redshift region $0.01<z<2.3$. Although the exact solution under study was proposed as Ansatz in \cite{MCruz:2017, Cruz2017}, we are able to obtain it in a systematic way by following a factorization procedure \cite{cornejo1,rosu1,wang1}.
Due to the inability of the model to drive the accelerated and decelerated phases with the same value of the main free parameter $A_{\pm}$, as shown in Eq. (\ref{facto15}), we build the Hubble parameter of the model by connecting both phases as expressed in Eq. (\ref{eq:Hz}), the free parameters now being $\hat{A}_1$ and $\hat{A}_2$. Then, we perform an analysis using the combined data, OHD$+$SNIa, considering the transition redshift $z_t=0.64$ \cite{Moresco:2016mzx}, to constrain their values. According to Eqs. (\ref{eq:Hz}) and (\ref{eq:q}), the model presents an acceleration (deceleration) phase when $\hat{A}_1>1$ ($\hat{A}_2<1$). In these epochs, we infer a constant value of $q=-0.37^{+0.03}_{-0.03}$ ($0.19^{+0.03}_{-0.03}$), and an effective EoS $\omega_{eff} = -0.58^{+0.02}_{-0.02}$ ($-0.21^{+0.02}_{-0.02}$). It is interesting to see that $\omega_{eff}$ lies in the quintessence region for the accelerated epoch, while the decelerated phase is characterized by a negative effective EoS, even though it is not enough to drive an accelerated expansion of the Universe. We have also found that our solution fits the cosmological data well, and the values of $\xi _{0}$ evaluated from the constrained values of $A_{+}$ and $A_{-}$ always satisfy the condition for a fluid description in both phases, required by the thermodynamic formalism. Nevertheless, the high value of the speed of sound within the fluid is an undesirable feature of the model.
It is important to point out that our solution is obtained assuming a Universe filled with only one fluid with dissipation; therefore, it is clear that it can describe only the late time evolution. An extension of this model to early ages of the Universe requires introducing radiation and evaluating the behavior of the linear perturbations. In the framework of the Eckart theory, the discussion of the linear perturbations has been carried out, for example, in \cite{Barrow:2009}. The results found indicate that viscous dark matter leads to modifications of the large-scale CMB spectrum, weak
lensing and CMB-galaxy cross-correlations, which implies difficulties in order to fit the astronomical data. In the case of a perturbative study in the framework of the causal thermodynamics it was found in \cite{Piattella:2011} that numerical solutions for the gravitational potential seem to disfavour causal theory, whereas the truncated theory leads to results similar to those of the $\Lambda$CDM model for a very small bulk viscous speed.
Let us discuss here a possible way to overcome this difficulty, which is present in this type of models. As we mentioned above, the division into DM and DE is merely conventional, due to the degeneracy between both components, resulting from the fact that gravity only measures the total energy-momentum tensor. In the case of DUDM models, the viscous stress provides the negative pressure which allows accelerated phases, but the near equilibrium condition demanded in the thermodynamic approaches to relativistic viscous fluids implies that the viscous stress must be lower than the equilibrium pressure of the fluid. In general, this condition is not fulfilled, and one possibility is to go further and consider a non-linear generalization of the causal linear thermodynamics of bulk viscosity, where deviations from equilibrium are allowed (see, for example, \cite{CRUZ2017159}). Another possibility is to consider a cosmological scenario with dissipative DM and some other DE component. In
\cite{Cruz_2018}, an introduction of a cosmological constant is considered together with a dissipative DM component. This also allows in some regions to satisfy the near equilibrium condition. Of course, in this scenario UDM models with dissipation are abandoned as consistent models to describe the evolution of the Universe, and, on the other hand, we are assuming the division into DM and DE.
As a conclusion of the above discussion, we can say that the solution found within the full causal Israel-Stewart-Hiscock formalism indicates that an accelerated expansion compatible with the OHD and SNIa data can be obtained with only one dissipative DM component, but at the cost of a large non-adiabatic contribution to the speed of sound within the fluid, which is not compatible with structure formation. Further investigations are required to solve this drawback, including some form of DE along with the dissipative component.
\bigskip
\section*{Acknowledgments}
The authors acknowledge an anonymous referee for important suggestions in order to improve the presentation of the paper. This work was supported by the Universidad de Santiago de Chile, USACH, through Proyecto DICYT N$^{\circ}$ 041831CM (NC), Vicerrector\'ia de Investigaci\'on, Desarrollo e Innovaci\'on. NC acknowledges the hospitality of the Facultad de Ingenier\'ia, Universidad Aut\'onoma de Quer\'etaro, M\'exico, where part of this work was done. OCP would like to thank warm hospitality and financial support during a summer research stay at Department of Physics, USACH, and also thanks PRODEP project, M\'exico, for resources and financial support. AHA thanks SNI Conacyt and Instituto Avanzado de Cosmolog\'ia (IAC) collaborations. The authors thankfully acknowledge computer resources, technical advise and support provided by Laboratorio de Matem\'atica Aplicada y C\'omputo de Alto Rendimiento from CINVESTAV-IPN (ABACUS), Project CONACYT-EDOMEX-2011-C01-165873.
\bibliographystyle{unsrt}
\section{Introduction}
Nanocrystalline materials are of great interest nowadays, because their remarkable properties, which differ from those of their parent bulk counterparts, have found a wide range of technological applications. Among them, particles made of magnetic compounds display a variety of magnetic behaviors that differ substantially from those of their parent massive materials. \cite{Dormann_ACP97,Kodama_jmmm1999,Fiorani_book2005} Their distinct properties are mainly connected to finite size effects related to the reduced number of magnetic ions in the enclosed volume. \cite{Iglesias_prb2001} Additionally, the surface and interface effects related to the symmetry breaking at the physical boundaries of the materials cause spin disorder and frustration, along with the interparticle interactions. \cite{Sabsabi_prb2013} An interesting class of nanoparticles (NP) is found when ferromagnetic (FM) and antiferromagnetic (AF) materials are combined together in a core/shell structure. The coupling at the interface between these two magnetic phases gives rise to the exchange bias (EB) phenomenon, \cite{Nogues_JMMM1999,Nogues_physrep2005,Iglesias_jnn2008,Giri_jpcm2011,Manna_PhysRep2014} which is of fundamental interest and has found multiple technological applications depending on the specific composition and the characteristic size of the respective phases. \cite{Gierlings_PRB2002,Skumryev_Nature2003,Velthuis_jap2000,Radu_prb2003,Das_jac2009,Wu_jpcm2007}
The pinning mechanism that results from EB has been commercially exploited in magnetic field sensors and modern magnetic read heads. \cite{Chappert_NatMater2007,Lopez-Ortega_PhysRep2015} Nevertheless, a clear-cut connection between the observed EB phenomenology and parameters of core/shell NP, such as the size and thickness of the NP or the microscopic interfacial structure, has not been well established yet.
Although pure metal particles would have desirably high values of saturation magnetization, \cite{OHandley_2000} they have a strong natural tendency to form parent oxide phases. \cite{Haneda_Nature1979,Giri_ASC2001} This process can be controlled under proper synthesis conditions to prevent further oxidation, leading to the formation of core/shell structures with the oxide phase (often an AF or ferrimagnetic material \cite{Goodenough_book1963}) usually formed at the outer part of the structures.
In the case of Co NP, most of the published studies \cite{Meiklejohn_prb56} report the formation of the CoO phase at the shell, although in some cases the presence of Co$_3$O$_4$ has also been evidenced by structural \cite{Fontaina_NL2004,Li_SciRep2015} or magnetic characterization. \cite{Simeonidis_prb2011}
The possibility of observing EB in Co based nanostructures in contact with Co$_3$O$_4$ has been much less investigated, and the published studies focus on layered structures. \cite{Wang_ijmpb2005,Wang_ssc2005,You_jap2003,Wang_jac2008,Ahmad_jmmm2015} Synthesis of single phase CoO or Co$_3$O$_4$ NP has been achieved
by several authors, who have reported AF magnetic behavior with ordering temperatures reduced compared to the bulk values \cite{Bisht_scc2011,Ichiyanagi_poly2005,Dutta_jpcm2008} and remanence values much higher than those for the bulk, due to finite-size effects. \cite{Simeonidis_prb2011,Silva_prb2010}
Here, we explore the EB effect in batches of Co/Co$_3$O$_4$ nanoparticles with well-defined average size, giving evidence that Co$_3$O$_4$ is the only oxide phase present in our samples. The sizes of the crystallites forming the core and shell are tuned by controlled oxidation at different temperatures. Hence, we are able to show that varying the shell thickness, while keeping the overall particle size fixed, leads to significant changes in the EB effect, which are strongest for intermediate shell thicknesses. A detailed structural study of the samples allows us to correlate this maximum EB to the maximum interfacial strain due to lattice mismatch and the associated increased anisotropy. By means of atomistic Monte Carlo (MC) simulations, we trace back the origin of the maximum EB effect at intermediate sizes to the changes in the magnetization reversal of interfacial spins as the core/shell size is varied.
\begin{figure}[tbp]
\vskip 0.0 cm
\centering
\includegraphics[width = 0.95\columnwidth]{Fig1.png}
\caption {(a-d) XRD patterns of Co/Co$_3$O$_4$ nanostructures with different core/shell crystallite sizes. Continuous curves show the fit using Rietveld refinement. Difference plot are shown at the bottom of each pattern. Details of the indexed planes corresponding to Co and Co$_3$O$_4$ are given in (b) for the $21:6$ sample. Variation of the (e) crystallite sizes of the two phases and overall size of the particles with the annealing temperatures and (f) lattice constant ($a$) of the two phases with the crystallite size, as obtained from the refinement.}
\label{Fig.1}
\end{figure}
\section{Experimental}
Nanocrystalline Co embedded in an amorphous silica host is prepared with volume fraction $\varphi$=10 \% using a sol-gel route. Initially, Co metal powder (Aldrich, 99.99 \% pure) is dissolved in nitric acid (4.5 M). Citric acid is then added to the solution and homogenized thoroughly during 6 h to obtain a transparent reddish solution. Ethanolic tetraethyl orthosilicate (TEOS) is finally added dropwise to the solution as a source of the silica matrix and mixed vigorously for 12 h to obtain a homogeneous solution. The final reddish solution is dried very slowly in open air to form a gel at room temperature, which is dried at 323 K for 15 days and subsequently decomposed at 873 K for 6 h in a continuous flow of H$_2$/Argon mixture (5\% H$_2$ and 95\% Argon). The as-synthesized Co nanoparticles ($\varphi \sim$ 10 \%) in the silica matrix are processed for controlled oxidation by annealing the sample in the range of 333 - 1023 K for 10 minutes each, in order to grow the desired Co/Co$_3$O$_4$ phase fractions. Henceforth, the nine samples will be referred to as 25:2, 24:3, 21:6, 18:9, 15:12, 10:17, 5:22, 2:25, and 1:26, where the numbers denote the sizes of the Co and Co$_3$O$_4$ phases, respectively, in nm.
Chemical composition is confirmed using powder X-ray diffraction (XRD) studies (Seifert XRD 3000P) with Cu K$\alpha$ radiation and electron diffraction in a JEOL TEM 2010 transmission electron microscope (TEM). High resolution TEM images of the particles are used to assess their size, shape and the crystalline planes of Co and Co$_3$O$_4$. X-ray photoemission spectroscopy (XPS) has been performed in an Omicron Nanotechnology spectrometer. Magnetization is recorded in a commercial SQUID magnetometer from Quantum Design (MPMS, XL). In the zero-field cooled (ZFC) protocol the sample is cooled in zero field and the magnetization is measured in the warming mode under a static magnetic field. In the field-cooled (FC) protocol the sample is cooled and measured in the field.
\begin{figure}[tbp]
\vskip 0.0 cm
\centering
\includegraphics[width = 0.9\columnwidth]{Fig2.png}
\caption {(a) XPS spectrograms of Co 2p$_{3/2}$ (top panel) and O 1s (bottom panel) contributions. (b) TEM image to verify particle size. Inset: particle size histogram as fitted by a log-normal distribution function. (c) High resolution TEM image displaying nearly core shell structure composed of Co and Co$_3$O$_4$, respectively. Different planes corresponding to Co and Co$_3$O$_4$ are depicted. (d) Electron diffraction pattern displaying different planes for 15:12 sample.}
\label{Fig.2}
\end{figure}
\section{Structural Characterization}
\label{Sec_Charact}
The XRD patterns of Co and Co/Co$_3$O$_4$ of selected samples with different phase fractions are depicted in Figs. \ref{Fig.1}(a-d). The continuous curves show the corresponding fits obtained by Rietveld refinement with the MAUD software, using the face-centered $Fm3m$ and $Fd3m$ space groups for Co and Co$_3$O$_4$, respectively. The details of the refinement are shown in Fig. \ref{Fig.1}(b), where the vertical bars at the bottom depict the diffraction peak positions of Co and Co$_3$O$_4$. The goodness of the fit is demonstrated by the difference plots shown below the diffraction patterns for all the compositions, which confirm the absence of impurity phases, such as a minor fraction of CoO. With increasing annealing temperature the Co$_3$O$_4$ phase grows at the expense of Co.
Average crystallite sizes of each component for different annealing temperatures are estimated using the Scherrer formula from broadening of the diffraction peaks, as obtained from the Rietveld refinement. \cite{Cullity_book2004} Individual average crystallite sizes of each component for annealing temperatures in the range $333-1023$ K are depicted in Fig. \ref{Fig.1}(e).
As shown, the average particle size $\sim$ 27 nm is almost constant for the different annealing temperatures.
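As an illustration of the Scherrer estimate (the actual sizes in Fig. \ref{Fig.1}(e) come from the Rietveld refinement, which also accounts for instrumental broadening), with Cu K$\alpha$ radiation ($\lambda = 0.15406$ nm) and a hypothetical peak width:

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)), where beta is the
    peak FWHM in radians; K ~ 0.9 is the usual shape factor."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

# Hypothetical numbers: a 0.5-degree FWHM peak near 2theta = 44 degrees
# gives a crystallite size of roughly 17 nm.
D = scherrer_size(0.5, 44.0)
assert 15.0 < D < 20.0
```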
The lattice constants ($a$) as obtained from the Rietveld refinements are plotted as a function of the individual crystallite sizes in Fig. \ref{Fig.1}(f). The obtained values are consistent with the previously reported results. \cite{Dutta_jpcm2008,Zhuo_jcgd2009}
For most of the samples the lattice constants of Co and Co$_3$O$_4$ deviate from their bulk values of 3.54 and 8.09 \AA, \cite{Nishizawa_bapd1983,Dutta_jpcm2008} being higher and lower than in the bulk, respectively. This reveals significant tensile strains on the Co cores of the particles caused by the formation of the oxide phase at the shell. Interestingly, a maximum value of $a$ for both phases is found for the sample obtained at an annealing temperature of 473 K,
with crystallite sizes $\sim$ 18 and $\sim$ 9 nm for Co and Co$_3$O$_4$, respectively.
\begin{figure}[tbp]
\vskip 0.0 cm
\centering
\includegraphics[width = \columnwidth]{Fig3.png}
\caption {Thermal variations of ZFC and FC magnetization for Co:Co$_3$O$_4$ with (a) 24:3, (b) 15:12, and (c) 10:17. The corresponding insets show thermomagnetic irreversibility ($\Delta M$).}
\label{Fig. 3}
\end{figure}
Analysis of the XPS spectra has been performed in order to investigate the details of the chemical composition of the core and shell phases of the nanoparticles.
The results for the Co 2p$_{3/2}$ and O 1s contributions to XPS are depicted in Fig. \ref{Fig.2}(a) for the sample with 15:12 composition.
The oxidation states of Co atoms are obtained by deconvoluting the spectra for Co 2p$_{3/2}$ and O 1s contributions, as shown at the top and bottom panels of the figure.
The Co$^{2+}$, Co$^{3+}$ and Co contributions to the 2p$_{3/2}$ spectrum can be deconvoluted as shown in the corresponding subspectra (lines) that exhibit maxima with increasing binding energies, respectively.
A similar procedure is done for the O 1s spectrum, that can be deconvoluted into four contributions, exhibiting maxima with increasing energy corresponding to surface O, O-H, O-Co$^{2+}$, and O-Co$^{3+}$, respectively.
It is noted that the ratio of the areas under the deconvoluted curves of Co$^{3+}$ and Co$^{2+}$ in the Co 2p$_{3/2}$ spectrum, as well as that of the O-Co$^{3+}$ and O-Co$^{2+}$ curves in the O 1s spectrum, has nearly the same value of 2:1, as expected for Co$_3$O$_4$.
Furthermore, the analysis provides a ratio of Co:(Co$^{2+}$+Co$^{3+}$) $\approx$ 50.3:49.7, which is close to the phase fraction ratio of Co:Co$_3$O$_4 \approx$ 52:48, as obtained from the Rietveld refinement. This two observations clearly corroborate the absence of any detectable contribution from CoO as also indicated by the XRD analysis mentioned above.
Figure \ref{Fig.2}(b) shows the TEM image of the sample 15:12. As depicted in the inset, there is a distribution of particle sizes that can be fitted using log-normal distribution function with an average size of $\sim$ 24 nm (consistent with that observed from the XRD results) and tails that extend from 5 to 40 nm (continuous curve).
A high resolution TEM image of a particle of the same sample is shown in Fig. \ref{Fig.2}(c), where we have indicated the distinct Co and Co$_3$O$_4$ areas within the particle.
Moreover, we have identified the Co (111) diffraction planes within the darker core region and, outside, the distinctive planes of Co$_3$O$_4$, which could also be observed in the XRD patterns. These representative Co and Co$_3$O$_4$ planes can also be clearly resolved in the electron diffraction pattern shown in Fig. \ref{Fig.2}(d). All in all, the careful structural analysis does not show any convincing signature of the CoO phase, consistent with the XRD results.
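As an aside, a log-normal fit of the kind used for the size histogram above can be sketched in a few lines; the bin centers, counts, and starting values below are synthetic placeholders, not the measured TEM data:

```python
import numpy as np
from scipy.optimize import curve_fit

def log_normal(d, n0, d0, sigma):
    """Log-normal size distribution: n0/d * exp(-ln^2(d/d0) / (2 sigma^2))."""
    return n0 / d * np.exp(-np.log(d / d0) ** 2 / (2.0 * sigma ** 2))

# Hypothetical histogram of particle diameters (nm); placeholder values.
d = np.linspace(5.0, 40.0, 15)             # bin centers
counts = log_normal(d, 300.0, 24.0, 0.25)  # synthetic, noiseless counts

popt, _ = curve_fit(log_normal, d, counts, p0=[100.0, 20.0, 0.3])
n0_fit, d0_fit, sigma_fit = popt
print(f"median diameter d0 = {d0_fit:.1f} nm, width sigma = {sigma_fit:.2f}")
```

With real histogram data one would replace the synthetic `counts` by the binned TEM counts and quote the fitted median and width.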
\begin{figure*}[tbp]
\vskip 0.0 cm
\centering
\includegraphics[width = 12 cm]{Fig4.png}
\caption {Central portion of the field-cooled (dashed curve) and zero field-cooled (continuous curve) magnetic hysteresis loops at 5 K for Co:Co$_3$O$_4$ with crystallite sizes (in nm) (a) 25:2, (b) 24:3, (c) 21:6, (d) 18:9, (e) 15:12, (f) 10:17, (g) 5:22, (h) 2:25, and (i) 1:26. Cooling field for FC protocol is 10 kOe. Complete ZFC hysteresis loops are depicted in the corresponding insets.}
\label{Fig.4}
\end{figure*}
\section{Magnetic characterization}
The thermal variations of the dc magnetization measured under ZFC and FC protocols with an applied magnetic field of 100 Oe are depicted in Figs. \ref{Fig. 3}(a-c) for the samples 24:3, 15:12, and 10:17. The behavior of the three samples is similar, showing irreversibility up to the highest measured temperature of 300 K, which indicates that the nanoparticles have blocking temperatures above this value due to their relatively large size. Note that both curves decrease monotonically below 300 K and do not display any peak characteristic of the Néel temperature of CoO, which should lie in the range of $235-293$ K depending on the particle size. \cite{Chandra_apl2012,Feygenson_prb2010,Simeonidis_prb2011}
The Co$_3$O$_4$ phase orders antiferromagnetically around 40 K. \cite{Roth_jpcs1964} Below $\sim$ 40 K, a weak anomaly marked by an upturn of the magnetization can be observed in both the ZFC and FC curves, which could be ascribed to the onset of antiferromagnetic order \cite{Simeonidis_prb2011} in the crystallites forming the shell. This is consistent with the significant decrease of the Néel temperature of Co$_3$O$_4$ to the range of 26$-$35 K due to finite size effects. \cite{Dutta_jpcm2008,Dreifus_mre2015}
With increasing Co$_3$O$_4$ fraction, both the magnitude of the magnetization and the thermomagnetic irreversibility (defined as $\Delta M=M_{FC}-M_{ZFC}$) decrease as a result of the increasing contribution of shell spins with reduced magnetization, as shown in Fig. \ref{Fig. 3} and the corresponding insets. Similar low temperature responses (not to be confused with the ones reported here) have been reported and are usually ascribed either to residual phases \cite{Simeonidis_prb2011} in Co/CoO nanoparticles or to the onset of spin-glass freezing \cite{Chandra_apl2012,Tracy_prb2005} for other core/shell compositions.
Hysteresis loops have been measured between $\pm 20$ kOe at 5 K for all nine samples, as shown by the full loops in the insets of Figs. \ref{Fig.4}(a-i). For particles with a dominant Co component at the particle core (samples 25:2 and 24:3), the hysteresis loop exhibits the typical shape of a FM material, being reversible well below 20 kOe, and the high field magnetization reaches values close to the saturation of bulk Co ($\sim 162$ emu/g, see Fig. \ref{Fig.5}(d)).\cite{OHandley_2000}
With the increase of the oxide component at the shell, the loop shapes become more elongated, with higher closure fields \cite{Simeonidis_prb2011,Chandra_apl2012} and a high field linear component of increasing weight. This high field susceptibility can be ascribed to the contribution of uncompensated spins in the antiferromagnetic shell and at the core/shell interface,\cite{Tracy_prb2005,Tracy_prb2006,Rinaldi-Montes_jmcc2016} as it dies off at temperatures higher than the $T_N$ of Co$_3$O$_4$. For the most oxidized sample (sample 1:26) a linear field dependence extends over the whole field range, as typically reported for purely antiferromagnetic nanoparticles or bulk materials. \cite{Kovylina_nanot2009,Tracy_prb2005,Dutta_jpcm2008}
In order to probe the effect of the shell thickness on the EB effect, all nine samples are cooled in $H_{cool}=10$ kOe from 300 K down to 5 K and the magnetic hysteresis loops are recorded subsequently. The main panels in Figs. \ref{Fig.4}(a-i) show a zoom of the central portion of the hysteresis loops for the ZFC (continuous lines) and FC (dashed lines) protocols.
Note that the hysteresis loops after FC appear clearly shifted to negative fields with respect to the ZFC ones as a consequence of the EB coupling between the FM core and AF shell spins.
In some cases, a vertical displacement is also observed.
The EB field and the vertical shift are defined as $H_{E}=(H_C^++H_C^-)/2$ and $M_E=[M(20\, {\rm kOe})+M(-20\, {\rm kOe})]/2$, respectively, where $H_C^+$ and $H_C^-$ are the coercive fields at the decreasing and increasing field loop branches. The dependence of these quantities, as well as that of the coercive field $H_C$ and the saturation magnetization $M_S$, on the crystallite size of both phases is presented in Figs. \ref{Fig.5}(a-d).
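For illustration, these loop parameters can be extracted numerically from sampled branch data; a minimal sketch on a synthetic, shifted tanh-shaped loop (the bias and coercivity values are placeholders, not the measured data):

```python
import numpy as np

def coercive_field(H, M):
    """Field at which M crosses zero along one loop branch.
    np.interp needs the abscissa (here M) in increasing order."""
    order = np.argsort(M)
    return np.interp(0.0, M[order], H[order])

# Synthetic loop: bias field -0.5 kOe, half-width 1.0 kOe (placeholders).
H = np.linspace(-20.0, 20.0, 2001)
M_up = np.tanh(H + 0.5 - 1.0)    # increasing-field branch
M_down = np.tanh(H + 0.5 + 1.0)  # decreasing-field branch

Hc_up = coercive_field(H, M_up)
Hc_down = coercive_field(H, M_down)
H_E = (Hc_up + Hc_down) / 2.0           # horizontal loop shift
H_C = abs(Hc_up - Hc_down) / 2.0        # half the loop width
M_E = (M_up[-1] + M_down[0]) / 2.0      # vertical shift from +/-20 kOe values
```

On the synthetic branches this recovers the built-in shift of $-0.5$ kOe and coercivity of $1.0$ kOe; with real data the arrays would hold the measured branch points.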
\begin{figure}[tbp]
\vskip 0.0 cm
\centering
\includegraphics[width =0.95\columnwidth]{Fig5.png}
\caption {Variation of (a) $H_E$, (b) $H_C$, (c) $M_E$, and (d) $M_S$ with crystallite sizes of Co ($d_{Co}$) and Co$_3$O$_4$ ($d_{Co_3O_4}$).}
\label{Fig.5}
\end{figure}
The most remarkable feature is the nonmonotonic behavior of $H_E$ and $H_C$ presented in Fig. \ref{Fig.5}, which has been reported previously for Co/CoO nanoparticles. \cite{Gangopadhyay_jap1993,Iglesias_jnn2008,Kovylina_nanot2009,Feygenson_prb2010} It is clear that EB has to tend towards zero when either the oxide shell becomes thinner or the Co core diminishes. However, an argument based on finite size effects would indicate that $H_E$ should increase with increasing core diameter, as a result of the increase of the interface surface. On the other hand, a decrease in the shell thickness would produce the contrary effect. Therefore, in order to clarify the origin of the nonmonotonic behavior, results of MC simulations will be presented in Sec. \ref{Sec_Sim}.
However, it should be noted that the maximum of $H_E$ and $M_E$ is found for the sample with 18:9 composition, which is the one showing the highest lattice mismatch. Therefore, lattice strain at the core/shell interface for intermediate shell thicknesses might be playing a role by inducing a higher net magnetic moment at the interface. This result is in agreement with what has been found at the interface of Au-Fe$_3$O$_4$ composites \cite{Chandra_Nano2014,Feygenson_prb2015} and Co/Co$_3$O$_4$ nanooctahedra, \cite{Li_SciRep2015} suggesting that lattice mismatch correlates with the magnetic anisotropy.
\begin{figure}[tbp]
\vskip 0.0 cm
\centering
\includegraphics[width = 0.95\columnwidth]{Fig6.png}
\caption {Cooling field ($H_{cool}$) dependence of (a) $H_E$ and (c) $H_C$ and temperature variation of (b) $H_E$ and (d) $H_C$ for three selected samples as indicated in the legend.}
\label{Fig.6}
\end{figure}
This hypothesis is further sustained by the behavior of $H_C$ shown in Fig. \ref{Fig.5}(b), which presents a maximum for sample 18:9, and by the results of the MC simulations presented in Sec. \ref{Sec_Sim}. Finally, we notice that the vertical loop shifts indicated in Fig. \ref{Fig.5}(c) are concomitant with the observation of horizontal shifts and indicate the existence of a fraction of spins that remain pinned during the field reversal (see also the simulation results in Fig.~\ref{Fig.7} below). Therefore, the coincidence of the maxima of both quantities points to a relation between the increased interfacial anisotropy and the stress commented on above.
We have also measured the thermal variation of FC hysteresis loops by cooling the sample in a 10 kOe cooling field from 300 K down to the various temperatures below 250 K and the $H_{cool}$ dependence by cooling the samples from 300 K down to 5 K in different cooling fields up to 50 kOe. The extracted $H_E$ and $H_C$ values and their variation with the mentioned parameters are presented in Fig. \ref{Fig.6} for 24:3, 15:12, and 10:17 samples.
The values of $H_E$ and $H_C$ increase rapidly with $H_{cool}$ initially and saturate for $H_{cool} >$ 10 kOe for the three selected samples. Therefore, we can exclude that the loop shifts observed in our samples are due to minor loop effects. No maximum in these quantities has been detected in our samples, in contrast with what is observed in some studies of single phase oxide nanoparticles, \cite{DelBianco_prb2004,Fiorani_jpcm2007,Vasilakaki_prb2009} where it was related to the glassy magnetic nature of surface spins.
The results displayed in Figs. \ref{Fig.6}(b,d) show that both $H_E$ and $H_C$ decrease with increasing temperature although following different tendencies depending on the sample. Remarkably, although the Néel temperature of Co$_3$O$_4$ is below $40$ K, the loop shifts persist up to $\sim$ 250 K or above, demonstrating the robustness of the exchange coupling between core and shell and the persistence of EB effects up to almost room temperature.
\begin{figure}[tbp]
\vskip 0.0 cm
\centering
\includegraphics[width = \columnwidth]{Res_Only_FC_Total_ShInt_Hor.png}
\caption {Simulated hysteresis loops for individual particles with the same dimensions as the experimental samples, with shell thickness increasing from the outermost to the innermost loop. Panel (a) shows the normalized magnetization of the whole particle. Panel (b) shows the contribution of the interfacial spins at the shell.}
\label{Fig.7}
\end{figure}
\section{Simulation results}
\label{Sec_Sim}
To support the experimentally observed variation of $H_E$ and $H_C$ with particle morphology, we have conducted atomistic MC simulations of a core/shell nanoparticle model \cite{Cabot_prb2009,Simeonidis_prb2011} based on the following Hamiltonian
\begin{equation}
{\cal H}=-\sum_{< i,j>} J_{ij}\vec{S_i} \cdot \vec{S_j}
-\sum_{i} K_i \left(\vec{S_i} \cdot \hat{n_i}\right)^2
-\sum_{i}\vec{S_i} \cdot \vec{h} \ ,
\label{Eq1}
\end{equation}
where $\vec{S}_i$ are Heisenberg classical unit vectors representing the magnetic ions and, in the last term, the magnitude of the magnetic field $\vec{H}$ is given in reduced units $h=\mu_S H$, with $\mu_S$ the atomic spin moment. Note also that all parameters used in the simulations will be given in temperature units, scaling them by the Boltzmann constant $k_B$ and that the simulation temperature includes a factor $1/S^2$.
The real values of the exchange and anisotropy constants for Co and Co$_3$O$_4$ have been used, \cite{Balcells_apl09} namely $J_{C}= 92.75$ K, $J_{Sh}= 0.23 J_C$, $K_{Sh}= 40.1 J_C$, and $K_{C}= 0.022 J_C$. The interface exchange coupling has been set to $J_{int}=-J_{C}$. The total radius of the simulated particles (containing $212095$ spins) has been taken as the mean radius of the real samples, $R=38 a$ (where $a$ is the lattice constant), and nine shell thicknesses $t_{sh}/a=2.4, 4.4, 7.7, 12.3, 17.5, 23.8, 30.8, 34.6, 36.5$ have been considered, as studied experimentally. In order to mimic the observed presence of crystallites in the shell of the real nanoparticles (see Sec. \ref{Sec_Charact}), we have divided the shell into regions with different random anisotropy directions, similar to what was done in the literature. \cite{Cabot_prb2009,Simeonidis_prb2011} This turns out to be crucial to reproduce the experimental phenomenology.
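A heavily simplified single-spin-flip Metropolis sketch for a Hamiltonian of this form may clarify the update rule; the four-spin ring, neighbor table, anisotropy and field values below are illustrative placeholders, not the full core/shell particle geometry described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(S, neighbors, J, K, n_axis, h, T):
    """One sweep for H = -J sum S_i.S_j - K sum (S_i.n)^2 - sum S_i.h."""
    for i in rng.permutation(len(S)):
        trial = rng.normal(size=3)
        trial /= np.linalg.norm(trial)          # random new unit spin
        local = S[neighbors[i]].sum(axis=0)     # sum over neighbor spins
        dS = trial - S[i]
        dE = (-J * dS @ local
              - K * ((trial @ n_axis) ** 2 - (S[i] @ n_axis) ** 2)
              - dS @ h)
        if dE <= 0.0 or rng.random() < np.exp(-dE / T):
            S[i] = trial
    return S

# Placeholder geometry: 4 FM-coupled spins on a ring, field along z.
S = np.tile([0.0, 0.0, 1.0], (4, 1))
neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]
n_axis = np.array([0.0, 0.0, 1.0])
h = np.array([0.0, 0.0, 5.0])
for _ in range(200):
    S = metropolis_sweep(S, neighbors, J=92.75, K=2.0, n_axis=n_axis, h=h, T=1.0)
mz = S[:, 2].mean()  # strong FM coupling + field keep spins nearly aligned
```

The actual simulations sweep $\sim 2\times 10^5$ spins with distinct core, shell, and interface couplings and a field-cooling protocol; this fragment only illustrates the acceptance step.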
In Fig. \ref{Fig.7}(a) we present the simulated hysteresis loops as obtained after cooling down to 0.1 K under an applied magnetic field, $h_{FC}=10$ K, for different shell thicknesses. The loops display shifts contrary to the cooling field direction that vary in a nonmonotonic way with $t_{sh}$. With increasing $t_{sh}$, the fraction of AF spins increases. As a result, the remanent magnetization decreases and the loops become more similar to those of an AF material. The area of the loops decreases, with an extended region having a linear field dependence, in qualitative agreement with the experimentally observed behavior.
\begin{figure}[tbp]
\vskip 0.0 cm
\centering
\includegraphics[width = \columnwidth]{Drawing_Vert.png}
\caption {Snapshots of a slice of a nanoparticle with $t_{sh}/a=17.5$ (sample with 15:12 composition) showing the interfacial spin magnetic configurations at different points of the hysteresis loop (from left to right): (a) positive remanence point, (b) negative coercive field, (c) negative remanence point, and (d) positive coercive field. Cone colors vary depending on their component along the field direction, from red (along the field direction) to blue (contrary to the field direction), following the visible light spectrum.}
\label{Fig.8}
\end{figure}
The variation of the EB field can be more easily traced back to the magnetization reversal behavior of the interfacial spins at the shell, \cite{Iglesias_prb2005} whose contribution to the hysteresis loop is shown in Fig. \ref{Fig.7}(b). As can be clearly seen, the interfacial hysteresis loops present a clear asymmetry between the decreasing and increasing field branches, and some of them display characteristic apexes near the coercive field points.\cite{Iglesias_prb2005,Wu_jpcm2007} Although the interface magnetization remains fairly constant except near the coercive fields, the interfacial magnetization on the decreasing field branch is not equal (in absolute value) to that on the increasing field branch. This indicates that a considerable fraction of interfacial spins remains pinned during the field reversal.
\begin{figure}[tbp]
\vskip 0.0 cm
\centering
\includegraphics[width = \columnwidth]{Res_Heb_Hc_Hor.png}
\includegraphics[width = \columnwidth]{Res_Heb_Hc_Saurav5_Temp_Hor.png}
\caption {Upper panels show the variation of (a) the shift $h_{E}$ and (b) coercive field $h_C$ of the simulated hysteresis loops with the diameter of the particle core. Lower panels show the thermal dependence of (c) $h_{E}$ and (d) $h_C$ for a particle with shell size $t_{shell}= 17a$.}
\label{Fig.9}
\end{figure}
This can be directly checked by looking at snapshots of the interfacial spin configurations taken at different points of the hysteresis loops, as displayed in Fig. \ref{Fig.8} for a particle with $t_{sh}/a=17.5$. Comparing the magnetic configurations at the remanent [panels (a) and (c)] and coercive field points [panels (b) and (d)], we can see that the interfacial surface spins remain mostly oriented along the direction induced by the magnetic field applied during the initial cooling, as indicated by the absence of variation in the colors of the outer shell of spins. On the contrary, interfacial spins in contact with the core are dragged during the quasiuniform core reversal, as can be appreciated by the change in color (reddish to blueish and vice versa) and orientation when going from the remanent to the coercive field points.
In order to compare with the experimental results of Figs. \ref{Fig.5} and \ref{Fig.6}, we have calculated the coercive field and the horizontal loop shift as $h_C=(h_c^+-h_c^-)/2$ and $h_{E}=(h_c^++h_c^-)/2$ from the hysteresis loops of Fig. \ref{Fig.7}. Their dependence on the core diameter is given in Fig. \ref{Fig.9}. Initially, both quantities increase with increasing particle core diameter, starting from a fully AF particle, as a consequence of the increase of the interfacial surface, as is also observed experimentally. However, this tendency is broken as the core size increases further, and a maximum in $h_{E}$ is observed for a core diameter of $20$ nm (shell thickness of $6$ nm). Below this value, the EB field progressively decreases as the shell thickness is reduced. This nonmonotonic trend is in agreement with that observed experimentally.
The observed behavior of $h_{E}$ correlates with the changes in the contribution of the interfacial surface spins to the hysteresis loops displayed in Fig. \ref{Fig.7}(b), where it can be seen that the change in the fraction of pinned spins decreases for the particles with thinner shells (black, red and green curves) as compared to the one giving maximum EB (blue curve). A similar trend is observed for the $h_C$ curve, which can be understood by noticing that the coercive field, in contrast to $h_{E}$, is directly related to the reversal of the core spins. This can be observed in Fig. \ref{Fig.10}, where a snapshot of the local changes in spin orientations between the positive and negative coercive fields is depicted. The reversal of the core drags some of the inner spins at the interface, which explains the increase of $h_C$ with $D_{core}$. In contrast, spins at the outer part of the interface remain pinned, thus contributing to the loop shift. Finally, let us also remark that the results of the hysteresis loops at finite temperatures for the sample with $t_{shell} = 17.5a$, shown in Figs. \ref{Fig.9}(c,d), indicate a thermal dependence of $h_{E}$ and $h_C$ in agreement with the monotonic decrease also observed experimentally.
\begin{figure}[tbp]
\vskip 0.0 cm
\centering
\includegraphics[width = 0.6\columnwidth]{Saurav5_Diff_21-41_newc.png}
\caption {Snapshot showing a cut of the interfacial spin positions, represented by spheres. The sphere coloring varies with the normalized magnitude of the difference between the local spin orientations at the two coercive field points; lighter colors indicate larger differences.}
\label{Fig.10}
\end{figure}
\section{Conclusions}
We report the synthesis of Co based NPs in a silica matrix. The controlled oxidation leads to a Co core with a Co$_3$O$_4$ shell structure. The core-shell size ratio is varied keeping the overall size of the particles fixed, and negligible interparticle interaction is maintained through the dispersion of the Co/Co$_3$O$_4$ NPs in the silica matrix at a 10 \% volume fraction. The absence of trace amounts of other oxide phases, such as CoO, is confirmed from the XRD, TEM, and XPS analysis. Although the parent oxide has a much lower ordering temperature, we have reported the existence of EB effects that persist up to almost room temperature. The maximum EB is observed for the samples with intermediate shell thickness, which is accompanied by the maximum value of the interfacial strain. The experimental results have been complemented with simulations based on an atomistic spin model of individual NPs with realistic sizes. The simulations reproduce qualitatively the observed EB phenomenology and suggest that an interfacial pinning mechanism governs the EB effect and its dependence on the specific material parameters as well as on the geometry of the NPs.
\begin{acknowledgements}
S.G. wishes to thank DST, India (Project No. SB/S2/CMP-029/2014) for the financial support. The SQUID magnetometer of Quantum Design and the TEM are used in this study under the DST project, Unit of Nanoscience at the Indian Association for the Cultivation of Science, Jadavpur, India. O. I. acknowledges financial support from the Spanish MINECO (MAT2012-33037, MAT2015-68772-P (MINECO/FEDER)), Catalan DURSI (2014SGR220) and European Union FEDER Funds (Una manera de hacer Europa), and also CSUC for supercomputer facilities.
\end{acknowledgements}
\section{Introduction}
Quantization of the angular momentum is an important concept of contemporary physics.
In the framework of quantum mechanics, the derivation of the quantization of the angular momentum is based on one of the following statements:
\begin{enumerate}
\item Quantization of the eigenvalues follows from the requirement that the eigenfunctions of the operator of angular momentum must be regular, i.e. normalizable \cite{shiff}.
\item Quantization follows from the commutation relations of the operators of physical quantities \cite{messiah}, \cite{wein}.
\item Quantization follows from the requirement that the eigenfunction of the third component of the operator of the angular momentum must be a single valued periodic function with the period $2\pi$ \cite{landau}, \cite{blokh}.
\end{enumerate}
All three derivations lead to the same result, namely that the spectrum of the angular momentum consists of only integer numbers (in units of the Planck constant $\hbar$; throughout, $\hbar=1$).
In this article we revisit the derivations based on the first two statements and show that they rely on mathematically incorrect and not self-consistent considerations. As a result, solutions with a non-integer spectrum become admissible on an equal footing with solutions with the integer spectrum.
Consequently, the statement that in the framework of quantum mechanics the
eigenvalues of the square of the angular momentum and of its third component are comprised of only integer numbers cannot be considered a strictly proven theoretical result.
The derivation based on the third statement will be analyzed in a subsequent publication, where we will obtain the same result as in the present article, namely that the spectrum of the angular momentum may be comprised of both integers and non-integers.
For the clarity and comprehensibility of the arguments used in these two schemes of derivation, it is convenient to give explicit analytic expressions for the eigenfunctions of the angular momentum operators. For this reason we start with a discussion of the details
of solving the eigenvalue/eigenfunction equations for the angular momentum.
The article is organized as follows: In section \ref{secII} we discuss properties of the eigenfunctions of the operator of the square of the angular momentum. In section \ref{secIII} we analyze the mathematical arguments based on which, when solving for the eigenfunctions of the square of the angular momentum,
it is argued that the spectrum consists only of integers. We point out an inaccuracy in the use of these arguments. In section \ref{secIV} we discuss the commutation relations of the
angular momentum operators from which the spectrum of integer eigenvalues of the square of the angular momentum is obtained. Using the results of section \ref{secII}, we indicate the
mathematical fallacy which leads to only the integer spectrum.
The main result of this article is that the solution of the eigenvalue problem of the orbital angular momentum contains physically admissible regular, i.e. normalizable, eigenfunctions with both integer and non-integer eigenvalues of the operator of angular momentum.
Our conclusions are summarised in section \ref{secVI}.
\section{Regular and singular eigenfunctions of the operator of the square of the angular momentum}
\label{secII}
The eigenvalue equation for the square of the angular momentum takes its simplest form in spherical coordinates and reads:
\begin{eqnarray}
\hat M^2 \psi (\theta,\phi) &=& \left[ {1\over \sin\theta} \,\frac{\partial}{\partial\theta} \left( \sin\theta \,\frac{\partial}{\partial\theta}\right)
+ {1\over \sin^{2}\theta}\,\frac{\partial^2}{\partial\phi^2} +\lambda \right]
\psi(\theta,\phi)=0
\label{eq1}
\end{eqnarray}
where $\theta,\;\phi$ are spherical coordinates, $0\leq \theta \leq \pi, \ \ 0\leq \phi < 2\pi$, $\hat M^2 $ is the operator of the square of the angular momentum, $\lambda$ is its eigenvalue which we write as $\lambda=L(L+1)$, and without loss of generality we assume that $L\geq 0$.
Due to the commutativity of the operators of the square of the angular momentum and its third component $M_z=-i\partial/\partial \phi$
solutions to Eq.~(\ref{eq1}) can be written as a product $\psi(\theta;\phi)=\Psi_{Mm}(\theta;\phi |L;m)=\Psi_M(\xi | L;m)\Psi_m(\phi)$, the factors of which satisfy the following equations:
\begin{eqnarray}
&& \hat M_z \Psi_m (\phi) = m \Psi_m (\phi) , \\
&& (1-\xi^2){d^2 \Psi_M\over d \xi^2}-2 \xi \,{d \Psi_M\over d \xi} -\left( \frac{m^2}{1-\xi^2} - \lambda\right )\,\Psi_M=0, \label{eq1.4}
\end{eqnarray}
where $\xi\equiv\cos\theta,\;-1\leq\xi\leq 1$ and $m$ is the value of the third component of the angular momentum.
The set of eigenfunctions $\Psi_M = \Psi_M(\xi | L;m)$ consists of subsets of regular and singular functions, the regular ones being those that
have no singularities within the domain of $\xi$. Our aim is to identify these subsets.
It is convenient to present the solution in the form $\Psi_M(\xi)=(1-\xi^2)^\beta F(\xi)$. Substituting into Eq.~(\ref{eq1.4}), setting $\beta^2=m^2/4$ and $\xi^2\equiv z$, we bring Eq.~(\ref{eq1.4})
to the standard form of the Gauss hypergeometric equation
(see, e.g., Eq.~15.5.1 in Ref.~\cite{Abramowitz})
\begin{eqnarray}
&& z (1- z) {d^2 F\over dz^2} + [c-z( a+b+1)] {dF\over dz}- a b F \nonumber\\
&& \ \ \ = z (1- z) {d^2F\over dz^2}+ [1/2-z( 3/2+2 \beta)] {dF\over dz}+[\lambda/4-\beta/2-\beta^2] F = 0 ,\label{gia}
\end{eqnarray}
where
\begin{eqnarray}
&& \ \ \ a =[1/2+2\beta+(1/4+\lambda)^{1/2}]/2,\ \ b =[1/2+2\beta - (1/4+\lambda)^{1/2}]/2; \ \ c = 1/2.
\label{eq6}
\end{eqnarray}
This equation has two linearly independent solutions (see Eqs.~15.5.3-4 of Ref.~\cite{Abramowitz}):
\begin{eqnarray}
&& {}_2 F_1 (a,b;1/2;z) = {}_2F_1(1/2+\beta+L/2,\beta-L/2;1/2;\xi^2), \\
&& z^{1/2}\, {}_2 F_1 (a+1/2,b+1/2;3/2;z) =\xi \; {}_2F_1(1+\beta+L/2,1/2+\beta-L/2;3/2;\xi^2) ,
\label{eq712}
\end{eqnarray}
where ${}_2F_1(a,b;c;\xi^2)$ is the Gauss hypergeometric function \cite{Abramowitz}.
For the three possible values, $2\beta =\sqrt{m^2}=\{ |m|; +m; -m\}$, three different expressions are obtained for $\Psi_M(\xi | L;\beta)$. On the other hand, as the original equation (\ref{eq1.4}) depends on $m$
only quadratically, all three parameterizations of $\beta$ must lead to the same result. To demonstrate this invariance, let us give explicit expressions for $\Psi_M(\xi|L;\beta)$, the two linearly independent solutions of Eq.~(\ref{eq1.4}):
\begin{eqnarray}
\Psi^0_M(L;\beta)&=& (1-\xi^2)^{\beta} { }_2 F_1 \left( 1/2+\beta+L/2, \beta-L/2; \frac{1}{2};\xi^2\right), \label{first}\\
\Psi^1_M(L;\beta)&=& \xi (1-\xi^2)^{\beta} { }_2 F_1 \left( 1+\beta+L/2, 1/2+\beta-L/2; \frac{3}{2};\xi^2\right).
\label{eq8}
\end{eqnarray}
$\Psi^0_M(\xi)$ is an even function of $\xi$ and $\Psi^1_M(\xi)$ is an odd function of $\xi$. As mentioned above, both functions must be invariant under the change of the sign of $m$. For $2\beta=|m|$ the invariance is explicit.
For $2\beta=\{ m;-m\}$
the invariance is not obvious, but it can be verified by using the following relation (see Eq.~15.3.3 of Ref.~\cite{Abramowitz}):
\begin{equation}
{}_2F_1(a,b;c;z)=(1-z)^{c-a-b}{}_2F_1(c-a,c-b;c;z)
\nonumber
\end{equation}
Using this relation it is straightforward to show that both $\Psi^0_M$ and $\Psi^1_M$ are even functions of $\beta$:
\begin{eqnarray}
\Psi^0_M(L;\beta)=\Psi^0_M(L; - \beta), \quad \Psi^1_M(L;\beta)= \Psi^1_M(L; -\beta).
\label{eq9}
\end{eqnarray}
Thus, if some result is obtained in any one parameterization, then the same result can be obtained in any other parameterization. These parameterizations lead to different
degrees of complication in the calculations; therefore we should use the most convenient form for the representation of the corresponding functions.
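The stated $\beta \to -\beta$ invariance, which follows from Euler's transformation of ${}_2F_1$, can be spot-checked numerically; a small sketch with arbitrarily chosen non-integer parameters (illustrative only):

```python
from scipy.special import hyp2f1

def psi0(L, beta, xi):
    """Even solution (1-xi^2)^beta 2F1(1/2+beta+L/2, beta-L/2; 1/2; xi^2)."""
    return (1.0 - xi ** 2) ** beta * hyp2f1(
        0.5 + beta + L / 2, beta - L / 2, 0.5, xi ** 2)

# Arbitrary non-integer test values.
L, m, xi = 2.3, 0.7, 0.41
lhs = psi0(L, +m / 2, xi)  # parameterization 2*beta = +m
rhs = psi0(L, -m / 2, xi)  # parameterization 2*beta = -m
print(lhs, rhs)            # the two parameterizations agree
```

The agreement holds at every interior $\xi$, reflecting the identity ${}_2F_1(a,b;c;z)=(1-z)^{c-a-b}{}_2F_1(c-a,c-b;c;z)$ with $c-a-b=-2\beta$ here.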
To single out the subset of normalisable functions, let us study the singularities of the functions $\Psi_M(L;\beta)$. These functions can be singular only for $\xi^2=1$.
For example, for $\beta\geq 0$ the factor $(1-\xi^2)^\beta$ in $\Psi_M(L;\beta)$ is regular and the hypergeometric functions may have singularities of the order $(1-\xi^2)^{-\beta-\varepsilon}$, $\varepsilon > 0$, leading to singular solutions $\sim (1-\xi^2)^{-\varepsilon}$.
However, if the parameters of the hypergeometric function ${}_2F_1(a,b;c;z)$ satisfy the condition $a=-k$ or $b=-k$, where $k$ is a non-negative integer, then the hypergeometric function turns into a $k$-th order polynomial
of $z$ \cite{Abramowitz}. Correspondingly, in this case the hypergeometric functions have no singularities. This condition of truncating the hypergeometric series, i.e. of reducing the hypergeometric functions to polynomials, can be used to single out the subset of normalisable functions from the set of solutions of Eq.~(\ref{eq1.4}).
As an example, let us identify the regular functions for the solutions $\Psi_M(L;\beta)=\Psi^0_M(L;m/2)$, i.e. the parameterization $2\beta=m$.
We have two independent conditions for terminating the infinite hypergeometric series (\ref{first}), thus reducing it to polynomials:
\begin{eqnarray}
&& a=\frac{1}{2}+\frac{m}{2}+\frac{L}{2}=-k\;\rightarrow \; m=-L-1-2k,\nonumber\\
&& \Psi^0_M(L;m)|_{m=-L-1-2k}= (1-\xi^2)^{-\frac{(L+1)}{2}-k} \ { }_2 F_1 \left( - \frac{1}{2}-L-k, -k; \frac{1}{2};\xi^2\right), \label{firstt}
\end{eqnarray}
and
\begin{eqnarray}
&& b=\frac{m}{2}-\frac{L}{2} =-k\;\rightarrow\; m=L-2k,\nonumber \\
&& \Psi^0_M(L;m)|_{m=L-2k}= (1-\xi^2)^{\frac{L}{2}-k} \ { }_2 F_1 \left( -k,\frac{1}{2}+L-k; \frac{1}{2};\xi^2\right).
\label{eq10}
\end{eqnarray}
The function obtained from the first condition (\ref{firstt}) is singular for any non-negative integer $k$, because the exponent of $(1-\xi^2)^{-(L+1)/2-k}$ is negative while the second factor, the hypergeometric polynomial, is a regular function of $\xi$.
The second condition (\ref{eq10}) leads to singular as well as regular subsets of functions. In particular, for $L/2-k\geq 0$, i.e. for $k\leq[L/2]$, where $[L/2]$ is the integer part of $L/2$ (recall that $k$ is an integer), under the condition that $0\leq L/2-[L/2]< 1$, both factors in Eq.~(\ref{eq10}) are regular.
For $L/2-k<0$ the factor $(1-\xi^2)^{L/2-k}$ is singular and hence $\Psi^0_M(L;m)|_{m=L-2k}$ is also singular.
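This regular/singular classification can be checked numerically for a non-integer $L$; a sketch with an arbitrarily chosen value (illustrative only), evaluating the truncated solution (\ref{eq10}) close to the potentially singular point $\xi=1$:

```python
from scipy.special import hyp2f1

def psi0_m(L, k, xi):
    """Solution (eq10) for m = L-2k: (1-xi^2)^{L/2-k} 2F1(-k, 1/2+L-k; 1/2; xi^2)."""
    return (1.0 - xi ** 2) ** (L / 2 - k) * hyp2f1(-k, 0.5 + L - k, 0.5, xi ** 2)

L = 2.5            # non-integer eigenvalue label (illustrative)
xi = 0.999999      # approach xi = 1

regular = psi0_m(L, k=1, xi=xi)   # L/2 - k = +0.25 >= 0 -> stays finite
singular = psi0_m(L, k=2, xi=xi)  # L/2 - k = -0.75 < 0  -> diverges
```

The $k=1$ case remains bounded at the endpoint while the $k=2$ case grows without limit, in line with the sign of the exponent $L/2-k$.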
We obtain that the eigenfunction, corresponding to the spectrum $m=-L-1-2k$, is singular:
\begin{equation}
\Psi^0_M(L;m)|_{m=-L-1-2k}= \Psi^{0,S}_M(L;-L-1-2k),
\label{eq11}
\end{equation}
and the set of eigenvalues $m=L-2k$ factorizes in two subsets:
\begin{eqnarray}
&& m=L-2k= m|^L_{(L-[L])} {\rm U} \, m|_{-\infty} ^{(L-[L]-2)},\nonumber\\
&& \Psi^0_M(L;m)|_{m=L-2k\geq 0}= \Psi^{0,R}_M\left(L;m|^L_{(L-[L])} \right),\nonumber\\
&& \Psi^0_M(L;m)|_{m=L-2k < 0}= \Psi^{0,S}_M(L;m|_{-\infty} ^{(L-[L]-2)}).
\label{eq12}
\end{eqnarray}
Here $A|^{A_{max}}_{A_{min}}\,{\rm U}\,B|^{B_{max}}_{B_{min}}$ stands for the union of the sets $A$ and $B$; in the notation for the functions, the index $S$ indicates the singular character of the corresponding functions and the index $R$ their regular character.
For the parameterization $2\beta=-m$ the conditions for $\Psi^0_M$ being regular mirror those of Eq.~(\ref{eq12}). Instead of (\ref{firstt}) and (\ref{eq10}) we now have:
\begin{eqnarray}
&& a=\frac{1}{2}-\frac{m}{2}+\frac{L}{2}=-k \;\rightarrow\; m=L+1+2k,\nonumber\\
&& \Psi^0_M(L; -m)|_{m=L+1+ 2k} = (1-\xi^2)^{-\frac{(L+1)}{2}-k} \ { }_2 F_1 \left( - \frac{1}{2}-L-k, -k; \frac{1}{2};\xi^2\right), \label{firsttt}
\end{eqnarray}
and
\begin{eqnarray}
&& b=-\frac{m}{2}-\frac{L}{2} =-k\;\rightarrow\; m=-L+2k,\nonumber \\
&& \Psi^0_M(L;-m)|_{m=-L+2k}= (1-\xi^2)^{\frac{L}{2}-k} \ { }_2 F_1 \left( -k,\frac{1}{2}+L-k; \frac{1}{2};\xi^2\right),
\label{eq13}
\end{eqnarray}
That is, although the functions coincide with those of Eqs.~(\ref{firstt}) and (\ref{eq10}), respectively, the spectrum of $m$, determined by the condition of obtaining polynomials, mirrors the spectrum (\ref{eq12}):
\begin{eqnarray}
&& m=L+1+2k; \ \ \Psi^0_M(L;m)|_{m=L+1+2k}= \Psi^{0,S}_M(L;L+1+2k),\nonumber\\
&& m=-L+2k= m|_{-L}^{(-L+[L])} \, {\rm U} \, m|^{\infty}_{(-L+[L]+2)},\nonumber\\
&& \Psi^0_M(L;-m)|_{m=-L+2k\leq 0}= \Psi^{0,R}_M\left(L;-m|_{-L}^{(-L+[L])} \right),\nonumber\\
&& \Psi^0_M(L;-m)|_{m=-L+2k > 0}= \Psi^{0,S}_M\left(L;-m |^{\infty}_{(-L+[L]+2)}\right).
\label{eq1415}
\end{eqnarray}
We conclude that in the set of eigenfunctions $\Psi^0_M(L;\pm m)$ the subset of regular functions is given by the following spectrum of $m$:
\begin{eqnarray}
&& m^{(R)}= m|_{-L}^{(-L+[L])} \, {\rm U} \, m|^{L}_{(L-[L])},\nonumber\\
&& m|_{-L}^{(-L+[L])} = \{ -L;-L+2; \cdots;-L+[L]\}; \ \ m|^{L}_{(L-[L])} = \{ L;L-2; \cdots;L-[L]\},\nonumber\\
&& m^{(R)}= \{ -L;-L+2; \cdots;-L+[L]; L-[L]; \cdots; L-2;L\}.
\label{eq16}
\end{eqnarray}
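As a cross-check of the regular spectrum just derived, $m^{(R)}$ can be generated directly from the two conditions $m=\pm(L-2k)$ with $L/2-k\geq 0$. A minimal sketch (our own; the function name and output format are assumptions):

```python
def regular_spectrum_psi0(L):
    """m^(R) for Psi^0_M: the values +-(L - 2k) with L/2 - k >= 0, merged and sorted."""
    ms = []
    k = 0
    while L / 2 - k >= 0:      # polynomial condition with a regular prefactor
        ms.append(L - 2 * k)
        k += 1
    return sorted(set(ms + [-m for m in ms]))

print(regular_spectrum_psi0(2))    # [-2, 0, 2]
print(regular_spectrum_psi0(2.2))  # four values, approximately [-2.2, -0.2, 0.2, 2.2]
```

Note that for integer $L$ the two mirror halves overlap (at $m=0$ for even $L$), while for non-integer $L$ they stay disjoint.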
The rest of the spectrum of $m$, which consists of the values
\begin{eqnarray}
&& m^{(S1)} = \{ -\infty;\cdots; -L-3;-L-1\}; \ \ m^{(S2)} = \{ -\infty; \cdots; L-2[L/2]-4;L-2[L/2]-2 \},\nonumber\\
&& m^{(S3)}= \{ L+1;L+3; \cdots;\infty\}; \ \ m^{(S4)}= \{ -L+[L]+2;-L+[L]+4; \cdots; \infty\},
\label{eq17}
\end{eqnarray}
corresponds to the subset of singular eigenfunctions in the set of eigenfunctions $\Psi^0_M(L;\pm m)$.
The same procedure is applied to the second linearly independent function $\Psi^1_M$, for which we simply
state the result. The regular functions and the corresponding spectrum have the form:
\begin{eqnarray}
&& m^{(R)}= m|_{-L+1}^{(-L+1+[L-1])} \, {\rm U} \, m|^{L-1}_{(L-1-[L-1])} \nonumber\\
&& = \{ -L+1;-L+3; \cdots;-L+1+[L-1]; L-1-[L-1]; \cdots;L-3; L-1\},\nonumber\\
&& \Psi^{1,R}_M\left(L;m^{(R)}\right)= \Psi^{1,R}_M\left(L; -m^{(R)}\right)=\xi(1-\xi^2)^{\frac{L-1}{2}-k}{}_2F_1\left( -k, \frac{1}{2}+L-k; \frac{3}{2}; \xi^2\right).
\label{eq18}
\end{eqnarray}
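The statement that these conditions reduce ${}_2F_1$ to a polynomial can be made concrete with a terminating-series evaluator. This is our own illustration (the helper name is ours); it sums the finite hypergeometric series that arises when the first parameter equals $-k$:

```python
from math import isclose

def hyp2f1_poly(a, b, c, x):
    """2F1(a, b; c; x) for a = -k a non-positive integer: the series terminates."""
    k = int(round(-a))
    total, term = 0.0, 1.0
    for n in range(k + 1):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * x
    return total

# k = 1 member of the Psi^1 family: 2F1(-1, b; 3/2; x) = 1 - (b/(3/2)) x,
# a degree-1 polynomial in x = xi^2.
assert isclose(hyp2f1_poly(-1, 3.5, 1.5, 0.3), 1 - 3.5 / 1.5 * 0.3)
```

The sum has exactly $k+1$ terms, so the result is a polynomial of degree $k$ in $\xi^2$, as required for regularity on $|\xi|\leq 1$.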
The subset of singular eigenfunctions and the corresponding spectrum are obtained in the same way as for $\Psi^0_M$.
Finally, the subset of regular eigenfunctions and the corresponding spectrum can be described as follows:
\begin{enumerate}
\item
In the set of the two linearly independent solutions to the eigenvalue problem of the square of the angular momentum the subsets of regular eigenfunctions are generated by the mutually independent conditions of reducing hypergeometric functions to polynomials of $\cos\theta$.
\item
The subset $\Psi^{0,R}_M(L;m^{(R)})$ of linearly independent regular eigenfunctions corresponds to the spectrum of $m$
which is symmetric under the reflection of the sign: $m^{(R)}=m_0^{(R)}=m_0^{(-R)} \, {\rm U} \, m_0^{(+R)}$;
\begin{enumerate}
\item
The subsets are labeled by the spectrum of $m$, a numerical sequence with step size 2, $|m_j-m_{j-1}|=2$ (in units of $\hbar$), satisfying the condition $m_0^{(-R)}=-m_0^{(+R)}$.
\item
The minimal value in the set $m_0^{(R)}$ is $m_{0min}^{(R)}=m_{0min}^{(-R)}=-L$ and the maximal value is $m_{0max}^{(R)}=m_{0max}^{(+R)}=L$.
\item
If $L$ is an integer, moving through the spectrum of $m$ with the above step size 2 we pass from the subset labeled by $m_0^{(-R)}$ to the subset labeled by $m_0^{(+R)}$ and vice versa; in other words, these subsets are continuations of each other.
If $L$ is not an integer, moving through the spectrum with step size 2 does not lead from one subset to the other, i.e. in this case the subsets are not continuations of each other.
\item
Eigenfunctions corresponding to the subsets $m_0^{(+R)}$ and $m_0^{(-R)}$ are the same, $\Psi^{0,R}_M(L;m_0^{(+R)})=\Psi^{0,R}_M(L;m_0^{(-R)})$.
\end{enumerate}
\item
The subset $\Psi^{1,R}_M(L;m^{(R)})$ of linearly independent regular eigenfunctions corresponds to the spectrum of $m$
which is symmetric under the reflection of the sign: $m^{(R)}=m_1^{(R)}=m_1^{(-R)} \, {\rm U} \, m_1^{(+R)}$;
\begin{enumerate}
\item
Same as 2(a) above.
\item
The minimal value in the set $m_1^{(R)}$ is $m_{1min}^{(R)}=m_{1min}^{(-R)}=-L+1$ and the maximal value is $m_{1max}^{(R)}=m_{1max}^{(+R)}=L-1$.
\item
Same as 2(c) above.
\item
Eigenfunctions corresponding to the subsets $m_1^{(+R)}$ and $m_1^{(-R)}$ are the same, $\Psi^{1,R}_M(L;m_1^{(+R)})=\Psi^{1,R}_M(L;m_1^{(-R)})$.
\end{enumerate}
\end{enumerate}
The condition of reducing hypergeometric functions to polynomials generates singular functions as well. Since our goal is to identify and describe the subset of regular functions, we do not give explicit details of the subset of singular functions; we only remark that for integer $L$ singular functions are not generated in the sequence $\{\Psi(L,-L),\,\Psi(L,-L+2),\cdots,\Psi(L,L-2),\,\Psi(L,L)\}$, but appear in the sequence $\{\Psi(L,-L),\,\Psi(L,-L+1),\cdots,\Psi(L,L-1),\,\Psi(L,L)\}$.
Lastly we consider the case $2\beta=|m|$. This parameterisation leads to a different picture. The linearly independent solutions are now expressed as
\begin{eqnarray}
&& \Psi_M(L;2\beta)=\Psi^0_M(L;|m|)= (1-\xi^2)^{|m|/2} { }_2 F_1 \left( \frac{1}{2}+ \frac{|m|}{2}+ \frac{L}{2}, \frac{|m|}{2}- \frac{L}{2}; \frac{1}{2};\xi^2\right), \nonumber \\
&& \Psi_M(L;2\beta)=\Psi^1_M(L;|m|)= \xi (1-\xi^2)^{|m|/2} { }_2 F_1 \left( 1+\frac{|m|}{2}+ \frac{L}{2},\frac{1}{2}+\frac{|m|}{2}- \frac{L}{2}; \frac{3}{2};\xi^2\right);
\label{eq19}
\end{eqnarray}
In contrast to the parameterisation $2\beta=\pm m$, only two conditions for obtaining polynomials now remain:
$|m|/2-L/2=-k$ for $\Psi^0_M(L;|m|)$ and $|m|/2+1/2-L/2=-k$ for $\Psi^1_M(L;|m|)$.
Applying these conditions, $|m|=L-2k\geq 0$ and $|m|=L-1-2k\geq 0$, yields only regular functions and the corresponding spectra. Singular functions and the corresponding spectra are not generated, since $|m|\geq 0$.
By enumerating integer values of the parameter $k$ one enumerates all positive as well as all negative values of the spectrum of $m$.
That is, moving with step size 2 one is not transferred from the negative part of the spectrum to the positive one or vice versa; rather, both parts are united into the single quantity $|m|$, and both parts of the spectrum, with opposite signs, are enumerated simultaneously.
Therefore, to find regular functions as solutions in the eigenvalue/eigenfunction problem of the square of the angular momentum, the most convenient parameterisation is $2\beta=|m|$.
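To illustrate why the parameterisation $2\beta=|m|$ is convenient, the sketch below (our own code; it hard-codes the first solution of Eq.~(\ref{eq19})) evaluates $\Psi^0_M(L;|m|)$ on the regular spectrum $|m|=L-2k$ and confirms that it stays finite at $\xi=\pm 1$, where the prefactor $(1-\xi^2)^{|m|/2}$ cannot blow up:

```python
def psi0(L, abs_m, xi):
    """Psi^0_M(L; |m|) of Eq. (19); assumes |m| = L - 2k so the series terminates."""
    a = 0.5 + abs_m / 2 + L / 2
    b = abs_m / 2 - L / 2          # equals -k on the regular spectrum
    c = 0.5
    k = int(round(-b))
    total, term = 0.0, 1.0
    for n in range(k + 1):         # finite hypergeometric sum
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * xi**2
    return (1 - xi**2) ** (abs_m / 2) * total

# L = 2, |m| = 2: Psi^0 = (1 - xi^2), proportional to the familiar P_2^2;
# it vanishes (and is in particular finite) at xi = +-1.
assert psi0(2, 2, 1.0) == 0.0
assert abs(psi0(2, 2, 0.5) - 0.75) < 1e-12
```

With $2\beta=\pm m$ the same routine would need a sign check on the exponent; with $2\beta=|m|$ no such check arises.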
The sets of eigenvalues $m_0^{(R)}$ and $m_1^{(R)}$ and their eigenfunctions, corresponding to the parameterisation $2\beta=\pm m$, can be formally united into one set. Ordered numerically from the smallest to the largest value, this united set reads:
\begin{eqnarray}
m^{(R)} &=& \{ m_0^{(R)} {\rm U} \, m_1^{(R)} \} = \{-L;-L+1;-L+2;\cdots ; m_{max}^{(-R)} ; m_{min}^{(+R)};\cdots;L-2;L-1;L \} \nonumber\\
m_{max}^{(-R)} &=& -m_{min}^{(+R)},
\label{eq1.20}
\end{eqnarray}
where, depending on the numerical value of $L$, $m_{min}^{(+R)}$ is either $(L-2[L/2])$, the minimal positive value corresponding to $m_0^{(+R)}$, or $(L-1-2[(L-1)/2])$, the minimal positive value corresponding to $m_1^{(+R)}$.
The set of regular eigenfunctions corresponding to ordering (\ref{eq1.20}) is:
\begin{eqnarray}
\Psi_M(L;m) &=& \left\{ \Psi^{0, R}_M(L;-L); \Psi^{1, R}_M(L;-L+1); \Psi^{0,R}_M(L;-L+2);\cdots ; \Psi^R_M(L;m_{max}^{(-R)}); \right. \nonumber\\
&& \left. \Psi^R_M(L;m_{min}^{(+R)}); \cdots ;\Psi^{0,R}_M(L;L-2);\Psi^{1,R}_M(L;L-1); \Psi^{0, R}_M(L;L)\right\}.
\label{eq1.21}
\end{eqnarray}
Both sets (\ref{eq1.20}) and (\ref{eq1.21}) are obtained by merging two sequences with step size 2 in such a way that together they form one sequence with step size 1. The conditions of obtaining polynomials that lead to $m_0^{(R)}$ and $\Psi^0_M(L;m_0^{(R)})$ are not compatible with the conditions of obtaining polynomials that lead to
$m_1^{(R)}$ and $\Psi^1_M(L;m_1^{(R)})$. Therefore, in the sets with step size 1 introduced above, the functions $\Psi^0_M(L;m_0^{(R)})$ and $\Psi^1_M(L;m_1^{(R)})$ are merged into one set not by the conditions of obtaining polynomials, nor by any similar
condition reflecting a common feature, but only by the formal requirement of presenting the merged set as one with step size 1. That is why we used the term ``formal'' above for the sets (\ref{eq1.20}) and (\ref{eq1.21}).
Note that the maximal and minimal values of $m$, $m_{max,min}=\pm L$, reside in the spectrum $m_0^{(R)}$; correspondingly, the eigenfunction
$\Psi_M(L,\pm L)$ is regular only when, of the two linearly independent functions $\Psi_M^{0}$ and $\Psi_M^1$, the function $\Psi_M^{0,R}(L,\pm L)$ is chosen as the eigenfunction. The second solution is singular, $\Psi_M(L,\pm L)=\Psi_M^{1,S}(L,\pm L)$. Thus the sequence of regular eigenfunctions (\ref{eq1.21}) starts from $\Psi_M^0(L,-L)$ and ends with $\Psi_M^0(L,L)$. Moving with step size 2 up from $\Psi_M^0(L,-L)$ or down from $\Psi_M^0(L,L)$ we obtain the sequences of functions $\Psi_M^0(L,-L+2k)$ and $\Psi_M^0(L,L-2k)$. The functions from these sequences satisfy $\Psi_M^0(L,-L+2k)=\Psi_M^0(L,L-2k)$.
It is important to note that in establishing the subset of regular eigenfunctions of the operator of the square of the orbital momentum no constraint arises on the angular momentum $L$. Indeed, reducing a hypergeometric function to a polynomial is possible for any $L$, integer as well as non-integer. As an example, the subset of regular eigenfunctions (\ref{eq1.21}) for the case $L=2$ consists of functions with $m\in \{ -2,-1,0,1,2\}$ and for $L=2.2$ of functions with $m\in\{-2.2,-1.2,-0.2,0.2,1.2,2.2\}$.
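The merged step-1 set (\ref{eq1.20}) and the examples for $L=2$ and $L=2.2$ just quoted can be reproduced in a few lines. A sketch under our own naming, combining the branches $|m|=L-2k$ ($\Psi^0$) and $|m|=L-1-2k$ ($\Psi^1$):

```python
def merged_regular_spectrum(L):
    """Union of m_0^(R) (|m| = L - 2k) and m_1^(R) (|m| = L - 1 - 2k), sorted."""
    ms = set()
    for offset in (0, 1):            # offset 0: Psi^0 branch; offset 1: Psi^1 branch
        k = 0
        while L - offset - 2 * k >= 0:
            m = L - offset - 2 * k
            ms.update((m, -m))
            k += 1
    return sorted(ms)

print(merged_regular_spectrum(2))    # [-2, -1, 0, 1, 2]
print(merged_regular_spectrum(2.2))  # six values, approximately [-2.2, -1.2, -0.2, 0.2, 1.2, 2.2]
```

The two branches never contribute the same $m$, in line with the remark that the merged set is only formally a step-1 sequence.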
\section{Analysis of the spectrum of eigenvalues generated by the requirement for the eigenfunctions to be regular}
\label{secIII}
As shown in section \ref{secII}, equation (\ref{eq1.4})
has two linearly independent
solutions $\Psi^0_{M}(\xi|L;m) = \Psi^0_{M}(-\xi|L;m)$ and $\Psi^1_{M}(\xi|L;m) = -\Psi^1_{M}(-\xi|L;m)$.
Clearly any linear combination of these functions
\begin{eqnarray}
\Psi_{M}(\xi|L;m) = C_0 \Psi^0_{M}(\xi|L;m) + C_1 \Psi^1_{M}(\xi|L;m),
\label{eq2.2}
\end{eqnarray}
where $C_0$ and $C_1$ are arbitrary numerical coefficients, is also a solution to the same linear differential equation (\ref{eq1.4}).
In addition to this purely mathematical property of linear differential equations, namely that a solution can always be presented as the linear combination (\ref{eq2.2}), in quantum mechanics there exists an analogous
physical condition, the principle of superposition. According to this principle, if a physical system can be in states described by regular wave functions
$\Psi_1^R$ and $\Psi_2^R$, then it can also be in a state described by the wave function
\begin{eqnarray}
\Psi^R = C_1^R \Psi_1^R + C_2^R \Psi^R_2.
\label{eq2.3}
\end{eqnarray}
Despite the similarity of Eqs.~(\ref{eq2.2}) and (\ref{eq2.3}), there are substantial differences between these two relations. Namely, Eq.~(\ref{eq2.3}) is a sum of regular functions, while there is no such requirement for the terms in Eq.~(\ref{eq2.2}); indeed, as we have seen in the previous section, depending on the conditions on $L$ and $m$ the functions $\Psi^0_M,\,\Psi^1_M$ may be regular as well as singular. Also, in the physical principle of superposition $\Psi_1^R$ and $\Psi^R_2$ may correspond to the
two different eigenvalues of the same observable, e.g. $\Psi_1^R=\Psi_m(\phi|m_1)=\exp(i m_1\phi)$ and $\Psi^R_2=\Psi_m(\phi|m_2)=\exp(i m_2\phi)$, while nothing similar is meant in Eq.~(\ref{eq2.2}). On the contrary,
the necessary condition for mixing in Eq.~(\ref{eq2.2}) is that the eigenvalues of the given quantity corresponding to both terms be the same. Because of this restriction, Eq.~(\ref{eq2.2}) is not equivalent to Eq.~(\ref{eq2.3}) when the
terms in Eq.~(\ref{eq2.2}) may be singular. Such a case is realised by the eigenfunctions of the square of the angular momentum, $\Psi^0_M(\xi|L;m)$ and $\Psi^1_M(\xi|L;m)$. For Eqs.~(\ref{eq2.2}) and (\ref{eq2.3})
to be equivalent, in the sense that Eq.~(\ref{eq2.3}) could be obtained from Eq.~(\ref{eq2.2}), these functions must be regular, and from the analysis of the previous section we know that the functions
$\Psi^0_M(\xi|L;m)$ and $\Psi^1_M(\xi|L;m)$, with $|m|\in m_0^R$, $|m|=L-2k$, and $|m|\in m_1^R$, $|m|=L-1-2k$, respectively, cannot be regular at the same time. That is, $\Psi^0_M(\xi|L;m)$ is regular for numerical values $m\in m_0^R(L;k)$, and for the same value of $m$ the function
$\Psi^1_M(\xi|L;m)$ is necessarily singular, i.e. non-normalisable; vice versa, $\Psi^1_M(\xi|L;m)$ is regular for $m\in m_1^R(L;k)$, and for the same value of $m$ the function
$\Psi^0_M(\xi|L;m)$ is necessarily singular. Therefore, when presenting the solution to the eigenvalue/eigenfunction problem for the angular momentum in the form (\ref{eq2.2}), some procedure must be employed in order to filter out the regular function from the combination (\ref{eq2.2}).
Let us recall that
the solution to the quantum mechanical problem of the angular momentum historically was presented in terms
of the well known associated Legendre functions $P^\mu_\nu, \,Q^\mu_\nu$ (see, e.g. \cite{shiff}, \cite{messiah}).
These functions, being linear combinations of the fundamental solutions $\Psi^0_M$ and $\Psi^1_M$, are not necessarily regular, and to reduce them to a regular solution a procedure of filtering the coefficients is used.
To clarify the details of this filtering procedure, let us consider the expressions for the associated Legendre functions $P^\mu_\nu$ and $Q^\mu_\nu$ in terms of $\Psi^0_M$ and $\Psi^1_M$ (see, e.g., Ref.~\cite{Abramowitz}):
\begin{eqnarray}
&& \left[ P^\mu_\nu(\xi)/(-4)^{-|m|/2} \pi^{1/2}\right] |_{\nu=L; \mu=-|m|} = \left[ C_0 \Psi^0_{M}(\xi| L; |m|) + C_1 \Psi^1_{M}(\xi | L; |m| )\right],\nonumber\\
&& C_0 = \left[ \Gamma(1/2-L/2+|m|/2) \Gamma(1+L/2+|m|/2)\right]^{-1},
\nonumber\\
&& C_1 = -2 \left[ \Gamma(1/2+L/2+|m|/2) \Gamma(-L/2+|m|/2)\right]^{-1},\label{Gia}
\end{eqnarray}
and
\begin{eqnarray}
&& \left[ e^{i\mu\pi} Q^\mu_\nu (\xi)/(-4)^{-|m|/2} \pi^{1/2}\right] |_{\nu=L; \mu=-|m|} = \left[ C_3 \Psi^0_{M}(\xi | L; |m|) + C_4 \Psi^1_{M}(\xi | L; |m| )\right], \nonumber\\
&& 2 C_3 e^{\pm i(|m|+L+1)\pi/2} = \Gamma(1/2+L/2-|m|/2)/2 \Gamma(1+L/2+|m|/2), \nonumber\\
&& C_4e^{\pm i(|m|+L)\pi/2} = \Gamma(1+L/2-|m|/2)/\Gamma(1/2+L/2+|m|/2),
\label{eq2.4}
\end{eqnarray}
where $\Gamma(z)$ is the Euler gamma function \cite{Abramowitz}.
Since $\Psi^0_M$ and $\Psi^1_M$ are not simultaneously regular for fixed values of $L$ and $|m|$, to obtain a regular solution we can use the following strategy: for regular $\Psi^0_M$ the coefficient in front of $\Psi^1_M$ must vanish, and vice versa. This is the filtering procedure mentioned above.
As shown in the previous section, conditions for solutions to be regular result in the following relations:
\begin{equation}
\label{mcond}
|m|=L-2k, \quad |m|=L-1-2k.
\end{equation}
Correspondingly, the mixing coefficients of Eqs.~(\ref{Gia}) and (\ref{eq2.4}) take the form:
\begin{eqnarray}
&& C_0(L;|m|)|_{|m|=L-2k} = \left[ \Gamma(1/2-L+k) \Gamma(1+k)\right]^{-1},\nonumber\\
&& C_1(L;|m|)|_{|m|=L-2k} = -2 \left[ \Gamma(1/2+k) \Gamma(-L+k)\right]^{-1},\nonumber\\
&& C_3(L;|m|)|_{|m|=L-2k} \sim \Gamma(1/2+k)/ \Gamma(1+L-k),\nonumber\\
&& C_4(L;|m|)|_{|m|=L-2k} \sim \Gamma(1+k)/ \Gamma(1/2+L-k),\nonumber\\
&& C_0(L;|m|)|_{|m|=L-1-2k} = \left[\Gamma(1-L+k) \Gamma(3/2+k)\right]^{-1},\nonumber\\
&& C_1(L;|m|)|_{|m|=L-1-2k} = -2 \left[ \Gamma(1+k) \Gamma(1/2-L+k)\right]^{-1},\nonumber\\
&& C_3(L;|m|)|_{|m|=L-1-2k} \sim \Gamma(1+k)/ \Gamma(3/2+L-k),\nonumber\\
&& C_4(L;|m|)|_{|m|=L-1-2k} \sim \Gamma(3/2+k) / \Gamma(1+L-k).
\label{eq2.5}
\end{eqnarray}
Associated Legendre functions are regular when the following filtering requirements are satisfied:
\begin{eqnarray}
&& C_1(L;|m|)|_{|m|=L-2k} = 0; \label{eq2.6.1a}\\
&& C_4(L;|m|)|_{|m|=L-2k} = 0; \label{eq2.6.1b}\\
&& C_0(L;|m|)|_{|m|=L-1-2k} =0; \label{eq2.6.2a}\\
&& C_3(L;|m|)|_{|m|=L-1-2k} =0.
\label{eq2.6.2b}
\end{eqnarray}
It is seen from the explicit form of the mixing coefficients that the condition of Eq.~(\ref{eq2.6.1b}) cannot be satisfied. Condition (\ref{eq2.6.1a}) is satisfied if $L$ is a non-negative integer.
Similarly, condition (\ref{eq2.6.2b}) cannot be satisfied, while condition (\ref{eq2.6.2a}) is satisfied for non-negative integer $L$. The associated Legendre functions are regular, i.e. admissible, only when $L$ is a non-negative integer; otherwise they are singular, i.e. non-admissible.
Let us demonstrate this with concrete numerical examples.
We consider the following values of $L$ and $m$: $L=\{ 2;3/2;1.2\}$ and $m=\{\pm2; \pm3/2; \pm1.2\}$. For the functions and mixing coefficients of Eq.~(\ref{eq2.5}) we obtain:
\begin{eqnarray}
&& \Psi_M^0(L=2;|m|=|\pm 2|)=\Psi_M^0(2;2)^R,\ \Psi_M^1(L=2;|m|=|\pm 2|)=\Psi_M^1(2;2)^S; \nonumber\\
&& C_0(2;|\pm 2|) = \left[ \Gamma\left(-3/2\right) \Gamma(1)\right]^{-1}\neq 0; \nonumber\\
&& C_1(2;|\pm 2|) = -2 \left[ \Gamma\left(1/2\right) \Gamma(-2)\right]^{-1}=0; \ P^{-2}_2(\xi)=P^{-2}_2(\xi)^R \nonumber\\
&& C_3(2;|\pm 2|) \sim \Gamma\left(1/2\right) /\Gamma(3)\neq 0; \nonumber\\
&& C_4(2;|\pm 2|) \sim \Gamma\left(1\right)/\Gamma(5/2)\neq 0; \ \ \ Q^{-2}_2(\xi)=Q^{-2}_2(\xi)^S \nonumber\\
&& \nonumber\\
&& \Psi_M^0(L=3/2;|m|=|\pm 3/2|)=\Psi_M^0(3/2;3/2)^R; \ \Psi_M^1(L=3/2;|m|=|\pm 3/2|)=\Psi_M^1(3/2; 3/2)^S; \nonumber\\
&& C_0(3/2;|\pm 3/2|) = \left[ \Gamma\left(-1\right) \Gamma(1)\right]^{-1}= 0; \nonumber\\
&& C_1(3/2;|\pm 3/2|) = -2 \left[ \Gamma\left(1/2\right) \Gamma(-3/2)\right]^{-1}\neq 0; \ P^{-3/2}_{3/2}(\xi)=P^{-3/2}_{3/2}(\xi)^S \nonumber\\
&& C_3(3/2;|\pm 3/2|) \sim \Gamma\left(1/2\right) /\Gamma(5/2)\neq 0; \nonumber\\
&& C_4(3/2;|\pm 3/2|) \sim \Gamma\left(1\right)/\Gamma(2)\neq 0; \ \ \ Q^{-3/2}_{3/2}(\xi)=Q^{-3/2}_{3/2}(\xi)^S \nonumber\\
&& \nonumber\\
&& \Psi_M^0(L=1.2;|m|=|\pm 1.2|)=\Psi_M^0(1.2;1.2)^R; \ \Psi_M^1(L=1.2;|m|=|\pm 1.2|)=\Psi_M^1(1.2;1.2)^S; \nonumber\\
&& C_0(1.2;|\pm 1.2|) = \left[ \Gamma\left(-0.7\right) \Gamma(1)\right]^{-1}\neq 0; \nonumber\\
&& C_1(1.2;|\pm 1.2|) = -2 \left[ \Gamma\left(1/2\right) \Gamma(-1.2)\right]^{-1}\neq0; \ P^{-1.2}_{1.2}(\xi)=P^{-1.2}_{1.2}(\xi)^S \nonumber\\
&& C_3(1.2;|\pm 1.2|) \sim \Gamma\left(1/2\right) /\Gamma(2.2)\neq 0; \nonumber\\
&& C_4(1.2;|\pm 1.2|) \sim \Gamma\left(1\right) /\Gamma(1.7)\neq0; \ \ \ Q^{-1.2}_{1.2}(\xi)=Q^{-1.2}_{1.2}(\xi)^S.
\label{eqnonumber}
\end{eqnarray}
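The pattern in these examples, namely that $C_1$ vanishes only for non-negative integer $L$, can be verified numerically. In the sketch below (our own code; \texttt{rgamma} is a hand-rolled reciprocal gamma that returns zero at the poles of $\Gamma$) we evaluate $C_1$ of Eq.~(\ref{eq2.5}) on the branch $|m|=L-2k$:

```python
import math

def rgamma(x):
    """1/Gamma(x), defined as 0 at the poles x = 0, -1, -2, ... of the Gamma function."""
    if x <= 0 and float(x).is_integer():
        return 0.0
    return 1.0 / math.gamma(x)

def C1(L, k):
    """Mixing coefficient of Psi^1 in P^{-|m|}_L on the branch |m| = L - 2k."""
    return -2.0 * rgamma(0.5 + k) * rgamma(-L + k)

assert C1(2, 0) == 0.0      # L = 2: Gamma(-2) pole, filtering condition holds, P regular
assert C1(1.5, 0) != 0.0    # L = 3/2: no pole, the Psi^1 admixture survives, P singular
assert C1(1.2, 0) != 0.0    # L = 1.2: same, P singular
```

On this branch $-L+k$ is a non-positive integer exactly when $L$ is a non-negative integer, which is the filtering condition (\ref{eq2.6.1a}).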
Analogous results are obtained for the other values of $m$. For non-negative integer $L$ the functions $P^{-|m|}_L(\xi)$ are regular, while the functions $Q^{-|m|}_L(\xi)$ remain singular. For integer values of $L$ the set of Eq.~(\ref{eq1.20}) becomes
the well-known set (see, e.g., Ref.~\cite{shiff}):
\begin{eqnarray}
&& m_{max}^{(-R)} =-m_{min}^{(+R)} =0; \ m^{(R)}= \{ -L; -L+1;-L+2; \cdots ; 0; \cdots; L-2;L-1;L \}; \nonumber\\
&& L=\{ 0; 1; 2; \cdots; \infty \};
\label{eq2.7}
\end{eqnarray}
Thus, if one chooses to present the solution of the eigenvalue problem of the angular momentum in terms of associated Legendre functions, then from the requirement that the eigenfunction be regular it follows that $L$ is necessarily an integer and the spectrum of $m$ is given by Eq.~(\ref{eq2.7}).
In doing so, a set of regular functions and the corresponding non-integer values of $L$ disappear from the solution.
However, there is no mathematical or physical argument or requirement that would dictate that the solution of the eigenvalue equation (\ref{eq1}) must necessarily be presented in the form (\ref{eq2.2}).
As shown in the previous section, $\Psi_M^0$ and $\Psi_M^1$ are the linearly independent solutions of Eq.~(\ref{eq1}) and, depending on which regularity condition of Eq.~(\ref{mcond}) is realized, one of these two functions
becomes regular and can be chosen as the solution to the eigenvalue problem of the square of the orbital momentum. From the theoretical point of view this choice is no worse and no better than the choice in terms of $P^\mu_\nu,\,Q^\mu_\nu$. A solution presented by $\Psi_M^0$ alone or by $\Psi_M^1$ alone is regular and, unlike a solution presented in terms of $P^\mu_\nu,\,Q^\mu_\nu$, does not generate any constraint on $L$; regular, i.e. physically admissible, solutions exist for non-integer $L$ as well.
Therefore, we conclude that the spectrum of Eq.~(\ref{eq2.7}) is just an artefact of
presenting the eigenfunctions in the form of Eq.~(\ref{eq2.2}).
A different approach used to demonstrate that the spectrum is given by Eq.~(\ref{eq2.7}), i.e. that $L$ acquires only integer values, is based on the analysis of the commutation relations of the angular momentum operators.
In the next section we address these arguments.
\section{Analysis of the spectrum of eigenvalues generated by the commutation relations of the angular momentum operators}
\label{secIV}
Let us analyze the reasoning based on the commutation relations that is used to demonstrate that the eigenvalues of the
square of the angular momentum and of its third component can only be integer numbers (see, e.g., \cite{shiff}, \cite{messiah}). It is formulated as follows:
if $|L;m>$ is a normalisable state vector satisfying
\begin{equation}
\hat M^2|L;m> = L(L+1) |L;m>; \ \ \hat M_z|L;m> = m |L;m>,
\label{eq3.1}
\end{equation}
then the following mathematical relations must hold (see e.g. Ref.~\cite{shiff}, section XIII):
\begin{enumerate}
\item
$-L\leq m\leq L$;
\item
If $m=L$ then $\hat M^+|L;L>=0$, where $\hat M^+=\hat M_x+i \hat M_y$; \\if $m=-L$ then $\hat M^-|L;-L>=0$, where $\hat M^-=\hat M_x-i \hat M_y$.
\item
If $m\neq L$ then $\hat M^+|L;m>$ is an eigenvector with eigenvalues of the angular momentum $L(L+1)$ and $(m+1)$,\\if $m\neq -L$ then $\hat M^-|L;m>$ is an eigenvector with eigenvalues of the angular momentum $L(L+1)$ and $(m-1)$.
\item
If $ (\hat M^{\pm})^p|L;m>\neq 0$ then it is an eigenvector with eigenvalues of the square of the angular momentum and of its third component equal to $L(L+1)$ and $(m\pm p)$, respectively.
\item
In the sequences of eigenvectors $ \hat M^+ |L;m>; (\hat M^{+})^2|L;m>; \cdots ; (\hat M^{+})^p|L;m>$ and $ \hat M^- |L;m>; (\hat M^{-})^2|L;m>; \cdots ; (\hat M^{-})^q|L;m>$ one can always find values of $p$ and $q$
such that the following two relations hold simultaneously:
\begin{equation}
\label{mpq}
m+p=L,\quad m-q=-L,
\end{equation}
i.e. acting repeatedly on any $|L;m\rangle$ with $\hat M^+$ and $\hat M^-$ we obtain both $|L;L\rangle$ and $|L;-L\rangle$. Consequently, as $p$ and $q$ are positive integer numbers, the difference $(m+p)-(m-q)=(p+q)=2L$ is also an integer number.
\end{enumerate}
After that, the quantization of the angular momentum is considered proven.
To analyze the proof of quantization based on statements 1-5, let us first note that,
from the theoretical standpoint, if a condition is determined neither by the commutation relations nor by physical requirements, then
this condition does not necessarily have to hold.
Such a condition, relevant for our discussion, is point 5 above. We demonstrate that point 5 is not always valid
using three numerical values of $L$: $L=2$ (an integer value), $L=3/2$ (a half-integer value), and $L=1.2$ (a non-integer value which is also not half-integer). Let us assume at this stage that the conditions of items 1-5 are indeed satisfied and consider the following eigenvalues of $L$: $\{ 2;3/2;1.2 \}$. Since Eq.~(\ref{eq1.4}) contains $m$ quadratically, both $\Psi_{Mm}(L;m)$ and $\Psi_{Mm}(L;-m)$ are solutions of this equation. The eigenfunctions $\Psi_{Mm}(2;\pm 2)$, $\Psi_{Mm}(3/2; \pm 3/2)$ and $\Psi_{Mm}(1.2;\pm 1.2)$ are regular, i.e. normalizable, solutions of Eq.~(\ref{eq1.4}).
Then, according to the points 1-5 above, the following sequences will also be eigenfunctions:
\begin{eqnarray}
&& \{ \hat M^{\pm}\Psi_{Mm}(2;\mp 2),\,(\hat M^{\pm})^2\Psi_{Mm}(2;\mp 2),\,(\hat M^{\pm})^3\Psi_{Mm}(2;\mp 2),\,(\hat M^{\pm})^4\Psi_{Mm}(2;\mp2)\},\nonumber\\
&& \{ \hat M^{\pm}\Psi_{Mm}(3/2;\mp 3/2),\,(\hat M^{\pm})^2\Psi_{Mm}(3/2;\mp 3/2),\,(\hat M^{\pm})^3\Psi_{Mm}(3/2;\mp 3/2) \}, \nonumber\\
&& \{ \hat M^{\pm}\Psi_{Mm}(1.2;\mp 1.2),\,(\hat M^{\pm})^2\Psi_{Mm}(1.2;\mp 1.2);\cdots \}. \nonumber
\end{eqnarray}
The $(L;m)$ values corresponding to these sequences of functions are:
\begin{eqnarray}
&& L=2; \ \ \ \ \ m\downarrow=\{ 2;1;0;-1;-2\}; \ \ \ \ \ \ \ \ \ \ \ m\uparrow=\{-2;-1;0;1;2\}; \nonumber\\
&&L=3/2; \ \ m\downarrow=\{ 3/2;1/2;-1/2;-3/2\}; \ \ m\uparrow=\{-3/2;-1/2;1/2;3/2\}; \nonumber\\
&& L=1.2; \ \ \ m\downarrow=\{ 1.2;0.2;-0.8;\cdots \}; \ \ \ \ \ \ \ m\uparrow=\{-1.2;-0.2;0.8;\cdots \}.\nonumber
\end{eqnarray}
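A small numerical sketch (ours, not from the paper) makes the difference between the three cases explicit: the chain produced by $\hat M^-$ starting from $m=+L$ and the chain produced by $\hat M^+$ starting from $m=-L$ coincide for $L=2$ and $L=3/2$ but are disjoint for $L=1.2$:

```python
def ladder(L, n_steps):
    """m-values reached from +L by repeated M^- and from -L by repeated M^+."""
    down = [L - j for j in range(n_steps)]   # M^- chain starting at m = +L
    up = [-L + j for j in range(n_steps)]    # M^+ chain starting at m = -L
    return down, up

down, up = ladder(2, 5)
assert set(down) == set(up)            # integer L: a single self-closing chain

down, up = ladder(1.5, 4)
assert set(down) == set(up)            # half-integer L: chains also coincide

down, up = ladder(1.2, 5)
assert not set(down) & set(up)         # L = 1.2: the two chains never meet
```

For $L=1.2$ no choice of the truncation length makes the two sets intersect, which is the numerical content of the failure of point 5.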
As seen from these expressions, in the case $L=2$, acting on the functions $\Psi_{Mm}(L;\pm L)$ with the operators $\hat M^+$ and $\hat M^-$ generates two identical sets: $m\downarrow$, obtained by applying $\hat M^-$ to $\Psi_{Mm}(2;2)$, and $m\uparrow$, obtained by applying $\hat M^+$ to $\Psi_{Mm}(2;-2)$.
These sets have the property that moving down with unit step from the value $m_{max}=2$ we arrive at the minimal value $m_{min}=-2$ and vice versa, i.e. from the value $m_{min}=-2$ we move up to the maximal
value $m_{max}=2$. For $L=3/2$ we get the same result: the sets $m\uparrow$ and $m\downarrow$ are identical.
In the case $L=1.2$ the sets $m\downarrow=\{ 1.2;0.2;-0.8; \cdots \}$, generated by $\hat M^-$, and $m\uparrow=\{ -1.2;-0.2; 0.8; \cdots \}$, generated by $\hat M^+$, differ and have no intersection. Starting from any element of $m\downarrow$ and acting repeatedly with $\hat M^+$ we arrive at $m_{max}=1.2$; however, acting repeatedly with $\hat M^-$ we never arrive at the minimal value of $m\uparrow$,
$m_{min}=-1.2$. Hence, for non-integer $L$, point 5, stating that for the sequences of functions generated by acting with $\hat M^-$ and $\hat M^+$ one can always find integers $p$ and $q$ such that,
starting from some $\Psi_{Mm}(L;m)$ of the sequence, acting with $(\hat M^+)^p$ and $(\hat M^-)^q$ one simultaneously obtains both $\Psi_{Mm}(L;L)$ and $\Psi_{Mm}(L;-L)$, is no longer valid.
In case of the non-integer values of $L$, if by acting with $(\hat M^+)^p$ on $\Psi_{Mm}(L;m)$ we obtain $\Psi_{Mm}(L;L)$, then this state necessarily belongs to the spectrum of type $m\uparrow$ and by acting on it with $(\hat M^-)^q$ we cannot obtain
$\Psi_{Mm}(L;-L)$. Similarly, if by acting with $(\hat M^-)^q$ on $\Psi_{Mm}(L;m)$ we obtain $\Psi_{Mm}(L;-L)$ then this state necessarily belongs to the spectrum of type $m\downarrow$ and by acting on it with $(\hat M^+)^p$ we cannot obtain
$\Psi_{Mm}(L;L)$. In other words Eq.~(\ref{mpq}) is no longer valid and consequently, angular momentum quantization can not be proved.
Let us ask where the requirements formulated in point 5 above come from and what they are based upon.
These requirements, namely that by repeatedly applying the operator $\hat M^{+}$ to $\Psi_{Mm}(L;m)$ we arrive at $\Psi_{Mm}(L;L)$, and by repeatedly applying the operator $\hat M^-$ to the same $\Psi_{Mm}(L;m)$ we arrive at $\Psi_{Mm}(L;-L)$, follow neither from the commutation relations nor from any physical arguments.
Consequently, we conclude that in the framework of the algebra of commutation relations the condition stated in point 5 above is not one that must necessarily be satisfied: violation of the requirement of point 5 does not contradict any physical requirement or the commutation relations.
Therefore integer, half-integer, and also any other real values of $L$ are compatible with the algebra of commutation relations as well as with the physical requirements.
We finish this section by listing how, when the operators $\hat M^{\pm}$ act upon $\Psi_{Mm}^0,\,\Psi_{Mm}^1$, the results move from the set of regular functions to the set of singular functions, move in the opposite direction, or remain in the original set. First let us recapitulate the result of section \ref{secII}: if $\Psi^0_{Mm}(L;m)$ is a regular function for fixed $L$, then $\Psi^1_{Mm}(L;m)$ is necessarily singular, and vice versa.
We omit lengthy straightforward calculations and just state the results:
\begin{enumerate}
\item
For any $L$, if $m$ and $m\pm 1$ belong to the spectrum with the same sign, that is, if both $m$ and $m\pm 1$ are positive or both are negative, then $\hat M^{\pm}\Psi_{Mm}(L;m)^R\sim \Psi_{Mm}(L;m\pm 1)^R$ and $\hat M^{\pm}\Psi_{Mm}(L;m)^S\sim \Psi_{Mm}(L;m\pm 1)^S$. In other words, in this case if $\Psi$ is regular (singular), then $\hat M^{\pm}\Psi$ is also regular (singular).
\item
If $L$ is non-integer, $L\neq[L]$, and $m$ and $(m\pm 1)$ belong to spectra with different signs, then $\hat M^{\pm}\Psi_{Mm}(L;m)^R\sim\Psi_{Mm}(L;m\pm 1)^S$. In other words, the operators $\hat M^{(\pm)}$ bring the regular functions $\Psi_{Mm}(L;m)^R$ to the singular functions $\Psi_{Mm}(L;m\pm 1)^S$.
\end{enumerate}
In addition to the statements above, an important feature is that the numerical sequences
$m|_{(-\infty)}^{(L)}$ and $m|^{(+\infty)}_{(-L)}$ have no intersection unless $L$ is integer or half-integer. The case of integer $L$ is distinguished by the fact that instead of statement 2 we now have:
3. If $L$ is integer, $L=[L]$, and $m$ and $(m\pm 1)$ belong to spectra with different signs, then $\hat M^{\pm}\Psi_{Mm}(L;m)^R\sim \Psi_{Mm}(L;m\pm 1)^R$. In other words, the operators $\hat M^{(\pm)}$ bring the regular functions $\Psi_{Mm}(L;m)^R$ to the regular functions $\Psi_{Mm}(L;m\pm 1)^R$.
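Statements 1-3 can be probed with a small membership test. The sketch below is our own (the tolerance and the cutoff \texttt{k\_max} are arbitrary choices); it checks whether $|m|$ lies on one of the two regular branches $|m|=L-2k$ or $|m|=L-1-2k$:

```python
def is_regular(L, m, k_max=50):
    """True if |m| = L - 2k (Psi^0 branch) or |m| = L - 1 - 2k (Psi^1 branch), k >= 0."""
    return any(abs(abs(m) - (L - off - 2 * k)) < 1e-9
               for off in (0, 1) for k in range(k_max))

# Statement 2: for non-integer L = 1.2, stepping m = 0.2 -> -0.8 across zero
# lands on a singular function.
assert is_regular(1.2, 0.2) and not is_regular(1.2, -0.8)
# Statement 3: for integer L = 2, stepping m = 0 -> -1 stays regular.
assert is_regular(2, 0) and is_regular(2, -1)
```

The sign-crossing step is thus harmless exactly when the mirror halves of the spectrum are continuations of each other, i.e. for integer $L$.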
The case of half-integer $L$ is described by statement 2, corresponding to non-integer numbers; explicitly:
\begin{eqnarray}
&& \hat M^- \Psi^0_{Mm}(L;1/2)^R|_{[L]=2k} = (L+1/2)^2 \Psi^1_{Mm}(L;-1/2)^S|_{[L]=2k}; \nonumber \\
&& \hat M^+ \Psi^1_{Mm}(L;-1/2)^S|_{[L]=2k} = - \Psi^0_{Mm}(L;1/2)^R|_{[L]=2k}; \nonumber \\
&& \hat M^- \Psi^1_{Mm}(L;1/2)^R|_{[L]=2k+1} = \Psi^0_{Mm}(L;-1/2)^S|_{[L]=2k+1}; \nonumber \\
&& \hat M^+ \Psi^0_{Mm}(L;-1/2)^S|_{[L]=2k+1} = (L+1/2)^2 \Psi^1_{Mm}(L;1/2)^R|_{[L]=2k+1}; \nonumber \\
&& \hat M^+ \Psi^0_{Mm}(L;-1/2)^R|_{[L]=2k} = (L+1/2)^2 \Psi^1_{Mm}(L;1/2)^S|_{[L]=2k}; \nonumber \\
&& \hat M^- \Psi^1_{Mm}(L;1/2)^S|_{[L]=2k} = - \Psi^0_{Mm}(L;-1/2)^R|_{[L]=2k}; \nonumber \\
&& \hat M^+ \Psi^1_{Mm}(L;-1/2)^R|_{[L]=2k+1} = \Psi^0_{Mm}(L;1/2)^S|_{[L]=2k+1}; \nonumber \\
&& \hat M^- \Psi^0_{Mm}(L;1/2)^S|_{[L]=2k+1} = (L+1/2)^2 \Psi^1_{Mm}(L;-1/2)^R|_{[L]=2k+1};
\label{eq3.10}
\end{eqnarray}
Correspondingly, for non-integer $L$ the action of the operators $\hat M^{(\pm)}$ attaches the set of singular functions to the set of regular functions (for half-integer $L$ this fact is also mentioned in Refs.~\cite{Pauli}, \cite{mer}),
and this attachment works in both directions, that is, by acting with the operators $\hat M^{(\pm)}$ we move from singular functions to regular ones and vice versa.
In connection with the above results the following important comment is in order:
as shown in section \ref{secII}, when the regular eigenfunctions and the spectrum are obtained via the condition of reducing the hypergeometric expressions for $\Psi^{0,1}_{Mm}$ to polynomials, singular eigenfunctions do not appear at
all, provided the parameterization $2\beta\equiv\sqrt{m^2}=|m|$ is used. However, when the operators $\hat M^{(\pm)}$ are used to establish the spectrum, as is done, e.g., in \cite{wein}, singular functions are generated even for the parameterization $2\beta=|m|$, since the operators $\hat M^{\pm}$, containing $d/d\xi$, lower the exponent $\beta$ in the expression $\Psi=(1-\xi^2)^{\beta}\,F$, and acting repeatedly with $\hat M^{\pm}$ results in a singular function of $\xi$. If, to establish the set of normalizable eigenfunctions, one uses the procedure described in section \ref{secII} rather than the method based on the operators $\hat M^{\pm}$, singular functions do not appear in the set of eigenfunctions.
In the quantum-mechanical problem of the angular momentum the operators $\hat M^{\pm}$ are often treated on the same footing as operators corresponding to observables, and the
results of mathematical operations connected with their action are considered as conditions that have to be satisfied. For example, in Ref.~\cite{wein} the set of normalizable eigenfunctions is defined
as the subset of functions obtained from $\Psi_L(L;\pm L)$ by acting with $\hat M^{(\pm)}$. The reason for assigning these operators such a high status stems from the paper by Pauli \cite{Pauli}, in which the issue of non-uniqueness of the eigenfunctions of the angular momentum operator is addressed. There is, however, no reason to give these operators such a special status, as they do not belong to a complete set of commuting operators.
The analysis of the quantization of eigenvalues is connected not only to the properties of the eigenfunctions of the square of the angular momentum but also to the non-uniqueness of the eigenfunctions of its third component. Therefore the arguments by Pauli will be
addressed in our next publication, where we will consider the issue of whether $m$ can only be an integer by analyzing the properties of the eigenfunctions and eigenvalues of the third component of the angular momentum.
\section{Conclusions}
\label{secVI}
From the above analysis we conclude:
\begin{itemize}
\item
A set of eigenfunctions of the operator of the square of the orbital angular momentum, $\Psi_{Mm}(\xi,\phi |L;m)$, consists of subsets of singular and non-singular functions.
\item
Eigenvalues of the square of the angular momentum corresponding to non-singular, i.e. normalizable, eigenfunctions can be integer as well as non-integer.
\item
The main statement of the analysis of the solutions to the eigenvalue/eigenstate equations, cited in textbooks of quantum mechanics, that only integer eigenvalues are admissible, is an artefact of
considering a specific linear combination of linearly independent solutions (realised as associated Legendre functions, the so-called spherical harmonics). This requirement is neither a physically nor a mathematically necessary condition.
\item
If the condition of normalisability of eigenfunctions is realised by imposing the condition that the solutions reduce to polynomials, then the subset of singular functions does not appear when the parameterization $(m^2)^{1/2}=|m|$ is chosen. This parameterization preserves in the expressions for the eigenfunctions the symmetry present in the initial equation,
$\hat M^2(m)=\hat M^2(-m)$.
\item
In the operator formalism the set of physical states is completely factorised from the set of singular functions only in the case when $L$ is integer.
\item
In the operator formalism the statement that $L$ can only be integer is a result of assigning to the operators $\hat M^{(\pm)}$ a higher status than follows from the principles of quantum mechanics.
\item
To guarantee that $L$ acquires only integer values, one either needs to find additional arguments on top of those which are usually stated when solving the eigenvalue problem
of the angular momentum and/or applying the algebra of commutation relations, or, alternatively, one has to admit that in the framework of quantum mechanics $L$ may be integer as well as non-integer.
\end{itemize}
We are indebted to J.~T.~Gegelia for useful discussions and for critically reading the manuscript.
\section{Introduction}
Unitary representations realized in the spaces of the holomorphic sections of equivariant holomorphic line bundles appear in various areas of the representation theory of Lie groups. For instance, we can recall the Borel-Weil theory for compact Lie groups, the holomorphic discrete series and its analytic continuation for Hermite Lie groups, the Bargmann-Fock representation for the Heisenberg group, and the Auslander-Kostant theory for solvable Lie groups. We shall formulate such unitary representations as follows. Let $\mathcal{M}$ be a connected complex manifold, let $\mathrm{Aut}_{hol}(\mathcal{M})$ be the holomorphic automorphism group of $\mathcal{M}$, let $G_0\subset\mathrm{Aut}_{hol}(\mathcal{M})$ be a connected subgroup which acts on $\mathcal{M}$ transitively, and let $L$ be a $G_0$-equivariant holomorphic line bundle over $\mathcal{M}$. We denote by $\Gamma^{hol}(L)$ the space of holomorphic sections of $L$. Let $l$ be the representation of $G_0$ given by
\begin{equation*}
l(g)s(z)=gs(g^{-1}z)\quad(g\in G_0,s\in\Gamma^{hol}(L),z\in\mathcal{M}).
\end{equation*}
Let us consider all $G_0$-equivariant holomorphic line bundles $L$ over $\mathcal{M}$ and the following fundamental questions:
\begin{enumerate}
\item[(Q1)] What is the condition that the representation $l$ of $G_0$ is unitarizable?
\item[(Q2)] Which unitarizations are equivalent as unitary representations of $G_0$?
\end{enumerate}
Here we make precise the class of representations we study.
\begin{definition}
We say that the representation $l$ is {\it unitarizable} if there exists a nonzero Hilbert space $\mathcal{H}\subset\Gamma^{hol}(L)$ satisfying the following conditions:
\begin{enumerate}
\item[(i)] the inclusion map $\iota:\mathcal{H}\hookrightarrow \Gamma^{hol}(L)$ is continuous with respect to the compact-open topology of $\Gamma^{hol}(L)$,
\item[(ii)] $l(g)\mathcal{H}\subset \mathcal{H}\,(g\in G_0)$ and $\|l(g)s\|=\|s\|\,(g\in G_0, s\in\mathcal{H})$, where $\|\cdot\|$ denotes the norm of $\mathcal{H}$.
\end{enumerate}
This notion is closely related to the holomorphic induction introduced by Auslander and Kostant. We will mention the relation later. For a unitarizable representation $l$, we call the subrepresentation $(l,\mathcal{H})$ a {\it unitarization} of the representation $(l,\Gamma^{hol}(L))$ of $G_0$.
\end{definition}
A Hilbert space $\mathcal{H}$ satisfying the condition (i) is a reproducing kernel Hilbert space. The following theorem is known.
\begin{theorem}[{\cite[Theorem 6]{ishi 2011}}, {\cite{kobayashi}}, {\cite{kunze}}]\label{uniqueness of unitarification1}
A Hilbert space giving a unitarization of $l$ is unique if it exists. In particular, the unitarization is irreducible.
\end{theorem}
In this paper, we shall give a complete answer to the questions (Q1) and (Q2) in the case that $\mathcal{M}$ is a bounded homogeneous domain $\mathcal{D}$ and $G_0\subset\mathrm{Aut}_{hol}(\mathcal{D})$ is the identity component $G$ of a real algebraic group. Here it is known \cite[Theorem 3.2]{kaneyuki} that $\mathrm{Aut}_{hol}(\mathcal{D})$ admits a structure of a Lie group and that its identity component is isomorphic to the identity component of a linear algebraic group. The identity component of $\mathrm{Aut}_{hol}(\mathcal{D})$, which is denoted by $\mathrm{Aut}_{hol}(\mathcal{D})^o$, is an example of $G$. When $\mathcal{D}$ is symmetric, any parabolic subgroup of $\mathrm{Aut}_{hol}(\mathcal{D})^o$ is also an example of $G$. Now we introduce the notion of an Iwasawa subgroup of a Lie group.
\begin{definition}
For a Lie group $G_0$, we call a subgroup $B_0\subset G_0$ an {\it Iwasawa} subgroup of $G_0$ if $B_0$ is a maximal connected real split solvable Lie subgroup of $G_0$.
\end{definition}
It is known that the isotropy subgroup of $\mathrm{Aut}_{hol}(\mathcal{D})^o$ at a point $p\in\mathcal{D}$ is a maximal compact subgroup of $\mathrm{Aut}_{hol}(\mathcal{D})^o$, and in our setting, it follows that an Iwasawa subgroup $B$ of $G$ acts on $\mathcal{D}$ simply transitively (see \cite[Chapter 4, Theorem 4.7]{encyclopedia}).
\begin{definition}
An analytic function $m:G\times \mathcal{D}\rightarrow \mathbb{C}^\times$ is called a {\it multiplier} if the following cocycle condition is satisfied:
\begin{equation*}
m(gg',z)=m(g,g'z)m(g',z)\quad(g,g'\in G, z\in \mathcal{D}).
\end{equation*}
Moreover, a multiplier $m$ is called a holomorphic multiplier if $m(g,z)$ is holomorphic in $z\in \mathcal{D}$.
\end{definition}
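For example, the complex Jacobian $J(g,z)=\det\bigl(\partial(gz)/\partial z\bigr)$ of the action is a holomorphic multiplier; the cocycle condition follows from the chain rule:
\begin{equation*}
J(gg',z)=\det\frac{\partial(g(g'z))}{\partial (g'z)}\,\det\frac{\partial(g'z)}{\partial z}=J(g,g'z)J(g',z)\quad(g,g'\in G, z\in\mathcal{D}).
\end{equation*}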
Let $m:G\times\mathcal{D}\rightarrow\mathbb{C}^\times$ be a holomorphic multiplier. Let $E_m$ be the $G$-equivariant trivial line bundle $\mathcal{D}\times\mathbb{C}$, where the $G$-action on $E_m$ is defined by
\begin{equation*}
g(z,\zeta)=(gz,m(g,z)\zeta).
\end{equation*}
Since a bounded homogeneous domain is a contractible Stein manifold, every holomorphic line bundle over $\mathcal{D}$ is trivial. Thus there exists a holomorphic multiplier $m:G\times\mathcal{D}\rightarrow\mathbb{C}^\times$ such that $L$ and $E_m$ are isomorphic as $G$-equivariant holomorphic line bundles.
Let $\mathcal{O}(\mathcal{D})$ denote the space of holomorphic functions on $\mathcal{D}$. We identify $\Gamma^{hol}(E_m)$ with $\mathcal{O}(\mathcal{D})$, and denote by $T_m$ the representation $l$ for $E_m$. The representation $T_m$ of $G$ is described as
\begin{equation*}
T_m(g)f(z)=m(g^{-1},z)^{-1}f(g^{-1}z)\quad(f\in\mathcal{O}(\mathcal{D})).
\end{equation*}
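The cocycle condition is precisely what makes $T_m$ a homomorphism; a one-line check:
\begin{equation*}
T_m(g)T_m(g')f(z)=m(g^{-1},z)^{-1}m(g'^{-1},g^{-1}z)^{-1}f(g'^{-1}g^{-1}z)
=m((gg')^{-1},z)^{-1}f((gg')^{-1}z)=T_m(gg')f(z),
\end{equation*}
where we used $m(g'^{-1}g^{-1},z)=m(g'^{-1},g^{-1}z)\,m(g^{-1},z)$.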
The scalar-valued holomorphic discrete series and its analytic continuation are a special case of our object. In this case $\mathcal{D}$ is a bounded symmetric domain and $G$ is a semisimple Lie group which is locally isomorphic to the group $\mathrm{Aut}_{hol}(\mathcal{D})$. Let $\gamma$ be a complex number, let $\mathcal{D}$ be an irreducible bounded symmetric domain, and let $J:\mathrm{Aut}_{hol}(\mathcal{D})^o\times\mathcal{D}\rightarrow\mathbb{C}^\times$ denote the complex Jacobian. Consider the following representation of $\mathrm{Aut}_{hol}(\mathcal{D})^o$ on the space $\mathcal{O}(\mathcal{D})$:
\begin{equation*}
T_{J^{-\gamma}}(g)f(z)=J(g^{-1},z)^\gamma f(g^{-1}z)\quad(g\in\mathrm{Aut}_{hol}(\mathcal{D})^o,f\in\mathcal{O}(\mathcal{D})).
\end{equation*}
To be precise, we should consider $J(g^{-1},z)^\gamma$ as a function defined on $\widetilde{\mathrm{Aut}_{hol}(\mathcal{D})^o}\times\mathcal{D}$, where $\widetilde{\mathrm{Aut}_{hol}(\mathcal{D})^o}$ denotes the universal covering group of $\mathrm{Aut}_{hol}(\mathcal{D})^o$. The unitarizations of the above representations $ T_{J^{-\gamma}}$ are highest weight unitary representations, and the equivalence classes of these unitary representations are determined by their highest weights $\gamma$. On the other hand, not all $ T_{J^{-\gamma}}$ are unitarizable. First we consider the condition that $ T_{J^{-\gamma}}$ has a nontrivial $\widetilde{\mathrm{Aut}_{hol}(\mathcal{D})^o}$-invariant subspace which is given as a weighted Bergman space. The condition is a special case of the Harish-Chandra condition \cite{harishV, harishVI}. More generally, the set of $\gamma$ for which $ T_{J^{-\gamma}}$ is unitarizable is called the Wallach set of $\mathcal{D}$, and was determined by Vergne and Rossi \cite{vergne} and Wallach \cite{wallach}.
We can consider the same kind of representations for bounded homogeneous domains. Let $\mathcal{D}$ be a (not necessarily symmetric) bounded homogeneous domain. Ishi shows the following theorem.
\begin{theorem}[Ishi, {\cite[Proposition 14]{ishi 2011}}]
Let $\mathcal{H}\subset \mathcal{O}(\mathcal{D})$ be a reproducing kernel Hilbert space. Suppose that $ T_{J^{-\gamma}}(b)\mathcal{H}\subset \mathcal{H}$ for all $b\in B$, and $\| T_{J^{-\gamma}}(b)f\|=\|f\|$ for all $b\in B$ and $f\in\mathcal{H}$. Then we have $ T_{J^{-\gamma}}(g)\mathcal{H}\subset \mathcal{H}$ for all $g\in \widetilde{\mathrm{Aut}_{hol}(\mathcal{D})}$, and $\| T_{J^{-\gamma}}(g)f\|=\|f\|$ for all $g\in \widetilde{\mathrm{Aut}_{hol}(\mathcal{D})}$ and $f\in\mathcal{H}$.
\end{theorem}
\begin{theorem}[Ishi, {\cite{ishi 2013}}]
Unitarizations of $ T_{J^{-\gamma}}$ and $ T_{J^{-\gamma'}}$ are equivalent as unitary representations of $\widetilde{\mathrm{Aut}_{hol}(\mathcal{D})^o}$ if and only if $\gamma=\gamma'$.
\end{theorem}
When $\mathcal{D}$ is an irreducible bounded symmetric domain, every $\mathrm{Aut}_{hol}(\mathcal{D})^o$-equivariant holomorphic line bundle over $\mathcal{D}$ is isomorphic to $E_{J^{-\gamma}}$ for some $\gamma\in\mathbb{C}$. On the other hand, for $G\subsetneq \mathrm{Aut}_{hol}(\mathcal{D})^o$, it can happen that there exists a $G$-equivariant holomorphic line bundle $L$ such that $L$ is not isomorphic to $E_{J^{-\gamma}}$ for any $\gamma\in\mathbb{C}$ as a $G$-equivariant holomorphic line bundle. Moreover, when $\mathcal{D}$ is not symmetric, the same can happen even for $G=\mathrm{Aut}_{hol}(\mathcal{D})^o$ (see Section \ref{Applicatio}).
As we will see in Section \ref{EXISTENCEO}, we can reduce the question (Q1) for $G$ to the question for $B$. Let $\mathcal{D}$ be a bounded homogeneous domain, and let $L$ be a $G$-equivariant holomorphic line bundle over $\mathcal{D}$.
\begin{theorem}[see Theorem \ref{LetmGtimes}]\label{Letmathcal}
Let $\mathcal{H}\subset \Gamma^{hol}(L)$ be a reproducing kernel Hilbert space. Suppose that $l(b)\mathcal{H}\subset \mathcal{H}$ for all $b\in B$ and $\|l(b)s\|=\|s\|$ for all $b\in B$ and $s\in\mathcal{H}$. Then we have $l(g)\mathcal{H}\subset \mathcal{H}$ for all $g\in G$ and $\|l(g)s\|=\|s\|$ for all $g\in G$ and $s\in\mathcal{H}$. Namely, unitarizability as a representation of $B$ implies unitarizability as a representation of $G$.
\end{theorem}
We fix a reference point $p\in\mathcal{D}$. Let $L_p$ be the fiber over the point $p$, and let $K$ be the isotropy subgroup of $G$ at $p$. Note that $K$ is a maximal compact connected subgroup of $G$. Concerning the question (Q2), we obtain
\begin{theorem}[see Theorem \ref{main}]\label{LetLkappak1}
Let $L$ and $L'$ be $G$-equivariant holomorphic line bundles over $\mathcal{D}$. Suppose that $\mathcal{H}\subset \Gamma^{hol}(L)$ and $\mathcal{H}'\subset\Gamma^{hol}(L')$ give unitarizations of representations $l$ and $l'$, respectively. Then $(l,\mathcal{H})$ and $(l',\mathcal{H}')$ are equivalent as unitary representations of $G$ if and only if $(l|_B,\mathcal{H})$ and $(l'|_B,\mathcal{H}')$ are equivalent as unitary representations of $B$ and the actions of $K$ on the fibers $L_p$ and $L'_p$ coincide.
\end{theorem}
Now we give a concrete parametrization of the $G$-equivariant holomorphic line bundles $L$ for which the representations $l$ are unitarizable, and we shall give the partition of the parameter set $\Theta(G)$ which corresponds to the equivalence classes of the unitarizations. In other words, the partition gives an answer to the question (Q2), and describes the classification of the unitary representations of $G$ obtained by unitarizations.
Let $\mathfrak{g}=\mathrm{Lie}(G)$, and let $\mathfrak{g}_-\subset \mathfrak{g}_\mathbb{C}$ be the complex subalgebra defined by
\begin{equation*}
\mathfrak{g}_-=\left\{Z=X+iY\in \mathfrak{g}_\mathbb{C}; \left.\frac{d}{dt}\right|_{t=0}e^{tX}p+i\left.\frac{d}{dt}\right|_{t=0}e^{tY}p\in T_{p}^{0,1}\mathcal{D}\right\}.
\end{equation*}
Let $\mathfrak{k}=\mathrm{Lie}(K)$. Clearly $\mathfrak{k}\subset \mathfrak{g}_-$. By Tirao and Wolf {\cite[Theorem 3.6]{tirao}}, the set of equivalence classes of $G$-equivariant holomorphic line bundles over $\mathcal{D}$ can be identified with the set
\begin{equation*}
\mathcal{L}(G)=\left\{\theta\in\mathfrak{g}_-^*;\begin{array}{c}\theta \text{ is a complex one-dimensional representation of }\mathfrak{g}_-\\\text{ such that } \theta|_\mathfrak{k} \text{ lifts to a representation of }K\end{array} \right\}.
\end{equation*}
Let $L$ be a $G$-equivariant holomorphic line bundle corresponding to $\theta\in\mathcal{L}(G)$. There exists a $G$-invariant Hermitian metric on $L$, and we consider the space $\Gamma^2(L)$ of square-integrable holomorphic sections of $L$. If $\Gamma^2(L)\neq\{0\}$, then $\Gamma^2(L)$ gives the unitarization, and $(l,\Gamma^2(L))$ is nothing but the holomorphically induced representation in \cite{Auslander} from a ``polarization'' $\mathfrak{g}$ at $\xi\in\mathfrak{g}^*$, where $i\xi|_{\mathfrak{g}_-}=\theta$ (see p. 11). We note that even if $\Gamma^2(L)=\{0\}$, there may still exist $\mathcal{H}\neq\{0\}$ giving a unitarization of $l$. Let $\mathfrak{b}=\mathrm{Lie}(B)$. We identify $T_p\mathcal{D}$ with $\mathfrak{b}$. Then a $B$-invariant K\"{a}hler metric on $\mathcal{D}$ defines a normal $j$-algebra $(\mathfrak{b},j,\omega)$ (see \cite[Part III, Lemma 1]{CIME}). Let $\mathfrak{a}$ denote the orthogonal complement of $[\mathfrak{b},\mathfrak{b}]$ in $\mathfrak{b}$ with respect to the inner product $\langle\cdot,\cdot\rangle=\omega([j\cdot,\cdot])$ on $\mathfrak{b}$. Put $\mathfrak{a}_-=\mathfrak{g}_-\cap\mathfrak{a}_\mathbb{C}$. The set of equivalence classes of $B$-equivariant holomorphic line bundles over $\mathcal{D}$ is parametrized by $\mathfrak{a}_-^*$. Let $r=\dim\mathfrak{a}$. For $\varepsilon=(\varepsilon_1,\cdots,\varepsilon_r)\in\{0,1\}^r$, we put $Z(\varepsilon)=\{\underline{\zeta}=(\zeta_1,\cdots, \zeta_r)\in\mathbb{R}^r; \zeta_k=0 \text{ for all }k \text{ such that }\varepsilon_k=1 \}$. Ishi \cite{ishi 1999} gives the subset $\Theta$ of $\mathfrak{a}_-^*$ and the partition
\begin{equation*}
\Theta=\bigsqcup_{\varepsilon\in\{0,1\}^r}\bigsqcup_{\underline{\zeta}\in Z(\varepsilon)}\Theta(\varepsilon,\underline{\zeta})
\end{equation*}
such that a representation $l$ of $B$ is unitarizable if and only if the corresponding parameter belongs to $\Theta$, and unitarizations of $l$ and $l'$ are equivalent if and only if the corresponding parameters belong to the same $\Theta(\varepsilon,\underline{\zeta})$. Combining Theorem \ref{Letmathcal} and Theorem \ref{LetLkappak1} with the results of \cite{ishi 1999, ishi 2011}, we obtain a method for producing the concrete parametrization in question. Let
\begin{equation*}
\Lambda=\{\lambda\in \mathfrak{z}(\mathfrak{k})^*;i\lambda=d\chi|_{\mathfrak{z}(\mathfrak{k})}\text{ for some one-dimensional representation } \chi \text{ of } K\}.
\end{equation*}
For $\varepsilon\in\{0,1\}^r$, $\underline{\zeta}\in Z(\varepsilon)$, and $\lambda\in \Lambda$, we put
\begin{equation*}
\Theta(G,\varepsilon,\underline{\zeta},\lambda)=\{\theta\in\mathcal{L}(G);\theta|_{\mathfrak{a}_-}\in\Theta(\varepsilon,\underline{\zeta}), \theta|_{\mathfrak{z}(\mathfrak{k})}=i\lambda\}.
\end{equation*}
Set
\begin{equation*}
P=\{(\varepsilon,\underline{\zeta},\lambda)\in\{0,1\}^r\times \mathbb{R}^r\times \Lambda; \underline{\zeta}\in Z(\varepsilon), \Theta(G,\varepsilon,\underline{\zeta},\lambda)\neq \emptyset\}.
\end{equation*}
Then the set
\begin{equation*}
\Theta(G)=\{\theta\in\mathcal{L}(G);\theta|_{\mathfrak{a}_-}\in\Theta\}
\end{equation*}
and the partition
\begin{equation*}
\Theta(G)=\bigsqcup_{(\varepsilon,\underline{\zeta},\lambda)\in P} \Theta(G,\varepsilon,\underline{\zeta},\lambda)
\end{equation*}
describe the set of equivalence classes $[L]$ of $G$-equivariant holomorphic line bundles such that the representations $l$ of $G$ are unitarizable and the partition of the set corresponding to the unitary equivalence classes of representations $l$ of $G$. In Section \ref{Applicatio}, we see an example of the set $\Theta(G)$ and the partition
\begin{equation*}
\Theta(G)=\bigsqcup_{(\varepsilon,\underline{\zeta},\lambda)\in P} \Theta(G,\varepsilon,\underline{\zeta},\lambda)
\end{equation*}
for a five-dimensional non-symmetric bounded homogeneous domain which is biholomorphic to the Siegel domain
\begin{equation*}
\mathcal{D}(\Omega_1)=\left\{U=\left[\begin{array}{ccc}z_1&0&z_4\\0&z_2&z_5\\z_4&z_5&z_3\end{array}\right]\in \mathrm{Sym}(3,\mathbb{C});\Im U\gg 0\right\},
\end{equation*}
where $G$ is the identity component of the holomorphic automorphism group of the domain.
As a byproduct of the proof of Theorem \ref{LetLkappak1}, we obtain the following theorem.
\begin{theorem}[see Corollary \ref{LetEandEbe}]\label{LetLkappak}
Let $L$ and $L'$ be $G$-equivariant holomorphic line bundles over $\mathcal{D}$. Suppose that the actions of $K$ on the fibers $L_p$ and $L'_p$ coincide. Then $L$ and $L'$ are isomorphic as $K$-equivariant holomorphic line bundles.
\end{theorem}
Let us explain the organization of this paper. In Section \ref{EXISTENCEO}, we prove Theorem \ref{Letmathcal}. In Section \ref{Normaljalg}, we review the theory of normal $j$-algebras. In Section \ref{Algebraicp}, we first prove Lemma \ref{Letpartial}, concerning a property of the gradation of the Lie algebra $\mathfrak{aut}_{hol}(\mathcal{D})$ of $\mathrm{Aut}_{hol}(\mathcal{D})$ and its bracket relations. The rest of Section \ref{Algebraicp} is devoted to the proof of Proposition \ref{Forasubalg2}, which is a generalization of Lemma \ref{Letpartial} in which $\mathfrak{aut}_{hol}(\mathcal{D})$ is replaced by $\mathfrak{g}$. Proposition \ref{Forasubalg2} plays an important role in the proof of Theorem \ref{Supposethat}, which implies Theorem \ref{LetLkappak} immediately. In Section \ref{Unitaryequ}, we show Theorem \ref{LetLkappak1} using Theorem \ref{Supposethat}. In Section \ref{Applicatio}, we see an example of the set $\Theta(G)$ and the partition \begin{equation*}
\Theta(G)=\bigsqcup_{(\varepsilon,\underline{\zeta},\lambda)\in P} \Theta(G,\varepsilon,\underline{\zeta},\lambda)
\end{equation*}
for the five-dimensional non-symmetric bounded homogeneous domain mentioned above.
\section{Existence of unitarizations}\label{EXISTENCEO}
Throughout this paper, for a Lie group $G_0$, we denote its Lie algebra by the corresponding Fraktur small letter $\mathfrak{g}_0$.
\subsection{General theory of holomorphic multiplier representations}\label{sec:1.1}
We review the theory of homogeneous holomorphic vector bundles and the theory of holomorphic multiplier representations in \cite{ishi 2011,kobayashi,tirao}.
Let $\mathcal{D}_0$ be a domain in $\mathbb{C}^N$, and let $G_0$ be a Lie group which acts holomorphically on $\mathcal{D}_0$. We assume that the action of $G_0$ on $\mathcal{D}_0$ is analytic, i.e. the map $G_0\times\mathcal{D}_0\ni(g,z)\mapsto gz\in\mathcal{D}_0$ is analytic. Let $\mathcal{V}$ be a finite-dimensional complex vector space.
\begin{definition}
An analytic function $m:G_0\times \mathcal{D}_0\rightarrow GL(\mathcal{V})$ is called a {\it multiplier} if the following cocycle condition is satisfied:
\begin{equation*}
m(gg',z)=m(g,g'z)m(g',z)\quad(g,g'\in G_0, z\in \mathcal{D}_0).
\end{equation*}
Moreover, a multiplier $m$ is called a holomorphic multiplier if $m(g,z)$ is holomorphic in $z\in \mathcal{D}_0$.
\end{definition}
\begin{remark}
When $\mathcal{V}=\mathbb{C}$, let
\begin{equation*}
\mathcal{G}=\{m:G_0\times\mathcal{D}_0\rightarrow \mathbb{C}^\times;m\text{ is a holomorphic multiplier}\}.
\end{equation*}
Pointwise multiplication of holomorphic multipliers gives $\mathcal{G}$ the natural structure of a group. We write the product of two elements $m,m'$ of $\mathcal{G}$ as $mm'$.
\end{remark}
Let $m:G_0\times\mathcal{D}_0\rightarrow GL(\mathcal{V})$ be a holomorphic multiplier. Let $T_{m}$ be the representation of $G_0$ defined by
\begin{equation*}
T_{m}(g)f(z)=m(g^{-1},z)^{-1}f(g^{-1}z) \quad(g\in G_0, f\in \mathcal{O}(\mathcal{D}_0,\mathcal{V}),z\in\mathcal{D}_0),
\end{equation*}
where $\mathcal{O}(\mathcal{D}_0,\mathcal{V})$ denotes the space of vector-valued holomorphic functions on $\mathcal{D}_0$. When $\mathcal{V}=\mathbb{C}$, a power of the complex Jacobian $J(g,z)^{-\gamma}\,(g\in G_0, z\in \mathcal{D}_0,\gamma\in\mathbb{Z})$ is an example of a holomorphic multiplier.
We fix a reference point $p_0\in\mathcal{D}_0$. Let $(\mathfrak{g}_0)_-\subset (\mathfrak{g}_0)_\mathbb{C}$ be the complex subalgebra defined by
\begin{equation}\label{mathfrakg0l}
(\mathfrak{g}_0)_-=\left\{Z=X+iY\in (\mathfrak{g}_0)_\mathbb{C}; \left.\frac{d}{dt}\right|_{t=0}e^{tX}p_0+i\left.\frac{d}{dt}\right|_{t=0}e^{tY}p_0\in T_{p_0}^{0,1}\mathcal{D}_0\right\},
\end{equation}
and let $\theta_m:(\mathfrak{g}_0)_-\rightarrow\mathfrak{gl}(\mathcal{V})$ be the complex linear map given by
\begin{equation*}
\theta_m(Z)=\left.\frac{d}{dt}\right|_{t=0}m(e^{tX},p_0)+i\left.\frac{d}{dt}\right|_{t=0}m(e^{tY},p_0)\quad(Z=X+iY\in(\mathfrak{g}_0)_-).
\end{equation*}
The smooth map
\begin{equation*}
F:G_0 \ni g\mapsto m(g,p_0)\in GL(\mathcal{V}) \end{equation*}
satisfies
\begin{equation*}
(F_*)_g\left(\left.\frac{d}{dt}\right|_{t=0} ge^{tX}\right)=\left.\frac{d}{dt}\right|_{t=0} m(g,e^{tX}p_0)m(e^{tX},p_0)\quad(g\in G_0,X\in\mathfrak{g}_0).
\end{equation*}
For $X\in\mathfrak{g}_0$, let us use the same symbol $X$ to denote the corresponding left invariant vector field on $G_0$. We extend $(F_*)_g$ to a $\mathbb{C}$-linear map for all $g\in G_0$. At the identity element $e$ of $G_0$, this is a complex-linear map $(F_*)_e:(\mathfrak{g}_0)_\mathbb{C}\rightarrow \mathfrak{gl}(\mathcal{V})$. Then for $Z\in(\mathfrak{g}_0)_-$, we have
\begin{equation*}
(F_* Z)_{F(g)}=\theta_m(Z)_{F(g)}\quad(g\in G_0).
\end{equation*}
Thus for $Z,Z'\in(\mathfrak{g}_0)_-$, we have
\begin{equation*}\begin{split}
\theta_m([Z,Z'])&=\theta_m([Z,Z'])_e=(F_*)_e[Z,Z']_e
=[\theta_m(Z),\theta_m(Z')]_e
\\&=[\theta_m(Z),\theta_m(Z')].
\end{split}\end{equation*}
We see from the above equation that $\theta_m:(\mathfrak{g}_0)_-\rightarrow\mathfrak{gl}(\mathcal{V})$ is a complex representation of $(\mathfrak{g}_0)_-$. Consider the action of $G_0$ on the trivial bundle $\mathcal{D}_0\times\mathcal{V}$ given by
\begin{equation}\label{gzzetagzmg}
g(z,v)=(gz,m(g,z)v)\quad(g\in G_0,z\in\mathcal{D}_0,v\in\mathcal{V}).
\end{equation}
We denote by $E_{m}$ the $G_0$-equivariant holomorphic vector bundle $\mathcal{D}_0\times\mathcal{V}$.
\begin{lemma}[{\cite[Lemma 1]{ishi 2011}}]\label{equivalentline}
Let $m, m': G_0\times \mathcal{D}_0\rightarrow GL(\mathcal{V})$ be holomorphic multipliers. Then $E_{m}$ and $E_{m'}$ are isomorphic as $G_0$-equivariant holomorphic vector bundles if and only if there exists a matrix-valued holomorphic function $f:\mathcal{D}_0\rightarrow GL(\mathcal{V})$ such that
\begin{equation}\label{mult2gzfgz}
m'(g,z)=f(gz)m(g,z)f(z)^{-1}\quad(g\in G_0, z\in\mathcal{D}_0).
\end{equation}
\end{lemma}
\begin{definition}
We say that two holomorphic multipliers $m,m': G_0\times \mathcal{D}_0\rightarrow GL(\mathcal{V})$ are $G_0$-{\it equivalent} if they satisfy \eqref{mult2gzfgz} with some matrix-valued function $f$.
\end{definition}
The next theorem is fundamental for our paper. Let $K_0$ be the isotropy subgroup of $G_0$ at $p_0$.
\begin{theorem}[{\cite[Theorem 3.6]{tirao}}]\label{fundamentalone}
Suppose that the group $G_0$ acts on $\mathcal{D}_0$ transitively.
Let $m,m': G_0\times \mathcal{D}_0\rightarrow GL(\mathcal{V})$ be holomorphic multipliers. Then holomorphic vector bundles $E_{m}$ and $E_{m'}$ are isomorphic as $G_0$-equivariant holomorphic vector bundles if and only if $\theta_{m}(Z)=\theta_{m'}(Z)$ for all $Z\in(\mathfrak{g}_0)_-$.
\end{theorem}
From now on, we discuss the representation $T_m$ and its unitarizations.
\begin{definition}
We say that the representation $T_{m}$ is {\it unitarizable} if there exists a nonzero Hilbert space $\mathcal{H}\subset\mathcal{O}(\mathcal{D}_0,\mathcal{V})$ satisfying the following conditions:
\begin{enumerate}
\item[(i)] the inclusion map $\iota:\mathcal{H}\hookrightarrow \mathcal{O}(\mathcal{D}_0,\mathcal{V})$ is continuous with respect to the compact-open topology of $\mathcal{O}(\mathcal{D}_0,\mathcal{V})$,
\item[(ii)] $T_{m}(g)\mathcal{H}\subset \mathcal{H}\,(g\in G_0)$ and $\|T_{m}(g)f\|=\|f\|\,(g\in G_0, f\in\mathcal{H})$, where $\|\cdot\|$ denotes the norm of $\mathcal{H}$.
\end{enumerate}
For a unitarizable representation $T_{m}$, we call the subrepresentation $(T_{m},\mathcal{H})$ a {\it unitarization} of the representation $T_{m}$ of $G_0$.
\end{definition}
A Hilbert space $\mathcal{H}$ satisfying the condition (i) is a reproducing kernel Hilbert space.
\begin{remark}
Let $m,m':G_0\times\mathcal{D}_0\rightarrow GL(\mathcal{V})$ be holomorphic multipliers. If $m$ and $m'$ are $G_0$-equivalent and $T_{m}$ is unitarizable, then $T_{m'}$ is also unitarizable, and the unitarizations are equivalent as unitary representations of $G_0$.
On the other hand, even though $m$ and $m'$ are not $G_0$-equivalent, the unitarizations of $T_{m}$ and $T_{m'}$ can be equivalent as unitary representations of $G_0$ (see Section \ref{Applicatio}).
\end{remark}
We fix a Hermitian inner product on $\mathcal{V}$. Suppose that a holomorphic multiplier representation $T_{m}$ has a unitarization $(T_{m},\mathcal{H})$. For $v\in\mathcal{V}$ and $w\in\mathcal{D}_0$, let $\mathcal{K}_{w,v}\in \mathcal{O}(\mathcal{D}_0,\mathcal{V})$ be the function defined by
\begin{equation*}
(f,\mathcal{K}_{w,v})_\mathcal{H}=(f(w),v)_\mathcal{V}\quad(f\in\mathcal{O}(\mathcal{D}_0,\mathcal{V})).
\end{equation*}
Let $\mathcal{K}:\mathcal{D}_0\times\mathcal{D}_0\rightarrow\mathrm{End}(\mathcal{V})$ be the reproducing kernel of $\mathcal{H}$ defined by
\begin{equation*}
\quad \mathcal{K}(z,w)v=\mathcal{K}_{w,v}(z)\quad(z,w\in\mathcal{D}_0,v\in\mathcal{V}).
\end{equation*}
Then $\mathcal{K}$ satisfies
\begin{equation}\label{mathcalKgz}
\mathcal{K}(gz,gw)=m(g,z)\mathcal{K}(z,w)m(g,w)^*\quad(z,w\in\mathcal{D}_0, g\in G_0).
\end{equation}
The next lemma shows that the converse also holds.
\begin{lemma}[{\cite[Lemma 5]{ishi 2011}}]\label{lem:1}
Let $\mathcal{H}\subset\mathcal{O}(\mathcal{D}_0,\mathcal{V})$ be a Hilbert space with reproducing kernel $\mathcal{K}$ and let $m: G_0\times\mathcal{D}_0\rightarrow GL(\mathcal{V})$ be a holomorphic multiplier. Then
$(T_{m}, \mathcal{H})$ is a unitarization of $T_{m}$ if and only if \eqref{mathcalKgz} holds.
\end{lemma}
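As an illustrative sanity check outside the general setting of this paper, condition \eqref{mathcalKgz} can be verified numerically for the unit disk with $G_0=SU(1,1)$ acting by M\"obius transformations, the multiplier $m(g,z)=(\bar b z+\bar a)^{\gamma}$ (an integer $\gamma$ keeps it single-valued), and the kernel $\mathcal{K}(z,w)=(1-z\bar w)^{-\gamma}$; all names in the sketch below are ad hoc.

```python
import cmath

GAMMA = 4  # integer weight, so the multiplier is single-valued

def mobius(a, b, z):
    """Action of g = [[a, b], [conj(b), conj(a)]] in SU(1,1) on the unit disk."""
    return (a * z + b) / (b.conjugate() * z + a.conjugate())

def multiplier(a, b, z):
    """Holomorphic multiplier m(g, z) = (conj(b) z + conj(a))^GAMMA."""
    return (b.conjugate() * z + a.conjugate()) ** GAMMA

def kernel(z, w):
    """Weighted Bergman-type kernel K(z, w) = (1 - z conj(w))^(-GAMMA)."""
    return (1 - z * w.conjugate()) ** (-GAMMA)

# An SU(1,1) element: |a|^2 - |b|^2 = 1.
a = cmath.cosh(0.7) * cmath.exp(0.3j)
b = cmath.sinh(0.7) * cmath.exp(-0.5j)
z, w = 0.2 + 0.1j, -0.3 + 0.4j

# K(gz, gw) should equal m(g, z) K(z, w) conj(m(g, w)).
lhs = kernel(mobius(a, b, z), mobius(a, b, w))
rhs = multiplier(a, b, z) * kernel(z, w) * multiplier(a, b, w).conjugate()
assert abs(lhs - rhs) < 1e-9
```

The equality holds exactly by the identity $1-(gz)\overline{(gw)}=(1-z\bar w)/\bigl((\bar b z+\bar a)\overline{(\bar b w+\bar a)}\bigr)$; the script only confirms it at a sample point.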
The next theorem is also fundamental for our paper.
\begin{theorem}[{\cite[Theorem 6]{ishi 2011}}, {\cite{kobayashi}}, {\cite{kunze}}]\label{uniqueness of unitarification}
If $G_0$ acts on $\mathcal{D}_0$ transitively and the map $K_0\ni k\mapsto m(k,p_0)\in GL(\mathcal{V})$ defines an irreducible representation of $K_0$, then a Hilbert space giving a unitarization of $T_{m}$ is unique if it exists. In particular, the unitarization is irreducible.
\end{theorem}
For any $g \in G_0$, $v\in \mathcal{V}$ and $f\in\mathcal{H}$, we have
\begin{equation*}\begin{split}
(f,T_{m}(g)\mathcal{K}(\cdot,p_0)v)&=(T_{m}(g^{-1})f,\mathcal{K}(\cdot,p_0)v)=(T_{m}(g^{-1})f(p_0),v)
\\&=(m(g,p_0)^{-1}f(gp_0),v).
\end{split}\end{equation*}
The right hand side of the above equation is a $C^\omega$-function of $g\in G_0$. Hence $\mathcal{K}(\cdot,p_0)v$ is a $C^\omega$-vector of the representation $(T_{m},\mathcal{H})$.
Let $\mathcal{V}=\mathbb{C}$. For $X\in\mathfrak{g}_0$ and $z\in\mathcal{D}_0$, we have
\begin{equation*}\begin{split}
dT_m(X)\mathcal{K}_{p_0}(z)&
=\left.\frac{d}{dt}\right|_{t=0} T_m(e^{tX})\mathcal{K}_{p_0}(z)
\\&=\left.\frac{d}{dt}\right|_{t=0} m(e^{-tX},z)^{-1}\mathcal{K}_{p_0}(e^{-tX}z)
\\&=\left.\frac{d}{dt}\right|_{t=0} \overline{m(e^{tX},p_0)^{-1}}\mathcal{K}(z,e^{tX}p_0)
\\&=-\mathcal{K}(z,p_0)\left.\frac{d}{dt}\right|_{t=0} \overline{m(e^{tX},p_0)}+\left.\frac{d}{dt}\right|_{t=0} \mathcal{K}(z,e^{tX}p_0).
\end{split}\end{equation*}
Thus for $Z=X+iY\in(\mathfrak{g}_0)_-$ and $z\in\mathcal{D}_0$, we have
\begin{equation*}\begin{split}
dT_m(X-iY)\mathcal{K}_{p_0}(z)&=-\mathcal{K}(z,p_0)\left.\frac{d}{dt}\right|_{t=0} \overline{m(e^{tX},p_0)+im(e^{tY},p_0)}\\&\quad+\left.\frac{d}{dt}\right|_{t=0} \mathcal{K}(z,e^{tX}p_0)-i\left.\frac{d}{dt}\right|_{t=0}\mathcal{K}(z,e^{tY}p_0).
\end{split}\end{equation*}
Since $\left.\frac{d}{dt}\right|_{t=0} e^{tX}p_0-i\left.\frac{d}{dt}\right|_{t=0} e^{tY}p_0\in T_{p_0}^{1,0}\mathcal{D}_0$, it follows that
\begin{equation}\label{dTmoverlin}
dT_m(\overline{Z})\mathcal{K}(\cdot,p_0)=-\overline{\theta_m(Z)}\mathcal{K}(\cdot,p_0)\quad(Z\in(\mathfrak{g}_0)_-),
\end{equation}
where $\overline{X+iY}=X-iY$ for $X,Y\in\mathfrak{g}_0$.
In general, for a unitary representation $(\pi,\mathcal{H}_0)$ of an arbitrary Lie group $G_0$, the moment map $J:\mathcal{P}(\mathcal{H}_0^\infty)\ni [v]
\mapsto J_{[v]}\in\mathfrak{g}_0^*$ is defined by
\begin{equation*}
J_{[v]}(X)=\frac{1}{i}\frac{(d\pi(X)v,v)_{\mathcal{H}_0}}{(v,v)_{\mathcal{H}_0}}\quad(X\in\mathfrak{g}_0).
\end{equation*}
Let $J:\mathcal{P}(\mathcal{H}^\infty)\ni [v]
\mapsto J_{[v]}\in\mathfrak{g}_0^*$ be the moment map of $(T_{m},\mathcal{H})$, and we put
\begin{equation}\label{xiJKcdotpi}
\xi=J_{[\mathcal{K}(\cdot,p_0)]}\in\mathfrak{g}_0^*.
\end{equation}
Then by (\ref{dTmoverlin}), we have
\begin{equation*}
\theta_m(Z)=i\xi(Z)\quad(Z\in (\mathfrak{g}_0)_-).
\end{equation*}
\subsection{The case of bounded homogeneous domains}
A bounded domain is said to be homogeneous if its holomorphic automorphism group acts on it transitively. Let $\mathcal{D}\subset \mathbb{C}^N$ be a domain which is biholomorphic to a bounded homogeneous domain. It is well known that the holomorphic automorphism group $\mathrm{Aut}_{hol}(\mathcal{D})$ of $\mathcal{D}$ carries a canonical structure of a Lie group, compatible with the group action.
\begin{definition}
For an arbitrary Lie group $G_0$, we call a maximal connected real split solvable Lie subgroup of $G_0$ an {\it Iwasawa} subgroup.
\end{definition}
Let $\mathrm{Aut}_{hol}(\mathcal{D})^o$ be the identity component of $\mathrm{Aut}_{hol}(\mathcal{D})$. It is known \cite[Theorem 3.2]{kaneyuki} that $\mathrm{Aut}_{hol}(\mathcal{D})^o$ is isomorphic to the identity component of a linear real algebraic Lie group. Let $G$ be the identity component of a real algebraic subgroup of $\mathrm{Aut}_{hol}(\mathcal{D})^o$ which acts on $\mathcal{D}$ transitively. For any linear real algebraic group $G_0$, the identity component $G_0^o$ can be topologically decomposed into the direct product of a maximal compact subgroup of $G_0$ and an Iwasawa subgroup of $G_0$ (see \cite[Chapter 4, Theorem 4.7]{encyclopedia}). We fix a reference point $p\in\mathcal{D}$. It is known that the isotropy subgroup of $\mathrm{Aut}_{hol}(\mathcal{D})^o$ at $p$ is a maximal compact subgroup of $\mathrm{Aut}_{hol}(\mathcal{D})^o$.
The group $G$ contains an Iwasawa subgroup $B$ of $\mathrm{Aut}_{hol}(\mathcal{D})^o$ which acts on $\mathcal{D}$ simply transitively, and hence we can identify $\mathcal{D}$ with $B$.
Note that the isotropy subgroup $K$ of $G$ at $p$ is connected because $\mathcal{D}$ is simply connected and $G$ is connected.
In general a bounded homogeneous domain is a contractible Stein manifold. Thus every $G$-equivariant holomorphic line bundle over $\mathcal{D}$ is isomorphic as a $G$-equivariant holomorphic line bundle to $E_{m}=\mathcal{D}\times \mathbb{C}$ with some holomorphic multiplier $m: G\times \mathcal{D}\rightarrow \mathbb{C}^\times$. For $p\in\mathcal{D}$, let $\mathfrak{g}_-\subset \mathfrak{g}_\mathbb{C}$ be the complex subalgebra defined by \eqref{mathfrakg0l}.
\begin{theorem}[{\cite[Theorem 3.6]{tirao}}]\label{fundamentaltwo}
Let $\theta:\mathfrak{g}_-\rightarrow\mathfrak{gl}(\mathcal{V})$ be a complex representation of $\mathfrak{g}_-$ whose restriction to $\mathfrak{k}$ lifts to a representation of $K$. Then there exists a holomorphic multiplier $m: G\times\mathcal{D}\rightarrow GL(\mathcal{V})$ such that $\theta(Z)=\theta_{m}(Z)$ for all $Z\in\mathfrak{g}_-$.
\end{theorem}
We obtain the following lemma from the decomposition $G=BK$.
\begin{lemma}\label{extendm}
Let $m:G\times\mathcal{D}\rightarrow GL(\mathcal{V})$ be a \textup{(}not necessarily holomorphic\textup{)} multiplier. If a $\mathcal{V}$-valued function $f$ on $\mathcal{D}$ satisfies
\begin{equation}\label{multconp1q}
m(k,p)f(p)=f(p)\quad(k\in K)
\end{equation}
and
\begin{equation}\label{fbzmultbzf}
f(bz)=m(b,z)f(z)\quad(b\in B, z\in\mathcal{D}),
\end{equation}
then we have
\begin{equation}\label{fgzmultgzf}
f(gz)=m(g,z)f(z)\quad(g\in G, z\in\mathcal{D}).
\end{equation}
\end{lemma}
\begin{proof}
Define $F:G\rightarrow\mathcal{V}$ by $F(g)=m(g,p)^{-1}f(gp)$. Since $B$ acts on $\mathcal{D}$ transitively, every point of $\mathcal{D}$ is of the form $bp$ with $b\in B$, and the cocycle property $m(gg',z)=m(g,g'z)m(g',z)$ of the multiplier shows that \eqref{fgzmultgzf} is equivalent to $F(gb)=F(b)$ for all $g\in G$ and $b\in B$, that is, to the constancy of $F$. Now for $b\in B$ and $k\in K$, we have $m(bk,p)=m(b,kp)m(k,p)=m(b,p)m(k,p)$ since $kp=p$, and hence
\begin{equation*}
F(bk)=m(k,p)^{-1}m(b,p)^{-1}f(bp)=m(k,p)^{-1}f(p)=f(p)
\end{equation*}
by \eqref{fbzmultbzf} at $z=p$ and \eqref{multconp1q}. Since $G=BK$, the function $F$ is constant, and \eqref{fgzmultgzf} follows.
\end{proof}
The following theorem is an application of the previous lemma.
\begin{theorem}\label{LetmGtimes}
Let $m:G\times \mathcal{D}\rightarrow GL(\mathcal{V})$ be a holomorphic multiplier, and let $\mathcal{H}\subset \mathcal{O}(\mathcal{D},\mathcal{V})$ be a reproducing kernel Hilbert space. We fix a Hermitian inner product on $\mathcal{V}$ such that $m(k,p)\in U(\mathcal{V})$ for all $k\in K$. Suppose that the reproducing kernel $\mathcal{K}$ of $\mathcal{H}$ satisfies $\mathcal{K}(p,p)\in \mathrm{Hom}_K(\mathcal{V},\mathcal{V})$, and the representation $T_m$ satisfies $T_{m}(b)\mathcal{H}\subset\mathcal{H}\,(b\in B)$ and $\|T_{m}(b)f\|=\|f\|\,(b\in B,f\in\mathcal{H})$. Then we have $T_{m}(g)\mathcal{H}\subset\mathcal{H}\,(g\in G)$ and $\|T_{m}(g)f\|=\|f\|\,(g\in G,f\in\mathcal{H})$.
\end{theorem}
\begin{proof}
By Lemma \ref{lem:1}, for all $g\in B$, we have
\begin{equation}\label{mathcalKbz}
\mathcal{K}(gz,gz)=m(g,z)\mathcal{K}(z,z)m(g,z)^*\quad(z\in\mathcal{D}).
\end{equation}
Let $\mathcal{K}^d$ be the $\mathrm{End}(\mathcal{V})$-valued function on $\mathcal{D}$ given by $\mathcal{K}^d(z)=\mathcal{K}(z,z)$, and let $\tilde{m}:G\times \mathcal{D}\rightarrow GL(\mathrm{End}(\mathcal{V}))$ be the multiplier defined by
\begin{equation*}
\tilde{m}(g,z)A=m(g,z)\circ A\circ m(g,z)^*\quad(A\in \mathrm{End}(\mathcal{V})).
\end{equation*}
Applying Lemma \ref{extendm} to $\tilde{m}$ and $\mathcal{K}^d$, we see that \eqref{mathcalKbz} holds for all $g\in G$. By analytic continuation, the equation
\begin{equation*}
\mathcal{K}(gz,gw)=m(g,z)\mathcal{K}(z,w)m(g,w)^*\quad( g\in G, z,w\in\mathcal{D})
\end{equation*}
holds. This proves the result by Lemma \ref{lem:1}.
\end{proof}
\begin{remark}
When $\mathcal{V}=\mathbb{C}$, the condition $\mathcal{K}(p,p)\in \mathrm{Hom}_K(\mathcal{V},\mathcal{V})$ in the previous theorem holds automatically.
\end{remark}
\section{Normal $j$-algebras and bounded homogeneous domains}\label{Normaljalg}
In this section, we review the theory of normal $j$-algebras in \cite{datri,miatello,pyatetskii,Rossi,CIME} and explain the relationship between normal $j$-algebras and bounded homogeneous domains.
For $X\in\mathfrak{aut}_{hol}(\mathcal{D})$, let $X^\#$ denote the vector field on $\mathcal{D}$ given by
\begin{equation*}
X^\#_z=\left.\frac{d}{dt}\right|_{t=0} e^{tX}z\quad(z\in\mathcal{D}).
\end{equation*}
We fix a $B$-invariant K\"{a}hler metric $\langle\langle\cdot,\cdot\rangle\rangle$ on $\mathcal{D}$ such that $\langle\langle j_0X,j_0 Y\rangle\rangle=\langle\langle X,Y\rangle\rangle$ for all vector fields $X,Y$ over $\mathcal{D}$, where $j_0$ denotes the complex structure on $\mathcal{D}$ induced from that of $\mathbb{C}^N$. For example, $\langle\langle\cdot,\cdot\rangle\rangle$ may be the Bergman metric on $\mathcal{D}$, or if $\mathcal{D}$ is contained in a complex domain $\hat{\mathcal{D}}$ of larger dimension as a $B$-submanifold, then we can take $\langle\langle\cdot,\cdot\rangle\rangle$ to be the restriction of the Bergman metric of $\hat{\mathcal{D}}$ to $\mathcal{D}$. Let $j$ be the complex structure on $\mathfrak{b}$ given by
\begin{equation*}
(jX)^\#_p=j_0X^\#_p\quad(X\in\mathfrak{b}),
\end{equation*}
and let $\langle\cdot,\cdot\rangle$ be the inner product on $\mathfrak{b}$ given by
\begin{equation*}
\langle X,Y\rangle=\langle\langle X^\#,Y^\#\rangle\rangle(p)\quad(X,Y\in\mathfrak{b}).
\end{equation*}
By Gindikin, Piatetski-Shapiro, and Vinberg \cite[Part III, Lemma 1]{CIME}, there exists a linear form $\omega\in\mathfrak{b}^*$ such that
\begin{equation*}
\langle X,Y\rangle=\omega([jX,Y])\quad(X,Y\in\mathfrak{b}),
\end{equation*}
and $(\mathfrak{b},j,\omega)$ is a normal $j$-algebra. Namely, $\mathfrak{b}$ is a real split solvable Lie algebra with the equality
\begin{equation}\label{XYjjXyjXjY}
[X,Y]+j[jX,Y]+j[X,jY]=[jX,jY]\quad(X,Y\in\mathfrak{b})
\end{equation}
and the bilinear form $\langle X,Y\rangle=\omega([jX,Y])\,(X,Y\in\mathfrak{b})$ is a $j$-invariant inner product, that is,
\begin{equation*}
\langle X,X\rangle>0\quad(0\neq X\in\mathfrak{b}),
\end{equation*}
\begin{equation*}
\langle jX,jY\rangle=\langle X,Y\rangle \quad(X,Y\in\mathfrak{b}).
\end{equation*}
It is known that $\mathfrak{a}=[\mathfrak{b},\mathfrak{b}]^\perp$ is a Cartan subalgebra of $\mathfrak{b}$. For $\alpha\in\mathfrak{a}^*$, let $\mathfrak{b}_\alpha$ be the root space associated to $\alpha$ given by \begin{equation*}
\mathfrak{b}_\alpha=\{X\in\mathfrak{b};[A,X]=\alpha(A)X\,(A\in\mathfrak{a})\}.
\end{equation*}
\begin{theorem}[Piatetski-Shapiro, {\cite[Chapter 2, Section 3 and 5]{pyatetskii}}]\label{Forasuitab}
For a suitable basis $A_1,\cdots, A_r$ of $\mathfrak{a}$, the following assertions hold. If we put $E_k=-jA_k$, then we have $[A_k,E_l]=\delta_{k,l}E_l\,(1\leq k,l\leq r)$. If we denote the dual basis of $A_1,\cdots, A_r$ by $\alpha_1,\cdots, \alpha_r\in\mathfrak{a}^*$, then we have
\begin{equation*}
\mathfrak{b}=\mathfrak{b}(0)\oplus\mathfrak{b}(1/2)\oplus\mathfrak{b}(1),
\end{equation*}
where
\begin{equation*}\begin{split}
&\mathfrak{b}(0)=\mathfrak{a}\oplus
\sideset{}{^\oplus}\sum_{1\leq k<l\leq r}\mathfrak{b}_{(\alpha_l-\alpha_k)/2},\quad
\mathfrak{b}(1/2)=\sideset{}{^\oplus}\sum_{1\leq k\leq r}\mathfrak{b}_{\alpha_k/2},
\\&
\mathfrak{b}(1)=\sideset{}{^\oplus}\sum_{1\leq k\leq r}\mathfrak{b}_{\alpha_k}\oplus
\sideset{}{^\oplus}\sum_{1\leq k<l\leq r}\mathfrak{b}_{(\alpha_l+\alpha_k)/2},
\end{split}\end{equation*}
and the equalities $\mathfrak{b}_{\alpha_k}=\mathbb{R}E_k$, $j\mathfrak{b}_{(\alpha_l-\alpha_k)/2}=\mathfrak{b}_{(\alpha_l+\alpha_k)/2}$, and $j\mathfrak{b}_{\alpha_k/2}=\mathfrak{b}_{\alpha_k/2}$ hold. We have the relation \begin{equation*}
[\mathfrak{b}(\gamma),\mathfrak{b}(\gamma')]\subset\mathfrak{b}(\gamma+\gamma')\quad(\gamma,\gamma'=0,1/2,1),
\end{equation*}
where we put $\mathfrak{b}(\gamma)=0$ for $\gamma>1$.
\end{theorem}
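The smallest instance of this structure can be written out completely; the following two-dimensional normal $j$-algebra corresponds to the upper half-plane.
\begin{example}
Let $\mathfrak{b}=\mathbb{R}A\oplus\mathbb{R}E$ with $[A,E]=E$, and define $j$ by $jA=-E$, $jE=A$. For $X=A$, $Y=E$, the left-hand side of \eqref{XYjjXyjXjY} is $[A,E]+j[-E,E]+j[A,A]=E$, and the right-hand side is $[jA,jE]=[-E,A]=E$; the other cases are analogous. For $\omega\in\mathfrak{b}^*$ with $\omega(E)>0$, we get
\begin{equation*}
\langle A,A\rangle=\omega([jA,A])=\omega(E),\quad\langle E,E\rangle=\omega([jE,E])=\omega(E),\quad\langle A,E\rangle=\omega([-E,E])=0,
\end{equation*}
so $(\mathfrak{b},j,\omega)$ is a normal $j$-algebra of rank $r=1$ with $\mathfrak{a}=\mathbb{R}A$, $A_1=A$, $E_1=-jA_1=E$, and $\mathfrak{b}(1/2)=0$.
\end{example}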
Following \cite[Chapter 2, Section 5]{pyatetskii}, we introduce the Siegel domain $\mathcal{D}(\Omega,Q)$, on which the group $B$ acts simply transitively by affine automorphisms, as follows. Put
\begin{equation*}
E=E_1+\cdots +E_r.
\end{equation*}
Let $B(0)$ be the connected Lie subgroup of $B$ with Lie algebra $\mathfrak{b}(0)$, and let $\Omega=\mathrm{Ad}(B(0))E\subset \mathfrak{b}(1)$. Let $Q:(\mathfrak{b}(1/2),j)\times (\mathfrak{b}(1/2),j)\rightarrow \mathfrak{b}(1)_\mathbb{C}$ be the sesquilinear map defined by \begin{equation*}
Q(V,V')=\frac{1}{4}([jV,V']+i[V,V'])\quad(V,V'\in\mathfrak{b}(1/2)).
\end{equation*}
Then $\Omega\subset\mathfrak{b}(1)$ is an open convex cone containing no straight lines, and $B(0)$ acts on $\Omega$ simply transitively. One has $Q(V,V)\in \overline{\Omega}\setminus\{0\}$ for all $V\in\mathfrak{b}(1/2)\setminus\{0\}$. Let
\begin{equation*}
\mathcal{D}(\Omega,Q)=\{(U,V)\in \mathfrak{b}(1)_\mathbb{C}\oplus\mathfrak{b}(1/2);\Im U-Q(V,V)\in\Omega\}.
\end{equation*}
The subgroup $B(0)$ acts on $\mathcal{D}(\Omega,Q)$ by
\begin{equation*}
t_0(U,V)=(\mathrm{Ad}(t_0)U,\mathrm{Ad}(t_0)V)\quad(t_0\in B(0), (U,V)\in\mathcal{D}(\Omega,Q)),
\end{equation*}
and for $U_0\in\mathfrak{b}(1)$ and $V_0\in\mathfrak{b}(1/2)$, the element $\exp(U_0+V_0)$ of $B$ acts on $\mathcal{D}(\Omega,Q)$ by
\begin{equation}\label{expX0W0ZWZ}\begin{split}
\exp(U_0+V_0)(U,V)=(U+U_0+2iQ(V,V_0)+iQ(V_0,V_0),V+V_0)
\\((U,V)\in\mathcal{D}(\Omega,Q)).
\end{split}\end{equation}
Define $\mathcal{C}:\mathcal{D}(\Omega,Q)\rightarrow\mathcal{D}$ by $\mathcal{C}(b(iE,0))=bp\,(b\in B)$. Then the map $\mathcal{C}$ is biholomorphic and is a generalization of the Cayley transform.
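For instance, when $\mathfrak{b}=\mathbb{R}A_1\oplus\mathbb{R}E_1$ with $[A_1,E_1]=E_1$ is two-dimensional, we have $B(0)=\exp(\mathbb{R}A_1)$, $\Omega=\mathrm{Ad}(B(0))E_1=\mathbb{R}_{>0}E_1$, and $\mathfrak{b}(1/2)=0$, so that
\begin{equation*}
\mathcal{D}(\Omega,Q)=\{U\in\mathfrak{b}(1)_\mathbb{C};\Im U\in\Omega\}\simeq\{u\in\mathbb{C};\Im u>0\}
\end{equation*}
is the upper half-plane. In accordance with \eqref{expX0W0ZWZ}, the element $\exp(tA_1)$ acts by $U\mapsto e^tU$ and $\exp(U_0)\,(U_0\in\mathfrak{b}(1))$ acts by $U\mapsto U+U_0$, and when $\mathcal{D}$ is the unit disc, the map $\mathcal{C}$ is a Cayley transform of the upper half-plane onto the disc.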
\begin{remark}\label{Bydatrithe}
\begin{enumerate}
\item[(i)] The exponential map $\exp:\mathfrak{b}\rightarrow B$ is bijective (\cite[Theorem 5.2.16]{fujiwara}), and we have $B=B(0)\ltimes \exp(\mathfrak{b}(1/2)\oplus\mathfrak{b}(1))$ (see Lemma \ref{LetG0beaco}).
\item[(ii)] By J.E. D'Atri \cite{datri}, the decomposition
\begin{equation*}
[\mathfrak{b},\mathfrak{b}]=
\sideset{}{^\oplus}\sum_{1\leq k<l\leq r}\mathfrak{b}_{(\alpha_l-\alpha_k)/2}\oplus
\sideset{}{^\oplus}\sum_{1\leq k\leq r}\mathfrak{b}_{\alpha_k/2}
\oplus\sideset{}{^\oplus}\sum_{1\leq k\leq r}\mathfrak{b}_{\alpha_k}\oplus
\sideset{}{^\oplus}\sum_{1\leq k<l\leq r}\mathfrak{b}_{(\alpha_l+\alpha_k)/2}
\end{equation*}
is orthogonal with respect to $\langle\cdot,\cdot \rangle$.
\item[(iii)] The number $r=\dim \mathfrak{a}$ is called the rank of $\mathfrak{b}$.
\item[(iv)] An open convex cone $\Omega_0$ in a finite-dimensional vector space $\mathcal{V}$ is called regular if $\Omega_0$ contains no straight lines, and is called homogeneous if the group
\begin{equation*}
G(\Omega_0)=\{A\in GL(\mathcal{V});A\Omega_0=\Omega_0\}
\end{equation*}
acts on $\Omega_0$ transitively. Thus the open convex cone $\Omega$ in $\mathfrak{b}(1)$ is regular and homogeneous.
\end{enumerate}
\end{remark}
\begin{example}
Let $q\geq r\geq 1$. The domain
\begin{equation*}
\mathcal{D}_I(q,r)=\{z\in M(q,r;\mathbb{C});\|z\|_{op}<1 \} \end{equation*}
is a bounded symmetric domain of type I, where $\|z\|_{op}$ denotes the operator norm of $z$. Put
\begin{equation*}\begin{split}
H_r(\mathbb{C})=\{U\in M_r(\mathbb{C});U=\overline{{}^tU}\},
\\\mathcal{P}_r=\{U\in H_r(\mathbb{C});U\gg 0\}.
\end{split}\end{equation*}
We have the following isomorphisms:
\begin{equation*}\begin{split}
&\mathfrak{b}(1)\simeq H_r(\mathbb{C}),\quad\Omega\simeq\mathcal{P}_r,\quad\mathfrak{b}(1/2)\simeq M(q-r,r;\mathbb{C}),
\end{split}\end{equation*}
and the following domain is biholomorphic to $\mathcal{D}_I(q,r)$:
\begin{equation*}\begin{split}
\mathcal{D}(\mathcal{P}_r,\mathcal{Q})&\simeq\left\{\left(\begin{array}{c}U\\V\end{array}\right)\in M(q,r;\mathbb{C});\Im U-\mathcal{Q}(V,V)\gg 0\right\},
\end{split}\end{equation*}
where $\mathcal{Q}(V,V')=\tfrac{1}{2}\overline{{}^tV'}V$.
\end{example}
\begin{lemma}\label{LetG0beaco}
Let $G_0$ be a connected and simply connected real split solvable Lie group, let $\exp:\mathfrak{g}_0\rightarrow G_0$ be the exponential map, and let $\mathfrak{h},\mathfrak{h}'\subset\mathfrak{g}_0$ be subalgebras such that $\mathfrak{g}_0=\mathfrak{h}\ltimes \mathfrak{h}'$. Then the subsets $\exp(\mathfrak{h})$ and $\exp(\mathfrak{h}')$ of $G_0$ are connected Lie subgroups of $G_0$.
\end{lemma}
\begin{proof}
Let $H$ and $H'$ be connected and simply connected Lie groups with Lie algebras $\mathfrak{h}$ and $\mathfrak{h}'$, respectively. By {\cite[Theorem 1.125]{Knapp}}, there exists an action $\tau$ of $H$ on $H'$ by automorphisms such that the Lie algebra of the semidirect product $H\times_\tau H'$ is isomorphic to $\mathfrak{h}\ltimes\mathfrak{h}'$. Let $\tilde{H}$ and $\tilde{H}'$ be the connected Lie subgroups of $G_0$ with Lie algebras $\mathfrak{h}$ and $\mathfrak{h}'$, respectively. Since the Lie groups $G_0$ and $H\times_\tau H'$ are isomorphic, the connected Lie subgroups $\tilde{H}$ and $\tilde{H}'$ are simply connected. By {\cite[Theorem 5.2.16]{fujiwara}}, we have $\exp(\mathfrak{h})=\tilde{H}$ and $\exp(\mathfrak{h}')=\tilde{H}'$ since $\mathfrak{h}$ and $\mathfrak{h}'$, being split solvable, are exponential.
\end{proof}
\section{Algebraic properties of $\mathfrak{g}$}\label{Algebraicp}
\subsection{Holomorphic complete vector fields on Siegel domains}\label{Structuret}\label{Gradingstr}
Let $\mathcal{U}$ be a finite-dimensional vector space over $\mathbb{R}$, let $\Omega_0\subset\mathcal{U}$ be an open regular convex cone, let $\mathcal{V}$ be a finite-dimensional vector space over $\mathbb{C}$, and let $Q_0:\mathcal{V}\times\mathcal{V}\rightarrow \mathcal{U}_\mathbb{C}$ be an $\Omega_0$-positive Hermitian map, that is,
\begin{equation*}
Q_0(v,v)\in\overline{\Omega_0}\setminus\{0\}\quad(v\in\mathcal{V}\setminus\{0\}).
\end{equation*}
The following domain $\mathcal{D}(\Omega_0, Q_0)\subset \mathcal{U}_\mathbb{C}\oplus\mathcal{V}$ is called a Siegel domain:
\begin{equation*}
\mathcal{D}(\Omega_0, Q_0)=\{(u,v)\in\mathcal{U}_\mathbb{C}\oplus\mathcal{V};\Im u-Q_0(v,v)\in\Omega_0\}.
\end{equation*}
Let $\mathfrak{X}=\mathfrak{X}(\mathcal{D}(\Omega_0, Q_0))$ be the space of complete holomorphic vector fields on $\mathcal{D}(\Omega_0, Q_0)$. The map
\begin{equation*}
\mathfrak{aut}_{hol}(\mathcal{D}(\Omega_0, Q_0))\ni X\mapsto X^\#\in\mathfrak{X}
\end{equation*}
is bijective, and we have
\begin{equation*}
[X,Y]^\#=[Y^\#,X^\#]\quad(X,Y\in\mathfrak{aut}_{hol}(\mathcal{D}(\Omega_0, Q_0))).
\end{equation*}
For $u_0\in \mathcal{U}$, let $\partial_{u_0}$ be the holomorphic vector field on $\mathcal{D}(\Omega_0, Q_0)$ given by
\begin{equation*}
\partial_{u_0}(u,v)=(u_0,0)\quad((u,v)\in\mathcal{D}(\Omega_0, Q_0)).
\end{equation*}
Here, for every $(u,v)\in\mathcal{D}(\Omega_0,Q_0)$, we identify the tangent space $T_{(u,v)}\mathcal{D}(\Omega_0, Q_0)$ with $\mathcal{U}_\mathbb{C}\oplus\mathcal{V}$, and we regard a vector field $X\in\mathfrak{X}$ as a $(\mathcal{U}_\mathbb{C}\oplus\mathcal{V})$-valued function. We denote by $D_X$ the corresponding differential operator
\begin{equation*}
D_X f(u,v)=\left.\frac{d}{dt}\right|_{t=0} f((u,v)+tX(u,v)),
\end{equation*}
where $f$ is a vector-valued smooth function on $\mathcal{D}(\Omega_0,Q_0)$. Then we have
\begin{equation*}
[X,Y]=D_XY-D_YX\quad(X,Y\in\mathfrak{X}).
\end{equation*}
For $v_0\in \mathcal{V}$, let $\tilde{\partial}_{v_0}$ be the holomorphic vector field on $\mathcal{D}(\Omega_0, Q_0)$ given by
\begin{equation*}
\tilde{\partial}_{v_0}(u,v)= (2iQ_0(v,v_0),v_0)\quad((u,v)\in\mathcal{D}(\Omega_0, Q_0)).
\end{equation*}
For complex endomorphisms $\mathcal{A}\in\mathfrak{gl}(\mathcal{U}_\mathbb{C})$ and $\mathcal{B}\in \mathfrak{gl}(\mathcal{V})$, let $\mathcal{X}(\mathcal{A},\mathcal{B})$ be the holomorphic vector field on $\mathcal{D}(\Omega_0, Q_0)$ given by \begin{equation*}
\mathcal{X}(\mathcal{A},\mathcal{B})(u,v)=(\mathcal{A}u,\mathcal{B}v)\quad((u,v)\in\mathcal{D}(\Omega_0, Q_0)).
\end{equation*}
We say $\mathcal{B}\in\mathfrak{gl}(\mathcal{V})$ is associated with $\mathcal{A}\in\mathfrak{gl}(\mathcal{U}_\mathbb{C})$ if the equality
\begin{equation*}
\mathcal{A}Q_0(v,v')=Q_0(\mathcal{B}v,v')+Q_0(v,\mathcal{B}v')\quad(v,v'\in \mathcal{V})
\end{equation*}
holds. Let $\partial$ be the infinitesimal generator of the one-parameter group of transformations $\mathcal{D}(\Omega_0,Q_0)\ni (u,v)\mapsto (e^{t}u,e^{t/2}v)\in\mathcal{D}(\Omega_0,Q_0)\,(t\in\mathbb{R})$. Then we have $\partial(u,v)=(u,\tfrac{1}{2}v)\,((u,v)\in\mathcal{D}(\Omega_0,Q_0))$, that is,
\begin{equation*}
\partial=\mathcal{X}(\mathrm{id}_{\mathcal{U}_\mathbb{C}},\tfrac{1}{2}\mathrm{id}_\mathcal{V}).
\end{equation*}
For $\gamma\in\mathbb{R}$, we put
\begin{equation*}
{\mathfrak{X}}(\gamma)=\{X\in\mathfrak{X};[\partial,X]=\gamma X\}.\end{equation*}
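As a direct illustration of this grading, the relation $[X,Y]=D_XY-D_YX$ shows that the constant fields lie in $\mathfrak{X}(-1)$: since $\partial_{u_0}$ is a constant function and $D_{\partial_{u_0}}\partial(u,v)=\left.\frac{d}{dt}\right|_{t=0}\partial((u,v)+t(u_0,0))=(u_0,0)$, we get
\begin{equation*}
[\partial,\partial_{u_0}]=D_{\partial}\partial_{u_0}-D_{\partial_{u_0}}\partial=-\partial_{u_0}.
\end{equation*}
A similar computation gives $[\partial,\tilde{\partial}_{v_0}]=-\tfrac{1}{2}\tilde{\partial}_{v_0}$, so that $\partial_{u_0}\in\mathfrak{X}(-1)$ and $\tilde{\partial}_{v_0}\in\mathfrak{X}(-1/2)$.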
Let $\mathfrak{g}(\Omega_0)$ denote the Lie algebra of the Lie group $G(\Omega_0)$.
\begin{theorem}[Kaup, Matsushima, Ochiai, {\cite[Theorems 4 and 5]{kaup}}]\label{kaupth}
The Lie algebra $\mathfrak{X}$ has the following gradation:
\begin{equation*}
\mathfrak{X}=\mathfrak{X}(-1)\oplus\mathfrak{X}(-1/2)\oplus\mathfrak{X}(0)\oplus\mathfrak{X}(1/2)\oplus\mathfrak{X}(1),
\end{equation*}
and the non-positive part $\sum_{\gamma\leq 0}\mathfrak{X}(\gamma)$ is the Lie algebra corresponding to the group of affine automorphisms of $\mathcal{D}(\Omega_0,Q_0)$. One has
\begin{equation*}
\mathfrak{X}(-1)=\{\partial_{u_0};u_0\in \mathcal{U}\},
\end{equation*}
\begin{equation*}
\mathfrak{X}(-1/2)=\{\tilde{\partial}_{v_0};v_0\in \mathcal{V}\},
\end{equation*}
and \begin{equation*}\begin{split}
\mathfrak{X}(0)=\{\mathcal{X}(\mathcal{A},\mathcal{B});\mathcal{A}\in\mathfrak{g}(\Omega_0),\mathcal{B}\in\mathfrak{gl}(\mathcal{V}),\mathcal{B}\textup{ is associated with }\mathcal{A}\}.
\end{split}\end{equation*}
\end{theorem}
We denote by $\mathcal{D}(\Omega_0)$ the tube domain $\{u\in\mathcal{U}_\mathbb{C};\Im u\in\Omega_0\}$, which is a special case of the Siegel domain with $\mathcal{V}=0$ and $Q_0=0$, and for $\mathcal{A}\in\mathfrak{gl}(\mathcal{U}_\mathbb{C})$, let $\mathcal{X}(\mathcal{A})$ be the holomorphic vector field on $\mathcal{D}(\Omega_0)$ given by $\mathcal{X}(\mathcal{A})(u)=\mathcal{A}u\,(u\in\mathcal{D}(\Omega_0))$. Then we see that
\begin{equation*}
\mathfrak{X}(\mathcal{D}(\Omega_0))(0)=\{\mathcal{X}(\mathcal{A});\mathcal{A}\in\mathfrak{g}(\Omega_0)\}.
\end{equation*}
We have the following formulas (see {\cite[Chapter V, \S 1]{satake}}):
\begin{equation}\label{XABUAU}
[\mathcal{X}(\mathcal{A},\mathcal{B}),\partial_{u_0}]=-\partial_{\mathcal{A}{u_0}},
\end{equation}
\begin{equation}\label{XABVBV}
[\mathcal{X}(\mathcal{A},\mathcal{B}),\tilde{\partial}_v]=-\tilde{\partial}_{\mathcal{B}v},
\end{equation}
\begin{equation}\label{XABABXAABB}
[\mathcal{X}(\mathcal{A},\mathcal{B}),\mathcal{X}(\mathcal{A}',\mathcal{B}')]=-\mathcal{X}([\mathcal{A},\mathcal{A}'],[\mathcal{B},\mathcal{B}']).
\end{equation}
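These formulas follow directly from the relation $[X,Y]=D_XY-D_YX$; for example, since $\partial_{u_0}$ is a constant function,
\begin{equation*}
[\mathcal{X}(\mathcal{A},\mathcal{B}),\partial_{u_0}]=D_{\mathcal{X}(\mathcal{A},\mathcal{B})}\partial_{u_0}-D_{\partial_{u_0}}\mathcal{X}(\mathcal{A},\mathcal{B})=0-(\mathcal{A}u_0,0)=-\partial_{\mathcal{A}u_0},
\end{equation*}
which is \eqref{XABUAU}.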
Next we give explicit descriptions of $\mathfrak{X}(1/2)$ and $\mathfrak{X}(1)$.
\begin{proposition}[Satake, {\cite[Chapter V, Proposition 2.1]{satake}}]\label{Everyeleme}
Every element of $\mathfrak{X}(1/2)$ is uniquely written as
\begin{equation}\label{mathcalYPh}
\mathcal{Y}_{\Phi,c}(u,v)=(2iQ_0(v,\Phi(\overline{u})),\Phi(u)+c(v,v))\quad((u,v)\in\mathcal{D}(\Omega_0, Q_0))
\end{equation}
with a $\mathbb{C}$-linear map $\Phi:\mathcal{U}_\mathbb{C}\rightarrow \mathcal{V}$ and a symmetric $\mathbb{C}$-bilinear map $c:\mathcal{V}\times\mathcal{V}\rightarrow \mathcal{V}$ which satisfy the following conditions:
\begin{flalign}\label{PhiV0Umaps}
&\text{for each }v_0\in\mathcal{V},\text{ the linear map}&\tag{Y1}
\end{flalign}
\begin{equation*}
\Phi_{v_0}:\mathcal{U}\ni u\mapsto \Im Q_0(\Phi(u),v_0)\in\mathcal{U}
\end{equation*}
belongs to $\mathfrak{g}(\Omega_0)$,
\begin{equation}\label{HVcVV2iHPhiHVVVquadVVinmathcalV}
Q_0(c(v',v'),v)=2iQ_0(v',\Phi(Q_0(v,v')))\quad(v,v'\in\mathcal{V}).\tag{Y2}
\end{equation}
Conversely, for any pair $(\Phi,c)$ satisfying $(\ref{PhiV0Umaps})$ and $(\ref{HVcVV2iHPhiHVVVquadVVinmathcalV})$, the vector field $\mathcal{Y}_{\Phi,c}$ given by $(\ref{mathcalYPh})$ belongs to $\mathfrak{X}(1/2)$.
\end{proposition}
Let $e\in\Omega_0$. As we shall see at the end of this subsection, every vector field $\mathcal{Y}_{\Phi,c}$ is uniquely determined by the vector $\Phi(e)\in\mathcal{V}$, so that $\mathcal{Y}_{\Phi,c}$ will also be written as $\mathcal{Y}_\Phi$.
\begin{proposition}[Satake, {\cite[Chapter V, Proposition 2.2]{satake}}]
Every element of $\mathfrak{X}(1)$ is uniquely written as
\begin{equation}\label{mathcalZab}
\mathcal{Z}_{a,b}(u,v)=(a(u,u),b(u,v))\quad((u,v)\in\mathcal{D}(\Omega_0, Q_0))
\end{equation}
with a symmetric $\mathbb{R}$-bilinear map $a:\mathcal{U}\times \mathcal{U}\rightarrow \mathcal{U}$ \textup{(}which we extend to a $\mathbb{C}$-bilinear map $a:\mathcal{U}_\mathbb{C}\times\mathcal{U}_\mathbb{C}\rightarrow\mathcal{U}_\mathbb{C}$\textup{)} and a $\mathbb{C}$-bilinear map $b:\mathcal{U}_\mathbb{C}\times \mathcal{V}\rightarrow \mathcal{V}$ which satisfy the following conditions:
\begin{flalign}\label{mathcalAU0}
&\text{for each }u_0\in\mathcal{U},\text{ the linear map}&\tag{Z1}
\end{flalign}
\begin{equation*}
\mathcal{A}_{u_0}:\mathcal{U}\ni u\mapsto a(u_0,u)\in\mathcal{U}
\end{equation*}
belongs to $\mathfrak{g}(\Omega_0)$,
\begin{flalign}\label{mathcalBU0}
&\text{for each }u_0\in\mathcal{U},\text{ the linear map}&\tag{Z2}
\end{flalign}
\begin{equation*}\mathcal{B}_{u_0}:\mathcal{V}\ni v\mapsto \tfrac{1}{2}b(u_0,v)\in\mathcal{V}
\end{equation*}
is associated with $\mathcal{A}_{u_0}$, and $\Im\mathrm{\,tr\,}\mathcal{B}_{u_0}=0 $,
\begin{flalign}\label{UmapstoImH}
&\text{for any }v,v'\in\mathcal{V}, \text{ the linear map }&\tag{Z3}
\end{flalign}
\begin{align*}
\mathcal{U}\ni u\mapsto \Im Q_0(b(u,v),v')\in\mathcal{U}
\end{align*}
belongs to $\mathfrak{g}(\Omega_0)$,
\begin{equation*}\label{HVbHVVVHbH}
Q_0(b(Q_0(v'',v'),v''),v)=Q_0(v'',b(Q_0(v,v''),v'))\quad(v,v',v''\in\mathcal{V}). \tag{Z4}
\end{equation*}
Conversely, for any pair $(a,b)$ satisfying $(\ref{mathcalAU0})$, $(\ref{mathcalBU0})$, $(\ref{UmapstoImH})$, and $(\ref{HVbHVVVHbH})$, the vector field $\mathcal{Z}_{a,b}$ given by $(\ref{mathcalZab})$ belongs to $\mathfrak{X}(1)$.
\end{proposition}
\begin{example}
Let
\begin{equation*}\begin{split}
\mathcal{D}(\mathcal{P}_r,\mathcal{Q})&\simeq\left\{\left(\begin{array}{c}U\\V\end{array}\right)\in M(q,r;\mathbb{C});\Im U-\mathcal{Q}(V,V)\gg 0\right\},
\end{split}\end{equation*}
where $\mathcal{Q}(V,V')=\tfrac{1}{2}\overline{{}^tV'}V$. For $\Phi\in M(q-r,r;\mathbb{C})$, we put
\begin{equation*}
\mathcal{Y}_{\Phi}(U,V)=(2i\mathcal{Q}(V,\Phi \overline{{}^tU}),\Phi U+iV\overline{{}^t\Phi}V).
\end{equation*}
Then we have
\begin{equation*}
\mathfrak{X}(1/2)=\{\mathcal{Y}_{\Phi};\Phi\in M(q-r,r;\mathbb{C})\}.
\end{equation*}
For $a\in H_r(\mathbb{C})$, we put
\begin{equation*}
\mathcal{Z}_a(U,V)=(UaU,VaU).
\end{equation*}
Then we have
\begin{equation*}
\mathfrak{X}(1)=\{\mathcal{Z}_a;a\in H_r(\mathbb{C})\}.
\end{equation*}
\end{example}
\begin{lemma}[Satake, {\cite[Chapter V, \S2]{satake}}]
The following hold:
\begin{equation}\label{partialUYP}
[\partial_{u},\mathcal{Y}_\Phi]=\tilde{\partial}_{\Phi(u)},
\end{equation}
\begin{equation}\label{tildepartialVYPhiXAB}
[\tilde{\partial}_v,\mathcal{Y}_\Phi]=\mathcal{X}(\mathcal{A},\mathcal{B}),
\end{equation}
where $\mathcal{A}$ and $\mathcal{B}$ are given by $\mathcal{A}=4\Phi_v$ and $\mathcal{B}:\mathcal{V}\ni v'\mapsto 2i\Phi(Q_0(v',v))+2c(v,v')\in\mathcal{V}$. Moreover we have
\begin{equation}\label{partialUZa}
[\partial_u,\mathcal{Z}_{a,b}]=2\mathcal{X}(\mathcal{A}_u,\mathcal{B}_u).
\end{equation}
\end{lemma}
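In the matrix example above, formula \eqref{partialUZa} can be verified directly: with $\mathcal{Z}_a(U,V)=(UaU,VaU)$ and $\partial_{U_0}(U,V)=(U_0,0)$, we get
\begin{equation*}
[\partial_{U_0},\mathcal{Z}_a](U,V)=\left.\frac{d}{dt}\right|_{t=0}\bigl((U+tU_0)a(U+tU_0),Va(U+tU_0)\bigr)=(U_0aU+UaU_0,VaU_0).
\end{equation*}
Here the symmetric bilinear map of \eqref{mathcalZab} is $(U,U')\mapsto\tfrac{1}{2}(UaU'+U'aU)$ and $b(U,V)=VaU$, so $\mathcal{A}_{U_0}U=\tfrac{1}{2}(U_0aU+UaU_0)$ and $\mathcal{B}_{U_0}V=\tfrac{1}{2}VaU_0$, and the right-hand side above equals $2\mathcal{X}(\mathcal{A}_{U_0},\mathcal{B}_{U_0})(U,V)$.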
We fix a reference point $(ie,0)\in\mathcal{D}(\Omega_0, Q_0)$. Next we give a description of the subalgebra \begin{equation*}
\mathfrak{X}_{(ie,0)}=\{X\in\mathfrak{X};X(ie,0)=0\}.
\end{equation*}
Let $\partial'$ be the element of $\mathfrak{X}(0)$ given by
\begin{equation*}
\partial'(u,v)=(0,iv)\quad((u,v)\in\mathcal{D}(\Omega_0, Q_0)),
\end{equation*}
and let $\psi_{e}:\mathfrak{X}(1/2)\rightarrow \mathfrak{X}(-1/2)$, and $\varphi_{e}:\mathfrak{X}(1)\rightarrow\mathfrak{X}(-1)$ be linear maps given by
\begin{equation*}
\psi_{e}=\mathrm{ad}(\partial')\mathrm{ad}(\partial_{e})|_{\mathfrak{X}(1/2)},
\end{equation*}
and
\begin{equation*}
\varphi_{e}=\frac{1}{2}\mathrm{ad}(\partial_{e})^2|_{\mathfrak{X}(1)},
\end{equation*}
respectively. Put
\begin{equation*}
\mathfrak{m}=\{X+\psi_{e}(X);X\in\mathfrak{X}(1/2)\},
\end{equation*}
\begin{equation*}
\mathfrak{m}'=\{X+\varphi_{e}(X); X\in\mathfrak{X}(1)\}.
\end{equation*}
\begin{theorem}[Kaup, Matsushima, Ochiai, {\cite[Theorem 6]{kaup}}]\label{Xie0mathca}
\begin{equation*}
\mathfrak{X}_{(ie,0)}=(\mathfrak{X}_{(ie,0)}\cap\mathfrak{X}(0))+\mathfrak{m}'+\mathfrak{m}.
\end{equation*}
\end{theorem}
We note that $\varphi_{e}$ and $\psi_{e}$ are injective (see {\cite[pp.~211--212]{satake}}),
and by {\cite[p. 215]{satake}}, we have
\begin{equation}\label{psiYPhitil}
\psi_{e}(\mathcal{Y}_\Phi)=\tilde{\partial}_{-i\Phi(e)}.
\end{equation}
\subsection{Relationship between $\mathfrak{X}(1/2)$ and $\mathfrak{X}(1)$}\label{Algebraicp12}
First we prove the following lemma on the relationship between $\mathfrak{X}(1/2)$ and $\mathfrak{X}(1)$.
\begin{lemma}\label{Letpartial}
Let $\partial_{u_0}\in\mathfrak{X}(-1)$. If $[\partial_{u_0},\mathfrak{X}(1)]=\{0\}$, then we have $[\partial_{u_0},\mathfrak{X}(1/2)]=\{0\}$.
\end{lemma}
\begin{proof}
Let $\mathcal{Y}_\Phi\in \mathfrak{X}(1/2)$. Then $\mathcal{Y}_{i\Phi}\in\mathfrak{X}(1/2)$ by Proposition \ref{Everyeleme}. We put $\mathcal{Z}_{a,b}=[\mathcal{Y}_\Phi,\mathcal{Y}_{i\Phi}]$. Then the equality
\begin{equation*}
a(u,u)=4Q_0(\Phi(u),\Phi(u))\quad(u\in \mathcal{U})
\end{equation*}
holds (see {\cite[Chapter V, Lemma 2.5]{satake}}). It follows from (\ref{partialUZa}) that $0=[\partial_{u_0},\mathcal{Z}_{a,b}]=2\mathcal{X}(\mathcal{A}_{u_0},\mathcal{B}_{u_0})$. Hence
\begin{equation*}
0=\mathcal{A}_{u_0}(u_0)=a(u_0,u_0)=4Q_0(\Phi(u_0),\Phi(u_0)).
\end{equation*}
By the $\Omega_0$-positivity of $Q_0$, we get $\Phi(u_0)=0$. Using (\ref{partialUYP}), we get
\begin{equation*}
[\partial_{u_0},\mathcal{Y}_\Phi]=\tilde{\partial}_{\Phi(u_0)}=0.
\end{equation*}
\end{proof}
\begin{lemma}\label{ForXABinma}
Let $\mathcal{X}(\mathcal{A},\mathcal{B})\in\mathfrak{X}(0)$, and let $\mathcal{Y}_{\Phi,c}\in\mathfrak{X}(1/2)$. We define a $\mathbb{C}$-linear map $\Phi':\mathcal{U}_\mathbb{C}\rightarrow \mathcal{V}$ and a $\mathbb{C}$-bilinear map $c':\mathcal{V}\times\mathcal{V}\rightarrow\mathcal{V}$ by $\mathcal{Y}_{\Phi',c'}=[\mathcal{Y}_{\Phi,c},\mathcal{X}(\mathcal{A},\mathcal{B})]$. Then the following hold:
\begin{enumerate}
\item[$(\mathrm{i})$]
$\Phi'(u)=-\Phi(\mathcal{A}u)+\mathcal{B}\Phi(u)\quad(u\in \mathcal{U}_\mathbb{C})$,
\item[$(\mathrm{ii})$]
$c'(v,v)=\mathcal{B}c(v,v)-2c(\mathcal{B}v,v)\quad(v\in\mathcal{V})$.
\end{enumerate}
\end{lemma}
\begin{proof}
For $(u,v)\in\mathcal{D}(\Omega_0, Q_0)$, we have
\begin{equation*}\begin{split}
[&\mathcal{Y}_{\Phi,c},\mathcal{X}(\mathcal{A},\mathcal{B})](u,v)=(2i\mathcal{A}Q_0(v,\Phi(\overline{u})),\mathcal{B}(\Phi(u)+c(v,v)))\\&\quad-(2iQ_0(v,\Phi(\overline{\mathcal{A}u}))+2iQ_0(\mathcal{B}v,\Phi(\overline{u})),\Phi(\mathcal{A}u)+2c(\mathcal{B}v,v))\in\mathcal{U}_\mathbb{C}\oplus\mathcal{V}.
\end{split}\end{equation*}
We see from this expression that the image of $[\mathcal{Y}_{\Phi,c},\mathcal{X}(\mathcal{A},\mathcal{B})](u,v)$ under the second projection $\mathcal{U}_\mathbb{C}\oplus\mathcal{V}\rightarrow\mathcal{V}$ is equal to
\begin{equation*}
\mathcal{B}\Phi(u)-\Phi(\mathcal{A}u)+\mathcal{B}c(v,v)-2c(\mathcal{B}v,v),
\end{equation*}
which is the same as $\Phi'(u)+c'(v,v)$.
\end{proof}
\begin{lemma}\label{ForZabazzp}
Let $\mathcal{Z}_{a,b}\in\mathfrak{X}(1)$, and let $\mathcal{X}(\mathcal{A},\mathcal{B})\in\mathfrak{X}(0)$. We define $\mathbb{C}$-bilinear maps $a':\mathcal{U}_\mathbb{C}\times\mathcal{U}_\mathbb{C}\rightarrow\mathcal{U}_\mathbb{C}$ and $b':\mathcal{U}_\mathbb{C}\times \mathcal{V}\rightarrow \mathcal{V}$ by $\mathcal{Z}_{a',b'}=[\mathcal{Z}_{a,b},\mathcal{X}(\mathcal{A},\mathcal{B})]$. Then the following hold:
\begin{enumerate}
\item[$(\mathrm{i})$]
$a'(u,u)=\mathcal{A}a(u,u)-2a(\mathcal{A}u,u)\quad(u\in\mathcal{U}_\mathbb{C})$,
\item[$(\mathrm{ii})$]
$b'(u,v)=\mathcal{B}b(u,v)-b(\mathcal{A}u,v)-b(u,\mathcal{B}v)\quad(u\in\mathcal{U}_\mathbb{C},v\in\mathcal{V})$.
\end{enumerate}
\end{lemma}
\begin{proof}
For $(u,v)\in\mathcal{D}(\Omega_0, Q_0)$, we have
\begin{equation*}\begin{split}
[\mathcal{Z}&_{a,b},\mathcal{X}(\mathcal{A},\mathcal{B})](u,v)\\&=(\mathcal{A}a(u,u),\mathcal{B}b(u,v))-(2a(\mathcal{A}u,u),b(\mathcal{A}u,v)+b(u,\mathcal{B}v))\\&=(\mathcal{A}a(u,u)-2a(\mathcal{A}u,u),\mathcal{B}b(u,v)-b(\mathcal{A}u,v)-b(u,\mathcal{B}v))\in\mathcal{U}_\mathbb{C}\oplus\mathcal{V},
\end{split}\end{equation*}
which is the same as $(a'(u,u),b'(u,v))$.
\end{proof}
\begin{proposition}\label{Whenr1fora}
Assume that $\dim\,\mathcal{U}=1$ and that $\mathfrak{f}$ is a subalgebra of $\mathfrak{X}$ which contains $\mathfrak{X}(-1/2)$ and $\partial$. Then for any $\mathcal{Y}_{\Phi}\in{\mathfrak{f}}$, we have $\mathcal{Y}_{i\Phi}\in{\mathfrak{f}}$.
\end{proposition}
\begin{proof}
When $\Phi(e)=0$, we have $\psi_e(\mathcal{Y}_\Phi)=\tilde{\partial}_{-i\Phi(e)}=0$ by (\ref{psiYPhitil}). Since $\psi_e$ is injective, we have $\mathcal{Y}_\Phi=0$. Thus $\Phi=0$ and the result follows. In what follows, we assume that $\Phi(e)\neq 0$. We define $\mathbb{C}$-linear maps $\mathcal{A}\in\mathfrak{gl}(\mathcal{U}_\mathbb{C})$, $\mathcal{B}\in\mathfrak{gl}(\mathcal{V})$, and $\Phi':\mathcal{U}_\mathbb{C}\rightarrow\mathcal{V}$ by \begin{equation*}
\mathcal{X}(\mathcal{A},\mathcal{B})=[ \tilde{\partial}_{\Phi(e)},\mathcal{Y}_{\Phi}],\quad \mathcal{Y}_{\Phi'}=[\mathcal{Y}_{\Phi},\mathcal{X}(\mathcal{A},\mathcal{B})].
\end{equation*}
Since $\mathfrak{f}$ is a subalgebra containing $\mathcal{Y}_{\Phi}$ and $\tilde{\partial}_{\Phi(e)}\in\mathfrak{X}(-1/2)$, we have $\mathcal{X}(\mathcal{A},\mathcal{B})\in\mathfrak{f}$ and $\mathcal{Y}_{\Phi'}\in{\mathfrak{f}}$. Our goal is to prove that $\Phi'(e)=Ci\Phi(e)$ for some constant $C\in\mathbb{R}\setminus\{0\}$. Indeed, if the equation $\Phi'(e)=Ci\Phi(e)$ holds, then we have
\begin{equation*}
\psi_e(\mathcal{Y}_{\Phi'})=\tilde{\partial}_{-i\Phi'(e)}=\tilde{\partial}_{C\Phi(e)}=C\tilde{\partial}_{\Phi(e)}=C\psi_e(\mathcal{Y}_{i\Phi})=\psi_e(C\mathcal{Y}_{i\Phi}),
\end{equation*}
and since $\psi_e$ is injective, $C\mathcal{Y}_{i\Phi}=\mathcal{Y}_{\Phi'}\in{\mathfrak{f}}$. Put $v_0=\Phi(e)$. By (\ref{tildepartialVYPhiXAB}), we have
\begin{equation*}
\mathcal{A}e=4\Im Q_0(\Phi(e),v_0)=4\Im Q_0(v_0,v_0)=0.
\end{equation*}
Since $\mathcal{U}=\mathbb{R}e$, we can define a Hermitian form $q_0$ on $\mathcal{V}$ by
\begin{equation*}
Q_0(v,v')=q_0(v,v')e\quad(v,v'\in\mathcal{V}).
\end{equation*}
Using (\ref{HVcVV2iHPhiHVVVquadVVinmathcalV}), for any $v \in \mathcal{V}$, we have
\begin{equation*}\begin{split}
Q_0&(c(v_0,v_0),v)=2iQ_0(v_0,\Phi(Q_0(v,v_0)))=2iQ_0(v_0,q_0(v,v_0)\Phi(e))
\\&=2iQ_0(v_0,q_0(v,v_0)v_0)=2iq_0(v_0,v)Q_0(v_0,v_0)=Q_0(2iq_0(v_0,v_0)v_0,v).
\end{split}\end{equation*}
Hence
$c(v_0,v_0)=2iq_0(v_0,v_0)v_0$. Using (\ref{tildepartialVYPhiXAB}), we get
\begin{equation*}\begin{split}
\mathcal{B}v_0&=2i\Phi(Q_0(v_0,v_0))+2c(v_0,v_0)=2iq_0(v_0,v_0)\Phi(e)+4iq_0(v_0,v_0)v_0\\&=6iq_0(v_0,v_0)v_0.
\end{split}\end{equation*}
Thanks to Lemma \ref{ForXABinma} (i), we obtain
\begin{equation*}\begin{split}
\Phi'(e)=-\Phi(\mathcal{A}e)+\mathcal{B}\Phi(e)=\mathcal{B}v_0=6iq_0(v_0,v_0)v_0=6q_0(v_0,v_0)i\Phi(e)
\end{split}\end{equation*}
with $6q_0(v_0,v_0)\neq 0$.
\end{proof}
\subsection{Vector fields on homogeneous Siegel domains}
Consider the action of the group $B$ on the domain $\mathcal{D}(\Omega,Q)$ as in Section \ref{Normaljalg}. Let $\mathfrak{X}=\mathfrak{X}(\mathcal{D}(\Omega,Q))$ be the space of complete holomorphic vector fields on $\mathcal{D}(\Omega,Q)$. For a subspace $\mathcal{W}\subset \mathfrak{aut}_{hol}\mathcal{D}(\Omega,Q)$, let $\mathcal{W}^\#=\{X^\#;X\in \mathcal{W}\}\subset \mathfrak{X}$. Now we have
\begin{equation*}
T^\#=\mathcal{X}(\mathrm{ad}(T)|_{\mathfrak{b}(1)_\mathbb{C}},\mathrm{ad}(T)|_{\mathfrak{b}(1/2)})\quad(T\in\mathfrak{b}(0)).
\end{equation*}
Thus $\partial=(jE)^\#\in\mathfrak{b}^\#$. By (\ref{expX0W0ZWZ}), we also have
\begin{equation*}
U^\#=\partial_U\quad(U\in\mathfrak{b}(1)),
\end{equation*}
\begin{equation*}
V^\#=\tilde{\partial}_{V}\quad(V\in\mathfrak{b}(1/2)).
\end{equation*}
From these expressions, we see that
\begin{equation*}
\mathfrak{b}(1)^\#=\mathfrak{X}(-1),\quad \mathfrak{b}(1/2)^\#=\mathfrak{X}(-1/2),\quad\mathfrak{b}(0)^\#\subset\mathfrak{X}(0).
\end{equation*}
Note that there is a natural action of $G$ on $\mathcal{D}(\Omega,Q)$ which is given as the transfer of the action of $G$ on $\mathcal{D}$ by means of the biholomorphic map $\mathcal{C}$. We also have the $B$-invariant metric $\langle\langle\cdot,\cdot\rangle\rangle$ on $\mathcal{D}(\Omega,Q)$ which is the transfer of the metric $\langle\langle\cdot,\cdot\rangle\rangle$ on $\mathcal{D}$. We denote by $\nabla$ the Levi-Civita connection on $(\mathcal{D}(\Omega,Q),\langle\langle\cdot,\cdot\rangle\rangle)$. We define a map $\tilde{\nabla}:\mathfrak{b}\times\mathfrak{b}\rightarrow \mathfrak{b}$ by \begin{equation}\label{dtexpttild}
\left.\frac{d}{dt}\right|_{t=0}\exp(t\tilde{\nabla}_{X}Y)(iE,0)=(\nabla_{X^\#}Y^\#)_{(iE,0)}.
\end{equation}
Then we have
\begin{equation}\label{2langletild}
-2\langle \tilde{\nabla}_{X}{Y},Z\rangle=\langle[X,Y],Z\rangle-\langle[Z,X],Y\rangle-\langle X,[Z,Y]\rangle\quad(X,Y,Z\in\mathfrak{b}).
\end{equation}
We see from the above equation that $\tilde{\nabla}_{X}Y=\tilde{\nabla}_{Y}X$ for all $X,Y\in\mathfrak{b}(1)$.
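Indeed, since $\mathfrak{b}(1)$ is abelian, subtracting (\ref{2langletild}) with $X$ and $Y$ interchanged from (\ref{2langletild}) itself gives, for $X,Y\in\mathfrak{b}(1)$ and any $Z\in\mathfrak{b}$,
\begin{equation*}
-2\langle \tilde{\nabla}_{X}Y-\tilde{\nabla}_{Y}X,Z\rangle=2\langle[X,Y],Z\rangle=0,
\end{equation*}
so that the symmetry follows from the positive definiteness of $\langle\cdot,\cdot\rangle$.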
\begin{lemma}\label{For1krtild}
Let $1\leq k\leq r$, and let $X\in\mathfrak{b}(1)$. Then \begin{equation*}
\tilde{\nabla}_{E_{k}}X=j\mathrm{ad} (A_k)X.
\end{equation*}
\end{lemma}
\begin{proof}
For $1\leq l< m\leq r$, put $n_{ml}=\dim\mathfrak{b}_{(\alpha_{m}+\alpha_{l})/2}\geq 0$. We take an orthogonal basis $(E_{ml}^{\kappa})_{\kappa=1}^{n_{ml}}$ of $\mathfrak{b}_{(\alpha_m+\alpha_{l})/2}$ such that \begin{equation*}
[jE_{ml}^{\kappa},E_{ml}^{\kappa}]=E_{m}.
\end{equation*}
For $1\leq l\leq r$, put $C_{l}=\langle E_{l},E_{l}\rangle$. Then for $1\leq l<m\leq r$ and $1\leq \kappa\leq n_{ml}$, we have
\begin{equation*}
\langle E_{ml}^{\kappa},E_{ml}^{\kappa}\rangle=\omega([jE_{ml}^{\kappa},E_{ml}^{\kappa}])=\omega(E_{m})=\omega([jE_{m},E_{m}])=C_{m}.
\end{equation*}
For $X=\sum_{1\leq m'\leq r}x_{m'}E_{m'}+\sum_{1\leq k'<l'\leq r}\sum_{1\leq \lambda\leq n_{l'k'}}x_{l'k'}^{\lambda}E_{l'k'}^{\lambda}$ and $1\leq l\leq r$, we have
\begin{equation*}\begin{split}
2&\langle \tilde{\nabla}_{E_k}X,jE_l\rangle=\langle[jE_l,E_k],X\rangle+\langle E_k,[jE_l,X]\rangle
\\&=\langle \delta_{kl}E_k,X\rangle+\textstyle\left\langle E_k,\frac{1}{2}\sum_{k'=1}^{l-1}\sum_{\lambda=1}^{n_{lk'}} x_{lk'}^{\lambda}E_{lk'}^{\lambda}+x_lE_l+\frac{1}{2}\sum_{l'=l+1}^r\sum_{\lambda=1}^{n_{l'l}} x_{l'l}^{\lambda}E_{l'l}^{\lambda}\right\rangle
\\&=2\delta_{kl}C_kx_l.
\end{split}\end{equation*}
On the other hand, we have
\begin{equation*}\begin{split}
\langle&\mathrm{ad}(A_k)X,E_l\rangle \\&=\textstyle\left\langle\frac{1}{2}\sum_{k'=1}^{k-1}\sum_{\lambda=1}^{n_{kk'}} x_{kk'}^{\lambda}E_{kk'}^{\lambda}+x_kE_k+\frac{1}{2}\sum_{l'=k+1}^r\sum_{\lambda=1}^{n_{l'k}} x_{l'k}^{\lambda}E_{l'k}^{\lambda},E_l\right\rangle\\&=\delta_{kl}C_kx_k.
\end{split}
\end{equation*}
For $1\leq m<l\leq r$ and $1\leq \kappa\leq n_{lm}$, we have
\begin{equation*}\begin{split}
2\langle\tilde{\nabla}_{E_k}X,jE_{lm}^{\kappa}\rangle&=\langle[jE_{lm}^{\kappa},E_k],X\rangle+\langle E_k,[jE_{lm}^{\kappa},X]\rangle
\\&=\langle \delta_{mk}E_{lm}^{\kappa},X\rangle+\langle E_k,[jE_{lm}^{\kappa},x_{lm}^{\kappa}E_{lm}^{\kappa}]\rangle
\\&=\langle \delta_{mk}E_{lm}^{\kappa},X\rangle+\langle E_k,x_{lm}^{\kappa}E_l\rangle
\\&=\delta_{mk}C_lx_{lm}^{\kappa}+\delta_{kl}C_{l}x_{lm}^{\kappa}.
\end{split}\end{equation*}
On the other hand, we have
\begin{equation*}\begin{split}
\langle &\mathrm{ad}(A_k)X,E_{lm}^{\kappa}\rangle\\&=
\textstyle\left\langle\frac{1}{2}\sum_{k'=1}^{k-1}\sum_{\lambda=1}^{n_{kk'}}x_{kk'}^{\lambda}E_{kk'}^{\lambda}+x_kE_k+\frac{1}{2}\sum_{l'=k+1}^r\sum_{\lambda=1}^{n_{l'k}}x_{l'k}^{\lambda}E_{l'k}^{\lambda},E_{lm}^{\kappa}\right\rangle
\\&=\frac{1}{2}(\delta_{mk}C_lx_{lm}^{\kappa}+\delta_{kl}C_lx_{lm}^{\kappa}).
\end{split}\end{equation*}
Therefore we get
\begin{equation*}
\langle \mathrm{ad}(A_k)X,Y\rangle=\langle \tilde{\nabla}_{E_k}X,jY\rangle\quad(Y\in\mathfrak{b}(1)).
\end{equation*}
Moreover, we see from (\ref{2langletild}) that
\begin{equation}
\langle \tilde{\nabla}_{E_k}X,Y\rangle=0\quad(Y\in\mathfrak{b}(1)\oplus\mathfrak{b}(1/2)).
\end{equation}
Since the grading of $\mathfrak{b}$ is orthogonal with respect to $\langle\cdot,\cdot\rangle$ and $\langle\cdot,\cdot\rangle$ is $j$-invariant, the two relations above give $\tilde{\nabla}_{E_k}X=j\mathrm{ad}(A_k)X$. The proof is complete.
\end{proof}
\begin{lemma}\label{ForF}
Let $1\leq k\leq r$, and let $\mathcal{A}\in\mathfrak{g}(\Omega)$. Then
\begin{equation*}
\mathcal{A}E_k\in\sideset{}{^\oplus}\sum_{1\leq m\leq r}\mathfrak{b}_{(\alpha_k+\alpha_m)/2}.
\end{equation*}
\end{lemma}
\begin{proof}
The connected Lie subgroup of $B$ with the Lie algebra $\mathfrak{b}(0)\oplus\mathfrak{b}(1)$ is an Iwasawa subgroup of $\mathrm{Aut}_{hol}(\mathcal{D}(\Omega))$, and we have
\begin{equation*}
\mathfrak{g}(\Omega)=\mathfrak{g}_{E}(\Omega)\oplus \{\mathrm{ad}(X);X\in\mathfrak{b}(0)\},
\end{equation*}
where $\mathfrak{g}_E(\Omega)$ is the Lie algebra of the Lie group
\begin{equation*}
G_E(\Omega)=\{A\in G(\Omega);AE=E\}.
\end{equation*}
The result for $\mathcal{A}=\mathrm{ad}(X)$ with $X\in\mathfrak{b}(0)$ follows from (\ref{XYjjXyjXjY}). Let $\mathcal{A}\in\mathfrak{g}_E(\Omega)$. By (\ref{XABUAU}), we have $[\mathcal{X}(\mathcal{A}),E_k^\#]=-(\mathcal{A}E_k)^\#$. Let $\langle\langle\cdot,\cdot\rangle\rangle'$ be the Bergman metric on $\mathcal{D}(\Omega)$, and let $\nabla'$ denote the Levi-Civita connection on $(\mathcal{D}(\Omega),\langle\langle\cdot,\cdot\rangle\rangle')$. Then we also have the map $\tilde{\nabla'}:(\mathfrak{b}(0)\oplus\mathfrak{b}(1))\times(\mathfrak{b}(0)\oplus\mathfrak{b}(1))\rightarrow\mathfrak{b}(0)\oplus\mathfrak{b}(1)$, which is defined by (\ref{dtexpttild}). Let $1\leq l\leq r$ and $l\neq k$. Since $\mathcal{X}(\mathcal{A})$ generates isometries of $\mathcal{D}(\Omega)$, we have
\begin{equation}\label{mathcalXma}\begin{split}
[\mathcal{X}(\mathcal{A}),\nabla'_{E_l^\#}E_k^\#]&=\nabla'_{[\mathcal{X}(\mathcal{A}),E_l^\#]}E_k^\#+\nabla'_{E_l^\#}[\mathcal{X}(\mathcal{A}),E_k^\#]
\\&=-\nabla'_{(\mathcal{A}E_l)^\#}{E_k^\#}-\nabla'_{E_l^\#}(\mathcal{A}E_k)^\#.
\end{split}\end{equation}
By Lemma \ref{For1krtild}, we have $(\nabla'_{E_l^\#}E_k^\#)_{iE}=0$. By looking at the value of (\ref{mathcalXma}) at $iE\in\mathcal{D}(\Omega)$, we have
\begin{equation}\label{0tildenabl}
0=\tilde{\nabla'}_{\mathcal{A}E_l}{E_k}+\tilde{\nabla'}_{E_l}\mathcal{A}E_k=\tilde{\nabla'}_{E_k}{\mathcal{A}E_l}+\tilde{\nabla'}_{E_l}\mathcal{A}E_k.
\end{equation}
We remark that the equation (\ref{0tildenabl}) can be seen from \cite{dorfmeister 1979} and \cite{dorfmeister 1982}.
By Lemma \ref{For1krtild} applied to $\tilde{\nabla'}$, the equation (\ref{0tildenabl}) reads $0=j[A_k,\mathcal{A}E_l]+j[A_l,\mathcal{A}E_k]$, and hence we have
\begin{equation*}
[A_k,\mathcal{A}E_l]=-[A_l,\mathcal{A}E_k]\in\mathfrak{b}.
\end{equation*}
Thus $[A_k,\mathcal{A}E_l]$ belongs to both $\sum_{1\leq m\leq r}^\oplus\mathfrak{b}_{(\alpha_k+\alpha_m)/2}$ and $\sum_{1\leq m\leq r}^\oplus\mathfrak{b}_{(\alpha_l+\alpha_m)/2}$, and hence we obtain
\begin{equation}\label{AkmathcalA}
[A_l,\mathcal{A}E_k]\in\mathfrak{b}_{(\alpha_l+\alpha_k)/2}\quad(l\neq k).
\end{equation}
If $\textstyle\mathcal{A}E_k\notin\sum_{1\leq m\leq r}^\oplus\mathfrak{b}_{(\alpha_k+\alpha_m)/2}$, then $\mathcal{A}E_k$ can be written as $\mathcal{A}E_k=X+Y$ with $X\in\mathfrak{b}_{(\alpha_{l'}+\alpha_{k'})/2}\setminus\{0\}$ and $Y\in\sum_{\substack{1\leq k''\leq l''\leq r\\(k'',l'')\neq (k',l')}}^\oplus\mathfrak{b}_{(\alpha_{l''}+\alpha_{k''})/2}$ for some $k'\neq k$ and $l'\neq k$ satisfying $1\leq k'\leq l'\leq r$. Then
$[A_{l'},X+Y]=CX+[A_{l'},Y]$ with $C=1/2$ or $1$. Thus $[A_{l'},\mathcal{A}E_k]\notin \mathfrak{b}_{(\alpha_{l'}+\alpha_k)/2}$, which contradicts (\ref{AkmathcalA}). Hence it follows that $\mathcal{A}E_k\in\sum_{1\leq m\leq r}^\oplus\mathfrak{b}_{(\alpha_k+\alpha_m)/2}$.
\end{proof}
\begin{lemma}\label{ForV}
Let $V\in\mathfrak{b}(1/2)$, and let $1\leq k\leq r$. If $[\mathfrak{b}(1/2),V]\subset\sum_{1\leq m\leq r}^\oplus\mathfrak{b}_{(\alpha_k+\alpha_{m})/2}$, then $V\in\mathfrak{b}_{\alpha_k/2}$.
\end{lemma}
\begin{proof}
Let $V=\sum_{1\leq m\leq r}V_{m}$ with $V_{m}\in\mathfrak{b}_{\alpha_{m}/2}$, and suppose that $V\notin \mathfrak{b}_{\alpha_k/2}$. Then there exists $1\leq m_0\leq r$ such that $m_0\neq k$ and $V_{m_0}\neq 0$. We have
\begin{equation*}
[jV_{m_0},V_{m_0}]\in\mathfrak{b}_{\alpha_{m_0}}\setminus\{0\}.
\end{equation*}
Thus the $\mathfrak{b}_{\alpha_{m_0}}$-component of $[jV_{m_0},V]$ is $[jV_{m_0},V_{m_0}]\neq 0$, so that $[jV_{m_0}, V]\notin \sum_{1\leq m\leq r}^\oplus\mathfrak{b}_{(\alpha_k+\alpha_{m})/2}$.
\end{proof}
\begin{lemma}\label{foranyY}
Let $\mathcal{Y}_\Phi\in\mathfrak{X}(1/2)$. Then the following hold:
\begin{enumerate}
\item [$(\mathrm{i})$]
$\Phi(E_k)\in\mathfrak{b}_{\alpha_k/2}\quad(1\leq k\leq r)$,
\item[$(\mathrm{ii})$]
$[A_k^\#,[A_l^\#,\mathcal{Y}_\Phi]]=0\quad(1\leq k\leq r, 1\leq l\leq r, k\neq l)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $1\leq k\leq r$, $1\leq l\leq r$, and $k\neq l$. We define $\mathbb{C}$-linear maps $\Phi^{l},\Phi^{lk}:\mathfrak{b}(1)_\mathbb{C}\rightarrow\mathfrak{b}(1/2)$ by \begin{equation*}
\mathcal{Y}_{\Phi^{l}}=[\mathcal{Y}_\Phi,A_l^\#],\quad\mathcal{Y}_{\Phi^{lk}}=[\mathcal{Y}_{\Phi^{l}},A_k^\#].
\end{equation*}
By Lemma \ref{ForXABinma} (i), we have
\begin{equation*}\begin{split}
&\Phi^{lk}(E)
\\&=-\Phi^{l}([A_k,E])+[A_k,\Phi^{l}(E)]\\&=-(-\Phi([A_l,[A_k,E]])+[A_l,\Phi([A_k,E])])+[A_k,-\Phi([A_l,E])+[A_l,\Phi(E)]]
\\&=-[A_l,\Phi([A_k,E])]-[A_k,\Phi([A_l,E])]=-[A_l,\Phi(E_k)]-[A_k,\Phi(E_l)],
\end{split}\end{equation*}
where the terms $\Phi([A_l,[A_k,E]])$ and $[A_k,[A_l,\Phi(E)]]$ vanish because $k\neq l$.
Thus to prove (ii), it is enough to show that
\begin{equation}\label{AlPhiEk0}
[A_l,\Phi(E_k)]=0.
\end{equation}
For any $V\in\mathfrak{b}(1/2)$, we have \begin{equation*}
[V,\Phi(E_k)]=-4\Im Q(\Phi(E_k),V)=-4\Phi_V(E_k).
\end{equation*}
Since $\Phi_V\in\mathfrak{g}(\Omega)$, Lemma \ref{ForF} implies that
\begin{equation*}
[V,\Phi(E_k)]\in\sideset{}{^\oplus}\sum_{1\leq m\leq r}\mathfrak{b}_{(\alpha_k+\alpha_m)/2}\quad(V\in\mathfrak{b}(1/2)).
\end{equation*}
Thus Lemma \ref{ForV} shows that $\Phi(E_k)\in\mathfrak{b}_{\alpha_k/2}$. Hence (\ref{AlPhiEk0}) holds, and the proof is complete.
\end{proof}
Put
\begin{equation*}
\mathfrak{f}=\mathfrak{g}^\#,\quad \mathfrak{f}(\gamma)=\{X\in\mathfrak{f}:[\partial,X]=\gamma X\}\quad(\gamma\in\mathbb{R}).
\end{equation*}
\begin{lemma}\label{mathfrakzm}
The center $\mathfrak{z}(\mathfrak{f})$ of $\mathfrak{f}$ is trivial.
\end{lemma}
\begin{proof}
Let $X\in\mathfrak{z}(\mathfrak{f})$. Then we have $X\in\mathfrak{f}(0)$ since $[\partial, X]=0$. Put $X=\mathcal{X}(\mathcal{A},\mathcal{B})\in\mathfrak{f}(0)$. By (\ref{XABUAU}), for any $U\in\mathfrak{b}(1)$, we have
\begin{equation*}
0=[\mathcal{X}(\mathcal{A},\mathcal{B}),\partial_U]=-\partial_{\mathcal{A}U}.
\end{equation*}
Thus $\mathcal{A}=0$. Similarly, (\ref{XABVBV}) shows that $[\mathcal{X}(\mathcal{A},\mathcal{B}),\tilde{\partial}_V]=-\tilde{\partial}_{\mathcal{B}V}=0$ for all $V\in\mathfrak{b}(1/2)$, so that $\mathcal{B}=0$. Now we see that
\begin{equation*}
X=\mathcal{X}(\mathcal{A},\mathcal{B})=0.
\end{equation*}
\end{proof}
We shall extend the result of Proposition \ref{Whenr1fora} to the case of bounded homogeneous domains of arbitrary ranks by induction (see Proposition \ref{forasubalg}). From now on, we assume that $r\geq 2$. We define subalgebras $\check{\mathfrak{b}}\subset \mathfrak{b}$ and $\check{\mathfrak{f}}\subset \mathfrak{f}$ by
\begin{equation*}\begin{split}
\check{\mathfrak{b}}=
\sideset{}{^\oplus}\sum_{2\leq k\leq r}\langle A_k\rangle&\oplus
\sideset{}{^\oplus}\sum_{2\leq k<l\leq r}\mathfrak{b}_{(\alpha_l-\alpha_k)/2}\oplus
\sideset{}{^\oplus}\sum_{2\leq k\leq r}\mathfrak{b}_{\alpha_k/2}\\&\oplus
\sideset{}{^\oplus}\sum_{2\leq k\leq r}\mathfrak{b}_{\alpha_k}\oplus
\sideset{}{^\oplus}\sum_{2\leq k<l\leq r}\mathfrak{b}_{(\alpha_l+\alpha_k)/2}
\end{split}\end{equation*}
and
\begin{equation*}
\check{\mathfrak{f}}=\{X\in\mathfrak{f};[X,A_1^\#]=[X,E_1^\#]=0\}.
\end{equation*}
Put \begin{equation*}
\check{\mathfrak{f}}(\gamma)=\check{\mathfrak{f}}\cap\mathfrak{f}(\gamma)\quad(\gamma\in\mathbb{R}),
\end{equation*}
\begin{equation*}
\check{\mathfrak{b}}(\gamma)=\check{\mathfrak{b}}\cap\mathfrak{b}(\gamma)\quad(\gamma=0,1/2,1).
\end{equation*}
Then $\check{\mathfrak{b}}$ is a normal $j$-algebra of rank $r-1$.
Define
\begin{equation*}
\check{\Omega}=\exp (\check{\mathfrak{b}}(0))(E_2+\cdots +E_r),
\end{equation*}
\begin{equation*}
\mathcal{D}(\check{\Omega},\check{Q})=\{(U,V)\in\check{\mathfrak{b}}(1)_\mathbb{C}\oplus\check{\mathfrak{b}}(1/2);\Im U-\check{Q}(V,V)\in\check{\Omega}\},
\end{equation*}
where $\check{Q}$ denotes the restriction of $Q$ to $\check{\mathfrak{b}}(1/2)\times\check{\mathfrak{b}}(1/2)$.
According to Lemma \ref{Omega1E1Om} below, the following inclusion holds:
\begin{equation*}
iE_1+\mathcal{D}(\check{\Omega},\check{Q})\subset\mathcal{D}(\Omega,Q).
\end{equation*}
\begin{lemma}\label{Omega1E1Om}
$\check{\Omega}+E_1=\Omega\cap(\check{\mathfrak{b}}(1)+E_1)$.
\end{lemma}
\begin{proof}
This follows from \cite[Proposition 2.5]{ishi 2000}.
\end{proof}
\begin{lemma}\label{ForY12andZ}
\begin{enumerate}
\item[$\textup{(i)}$] Let $\mathcal{Y}_\Phi\in\check{\mathfrak{X}}(1/2)$. Then
\begin{equation*}
\mathcal{Y}_\Phi(iE_1+U,V)\in\check{\mathfrak{b}}(1)_\mathbb{C}\oplus\check{\mathfrak{b}}(1/2)\quad(U\in\check{\mathfrak{b}}(1)_\mathbb{C}, V\in\check{\mathfrak{b}}(1/2)).
\end{equation*}
\item[$\textup{(ii)}$]
An element $\mathcal{Y}_\Phi$ of ${\mathfrak{X}}(1/2)$ belongs to $\check{\mathfrak{X}}(1/2)$ if and only if $[\mathcal{Y}_\Phi,A_1^\#]=0$.
\end{enumerate}
\end{lemma}
\begin{proof}
(i) Let $\mathcal{Y}_\Phi\in\check{\mathfrak{X}}(1/2)$. By Lemma \ref{ForXABinma} (i), we have \begin{equation*}
\Phi([A_1,U])=[A_1,\Phi(U)]\quad(U\in\mathfrak{b}(1)_\mathbb{C}).
\end{equation*}
Thus
\begin{equation*}
\Phi([A_1,E])=\Phi([A_1,[A_1,E]])=[A_1,\Phi([A_1,E])],
\end{equation*}
which implies
\begin{equation}\label{PhiE1PhiA1}
\Phi(E_1)=\Phi([A_1,E])=0.
\end{equation}
Let $U\in\check{\mathfrak{b}}(1)_\mathbb{C}$, and let $V\in\check{\mathfrak{b}}(1/2)$. Lemma \ref{ForXABinma} (i) shows that $[A_1,\Phi(U)]=0$, and hence
\begin{equation}\label{PhiZinmath}
\Phi(U)\in\check{\mathfrak{b}}(1/2).
\end{equation}
By (\ref{PhiE1PhiA1}) and (\ref{PhiZinmath}), we see that
\begin{equation}\label{2itildeQPh}
2iQ(V,\Phi(-iE_1+\overline{U}))=2iQ(V,\Phi(\overline{U}))\in\check{\mathfrak{b}}(1)_\mathbb{C}.
\end{equation}
On the other hand, from Lemma \ref{ForXABinma} (ii), it follows that
$[A_1,c(V,V)]=0$. Thus $c(V,V)\in\check{\mathfrak{b}}(1/2)$.
We see from (\ref{PhiZinmath}) and (\ref{2itildeQPh}) that
\begin{equation*}
\Phi(iE_1+U)+c(V,V)\in\check{\mathfrak{b}}(1/2),
\end{equation*}
which proves (i).
(ii) Let $\mathcal{Y}_\Phi\in\mathfrak{X}(1/2)$, and suppose that $[\mathcal{Y}_\Phi,A_1^\#]=0$. Then we see from (\ref{partialUYP}) and (\ref{PhiE1PhiA1}) that
\begin{equation*}
[E_1^\#,\mathcal{Y}_\Phi]=(\Phi(E_1))^\#=0.
\end{equation*}
Thus $\mathcal{Y}_\Phi\in\check{\mathfrak{X}}(1/2)$. Since the converse implication is immediate from the definition of $\check{\mathfrak{X}}$, this proves (ii).
\end{proof}
To simplify some of the notation, we abbreviate $\psi_E:\mathfrak{X}(1/2)\rightarrow \mathfrak{X}(-1/2)$ and $\varphi_E:\mathfrak{X}(1)\rightarrow \mathfrak{X}(-1)$ as $\psi$ and $\varphi$, respectively.
\begin{lemma}\label{thefollowing}
Consider $\mathcal{D}(\check{\Omega},\check{Q})$ as a complex submanifold of $\mathcal{D}(\Omega,Q)$. Then for $\mathcal{Y}_\Phi\in\check{\mathfrak{X}}(1/2)$, we have $\mathcal{Y}_\Phi|_{iE_1+\check{\mathfrak{b}}(1)_\mathbb{C}\oplus \check{\mathfrak{b}}(1/2)}\in \mathfrak{X}(\mathcal{D}(\check{\Omega},\check{Q}))(1/2)$, and the map $\check{\mathfrak{X}}(1/2)\ni \mathcal{Y}_\Phi\mapsto \mathcal{Y}_\Phi|_{iE_1+\check{\mathfrak{b}}(1)_\mathbb{C}\oplus \check{\mathfrak{b}}(1/2)}\in\mathfrak{X}(\mathcal{D}(\check{\Omega},\check{Q}))(1/2)$ is injective.
\end{lemma}
\begin{proof}
First by Lemma \ref{Omega1E1Om}, the following equality holds:
\begin{equation*}
iE_1+\mathcal{D}(\check{\Omega},\check{Q})=\mathcal{D}(\Omega,Q)\cap (iE_1+\check{\mathfrak{b}}(1)_\mathbb{C}\oplus\check{\mathfrak{b}}(1/2)).
\end{equation*} Let $\mathcal{Y}_\Phi\in\check{\mathfrak{X}}(1/2)$, and let $X$ be the element of $\mathfrak{aut}_{hol}(\mathcal{D}(\Omega,Q))$ such that $X^\#=\mathcal{Y}_\Phi$. Let $y:\mathbb{R}\rightarrow \mathrm{Aut}_{hol}(\mathcal{D}(\Omega,Q))$ denote the one-parameter subgroup of $\mathrm{Aut}_{hol}(\mathcal{D}(\Omega,Q))$ given by $y(t)=\exp(tX)\,(t\in\mathbb{R})$. Then $y$ preserves $\mathcal{D}(\Omega,Q)\cap(iE_1+\check{\mathfrak{b}}(1)_\mathbb{C}\oplus\check{\mathfrak{b}}(1/2))$ by Lemma \ref{ForY12andZ}. Thus
$\mathcal{Y}_\Phi|_{iE_1+\check{\mathfrak{b}}(1)_\mathbb{C}\oplus \check{\mathfrak{b}}(1/2)}\in\mathfrak{X}(\mathcal{D}(\check{\Omega},\check{Q}))$. Let $\partial$ be the vector field on $\mathcal{D}(\Omega,Q)$ defined in Section \ref{Gradingstr}. Since
\begin{equation*}
[(A_2+\cdots +A_r)^\#, \mathcal{Y}_\Phi]=[(A_1+\cdots +A_r)^\#,\mathcal{Y}_\Phi]=[\partial, \mathcal{Y}_\Phi]=\tfrac{1}{2}\mathcal{Y}_\Phi,
\end{equation*}
we have $\mathcal{Y}_\Phi|_{iE_1+\check{\mathfrak{b}}(1)_\mathbb{C}\oplus \check{\mathfrak{b}}(1/2)}\in\mathfrak{X}(\mathcal{D}(\check{\Omega},\check{Q}))(1/2)$. It remains to show that the map $\check{\mathfrak{X}}(1/2)\ni \mathcal{Y}_\Phi\mapsto \mathcal{Y}_\Phi|_{iE_1+\check{\mathfrak{b}}(1)_\mathbb{C}\oplus \check{\mathfrak{b}}(1/2)}\in\mathfrak{X}(\mathcal{D}(\check{\Omega},\check{Q}))(1/2)$ is injective. Suppose that $\mathcal{Y}_\Phi|_{iE_1+\check{\mathfrak{b}}(1)_\mathbb{C}\oplus \check{\mathfrak{b}}(1/2)}=0$. Since $(iE,0)=(iE_1+i(E_2+\cdots+E_r),0)$ lies in $iE_1+\check{\mathfrak{b}}(1)_\mathbb{C}\oplus\check{\mathfrak{b}}(1/2)$, we have $\mathcal{Y}_\Phi(iE,0)=0$; in particular, the image of $\mathcal{Y}_\Phi(iE,0)$ under the projection $\mathfrak{b}(1)_\mathbb{C}\oplus\mathfrak{b}(1/2)\ni(U,V)\mapsto V\in\mathfrak{b}(1/2)$, which is $i\Phi(E)$, equals $0$. Hence we see from (\ref{psiYPhitil}) that $\psi(\mathcal{Y}_\Phi)=\tilde{\partial}_{-i\Phi(E)}=0$. Since $\psi$ is injective, we obtain $\mathcal{Y}_\Phi=0$.
\end{proof}
\begin{lemma}\label{ForZabinma}
Let $\mathcal{Z}_{a,b}\in\check{\mathfrak{X}}(1)$. Then
\begin{equation*}
\mathcal{Z}_{a,b}(iE_1+U,V)\in\check{\mathfrak{b}}(1)_\mathbb{C}\oplus\check{\mathfrak{b}}(1/2)\quad(U\in\check{\mathfrak{b}}(1)_\mathbb{C}, V\in\check{\mathfrak{b}}(1/2)).
\end{equation*}
\end{lemma}
\begin{proof}
By Lemma \ref{ForZabazzp}, for $U\in\mathfrak{b}(1)_\mathbb{C}$ and $V\in\mathfrak{b}(1/2)$, we have
\begin{equation}\label{[A_1,a(U,U)]=2a([A_1,U],U)}
[A_1,a(U,U)]=2a([A_1,U],U)
\end{equation}
and
\begin{equation}\label{[A_1,b(U,V)]=b(U,[A_1,V])+b([A_1,U],V)}
[A_1,b(U,V)]=b([A_1,U],V)+b(U,[A_1,V]).
\end{equation}
Put $U=iE_1$. Then (\ref{[A_1,a(U,U)]=2a([A_1,U],U)}) becomes
\begin{equation*}
[A_1,a(iE_1,iE_1)]=2a(iE_1,iE_1).
\end{equation*}
Hence we have
\begin{equation}\label{aiE1iE10}
a(iE_1,iE_1)=0.
\end{equation}
Let $U\in\check{\mathfrak{b}}(1)_\mathbb{C}$, and let $V\in\check{\mathfrak{b}}(1/2)$. Then (\ref{[A_1,a(U,U)]=2a([A_1,U],U)}) gives
\begin{equation}\label{A1aZZ0}
[A_1,a(U,U)]=0.
\end{equation}
From (\ref{[A_1,a(U,U)]=2a([A_1,U],U)}), (\ref{aiE1iE10}), and (\ref{A1aZZ0}), it follows that
\begin{equation*}\begin{split}
2[A_1,a(iE_1,U)]&=[A_1,a(iE_1+U,iE_1+U)]\\&=2a([A_1,iE_1+U],iE_1+U)\\&=2a(iE_1,U).
\end{split}\end{equation*}
Thus $a(iE_1,U)\in({\mathfrak{b}_{\alpha_1}})_\mathbb{C}$. At the same time, for $k\neq 1$, we have $a(E_1,E_k)=\mathcal{A}_{E_1}E_k\in\sum_{1\leq m\leq r}^\oplus\mathfrak{b}_{(\alpha_m+\alpha_k)/2}$ by Lemma \ref{ForF}, and hence $\mathcal{A}_{E_1}E_k=0$. Since $\mathcal{A}_{E_1}E_1=a(E_1,E_1)=-a(iE_1,iE_1)=0$ by (\ref{aiE1iE10}), we get $\mathcal{A}_{E_1}E=0$, which implies $\mathcal{A}_{E_1}\in\mathfrak{g}_E(\Omega)$. Hence the map $\mathcal{A}_{E_1}:\mathfrak{b}(1)\rightarrow \mathfrak{b}(1)$ is skew-symmetric with respect to the inner product $\langle\cdot, \cdot\rangle'$ on $\mathfrak{b}(0)\oplus\mathfrak{b}(1)$ defined by
\begin{equation*}
\langle X,Y\rangle'=\langle\langle X^\#,Y^\#\rangle\rangle'_{iE}\quad(X,Y\in\mathfrak{b}(0)\oplus\mathfrak{b}(1)).
\end{equation*}
Thus the equality
\begin{equation}\label{aE1ZAE1Z0}
a(E_1,U)=\mathcal{A}_{E_1}U=0
\end{equation}
follows from the equation $\mathcal{A}_{E_1}^2U=0$, which holds because $\mathcal{A}_{E_1}U\in\mathbb{C}E_1$ and $\mathcal{A}_{E_1}E_1=0$. By (\ref{aiE1iE10}), (\ref{A1aZZ0}), and (\ref{aE1ZAE1Z0}), we get
\begin{equation}\label{aiE1checkU}
a(iE_1+U,iE_1+U)=a(U,U)+2a(iE_1, U)=a(U,U)\in\check{\mathfrak{b}}(1)_\mathbb{C}.
\end{equation}
On the other hand, by (\ref{[A_1,b(U,V)]=b(U,[A_1,V])+b([A_1,U],V)}), we have
\begin{equation}\label{A1biE1ZWbi}
[A_1,b(iE_1+U,V)]=b(iE_1,V)
\end{equation}
and
\begin{equation}\label{A1bZW0}
[A_1,b(U,V)]=0.
\end{equation}
Now (\ref{A1biE1ZWbi}) implies $b(iE_1,V)=0$, and (\ref{A1bZW0}) implies $b(U,V)\in\check{\mathfrak{b}}(1/2)$. Thus
\begin{equation}\label{biE1checkU}
b(iE_1+U,V)=b(U,V)\in\check{\mathfrak{b}}(1/2).
\end{equation}
We see from (\ref{aiE1checkU}) and (\ref{biE1checkU}) that the assertion follows.
\end{proof}
\begin{lemma}\label{forx=-1}
Consider $\mathcal{D}(\check{\Omega},\check{Q})$ as a complex submanifold of $\mathcal{D}(\Omega,Q)$. Then for any $\gamma\in\{-1,-1/2,0,1/2,1\}$ and $X\in\check{\mathfrak{X}}(\gamma)$, we have
\begin{equation*}
X|_{iE_1+\check{\mathfrak{b}}(1)_\mathbb{C}\oplus \check{\mathfrak{b}}(1/2)}\in\mathfrak{X}(\mathcal{D}(\check{\Omega},\check{Q}))(\gamma).
\end{equation*}
\end{lemma}
\begin{proof}
Let $U_0\in\mathfrak{b}(1)$, and suppose $[\partial_{U_0},A_1^\#]=0$. Then $[A_1,U_0]=0$ by (\ref{XABUAU}), and we have $U_0\in\check{\mathfrak{b}}(1)$. Since
\begin{equation*}
\partial_{U_0}(iE_1+U,V)=(U_0,0)\quad(U\in\check{\mathfrak{b}}(1)_\mathbb{C}, V\in\check{\mathfrak{b}}(1/2)),
\end{equation*}
the result for $\gamma=-1$ follows. Let $V_0\in\mathfrak{b}(1/2)$, and suppose $[\tilde{\partial}_{V_0},A_1^\#]=0$. Then $[A_1,V_0]=0$ by (\ref{XABVBV}), and $V_0\in\check{\mathfrak{b}}(1/2)$. Since
\begin{equation*}
\tilde{\partial}_{V_0}(iE_1+U,V)=(2iQ(V,V_0),V_0)\quad(U\in\check{\mathfrak{b}}(1)_\mathbb{C}, V\in\check{\mathfrak{b}}(1/2)),
\end{equation*}
the result for $\gamma=-1/2$ follows.
Let $\mathcal{X}(\mathcal{A},\mathcal{B})\in\mathfrak{X}(0)$, and suppose $[\mathcal{X}(\mathcal{A},\mathcal{B}),A_1^\#]=0$. Then we see from (\ref{XABABXAABB}) that
\begin{equation*}
\mathcal{A}\mathrm{ad}(A_1)=\mathrm{ad}(A_1)\mathcal{A}\,\text{ and }\,\mathcal{B}\mathrm{ad}(A_1)=\mathrm{ad}(A_1)\mathcal{B}.
\end{equation*}
Hence we have
\begin{equation*}
[A_1,\mathcal{A}U]=[A_1, \mathcal{B}V]=0\quad(U\in\check{\mathfrak{b}}(1)_\mathbb{C}, V\in\check{\mathfrak{b}}(1/2)).
\end{equation*}
Thus $\mathcal{A}U\in\check{\mathfrak{b}}(1)_\mathbb{C}, \mathcal{B}V\in\check{\mathfrak{b}}(1/2)$ for all $U\in\check{\mathfrak{b}}(1)_\mathbb{C}$ and $V\in\check{\mathfrak{b}}(1/2)$. Now suppose $[\mathcal{X}(\mathcal{A},\mathcal{B}),E_1^\#]=0$. Then $\mathcal{A}E_1=0$ by (\ref{XABUAU}). We have
\begin{equation*}
\mathcal{X}(\mathcal{A},\mathcal{B})(iE_1+U,V)=(\mathcal{A}(iE_1+U),\mathcal{B}V)=(\mathcal{A}U,\mathcal{B}V)\quad(U\in\check{\mathfrak{b}}(1)_\mathbb{C}, V\in\check{\mathfrak{b}}(1/2)),
\end{equation*}
which shows the assertion for $\gamma=0$. We have shown the result for $\gamma=1/2$ in Lemma \ref{thefollowing}. From Lemma \ref{ForZabinma} and the same arguments as in the proof of Lemma \ref{thefollowing}, the result for $\gamma=1$ follows. This completes the proof.
\end{proof}
\begin{remark}\label{Theequalit}
The equality
\begin{equation*}
\check{\mathfrak{f}}=\check{\mathfrak{f}}(-1)\oplus\check{\mathfrak{f}}(-1/2)\oplus\check{\mathfrak{f}}(0)\oplus\check{\mathfrak{f}}(1/2)\oplus\check{\mathfrak{f}}(1)
\end{equation*}
shows that for any $X\in\check{\mathfrak{X}}$, we have $X|_{iE_1+\check{\mathfrak{b}}(1)_\mathbb{C}\oplus \check{\mathfrak{b}}(1/2)}\in\mathfrak{X}(\mathcal{D}(\check{\Omega},\check{Q}))$, and the map
\begin{equation*}
\check{\mathfrak{X}}\ni X \mapsto X|_{iE_1+\check{\mathfrak{b}}(1)_\mathbb{C}\oplus \check{\mathfrak{b}}(1/2)}\in \mathfrak{X}(\mathcal{D}(\check{\Omega},\check{Q}))
\end{equation*}
defines a Lie algebra homomorphism.
\end{remark}
\begin{proposition}\label{forasubalg}
For any $\mathcal{Y}_\Phi\in\mathfrak{f}$, one has $ \mathcal{Y}_{j\Phi}\in\mathfrak{f}$.
\end{proposition}
\begin{proof}
We show the assertion by induction on the rank $r$ of the normal $j$-algebra. For the case $r=1$, we have shown the assertion in Proposition \ref{Whenr1fora}. Let $r\geq 2$. We define a $\mathbb{C}$-linear map $\Phi':\mathfrak{b}(1)_\mathbb{C}\rightarrow\mathfrak{b}(1/2)$ by
\begin{equation*}
\mathcal{Y}_{\Phi'}=[\mathcal{Y}_\Phi,A_1^\#].
\end{equation*}
By Lemma \ref{foranyY} (ii), we have
\begin{equation*}
[\mathcal{Y}_{\Phi'},A_k^\#]=[[\mathcal{Y}_\Phi,A_1^\#],A_k^\#]=0\quad(2\leq k\leq r).
\end{equation*}
Thus
\begin{equation*}\begin{split}
[\mathcal{Y}_{\Phi+2\Phi'},A_1^\#]&=\mathcal{Y}_{\Phi'}+[\mathcal{Y}_{2\Phi'},A_1^\#]=\mathcal{Y}_{\Phi'}+[\mathcal{Y}_{2\Phi'},(A_1+\cdots+A_r)^\#]\\&=\mathcal{Y}_{\Phi'}+[\mathcal{Y}_{2\Phi'},\partial]=\mathcal{Y}_{\Phi'}-\mathcal{Y}_{\Phi'}=0.
\end{split}\end{equation*}
From Lemma \ref{ForY12andZ}, it follows that $\mathcal{Y}_{\Phi+2\Phi'}\in\check{\mathfrak{f}}$. We denote by $R$ the Lie algebra homomorphism in Remark \ref{Theequalit}.
Since $(\check{\mathfrak{b}})^\#\subset \check{\mathfrak{f}}$, one has $R((\check{\mathfrak{b}})^\#)\subset R(\check{\mathfrak{f}})\subset\mathfrak{X}(\mathcal{D}(\check{\Omega},\check{Q}))$. By the inductive hypothesis and the equality $R(\mathcal{Y}_{\Phi+2\Phi'})=\mathcal{Y}_{({\Phi+2\Phi'}|_{\check{\mathfrak{b}}(1)_\mathbb{C}+\check{\mathfrak{b}}(1/2)})}$, we get \begin{equation*}
\mathcal{Y}_{(j({\Phi+2\Phi'})|_{\check{\mathfrak{b}}(1)_\mathbb{C}+\check{\mathfrak{b}}(1/2)})}\in R(\check{\mathfrak{f}}).
\end{equation*}
Hence we see from Lemma \ref{thefollowing} that
\begin{equation*}
\mathcal{Y}_{j(\Phi+2\Phi')}\in\check{\mathfrak{f}}.
\end{equation*}
Clearly
\begin{equation*}
\mathcal{Y}_{j\Phi}=\mathcal{Y}_{j(\Phi+2\Phi')}-2\mathcal{Y}_{j\Phi'}.
\end{equation*}
Thus in order to show that $\mathcal{Y}_{j\Phi}\in \mathfrak{f}$, it suffices to show that
\begin{equation}\label{YiPsi1inmathfrakg12}\mathcal{Y}_{j\Phi'}\in\mathfrak{f}.
\end{equation}
Next we prove \eqref{YiPsi1inmathfrakg12}. Let $c':\mathfrak{b}(1/2)\times\mathfrak{b}(1/2)\rightarrow \mathfrak{b}(1/2)$ be the $\mathbb{C}$-bilinear map such that $\mathcal{Y}_{\Phi'}=\mathcal{Y}_{\Phi',c'}$. We define $\mathbb{C}$-linear maps $\Phi'':\mathfrak{b}(1)_\mathbb{C}\rightarrow \mathfrak{b}(1/2)$, $\mathcal{A}\in\mathfrak{gl}(\mathfrak{b}(1)_\mathbb{C})$, and $\mathcal{B}\in\mathfrak{gl}(\mathfrak{b}(1/2))$ by
\begin{equation*}
\mathcal{X}(\mathcal{A},\mathcal{B})=[\tilde{\partial}_{\Phi'(E)},\mathcal{Y}_{\Phi'}],\,\text{ and }\,\mathcal{Y}_{\Phi''}=[\mathcal{Y}_{\Phi'},\mathcal{X}(\mathcal{A},\mathcal{B})].
\end{equation*}
The inclusion relation $\mathfrak{b}^\#\subset \mathfrak{f}$ shows that $\mathcal{Y}_{\Phi''}\in\mathfrak{f}$. Put $V_0=\Phi'(E)$. If $V_0=0$, then $\psi(\mathcal{Y}_{\Phi'})=\tilde{\partial}_{-i\Phi'(E)}=0$ by (\ref{psiYPhitil}), so that $\mathcal{Y}_{\Phi'}=0$ by the injectivity of $\psi$, and it follows that $\Phi'=0$. Thus $0=\mathcal{Y}_{j\Phi'}\in\mathfrak{f}$. In what follows, we assume that $V_0\neq 0$. Lemma \ref{ForXABinma} (i) shows that $\Phi''(E)=-\Phi'(\mathcal{A}E)+\mathcal{B}\Phi'(E)$, and by (\ref{tildepartialVYPhiXAB}), we have
\begin{equation}\label{AE4Psi1V_0E4}
\mathcal{A}E=4\Phi'_{V_0}(E)=4\Im Q(V_0,V_0)=0
\end{equation}
and
\begin{equation}\label{BPsi1E2iPs}
\mathcal{B}\Phi'(E)=2j\Phi'(Q(V_0,V_0))+2c'(V_0,V_0).
\end{equation}
From Lemma \ref{ForXABinma} (i) and Lemma \ref{foranyY} (ii), it follows that
\begin{equation*}
V_0=\Phi'(E)=-\Phi([A_1,E])+[A_1,\Phi(E)]=-\tfrac{1}{2}\Phi(E_1)\in\mathfrak{b}_{\alpha_1/2}.
\end{equation*}
Thus we see from (\ref{HVcVV2iHPhiHVVVquadVVinmathcalV}) that
\begin{equation*}\begin{split}
Q(c'(V_0,V_0),V)=2iQ(V_0,\Phi'(Q(V,V_0)))\in\Bigl(\sideset{}{^\oplus}\sum_{1\leq m\leq r}\mathfrak{b}_{(\alpha_m+\alpha_1)/2}\Bigr)_\mathbb{C}\\(V\in\mathfrak{b}(1/2)).
\end{split}\end{equation*}
By Lemma \ref{ForV}, we have $c'(V_0,V_0)\in\mathfrak{b}_{\alpha_1/2}$. We define a Hermitian form $q$ on $\mathfrak{b}_{\alpha_1/2}$ by
\begin{equation*}
Q(W,W')=q(W,W')E_1\quad(W,W'\in\mathfrak{b}_{\alpha_1/2}).
\end{equation*}
By (\ref{HVcVV2iHPhiHVVVquadVVinmathcalV}), for any $V\in\mathfrak{b}_{\alpha_1/2}$, we have
\begin{equation*}\begin{split}
Q(c'(V_0,V_0),V)&=2iQ(V_0,\Phi'(Q(V,V_0)))=2iQ(V_0,q(V,V_0)\Phi'(E))
\\&=2iQ(V_0,q(V,V_0)V_0)=2iq(V_0,V)Q(V_0,V_0)
\\&=Q(2jq(V_0,V_0)V_0,V).
\end{split}\end{equation*}
Thus we get \begin{equation}\label{c1V0V02jqV}
c'(V_0,V_0)=2jq(V_0,V_0)V_0.
\end{equation}
From (\ref{AE4Psi1V_0E4}), (\ref{BPsi1E2iPs}), and (\ref{c1V0V02jqV}), it follows that
\begin{equation*}\begin{split}
\Phi''(E)&=\mathcal{B}\Phi'(E)=2j\Phi'(Q(V_0,V_0))+4jq(V_0,V_0)V_0=6jq(V_0,V_0)V_0
\\&=6jq(V_0,V_0)\Phi'(E)
\end{split}\end{equation*}
with $q(V_0,V_0)\neq 0$. As in the proof of Proposition \ref{Whenr1fora}, the injectivity of $\psi$ then gives $\mathcal{Y}_{j\Phi'}=\frac{1}{6q(V_0,V_0)}\mathcal{Y}_{\Phi''}\in\mathfrak{f}$. Thus \eqref{YiPsi1inmathfrakg12} holds, and the proof is complete.
\end{proof}
\begin{proposition}\label{Forasubalg2}
Let $\partial_{U_0}\in\mathfrak{X}(-1)$. If $[\partial_{U_0},\mathfrak{f}(1)]=0$, then one has $[\partial_{U_0},\mathfrak{f}(1/2)]=0$.
\end{proposition}
\begin{proof}
The proof of Lemma \ref{Letpartial} goes through with $\mathfrak{X}(\gamma)$ replaced by $\mathfrak{f}(\gamma)$ for $\gamma=-1,1/2,1$, provided that $\mathcal{Y}_{j\Phi}\in\mathfrak{f}(1/2)$ for every $\mathcal{Y}_\Phi\in\mathfrak{f}(1/2)$. This follows from Proposition \ref{forasubalg}.
\end{proof}
\section{Unitary equivalences among the unitarizable representations}\label{Unitaryequ}
\subsection{Unitary equivalences among representations of $B$}\label{Constructi}
We take $(iE,0)\in\mathcal{D}(\Omega,Q)$ as a reference point of $\mathcal{D}(\Omega,Q)$. Let $M: G\times \mathcal{D}(\Omega,Q)\rightarrow\mathbb{C}^\times$ be a holomorphic multiplier. Set $\mathfrak{b}_-=\mathfrak{g}_-\cap \mathfrak{b}_\mathbb{C}$.
\begin{proposition}[Rossi and Vergne, {\cite[Proposition 4.21]{Rossi}}]
Let $\tau:\mathfrak{b}\rightarrow \mathfrak{b}_-$ be the $\mathbb{R}$-linear map defined by
\begin{equation*}
\tau(U+V+T)=(V+ijV)/2+T+ijT\quad(U\in\mathfrak{b}(1),V\in\mathfrak{b}(1/2),T\in\mathfrak{b}(0)).
\end{equation*}
Then $\tau$ is a Lie algebra homomorphism, and if $\tau$ is extended to a $\mathbb{C}$-linear map $\tau:\mathfrak{b}_\mathbb{C}\rightarrow\mathfrak{b}_\mathbb{C}$, then we have $\tau|_{\mathfrak{b}_-}=\mathrm{id}_{\mathfrak{b}_-}$.
\end{proposition}
Put $\theta=\theta_M\in(\mathfrak{g}_-)^*$.
\begin{theorem}[Ishi, {\cite[Theorem 12]{ishi 2011}}]
Let $\chi^\theta$ be the function on $B$ defined by
\begin{equation}\label{chithetaex}
\chi^\theta(\exp X)=e^{\theta\circ \tau(X)}\quad(X\in\mathfrak{b}).
\end{equation}
Then the function
\begin{equation*}
B\times\mathcal{D}(\Omega,Q)\ni(b,(U,V))\mapsto\chi^\theta(b)\in\mathbb{C}^\times
\end{equation*}
is a holomorphic multiplier and is $B$-equivalent to $M$.
\end{theorem}
By Lemma \ref{equivalentline}, there exists a holomorphic function $f: \mathcal{D}(\Omega,Q)\rightarrow \mathbb{C}^\times$ such that the equality
\begin{equation*}
\chi^{\theta}(b)=f(b(U,V))M(b,(U,V))f(U,V)^{-1}\quad(b\in B, (U,V)\in\mathcal{D}(\Omega,Q))
\end{equation*}
holds. We define a holomorphic multiplier $M_\theta:G\times\mathcal{D}(\Omega,Q)\rightarrow \mathbb{C}^\times$ by
\begin{equation*}
M_\theta(g,(U,V))=f(g(U,V))M(g,(U,V))f(U,V)^{-1}\quad(g\in G, (U,V)\in\mathcal{D}(\Omega,Q)).
\end{equation*}
Then the holomorphic multipliers $M$ and $M_\theta$ are $G$-equivalent by Lemma \ref{equivalentline}. Now we see that
\begin{equation*}
\left.\frac{d}{dt}\right|_{t=0} M (e^{tX},(iE,0))=\left.\frac{d}{dt}\right|_{t=0} M_{\theta}(e^{tX},(iE,0))\quad(X\in\mathfrak{k}).
\end{equation*}
Since the maps $K\ni k\mapsto M(k,(iE,0))\in\mathbb{C}^\times$ and $K\ni k\mapsto M_{\theta}(k,(iE,0))\in\mathbb{C}^\times$ define one-dimensional representations of $K$, we have
\begin{equation*}
M(k,(iE,0))=M_{\theta}(k,(iE,0))\quad(k\in K).
\end{equation*}
Clearly, we also have
\begin{equation*}
M_\theta(b,(U,V))=\chi^\theta(b)\quad(b\in B).
\end{equation*}
Now we assume that the representation $T_M$ is unitarizable. Let $\xi\in\mathfrak{g}^*$ be the linear form given by \eqref{xiJKcdotpi}. We denote the unitarization of the representation $T_{\chi^{i\xi}}$ by $(T_{\chi^{i\xi}},\mathcal{H}_\xi)$, and we denote the reproducing kernel of $\mathcal{H}_\xi$ by $\mathcal{K}^\xi$.
We consider the unitary representations $(T_{\chi^{i\xi'}},\mathcal{H}_{\xi'})$ obtained from holomorphic multipliers $M':G\times\mathcal{D}(\Omega,Q)\rightarrow\mathbb{C}^\times$. We review the construction given in \cite{ishi 1999, ishi 2001} of the intertwining operators among the representations $(T_{\chi^{i\xi'}},\mathcal{H}_{\xi'})$ of $B$. The group $B(0)$ acts on $\mathfrak{b}(1)^*$ by
\begin{equation*}
\langle U,t_0\ell\rangle=\langle t_0^{-1}U,\ell\rangle\quad(U\in\mathfrak{b}(1), t_0\in B(0), \ell\in\mathfrak{b}(1)^*).
\end{equation*}
\begin{theorem}[Ishi, \cite{ishi 1999}]
There exist a unique $B(0)$-orbit $\mathcal{O}_\xi^*\subset \mathfrak{b}(1)^*$ and a unique measure $d\nu_\xi$ on $\mathcal{O}_\xi^*$ such that
\begin{equation*}\label{dnuxutFchi}
d\nu_{\xi}(t_0\ell)=|\chi^{i\xi}(t_0)|^2d\nu_{\xi}(\ell)\quad(\ell\in\mathcal{O}_\xi^*, t_0\in B(0)),
\end{equation*}
\begin{equation*}
\int_{\mathcal{O}_\xi^*}e^{-\langle U,\ell\rangle}\,d\nu_\xi(\ell)<\infty\text{ for all } U\in\Omega.
\end{equation*}
If $\chi^{i\xi}$ and $\chi^{i\xi'}$ define equivalent unitarizations, then $\mathcal{O}_\xi^*=\mathcal{O}_{\xi'}^*$.
\end{theorem}
In \cite{ishi 1999}, $\mathcal{O}_\xi^*$ and $d\nu_\xi$ are written as $\mathcal{O}_\varepsilon^*$ and $d\mathcal{R}_{\Re s^*}^*$, respectively. The dual cone $\Omega^*\subset\mathfrak{b}(1)^*$ of $\Omega$ is defined by
\begin{equation*}
\Omega^*=\{\ell\in\mathfrak{b}(1)^*;\langle U,\ell\rangle> 0\text{ for all }U\in\overline{\Omega}\backslash\{0\}\}.
\end{equation*}
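For orientation, consider a simple model case (used here only as an illustration, not in the sequel): take $\mathfrak{b}(1)=\mathbb{R}^n$ with the standard pairing $\langle U,\ell\rangle=\sum_{l=1}^{n}U_l\ell_l$ and $\Omega=(\mathbb{R}_{>0})^n$. Then $\overline{\Omega}\backslash\{0\}$ consists of the nonzero vectors with nonnegative entries, and $\langle U,\ell\rangle>0$ holds for all such $U$ exactly when every $\ell_l$ is positive. Hence
\begin{equation*}
\Omega^*=(\mathbb{R}_{>0})^n,
\end{equation*}
so that this cone is self-dual.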
For $\ell\in\overline{\Omega^*}$, let $Q_\ell$ be the Hermitian form on $\mathfrak{b}(1/2)$ given by
\begin{equation*}
Q_\ell(V,V')=\langle 2Q(V,V'),\ell\rangle\quad(V,V'\in\mathfrak{b}(1/2)).
\end{equation*}
Then $Q_\ell$ is positive semidefinite. Let
\begin{equation*}
N_\ell=\{V\in\mathfrak{b}(1/2);Q_\ell(V,V)=0\},
\end{equation*}
and let $\mathcal{F}_\ell$ be the space of holomorphic functions $F$ on $\mathfrak{b}(1/2)$ such that
\begin{enumerate}
\item[(i)]$F(V+V')=F(V)$ for all $V\in \mathfrak{b}(1/2)$ and $V'\in N_\ell$,
\item[(ii)]$\|F\|_{\mathcal{F}_\ell}^2=\int_{\mathfrak{b}(1/2)/N_\ell} |F(V)|^2e^{-Q_\ell(V,V)}\,d\mu_\ell([V])<\infty$,
\end{enumerate}
where $d\mu_\ell$ denotes the Lebesgue measure on $\mathfrak{b}(1/2)/N_\ell$ normalized in such a way that
\begin{equation*}
\|1\|_{\mathcal{F}_\ell}=1.
\end{equation*}
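For example, in the one-dimensional model case $\mathfrak{b}(1/2)=\mathbb{C}$ with $N_\ell=\{0\}$ and $Q_\ell(V,V')=\lambda V\overline{V'}$ for some $\lambda>0$ (an illustrative choice, not used in the sequel), the normalization forces $d\mu_\ell=(\lambda/\pi)\,dA(V)$ with $dA$ the Lebesgue measure on $\mathbb{C}$, and a direct computation in polar coordinates gives
\begin{equation*}
\|V^n\|_{\mathcal{F}_\ell}^2=\frac{\lambda}{\pi}\int_{\mathbb{C}}|V|^{2n}e^{-\lambda |V|^2}\,dA(V)=\frac{n!}{\lambda^n}\quad(n\geq 0),
\end{equation*}
so that $\mathcal{F}_\ell$ is a weighted Fock space.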
Let $\mathcal{L}_{\xi}$ be the function space consisting of all equivalence classes of measurable functions $f$ on $\mathcal{O}_\xi^*\times \mathfrak{b}(1/2)$ such that
\begin{enumerate}
\item[(i)]
$f(\ell,\cdot)\in\mathcal{F}_\ell$ for almost all $\ell\in\mathcal{O}_\xi^*$ with respect to the measure $d\nu_{\xi}$,
\item[(ii)]
$\|f\|_{\mathcal{L}_\xi}^2=\int_{\mathcal{O}^*_\xi}\|f(\ell,\cdot)\|_{\mathcal{F}_\ell}^2d\nu_{\xi}(\ell)<\infty$.
\end{enumerate}
\begin{theorem}[Ishi, {\cite[Theorem 4.10]{ishi 1999}}]\label{Fouriertra}
The map $\phi_{\xi}:\mathcal{L}_{\xi}\rightarrow\mathcal{H}_{\xi}$ defined by
\begin{equation*}
\phi_{\xi} f(U,V)=\int_{\mathcal{O}_\xi^*}e^{i\langle U,\ell\rangle}f(\ell,V)\,d\nu_{\xi}(\ell)\quad((U,V)\in\mathcal{D}(\Omega,Q))
\end{equation*}
gives a Hilbert space isomorphism.
\end{theorem}
We define a unitary representation $\check{T}_{\chi^{i\xi}}$ of $B$ on $\mathcal{L}_\xi$ by
\begin{equation*}
\phi_\xi(\check{T}_{\chi^{i\xi}}(b)f)=T_{\chi^{i\xi}}(b)\phi_\xi(f)\quad(b\in B, f\in \mathcal{L}_\xi).
\end{equation*}
For $(U_0,V_0)\in\mathcal{D}(\Omega,Q)$, let
\begin{equation*}
k_{(U_0,V_0)}(\ell,V)=e^{-i\langle \overline{U_0}, \ell\rangle}e^{Q_\ell(V,V_0)}\quad((\ell,V)\in\mathcal{O}_\xi^*\times\mathfrak{b}(1/2)).
\end{equation*}
Then $k_{(U_0,V_0)}\in\mathcal{L}_{\xi}$, and we have the following equalities (see {\cite[p. 450]{ishi 1999}}):
\begin{equation}\label{PhixifZWfk}
\phi_{\xi} f(U,V)=(f|k_{(U,V)})_{\mathcal{L}_{\xi}}\quad((U,V)\in\mathcal{D}(\Omega,Q), f\in\mathcal{L}_{\xi}),
\end{equation}
\begin{equation}\label{kUVkZWmath}\begin{split}
(k_{(U',V')}|k_{(U,V)})_{\mathcal{L}_{\xi}}=\mathcal{K}^{\xi}((U,V),(U',V'))
\\((U,V),(U',V')\in\mathcal{D}(\Omega,Q)).
\end{split}\end{equation}
Suppose that the unitarizations of $T_{\chi^{i\xi}}$ and $T_{\chi^{i\xi'}}$ are equivalent as unitary representations of $B$. As in {\cite[p. 541]{ishi 2001}}, we fix a function $\Upsilon\neq 0$ on $\mathcal{O}_{\xi}^*$, which is also a function on $\mathcal{O}_{\xi'}^*$, such that
\begin{equation*}
\Upsilon(t_0\ell)=\overline{\chi^{i\xi}(t_0)\chi^{-i\xi'}(t_0)}\Upsilon(\ell)\quad(t_0\in B(0),\ell\in\mathcal{O}_{\xi}^*).
\end{equation*}
\begin{proposition}[Ishi, {\cite[Proposition 4.5]{ishi 2001}}]\label{Thefollowingm}
There exists a nonzero constant $C$ such that the following map $\check{\Psi}_{\xi,\xi'}:\mathcal{L}_{\xi}\rightarrow\mathcal{L}_{\xi'}$ gives the intertwining operator between the unitary representations $(\check{T}_{\chi^{i\xi}},\mathcal{L}_{\xi})$ and $(\check{T}_{\chi^{i\xi'}},\mathcal{L}_{\xi'})$ of $B$:
\begin{equation*}
\check{\Psi}_{\xi,\xi'}f(\ell,V)=C\Upsilon(\ell)f(\ell,\,V)\quad(f\in\mathcal{L}_{\xi},(\ell,V)\in \mathcal{O}_{\xi}^*\times\mathfrak{b}(1/2)).
\end{equation*}
\end{proposition}
Let $\Delta_{\xi}$ and $\Delta_{\xi,\xi'}$ be the functions on $\Omega$ defined by
\begin{equation*}
\Delta_{\xi}(t_0E)=|\chi^{i\xi}(t_0)|^2,\quad \Delta_{\xi,\xi'}(t_0E)=\overline{\chi^{i\xi}(t_0)\chi^{-i\xi'}(t_0)}\quad(t_0\in B(0)).
\end{equation*}
\begin{proposition}[Ishi, {\cite[Corollary 2.5 and Proposition 4.6]{ishi 1999}}]
The functions $\Delta_{\xi}$ and $\Delta_{\xi,\xi'}$ extend holomorphically to $\Omega+i\mathfrak{b}(1)$, and for $(U,V),(U',V')\in\mathcal{D}(\Omega,Q)$, we have
\begin{equation*}
\mathcal{K}^{\xi}((U,V),(U',V'))=\Delta_{\xi}\left(\tfrac{U-\overline{U'}}{i}-2Q(V,V')\right).
\end{equation*}
\end{proposition}
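For example, in the rank-one case $\mathfrak{b}(1)=\mathbb{R}E$, $\mathfrak{b}(1/2)=\{0\}$, and $\Omega=\mathbb{R}_{>0}E$, so that $\mathcal{D}(\Omega,Q)$ is the upper half-plane (we use this case only as an illustration), the group $B(0)\cong\mathbb{R}_{>0}$ acts by dilations, and writing $\chi^{i\xi}(t_0)=t_0^{s}$ with some $s\in\mathbb{C}$, we get $\Delta_{\xi}(tE)=t^{2\Re s}$ for $t>0$. This function extends holomorphically to the right half-plane by the principal branch, and the proposition yields
\begin{equation*}
\mathcal{K}^{\xi}(U,U')=\left(\frac{U-\overline{U'}}{i}\right)^{2\Re s}\quad(U,U'\in\mathcal{D}(\Omega,Q)),
\end{equation*}
the familiar form of a weighted Bergman kernel on the upper half-plane.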
\subsection{Unitary equivalences among representations of $G$}\label{Intertwini}
Suppose that the unitarizations $(T_{M_{i\xi}},\mathcal{H}_{\xi})$ and $(T_{M_{i\xi'}},\mathcal{H}_{\xi'})$ are equivalent as unitary representations of $B$. By Schur's lemma and the decomposition $G=BK$, the unitarizations are equivalent as unitary representations of $G$ if and only if the intertwining operator between the representations $(T_{M_{i\xi}}|_B,\mathcal{H}_{\xi})$ and $(T_{M_{i\xi'}}|_B,\mathcal{H}_{\xi'})$ preserves the actions of $K$. From this point of view, we obtain equation (\ref{diff}) in Proposition \ref{Thefollowingc} below, which determines whether the unitarizations are equivalent as unitary representations of $G$.
From now on, we assume that $(T_{M_{i\xi}}|_B,\mathcal{H}_{\xi})$ and $(T_{M_{i\xi'}}|_B,\mathcal{H}_{\xi'})$ are equivalent as unitary representations of $B$. Let $\Psi_{\xi,\xi'}=\phi_{\xi'}\circ\check{\Psi}_{\xi,\xi'}\circ\phi_{\xi}^{-1}:\mathcal{H}_{\xi}\rightarrow\mathcal{H}_{\xi'}$.
\begin{lemma}\label{prop:2}
There exists a nonzero constant $C'$ such that
\begin{equation*}
\begin{split}
\Psi_{\xi,\xi'}(\mathcal{K}_{(iE,0)}^{\xi})(U,V)&=C'\mathcal{K}_{(iE,0)}^{\xi'}(U,V)\Delta_{\xi,\xi'}\left(\frac{U-\overline{iE}}{i}\right)\quad((U,V)\in\mathcal{D}(\Omega,Q)).
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
Put $\mathcal{K}'=\Psi_{\xi,\xi'}(\mathcal{K}_{(iE,0)}^{\xi})$. By (\ref{PhixifZWfk}) and (\ref{kUVkZWmath}), for $(U,V)\in\mathcal{D}(\Omega,Q)$, we have
\begin{equation*}
\phi_{\xi}(k_{(iE,0)})(U,V)=(k_{(iE,0)}|k_{(U,V)})_{\mathcal{L}_{\xi}}=\mathcal{K}^{\xi}((U,V),(iE,0))=\mathcal{K}_{(iE,0)}^{\xi}(U,V).
\end{equation*}
Thus
\begin{equation*}
\mathcal{K}'=\phi_{{\xi'}}\circ\check{\Psi}_{{\xi},{\xi'}}(k_{(iE,0)}).
\end{equation*}
Hence by Theorem \ref{Fouriertra} and Proposition \ref{Thefollowingm}, we have
\begin{equation*}\begin{split}
\mathcal{K}'(U,V)&=C\int_{\mathcal{O}_{\xi'}^*}e^{i\langle U,\ell\rangle}k_{(iE,0)}(\ell,V)\Upsilon(\ell)\,d\nu_{\xi'}(\ell)
\\&=C\int_{\mathcal{O}_{\xi'}^*}e^{i\langle U-\overline{iE},\ell\rangle}\Upsilon(\ell)\,d \nu_{\xi'}(\ell).
\end{split}\end{equation*}
When $(U,V)=(iU_0,V)$ with $U_0\in \Omega$, we see that
\begin{equation*}
i\langle U-\overline{iE},\ell\rangle=i\langle iU_0+iE,\ell\rangle=-\langle U_0+E, \ell\rangle.
\end{equation*}
Since $U_0-Q(V,V)\in\Omega$ and $Q(V,V)\in\overline{\Omega}$, we have $U_0\in\Omega$ (see Remark \ref{Wenotethat}), and hence $U_0+E\in\Omega$ because $E\in\Omega$. Thus there exists $t_0\in B(0)$ such that $t_0E=U_0+E$. Then
\begin{equation*}\begin{split}
\mathcal{K}'(iU_0,V)&=C\int_{\mathcal{O}_{\xi'}^*}e^{-\langle t_0E,\ell\rangle}\Upsilon(\ell)\,d\nu_{\xi'}(\ell)
\\&=C|\chi^{i\xi'}(t_0)|^2\int_{\mathcal{O}_{\xi'}^*}e^{-\langle E, \ell\rangle}\Upsilon(t_0\ell)\,d\nu_{\xi'}(\ell)
\\&=C|\chi^{i\xi'}(t_0)|^2\overline{\chi^{i\xi}(t_0)\chi^{-i\xi'}(t_0)}\int_{\mathcal{O}_{\xi'}^*} e^{-\langle E,\ell\rangle}\Upsilon(\ell)\,d\nu_{\xi'}(\ell)
\\&=C'\Delta_{\xi'}(U_0+E)\Delta_{\xi,\xi'}(U_0+E)
\\&=C'\Delta_{\xi'}\left(\frac{iU_0+iE}{i}\right)\Delta_{\xi,\xi'}\left(\frac{iU_0+iE}{i}\right),
\end{split}\end{equation*}
where we put $C'=C\int_{\mathcal{O}_{\xi'}^*} e^{-\langle E,\ell\rangle}\Upsilon(\ell)\,d\nu_{\xi'}(\ell)$. By the analytic continuation, we have
\begin{equation*}\begin{split}
\mathcal{K}'(U,V)=C'\mathcal{K}_{(iE,0)}^{{\xi'}}(U,V)\Delta_{\xi,\xi'}\left(\frac{U-\overline{iE}}{i}\right)\quad((U,V)\in\mathcal{D}(\Omega,Q)).
\end{split}\end{equation*}
\end{proof}
\begin{remark}\label{Wenotethat}
Let $\Omega_0\subset \mathbb{R}^{N_0}$ be an open convex cone, let $v\in \Omega_0$, and let $v'\in\overline{\Omega_0}$. Then it follows that $v+v'\in \Omega_0$. Indeed, let $B_\epsilon(v)$ be an open ball of radius $\epsilon >0$ centered at $v$ satisfying $B_\epsilon(v)\subset \Omega_0$. Then we have $v_0+v'\in\overline{\Omega_0}$ for all $v_0\in B_\epsilon(v)$. Hence we obtain $v+v'\in \mathrm{Int}(\overline{\Omega_0})$. It is known that for a convex set $S_0\subset \mathbb{R}^{N_0}$, the equality $\mathrm{Int}(\overline{S_0})=\mathrm{Int}(S_0)$ holds. Thus we have $v+v'\in \Omega_0$ since $\Omega_0$ is a convex set.
\end{remark}
If the unitarizations $(T_{M_{i\xi}},\mathcal{H}_{\xi})$ and $(T_{M_{i\xi'}},\mathcal{H}_{\xi'})$ are equivalent as unitary representations of $G$, then the equality
\begin{equation}\label{eq:1}
T_{M_{i\xi'}}(k)\Psi_{\xi,\xi'}(\mathcal{K}_{(iE,0)}^{\xi})=\Psi_{\xi,\xi'}(T_{M_{i\xi}}(k)\mathcal{K}_{(iE,0)}^{\xi})\quad(k\in K)
\end{equation}
holds. The converse is also true as we shall see in the next proposition. In what follows, we put $(U(g),V(g))=g(U,V)$ for $g\in G$ and $(U,V)\in\mathcal{D}(\Omega,Q)$.
\begin{proposition}\label{Thefollowingc}
The following are equivalent:
\begin{enumerate}
\item[$(\mathrm{i})$]
the unitarizations of $T_{M}$ and $T_{M'}$ are equivalent as unitary representations of $G$,
\item[$(\mathrm{ii})$]
$(\ref{eq:1})$ holds,
\item[$(\mathrm{iii})$] the following equality holds:
\begin{equation}\label{diff}\begin{split}
\Delta_{{\xi},{\xi'}}\left(\frac{U(k^{-1})-\overline{iE}}{i}\right)=MM'^{-1}(k,(iE,0))
\Delta_{{\xi},{\xi'}}\left(\frac{U-\overline{iE}}{i}\right)
\\(k\in K, (U,V)\in\mathcal{D}(\Omega, Q)).
\end{split}\end{equation}
\end{enumerate}\end{proposition}
\begin{proof}
First we show that (i) and (ii) are equivalent. Thanks to the remark preceding Proposition \ref{Thefollowingc}, it is enough to show that (ii) implies (i). We suppose that (\ref{eq:1}) holds. Let $b\in B$, and let $k\in K$. Then we can write $kb=b'k'$ with $b'\in B$ and $k'\in K$, and we have
\begin{equation*}\begin{split}
\Psi&_{\xi,\xi'}(T_{M_{i\xi}}(k)T_{M_{i\xi}}(b)\mathcal{K}_{(iE,0)}^{\xi})
=\Psi_{\xi,\xi'}(T_{M_{i\xi}}(b')T_{M_{i\xi}}(k')\mathcal{K}_{(iE,0)}^{\xi})
\\&=T_{M_{i\xi'}}(b')\Psi_{\xi,\xi'}(T_{M_{i\xi}}(k')\mathcal{K}_{(iE,0)}^{\xi})
=T_{M_{i\xi'}}(b')T_{M_{i\xi'}}(k')\Psi_{\xi,\xi'}(\mathcal{K}_{(iE,0)}^{\xi})\\&
=T_{M_{i\xi'}}(k)T_{M_{i\xi'}}(b)\Psi_{\xi,\xi'}(\mathcal{K}_{(iE,0)}^{\xi})=T_{M_{i\xi'}}(k)\Psi_{\xi,\xi'}(T_{M_{i\xi}}(b)\mathcal{K}_{(iE,0)}^{\xi}).
\end{split}\end{equation*}
Now $\Psi_{\xi,\xi'}$ is continuous, and the subspace of $\mathcal{H}_{\xi}$ generated by $T_{M_{i\xi}}(b)\mathcal{K}_{(iE,0)}^{\xi}\,(b\in B)$ is dense in $\mathcal{H}_{\xi}$. Thus
\begin{equation*}
\Psi_{\xi,\xi'}(T_{M_{i\xi}}(k)f)=T_{M_{i\xi'}}(k)\Psi_{\xi,\xi'}(f)\quad(k\in K, f\in\mathcal{H}_{\xi}),
\end{equation*}
which implies that $(T_{M_{i\xi}},\mathcal{H}_{\xi})$ and $(T_{M_{i\xi'}},\mathcal{H}_{\xi'})$ are equivalent as unitary representations of $G$. Thus (i) follows. Next we show that (ii) and (iii) are equivalent.
By Lemma \ref{prop:2} and the transformation law of the reproducing kernel, for $k\in K$, we have
\begin{equation*}\begin{split}
{C'}^{-1}&T_{M_{i\xi'}}(k)\Psi_{{\xi},{\xi'}}(\mathcal{K}_{(iE,0)}^{\xi})(U,V)
\\&
=M_{i\xi'}(k^{-1},(U,V))^{-1}\mathcal{K}_{(iE,0)}^{{\xi'}}(U(k^{-1}),V(k^{-1}))\Delta_{{\xi},{\xi'}}\left(\frac{U(k^{-1})-\overline{iE}}{i}\right)\\
&=M'(k,(iE,0))\mathcal{K}_{(iE,0)}^{{\xi'}}(U,V)\Delta_{{\xi},{\xi'}}\left(\frac{U(k^{-1})-\overline{iE}}{i}\right).
\end{split}
\end{equation*}
For $k\in K$, we also have
\begin{equation*}\begin{split}
{C'}^{-1}&\Psi_{\xi,\xi'}(
T_{M_{i\xi}}(k)\mathcal{K}_{(iE,0)}^{\xi}
)(U,V)={C'}^{-1}\Psi_{\xi,\xi'}(M(k,(iE,0))\mathcal{K}_{(iE,0)}^{\xi})(U,V)
\\&=M(k,(iE,0))\mathcal{K}_{(iE,0)}^{{\xi'}}(U,V)\Delta_{{\xi},{\xi'}}\left(\frac{U-\overline{iE}}{i}\right).
\end{split}\end{equation*}
Thus (\ref{eq:1}) holds if and only if
\begin{equation*}\begin{split}
\Delta_{{\xi},{\xi'}}\left(\frac{U(k^{-1})-\overline{iE}}{i}\right)=MM'^{-1}(k,(iE,0))\Delta_{{\xi},{\xi'}}\left(\frac{U-\overline{iE}}{i}\right)
\\(k\in K, (U,V)\in\mathcal{D}(\Omega,Q)).
\end{split}\end{equation*}
\end{proof}
\subsection{Isotropy representation}\label{Isotropyre}
In this subsection, we shall consider the isotropy representation $\rho: K\rightarrow GL(\mathfrak{g}/{\mathfrak{k}})$. We identify $\mathfrak{b}$ with $\mathfrak{g}/\mathfrak{k}$ by the map $\mathfrak{b}\ni X\mapsto X+\mathfrak{k}\in\mathfrak{g}/\mathfrak{k}$. We denote by $\omega'\in\mathfrak{b}^*$ the Koszul form on $\mathfrak{b}$ which is defined by
\begin{equation*}
\omega'(X)=\mathrm{tr}_\mathfrak{b}(\mathrm{ad}(jX)-j\mathrm{ad}(X))\quad(X\in\mathfrak{b}).
\end{equation*}
Then $(\mathfrak{b},j,\omega')$ is a normal $j$-algebra. Put
\begin{equation*}
\langle X,Y\rangle'=\omega'([jX,Y])\quad(X,Y\in\mathfrak{b}),
\end{equation*}
and let $\langle\cdot,\cdot \rangle''$ be the Hermitian form on $\mathfrak{b}$ given by
\begin{equation*}
\langle X,Y\rangle''=\langle X,Y\rangle'+i\langle X,jY \rangle'\quad(X,Y\in\mathfrak{b}).
\end{equation*}
Then we can regard $\mathfrak{b}$ as a complex Hilbert space, and $\rho$ is a unitary representation of $K$. Define
\begin{equation*}
\mathfrak{b}_{triv}=\{X\in\mathfrak{b};[X,\mathfrak{k}]\subset\mathfrak{k}\}.
\end{equation*}
\begin{lemma}\label{IfXinmathf}
For any $X\in\mathfrak{b}_{triv}$, we have $[X,\mathfrak{k}]=\{0\}$.
\end{lemma}
\begin{proof}
By Lemma \ref{mathfrakzm}, we can regard $\mathfrak{g}$ as a subalgebra of $\mathfrak{gl}(\mathfrak{g})$. We denote the connected Lie subgroup of $GL(\mathfrak{g})$ with Lie algebra $\mathfrak{b}\subset \mathfrak{gl}(\mathfrak{g})$ by $B_0$. We put $N'=\dim \mathfrak{g}$. Choose an Iwasawa subgroup $B'$ of $GL(\mathfrak{g})$ which contains $B_0$. By \cite[Chapter 4, Theorem 4.9]{encyclopedia}, the group $B'$ is realized as the subgroup $L(N')\subset GL(N',\mathbb{R})$ of lower triangular matrices with positive diagonal entries in some basis of $\mathfrak{g}$. Hence $\mathfrak{b}$ is realized as a subalgebra of $\mathfrak{l}(N')$. For $X\in\mathfrak{l}(N')$, the linear map $\mathrm{ad}(X):\mathfrak{gl}(N',\mathbb{R})\rightarrow \mathfrak{gl}(N',\mathbb{R})$ has only real eigenvalues. Let $X\in\mathfrak{b}_{triv}$. Then the linear map $\mathrm{ad}(X):\mathfrak{g}\rightarrow\mathfrak{g}$ also has only real eigenvalues. On the other hand, we have $\mathrm{ad}(X)[\mathfrak{k},\mathfrak{k}]\subset[\mathfrak{k},\mathfrak{k}]$, and $\mathrm{ad}(X)|_{[\mathfrak{k},\mathfrak{k}]}$ has only pure imaginary eigenvalues and is diagonalizable; indeed, $\mathrm{ad}(X)|_{[\mathfrak{k},\mathfrak{k}]}$ is a derivation of the compact semisimple Lie algebra $[\mathfrak{k},\mathfrak{k}]$, hence coincides with $\mathrm{ad}(Y)|_{[\mathfrak{k},\mathfrak{k}]}$ for some $Y\in[\mathfrak{k},\mathfrak{k}]$, which is skew-symmetric with respect to an $\mathrm{Ad}(K)$-invariant inner product. Hence $\mathrm{ad}(X)|_{[\mathfrak{k},\mathfrak{k}]}=0$. Put \begin{equation*}
\mathfrak{t}=\mathfrak{z}(\mathfrak{k}),\quad \mathbb{T}=\exp \mathfrak{t}\subset G,\quad N''=\dim \mathfrak{t}.
\end{equation*}
Then the Lie group $\mathbb{T}$ is isomorphic to
\begin{equation*}
(S^1)^{N''}=\{(\zeta_1,\cdots, \zeta_{N''})\in\mathbb{C}^{N''};|\zeta_l|=1\text{ for all }l=1,\cdots,N''\}.
\end{equation*}
Let $F:(S^1)^{N''}\rightarrow\mathbb{T}$ be an isomorphism, and let $t\in\mathbb{R}$. The map
\begin{equation*}
\mathrm{Inn}(e^{tX}):\mathbb{T}\ni g\mapsto e^{tX}ge^{-tX}\in \mathbb{T}
\end{equation*}
defines an automorphism of $\mathbb{T}$. Thus there exists a map $\mathbb{R}\ni t\mapsto (m_1(t),\cdots,m_{N''}(t))\in\mathbb{Z}^{N''}$ such that
\begin{equation*}
\mathrm{Inn}(e^{tX})\circF(\zeta_1,\cdots,\zeta_{N''})=F(\zeta_1^{m_1(t)},\cdots,\zeta_{N''}^{m_{N''}(t)})
\end{equation*}
for all $(\zeta_1,\cdots,\zeta_{N''})\in (S^1)^{N''}$. The Lie algebra of $(S^1)^{N''}$ is isomorphic to
\begin{equation*}
(i\mathbb{R})^{N''}=\{(i\gamma_1,\cdots,i\gamma_{N''}):\gamma_l\in \mathbb{R}\text{ for }l=1,\cdots,N''\},
\end{equation*}
and we have
\begin{equation*}\begin{split}
\mathrm{Ad}(e^{tX})\circ(F_*)_e(i\gamma_1,\cdots, i\gamma_{N''})=(F_*)_e(im_1(t)\gamma_1,\cdots,im_{N''}(t)\gamma_{N''})
\end{split}\end{equation*}
for all $(i\gamma_1,\cdots,i\gamma_{N''})\in (i\mathbb{R})^{N''}$, where $(F_*)_e:(i\mathbb{R})^{N''}\rightarrow\mathfrak{t}$ is the differential of $F$ at $e\in (S^1)^{N''}$. Since $\mathbb{Z}^{N''}$ is discrete, we have $\mathrm{ad}(X)|_{\mathfrak{t}}=0$. Thus it follows that $[X,\mathfrak{k}]=\{0\}$.
\end{proof}
Let $\gamma\in\{-1,-1/2,0,1/2,1\}$, and let $\mathfrak{g}(\gamma)\subset \mathfrak{g}$ be the subspace given by $\mathfrak{g}(\gamma)^\#=\mathfrak{f}(\gamma)$. Then the following equalities hold:
\begin{equation*}
\mathfrak{g}(\gamma)=\{X\in\mathfrak{g}; \mathrm{ad}(jE)X=-\gamma X\},
\end{equation*}
\begin{equation*}
\mathfrak{g}=\mathfrak{g}(-1)\oplus\mathfrak{g}(-1/2)\oplus\mathfrak{g}(0)\oplus\mathfrak{g}(1/2)\oplus\mathfrak{g}(1).
\end{equation*}
Note that
\begin{equation*}
\mathfrak{b}(1)=\mathfrak{g}(-1),\quad \mathfrak{b}(1/2)=\mathfrak{g}(-1/2),\quad \mathfrak{b}(0)\subset \mathfrak{g}(0).
\end{equation*}
Let $\mathfrak{n}, \mathfrak{n}'\subset \mathfrak{g}_\mathbb{C}$ be the subalgebras given by $\mathfrak{n}^\#=\mathfrak{m}$ and $\mathfrak{n}'^\#=\mathfrak{m}'$.
Then we have
\begin{equation*}
(\mathfrak{n}\cap\mathfrak{g})^\#=\mathfrak{m}\cap\mathfrak{f}=\{X+\psi(X);X\in\mathfrak{f}(1/2)\},
\end{equation*}
\begin{equation*}
(\mathfrak{n}'\cap\mathfrak{g})^\#=\mathfrak{m}'\cap\mathfrak{f}=\{X+\varphi(X);X\in\mathfrak{f}(1)\},
\end{equation*}
\begin{equation*}
\mathfrak{k}=(\mathfrak{k}\cap\mathfrak{g}(0))\oplus(\mathfrak{n}\cap\mathfrak{g})\oplus(\mathfrak{n}'\cap\mathfrak{g}).
\end{equation*}
From now on, for $X\in\mathfrak{g}$, let $X_\gamma$ denote the projection of $X$ onto $\mathfrak{g}(\gamma)$.
\begin{proposition}\label{b0inv}
The subalgebra $\mathfrak{b}_{triv}\subset\mathfrak{b}$ is $\mathrm{ad}(jE)$-invariant. \end{proposition}
\begin{proof}
Let $X\in\mathfrak{b}_{triv}$. By Lemma \ref{IfXinmathf}, for $Y_0\in\mathfrak{k}\cap\mathfrak{g}(0)$, we have
\begin{equation*}\begin{split}
[Y_0,X]=[Y_0,X_{-1}]+[Y_0,X_{-1/2}]+[Y_0,X_{0}]=0
\end{split}\end{equation*}
with $[Y_0,X_{-1}]\in\mathfrak{g}(-1)$, $[Y_0,X_{-1/2}]\in\mathfrak{g}(-1/2)$, and $[Y_0,X_{0}]\in\mathfrak{g}(0)$. Since the components in the different subspaces $\mathfrak{g}(\gamma)$ must vanish separately, we have $[Y_0,X_{-1}]=0$ and $[Y_0,X_{-1/2}]=0$. Thus
\begin{equation*}\begin{split}
\mathrm{ad}(Y_0)\mathrm{ad}(jE)(X)&=\mathrm{ad}(Y_0)\left(\frac{1}{2}X_{-1/2}+X_{-1}\right)=0.
\end{split}\end{equation*}
For $Y'=Y'_{-1}+Y'_{1}\in \mathfrak{n}'\cap\mathfrak{g}$, we have
\begin{equation*}
[Y',X]=[Y'_{-1},X_0]+[Y'_1,X_{-1}]+[Y'_1,X_{-1/2}]+[Y'_1,X_0]=0
\end{equation*}
with $[Y'_{-1},X_0]\in\mathfrak{g}(-1), [Y'_1,X_{-1}]\in\mathfrak{g}(0), [Y'_1,X_{-1/2}]\in\mathfrak{g}(1/2)$, and $[Y'_1,X_0]\in\mathfrak{g}(1)$.
Clearly
\begin{equation*}
[Y'_1,X_{-1}]=0, [Y'_1,X_{-1/2}]=0,
\end{equation*}
and we see from Proposition \ref{Forasubalg2} that
\begin{equation}\label{X12X10quad}
[Y_{1/2},X_{-1}]=0\quad(Y_{1/2}\in\mathfrak{g}(1/2)).
\end{equation}
We have
\begin{equation*}\begin{split}
\mathrm{ad}(Y')\mathrm{ad}(jE)(X)&=\mathrm{ad}(Y')\left(\frac{1}{2}X_{-1/2}+X_{-1}\right)\\&
=\left[Y'_{-1}+Y'_1,\frac{1}{2}X_{-1/2}+X_{-1}\right]\\&
=\frac{1}{2}[Y'_1,X_{-1/2}]+[Y'_1,X_{-1}]=0.
\end{split}\end{equation*}
For $Y=Y_{-1/2}+Y_{1/2}\in\mathfrak{n}\cap\mathfrak{g}$, we have
\begin{equation*}\begin{split}
[&Y,X]\\&=[Y_{-1/2},X_{-1/2}]+[Y_{-1/2},X_0]+[Y_{1/2},X_{-1}]+[Y_{1/2},X_{-1/2}]+[Y_{1/2},X_0]=0
\end{split}\end{equation*}
with $[Y_{-1/2},X_{-1/2}]\in\mathfrak{g}(-1), [Y_{-1/2},X_0]+[Y_{1/2},X_{-1}]\in\mathfrak{g}(-1/2),[Y_{1/2},X_{-1/2}]\in\mathfrak{g}(0)$, and $[Y_{1/2},X_0]\in\mathfrak{g}(1/2)$. By (\ref{X12X10quad}), we have
\begin{equation*}\begin{split}
\mathrm{ad}(Y)\mathrm{ad}(jE)(X)&=\mathrm{ad}(Y)\left(\frac{1}{2}X_{-1/2}+X_{-1}\right)\\&
=\left[Y_{-1/2}+Y_{1/2},\frac{1}{2}X_{-1/2}+X_{-1}\right]\\&
=\frac{1}{2}[Y_{-1/2},X_{-1/2}]+\frac{1}{2}[Y_{1/2},X_{-1/2}]+[Y_{1/2},X_{-1}]=0.
\end{split}\end{equation*}
Thus for any $X\in\mathfrak{b}_{triv}$ and $W\in\mathfrak{k}$, we have
\begin{equation*}
\mathrm{ad}(W)\mathrm{ad}(jE)X=0.
\end{equation*}
This completes the proof.
\end{proof}
\begin{proposition}\label{addinv}
The subspace $\mathfrak{b}_{triv}^\perp\subset\mathfrak{b}$ is $\mathrm{ad}(jE)$-invariant.
\end{proposition}
\begin{proof}
By Remark \ref{Bydatrithe} (i), for $X\in\mathfrak{b}_{triv}^\perp$ and $Y\in\mathfrak{b}_{triv}$, we have
\begin{equation*}\begin{split}
\langle\mathrm{ad}(jE)X,Y\rangle=\langle X,\mathrm{ad}(jE)Y\rangle,
\end{split}\end{equation*}
which is equal to $0$ by Proposition \ref{b0inv}. This implies that the assertion holds.
\end{proof}
\subsection{Actions of the isotropy subgroup on holomorphic vector bundles}
\begin{lemma}\label{compa}
Let $G_0$ be a connected compact Lie group, and let $(\pi,\mathcal{V})$ be a finite-dimensional unitary representation of $G_0$. For $X\in\mathfrak{g}_0$ and $\zeta\in\mathbb{C}$, put
\begin{equation*}
\mathcal{V}(X,\zeta)=\{v\in\mathcal{V}; d\pi(X)v=\zeta v\}.
\end{equation*}
If $\pi$ is irreducible and nontrivial, then
\begin{equation*}
\mathcal{V}=\sum_{X\in\mathfrak{g}_0,\zeta\in\mathbb{C}\backslash\{0\}}\mathcal{V}(X,\zeta).
\end{equation*}
\end{lemma}
\begin{proof}
Let $\mathcal{V}_1=\sum_{X\in\mathfrak{g}_0,\zeta\in\mathbb{C}\backslash\{0\}}\mathcal{V}(X,\zeta)$, and let $\mathbb{T}$ be a maximal torus of $G_0$. Suppose that $\mathcal{V}_1=0$. Since $\pi$ is unitary, each $d\pi(X)$ is skew-Hermitian and hence diagonalizable, so that $d\pi(X)=0$ for all $X\in\mathfrak{g}_0$. In particular, the character $\chi_\pi(g)=\mathrm{tr}\,\pi(g)\,(g\in G_0)$ satisfies $\chi_\pi|_\mathbb{T}=\dim \mathcal{V}$ identically. Two finite-dimensional representations of $G_0$ are equivalent if and only if their characters are equal (see \cite[Part 2, Corollary 5.3.4]{wolf}), and any two maximal tori of $G_0$ are conjugate (see \cite[Corollary 4.35]{Knapp}), so that $\pi$ is trivial. This contradicts the assumption. Thus $\mathcal{V}_1\neq 0$. Let $v\in \mathcal{V}(X,\zeta)$. For $g\in G_0$, we have
\begin{equation*}\begin{split}
d\pi(\mathrm{Ad}(g)X)\pi(g)v&=\left.\frac{d}{dt}\right|_{t=0}\pi(ge^{tX}g^{-1}g)v=\left.\frac{d}{dt}\right|_{t=0}\pi(ge^{tX})v\\&=\pi(g)d\pi(X)v=\zeta\pi(g)v,
\end{split}\end{equation*}
so that $\pi(g)v\in \mathcal{V}(\mathrm{Ad}(g)X,\zeta)$.
Hence $\mathcal{V}_1$ is a $G_0$-invariant subspace of $\mathcal{V}$. Since $\pi$ is a finite-dimensional irreducible representation and $\mathcal{V}_1\neq 0$, we have $\mathcal{V}=\mathcal{V}_1$.
\end{proof}
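For example (a minimal illustration of the lemma), take $G_0=S^1$ acting on $\mathcal{V}=\mathbb{C}$ by $\pi(e^{it})v=e^{int}v$ with $n\neq 0$. For the generator $X=i$ of $\mathfrak{g}_0=i\mathbb{R}$ we have $d\pi(X)v=inv$, so that
\begin{equation*}
\mathcal{V}=\mathcal{V}(X,in),
\end{equation*}
and the conclusion of Lemma \ref{compa} holds trivially in this case.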
Let $\theta:\mathfrak{g}_-\rightarrow\mathbb{C}$ be a complex representation of $\mathfrak{g}_-$, and let $\chi^\theta:B\rightarrow \mathbb{C}^\times$ be the representation of $B$ given by \eqref{chithetaex}.
\begin{theorem}\label{extension}
Suppose that $\theta(\mathfrak{k})=0$. Extend the representation $d\chi^\theta:
\mathfrak{b}\rightarrow\mathbb{C}$ of $\mathfrak{b}$ to a linear map $d\chi^\theta:
\mathfrak{g}\rightarrow\mathbb{C}$ by zero-extension with respect to the decomposition $\mathfrak{g}=\mathfrak{b}\oplus\mathfrak{k}$. Then $d\chi^\theta:\mathfrak{g}\rightarrow \mathbb{C}$ defines a representation of $\mathfrak{g}$.
\end{theorem}
\begin{proof}
First we show that $\mathfrak{b}_{triv}^\perp=\sum_{W\in\mathfrak{k},\zeta\in\mathbb{C}\backslash\{0\}}\mathfrak{b}(W,\zeta)$, where $\mathfrak{b}(W,\zeta)=\{X\in\mathfrak{b};d\rho(W)X=\zeta X\}$. Since every irreducible subrepresentation of $(\rho,\mathfrak{b}_{triv}^\perp)$ is nontrivial, we see from Lemma \ref{compa} that $\mathfrak{b}_{triv}^\perp\subset\sum_{W\in\mathfrak{k},\zeta\in\mathbb{C}\backslash\{0\}}\mathfrak{b}(W,\zeta)$. Conversely, let $W\in\mathfrak{k}$, $\zeta\in\mathbb{C}\backslash\{0\}$, $X\in\mathfrak{b}(W,\zeta)$, and $X'\in\mathfrak{b}_{triv}$. Then $d\rho(W)X'=0$ by Lemma \ref{IfXinmathf}, and since $d\rho(W)$ is skew-Hermitian, we get $\zeta\langle X,X'\rangle''=\langle d\rho(W)X,X'\rangle''=-\langle X,d\rho(W)X'\rangle''=0$; as $\zeta\neq 0$, we have $\langle X,X'\rangle=0$. This shows that $\mathfrak{b}_{triv}^\perp\supset\sum_{W\in\mathfrak{k},\zeta\in\mathbb{C}\backslash\{0\}}\mathfrak{b}(W,\zeta)$. Thus
\begin{equation*}
\mathfrak{b}_{triv}^\perp=\sum_{W\in\mathfrak{k}, \zeta\in\mathbb{C}\backslash\{0\}}\mathfrak{b}(W,\zeta)=\sum_{W\in\mathfrak{k}, \gamma\in\mathbb{R}\backslash\{0\}}\mathfrak{b}(W,i\gamma).
\end{equation*}
Let $\gamma\in\mathbb{R}\backslash\{0\}$, and let $W\in\mathfrak{k}$. Second we show that $\theta (X+ijX)=0$ for all $X\in \mathfrak{b}(W,i\gamma)$. Let $X\in\mathfrak{b}(W,i\gamma)$. Then
\begin{equation*}\begin{split}
0&=[\theta(W),\theta(X+ijX)]=\theta([W,X+ijX])=\theta([W,X]+i[W,jX])
\\&=\theta(d\rho(W)X+id\rho(W)(jX))=\theta(\gamma j X-\gamma i X)=-\gamma i\theta(X+ijX).
\end{split}\end{equation*}
This proves that $\theta(X+ijX)=0$, and hence
\begin{equation*}
\theta(X+ijX)=0\quad(X\in\mathfrak{b}_{triv}^\perp).
\end{equation*}
Let $X\in\mathfrak{b}_{triv}^\perp$. Then we have $X_{-1/2}, X_0\in\mathfrak{b}_{triv}^\perp$ by Proposition \ref{addinv}. Thus
\begin{equation*}
d\chi^\theta(X)=\theta (\tau(X))=\theta((X_{-1/2}+ijX_{-1/2})/2+X_0+ijX_0)=0.
\end{equation*}
We see from the above equality that $d\chi^\theta([\mathfrak{b},\mathfrak{k}])=0$: indeed, $[\mathfrak{b}_{triv},\mathfrak{k}]=\{0\}$ by Lemma \ref{IfXinmathf}, while for $Y\in\mathfrak{b}_{triv}^\perp$ and $W\in\mathfrak{k}$, the $\mathfrak{b}$-component of $[W,Y]$ is $d\rho(W)Y\in\mathfrak{b}_{triv}^\perp$, on which $d\chi^\theta$ vanishes, and $d\chi^\theta(\mathfrak{k})=0$ by definition. Now let $X,X'\in\mathfrak{g}$, and write $X=Y+W,X'=Y'+W'$ with $Y,Y'\in\mathfrak{b}$ and $W,W'\in\mathfrak{k}$. Then we have
\begin{equation*}
d\chi^\theta([X,X'])=d\chi^\theta([Y,Y'])=[d\chi^\theta(Y),d\chi^\theta(Y')]=[d\chi^\theta(X),d\chi^\theta(X')].
\end{equation*}
This completes the proof.
\end{proof}
Let $M,M':G\times\mathcal{D}(\Omega,Q)\rightarrow \mathbb{C}^\times$ be holomorphic multipliers. Put $\theta=\theta_{M}, \theta'=\theta_{M'}$.
\begin{theorem}\label{Supposethat}
Suppose that $M(k,(iE,0))=M'(k,(iE,0))$ for all $k\in K$. Then $M_{\theta}(k,(U,V))=M_{\theta'}(k,(U,V))$ for all $k\in K$ and $(U,V)\in\mathcal{D}(\Omega,Q)$.
\end{theorem}
\begin{proof}
By Theorem \ref{extension}, the representation $d\chi^{\theta-\theta'}:\mathfrak{b}\rightarrow \mathbb{C}$ of $\mathfrak{b}$ extends to a representation of $\mathfrak{g}$. Let us use the same symbol $d\chi^{\theta-\theta'}$ to denote the extension of the representation. Let $\widetilde{G}$ be the universal covering group of $G$. We denote the covering homomorphism by $\tilde{p}:\widetilde{G}\rightarrow G$. Then we have $\widetilde{G}=\tilde{p}^{-1}(B)^{o}\tilde{p}^{-1}(K)$. Since the map $\tilde{p}|_{\tilde{p}^{-1}(B)^o}:\tilde{p}^{-1}(B)^o\rightarrow B$ is bijective, we have $\tilde{p}^{-1}(B)^o\cap \tilde{p}^{-1}(K)=\{e\}$. Let $\chi':\widetilde{G}\rightarrow \mathbb{C}^\times$ be the lift of $d\chi^{\theta-\theta'}:\mathfrak{g}\rightarrow\mathbb{C}$ to a representation of $\widetilde{G}$. For $g\in G$, let $g'$ and $g''$ be elements of $\widetilde{G}$ such that $\tilde{p}(g')=\tilde{p}(g'')=g$. Then
\begin{equation*}
g'g''^{-1}\in \tilde{p}^{-1}(\{e\})\subset \tilde{p}^{-1}(K).
\end{equation*}
Thus it follows that $\chi'(g'g''^{-1})=1$. Hence $\chi'$ descends to $G$, i.e. there exists a representation $\chi:G\rightarrow \mathbb{C}^\times$ such that $\chi'=\chi\circ\tilde{p}$. Now $\chi$ defines a holomorphic multiplier $\chi:G\times\mathcal{D}(\Omega,Q)\rightarrow \mathbb{C}^\times$, and we have \begin{equation*}
M_{\theta}M_{\theta'}^{-1}\chi^{-1}(k,(iE,0))=1\quad(k\in K),
\end{equation*}
and
\begin{equation*}
M_{\theta}M_{\theta'}^{-1}\chi^{-1}(b,(U,V))=1\quad(b\in B,(U,V)\in\mathcal{D}(\Omega,Q)).
\end{equation*}
By the same argument as in the proof of Lemma \ref{extendm}, we have
\begin{equation*}
M_{\theta}M_{\theta'}^{-1}\chi^{-1}(g,(U,V))=1\quad(g\in G,(U,V)\in\mathcal{D}(\Omega,Q)).
\end{equation*}
This proves the result.
\end{proof}
\begin{corollary}\label{LetEandEbe}
Let $L$ and $L'$ be $G$-equivariant holomorphic line bundles over a bounded homogeneous domain $\mathcal{D}$. Suppose that the actions of $K$ on the fibers $L_p$ and $L'_p$ coincide. Then $L$ and $L'$ are isomorphic as $K$-equivariant holomorphic line bundles.
\end{corollary}
\subsection{Unitary equivalences and the actions of the isotropy subgroup}
We first record a formula for the function $\Delta_{\xi,\xi'}$.
Let $\theta$ and $\theta'$ be one-dimensional complex representations of $\mathfrak{g}_-$. Then we have
\begin{equation}\label{chii}\begin{split}
\Delta_{\xi,\xi'}\left(\tfrac{U(b)-\overline{U'(b)}}{i}-2Q(V(b),V'(b))\right)=\overline{\chi^{\theta-\theta'}(b)}\Delta_{\xi,\xi'}\left(\tfrac{U-\overline{U'}}{i}-2Q(V,V')\right)\\(b\in B,(U,V),(U',V')\in\mathcal{D}(\Omega,Q)).
\end{split}\end{equation}
Let $M,M':G\times\mathcal{D}(\Omega,Q)\rightarrow \mathbb{C}^\times$ be holomorphic multipliers.
\begin{proposition}\label{pretheorem}
Suppose that the representations $T_{M}$ and $T_{M'}$ have unitarizations $(T_M, \mathcal{H})$ and $(T_{M'},\mathcal{H}')$, and that these unitarizations are equivalent as unitary representations of $B$. Then $(T_{M}, \mathcal{H})$ and $(T_{M'}, \mathcal{H}')$ are equivalent as unitary representations of $G$ if and only if
\begin{equation*}
M(k,(iE,0))=M'(k,(iE,0))\quad(k\in K).
\end{equation*}
\end{proposition}
\begin{proof}
First we show the `only if' part. Putting $(U,V)=(iE,0)$ in (\ref{diff}), we obtain $M(k,(iE,0))=M'(k,(iE,0))$ for all $k\in K$. Second we show the `if' part. Suppose that $M(k,(iE,0))=M'(k,(iE,0))$ for all $k\in K$. Let $\xi$ and $\xi'$ be the linear forms on $\mathfrak{g}$ given by \eqref{xiJKcdotpi}. Then (\ref{chii}) gives
\begin{equation*}\begin{split}
&\Delta_{\xi,\xi'}\left(\tfrac{U(b)-\overline{U(b)}}{i}-2Q(V(b),V(b))\right)
=\overline{\chi^{i\xi-i\xi'}(b)}\Delta_{\xi,\xi'}\left(\tfrac{U-\overline{U}}{i}-2Q(V,V)\right)
\\&=\overline{M_{i\xi}M_{-i\xi'}(b,(U,V))}\Delta_{\xi,\xi'}\left(\tfrac{U-\overline{U}}{i}-2Q(V,V)\right)\quad(b\in B, (U,V)\in\mathcal{D}(\Omega,Q)).
\end{split}\end{equation*}
Then Lemma \ref{extendm} and Theorem \ref{Supposethat} show that
\begin{equation*}\begin{split}
\Delta_{\xi,\xi'}&\left(\tfrac{U(k)-\overline{U(k)}}{i}-2Q(V(k),V(k))\right)
\\&=\overline{M_{i\xi}M_{-i\xi'}(k,(U,V))}\Delta_{\xi,\xi'}\left(\tfrac{U-\overline{U}}{i}-2Q(V,V)\right)
\\&=\Delta_{\xi,\xi'}\left(\tfrac{U-\overline{U}}{i}-2Q(V,V)\right)\quad(k\in K, (U,V)\in\mathcal{D}(\Omega,Q)).
\end{split}\end{equation*}
By the analytic continuation, we have
\begin{equation*}\begin{split}
\Delta_{\xi,\xi'}\left(\tfrac{U(k)-\overline{U'(k)}}{i}-2Q(V(k),V'(k))\right)=\Delta_{\xi,\xi'}\left(\tfrac{U-\overline{U'}}{i}-2Q(V,V')\right)\\\quad(k\in K,(U,V),(U',V')\in\mathcal{D}(\Omega,Q)).
\end{split}\end{equation*}
Putting $(U',V')=(iE,0)$ in the above equation, we obtain
\begin{equation*}
\Delta_{\xi,\xi'}\left(\frac{U(k)-\overline{iE}}{i}\right)=\Delta_{\xi,\xi'}\left(\frac{U-\overline{iE}}{i}\right)\quad(k\in K,(U,V)\in\mathcal{D}(\Omega,Q)).
\end{equation*}
Thus we get the equation (\ref{diff}), and hence the unitary representations $(T_{M},\mathcal{H})$ and $(T_{M'},\mathcal{H}')$ of $G$ are equivalent by Proposition \ref{Thefollowingc}. The proof is complete.
\end{proof}
\begin{theorem}\label{main}
Let $m,m':G\times\mathcal{D}\rightarrow \mathbb{C}^\times$ be holomorphic multipliers. Suppose that there exist Hilbert spaces $\mathcal{H}$ and $\mathcal{H}'$ of holomorphic functions on $\mathcal{D}$ which give the unitarizations of $T_m$ and $T_{m'}$, respectively. Then the following two conditions are equivalent:
\begin{enumerate}
\item[$(\mathrm{i})$]
$(T_{m},\mathcal{H})$ and $(T_{m'},\mathcal{H}')$ are equivalent as unitary representations of $G$.
\item[$(\mathrm{ii})$]
$(T_{m},\mathcal{H})$ and $(T_{m'},\mathcal{H}')$ are equivalent as unitary representations of $B$, and $m(k,p)=m'(k,p)$ for all $k\in K$.
\end{enumerate}
\end{theorem}
\begin{proof}
We assume that $(T_{m},\mathcal{H})$ and $(T_{m'},\mathcal{H}')$ are equivalent as unitary representations of $B$. Let $M,M':G\times \mathcal{D}(\Omega, Q)\rightarrow \mathbb{C}^\times$ be holomorphic multipliers given by $M(g,(U,V))={m}(g,\mathcal{C}((U,V)))$, $M'(g,(U,V))={m'}(g,\mathcal{C}((U,V)))$. We see from Proposition \ref{pretheorem} that the unitarizations of $T_{M}$ and $T_{M'}$ are equivalent as unitary representations of $G$ if and only if $M(k,(iE,0))=M'(k,(iE,0))$ for all $k\in K$. The map
\begin{equation*}
\mathcal{C}^*:\mathcal{O}(\mathcal{D})\ni f\mapsto f\circ \mathcal{C}\in\mathcal{O}(\mathcal{D}(\Omega,Q))
\end{equation*}
intertwines the representations $T_{{m}}$ and $T_{M}$ of $G$, and also intertwines the representations $T_{{m'}}$ and $T_{M'}$ of $G$. Thus \text{(i)} holds if and only if $(T_{M},\mathcal{C}^*(\mathcal{H}))$ and $(T_{M'},\mathcal{C}^*(\mathcal{H}'))$ are equivalent as unitary representations of $G$.
Moreover, by Proposition \ref{pretheorem}, the representations $(T_{M},\mathcal{C}^*(\mathcal{H}))$ and $(T_{M'},\mathcal{C}^*(\mathcal{H}'))$ are equivalent as unitary representations of $G$ if and only if
\begin{equation*}\begin{split}
m(k,p)=M(k,(iE,0))=M'(k,(iE,0))=m'(k,p) \quad(k\in K).
\end{split}\end{equation*}
Thus (i) and (ii) are equivalent.
\end{proof}
\section{Application to a certain bounded homogeneous domain}\label{Applicatio}
In this section, we present an application of Theorem \ref{main}. We consider the following domain:
\begin{equation*}
\mathcal{D}(\Omega_1)=\left\{U=\left[\begin{array}{ccc}z_1&0&z_4\\0&z_2&z_5\\z_4&z_5&z_3\end{array}\right]\in \mathrm{Sym}(3,\mathbb{C});\Im U\gg 0 \right\}.
\end{equation*}
Let $\mathcal{U}=\left\{U_0=\left[\begin{array}{ccc}x_1&0&x_4\\0&x_2&x_5\\x_4&x_5&x_3\end{array}\right];x_1,\cdots ,x_5\in\mathbb{R}\right\}$, and let
\begin{equation*}
\Omega_1=\mathcal{U}\cap \mathcal{P}(3,\mathbb{R}),
\end{equation*}
where $\mathcal{P}(3,\mathbb{R})$ denotes the homogeneous convex cone consisting of all $3$-by-$3$ real positive-definite symmetric matrices.
The domain $\mathcal{D}(\Omega_1)$ is a Siegel domain of tube type, i.e. $\mathcal{D}(\Omega_1)=\mathcal{U}+i\,\Omega_1$.
We recall the description of the holomorphic automorphism group of $\mathcal{D}(\Omega_1)$, which was determined by Geatti \cite{geatti 1987}. Let $y_1,\cdots,y_5\in\mathbb{R}$ with $y_1,y_2,y_3>0$. Put \begin{equation*}
T_0=\left[\begin{array}{ccc}y_{1}&0&0\\0&y_{2}&0\\y_4&y_5&y_3\end{array}\right].
\end{equation*}
Let $x_1,\cdots, x_5\in\mathbb{R}$, and put
\begin{equation*}
U_0=\left[\begin{array}{ccc}x_1&0&x_4\\0&x_2&x_5\\x_4&x_5&x_3\end{array}\right].
\end{equation*}
Let
\begin{equation*}
gl_{T_0}:\mathcal{D}(\Omega_1)\ni U\mapsto T_0U{}^tT_0\in\mathcal{D}(\Omega_1),
\end{equation*}
\begin{equation*}
t_{U_0}:\mathcal{D}(\Omega_1)\ni U\mapsto U+U_0\in\mathcal{D}(\Omega_1),
\end{equation*}
and for $\vartheta,\gamma\in\mathbb{R}$ and $U\in\mathcal{D}(\Omega_1)$, let
\begin{equation*}\begin{split}
k_{\vartheta,\gamma}(U)=\left[\begin{array}{ccc}
\frac{\sin \vartheta+z_1\cos\vartheta}{\cos\vartheta-z_1\sin\vartheta}&0&\frac{z_4}{\cos\vartheta-z_1\sin\vartheta}\\
0&\frac{\sin\gamma+z_2\cos\gamma}{\cos\gamma-z_2\sin\gamma}&\frac{z_5}{\cos\gamma-z_2\sin\gamma}\\\frac{z_4}{\cos\vartheta-z_1\sin\vartheta}&\frac{z_5}{\cos\gamma-z_2\sin\gamma}&z_3+\frac{\sin\vartheta z_4^2}{\cos\vartheta-z_1\sin\vartheta}+\frac{\sin\gamma z_5^2}{\cos\gamma-z_2\sin\gamma}\end{array}\right].
\end{split}\end{equation*}
\begin{theorem}[Geatti, \cite{geatti 1987}]\label{Theidentit}
The group $G=\mathrm{Aut}_{hol}(\mathcal{D}(\Omega_1))^o$ is generated by $gl_{T_0}$, $t_{U_0}$, and $k_{\vartheta,\gamma}$.
\end{theorem}
Put $\mathcal{T}=\left\{T_0;y_1,\cdots,y_5\in\mathbb{R}, y_1,y_2,y_3>0 \right\}$. Let $B=\langle gl_{T_0},t_{U_0}\rangle_{T_0\in\mathcal{T},U_0\in\mathcal{U}}$ be the subgroup of $G$ generated by $gl_{T_0}$ and $t_{U_0}$. Then $B$ acts simply transitively on $\mathcal{D}(\Omega_1)$ and is an Iwasawa subgroup of $G$. We take $iI_3\in\mathcal{D}(\Omega_1)$ as a reference point of $\mathcal{D}(\Omega_1)$. By Theorem \ref{Theidentit}, we have $\langle k_{\vartheta,\gamma}\rangle_{\vartheta,\gamma\in\mathbb{R}}=K$. Let $j$ be the complex structure on $\mathfrak{b}$ defined in Section \ref{Normaljalg}. The following holomorphic vector fields on $\mathcal{D}(\Omega_1)$ are given by the action of a one-parameter subgroup of $\langle gl_{T_0}\rangle_{T_0\in\mathcal{T}}$:
\begin{equation*}\begin{split}
A_1^\#&=\left[\begin{array}{ccc}{z_1} & 0 & z_4/2\\
0 & 0 & 0\\
z_4/2 & 0 & 0\end{array}\right],\quad A_2^\#=\left[\begin{array}{ccc}0 & 0 & 0\\
0 & {z_2} & z_5/2\\
0 & z_5/2 & 0\end{array}\right],\\
A_3^\#&=\left[\begin{array}{ccc}0 & 0 & z_4/2\\
0 & 0 & z_5/2\\
z_4/2 & z_5/2 & {z_3}\end{array}\right],\quad
A_{3,1}^\#=\left[\begin{array}{ccc}0 & 0 & {z_1}\\
0 & 0 & 0\\
{z_1} & 0 & 2 {z_4}\end{array}\right],\\
A_{3,2}^\#&=\left[\begin{array}{ccc}0 & 0 & 0\\
0 & 0 & {z_2}\\
0 & {z_2} & 2 {z_5}\end{array}\right].
\end{split}\end{equation*}
The following holomorphic vector fields on $\mathcal{D}(\Omega_1)$ are given by the action of a one-parameter subgroup of $\langle t_{U_0}\rangle_{U_0\in\mathcal{U}}$:
\begin{equation*}\begin{split}
E_1^\#&=\left[\begin{array}{ccc}1 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & 0\end{array}\right],\quad
E_2^\#=\left[\begin{array}{ccc}0 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 0\end{array}\right],\quad
E_3^\#=\left[\begin{array}{ccc}0 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & 1\end{array}\right],\\
E_{3,1}^\#&=\left[\begin{array}{ccc}0 & 0 & 1\\
0 & 0 & 0\\
1 & 0 & 0\end{array}\right],\quad
E_{3,2}^\#=\left[\begin{array}{ccc}0 & 0 & 0\\
0 & 0 & 1\\
0 & 1 & 0\end{array}\right].
\end{split}\end{equation*}
The following holomorphic vector fields on $\mathcal{D}(\Omega_1)$ are given by the action of a one-parameter subgroup of $\langle k_{\vartheta,\gamma}\rangle_{\vartheta, \gamma\in\mathbb{R}}$:
\begin{equation*}
W_1^\#=\left[\begin{array}{ccc}-{{{z_1}}^{2}}-1 & 0 & -{z_1} {z_4}\\
0 & 0 & 0\\
-{z_1} {z_4} & 0 & -{{{z_4}}^{2}}\end{array} \right],\quad
W_2^\#=\left[\begin{array}{ccc}0 & 0 & 0\\
0 & -{{{z_2}}^{2}}-1 & -{z_2} {z_5}\\
0 & -{z_2} {z_5} & -{{{z_5}}^{2}}\end{array}\right].
\end{equation*}
Consider the bijection $\mathfrak{g}\ni X\mapsto X^\#\in\mathfrak{X}(\mathcal{D}(\Omega_1))$,
and let $A_1, A_2, A_3, A_{3,1}, A_{3,2}$ denote the elements of $\mathfrak{g}$ whose images under the above map are $A_1^\#, A_2^\#, A_3^\#, A_{3,1}^\#, A_{3,2}^\#$, respectively, and set $E_1, E_2, E_3, E_{3,1},E_{3,2}, W_1, W_2$ in the same way. Then $r=3$, $\mathfrak{a}=\langle A_1,A_2,A_3\rangle$,
\begin{equation*}
\mathfrak{b}_{(\alpha_l-\alpha_k)/2}=\langle A_{l,k}\rangle,\quad \mathfrak{b}_{(\alpha_l+\alpha_k)/2}=\langle E_{l,k}\rangle\quad(1\leq k< l\leq 3),
\end{equation*}
and
\begin{equation*}
\mathfrak{b}_{\alpha_k}=\langle E_k \rangle\quad(1\leq k\leq 3)
\end{equation*}
in Theorem \ref{Forasuitab}. We have
\begin{equation*}
\mathfrak{b}_-=\langle E_1+iA_1, E_2+iA_2, E_3+iA_3, E_{3,1}+iA_{3,1}, E_{3,2}+iA_{3,2} \rangle,
\end{equation*}
and since $[\mathfrak{b}_-,\mathfrak{b}_-]=\mathfrak{b}_-\cap[\mathfrak{b},\mathfrak{b}]_\mathbb{C}$, we have
\begin{equation*}
[\mathfrak{b}_-,\mathfrak{b}_-]=\mathfrak{b}_-\cap(\sideset{}{^\oplus}\sum_{1\leq k<l\leq r}\mathfrak{b}_{(\alpha_l-\alpha_k)/2}\oplus\sideset{}{^\oplus}\sum_{1\leq k<l\leq r}\mathfrak{b}_{(\alpha_l+\alpha_k)/2})_\mathbb{C}.
\end{equation*}
The subspace $[\mathfrak{k},\mathfrak{b}_-]$ is generated by the following elements:
\begin{equation*}\begin{split}
&[W_1,E_1+iA_1]=-2A_1+i(2E_1+W_1), \,[W_1,E_{3,1}+iA_{3,1}]=iE_{3,1}-A_{3,1},\\& [W_2,E_2+iA_2]=-2A_2+i(W_2+2E_2),\, [W_2,E_{3,2}+iA_{3,2}]=iE_{3,2}-A_{3,2}.
\end{split}\end{equation*}
Clearly, $[\mathfrak{k},\mathfrak{k}]=0$.
Thus every $\xi\in\mathfrak{g}^*$ satisfying $\xi([\mathfrak{g}_-,\mathfrak{g}_-])=0$ can be written as
\begin{equation*}\begin{split}
\xi=\xi(x, y, n, n')=xE_{3}^*+yA_{3}^*+\frac{n}{2}(2W_1^*-E_{1}^*)+\frac{n'}{2}(2W_2^*-E_{2}^*)\\\quad (x,y,n,n' \in\mathbb{R}).
\end{split}\end{equation*}
If the representation $i\xi|_{\mathfrak{k}}:\mathfrak{k}\rightarrow\mathbb{C}$ lifts to a representation of $K$, then
$n,n'\in\mathbb{Z}$.
Let $x, y\in\mathbb{R}$ and let $n,n'\in \mathbb{Z}$. We shall apply Theorem 13 in \cite{ishi 2011} to the representation $T_{\chi^{i\xi}}$ with $\xi=\xi(x,y,n,n')$. The theorem gives the set of all parameters $(x,y,n,n')$ such that the representation $T_{\chi^{i\xi}}$ of $B$ is unitarizable, and defines the equivalence relation on the set which corresponds to the unitary equivalence among the unitarizable representations. We see from Theorem 13(i) in \cite{ishi 2011} that the representation $T_{\chi^{i\xi}}$ has a unitarization if
\begin{equation*}
x<0,\quad n>0,\quad n'>0
\end{equation*}
or
\begin{equation*}
x=0,\quad n\geq 0,\quad n'\geq 0.
\end{equation*}
Put
\begin{equation*}
\Theta(G)=\left\{\theta\in\mathfrak{g}_-^*;\begin{array}{c}\theta \text{ is a one-dimensional representation of }\mathfrak{g}_-\text{ such that }\\ \text{ its restriction to } \mathfrak{k}\text{ lifts to a representation of }K\text{ and}\\\text{the representation }T_{M_\theta}\text{ of }G\text{ is unitarizable}\end{array} \right\}.
\end{equation*}
By Theorem \ref{LetmGtimes}, it follows that
\begin{equation*}\begin{split}
\Theta(G)=&\{i\xi(x, y, n, n') ;x<0, y\in\mathbb{R}, n,n'\in\mathbb{Z}_{>0} \}
\\&\bigsqcup\{i\xi(0, y, n, n'); y\in\mathbb{R}, n,n'\in\mathbb{Z}_{\geq0} \}.
\end{split}\end{equation*}
We see from Theorem \ref{fundamentalone} and Theorem \ref{fundamentaltwo} that $\Theta(G)$ parametrizes the following set:
\begin{equation*}
\left\{[L];\begin{array}{c}L\text{ is a }G\text{-equivariant holomorphic line bundle over }\mathcal{D}(\Omega_1)\text { such that}\\\text{ the representation }l \text{ of } G\text{ is unitarizable}\end{array}\right\},
\end{equation*}
where $[L]$ denotes the equivalence class of $L$ in the set of $G$-equivariant holomorphic line bundles over $\mathcal{D}(\Omega_1)$. Let
\begin{equation*}
\Theta_{B,-}=\{\xi(x, y, n, n'); x<0, y\in\mathbb{R}, n,n'\in\mathbb{Z}_{>0}\}
\end{equation*}
and
\begin{equation*}
\Theta_{B,0,y,n,n'}=\{\xi(0, y, n, n')\}\quad(y\in\mathbb{R}, n, n'\in\mathbb{Z}_{\geq 0}).
\end{equation*}
We see from Theorem 13(iii) in \cite{ishi 2011} that the partition of $\Theta(G)$ corresponding to the unitary equivalence classes of representations of $B$ is described as follows:
\begin{equation*}
\Theta(G)=\Theta_{B,-}\sqcup\bigsqcup_{y\in\mathbb{R}, n, n'\in\mathbb{Z}_{\geq 0}}\Theta_{B,0,y,n,n'}.
\end{equation*}
Let
\begin{equation*}\begin{split}
\Theta_{G,-,n,n'}=\{\xi(x, y, n, n'); x<0, y\in\mathbb{R}\}\quad(n, n'\in\mathbb{Z}_{>0})
\end{split}\end{equation*}
and
\begin{equation*}
\Theta_{G,0,y,n,n'}=\Theta_{B,0,y,n,n'} =\{\xi(0,y,n,n')\}\quad(y\in\mathbb{R},n,n'\in\mathbb{Z}_{\geq 0}).
\end{equation*}
By Theorem \ref{main}, it follows that the partition of $\Theta(G)$ corresponding to the unitary equivalence classes of representations of $G$ is described as follows:
\begin{equation*}
\Theta(G)=\bigsqcup_{n,n'\in\mathbb{Z}_{>0}}\Theta_{G,-,n,n'}\sqcup\bigsqcup_{y\in\mathbb{R}, n, n'\in\mathbb{Z}_{\geq 0}}\Theta_{G,0,y,n,n'}.
\end{equation*}
\section*{Acknowledgements}
The author would like to thank Professor H. Ishi for a lot of helpful advice on this paper. The author also expresses his greatest appreciation to Professor T. Uzawa for his insightful comments, and to Professor M. Pevzner for valuable discussions.
\section{Introduction}
Leibniz algebras arose from the cohomology study carried out by Loday in 1993 \cite{loday1}, and they have been further investigated by several authors, such as
Ayupov, Casas and others (\cite{Ayupov}, \cite{Casas}). In that cohomology study there is an important family of Leibniz algebras: those whose gradation has maximum length. The remarkable fact that such an algebra can be decomposed into a direct sum of one-dimensional subspaces makes the calculation of the derivations easier, since the gradation induces the corresponding gradation of the cohomology groups.
The main goal of this paper is to continue the study of the $p$-filiform Leibniz algebras of maximum length. These algebras have played a prominent role in mathematics over the last years, both in classification theory and in geometrical, analytical and physical applications.
In earlier works we have already closed the classification of the $p$-filiform Leibniz algebras of maximum length for $0 \leq p \leq 2$ (see \cite{Ayupov}, \cite{J.Lie.Theory2}). Here we study the 3-filiform Leibniz algebras of maximum length, their spaces of derivations and their first cohomology group.
Moreover, we will use three programs that are very helpful for obtaining the classification of the maximum length algebras and their spaces of derivations.
Recall \cite{loday1} that an algebra $\ll$ over a field $F$ is called a Leibniz algebra if it satisfies the following Leibniz identity:
$$[x,[y,z]]=[[x,y],z]-[[x,z],y], \forall x,y,z \in \ll $$
where $[.,.]$ denotes the multiplication in $\ll$.
Consider an arbitrary algebra $\ll$ in the set of $n$-dimensional Leibniz algebras over a field $F$.
Let $B=\{ e_1,$ $ e_2,$ $\cdots,$ $e_n\}$ be a basis of $\ll$. Then $\ll$ is determined, up to isomorphism, by the multiplication
rule for the basis elements; namely,
$$[e_i,e_j]=\displaystyle\sum_{k=1}^n \gamma_{ij}^k e_k$$
where $\gamma_{ij}^k$ are the structure constants. Therefore, fixing a basis, we can regard each algebra of dimension
$n$ over a field $F$ as a point in the $n^3$-dimensional space of structure constants endowed with the Zariski
topology.
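As a concrete illustration (ours, not part of the original text), the Leibniz identity can be checked mechanically from a structure-constant tensor $\gamma_{ij}^k$. The sketch below verifies the identity on all basis triples for the two-dimensional algebra with $[e_1,e_1]=e_2$; the function names and the floating-point tolerance are our own choices:

```python
import itertools

def bracket(gamma, x, y):
    """[x, y] in coordinates, where gamma[i][j][k] is the coefficient
    of e_k in [e_i, e_j]."""
    n = len(gamma)
    z = [0.0] * n
    for i, j in itertools.product(range(n), repeat=2):
        if x[i] and y[j]:
            for k in range(n):
                z[k] += x[i] * y[j] * gamma[i][j][k]
    return z

def is_leibniz(gamma, tol=1e-12):
    """Check [e_i,[e_j,e_k]] = [[e_i,e_j],e_k] - [[e_i,e_k],e_j]
    on all basis triples."""
    n = len(gamma)
    e = [[1.0 if t == s else 0.0 for t in range(n)] for s in range(n)]
    for i, j, k in itertools.product(range(n), repeat=3):
        lhs = bracket(gamma, e[i], bracket(gamma, e[j], e[k]))
        r1 = bracket(gamma, bracket(gamma, e[i], e[j]), e[k])
        r2 = bracket(gamma, bracket(gamma, e[i], e[k]), e[j])
        if any(abs(lhs[t] - r1[t] + r2[t]) > tol for t in range(n)):
            return False
    return True

# Two-dimensional Leibniz algebra with [e_1, e_1] = e_2 (zero-based indices).
gamma = [[[0.0, 1.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]]]
print(is_leibniz(gamma))  # True
```

Since the bracket of a Leibniz algebra need not be skew-symmetric, both orders of each product must be checked; for instance, adding the product $[e_2,e_1]=e_1$ to the table above makes the check fail.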
From now on, Leibniz algebras will be considered over the field of complex numbers $\mathbb{C}$ and of finite dimension. Let $\ll$ be a Leibniz algebra; then $\ll$ is naturally filtered by the descending central sequence $\ll^1=\ll$, $\ll^{k+1}=[\ll^k,\ll]$ with $k\geq 1.$ Thus, a nilpotent algebra $\ll$ has nilindex equal to $s$ if $s$ is the minimum integer such that $\ll^s \neq \{0\}$ and $\ll^{s+1}=\{0\}.$
A Leibniz algebra $\ll$ is $\mathbb{Z}$-graded if $\ll=\oplus_{i \in \mathbb{Z}}V_i$,
where $[V_i,V_j]\subseteq V_{i+j}$ for any $i,j \in \mathbb{Z}$ with a finite number of non null spaces $V_i$.
We will say that a $\mathbb{Z}$-graded Leibniz algebra $\ll$ admits a \emph{connected gradation} if $\ll=V_{k_1}\oplus V_{k_1+1} \oplus \dots \oplus V_{k_1+t}$ and $V_{k_1+i}\neq <0>$ for any $i$ $(0 \leq i \leq t)$.
Let us define the naturally graded algebras as follows:
\begin{defn}
Let us take $\ll_i=\ll^i/\ll^{i+1}$, $1\leq i \leq k$ and $gr \ll=\ll_1 \oplus \ll_2 \oplus \dots \oplus \ll_k$. Then $[\ll_i,\ll_j]\subseteq \ll_{i+j}$
and we obtain the graded algebra $gr \ll$. If $gr \ll$ and $\ll$ are isomorphic, in notation $gr\ll \cong \ll$, we say that $\ll$
is a naturally graded algebra.
\end{defn}
The above constructed gradation is called \emph{natural gradation}.
\begin{defn}\label{def:length}
The number $l( \oplus \ll)=l(V_{k_1}\oplus V_{k_1+1} \oplus \dots \oplus V_{k_1+t})=t+1$ is called the
length of the gradation, where $ \oplus \ll$ is a connected gradation. The gradation $\oplus \ll$ has maximum length if $l(\oplus \ll)=\dim(\ll)$.
\end{defn}
We define the length of an algebra $\ll$ by:
\begin{center}
$l(\ll)=\max \{l(\oplus \ll) \hbox{ such that } \oplus \ll = V_{k_1}\oplus \dots \oplus V_{k_t}\hbox{ is a connected gradation}\}.$
\end{center}
An algebra $\ll$ is called of maximum length if $l(\ll)=\dim(\ll)$.
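For instance (an illustration of ours), the $n$-dimensional null-filiform Leibniz algebra with multiplication $[e_i,e_1]=e_{i+1}$ for $1\leq i \leq n-1$ admits the connected gradation
\begin{equation*}
V_1\oplus V_2\oplus \dots \oplus V_n,\quad V_i=\langle e_i\rangle,
\end{equation*}
since $[V_i,V_1]\subseteq V_{i+1}$ and all the other products vanish. Hence its length equals its dimension, and the algebra has maximum length.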
The set $R(\ll)=\{x \in \ll: [y,x]=0, \ \forall y \in \ll\}$ is called \emph{the right annihilator of $\ll$}. $R_x$ denotes the operator $R_x: \ll \rightarrow \ll$ such that $R_x(y)=[y,x], \ \forall y \in \ll$ and it is called the right operator. The set $Cent(\ll)=\{z \in \ll: [x,z]=[z,x]=0, \ \forall x \in \ll\}$ is called \emph{the center of $\ll$}.
Let $x$ be a nilpotent element of the set $\ll \setminus \ll^2$. For the nilpotent operator $R_x$ we define a descending sequence $C(x)=(n_1,n_2, \dots, n_k)$, which consists of the dimensions of the Jordan blocks of the operator $R_x$. In the set of such sequences we consider the lexicographic order, that is,
$C(x)=(n_1,n_2, \dots, n_k)< C(y)=(m_1, m_2, \dots, m_s)$ if and only if there exists $i \in \mathbb{N}$ such that $n_j=m_j$ for any $j<i$ and $n_i<m_i$.
\begin{defn}\label{def:char.seq}
The sequence $C(\ll)=\max_{x \in \ll \setminus \ll^2} C(x)$ is called the characteristic sequence of the algebra $\ll$.
\end{defn}
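The characteristic sequence of Definition \ref{def:char.seq} can be computed from the ranks of the powers of a nilpotent right operator $R_x$, since the number of Jordan blocks of size at least $k$ equals $\mathrm{rank}(R_x^{k-1})-\mathrm{rank}(R_x^k)$. A minimal sketch of this computation (an illustration of ours, using exact rational arithmetic; the sample operator below has Jordan blocks of sizes $3,1,1$, as for a $2$-filiform algebra in dimension $5$):

```python
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def rank(M):
    """Rank via Gaussian elimination over exact rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    n, r = len(A), 0
    for col in range(n):
        piv = next((i for i in range(r, n) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(n):
            if i != r and A[i][col] != 0:
                f = A[i][col] / A[r][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def characteristic_sequence(N):
    """Decreasing list of Jordan block sizes of a nilpotent matrix N."""
    n = len(N)
    ranks, P = [n], [row[:] for row in N]  # rank(N^0) = n
    for _ in range(n):
        ranks.append(rank(P))
        P = mat_mul(P, N)
    # number of blocks of size >= k is ranks[k-1] - ranks[k]
    ge = [ranks[k - 1] - ranks[k] for k in range(1, n + 1)] + [0]
    sizes = []
    for k in range(1, n + 1):
        sizes += [k] * (ge[k - 1] - ge[k])  # blocks of size exactly k
    return sorted(sizes, reverse=True)

# Operator mapping e_1 -> e_2 -> e_3 -> 0 and e_4, e_5 -> 0.
N = [[0] * 5 for _ in range(5)]
N[1][0] = 1  # e_1 -> e_2
N[2][1] = 1  # e_2 -> e_3
print(characteristic_sequence(N))  # [3, 1, 1]
```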
Let $\ll$ be an $n$-dimensional nilpotent Leibniz algebra and $p$ a non-negative integer ($p<n$).
\begin{defn}\label{def:p-fili}
The Leibniz algebra $\ll$ is called $p$-filiform if $C(\ll)=(n-p,\underbrace{1,\dots,1}_{p})$. If $p=0$, $\ll$ is called null-filiform
and if $p=1$ it is called filiform.
\end{defn}
Therefore, an algebra with the characteristic sequence $(n-2,1,1)$ is called 2-filiform, whereas a nilpotent algebra with nilindex $n-2$ is called quasi-filiform. Note that in the Lie algebras case both definitions coincide.
\begin{defn}\label{def:derivation}
A linear transformation $d$ of a Leibniz algebra $\ll$ is called a derivation of $\ll$ if
$$d([x,y])=[d(x),y]+[x,d(y)] \hbox{ for any } x,y \in \ll.$$
Denote by $Der(\ll)$ the set of all derivations.
\end{defn}
It is clear that the right operator $R_x$ is a derivation for any $x \in \ll$. Derivations of this type are called inner derivations. As in the Lie algebra case, the set of inner derivations forms an ideal of the algebra $Der(\ll).$
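This can also be checked mechanically. The sketch below (our own illustration; the helper names are ours) tests the derivation property $d([x,y])=[d(x),y]+[x,d(y)]$ on all basis pairs and applies it to the matrix of a right operator $R_x$, for the two-dimensional algebra with $[e_1,e_1]=e_2$:

```python
def bracket(gamma, x, y):
    """[x, y] in coordinates; gamma[i][j][k] is the coefficient
    of e_k in [e_i, e_j]."""
    n = len(gamma)
    z = [0.0] * n
    for i in range(n):
        for j in range(n):
            for k in range(n):
                z[k] += x[i] * y[j] * gamma[i][j][k]
    return z

def is_derivation(gamma, d, tol=1e-12):
    """Check d([e_i,e_j]) = [d(e_i),e_j] + [e_i,d(e_j)] on all basis pairs,
    where d is an n x n matrix acting on coordinate vectors."""
    n = len(gamma)
    e = [[1.0 if t == s else 0.0 for t in range(n)] for s in range(n)]
    apply_d = lambda v: [sum(d[i][j] * v[j] for j in range(n)) for i in range(n)]
    for i in range(n):
        for j in range(n):
            lhs = apply_d(bracket(gamma, e[i], e[j]))
            rhs = [a + b for a, b in zip(bracket(gamma, apply_d(e[i]), e[j]),
                                         bracket(gamma, e[i], apply_d(e[j])))]
            if any(abs(l - r) > tol for l, r in zip(lhs, rhs)):
                return False
    return True

def right_operator(gamma, x):
    """Matrix of R_x : y -> [y, x]."""
    n = len(gamma)
    cols = [bracket(gamma, [1.0 if t == s else 0.0 for t in range(n)], x)
            for s in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]

# Two-dimensional Leibniz algebra with [e_1, e_1] = e_2.
gamma = [[[0.0, 1.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]]]
print(is_derivation(gamma, right_operator(gamma, [1.0, 0.0])))  # True
```

An arbitrary matrix, of course, need not pass the test; only linear maps compatible with the bracket do.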
Since our algebra is $\mathbb{Z}$-graded, i.e. $\ll=\oplus_{i \in \mathbb{Z}} V_i$, this gradation induces a gradation of the algebra $Der(\ll)=\oplus_{i \in \mathbb{Z}} W_i$ in the following way:
$$W_i=\{d_i \in Der(\ll): d_i(x) \in V_{i+j} \hbox{ for any } x \in V_j \}.$$
\
For an $n$-dimensional algebra of maximum length it is easy to see that $Der(\ll)=W_{-n} \oplus \dots \oplus W_{n}$ (see \cite{Omirov2}).
For more details see the definition of the cohomology groups for Leibniz algebras introduced in \cite{loday-piras}.
\section{$3$-filiform non-Lie Leibniz algebras of maximum length}
In this section we are going to continue the classification of the $p$-filiform Leibniz algebras of maximum length. The study of the filiform and 2-filiform cases has already been done in \cite{J.Lie.Theory2}, so we continue with the 3-filiform Leibniz algebras case.
The technique used in this section is as follows: we will extend the naturally graded 3-filiform Leibniz algebras by using their natural gradations. In this way, we can distinguish two cases: the naturally graded Lie algebras and the naturally graded non-Lie algebras. The study of the first case was closed in \cite{3-filiform}, so we explain the results obtained for the second family. After that we will work with a homogeneous basis and we will assume that the associated gradation has maximum length. Finally, we will use some programs implemented in the software \textit{Mathematica} (which will be explained below), as well as properties of the gradation and of nilpotency, to arrive at a contradiction or at the classification.
Let us stress the fact that using computer programs is very helpful for achieving the presented classification. Two programs will be used in this section: the program for the Leibniz identity and the program for isomorphisms. The first program checks the Leibniz identity for a Leibniz algebra and was presented in \cite{JSC}. The second one establishes when two algebras are isomorphic; moreover, we have added some subroutines to decide whether two algebras are isomorphic or not when one of them is a uniparametric family, returning the value of the parameter for which they would be isomorphic. The algorithmic method can be found in \cite{mitesis}.
The implementations of these programs are presented in low, fixed dimension. Then we will formulate the generalizations, proving the results by induction for arbitrary fixed dimension. Finally, we point out that the algorithmic methods of these programs are presented with a step-by-step explanation on the following Web site: http://personal.us.es/jrgomez.
\
\subsection{Non split case}
\
In this section we restrict our study to the classification of the 3-filiform Leibniz algebras of maximum length which are extensions of the non-split naturally graded non-Lie Leibniz algebras. The Lie case has been closed in \cite{3-filiform}, where it is shown that there is no non-split 3-filiform Leibniz algebra of maximum length.
First of all, let us recall the classification of the naturally graded 3-filiform Leibniz algebras (\cite{NGp-F}).
\begin{thm}
Let $\ll$ be a complex $n$-dimensional non-split naturally graded $3$-filiform non-Lie Leibniz algebra and $n \geq 7$. Then there exists a basis $\{e_1,e_2,\dots, e_{n-3},f_1,f_2,f_3\}$ of the algebra, such that $\ll$ is isomorphic to
$$L^{1}:\begin{cases}
[e_{i},e_{1}]=e_{i+1}, &1\leq i \leq n-4, \\
[e_{1},f_{1}]=f_{3}, \\
[e_{i},f_{2}]=e_{i+1}, &1\leq i \leq n-4.
\end{cases}$$
\end{thm}
\begin{thm}
Let $\ll$ be a complex $n$-dimensional non-split $3$-filiform non-Lie Leibniz algebra and $n \geq 7$. Then $l(\ll)<n.$
\end{thm}
\begin{dem}
The natural gradation of $L^1$ is $\ll_1\oplus \dots\oplus \ll_{n-3}$, where $\ll_1=<e_1,f_1,f_2>$, $\ll_2=<e_2,f_3>$ and $\ll_i=<e_i>$ for $3\leq i \leq n-3$. We are going to study the length of its extension, which is denoted by $\widetilde{L^1}.$ Let us point out that we call \textit{the extension of the algebra} the natural generalization of the structure constants of the algebra, using the information of its associated natural gradation.
Note that $\{e_2,e_3, \dots,e_{n-3},f_3\}$ belongs to the ideal $R(L^1)$ and $e_{n-3} \in Cent(L^1)$. Moreover $[e_1,f_1]+[f_1,e_1] \in R(L^1)$, so we conclude $f_3 \in R(L^1)$. Finally, by taking the change of basis $e_1^{'}=e_1$, $e_{i+1}^{'}=[e_i^{'},e_1^{'}]$ for $1 \leq i \leq n-4$, $f_1^{'}=f_1$ and $f_2^{'}=f_2$, we can write the law of $\widetilde{L^1}$ as:
$$\begin{cases}
[e_i,e_1]=e_{i+1}, &1\leq i \leq n-4,\\
[e_1,f_1]=f_3+(*)e_3+\dots+(*)e_{n-3},\\
[e_i,f_2]=e_{i+1}+(*)e_{i+2}\dots+(*)e_{n-3}, &1\leq i \leq n-4,\\
[f_i,e_1]=(*)e_3+\dots+(*)e_{n-3}, &1\leq i\leq 2,\\
[f_3,e_1]=(*)e_4+\dots+(*)e_{n-3},\\
[f_i,f_j]=(*)e_3+\dots +(*)e_{n-3}, & 1\leq i,j \leq 2,\\
[f_3,f_i]=(*)e_4+\dots+(*)e_{n-3}, & 1 \leq i \leq 2\\
[e_i,f_1]=(*)e_{i+2}+ \dots+ (*)e_{n-3}, & 2 \leq i \leq n-5,
\end{cases}$$
where the asterisks $(*)$ denote the corresponding coefficients in the products.
A crucial tool in the proof is the construction of a homogeneous basis, whose generators are:
\begin{align*}
\widetilde{x_s}&=e_1+ \sum_{i=2}^{n-3}a_ie_i +\sum_{j=1}^{3}a_{n-3+j}f_j,\\
\widetilde{x_t}&=f_1+ \sum_{i=1}^{n-3}b_ie_i +\sum_{j=2}^{3}b_{n-3+j}f_j,\\
\widetilde{x_u}&=f_2+ \sum_{i=1}^{n-3}c_ie_i +\sum_{j=1, j\neq 2}^{3}c_{n-3+j}f_j.
\end{align*}
Therefore the products of the generators of $\widetilde{L^1}$ can be defined in the new basis as follows:
\begin{align*}
[\widetilde{x_s},\widetilde{x_s}]&=(1+a_{n-1})e_2+(*)e_3+ \dots + (*)e_{n-3}+a_{n-2}f_3,\\
[\widetilde{x_t},\widetilde{x_t}]&=b_1(b_1+b_{n-1})e_2+(*)e_3 + \dots+ (*)e_{n-3}+b_1f_3,\\
[\widetilde{x_u},\widetilde{x_u}]&=c_1(1+c_1)e_2+ (*)e_3+ \dots+ (*)e_{n-3}+c_1 c_{n-2} f_3,\\
[\widetilde{x_s},\widetilde{x_t}]&=(b_1+b_{n-1})e_2+(*)e_3 + \dots+ (*)e_{n-3}+f_3,\\
[\widetilde{x_t},\widetilde{x_s}]&=b_1(1+a_{n-1})e_2+(*)e_3 + \dots+ (*)e_{n-3}+b_1a_{n-2}f_3, \\
[\widetilde{x_s},\widetilde{x_u}]&=(1+c_1)e_2+(*)e_3 + \dots+ (*)e_{n-3}+c_{n-2}f_3, \\
[\widetilde{x_u},\widetilde{x_s}]&= c_1(1+a_{n-1})e_2+(*)e_3 + \dots+ (*)e_{n-3}+c_1a_{n-2}f_3,\\
[\widetilde{x_t},\widetilde{x_u}]&=b_1(1+c_1)e_2+(*)e_3 + \dots+ (*)e_{n-3}+b_1c_{n-2}f_3, \\
[\widetilde{x_u},\widetilde{x_t}]&=c_1(b_1+b_{n-1})e_2+(*)e_3 + \dots+ (*)e_{n-3}+c_1f_3.\\
\end{align*}
Since $\{\widetilde{x_s},\widetilde{x_t},\widetilde{x_u}\}$ are linearly independent, then $\det\left(\begin{array}{ccc}
1 & a_{n-2} & a_{n-1}\\
b_1 & 1 & b_{n-1}\\
c_1 & c_{n-2} & 1
\end{array}\right)\neq 0.$
\
$\bullet$ Case 1: If $1+a_{n-1} \neq 0,$ we have the following subcases:
\
\fbox{Case 1.1:} If $[\widetilde{x_s},\widetilde{x_s}]$ and $[\widetilde{x_s},\widetilde{x_t}]$ are linearly independent, we take the homogeneous basis $y_1=\widetilde{x_s},$ $y_i=[y_{i-1},y_1]$ for $2 \leq i \leq n-3,$ $z_1=\widetilde{x_t},$ $z_2=\widetilde{x_u}$ and $z_3=[\widetilde{x_s},\widetilde{x_t}]=[y_1,z_1]$, where $$[[\underbrace{\widetilde{x_s},\widetilde{x_s}],...,\widetilde{x_s}}_{\hbox{i-times}}]=(1+a_{n-1})^{i-1}e_i+(*)e_{i+1}+ \dots+(*)e_{n-3} \hbox{ for } 3 \leq i \leq n-3,$$
obtaining the gradation: $V_{k_s}\oplus V_{2k_s}\oplus \dots \oplus V_{(n-3)k_s}\oplus V_{k_t}\oplus V_{k_u}\oplus V_{k_s+k_t}.$ Let us assume that the gradation has maximum length; therefore $k_s, k_t, k_u$ are pairwise different.
It is enough to consider the products $[z_2,y_1]$ and $[y_1,z_2]$ to prove that it is not possible that the gradation has maximum length.
Consider $[z_2,y_1]=[\widetilde{x_u},\widetilde{x_s}]=c_1[\widetilde{x_s},\widetilde{x_s}]=c_1y_2$. Since $[z_2,y_1] \in V_{k_s+k_u}$, $y_{2} \in V_{2k_s}$ and $k_s \neq k_u$, then we conclude $c_1=0$.
On the other hand $[y_1,z_2]=(1+c_1)e_2+(*)e_3 + \dots+ (*)e_{n-3}+c_{n-2}f_3=e_2+(*)e_3+\dots+(*)e_{n-3}+\beta_1f_3=Ay_2$ with $A \neq 0$. We also have $[y_1,z_2] \in V_{k_s+k_u}$ and $y_{2} \in V_{2k_s}$, therefore $k_u=k_s$, which is a contradiction with the assumption of maximum length. Hence there is no maximum length algebra in this subcase.
\
\fbox{Case 1.2:} If $[\widetilde{x_s},\widetilde{x_s}]$ and $[\widetilde{x_s},\widetilde{x_t}]$ are linearly dependent, i.e., $(b_1+b_{n-1})a_{n-2}=1+a_{n-1}.$ Note that we can assert that $a_{n-2} \neq 0$ and $b_1+b_{n-1} \neq 0$ from the assumption $1+a_{n-1} \neq 0$.
\
\textbf{Case 1.2.1:} If $[\widetilde{x_s},\widetilde{x_t}]$ and $[\widetilde{x_s},\widetilde{x_u}]$ are linearly independent, we distinguish two possibilities:
\
\emph{\textbf{If $c_{n-2} \neq 0,$}}
let us take the basis composed of the following vectors $y_1=\widetilde{x_s},$ $y_i=[y_{i-1},y_1]$ for $2 \leq i \leq n-3,$ $z_1=\widetilde{x_t},$ $z_2=\widetilde{x_u}$ and
$z_3=[\widetilde{x_s},\widetilde{x_u}]=[y_1,z_2]$, and the maximum length gradation
$V_{k_s}\oplus V_{2k_s}\oplus \dots \oplus V_{(n-3)k_s}\oplus V_{k_t}\oplus V_{k_u}\oplus V_{k_s+k_u}$. A contradiction will be obtained
by computing $[y_1,z_1].$ Since
\begin{align*}
[y_1,z_1]&=\underbrace{(b_1+b_{n-1})}_{\neq 0}e_2+ (*)e_3+ \dots+(*)e_{n-3}+\underbrace{c_{n-2}}_{\neq 0}f_3 \in V_{k_s+k_t},\\
y_2=[\widetilde{x}_s,\widetilde{x}_s]&=\underbrace{(1+a_{n-1})}_{\neq 0}e_2+(*)e_3+ \dots + (*)e_{n-3}+\underbrace{a_{n-2}}_{\neq 0}f_3 \in V_{2k_s},\\
z_3=[\widetilde{x_s},\widetilde{x_u}]&=\underbrace{(1+c_1)}_{\neq 0}e_2+(*)e_3 + \dots+ (*)e_{n-3}+\underbrace{c_{n-2}}_{\neq 0}f_3 \in V_{k_s+k_u},
\end{align*}
then either $[y_1,z_1]=Ay_2$ with $A \neq 0$ or $[y_1,z_1]=Bz_3$ with $B \neq 0$. By properties of the gradation, we achieve $k_s=k_t$ in the first case and $k_u=k_t$ in the other case. Both equalities contradict the maximum length of $\widetilde{L^1}.$
\
\emph{\textbf{If $c_{n-2}=0,$}} we can assume that $1+c_1 \neq 0.$ Otherwise we would have:
\begin{align*}
[\widetilde{x_s},\widetilde{x_s}]&=(1+a_{n-1})e_2+(*)e_3+ \dots + (*)e_{n-3}+a_{n-2}f_3,\\
[\widetilde{x_s},\widetilde{x_t}]&=(b_1+b_{n-1})e_2+(*)e_3 + \dots+ (*)e_{n-3}+f_3= \alpha[\widetilde{x_s},\widetilde{x_s}],\\
[\widetilde{x_s},\widetilde{x_u}]&=(*)e_3 + \dots+ (*)e_{n-3},
\end{align*}
and the other products would be linear combinations of these. Therefore, it would not be possible to generate the element $y_2$ or $z_3$ in the new basis. Hence, by taking the new basis $y_1=\widetilde{x_s},$
$y_2=[\widetilde{x_s},\widetilde{x_u}],$ $y_i=[y_{i-1},y_1]$ for $3 \leq i \leq n-3,$ $z_1=\widetilde{x_t},$ $z_2=\widetilde{x_u}$ and
$z_3=[\widetilde{x_s},\widetilde{x_t}]=[y_1,z_1]$, we get a contradiction as above, by calculating the product $[y_1,y_1]$.
Since $$[y_1,y_1]=\underbrace{(1+a_{n-1})}_{\neq 0}e_2+(*)e_3+ \dots + (*)e_{n-3}+\underbrace{a_{n-2}}_{\neq 0}f_3,$$
there are two possibilities, either
$[y_1,y_1]=Ay_2$ with $A\neq 0$ or $[y_1,y_1]=Bz_3$ with $B\neq 0$. Analogously to the above case, both equalities contradict the hypothesis of maximum length of $\widetilde{L^1}.$
\
\textbf{Case 1.2.2:} If $[\widetilde{x_s},\widetilde{x_t}]$ and $[\widetilde{x_s},\widetilde{x_u}]$ are linearly dependent it is not possible to
construct a homogeneous basis, because all the products of the generators can be written as follows:
\begin{align*}
[\widetilde{x_t},\widetilde{x_t}]&=b_1[\widetilde{x_s},\widetilde{x_t}]=b_1\alpha[\widetilde{x_s},\widetilde{x_s}],\\
[\widetilde{x_u},\widetilde{x_u}]&=c_1[\widetilde{x_s},\widetilde{x_u}]=c_1\beta[\widetilde{x_s},\widetilde{x_s}],\\
[\widetilde{x_s},\widetilde{x_t}]&=\alpha[\widetilde{x_s},\widetilde{x_s}],\\
[\widetilde{x_t},\widetilde{x_s}]&=b_1[\widetilde{x_s},\widetilde{x_s}], \\
[\widetilde{x_s},\widetilde{x_u}]&=\beta[\widetilde{x_s},\widetilde{x_s}], \\
[\widetilde{x_u},\widetilde{x_s}]&= c_1[\widetilde{x_s},\widetilde{x_s}],\\
[\widetilde{x_t},\widetilde{x_u}]&=b_1[\widetilde{x_s},\widetilde{x_u}]=b_1\beta[\widetilde{x_s},\widetilde{x_s}], \\
[\widetilde{x_u},\widetilde{x_t}]&=c_1[\widetilde{x_s},\widetilde{x_t}]=c_1\alpha[\widetilde{x_s},\widetilde{x_s}],
\end{align*}
so that all of them are linearly dependent on $[\widetilde{x_s},\widetilde{x_s}].$
\
$\bullet$ Case 2: If $1+a_{n-1}=0.$\\
\
\fbox{Case 2.1:} If $1+c_1 \neq 0$.\\
\
\textbf{Case 2.1.1:} If $[\widetilde{x_s},\widetilde{x_t}]$ and $[\widetilde{x_s},\widetilde{x_u}]$ are linearly independent, we take the new basis $y_1=\widetilde{x_s}$, $z_1=\widetilde{x_t}$, $z_2=\widetilde{x_u},$ $y_2=[\widetilde{x_s},\widetilde{x_u}]=[y_1,z_2],$ $y_i=[y_{i-1},z_2]$ for $3\leq i \leq n-3,$ and $z_3=[\widetilde{x_s},\widetilde{x_t}]=[y_1,z_1],$ where
$$y_{i+1}=[[\widetilde{x_s},\underbrace{\widetilde{x_u}],...,\widetilde{x_u}}_{\hbox{i-times}}]=(1+c_1)^i e_{i+1}+\dots+(*)e_{n-3}$$
with $2 \leq i \leq n-4,$ giving rise to the following gradation: $V_{k_s}\oplus V_{k_s+k_u}\oplus V_{k_s+2k_u}\oplus \dots \oplus V_{k_s+(n-4)k_u}\oplus V_{k_t}\oplus V_{k_u} \oplus V_{k_t+k_s}$ of maximum length.
In order to prove that there is no maximum length algebra in this subcase, it is enough to study the values of $k_u,$ $k_t$ and $k_s$ such
that the gradation considered previously has maximum length. From properties of the gradation it is easy to check that the gradation is connected if and only if
$k_u=\pm 1.$ Without loss of generality we can assume $k_u=1.$ Let us study the values of $k_t$ and $k_s$.
\begin{itemize}
\item[$ $] \textbf{Case a: If $k_s>0.$}
\begin{figure}[ht]
\centering{
\subfigure[Subcase a.1]{\includegraphics[width=0.5\textwidth]{Suba1}}
\hspace{0.05\textwidth}
\subfigure[Subcase a.2]{\includegraphics[width=0.4\textwidth]{Suba2}}
\hspace{0.05\textwidth}
}
\end{figure}
\begin{figure}[ht]
\centering{
\subfigure[Subcase a.3]{\includegraphics[width=0.5\textwidth]{Suba3}}
}
\end{figure}
\textbf{Subcase a.1: If $k_s=2.$} Under this hypothesis we have $k_t=n-1,$ $V_{k_t+k_s}=V_{n+1}$ and $V_{n}=<0>.$ Hence the considered gradation is not connected.
\textbf{Subcase a.2: If $k_s=3.$} Then $k_t=2$ and $V_{k_t+k_s}=V_5=<z_3,y_3>$, so the length of the gradation is not maximum.
\textbf{Subcase a.3: If $k_s>3.$} Then there is some $V_p$ with $2 \leq p \leq k_s$ such that $V_p=<0>,$ which contradicts the connectedness of the gradation.
\item[$ $] \textbf{Case b: If $k_s<0.$}
\begin{figure}[ht]
\centering{
\subfigure[ Subcase b.1 and Subcase b.2]{\includegraphics[width=0.8\textwidth]{Subb12}}
}
\end{figure}
\textbf{Subcase b.1: $k_s=4-n$}. Under this hypothesis and by connectedness we have either $k_t=3-n$ or $k_t=2$. If $k_t=3-n$ then
$V_{k_t+k_s}=V_{7-2n}.$ Since $n\geq 7$, the subspace $V_{7-2n}$ vanishes and the gradation is not connected. On the other hand, if $k_t=2$ then
$V_{k_t+k_s}=V_{6-n}=<y_3,z_3>$, which contradicts the assumption of maximum length.
\textbf{Subcase b.2: $k_s \neq 4-n$}. This subcase never gives a maximum length gradation because the subspace $V_0$ always vanishes ($z_1\notin V_0$).
\end{itemize}
\textbf{Case 2.1.2}: If $[\widetilde{x_s},\widetilde{x_t}]$ and $[\widetilde{x_s},\widetilde{x_u}]$ are linearly dependent we have:
$$(1):\begin{cases}
[\widetilde{x_s},\widetilde{x_s}]=(*)e_3+ \dots+(*)e_{n-3}+a_{n-2}f_3\\
[\widetilde{x_s},\widetilde{x_t}]=(b_1+b_{n-1})e_2+(*)e_3+ \dots+(*)e_{n-3}+f_3=\alpha[\widetilde{x_s},\widetilde{x_u}],\\
[\widetilde{x_s},\widetilde{x_u}]=\underbrace{(1+c_1)}_{\neq 0}e_2+(*)e_3+ \dots+(*)e_{n-3}+c_{n-2}f_3. \\
\end{cases}$$
and the other products are linearly dependent on these. Therefore we can assume $a_{n-2} \neq 0,$ since otherwise it would not be possible to get a basis because
$$\begin{array}{cc}
\det \left(\begin{array}{cc}
b_1+b_{n-1}& 1 \\
1+c_1 & c_{n-2}
\end{array}\right)= 0\ \ \hbox{ and }&
\det \left(\begin{array}{ccc}
1 & 0 & -1 \\
b_1 & 1 & b_{n-1}\\
c_1 & c_{n-2} & 1
\end{array}\right)= 0,
\end{array}$$
which implies that $\widetilde{x_s},$ $\widetilde{x_t}$ and $\widetilde{x_u}$ are linearly dependent.
Let us take the new basis $y_1=\widetilde{x_s},$ $z_1=\widetilde{x_t},$ $z_2=\widetilde{x_u},$ $y_2=[\widetilde{x_s},\widetilde{x_u}]=[y_1,z_2],$
$y_i=[y_{i-1},z_2]$ for $3 \leq i \leq n-3$ and $z_3=[y_1,y_1],$ where
$$y_{i}=[[\widetilde{x_s},\underbrace{\widetilde{x_u}],\dots,\widetilde{x_u}}_{\hbox{(i-1)-times}}]=(1+c_1)^{i-1} e_{i}+(*)e_{i+1}+\dots+(*)e_{n-3}$$
with $3 \leq i \leq n-3.$ Its associated maximum length gradation is: $V_{k_s}\oplus V_{k_s+k_u}\oplus V_{k_s+2k_u}\oplus \dots \oplus V_{k_s+(n-4)k_u}\oplus V_{k_t}\oplus V_{k_u}
\oplus V_{2k_s}$. Since $[\widetilde{x_s},\widetilde{x_t}]$ is linearly dependent
on $[\widetilde{x_s},\widetilde{x_u}]$ and from the properties of the gradation, we conclude that $k_t=k_u,$ which contradicts the assumption of
maximum length.
\fbox{Case 2.2:} If $1+c_1=0,$ it is not possible to construct a basis because all the products of the generators can be written as follows:
\begin{align*}
[\widetilde{x_s},\widetilde{x_s}]&=(*)e_3+ \dots+(*)e_{n-3}+a_{n-2}f_3\\
[\widetilde{x_t},\widetilde{x_t}]&=b_1[\widetilde{x_s},\widetilde{x_t}]=(*)e_3+ \dots+(*)e_{n-3}+b_1a_{n-2}f_3,\\
[\widetilde{x_u},\widetilde{x_u}]&=c_1[\widetilde{x_s},\widetilde{x_u}]=(*)e_3+ \dots+(*)e_{n-3}+c_1c_{n-2}f_3,\\
[\widetilde{x_s},\widetilde{x_t}]&=(*)e_3+ \dots+(*)e_{n-3}+f_3,\\
[\widetilde{x_t},\widetilde{x_s}]&=b_1[\widetilde{x_s},\widetilde{x_s}]=(*)e_3+ \dots+(*)e_{n-3}+b_1a_{n-2}f_3, \\
[\widetilde{x_s},\widetilde{x_u}]&=(*)e_3+ \dots+(*)e_{n-3}+c_{n-2}f_3, \\
[\widetilde{x_u},\widetilde{x_s}]&= c_1[\widetilde{x_s},\widetilde{x_s}]=(*)e_3+ \dots+(*)e_{n-3}+c_1a_{n-2}f_3,\\
[\widetilde{x_t},\widetilde{x_u}]&=b_1[\widetilde{x_s},\widetilde{x_u}]=(*)e_3+ \dots+(*)e_{n-3}+b_1c_{n-2}f_3, \\
[\widetilde{x_u},\widetilde{x_t}]&=c_1[\widetilde{x_s},\widetilde{x_t}]=(*)e_3+ \dots+(*)e_{n-3}+c_1f_3,\\
\end{align*}
so the element $y_2$ cannot be generated in this case. This completes the proof.
\end{dem}
\subsection{Split case}
\
\
This section is devoted to the study of the 3-filiform Leibniz algebras of maximum length whose naturally graded algebras are split. Furthermore, we will focus our attention on the non-standard families. The definitions of standard and non-standard algebras are the following:
\begin{defn}
Let $\ll$ be a split maximum length algebra with decomposition $\ll=N_1 \oplus N_2 \oplus \dots \oplus N_k$. The algebra $\ll$ is called standard if $N_1$, $N_2$, $\dots$ and $N_k$ are algebras of maximum length. Otherwise the algebra $\ll$ is called non-standard.
\end{defn}
\begin{exam}
The list of standard 3-filiform Leibniz algebras of maximum length consists of the following algebras: ma\-xi\-mum length null-filiform Leibniz algebras $\oplus \CC^3$, maximum length filiform Leibniz algebras $\oplus \CC^2$ and maximum length 2-filiform Leibniz algebras $\oplus \CC$. Note that these algebras have already been studied in \cite{Ayupov} and \cite{J.Lie.Theory2}.
\end{exam}
Due to the previous example, we reduce our study to the non-standard families, i.e., we study the extensions of the naturally graded filiform non-Lie Leibniz algebras $\oplus \CC^2$ and of the naturally graded 2-filiform non-Lie Leibniz algebras $\oplus \CC$. It should be remarked that the null-filiform case will not be studied because its extension always gives a standard algebra. The Lie case has already been treated in \cite{3-filiform}, where the classification is presented in the following theorem:
\begin{thm}
Let $\ll$ be an $(n+1)$-dimensional non-standard 3-filiform Leibniz algebra whose associated naturally graded algebra is a Lie algebra. Then $n$ is odd and the algebra $\ll$ is isomorphic to the maximum length Lie algebra:
$$N: \begin{cases}
[e_{i-1},e_0]=e_i, \quad 2 \leq i \leq n-2,\\
[e_{n-3},e_1]=-e_{n-1},\\
[e_{n-4},e_2]=e_{n-1},\\
[e_{i},e_{n-2-i}]=(-1)^{i-1}e_{n-1}, \quad 3 \leq i \leq \lfloor\frac{n-3}{2}\rfloor,\\
[f_1,e_0]=e_{n-1}.
\end{cases}$$
\end{thm}
\fbox{2-Filiform case}
\
Cabezas, Camacho and Rodr\'{\i}guez gave the classification of the naturally graded 2-filiform non-Lie Leibniz algebras in \cite{J.Lie.Theory2}. They proved that, up to isomorphism, there are exactly two such algebras which are not split. These algebras are defined by the following multiplication tables:
\begin{align*}
KF_4:&\begin{cases}
[e_i,e_1]=e_{i+1}, \quad 1 \leq i \leq n-3,\\
[e_1,e_{n-1}]=e_n+\alpha_3e_3+ \dots + \alpha_{n-2}e_{n-2},\\
[e_{n-1},e_{n-1}]=\beta_3e_3+\beta_4e_4+ \dots + \beta_{n-2}e_{n-2},\\
[e_i,e_{n-1}]=\beta_{i,i+2}e_{i+2}+\beta_{i,i+3}e_{i+3}+ \dots + \beta_{i,n-2}e_{n-2}, \quad 2 \leq i \leq n-4,\\
[e_n,e_{n-1}]=\gamma_4e_4+ \dots + \gamma_{n-2}e_{n-2}.\\
\end{cases}\\[2mm]
KF_5:&\begin{cases}
[e_i,e_1]=e_{i+1}, \quad 1 \leq i \leq n-3,\\
[e_1,e_{n-1}]=e_2+e_n+\alpha_3e_3+ \dots + \alpha_{n-2}e_{n-2},\\
[e_{n-1},e_{n-1}]=\beta_3e_3+\beta_4e_4+ \dots + \beta_{n-2}e_{n-2},\\
[e_i,e_{n-1}]=e_{i+1}+\beta_{i,i+2}e_{i+2}+\beta_{i,i+3}e_{i+3}+ \dots + \beta_{i,n-2}e_{n-2}, \quad 2 \leq i \leq n-4,\\
[e_n,e_{n-1}]=\gamma_4e_4+ \dots + \gamma_{n-2}e_{n-2}.\\
\end{cases}
\end{align*}
Due to the above classification we obtain the following theorem:
\begin{thm}
Let $\ll$ be an $(n+1)$-dimensional 3-filiform non-Lie Leibniz algebra of maximum length whose associated naturally graded algebra is $KF_4\oplus \mathbb{C}.$ Then $\ll$ is isomorphic to either $M$ or one of the algebras of the family $M^{1,\alpha}$:
$$\begin{array}{cc}
M: \begin{cases}
[y_i,y_1]=y_{i+1}, \quad 1 \leq i \leq n-3,\\
[y_1,y_{n-1}]=y_n,\\
[z_1,y_{n-1}]=y_{n-2},
\end{cases}&
M^{1,\alpha}:\begin{cases}
[y_i,y_1]=y_{i+1}, \quad 1 \leq i \leq n-3,\\
[y_1,y_{n-1}]=y_n,\\
[y_{n-1},z_{1}]=y_{n-2},\\
[z_1,y_{n-1}]=\alpha y_{n-2}, \quad \alpha\in \mathbb{C}.\\
\end{cases}
\end{array}$$
\end{thm}
\begin{dem}
The extension of the algebra $KF_4\oplus \mathbb{C}$, via the natural gradation, is:
$$\widetilde{\ll}:\begin{cases}
[e_i,e_1]=e_{i+1}+(*)e_{i+2}+\dots+(*)e_{n-2}, &1\leq i \leq n-3,\\
[e_1,e_{n-1}]=e_n+(*)e_3+\dots+(*)e_{n-2},\\
[e_{n-1},e_{n-1}]=(*)e_{3}+\dots+(*)e_{n-2},\\
[e_i,e_{n-1}]=(*)e_{i+2}+\dots+(*)e_{n-2}, &2\leq i\leq n-4,\\
[e_n,e_{n-1}]=(*)e_4+\dots+(*)e_{n-2},\\
[e_i,f_1]=(*)e_{i+2}+\dots +(*)e_{n-2}, &1 \leq i \leq n-4,\\
[e_{n-1},f_1]=(*)e_3+\dots+(*)e_{n-2},\\
[e_n,f_1]=(*)e_4+ \dots + (*)e_{n-2},\\
[f_1,e_i]=(*)e_{i+2}+\dots+(*)e_{n-2}, &1 \leq i \leq n-4, \\
[f_1,e_{n-1}]=(*)e_3+\dots+(*)e_{n-2},\\
[f_1,e_n]=(*)e_4+ \dots + (*)e_{n-2},\\
[f_1,f_1]=(*)e_{3}+ \dots+ (*)e_{n-2},
\end{cases}$$
where the asterisks $(*)$ denote the corresponding coefficients in the products.
We are going to get the homogeneous basis by considering the generators:
\begin{align*}
\widetilde{x_s}&=e_1+ \sum_{i=2}^n a_ie_i+b_1f_1,\\
\widetilde{x_t}&=e_{n-1}+ \sum_{i=1, i\neq n-1}^n A_ie_i+B_1f_1,\\
\widetilde{x_u}&=f_1+ \sum_{i=1}^n \alpha_ie_i.
\end{align*}
Let us consider the following products, since they will be very useful in the rest of the proof:
\begin{align*}
[\widetilde{x_s},\widetilde{x_s}]&=e_2+(*)e_3+\dots+ (*)e_{n-2}+a_{n-1}e_n,\\
[\widetilde{x_s},\widetilde{x_t}]&=A_1e_2+(*)e_3+ \dots + (*)e_{n-2}+e_n,\\
[[\underbrace{\widetilde{x_s},\widetilde{x_s}], \dots,\widetilde{x_s}}_{\hbox{i-times}}]&=e_{i}+(*)e_{i+1}+ \dots+(*)e_{n-2}, \hbox{ with } 3 \leq i \leq n-2.
\end{align*}
Let us take the homogeneous basis $y_1=\widetilde{x_s},$ $y_i=[y_{i-1},y_1]$ for $2 \leq i \leq n-2,$ $y_{n-1}=\widetilde{x_t},$
$y_n=[y_1,y_{n-1}],$ $z_1=\widetilde{x_u}$ and the associated maximum length gradation $V_{k_s}\oplus V_{2k_s} \oplus \dots \oplus
V_{(n-2)k_s} \oplus V_{k_t} \oplus V_{k_t+k_s} \oplus V_{k_u}.$ This gradation is connected if and only if $k_s=\pm 1.$ Without loss of generality
we can assume $k_s=1$ (the case $k_s=-1$ is analogous). We continue the proof by studying the possible values that the indices
$k_t$ and $k_u$ can take in order to obtain a maximum length gradation.
\
$\bullet$ Case 1: If $k_t>0$, there are the following possibilities:
\begin{figure}[ht]
\centering{
\subfigure[Subcase 1.1]{\includegraphics[width=0.3\textwidth]{Subc11}}
\hspace{0.05\textwidth}
\subfigure[Subcase 1.2]{\includegraphics[width=0.4\textwidth]{Subc12}}
\hspace{0.05\textwidth}
}
\end{figure}
\begin{figure}[ht]
\centering{
\subfigure[Subcase 1.3]{\includegraphics[width=0.5\textwidth]{Subc13}}
}
\end{figure}
\textbf{Subcase 1.1: If $k_t =n-1,$} then from the connectedness of the gradation we obtain $k_u=0$ or $k_u=n+1.$ But $k_u=0$ is not possible because
$z_1 \in V_{k_u}$ and $z_1$ is a generator. If $k_u=n+1$ then $[y_i,z_1] \in V_{n+1+i}=<0>$, $[z_1,y_i] \in V_{n+1+i}=<0>$ for $1 \leq i \leq n-2$. Moreover
$[z_1,y_{n-1}]$ and $[y_{n-1},z_1]$ belong to the subspace $V_{2n}=<0>$. On the other hand $[z_1,y_n],$ $[y_n,z_1] \in V_{2n+1}=<0>$ and
$[z_1,z_1] \in V_{2n+2}=<0>.$ Then $z_1 \in Cent(\widetilde{\ll}),$ giving rise to a standard algebra.
\textbf{Subcase 1.2: If $k_t=n.$} By reasoning similar to the previous subcase, it is clear that $[y_i,z_1]=[z_1,y_i]=0$ for $3 \leq i \leq n$ because
those products belong to $V_{n-1+i}=<0>.$ Moreover $[z_1,y_1],[y_1,z_1] \in V_n=<y_{n-1}>$, but this is not possible since $y_{n-1}$ is a generator,
so that $y_{n-1} \in \widetilde{\ll}\setminus \widetilde{\ll}^2$, while $[z_1,y_1],[y_1,z_1] \in \widetilde{\ll}^2.$
Since $y_2 \in R(\widetilde{\ll})$ and from the Leibniz identity, we affirm that $[z_1,y_2]=[y_2,z_1]=0.$ Finally, since $[z_1,z_1] \in V_{2n-2}=<0>$, we conclude that $z_1 \in Cent(\widetilde{\ll})$, so the obtained algebra is standard.
\textbf{Subcase 1.3: If $k_t>n,$} the gradation is not connected because either $V_{n-1}=<0>$ or $V_{n}=<0>$.\\
$\bullet$ Case 2: If $k_t<0$, we distinguish:
\begin{figure}[ht]
\centering{
\subfigure[Subcase 2.1]{\includegraphics[width=0.4\textwidth]{Subc21}}
\hspace{0.05\textwidth}
\subfigure[Subcase 2.2]{\includegraphics[width=0.5\textwidth]{Subc22}}
\hspace{0.05\textwidth}
}
\end{figure}
\textbf{Subcase 2.1: If $k_t=-1,$} then from the connectedness of the gradation either $k_u=-2$ or $k_u=n-1$. In the first case we are going to prove that the obtained algebra is standard because $z_1 \in Cent(\widetilde{\ll})$. From the properties of the gradation we have $[y_i,z_1], [z_1,y_i] \in V_{i-2}=<y_{i-2}>$ for $2 \leq i \leq n-2,$ but from the law of $\widetilde{\ll}$ we know $[y_i,z_1]=\alpha_1e_{i+1}+(*)e_{i+2}+ \dots+(*)e_{n-2}$ and $[z_1,y_i]=(*)e_{i+2}+\dots+(*)e_{n-2}.$ Therefore we conclude $[y_i,z_1]=0$ and $[z_1,y_i]=0$ for $2 \leq i \leq n-2.$ On the other hand $[y_1,z_1], [z_1,y_1] \in V_{-1}=<y_{n-1}>,$ which is not possible because $y_{n-1}$ is a generator of $\widetilde{\ll}.$ Finally, since $[z_1,y_{n-1}], [y_{n-1},z_1] \in V_{-3}=<0>$ and $y_n \in Cent(\widetilde{\ll}),$ we have proved that $z_1 \in Cent(\widetilde{\ll})$, so the algebra is standard.
\
If $k_u=n-1,$ then $[z_1,y_i]=[y_i,z_1]=0$ for $1 \leq i \leq n-2$ because these products belong to $V_{n-1+i}=<0>$, and $[z_1,z_1]=0$ because $[z_1,z_1]\in V_{2n-2}=<0>.$
Moreover since $y_n \in Cent(\widetilde{\ll})$ we assert that $[z_1,y_n]=[y_n,z_1]=0.$ On the other hand, from properties of the gradation we can write $[z_1,y_{n-1}]=\alpha y_{n-2}$ and
$[y_{n-1},z_1]=\beta y_{n-2}.$ Since $\{y_2,y_3, \dots, y_{n-2}\} \subset R(\widetilde{\ll})$, $y_n \in Cent(\widetilde{\ll})$ and using the above calculations, it is enough to
compute the products $[y_{n-2},y_1],$ $[y_{n-1},y_1]$ and $[y_{i},y_{n-1}],$ for $2 \leq i \leq n-1,$ to know the law of $\widetilde{\ll}$ in the homogeneous basis.
From properties of the gradation it is clear that $[y_{n-2},y_1] \in V_{n-1}=<z_1>.$ But from the definition of the descending central sequence we have $[y_{n-2},y_1] \in \widetilde{\ll}^2$ and $z_1 \in \widetilde{\ll}\setminus \widetilde{\ll}^2$. Hence we get $[y_{n-2},y_1]=0.$ By the same arguments, it can be concluded that $[y_1,y_{n-2}]=0.$ Furthermore $[y_{n-1},y_{n-1}]=0$ since $[y_{n-1},y_{n-1}] \in V_{-2}=<0>.$
Besides, it can be proved that $[y_3,y_{n-1}] \in V_2=<y_2>$ and from the law of $\widetilde{\ll}$ we can write $[y_3,y_{n-1}]=A_1e_4+(*)e_5+ \dots+(*)e_{n-2}=A_1y_4.$ So we conclude $A_1=0$ and $[y_3,y_{n-1}]=0,$ because if $A_1 \neq 0,$ then $V_2\ni [y_3,y_{n-1}]=A_1y_4 \in V_4$, which contradicts the assumption of maximum length of the gradation. Analogously we get $[y_{n-1},y_3]=0.$ In addition, as
$[y_{n-1},y_1]=A_1e_2+(*)e_3+\dots+(*)e_{n-2}+A_1a_{n-1}e_n= (*)e_3+\dots+(*)e_{n-2}$ and $[y_{n-1},y_1] \in V_{0}=<y_n>,$ it follows that $[y_{n-1},y_1]=0.$
In summary, the obtained law of the maximum length algebra is:
$$\widetilde{\ll}:\begin{cases}
[y_i,y_1]=y_{i+1}, \quad 1 \leq i \leq n-3,\\
[y_1,y_{n-1}]=y_n,\\
[y_i,y_{n-1}]=\gamma_iy_{i-1}, \quad 2 \leq i \leq n-2,\\
[z_1,y_{n-1}]=\alpha y_{n-2},\\
[y_{n-1},z_1]=\beta y_{n-2}.
\end{cases}$$
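As an illustration of how the Leibniz identity forces these coefficients to vanish, the following hand computation (a sketch consistent with the output of the computer programs used in this work; we assume the right Leibniz identity in the form $[[x,y],z]=[[x,z],y]+[x,[y,z]]$ and use the fact, established above, that $y_n\in Cent(\widetilde{\ll})$) shows directly that the $\gamma_i$ vanish:

```latex
% Applying the identity to the triple (y_1, y_1, y_{n-1}):
\begin{align*}
\gamma_2\, y_1 &= [y_2,y_{n-1}] = [[y_1,y_1],y_{n-1}]
  = [[y_1,y_{n-1}],y_1] + [y_1,[y_1,y_{n-1}]]
  = [y_n,y_1] + [y_1,y_n] = 0,
\intertext{so $\gamma_2=0$; applying it to $(y_i,y_1,y_{n-1})$ with $2 \leq i \leq n-3$:}
\gamma_{i+1}\, y_i &= [y_{i+1},y_{n-1}] = [[y_i,y_1],y_{n-1}]
  = [[y_i,y_{n-1}],y_1] + [y_i,[y_1,y_{n-1}]]
  = \gamma_i\,[y_{i-1},y_1] + [y_i,y_n] = \gamma_i\, y_i.
\end{align*}
```

Hence $\gamma_{i+1}=\gamma_i$ and therefore $\gamma_i=0$ for all $2 \leq i \leq n-2.$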
Finally, by using the program of the Leibniz identity it is easy to prove that $\gamma_i=0$ for $2 \leq i \leq n-2.$ Further, by considering the dimension of $R(\ll),$ we can assume $\beta=0$ or $\beta=1$. On the one hand, if $\beta=0$ it is necessary that $\alpha \neq 0$ and by a trivial change of basis we can take $\alpha=1$, which gives rise to $M$. On the other hand, if $\beta=1$, by using the program of the isomorphism we obtain the family $M^{1,\alpha}$, with $\alpha \in \mathbb{C}.$
\
\textbf{Subcase 2.2: If $k_t \neq -1,$} we only attain standard algebras or not connected gradations, by similar arguments as in previous cases.
\end{dem}
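The Leibniz identity underlying these computations can also be checked mechanically. The following Python sketch is a minimal illustrative stand-in for the program of the Leibniz identity used in this work (it is not the authors' code; the dimension $n=6$ and the basis ordering are choices made here for illustration): it encodes a multiplication table by its structure constants and verifies the right Leibniz identity $[[x,y],z]=[[x,z],y]+[x,[y,z]]$ on all basis triples of the algebra $M$.

```python
import itertools

def mul(table, u, v):
    """Bilinear extension of a bracket given on basis elements.
    Vectors are encoded as {basis_index: coefficient} dicts; pairs of
    basis elements absent from `table` multiply to zero."""
    out = {}
    for i, ci in u.items():
        for j, cj in v.items():
            for k, ck in table.get((i, j), {}).items():
                out[k] = out.get(k, 0) + ci * cj * ck
    return {k: c for k, c in out.items() if c != 0}

def leibniz_violations(dim, table):
    """All basis triples violating [[x,y],z] = [[x,z],y] + [x,[y,z]]."""
    bad = []
    for x, y, z in itertools.product(range(dim), repeat=3):
        ex, ey, ez = {x: 1}, {y: 1}, {z: 1}
        lhs = mul(table, mul(table, ex, ey), ez)
        rhs = mul(table, mul(table, ex, ez), ey)
        for k, c in mul(table, ex, mul(table, ey, ez)).items():
            rhs[k] = rhs.get(k, 0) + c
        rhs = {k: c for k, c in rhs.items() if c != 0}
        if lhs != rhs:
            bad.append((x, y, z))
    return bad

# The algebra M for n = 6 (dimension n + 1 = 7).
# Basis ordering: indices 0..5 <-> y_1..y_6, index 6 <-> z_1.
n = 6
M = {}
for i in range(n - 3):              # [y_i, y_1] = y_{i+1}, 1 <= i <= n - 3
    M[(i, 0)] = {i + 1: 1}
M[(0, n - 2)] = {n - 1: 1}          # [y_1, y_{n-1}] = y_n
M[(n, n - 2)] = {n - 3: 1}          # [z_1, y_{n-1}] = y_{n-2}

assert leibniz_violations(n + 1, M) == []   # M is a Leibniz algebra
```

Adjoining a spurious product, for instance $[y_4,y_1]=y_5$, makes `leibniz_violations` return a nonempty list; obstructions of exactly this kind are exploited repeatedly in the proofs above.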
\begin{thm}
Let $\ll$ be an $(n+1)$-dimensional 3-filiform non-Lie Leibniz algebra whose associated naturally graded algebra is $KF_5\oplus \mathbb{C}.$ Then $l(\ll)\leq n.$
\end{thm}
\begin{dem}
The proof is achieved by reasoning similar to that of the previous theorem: one takes a homogeneous basis and the
associated maximum length gradation, uses the properties of the gradation, and applies the above programs.
\end{dem}
\fbox{Filiform case}
\
Ayupov and Omirov in \cite{Ayupov} obtained the classification of naturally graded filiform non-Lie Leibniz algebras in arbitrary dimension. They proved that, up to isomorphism, there are three such algebras for each dimension $n$. We present only one of them, because the others are either a split algebra or a Lie algebra.
$$NGF_1: \begin{cases}
[e_1,e_1]=e_3,\\
[e_i,e_1]=e_{i+1}, \quad 2 \leq i \leq n-1.
\end{cases}$$
Extending the algebra $NGF_1 \oplus \mathbb{C}^2$, via the natural gradation, the following result is attained:
\begin{thm}
Let $\ll$ be a $(n+2)$-dimensional 3-filiform non-Lie Leibniz algebra of maximum length, whose associated naturally graded algebra is
$NGF_1\oplus \mathbb{C}^2,$ with $n \geq 8.$ Then $l(\ll)\leq n+1.$
\end{thm}
\begin{dem}
As in previous proofs, the first step is to consider the extension of the algebra $NGF_1 \oplus \mathbb{C}^2$, by using its natural gradation, and to get a
homogeneous basis derived from the generators
\begin{align*}
\widetilde{x_s}&=e_1+ \sum_{i=2}^n a_ie_i+b_1f_1+b_2f_2,\\
\widetilde{x_t}&=e_{2}+ \sum_{i=1, i\neq 2}^{n}
A_ie_i+B_1f_1+B_2f_2,\\
\widetilde{x_u}&=f_1+ \sum_{i=1}^n \alpha_ie_i+\beta_2f_2,\\
\widetilde{x_v}&=f_2+ \sum_{i=1}^n \gamma_ie_i+\mu_1f_1.
\end{align*}
The main products of these generators are:
$$(2): \begin{cases}
[\widetilde{x_s},\widetilde{x_s}]&=(1+a_2)e_3+ (*)e_4 + \dots+ (*)e_{n-2},\\
[\widetilde{x_t},\widetilde{x_s}]&=(1+A_1)e_3+ (*)e_4 + \dots+ (*)e_{n-2},\\
[\widetilde{x_u},\widetilde{x_s}]&=(\alpha_1+\alpha_2)e_3+ (*)e_4 + \dots+ (*)e_{n-2},\\
[\widetilde{x_v},\widetilde{x_s}]&=(\gamma_1+\gamma_2)e_3+ (*)e_4 + \dots+ (*)e_{n-2},\\
\end{cases}$$
because the other products are linearly dependent on these.
The next step is to assume that the gradation associated with this basis has maximum length. Let us see the details.
\
$\bullet$ Case 1: If $1+a_2 \neq 0,$ we take the basis $y_1=\widetilde{x_s},$ $y_2=\widetilde{x_t},$ $y_3=[y_1,y_1],$ $y_i=[y_{i-1},y_1]$ for
$4 \leq i \leq n$, $z_1=\widetilde{x_u}$ and $z_2=\widetilde{x_v}$, and the associated gradation $V_{k_s}\oplus V_{2k_s} \oplus \dots \oplus
V_{(n-1)k_s} \oplus V_{k_t} \oplus V_{k_u} \oplus V_{k_v},$ whose length is maximum. Note that $$(3):[\underbrace{[\widetilde{x_s},\widetilde{x_s}], \dots,\widetilde{x_s}}_{\hbox{i-times}}]=(1+a_2)e_{i+1}+ (*)e_{i+2}+ \dots+ (*)e_{n-2}, \ 2 \leq i \leq n-1.$$
From (2) and (3) we conclude that $[y_2,y_1]$ is linearly dependent on $y_3.$ In addition,
from the properties of the gradation, $[y_2,y_1] \in V_{k_t+k_s}$ and $y_3 \in V_{2k_s}.$ Finally, by the hypothesis of maximum length we know that $k_s \neq k_t$. These facts imply $[y_2,y_1]=0$, hence $A_1=-1$ (see (2)). On the other hand $[y_1,y_2]=A_1(1+a_2)e_3+ (*)e_4+ \dots+ (*)e_{n-2}=-y_3,$ $[y_1,y_2] \in V_{k_s+k_t}$ and $y_3 \in V_{2k_s}$, so $V_{k_s+k_t}=V_{2k_s}$ and hence $k_s=k_t,$
which is not possible. We conclude that there is no maximum length algebra in this case.
\
$\bullet$ Case 2: If $1+a_2=0,$ we have to distinguish the following cases:
\
\textbf{Subcase 2.1: If} $A_1=0,$ we take the new basis $y_1=\widetilde{x_s},$ $y_2=\widetilde{x_t},$ $y_i=[y_{i-1},y_1]$ for $3 \leq i \leq n,$ $z_1=\widetilde{x_u}$ and $z_2=\widetilde{x_v}.$ The associated maximum length gradation is: $V_{k_s}\oplus V_{k_t} \oplus V_{k_t+k_s}\oplus V_{k_t+2k_s} \oplus \dots \oplus V_{k_t+(n-2)k_s} \oplus V_{k_u} \oplus V_{k_v}.$
We now consider all the possible products in the new basis, obtaining the following law:
$$\begin{cases}
[y_1,y_1]=y_3,\\
[y_i,y_1]=y_{i+1}, \qquad 3 \leq i \leq n-1,\\
[y_i,y_{2}]=P_iy_{i+4}, \quad 2 \leq i \leq n-4,\\
[y_{i},z_{1}]=Q_iy_{i+2}, \quad 2 \leq i \leq n-2,\\
[y_i,z_{2}]=R_iy_{i+3}, \quad 2 \leq i \leq n-3.
\end{cases}$$
It is easy to see, by induction on $i$ and by applying the Leibniz identity to
$[[y_i,y_1],y_2],$ $[[y_i,y_1],z_1]$ and $[[y_i,y_1],z_2]$, that $P_i=Q_i=R_i=0$ for $i\geq 3,$ respectively. Moreover, by applying the program of the Leibniz identity, we prove that
$P_2=Q_2=R_2=0$ (for more details see the fo\-llo\-wing Web site: http://personal.us.es/jrgomez). Therefore the obtained algebra is standard.\\
\textbf{Subcase 2.2:} If $A_1 \neq 0 \wedge A_1 \neq -1,$ then we can take the same previous homogeneous basis. We get a contradiction with the assumption of maximum length by considering the product $[y_2,y_2].$ From the law of $\widetilde{\ll}$ we have $[y_2,y_2]=A_1(A_1+1)e_3 +(*)e_4+\dots +(*)e_{n-2}=A_1y_3.$ Since we had assumed that $A_1 \neq 0$, $[y_2,y_2] \in V_{2k_t}$ and $y_3 \in V_{k_t+2k_s}$, then it can be concluded that $k_s=k_t$, which is a contradiction. \\
\textbf{Subcase 2.3:} If $A_1 \neq 0 \wedge A_1=-1,$ since $\widetilde{x_u}$ and $\widetilde{x_v}$ play a symmetric role, we can assume
that $\alpha_1+\alpha_2 \neq 0.$ Otherwise it would not be possible to construct a homogeneous basis generated by
$\widetilde{x_s}$, $\widetilde{x_t},$ $\widetilde{x_u}$ and $\widetilde{x_v}$. Therefore we take the basis $y_1=\widetilde{x_s},$ $y_2=\widetilde{x_u},$ $y_i=[y_{i-1},y_1]$ for $3 \leq i \leq n,$ $z_1=\widetilde{x_t}$ and $z_2=\widetilde{x_v}$.
If $\alpha_1 \neq 0,$ then $[y_2,y_2]=[\widetilde{x_u},\widetilde{x_u}]=\alpha_1(\alpha_1+\alpha_2)e_3+(*)e_4+ \dots+(*)e_{n-2}=\tau y_3 \neq 0.$ Since $[y_2,y_2]\in V_{2k_u},$ $y_3 \in V_{k_t+k_s}$ and $\tau \neq 0$, $k_t=k_s$ is achieved, which is not possible because it contradicts the maximum length. Therefore $\alpha_1=0$, $\alpha_2 \neq 0$ and $[\widetilde{x_u},\widetilde{x_t}]=-\alpha_2e_3+(*)e_4+ \dots+(*)e_{n-2},$ obtaining, in a similar way, the same contradiction.
\end{dem}
\section{Applications of maximum length.}
As we mentioned in the introduction, the algebras of maximum length allow us to study some cohomological properties easily, such as the space of derivations and the first cohomology group (see \cite{loday-piras}).
We again rely on computational support in order to tackle these cohomological pro\-per\-ties. We will use a third program, the program of derivations, which allows us to determine a basis of the space of derivations of an algebra of maximum length. From here, the cohomology study can be easily completed by using similar arguments as in \cite{Ayupov}, \cite{Dz1}--\cite{Reyes1}, \cite{Omirov2}, \cite{Ve}. As in the other programs, the implementation is presented in low and fixed dimension (see \cite{mitesis} for more details). Then we formulate the generalizations, proving the results by induction for arbitrary fixed dimension.
\begin{pr}
\
\begin{itemize}
\item $\dim(\mathcal{D}er(N))=3\displaystyle\frac{n-1}{2}+7.$\\
\item $\dim(\mathcal{D}er(M))=n+6.$\\
\item $\dim(\mathcal{D}er(M^{1,\alpha}))=n+5.$
\end{itemize}
\end{pr}
\begin{dem}
The proof is carried out by using the program of derivations, whose calculations are presented with a step-by-step explanation at the following Web site:
http://personal.us.es/jrgomez.
\end{dem}
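The derivation space itself reduces to linear algebra: the condition $D([x,y])=[D(x),y]+[x,D(y)]$ is linear in the matrix entries of $D$, so $\mathcal{D}er(\ll)$ is the null space of an explicit linear system. The following Python sketch (again an illustrative stand-in for the program of derivations, not the authors' code; the choice $n=6$ and the basis ordering are assumptions made here) sets up this system for the algebra $M$ and checks that every right multiplication operator $R_v\colon x\mapsto [x,v]$ is a derivation, which is exactly the content of the right Leibniz identity.

```python
import numpy as np

# Structure constants of M for n = 6 (dimension n + 1 = 7).
# Basis ordering: indices 0..5 <-> y_1..y_6, index 6 <-> z_1.
n, dim = 6, 7
c = np.zeros((dim, dim, dim))     # c[p, a, b] = coefficient of e_p in [e_a, e_b]
for i in range(n - 3):            # [y_i, y_1] = y_{i+1}, 1 <= i <= n - 3
    c[i + 1, i, 0] = 1
c[n - 1, 0, n - 2] = 1            # [y_1, y_{n-1}] = y_n
c[n - 3, n, n - 2] = 1            # [z_1, y_{n-1}] = y_{n-2}

# A derivation D, with D(e_q) = sum_p d[p, q] e_p, satisfies one linear
# equation per triple (a, b, p); collect them into a matrix acting on vec(d).
rows = []
for a in range(dim):
    for b in range(dim):
        for p in range(dim):
            row = np.zeros((dim, dim))
            row[p, :] += c[:, a, b]     # D([e_a, e_b]) term
            row[:, a] -= c[p, :, b]     # [D(e_a), e_b] term
            row[:, b] -= c[p, a, :]     # [e_a, D(e_b)] term
            rows.append(row.ravel())
A = np.array(rows)

# dim Der(M) is the nullity of A.
nullity = dim * dim - np.linalg.matrix_rank(A)
assert nullity > 0

# The right Leibniz identity says each right multiplication R_v is a derivation.
for v in range(dim):
    R_v = c[:, :, v]                # (R_v)_{p, q} = c[p, q, v]
    assert np.allclose(A @ R_v.ravel(), 0)
```

The same scaffolding, with the structure constants replaced, applies to $N$ and $M^{1,\alpha}$; the proposition's dimension formulas can then be confronted with the computed nullity for each fixed dimension.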
\begin{cor}
\
\begin{itemize}
\item $\dim(\mathcal{H}^1(N,N))=\displaystyle\frac{n+19}{2}$.\\
\item $\dim(\mathcal{H}^1(M,M))=n+4$.\\
\item $\dim(\mathcal{H}^1(M^{1,\alpha},M^{1,\alpha}))=n+2$.
\end{itemize}
\end{cor}
\begin{dem}
The proof is carried out by using the characterization $\mathcal{H}^{1}(\ll,\ll)=\mathcal{D}er(\ll)/\mathcal{I}nn(\ll),$ where $\mathcal{I}nn(\ll)$ denotes the set of inner derivations of $\ll.$
\end{dem}
\
\section{Introduction}
\label{sec:intro}
Feedback control is ubiquitous in classical engineering. However, its extension to the quantum realm has been challenging due to the unique character of the quantum measurement, which requires coupling of the observed quantum system to a classical measurement apparatus. Consequently, measurement-based quantum control has to deal with the fundamental effect of stochastic measurement back action on the quantum system, along with the need to amplify quantum signals up to macroscopic levels and high latency of classical controllers in comparison to typical quantum dynamic time scales~\cite{Brif.NJP.12.075008.2010, Wiseman.Milburn.book.2014}. An alternative approach that has attracted significant interest in the last decade is coherent quantum feedback control (CQFC)~\cite{Zhang.James.CSB.57.2200.2012, Gough.PTRSA.370.5241.2012, Combes.arXiv.1611.00375.2016}, which considers networks where the quantum system of interest (called the \emph{plant}) is controlled via coupling (either direct or, more often, through intermediate quantum fields) to an auxiliary quantum system (called the \emph{controller}). CQFC schemes utilize coherent quantum signals circulating between the plant and controller, thus avoiding the need for signal amplification and associated excess noise. Also, both plant and controller can evolve on the same time scale, which eliminates the latency issues. Due to these advantages, CQFC makes it possible to engineer quantum networks with new and unique characteristics~\cite{Gough.PTRSA.370.5241.2012, Combes.arXiv.1611.00375.2016, Jacobs.NJP.16.073036.2014, Yamamoto.PRX.4.041029.2014}.
The theoretical foundation of CQFC is a powerful framework based on input-output theory, which is used for modeling networks of open quantum systems connected by electromagnetic fields~\cite{Hudson.CMP.93.301.1984, Gardiner.Collett.PRA.31.3761.1985, Gardiner.PRL.70.2269.1993, Wiseman.Milburn.PRA.49.4110.1994} (see also~\cite{Zhang.James.CSB.57.2200.2012, Gough.PTRSA.370.5241.2012, Combes.arXiv.1611.00375.2016} for reviews). Moreover, recent developments, including the SLH formalism~\cite{Gough.James.IEEE-TAC.54.2530.2007, Gough.James.CMP.287.1109.2009, Gough.PRA.81.023804.2010}, the quantum hardware description language (QHDL)~\cite{Tezak.PTRSA.370.5270.2012}, and the QNET software package~\cite{QNET.url}, have added important capabilities for, respectively, modular analysis, specification, and simulation of such quantum optical networks. Together, the existing theoretical tools enable efficient and automated design and modeling of CQFC networks.
Proposed and experimentally demonstrated applications of CQFC include the development of autonomous devices for preparation, manipulation, and stabilization of quantum states~\cite{Kerckhoff.PRL.105.040502.2010, Kerckhoff.NJP.13.055022.2011, Hamerly.PRL.109.173602.2012, Hamerly.PRA.87.013815.2013, Zhang.IEEE-TAC.57.1997.2012, Liu.JPB.48.105501.2015}, disturbance rejection by a dynamic compensator~\cite{Mabuchi.PRA.78.032323.2008}, linear-optics implementation of a modular quantum memory~\cite{Nurdin.Gough.QIC.15.1017.2015}, generation of optical squeezing~\cite{Gough.Wildfeuer.PRA.80.042107.2009, Iida.IEEE-TAC.57.2045.2012, Crisafulli.OE.21.18371.2013, Nemet.Parkins.PRA.94.023809.2016}, generation of quantum entanglement between optical field modes~\cite{Yan.PRA.84.062304.2011, Zhou.SciRep.5.11132.2015, Shi.Nurdin.QIP.14.337.2015, Shi.Nurdin.arXiv.1502.01070.2015, Shi.Nurdin.QIC.15.1141.2015, Shi.Nurdin.arXiv.1508.04584.2015}, coherent estimation of open quantum systems~\cite{Miao.PRA.92.012115.2015, Roy.arXiv.1502.03729.2016}, and ultra-low-power optical processing elements for optical switching~\cite{Mabuchi.PRA.80.045802.2009, Mabuchi.APL.98.193109.2011, Santori.PRAppl.1.054005.2014} and analog computing~\cite{Pavlichin.Mabuchi.NJP.16.105017.2014, Tezak.Mabuchi.EPJ-QT.2.10.2015}. In addition to tabletop bulk-optics implementations, CQFC networks have been also implemented using integrated silicon photonics~\cite{Sarovar.EPJ-QT.3.14.2016} and superconducting microwave devices~\cite{Kerckhoff.PRL.109.153602.2012, Kerckhoff.PRX.3.021013.2013}.
Squeezed states of light~\cite{Collett.Gardiner.PRA.30.1386.1984, Collett.Walls.PRA.32.2887.1985, Wu.JOSAB.4.1465.1987, Lvovsky.chapter.2015} have found numerous applications in quantum metrology and quantum information sciences, including interferometric detection of gravitational waves~\cite{Caves.PRD.23.1693.1981, Grote.PRL.110.181101.2013}, continuous-variable quantum key distribution (CV-QKD)~\cite{Furrer.PRL.109.100502.2012, Furrer.PRA.90.042325.2014, Madsen.NatCommun.3.1083.2012, Jacobsen.arXiv.1408.4566.2014, Eberle.NJP.15.053049.2013, Gehring.NatCommun.6.8795.2015}, generation of Gaussian entanglement~\cite{Eberle.NJP.15.053049.2013, Gehring.NatCommun.6.8795.2015, Ast.OL.41.5094.2016}, and quantum computing with continuous-variable cluster states~\cite{Yukawa.PRA.78.012301.2008, Gu.PRA.79.062318.2009, Weedbrook.RMP.84.621.2012, Menicucci.PRL.112.120504.2014}. Different applications require squeezed states with different properties. For example, detectable gravitational waves are expected to have frequencies in the range from $10$~Hz to $10$~kHz, and, consequently, quadrature squeezed states used to increase the measurement sensitivity in interferometric detectors should have a high degree of squeezing at sideband frequencies in this range. On the other hand, in CV-QKD the secure key rate is proportional to the bandwidth of squeezing, and hence it would be useful to generate states with squeezing bandwidth extending to $100$~MHz or even higher. It would be also of interest to extend the maximum of squeezing to high sideband frequencies.
In recent years, there have been remarkable advances in the generation of squeezed states~\cite{Takeno.OE.15.4321.2007, Vahlbruch.PRL.97.011101.2006, Eberle.PRL.104.251102.2010, Mehmet.OE.19.25763.2011, Khalaidovski.CQG.29.075001.2012, Mehmet.PRA.81.013814.2010, Ast.OL.37.2367.2012, Ast.OE.21.13572.2013, Baune.OE.23.16035.2015, Yan.PRA.85.040305.2012, Kaiser.Optica.3.362.2016, Dutt.PRAppl.3.044005.2015, Dutt.OL.41.223.2016}, however, achieving significant control over the squeezing spectrum still remains an ongoing effort. In 2009, Gough and Wildfeuer~\cite{Gough.Wildfeuer.PRA.80.042107.2009} proposed to enhance squeezing in the output field of a degenerate optical parametric oscillator (OPO) by incorporating the OPO into a CQFC network, where a part of the output beam is split off and then fed back into the OPO. Iida et al.~\cite{Iida.IEEE-TAC.57.2045.2012} reported an experimental demonstration of this scheme, while N\'emet and Parkins~\cite{Nemet.Parkins.PRA.94.023809.2016} proposed to modify it by including a time delay into the feedback loop. Another significant modification of this scheme was proposed and experimentally demonstrated by Crisafulli~et~al.~\cite{Crisafulli.OE.21.18371.2013}, who included a second OPO to act as the controller, with the plant OPO and the controller OPO coupled by two fields propagating between them in opposite directions. Due to the presence of quantum-limited gains in both arms of the feedback loop, this CQFC network has a very rich dynamics. In particular, by tuning the network's parameters it is possible to significantly vary the squeezing spectrum of its output field, for example, shift the maximum of squeezing from the resonance to a high-frequency sideband~\cite{Crisafulli.OE.21.18371.2013}.
The full range of performance of the CQFC network of two coupled OPOs as a squeezed-light source, however, still remains to be explored. In this paper, we study the limits of the network's performance by performing two types of optimizations: (1) maximizing the degree of squeezing at a chosen sideband frequency and (2) maximizing the average degree of squeezing over a chosen bandwidth; in both cases, the searches are executed over the space of network parameters with experimentally motivated bounds. To maximize the chances of finding a globally optimal solution, we use the PyGMO package of global optimization algorithms~\cite{pygmo.url} and employ a hybrid strategy which executes in parallel eight searches (using seven different global algorithms). Before each optimization is completed, the searches are repeated multiple times, and intermediate solutions are exchanged between them after each repetition. This strategy enabled us to discover that the CQFC network, when optimally operated, is capable of achieving a remarkably high degree of squeezing at sideband frequencies and bandwidths as high as $100$~MHz, with a very effective utilization of the available pump power. We also find that the obtained optimal solutions are quite robust to small variations of phase parameters.
\section{Background}
\label{sec:back}
The derivations in this section largely follow those in Refs.~\cite{Gough.Wildfeuer.PRA.80.042107.2009, Crisafulli.OE.21.18371.2013}, with some additional details and modifications.
\subsection{Input-output model of a quantum optical network}
\label{sec:IO-model}
Consider a network of coupled linear and bilinear optical elements such as mirrors, beam-splitters, phase-shifters, lasers, and degenerate OPOs. The quantum theory of such a network considers quantized cavity field modes which are coupled through cavity mirrors to external (input and output) quantum fields~\cite{Gardiner.Collett.PRA.31.3761.1985, Gardiner.PRL.70.2269.1993, Wiseman.Milburn.PRA.49.4110.1994}. Let $n$ be the number of the network's input ports (equal to the number of output ports) and $m$ be the number of cavities (in this model, we assume that each cavity supports one internal field mode). Let $\mathbf{a}$, $\mathbf{a}_{\mathrm{in}}$, and $\mathbf{a}_{\mathrm{out}}$ denote vectors of boson annihilation operators for, respectively, the cavity modes, the input fields, and the output fields:
\begin{equation}
\label{eq:mode-vectors}
\mathbf{a} = \begin{bmatrix} a_1 \\ \vdots \\ a_m \end{bmatrix} , \quad
\mathbf{a}_{\mathrm{in}} = \begin{bmatrix} a_{\mathrm{in},1} \\ \vdots \\ a_{\mathrm{in},n} \end{bmatrix} , \quad
\mathbf{a}_{\mathrm{out}} = \begin{bmatrix} a_{\mathrm{out},1} \\ \vdots \\ a_{\mathrm{out},n} \end{bmatrix} .
\end{equation}
Assuming that all input fields are in the vacuum state, the network is fully described by the $(\mathbf{S}, \mathbf{L}, H)$ model (also called the SLH model)~\cite{Gough.James.IEEE-TAC.54.2530.2007, Gough.James.CMP.287.1109.2009, Gough.PRA.81.023804.2010}, which includes the $n \times n$ matrix $\mathbf{S}$ that describes the scattering of external fields, the $n$-dimensional vector $\mathbf{L}$ that describes the coupling of cavity modes and external fields, and the Hamiltonian $H$ that describes the intracavity dynamics. For the model considered here, elements $\{S_{i j}\}$ of $\mathbf{S}$ are c-numbers, while $H$ and elements $\{L_i\}$ of $\mathbf{L}$ are operators on the combined Hilbert space of all cavity modes in the network. The Heisenberg equations of motion (also known as quantum Langevin equations) for the cavity mode operators $\{a_{\ell}(t)\}$ are ($\hbar = 1$)
\begin{equation}
\label{eq:HEOM-1}
\frac{d a_{\ell}}{d t} = -i [a_{\ell},H] + \mathcal{L}_L [a_{\ell}] + \Gamma_{\ell} ,
\quad \ell = 1, \ldots, m .
\end{equation}
Here, $\mathcal{L}_L$ is the Lindblad superoperator:
\begin{equation}
\label{eq:Lindblad-superoperator}
\mathcal{L}_L [a_{\ell}] = \sum_{i = 1}^n \left( L_i^{\dag} a_{\ell} L_i
- \frac{1}{2} L_i^{\dag} L_i a_{\ell} - \frac{1}{2} a_{\ell} L_i^{\dag} L_i \right) ,
\end{equation}
and $\Gamma_{\ell}$ is the noise operator:
\begin{equation}
\label{eq:noise-operator}
\Gamma_{\ell} = \mathbf{a}_{\mathrm{in}}^\dag \mathbf{S}^\dag [a_{\ell},\mathbf{L}]
+ [\mathbf{L}^\dag , a_{\ell}] \mathbf{S} \mathbf{a}_{\mathrm{in}} ,
\end{equation}
where $\mathbf{a}_{\mathrm{in}}^\dag = [a_{\mathrm{in},1}^\dag , \ldots , a_{\mathrm{in},n}^\dag]$ and $\mathbf{L}^\dag = [L_1^\dag , \ldots , L_n^\dag]$ are row vectors of respective Hermitian conjugate operators. The generalized boundary condition for the network is
\begin{equation}
\label{eq:boundary-condition}
\mathbf{a}_{\mathrm{out}} = \mathbf{S} \mathbf{a}_{\mathrm{in}} + \mathbf{L} .
\end{equation}
For the type of networks that we consider, elements of $\mathbf{L}$ are linear in annihilation operators of the cavity modes, i.e.,
\begin{equation}
\label{eq:L-vector}
\mathbf{L} = \mathbf{K} \mathbf{a} ,
\end{equation}
where $\mathbf{K}$ is an $n \times m$ complex matrix with elements $\{K_{i \ell} = [ L_i, a_{\ell}^\dag ]\}$,
and the Hamiltonian has the bilinear form:
\begin{equation}
\label{eq:Ham-2}
H = \mathbf{a}^\dag \mathbf{\Omega} \mathbf{a}
+ {\textstyle\frac{i}{2}} \mathbf{a}^\dag \mathbf{W} \mathbf{a}^\ddag
- {\textstyle\frac{i}{2}} \mathbf{a}^{\mathsf{T}} \mathbf{W}^\dag \mathbf{a} ,
\end{equation}
where $\mathbf{a}^\dag = [a_1^\dag , \ldots , a_m^\dag]$ and $\mathbf{a}^\ddag = \mathbf{a}^{\dag \mathsf{T}}$ are, respectively, row and column vectors of boson creation operators for the cavity modes, $\mathbf{\Omega}$ is an $m \times m$ Hermitian matrix, and $\mathbf{W}$ is an $m \times m$ complex matrix. With such $\mathbf{L}$ and $H$, the Heisenberg equations of motion~\eqref{eq:HEOM-1} take the form:
\begin{equation}
\label{eq:HEOM-2}
\frac{d \mathbf{a}}{d t} = \mathbf{V} \mathbf{a} + \mathbf{W} \mathbf{a}^\ddag
+ \mathbf{Y} \mathbf{a}_{\mathrm{in}} ,
\end{equation}
where $\mathbf{V} = - \frac{1}{2} \mathbf{K}^\dag \mathbf{K} - i \mathbf{\Omega}$ is an $m \times m$ complex matrix and $\mathbf{Y} = -\mathbf{K}^\dag \mathbf{S}$ is an $m \times n$ complex matrix.
To obtain the transfer-matrix function from input to output fields, we seek the solution of Eq.~\eqref{eq:HEOM-2} in the frequency domain. Using the Fourier transform, we define:
\begin{subequations}
\label{eq:FT}
\begin{align}
& b(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} d \omega \, b(\omega) e^{-i \omega t} , \\
& b^\dag(t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} d \omega \, b^\dag(-\omega) e^{-i \omega t} ,
\end{align}
\end{subequations}
where $b(t)$ stands for any element of $\mathbf{a}(t)$, $\mathbf{a}_{\mathrm{in}}(t)$, and $\mathbf{a}_{\mathrm{out}}(t)$. The field operators are in the interaction frame, and therefore $\omega$ is the sideband frequency (relative to the carrier frequency). We also use the double-length column vectors of the form:
\begin{equation}
\label{eq:b-breve}
\breve{\mathbf{b}}(\omega) = \begin{bmatrix} \mathbf{b}(\omega) \\ \mathbf{b}^\ddag (-\omega)
\end{bmatrix} ,
\end{equation}
where $\mathbf{b}(\omega)$ stands for either of $\mathbf{a}(\omega)$, $\mathbf{a}_{\mathrm{in}}(\omega)$, and $\mathbf{a}_{\mathrm{out}}(\omega)$. With this notation, Eq.~\eqref{eq:HEOM-2} together with its Hermitian conjugate can be transformed into one matrix equation and solved for $\breve{\mathbf{a}}(\omega)$ in the frequency domain:
\begin{equation}
\label{eq:HEOM-FD}
\breve{\mathbf{a}}(\omega) = (\breve{\mathbf{A}} + i\omega \mathbf{I}_{2 m})^{-1}
\breve{\mathbf{K}}^\dag \breve{\mathbf{S}}\, \breve{\mathbf{a}}_{\mathrm{in}}(\omega) .
\end{equation}
Here, $\mathbf{I}_{2 m}$ is the $2 m \times 2 m$ identity matrix, $\breve{\mathbf{A}} = \Delta(\mathbf{V},\mathbf{W})$, $\breve{\mathbf{K}} = \Delta(\mathbf{K},\mathbf{0})$, $\breve{\mathbf{S}} = \Delta(\mathbf{S},\mathbf{0})$, and we use the notation:
$\Delta(\mathbf{A},\mathbf{B}) = \begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{B}^\ast & \mathbf{A}^\ast
\end{bmatrix}$.
Analogously, the boundary condition of Eq.~\eqref{eq:boundary-condition} together with its Hermitian conjugate can be transformed into one matrix equation in the frequency domain:
\begin{equation}
\label{eq:boundary-condition-FD}
\breve{\mathbf{a}}_{\mathrm{out}}(\omega) = \breve{\mathbf{S}} \breve{\mathbf{a}}_{\mathrm{in}}(\omega)
+ \breve{\mathbf{K}} \breve{\mathbf{a}}(\omega) .
\end{equation}
In Eqs.~\eqref{eq:HEOM-FD} and \eqref{eq:boundary-condition-FD}, $\breve{\mathbf{a}}(\omega)$ is a $2 m$-dimensional vector, $\breve{\mathbf{a}}_{\mathrm{in}}(\omega)$ and $\breve{\mathbf{a}}_{\mathrm{out}}(\omega)$ are $2 n$-dimensional vectors, $\breve{\mathbf{A}}$ is a $2 m \times 2 m$ matrix, $\breve{\mathbf{K}}$ is a $2 n \times 2 m$ matrix, and $\breve{\mathbf{S}}$ is a $2 n \times 2 n$ matrix. By substituting Eq.~\eqref{eq:HEOM-FD} into Eq.~\eqref{eq:boundary-condition-FD}, one obtains the quantum input-output relations in the matrix form:
\begin{equation}
\label{eq:IO-FD}
\breve{\mathbf{a}}_{\mathrm{out}}(\omega) = \breve{\mathbf{Z}}(\omega) \breve{\mathbf{a}}_{\mathrm{in}}(\omega) ,
\end{equation}
where
\begin{equation}
\label{eq:TF-1}
\breve{\mathbf{Z}}(\omega) = \left[ \mathbf{I}_{2 n}
+ \breve{\mathbf{K}} (\breve{\mathbf{A}} + i\omega \mathbf{I}_{2 m})^{-1} \breve{\mathbf{K}}^\dag \right]
\breve{\mathbf{S}}
\end{equation}
is the network's transfer-matrix function. The $2 n \times 2n$ matrix $\breve{\mathbf{Z}}(\omega)$ can be decomposed into the block form:
\begin{equation}
\label{eq:TF-2}
\breve{\mathbf{Z}}(\omega) = \begin{bmatrix} \mathbf{Z}^- (\omega) & \mathbf{Z}^+ (\omega) \\
{\mathbf{Z}^+ (-\omega)}^\ast & {\mathbf{Z}^- (-\omega)}^\ast \end{bmatrix} ,
\end{equation}
where $\mathbf{Z}^- (\omega)$ and $\mathbf{Z}^+ (\omega)$ are $n \times n$ matrices. Correspondingly, the input-output relations of Eq.~\eqref{eq:IO-FD} can be expressed for each of the output fields ($i = 1,\ldots,n$) as:
\begin{subequations}
\label{eq:IO-FD-1}
\begin{align}
a_{\mathrm{out},i}(\omega) = \sum_{j=1}^n \big[ & Z_{i j}^-(\omega) a_{\mathrm{in},j}(\omega) \nonumber \\
& + Z_{i j}^+(\omega) a_{\mathrm{in},j}^\dag(-\omega) \big], \\
a_{\mathrm{out},i}^\dag(-\omega) = \sum_{j=1}^n \big[ & {Z_{i j}^+(-\omega)}^\ast a_{\mathrm{in},j}(\omega) \nonumber \\
& + {Z_{i j}^-(-\omega)}^\ast a_{\mathrm{in},j}^\dag(-\omega) \big].
\end{align}
\end{subequations}
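The construction of Eqs.~\eqref{eq:HEOM-FD}--\eqref{eq:TF-2} is straightforward to evaluate numerically. The following Python sketch (a minimal illustration using NumPy; the function names \texttt{delta} and \texttt{transfer\_matrix} are ours and not part of any package referenced here) assembles the transfer-matrix function $\breve{\mathbf{Z}}(\omega)$ of Eq.~\eqref{eq:TF-1} from the matrices $\mathbf{S}$, $\mathbf{K}$, $\mathbf{\Omega}$, and $\mathbf{W}$:

```python
import numpy as np

def delta(A, B):
    """Doubled-up matrix Delta(A, B) = [[A, B], [B^*, A^*]]."""
    return np.block([[A, B], [np.conj(B), np.conj(A)]])

def transfer_matrix(omega, S, K, Omega, W):
    """Transfer-matrix function of Eq. (TF-1).

    S: n x n scattering matrix; K: n x m coupling matrix;
    Omega: m x m Hermitian matrix; W: m x m pump matrix.
    Returns the 2n x 2n matrix in the block form of Eq. (TF-2).
    """
    n, m = K.shape
    V = -0.5 * (K.conj().T @ K) - 1j * Omega       # drift matrix V
    A_breve = delta(V, W)                          # breve A = Delta(V, W)
    K_breve = delta(K, np.zeros_like(K))           # breve K = Delta(K, 0)
    S_breve = delta(S, np.zeros_like(S))           # breve S = Delta(S, 0)
    resolvent = np.linalg.inv(A_breve + 1j * omega * np.eye(2 * m))
    return (np.eye(2 * n) + K_breve @ resolvent @ K_breve.conj().T) @ S_breve
```

For a passive network ($\mathbf{W} = 0$, $\mathbf{\Omega} = 0$), the transfer matrix is unitary at every sideband frequency, which provides a convenient numerical check of the implementation.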
\subsection{Squeezing spectrum}
\label{sec:SS}
Consider the quadrature of the $i$th output field in time and frequency domains:
\begin{subequations}
\label{eq:X-quadrature}
\begin{align}
& X_i (t,\theta) = a_{\mathrm{out},i}(t) e^{-i \theta} + a_{\mathrm{out},i}^\dag(t) e^{i \theta} , \\
& X_i (\omega,\theta) = a_{\mathrm{out},i}(\omega) e^{-i \theta} + a_{\mathrm{out},i}^\dag(-\omega) e^{i \theta} ,
\end{align}
\end{subequations}
where $\theta$ is the homodyne phase. The power spectral density of the quadrature's quantum noise (commonly referred to as the \emph{squeezing spectrum}) is~\cite{Collett.Gardiner.PRA.30.1386.1984, Collett.Walls.PRA.32.2887.1985}:
\begin{equation}
\label{eq:spectrum-1}
\mathcal{P}_i(\omega,\theta) = 1 + \int_{-\infty}^{\infty} d \omega'
\langle {:\! X_i (\omega,\theta) , X_i (\omega',\theta)\! :} \rangle ,
\end{equation}
where $:\,:$ denotes the normal ordering of boson operators and $\langle x , y \rangle = \langle x y \rangle - \langle x \rangle \langle y \rangle$. Since all input fields are in the vacuum state, $\langle X_i (\omega,\theta) \rangle = \langle X_i (\omega',\theta) \rangle = 0$, and one obtains:
\begin{align}
\mathcal{P}_i(\omega,\theta) = & \: 1 + \mathcal{N}_i(\omega) + \mathcal{N}_i(-\omega) \nonumber \\
& + \mathcal{M}_i(\omega) e^{-2 i \theta} + {\mathcal{M}_i(\omega)}^\ast e^{2 i \theta} ,
\label{eq:spectrum-2}
\end{align}
where
\begin{subequations}
\label{eq:NM-spectrum-1}
\begin{align}
& \mathcal{N}_i(\omega) = \int_{-\infty}^{\infty} d \omega'
\langle a_{\mathrm{out},i}^\dag(-\omega') a_{\mathrm{out},i}(\omega) \rangle , \\
& \mathcal{M}_i(\omega) = \int_{-\infty}^{\infty} d \omega'
\langle a_{\mathrm{out},i}(\omega) a_{\mathrm{out},i}(\omega') \rangle .
\end{align}
\end{subequations}
By substituting Eqs.~\eqref{eq:IO-FD-1} into Eqs.~\eqref{eq:NM-spectrum-1} and evaluating expectation values for vacuum input fields, one obtains:
\begin{subequations}
\label{eq:NM-spectrum-2}
\begin{align}
& \mathcal{N}_i(\omega) = \sum_{j=1}^n \left| Z_{i j}^+(\omega) \right|^2 , \\
& \mathcal{M}_i(\omega) = \sum_{j=1}^n Z_{i j}^-(\omega) Z_{i j}^+(-\omega) .
\end{align}
\end{subequations}
In this work, we are only concerned with squeezing properties of the field at one of the output ports. We will designate this port as corresponding to $i = 1$ and denote the squeezing spectrum of this output field as $\mathcal{P}(\omega,\theta) = \mathcal{P}_1(\omega,\theta)$. In squeezing generation, the figure of merit is the quantum noise change relative to the vacuum level, measured in decibels, and since $\mathcal{P}_{\mathrm{vac}}(\omega,\theta) = 1$, the corresponding spectral quantity is
\begin{equation}
\label{eq:Q}
\mathcal{Q}(\omega,\theta) = 10 \log_{10} \mathcal{P}(\omega,\theta) .
\end{equation}
Negative values of $\mathcal{Q}$ correspond to quantum noise reduction below the vacuum level (i.e., squeezing of the quadrature uncertainty). The maximum degree of squeezing corresponds to the minimum value of $\mathcal{Q}$. The maximum and minimum of $\mathcal{P}(\omega,\theta)$ as a function of $\theta$,
\begin{equation}
\label{eq:Ppm}
\mathcal{P}^+(\omega) = \max_{\theta} \mathcal{P}(\omega,\theta) , \quad
\mathcal{P}^-(\omega) = \min_{\theta} \mathcal{P}(\omega,\theta) ,
\end{equation}
are power spectral densities of the quantum noise in anti-squeezed and squeezed quadrature, respectively. Analogously to Eq.~\eqref{eq:Q}, logarithmic spectral measures of anti-squeezing and squeezing for the two quadratures are defined as $\mathcal{Q}^{\pm}(\omega) = 10 \log_{10} \mathcal{P}^{\pm}(\omega)$, respectively.
Expressing $\mathcal{M}(\omega) = |\mathcal{M}(\omega)| e^{i\theta_{\mathcal{M}}(\omega)}$ (we omit the subscript $i = 1$ for simplicity) and using Eq.~\eqref{eq:spectrum-2}, it is easy to find:
\begin{equation}
\mathcal{P}^{\pm}(\omega) = 1 + \mathcal{N}(\omega) + \mathcal{N}(-\omega)
\pm 2 |\mathcal{M}(\omega)| ,
\end{equation}
with anti-squeezed and squeezed quadrature corresponding to $\theta = \theta_{\mathcal{M}}(\omega)/2$ and $\theta = [\theta_{\mathcal{M}}(\omega) - \pi]/2$, respectively. Note that, in general, these optimum values of the homodyne phase $\theta$ depend on the sideband frequency $\omega$, so, for example, if the goal is to maximize the degree of squeezing at a particular sideband frequency $\omega_{\mathrm{opt}}$, then the optimum phase value $\theta_{\mathrm{opt}} = [\theta_{\mathcal{M}}(\omega_{\mathrm{opt}}) - \pi]/2$ should be selected accordingly.
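Given a routine that returns $\breve{\mathbf{Z}}(\omega)$, the quantities in Eqs.~\eqref{eq:NM-spectrum-2} and \eqref{eq:Ppm} follow directly. A minimal Python sketch (our own illustration; \texttt{zfun} is assumed to be a user-supplied function returning the $2n \times 2n$ transfer matrix in the block form of Eq.~\eqref{eq:TF-2}):

```python
import numpy as np

def squeezing_spectrum(zfun, omega, i=0):
    """Return (P^+, P^-) of Eq. (Ppm) for output port i (0-based).

    zfun(omega) must return the 2n x 2n transfer matrix in the
    block form of Eq. (TF-2).
    """
    Z_pos, Z_neg = zfun(omega), zfun(-omega)
    n = Z_pos.shape[0] // 2
    row_minus = lambda Z: Z[i, :n]                  # i-th row of Z^- block
    row_plus = lambda Z: Z[i, n:]                   # i-th row of Z^+ block
    N = lambda Z: np.sum(np.abs(row_plus(Z)) ** 2)  # N_i of Eq. (NM-spectrum-2)
    M = np.sum(row_minus(Z_pos) * row_plus(Z_neg))  # M_i of Eq. (NM-spectrum-2)
    base = 1.0 + N(Z_pos) + N(Z_neg)
    return base + 2.0 * np.abs(M), base - 2.0 * np.abs(M)
```

With vacuum inputs and no parametric gain, $\mathbf{Z}^+ = 0$, so $\mathcal{N} = \mathcal{M} = 0$ and both quadratures sit at the vacuum level, $\mathcal{P}^{\pm} = 1$.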
\section{Squeezing from a single OPO}
\label{sec:OPO-1}
A network that produces squeezed light by means of a single degenerate OPO~\cite{Wu.JOSAB.4.1465.1987} is schematically shown in Fig.~\ref{fig:OPO_single_scheme}. The OPO consists of a nonlinear crystal enclosed in a Fabry-P\'erot cavity. The pump field for the OPO is assumed to be classical and not shown in the scheme. Each partially transparent mirror in the network (including cavity mirrors and a beamsplitter) has two input ports and two output ports. A vacuum field enters into each input port. The OPO cavity has a fictitious third mirror to model intracavity losses (mainly due to absorption in the crystal as well as scattering and Fresnel reflection at the crystal's facets). The beamsplitter B models losses in the output transmission line (e.g., due to coupling into a fiber) and inefficiencies in the homodyne detector (not shown) used to measure the squeezing spectrum of the output field. Taking into account all optical elements, the network is modeled as having four input ports, four output ports, and one cavity mode ($n = 4$, $m = 1$).
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{Fig_OPO_single_scheme.pdf}
\caption{A schematic depiction of the single OPO network.}
\label{fig:OPO_single_scheme}
\end{figure}
\begin{table}[htbp]
\caption{\label{tab:params-1}Parameters of the single OPO network.}
\begin{ruledtabular}
\begin{tabular}{lll}
Parameter & Type & Description \\ \hline
$\kappa_1$ & Positive & Leakage rate for the left cavity mirror \\
$\kappa_2$ & Positive & Leakage rate for the right cavity mirror \\
$\kappa_3$ & Positive & Leakage rate for intracavity losses \\
$\omega_0$ & Real & Frequency detuning of the cavity \\
$\xi$ & Complex & Pump amplitude of the OPO \\
$\theta_{\mathrm{B}}$ & Real & Rotation angle of the beamsplitter \\
\end{tabular}
\end{ruledtabular}
\end{table}
Parameters of the single OPO network are described in Table~\ref{tab:params-1}. With $\xi = |\xi| e^{i \theta_{\xi}}$, there is a total of seven real parameters. Note that we use angular frequencies throughout this paper. For each cavity mirror, the leakage rate is
\begin{equation}
\label{eq:kappa-T-relation}
\kappa_i = \frac{c T_i}{2 l_{\mathrm{eff}}} , \quad i = 1,2,3,
\end{equation}
where $T_i$ is the power transmittance of the $i$th mirror, $c$ is the speed of light, and $l_{\mathrm{eff}}$ is the effective cavity length (taking into account the length and refractive index of the crystal). To simplify the notation, we also use alternative parameters:
\begin{equation}
\gamma = \kappa_1 + \kappa_2 + \kappa_3 ,
\end{equation}
to denote the total leakage rate (including losses) from the cavity, and
\begin{equation}
t_{\mathrm{B}} = \cos(\theta_{\mathrm{B}}) , \ \ \ r_{\mathrm{B}} = \sin(\theta_{\mathrm{B}}) ,
\end{equation}
to denote, respectively, the transmissivity and reflectivity of the beamsplitter.
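Equation~\eqref{eq:kappa-T-relation} is easy to evaluate for concrete cavity parameters. The short Python snippet below (our own consistency check, using the mirror transmittances and effective cavity length quoted later in this section) converts power transmittances to leakage rates:

```python
from math import pi

C_LIGHT = 299_792_458.0  # speed of light, m/s

def leakage_rate(T, l_eff):
    """Leakage rate kappa = c T / (2 l_eff) of Eq. (kappa-T-relation), in rad/s."""
    return C_LIGHT * T / (2.0 * l_eff)

# T_1 = 0.02 and l_eff = 87 mm give kappa_1 / (2 pi) of about 5.48 MHz,
# in agreement with the numbers quoted for the single-OPO example below.
kappa_1 = leakage_rate(0.02, 87e-3)
kappa_2 = leakage_rate(0.15, 87e-3)
```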
The QNET package~\cite{QNET.url} is used to derive the $(\mathbf{S}, \mathbf{L}, H)$ model of the network, and the resulting components of the model are
\begin{eqnarray*}
& \mathbf{S} = \begin{bmatrix}
0 & t_{\mathrm{B}} & 0 & - r_{\mathrm{B}} \\
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & r_{\mathrm{B}} & 0 & t_{\mathrm{B}}
\end{bmatrix} , \quad
\mathbf{L} = \begin{bmatrix}
\sqrt{\kappa_2} t_{\mathrm{B}} a \\
\sqrt{\kappa_1} a \\
\sqrt{\kappa_3} a \\
\sqrt{\kappa_2} r_{\mathrm{B}} a
\end{bmatrix}, & \\
& H = \omega_0 a^\dag a + {\textstyle\frac{i}{2}} \xi a^{\dag 2} - {\textstyle\frac{i}{2}} \xi^\ast a^2 , &
\end{eqnarray*}
where $a$ is the annihilation operator of the cavity field mode. Using the formalism of Sec.~\ref{sec:IO-model}, we obtain: $\mathbf{\Omega} = \omega_0$, $\mathbf{W} = \xi$,
\begin{align*}
& \mathbf{K} = \left[ \sqrt{\kappa_2} t_{\mathrm{B}}, \sqrt{\kappa_1}, \sqrt{\kappa_3}, \sqrt{\kappa_2} r_{\mathrm{B}} \right]^{\mathsf{T}}, \\
& \mathbf{V} = -\eta , \quad
\mathbf{Y} = -\left[\sqrt{\kappa_1}, \sqrt{\kappa_2}, \sqrt{\kappa_3}, 0 \right], \\
& \breve{\mathbf{A}} = \begin{bmatrix}
- \eta & \xi \\
\xi^\ast & - \eta^\ast
\end{bmatrix}, \\
& (\breve{\mathbf{A}} + i\omega \mathbf{I}_2)^{-1} = \displaystyle
-\frac{1}{\lambda(\omega)} \begin{bmatrix}
\eta^\ast - i\omega & \xi \\
\xi^\ast & \eta - i\omega
\end{bmatrix},
\end{align*}
where we defined auxiliary parameters:
$$
\eta = {\textstyle\frac{1}{2}} \gamma + i \omega_0 , \quad
\lambda(\omega) = (\eta^\ast - i\omega) (\eta - i\omega) - |\xi|^2 .
$$
These results make it straightforward to analytically compute the transfer-matrix function $\breve{\mathbf{Z}}(\omega)$ of Eq.~\eqref{eq:TF-1}. Since we are only interested in squeezing properties of the field at the output port~1, it is sufficient to use only the respective rows of matrices $\mathbf{Z}^- (\omega)$ and $\mathbf{Z}^+ (\omega)$, i.e.,
\begin{subequations}
\label{eq:IO-OPO-1}
\begin{align}
& \mathbf{Z}_1^-(\omega) = \frac{\sqrt{\kappa_2} t_{\mathrm{B}} (\eta^\ast - i\omega)
}{\lambda(\omega)} \mathbf{Y}
+ \left[ 0, t_{\mathrm{B}}, 0, - r_{\mathrm{B}} \right] , \\
& \mathbf{Z}_1^+(\omega) = \frac{\sqrt{\kappa_2} t_{\mathrm{B}} \xi}{\lambda(\omega)} \mathbf{Y} .
\end{align}
\end{subequations}
By substituting elements of $\mathbf{Z}_1^-(\omega)$ and $\mathbf{Z}_1^+(\omega)$ into Eqs.~\eqref{eq:NM-spectrum-2}, we obtain:
\begin{subequations}
\label{eq:NM-spectrum-OPO-1}
\begin{align}
& \mathcal{N}_1(\omega) = \frac{\gamma \kappa_2 T_{\mathrm{B}} |\xi|^2}{|\lambda(\omega)|^2}, \\
& \mathcal{M}_1(\omega) = \frac{\gamma (\eta^\ast - i\omega) - \lambda(\omega)}{|\lambda(\omega)|^2}
\kappa_2 T_{\mathrm{B}} \xi ,
\end{align}
\end{subequations}
where $T_{\mathrm{B}} = t_{\mathrm{B}}^2$ is the power transmittance of the beamsplitter. Using Eq.~\eqref{eq:spectrum-2}, the resulting squeezing spectrum is
\begin{equation}
\label{eq:PSD-OPO-1}
\mathcal{P}(\omega,\theta) = 1 + 2 \kappa_2 T_{\mathrm{B}} |\xi| \frac{\gamma |\xi|
+ \mu(\omega) \cos\varphi + \gamma \omega_0 \sin\varphi}{|\lambda(\omega)|^2} ,
\end{equation}
where $\mu(\omega) = \frac{1}{4}\gamma^2 + |\xi|^2 + \omega^2 - \omega_0^2$ and $\varphi = \theta_{\xi} - 2 \theta$. The spectra for anti-squeezed and squeezed quadrature are obtained as the maximum and minimum (cf.~Eq.~\eqref{eq:Ppm}) of $\mathcal{P}(\omega,\theta)$ in Eq.~\eqref{eq:PSD-OPO-1} for $\varphi = \tan^{-1} [\gamma \omega_0 / \mu(\omega)]$ and $\varphi = \tan^{-1} [\gamma \omega_0 / \mu(\omega)] + \pi$, respectively, and are given by
\begin{equation}
\label{eq:S-pm-1}
\mathcal{P}^{\pm}(\omega) = 1 \pm 2 \kappa_2 T_{\mathrm{B}} |\xi| \frac{\sqrt{\mu^2(\omega)
+ \gamma^2 \omega_0^2} \pm \gamma |\xi|}{|\lambda(\omega)|^2} .
\end{equation}
In order to compare the theoretical spectra with experimental data, it is common to express the pump amplitude as
\begin{equation}
\label{eq:x-scaled}
|\xi| = {\textstyle\frac{1}{2}} \gamma x , \quad x = \sqrt{P/P_{\mathrm{th}}} ,
\end{equation}
where $P$ is the OPO pump power and $P_{\mathrm{th}}$ is its threshold value. Analogously to the scaled pump amplitude $x = 2 |\xi|/\gamma$, it is convenient to use scaled frequencies $\Omega = 2 \omega/\gamma$ and $\Omega_0 = 2 \omega_0/\gamma$. With this notation, Eq.~\eqref{eq:S-pm-1} takes the form:
\begin{equation}
\label{eq:S-pm-2}
\mathcal{P}^{\pm}(\omega) = 1 \pm 4 T_{\mathrm{B}} \rho x
\frac{\sqrt{(1 + y^2)^2 + 4 \Omega_0^2} \pm 2 x}{(1 - y^2)^2 + 4 \Omega^2} ,
\end{equation}
where $\rho = \kappa_2/\gamma = T_2/(T_1 + T_2 + L)$ is the escape efficiency of the cavity, $L = T_3$ denotes the intracavity power loss, and $y^2 = x^2 + \Omega^2 - \Omega_0^2$.
In the case of zero detuning, $\omega_0 = 0$, the squeezing spectrum of Eq.~\eqref{eq:PSD-OPO-1} becomes
\begin{equation}
\label{eq:PSD-OPO-1-0}
\mathcal{P}(\omega,\theta) = 1 + 2 \kappa_2 T_{\mathrm{B}} |\xi| \frac{\gamma |\xi|
+ (\frac{1}{4}\gamma^2 + |\xi|^2 + \omega^2) \cos\varphi}{(\frac{1}{4}\gamma^2 - |\xi|^2 - \omega^2)^2
+ \gamma^2 \omega^2} .
\end{equation}
The corresponding spectra for anti-squeezed and squeezed quadrature are obtained for $\varphi = 0$ and $\varphi = \pi$, respectively. They can be expressed by taking $\Omega_0 = 0$ in Eq.~\eqref{eq:S-pm-2}, which reproduces the familiar result~\cite{Collett.Walls.PRA.32.2887.1985, Wu.JOSAB.4.1465.1987}:
\begin{equation}
\label{eq:S-pm-2-0}
\mathcal{P}^{\pm}(\omega) = 1 \pm T_{\mathrm{B}} \rho \frac{4 x}{(1 \mp x)^2 + \Omega^2} .
\end{equation}
The spectra of Eq.~\eqref{eq:S-pm-2-0} have Lorentzian shapes with maximum (for anti-squeezing) and minimum (for squeezing) at the resonance (zero sideband frequency), and with the degree of squeezing rapidly decreasing as the sideband frequency increases. For applications such as CV-QKD, it would be valuable to significantly extend the squeezing bandwidth. It would be also of interest to achieve a maximum degree of squeezing (i.e., a minimum value of $\mathcal{P}^{-}$) at a high-frequency sideband. Therefore, we investigate whether such modifications of the squeezing spectrum are possible by using a nonzero value of the cavity's frequency detuning.
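The scaled spectra are simple to implement and check numerically. The Python sketch below (our own illustration) implements Eq.~\eqref{eq:S-pm-2} together with its zero-detuning limit Eq.~\eqref{eq:S-pm-2-0}; the two expressions agree identically at $\Omega_0 = 0$, and for a lossless setup ($\rho = T_{\mathrm{B}} = 1$) the two quadratures saturate the minimum-uncertainty product $\mathcal{P}^+ \mathcal{P}^- = 1$ at resonance:

```python
import numpy as np

def P_pm(Omega, x, Omega0=0.0, rho=1.0, T_B=1.0):
    """(P^+, P^-) of Eq. (S-pm-2) in scaled variables."""
    y2 = x**2 + Omega**2 - Omega0**2
    root = np.sqrt((1 + y2)**2 + 4 * Omega0**2)
    den = (1 - y2)**2 + 4 * Omega**2
    pref = 4 * T_B * rho * x
    return 1 + pref * (root + 2 * x) / den, 1 - pref * (root - 2 * x) / den

def P_pm_zero_detuning(Omega, x, rho=1.0, T_B=1.0):
    """(P^+, P^-) of Eq. (S-pm-2-0): Lorentzian spectra at Omega0 = 0."""
    lor = lambda s: 4 * x / ((1 - s * x)**2 + Omega**2)
    return 1 + T_B * rho * lor(+1), 1 - T_B * rho * lor(-1)
```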
\begin{figure}[t]
\centering
\includegraphics[width=1.0\columnwidth]{Fig_OPO_single_spectrum_with_detuning.pdf}
\caption{Squeezing spectra of the output light field from a single OPO network with different values of the cavity's frequency detuning $\omega_0/2\pi$ (given in the legend). Logarithmic power spectral densities of the quantum noise in anti-squeezed and squeezed quadrature, $\mathcal{Q}^{\pm}(\omega) = 10 \log_{10} \mathcal{P}^{\pm}(\omega)$, are shown versus the sideband frequency $\omega/2\pi$ for $\mathcal{P}^{\pm}(\omega)$ of Eq.~\eqref{eq:S-pm-2}. The values of network parameters are listed in the text.}
\label{fig:OPO_single_spectrum}
\end{figure}
Consider a single OPO with a set of experimentally motivated parameters: pump power $P = 1.5$~W, pump wavelength $\lambda_p = 775$~nm, and signal wavelength $\lambda_s = 1550$~nm; an MgO:PPLN crystal with length $l_c = 20$~mm, refractive index (at $\lambda_s$) $n_s = 2.1$, and effective nonlinear coefficient $d_{\mathrm{eff}} = 14$~pm/V; a Fabry-P\'erot cavity with effective length $l_{\mathrm{eff}} = 87$~mm, left mirror reflectance $R_1 = 0.98$ ($T_1 = 0.02$, $\kappa_1 / 2\pi \approx 5.484$~MHz), right mirror reflectance $R_2 = 0.85$ ($T_2 = 0.15$, $\kappa_2 / 2\pi \approx 41.132$~MHz), intracavity loss $L = 0.02$ ($\kappa_3 / 2\pi \approx 5.484$~MHz), and total leakage rate $\gamma/2\pi \approx 52.1$~MHz; output transmission line loss $L_{\mathrm{tl}} = R_{\mathrm{B}} = 0$ ($T_{\mathrm{B}} = 1$). These parameters correspond to the OPO's threshold power $P_{\mathrm{th}} \approx 14.86$~W and scaled pump amplitude $x = \sqrt{P/P_{\mathrm{th}}} \approx 0.318$. Using these parameters, we compute the squeezing spectra $\mathcal{P}^{\pm}(\omega)$ of Eq.~\eqref{eq:S-pm-2} for three detuning values: $\omega_0/2\pi = \{0, 25, 50\}$~MHz. The resulting logarithmic spectra $\mathcal{Q}^{\pm}(\omega) = 10 \log_{10} \mathcal{P}^{\pm}(\omega)$ for anti-squeezed and squeezed quadrature are shown in Fig.~\ref{fig:OPO_single_spectrum}. These results indicate that, while the use of nonzero detuning can increase the degree of squeezing at higher-frequency sidebands as compared to the case of $\omega_0 = 0$, this increase is very small. Also, no improvement in the squeezing bandwidth (quantified as the average degree of squeezing over a selected bandwidth) is achieved through the use of nonzero detuning. These observations motivate us to explore the use of the CQFC network with two coupled OPOs as a light source with the potential to generate a widely tunable squeezing spectrum.
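As a quick consistency check on these numbers (our own back-of-the-envelope calculation), the zero-detuning squeezing at resonance follows directly from Eq.~\eqref{eq:S-pm-2-0}:

```python
from math import log10

# Parameters quoted in the text for the single-OPO example
T1, T2, L_loss, T_B = 0.02, 0.15, 0.02, 1.0
x = 0.318                          # scaled pump amplitude sqrt(P / P_th)

rho = T2 / (T1 + T2 + L_loss)      # escape efficiency, about 0.789
P_minus = 1 - T_B * rho * 4 * x / (1 + x) ** 2  # Eq. (S-pm-2-0) at Omega = 0
Q_minus = 10 * log10(P_minus)      # about -3.7 dB of squeezing at resonance
```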
\begin{figure*}[htbp]
\centering
\includegraphics[width=1.8\columnwidth]{Fig_OPO_CQFN_scheme.pdf}
\caption{A schematic depiction of the CQFC network of two coupled OPOs.}
\label{fig:OPO_CQFN_scheme}
\end{figure*}
\begin{table*}[htbp]
\caption{\label{tab:params-2}Parameters of the CQFC network of two coupled OPOs.}
\begin{ruledtabular}
\begin{tabular}{lll}
Parameter & Type & Description \\ \hline
$\kappa_{\mathrm{p} 1}$ & Positive & Leakage rate for the left mirror of the plant OPO cavity \\
$\kappa_{\mathrm{p} 2}$ & Positive & Leakage rate for the right mirror of the plant OPO cavity \\
$\kappa_{\mathrm{p} 3}$ & Positive & Leakage rate for losses in the plant OPO cavity \\
$\omega_{\mathrm{p}}$ & Real & Frequency detuning of the plant OPO cavity \\
$\xi_{\mathrm{p}}$ & Complex & Pump amplitude of the plant OPO \\ \hline
$\kappa_{\mathrm{c} 1}$ & Positive & Leakage rate for the left mirror of the controller OPO cavity \\
$\kappa_{\mathrm{c} 2}$ & Positive & Leakage rate for the right mirror of the controller OPO cavity \\
$\kappa_{\mathrm{c} 3}$ & Positive & Leakage rate for losses in the controller OPO cavity \\
$\omega_{\mathrm{c}}$ & Real & Frequency detuning of the controller OPO cavity \\
$\xi_{\mathrm{c}}$ & Complex & Pump amplitude of the controller OPO \\ \hline
$\phi_1$ & Real & Phase shift of the first phase shifter \\
$\phi_2$ & Real & Phase shift of the second phase shifter \\
$\theta_1$ & Real & Rotation angle of the first beamsplitter \\
$\theta_2$ & Real & Rotation angle of the second beamsplitter \\
$\theta_3$ & Real & Rotation angle of the third beamsplitter \\
\end{tabular}
\end{ruledtabular}
\end{table*}
\section{Squeezing from a network of two coupled OPOs}
\label{sec:OPO-2}
The CQFC network that includes two coupled degenerate OPOs~\cite{Crisafulli.OE.21.18371.2013} is schematically shown in Fig.~\ref{fig:OPO_CQFN_scheme}. Each OPO consists of a nonlinear crystal enclosed in a Fabry-P\'erot cavity. Pump fields for both OPOs are assumed to be classical and not shown in the scheme. From the control theory perspective, OPO1 is considered to be the \emph{plant} and OPO2 the (quantum) \emph{controller}. Each partially transparent mirror in the network (including cavity mirrors and beamsplitters) has two input ports and two output ports. A vacuum field enters into each input port, except for two input ports of cavity mirrors used for the feedback loop between the plant and controller. Each OPO cavity has a fictitious third mirror to model intracavity losses. Beamsplitters B1 and B2 represent the light diverted to lock the cavities as well as losses in optical transmission lines between the OPO cavities. Beamsplitter B3 represents losses in the output transmission line (e.g., due to coupling into a fiber) and inefficiencies in the homodyne detector (not shown) used to measure the squeezing spectrum of the output field. Phase shifters P1 and P2 are inserted into transmission lines between the OPOs to manipulate the interference underlying the CQFC control. Taking into account the feedback loop between the plant and controller, the network is modeled as having seven input ports, seven output ports, and two cavity modes ($n = 7$, $m = 2$).
Parameters of the network of two coupled OPOs are listed in Table~\ref{tab:params-2}. With $\xi_{\mathrm{p}} = |\xi_{\mathrm{p}}| e^{i \theta_{\mathrm{p}}}$ and $\xi_{\mathrm{c}} = |\xi_{\mathrm{c}}| e^{i \theta_{\mathrm{c}}}$, there is a total of 17 real parameters. The relationship between leakage rate and power transmittance of a cavity mirror is given, similarly to Eq.~\eqref{eq:kappa-T-relation}, by
\begin{equation}
\label{eq:kappa-T-relation-2}
\kappa_{\mathrm{p} i} = \frac{c T_{\mathrm{p} i}}{2 l_{\mathrm{p,eff}}} , \quad
\kappa_{\mathrm{c} i} = \frac{c T_{\mathrm{c} i}}{2 l_{\mathrm{c,eff}}} , \quad i = 1,2,3,
\end{equation}
where $T_{\mathrm{p} i}$ ($T_{\mathrm{c} i}$) is the power transmittance of the $i$th mirror and $l_{\mathrm{p,eff}}$ ($l_{\mathrm{c,eff}}$) is the effective cavity length for the plant (controller). To simplify the notation, we also use alternative parameters:
\begin{equation}
\gamma_{\mathrm{p}} = \kappa_{\mathrm{p} 1} + \kappa_{\mathrm{p} 2} + \kappa_{\mathrm{p} 3} , \quad
\gamma_{\mathrm{c}} = \kappa_{\mathrm{c} 1} + \kappa_{\mathrm{c} 2} + \kappa_{\mathrm{c} 3}
\end{equation}
to denote the total leakage rate (including losses) from, respectively, the plant and controller cavities,
\begin{equation}
t_i = \cos(\theta_i) , \ \ \ r_i = \sin(\theta_i) , \ \ \ i = 1,2,3
\end{equation}
to denote, respectively, the transmissivity and reflectivity of each beamsplitter, and
\begin{equation}
\phi = \phi_1 + \phi_2
\end{equation}
to denote the total phase shift for the feedback roundtrip path. Similarly to Eq.~\eqref{eq:x-scaled}, we also define the scaled pump amplitudes $x_{\mathrm{p}}$ and $x_{\mathrm{c}}$ for the plant and controller OPOs, respectively:
\begin{equation}
\label{eq:x-scaled-2}
x_{\mathrm{p}} = \frac{2 |\xi_{\mathrm{p}}| }{ \gamma_{\mathrm{p}} } = \sqrt{ \frac{P_{\mathrm{p}} }{ P_{\mathrm{p,th}} }} , \quad
x_{\mathrm{c}} = \frac {2 |\xi_{\mathrm{c}}| }{ \gamma_{\mathrm{c}} } = \sqrt{ \frac{P_{\mathrm{c}} }{ P_{\mathrm{c,th}} }} ,
\end{equation}
where $P_{\mathrm{p}}$ ($P_{\mathrm{c}}$) is the OPO pump power and $P_{\mathrm{p,th}}$ ($P_{\mathrm{c,th}}$) is its threshold value for the plant (controller).
The QNET package~\cite{QNET.url} is used to derive the $(\mathbf{S}, \mathbf{L}, H)$ model of the network, and the resulting components of the model are
\begin{equation}
\mathbf{S} = \begin{bmatrix}
t_1 t_2 t_3 e^{i \phi} & - r_1 t_2 t_3 e^{i \phi} & - r_2 t_3 e^{i \phi_2} & - r_3 & 0 & 0 & 0 \\
r_1 & t_1 & 0 & 0 & 0 & 0 & 0 \\
t_1 r_2 e^{i \phi_1} & - r_1 r_2 e^{i \phi_1} & t_2 & 0 & 0 & 0 & 0 \\
t_1 t_2 r_3 e^{i \phi} & - r_1 t_2 r_3 e^{i \phi} & - r_2 r_3 e^{i \phi_2} & t_3 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix},
\end{equation}
\begin{equation}
\label{eq:L-2OPOs}
\mathbf{L} = \begin{bmatrix}
t_3 \left(\sqrt{\kappa_{\mathrm{p} 1}} t_1 t_2 e^{i \phi}
+ \sqrt{\kappa_{\mathrm{p} 2}}\right) a_{\mathrm{p}}
+ \sqrt{\kappa_{\mathrm{c} 2}} t_2 t_3 e^{i \phi_2} a_{\mathrm{c}} \\
\sqrt{\kappa_{\mathrm{p} 1}} r_1 a_{\mathrm{p}} \\
\sqrt{\kappa_{\mathrm{p} 1}} t_1 r_2 e^{i \phi_1} a_{\mathrm{p}}
+ \sqrt{\kappa_{\mathrm{c} 2}} r_2 a_{\mathrm{c}} \\
r_3 \left(\sqrt{\kappa_{\mathrm{p} 1}} t_1 t_2 e^{i \phi}
+ \sqrt{\kappa_{\mathrm{p} 2}}\right) a_{\mathrm{p}}
+ \sqrt{\kappa_{\mathrm{c} 2}} t_2 r_3 e^{i \phi_2} a_{\mathrm{c}} \\
\sqrt{\kappa_{\mathrm{c} 1}} a_{\mathrm{c}} \\
\sqrt{\kappa_{\mathrm{p} 3}} a_{\mathrm{p}} \\
\sqrt{\kappa_{\mathrm{c} 3}} a_{\mathrm{c}}
\end{bmatrix},
\end{equation}
\begin{align}
H = & \left(\omega_{\mathrm{p}} + \mathrm{Im}\,\nu \right) a_{\mathrm{p}}^{\dag} a_{\mathrm{p}}
+ \omega_{\mathrm{c}} a_{\mathrm{c}}^{\dag} a_{\mathrm{c}}
+ \left( {\textstyle\frac{i}{2}} \nu_{12} a_{\mathrm{p}}^{\dag} a_{\mathrm{c}} + \text{H.c.} \right) \nonumber \\
& + \left[ {\textstyle\frac{i}{2}} \left(
\xi_{\mathrm{p}} a_{\mathrm{p}}^{\dag 2} + \xi_{\mathrm{c}} a_{\mathrm{c}}^{\dag 2} \right)
+ \text{H.c.} \right] ,
\label{eq:Ham-fb}
\end{align}
where $a_{\mathrm{p}}$ and $a_{\mathrm{c}}$ denote, respectively, the annihilation operators of the plant's and controller's cavity field modes, and we defined auxiliary parameters:
\begin{eqnarray*}
& \nu_1 = \sqrt{\kappa_{\mathrm{c} 2} \kappa_{\mathrm{p} 1}} t_1 e^{i \phi_1} , \quad
\nu_2 = \sqrt{\kappa_{\mathrm{c} 2} \kappa_{\mathrm{p} 2}} t_2 e^{i \phi_2}, & \\
& \nu_{12} = \nu_1^{\ast} - \nu_2 , \quad
\nu = \sqrt{\kappa_{\mathrm{p} 1} \kappa_{\mathrm{p} 2}} t_1 t_2 e^{i \phi}. &
\end{eqnarray*}
By comparing Eq.~\eqref{eq:Ham-fb} to the corresponding Hamiltonian without feedback:
\begin{equation}
\label{eq:Ham-nf}
H_{\text{nf}} = \omega_{\mathrm{p}} a_{\mathrm{p}}^{\dag} a_{\mathrm{p}}
+ \omega_{\mathrm{c}} a_{\mathrm{c}}^{\dag} a_{\mathrm{c}}
+ \left[ {\textstyle\frac{i}{2}} \left(
\xi_{\mathrm{p}} a_{\mathrm{p}}^{\dag 2} + \xi_{\mathrm{c}} a_{\mathrm{c}}^{\dag 2} \right)
+ \text{H.c.} \right] ,
\end{equation}
we observe that the two main effects induced by the feedback are (1) the appearance of an effective interaction between the plant's and controller's cavity modes, governed by the term $\frac{i}{2} \nu_{12} a_{\mathrm{p}}^{\dag} a_{\mathrm{c}} + \text{H.c.}$, and (2) the modification of the plant detuning by $\mathrm{Im}\,\nu$, which is proportional to $\sin\phi$.
Using the formalism of Sec.~\ref{sec:IO-model}, we obtain:
\begin{align*}
& \mathbf{\Omega} = \begin{bmatrix}
\omega_{\mathrm{p}} + \mathrm{Im}\,\nu & \frac{i}{2} \nu_{12} \\
-\frac{i}{2} \nu_{12}^\ast & \omega_{\mathrm{c}}
\end{bmatrix}, \quad
\mathbf{W} = \begin{bmatrix}
\xi_{\mathrm{p}} & 0 \\
0 & \xi_{\mathrm{c}}
\end{bmatrix}, \\
& \mathbf{K} = \begin{bmatrix}
t_3 \left(\sqrt{\kappa_{\mathrm{p} 1}} t_1 t_2 e^{i \phi} + \sqrt{\kappa_{\mathrm{p} 2}}\right) &
\sqrt{\kappa_{\mathrm{c} 2}} t_2 t_3 e^{i \phi_2} \\
\sqrt{\kappa_{\mathrm{p} 1}} r_1 & 0 \\
\sqrt{\kappa_{\mathrm{p} 1}} t_1 r_2 e^{i \phi_1} &
\sqrt{\kappa_{\mathrm{c} 2}} r_2 \\
r_3 \left(\sqrt{\kappa_{\mathrm{p} 1}} t_1 t_2 e^{i \phi} + \sqrt{\kappa_{\mathrm{p} 2}}\right) &
\sqrt{\kappa_{\mathrm{c} 2}} t_2 r_3 e^{i \phi_2} \\
0 & \sqrt{\kappa_{\mathrm{c} 1}} \\
\sqrt{\kappa_{\mathrm{p} 3}} & 0 \\
0 & \sqrt{\kappa_{\mathrm{c} 3}}
\end{bmatrix}, \\
& \mathbf{V} = - \begin{bmatrix}
\eta_{\mathrm{p}} & \nu_2 \\
\nu_1 & \eta_{\mathrm{c}}
\end{bmatrix}, \\
& \mathbf{Y} = \begin{bmatrix}
- \sqrt{\kappa_{\mathrm{p} 1}} - \sqrt{\kappa_{\mathrm{p} 2}} t_1 t_2 e^{i \phi} &
- \sqrt{\kappa_{\mathrm{c} 2}} t_1 e^{i \phi_1} \\
\sqrt{\kappa_{\mathrm{p} 2}} r_1 t_2 e^{i \phi} & \sqrt{\kappa_{\mathrm{c} 2}} r_1 e^{i \phi_1} \\
\sqrt{\kappa_{\mathrm{p} 2}} r_2 e^{i \phi_2} & 0 \\
0 & 0 \\
0 & - \sqrt{\kappa_{\mathrm{c} 1}} \\
- \sqrt{\kappa_{\mathrm{p} 3}} & 0 \\
0 & - \sqrt{\kappa_{\mathrm{c} 3}}
\end{bmatrix}^{\mathsf{T}}, \\
& \breve{\mathbf{A}} = \begin{bmatrix}
-\eta_{\mathrm{p}} & -\nu_2 & \xi_{\mathrm{p}} & 0 \\
-\nu_1 & -\eta_{\mathrm{c}} & 0 & \xi_{\mathrm{c}} \\
\xi_{\mathrm{p}}^\ast & 0 & -\eta_{\mathrm{p}}^\ast & -\nu_2^\ast \\
0 & \xi_{\mathrm{c}}^\ast & -\nu_1^\ast & -\eta_{\mathrm{c}}^\ast
\end{bmatrix},
\end{align*}
where we used additional auxiliary parameters:
$$
\eta_{\mathrm{p}} = {\textstyle\frac{1}{2}} \gamma_{\mathrm{p}} + i \omega_{\mathrm{p}} + \nu , \quad
\eta_{\mathrm{c}} = {\textstyle\frac{1}{2}} \gamma_{\mathrm{c}} + i \omega_{\mathrm{c}} .
$$
The matrix $\breve{\mathbf{A}} + i\omega \mathbf{I}_4$ can be inverted analytically to obtain closed-form expressions for the transfer-matrix function $\breve{\mathbf{Z}}(\omega)$ and the squeezing spectrum $\mathcal{P}(\omega,\theta)$. However, the resulting expressions are too cumbersome to be informative and are not shown here. For practical purposes, it is more efficient to evaluate $\breve{\mathbf{Z}}(\omega)$ and $\mathcal{P}(\omega,\theta)$ numerically for any given set of parameter values.
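As an illustration of this numerical route, the sketch below evaluates the resolvent $(\mathbf{A} + i\omega\mathbf{I})^{-1}$ for a toy $2\times 2$ matrix with made-up entries (the network itself requires the $4\times 4$ matrix $\breve{\mathbf{A}}$); plain Python complex arithmetic suffices for the Gauss--Jordan elimination.

```python
# Sketch of numerically evaluating (A + i*omega*I)^(-1), as done for the
# transfer matrix Z(omega). The 2x2 matrix A below is a toy stand-in with
# made-up entries; the actual network uses the 4x4 doubled-up matrix.

def inverse(M):
    """Gauss-Jordan inverse of a square complex matrix (list of lists)."""
    n = len(M)
    # Augment M with the identity matrix.
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
           for i, row in enumerate(M)]
    for col in range(n):
        # Partial pivoting for numerical stability.
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [v / p for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

def resolvent(A, omega):
    """(A + i*omega*I)^(-1), evaluated at sideband frequency omega."""
    n = len(A)
    M = [[A[i][j] + (1j * omega if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    return inverse(M)

A = [[-1.0, 0.5], [0.5, -2.0]]   # toy stable matrix (illustration only)
Z = resolvent(A, omega=3.0)
```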
\section{Squeezing optimization procedure}
\label{sec:optim}
\subsection{Objective function}
In order to quantitatively investigate the tunability of the squeezing spectrum in the CQFC network of two coupled OPOs, we numerically optimize the degree of squeezing at various sideband frequencies. Specifically, we minimize an objective function of the form:
\begin{equation}
\label{eq:J}
J = \mathcal{P}^{-}(\omega_{\mathrm{opt}}) + g \mathcal{P}^{-}(\omega_{\mathrm{opt}}) \mathcal{P}^{+}(\omega_{\mathrm{opt}}) ,
\end{equation}
where $\omega_{\mathrm{opt}}$ is the selected sideband frequency. The first term in Eq.~\eqref{eq:J} is the minimum of the squeezing spectrum at $\omega_{\mathrm{opt}}$, while the second term is the uncertainty product times the weight parameter $g$. This second term is included in order to eliminate solutions with a very large uncertainty of the anti-squeezed quadrature. In all optimization results shown below, the weight parameter is $g = 0.001$. With such a small value of $g$, the difference between the values of $J$ and $\mathcal{P}^{-}$ is always insignificant, and therefore, for the sake of simplicity, we refer to the problem of minimizing $J$ as \emph{squeezing optimization}.
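A direct transcription of Eq.~\eqref{eq:J} is straightforward; in the sketch below the spectral values $\mathcal{P}^{-}$ and $\mathcal{P}^{+}$ are hypothetical numbers, not outputs of the network model.

```python
# Sketch of Eq. (J): J = P_minus + g * P_minus * P_plus, where P_minus and
# P_plus are the squeezed and anti-squeezed spectral values at the selected
# sideband frequency. The spectral values below are hypothetical.

G_WEIGHT = 0.001  # weight g used in all optimizations reported here

def objective(P_minus, P_plus, g=G_WEIGHT):
    """J = P- + g * P- * P+; the second term penalizes large uncertainty products."""
    return P_minus + g * P_minus * P_plus

# Example: strong squeezing (P- = 0.1) with large anti-squeezing (P+ = 50).
J = objective(0.1, 50.0)   # 0.1 + 0.001 * 5.0 = 0.105
```

With $g = 0.001$, the penalty term contributes little unless the uncertainty product $\mathcal{P}^{-}\mathcal{P}^{+}$ is very large, which is exactly the intended behavior.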
All solutions encountered during a search are checked to satisfy the Routh--Hurwitz stability criterion~\cite{Bishop.Dorf.chapter.2000}, i.e., that all eigenvalues of the matrix $\breve{\mathbf{A}}$ in Eq.~\eqref{eq:TF-1} have negative real parts. Any unstable solution is eliminated from consideration by assigning to it a very large objective value ($J = 10^6$).
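The stability screening above can be sketched as follows; for illustration a $2\times 2$ real matrix is checked via the equivalent Routh--Hurwitz conditions $\operatorname{tr}\mathbf{A} < 0$ and $\det\mathbf{A} > 0$ (the actual network test uses the eigenvalues of the $4\times 4$ matrix $\breve{\mathbf{A}}$), and the matrices themselves are made up.

```python
# Sketch of the stability screening: a candidate solution is kept only if all
# eigenvalues of A have negative real parts; otherwise it receives the penalty
# J = 1e6. For a 2x2 real matrix this is equivalent to trace < 0 and det > 0.

PENALTY = 1.0e6

def is_stable_2x2(A):
    """Routh-Hurwitz conditions for a 2x2 real matrix: trace < 0, det > 0."""
    a, b = A[0]
    c, d = A[1]
    trace, det = a + d, a * d - b * c
    return trace < 0.0 and det > 0.0

def screened_objective(J, A):
    """Replace J with the penalty value if the candidate is unstable."""
    return J if is_stable_2x2(A) else PENALTY

stable = [[-1.0, 0.2], [0.3, -2.0]]    # trace = -3, det = 1.94 -> stable
unstable = [[1.0, 0.0], [0.0, -2.0]]   # one positive eigenvalue -> unstable
```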
\subsection{Optimization variables}
For a given $\omega_{\mathrm{opt}}$, the objective $J$ is a function of the network parameters --- seven real parameters for the single OPO network:
\begin{equation}
\{ T_1, T_2, L, \omega_0, x, \theta_{\xi}, L_{\mathrm{tl}} \} ,
\end{equation}
and 17 real parameters for the CQFC network of two coupled OPOs:
\begin{eqnarray}
\left\{ T_{\mathrm{p} 1} , T_{\mathrm{p} 2}, L_{\mathrm{p}}, \omega_{\mathrm{p}}, x_{\mathrm{p}}, \theta_{\mathrm{p}}, T_{\mathrm{c} 1} , T_{\mathrm{c} 2}, L_{\mathrm{c}}, \omega_{\mathrm{c}}, x_{\mathrm{c}}, \theta_{\mathrm{c}}, \right. \nonumber \\
\left. \phi_1, \phi_2, L_1, L_2, L_3 \right\} .
\end{eqnarray}
Recall that, for the single OPO network, $L = T_3$ is the intracavity power loss and $L_{\mathrm{tl}} = R_{\mathrm{B}}$ is the power loss in the output transmission line. Similarly, for the CQFC network of two coupled OPOs, $L_{\mathrm{p}} = T_{\mathrm{p} 3}$ and $L_{\mathrm{c}} = T_{\mathrm{c} 3}$ are the intracavity power losses for the plant and controller OPOs, respectively, and $L_i = r_i^2$ $(i = 1,2,3)$ are power losses in the transmission lines. In cases where the two intracavity loss values are equal, we denote $L_{\mathrm{in}} = L_{\mathrm{p}} = L_{\mathrm{c}}$, and where the three transmission line loss values are equal, we denote $L_{\mathrm{out}} = L_1 = L_2 = L_3$.
Numerical simulations demonstrate that an increase in any of the losses always leads to a deterioration of squeezing; therefore, if a loss parameter can vary in a specified interval $[L_{\mathrm{l}}, L_{\mathrm{u}}]$, an optimization will always converge to the lower bound $L_{\mathrm{l}}$. It thus makes sense to exclude the loss parameters from the optimization variables, i.e., to execute each optimization with all loss parameters having pre-assigned fixed values (of course, these values can vary from one optimization run to another to explore various experimentally relevant regimes). Consequently, there remain five optimization variables for the single OPO network:
\begin{equation}
\{ T_1, T_2, \omega_0, x, \theta_{\xi} \} ,
\end{equation}
and 12 optimization variables for the CQFC network of two coupled OPOs:
\begin{equation}
\{ T_{\mathrm{p} 1} , T_{\mathrm{p} 2}, \omega_{\mathrm{p}}, x_{\mathrm{p}}, \theta_{\mathrm{p}}, T_{\mathrm{c} 1} , T_{\mathrm{c} 2}, \omega_{\mathrm{c}},
x_{\mathrm{c}}, \theta_{\mathrm{c}}, \phi_1, \phi_2 \} .
\end{equation}
Each optimization variable $z$ can vary in an interval $[z_{\mathrm{l}}, z_{\mathrm{u}}]$ (where $z_{\mathrm{l}}$ is the lower bound and $z_{\mathrm{u}}$ is the upper bound). The bound intervals are
\begin{itemize}
\item $[0, 2 \pi]$ for all phase variables ($\theta_{\xi}$, $\theta_{\mathrm{p}}$, $\theta_{\mathrm{c}}$, $\phi_1$, $\phi_2$);
\item $[-\omega_{\mathrm{u}}, \omega_{\mathrm{u}}]$ for all cavity detuning frequencies ($\omega_0$, $\omega_{\mathrm{p}}$, $\omega_{\mathrm{c}}$);
\item $[0, T_{\mathrm{u}}]$ for all power transmittances of actual cavity mirrors ($T_1$, $T_2$, $T_{\mathrm{p} 1}$, $T_{\mathrm{p} 2}$, $T_{\mathrm{c} 1}$, $T_{\mathrm{c} 2}$);
\item $[0, x_{\mathrm{u}}]$ for all scaled pump amplitudes ($x$, $x_{\mathrm{p}}$, $x_{\mathrm{c}}$).
\end{itemize}
The values of upper bounds $\omega_{\mathrm{u}}$, $T_{\mathrm{u}}$ and $x_{\mathrm{u}}$ are specified (along with the values of losses) for each optimization run.
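The bound intervals listed above can be assembled programmatically; in the sketch below the ordering of the 12 variables follows the set given earlier, and the sample values of $\omega_{\mathrm{u}}$, $T_{\mathrm{u}}$, and $x_{\mathrm{u}}$ are illustrative choices, not prescriptions.

```python
# Sketch of assembling the bound intervals for the 12 optimization variables of
# the two-OPO CQFC network; omega_u, T_u, x_u are the run-specific upper bounds
# (the values passed below are hypothetical).
import math

def make_bounds(omega_u, T_u, x_u):
    phase  = (0.0, 2.0 * math.pi)   # theta_p, theta_c, phi_1, phi_2
    detune = (-omega_u, omega_u)    # omega_p, omega_c
    mirror = (0.0, T_u)             # T_p1, T_p2, T_c1, T_c2
    pump   = (0.0, x_u)             # x_p, x_c
    # Order: T_p1, T_p2, w_p, x_p, th_p, T_c1, T_c2, w_c, x_c, th_c, phi_1, phi_2
    return [mirror, mirror, detune, pump, phase,
            mirror, mirror, detune, pump, phase, phase, phase]

bounds = make_bounds(omega_u=2 * math.pi * 100e6, T_u=0.9, x_u=0.3)
```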
In all optimizations, the fixed physical parameters are the same for all OPOs: pump wavelength $\lambda_p = 775$~nm, signal wavelength $\lambda_s = 1550$~nm; an MgO:PPLN crystal with length $l_c = 20$~mm, refractive index (at the signal wavelength) $n_s = 2.1$, and effective nonlinear coefficient $d_{\mathrm{eff}} = 14$~pm/V; a Fabry-P\'erot cavity with effective length $l_{\mathrm{eff}} = 87$~mm. These values are characteristic of a typical tabletop experiment with bulk-optics components.
\begin{table*}[htbp]
\caption{\label{tab:alg-comparison}Performance of different algorithms for squeezing optimization in the CQFC network of two coupled OPOs. The table shows the best degree of squeezing, $\mathcal{Q}^{-}(\omega_{\mathrm{opt}}) = 10 \log_{10} \mathcal{P}^{-}(\omega_{\mathrm{opt}})$ (in dB), found using various algorithms, for $L_{\mathrm{in}} = 0.01$, $L_{\mathrm{out}} = 0.05$, $\omega_{\mathrm{u}}/2\pi = 100.0$~MHz, $x_{\mathrm{u}} = 0.3$, $T_{\mathrm{u}} = 0.9$, and five different $\omega_{\mathrm{opt}}$ values: $\omega_{\mathrm{opt}}/2\pi = \{5, 25, 50, 100, 200\}$~MHz. Optimizations for each individual algorithm execute four parallel searches with the population sizes of $N_{\mathrm{pop}} = 30$, and the evolutions are repeated $N_{\mathrm{ev}} = 30$ times (with solution exchanges between the searches after the completion of each evolution except the last one). Algorithm parameters such as the number of no improvements before halting the optimization, $N_{\mathrm{stop}}$, the number of generations, $N_{\mathrm{gen}}$, and the number of iterations, $N_{\mathrm{iter}}$, are indicated in the table. The hybrid strategy (eight parallel searches using seven global algorithms) is described in the text.}
\begin{ruledtabular}
\begin{tabular}{lrrrrr}
{} & \multicolumn{5}{c}{$\omega_{\mathrm{opt}}/2\pi$}\\
Algorithm & $5$~MHz & $25$~MHz & $50$~MHz & $100$~MHz & $200$~MHz \\
\hline
Sequential Least SQuares Programming (local only) & $-4.270$ & $-4.021$ & $-3.396$ & $-2.676$ & $-1.809$\\
Compass Search (local only) & $-8.824$ & $-7.540$ & $-8.274$ & $-8.113$ & $-2.527$\\
Compass Search guided by Monotonic Basin Hopping ($N_{\mathrm{stop}} = 5$) & $-9.105$ & $-7.611$ & $-7.037$ & $-8.255$ & $-7.540$\\
Artificial Bee Colony ($N_{\mathrm{gen}} = 200$) & $-9.791$ & $-8.945$ & $-8.788$ & $-8.427$ & $-7.811$\\
Covariance Matrix Adaptation Evolution Strategy ($N_{\mathrm{gen}} = 500$) & $-9.798$ & $-8.869$ & $-8.806$ & $-8.423$ & $-7.811$\\
Differential Evolution, variant 1220 ($N_{\mathrm{gen}} = 800$) & $-9.805$ & $-8.626$ & $-8.809$ & $-8.429$ & $-7.813$\\
Differential Evolution with p-best crossover ($N_{\mathrm{gen}} = 1000$) & $-9.805$ & $-8.953$ & $-8.808$ & $-8.429$ & $-7.813$\\
Improved Harmony Search ($N_{\mathrm{iter}} = 1000$) & $-9.805$ & $-8.949$ & $-8.808$ & $-8.429$ & $-7.813$\\
Particle Swarm Optimization, variant 5 ($N_{\mathrm{gen}} = 1$) & $-9.219$ & $-8.623$ & $-7.090$ & $-8.332$ & $-7.617$\\
Particle Swarm Optimization, variant 6 ($N_{\mathrm{gen}} = 1$) & $-8.811$ & $-7.936$ & $-7.403$ & $-7.536$ & $-5.932$\\
Simple Genetic Algorithm ($N_{\mathrm{gen}} = 1000$) & $-9.805$ & $-7.665$ & $-8.809$ & $-8.429$ & $-7.813$\\
Corana's Simulated Annealing ($N_{\mathrm{iter}} = 20000$) & $-7.432$ & $-5.015$ & $-4.893$ & $-6.110$ & $-4.754$\\
Hybrid strategy (eight parallel searches using seven global algorithms) & $-9.805$ & $-8.953$ & $-8.809$ & $-8.429$ & $-7.813$\\
\end{tabular}
\end{ruledtabular}
\end{table*}
\subsection{Optimization methodology}
Preliminary optimization runs using local algorithms (e.g., Sequential Least Squares Programming) demonstrated that different choices of initial parameter values result in solutions of varying quality, indicating that the fitness landscape contains multiple local optima. In order to reach a solution of very high quality, we therefore used global search methods. Specifically, we used PyGMO, a suite of global (stochastic) algorithms~\cite{pygmo.url}. Since these global algorithms are heuristic in nature, they do not guarantee convergence to a global optimum; in fact, as shown in Table~\ref{tab:alg-comparison}, while multiple global methods are capable of finding high-quality solutions, the performance varies between different algorithms as well as between optimizations with different values of $\omega_{\mathrm{opt}}$ for the same algorithm.
To maximize the chances of finding a globally optimal solution, we employed a hybrid strategy, where each optimization executes in parallel eight searches (using seven different global algorithms), with a fully connected topology of solution exchanges between them. These eight searches include two instances of Artificial Bee Colony and one instance each of Covariance Matrix Adaptation Evolution Strategy, Differential Evolution variant 1220, Differential Evolution with $p$-best crossover, Improved Harmony Search, Particle Swarm Optimization variant 5, and Compass Search guided by Monotonic Basin Hopping. Each optimization uses a population size of $N_{\mathrm{pop}} = 30$ for each of the global searches, and the evolutions are repeated $N_{\mathrm{ev}} = 30$ times (with solution exchanges between the searches after the completion of each evolution except the last one); the algorithm parameters (the number of no improvements before halting the optimization, $N_{\mathrm{stop}}$, the number of generations, $N_{\mathrm{gen}}$, and the number of iterations, $N_{\mathrm{iter}}$) used in the searches are the same as those shown in Table~\ref{tab:alg-comparison} for individual algorithms. As indicated by the results in Table~\ref{tab:alg-comparison}, this hybrid strategy consistently finds the best solution compared to any individual algorithm. Multiple trials with larger values of $N_{\mathrm{pop}}$, $N_{\mathrm{ev}}$, $N_{\mathrm{gen}}$, and $N_{\mathrm{iter}}$ typically did not improve the solution quality, and thus did not warrant the increased run time.
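The solution-exchange scheme can be illustrated with a toy version in which crude stochastic hill-climbers stand in for the actual PyGMO algorithms; everything below (the 1-D objective, the populations, the exchange rule) is a simplified sketch of the idea, not the production setup.

```python
# Toy sketch of the solution-exchange (migration) scheme: several independent
# stochastic searches evolve their own populations, and after each evolution
# (except the last) the overall best solution is copied into every population,
# mimicking a fully connected exchange topology. All parameters are made up.
import random

def evolve(population, objective, n_steps=50, rng=None):
    """One 'evolution': crude improvement-only random search per individual."""
    rng = rng or random.Random(0)
    out = []
    for x in population:
        for _ in range(n_steps):
            cand = x + rng.gauss(0.0, 0.1)
            if objective(cand) < objective(x):
                x = cand
        out.append(x)
    return out

def hybrid_search(objective, n_islands=4, n_pop=8, n_ev=5, seed=1):
    rng = random.Random(seed)
    islands = [[rng.uniform(-5.0, 5.0) for _ in range(n_pop)]
               for _ in range(n_islands)]
    for ev in range(n_ev):
        islands = [evolve(pop, objective, rng=rng) for pop in islands]
        if ev < n_ev - 1:  # exchange after every evolution except the last
            best = min((x for pop in islands for x in pop), key=objective)
            for pop in islands:
                pop[0] = best
    return min((x for pop in islands for x in pop), key=objective)

x_best = hybrid_search(lambda x: (x - 1.0) ** 2)
```

The exchange step lets a search that has stalled in a poor basin restart from the best solution found anywhere, which is the same rationale as the fully connected topology used in the actual optimizations.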
\section{Squeezing optimization results}
\label{sec:results}
First, we compare the performance of the CQFC network of two coupled OPOs with that of the single OPO network, in terms of the maximum degree of squeezing achievable under comparable conditions. Figures~\ref{fig:Qmin_vs_xb_and_Rb}~and~\ref{fig:Qmin_vs_xb_and_wb} show the optimized degree of squeezing, $\mathcal{Q}^{-}(\omega_{\mathrm{opt}})$, at $\omega_{\mathrm{opt}}/2\pi = 100$~MHz, for both networks, versus the upper limits on various network parameters ($T_{\mathrm{u}}$ and $x_{\mathrm{u}}$ in Fig.~\ref{fig:Qmin_vs_xb_and_Rb}, and $\omega_{\mathrm{u}}$ and $x_{\mathrm{u}}$ in Fig.~\ref{fig:Qmin_vs_xb_and_wb}), with constant loss values: $L = L_{\mathrm{in}} = 0.01$, $L_{\mathrm{tl}} = L_{\mathrm{out}} = 0.1$. We observe that the CQFC network of two coupled OPOs generates stronger squeezing than the single OPO network, even though the total transmission-line losses in the former are three times larger than in the latter (30\% versus 10\%). In both networks, the maximum degree of squeezing increases with both $T_{\mathrm{u}}$ (more light is allowed to leave the cavities) and $x_{\mathrm{u}}$ (higher pump power), with these increases being roughly linear for the single OPO network and faster than linear in the CQFC network of two coupled OPOs. These results demonstrate that the feedback makes it possible to more effectively utilize the available pump power.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{fig_paper_12vars_and_5vars_3D_Qmin_vs_xb_and_Rb.pdf}
\caption{The optimized degree of squeezing, $\mathcal{Q}^{-}(\omega_{\mathrm{opt}})$, for (a) the CQFC network of two coupled OPOs and (b) the single OPO network, versus the upper limits on the power transmittance of cavity mirrors, $T_{\mathrm{u}}$, and the scaled pump amplitude, $x_{\mathrm{u}}$. Other parameters are $\omega_{\mathrm{opt}}/2\pi = 100$~MHz, $\omega_{\mathrm{u}}/2\pi = 100$~MHz, $L = L_{\mathrm{in}} = 0.01$, $L_{\mathrm{tl}} = L_{\mathrm{out}} = 0.1$.}
\label{fig:Qmin_vs_xb_and_Rb}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{fig_paper_12vars_and_5vars_3D_Qmin_vs_xb_and_wb.pdf}
\caption{The optimized degree of squeezing, $\mathcal{Q}^{-}(\omega_{\mathrm{opt}})$, for (a) the CQFC network of two coupled OPOs and (b) the single OPO network, versus the upper limits on the cavity detuning frequency, $\omega_{\mathrm{u}}$, and the scaled pump amplitude, $x_{\mathrm{u}}$. Other parameters are $\omega_{\mathrm{opt}}/2\pi = 100$~MHz, $T_{\mathrm{u}} = 0.9$, $L = L_{\mathrm{in}} = 0.01$, $L_{\mathrm{tl}} = L_{\mathrm{out}} = 0.1$.}
\label{fig:Qmin_vs_xb_and_wb}
\end{figure}
Figure~\ref{fig:Qmin_vs_xb_and_wb} also shows that, for both networks, the maximum degree of squeezing is independent of the upper limit $\omega_{\mathrm{u}}$ on the cavity detuning frequency; furthermore, we found that in most cases the maximum degree of squeezing is actually achieved with zero detuning. In all results shown below, optimizations used the upper limit value $\omega_{\mathrm{u}}/2\pi = 100$~MHz.
We also investigate the dependence of the maximum degree of squeezing, $\mathcal{Q}^{-}(\omega_{\mathrm{opt}})$, on the sideband frequency $\omega_{\mathrm{opt}}$ at which it is optimized. This dependence is shown in Fig.~\ref{fig:Qmin_vs_fopt_1}, for both networks, for different values of transmission line losses and pump amplitude bound. We observe that the CQFC network of two coupled OPOs not only generates stronger squeezing than the single OPO network, but that the degradation of squeezing associated with the increase of $\omega_{\mathrm{opt}}$ is substantially slower in the former than in the latter. The capability of the CQFC network to moderate the degradation of squeezing at higher values of $\omega_{\mathrm{opt}}$ is associated with a rather abrupt change in the regime of network operation, which is manifested by a rapid change in the slope of the curves in subplots (a) and (c) of Fig.~\ref{fig:Qmin_vs_fopt_1}.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{fig_paper_12vars_and_5vars_Qmin_vs_fopt.pdf}
\caption{The optimized degree of squeezing, $\mathcal{Q}^{-}(\omega_{\mathrm{opt}})$, versus $\omega_{\mathrm{opt}}$, for the CQFC network of two coupled OPOs (subplots (a) and (c)) and the single OPO network (subplots (b) and (d)). The transmission line losses are $L_{\mathrm{tl}} = L_{\mathrm{out}} = 0.01$ in subplots (a) and (b), and $L_{\mathrm{tl}} = L_{\mathrm{out}} = 0.1$ in subplots (c) and (d). Each subplot shows four curves corresponding to different values of $x_{\mathrm{u}}$ ($x_{\mathrm{u}} = \{0.1, 0.2, 0.3, 0.4\}$), as indicated in the legend. Other parameters are $T_{\mathrm{u}} = 0.9$, $L = L_{\mathrm{in}} = 0.01$.}
\label{fig:Qmin_vs_fopt_1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{fig_paper_12vars_Qmin_vs_fopt.pdf}
\caption{The optimized degree of squeezing, $\mathcal{Q}^{-}(\omega_{\mathrm{opt}})$, versus $\omega_{\mathrm{opt}}$, for the CQFC network of two coupled OPOs. The values of $x_{\mathrm{u}}$ are: (a) $x_{\mathrm{u}} = 0.1$, (b) $x_{\mathrm{u}} = 0.2$, (c) $x_{\mathrm{u}} = 0.3$, and (d) $x_{\mathrm{u}} = 0.4$. Each subplot shows six curves corresponding to different values of transmission line losses: $L_{\mathrm{out}} = \{0.01, 0.05, 0.1, 0.15, 0.2, 0.25\}$, as indicated in the legend. Other parameters are $T_{\mathrm{u}} = 0.9$, $L_{\mathrm{in}} = 0.01$.}
\label{fig:Qmin_vs_fopt_2}
\end{figure}
To explore further the emergence of this new operation regime, we focus on the CQFC network of two coupled OPOs, with Fig.~\ref{fig:Qmin_vs_fopt_2} showing the dependence of the maximum degree of squeezing on $\omega_{\mathrm{opt}}$ for more values of transmission line losses. We observe that the value of $\omega_{\mathrm{opt}}$ at which the operation regime switches increases with both $x_{\mathrm{u}}$ and $L_{\mathrm{out}}$. The difference between the curve slopes in the low-$\omega_{\mathrm{opt}}$ and high-$\omega_{\mathrm{opt}}$ regimes decreases as $L_{\mathrm{out}}$ increases.
\begin{figure*}[htbp]
\centering
\includegraphics[width=1.7\columnwidth]{fig_paper_12vars_T_vs_fopt.pdf}
\caption{The optimal values of power transmittances of cavity mirrors, $T_{\mathrm{p} 1}$ (subplots (a), (b), (c)), $T_{\mathrm{p} 2}$ (subplots (d), (e), (f)), $T_{\mathrm{c} 1}$ (subplots (g), (h), (i)), and $T_{\mathrm{c} 2}$ (subplots (j), (k), (l)), versus $\omega_{\mathrm{opt}}$, for the CQFC network of two coupled OPOs. The transmission line losses are $L_{\mathrm{out}} = 0.01$ (subplots (a), (d), (g), (j)), $L_{\mathrm{out}} = 0.1$ (subplots (b), (e), (h), (k)), and $L_{\mathrm{out}} = 0.2$ (subplots (c), (f), (i), (l)). Each subplot shows four curves corresponding to different values of $x_{\mathrm{u}}$ ($x_{\mathrm{u}} = \{0.1, 0.2, 0.3, 0.4\}$), as indicated in the legend. Other parameters are $T_{\mathrm{u}} = 0.9$, $L_{\mathrm{in}} = 0.01$.}
\label{fig:T_vs_fopt}
\end{figure*}
\begin{table}[b]
\caption{\label{tab:f_crit}The sideband frequency $\omega_{\mathrm{opt}}^{\star}/2\pi$~(in MHz), at which the high-$\omega_{\mathrm{opt}}$ regime commences, for the CQFC network of two coupled OPOs with $T_{\mathrm{u}} = 0.9$, $L_{\mathrm{in}} = 0.01$, and various values of $L_{\mathrm{out}}$ and $x_{\mathrm{u}}$. The accuracy of the reported values is limited by the sampling interval of $2$~MHz.}
\begin{ruledtabular}
\begin{tabular}{lrrrrrrr}
{} & \multicolumn{7}{c}{$L_{\mathrm{out}}$}\\
$x_{\mathrm{u}}$ & 0.01 & 0.05 & 0.10 & 0.15 & 0.20 & 0.25 & 0.30 \\ \hline
0.1 & 8 & 16 & 22 & 28 & 34 & 42 & 48 \\
0.2 & 10 & 18 & 26 & 32 & 40 & 48 & 56\\
0.3 & 14 & 24 & 30 & 38 & 46 & 56 & 66\\
0.4 & 20 & 30 & 36 & 44 & 54 & 68 & 90\\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[b]
\centering
\includegraphics[width=1.0\columnwidth]{fig_paper_12vars_and_5vars_Pp_vs_fopt.pdf}
\caption{The optimal values of the pump power for the plant OPO in the CQFC network of two coupled OPOs (subplots (a) and (c)) and for the single OPO (subplots (b) and (d)), versus $\omega_{\mathrm{opt}}$. The transmission line losses are $L_{\mathrm{tl}} = L_{\mathrm{out}} = 0.01$ in subplots (a) and (b), and $L_{\mathrm{tl}} = L_{\mathrm{out}} = 0.1$ in subplots (c) and (d). Each subplot shows four curves corresponding to different values of $x_{\mathrm{u}}$ ($x_{\mathrm{u}} = \{0.1, 0.2, 0.3, 0.4\}$), as indicated in the legend. Other parameters are $T_{\mathrm{u}} = 0.9$, $L = L_{\mathrm{in}} = 0.01$.}
\label{fig:Pp_vs_fopt}
\end{figure}
\begin{figure*}[htbp]
\centering
\includegraphics[width=1.7\columnwidth]{fig_paper_12vars_and_5vars_Qmin_vs_f.pdf}
\caption{The squeezing spectrum $\mathcal{Q}^{-}(\omega)$ for the optimal operation of both networks. Each subplot shows four curves corresponding to the optimally operated CQFC network of two coupled OPOs for different values of $\omega_{\mathrm{opt}}$ ($\omega_{\mathrm{opt}}/2\pi = \{5, 25, 50, 100\}$~MHz), along with a curve corresponding to the optimally operated single OPO network for any value of $\omega_{\mathrm{opt}}$, as indicated in the legend. The transmission line losses are $L_{\mathrm{tl}} = L_{\mathrm{out}} = 0.01$ (subplots (a), (b), (c)), $L_{\mathrm{tl}} = L_{\mathrm{out}} = 0.05$ (subplots (d), (e), (f)), and $L_{\mathrm{tl}} = L_{\mathrm{out}} = 0.1$ (subplots (g), (h), (i)). The upper limit on the scaled pump amplitude is $x_{\mathrm{u}} = 0.1$ (subplots (a), (d), (g)), $x_{\mathrm{u}} = 0.2$ (subplots (b), (e), (h)), and $x_{\mathrm{u}} = 0.3$ (subplots (c), (f), (i)). Other parameters are $T_{\mathrm{u}} = 0.9$, $L = L_{\mathrm{in}} = 0.01$.}
\label{fig:Qmin_vs_f}
\end{figure*}
To understand the physical differences between operations of the CQFC network in the low-$\omega_{\mathrm{opt}}$ and high-$\omega_{\mathrm{opt}}$ regimes, we consider the dependence of the optimal values of power transmittances of cavity mirrors, $T_{\mathrm{p} 1}$, $T_{\mathrm{p} 2}$, $T_{\mathrm{c} 1}$, and $T_{\mathrm{c} 2}$, on $\omega_{\mathrm{opt}}$. This dependence is shown in Fig.~\ref{fig:T_vs_fopt} for optimizations with $T_{\mathrm{u}} = 0.9$, $L_{\mathrm{in}} = 0.01$, and various values of $L_{\mathrm{out}}$ and $x_{\mathrm{u}}$. First, we see that the optimal values of $T_{\mathrm{p} 2}$ and $T_{\mathrm{c} 1}$ are constant over the entire range of $\omega_{\mathrm{opt}}$ values; specifically, $T_{\mathrm{p} 2} = 0.9$ is at the upper bound, which corresponds to the maximum flow from the plant cavity to the 1st output field (the one whose squeezing properties are measured), and $T_{\mathrm{c} 1} = 0$ is at the lower bound, which corresponds to the minimum flow from the controller cavity to the 5th output field (the one which is not used for either squeezing measurement or feedback). In contrast to this simple behavior of the optimal values of $T_{\mathrm{p} 2}$ and $T_{\mathrm{c} 1}$, the optimal values of $T_{\mathrm{p} 1}$ and $T_{\mathrm{c} 2}$, which regulate the feedback between the plant and controller OPOs, exhibit a much more intricate dependence on $\omega_{\mathrm{opt}}$. The optimal value of $T_{\mathrm{p} 1}$ and especially that of $T_{\mathrm{c} 2}$ undergo a substantial and rather abrupt change at the critical $\omega_{\mathrm{opt}}$ value at which the network's operation switches between the low-$\omega_{\mathrm{opt}}$ and high-$\omega_{\mathrm{opt}}$ regimes. As $\omega_{\mathrm{opt}}$ increases through the critical point, $T_{\mathrm{p} 1}$ changes from a lower to a higher value, while $T_{\mathrm{c} 2}$ decreases from the upper bound $T_{\mathrm{c} 2} = 0.9$ to a much lower value.
In other words, the low-$\omega_{\mathrm{opt}}$ optimal regime is characterized by \emph{the maximum flow of light from the controller to the plant and a much lower flow in the opposite direction}, while the high-$\omega_{\mathrm{opt}}$ optimal regime is characterized by \emph{roughly similar flows of light in both directions}. These patterns characterizing the regimes of optimal network operation, their dependencies on pump and loss parameters, and the rapid switch between the regimes, are quite non-intuitive, and finding them would be rather unlikely without the use of a stochastic global search that explores vast areas of the fitness landscape.
The sharp change of the optimal value of $T_{\mathrm{c} 2}$ associated with the regime switch makes it easy to identify the sideband frequency $\omega_{\mathrm{opt}}^{\star}$, at which the high-$\omega_{\mathrm{opt}}$ regime commences (the precision of determining the $\omega_{\mathrm{opt}}^{\star}/2\pi$ values is limited by the sampling interval, which is $2$~MHz in our data). The values of $\omega_{\mathrm{opt}}^{\star}/2\pi$ are shown in Table~\ref{tab:f_crit} for $T_{\mathrm{u}} = 0.9$, $L_{\mathrm{in}} = 0.01$, and various values of $L_{\mathrm{out}}$ and $x_{\mathrm{u}}$. We see that $\omega_{\mathrm{opt}}^{\star}$ increases monotonically with both $L_{\mathrm{out}}$ and $x_{\mathrm{u}}$.
We also note that the optimal values of the scaled pump amplitudes, $x_{\mathrm{p}}$ and $x_{\mathrm{c}}$, are almost always at (or very close to) the upper bound $x_{\mathrm{u}}$, i.e., in either regime the optimally operated CQFC network usually uses all the pump power it can get. The maximum use of the pump power is also observed for the optimal operation of the single OPO network. Indeed, as seen in Fig.~\ref{fig:Pp_vs_fopt}, for both networks, the optimal values of the pump power are virtually independent of $\omega_{\mathrm{opt}}$ and losses, while they scale quadratically with $x_{\mathrm{u}}$. Due to the rapid growth of the optimal pump power with $x_{\mathrm{u}}$, only values $x_{\mathrm{u}} \leq 0.3$ should be considered realistic for optimal operation with the typical tabletop experimental setup considered in this paper.
Next, we investigate the squeezing spectrum $\mathcal{Q}^{-}(\omega)$ generated under the optimal operation of either network for various values of $\omega_{\mathrm{opt}}$, $x_{\mathrm{u}}$, and transmission line losses. Figure~\ref{fig:Qmin_vs_f} shows $\mathcal{Q}^{-}(\omega)$ for both networks for various values of $\omega_{\mathrm{opt}}$, $L_{\mathrm{tl}} = L_{\mathrm{out}}$, and $x_{\mathrm{u}}$. We see that the optimally operated single OPO network generates exactly the same Lorentzian squeezing spectrum for any choice of $\omega_{\mathrm{opt}}$. In contrast, the CQFC network of two coupled OPOs is capable of generating diverse squeezing spectra, with the specific spectral shape varying to fit the selected value of $\omega_{\mathrm{opt}}$, and overall generates much stronger squeezing over a major portion of the spectrum (especially at frequencies around $\omega_{\mathrm{opt}}$). Interestingly, the capability of the CQFC network to generate a squeezing spectrum $\mathcal{Q}^{-}(\omega)$ that has its minimum at $\omega = \omega_{\mathrm{opt}}$ is attained only if the selected value of $\omega_{\mathrm{opt}}$ is within the high-$\omega_{\mathrm{opt}}$ regime of optimal network operation, i.e., $\omega_{\mathrm{opt}} \geq \omega_{\mathrm{opt}}^{\star}$ (for a given set of bound and loss values). Conversely, as seen for $\omega_{\mathrm{opt}}/2\pi = 5$~MHz in all subplots of Fig.~\ref{fig:Qmin_vs_f} and for $\omega_{\mathrm{opt}}/2\pi = 25$~MHz in subplots (h) and (i) of Fig.~\ref{fig:Qmin_vs_f}, the squeezing spectrum has its minimum at $\omega = 0$ if $\omega_{\mathrm{opt}}$ is within the low-$\omega_{\mathrm{opt}}$ regime of optimal network operation.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{fig_paper_12vars_3D_Qmin_vs_Lin_and_L3_fopt.pdf}
\caption{The optimized degree of squeezing, $\mathcal{Q}^{-}(\omega_{\mathrm{opt}})$, for the CQFC network of two coupled OPOs, versus (a) $L_{\mathrm{in}}$ and $L_3$ (with $L_1 = L_2 = 0.1$), and (b) $L_{\mathrm{in}}$ and $L_{\mathrm{out}}$. Other parameters are $\omega_{\mathrm{opt}}/2\pi = 100$~MHz, $x_{\mathrm{u}} = 0.2$, $T_{\mathrm{u}} = 0.9$.}
\label{fig:Qmin_vs_Lin_and_L3}
\end{figure}
Finally, we explore further the dependence of the optimized degree of squeezing, $\mathcal{Q}^{-}(\omega_{\mathrm{opt}})$, on the intracavity and transmission line losses for the CQFC network of two coupled OPOs. Figure~\ref{fig:Qmin_vs_Lin_and_L3} shows $\mathcal{Q}^{-}(\omega_{\mathrm{opt}})$ versus (a) $L_{\mathrm{in}}$ and $L_3$ (with $L_1 = L_2 = 0.1$), and (b) $L_{\mathrm{in}}$ and $L_{\mathrm{out}}$. The situation when $L_3 \neq L_1 = L_2$ is practically relevant since $L_3$ includes, in addition to losses in the output transmission line, inefficiencies in the homodyne detector used to measure the squeezing spectrum of the output field. The results shown in Fig.~\ref{fig:Qmin_vs_Lin_and_L3} confirm that any increase in losses is detrimental to squeezing and quantify this relationship.
\section{Correlations between optimal values of phase variables}
\label{sec:corr}
Since the phase parameters play a significant role in tuning the quantum interference that governs the CQFC network's performance, an interesting question is whether their optimal values are correlated. The optimal values of a parameter can be cast as a vector, each element of which corresponds to a distinct value of $\omega_{\mathrm{opt}}$, and correlations can be computed between pairs of such vectors. Specifically, we computed the Pearson correlation coefficient for all six pairs of the four phase variables ($\theta_{\mathrm{p}}$, $\theta_{\mathrm{c}}$, $\phi_1$, $\phi_2$), and found that substantial correlations exist only between $\phi_1$ and $\phi_2$. Table~\ref{tab:corr_1} shows the values of the Pearson correlation coefficient $r(\phi_1, \phi_2)$, computed for $T_{\mathrm{u}} = 0.9$, $L_{\mathrm{in}} = 0.01$, and various values of $L_{\mathrm{out}}$ and $x_{\mathrm{u}}$.
\begin{table}[htbp]
\caption{\label{tab:corr_1}The Pearson correlation coefficient $r(\phi_1, \phi_2)$, for the CQFC network of two coupled OPOs with $T_{\mathrm{u}} = 0.9$, $L_{\mathrm{in}} = 0.01$, and various values of $L_{\mathrm{out}}$ and $x_{\mathrm{u}}$.}
\begin{ruledtabular}
\begin{tabular}{lrrrrrr}
{} & \multicolumn{6}{c}{$L_{\mathrm{out}}$}\\
$x_{\mathrm{u}}$ & 0.01 & 0.05 & 0.10 & 0.15 & 0.20 & 0.25 \\ \hline
0.1 & 0.575 & 0.438 & 0.436 & 0.354 & 0.275 & 0.223 \\
0.2 & 0.381 & 0.474 & 0.471 & 0.244 & 0.169 & -0.041 \\
0.3 & 0.588 & 0.322 & 0.316 & 0.165 & 0.132 & -0.121 \\
0.4 & 0.423 & 0.373 & 0.189 & 0.172 & -0.057 & -0.002 \\
\end{tabular}
\end{ruledtabular}
\end{table}
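As a minimal illustration of the correlation analysis described above, the Pearson coefficient between two vectors of optimal phase values can be computed as follows (the sample vectors are hypothetical, not data from the tables):

```python
import math

def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical vectors of optimal phi_1 and phi_2 values,
# one entry per considered value of omega_opt.
phi1 = [0.10, 0.35, 0.52, 0.71, 0.90]
phi2 = [0.12, 0.30, 0.55, 0.68, 0.95]
r = pearson_r(phi1, phi2)
```

The same routine applied to the restricted vectors $\phi'_1$, $\phi'_2$ (or to their sines and cosines) yields the values discussed in the text.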
The correlation in Table~\ref{tab:corr_1} generally decreases as $L_{\mathrm{out}}$ and $x_{\mathrm{u}}$ increase. This trend can be compared to the one observed in Table~\ref{tab:f_crit} where the sideband frequency $\omega_{\mathrm{opt}}^{\star}$ at which the high-$\omega_{\mathrm{opt}}$ regime commences increases as $L_{\mathrm{out}}$ and $x_{\mathrm{u}}$ increase. Since the vectors $\phi_1 (\omega_{\mathrm{opt}})$ and $\phi_2 (\omega_{\mathrm{opt}})$ contain elements corresponding to both operation regimes, the trend observed in Table~\ref{tab:corr_1} implies that the correlation $r(\phi_1, \phi_2)$ generally decreases as the number of vector components corresponding to the high-$\omega_{\mathrm{opt}}$ regime decreases. A plausible explanation of this behavior is that the correlation between the two phase variables is higher in the high-$\omega_{\mathrm{opt}}$ regime. To test this hypothesis, we computed the Pearson correlation coefficient $r(\phi'_1, \phi'_2)$ for the pair of vectors $\phi'_1 = \phi_1 (\omega_{\mathrm{opt}} \geq \omega_{\mathrm{opt}}^{\star})$ and $\phi'_2 = \phi_2 (\omega_{\mathrm{opt}} \geq \omega_{\mathrm{opt}}^{\star})$ that include only elements corresponding to the high-$\omega_{\mathrm{opt}}$ regime. The values of $r(\phi'_1, \phi'_2)$ are shown in Table~\ref{tab:corr_2} for $T_{\mathrm{u}} = 0.9$, $L_{\mathrm{in}} = 0.01$, and various values of $L_{\mathrm{out}}$ and $x_{\mathrm{u}}$. The correlations in Table~\ref{tab:corr_2} are consistently larger than $0.5$, and, furthermore, we find that $r(\sin\phi'_1, \sin\phi'_2) = 1.0$ and $r(\cos\phi'_1, \cos\phi'_2) = -1.0$ (up to numerical precision) for all considered values of $L_{\mathrm{out}}$ and $x_{\mathrm{u}}$. These findings indicate a significant degree of concerted action in how the CQFC network of two coupled OPOs operates in the high-$\omega_{\mathrm{opt}}$ regime.
\begin{table}[htbp]
\caption{\label{tab:corr_2}The Pearson correlation coefficient $r(\phi'_1, \phi'_2)$, for the CQFC network of two coupled OPOs with $T_{\mathrm{u}} = 0.9$, $L_{\mathrm{in}} = 0.01$, and various values of $L_{\mathrm{out}}$ and $x_{\mathrm{u}}$.}
\begin{ruledtabular}
\begin{tabular}{lrrrrrr}
{} & \multicolumn{6}{c}{$L_{\mathrm{out}}$}\\
$x_{\mathrm{u}}$ & 0.01 & 0.05 & 0.10 & 0.15 & 0.20 & 0.25 \\ \hline
0.1 & 0.620 & 0.546 & 0.592 & 0.629 & 0.532 & 0.599 \\
0.2 & 0.610 & 0.604 & 0.597 & 0.578 & 0.650 & 0.679 \\
0.3 & 0.649 & 0.671 & 0.559 & 0.644 & 0.602 & 0.606 \\
0.4 & 0.555 & 0.646 & 0.540 & 0.687 & 0.604 & 0.582 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Robustness of optimal solutions}
\label{sec:robust}
Any practical implementation of a quantum optical network inevitably involves imprecisions and imperfections, which may affect the desired performance. This issue is of special importance in a CQFC network, which relies on precise quantum interference between the pump and controller fields to manipulate the properties of the output field (see, for example, the superposition of $a_{\mathrm{p}}$ and $a_{\mathrm{c}}$ in the first element of the $\mathbf{L}$ vector in Eq.~\eqref{eq:L-2OPOs}). This interference depends on the values of the phase variables, and a key question is how robust an optimal solution is to small variations in these values. To analyze this robustness, we computed the Hessian of the objective function $J$ with respect to the phase variables, for a variety of optimal sets of network parameters.
For the single OPO network, $J$ depends on one phase variable $\theta_{\xi}$, and the Hessian $\mathsf{H}$ has one element $\partial^2 J/\partial \theta_{\xi}^2$. $\mathsf{H}$ was computed for 3500 optimal solutions (all combinations of $\omega_{\mathrm{opt}}/2\pi = \{ 2, 4, \ldots, 100 \}$~MHz, $x_{\mathrm{u}} = \{0.1, 0.2, \ldots, 0.5\}$, $T_{\mathrm{u}} = \{0.5, 0.9\}$, and $L_{\mathrm{tl}} = \{0.01, 0.05, 0.1, \ldots, 0.3\}$, with $L = 0.01$). The numerical analysis shows that the Hessian is zero (up to numerical precision) for all of these optimal solutions. Therefore, small fluctuations in the value of the pump phase $\theta_{\xi}$ should have no effect on the optimized degree of squeezing.
For the CQFC network of two coupled OPOs, $J$ depends on four phase variables ($\theta_{\mathrm{p}}$, $\theta_{\mathrm{c}}$, $\phi_1$, $\phi_2$), and the Hessian $\mathsf{H}$ is a $4 \times 4$ matrix of second-order derivatives. We computed the eigenvalues $\{h_1, \ldots, h_4\}$ and eigenvectors $\{\mathbf{e}_1, \ldots, \mathbf{e}_4\}$ of the Hessian $\mathsf{H}$ for 3500 optimal solutions (all combinations of $\omega_{\mathrm{opt}}/2\pi = \{ 2, 4, \ldots, 100 \}$~MHz, $x_{\mathrm{u}} = \{0.1, 0.2, \ldots, 0.5\}$, $T_{\mathrm{u}} = \{0.5, 0.9\}$, and $L_{\mathrm{out}} = \{0.01, 0.05, 0.1, \ldots, 0.3\}$, with $L_{\mathrm{in}} = 0.01$). The numerical analysis shows that two of the Hessian eigenvalues ($h_3$ and $h_4$) are zero (up to numerical precision) for all of these optimal solutions. Therefore, robustness to small phase variations is determined by two nonzero Hessian eigenvalues ($h_1$ and $h_2$). Figures~\ref{fig:HE_vs_fopt_and_xb}--\ref{fig:HE_vs_xb_and_Lout} show these nonzero Hessian eigenvalues as functions of $\omega_{\mathrm{opt}}$ and $x_{\mathrm{u}}$ (Fig.~\ref{fig:HE_vs_fopt_and_xb}), $\omega_{\mathrm{opt}}$ and $L_{\mathrm{out}}$ (Fig.~\ref{fig:HE_vs_fopt_and_Lout}), and $x_{\mathrm{u}}$ and $L_{\mathrm{out}}$ (Fig.~\ref{fig:HE_vs_xb_and_Lout}). We see that $h_1$ is typically much larger than $h_2$, and hence the magnitude of $h_1$ is the main factor determining the robustness properties of the optimal solutions.
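The numerical procedure just described (a finite-difference Hessian of a scalar objective with respect to the four phase variables, followed by eigendecomposition) can be sketched as follows; the quadratic test objective here is a placeholder standing in for $J$, not the network model itself:

```python
import numpy as np

def hessian_fd(f, x0, h=1e-5):
    # Central finite-difference Hessian of a scalar function f at x0.
    n = len(x0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            x = np.array(x0, dtype=float); x[i] += h; x[j] += h; fpp = f(x)
            x = np.array(x0, dtype=float); x[i] += h; x[j] -= h; fpm = f(x)
            x = np.array(x0, dtype=float); x[i] -= h; x[j] += h; fmp = f(x)
            x = np.array(x0, dtype=float); x[i] -= h; x[j] -= h; fmm = f(x)
            H[i, j] = (fpp - fpm - fmp + fmm) / (4 * h * h)
    return H

# Placeholder objective in the four phase variables
# (theta_p, theta_c, phi_1, phi_2); it depends only on phi_1 and phi_2,
# mimicking the two vanishing Hessian eigenvalues found in the text.
def J(phase):
    return 2.0 * phase[2] ** 2 + 0.5 * phase[3] ** 2

H = hessian_fd(J, [0.0, 0.0, 0.0, 0.0])
eigvals, eigvecs = np.linalg.eigh(H)  # two eigenvalues vanish, two do not
```

For the actual network, $J$ would be evaluated from the squeezing spectrum at each perturbed set of phases, but the structure of the computation is the same.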
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{fig_paper_12vars_3D_Hessian-eigenvalues_vs_fopt_and_xb.pdf}
\caption{The first (a) and second (b) Hessian eigenvalues for the CQFC network of two coupled OPOs, versus $\omega_{\mathrm{opt}}$ and $x_{\mathrm{u}}$. Other parameters are $L_{\mathrm{out}} = 0.1$, $T_{\mathrm{u}} = 0.9$, $L_{\mathrm{in}} = 0.01$.}
\label{fig:HE_vs_fopt_and_xb}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{fig_paper_12vars_3D_Hessian-eigenvalues_vs_fopt_and_Lout.pdf}
\caption{The first (a) and second (b) Hessian eigenvalues for the CQFC network of two coupled OPOs, versus $\omega_{\mathrm{opt}}$ and $L_{\mathrm{out}}$. Other parameters are $x_{\mathrm{u}} = 0.2$, $T_{\mathrm{u}} = 0.9$, $L_{\mathrm{in}} = 0.01$.}
\label{fig:HE_vs_fopt_and_Lout}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{fig_paper_12vars_3D_Hessian-eigenvalues_vs_xb_and_Lout.pdf}
\caption{The first (a) and second (b) Hessian eigenvalues for the CQFC network of two coupled OPOs, versus $x_{\mathrm{u}}$ and $L_{\mathrm{out}}$. Other parameters are $\omega_{\mathrm{opt}}/2\pi = 100$~MHz, $T_{\mathrm{u}} = 0.9$, $L_{\mathrm{in}} = 0.01$.}
\label{fig:HE_vs_xb_and_Lout}
\end{figure}
The dependence of $h_1$ on $\omega_{\mathrm{opt}}$, seen in Figs.~\ref{fig:HE_vs_fopt_and_xb}~and~\ref{fig:HE_vs_fopt_and_Lout}, demonstrates a significant difference in robustness properties between the low-$\omega_{\mathrm{opt}}$ and high-$\omega_{\mathrm{opt}}$ regimes. The low-$\omega_{\mathrm{opt}}$ regime is intrinsically robust for a broad range of parameter values. In the high-$\omega_{\mathrm{opt}}$ regime, a reasonable degree of robustness is achieved for $x_{\mathrm{u}} \geq 0.2$ (i.e., for pump powers above 4~W for the OPO parameters considered here). Larger losses in transmission lines ($L_{\mathrm{out}} \geq 0.1$) also enhance robustness.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{fig_paper_12vars_Hessian_eigenvector_corrected_vs_fopt.pdf}
\caption{Components of the first Hessian eigenvector for the CQFC network of two coupled OPOs, versus $\omega_{\mathrm{opt}}$. The four curves show components corresponding to the phase variables ($\theta_{\mathrm{p}}$, $\theta_{\mathrm{c}}$, $\phi_1$, $\phi_2$), as indicated in the legend. The parameters are $x_{\mathrm{u}} = 0.2$, $T_{\mathrm{u}} = 0.9$, $L_{\mathrm{in}} = 0.01$, $L_{\mathrm{out}} = 0.1$.}
\label{fig:HEV_vs_wopt}
\end{figure}
The four components of the Hessian eigenvector $\mathbf{e}_1$ (which corresponds to the largest eigenvalue $h_1$) are shown in Fig.~\ref{fig:HEV_vs_wopt} versus $\omega_{\mathrm{opt}}$. They also exhibit an abrupt change associated with the switch of the optimal operation regime at $\omega_{\mathrm{opt}}^{\star}$. In the low-$\omega_{\mathrm{opt}}$ regime, the eigenvector component corresponding to $\phi_2$ has the largest value and the rest of the components have smaller absolute values, but none is negligible. In the high-$\omega_{\mathrm{opt}}$ regime, the components corresponding to $\phi_1$ and $\phi_2$ have similar values, while the components corresponding to $\theta_{\mathrm{p}}$ and $\theta_{\mathrm{c}}$ are close to zero. These results are consistent with the findings that the low-$\omega_{\mathrm{opt}}$ regime is characterized by the maximum flow of light passing through the phase shifter P2 (from the controller to the plant), while the high-$\omega_{\mathrm{opt}}$ regime is characterized by roughly similar flows of light passing through the phase shifters P1 and P2 (in both directions).
The decrease of the optimized degree of squeezing due to small variations of phase parameters can be quantified using the computed Hessian eigenvalues or, alternatively, via direct Monte Carlo averaging over a random distribution of phase variable values. Figure~\ref{fig:Qmin_vs_dphase} shows the optimized degree of squeezing, $\mathcal{Q}^{-}(\omega_{\mathrm{opt}})$, for the CQFC network of two coupled OPOs (with $\omega_{\mathrm{opt}}/2\pi = 100$~MHz, $x_{\mathrm{u}} = 0.2$, $T_{\mathrm{u}} = 0.9$, and various values of $L_{\mathrm{out}}$), versus the standard deviation of phase uncertainty, $\sigma_{\mathrm{phase}}$ (for simplicity, we assume a normal distribution with zero mean and the same value of $\sigma_{\mathrm{phase}}$ for uncertainty in each of the four phase variables). We see a good agreement between the Hessian-based and Monte Carlo computations for $\sigma_{\mathrm{phase}} \leq 0.1$ (and even for $\sigma_{\mathrm{phase}} \leq 0.2$ for $L_{\mathrm{out}} \geq 0.1$). We also see that the deterioration of squeezing induced by phase variations is quite tolerable for $\sigma_{\mathrm{phase}} \leq 0.1$ (especially, for $L_{\mathrm{out}} \geq 0.1$). Note that our squeezing optimization procedure does not explicitly include a robustness requirement, and hence the observed high level of robustness might be surprising, but it is likely related to the natural tendency of stochastic optimization algorithms to eliminate solutions that are very sensitive to small parameter variations.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{fig_paper_12vars_Qmin_vs_dphase.pdf}
\caption{The optimized degree of squeezing, $\mathcal{Q}^{-}(\omega_{\mathrm{opt}})$, for the CQFC network of two coupled OPOs, versus the standard deviation of phase uncertainty, $\sigma_{\mathrm{phase}}$. The four curves correspond to different values of $L_{\mathrm{out}}$ ($L_{\mathrm{out}} = \{0.05, 0.10, 0.15, 0.20\}$), as indicated in the legend. Other parameters are $\omega_{\mathrm{opt}}/2\pi = 100$~MHz, $x_{\mathrm{u}} = 0.2$, $T_{\mathrm{u}} = 0.9$, $L_{\mathrm{in}} = 0.01$. For each value of $L_{\mathrm{out}}$, the plot shows the results computed using the Hessian eigenvalues (lines) along with the data computed via Monte Carlo averaging over a random distribution of phase values (circles).}
\label{fig:Qmin_vs_dphase}
\end{figure}
\section{Squeezing bandwidth optimization}
\label{sec:bw}
In CV-QKD with squeezed states, the secure key rate is proportional to the bandwidth of squeezing. Therefore, we also explored the capability of the CQFC network of two coupled OPOs to generate output states with high squeezing bandwidth, by optimizing the average degree of squeezing over a frequency interval $[0, \omega_{\mathrm{B}}]$, for various values of $\omega_{\mathrm{B}}$. Specifically, the objective function for these optimizations is
\begin{equation}
\label{eq:Pav}
J_{\mathrm{B}} = \overline{\mathcal{P}^{-}}(\omega_{\mathrm{B}})
\equiv \langle \mathcal{P}^{-}(\omega) \rangle
= \frac{1}{N_{\mathrm{B}}+1} \sum_{k = 0}^{N_{\mathrm{B}}} \mathcal{P}^{-}(\omega_k) .
\end{equation}
Here, $N_{\mathrm{B}} = \omega_{\mathrm{B}} / h_{\mathrm{B}}$ (i.e., $N_{\mathrm{B}} + 1$ is the number of sampling points), $\omega_k = k h_{\mathrm{B}}$, and $h_{\mathrm{B}}$ is the sampling interval. Except for the different choice of the objective function, the rest of the optimization procedure is the same as that described in Sec.~\ref{sec:optim}. In optimization runs that minimized $J_{\mathrm{B}}$, we considered four bandwidth values $\omega_{\mathrm{B}}/2 \pi = \{ 25, 50, 75, 100 \}$~MHz and used the fixed sampling interval $h_{\mathrm{B}}/2 \pi = 1$~MHz.
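The discretized average defining $J_{\mathrm{B}}$ can be written directly; the Lorentzian-like spectrum below is a hypothetical stand-in for $\mathcal{P}^{-}(\omega)$, used only to exercise the formula:

```python
def average_squeezing(P, omega_B, h_B):
    # Discretized average of P over [0, omega_B] with sampling interval h_B,
    # following J_B = (1/(N_B+1)) * sum_{k=0}^{N_B} P(k * h_B).
    N_B = int(round(omega_B / h_B))
    return sum(P(k * h_B) for k in range(N_B + 1)) / (N_B + 1)

# Hypothetical spectrum in linear power units (< 1 means squeezing),
# with frequencies in MHz to match h_B/2pi = 1 MHz.
def P_example(omega, gamma=50.0):
    return 1.0 - 0.8 / (1.0 + (omega / gamma) ** 2)

J_B = average_squeezing(P_example, omega_B=100.0, h_B=1.0)
```

In the optimization runs, $P$ would be the spectrum computed from the network model at the current parameter values.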
For illustration purposes, we use a logarithmic measure of average squeezing,
\begin{equation}
\label{eq:Qav}
\overline{\mathcal{Q}^{-}}(\omega_{\mathrm{B}}) = 10 \log_{10} \overline{\mathcal{P}^{-}}(\omega_{\mathrm{B}}),
\end{equation}
however, note that $\overline{\mathcal{Q}^{-}}(\omega_{\mathrm{B}}) \neq \langle \mathcal{Q}^{-}(\omega) \rangle$. Table~\ref{tab:bw} shows the best values of $\overline{\mathcal{Q}^{-}}(\omega_{\mathrm{B}})$ for $\omega_{\mathrm{B}}/2 \pi = 100$~MHz, for both the CQFC network of two coupled OPOs and the single OPO network, obtained in optimizations with $T_{\mathrm{u}} = 0.9$, $L = L_{\mathrm{in}} = 0.01$, and various values of $L_{\mathrm{tl}} = L_{\mathrm{out}}$ and $x_{\mathrm{u}}$. We see that the CQFC network of two coupled OPOs significantly outperforms the single OPO network in terms of the average squeezing generated over the $100$~MHz bandwidth, especially for lower values of transmission line losses.
\begin{table}[htbp]
\caption{\label{tab:bw}The best values of $\overline{\mathcal{Q}^{-}}(\omega_{\mathrm{B}})$ for $\omega_{\mathrm{B}}/2 \pi = 100$~MHz, for the CQFC network of two coupled OPOs and the single OPO network, obtained in optimizations with $T_{\mathrm{u}} = 0.9$, $L = L_{\mathrm{in}} = 0.01$, and various values of $L_{\mathrm{tl}} = L_{\mathrm{out}}$ and $x_{\mathrm{u}}$.}
\begin{ruledtabular}
\begin{tabular}{lrrrrrr}
\multicolumn{7}{c}{CQFC network of two coupled OPOs}\\
{} & \multicolumn{6}{c}{$L_{\mathrm{out}}$}\\
$x_{\mathrm{u}}$ & 0.01 & 0.05 & 0.10 & 0.15 & 0.20 & 0.25 \\ \hline
0.1 & -3.382 & -2.688 & -2.408 & -2.157 & -1.930 & -1.724 \\
0.2 & -5.773 & -4.937 & -4.339 & -3.829 & -3.385 & -2.994 \\
0.3 & -7.850 & -6.857 & -5.886 & -5.109 & -4.463 & -3.913 \\
0.4 & -9.994 & -8.441 & -7.073 & -6.049 & -5.234 & -4.559 \\ \hline
\multicolumn{7}{c}{Single OPO}\\
{} & \multicolumn{6}{c}{$L_{\mathrm{tl}}$}\\
$x_{\mathrm{u}}$ & 0.01 & 0.05 & 0.10 & 0.15 & 0.20 & 0.25 \\ \hline
0.1 & -1.428 & -1.361 & -1.277 & -1.196 & -1.115 & -1.037 \\
0.2 & -2.843 & -2.684 & -2.493 & -2.310 & -2.134 & -1.965 \\
0.3 & -4.248 & -3.966 & -3.638 & -3.332 & -3.047 & -2.779 \\
0.4 & -5.637 & -5.193 & -4.696 & -4.249 & -3.845 & -3.475 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure*}[htbp]
\centering
\includegraphics[width=1.82\columnwidth]{fig_paper_12vars_and_5vars_bandwidth_Qmin_vs_f.pdf}
\caption{The squeezing spectrum $\mathcal{Q}^{-}(\omega)$ for the optimal operation of both networks under the minimization of $J_{\mathrm{B}} = \overline{\mathcal{P}^{-}}(\omega_{\mathrm{B}})$ of Eq.~\eqref{eq:Pav}. Each subplot shows four curves corresponding to the optimally operated CQFC network of two coupled OPOs for different values of $\omega_{\mathrm{B}}$ ($\omega_{\mathrm{B}}/2\pi = \{25, 50, 75, 100\}$~MHz), along with a curve corresponding to the optimally operated single OPO network for any value of $\omega_{\mathrm{B}}$, as indicated in the legend. The transmission line losses are $L_{\mathrm{tl}} = L_{\mathrm{out}} = 0.01$ (subplots (a), (b), (c)), $L_{\mathrm{tl}} = L_{\mathrm{out}} = 0.05$ (subplots (d), (e), (f)), and $L_{\mathrm{tl}} = L_{\mathrm{out}} = 0.1$ (subplots (g), (h), (i)). The upper limit on the scaled pump amplitude is $x_{\mathrm{u}} = 0.1$ (subplots (a), (d), (g)), $x_{\mathrm{u}} = 0.2$ (subplots (b), (e), (h)), and $x_{\mathrm{u}} = 0.3$ (subplots (c), (f), (i)). Other parameters are $T_{\mathrm{u}} = 0.9$, $L = L_{\mathrm{in}} = 0.01$.}
\label{fig:Qmin_vs_f_bw}
\end{figure*}
It is also interesting to examine the squeezing spectrum $\mathcal{Q}^{-}(\omega)$ generated under the optimal operation of either network when we minimize $J_{\mathrm{B}} = \overline{\mathcal{P}^{-}}(\omega_{\mathrm{B}})$. Figure~\ref{fig:Qmin_vs_f_bw} shows $\mathcal{Q}^{-}(\omega)$ for both networks for various values of $\omega_{\mathrm{B}}$, $L_{\mathrm{tl}} = L_{\mathrm{out}}$, and $x_{\mathrm{u}}$. Similarly to the results shown in Sec.~\ref{sec:results} (cf.~Fig.~\ref{fig:Qmin_vs_f}), we find that the optimally operated single OPO network generates exactly the same Lorentzian squeezing spectrum for any choice of $\omega_{\mathrm{B}}$. In contrast, the CQFC network of two coupled OPOs is capable of adapting the generated squeezing spectrum depending on the selected value of $\omega_{\mathrm{B}}$ and overall produces much higher squeezing bandwidth.
\section{Conclusions}
\label{sec:conclusions}
We modeled the squeezing spectrum of the output field of the CQFC network of two coupled OPOs and used a suite of global optimization methods to examine the limits to which this spectrum can be varied under conditions typical for tabletop experiments. We found that, in contrast to a single OPO, the CQFC network can utilize the interference between the fields in the plant OPO and the controller OPO to significantly modify the squeezing spectrum of the output field in response to the selected optimization objective. In particular, when the objective is to maximize the degree of squeezing at a high-frequency sideband $\omega_{\mathrm{opt}}$, the CQFC network can operate in an optimal regime characterized by a high degree of cooperativity between the plant OPO and the controller OPO, as quantified by the flows of light between them and the correlation between the phase shifts $\phi_1$ and $\phi_2$. In this operation regime, the optimized squeezing spectrum $\mathcal{Q}^{-}(\omega)$ of the CQFC network of two coupled OPOs has the minimum at $\omega = \omega_{\mathrm{opt}}$, while the minimum of the optimized spectrum of the single OPO network is always at zero sideband frequency.
For both types of optimization objectives considered in this work (maximizing the degree of squeezing at a selected sideband frequency and maximizing the average degree of squeezing over a selected bandwidth), the CQFC network of two coupled OPOs significantly outperforms a single OPO in terms of squeezing achieved under similar conditions, even with higher losses in the CQFC network due to additional components and transmission lines. Also, the CQFC network is more effective in terms of converting higher pump power into stronger squeezing. While this superior performance of the CQFC network of two coupled OPOs relies on a phase-sensitive interference between multiple fields, we discovered, perhaps surprisingly, that squeezing generated by the optimally operated CQFC network is rather robust to small variations of phase parameters. This robustness can be attributed to the tendency of global optimization algorithms to avoid solutions that are overly sensitive to small parameter variations, but the fact that such robust network configurations do actually exist is quite remarkable.
Overall, our results strongly indicate that CQFC networks provide a very effective tool for engineering quantum optical systems with new properties and unprecedented levels of performance. This work also demonstrates the usefulness of advanced optimization methods for analyzing and improving the performance of such networks.
\acknowledgments
This work was supported by the Laboratory Directed Research and Development program at Sandia National Laboratories. Sandia is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
\section*{Introduction}
A monad on $\mathbb{P}^n$ or, more generally, on a projective variety $X$, is a complex of three vector bundles
$$0 \rightarrow \hbox{${{\cal A}}$} \xrightarrow{\alpha} \hbox{${\cal B}$} \xrightarrow{\beta} \hbox{${\cal C}$} \rightarrow 0$$
such that $\alpha$ is injective as a map of vector bundles and $\beta$ is surjective.
Monads have been studied by Horrocks, who proved (see \cite{Ho} or \cite{BH}) that every vector bundle on $\mathbb{P}^n$ is the homology of a suitable minimal monad.
Throughout the paper we often use the Horrocks correspondence between a bundle $\hbox{${\cal E}$}$ on $\mathbb P^n$ ($n\geq 3$) and the corresponding minimal monad
$$0 \rightarrow \hbox{${{\cal A}}$} \xrightarrow{\alpha} \hbox{${\cal B}$}
\xrightarrow{\beta} \hbox{${\cal C}$} \rightarrow 0,$$ where $\hbox{${{\cal A}}$}$ and $\hbox{${\cal C}$}$ are sums of line
bundles and $\hbox{${\cal B}$}$ satisfies:
\begin{enumerate}
\item $H^1_*(\hbox{${\cal B}$})=H^{n-1}_*(\hbox{${\cal B}$})=0$
\item $H^i_*(\hbox{${\cal B}$})=H^i_*(\hbox{${\cal E}$})$ \ $\forall i$, $1<i<n-1$,
\end{enumerate}
where $H^i_*(\hbox{${\cal B}$})$ is defined as $\oplus_{k\in \mathbb{Z}} H^i(\mathbb P^n, \hbox{${\cal B}$}(k))$.\\
This correspondence holds also on a projective variety $X$ ($\dim X\geq 3$) if we fix a very ample line bundle $\sO_X(1)$. Indeed, the proof of the result in (\cite{BH}, Proposition $3$) can easily be extended to $X$ (see \cite{Ml}). \\
Rao, Mohan Kumar and Peterson have successfully used this tool to investigate the intermediate cohomology modules of a vector bundle on $\mathbb{P}^n$ and give cohomological splitting conditions (see \cite{KPR1}).\\
Their theorem makes strong use of monads and of Horrocks' splitting criterion, which states the following:\\
Let $\hbox{${\cal E}$}$ be a vector bundle of rank $r$ on $\mathbb P^n$, $n\geq 2$; then $\hbox{${\cal E}$}$ splits if and only if it does not have intermediate cohomology (i.e. $H^1_*(\hbox{${\cal E}$})= \dots =H^{n-1}_*(\hbox{${\cal E}$})=0$).\\
This criterion fails on more general varieties. In fact, there exist non-split vector bundles on $X$ without intermediate cohomology. These bundles are called ACM bundles.\\
Rao, Mohan Kumar and Peterson focused on bundles without inner cohomology (i.e. $H^2_*(\hbox{${\cal E}$})= ...=H^{n-2}_*(\hbox{${\cal E}$})=0$) and showed that these bundles on $\mathbb{P}^n$ $(n>3)$ are split if the rank is small.\\
On a quadric hypersurface $\hbox{${\cal Q}$}_n$ the Horrocks criterion does not work, but there is a theorem that classifies all the ACM bundles (see \cite{Kn}) as direct sums of line bundles and spinor bundles (up to a twist - for generalities about spinor bundles see \cite{Ot2}).\\
In \cite{Ml} we improve Ottaviani's splitting criterion for vector bundles on a quadric hypersurface (see \cite{Ot1} and \cite{Ot3}) and obtain the equivalent of the result by Rao, Mohan Kumar and Peterson. Moreover we give the classification of rank $2$ bundles without inner cohomology on $\hbox{${\cal Q}$}_n$ ($n>3$). It surprisingly agrees exactly with the classification by Ancona, Peternell and Wisniewski of rank $2$ Fano bundles (see \cite{APW}).\\
We proved that for an indecomposable rank $2$ bundle $\hbox{${\cal E}$}$ on $\hbox{${\cal Q}$}_4$ with $H_*^1(\hbox{${\cal E}$})\not=0$ and $H_*^2(\hbox{${\cal E}$})=0$, the only possible minimal monad with $\hbox{${{\cal A}}$}$ and $\hbox{${\cal C}$}$ different from zero is (up to a twist)
$$0 \to \sO \xrightarrow{\alpha'}
\sS'(1)\oplus\sS''(1) \xrightarrow{\beta'} \sO(1) \to 0,$$
where $\sS'$ and $\sS''$ are the two spinor bundles on $\hbox{${\cal Q}$}_4$.\\
The homology is the bundle $\sZ_4$ associated to the disjoint union of a plane in $\Lambda$ and a plane in $\Lambda'$, the two families of planes in $\hbox{${\cal Q}$}_4$ (see \cite{AS}).\\
The kernel $\hbox{${\cal G}$}_4$ and the cokernel $\hbox{${\cal P}$}_4$ (the dual) of this monad are rank $3$ bundles without inner cohomology and we have the two sequences
\begin{equation} 0 \to \hbox{${\cal G}$}_4 \rightarrow
\sS'(1)\oplus\sS''(1) \rightarrow \sO(1) \to 0,
\end{equation}
and
\begin{equation} 0 \to \sO \rightarrow
\sS'(1)\oplus\sS''(1) \rightarrow \hbox{${\cal P}$}_4 \to 0.
\end{equation}
On $\hbox{${\cal Q}$}_5$ there is only one spinor bundle $\sS_5$ and the only possible minimal monad with $\hbox{${{\cal A}}$}$ and $\hbox{${\cal C}$}$ different from zero, for a rank $2$ bundle without inner cohomology, is (up to a twist)
$$0 \to \sO \xrightarrow{\alpha''}
\sS(1) \xrightarrow{\beta''} \sO(1) \to 0.$$ The kernel $\hbox{${\cal G}$}_5$ and the cokernel $\hbox{${\cal P}$}_5$ (the dual) of the monad are rank $3$ bundles without inner cohomology and we have the two sequences
\begin{equation} 0 \to \hbox{${\cal G}$}_5 \rightarrow
\sS(1) \rightarrow \sO(1) \to 0,
\end{equation}
and
\begin{equation} 0 \to \sO \rightarrow
\sS(1) \rightarrow \hbox{${\cal P}$}_5 \to 0.
\end{equation}
The homology of the monad $\sZ_5$ is a Cayley bundle (see \cite{Ot4} for generalities on Cayley bundles).\\
The bundle $\sZ_5$ appears also in \cite{Ta} and \cite{KPR2}.\\
For $n>5$, no non-split bundle of rank $2$ on $\hbox{${\cal Q}$}_n$ exists with\\
$H^2_*(\hbox{${\cal E}$})= ...=H^{n-2}_*(\hbox{${\cal E}$})=0$.\\
The main aim of the present paper is the classification of rank three bundles without inner cohomology on $\hbox{${\cal Q}$}_4$ by studying the associated monads.
We are able to prove that:\\
For a non-split rank $3$ bundle $\hbox{${\cal E}$}$ on $\hbox{${\cal Q}$}_4$ with $H_*^2(\hbox{${\cal E}$})=0$, the only possible minimal monads with $\hbox{${{\cal A}}$}$ or $\hbox{${\cal C}$}$ different from zero are (up to a twist) the sequences $(1)$ and $(2)$ and
\begin{equation} 0 \to \sO \xrightarrow{\alpha}
\sS'(1)\oplus\sS''(1)\oplus \sO(a) \xrightarrow{\beta} \sO(1) \to 0,
\end{equation}
where $a$ is an integer, $\alpha=(\alpha', 0)$ and $\beta=(\beta', 0)$.\\
This means that on $\hbox{${\cal Q}$}_4$ the only non-split rank $3$ bundles without inner cohomology are the following:\\
the ACM bundles $\sS'\oplus\sO(a)$ and $\sS''\oplus\sO(a)$, $\hbox{${\cal G}$}_4$, $\hbox{${\cal P}$}_4$ and $\sZ_4\oplus\sO(a)$.\\
In particular $\hbox{${\cal G}$}_4$ and its dual are the only indecomposable rank $3$ bundles without inner cohomology on $\hbox{${\cal Q}$}_4$.\\
By using monads again we can also understand the behavior of rank three bundles on $\hbox{${\cal Q}$}_5$ and also on $\hbox{${\cal Q}$}_n$ ($n >5$).\\ More precisely we can prove that:\\
For a non-split rank $3$ bundle $\hbox{${\cal E}$}$ on $\hbox{${\cal Q}$}_5$ without inner cohomology, the only possible minimal monads with $\hbox{${{\cal A}}$}$ or $\hbox{${\cal C}$}$ not zero are (up to a twist) the sequences $(3)$ and $(4)$ and
$$ 0 \to \sO \xrightarrow{\alpha}
\sS_5(1)\oplus\sO(a) \xrightarrow{\beta} \sO(1) \to 0.$$
where $a$ is an integer, $\alpha=(\alpha'', 0)$ and $\beta=(\beta'', 0)$.
This means that on $\hbox{${\cal Q}$}_5$ the only non-split rank $3$ bundles without inner cohomology are the following:\\
$\hbox{${\cal G}$}_5$, $\hbox{${\cal P}$}_5$ and $\sZ_5\oplus\sO(a)$.\\
In particular $\hbox{${\cal G}$}_5$ and its dual are the only indecomposable rank $3$ bundles without inner cohomology on $\hbox{${\cal Q}$}_5$.\\
For a non-split rank $3$ bundle $\hbox{${\cal E}$}$ on $\hbox{${\cal Q}$}_6$ without inner cohomology, we have four possible minimal monads:
\begin{equation} 0 \to \sO \rightarrow
\sS'_6(1) \rightarrow \hbox{${\cal P}$}'_6 \to 0,\end{equation}
\begin{equation} 0 \to \sO \rightarrow
\sS''_6(1) \rightarrow \hbox{${\cal P}$}''_6 \to 0,\end{equation}
\begin{equation} 0 \to \hbox{${\cal G}$}'_6 \rightarrow\sS'_6(1) \rightarrow \sO(1) \to 0,\end{equation} and
\begin{equation} 0 \to \hbox{${\cal G}$}''_6 \rightarrow
\sS''_6(1) \rightarrow \sO(1) \to 0.\end{equation}
These sequences appear for instance in \cite{Ot2}, Theorem $3.5$.\\
Therefore on $\hbox{${\cal Q}$}_6$ the only rank $3$ bundles without inner cohomology are the following:\\
$\hbox{${\cal G}$}'_6$, $\hbox{${\cal G}$}''_6$, $\hbox{${\cal P}$}'_6$ and $\hbox{${\cal P}$}''_6$.\\
For $n>6$, no non-split bundles of rank $3$ on $\hbox{${\cal Q}$}_n$ exist with
$H^2_*(\hbox{${\cal E}$})= ...=H^{n-2}_*(\hbox{${\cal E}$})=0$.\\
I would like to thank A. Prabhakar Rao for having introduced me into the topic and Giorgio Ottaviani for his useful comments and suggestions.\\
\section{Monads for Bundles on ACM Varieties}
In this section $X$ denotes a nonsingular subcanonical, irreducible ACM projective variety. By this we mean that $X$ has a very ample line bundle $\sO_X(1)$ such that $\omega_X \cong \sO_X(a)$ for some $a \in \mathbb{Z}$ and the embedding of $X$ by $\sO_X(1)$ is arithmetically Cohen-Macaulay. We will also assume that every line bundle on $X$ has the form $\sO_X(k), k \in \mathbb{Z}$.
\\
If $M$ is a finitely generated graded module over the homogeneous coordinate ring of $X$, we denote by $\beta_{ij}(M)$ and $\beta_i(M)$ the graded Betti numbers and total Betti numbers of $M$. We will mainly use $\beta_{0j}(M)$ and $\beta_0(M)$ which give the number of minimal generators of $M$ in degree $j$ and the total number of minimal generators respectively.\\
We say that a bundle is non-split if it does not split as a direct sum of line bundles.\\
We say that a bundle is indecomposable if it does not split into two direct summands.\\
\begin{definition}We say that a bundle $\hbox{${\cal E}$}$ on $X$ is a bundle without inner cohomology if
$$H^2_*(\hbox{${\cal E}$})= \dots =H^{n-2}_*(\hbox{${\cal E}$})=0,$$
where $n=\dim X$.
\end{definition}
We prove a theorem about monads for rank $r$ bundles:
\begin{theorem}\label{t2} On $X$
of dimension $n$ with $n>3$, any minimal monad $$ 0 \to \hbox{${{\cal A}}$} \xrightarrow{\alpha} \hbox{${\cal B}$}
\xrightarrow{\beta} \hbox{${\cal C}$} \to 0,$$ such that $\hbox{${{\cal A}}$}$ or $\hbox{${\cal C}$}$ are not
zero, for a rank $r$ ($r\geq 2$) bundle with $H^2_*(\hbox{${\cal E}$})=H^{n-2}_*(\hbox{${\cal E}$})=H^2_*(\wedge^2 \hbox{${\cal E}$}) =H^2_*(\wedge^2 \hbox{${\cal E}$}^\vee)=0$, must satisfy the
following conditions:
\begin{enumerate}
\item $H^1_*(\wedge^2\hbox{${\cal B}$})\not=0$, $\beta_0(H^1_*(\wedge^2\hbox{${\cal B}$}))\geq
\beta_0(H^0_*(S_2\hbox{${\cal C}$}))$ and \\
\vskip0.01truecm
$\beta_{0j}(H^1_*(\wedge^2\hbox{${\cal B}$}))\geq
\beta_{0j}(H^0_*(S_2\hbox{${\cal C}$}))$ $\forall j\in {\mathbb Z}$, if $\hbox{${\cal C}$}$ is not zero.
\vskip0.8truecm
\item $H^1_*(\wedge^2\hbox{${\cal B}$}^{\vee})\not=0$, $ \beta_0(H^1_*(\wedge^2\hbox{${\cal B}$}^{\vee}))\geq
\beta_0(H^0_*(S_2\hbox{${{\cal A}}$}^{\vee}))$ and\\
\vskip0.01truecm $ \beta_{0j}(H^1_*(\wedge^2\hbox{${\cal B}$}^{\vee}))\geq
\beta_{0j}(H^0_*(S_2\hbox{${{\cal A}}$}^{\vee}))$ $\forall j\in {\mathbb Z}$, if $\hbox{${{\cal A}}$}$ is not zero.
\vskip0.8truecm
\item $H^2_*(\wedge^2\hbox{${\cal B}$})=H^2_*(\wedge^2\hbox{${\cal B}$}^{\vee}) =0$.
\end{enumerate}
\begin{proof}First of all, since $X$ is ACM, the sheaf $\sO_X$ does not have intermediate
cohomology. Hence the same is true for $\hbox{${{\cal A}}$}$ and $\hbox{${\cal C}$}$ that are free $\sO_X$ sheaves.\\
Let us now assume the existence of a minimal monad with $H^1_*(\wedge^2\hbox{${\cal B}$})=0$ and $\hbox{${\cal C}$}$ not zero
$$ 0 \to \hbox{${{\cal A}}$} \xrightarrow{\alpha} \hbox{${\cal B}$}
\xrightarrow{\beta} \hbox{${\cal C}$} \to 0.$$ Then, if we call $\hbox{${\cal G}$}=\ker\beta$,
from the sequence
$$ 0 \to S_2\hbox{${{\cal A}}$} \to (\hbox{${{\cal A}}$} \otimes
\hbox{${\cal G}$})\to \wedge^2 \hbox{${\cal G}$}
\to \wedge^2\mathcal E \to 0, $$
we have
$$ H^2_*(\wedge^2 \hbox{${\cal G}$}) = H^2_*(\hbox{${{\cal A}}$} \otimes \hbox{${\cal G}$}) = 0,$$ since $H^2_*(\hbox{${\cal B}$})=H^2_*(\hbox{${\cal G}$})=H^2_*(\hbox{${\cal E}$})=0$ and $H^2_*(\wedge^2 \hbox{${\cal E}$}) = 0$.\\
Moreover, from the sequence
$$ 0 \to \wedge^2\hbox{${\cal G}$} \to \wedge^2\hbox{${\cal B}$} \to \hbox{${\cal B}$}\otimes \hbox{${\cal C}$}
\to S_2\hbox{${\cal C}$} \to 0, $$
by passing to the exact
sequence of maps on cohomology groups, since $H^1_*(\wedge^2\hbox{${\cal B}$})=H^2_*(\wedge^2 \hbox{${\cal G}$})=0$ we get
$$ H^0_*(\hbox{${\cal B}$}\otimes \hbox{${\cal C}$})
\to H^0_*(S_2\hbox{${\cal C}$}) \to 0 .$$
Now, if we call $S_X$ the coordinate ring, $H^0_*(S_2\hbox{${\cal C}$})$
is a free $S_X$-module, hence projective; therefore there exists a map $$H^0_*(\hbox{${\cal B}$}\otimes \hbox{${\cal C}$})
\leftarrow H^0_*(S_2\hbox{${\cal C}$})$$ and this means that $$\hbox{${\cal B}$}\otimes \hbox{${\cal C}$}
\to S_2\hbox{${\cal C}$} \to 0 $$ splits.\\
But this map is obtained from $\beta$ as $b\otimes c\mapsto
\beta(b)c$, so if it splits then $\beta$ also splits, and this violates the
minimality of the monad.
We can say something stronger.\\
Recall that if $M \to N \to 0$ is a surjection of finitely generated graded $S_X$-modules, then $\beta_0(M) \geq \beta_0(N)$ and also $\beta_{0j}(M) \geq \beta_{0j}(N)$ for any $j$. Furthermore, if the inequality is strict, it means that a set of minimal generators of $M$ (in degree $j$ in the second case) can be chosen in such a way that one of the generators in the set maps to zero. \\
From the sequence
$$0 \to \wedge^2\hbox{${\cal G}$} \to \wedge^2\hbox{${\cal B}$} \to \hbox{${\cal B}$}\otimes \hbox{${\cal C}$}\xrightarrow{\gamma} S_2\hbox{${\cal C}$} \to 0, $$
since $H^2_*(\wedge^2 \hbox{${\cal G}$}) = 0$, we have a surjective map
$$ H^1_*(\wedge^2\hbox{${\cal B}$})
\to H^1_*(\Gamma) \to 0 $$ where $\Gamma=\ker\gamma$, and then
$$ \beta_0(H^1_*(\wedge^2\hbox{${\cal B}$}))\geq \beta_0(H^1_*(\Gamma)).$$
On the other hand we have the sequence
$$ H^0_*(\hbox{${\cal B}$}\otimes \hbox{${\cal C}$})
\xrightarrow{\gamma} H^0_*(S_2\hbox{${\cal C}$})\rightarrow H^1_*(\Gamma) \to 0; $$ so, if
$$ \beta_0(H^1_*(\wedge^2\hbox{${\cal B}$}))< \beta_0(H^0_*(S_2\hbox{${\cal C}$})),$$ also $$\beta_0(H^1_*(\Gamma))< \beta_0(H^0_*(S_2\hbox{${\cal C}$})),$$ and some of the generators
of $H^0_*(S_2\hbox{${\cal C}$})$ must be in the image of $\gamma$.\\
But $\gamma$ is obtained from $\beta$ as $b\otimes c\mapsto
\beta(b)c$, so some generators of $\hbox{${\cal C}$}$ must also be in the image of $\beta$, and this contradicts the
minimality of the monad.\\
We conclude not only that $H^1_*(\wedge^2\hbox{${\cal B}$})$ has to be non-zero
but also $$ \beta_0(H^1_*(\wedge^2\hbox{${\cal B}$}))\geq \beta_0(H^0_*(S_2\hbox{${\cal C}$})).$$
If we fix the degree $j$, the map $H^0(\hbox{${\cal B}$}\otimes \hbox{${\cal C}$}(j))\rightarrow H^0(S_2\hbox{${\cal C}$}(j))$ is also surjective, and so we see that, $\forall j\in{\mathbb Z}$,
$$ \beta_{0j}(H^1_*(\wedge^2\hbox{${\cal B}$}))\geq \beta_{0j}(H^0_*(S_2\hbox{${\cal C}$})).$$
If $\hbox{${{\cal A}}$}=0$ the monad is simply $$ 0 \to \hbox{${\cal E}$} \xrightarrow{\alpha} \hbox{${\cal B}$}
\xrightarrow{\beta} \hbox{${\cal C}$} \to 0,$$ and, by using the sequence $$ 0 \to \wedge^2\hbox{${\cal E}$} \to \wedge^2\hbox{${\cal B}$} \to \hbox{${\cal B}$}\otimes \hbox{${\cal C}$}\to S_2\hbox{${\cal C}$} \to 0, $$
since $H^2_*(\wedge^2 \hbox{${\cal E}$}) = 0$, we can conclude as before.\\
Let us now assume the existence of a monad with $\hbox{${{\cal A}}$}$ not zero and $H^1_*(\wedge^2\hbox{${\cal B}$}^{\vee})=0$; we use the dual sequences.\\
From $$ 0 \to S_2\hbox{${\cal C}$}^{\vee} \to (\hbox{${\cal C}$}^{\vee} \otimes
\hbox{${\cal B}$}^{\vee})\to \wedge^2 \hbox{${\cal B}$}^{\vee}
\to \wedge^2\hbox{${\cal G}$}^{\vee} \to 0, $$
we have
$ H^1_*(\wedge^2 \hbox{${\cal G}$}^{\vee})\cong H^1_*(\wedge^2 \hbox{${\cal B}$}^{\vee})$.\\
Moreover, from the sequence
$$ 0 \to \wedge^2\hbox{${\cal E}$}^{\vee} \to \wedge^2\hbox{${\cal G}$}^{\vee} \to \hbox{${\cal G}$}^{\vee}\otimes \hbox{${{\cal A}}$}^{\vee}
\to S_2\hbox{${{\cal A}}$}^{\vee} \to 0, $$
by passing to the exact
sequence of maps on cohomology groups, since $H^2_*(\wedge^2\hbox{${\cal E}$}^{\vee})=H^1_*(\wedge^2 \hbox{${\cal G}$}^{\vee})=0$ we get
$$ H^0_*(\hbox{${\cal G}$}^{\vee}\otimes \hbox{${{\cal A}}$}^{\vee})
\to H^0_*(S_2\hbox{${{\cal A}}$}^{\vee}) \to 0 ,$$ and this violates the
minimality of the monad as before.\\
We can also conclude that $$ \beta_0(H^1_*(\wedge^2\hbox{${\cal B}$}^{\vee}))=\beta_0(H^1_*(\wedge^2\hbox{${\cal G}$}^{\vee}))\geq \beta_0(H^0_*(S_2\hbox{${{\cal A}}$}^{\vee})).$$
If we fix the degree $j$, the map $H^0(\hbox{${\cal G}$}^{\vee}\otimes \hbox{${{\cal A}}$}^{\vee}(j))\rightarrow H^0(S_2\hbox{${{\cal A}}$}^{\vee}(j))$ is also surjective, and so we see that, $\forall j\in{\mathbb Z}$,
$$ \beta_{0j}(H^1_*(\wedge^2\hbox{${\cal B}$}^\vee))\geq \beta_{0j}(H^0_*(S_2\hbox{${{\cal A}}$}^\vee)).$$
If $\hbox{${\cal C}$}=0$ the monad is simply $$ 0 \to \hbox{${{\cal A}}$} \xrightarrow{\alpha} \hbox{${\cal B}$}
\xrightarrow{\beta} \hbox{${\cal E}$} \to 0,$$ and, by using the sequence $$ 0 \to \wedge^2\hbox{${\cal E}$}^{\vee} \to \wedge^2\hbox{${\cal B}$}^{\vee} \to \hbox{${\cal B}$}^{\vee}\otimes \hbox{${{\cal A}}$}^{\vee}
\to S_2\hbox{${{\cal A}}$}^{\vee} \to 0, $$
since $H^2_*(\wedge^2 \hbox{${\cal E}$}^\vee) = 0$, we can conclude as before.\\
The third condition
comes from the sequences $$ 0 \to \wedge^2\hbox{${\cal G}$} \to \wedge^2\hbox{${\cal B}$} \to
\hbox{${\cal B}$}\otimes \hbox{${\cal C}$}
\to S_2\hbox{${\cal C}$} \to 0, $$ and $$ 0 \to S_2\hbox{${\cal C}$}^{\vee} \to (\hbox{${\cal C}$}^{\vee} \otimes
\hbox{${\cal B}$}^{\vee})\to \wedge^2 \hbox{${\cal B}$}^{\vee}
\to \wedge^2\hbox{${\cal G}$}^{\vee} \to 0, $$ since $H^2_*(\wedge^2\hbox{${\cal G}$})=H^2_*(
\hbox{${\cal B}$}\otimes \hbox{${\cal C}$})=H^2_*(\hbox{${\cal C}$}^{\vee} \otimes
\hbox{${\cal B}$}^{\vee})=H^2_*(\wedge^2\hbox{${\cal G}$}^{\vee})=0$.
\end{proof}
\end{theorem}
\begin{remark}If $r=2$ we recover (\cite{Ml} $1.6$).
\end{remark}
\begin{remark} If $r=3$ we don't need the hypothesis $H^2_*(\wedge^2 \hbox{${\cal E}$}) =H^2_*(\wedge^2 \hbox{${\cal E}$}^\vee)=0$ because $H^2_*(\wedge^2 \hbox{${\cal E}$}) = H^{n-2}_*(\hbox{${\cal E}$}) = 0.$
\end{remark}
\begin{remark}On ${\mathbb P}^n$ we can say the following:\\
Let $\hbox{${\cal E}$}$ be a bundle without inner cohomology such that $H^2_*(\wedge^2 \hbox{${\cal E}$}) =H^2_*(\wedge^2 \hbox{${\cal E}$}^\vee)=0$, then $\hbox{${\cal E}$}$ splits.\\
In fact, if $\hbox{${\cal E}$}$ does not split, the associated minimal monad has $\hbox{${{\cal A}}$}$ or $\hbox{${\cal C}$}$ different from zero. Since $H^2_*(\hbox{${\cal E}$})= ...=H^{n-2}_*(\hbox{${\cal E}$})=0$, the bundle $\hbox{${\cal B}$}$ does not have intermediate cohomology and hence it splits. In particular $H^1_*(\wedge^2 \hbox{${\cal B}$}) =0$. By the above theorem this is a contradiction.\\
So the hypothesis $H^2_*(\wedge^2 \hbox{${\cal E}$}) =H^2_*(\wedge^2 \hbox{${\cal E}$}^\vee)=0$ avoids the rank restriction in the theorem of Kumar, Peterson and Rao (see \cite{KPR1}).
\end{remark}
We also need the following lemma:\\
\begin{lemma} Let $\hbox{${\cal E}$}$ be a rank $2$ bundle on $X$. If
$$ 0 \to \hbox{${{\cal A}}$} \xrightarrow{\alpha'}
\hbox{${\cal B}$} \xrightarrow{\beta'} \hbox{${\cal C}$} \to 0,$$ is a minimal monad for $\hbox{${\cal E}$}$, then
$$ 0 \to \hbox{${{\cal A}}$} \xrightarrow{\alpha}
\hbox{${\cal B}$}\oplus \sO(a) \xrightarrow{\beta} \hbox{${\cal C}$} \to 0,$$ where $\alpha=(\alpha', 0)$ and $\beta=(\beta', 0)$, is a minimal monad for $\hbox{${\cal E}$}\oplus \sO(a)$.
\end{lemma}
\section{Rank $3$ Bundles without Inner\\ Cohomology}
We now want to apply these results in order to classify the rank $3$ bundles without inner cohomology on $\hbox{${\cal Q}$}_4$:
\begin{theorem}For a non-split rank $3$ bundle $\hbox{${\cal E}$}$ on $\hbox{${\cal Q}$}_4$ with\\ $H_*^2(\hbox{${\cal E}$})=0$, the only possible minimal monads with $\hbox{${{\cal A}}$}$ or $\hbox{${\cal C}$}$ different from zero are (up to a twist) the sequences $(1)$ and $(2)$ and
\begin{equation} 0 \to \sO \rightarrow
\sS'(1)\oplus\sS''(1)\oplus \sO(a) \rightarrow \sO(1) \to 0,
\end{equation}
where $a$ is an integer, $\alpha=(\alpha', 0)$ and $\beta=(\beta', 0)$.
\begin{proof}First of all consider a minimal monad for $\hbox{${\cal E}$}$,
$$ 0 \to \hbox{${{\cal A}}$} \xrightarrow{\alpha} \hbox{${\cal B}$}
\xrightarrow{\beta} \hbox{${\cal C}$} \to 0.$$ Since, by construction, $\hbox{${\cal B}$}$ is an ACM bundle on $\hbox{${\cal Q}$}_4$,
then it has to be isomorphic to a direct sum of line bundles and
spinor bundles twisted by some $\sO(t)$.\\
Now $\hbox{${\cal B}$}$ cannot split without violating part $1$ of Theorem \ref{t2}, which states that $$H^1_*(\wedge^2\hbox{${\cal B}$})\not=0.$$ Hence at least one spinor bundle must appear in its decomposition.\\
If just one copy of $\sS'$ or one copy of $\sS''$ appears in $\hbox{${\cal B}$}$, then, since $$\textrm{rank $\sS''$ $=$ rank $\sS'$ $=2$},$$ so that $\wedge^2 \sS'$ and $\wedge^2\sS''$ are line bundles, the
bundle $\wedge^2\hbox{${\cal B}$}$ is also ACM and again the condition
$$H^1_*(\wedge^2\hbox{${\cal B}$})\not=0,$$ of Theorem \ref{t2} is not satisfied.\\
If more than one copy of $\sS'$ or more than one copy of $\sS''$ appears in $\hbox{${\cal B}$}$, then
$(\sS'\otimes\sS')(t)$ or
$(\sS''\otimes\sS'')(t)$ appears in the bundle $\wedge^2\hbox{${\cal B}$}$ and, since
$$H^2_*(\sS'\otimes\sS')=H^2_*(\sS''\otimes\sS'')=\mathbb
C,$$ the condition
$$H^2_*(\wedge^2\hbox{${\cal B}$})=0$$ of Theorem \ref{t2} fails to be satisfied. So $\hbox{${\cal B}$}$ must contain both $\sS'$ and $\sS''$
with some twist and only one copy of each. We can conclude that $\hbox{${\cal B}$}$ has to be of the form $$
(\bigoplus_i\sO(a_i))\oplus ( \sS'(b))\oplus (\sS''(c)).$$ Let us notice
furthermore that if $H_*^1(\hbox{${\cal E}$})$ has more than $1$ generator, then rank $
\hbox{${\cal C}$}>1$ and $H_*^0( S_2\hbox{${\cal C}$})$ has at least $3$ generators.\\
But
$$H_*^1(\wedge^2\hbox{${\cal B}$})\backsimeq H_*^1(\sS'\otimes\sS'')=\mathbb C$$ has just $1$ generator and
this is a
contradiction because, by Theorem \ref{t2}, $$ \beta_0(H^1_*(\wedge^2\hbox{${\cal B}$}))\geq
\beta_0(H^0_*(S_2\hbox{${\cal C}$})).$$
This means that rank $\hbox{${\cal C}$}$ is $1$ or $0$.\\
Similarly, looking at the dual sequence, we see that rank $\hbox{${{\cal A}}$}$ must also be $1$ or $0$.\\
If $\hbox{${\cal C}$}=0$, we have the minimal monad
$$0 \to \sO \rightarrow
\sS'(l)\oplus\sS''(m) \rightarrow \hbox{${\cal E}$} \to 0.$$ By computing $c_4(\sS'(l)\oplus\sS''(m))$ as in (\cite{Ml} Theorem $3.1$) we see that $l$ and $m$ must both be equal to $1$, and we have the monad ($2$).\\
If $\hbox{${{\cal A}}$}=0$ we see in the same way that we have the monad ($1$).\\
At this point the only possible monads with $\hbox{${{\cal A}}$}$ and $\hbox{${\cal C}$}$ not zero are of the form
$$ 0 \to \sO \xrightarrow{\alpha}
\sO(a)\oplus\sS'(1+b)\oplus\sS''(1+c) \xrightarrow{\beta} \sO(d) \to 0,$$ where $a$, $b$, $c$ and $d$
are integers.\\
Now, since $$\beta_{0j}(H^1_*(\wedge^2\hbox{${\cal B}$}))\geq
\beta_{0j}(H^0_*(S_2\hbox{${\cal C}$}))$$ $\forall j\in {\mathbb Z}$ we see that $2+b+c=2d$.\\
Let us assume that $b\leq 0$; then by the sequence $$ 0 \to \sS'' \rightarrow
\sO^4 \rightarrow \sS'(1) \to 0$$ (see \cite{Ot2}) we see that $\sS'(1+b)$ has no global sections.\\
Moreover $\sO(a)\oplus\sS''(1+c)$ has no nowhere vanishing section. In fact, a section of $\sO(a)$ has zero locus of dimension $4$ (if it is the zero section) or $3$. If $a=0$ it could be a scalar different from zero, but this is against our assumption of minimality. Since the zero locus of a section of $\sS''(1+c)$ has dimension at least $2$, we conclude that the zero locus of a section of $\sO(a)\oplus\sS''(1+c)$ must be non-empty.\\
This means that the map $\alpha$ cannot be injective.\\
If $c\leq 0$ we have the same contradiction.\\
Hence $b$ and $c$ must both be positive.
Let us consider now the dual monad twisted by $d$ $$ 0 \to \sO \xrightarrow{\beta^\vee}
\sO(d-a)\oplus\sS'(d-b)\oplus\sS''(d-c) \xrightarrow{\alpha^\vee} \sO(d) \to 0.$$
By the argument above we have that $d-b\geq 1$ and $d-c\geq 1$; but, since $2=d-b+d-c$, it follows that $b=c$ and $d=b+1$.\\
We have the map $$\beta: \sO(a) \oplus \sS'(1+b)\oplus \sS''(1+b) \rightarrow \sO(1+b).$$
Let us consider the restriction $$\beta' : \sS'(1+b)\oplus \sS''(1+b) \rightarrow \sO(1+b).$$ We know (by \cite{Ml}) that in general, we can find a surjective map $$\gamma: \sS'(1+b)\oplus \sS''(1+b) \rightarrow \sO(1+b)$$ since we have some standard rank two bundles obtained from a monad on $\hbox{${\cal Q}$}_4$. Hence the map
$\gamma^\vee$ gives a nowhere vanishing section of $\sS'^\vee \oplus \sS''^\vee$, which thus has fourth Chern class $0$. Hence in particular, some other map like
$\beta'^\vee$ must give a section which is either nowhere vanishing, or which vanishes on a locus of dimension $\geq 1$. (It cannot vanish on a non-empty zero-dimensional set.) However, if $\beta'^\vee$ vanishes on a locus of dimension $\geq 1$, then $\beta^\vee$ itself cannot give a nowhere vanishing section since the map $\sO \rightarrow \sO(-a+1+b)$ is either zero or defines a hypersurface, by minimality.
Therefore $\beta'$ must be surjective (like a standard map $\gamma$).
By an easy computation we have the following claim:\\ If $\hbox{${\cal E}$}$ is a rank two bundle on $\hbox{${\cal Q}$}_4$ with monad $$0 \rightarrow \sO \rightarrow \sS'(1) \oplus \sS''(1) \rightarrow \sO(1) \to 0,$$ then $H^1(\hbox{${\cal E}$}(-1)) = k$ and $H^1(\hbox{${\cal E}$}(t)) = 0$ for every $t\not= -1$.\\
Hence, on the level of global sections, $\beta'$ is surjective onto $\sO(1+b)$, except that the element $1$ in degree $-1-b$ is not in the image. By minimality, $1$ cannot be in the image of $\sO(a)$. Hence $\sO(a)$ maps by $\beta$ into the image of $\beta'$, i.e. there exists $l\in \sS'(1+b)\oplus\sS''(1+b)$ such that $\beta(1,0)=\beta'(l)$. Therefore, after a change of basis, we may assume that $\sO(a)$ maps to zero. In fact, if we consider a map $$\delta: \sO(a)\oplus\sS'(1+b)\oplus\sS''(1+b) \rightarrow \sO(a)\oplus\sS'(1+b)\oplus\sS''(1+b)$$ sending $(1,0)$ to $(1,-l)$, we have that $$\beta(\delta(1,0))=\beta(1,-l)=\beta'(l)-\beta'(l)=0.$$
We have at this point the monad $$ 0 \to \sO \xrightarrow{(h, \alpha')}
\sO(a)\oplus\sS'(1+b)\oplus\sS''(1+b) \xrightarrow{(0, \beta')} \sO(1+b) \to 0.$$
We want to prove that $h$ must be the zero map and $b$ must be zero.\\
If $a\leq 0$, clearly $h=0$ and $\alpha'$ is injective if and only if $b=0$.\\
If $a>0$ we consider the kernel of $\beta$, which is $\sO(a)\oplus\hbox{${\cal G}$}_4(b)$, and, from the exact sequence $$ 0 \to \sO \rightarrow
\sO(a)\oplus\hbox{${\cal G}$}_4(b) \rightarrow \hbox{${\cal E}$} \to 0,$$ we see that $c_4(\sO(a)\oplus\hbox{${\cal G}$}_4(b))$ must be zero. But from $$0 \to \hbox{${\cal G}$}_4(b) \rightarrow
\sS'(1+b)\oplus\sS''(1+b) \rightarrow \sO(1+b) \to 0,$$ we see that $c_3(\hbox{${\cal G}$}_4(b))=c_4(\sS'(1+b)\oplus\sS''(1+b))*c_1(\sO(1+b))^{-1}=2(1+b+b^2)(b+b^2)(1+b)^{-1}$ and so $c_4(\sO(a)\oplus\hbox{${\cal G}$}_4(b))=c_1(\sO(a))*c_3(\hbox{${\cal G}$}_4(b))=0$ if and only if $b=0$.
\end{proof}
\end{theorem}
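The Chern class computation at the end of the proof can be checked mechanically. The following sketch (an illustrative aid of ours, not part of the original argument; the function names are hypothetical) evaluates $c_3(\hbox{${\cal G}$}_4(b))=2(1+b+b^2)(b+b^2)(1+b)^{-1}$ and the Whitney-formula product $c_4(\sO(a)\oplus\hbox{${\cal G}$}_4(b))=c_1(\sO(a))\,c_3(\hbox{${\cal G}$}_4(b))$, confirming that for $a>0$ the top Chern class vanishes exactly when $b=0$:

```python
# Illustrative check of the Chern class computation in the proof above;
# the function names are ours, not the paper's.

def c3_G4(b):
    # c_3(G_4(b)) = c_4(S'(1+b) + S''(1+b)) * c_1(O(1+b))^(-1)
    #             = 2(1 + b + b^2)(b + b^2)/(1 + b)
    num = 2 * (1 + b + b**2) * (b + b**2)
    assert num % (1 + b) == 0   # the quotient is an integer for b >= 0
    return num // (1 + b)

def c4_kernel(a, b):
    # c_4(O(a) + G_4(b)) = c_1(O(a)) * c_3(G_4(b)) by the Whitney formula
    return a * c3_G4(b)

# For a > 0, scan the non-negative integers b forced earlier in the proof:
vanishing = [b for b in range(0, 100) if c4_kernel(1, b) == 0]
```

Since $c_3(\hbox{${\cal G}$}_4(b)) = 2b(1+b+b^2)$ is strictly positive for every integer $b>0$, the scan returns only $b=0$, matching the conclusion of the proof.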
\begin{remark} On $\hbox{${\cal Q}$}_4$ the only rank $3$ bundles without inner cohomology are the ACM bundles, $\hbox{${\cal G}$}_4$, $\hbox{${\cal P}$}_4$ and $\sZ_4\oplus\sO(a)$.
\end{remark}
\begin{corollary} In higher dimension we have:
\begin{enumerate}
\item For a non-split rank $3$ bundle $\hbox{${\cal E}$}$ on $\hbox{${\cal Q}$}_5$ without inner cohomology, the only possible minimal monads with $\hbox{${{\cal A}}$}$ or $\hbox{${\cal C}$}$ not zero are (up to a twist) the sequences $(3)$ and $(4)$ and
$$ 0 \to \sO \rightarrow
\sS_5(1)\oplus\sO(a) \rightarrow \sO(1) \to 0,$$
where $a$ is an integer, $\alpha=(\alpha'', 0)$ and $\beta=(\beta'', 0)$.
\item For a non-split rank $3$ bundle $\hbox{${\cal E}$}$ on $\hbox{${\cal Q}$}_6$ without inner cohomology, the only possible minimal monads with $\hbox{${{\cal A}}$}$ or $\hbox{${\cal C}$}$ not zero are (up to a twist) the sequences $(6)$, $(7)$, $(8)$ and $(9)$.
\item For $n>6$, no non-split bundles of rank $3$ on $\hbox{${\cal Q}$}_n$ exist with\\
$H^2_*(\hbox{${\cal E}$})= ...=H^{n-2}_*(\hbox{${\cal E}$})=0$.
\end{enumerate}
\begin{proof}
First of all, let us notice that for $n>4$ there are no non-split ACM rank $3$ bundles,
since the spinor bundles have rank greater than $3$.\\
Let us assume
then that $H^1_*(\hbox{${\cal E}$})\not=0$ or $H^{n-1}_*(\hbox{${\cal E}$})\not=0$ and let us see how many monads it is possible
to find:
\begin{enumerate}
\item
In a minimal monad for $\hbox{${\cal E}$}$ on $\hbox{${\cal Q}$}_5$,
$$ 0 \to \hbox{${{\cal A}}$} \xrightarrow{\alpha} \hbox{${\cal B}$}
\xrightarrow{\beta} \hbox{${\cal C}$} \to 0,$$ $\hbox{${\cal B}$}$ is an ACM bundle on $\hbox{${\cal Q}$}_5$;
then it has to be isomorphic to a direct sum of line bundles and
spinor bundles twisted by some $\sO (t)$.
Moreover, since $H_*^2(\hbox{${\cal E}$})=0$ and $H_*^3(\hbox{${\cal E}$})=0$,
$\hbox{${\cal E}$}_{|\hbox{${\cal Q}$}_4}=\hbox{${\cal F}$}$ is a bundle with $H^2_*(\hbox{${\cal F}$})=0$ and by (\cite{Ml} Lemma $1.2$) its minimal
monad is just the restriction of the minimal monad for $\hbox{${\cal E}$}$ $$ 0 \to \hbox{${{\cal A}}$} \xrightarrow{\alpha} \hbox{${\cal B}$}
\xrightarrow{\beta} \hbox{${\cal C}$} \to 0.$$
Hence, by the theorem above, this monad must be
$$ 0 \to \sO \rightarrow
\sS'(1)\oplus\sS''(1)\oplus\sO(a) \rightarrow \sO(1) \to 0.$$
Now, since $$\sS_{5_{|\hbox{${\cal Q}$}_4}}\backsimeq \sS'\oplus\sS'',$$ the only bundle of the form
$$(\bigoplus_i\sO(a_i))\oplus ( \bigoplus_j\sS_5(b_j))$$ having $\sS'(1)\oplus\sS''(1)\oplus\sO(a)$ as restriction to $\hbox{${\cal Q}$}_4$ is $\sS_5(1)\oplus\sO(a)$, and then, if $\hbox{${{\cal A}}$}$ and $\hbox{${\cal C}$}$ are different from zero, the claimed
monad $$ 0 \to \sO \xrightarrow{\alpha} \sS_5(1)\oplus\sO(a)
\xrightarrow{\beta} \sO(1) \to 0,$$ where $\alpha=(\alpha'', 0)$ and $\beta=(\beta'', 0)$, is the only possible one.\\
If $\hbox{${{\cal A}}$}=0$, we have the monad ($3$) and if $\hbox{${\cal C}$}=0$, we have the monad ($4$).\\
\item
In $\hbox{${\cal Q}$}_6$ we use the same argument. Let us consider a minimal monad for $\hbox{${\cal E}$}$.\\
If $\hbox{${{\cal A}}$}$ and $\hbox{${\cal C}$}$ are not zero the restriction of the monad on $\hbox{${\cal Q}$}_5$ must be $$ 0 \to \sO \xrightarrow{\alpha} \sS_5(1)\oplus\sO(a)
\xrightarrow{\beta} \sO(1) \to 0.$$ Since
$\sS'_{6_{|\hbox{${\cal Q}$}_5}}\backsimeq \sS_5$
and also
$\sS''_{6_{|\hbox{${\cal Q}$}_5}}\backsimeq \sS_5,$
we have two possible minimal monads:
$$ 0 \to \sO \rightarrow
\sS'_6(1)\oplus\sO(a) \rightarrow \sO(1) \to 0$$ and $$ 0 \to \sO \rightarrow
\sS''_6(1)\oplus\sO(a) \rightarrow \sO(1) \to 0,$$ where the maps are of the form $(\gamma, 0)$.\\
In both sequences the homology is a bundle $\hbox{${\cal F}$}\oplus\sO(a)$ where $\hbox{${\cal F}$}$ is a rank $2$ bundle without inner cohomology, which by (\cite{Ml} Cor. $3.4$) cannot exist, so they cannot be the monads of a rank $3$ bundle.\\
If $\hbox{${{\cal A}}$}$ or $\hbox{${\cal C}$}$ are zero the restriction of the minimal monad on $\hbox{${\cal Q}$}_5$ must be the minimal monad ($3$) or the minimal monad $(4)$. We have four possible minimal monads:
$$ 0 \to \sO \rightarrow
\sS'_6(1) \rightarrow \hbox{${\cal E}$} \to 0,$$ $$ 0 \to \sO \rightarrow
\sS''_6(1) \rightarrow \hbox{${\cal E}$} \to 0,$$
$$ 0 \to \hbox{${\cal E}$}^{\vee} \rightarrow\sS'_6(1) \rightarrow \sO(1) \to 0,$$ and $$ 0 \to \hbox{${\cal E}$}^{\vee} \rightarrow
\sS''_6(1) \rightarrow \sO(1) \to 0.$$
These are precisely the sequences $(6)$, $(7)$, $(8)$ and $(9)$.
\item
Let us consider a minimal monad for bundle without inner cohomology $\hbox{${\cal E}$}$ on $\hbox{${\cal Q}$}_7$:
$$ 0 \to \hbox{${{\cal A}}$} \rightarrow
\hbox{${\cal B}$} \rightarrow \hbox{${\cal C}$} \to 0.$$
$\hbox{${\cal B}$}$ must be non-split and ACM. Since $\sS_{7_{|\hbox{${\cal Q}$}_6}}\backsimeq \sS'_6\oplus\sS''_6,$
the restriction of the monad on $\hbox{${\cal Q}$}_6$ cannot be one of the sequence $(6)$, $(7)$, $(8)$ and $(9)$.\\
We can conclude that no non-split bundle of rank $3$ in
$\hbox{${\cal Q}$}_7$ exists without inner cohomology.\\
Clearly also in
higher dimension it is not possible to find any rank $3$ bundle without inner cohomology.
\end{enumerate}
\end{proof}
\end{corollary}
\begin{remark} On $\hbox{${\cal Q}$}_n$ ($n>3$) the only rank $3$ bundles without inner cohomology are the following:\begin{enumerate}
\item for $n=4$, the ACM bundles $\sS'\oplus\sO(a)$ and $\sS''\oplus\sO(a)$, $\hbox{${\cal G}$}_4$, $\hbox{${\cal P}$}_4$ and $\sZ_4\oplus\sO(a)$.\\
\item For $n=5$, $\hbox{${\cal G}$}_5$, $\hbox{${\cal P}$}_5$ and $\sZ_5\oplus\sO(a)$.\\
\item For $n=6$, $\hbox{${\cal G}$}'_6$, $\hbox{${\cal G}$}''_6$, $\hbox{${\cal P}$}'_6$ and $\hbox{${\cal P}$}''_6$.\end{enumerate}
\end{remark}
\begin{remark} If we consider rank $r$ ($r\geq 4$) bundles without inner cohomology, we cannot have such a simple classification on $\hbox{${\cal Q}$}_n$ ($n\geq 4$).\\
In fact, let $\sH$ be any ACM bundle of rank $r$ ($r > 4$) on $\hbox{${\cal Q}$}_4$. The generic map $$\sO^{r-4} \xrightarrow{\alpha}
\sH$$ is injective, so the cokernel of $\alpha$ is a rank $4$ bundle without inner cohomology.\\
This means there are many bundles without inner cohomology of rank $r$ ($r\geq 4$) on $\hbox{${\cal Q}$}_n$ ($n\geq 4$).
\end{remark}
\bibliographystyle{amsplain}
2,877,628,091,571 | arxiv | \section{Introduction}
The rattling phenomenon, that is, \textit{a large vibration of an atom in an oversized atomic cage}, is a key issue in understanding exotic physical properties in a family of bulk materials that possess cage-like units in their crystalline structure. Filled skutterudites, RM$_4$X$_{12}$ (R = rare-earth; M = Fe, Ru or Os; X = P, As or Sb), are one of those compounds that have large X$_{12}$-icosahedron atomic cages filled with rare-earth atoms \cite{Jeitschko77,Jeitschko80,Lee2004,Lee2004b}. Despite their well-defined crystalline structures, their lattice thermal conductivity is remarkably small, being comparable to that of vitreous silica \cite{Sales96}. It has been pointed out by structural refinement studies that the thermal parameters of the filling rare-earth atoms are unusually large, indicating a large vibrational amplitude of the rare-earth atoms \cite{Sales97}. This result has led to the speculation that the rattling motion of the rare-earth atoms may strongly scatter acoustic phonons, which carry most of the heat flow in a crystal, resulting in an anomalous suppression of the lattice thermal conductivity.
The rattling motion can also influence the electronic properties via electron-phonon coupling. The filled skutterudites show various interesting features such as heavy fermion superconductivity \cite{Bauer} and a metal-insulator transition \cite{Sekine97}, which could be affected by the rattling. There are also theoretical works that indicate a correlation between electronic properties and the motion of filled rare-earth atoms \cite{Hattori}. Recently, it has been proposed that rattling may assist the appearance of superconductivity in the $\beta$-pyrochlore compound KOs$_2$O$_6$ \cite{Yonezawa}. Charge fluctuation enhanced by rattling is considered to be a possible exotic pairing mechanism responsible for the superconductivity.
Although the importance of rattling phenomena in skutterudites is recognized, there have been only a few studies of rattling motion \cite{Keppens,Hermann,Goto,Kondo,Cao,Iwasa}. Powder neutron scattering and heat capacity measurements suggest that the rattling motion is an incoherent localized mode, which can be well described by a localized Einstein mode \cite{Keppens,Hermann}. The energy of the vibrational rattling motion has been estimated from the peak in the phonon density of states measured by powder neutron scattering to be $E = 5 \sim 7$ meV. Ultrasonic measurements of PrOs$_4$Sb$_{12}$ suggest that a Pr site splits into four off-centered positions, and the Pr atoms vibrate among those positions \cite{Goto}.
To clarify the influence of rattling motion on electronic and thermal transport properties, further studies are quite important. Inelastic neutron scattering using a single crystal is a powerful method of studying rattling since it can determine both the energy and momentum dependences of phonon spectra. In this work, we report an inelastic neutron scattering study that uses single crystals of CeRu$_4$Sb$_{12}$ and demonstrate the interplay between acoustic and low-lying optical phonon modes characterized by the large vibration of rare-earth atoms. We discuss the environment of the filled rare-earth atoms based on the analysis using the Born-von K\'{a}rm\'{a}n method and a possible mechanism of extremely low lattice thermal conductivity in filled skutterudites.
Single crystals of CeRu$_4$Sb$_{12}$ were grown by an Sb flux method, as described elsewhere \cite{Sugawara}. Each single crystal is about 2 $\sim$ 3 mm in length with an almost cubic shape. To increase the signal intensity in the neutron scattering measurements, six single crystals with a total volume of $\sim 0.2$ cc were assembled.
Neutron scattering measurements were carried out using the triple-axis spectrometer, TOPAN, at the JRR-3M reactor of JAEA at Tokai. The final neutron energy was fixed at $E_f = 30.5$ meV using a pyrolytic graphite monochromator and an analyzer. A pyrolytic graphite filter was used to reduce neutrons from higher-order reflections. The sequences of the horizontal collimators were 15'-15'-S-15'-30' or 40'-60'-S-60'-80', where S denotes the sample position. The measurements were conducted at room temperature.
For low-energy phonons at below $E = 10$ meV, the observed phonon peaks were fitted using the following scattering function convoluted with the resolution function,
\begin{equation}
S({\bf q},E)=\frac{A}{1-\exp \left(-\frac{E}{k_BT} \right)}
\left\{\frac{\Gamma}{(E-E_s)^2+\Gamma^2} \right\}
\end{equation}
where $E_s$, $\Gamma$, $k_B$, $A$ and $\bf q$ denote the phonon energy, the linewidth, the Boltzmann constant, a scaling factor, and the wave vector, respectively. In the fitting, we assume that $E_s$ has a linear dependence on $|{\bf q}|$ over the range of the instrumental resolution. The dynamical structure factor $F_{\mathrm{inel}}$ is estimated using the obtained scattering function. $F_{\mathrm{inel}}$ is described as
\begin{equation}
F_{\mathrm{inel}} = \sum_{d} \frac{b_d}{\sqrt{m_d}} \exp (-W_d + i {\bf G} \cdot {\bf r_d}) ({\bf Q} \cdot {\bf e_d}),
\end{equation}
where $b_d$ is the coherent scattering length of the $d$th atom at $\bf{r}_d$, $m_d$ is the mass, $e^{-W_d}$ is the Debye-Waller factor, $\bf Q$ is the scattering vector, $\bf G$ is the reciprocal lattice vector, and $\bf{e}_d$ is the polarization vector. It is known that $F_{\mathrm{inel}}$ is related to the energy-integrated scattering function for a one-phonon process in the neutron energy-loss mode as
\begin{equation}
\int S({\bf Q},E) dE \propto \frac{1}{E_s}
\frac{1}{1-\exp \left(-\frac{E_s}{k_BT} \right)}
|F_{\mathrm{inel}}|^{2}.
\end{equation}
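As a numerical illustration (ours, with purely illustrative parameter values), the lineshape of Eq.~(1) can be evaluated directly; the detailed-balance prefactor $[1-\exp(-E/k_BT)]^{-1}$ strongly weights low-lying modes in a room-temperature measurement:

```python
import math

K_B = 0.08617  # Boltzmann constant in meV/K

def thermal_factor(E, T):
    # detailed-balance prefactor 1/(1 - exp(-E/k_B T)) of Eq. (1)
    return 1.0 / (1.0 - math.exp(-E / (K_B * T)))

def S_qE(E, E_s, Gamma, T, A=1.0):
    # Lorentzian phonon profile of Eq. (1), centered at the phonon
    # energy E_s with half-width Gamma (all energies in meV)
    return A * thermal_factor(E, T) * Gamma / ((E - E_s)**2 + Gamma**2)

# Illustrative values: a 6 meV phonon with 0.5 meV half-width at 300 K
peak = S_qE(6.0, 6.0, 0.5, 300.0)
tail = S_qE(8.0, 6.0, 0.5, 300.0)
```

At $T=300$ K and $E=6$ meV the thermal factor is about $4.8$, so a mode near the guest-mode energy is strongly enhanced on the neutron energy-loss side, while for $E \gg k_BT$ the factor approaches unity.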
To estimate the interatomic force constants, we have performed calculations based on the Born-von K\'{a}rm\'{a}n atomic force model. The longitudinal force constants of the seventeen closest atomic pairs were chosen as fitting parameters (see Table \ref{exp-list}), and the calculated intensities of the phonon spectra as well as the energies were fitted to the measured data. The atomic coordinates of the Sb atoms at 24$g$ sites in the space group $Im\bar{3}$ were assumed to be (0,0.34105,0.15744) \cite{Skutterudite} for the analysis.
\begin{table}[htb]
\caption{Interatomic force constants obtained from the analysis based on the Born-von K\'{a}rm\'{a}n model.}
\begin{center}
\begin{tabular}{c c c c}\hline \hline
& Pair & bond length & Force constants \\
& & (\AA) & (mdyn/\AA) \\ \hline
1 & Ru-Sb & 2.61 & 1.40 \\
2 & Sb-Sb & 2.92 & 0.35 \\
3 & Sb-Sb & 2.95 & 0.30 \\
4 & Ce-Sb & 3.48 & 0.025 \\
5 & Sb-Sb & 3.50 & 0.30 \\
6 & Sb-Sb & 3.87 & 0.28 \\
7 & Ce-Ru & 4.01 & 0.025 \\
8 & Sb-Sb & 4.15 & 0.05 \\
9 & Ru-Sb & 4.51 & 0.05 \\
10 & Ru-Sb & 4.52 & 0.05 \\
11 & Sb-Sb & 4.56 & 0.05 \\
12 & Ru-Ru & 4.63 & 0.00 \\
13 & Sb-Sb & 5.22 & 0.00 \\
14 & Sb-Sb & 5.78 & 0.00 \\
15 & Ce-Sb & 5.81 & 0.00 \\
16 & Sb-Sb & 5.81 & 0.00 \\
17 & Ru-Sb & 5.83 & 0.05 \\ \hline \hline
\end{tabular}
\end{center}
\label{exp-list}
\end{table}%
Figure 1 shows energy spectra observed along ${\bf q} = (\zeta, -\zeta, 0)$ below $E = 10$ meV, which give the transverse phonon modes with the propagation vector $[110]$. The solid lines depict the calculated profiles from the fits convoluted with the instrumental resolution. We find that the linewidths of these spectra are comparable to the instrumental resolution. At $\zeta = 0.15$, a well-defined single peak is observed, which can be assigned to a transverse acoustic (TA) phonon. Around $\zeta = 0.3$, on the other hand, two peaks are observed with a separation of $\sim1.5$ meV. The intensity of the lower acoustic mode is strong for small $q$. As $\zeta$ increases, however, it gradually decreases and vanishes near the zone boundary by transferring the spectral weight to the higher branch.
\begin{figure}[htb]
\includegraphics[width=\columnwidth]{fig1.eps}
\caption{\label{fig:acoustic phonons} Energy spectra of transverse acoustic and optical phonon peaks with propagation vector [110]. The solid lines are the results of fits convoluted with the instrumental resolution function.}
\end{figure}
The dispersion relations of the peaks observed in Fig. \ref{fig:acoustic phonons} are summarized in Fig. \ref{fig:low energy dispersion}(a). As indicated, the TA and optical modes show typical anticrossing behavior at $\zeta \sim 0.25$. The energy of the lower phonon mode increases linearly with increasing $\zeta$ from the $\Gamma$-point like a typical TA mode. When the upper phonon mode appears, however, the linear relationship breaks down. In contrast, the energy of the higher mode increases above $\zeta = 0.2$ and begins to saturate near the zone boundary. The behavior of the dynamical structure factor shown in Fig. \ref{fig:low energy dispersion}(b) is also consistent with the mixing of two modes. In the region above $\zeta = 0.2$, the dynamical structure factor of the lower energy mode is strongly suppressed, while that of the higher energy mode is enhanced, indicating that they satisfy a sum rule.
\begin{figure}[htb]
\includegraphics[width=\columnwidth]{fig2.eps}
\caption{\label{fig:low energy dispersion} (a) Phonon dispersion curves of transverse acoustic and optical phonon modes with propagation vector [110] in CeRu$_4$Sb$_{12}$. The motion of the rare-earth ions in the optical mode (guest mode) is indicated by a red arrow in the inset. (b) Dynamical structure factor as a function of $\zeta$. The solid and dashed lines in (a) and (b) depict the results of a fit based on the Born-von K\'{a}rm\'{a}n model.}
\end{figure}
Figure \ref{fig:optical phonons} shows typical profiles of the transverse optical (TO) modes at energies above $E = 10$ meV, observed at three $\mathbf{q}$ positions with the propagation vector along the [110] direction. Two peaks can be recognized at $E \sim 18$ meV and 35 meV, indicating that the optical phonon modes above $E = 10$ meV are grouped into two bands. The solid lines depict the results of Gaussian fits. Although the instrumental energy resolution is $E_{\mathrm{res}} \sim 6$ meV, the full width at half maximum (FWHM) of the phonon profiles at $E \sim 18$ meV is about $\Delta E = 15$ meV, clearly larger than $E_{\mathrm{res}}$, strongly suggesting that the profiles actually consist of a number of closely lying optical branches.
\begin{figure}[htb]
\includegraphics[width=\columnwidth]{fig3.eps}
\caption{\label{fig:optical phonons} Energy spectra of transverse optical phonons with propagation vector along the [110] direction. The solid lines are the results of Gaussian fits.}
\end{figure}
To interpret the phonon profiles in Figs. \ref{fig:acoustic phonons} and \ref{fig:optical phonons}, we performed an analysis based on the Born-von K\'{a}rm\'{a}n model. From fits to the data, we obtained the calculated phonon dispersion relations; those with propagation vector [110] are depicted in Figs. \ref{fig:low energy dispersion}(a) and \ref{fig:dispersion}(a). In Figs. \ref{fig:dispersion}(b) and \ref{fig:dispersion}(c), we separately show the dispersions of the longitudinal and transverse modes. The shading of the colored curves corresponds to the spectral weight of the scattering function. We also depict the peak positions of the observed phonon profiles by green circles; the vertical lines drawn at each data point above $E = 10$ meV correspond to the FWHM of each phonon profile. Clearly, the calculated curves follow the observed phonon peaks quite well. In particular, the anticrossing behavior of the observed phonon dispersions and of the dynamical structure factor shown in Fig. \ref{fig:low energy dispersion} is successfully reproduced by our Born-von K\'{a}rm\'{a}n force-model analysis. We have confirmed that this anticrossing behavior can be reproduced only with small Ce-Sb and Ce-Ru force constants; lowering any other force constant instead leads to a large discrepancy between calculation and observation. The results indicate that the optical mode in the low-energy region at $E \sim 6$ meV is a guest mode with $T_u$ symmetry. As illustrated in the inset of Fig. \ref{fig:low energy dispersion}(a), the guest mode involves large-amplitude vibrations of the rare-earth atoms. Since the flat optical band produces a large phonon density of states, \textit{the guest mode} can be identified with the vibrational mode observed at $E = 5$--$7$ meV by inelastic neutron scattering measurements on powders \cite{Keppens,Hermann}, which was interpreted as the rattling motion. Table \ref{exp-list} summarizes the obtained force constants.
The evaluated force constants for the Ce-Sb and Ce-Ru pairs are very small, 0.025 mdyn/\AA, reflecting the low phonon energy and supporting the picture that loosely bound Ce ions rattle within the cage. However, the anticrossing behavior clearly indicates that the guest atoms interact with the host lattice, even though the force constants are very small. This means that the vibrations of the guest atoms can propagate to neighboring guest atoms through the host lattice. The dispersive character of these modes indicates that the guest modes at $E \sim 6$ meV should be interpreted as coherent optical phonon modes rather than as local, incoherent Einstein modes. In the intermediate-energy group, the optical phonon energies depend mainly on the Sb-Sb force constants, which take intermediate values. In the highest-energy group, the optical phonon energies are determined by the large force constants of the Ru-Sb pairs, 1.4 mdyn/\AA, indicating that the RuSb$_6$ octahedron is quite rigid.
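For reference, the Born-von K\'{a}rm\'{a}n analysis quoted above amounts to standard force-constant lattice dynamics: the phonon energies at each $\mathbf{q}$ are the eigenfrequencies of the dynamical matrix built from the interatomic force constants $\Phi$ (a textbook sketch of the procedure, not a new result):

```latex
% Dynamical matrix from force constants \Phi between atom \kappa in
% cell 0 and atom \kappa' in cell l (standard lattice dynamics)
\begin{equation*}
D^{\kappa\kappa'}_{\mu\nu}(\mathbf{q})
  =\frac{1}{\sqrt{M_{\kappa}M_{\kappa'}}}
   \sum_{l}\Phi_{\mu\nu}(0\kappa;l\kappa')\,
   e^{\,i\mathbf{q}\cdot[\mathbf{r}(l\kappa')-\mathbf{r}(0\kappa)]},
\qquad
\det\!\left[D(\mathbf{q})-\omega^{2}(\mathbf{q})\,I\right]=0.
\end{equation*}
```

In this language, the small Ce-Sb and Ce-Ru entries of $\Phi$ produce the flat low-lying guest branch, while the large Ru-Sb entries fix the high-energy bands.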
\begin{figure}[htb]
\includegraphics[width=\columnwidth]{fig4.eps}
\caption{\label{fig:dispersion} Phonon dispersion curves along [110] direction in CeRu$_4$Sb$_{12}$. (a) Results of fit based on the Born-von K\'{a}rm\'{a}n model. (b) Longitudinal and (c) transverse phonon dispersion curves. The solid circles depict the results of measurements. The vertical lines drawn at each data point above $E$ = 10 meV depict the FWHM of each peak. The contour maps show the calculated scattering function.}
\end{figure}
It has generally been believed that the extremely low lattice thermal conductivity in filled skutterudites is related to incoherent rattling motions: the rattling rare-earth atoms were considered to act as scattering centers for acoustic phonons. This scattering process becomes a dominant mechanism when the rattlers are disordered and/or vibrate incoherently. However, the Ce atoms in CeRu$_4$Sb$_{12}$ are ordered and fully occupy the Sb$_{12}$-icosahedron atomic cages. Furthermore, our results show that the guest mode is a coherent optical phonon branch, and consequently the Ce atoms cannot function as scattering centers. In fact, the lattice thermal conductivity of 100\%-filled samples is higher than that of partially filled samples, in which rare-earth atoms can scatter acoustic phonons owing to disorder \cite{Nolas}. Feldman \textit{et al}. have also pointed out that this scattering mechanism is not valid for filled skutterudites \cite{Feldman}. Clearly, another mechanism is needed to explain the low lattice thermal conductivity of 100\%-filled skutterudites.
One of the most plausible scattering mechanisms in filled skutterudites is their unique Umklapp processes. Generally, the phonon modes that contribute to Umklapp processes are restricted to a narrow $\mathbf{q}$ range. Figure \ref{fig:Umklapp}(a) illustrates a conceptual diagram of the phonon dispersion relations in a conventional system. The gray area depicts the trace of the parallel translation of the original acoustic branch obtained by shifting its origin (O$'$) along the dispersion relation from the zone center towards the zone boundary. The point $\mathbf{q}'$ marks the intersection between the optical branch and the shifted acoustic branch, which satisfies the conservation laws. Clearly, the ``acoustic + acoustic $\rightarrow$ optical phonon'' transition through the Umklapp process is limited to the short thick line. In contrast, for filled skutterudites, the flat optical phonon branch lies in a low-energy region below the top of the acoustic phonon branch (Fig. \ref{fig:Umklapp}(b)). Consequently, the ``acoustic + optical $\rightarrow$ optical phonon'' transition is allowed in a wide area, which can be constructed by moving the point O$'$ along the low-lying optical phonon branch. The optical phonons created through this process are distributed along the long thick line through $\mathbf{q}'$, suggesting that the Umklapp process occurs much more frequently than in Fig. \ref{fig:Umklapp}(a). It should be noted that this process requires a specific condition: the energy of the guest modes ($E_{\textrm{guest}}$) has to be larger than the gap energy between the acoustic and upper-lying optical phonon modes ($\Delta$). In fact, $\Delta$ for the TA phonons in CeRu$_4$Sb$_{12}$ is $\sim$4 meV, which is smaller than $E_{\textrm{guest}} \sim 6$ meV. As shown in Fig. \ref{fig:dispersion}(b), the condition $E_{\textrm{guest}} \geq \Delta$ is also satisfied for the longitudinal acoustic mode.
At the same time, a small value of $E_{\textrm{guest}}$ is also essential, since the number of phonons contributing to the process increases with decreasing phonon energy. Since these conditions are all satisfied in the present system, we propose that Umklapp scattering can be one of the important processes suppressing the thermal conductivity of filled skutterudites.
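The processes sketched in Fig. \ref{fig:Umklapp} are constrained by simultaneous conservation of energy and of crystal momentum up to a reciprocal-lattice vector $\mathbf{G}$; for the ``acoustic + optical $\rightarrow$ optical'' channel of Fig. \ref{fig:Umklapp}(b) these read

```latex
% Conservation laws for the Umklapp channel of Fig. 5(b)
\begin{equation*}
\mathbf{q}_{1}+\mathbf{q}_{2}=\mathbf{q}'+\mathbf{G},
\qquad
E_{\mathrm{ac}}(\mathbf{q}_{1})+E_{\mathrm{guest}}(\mathbf{q}_{2})
  =E_{\mathrm{opt}}(\mathbf{q}'),
\end{equation*}
```

and solutions with $\mathbf{G}\neq 0$ exist over a wide range of $\mathbf{q}'$ precisely when $E_{\textrm{guest}}\geq\Delta$, as discussed above.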
\begin{figure}[htb]
\includegraphics[width=\columnwidth]{fig5.eps}
\caption{\label{fig:Umklapp} Conceptual diagram of phonon dispersion relations. (a) Simple system without guest modes. (b) Filled skutterudites with guest modes lying within acoustic phonon branches. The gray areas indicate the range where a phonon can be created beyond the Brillouin zone via (a) acoustic-acoustic and (b) acoustic-optical phonon scattering. The thick segments of the flat lines indicate the $\mathbf{q}$-range of the resultant phonons. $\Delta$ indicates the gap energy between an acoustic phonon and an upper-lying optical mode.}
\end{figure}
In conclusion, we have clarified that the Ce atoms are weakly bound to the surrounding atoms, which allows their large vibrational amplitude. The observed phonons at $E \sim 6$ meV are identified as the optical phonons of a guest mode. The results suggest that the remarkably low lattice thermal conductivity of filled skutterudites cannot be due to Einstein oscillations. As one possibility, we propose that it can be attributed to intensive Umklapp scattering caused by their unique phonon dispersion relations.
The authors would like to thank M. Udagawa, M. Kataoka, Y. Tsunoda and M. Matsuda for their helpful discussions. This work was supported by a Grant-in-Aid for Scientific Research in Priority Area ``Skutterudite" (Nos. 15072201 and 15072206) of the Ministry of Education, Culture, Sports, Science and Technology of Japan and a grant from the Ministry of Economy, Trade and Industry of Japan.
\section{Introduction}\label{s0}
Classification of simple weight modules is a classical problem
in the representation theory of Lie algebras. Simple weight
modules with finite-dimensional weight spaces (sometimes also
called Harish-Chandra modules) are classified for several classes
of algebras, including simple finite-dimensional algebras
(\cite{M2}) and various generalized Virasoro algebras
(\cite{M,Ma25,LZ}). In the general case, however, the problem
is solved only for the Lie algebra $\mathfrak{sl}_2$ (\cite{Ga}, see
\cite[Chapter~3]{Ma5} for details).
A first step in understanding simple weight modules is to understand
the possible forms of the support of a module. For simple
finite-dimensional Lie algebras this study was initiated in \cite{Fe,Fu}
and completed in \cite{DMP}, where it was shown that any simple weight
module is either dense (that is, has the maximal possible support) or
is a quotient of a parabolically induced module. For some algebras of
Cartan type a similar result was obtained in \cite{PS}.
A new effect appears for generalizations of the Virasoro algebra. In
this case, already in the class of Harish-Chandra modules, there are
modules whose support is one element smaller than the maximal possible
one (in what follows we call such modules punctured). The description
of the support of weight modules over the Virasoro algebra is relatively
easy (see \cite{Ma3}), while for all other algebras there are only
partial results (\cite{Ma1,Ma2}).
The original motivation for the present paper is the problem of
classification of simple Harish-Chandra modules for the Witt algebra
$W_n$ (the algebra of derivations of a Laurent polynomial algebra in
$n$ commuting variables). For the moment this problem seems to be
too difficult, so as a first step in the present paper we completely
determine the support of such modules, basically reducing the
original classification problem to two cases: classification of
simple dense modules and classification of simple punctured modules.
The main result of the present paper asserts that any simple
Harish-Chandra $W_n$-module is either dense (with uniformly bounded
weight spaces) or punctured (with uniformly bounded weight spaces)
or is the simple quotient of some generalized Verma module. The
latter class of modules is relatively well-understood (\cite{BZ}).
It is known that both dense and punctured modules do exist
(\cite{Sh,Ra}). The main result of the paper is formulated and
proved in Section~\ref{s2} after some preliminaries collected in
Section~\ref{s1}.
We also obtain some partial results about the form of the support
for arbitrary weight modules over $\ensuremath{\mathbb{Z}}\xspace^n$-graded Lie algebras
$\mathfrak {g}=\oplus_{\alpha\in\ensuremath{\mathbb{Z}}\xspace^n}\mathfrak {g}_\alpha$ (root
space decomposition with respect to the abelian subalgebra
$\mathfrak {g}_0$) with the property $[\mathfrak
{g}_\alpha,\mathfrak {g}_\beta]=\mathfrak {g}_{\alpha+\beta}$ for
all $\alpha,\beta\in\ensuremath{\mathbb{Z}}\xspace^n$ with $\alpha\neq \beta$, which generalize
the results in \cite{Ma1,Ma2}. In fact, we recover and give a more
detailed proof for the latter results. In particular, we show that
under some mild additional assumptions the complement (in the weight
lattice) to the support of a simple module is either very small
(roughly speaking belongs to a sublattice of dimension $n-2$) or is
at least a half of the lattice, see Theorem 8. This is done in
Subsection~\ref{s3.1}. The case $n=2$ (related to \cite{Ma1,Ma2}) is
studied in detail in Subsection~\ref{s3.2}.
We finish the paper with a brief description of similar results for
so-called mixed modules. The support of a module can be refined to
encode the information about finite-dimensional and
infinite-dimensional weight spaces separately. A mixed module is a
module which contains both finite- and infinite-dimensional weight
spaces in the same coset of a weight lattice. In \cite{MZ} it was
shown that there are no simple mixed modules over the Virasoro
algebra. For $W_n$, $n>1$, mixed modules do exist. However, in
Subsection~\ref{s3.3} we show that the part of the support of the
mixed module, which describes infinite-dimensional weight spaces,
behaves similarly to the support of a Harish-Chandra module. In
particular, we deduce that under some mild assumptions the support
of a mixed module is contained in a half of the weight lattice. We
conjecture that any mixed $W_n$-module is neither dense nor
punctured.
\section{Notation and preliminaries}\label{s1}
\subsection{Weight modules over Witt algebras}\label{s1.1}
We denote by $\mathbb{Z}$, $\mathbb{Z}_+$, $\mathbb{N}$ and $\mathbb{Q}$
the sets of all integers, nonnegative integers, positive integers
and rational numbers, respectively.
For any set $X$ every $\alpha\in X^n$ has the form $\alpha=
(\alpha_1,\alpha_2,\dots,\alpha_n)$, where $\alpha_i\in X$ for all $i$.
For a Lie algebra $\mathfrak{a}$ we denote by $U(\mathfrak{a})$
the corresponding universal enveloping algebra.
Let $\Bbbk$ denote an algebraically closed field of characteristic zero.
For a positive integer $n$ the corresponding
classical Witt algebra $\mathfrak{W}_n$ is defined as the algebra
of derivations of the Laurent polynomial algebra
$\Bbbk[t_1^{\pm1},\dots,t_n^{\pm1}]$ in $n$ commuting variables
$t_1, t_2,\dots,t_n$. The algebra $\mathfrak{W}_n$ is simple and
for $n=1$ the algebra $\mathfrak{W}_1$ is the centerless Virasoro algebra.
We fix a positive integer $n$ and denote $\mathfrak{g}=\mathfrak{W}_n$.
For $i\in\{1,2,\dots,n\}$ set $\partial _i=t_i\frac{\partial}{\partial t_i}$
and denote by $\mathfrak {g}_0$ the $\Bbbk$-linear span of all the
$\partial _i$'s. Then $\mathfrak {g}_0$ is an abelian Lie subalgebra
of $\mathfrak{g}$, called the {\em Cartan subalgebra}.
For any $\alpha\in\mathbb{Z}^n$
set $t^\alpha=t_1^{\alpha_1}t_2^{\alpha_2}\cdots t_n^{\alpha_n}$.
If $\partial\in\mathfrak{g}_0$ is arbitrary, then
$t^\alpha\partial\in\mathfrak{g}$. Setting $\mathfrak{g}_\alpha=
t^\alpha\mathfrak{g}_0$ we obtain the following decomposition
of $\mathfrak{g}$:
\begin{equation}\label{eqno1}
\mathfrak{g}=\bigoplus_{\alpha\in\mathbb{Z}^n}\mathfrak{g}_\alpha.
\end{equation}
It is easy to check that $[\mathfrak{g}_\alpha,\mathfrak{g}_\beta]\subset
\mathfrak{g}_{\alpha+\beta}$
(and even $[\mathfrak{g}_\alpha,\mathfrak{g}_\beta]=
\mathfrak{g}_{\alpha+\beta}$ unless $\alpha=\beta$)
and hence the above decomposition,
in fact, induces a $\mathbb{Z}^n$-grading of $\mathfrak{g}$.
The adjoint action of $\mathfrak{g}_0$ on $\mathfrak{g}$ is
diagonalizable and the decomposition \eqref{eqno1} coincides
with the decomposition of $\mathfrak{g}$ into a direct sum
of $\mathfrak{g}_0$-eigenspaces.
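The inclusion and the equality claimed above can be verified from the explicit bracket in $\mathfrak{W}_n$: writing $D(u,\alpha)=t^{\alpha}(u_1\partial_1+\cdots+u_n\partial_n)$ for $u\in\Bbbk^{n}$ and $\alpha\in\mathbb{Z}^n$, a direct computation gives

```latex
\begin{equation*}
[D(u,\alpha),D(v,\beta)]
  =D\bigl((u\cdot\beta)v-(v\cdot\alpha)u,\;\alpha+\beta\bigr),
\qquad\text{where }u\cdot\beta=\sum_{i=1}^{n}u_{i}\beta_{i}.
\end{equation*}
```

In particular $[\mathfrak{g}_\alpha,\mathfrak{g}_\beta]\subset\mathfrak{g}_{\alpha+\beta}$, and for $\alpha\neq\beta$ one checks that the map $(u,v)\mapsto(u\cdot\beta)v-(v\cdot\alpha)u$ is surjective onto $\Bbbk^n$, so the inclusion is an equality.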
If $\gamma\in\Bbbk^n$ is such that
$\gamma_1, \gamma_2,\dots, \gamma_n$ are linearly independent over $\mathbb{Q}$, then the subalgebra
\begin{displaymath}
\mathrm{Vir}(\gamma)=\Bbbk\langle t^\beta(\gamma_1\partial_1+
\gamma_2\partial_2+\cdots+ \gamma_n\partial_n)\,:\,\beta\in\ensuremath{\mathbb{Z}}\xspace^n
\rangle
\end{displaymath}
of $\mathfrak{g}$ is a centerless {\em generalized}
(or {\em higher rank}) Virasoro algebra of rank $n$ (in the
sense of \cite{PZ}). For any subgroup $G$ of $\mathbb{Z}^n$
there is also the corresponding subalgebra
$\displaystyle\mathfrak{g}(G)=\bigoplus_{\alpha\in G}
\mathfrak{g}_{\alpha}$ of $\mathfrak{g}$.
A $\mathfrak{g}$-module $V$ is called a {\em weight} module
provided that the action of $\mathfrak{g}_0$ on $V$ is
diagonalizable. For example, from the previous paragraph it follows
that the adjoint $\mathfrak{g}$-module is a weight module. For any
weight module $V$ we have the decomposition
\begin{equation}\label{eqno2}
V=\bigoplus_{\lambda\in \mathfrak{g}_0^*}V_{\lambda},
\end{equation}
where $\mathfrak{g}_0^*=\mathrm{Hom}_{\Bbbk}(\mathfrak{g}_0,\Bbbk)$
and
\begin{displaymath}
V_{\lambda}=\{v\in V:\partial v=\lambda(\partial)v
\text{ for all }\partial\in \mathfrak{g}_0\}.
\end{displaymath}
The space $V_{\lambda}$ is called the {\em weight space} corresponding
to the {\em weight} $\lambda$. The {\em support} $\mathrm{supp}(V)$
of the weight module
$V$ is defined as the set of all weights $\lambda$ for which
$V_{\lambda}\neq 0$. If $V$ is a weight $\mathfrak{g}$-module
and $\dim_{\Bbbk}V_{\lambda}<\infty$ for all
$\lambda\in \mathfrak{g}_0^*$, the module
$V$ is called a {\em Harish-Chandra} module.
We consider $\mathbb{Z}^n$ as a subset of $\mathfrak{g}_0^*$ such
that each $\alpha\in \mathbb{Z}^n$ becomes the weight of the weight
space $\mathfrak{g}_{\alpha}$ in the adjoint $\mathfrak{g}$-module
(i.e. $\a(\partial_i)=\a_i$ for all $\alpha \in \mathbb{Z}^n$).
Under this convention, the decomposition \eqref{eqno1} becomes a
special case of the decomposition \eqref{eqno2} for the adjoint
$\mathfrak{g}$-module. Furthermore, if $V$ is a weight
$\mathfrak{g}$-module, then for all $\alpha\in \mathbb{Z}^n$ and
$\lambda\in \mathfrak{g}_0^*$ we have
$\mathfrak{g}_{\alpha}V_{\lambda}\subset V_{\lambda+\alpha}$. From
this it follows that if $V$ is an indecomposable weight module (in
particular, simple), then $\mathrm{supp}(V)\subset
\lambda+\mathbb{Z}^n$ for some $\lambda\in \mathfrak{g}_0^*$.
A weight $\mathfrak{g}$-module $V$ is called
\begin{itemize}
\item {\em dense} provided that $\mathrm{supp}(V)=\lambda+\mathbb{Z}^n$
for some $\lambda\in\mathfrak {g}_0^*$;
\item {\em punctured} provided that $\mathrm{supp}(V)=
\mathbb{Z}^n\setminus\{0\}$;
\item {\em uniformly bounded} provided that there exists a positive
integer $N$ such that $\dim V_\lambda<N$ for all
$\lambda\in\mathfrak {g}_0^*$.
\end{itemize}
\subsection{Highest weight modules over Witt algebras}\label{s1.2}
Choose some subgroup $G\subset \mathbb{Z}^n$ and some nonzero
$\beta\in \mathbb{Z}^n$ such that $\mathbb{Z}^n\cong G\oplus H$,
where $H$ is the subgroup of $\mathbb{Z}^n$, generated by
$\beta$. Define the following subalgebras of $\mathfrak{g}$:
\begin{displaymath}
\mathfrak{a}_G:=\bigoplus_{\alpha\in G}\mathfrak{g}_{\alpha};\quad
\mathfrak{n}^+_G:=\bigoplus_{\alpha\in G,k\in\mathbb{N}}
\mathfrak{g}_{\alpha+k\beta};\quad
\mathfrak{n}^-_G:=\bigoplus_{\alpha\in G,k\in\mathbb{N}}
\mathfrak{g}_{\alpha-k\beta}.
\end{displaymath}
This gives the triangular decomposition
$\mathfrak{g}=\mathfrak{n}^-_G\oplus \mathfrak{a}_G
\oplus \mathfrak{n}^+_G$ and allows us to define highest weight
$\mathfrak{g}$-modules with respect to this decomposition.
Note that the algebras $\mathfrak{n}^-_G$ and
$\mathfrak{n}^+_G$ do not really depend on $\beta$ but rather
on the coset $\beta+G$ (which can be chosen in two different
ways, namely as $\beta+G$ or $-\beta+G$).
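As a concrete example (for illustration only), for $n=2$ one may take $\beta=(1,0)$ and $G=\mathbb{Z}(0,1)$, in which case

```latex
\begin{equation*}
\mathfrak{a}_{G}=\bigoplus_{k\in\mathbb{Z}}\mathfrak{g}_{(0,k)},
\qquad
\mathfrak{n}^{\pm}_{G}=\bigoplus_{m\in\mathbb{N},\,k\in\mathbb{Z}}
\mathfrak{g}_{(\pm m,k)},
\end{equation*}
```

so the triangular decomposition sorts the root spaces according to the sign of the first coordinate of the root.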
Let $X$ be a simple weight $\mathfrak{a}_G$-module. Setting
$\mathfrak{n}^+_G X=0$ we turn $X$ into a
$\mathfrak{a}_G\oplus \mathfrak{n}^+_G$-module. We define
the {\em generalized Verma module} $M(G,\beta,X)$ as follows:
\begin{displaymath}
M(G,\beta,X):=
U(\mathfrak{g})\bigotimes_{\mathfrak{a}_G\oplus \mathfrak{n}^+_G}X.
\end{displaymath}
The module $M(G,\beta,X)$ is an indecomposable weight module and it
has a unique simple quotient, which we will denote by
$L(G,\beta,X)$. In \cite{BZ} it was shown that $L(G,\beta,X)$ is a
Harish-Chandra module if $X$ is a uniformly bounded exp-polynomial
module. Moreover, the module $L(G,\beta,X)$ itself is not uniformly
bounded unless $X$ is the trivial module (in which case
$L(G,\beta,X)$ is the trivial module itself). Loosely speaking we
will call $L(G,\beta,X)$ a {\em simple highest weight} module.
\section{Description of supports for Harish-Chandra
$\mathfrak{g}$-modules}\label{s2}
\subsection{Formulation of the main result}\label{s2.1}
Our main result is the following statement:
\begin{theorem}\label{mainthm}
Let $V$ be a nontrivial simple Harish-Chandra $\mathfrak{g}$-module. Then
exactly one of the following statements takes place:
\begin{enumerate}[(a)]
\item\label{mainthm.1} $V$ is dense and uniformly bounded.
\item\label{mainthm.2} $V$ is punctured and uniformly bounded.
\item\label{mainthm.3} $V\cong L(G,\beta,X)$ for some
$G$, $\beta$ and a uniformly bounded $X$ as in Subsection~\ref{s1.2}.
\end{enumerate}
\end{theorem}
For $n=1$ all Harish-Chandra modules over $\mathfrak{W}_1$ were
classified in \cite{M}. In this case the statement of
Theorem~\ref{mainthm} follows immediately from this
classification. Dense and punctured $\mathfrak{W}_1$-modules occur
as modules from the so-called {\em intermediate series}. In
particular, in what follows we assume $n>1$.
For $n>1$ the existence of modules of the type
Theorem~\ref{mainthm}\eqref{mainthm.3} follows from \cite{BZ}.
Examples of both dense and punctured $\mathfrak{W}_n$-modules
were constructed in \cite{Ra} (and in a slightly different
setting already in \cite{Sh}).
For higher rank Virasoro algebras a complete classification of all
simple Harish-Chandra modules was obtained in \cite{LZ}. From this
classification it follows that the statement of
Theorem~\ref{mainthm} is true for all higher rank Virasoro
algebras (simple highest weight modules over higher rank Virasoro
algebras are defined exactly in the same way as in
Subsection~\ref{s1.2}, we will denote these modules by
$L^{\mathfrak{a}}(G,\beta,X)$ where $\mathfrak{a}$ is the higher
rank Virasoro algebra in question).
\subsection{Auxiliary lemmata for not uniformly
bounded modules}\label{s2.2} To prove Theorem~\ref{mainthm} we
will need several auxiliary statements. In the whole section we
assume that $V$ is a simple Harish-Chandra $\mathfrak{g}$-module
and that for some (usually fixed) $\lambda\in\mathfrak{g}_0^*$ we
have $V=\oplus_{\alpha\in\ensuremath{\mathbb{Z}}\xspace^n}V_{\lambda+\alpha}$.
To start with, we assume that the module $V$ is not
uniformly bo\-un\-ded. Let $e_i=(\delta_{i1},\delta_{i2}\dots,\delta_{in})$,
where $\delta_{ij}$ is the Kronecker delta. Then
$\{e_i:i=1,2,\dots,n\}$ is the standard basis of $\mathbb{Z}^n$.
\begin{lemma}\label{l1}
Assume that $V$ is not uniformly bounded. Then, after an appropriate
change of variables $t_1,t_2,\dots,t_n$ and the weight
$\lambda$, we may assume that $\lambda\neq 0$ and there exists a nonzero
$v_0\in V_{\lambda}$ such that
\begin{equation}\label{gh1}
\mathfrak {g}_{e_i} v_0=0,\,\text{ for all }\, i=1,2,\dots,n.
\end{equation}
\end{lemma}
\begin{proof}
Choose some $\gamma\in\Bbbk^n$ such that
$\gamma_1,\gamma_2,\dots,\gamma_n$ are linearly independent over
$\mathbb{Q}$ and consider $V$ as a Harish-Chandra
$\mathrm{Vir}(\gamma)$-module, which is not uniformly bounded.
From \cite[Theorem~3.9]{LZ} and \cite{BZ} we obtain that every
simple uniformly bounded $\mathrm{Vir}(\gamma)$-module is either
trivial, dense or punctured. Hence the total number of uniformly
bounded simple
subquotients of the $\mathrm{Vir}(\gamma)$-module $V$ is finite
(it is bounded by $\dim_{\Bbbk} V_{0}+\dim_{\Bbbk} V_{\mu}<\infty$
for any nonzero $\mu\in\lambda+\mathbb{Z}^n$). As the module $V$
itself is not uniformly bounded, it must contain a simple
$\mathrm{Vir}(\gamma)$-subquotient, say $X$, which is not
uniformly bounded. By \cite[Theorem~3.9]{LZ}, the module $X$ is
a highest weight $\mathrm{Vir}(\gamma)$-module and, after an
appropriate change of variables $t_1,t_2,\dots,t_n$ and the weight
$\lambda$, the module $X$ is isomorphic to the module
$L^{\mathrm{Vir}(\gamma)}(G,e_1,Y)$, where $G=\mathbb{Z}\langle
e_2,e_3,\dots,e_n\rangle$ and $Y$ is the corresponding simple
uniformly bounded $\mathfrak{b}_G$-module (here
$\mathfrak{b}_G=\mathfrak{a}_G\cap \mathrm{Vir}(\gamma)$). By
\cite{BZ} we have that the dimensions $\dim_{\Bbbk}
V_{-ke_1+\lambda}$, $k\in\mathbb{N}$, are not uniformly bounded.
Fix an integer $N>3$ and consider the finite set
\begin{displaymath}
B_N(\lambda)=\lambda+\{\a\in\ensuremath{\mathbb{Z}}\xspace^n\,:\,|\a_i|\leq N
\text{ for all }i\}.
\end{displaymath}
As $V$ is a Harish-Chandra module and $\dim_{\Bbbk} V_{-ke_1+\lambda}$,
$k\in\mathbb{N}$, are not uniformly bounded,
there exists a positive integer $k$ such that we have
$-ke_1+\lambda\ne 0$ and
\begin{equation}\label{eqno3}
\dim_{\Bbbk} V_{-ke_1+\lambda}>
n\sum _{\beta\in B_N(\lambda)}\dim_{\Bbbk} V_{\beta}.
\end{equation}
Set $e_1'=(k+1)e_1+e_2$, $e_2'=ke_1+e_2$, and, finally,
$e_j'= e_1'+e_j$ for all
$j=3,4,\dots,n$. Then $\{e'_1,e'_2,\dots,e'_n\}$ is a new
$\mathbb{Z}$-basis of $\mathbb{Z}^n$.
Moreover, for any $i=1,2,\dots,n$ we have
$e'_i+(-ke_1+\lambda)\in B_N(\lambda)$. Note that
$\dim_{\Bbbk}\mathfrak{g}_{e'_i}=n$ (observe the factor $n$ in the
right hand side of \eqref{eqno3}). Hence,
because of our choice of $k$ above, there exists a nonzero
$v_0\in V_{-ke_1+\lambda}$ such that $\mathfrak{g}_{e'_i}v_0=0$
for all $i$. Thus, after another change of variables
(corresponding to the choice of the $\mathbb{Z}$-basis
$\{e'_i\}$ of $\mathbb{Z}^n$) and replacement of
$\lambda$ with $-ke_1+\lambda$, we obtain $v_0\in V_{\lambda}$
such that $\mathfrak {g}_{e_i} v_0=0$ for all $i=1,2,\dots,n$.
This completes the proof of the lemma.
\end{proof}
To proceed we need some more notation.
For any $\alpha,\beta\in\ensuremath{\mathbb{Z}}\xspace^n$, we write $\alpha>\beta$ and
$\alpha\geq \beta$ if $\alpha_i>\beta_i$ or $\alpha_i\geq \beta_i$ for
$i=1,2,\dots,n$, respectively. For $p,q\in \mathbb{Z}$
we set $[p,q]=\{x \in \ensuremath{\mathbb{Z}}\xspace\,:\, p \leq x
\leq q\}$ and define $(-\infty,q]$ and $[p,+\infty)$ similarly.
An element $v\in V$ will be called a {\em generalized highest
weight} element provided that there exists some $N\in\mathbb{N}$
such that $\mathfrak{g}_{\alpha}v=0$ for every
$\alpha\in\mathbb{Z}^n$ such that $\alpha>(N,N,\dots,N)$.
Analogues of the next two claims for different algebras appeared
in various disguises and setups, e.g., in \cite{Ma1,Ma2,Su,LZ}.
\begin{lemma}\label{lem101}
Let $X$ be a weight $\mathfrak{g}$-module and $x\in X$ be a
generalized highest weight element. Then every element in
$U(\mathfrak{g})x$ is a generalized highest weight element. In
particular, if $V$ is as in Lemma~\ref{l1} satisfying \eqref{gh1},
then every $v\in V$ is a generalized highest weight element.
\end{lemma}
\begin{proof}
Since the algebra $U(\mathfrak{g})$ is generated by all
$\mathfrak{g}_{\alpha}$, to prove the first assertion it is enough to
show that if $x$ is a generalized highest weight element,
$\beta\in \mathbb{Z}^n$ and $a\in \mathfrak{g}_{\beta}$, then
the element $ax$ is a generalized highest weight element.
Assume that $N\in\mathbb{N}$ is such that
$\mathfrak{g}_{\alpha}x=0$ for every $\alpha\in\mathbb{Z}^n$ with
$\alpha>(N,N,\dots,N)$. Set
$N'=N+|\beta_1|+|\beta_2|+\cdots+|\beta_n|+1$. Then for every
$\alpha>(N',N',\dots,N')$ we have
\begin{displaymath}
\mathfrak{g}_{\alpha}\mathfrak{g}_{\beta}x\subset
\mathfrak{g}_{\beta}\mathfrak{g}_{\alpha}x+
\mathfrak{g}_{\beta+\alpha}x=0
\end{displaymath}
as $\alpha,\beta+\alpha>(N,N,\dots,N)$ by our choice of $N'$.
The first assertion of the lemma follows.
The module $V$ from Lemma~\ref{l1} is simple and hence is generated
by every nonzero element. By \eqref{gh1}, the element $v_0$ is a
generalized highest weight element. Therefore the second assertion of our
lemma follows directly from the first assertion.
\end{proof}
\begin{lemma}\label{lem107}
Let $V$ be as in Lemma~\ref{l1} satisfying \eqref{gh1}.
Then $\mathfrak{g}_{-\alpha}v\neq 0$ for
any nonzero $v\in V$ and any $\alpha\in\mathbb{N}^n$.
\end{lemma}
\begin{proof}
Assume that $\mathfrak{g}_{-\alpha}v=0$ for some nonzero $v\in
V$. By Lemma~\ref{lem101}, there exists $N\in \mathbb{N}$ such
that $\mathfrak{g}_{e_i+N\alpha}v=0$ for every $i=1,2,\dots,n$. It
is easy to see that the monoid $\mathbb{Z}^n$ is generated, as a
monoid, by the elements $e_i+N\alpha$, $i=1,2,\dots,n$, and
$-\alpha$. It follows that $\mathfrak{W}_n$ is generated, as a Lie
algebra, by $\mathfrak{g}_{e_i+N\alpha}$, $i=1,2,\dots,n$, and
$\mathfrak{g}_{-\alpha}$. Hence $\mathfrak{W}_nv=0$, which implies
that $V$ is the trivial module. This contradicts our assumption
that $V$ is not uniformly bounded and the claim of the lemma
follows.
\end{proof}
Analogues of the next two claims appeared in the
setup of generalized Virasoro algebras in \cite[Lemma~3.1]{LZ}.
Our proofs below are generalizations of the ones from \cite{LZ}.
\begin{lemma}\label{lem102}
Let $V$ be as in Lemma~\ref{l1} satisfying \eqref{gh1}.
Then for any $\mu\in\ensuremath{\operatorname{Supp}}\xspace (V)$ and any $\alpha\in\mathbb{N}^n$
we have
\begin{displaymath}
\{x \in \mathbb{Z}\,:\,\mu+x\alpha \in \ensuremath{\operatorname{Supp}}\xspace(V)\}=(-\infty, m]
\end{displaymath}
for some $m\in \mathbb{N}\cup\{0\}$.
\end{lemma}
\begin{proof}
Set $J:=\{x \in \mathbb{Z}\,:\,\mu+x\alpha \in \ensuremath{\operatorname{Supp}}\xspace(V)\}$.
From Lemma~\ref{lem107} we have that $x\in J$ implies $x-1\in J$,
so either $J=(-\infty,m]$ for some $m\in \mathbb{Z}$ or
$J=\mathbb{Z}$; moreover, $m\geq 0$ since $0\in J$.
Suppose that $J=\mathbb{Z}$. Choose some $\partial\in\mathfrak {g}_0$
such that $\partial (t^\beta)=0$ implies $\beta=0$ for all $\beta\in\mathbb{Z}^n$.
Consider the subalgebra
$\mathfrak{V}_{\alpha}$ of $\mathfrak{g}$, generated by
the elements $t^{k\alpha}\partial$, $k\in\mathbb{Z}$. This
subalgebra is a classical centerless Virasoro algebra (of rank one)
and the space $X_{\alpha}:=\oplus_{x\in\ensuremath{\mathbb{Z}}\xspace} V_{\mu+x\alpha}$
admits a natural structure of a $\mathfrak{V}_{\alpha}$-module,
given by restriction.
From Lemma~\ref{lem101} we obtain that for any
$v\in X_{\alpha}$ we have $t^{k\alpha}\partial(v)=0$ for all
$k\in\mathbb{N}$ big enough. Hence, by \cite[Lemma~1.6]{M}, for every
$m\in\mathbb{Z}$ there exists $m'\in \mathbb{Z}$, $m'>m$,
and a nonzero vector $v(m')\in V_{\mu+m'\alpha}$ annihilated by
$t^{k\alpha}\partial$ for all $k\in \mathbb{N}$.
Therefore the weight $\mu$ occurs in infinitely many simple
highest weight $\mathfrak{V}_{\alpha}$-subquotients of
$X_{\alpha}$, implying $\dim_{\Bbbk}V_{\mu}=\infty$.
This contradicts our assumption that $V$ is a Harish-Chandra
module. Thus $J\neq \mathbb{Z}$ and the claim of the lemma follows.
\end{proof}
\begin{lemma}\label{l3}
Let $V$ be as in Lemma~\ref{l1} satisfying \eqref{gh1}.
Then one can change the variables $t_1,t_2,\dots,t_n$
(keeping the weight $\lambda$) such that
there exists $v_0\in V_{\lambda}$
with the following properties:
\begin{enumerate}[(a)]
\item\label{l3.1} The condition \eqref{gh1} is satisfied.
\item\label{l3.2} $\lambda+\alpha\not\in\mathrm{supp}(V)$ for any
nonzero $\alpha\in\mathbb{Z}_+^n$. \item\label{l3.3}
$\lambda-\alpha\in\mathrm{supp}(V)$ for any
$\alpha\in\mathbb{Z}_+^n$. \item\label{l3.4} for any
$\alpha,\beta\in \mathbb{Z}^n$ such that $\alpha\leq \beta$ we
have $\lambda+\alpha\not\in\mathrm{supp}(V)$ implies
$\lambda+\beta\not\in\mathrm{supp}(V)$.
\end{enumerate}
\end{lemma}
\begin{proof}
By Lemma~\ref{lem102} we have $\{x \in\mathbb{Z}\,:
\,\lambda+x(1,1,\dots,1) \in\mathrm{supp} (V)\}=(-\infty, p-2]
for some $p\geq 2$. Take $e'_1=(p+1,p,\dots,p)$,
$e'_2=(p+2,p+1,p,\dots,p)$
and $e'_i=e'_1+e_i$ for all $i=3,4,\dots,n$. This gives us a new
$\mathbb{Z}$-basis of $\mathbb{Z}^n$. The condition
\eqref{l3.1} is obvious. The condition \eqref{l3.4} follows from
Lemma~\ref{lem102}. The conditions
\eqref{l3.2} and \eqref{l3.3} are proved similarly to the
proof of Lemma~\ref{lem107} (note that $\lambda+(p-1)(1,1,\dots,1)\not
\in\mathrm{supp}(V)$).
\end{proof}
\subsection{The key lemma}\label{s2.4}
The following statement is our key observation.
For $a,b\in\mathbb{Z}^n$, we set $a\cdot b=a_1b_1+a_2b_2+\dots+a_nb_n$.
For any subgroup $G$ of $\ensuremath{\mathbb{Z}}\xspace^n$ and any $\mu\in\mathfrak{g}_0^*$ the space
$\displaystyle V(\mu+G)=\bigoplus_{\alpha\in \mu+G}V_{\alpha}$ is
naturally a $\mathfrak{g}(G)$-module by restriction.
\begin{lemma}\label{l4}
Assume that $V$ is not uniformly bounded. Then $V$ is a highest weight
module as in Theorem~\ref{mainthm}\eqref{mainthm.3}.
\end{lemma}
\begin{proof}
By Lemma~\ref{l3} we may assume that $V$ satisfies
Lemma~\ref{l3}\eqref{l3.1}--\eqref{l3.4}.
Fix $\gamma\in\Bbbk^n$ such that $\gamma_1, \gamma_2,\cdots, \gamma_n$
are linearly independent over $\mathbb{Q}$ and consider
$V$ as a Harish-Chandra $\mathrm{Vir}(\gamma)$-module by restriction.
To simplify our notation, set $\mathfrak{a}:=\mathrm{Vir}(\gamma)$.
Let $Y$ be a minimal $\mathfrak{a}$-submodule of $V$ such
that $Y\cap V_\lambda\neq 0$, and $Z$ the maximal
$\mathfrak{a}$-submodule of $Y$ such that $Z\cap V_\lambda=0$.
Then the $\mathfrak{a}$-module $Y/Z$ is simple.
As both $\lambda\neq 0$ and $V_\lambda\neq 0$ by Lemma~\ref{l3},
from \cite[Theorem~3.9]{LZ} it follows that the
$\mathfrak{a}$-module $Y/Z$ is isomorphic to
$L^{\mathfrak{a}}(G,\beta,X)$ for some $G$, $\beta$ and $X$ as
described in Subsection~\ref{s1.2} (but for the algebra
$\mathfrak{a}$ and after the identification of $\lambda$ with
$\lambda(\gamma_1\partial_1+\gamma_2\partial_2+\cdots+\gamma_n\partial_n)$).
Moreover, $X$ is uniformly bounded (and hence is a module from the
intermediate series). It now follows that
\begin{equation}\label{eqno5}
\left(\lambda-\mathbb{Z}_+ \beta + G\right)\setminus\{0\}
\subset \ensuremath{\operatorname{Supp}}\xspace(V).
\end{equation}
Moreover, from Lemma~\ref{l3}\eqref{l3.2} we have $\lambda+\alpha
+ G\not\subset \ensuremath{\operatorname{Supp}}\xspace(V)$ for any nonzero
$\alpha\in\mathbb{Z}_+^n$.
From Lemma~\ref{l3}\eqref{l3.2} it follows that there exists
$\alpha\in \mathbb{N}^n$ such that we have
$G=\{x\in\mathbb{Z}^n\,:\, \alpha\cdot x=0\}$. (For example, if
$\alpha_1=0$, then $le_1\in G$ for all $l\in\ensuremath{\mathbb{Z}}\xspace$, and $\lambda
+le_1\notin \mathrm{supp}(V)$ for sufficiently large $l$, which
contradicts the fact that $(\lambda+ G)\setminus\{0\}\subset
\ensuremath{\operatorname{Supp}}\xspace(V)$.) It follows from \eqref{eqno5} that $\lambda+x\in
\ensuremath{\operatorname{Supp}}\xspace(V)$ for all $x\in\mathbb{Z}^n$ with $x\cdot\alpha<0$.
We first consider the case when
$\{\lambda+k\beta+G\}\cap\mathrm{supp}(V)=\varnothing$
for some $k\in\mathbb{N}$. We may assume that $k$ is minimal possible.
In this case for any $\mu\in \{\lambda+(k-1)\beta+G\}\cap\mathrm{supp}(V)$
(note that the latter intersection is not empty because of our
assumption on $k$)
and any $x\in G$ we have $\mathfrak{g}_{x+\beta}V_{\mu}=0$. Hence
$V\cong L(G,\beta,X')$, where
\begin{displaymath}
X'=\bigoplus_{\mu\in \{\lambda+(k-1)\beta+G\}}V_{\mu}.
\end{displaymath}
Now consider the remaining case when
$\{\lambda+k\beta+G\}\cap\mathrm{supp}(V)\neq \varnothing$
for all $k\in\mathbb{N}$. Obviously, we can choose $k\in\mathbb{N}$
big enough such that the following two conditions are satisfied:
\begin{enumerate}[(I)]
\item\label{cond1}
$|\{\lambda+k\beta+G\}\cap \{\lambda+\mathbb{Z}_+^n\}|>1$;
\item\label{cond2}
$|\{\lambda+(k-1)\beta+G\}\cap \{\lambda+\mathbb{Z}_+^n\}|>0$.
\end{enumerate}
The space $V(\lambda+k\beta+G)$ is a Harish-Chandra
$\mathfrak{g}(G)$-module. Restricting this module to any
generalized Virasoro subalgebra of $\mathfrak{g}(G)$ (of the same
rank) and using \eqref{cond1}, \cite{LZ} and \cite{BZ} in the same
way as we did in the proof of Lemma~\ref{l1}, we get that this
module is not uniformly bounded. Hence we can repeat the arguments
of Lemma~\ref{l1} and find a $\mathbb{Z}$-basis
$\beta_2,\beta_3,\dots,\beta_n$ of $G$, $\mu\in \lambda+k\beta+G$,
and a nonzero element $v\in V_{\mu}$, such that
$\mathfrak{g}_{\beta_i}v=0$ for all $i=2,3,\dots,n$.
Let $\nu\in \{\lambda+(k-1)\beta+G\}\cap \{\lambda+\mathbb{Z}_+^n\}$,
which exists by \eqref{cond2}. Then $\nu\not\in\mathrm{supp}(V)$
by Lemma~\ref{l3}\eqref{l3.2}. Let $\beta'_1=\nu-\mu$. Then
$\beta'_1,\beta_2,\beta_3,\dots,\beta_n$ is a $\mathbb{Z}$-basis of
$\mathbb{Z}^n$ and $\mathfrak{g}_{\beta'_1}v=0$ as well.
From Lemma~\ref{lem102} we obtain that
$\mu+m(\beta'_1+\beta_2+\dots+\beta_n)\not\in\mathrm{supp}(V)$ for
all sufficiently large integers $m$. At the same time for
$m\in\mathbb{N}$ we have
\begin{displaymath}
\alpha\cdot m(\beta'_1+\beta_2+\dots+\beta_n)=
m\alpha\cdot \beta'_1=-m\alpha\cdot \beta<0.
\end{displaymath}
Hence for all $m$ sufficiently large we have
\begin{displaymath}
\mu+ m(\beta'_1+\beta_2+\dots+\beta_n)
\in \lambda-\mathbb{Z}_+ \beta + G.
\end{displaymath}
This contradicts \eqref{eqno5}. Hence it is not possible that the
intersection
$\{\lambda+k\beta+G\}\cap\mathrm{supp}(V)$ is nonempty
for all $k\in\mathbb{N}$. The claim of the lemma follows.
\end{proof}
\subsection{Proof of Theorem~\ref{mainthm}}\label{s2.5}
From Lemma~\ref{lem102} it follows that every simple dense and
every simple punctured $\mathfrak{g}$-module is uniformly bounded.
Let now $V$ be a simple nontrivial Harish-Chandra
$\mathfrak{g}$-module, which is neither dense nor punctured. Let
$\gamma\in\Bbbk^n$ be such that $\gamma_1,\gamma_2,\dots,\gamma_n$
are linearly independent over $\mathbb{Q}$. Then $V$ is a
$\mathrm{Vir}(\gamma)$-module by restriction. As $V$ is neither
dense nor punctured, from \cite[Theorem~3.9]{LZ} it follows that
every simple nontrivial subquotient of $V$ is a highest weight
module. In particular, $V$ is not uniformly bounded by \cite{BZ}.
From Lemma~\ref{l4} we now get that $V$ is a highest weight
$\mathfrak{g}$-module as described in
Theorem~\ref{mainthm}\eqref{mainthm.3}. This completes the proof.
\section{Support of non-Harish-Chandra weight modules}\label{s3}
In this section we prove an analogue of
Theorem~\ref{mainthm} for all weight modules (that is, without the
assumption that the module is a Harish-Chandra module). At the moment, we
cannot get such a nice statement as the one in
Theorem~\ref{mainthm}, but we get some information about the
support of the module generalizing the corresponding result for
the Virasoro algebra (\cite[Theorem~2]{Ma3}).
Actually in this section our algebra $\mathfrak {g}$ can be more
general than $W_n$. We assume that $\mathfrak {g}$ has a
$\ensuremath{\mathbb{Z}}\xspace^n$-gradation $\displaystyle\mathfrak {g}=
\bigoplus_{\alpha\in\ensuremath{\mathbb{Z}}\xspace^n}\mathfrak
{g}_\alpha$ such that $\mathfrak {g}_0$ is abelian and the gradation
itself is the root space decomposition with respect to $\mathfrak {g}_0$.
We also assume that $[\mathfrak {g}_\alpha,\mathfrak {g}_\beta]=\mathfrak
{g}_{\alpha+\beta}$ for all $\alpha,\beta\in\ensuremath{\mathbb{Z}}\xspace^n$ with $\alpha\ne
\beta$. Clearly both $W_n$ and higher rank Virasoro algebras are
examples of such Lie algebras.
\subsection{Cut modules}\label{s3.1}
For $a\in\mathbb{R}^n$ set
\begin{gather*}
\mathbb{Z}^{(a)}_-=\{x\in \mathbb{Z}^n:a\cdot x<0\};\\
\mathbb{Z}^{(a)}_+=\{x\in \mathbb{Z}^n:a\cdot x>0\};\\
\mathbb{Z}^{(a)}_0=\{x\in \mathbb{Z}^n:a\cdot x=0\};\\
\mathbb{Z}^{(a)}_{-0}=\{x\in \mathbb{Z}^n:a\cdot x\leq 0\}.
\end{gather*}
Following \cite{Ma1,Ma2} we call a simple weight $\mathfrak{g}$-module
$V$ {\em cut} provided that there exists $\lambda\in\mathrm{supp}(V)$,
$a\in\mathbb{R}^n$, $a\neq 0$, and $b\in \mathbb{Z}^n$ such that
$\mathrm{supp}(V)\subset \lambda+b+\mathbb{Z}^{(a)}_{-0}$.
Let $L(G,\beta,X)$ be a simple highest weight module as in
Subsection~\ref{s1.2}. It is easy to see that there exists
$a\in\mathbb{R}^n$, $a\neq 0$, such that $G\subset
\mathbb{Z}^{(a)}_0$ and $\beta\in \mathbb{Z}^{(a)}_+$ (actually in
this case we can even take $a\in\ensuremath{\mathbb{Z}}\xspace^n$). For any
$\lambda\in\mathrm{supp}(X)$ we have
$\mathrm{supp}(L(G,\beta,X))\subset\lambda+\mathbb{Z}^{(a)}_{-0}$,
in particular, $L(G,\beta,X)$ is a cut module (where we take $b=0$).
In the general case one cannot get rid of the element $b$ in the
definition of cut modules. Choose some $a\in\mathbb{R}^n$, $a\neq
0$, such that $\mathbb{Z}^{(a)}_0=\{0\}$. Set
\begin{displaymath}
\mathfrak{n}^{\pm}=\bigoplus_{\alpha\in \mathbb{Z}^{(a)}_{\pm}}
\mathfrak{g}_{\alpha}.
\end{displaymath}
We regard $\Bbbk$ as the trivial $\mathfrak{b}=
\mathfrak{g}_{0}\oplus\mathfrak{n}^{+}$-module. Then the kernel of
the canonical epimorphism from the Verma module
$U(\mathfrak{g})\otimes_{U(\mathfrak{b})}\Bbbk$ onto the trivial
$\mathfrak{g}$-module contains an irreducible weight subquotient
with support $\mathbb{Z}^{(a)}_-$, since $\mathfrak{g}$ contains a
rank $n$ Virasoro algebra (using \cite{HWZ}).
Our main result of this subsection is the following:
\begin{theorem}\label{thm31}
Let $V$ be a simple weight $\mathfrak{g}$-module, which is neither dense
nor trivial. Assume that $V$ contains a generalized highest weight element
for some choice of variables $t_1,\dots,t_n$. Then $V$ is a cut module.
\end{theorem}
\begin{proof}
For $n=1$ this is proved in \cite[Theorem~2]{Ma3}, so in what follows
we assume $n>1$. By Lemma~\ref{lem101}, every element of $V$ is a
generalized highest weight element with respect to our fixed
choice of $t_1,\dots,t_n$. We will use real convexity theory in our
arguments, so we will need to fix the corresponding setup.
Fix $\lambda\in \mathrm{supp}(V)$, $\lambda\neq 0$, and consider the set
$\Lambda=\lambda+\mathbb{Z}^n$, which we identify with $\mathbb{Z}^n$
and consider as a subset of $\mathbb{R}^n$. Set further
$\hat{\Lambda}=\Lambda\setminus \mathrm{supp}(V)$ and note that
$\hat{\Lambda}\neq \varnothing$ because of our assumption that
$V$ is not dense. The key point of the proof is the following
observation, which establishes a convexity property for $\hat{\Lambda}$:
\begin{lemma}\label{lem34}
Let $\mu\in \Lambda$ be a convex linear combination of some elements
from $\hat{\Lambda}$. Then either $\mu=0$ or $\mu\in \hat{\Lambda}$.
\end{lemma}
\begin{proof}
Assume that
\begin{equation}\label{eq301}
\mu=\sum_{i=1}^k a_i\mu_i,
\end{equation}
where $\mu_i\in \hat{\Lambda}$, $a_i\in\mathbb{R}$, $a_i>0$, $k>1$
and $\sum_{i=1}^k a_i=1$. We may even assume that $k$ is minimal
possible. The latter says that $\mu$ belongs to the interior of the
convex hull $H$ of the $\mu_i$'s. By \cite[Corollary~2.7.2]{La} we
may assume that $\mu_i-\mu$, $i=1,\dots,k$, are affinely
independent.
Denote by $X$ the subspace of $\mathbb{R}^n$, generated by the
elements $\mu_i-\mu$, $i=1,\dots,k$. Then $H-\mu\subset X$ and
since $\mu$ is a point in the interior of $H$, we get that the
convex cone in $X$ with origin in $\mu$, which contains all
$\mu_i-\mu$, $i=1,\dots,k$, coincides with $X$. By
\cite[Lemma~2.6.2]{La} we have $\dim X=k-1$ and hence without loss
of generality we may assume that the elements $\mu_i-\mu$,
$i=2,\dots,k$, are linearly independent. Then from \eqref{eq301}
we have
\begin{equation}\label{eq302}
-(\mu_1-\mu)=\sum_{i=2}^{k} \frac{a_i}{a_1}(\mu_i-\mu).
\end{equation}
As the vectors $\mu_i-\mu$, $i=2,\dots,k$, are linearly independent,
the equation \eqref{eq302} gives the unique linear combination of
these elements, which is equal to $-(\mu_1-\mu)$. Since all involved
elements $\mu_i, \mu$ are from $\mathbb{Z}^n$, it follows that all
$\frac{a_i}{a_1}$ in \eqref{eq302} are rational numbers (and are
positive). Multiplying, if necessary, with the denominator, we
obtain the equality
\begin{equation}\label{eq303}
-b_1(\mu_1-\mu)=\sum_{i=2}^{k} b_i(\mu_i-\mu),
\end{equation}
where all $b_i$'s are positive integers.
Note that for every $i=1,2,\dots,k$ we have
$\mathfrak{g}_{\mu_i-\mu}V_{\mu}\subset V_{\mu_i}=0$ by our
assumptions. From \eqref{eq303} we see that $\mathfrak{g}_{0}$ is
contained in the Lie subalgebra generated by all $\mathfrak{g}_{\mu_i-\mu}$,
$i=1,2,\dots,k$. Therefore $\mathfrak{g}_{0}V_{\mu}=0$. This implies $\mu=0$
or $V_{\mu}=0$ and completes the proof of our lemma.
\end{proof}
Let $\overline{\Lambda}$ denote the convex hull of $\hat{\Lambda}$
under the usual topology in $\mathbb{R}^n$. As the module $V$ is not
dense, we have $\overline{\Lambda}\neq \varnothing$. As the module
$V$ is not trivial, it must contain a nonzero weight and hence
$\overline{\Lambda}\neq \mathbb{R}^n$ by Lemma~\ref{lem34}. From the
hyperplane separation theorem (see e.g. \cite[3.2]{La}) it follows
that for every $\mu\in\mathrm{supp}(V)\setminus\{0\}$ there exists
$a_{\mu}\in\mathbb{R}^n$, $a_{\mu}\neq 0$, such that
$\hat{\Lambda}\subset \mu+\mathbb{Z}_+^{(a_{\mu})}$. Thus for the
fixed $\lambda$ there is a nonzero $a\in\mathbb{R}^n$ such that
$\hat{\Lambda}\subset \lambda+\mathbb{Z}_+^{(a)}$.
To proceed we will need a simple fact about ordered abelian groups.
\begin{lemma}\label{lem051}
Let $\beta\in \mathbb{Z}^n$ be such that $\beta\cdot a<0$. Then
there exists a finite collection of elements
$\beta^{\pm}_1,\dots,\beta^{\pm}_{n}$ from $\mathbb{Z}_+^{(a)}$ such
that $\{\beta,\beta^{\pm}_1,\dots,\beta^{\pm}_n\}$ generates
$\mathbb{Z}^n$ as a semigroup.
\end{lemma}
\begin{proof}
As $\beta\cdot a<0$, for every $\alpha\in \mathbb{Z}^n$ we can fix
some $k_{\alpha}\in\{0,1,\dots\}$ such that
$\alpha-k_{\alpha}\beta\in \mathbb{Z}_+^{(a)}$.
For $i\in\{1,\dots,n\}$ set $\beta^{\pm}_i=\pm e_i-k_{\pm e_i}\beta$.
Then all elements $\pm e_i$ belong to the semigroup, generated by
$\{\beta,\beta^{\pm}_1,\dots,\beta^{\pm}_n\}$, by construction and
hence the latter set generates the whole $\mathbb{Z}^n$ as a semigroup.
\end{proof}
For every $\beta\in \mathbb{Z}^n$ with $\beta\cdot a<0$, we fix
$\beta^{\pm}_1,\dots,\beta^{\pm}_{n}$ given by Lemma~\ref{lem051}.
Now we can generalize our analysis from Section~\ref{s2}.
\begin{lemma}\label{lem033}
Let $\beta\in \mathbb{Z}^n$ be such that $\beta\cdot a<0$. Then
there exists $\mu\in \mathrm{supp}(V)$ such that
$\mu-\beta\in \hat{\Lambda}$.
\end{lemma}
\begin{proof}
Let $\nu\in \hat{\Lambda}\neq\varnothing$. Consider the ray
$\{\nu+k\beta:k\in\mathbb{N}\}$. As $\beta\cdot a<0$, this ray must
intersect $\lambda+\mathbb{Z}_-^{(a)}\subset \mathrm{supp}(V)$.
The claim follows.
\end{proof}
\begin{lemma}\label{lem034}
Let $v\in V$ be a nonzero weight vector. Assume that there exists
$\beta\in \mathbb{Z}^n$ such that $\beta\cdot a<0$ and
$\mathfrak{g}_{\beta}v=0$. Then for any $w\in V$ there exists $N\in
\mathbb{N}$ such that for any $\alpha=b\beta+\gamma$, where
$b\in\mathbb{N}$ and $\gamma\in\mathbb{N}^n$ with
$\gamma>(N,N,\dots,N)$, we have $\mathfrak{g}_{\alpha}w=0$.
\end{lemma}
\begin{proof}
From Lemma~\ref{lem101} we know that $v$ is a generalized highest
weight element and hence there exists $N_1\in \mathbb{N}$ such
that we have $\mathfrak{g}_{\gamma}v=0$ for all
$\gamma\in\mathbb{N}^n$, $\gamma>(N_1,N_1,\dots,N_1)$. If not all
components $\beta_i$ of $\beta$ are negative, using that
$[\mathfrak{g}_{x},\mathfrak{g}_{y}]= \mathfrak{g}_{x+y}$ if $x\ne
y$, in the case $w=v$ the claim of the lemma follows by induction
on $b$ and taking $N=N_1$. Now suppose all components $\beta_i$ of
$\beta$ are negative. In the case $w=v$ we take $N=2N_1+2$. For
$\gamma>(N,N,\dots,N)$ with $0\notin \mathbb{N}\beta+\gamma$ the claim of
the lemma follows by induction on $b$. For
$\gamma>(N,N,\dots,N)$ with $\gamma=-k\beta$, $k>0$, we write
$\gamma=\gamma_1+\gamma_2$ such that
$\gamma_1,\gamma_2>(N_1,N_1,\dots,N_1)$, $0\notin
\mathbb{N}\beta+\gamma_1$ and $\gamma_2\ne b\beta+\gamma_1$. Then
we know that $\mathfrak{g}_{b\beta+\gamma_1}v=0$, and hence
$\mathfrak{g}_{b\beta+\gamma}v=0$. This proves the claim of the lemma
in the case $w=v$.
Using the same arguments as in Lemma~\ref{lem101} one shows that if the
claim of our lemma is true for some $w$, it is true for any element
from $\mathfrak{g}_{x}w$, $x\in\mathbb{Z}^n$. The general claim of the
lemma now follows from the fact that $V$ is simple and hence generated
by $v$.
\end{proof}
\begin{lemma}\label{lem035}
Let $v\in V$ be a nonzero weight vector. Assume that there exists
$\beta\in \mathbb{Z}^n$ such that $\beta\cdot a<0$ and
$\mathfrak{g}_{\beta}v=0$. Then there exists $N\in \mathbb{N}$ such
that for any $\alpha=b\beta+\sum_{\varepsilon\in\{\pm\}}\sum_{i=1}^n
b^{\varepsilon}_i\beta^{\varepsilon}_i+\gamma$, where
$b,b^{\pm}_i\in\mathbb{N}$, $i=1,\dots,n$, and
$\gamma\in\mathbb{N}^n$ with $\gamma>(N,N,\dots,N)$, we have
$\mathfrak{g}_{\alpha}v=0$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem033} for every $\varepsilon\in\{\pm\}$ and
$i\in\{1,2,\dots,n\}$ there is
$\mu_{i}^{\varepsilon}\in\mathrm{supp}(V)$ such that
$\mu_{i}^{\varepsilon}+\beta^{\varepsilon}_i\in \hat{\Lambda}$.
This yields
$\mathfrak{g}_{\beta^{\varepsilon}_i}V_{\mu_{i}^{\varepsilon}}=0$.
Hence, by Lemma~\ref{lem034}, there exists
$N_{i}^{\varepsilon}\in\mathbb{N}$ such that
\begin{equation}\label{eq50501}
\mathfrak{g}_{b_{i}^{\varepsilon}\beta^{\varepsilon}_i+
\gamma_{i}^{\varepsilon}}v=0
\end{equation}
for any $b_{i}^{\varepsilon}\in \mathbb{N}$ and any
$\gamma_{i}^{\varepsilon}\in\mathbb{N}^n$, $\gamma_{i}^{\varepsilon}>
(N_{i}^{\varepsilon},N_{i}^{\varepsilon},\dots,N_{i}^{\varepsilon})$.
Similarly, there exists $N_{\beta}\in\mathbb{N}$
such that
\begin{equation}\label{eq50502}
\mathfrak{g}_{b\beta+\gamma'}v=0
\end{equation}
for any $b\in\mathbb{N}$ and any $\gamma'\in\mathbb{N}^n$,
$\gamma'>(N_{\beta},N_{\beta},\dots,N_{\beta})$. Let
\begin{displaymath}
N= N_{\beta}+\sum_{\varepsilon\in\{\pm\}}
\sum_{i=1}^n N_{i}^{\varepsilon} +2n+1.
\end{displaymath}
Then every $\gamma>(N,N,\dots,N)$ can be written as
\begin{displaymath}
\gamma=\gamma'+ \sum_{\varepsilon\in\{\pm\}}
\sum_{i=1}^n \gamma_{i}^{\varepsilon},
\end{displaymath}
where $\gamma'>(N_{\beta},N_{\beta},\dots,N_{\beta})$ and
$\gamma_{i}^{\varepsilon}>
(N_{i}^{\varepsilon},N_{i}^{\varepsilon},\dots,N_{i}^{\varepsilon})$.
For this choice of $N$ using the expression
$$b\beta+\sum_{\varepsilon\in\{\pm\}}\sum_{i=1}^n
b^{\varepsilon}_i\beta^{\varepsilon}_i+\gamma=(b\beta+\gamma')+\sum_{\varepsilon\in\{\pm\}}\sum_{i=1}^n
(b^{\varepsilon}_i\beta^{\varepsilon}_i+\gamma^{\varepsilon}_i),$$
we deduce the claim of the lemma from \eqref{eq50501} and
\eqref{eq50502}.
\end{proof}
\begin{lemma}\label{lem036}
Let $\mu\in \hat{\Lambda}$ and $\beta\in\mathbb{Z}^n$ be such that
$\beta\cdot a<0$. Then $\mu-\beta\in \hat{\Lambda}.$
\end{lemma}
\begin{proof}
We have $\mathfrak{g}_{\beta}V_{\mu-\beta}\subset V_{\mu}=0$.
Suppose $V_{\mu-\beta}\ne0$ and fix any nonzero $v\in
V_{\mu-\beta}$. By Lemma~\ref{lem035},
there exists $N\in \mathbb{N}$
such that for any $\alpha=b\beta+\sum_{\varepsilon\in\{\pm\}}\sum_{i=1}^n
b^{\varepsilon}_i\beta^{\varepsilon}_i+\gamma$, where $b,b^{\pm}_i\in\mathbb{N}$,
$i=1,\dots,n$, and $\gamma\in\mathbb{N}^n$ with
$\gamma>(N,N,\dots,N)$, we have $\mathfrak{g}_{\alpha}v=0$.
Since $\{\beta,\beta^{\pm}_1,\dots,\beta^{\pm}_n\}$ generate $\mathbb{Z}^n$
as a semigroup (Lemma~\ref{lem051}), for any $\alpha\in \mathbb{Z}^n$
we can write
\begin{displaymath}
\alpha-(N+1,N+1,\dots,N+1)=b\beta+ \sum_{\varepsilon\in\{\pm\}}
\sum_{i=1}^n b_{i}^{\varepsilon}\beta^{\varepsilon}_i
\end{displaymath}
for some $b,b^{\pm}_i\in\mathbb{N}$ and hence
\begin{displaymath}
\alpha=b\beta+ \sum_{\varepsilon\in\{\pm\}}
\sum_{i=1}^n b_{i}^{\varepsilon}\beta^{\varepsilon}_i +(N+1,N+1,\dots,N+1).
\end{displaymath}
Note that $(N+1,N+1,\dots,N+1)>(N,N,\dots,N)$, which yields
$\mathfrak{g}_{\alpha}v=0$ from the previous paragraph. Thus $V$
is a trivial module which is a contradiction. Therefore
$V_{\mu-\beta}=0$ which completes the proof.
\end{proof}
Theorem~\ref{thm31} follows directly from Lemma~\ref{lem036}.
\end{proof}
\subsection{Case $n=2$}\label{s3.2}
In the case $n=2$ Theorem~\ref{thm31} implies the following
trichotomy result for all weight modules (see \cite{Ma2}):
\begin{corollary}\label{cor071}
Assume that $n=2$ and $V$ is a simple weight $\mathfrak{g}$-module.
Then $V$ is either dense or punctured or cut.
\end{corollary}
\begin{proof}
We work with the same setup (for $\Lambda$ and $\hat{\Lambda}$)
as in Theorem~\ref{thm31}.
Note that the trivial $\mathfrak{g}$-module is obviously cut. Hence,
because of Theorem~\ref{thm31}, it is enough to show that any simple
nontrivial weight module $V$ for which $|\hat{\Lambda}|>1$ contains a
generalized highest weight element.
Now suppose $|\hat{\Lambda}|>1$. If $\hat{\Lambda}$ contains a line in $\ensuremath{\mathbb{Z}}\xspace^n$, we can easily see
that $V$ is cut (we are using the identification defined before
Lemma~\ref{lem34}). From now on we also suppose that each line in $\ensuremath{\mathbb{Z}}\xspace^n$ contains at least
one weight of $V$. By changing the coordinate system $\{e_1, e_2\}$
and changing $\lambda$ (which is allowed to be $0$) if necessary, we
may assume that $(0,0)\notin \hat{\Lambda}$ and $(1,0), (1,k)\in
\hat{\Lambda}$
for some
$k\in\mathbb{N}$. Using Lemma~\ref{lem34} we may even assume
$k\in\{1,2\}$. If $(1,1)\in\mathrm{supp}(V)$ we must have
$\lambda+e_1+e_2=0$, and further $\hat{\Lambda}=\{(1,0),(1,2)\}$
(otherwise we use other points instead of $ (1,0),(1,2)$ to get
$k=1$).
If $(1,1)\in \hat{\Lambda}$, then $\mathfrak{g}_{e_1}V_{\lambda}=
\mathfrak{g}_{e_1+e_2}V_{\lambda}=0$ and since $\{e_1,e_{1}+e_{2}\}$
is a $\mathbb{Z}$-basis of $\mathbb{Z}^2$ we obtain that any element
in $V_{\lambda}$ is a generalized highest weight element.
Finally, we are left with the case $\hat{\Lambda}=\{(1,0),(1,2)\}$
(and we also have the equality $\lambda+e_1+e_2=0$, which will not
be used). Consider the $\mathbb{Z}$-basis $\alpha=e_1$,
$\beta=2e_1+e_2$ of $\mathbb{Z}^2$. We claim that in this case for
any $a,b\in\mathbb{N}$ such that $a,b>1$ we have
$\mathfrak{g}_{a\alpha+b\beta}V_{\lambda}=0$ (and hence any element
in $V_{\lambda}$ is a generalized highest weight element with
respect to the basis $\{\alpha+\beta, \alpha+2\beta\}$). As
$\mathfrak{g}_{\alpha}V_{\lambda}=0$ by our assumptions, by
induction it is enough to show that
$\mathfrak{g}_{b\beta}V_{\lambda}=0$ for all $b\in\mathbb{N}$,
$b>1$.
If $b$ is even we have $b\beta=\frac{b}{2}(e_1+2e_2)+\frac{3b}{2}e_1$
and $\mathfrak{g}_{b\beta}V_{\lambda}=0$ follows from
$\mathfrak{g}_{e_1+2e_2}V_{\lambda}\subset V_{\lambda+e_1+2e_2}=0$
and $\mathfrak{g}_{e_1}V_{\lambda}\subset V_{\lambda+e_1}=0$.
If $b=2k+1$ is odd (in particular, $b\geq 3$), we write
$b\beta=(b\beta-e_2)+e_2$ and have
\begin{displaymath}
\mathfrak{g}_{b\beta}V_{\lambda}=
[\mathfrak{g}_{b\beta-e_2},\mathfrak{g}_{e_2}]V_{\lambda}\subset
\mathfrak{g}_{b\beta-e_2}V_{\lambda+e_2}+
\mathfrak{g}_{e_2}\mathfrak{g}_{b\beta-e_2}V_{\lambda}.
\end{displaymath}
Similarly to the argument in the previous paragraph we have
$\mathfrak{g}_{b\beta-e_2}V_{\lambda}=0$. Note that
$\mathfrak{g}_{e_1+e_2}V_{\lambda+e_2}\subset
V_{\lambda+e_1+2e_2}=0$,
$\mathfrak{g}_{e_1-e_2}V_{\lambda+e_2}\subset V_{\lambda+e_1}=0$
and that
$b\beta-e_2=(4k+2)e_1+2ke_2=(3k+1)(e_1+e_2)+(k+1)(e_1-e_2)$. Then
we have $\mathfrak{g}_{b\beta-e_2}V_{\lambda+e_2}=0$, and further
$\mathfrak{g}_{b\beta}V_{\lambda}=0$. This completes the proof.
\end{proof}
\subsection{Mixed modules}\label{s3.3}
Another way to generalize the results of this paper is to study the
supports of the so-called mixed modules. A weight
$\mathfrak{g}$-module $V$ is called {\em mixed} (see \cite{Ma3})
provided that there exists $\lambda\in\mathrm{supp}(V)$ and
$\alpha\in\mathbb{Z}^n$ such that $\dim V_{\lambda}=\infty$ and
$\dim V_{\lambda+\alpha}<\infty$. In \cite{MZ} it is shown that for
the Virasoro algebra mixed modules do not exist. However, for $n>1$
simple highest weight $W_n$-modules are mixed in the general case
(this follows for example from \cite{HWZ}). Hence it is natural to
ask whether there are other classes of simple mixed modules, for
example if there are mixed dense or mixed punctured modules.
\begin{conjecture}\label{conj}
Any mixed $\mathfrak{g}$-module is cut.
\end{conjecture}
Below we give some motivation and (weak) evidence for this
conjecture. Denote by $\mathrm{supp}^{\infty}(V)$ the set of all
$\lambda\in\mathfrak{g}_0^*$ such that $\dim V_{\lambda}=\infty$.
For a simple weight $\mathfrak{g}$-module $V$ we also denote by
$\mathrm{supp}^{\mathrm{fin}}(V)$ the set of all
$\lambda\in\mathrm{supp}(V)+\mathbb{Z}^n$ such that $\dim
V_{\lambda}<\infty$. Note that $\mathrm{supp}^{\mathrm{fin}}(V)$ may
not be a subset of $\mathrm{supp}(V)$. Let
$\Lambda=\mathrm{supp}(V)+\mathbb{Z}^n$.
\begin{lemma}\label{lem071}
Let $V$ be a simple mixed module and $\mu\in \Lambda$ be a convex
linear combination of some elements from
$\mathrm{supp}^{\mathrm{fin}}(V)$. Then we have either $\mu=0$ or
$\mu\in \mathrm{supp}^{\mathrm{fin}}(V)$.
\end{lemma}
\begin{proof}
Let $\mu$ be a convex linear combination of some
$\mu_1,\dots,\mu_k\in \mathrm{supp}^{\mathrm{fin}}(V)$ as in the
proof of Lemma~\ref{lem34}. Assume $\mu\in \mathrm{supp}^{\infty}(V)$.
Then $\dim V_{\mu}=\infty$ while $\dim V_{\mu_i}<\infty$ for all
$i=1,\dots,k$. Hence there exists $v\in V_{\mu}$, $v\neq 0$, such
that $\mathfrak{g}_{\mu_i-\mu}v=0$ for all $i=1,\dots,k$. Repeating
the arguments from the proof of Lemma~\ref{lem34} we obtain
$\mathfrak{g}_{0}v=0$, which implies $\mu=0$. The claim follows.
\end{proof}
\begin{corollary}\label{cor072}
Any simple mixed module $V$ containing a generalized highest weight
element is a cut module.
\end{corollary}
\begin{proof}
We may assume that every element in $V$ is a generalized highest
weight element with respect to the basis $e_1,\dots,e_n$. Let
$\lambda\in \mathrm{supp}^{\mathrm{fin}}(V)$. Then for any
$\alpha\in\mathbb{N}^n$ we have $\lambda+\alpha\in
\mathrm{supp}^{\mathrm{fin}}(V)$ for otherwise $\dim
V_{\lambda+\alpha}=\infty$ and hence $V_{\lambda+\alpha}$ must
contain a nonzero vector $v$, annihilated by
$\mathfrak{g}_{-\alpha}$. Using that $v$ is a generalized highest
weight element and $\mathfrak{g}_{-\alpha}v=0$ one shows that
$\mathfrak{g}v=0$, implying that $V$ is trivial and hence not
mixed, a contradiction.
From $\lambda+\alpha\in \mathrm{supp}^{\mathrm{fin}}(V)$ for any
$\alpha\in\mathbb{N}^n$, similarly to the proof of Lemma~\ref{lem102}
one shows that $V$ is not dense. Hence the claim follows from
Theorem~\ref{thm31}.
\end{proof}
\begin{corollary}\label{cor073}
Let $n=2$. Then any simple mixed module $V$ for which
$|\mathrm{supp}^{\mathrm{fin}}(V)|>1$ is a cut module.
\end{corollary}
\begin{proof}
Using Lemma~\ref{lem071}, similarly to the proof of
Corollary~\ref{cor071} we may assume
$\lambda\in \mathrm{supp}^{\infty}(V)$,
$\lambda+e_1\in \mathrm{supp}^{\mathrm{fin}}(V)$
and either $\lambda+e_1+e_2\in \mathrm{supp}^{\mathrm{fin}}(V)$
or $\lambda+e_1+2e_2\in \mathrm{supp}^{\mathrm{fin}}(V)$.
In the first case from $\dim V_{\lambda}=\infty$ and
$\dim V_{\lambda+e_1},\dim V_{\lambda+e_1+e_2}<\infty$ we have that there
exists $v\in V_{\lambda}$, annihilated by both $\mathfrak{g}_{e_1}$
and $\mathfrak{g}_{e_1+e_2}$. As $\{e_1,e_1+e_2\}$ is a
$\mathbb{Z}$-basis of $\mathbb{Z}^2$, the element $v$ is a
generalized highest weight element and the claim follows from
Corollary~\ref{cor072}.
In the second case ($\lambda+e_1+2e_2\in \mathrm{supp}^{\mathrm{fin}}(V)$)
one proves the existence of a generalized highest weight element
in $V_{\lambda}$ similarly to the proof of Corollary~\ref{cor072}.
The claim follows.
\end{proof}
\begin{corollary}\label{cor074}
Let $n=2$. Then for any simple mixed punctured module $V$
we have $\mathrm{supp}^{\infty}(V)=\mathrm{supp}(V)$.
\end{corollary}
\begin{proof}
This follows from Corollary~\ref{cor073} and definitions.
\end{proof}
\vspace{0.5cm}
\begin{center}
\bf Acknowledgments
\end{center}
The research was done during the visit of the first author to
Wilfrid Laurier University in April and May 2009. The hospitality
and financial support of Wilfrid Laurier University are gratefully
acknowledged. The first author was partially
supported by the Swedish Research Council. The second
author was partially supported by NSERC
and NSF of China (Grant 10871192).
We thank Svante Janson for helpful discussions.
\section{Introduction}
Traditionally, the state-space approach to time series considers a representation:
$$\boldsymbol{y}_t = \boldsymbol{K} \boldsymbol{v}_t$$
$$\boldsymbol{v}_{t} = \boldsymbol{C} \boldsymbol{v}_{t-1}+\boldsymbol{D} \epsilon_t$$
Using the lag operator $L$, $\boldsymbol{v}_t = (\boldsymbol{I}-\boldsymbol{C} L)^{-1}\boldsymbol{D}\boldsymbol{\epsilon}_t$ and thus
\begin{equation}
\label{eq:state0}
\boldsymbol{y}_t = \boldsymbol{K}(\boldsymbol{I}-\boldsymbol{C} L)^{-1}\boldsymbol{D}\boldsymbol{\epsilon}_t
\end{equation}
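As a quick numerical sanity check of \eqref{eq:state0} (our own illustration; the matrices and dimensions below are arbitrary choices, not part of any model in this paper), one can iterate the state recursion and compare with the moving-average expansion $\boldsymbol{y}_t=\sum_{j\geq 0}\boldsymbol{K}\boldsymbol{C}^{j}\boldsymbol{D}\,\boldsymbol{\epsilon}_{t-j}$, which is exact here because the state starts at zero:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, T = 3, 2, 50            # state dim, output dim, horizon (arbitrary)
K = rng.standard_normal((n, m))
C = 0.5 * rng.standard_normal((m, m))
D = rng.standard_normal((m, n))
eps = rng.standard_normal((T, n))

# iterate the state recursion v_t = C v_{t-1} + D eps_t,  y_t = K v_t
v = np.zeros(m)
y = np.zeros((T, n))
for t in range(T):
    v = C @ v + D @ eps[t]
    y[t] = K @ v

# compare with the MA expansion y_t = sum_j (K C^j D) eps_{t-j}
y_ma = np.zeros((T, n))
for t in range(T):
    y_ma[t] = sum(K @ np.linalg.matrix_power(C, j) @ D @ eps[t - j]
                  for j in range(t + 1))
assert np.allclose(y, y_ma)
```

The comparison is with the truncated sum over $j\le t$, so it holds for any $\boldsymbol{C}$ over a finite horizon, stable or not.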
Let $\tilde{\boldsymbol{T}}$ be a rational matrix function such that $\tilde{\boldsymbol{T}}(\infty) = 0$ ($\tilde{\boldsymbol{T}}$ is called strictly-proper in this case.) A realization is a representation of $\tilde{\boldsymbol{T}}$ in the form
$$\tilde{\boldsymbol{T}}(z) = \boldsymbol{K}(z\boldsymbol{I} - \boldsymbol{C} )^{-1}\boldsymbol{D}$$
It is known \parencite{Gilbert, Kalman} that a realization exists for every strictly-proper $\tilde{\boldsymbol{T}}$. If $\boldsymbol{y}_t = \boldsymbol{T}(L)\boldsymbol{\epsilon}_t$ for a rational matrix $\boldsymbol{T}(L)$ such that $0$ is not a pole of $\boldsymbol{T}$ (that is, $\boldsymbol{T}(0)$ is finite), then $\tilde{\boldsymbol{T}}(L)= L^{-1}\boldsymbol{T}(L^{-1})$ is strictly proper. Hence:
$$\boldsymbol{T}(L) = L^{-1} \boldsymbol{K}(L^{-1}\boldsymbol{I}-\boldsymbol{C} )^{-1}\boldsymbol{D} = \boldsymbol{K}(\boldsymbol{I}-\boldsymbol{C} L)^{-1}\boldsymbol{D}$$
and so $\boldsymbol{y}$ can be represented in state-space form. ($\boldsymbol{T}(0)= \boldsymbol{I}$ in many models, although structural models may have $\boldsymbol{T}(0)\neq\boldsymbol{I}$.)
We note that this traditional state-space realization (which we will call the $\textsc{MA}$-state-space realization) gives a representation of $\boldsymbol{y}$ in terms of $\boldsymbol{\epsilon}$. The matrix $\boldsymbol{C}$ carries valuable information; for example, its eigenvalues determine the stability of the process. However, as a moving-average representation, it does not link $\boldsymbol{y}_t$ directly with its lagged values. We will take a different approach in this paper.
If $\boldsymbol{T}(0)$ is invertible, we note $\boldsymbol{Z}(s) = \boldsymbol{I} - \boldsymbol{T}(0)\boldsymbol{T}(s^{-1})^{-1}$ is strictly proper as a function of $s$. We can apply the same realization theorem to express
$$\boldsymbol{Z}(s) =\boldsymbol{H}(s\boldsymbol{I}-\boldsymbol{F})^{-1} \boldsymbol{G}$$
for some $\{\boldsymbol{H}, \boldsymbol{F}, \boldsymbol{G}\}$. With $s=L^{-1}$, this implies:
$$\boldsymbol{T}(0)\boldsymbol{T}(L)^{-1} = \boldsymbol{I} - \boldsymbol{Z}(L^{-1}) = \boldsymbol{I} -\boldsymbol{H}(L^{-1}\boldsymbol{I}-\boldsymbol{F} )^{-1}\boldsymbol{G} = \boldsymbol{I} - \boldsymbol{H}(\boldsymbol{I} -\boldsymbol{F} L)^{-1}\boldsymbol{G} L$$
And the model
$$\boldsymbol{y}_t = \boldsymbol{T}(L)\boldsymbol{\epsilon}_t$$
could be written as
$$\boldsymbol{T}(L)^{-1}\boldsymbol{y}_t = \boldsymbol{\epsilon}_t$$
or
$$\boldsymbol{T}(0)\boldsymbol{T}(L)^{-1}\boldsymbol{y}_t = (\boldsymbol{I} - \boldsymbol{H}(\boldsymbol{I} - \boldsymbol{F} L)^{-1}\boldsymbol{G} L)\boldsymbol{y}_t = \boldsymbol{T}(0)\boldsymbol{\epsilon}_t$$
$$\boldsymbol{y}_t = \boldsymbol{H}(\boldsymbol{I} -\boldsymbol{F} L)^{-1}\boldsymbol{G} L \boldsymbol{y}_t + \boldsymbol{T}(0)\boldsymbol{\epsilon}_t$$
This is what we call the \emph{autoregressive} ($\textsc{AR}$) state-space form. $\boldsymbol{y}$ can be forecast from its own lagged values, an important feature we would like to explore in this paper. Consider the Vector Autoregressive ($\textsc{var}$) model:
\begin{equation}
\label{eq:VAR}
\boldsymbol{y}_t = \sum_{i=1}^{p} \boldsymbol{\Phi}_i \boldsymbol{y}_{t-i} + \boldsymbol{\epsilon}_t
\end{equation}
In this case, $\boldsymbol{T}(L)^{-1} = \boldsymbol{I} - \sum_{i=1}^{p} \boldsymbol{\Phi}_i L^{i}$, so $\boldsymbol{Z}(s) = \sum_{i=1}^{p} \boldsymbol{\Phi}_i s^{-i}$. It is clear that $\boldsymbol{Z}(s)$ is strictly proper. Moreover, it has a single pole, of order $p$, at $0$. Kalman described its minimal realization explicitly. We will see that $\boldsymbol{F}$ can be made a nilpotent Jordan matrix, and so can be classified by the shape of its Jordan blocks. For a Jordan form $\boldsymbol{F}$ with $\boldsymbol{F}^p=0$, this shows:
$$\boldsymbol{P}(L) := \boldsymbol{Z}(L^{-1}) = \sum_{i=1}^p \boldsymbol{\Phi}_i L^i = \sum_{i=1}^p (\boldsymbol{H}\boldsymbol{F}^{i-1} \boldsymbol{G})L^i$$
and the regression is
$$\boldsymbol{y}_t = \sum_{i=1}^p (\boldsymbol{H}\boldsymbol{F}^{i-1} \boldsymbol{G})\boldsymbol{y}_{t-i} + \boldsymbol{\epsilon}_t$$
Here, $\boldsymbol{F}$ does not determine the stability of $\boldsymbol{y}_t$, but $\boldsymbol{y}_t$ is explicitly expressed in terms of its lagged values. This approach offers a number of crucial advantages. It turns out to be a generalization of the reduced-rank regression approach: in that case $\boldsymbol{F}=0$, and we have $\boldsymbol{y}_t = \boldsymbol{H}\boldsymbol{G}\boldsymbol{y}_{t-1}+\boldsymbol{\epsilon}_t$. We show that we can replicate most of the reduced-rank analysis here. Fixing $\boldsymbol{G}$, $\boldsymbol{H}$ can be computed by least squares. The likelihood function can be expressed, via the Schur determinant formula, as a determinant ratio which can be considered a generalized Rayleigh quotient. This generalizes the classical result that reduced-rank $\textsc{var}(1)$ models are related to generalized invariant subspace representations, and to the associated Rayleigh quotients. Therefore, maximizing the likelihood means minimizing a determinant ratio. The gradient and Hessian of the likelihood function are very easy to compute, and can be used to estimate model parameters with standard optimizers in the examples we consider. However, similar to the reduced-rank case, the likelihood function is unchanged if we replace $\boldsymbol{G}$ by $\boldsymbol{S}\boldsymbol{G}$ when $\boldsymbol{S}$ commutes with $\boldsymbol{F}$, so it is possible to restrict the search space to a lower-dimensional set. In the reduced-rank case, using a $QR$ factorization of $\beta'$ we can assume the rows of $\boldsymbol{G}$ are orthonormal. We have a similar situation in the minimal state-space case.
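This expansion is easy to check numerically. The sketch below (NumPy; the matrices $\boldsymbol{H}$, $\boldsymbol{F}$, $\boldsymbol{G}$ are arbitrary illustrations, not estimated quantities from the paper) computes the AR coefficients $\boldsymbol{\Phi}_i = \boldsymbol{H}\boldsymbol{F}^{i-1}\boldsymbol{G}$ generated by a nilpotent $\boldsymbol{F}$:

```python
import numpy as np

def ar_coefficients(H, F, G, p):
    """Coefficients Phi_i = H F^{i-1} G, i = 1..p, of the AR lag polynomial."""
    Phi, Fi = [], np.eye(F.shape[0])  # Fi tracks F^{i-1}
    for _ in range(p):
        Phi.append(H @ Fi @ G)
        Fi = Fi @ F
    return Phi

# Example: k = 3, F a single nilpotent Jordan block K(2, 1), so p = 2.
rng = np.random.default_rng(0)
F = np.array([[0.0, 1.0], [0.0, 0.0]])  # K(2, 1): F^2 = 0
H = rng.standard_normal((3, 2))
G = rng.standard_normal((2, 3))
Phi1, Phi2 = ar_coefficients(H, F, G, 2)
# Phi1 = H G, Phi2 = H F G; higher coefficients vanish by nilpotency.
assert np.allclose(Phi1, H @ G) and np.allclose(Phi2, H @ F @ G)
```

Because $\boldsymbol{F}$ is nilpotent with $\boldsymbol{F}^p = 0$, the lag polynomial truncates after $p$ terms regardless of the sizes of $\boldsymbol{H}$ and $\boldsymbol{G}$.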
As the structure of $\boldsymbol{F}$ could be classified by listing all Jordan forms with $\boldsymbol{F}^p=0$, we have a very explicit and simple classification of possible realizations. The approach offers a systematic parameter reduction technique that we hope to compare and combine with other parameter estimation techniques.
Reduced-rank regressions can be defined for any two variables $\boldsymbol{x}$ and $\boldsymbol{y}$, not only for an autoregressive $\boldsymbol{y}$. Our results are valid for the more general forecasting model with time-lagged regressors, the $\textsc{varx}$ model. We restrict ourselves to $\textsc{var}$ and $\textsc{varx}$ in this article; the more general case of $\textsc{varma}$ will be considered in a future article.
We collect the symbols used and compare our minimal $\textsc{AR}$-state-space approach with reduced-rank regression in \cref{tab:summarize} for the reader's convenience. The concepts and symbols will be introduced in subsequent sections.
\begin{table}[H]
\begin{tabular}{|p{3.5cm} | p{4.5cm}| p{7cm}|}
\hline
Concept/Symbol & Reduced-Rank & $\textsc{AR}$-state-space \\ [0.5ex]
\hline\hline
Dimension of $\boldsymbol{x}$ & $m$ & $m$\\
Dimension of $\boldsymbol{y}$ & $k$ & $k$\\
$\min(m, k)$ & $h$ & $h$ \\
Lag & $p=1$ or not applicable & $p$\\
Structure params & reduced rank $\mathfrak{l} = d < m$ & $\hat{\Psi} = [d_1,\cdots,d_p], d_i \geq 0; d_p > 0; \sum d_i\leq h$ \\
Total rank alloc. & $\mathfrak{l}=d$ & $\mathfrak{l} = \sum d_i$ \\
Min. state-space dim. & $\mathfrak{l}=d$ & $\sum jd_j$ \\
Alt. struct. params & $\mathfrak{l}=d$ & $\Psi = [(r_g, l_g), \cdots,(r_1, l_1)]_{r_g > \cdots > r_1,\ 0 < l_i = d_{r_i}}$ \\
Parameter reduction & $(m-d)(k-d)$ & $\sum_{i=1}^p(m-\sum_{j\geq i}d_j) \sum_{i=1}^p(k-\sum_{j\geq i}d_j)$\\
Jordan block & $\boldsymbol{J}(\lambda, r, l)$ & $\boldsymbol{J}(\lambda, r, l)$\\
$\lambda=0$ Jordan block & $\boldsymbol{K}(r, l)$ & $\boldsymbol{K}(r, l)$ \\
$\boldsymbol{F}$ & $\boldsymbol{F}= \boldsymbol{K}(1, l) = 0_{d, d}$ & $\boldsymbol{F}_{\Psi} = \oplus_{i=g}^1 \boldsymbol{K}(r_i, l_i)$\\
Factorization & $\beta = \boldsymbol{H}\boldsymbol{G}$ & $\boldsymbol{P}(L) = \boldsymbol{Z}(L^{-1}) = \boldsymbol{H}(\boldsymbol{I} -\boldsymbol{F} L)^{-1} \boldsymbol{G}$\\
Minimal criteria & $\boldsymbol{G}, \boldsymbol{H}$ of size $(d, m), (k, d)$, rank $d$ & $\boldsymbol{G}_{r, 0}, \boldsymbol{H}_{r, 0}$ of size $(d_{r}, m), (k, d_r)$, rank $d_r$. $\boldsymbol{G}_{:, 0} = (\boldsymbol{G}_{r, 0})_r$ and $\boldsymbol{H}_{:, 0} = (\boldsymbol{H}_{r, 0})_r$ are of full row and column rank\\
$LQ$\slash Gram-Schmidt & $\boldsymbol{G}\bG' = \boldsymbol{I}_{\mathfrak{l}}$ & $\boldsymbol{G}_{:, 0}\boldsymbol{G}_{:, 0}' = \boldsymbol{I}_{\mathfrak{l}}$; $\boldsymbol{G}_{r, l} \boldsymbol{G}_{r_1, 0}' = 0$ for $l>\max(0, r-r_1-1)$\\
Num. mat. ($\boldsymbol{A}$) &$\boldsymbol{X}\bX' -\boldsymbol{X}\boldsymbol{Y}'(\boldsymbol{Y}\bY')^{-1}\boldsymbol{Y}\boldsymbol{X}'$ & $\boldsymbol{X}_{\textsc{lag}}\bXLag' -\boldsymbol{X}_{\textsc{lag}}\boldsymbol{Y}'(\boldsymbol{Y}\bY')^{-1}\boldsymbol{Y}\boldsymbol{X}_{\textsc{lag}}'$\\
Denom. mat. ($\boldsymbol{B}$) & $\boldsymbol{X}\bX'$ & $\boldsymbol{X}_{\textsc{lag}}\bXLag'$ \\
$\kappa$ & $\kappa(\boldsymbol{G}) = \boldsymbol{G}$ & defined in \cref{eq:kappa} \\
Neg. log likelihood &$\log\det(\boldsymbol{G}\boldsymbol{A}\boldsymbol{G}')-\log\det(\boldsymbol{G}\boldsymbol{B}\boldsymbol{G}')$ & $\log\det(\kappa(\boldsymbol{G})\boldsymbol{A}\kappa(\boldsymbol{G})')-\log\det(\kappa(\boldsymbol{G})\boldsymbol{B}\kappa(\boldsymbol{G})')$\\
Gradient & $2\Tr((\boldsymbol{G}\boldsymbol{A}\boldsymbol{G}')^{-1}\boldsymbol{G}\boldsymbol{A}\eta')-2\Tr((\boldsymbol{G}\boldsymbol{B}\boldsymbol{G}')^{-1}\boldsymbol{G}\boldsymbol{B}\eta')$ & $2\Tr((\kappa(\boldsymbol{G})\boldsymbol{A}\kappa(\boldsymbol{G})')^{-1}\kappa(\boldsymbol{G})\boldsymbol{A}\kappa(\eta)')-2\Tr((\kappa(\boldsymbol{G})\boldsymbol{B}\kappa(\boldsymbol{G})')^{-1}\kappa(\boldsymbol{G})\boldsymbol{B}\kappa(\eta)')$ \\
($\mathcal{H}(\boldsymbol{A}, \boldsymbol{G}, \psi, \eta)$) & $2\Tr((\boldsymbol{G}\boldsymbol{A}\boldsymbol{G}')^{-1}(\boldsymbol{G}\boldsymbol{A}\psi' + \psi\boldsymbol{A}\boldsymbol{G}') - (\boldsymbol{G}\boldsymbol{A}\boldsymbol{G}')^{-1}\psi\boldsymbol{A}\eta')$ &
$2\Tr((\kappa(\boldsymbol{G})\boldsymbol{A}\kappa(\boldsymbol{G})')^{-1}(\kappa(\boldsymbol{G})\boldsymbol{A}\kappa(\psi)' + \kappa(\psi)\boldsymbol{A}\kappa(\boldsymbol{G})') - (\kappa(\boldsymbol{G})\boldsymbol{A}\kappa(\boldsymbol{G})')^{-1}\kappa(\psi)\boldsymbol{A}\kappa(\eta)')$ \\
Hessian & $\mathcal{H}(\boldsymbol{A}, \boldsymbol{G}, \psi, \eta) - \mathcal{H}(\boldsymbol{B}, \boldsymbol{G}, \psi, \eta)$ & $\mathcal{H}(\boldsymbol{A}, \boldsymbol{G}, \psi, \eta) - \mathcal{H}(\boldsymbol{B}, \boldsymbol{G}, \psi, \eta)$ \\
No. configs for $\boldsymbol{F}$ & $h$ & $\begin{pmatrix}p+h-1\\ p\end{pmatrix}$\\
\hline
\end{tabular}
\caption{Main symbols and concepts.}
\label{tab:summarize}
\end{table}
To summarize, in this paper, we:
\begin{itemize}
\item Introduce a framework for dimensional reduction of $\textsc{varx}$ models under the concept of a minimal $\textsc{AR}$-state-space. We show that in this case the minimal state-space can be classified explicitly in terms of Jordan forms.
\item Compute the likelihood function for each configuration of the Jordan form. It can be considered a multi-lag canonical correlation analysis. While we can no longer maximize the likelihood via eigenvectors, its gradient and Hessian are simple to compute, so we can apply standard optimization techniques.
\item Reduce the search domain for the likelihood function to a lower-dimensional set, via a generalized $LQ$\slash Gram-Schmidt procedure. $\boldsymbol{G}$ can be made to satisfy two sets of orthogonality relations. This allows us to apply manifold optimization techniques for large-scale problems.
\end{itemize}
In the appendix, we show the reduced search domain can be considered a vector bundle over a flag manifold. This geometric concept is not essential for using our model but is potentially helpful in higher dimensions\slash autoregressive orders.
The work \parencite{VeluReinselWichern} (which mentioned the model in \parencite{Brillinger:1969}) is probably a predecessor to this approach. It is related to the case $\boldsymbol{F}= \boldsymbol{K}(p, d_p)$ (a Jordan block with a single exponent). The authors handled the case where $\boldsymbol{H}_{p, j} = 0$ for all $j > 0$. We will discuss this model in more detail in \cref{sec:examples} and show that our approach also applies to their model in the general case.
As we only consider $\textsc{AR}$-state-space models in this article, we often drop the prefix $\textsc{AR}$ when mentioning state-space models. We will use the wildcard character {\it :} to replace running indices on (block) rows and columns of matrices.
\section{Review of reduced-rank regression and VAR(1) minimal realization.}
Reduced-rank regression was first studied in \parencite{Anderson}. It has found applications in different areas of statistics, notably in time series: Johansen \parencite{Johansen} used it in his famous test of cointegration, and reduced-rank regression for time series has been studied in \parencite{VeluReinselWichern, AhnReinsel, Anderson1999, Anderson2002}. \parencite{BoxTiao} introduced canonical correlation analysis to time series. We review reduced-rank regression briefly here, in a less general framework that is sufficient for the subsequent analysis. We study a model of the form:
$$\boldsymbol{y} = \beta \boldsymbol{x} + \boldsymbol{\epsilon}$$
$\boldsymbol{y}$ is a $k$-dimensional random variable, $\boldsymbol{x}$ is an $m$-dimensional variable, and $\beta$ is a $k\times m$ matrix of rank $d\leq \min(k, m)$. We can write $\beta = \boldsymbol{H}\boldsymbol{G}$ with $\boldsymbol{H}$ of size $k\times d$ and $\boldsymbol{G}$ of size $d\times m$, both of rank $d$. Given sample matrices $\boldsymbol{Y}$ of size $k\times n$ and $\boldsymbol{X}$ of size $m\times n$, for a fixed $\boldsymbol{G}$ the optimal $\boldsymbol{H}$ is obtained by:
$$\boldsymbol{H} = \boldsymbol{Y}\boldsymbol{X}^{\prime}\boldsymbol{G}'(\boldsymbol{G}\boldsymbol{X}\bX'\boldsymbol{G}')^{-1}$$
and the residual covariance matrix is
$$\mathcal{C}(\boldsymbol{G})= \boldsymbol{Y}\bY' - \boldsymbol{Y}\boldsymbol{X}'\boldsymbol{G}'(\boldsymbol{G}\boldsymbol{X}\bX'\boldsymbol{G}')^{-1}\boldsymbol{G}\boldsymbol{X}\boldsymbol{Y}'$$
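As a sanity check of these two formulas (a sketch with synthetic NumPy data; dimensions and seed are arbitrary), the closed-form residual covariance $\mathcal{C}(\boldsymbol{G})$ coincides with the empirical covariance of the residuals of the least-squares fit:

```python
import numpy as np

rng = np.random.default_rng(4)
k, m, n, d = 3, 4, 50, 2
X = rng.standard_normal((m, n))
Y = rng.standard_normal((k, n))
G = rng.standard_normal((d, m))        # any fixed G of full row rank

# Optimal H for fixed G is the least-squares regression of Y on G X.
GX = G @ X
H = Y @ GX.T @ np.linalg.inv(GX @ GX.T)

# Closed-form residual covariance C(G) from the text ...
C = Y @ Y.T - Y @ GX.T @ np.linalg.inv(GX @ GX.T) @ GX @ Y.T
# ... equals the outer product of the actual residuals.
resid = Y - H @ GX
assert np.allclose(C, resid @ resid.T)
```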
Following \parencite{Johansen_book}, we apply the Schur determinant formula to the block matrix:
$$\begin{pmatrix}
\boldsymbol{Y}\bY' & \boldsymbol{Y}\boldsymbol{X}'\boldsymbol{G}'\\
\boldsymbol{G} \boldsymbol{X} \boldsymbol{Y}' & \boldsymbol{G} \boldsymbol{X}\bX' \boldsymbol{G}'
\end{pmatrix}$$
We have:
$$\begin{aligned}\det(\boldsymbol{Y} \boldsymbol{Y}') \det(\boldsymbol{G} \boldsymbol{X}\bX' \boldsymbol{G}' - \boldsymbol{G}\boldsymbol{X}\boldsymbol{Y}'(\boldsymbol{Y}\bY')^{-1}\boldsymbol{Y}\boldsymbol{X}'\boldsymbol{G}') =\\
\det(\boldsymbol{G} \boldsymbol{X}\bX' \boldsymbol{G}')\det(\boldsymbol{Y} \boldsymbol{Y}' - \boldsymbol{Y}\boldsymbol{X}'\boldsymbol{G}'(\boldsymbol{G} \boldsymbol{X}\bX' \boldsymbol{G}')^{-1}\boldsymbol{G} \boldsymbol{X} \boldsymbol{Y}')\end{aligned}$$
So to minimize $\det(\mathcal{C}(\boldsymbol{G}))$ (as a function of $\boldsymbol{G}$) we need to minimize the ratio:
$$\mathcal{R}(\boldsymbol{G}) = \frac{\det(\boldsymbol{G} [\boldsymbol{X}\bX' - \boldsymbol{X}\boldsymbol{Y}'(\boldsymbol{Y}\bY')^{-1}\boldsymbol{Y}\boldsymbol{X}']\boldsymbol{G}')}{\det(\boldsymbol{G} \boldsymbol{X}\bX' \boldsymbol{G}')}$$
or its logarithm, which has a simple gradient:
$$\nabla_{\eta}\log(\mathcal{R}) = 2\Tr((\boldsymbol{G}\boldsymbol{A}\boldsymbol{G}')^{-1}\boldsymbol{G}\boldsymbol{A}\eta' - (\boldsymbol{G}\boldsymbol{B}\boldsymbol{G}')^{-1}\boldsymbol{G}\boldsymbol{B}\eta')$$
where $\boldsymbol{A} = \boldsymbol{X}\bX' - \boldsymbol{X}\boldsymbol{Y}'(\boldsymbol{Y}\bY')^{-1}\boldsymbol{Y}\boldsymbol{X}'$ and $\boldsymbol{B} = \boldsymbol{X}\bX'$. Here $\nabla_{\eta}$ is the directional derivative in the direction $\eta$. At a critical point we have
$$(\boldsymbol{G}\boldsymbol{A}\boldsymbol{G}')^{-1}\boldsymbol{G}\boldsymbol{A} = (\boldsymbol{G}\boldsymbol{B}\boldsymbol{G}')^{-1}\boldsymbol{G}\boldsymbol{B}$$
Therefore, $\boldsymbol{G}\boldsymbol{A} = \gamma \boldsymbol{G}\boldsymbol{B}$, where $\gamma =(\boldsymbol{G}\boldsymbol{A}\boldsymbol{G}')(\boldsymbol{G}\boldsymbol{B}\boldsymbol{G}')^{-1}$ is a matrix of size $d\times d$. So this is a generalized invariant subspace problem. Rewrite it as:
$$\boldsymbol{G}\boldsymbol{B}^{1/2}\boldsymbol{B}^{-1/2}\boldsymbol{A}\boldsymbol{B}^{-1/2} = \gamma \boldsymbol{G}\boldsymbol{B}^{1/2}$$
This becomes an invariant subspace problem, where the new matrix is $\boldsymbol{B}^{-1/2}\boldsymbol{A}\boldsymbol{B}^{-1/2}$ and $\boldsymbol{G}\boldsymbol{B}^{1/2}$ is the new variable. Alternatively, by comparing gradients, it is well known that this determinant-ratio minimization problem is equivalent to the trace-ratio problem of minimizing
$$ \Tr((\boldsymbol{G} [\boldsymbol{X}\bX'-\boldsymbol{X}\boldsymbol{Y}'(\boldsymbol{Y}\bY')^{-1}\boldsymbol{Y}\boldsymbol{X}']\boldsymbol{G}')(\boldsymbol{G} \boldsymbol{X}\bX' \boldsymbol{G}')^{-1})$$
and this leads to the problem of maximizing
$$\Tr((\boldsymbol{G} [\boldsymbol{X}\boldsymbol{Y}'(\boldsymbol{Y}\bY')^{-1}\boldsymbol{Y}\boldsymbol{X}']\boldsymbol{G}')(\boldsymbol{G} \boldsymbol{X}\bX' \boldsymbol{G}')^{-1})$$
which brings us to canonical-correlation-analysis.
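The determinant-ratio objective and its gradient can be validated against finite differences (a sketch with synthetic NumPy data; none of the numerical values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
k, m, n, d = 4, 5, 200, 2
X = rng.standard_normal((m, n))
Y = rng.standard_normal((k, n))

# A = XX' - XY'(YY')^{-1}YX' and B = XX' as in the text.
B = X @ X.T
XY = X @ Y.T
A = B - XY @ np.linalg.solve(Y @ Y.T, XY.T)

def log_ratio(G):
    """log det(G A G') - log det(G B G')."""
    return (np.linalg.slogdet(G @ A @ G.T)[1]
            - np.linalg.slogdet(G @ B @ G.T)[1])

def grad(G):
    """Euclidean gradient 2 (GAG')^{-1} G A - 2 (GBG')^{-1} G B."""
    return 2 * (np.linalg.solve(G @ A @ G.T, G @ A)
                - np.linalg.solve(G @ B @ G.T, G @ B))

# Central finite difference along a random direction eta.
G = rng.standard_normal((d, m))
eta = rng.standard_normal((d, m))
t = 1e-6
fd = (log_ratio(G + t * eta) - log_ratio(G - t * eta)) / (2 * t)
assert np.isclose(fd, np.sum(grad(G) * eta), rtol=1e-4, atol=1e-6)
```

The gradient matrix follows from $\nabla_\eta \log\det(\boldsymbol{G}\boldsymbol{A}\boldsymbol{G}') = 2\Tr((\boldsymbol{G}\boldsymbol{A}\boldsymbol{G}')^{-1}\boldsymbol{G}\boldsymbol{A}\eta')$ for symmetric $\boldsymbol{A}$.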
In the time series case, we have $\boldsymbol{y}=\boldsymbol{y}_t$ and $\boldsymbol{x} = \boldsymbol{y}_{t-1}$. The corresponding regression is:
$$\boldsymbol{y}_t = \boldsymbol{\Phi} \boldsymbol{y}_{t-1} + \boldsymbol{\epsilon}_t$$
This is the vector autoregressive model $\textsc{var}(1)$, with $p=1$ as we only have one lag. If $\boldsymbol{\Phi}$ is of reduced rank $d$, we can write $\boldsymbol{\Phi} = \boldsymbol{H}\boldsymbol{G}$ as above, and the model becomes, in lag-operator notation,
$$(\boldsymbol{I}_k-\boldsymbol{\Phi} L)\boldsymbol{y}_t = \boldsymbol{\epsilon}_t$$
Its transfer function is $\boldsymbol{T}(L)=(\boldsymbol{I}-\boldsymbol{\Phi} L)^{-1}$, so as in the introduction $\boldsymbol{Z}(s)=\boldsymbol{I}-\boldsymbol{T}(s^{-1})^{-1}=s^{-1}\boldsymbol{\Phi} $ has minimal state-space form:
$$\boldsymbol{Z}(s) = \boldsymbol{H}[s \boldsymbol{I}_d ]^{-1}\boldsymbol{G}$$
From our discussion so far, it is clear that the minimal $\textsc{AR}$-state-space realization of the lag polynomial of $\textsc{var}(1)$ model is exactly the reduced-rank regression model.
\section{Minimal state-space realization for VAR(2)}
Let us consider the case $p=2$. In this case, $\boldsymbol{Z}(s) = \boldsymbol{\Phi}_1 s^{-1} + \boldsymbol{\Phi}_2 s^{-2}$ for the regression:
$$\boldsymbol{y}_t = \boldsymbol{\Phi}_1 \boldsymbol{y}_{t-1}+ \boldsymbol{\Phi}_2 \boldsymbol{y}_{t-2} + \boldsymbol{\epsilon}_t$$
With $s = L^{-1}$, we want to realize $\boldsymbol{P}(L) = \boldsymbol{\Phi}_1 L + \boldsymbol{\Phi}_2 L^2$ in the form $\boldsymbol{H}(L^{-1}\boldsymbol{I} - \boldsymbol{F})^{-1}\boldsymbol{G} = \boldsymbol{H}(\boldsymbol{I} - \boldsymbol{F} L)^{-1}\boldsymbol{G} L$. The latter expression needs to be a polynomial in $L$. We note that if $\boldsymbol{F}$ is nilpotent with $\boldsymbol{F}^2=0$ then $(\boldsymbol{I}-\boldsymbol{F} L)^{-1} = \boldsymbol{I}+\boldsymbol{F} L$, therefore:
$$\boldsymbol{\Phi}_1 = \boldsymbol{H} \boldsymbol{G}$$
$$\boldsymbol{\Phi}_2 = \boldsymbol{H}\boldsymbol{F}\boldsymbol{G}$$
Assuming that is the case, we can further assume that $\boldsymbol{F}$ is in Jordan form:
$$\boldsymbol{F} = \begin{bmatrix}0_{l_2, l_2} & \boldsymbol{I}_{l_2} & 0\\ & 0_{l_2, l_2} & 0\\ & &0_{l_1, l_1} \end{bmatrix}$$
We can divide $\boldsymbol{H}$ and $\boldsymbol{G}$ into corresponding blocks, $\boldsymbol{H} = \begin{bmatrix}\boldsymbol{H}_{2, 0} & \boldsymbol{H}_{2, 1} & \boldsymbol{H}_{1, 0}\end{bmatrix}$, $\boldsymbol{G}=\begin{bmatrix} \boldsymbol{G}_{2, 1}\\ \boldsymbol{G}_{2, 0}\\ \boldsymbol{G}_{1, 0}\end{bmatrix}$. Expanding:
\begin{equation}
\label{eq:Phi2}
\begin{gathered}
\boldsymbol{\Phi}_1 = \boldsymbol{H}_{1, 0} \boldsymbol{G}_{1, 0} + \boldsymbol{H}_{2, 0} \boldsymbol{G}_{2,1} + \boldsymbol{H}_{2, 1} \boldsymbol{G}_{2, 0}\\
\boldsymbol{\Phi}_2 = \boldsymbol{H}_{2, 0} \boldsymbol{G}_{2, 0}
\end{gathered}
\end{equation}
Therefore, we have a realization if we can decompose $\boldsymbol{\Phi}_1$ and $\boldsymbol{\Phi}_2$ into this form. The \emph{minimal} requirement puts further restrictions: if $\boldsymbol{G}_{2, 0}$ or $\boldsymbol{G}_{1, 0}$ has zero rows, then we can simply drop those rows and get a smaller state-space realization. So a starting condition is that $\boldsymbol{G}_{2, 0}$ and $\boldsymbol{G}_{1, 0}$ should have no zero rows. The actual condition, specified by Kalman, is that the rows of $\boldsymbol{G}_{2, 0}$ and $\boldsymbol{G}_{1, 0}$ are linearly independent, and so are the columns of $\boldsymbol{H}_{2, 0}$ and $\boldsymbol{H}_{1, 0}$. This puts a constraint $l_1 + l_2 \leq k$, where $k$ is the dimension of the vector $\boldsymbol{y}_t$. Further, he proved that our guess is correct: $\boldsymbol{F}$ needs to be a nilpotent matrix. The discussion here generalizes to $\textsc{var}(p)$ models and, more generally, to regression models with time-lag structures, as we will see in the next sections.
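The block formulas in \cref{eq:Phi2} can be verified numerically. The sketch below (NumPy; the block sizes $l_2$, $l_1$ are chosen arbitrarily) builds $\boldsymbol{F} = \boldsymbol{K}(2, l_2)\oplus \boldsymbol{K}(1, l_1)$ and checks $\boldsymbol{\Phi}_1 = \boldsymbol{H}\boldsymbol{G}$ and $\boldsymbol{\Phi}_2 = \boldsymbol{H}\boldsymbol{F}\boldsymbol{G}$ against the block expansion:

```python
import numpy as np

rng = np.random.default_rng(2)
k, l2, l1 = 5, 2, 1
n = 2 * l2 + l1                      # state dimension

# F has a single identity block I_{l2} linking the two sub-blocks of the
# exponent-2 Jordan block; the exponent-1 block contributes zeros.
F = np.zeros((n, n))
F[:l2, l2:2 * l2] = np.eye(l2)

H = rng.standard_normal((k, n))      # H = [H_{2,0} | H_{2,1} | H_{1,0}]
G = rng.standard_normal((n, k))      # G = [G_{2,1}; G_{2,0}; G_{1,0}]
H20, H21, H10 = H[:, :l2], H[:, l2:2 * l2], H[:, 2 * l2:]
G21, G20, G10 = G[:l2], G[l2:2 * l2], G[2 * l2:]

# Phi_i = H F^{i-1} G agrees with the block formulas in the text.
Phi1, Phi2 = H @ G, H @ F @ G
assert np.allclose(Phi1, H10 @ G10 + H20 @ G21 + H21 @ G20)
assert np.allclose(Phi2, H20 @ G20)
```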
\section{A result of Kalman on rational matrix functions}
The results in this section are purely algebraic, involving polynomial and rational matrix functions. The main result was discovered by Kalman in \parencite{Kalman}. Together with earlier results of Gilbert in \parencite{Gilbert}, they give a complete picture of minimal state-space realizations of all proper rational transfer functions. We will recall a few definitions but will mostly focus on Proposition 3 of \parencite{Kalman}, which is most relevant to our situation. As before, a rational matrix function $\boldsymbol{Z}(s)$ is called strictly proper if $\lim_{s\to\infty}\boldsymbol{Z}(s) = 0$. By factoring out the (scalar) least common denominator $\boldsymbol{\bar{q}}(s)$ we can write:
\[ \boldsymbol{Z}(s) = \frac{1}{\boldsymbol{\bar{q}}(s)} \boldsymbol{\bar{P}}(s)\]
where the degree of $\boldsymbol{\bar{P}}$ is less than the degree of $\boldsymbol{\bar{q}}$. Gilbert addressed the case where $\boldsymbol{\bar{q}}$ has simple roots. In that case we can expand $\boldsymbol{Z}$ by partial fractions:
\[ \boldsymbol{Z}(s) = \sum_{i=1}^{g} \frac{\boldsymbol{H}_i \boldsymbol{G}_i}{s - \lambda_i} \]
where $\boldsymbol{H}_i, \boldsymbol{G}_i$ are of sizes $k\times d_i$ and $d_i\times k$ and of rank $d_i$. Note we are assuming that $\boldsymbol{\bar{q}}(s)$ can be factored into monomials, hence $\boldsymbol{H}_i, \boldsymbol{G}_i$ and $\lambda_i$ could be complex (but in the end $\boldsymbol{Z}$ is real). Gilbert showed that $\boldsymbol{Z}$ admits a minimal state-space realization with minimal state dimension $\sum_i d_i$ and constructed it as a direct sum of blocks of the form $\boldsymbol{H}_i(s \boldsymbol{I}_{d_i}-\lambda_i)^{-1}\boldsymbol{G}_i$. The case $g=1$ and $\lambda_1 = 0$ is the $\textsc{var}(1)$ case above, with $L=s^{-1}$ as usual.
Kalman addressed the case of roots with multiplicity. He showed that, in general, the minimal state-space realization can be constructed as a direct sum of realizations for the distinct roots, each of which may have multiplicity greater than $1$. For the case of one root, his result can be summarized in the following proposition, a restatement of Proposition 3 of \parencite{Kalman}; it works for matrices over $\mathcal{X} = \mathbb{R}^n$ or $\mathbb{C}^n$ as it is a purely algebraic result. We note he used the term \emph{irreducible} instead of \emph{minimal} realization, which is the standard term today. For our application the root will be zero and all coefficients will turn out to be real.
\begin{proposition}
Let $\boldsymbol{Z}(s) = \frac{1}{(s-\lambda)^p} \boldsymbol{\bar{P}}(s)$, where $\boldsymbol{\bar{P}}(s)$ is a polynomial matrix of degree less than $p$. Let $\boldsymbol{F}$ be an $n\times n$ matrix with a single eigenvalue $\lambda$. We take a basis of $\mathcal{X}$ so that $\boldsymbol{F}$ has the Jordan form:
$\boldsymbol{F} = \diag(\boldsymbol{J}(\lambda, r_g, l_g), \cdots, \boldsymbol{J}(\lambda, r_1, l_1))$ with $r_g > r_{g-1} > \cdots > r_1$, where $\boldsymbol{J}(\lambda, r, l)$ is defined by:
$$\boldsymbol{J}(\lambda, r) =\begin{bmatrix}\lambda & 1 & & \\
 & \lambda & \ddots & \\
 & & \ddots & 1\\
0 & & & \lambda
\end{bmatrix}$$
$$\boldsymbol{J}(\lambda, r, l) = \boldsymbol{J}(\lambda, r) \otimes \boldsymbol{I}_l$$
and $\boldsymbol{J}(\lambda, r)$ is of size $r$. Let $\boldsymbol{G}, \boldsymbol{H}$ be $n\times m, k\times n$ matrices expressed with respect to the same basis. Let $\mathfrak{l} = \sum l_i$. Then $\{\boldsymbol{H}, \boldsymbol{F}, \boldsymbol{G}\}$ is a minimal state-space realization of $\boldsymbol{Z}(s)$ if and only if both of the following conditions are satisfied:
\begin{enumerate}
\item The $\mathfrak{l}\times m$ matrix $\boldsymbol{G}_{:, 0}$, which consists of rows $(r_g-1)l_g+1, \cdots, r_g l_g, r_g l_g + (r_{g-1}-1)l_{g-1}+1,\cdots, r_g l_g + r_{g-1}l_{g-1},
\cdots, \sum_2^{g}r_i l_i +(r_1-1)l_1+1 ,\cdots, \sum_1^{g}r_i l_i$ has rank $\mathfrak{l}$.
\item The $k\times \mathfrak{l}$ matrix $\boldsymbol{H}_{:, 0}$, which consists of columns $1,\cdots, l_g, r_g l_g+1, \cdots r_g l_g + l_{g-1},\cdots, \sum_2^{g}r_i l_i +1 ,\cdots, \sum_2^{g}r_i l_i + l_1$ has rank $\mathfrak{l}$.
\end{enumerate}
\end{proposition}
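The Jordan blocks appearing in the proposition are straightforward to construct via a Kronecker product (a sketch; the helper names are ours, for illustration only):

```python
import numpy as np

def jordan_block(lam, r):
    """J(lam, r): the r x r Jordan block with eigenvalue lam."""
    return lam * np.eye(r) + np.diag(np.ones(r - 1), k=1)

def J(lam, r, l):
    """J(lam, r, l) = J(lam, r) kron I_l, of size r*l."""
    return np.kron(jordan_block(lam, r), np.eye(l))

def K(r, l):
    """Nilpotent block K(r, l) = J(0, r, l), used when lambda = 0."""
    return J(0.0, r, l)

# K(r, l) is nilpotent of exponent exactly r.
Kb = K(3, 2)
assert np.allclose(np.linalg.matrix_power(Kb, 3), 0)
assert not np.allclose(np.linalg.matrix_power(Kb, 2), 0)
```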
We have mostly preserved Kalman's notation; the notable differences are:
\begin{enumerate}
\item Replacing his $p$ with $k$ so as not to confuse it with the order of the $\textsc{var}$ model.
\item Using $g$ for the number of distinct $r$'s, as $q$ may be confused with the moving-average order.
\item Grouping the blocks with the same $r$ together and ordering the blocks in descending order of $r$. This conforms with the order of the McMillan denominator. We will see that the block with exponent $r$ contributes to the $r$ coefficients $\boldsymbol{\Phi}_{1},\cdots, \boldsymbol{\Phi}_r$, and the reduction in the rank of the $\boldsymbol{G}_{p, 0}$ block contributes the most to the reduction in overall parameters. So in a sense it is an order of importance.
\end{enumerate}
We note the dimension of the minimal realization is
$$n_{\textsc{min}}= \sum r_i l_i = \delta_M$$
$\delta_M$ is defined in terms of the Smith normal form. If a minimal state-space realization is given, $\boldsymbol{Z}(s)$ is recovered by expanding the terms. Conversely, given $\boldsymbol{Z}(s)$, Kalman gave an algorithm to recover $\{\boldsymbol{H}, \boldsymbol{F}, \boldsymbol{G}\}$. The algorithm is based on representing $\boldsymbol{\bar{P}}(s)$ in Smith normal form, then expanding the terms in Taylor series in $s-\lambda$ and reading the coefficients off from the representation. Kalman's proposition is essentially a translation between the Smith-McMillan form and the state-space form, a point carried out for all base fields by the work of \parencite{ItoWimmer}.
Our interest is in the case $\lambda = 0$. We will use the notation $\boldsymbol{K}(r, l) = \boldsymbol{J}(0, r, l)$ going forward. Consider again
$$\boldsymbol{Z}(s) = \boldsymbol{\Phi}_1 s^{-1} +\cdots + \boldsymbol{\Phi}_p s^{-p} = \frac{1}{s^p}\sum_{i=0}^{p-1} \boldsymbol{\Phi}_{p-i} s^{i}$$
with $\boldsymbol{\Phi}_p \neq 0$, let $\boldsymbol{P}(L)=\boldsymbol{Z}(L^{-1})$:
$$\boldsymbol{P}(L) = \boldsymbol{\Phi}_1 L +\cdots + \boldsymbol{\Phi}_pL^p$$
Applying the proposition for this case:
$$\boldsymbol{Z}(s) = \boldsymbol{H}(s \boldsymbol{I} -\boldsymbol{F})^{-1}\boldsymbol{G} = \boldsymbol{H}(\boldsymbol{I}-\boldsymbol{F} s^{-1})^{-1}s^{-1}\boldsymbol{G}$$
and observe that since $\lambda=0$, $\boldsymbol{F}$ is a \emph{nilpotent} Jordan matrix: $\boldsymbol{F}^p = 0$. Therefore we can rewrite the state-space realization in terms of $L$:
\begin{equation}
\boldsymbol{P}(L) = \boldsymbol{Z}(L^{-1}) = \boldsymbol{H} (\boldsymbol{I} - \boldsymbol{F} L)^{-1}\boldsymbol{G} L = \boldsymbol{H}\boldsymbol{G} L + \boldsymbol{H}\boldsymbol{F}\boldsymbol{G} L^2 + \cdots + \boldsymbol{H}\boldsymbol{F}^{p-1}\boldsymbol{G} L^p
\end{equation}
or
$$\boldsymbol{\Phi}_i = \boldsymbol{H}\boldsymbol{F}^{i-1}\boldsymbol{G}$$
Since $\boldsymbol{F}^{r_g} = 0$ and $\boldsymbol{\Phi}_p = \boldsymbol{H}\boldsymbol{F}^{p-1}\boldsymbol{G} \neq 0$, this implies $r_g = p$.
\section{Detailed description of the minimal realization.}
Let us now look deeper into the structure of $\boldsymbol{G}$ and $\boldsymbol{H}$. Since $\boldsymbol{F}$ is composed of $g$ blocks $\boldsymbol{K}(r_i, l_i)$, $\boldsymbol{G}$ and $\boldsymbol{H}$ can be decomposed into the corresponding $g$ row and column blocks, respectively. The block corresponding to $r_i$ is of size $r_i l_i$ and has $r_i$ sub-blocks, each of \emph{equal} size $l_i$. We call $r$ the exponent of the Jordan block $\boldsymbol{K}(r, l)$ and $l$ the sub-rank.
We index the sub-blocks corresponding to an exponent $r$ of $\boldsymbol{G}$ in descending order: $\boldsymbol{G}_{r, r-1}, \cdots, \boldsymbol{G}_{r, 0}$. The corresponding sub-blocks of $\boldsymbol{H}$ are in ascending order: $\boldsymbol{H}_{r, 0}, \cdots, \boldsymbol{H}_{r, r-1}$. We will see this more explicitly in the next section. The somewhat mysterious indexing originates from the correspondence between the Smith normal form and the state-space realization. With this double-indexing convention, it is clear that $\boldsymbol{G}_{:, 0}$ above is just the collection of all the $\boldsymbol{G}_{r_i, 0}$, and the assumption is that the rows of $\boldsymbol{G}_{:, 0}$ are linearly independent, so it has rank $\mathfrak{l} = \sum l_i \leq \min(k, m)$. We have a similar observation for the sub-blocks of $\boldsymbol{H}$.
Let $h = \min(k, m)$. Instead of describing the Jordan blocks by pairs $(r_i, l_i)$ with $l_i > 0$, it is sometimes more convenient to allow zero Jordan blocks. To summarize:
\begin{itemize}
\item For a $\textsc{var}$ model the possible choices of $\boldsymbol{F}$ could be classified by Jordan matrices such that $\boldsymbol{F}^{p-1}\neq 0; \boldsymbol{F}^p=0$. In other words, it could be classified by a list of tuples $\Psi = [(r_g, l_g), \cdots, (r_1, l_1)]$ with $l_i> 0, r_g = p, r_g > r_{g-1}> \cdots > r_1$, together with the rank constraint: $\mathfrak{l} = \sum l_i \leq h$. The corresponding Jordan matrix is $\boldsymbol{F} = \boldsymbol{F}_{\Psi}=\oplus_{i=g}^{i=1} \boldsymbol{K}(r_i, l_i)$.
\item An alternative classification is by a nonnegative integer vector $\hat{\Psi} = [d_1,\cdots, d_p]$ with $d_p > 0$, $d_i \geq 0$ and $\mathfrak{l} = \sum d_i \leq h$. $\Psi$ is obtained from $\hat{\Psi}$ as the list of tuples $[(i, d_i) \mid d_i > 0]$ in reversed order of $i$. Conversely, we can obtain $\hat{\Psi}$ from $\Psi$ by patching $d_i=0$ for the exponents $i$ not in $\Psi$. We call $\Psi$ and $\hat{\Psi}$ {\it structure parameters}.
\item To be consistent with the convention of the McMillan denominator, we will write the Jordan blocks in descending order of exponent.
\item For each $1 \leq i \leq p$ with $d_i > 0$, a Jordan block is defined by its exponent $i$ and its sub-rank $d_i$. $\boldsymbol{K}(i, d_i)= \boldsymbol{J}(0, i, d_i)$ is of size $i d_i$. We skip blocks with $d_i=0$.
\item The minimal state-space dimension, which is the dimension of $\boldsymbol{F}$ in a minimal realization, is $n_{\textsc{min}}=\sum_{i=1}^p i\, d_i = \sum_{j=g}^{1} r_j l_j$. It is equal to the McMillan degree.
\item For a given tuple $(k, m, p)$, and $h=\min(k, m)$, the highest possible minimal state-space dimension is $ph$ and corresponds to $\Psi = [(p, h)]$. The lowest possible dimension is $p$, corresponding to $\Psi=[(p, 1)]$.
\item The number of possible configurations of $\Psi$ with maximal degree $p$ is $\begin{pmatrix} h + p-1 \\ p \end{pmatrix}$. This follows from a balls-and-urns computation. A more straightforward application of balls-and-urns is the number of configurations with $\mathfrak{l} = \sum d_i \leq h$ and degree {\it not exceeding} $p$: in addition to the urns $1,\cdots, p$, we add an urn corresponding to the unused dimension $d_0 = h -\mathfrak{l}$. This is an $h$-ball, $(p+1)$-urn problem with $\begin{pmatrix} h + p \\ p \end{pmatrix}$ configurations. Our count for maximal degree exactly $p$ comes from
$$\begin{pmatrix} h + p -1 \\ p \end{pmatrix} = \begin{pmatrix} h + p \\ p \end{pmatrix} - \begin{pmatrix} h + p -1\\ p-1 \end{pmatrix}$$
In the code, we include a function to list all possible $\Psi$ for a given pair $(h, p)$.
\item The number of configurations grows polynomially (of degree $p$) in $m$. There are $ph$ possible minimal state-space dimensions, and the distribution of the number of $\Psi$'s per state-space dimension is bell-shaped. This analysis suggests that for large $m$ and $p$, iterating through the set of all possible $\Psi$ is not practical. We will discuss estimation in the next section.
\end{itemize}
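The enumeration of structure parameters described above can be sketched as follows (Python; the function name is illustrative, not necessarily the one in our accompanying code):

```python
from itertools import product
from math import comb

def structure_params(h, p):
    """All hat-Psi = [d_1, ..., d_p] with d_i >= 0, d_p > 0, sum d_i <= h."""
    return [list(d) for d in product(range(h + 1), repeat=p)
            if d[-1] > 0 and sum(d) <= h]

# The count matches the balls-and-urns formula C(h + p - 1, p).
h, p = 4, 3
configs = structure_params(h, p)
assert len(configs) == comb(h + p - 1, p)
```

Brute-force enumeration over $(h+1)^p$ tuples is fine for illustration, but as noted above it becomes impractical for large $h$ and $p$.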
As an example, for $p =3$, $\boldsymbol{P}(L)=\sum \boldsymbol{\Phi}_i L^i$ is represented as:
\[ L\begin{bmatrix}\boldsymbol{H}_{[3, :]} & \boldsymbol{H}_{[2, :]} & \boldsymbol{H}_{[1, :]} \end{bmatrix} \begin{bmatrix}\boldsymbol{I} - \boldsymbol{K}(3, d_3)L & & \\
& \boldsymbol{I} - \boldsymbol{K}(2, d_2)L & \\
& & \boldsymbol{I} -\boldsymbol{K}(1, d_1)L\\
\end{bmatrix}^{-1}
\begin{bmatrix}\boldsymbol{G}_{[3, :]} \\ \boldsymbol{G}_{[2, :]} \\ \boldsymbol{G}_{[1, :]} \end{bmatrix}
\]
As before we can use the nilpotency of $\boldsymbol{K}(r, l)$:
$$(\boldsymbol{I}- \boldsymbol{K}(r, l)L)^{-1} = \boldsymbol{I} + \sum_{i=1}^{r-1} \boldsymbol{K}(r, l)^iL^i$$
We note that $\boldsymbol{K}(r, l)^i$ is the matrix whose $i^{th}$ upper off-diagonal block is $\boldsymbol{I}_l$, with all other entries zero. Therefore, the contribution of that block is
\begin{equation}
\begin{bmatrix}\boldsymbol{H}_{r, 0} & \cdots & \boldsymbol{H}_{r, r-1}\end{bmatrix}\begin{bmatrix}
\boldsymbol{I}_l L & \boldsymbol{I}_l L^2 & \cdots & \boldsymbol{I}_l L^r \\
 & \boldsymbol{I}_l L & \ddots & \vdots \\
 & & \ddots & \boldsymbol{I}_l L^2\\
0 & & & \boldsymbol{I}_l L
\end{bmatrix}\begin{bmatrix} \boldsymbol{G}_{r, r-1} \\ \vdots \\ \boldsymbol{G}_{r, 0}\end{bmatrix}
\end{equation}
So its contribution to $\boldsymbol{\Phi}_i = \boldsymbol{H}\boldsymbol{F}^{i-1} \boldsymbol{G}$ is:
\[
\sum_{a=0}^{r-i} \boldsymbol{H}_{r, a} \boldsymbol{G}_{r, r-i-a}
\]
And so:
\begin{equation}
\label{eq:Phi}
\boldsymbol{\Phi}_i = \sum_{j\geq i} \sum_{a=0}^{j-i} \boldsymbol{H}_{j,a} \boldsymbol{G}_{j, j-i-a}
\end{equation}
It is now simple to recover the formulas for the cases $p=1$ and $p=2$ in the previous sections. While the setup looks more involved, we only need to remember the following rules:
\begin{itemize}
\item $\boldsymbol{H}_{j_1, a}\boldsymbol{G}_{j_2, b}$ only appears if $j_1 = j_2$. So only terms of form $\boldsymbol{H}_{j, a}\boldsymbol{G}_{j, b}$ are present. There are no terms linking distinct Jordan blocks.
\item $\boldsymbol{H}_{j, a}\boldsymbol{G}_{j, b}$ contributes to $\boldsymbol{\Phi}_{j - a - b}$.
\end{itemize}
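These rules can be checked numerically on a small example. The following sketch (numpy; it assumes $p=2$ with $d_1=d_2=1$, so $\boldsymbol{F} = \boldsymbol{K}(2,1)\oplus\boldsymbol{K}(1,1)$ and $n_{\textsc{min}}=3$; all variable names are ours) compares $\boldsymbol{H}\boldsymbol{F}^{i-1}\boldsymbol{G}$ with the block formula \cref{eq:Phi}:

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 2, 3
# Psi = {d_1 = 1, d_2 = 1}: F = K(2,1) (+) K(1,1), n_min = 3
F = np.zeros((3, 3))
F[0, 1] = 1.0
H = rng.standard_normal((k, 3))   # columns: H_{2,0}, H_{2,1}, H_{1,0}
G = rng.standard_normal((3, m))   # rows:    G_{2,1}, G_{2,0}, G_{1,0}
H20, H21, H10 = H[:, [0]], H[:, [1]], H[:, [2]]
G21, G20, G10 = G[[0]], G[[1]], G[[2]]

# Phi_i from the state-space realization: Phi_i = H F^{i-1} G
Phi1 = H @ G
Phi2 = H @ F @ G

# Phi_i from the block rules: only H_{j,a} G_{j,b} terms with j - a - b = i
Phi1_blocks = H20 @ G21 + H21 @ G20 + H10 @ G10
Phi2_blocks = H20 @ G20

assert np.allclose(Phi1, Phi1_blocks)
assert np.allclose(Phi2, Phi2_blocks)
```

The two computations agree, which is exactly the statement that each $\boldsymbol{H}_{j,a}\boldsymbol{G}_{j,b}$ lands in $\boldsymbol{\Phi}_{j-a-b}$.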
In the next section we will describe the associated regression model, as well as its estimation.
\section{VARX and least squares estimates.}
We will describe a model of the form:
\begin{equation}
\label{eq:VARX}
\boldsymbol{y}_t = \boldsymbol{\Phi}_1 \boldsymbol{x}_{t-1} + \boldsymbol{\Phi}_2 \boldsymbol{x}_{t-2} + \cdots +\boldsymbol{\Phi}_p \boldsymbol{x}_{t-p} + \boldsymbol{\epsilon}_t
\end{equation}
We allow $\boldsymbol{y}_{t-i}$ to be part of $\boldsymbol{x}_{t-i}$, as discussed in \parencite{VeluReinselWichern}. The classical reduced-rank regression would be a special case of \cref{eq:VARX} with $p=1$.
Let us now turn to the estimation problem. Assuming we have $T+p$ samples of data, we organize the data into matrices $\boldsymbol{Y}_f$ of size $k\times (T+p)$ and $\boldsymbol{X}_f$ of size $m\times (T+p)$. Let $\boldsymbol{Y}$ be the submatrix of $\boldsymbol{Y}_f$ of size $k\times T$ skipping the first $p$ samples (columns). Define $L^i\boldsymbol{X}$ to be the submatrix of $\boldsymbol{X}_f$ of size $m\times T$ skipping the last $i$ samples and the first $p-i$ samples. As we will not be using the first $p$ samples of $\boldsymbol{Y}_f$ or the last sample of $\boldsymbol{X}_f$, they are allowed to be null. When we have autoregressive terms it may be advantageous to share storage between $\boldsymbol{X}_f$ and $\boldsymbol{Y}_f$; however, we will consider $\boldsymbol{X}_f$ and $\boldsymbol{Y}_f$ to be separate matrices here to simplify notation. We now look at the problem of estimating $\boldsymbol{\Phi}_i$ such that the minimal state-space realization of $\boldsymbol{Z}(s) = \boldsymbol{\Phi}_1 s^{-1} + \boldsymbol{\Phi}_2 s^{-2} + \cdots +\boldsymbol{\Phi}_p s^{-p}$ corresponds to $\hat{\Psi} = \{d_1, \cdots, d_p\}$.
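The data layout above can be sketched in a few lines of numpy (an illustrative sketch only; the array and function names are ours, not the package's):

```python
import numpy as np

rng = np.random.default_rng(1)
k, m, p, T = 2, 3, 2, 10
Yf = rng.standard_normal((k, T + p))
Xf = rng.standard_normal((m, T + p))

Y = Yf[:, p:]                        # skip the first p columns

def lag_X(i):
    """L^i X: skip the last i and the first p - i columns of X_f."""
    return Xf[:, p - i: T + p - i]

# stacked regressor [L^p X; ...; L X], used as X_LAG later in the section
Xlag = np.vstack([lag_X(i) for i in range(p, 0, -1)])
assert Y.shape == (k, T)
assert Xlag.shape == (m * p, T)
# column t of Y (time p + t) is aligned with column t of L^i X (time p + t - i)
```

With this alignment, the regression of $\boldsymbol{Y}$ on the rows of the stacked matrix realizes \cref{eq:VARX} column by column.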
As with classical regressions, under Gaussian noise the log-likelihood in the parameters $\boldsymbol{H}, \boldsymbol{G}, \Omega$
has the form:
$$-\frac{T}{2}\log(\det(\Omega)) - \frac{1}{2}\sum_{t=1}^T(\boldsymbol{y}_t - \sum_i\boldsymbol{\Phi}_i \boldsymbol{x}_{t-i})'\Omega^{-1} (\boldsymbol{y}_t - \sum_i\boldsymbol{\Phi}_i \boldsymbol{x}_{t-i})$$
where $\boldsymbol{\Phi}_i$ is given by \cref{eq:Phi}. We arrive at the condition:
$$\Omega = \frac{1}{T}(\boldsymbol{Y} - \sum_i\boldsymbol{\Phi}_i L^i\boldsymbol{X})(\boldsymbol{Y} - \sum_i\boldsymbol{\Phi}_i L^i\boldsymbol{X})'$$
Since $\boldsymbol{F}$ is known from the specification of $\hat{\Psi}$, similar to the reduced-rank case
we will need to estimate $\boldsymbol{H}$ and $\boldsymbol{G}$. Before we proceed with the general case let us go through the case $p=2$. From \cref{eq:Phi2} we have:
$$\boldsymbol{y}_t = (\boldsymbol{H}_{1, 0} \boldsymbol{G}_{1, 0} + \boldsymbol{H}_{2, 0} \boldsymbol{G}_{2,1} + \boldsymbol{H}_{2, 1} \boldsymbol{G}_{2, 0}) \boldsymbol{x}_{t-1} + \boldsymbol{H}_{2, 0} \boldsymbol{G}_{2, 0} \boldsymbol{x}_{t-2} + \boldsymbol{\epsilon}_t$$
Similar to the $\textsc{var}(1)$ case, to find a least squares estimate by minimizing the determinant of the covariance matrix of $\boldsymbol{\epsilon}_t$, we fix $\boldsymbol{G}$ and solve for the optimal $\boldsymbol{H}$. The regressor is:
$$\begin{bmatrix}\boldsymbol{G}_{2, 1} L\boldsymbol{X} + \boldsymbol{G}_{2, 0}L^2\boldsymbol{X} \\ \boldsymbol{G}_{2, 0}L\boldsymbol{X} \\ \boldsymbol{G}_{1, 0} L\boldsymbol{X} \end{bmatrix}$$
We can write it as $\kappa(\boldsymbol{G}) \boldsymbol{X}_{\textsc{LAG}}$, where:
$$\boldsymbol{X}_{\textsc{LAG}} = \begin{bmatrix}L^2\boldsymbol{X} \\ L\boldsymbol{X} \end{bmatrix}$$
$$\kappa(\boldsymbol{G}) = \begin{bmatrix}\boldsymbol{G}_{2, 0} & \boldsymbol{G}_{2, 1} \\ & \boldsymbol{G}_{2, 0}
\\ & \boldsymbol{G}_{1, 0} \end{bmatrix}$$
Note that we write the lag exponents in descending order, matching the order in the McMillan denominator. We see:
$$\boldsymbol{H}(\boldsymbol{G}) = \begin{bmatrix}\boldsymbol{H}_{2, 0} & \boldsymbol{H}_{2, 1} & \boldsymbol{H}_{1, 0} \end{bmatrix} = \boldsymbol{Y}\boldsymbol{X}_{\textsc{LAG}}'\kappa(\boldsymbol{G})'(\kappa(\boldsymbol{G})\boldsymbol{X}_{\textsc{LAG}}\Xlag'\kappa(\boldsymbol{G})')^{-1}$$
So we need to minimize the determinant of residual covariance matrix:
$$\det(\boldsymbol{Y}\bY' - \boldsymbol{Y}\boldsymbol{X}_{\textsc{LAG}}'\kappa(\boldsymbol{G})'(\kappa(\boldsymbol{G})\boldsymbol{X}_{\textsc{LAG}}\Xlag'\kappa(\boldsymbol{G})')^{-1}\kappa(\boldsymbol{G})\boldsymbol{X}_{\textsc{LAG}} \boldsymbol{Y}' )$$
Again, using the Schur complement trick we need to minimize:
$$\mathcal{R}(\boldsymbol{G}) = \frac{\det(\kappa(\boldsymbol{G}) [\boldsymbol{X}_{\textsc{LAG}} \boldsymbol{X}_{\textsc{LAG}}' - \boldsymbol{X}_{\textsc{LAG}} \boldsymbol{Y}'(\boldsymbol{Y}\bY')^{-1}\boldsymbol{Y}\boldsymbol{X}_{\textsc{LAG}}']\kappa(\boldsymbol{G})')}{\det(\kappa(\boldsymbol{G}) \boldsymbol{X}_{\textsc{LAG}}\Xlag' \kappa(\boldsymbol{G})')}$$
Write its logarithm as $\log(\det(\kappa(\boldsymbol{G})\boldsymbol{A}\kappa(\boldsymbol{G})')) -\log(\det(\kappa(\boldsymbol{G})\boldsymbol{B}\kappa(\boldsymbol{G})'))$ with $\boldsymbol{A} = [\boldsymbol{X}_{\textsc{LAG}} \boldsymbol{X}_{\textsc{LAG}}' - \boldsymbol{X}_{\textsc{LAG}} \boldsymbol{Y}'(\boldsymbol{Y}\bY')^{-1}\boldsymbol{Y}\boldsymbol{X}_{\textsc{LAG}}']$ and $\boldsymbol{B} = \boldsymbol{X}_{\textsc{LAG}}\Xlag'$ as before. The logarithm has a simple gradient by Jacobi's formula:
$$\nabla_{\eta}{\log(\det(\kappa (\boldsymbol{G})\boldsymbol{A}\kappa(\boldsymbol{G})'))} = 2\Tr((\kappa (\boldsymbol{G})\boldsymbol{A}\kappa(\boldsymbol{G})')^{-1} \kappa (\boldsymbol{G})\boldsymbol{A}\kappa(\eta)')$$
where $\eta$ is a matrix in the shape of $\boldsymbol{G}$ to specify the direction for the directional derivative $\nabla_{\eta}$.
$$\nabla_{\eta}\log(\mathcal{R}(\boldsymbol{G})) = 2\Tr((\kappa (\boldsymbol{G})\boldsymbol{A}\kappa(\boldsymbol{G})')^{-1} \kappa (\boldsymbol{G})\boldsymbol{A}\kappa(\eta)')
- 2\Tr((\kappa (\boldsymbol{G})\boldsymbol{B}\kappa(\boldsymbol{G})')^{-1} \kappa (\boldsymbol{G})\boldsymbol{B}\kappa(\eta)') $$
for all $\eta$. So far, we have generalized the steps of the reduced-rank regression in the introduction. From here, the situation diverges. As $\kappa$ must be represented as a tensor, we do not have a matrix equation for $\boldsymbol{G}$. However, since we know the gradient (and the Hessian is also easy to compute) we can use a Hessian-based optimizer. Later on we will see that this is an optimization problem where the underlying function is invariant under a large group of matrix operations. We may use manifold optimization techniques for faster convergence. To conclude the section we note that the analysis thus far generalizes to higher $p$:
\begin{theorem}
\label{th:likelihood}
Assume the minimal state-space realization of $\sum_{i=1}^p \boldsymbol{\Phi}_i L^{i}$ is represented by a nilpotent Jordan matrix consisting of blocks $\boldsymbol{K}(p, d_p), \cdots,\boldsymbol{K}(1,d_1)$. Let $n_{\textsc{min}} = \sum_i i\, d_i$. For a fixed $\boldsymbol{G}$:
$$\boldsymbol{G} = \begin{bmatrix}\boldsymbol{G}_{p, p-1}\\ \vdots \\ \boldsymbol{G}_{p, 0} \\ \vdots \\
\boldsymbol{G}_{2, 1} \\ \boldsymbol{G}_{2, 0} \\
\boldsymbol{G}_{1, 0}
\end{bmatrix}$$
We define $\kappa(\boldsymbol{G})$ to be the block matrix with row blocks indexed by $(r, l)$, with $r$ arranged in descending order, $l$ arranged in {\it ascending} order ($0\leq l\leq r-1$), and column blocks ordered from $p$ to $1$:
\begin{equation}
\label{eq:kappa2}
\kappa(\boldsymbol{G})_{(r, l), i} = \left\{\begin{array}{l} \boldsymbol{G}_{r, r-l-i}\text{ if }i \leq r-l\\
0 \text{ otherwise.}
\end{array}
\right.
\end{equation}
In other words, within the row blocks indexed $(r, 0)$ through $(r, r-1)$, the rightmost column block is filled with $\boldsymbol{G}_{r, r-1}$ down to $\boldsymbol{G}_{r, 0}$ from the top down, and the blocks are then propagated up diagonally to the left.
\begin{equation}
\label{eq:kappa}
\kappa(\boldsymbol{G}) = \begin{bmatrix}
\boldsymbol{G}_{p, 0} & \boldsymbol{G}_{p, 1} & \cdots & \cdots & \boldsymbol{G}_{p, p-2} & \boldsymbol{G}_{p, p-1}\\
0 & \boldsymbol{G}_{p, 0} & \boldsymbol{G}_{p, 1} & \cdots & \cdots & \boldsymbol{G}_{p, p-2}\\
\vdots & & \ddots & \ddots & & \vdots \\
0 & \cdots & \cdots & \cdots & 0 & \boldsymbol{G}_{p, 0}\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & \cdots & \cdots & 0 & \boldsymbol{G}_{2, 0} & \boldsymbol{G}_{2, 1} \\
0 & \cdots & \cdots & \cdots & 0 & \boldsymbol{G}_{2, 0}\\
0 & \cdots & \cdots & \cdots & 0 & \boldsymbol{G}_{1, 0}\\
\end{bmatrix}
\begin{matrix}(p, 0)\\ (p, 1)\\ \vdots\\ (p, p-1)\\ \vdots \\ (2, 0) \\ (2, 1) \\ (1, 0)\end{matrix}
\end{equation}
Set
\begin{equation} \boldsymbol{X}_{\textsc{LAG}} = \begin{bmatrix}L^p\boldsymbol{X} \\ \vdots \\ L\boldsymbol{X} \end{bmatrix}
\end{equation}
Then the optimal $\boldsymbol{H}$ to minimize the determinant of the residual covariance matrix is given by:
\begin{equation}
\begin{gathered}
\boldsymbol{H}(\boldsymbol{G}) = \begin{bmatrix}\boldsymbol{H}_{p, 0} & \cdots & \boldsymbol{H}_{p, p-1} & \cdots & \boldsymbol{H}_{2, 0} & \boldsymbol{H}_{2, 1} & \boldsymbol{H}_{1, 0} \end{bmatrix} = \\ \boldsymbol{Y}\boldsymbol{X}_{\textsc{LAG}}'\kappa(\boldsymbol{G})'(\kappa(\boldsymbol{G})\boldsymbol{X}_{\textsc{LAG}}\Xlag'\kappa(\boldsymbol{G})')^{-1}
\end{gathered}
\end{equation}
and the residual determinant is:
\begin{equation}
\label{eq:resi_covar}
\det(\boldsymbol{Y}\bY' - \boldsymbol{Y}\boldsymbol{X}_{\textsc{LAG}}'\kappa(\boldsymbol{G})'(\kappa(\boldsymbol{G})\boldsymbol{X}_{\textsc{LAG}}\Xlag'\kappa(\boldsymbol{G})')^{-1}\kappa(\boldsymbol{G})\boldsymbol{X}_{\textsc{LAG}} \boldsymbol{Y}' )
\end{equation}
$\kappa(\boldsymbol{G})$ is of full row rank if $\boldsymbol{G}_{:, 0}$ is of full row rank. Minimizing \cref{eq:resi_covar} is equivalent to minimizing:
\begin{equation}
\label{eq:ratio_det}
\mathcal{R}(\boldsymbol{G}) = \frac{\det(\kappa(\boldsymbol{G}) [\boldsymbol{X}_{\textsc{LAG}} \boldsymbol{X}_{\textsc{LAG}}' - \boldsymbol{X}_{\textsc{LAG}} \boldsymbol{Y}'(\boldsymbol{Y}\bY')^{-1}\boldsymbol{Y}\boldsymbol{X}_{\textsc{LAG}}']\kappa(\boldsymbol{G})')}{\det(\kappa(\boldsymbol{G}) \boldsymbol{X}_{\textsc{LAG}}\Xlag' \kappa(\boldsymbol{G})')}
\end{equation}
which has the $\log$-gradient:
\begin{equation}
\begin{gathered}
2\Tr((\kappa (\boldsymbol{G})\boldsymbol{A}\kappa(\boldsymbol{G})')^{-1} \kappa (\boldsymbol{G})\boldsymbol{A} \kappa(\eta)') \\
- 2\Tr((\kappa (\boldsymbol{G})\boldsymbol{B}\kappa(\boldsymbol{G})')^{-1} \kappa (\boldsymbol{G})\boldsymbol{B}\kappa(\eta)')
\end{gathered}
\end{equation}
with $\boldsymbol{A} =\boldsymbol{X}_{\textsc{LAG}} \boldsymbol{X}_{\textsc{LAG}}' - \boldsymbol{X}_{\textsc{LAG}} \boldsymbol{Y}'(\boldsymbol{Y}\bY')^{-1}\boldsymbol{Y}\boldsymbol{X}_{\textsc{LAG}}'$ and $\boldsymbol{B} = \boldsymbol{X}_{\textsc{LAG}}\Xlag'$.
Define $\mathcal{H}(\boldsymbol{A}, \boldsymbol{G}, \psi, \eta)$ to be
\begin{equation}
2\Tr\Big((\kappa(\boldsymbol{G})\boldsymbol{A}\kappa(\boldsymbol{G})')^{-1}\kappa(\psi)\boldsymbol{A}\kappa(\eta)' - (\kappa(\boldsymbol{G})\boldsymbol{A}\kappa(\boldsymbol{G})')^{-1}(\kappa(\boldsymbol{G})\boldsymbol{A}\kappa(\psi)' + \kappa(\psi)\boldsymbol{A}\kappa(\boldsymbol{G})')(\kappa(\boldsymbol{G})\boldsymbol{A}\kappa(\boldsymbol{G})')^{-1}\kappa(\boldsymbol{G})\boldsymbol{A}\kappa(\eta)'\Big)
\end{equation}
Then the Hessian of $\log(\mathcal{R}(\boldsymbol{G}))$ in directions $(\psi, \eta)$ is $\mathcal{H}(\boldsymbol{A}, \boldsymbol{G}, \psi, \eta) - \mathcal{H}(\boldsymbol{B}, \boldsymbol{G}, \psi, \eta)$.
\end{theorem}
\begin{proof}
The proof is a generalization of the case $p=2$. We note, as before, that the row blocks of $\kappa$ are ordered in the order of $\boldsymbol{H}$, so the row block indices are $(r, 0), \cdots, (r, r-1)$, opposite to the convention for $\boldsymbol{G}$. The columns are indexed from $p$ to $1$ as they correspond to $\boldsymbol{X}_{\textsc{LAG}}$. First, we need to prove that the regressor for $\boldsymbol{H}$ is $\kappa(\boldsymbol{G})\boldsymbol{X}_{\textsc{LAG}}$. From \cref{eq:Phi}, the block of the regressor corresponding to $\boldsymbol{H}_{r, l}$ is $\sum_i \boldsymbol{G}_{r, r-l-i}L^{i}\boldsymbol{X}$, but this is the $(r, l)$ row block of $\kappa(\boldsymbol{G})\boldsymbol{X}_{\textsc{LAG}}$ by \cref{eq:kappa2}. Next, we need to show that $\kappa(\boldsymbol{G})$ is of full row rank. If $v\kappa(\boldsymbol{G}) = 0$ then the columns of $v$ corresponding to the blocks $(r, 0)$ are zero, as the rows of $\boldsymbol{G}_{:, 0}$ are linearly independent. From the block triangular shape of $\kappa$ we can show $v_{r, l}=0$ inductively in $l$. Hence, $\kappa(\boldsymbol{G})\boldsymbol{U} \kappa(\boldsymbol{G})'$ does not have zero as an eigenvalue if $\boldsymbol{U}$ is positive definite. Therefore, $\kappa(\boldsymbol{G})\boldsymbol{U} \kappa(\boldsymbol{G})'$ is also positive definite, so the determinant ratio and its logarithm in the theorem are well-defined. The remaining calculations are routine.
\end{proof}
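The gradient and Hessian formulas can be sanity-checked against finite differences. Below is a minimal numpy sketch for $p=2$, $d_2=d_1=1$, using generic symmetric positive definite stand-ins for $\boldsymbol{A}$ and $\boldsymbol{B}$; the helper names (`kappa`, `logR`, `grad_dir`, `hess_dir`) are ours, and the directional Hessian is written out as our own expansion of the second derivative:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 2
# generic symmetric positive definite stand-ins for A and B of the theorem
CA = rng.standard_normal((2 * m, 3 * m))
CB = rng.standard_normal((2 * m, 3 * m))
A, B = CA @ CA.T, CB @ CB.T

def kappa(G):
    """kappa for p = 2, d_2 = d_1 = 1; rows of G are G_{2,1}, G_{2,0}, G_{1,0}."""
    G21, G20, G10 = G
    z = np.zeros(m)
    return np.array([np.r_[G20, G21], np.r_[z, G20], np.r_[z, G10]])

def logR(G):
    K = kappa(G)
    return (np.linalg.slogdet(K @ A @ K.T)[1]
            - np.linalg.slogdet(K @ B @ K.T)[1])

def grad_dir(G, eta):
    # directional derivative of log R at G in direction eta
    K, Ke = kappa(G), kappa(eta)
    term = lambda U: 2 * np.trace(np.linalg.solve(K @ U @ K.T, K @ U @ Ke.T))
    return term(A) - term(B)

def hess_dir(G, psi, eta, U):
    # second directional derivative of log det(kappa(G) U kappa(G)')
    K, Kp, Ke = kappa(G), kappa(psi), kappa(eta)
    Minv = np.linalg.inv(K @ U @ K.T)
    t1 = 2 * np.trace(Minv @ Kp @ U @ Ke.T)
    t2 = 2 * np.trace(Minv @ (Kp @ U @ K.T + K @ U @ Kp.T) @ Minv @ K @ U @ Ke.T)
    return t1 - t2

G = rng.standard_normal((3, m))
eta = rng.standard_normal((3, m))
psi = rng.standard_normal((3, m))
eps = 1e-6
fd = (logR(G + eps * eta) - logR(G - eps * eta)) / (2 * eps)
assert abs(fd - grad_dir(G, eta)) < 1e-4 * max(1.0, abs(fd))
fd2 = (grad_dir(G + eps * psi, eta) - grad_dir(G - eps * psi, eta)) / (2 * eps)
hd = hess_dir(G, psi, eta, A) - hess_dir(G, psi, eta, B)
assert abs(fd2 - hd) < 1e-3 * max(1.0, abs(hd))
```

Since $\kappa$ is linear in $\boldsymbol{G}$, $\kappa(\boldsymbol{G}+\epsilon\eta) = \kappa(\boldsymbol{G})+\epsilon\kappa(\eta)$, which is what makes the trace formulas above valid.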
In a sense, our determinant-ratio likelihood can be considered a multiple-lag version of canonical correlation analysis, as the gradient equation reduces to the same calculation in the $p=1$ case.
\section{Equivalence of state-space realizations.}
\label{sec:equivalence}
In the simple reduced-rank case, \cref{eq:ratio_det} is unchanged when $\boldsymbol{G}$ is replaced by $\boldsymbol{S}\boldsymbol{G}$ for any invertible matrix $\boldsymbol{S}$. In our general framework, a similar result holds for an invertible matrix $\boldsymbol{S}$ such that $\boldsymbol{S}\boldsymbol{F} = \boldsymbol{F}\boldsymbol{S}$. We will describe such matrices, and show that we can normalize $\boldsymbol{G}$ to a form satisfying certain orthogonality relations. This section is mostly linear algebra and geometry; readers not interested in the details of dimension reduction for the estimation algorithms can skip it. The main result to take away is the parameter count of $\boldsymbol{S}$, which gives the parameter reduction count for the model. On the other hand, it is not difficult to work out all the details of this section for $m=2$ or $m=3$ by hand, which would probably give readers more intuition about the parameter reduction.
As pointed out by Kalman in the same paper, the realizations $\{\boldsymbol{H}, \boldsymbol{F}, \boldsymbol{G}\}$ and $\{\boldsymbol{H}\boldsymbol{S}^{-1}, \boldsymbol{S}\boldsymbol{F}\boldsymbol{S}^{-1}, \boldsymbol{S}\boldsymbol{G}\}$ are equivalent. If $\boldsymbol{S}$ commutes with $\boldsymbol{F}$, then it is equivalent to $\{ \boldsymbol{H}\boldsymbol{S}^{-1}, \boldsymbol{F}, \boldsymbol{S}\boldsymbol{G}\}$. With our least squares estimate for a given $\boldsymbol{G}$, this implies the models defined by $\boldsymbol{G}$ and $\boldsymbol{S}\boldsymbol{G}$ are equivalent. Let $\mathcal{S} = Centr(\boldsymbol{F})$ be the set (which is a \emph{group}) of all invertible matrices $\boldsymbol{S}$ of size $n_{\textsc{min}}\times n_{\textsc{min}}$ such that $\boldsymbol{S}\boldsymbol{F} = \boldsymbol{F}\boldsymbol{S}$. We show that we can transform $\boldsymbol{G}$ to normalized forms by applying an element of $\mathcal{S}$, similar to the Gram-Schmidt process. In particular, we can make the rows of $\boldsymbol{G}_{:, 0}$ orthonormal, similar to the classical Rayleigh quotient case. Since there are a few concepts to introduce, it is helpful to examine the case $p=2$ explicitly. In this case, $\boldsymbol{G} = \begin{bmatrix}\boldsymbol{G}_{21} \\ \boldsymbol{G}_{20} \\ \boldsymbol{G}_{10}\end{bmatrix}$ and $\boldsymbol{F}$ has the form:
$$\boldsymbol{F} = \begin{bmatrix}
0_{l_2} & \boldsymbol{I}_{l_2} & \\
& 0_{l_2} & \\
& & 0_{l_1}
\end{bmatrix}$$
so for $\boldsymbol{S}$ to commute with $\boldsymbol{F}$, it needs to have the form:
$$
\begin{gathered}
\boldsymbol{S} = \begin{bmatrix}
\boldsymbol{S}_{21, 21} & \boldsymbol{S}_{21, 20} & \boldsymbol{S}_{21, 10} \\
& \boldsymbol{S}_{20, 20} & \\
& \boldsymbol{S}_{10, 20} & \boldsymbol{S}_{10, 10} \\
\end{bmatrix}\\
\boldsymbol{S}_{21, 21} = \boldsymbol{S}_{20, 20}
\end{gathered}
$$
Each diagonal block is invertible, but there is no restriction on the off-diagonal blocks. Since the combined matrix $\begin{bmatrix} \boldsymbol{G}_{2, 0} \\ \boldsymbol{G}_{1, 0}\end{bmatrix}$ is of full row rank, we can make it orthonormal using an $LQ$ factorization (thin $QR$ on its transpose). The end result is matrices $\boldsymbol{S}_{10, 10}$, $\boldsymbol{S}_{10, 20}$ and $\boldsymbol{S}_{20, 20}$ such that $(\boldsymbol{S}\boldsymbol{G})_{1,0}(\boldsymbol{S}\boldsymbol{G})_{1,0}' = \boldsymbol{I}_{d_1}$, $(\boldsymbol{S}\boldsymbol{G})_{2,0}(\boldsymbol{S}\boldsymbol{G})_{2,0}' = \boldsymbol{I}_{d_2}$ and $(\boldsymbol{S}\boldsymbol{G})_{1, 0} (\boldsymbol{S}\boldsymbol{G})_{2, 0}' = 0$. Therefore we assume after this step that $\boldsymbol{G}_{r, 0}\boldsymbol{G}_{r_1, 0}' = \delta_{r, r_1}\boldsymbol{I}_{d_r}$. We claim that we can choose the $\boldsymbol{S}_{21, 20}, \boldsymbol{S}_{21, 10}$ blocks of $\boldsymbol{S}$ to make $(\boldsymbol{S}\boldsymbol{G})_{2,1}(\boldsymbol{S}\boldsymbol{G})_{1,0}' = 0$ and $(\boldsymbol{S}\boldsymbol{G})_{2,1}(\boldsymbol{S}\boldsymbol{G})_{2,0}' = 0$. We have
$$(\boldsymbol{S}\boldsymbol{G})_{2, 1} = \boldsymbol{S}_{21, 10}\boldsymbol{G}_{1, 0} + \boldsymbol{S}_{21, 21}\boldsymbol{G}_{2, 1} + \boldsymbol{S}_{21, 20}\boldsymbol{G}_{2, 0}$$
Note that $\boldsymbol{S}_{21, 21} = \boldsymbol{S}_{20, 20}$ is already defined in the first step. To make $(\boldsymbol{S}\boldsymbol{G})_{2, 1}$ orthogonal to $\boldsymbol{G}_{1, 0}$ and $\boldsymbol{G}_{2, 0}$ we only need to set:
$$\begin{gathered}
\boldsymbol{S}_{21, 10} = -\boldsymbol{S}_{21, 21}\boldsymbol{G}_{2, 1}\boldsymbol{G}_{1, 0}' \\
\boldsymbol{S}_{21, 20} = -\boldsymbol{S}_{21, 21}\boldsymbol{G}_{2, 1}\boldsymbol{G}_{2, 0}'
\end{gathered}$$
This is our generalized $LQ$\slash Gram-Schmidt process for $p=2$. While we cannot make $\boldsymbol{G}$ fully orthogonal as in the case $p=1$, this helps reduce the search space. Recall $h = \min(k, m)$ and $\mathfrak{l} = d_1 + d_2 \leq h$.
Assume we have chosen $\boldsymbol{G}$ such that $\boldsymbol{G}_{:, 0} \boldsymbol{G}_{:, 0}' = \boldsymbol{I}_{\mathfrak{l}}$ and $\boldsymbol{G}_{:, 0} \boldsymbol{G}_{2, 1}' = 0$. Completing $\boldsymbol{G}_{:, 0}$ to an orthonormal basis by adding $m -\mathfrak{l}$ row vectors organized in a matrix $\boldsymbol{G}^o_{\perp}$, we can express $\boldsymbol{G}_{2, 1}$ in that basis:
$$\boldsymbol{G}_{2, 1} = \boldsymbol{C}\boldsymbol{G}^o_{\perp}$$
where $\boldsymbol{C}$ is a matrix of size $(d_2, m-\mathfrak{l})$. So far we have shown that for $p=2$ the choice of minimal state representation is the same as the choice of $d_1, d_2$ such that $d_1 + d_2 \leq m$, and $\boldsymbol{G}$ can be normalized to an orthogonal form. This form can be represented by a pair $(\boldsymbol{C}, \boldsymbol{O})$ where $\boldsymbol{O}= \begin{bmatrix} \boldsymbol{G}^o_{2, 0} \\ \boldsymbol{G}^o_{1, 0}\\ \boldsymbol{G}^o_{\perp} \end{bmatrix} $ is an orthonormal basis of $\mathbb{R}^m$ and $\boldsymbol{C}\in \Mat(d_2, m-\mathfrak{l})$.
We note that if $\boldsymbol{Q}_{20, 20}, \boldsymbol{Q}_{10, 10}$ are orthogonal square matrices with $d_2$ and $d_1$ rows respectively, then the block diagonal matrix $\boldsymbol{Q}=\diag(\boldsymbol{Q}_{20, 20}, \boldsymbol{Q}_{20, 20}, \boldsymbol{Q}_{10, 10})$ commutes with $\boldsymbol{F}$. Hence
$(\boldsymbol{C}, \begin{bmatrix} \boldsymbol{G}_{2, 0}\\ \boldsymbol{G}_{1, 0}\\ \boldsymbol{G}^o_{\perp}\end{bmatrix})$ and $(\boldsymbol{Q}_{20, 20} \boldsymbol{C}, \begin{bmatrix} \boldsymbol{Q}_{20, 20}\boldsymbol{G}_{2, 0}\\ \boldsymbol{Q}_{10, 10}\boldsymbol{G}_{1, 0}\\ \boldsymbol{G}^o_{\perp}\end{bmatrix})$ represent $\boldsymbol{G}$ and $\boldsymbol{Q}\boldsymbol{G}$, respectively. The generalized Rayleigh functional
$$\mathcal{R}(\boldsymbol{G}, \boldsymbol{A}, \boldsymbol{B}) = \frac{\det(\kappa(\boldsymbol{G})\boldsymbol{A}\kappa(\boldsymbol{G})')}{\det(\kappa(\boldsymbol{G})\boldsymbol{B}\kappa(\boldsymbol{G})')} $$
is invariant under multiplication of $\boldsymbol{G}$ by $\boldsymbol{Q}$. Also if we replace $\boldsymbol{G}^o_{\perp}$ by $\boldsymbol{Q}_{\perp}\boldsymbol{G}^o_{\perp}$ and $\boldsymbol{C}$ by $\boldsymbol{C}\boldsymbol{Q}_{\perp}'$ we get the same matrix $\boldsymbol{G}^o$.
To illustrate, let us consider the case $k=m=2$ and $p=2$, with $d_2 = d_1 = 1$. In this case $\mathfrak{l} = 2$, $n_{\textsc{min}}= 3$ and $\boldsymbol{G}^o_{\perp}$ is empty. An orthogonal matrix can be parameterized in the form $\begin{bmatrix}\cos t & \sin t\\ -\sin t & \cos t\end{bmatrix} = \begin{bmatrix} v \\ w\end{bmatrix}$. We can take $\boldsymbol{G} = \begin{bmatrix}0 \\ v \\ w\end{bmatrix}$ and $\kappa(\boldsymbol{G}) = \begin{bmatrix}v & 0 \\ 0& v \\0 & w\end{bmatrix}$. The numerator of the generalized Rayleigh quotient will be:
\[\det(\begin{bmatrix} vL^2 \boldsymbol{X} \boldsymbol{A}(L^2\boldsymbol{X})'v' & vL^2 \boldsymbol{X} \boldsymbol{A}(L\boldsymbol{X})'v' & vL^2 \boldsymbol{X} \boldsymbol{A}(L\boldsymbol{X})'w' \\
vL \boldsymbol{X} \boldsymbol{A}(L^2\boldsymbol{X})'v' & vL \boldsymbol{X} \boldsymbol{A}(L\boldsymbol{X})'v' & vL \boldsymbol{X} \boldsymbol{A}(L\boldsymbol{X})'w' \\
wL \boldsymbol{X} \boldsymbol{A}(L^2\boldsymbol{X})'v' & wL \boldsymbol{X} \boldsymbol{A}(L\boldsymbol{X})'v' & wL \boldsymbol{X} \boldsymbol{A}(L\boldsymbol{X})'w'
\end{bmatrix})\]
and the denominator is of the same form with $\boldsymbol{A}$ replaced by $\boldsymbol{B}$. In effect we have an optimization problem on the circle, where the function to optimize is a rational function of high degree in $\sin t$ and $\cos t$.
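A brute-force version of this circle optimization is easy to sketch on simulated data (numpy; the helper name `log_ratio` and the grid search are ours, not part of the package):

```python
import numpy as np

rng = np.random.default_rng(3)
m, T, p = 2, 300, 2
Xf = rng.standard_normal((m, T + p))
Y = rng.standard_normal((m, T))
L1X, L2X = Xf[:, 1: T + 1], Xf[:, 0: T]
Xlag = np.vstack([L2X, L1X])
B = Xlag @ Xlag.T
A = B - Xlag @ Y.T @ np.linalg.solve(Y @ Y.T, Y @ Xlag.T)

def log_ratio(t):
    v = np.array([np.cos(t), np.sin(t)])
    w = np.array([-np.sin(t), np.cos(t)])
    z = np.zeros(m)
    K = np.array([np.r_[v, z], np.r_[z, v], np.r_[z, w]])   # kappa(G)
    return (np.linalg.slogdet(K @ A @ K.T)[1]
            - np.linalg.slogdet(K @ B @ K.T)[1])

# brute-force search over the circle; replacing t by t + pi gives Q G with
# Q = diag(-1, -1, -1) in Centr(F), so the objective has period pi
ts = np.linspace(0.0, np.pi, 361)
t_best = ts[np.argmin([log_ratio(t) for t in ts])]
assert abs(log_ratio(0.3) - log_ratio(0.3 + np.pi)) < 1e-8
```

The invariance assertion is a small instance of the group invariance discussed next: $t \mapsto t+\pi$ multiplies $\boldsymbol{G}$ by an element of $Centr(\boldsymbol{F})$ and leaves the ratio unchanged.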
The following proposition is rather technical but necessary for the exact parameter count of $\boldsymbol{S}$ in the general case. The main point to remember is that we can slide entries in a combined block diagonally to a wall. Within a combined block, the sub-blocks are equal if they are on the same (not necessarily principal) diagonal, and we only need to define $\boldsymbol{S}$ on certain entries of the vertical walls corresponding to $(\rho, 0)$, as in the case $p=2$.
\begin{proposition}
Let $\mathcal{S}=Centr(\boldsymbol{F})$ be the set of all invertible $n_{\textsc{min}}\times n_{\textsc{min}}$ matrices commuting with $\boldsymbol{F}$. We can index the blocks of $\boldsymbol{S}\in \mathcal{S}$ by $\boldsymbol{S}_{\rho_1, j_1; \rho_2, j_2}$ for $1\leq \rho_i \leq p$ and $0 \leq j_i \leq \rho_i-1$. $\boldsymbol{S}_{\rho_1, j_1; \rho_2, j_2}$ maps the block $\boldsymbol{G}_{\rho_2, j_2}$ to block $(\boldsymbol{S}\boldsymbol{G})_{\rho_1, j_1}$. $\boldsymbol{S}_{\rho_1, j_1; \rho_2, j_2}$ is of size $d_{\rho_1}\times d_{\rho_2}$. We have the following characterization of $\boldsymbol{S}$:
\begin{equation}
\label{eq:Sdiagonal}
\boldsymbol{S}_{\rho_1, j_1; \rho_2, j_2} = \boldsymbol{S}_{\rho_1, j_1+1; \rho_2, j_2+1} \text{ if } \rho_1 -1 > j_1 \geq 0, \rho_2 -1 > j_2 \geq 0
\end{equation}
\begin{equation}
\label{eq:Srelations}
\begin{gathered}
\boldsymbol{S}_{\rho, 0; \rho, 0} \text{ is invertible.} \\
\boldsymbol{S}_{\rho_1, j_1; \rho_2, \rho_2-1} = 0 \text{ if } \rho_1-1> j_1 \geq 0 \\
\boldsymbol{S}_{\rho_1, 0; \rho_2, j_2} = 0 \text{ if } 0 < j_2 < \rho_2 \\
\end{gathered}
\end{equation}
As a consequence:
\begin{equation}
\label{eq:Szero}
\begin{gathered}
\boldsymbol{S}_{\rho, \rho-1; \rho_2, j_2} = 0 \text{ if }j_2 > \rho-1 \\
\boldsymbol{S}_{\rho_1, j_1; \rho, 0} = 0 \text{ if } \rho_1 -\rho > j_1\\
\boldsymbol{S}_{\rho_1, 0; \rho_2, 0} = 0 \text{ if } \rho_1 > \rho_2
\end{gathered}
\end{equation}
The following blocks uniquely define $\boldsymbol{S}$:
\begin{equation}
\label{eq:Sdefined}
\boldsymbol{S}_{\rho_1, j; \rho, 0} \text{ with } j \geq \rho_1 -\rho
\end{equation}
Given a collection of blocks as in \cref{eq:Sdefined}, such that $\boldsymbol{S}_{\rho, 0; \rho, 0}$ are invertible, we can construct a unique $\boldsymbol{S}\in\mathcal{S}$ using \cref{eq:Sdiagonal}. In particular, the number of parameters of $\boldsymbol{S}$ is
\begin{equation}
\label{eq:Sdim}
\sum_{\rho_1,\rho}\min(\rho_1, \rho) d_{\rho_1}d_{\rho} = \sum_{i=1}^p(\sum_{j\geq i} d_j)^2
\end{equation}
To summarize, to define $\boldsymbol{S}$ we only need to define the $(:, :, \rho, 0)$ vertical walls, and on those walls, for each $\rho_1$, $j$ can take values from $\max(0,\rho_1-\rho)$ to $\rho_1-1$. The remaining cells of $\boldsymbol{S}$ are either zero, or can be filled in by \cref{eq:Sdiagonal}. Finally:
\begin{equation}\kappa(\boldsymbol{S}\boldsymbol{G}) =\boldsymbol{S}\kappa(\boldsymbol{G})
\end{equation}
\end{proposition}
\begin{proof}
Note that $\boldsymbol{F}_{\rho_1, j_1; \rho_2, j_2}$ is zero, unless $\rho_1 = \rho_2$ and $j_1=j_2+1$. So $\boldsymbol{S}\boldsymbol{F} = \boldsymbol{F}\boldsymbol{S}$ implies:
$$(\boldsymbol{F}\boldsymbol{S})_{\rho_1, j_1;\rho_2, j_2}=\begin{cases}\boldsymbol{S}_{\rho_1, j_1-1; \rho_2, j_2}&\text{if } j_1 > 0\\0&\text{if } j_1 = 0\end{cases}$$
$$(\boldsymbol{S}\boldsymbol{F})_{\rho_1, j_1;\rho_2, j_2}= \begin{cases}\boldsymbol{S}_{\rho_1, j_1; \rho_2, j_2+1} &\text{if } j_2 < \rho_2-1\\ 0 &\text{if } j_2 = \rho_2 - 1\end{cases}$$
From here \cref{eq:Sdiagonal} and \cref{eq:Srelations} follow.
For fixed $(\rho_1, \rho_2)$, consider the rectangular combined block:
$$\boldsymbol{S}_{\rho_1, :; \rho_2, :} := (\boldsymbol{S}_{\rho_1, j_1; \rho_2, j_2})_{\rho_1 > j_1 \geq 0; \rho_2 > j_2 \geq 0}$$
It has four walls corresponding to rows $\boldsymbol{S}_{\rho_1, 0;\rho_2, :}$, $\boldsymbol{S}_{\rho_1, \rho_1-1;\rho_2, :}$ and columns $\boldsymbol{S}_{\rho_1, :;\rho_2, 0}$, $\boldsymbol{S}_{\rho_1, :;\rho_2, \rho_2-1}$. By \cref{eq:Sdiagonal}, $\boldsymbol{S}_{\rho_1, j_1; \rho_2, j_2}$ is defined if the surrounding walls are defined. From \cref{eq:Srelations}, the vertical wall $(:, :, \rho_2, \rho_2-1)$ is zero, except for $\boldsymbol{S}_{\rho_1, \rho_1-1; \rho_2, \rho_2-1}$, and the horizontal wall $(\rho_1, 0, :, :)$ is zero, except for $\boldsymbol{S}_{\rho_1, 0; \rho_2, 0}$. So $\boldsymbol{S}$ is defined by the diagonal blocks and the $(\rho, \rho-1; :, :)$ horizontal walls as well as the $(:, :, \rho, 0)$ vertical walls. We obtain \cref{eq:Szero} by using \cref{eq:Sdiagonal} to move the relevant entries to another wall, then applying \cref{eq:Srelations}.
The first equation of \cref{eq:Szero} shows the only non-zero entries on the $(\rho, \rho-1)$ horizontal wall are those with $j_2 \leq \rho-1$, but then they are equal to $\boldsymbol{S}_{\rho, \rho-1 -j_2; \rho_2, 0}$. So the entries of the vertical walls $(\rho, 0)$ alone are sufficient to define $\boldsymbol{S}$. Finally, using the second equality of \cref{eq:Szero}, we have the restriction on $j$ in \cref{eq:Sdefined}.
We count the number of $j$'s for a given ordered pair of $\rho_1, \rho$ to be $\min(\rho, \rho_1)$, while the number of parameters for each such block is $d_{\rho}d_{\rho_1}$. From here the parameter count for $\boldsymbol{S}$ follows.
The relationship between $\boldsymbol{S}\kappa(\boldsymbol{G})$ and $\kappa(\boldsymbol{S}\boldsymbol{G})$ can be verified by direct substitution. For $i \leq r -l$:
$$\kappa(\boldsymbol{S}\boldsymbol{G})_{(r,l), i} = (\boldsymbol{S}\boldsymbol{G})_{r, r-l-i}=\sum \boldsymbol{S}_{r, r-l-i; r_3, l_3}\boldsymbol{G}_{r_3, l_3}$$
Notice that row blocks $(r, l)$ of $\kappa$ are indexed in ascending order in $l$ while $\boldsymbol{S}$ is ordered in descending order in $l$, so we need:
$$\sum_{r_2, l_2; i \leq r_2 - l_2} \boldsymbol{S}_{r, r-l-1; r_2, r_2-l_2-1}\kappa(\boldsymbol{G})_{(r_2, l_2), i}= \sum_{r_2, l_2; i \leq r_2 - l_2} \boldsymbol{S}_{r, r-l-1; r_2, r_2-l_2-1}\boldsymbol{G}_{r_2, r_2-l_2-i}$$
With the change of variable $r_2 = r_3, l_3 = r_2-l_2-i$, the right-hand side becomes:
$$\sum_{r_3, l_3} \boldsymbol{S}_{r, r-l-1; r_3, l_3+i-1}\boldsymbol{G}_{r_3, l_3}$$
and then we apply $\boldsymbol{S}_{r, r-l-1; r_3, l_3+i-1} = \boldsymbol{S}_{r, r-l-i; r_3, l_3}$, which follows from \cref{eq:Sdiagonal}. We need to pay some attention to check that the various constraints on indices carry through, but we leave that to the reader.
\end{proof}
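The block structure of $Centr(\boldsymbol{F})$ and the identity $\kappa(\boldsymbol{S}\boldsymbol{G}) = \boldsymbol{S}\kappa(\boldsymbol{G})$ can be checked numerically for $p=2$, $d_2=d_1=1$. A minimal numpy sketch (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
m = 4
# p = 2, d_2 = d_1 = 1: F = K(2,1) (+) K(1,1)
F = np.zeros((3, 3))
F[0, 1] = 1.0
s, a, b, c, d = rng.standard_normal(5)
# block pattern of Centr(F) for p = 2: S_{21,21} = S_{20,20} = s; free blocks
# a = S_{21,20}, b = S_{21,10}, c = S_{10,20}, d = S_{10,10}
S = np.array([[s, a, b],
              [0.0, s, 0.0],
              [0.0, c, d]])
assert np.allclose(S @ F, F @ S)

def kappa(G):
    G21, G20, G10 = G
    z = np.zeros(m)
    return np.array([np.r_[G20, G21], np.r_[z, G20], np.r_[z, G10]])

G = rng.standard_normal((3, m))    # rows G_{2,1}, G_{2,0}, G_{1,0}
assert np.allclose(kappa(S @ G), S @ kappa(G))
# parameter count: sum_i (sum_{j >= i} d_j)^2 = (1 + 1)^2 + 1^2 = 5 free scalars
```

The five free scalars $s, a, b, c, d$ match the parameter count \cref{eq:Sdim} for $d_1 = d_2 = 1$.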
The generalized Gram-Schmidt (or $LQ$) algorithm is described next. It is purely a linear algebra result; we are not sure whether it is already known.
\begin{proposition}
\label{prop:GGS}
For any block matrix $\boldsymbol{G}\in \Mat(n_{\textsc{min}}, m)$ such that $\boldsymbol{G}_{:, 0}$ is of full row rank, we can construct an invertible matrix $\boldsymbol{S}\in \mathcal{S}=Centr(\boldsymbol{F})$ such that $\boldsymbol{G}^{o} = \boldsymbol{S} \boldsymbol{G}$ satisfies
\begin{equation}
\label{eq:Orthonormal}
\boldsymbol{G}^{o}_{:, 0} \boldsymbol{G}^{o'}_{:, 0} = \boldsymbol{I}_{\mathfrak{l}}
\end{equation}
\begin{equation}
\label{eq:Orthogonal2}
\boldsymbol{G}^{o}_{\rho, l} \boldsymbol{G}^{o'}_{\rho_1, 0} = 0 \text{ for } l > \max(0, \rho-\rho_1-1)
\end{equation}
$(\boldsymbol{S}_{\rho_1, 0; \rho_2, 0})_{\rho_1, \rho_2}$ could be chosen to be any block lower triangular matrix ($\boldsymbol{S}_{\rho_1, 0; \rho_2, 0} = 0$ if $\rho_1 > \rho_2$) so that \cref{eq:Orthonormal} is satisfied, that is:
$$ (\boldsymbol{S}_{\rho_1, 0; \rho_2, 0})_{\rho_1, \rho_2}(\boldsymbol{G}_{\rho_2, 0})_{\rho_2} (\boldsymbol{G}_{\rho_2, 0})_{\rho_2}' (\boldsymbol{S}_{\rho_1, 0; \rho_2, 0})_{\rho_1, \rho_2}' = \boldsymbol{I}_{\mathfrak{l}}$$
Once $(\boldsymbol{S}_{\rho_1, 0; \rho_2, 0})_{\rho_1, \rho_2}$ is chosen, $\boldsymbol{S}$ is uniquely determined by \cref{eq:Orthogonal2}.
\end{proposition}
\begin{proof}
Applying the usual $QR$\slash Gram-Schmidt procedure to $\boldsymbol{G}'_{:, 0}$ and transposing, we get the $LQ$ factorization $\boldsymbol{G}_{:, 0} = \boldsymbol{L} \boldsymbol{W}_0$, where $\boldsymbol{W}_0$ satisfies $\boldsymbol{W}_0\boldsymbol{W}_0' = \boldsymbol{I}_{\mathfrak{l}}$ and $\boldsymbol{L}$ is lower triangular, so we can write $\boldsymbol{W}_0 = \boldsymbol{L}^{-1}\boldsymbol{G}_{:, 0}$. We take
$(\boldsymbol{S}_{\rho_1, 0; \rho_2, 0})_{\rho_1, \rho_2}= \boldsymbol{L}^{-1}$, which is lower triangular, and $(\boldsymbol{G}^o_{\rho, 0})_{\rho}=\boldsymbol{W}_0$.
From here, for each $\rho$ we get the diagonal blocks $\boldsymbol{S}_{\rho, \rho-1; \rho, \rho-1} =\cdots = \boldsymbol{S}_{\rho, 0; \rho, 0}$ such that $\boldsymbol{G}^o_{\rho, 0} = \boldsymbol{S}_{\rho, 0; \rho, 0}\boldsymbol{G}_{\rho, 0}$ satisfies $\boldsymbol{G}^{o}_{\rho, 0}\boldsymbol{G}^{o'}_{\rho, 0} = \boldsymbol{I}_{l_{\rho}}$ and $\boldsymbol{G}^{o}_{\rho,0}\boldsymbol{G}^{o'}_{\rho_1, 0}= 0$ for $\rho_1 \neq \rho$.
We note the block matrix:
$$(\boldsymbol{G}_{\rho, 0}\boldsymbol{G}^{o'}_{\rho_1, 0})_{{p\geq \rho\geq 1; p \geq \rho_1 \geq 1}}=\boldsymbol{L}$$
is invertible. The remaining constraints on $\boldsymbol{S}$, for fixed $\rho, j, \rho_1$, with $j> 0$ and $j\geq \rho - \rho_1$ are:
\begin{equation}
\label{eq:Ssolve}
\sum_{\rho_2, j_2} \boldsymbol{S}_{\rho, j; \rho_2, j_2}\boldsymbol{G}_{\rho_2, j_2}\boldsymbol{G}^{o'}_{\rho_1, 0} = 0
\end{equation}
We need to show that we can solve for $\boldsymbol{S}_{\rho, j; \rho_2, j_2}$ uniquely from the above equations. We back-solve in $j$. The case $j=0$ is already done, as by \cref{eq:Srelations} $\boldsymbol{S}_{\rho, 0; \rho_2, j_2} = 0$ if $j_2 > 0$. With $j$ fixed, we solve \cref{eq:Ssolve} for each $\rho$'s blocks. If $j_2 >0$, by \cref{eq:Sdiagonal}, $\boldsymbol{S}_{\rho, j; \rho_2, j_2} = \boldsymbol{S}_{\rho, j-1; \rho_2, j_2-1}$ has already been solved in the previous step. So we only need to solve for the case $j_2=0$, that is for $\boldsymbol{S}_{\rho, j; \rho_2, 0}$. The constraints on $\boldsymbol{S}$ force $\boldsymbol{S}_{\rho, j;\rho_2, 0} = 0$ unless $p\geq \rho_2\geq \rho-j$. Therefore we have only $p-\rho +j + 1$ block variables, corresponding to the $p - \rho + j +1$ equations in \cref{eq:Ssolve}. The coefficient matrix in this case is $(\boldsymbol{G}_{\rho_2, 0}\boldsymbol{G}^{o'}_{\rho_1, 0})_{p\geq \rho_2 \geq \max(1, \rho-j); p \geq \rho_1 \geq \max(1, \rho-j)}$, which is invertible, and its inverse is
$(\boldsymbol{S}_{\rho_1, 0;\rho_2, 0})_{\rho_1, \rho_2}$ with the last $\max(1, \rho-j)-1$ row and column blocks removed. So the solution exists and is unique. From \cref{eq:Sdefined}, $\boldsymbol{S}$ is uniquely defined once we have the $\boldsymbol{S}_{\rho, j; \rho_2, 0}$.
\end{proof}
We implement this algorithm in the function {\it LQ\_multi\_lag} in the package. With $\Psi$ and $\boldsymbol{G}$ as inputs, it returns the factors $\boldsymbol{S}$ and $\boldsymbol{W}$ such that $\boldsymbol{G}=\boldsymbol{S}^{-1}\boldsymbol{W}$, $\boldsymbol{S}$ commutes with $\boldsymbol{F}_{\Psi}$, and $\boldsymbol{W}$ satisfies the orthogonality relations of \cref{eq:Orthonormal}, \cref{eq:Orthogonal2}. As an example, for $\Psi=[(3, 1), (1,1)]$ we have $\boldsymbol{F}=\boldsymbol{K}(3,1)\oplus\boldsymbol{K}(1,1)$. With
$$\boldsymbol{G} = \begin{bmatrix}
-2 & 5 \\
3 & -2 \\
1 & 8 \\
1 & -2 \\
\end{bmatrix}
$$
the function finds $\boldsymbol{G}^o =\boldsymbol{S} \boldsymbol{G}$ with:
$$\boldsymbol{G}^o = \begin{bmatrix}
0.000000 & 0.000000 \\
-0.3969112 & 0.049614 \\
-0.1240347 & -0.992278 \\
-0.9922779 & 0.124035 \\
\end{bmatrix}; \boldsymbol{S} = \begin{bmatrix}
-0.124035 & -0.024807 & 0.022326 & -0.195975 \\
0.000000 & -0.124035 & -0.024807 & 0.000000 \\
0.000000 & 0.000000 & -0.124035 & 0.000000 \\
0.000000 & 0.000000 & -0.186052 & -0.806226 \\
\end{bmatrix}
$$
The points to note are that $\boldsymbol{G}^o_{3, 0}$ and $\boldsymbol{G}^o_{1, 0}$ have norm $1$, $\boldsymbol{G}^o_{3,2}=0$, and $\boldsymbol{G}^o_{3,1} = c \boldsymbol{G}^o_{1, 0}$ ($c=0.4$ in this case). These are consequences of the orthogonality relations. So, while $\boldsymbol{G}$ has total dimension $8$, $\boldsymbol{G}^o$ needs just two parameters: one for the pair of orthogonal vectors, and one for the proportionality constant between $\boldsymbol{G}^o_{3,1}$ and $\boldsymbol{G}^o_{1, 0}$.
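These relations can be verified directly from the printed output. The sketch below (using numpy; the row ordering $\boldsymbol{G}^o_{3,2}, \boldsymbol{G}^o_{3,1}, \boldsymbol{G}^o_{3,0}, \boldsymbol{G}^o_{1,0}$ is our reading of the block layout, not something guaranteed by the package) checks them numerically:

```python
import numpy as np

# G^o as returned by LQ_multi_lag for Psi = [(3, 1), (1, 1)] (values from the text).
# Assumed row layout: G^o_{3,2}, G^o_{3,1}, G^o_{3,0}, G^o_{1,0}.
Go = np.array([
    [ 0.0,        0.0     ],
    [-0.3969112,  0.049614],
    [-0.1240347, -0.992278],
    [-0.9922779,  0.124035],
])
g32, g31, g30, g10 = Go

assert np.allclose(g32, 0.0)                   # G^o_{3,2} = 0
assert np.isclose(np.linalg.norm(g30), 1.0)    # G^o_{3,0} has unit norm
assert np.isclose(np.linalg.norm(g10), 1.0)    # G^o_{1,0} has unit norm
assert np.isclose(g30 @ g10, 0.0, atol=1e-6)   # the two rows are orthogonal
c = g31 @ g10                                  # proportionality constant
assert np.allclose(g31, c * g10, atol=1e-6)    # G^o_{3,1} = c G^o_{1,0}
assert abs(c - 0.4) < 1e-4                     # c = 0.4 as stated in the text
```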
The next proposition clarifies further the parameterization of $\boldsymbol{G}^o$:
\begin{proposition}
\label{prop:Gparam}
Let $\mathcal{G}$ be the set of matrices $\boldsymbol{G}$ of size $n_{\textsc{min}}\times m$ such that $\boldsymbol{G}_{:, 0}$ is of full row rank. Let $\mathcal{GO}$ be the subset of $\mathcal{G}$ consisting of all matrices $\boldsymbol{G}^o$ satisfying the constraints in \cref{prop:GGS}. An element $\boldsymbol{Q}$ of $\mathcal{S}=Centr(\boldsymbol{F})$ maps $\mathcal{GO}$ to $\mathcal{GO}$ if and only if $\boldsymbol{Q}$ is block diagonal with invertible diagonal blocks and satisfies:
\begin{equation}
\label{eq:Q_format}
\begin{gathered}
\boldsymbol{Q}_{r, 0; r, 0} = \boldsymbol{Q}_{r, 1; r, 1} = \cdots = \boldsymbol{Q}_{r, r-1; r, r-1} \\
\boldsymbol{Q}_{r, 0; r, 0} \boldsymbol{Q}_{r, 0; r, 0}^{\prime} = \boldsymbol{I}_{d_r}
\end{gathered}
\end{equation}
Let us call $\mathcal{Q}$ the set of all such $\boldsymbol{Q}$'s. We have a pairing
$\mathcal{Q} \times \mathcal{GO}\to \mathcal{GO}$, and the likelihood function is unchanged if $\boldsymbol{G}^o$ is replaced by $\boldsymbol{Q}\boldsymbol{G}^o$.
Let $O(m)$ be the set of all square orthogonal matrices of size $m$. Set $d_0 = m -\mathfrak{l}$. For each matrix $\boldsymbol{G}^o_{\perp}\in \Mat(d_0, m)$ such that $\begin{bmatrix}\boldsymbol{G}^o_{:, 0}\\ \boldsymbol{G}^o_{\perp}\end{bmatrix} \in O(m)$ ($\boldsymbol{G}^o_{\perp}$ together with $\boldsymbol{G}^o_{:, 0}$ forms an orthonormal basis), we can find for $p \geq r \geq 1, r > l > 0$ matrices $\boldsymbol{C}_{r, l}\in \Mat(d_r, \sum_{j=0}^{r-l-1}d_j)$ such that
\begin{equation}\label{eq:Go_rep}\boldsymbol{G}^{o}_{r, l} = \boldsymbol{C}_{r, l} \begin{bmatrix}\boldsymbol{G}^o_{r-l-1}\\ \vdots\\ \boldsymbol{G}^o_1\\ \boldsymbol{G}^o_{\perp}\end{bmatrix} \end{equation}
Conversely, given $((\boldsymbol{C}_{r, l})_{p\geq r \geq 1, l >0}, \boldsymbol{O})$, where $\boldsymbol{C}_{r, l} \in \Mat(d_r, \sum_{j=0}^{r-l-1}d_j)$ and $\boldsymbol{O}\in O(m)$, we can reconstruct an element $\boldsymbol{G}^o$ by first decomposing $\boldsymbol{O}$ into $\begin{bmatrix}\boldsymbol{O}_{p, 0}\\ \vdots\\ \boldsymbol{O}_{1, 0} \\ \boldsymbol{O}_{\perp} \end{bmatrix}$, then setting $\boldsymbol{G}^o_{r, 0}= \boldsymbol{O}_{r, 0}$ and $\boldsymbol{G}^o_{r, l} = \boldsymbol{C}_{r, l} \begin{bmatrix}\boldsymbol{O}_{r-l-1}\\ \vdots\\ \boldsymbol{O}_1\\ \boldsymbol{O}_{\perp}\end{bmatrix}$ for $l > 0$. The $\boldsymbol{G}^o$ constructed this way is an element of $\mathcal{GO} \subset \mathcal{G}$.
\end{proposition}
Let us clarify that when $l = r-1$, \cref{eq:Go_rep} only has one block $\boldsymbol{G}^o_{\perp}$.
\begin{proof} \cref{eq:Q_format} follows from \cref{eq:Sdiagonal} and the orthogonality relations of $\mathcal{GO}$. In particular, after applying $\boldsymbol{Q}$, the block $\boldsymbol{G}_{r, 0}$ is transformed to $\boldsymbol{Q}_{r, 0; r, 0}\boldsymbol{G}_{r, 0}$ and is thus orthogonal to $\boldsymbol{G}_{\rho, 0}$ for $\rho \neq r$. The off-diagonal blocks of $\boldsymbol{Q}$ must satisfy an equation of the form \cref{eq:Ssolve}, and the orthogonality relation just mentioned means they are zero. The fact that the likelihood function is unchanged by $\mathcal{Q}$ follows from the fact that it is unchanged under $\mathcal{S}$.
Given $\boldsymbol{G}^o$ and $\boldsymbol{G}^o_{\perp}$, by the orthogonality relations of \cref{prop:GGS} we can take $\boldsymbol{C}_{r, l}$ to be the coefficients of $\boldsymbol{G}^o_{r,l}$ in the basis $\begin{bmatrix}\boldsymbol{G}^o_{r-l-1}\\ \vdots\\ \boldsymbol{G}^o_1\\ \boldsymbol{G}^o_{\perp}\end{bmatrix}$. On the other hand, any linear combination of that basis satisfies the orthogonality relations required by $\mathcal{GO}$.
\end{proof}
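The reconstruction $((\boldsymbol{C}_{r,l}), \boldsymbol{O}) \mapsto \boldsymbol{G}^o$ of \cref{prop:Gparam} can be sketched numerically. The sketch below is our own illustration (not part of the package), with hypothetical sizes $m=3$ and $\Psi$ with $d_2=d_1=1$, so $\mathfrak{l}=2$ and $d_0=1$; it checks the orthogonality relations of $\mathcal{GO}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: m = 3, Psi with d_2 = d_1 = 1, so l = 2 and d_0 = m - l = 1.
m = 3
O, _ = np.linalg.qr(rng.standard_normal((m, m)))  # a random orthogonal matrix
O20, O10, Operp = O                               # rows: O_{2,0}, O_{1,0}, O_perp

# For r = 2, l = 1 we have r - l - 1 = 0, so the basis is O_perp alone and
# C_{2,1} is a 1x1 coefficient (a single proportionality constant).
C21 = rng.standard_normal()
G21 = C21 * Operp

# G^o stacked as [G^o_{2,1}; G^o_{2,0}; G^o_{1,0}] (assumed row ordering)
Go = np.vstack([G21, O20, O10])

# Orthogonality relations of the proposition:
assert np.isclose(O20 @ O10, 0.0)   # G^o_{2,0} orthogonal to G^o_{1,0}
assert np.isclose(G21 @ O20, 0.0)   # G^o_{2,1} orthogonal to G^o_{2,0}
assert np.isclose(G21 @ O10, 0.0)   # G^o_{2,1} orthogonal to G^o_{1,0}
```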
For $j > 0$, $\boldsymbol{G}_{:, j}$ could be arbitrarily large, so $\mathcal{G}$ and $\mathcal{GO}$ are not bounded. However, we have the following proposition:
\begin{proposition}
\label{prop:bounded}
$\mathcal{R}$ is bounded between the minimum and maximum of $\frac{u\boldsymbol{A} u'}{u\boldsymbol{B} u'}$ over $u \in \Mat(n_{\textsc{min}}, pm)$ of full row rank, and the latter maximum and minimum can be calculated by the generalized invariant subspace algorithm.
\end{proposition}
\begin{proof} This follows from the fact that the image of $\kappa$ is a subset of $\Mat(n_{\textsc{min}}, pm)$.
\end{proof}
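For intuition on the bound, one can check numerically that for symmetric positive definite $\boldsymbol{A}, \boldsymbol{B}$ the determinant ratio $\det(u\boldsymbol{A}u')/\det(u\boldsymbol{B}u')$ lies between $\lambda_{\min}^n$ and $\lambda_{\max}^n$, where the $\lambda$'s are the generalized eigenvalues of the pencil $(\boldsymbol{A}, \boldsymbol{B})$. This is a toy sketch with made-up sizes, not the generalized invariant subspace algorithm used by the package:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 8                              # toy sizes: n = n_min, N = p*m

def random_spd(k):
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)       # symmetric positive definite

A, B = random_spd(N), random_spd(N)
# Generalized eigenvalues of the pencil (A, B): eigenvalues of B^{-1} A,
# real and positive for a symmetric positive definite pair.
lam = np.sort(np.linalg.eigvals(np.linalg.solve(B, A)).real)

u = rng.standard_normal((n, N))          # full row rank almost surely
ratio = np.linalg.det(u @ A @ u.T) / np.linalg.det(u @ B @ u.T)

# After whitening by B, the ratio is a product of n eigenvalues of a compression
# of B^{-1/2} A B^{-1/2}, hence it lies between lam_min^n and lam_max^n.
assert lam[0] ** n <= ratio <= lam[-1] ** n
```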
The example for $p=2$ above suggests that for the general case the likelihood function could be defined on a geometric object of higher dimension. We will return to this topic in \cref{sec:examples} as well as the appendix.
\section{Parameter reduction}
It is well-known that reduced-rank regression reduces the number of parameters by $(m-\mathfrak{l})(k-\mathfrak{l})$. We can see this by observing that $\boldsymbol{H}$ and $\boldsymbol{G}$ have $k\mathfrak{l}$ and $m\mathfrak{l}$ parameters, and we can replace $\{\boldsymbol{H}, \boldsymbol{G}\}$ by $\{\boldsymbol{H}\boldsymbol{S}^{-1}, \boldsymbol{S}\boldsymbol{G}\}$, where $\boldsymbol{S}$ has $\mathfrak{l}^2$ parameters; so in total we have $(m+k)\mathfrak{l} - \mathfrak{l}^2$ free parameters, a reduction of $mk -(m+k)\mathfrak{l} + \mathfrak{l}^2$ parameters versus full regression. For the state-space model we have:
\begin{proposition}
The total parameter reduction for a state-space model of structure $\hat{\Psi}=[d_1,\cdots,d_p]$ is
\begin{equation}
\sum_{i=1}^p (k -\sum_{j\geq i}d_j)(m -\sum_{j\geq i}d_j)
\end{equation}
\end{proposition}
From this, we see there is no parameter reduction if $d_p = h=\min(m, k)$. The largest possible parameter reduction is $p(m-1)(k-1)$, which corresponds to $d_p=1, d_1=\cdots=d_{p-1}=0$. A change in $d_p$ has the most effect on the number of free parameters.
\begin{proof}
We count the number of parameters of $\boldsymbol{H}$ and $\boldsymbol{G}$ and subtract the number of parameters of $\boldsymbol{S}$ to count the free parameters. The reduction is:
$$pmk - (m+k)(\sum j d_j) + \sum_{i=1}^p(\sum_{j\geq i} d_j)^2$$
which we see is the same expression as in the proposition.
\end{proof}
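The count in the proposition and the expression in the proof can be cross-checked with a few lines of code (a standalone sketch; the function names are ours):

```python
from itertools import product

def reduction(d, m, k):
    """Proposition's formula: sum_i (k - s_i)(m - s_i), s_i = sum_{j>=i} d_j,
    for structure d = [d_1, ..., d_p]."""
    tails = [sum(d[i:]) for i in range(len(d))]
    return sum((k - s) * (m - s) for s in tails)

def reduction_alt(d, m, k):
    """Proof's expression: p*m*k - (m + k) * sum_j j*d_j + sum_i s_i^2."""
    p = len(d)
    tails = [sum(d[i:]) for i in range(p)]
    return p * m * k - (m + k) * sum((j + 1) * dj for j, dj in enumerate(d)) \
        + sum(s * s for s in tails)

m, k, p = 10, 7, 3
for d in product(range(3), repeat=p):    # all structures with d_i in {0, 1, 2}
    assert reduction(list(d), m, k) == reduction_alt(list(d), m, k)

# Largest possible reduction: d_p = 1, all other d_i = 0.
assert reduction([0] * (p - 1) + [1], m, k) == p * (m - 1) * (k - 1)
```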
We plot the number of parameters saved versus the minimal state-space dimension and the allocated ranks $d_i$ in \cref{fig:param_reduction} for $m=10, p=5$. The averages are taken over all possible $\Psi$. We also fix four structural parameters and plot against the fifth. The graph illustrates the point that we save the most parameters at higher exponents.
\begin{figure}
\includegraphics[scale=0.4]{param_saving.png}
\caption{Parameter reduction versus state-space dimension and structure parameters.}
\label{fig:param_reduction}
\end{figure}
\section{Examples}
\label{sec:examples}
In \parencite{minmal_varx_code} we provide the Python package implementing this model. The open-source notebook {\it minimal\_varx} in that package allows one to test the model in the {\it colab} environment without installing it on a local machine. We provide a number of examples in the notebook, whose results we summarize here.
The main class in the package is {\it varx\_minimal\_estimator}$(\Psi, m)$. It can evaluate the likelihood function as well as its gradient given a matrix $\boldsymbol{G}$. Given data matrices $\boldsymbol{X}$ and $\boldsymbol{Y}$, we provide four data-fitting methods: {\it simple\_fit, gradient\_fit, hessian\_fit} and {\it manifold\_fit} (we implemented {\it manifold\_fit} for the case $\mathfrak{l}=h$ only, using the orthogonality constraints but not the full reduction by the symmetry of $\mathcal{Q}$). The first three methods vectorize $\boldsymbol{G}$ and then use a standard optimizer. All methods estimate $\boldsymbol{G}$ by maximizing the likelihood. The examples show that if $\Psi$ is known, the algorithm converges relatively fast. After fitting, $\boldsymbol{H}, \boldsymbol{F}, \boldsymbol{G}, \boldsymbol{\Phi}$ can be read off the estimator. To forecast, we can use the {\it predict} method of the same class.
The package contains many utility functions, including those to produce the Smith-McMillan form of a polynomial matrix, as well as determinants of polynomial matrices and a test for stability. We use a utility function to create a random stable polynomial with state-space structure $\Psi$; this function generates the test samples. We provide a number of examples with different $m$ and $p$. The examples aim to clarify the concepts presented here. They also give a flavor of the behavior of this estimation method as the structure of $\boldsymbol{F}$ changes. In our examples, we mostly work with $1000$ samples.
Let us first consider the case $m=2, p=2$. There are $\binom{p+m-1}{p} = 3$ possible configurations of $\Psi$. The case $\Psi= [(2, 2)]$ is the full rank case. There are two reduced-rank cases: $\Psi=[(2,1), (1, 1)]$ and $\Psi=[(2, 1)]$:
\subsection{The case $m=2, \Psi=[(2,1), (1, 1)]$: a circle}
In this case, both $\boldsymbol{G}_{2, 0}$ and $\boldsymbol{G}_{1, 0}$ have full rank $1$. In the first test, we randomly generate a number of stable matrices with structure parameter $\Psi$ and then try to recover them using {\it simple\_fit}. We got reasonable convergence in our test. Applying \cref{prop:GGS}, $\boldsymbol{G}_{2, 0}$ and $\boldsymbol{G}_{1, 0}$ can be made orthogonal, so $\boldsymbol{G}_{2, 0} = [\cos(t), \sin(t)]$ and $\boldsymbol{G}_{1, 0} = [-\sin(t), \cos(t)]$. The generalized Rayleigh quotient is thus a ratio of two polynomial functions in $\cos(t)$ and $\sin(t)$, invariant when $\cos(t)$ is replaced by $-\cos(t)$ and $\sin(t)$ by $-\sin(t)$; hence it is sufficient to examine $t\in [0,\pi]$. The enclosed graph plots the negative log likelihood for different values of $t$, and shows that the minimum negative log likelihood is close to that of the data-generating $\boldsymbol{G}$, which is $-1.27233$.
\begin{figure}[h]
\includegraphics[scale=0.4]{circle_llk.png}
\caption{$m=2, \Psi=[(2,1), (1, 1)]$, parameters versus minus log likelihood.}
\end{figure}
Below are the original versus the fitted $\boldsymbol{\Phi}$:
$$
\boldsymbol{\Phi}_1^{\text{org}} = \begin{bmatrix}-0.21712203 & -0.5690077 \\
0.43775564 &-0.98005326\end{bmatrix}
\boldsymbol{\Phi}_2^{\text{org}} =
\begin{bmatrix}
0.12523273 & 0.04440429\\
-0.25249089 & 0.17464151
\end{bmatrix}
$$
$$
\boldsymbol{\Phi}_1^{\text{fitted}} =
\begin{bmatrix}
-0.19686572 & -0.53498222\\
0.42740042 & -0.95142986
\end{bmatrix}
\boldsymbol{\Phi}_2^{\text{fitted}} =
\begin{bmatrix} 0.12781235 & 0.05058104 \\
-0.27748382 & 0.20364765\end{bmatrix}
$$
We also generate a random $\boldsymbol{G}$ and show how $LQ$ factorization reduces it to orthogonal one.
\subsection{The case $m=2, \Psi=[(2,1)]$: a circle and its tangents.}
\label{subsec:circle_tangent}
In this case $\boldsymbol{G}$ has the form $\begin{pmatrix}\boldsymbol{G}_{2, 1}\\ \boldsymbol{G}_{2, 0}\end{pmatrix}$. As before, in the first test we do not put the orthogonal restriction on $\boldsymbol{G}$. We generate a number of stable polynomials with minimal state-space configuration $\Psi$ then recover them using {\it simple\_fit}. We get convergence as expected. The notebook also shows an example of $LQ$ factorization in this case.
This is a good example to illustrate the geometric concepts of the problem. Note that in this case $w = \boldsymbol{G}_{2, 1}$ and $v= \boldsymbol{G}_{2, 0}$ are both two-dimensional vectors. By \cref{prop:GGS}, $v$ could be assumed to have norm $1$, and $v.w = 0$. So we could think of $\boldsymbol{G}_{2, 0}$ as constrained to the unit circle, while for each $v$, $\boldsymbol{G}_{2, 1}$ is constrained to the line tangent to the unit circle at $v$. Therefore, the whole configuration of $\boldsymbol{G}$ could be restricted to pairs $(v, w)$ of a point on the unit circle and a vector on its tangent line at $v$, not unlike the configuration space of position and velocity of a circular motion in classical physics problems. As before, if we parameterize $v = (\cos t, \sin t)$ then we can write $w=
(-c\sin t, c\cos t) = cv_{\perp}$ with $v_{\perp} = (-\sin t, \cos t)$. So $O = \begin{bmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{bmatrix}$ as in \cref{prop:Gparam}. We plot the likelihood function on a two-dimensional surface of $(t,c)$, as well as plotting it for a number of fixed $t$, as below.
In this example, the likelihood corresponding to the $\boldsymbol{G}$ used in data generation is $-1.101996$, versus an estimated value of $-1.096700$.
\begin{figure}[h]
\includegraphics[scale=0.4]{tangent_circle_3d.png}
\includegraphics[scale=0.4]{tangent_circle_2d.png}
\caption{Parameters versus minus log likelihood. Circle with tangent.}
\end{figure}
$$\boldsymbol{\Phi}_1^{\text{org}} = \begin{bmatrix}
0.74186938 & -0.05524596 \\
-0.24536459 & 0.73052047
\end{bmatrix}
\boldsymbol{\Phi}_2^{\text{org}} = \begin{bmatrix}
0.12980835 & -0.05497244\\
-0.04293259 & 0.14280693
\end{bmatrix}
$$
$$
\boldsymbol{\Phi}_1^{\text{fitted}} =
\begin{bmatrix}
0.74142713 & -0.03649409\\
-0.24909784 & 0.75546506
\end{bmatrix}
\boldsymbol{\Phi}_2^{\text{fitted}} =
\begin{bmatrix}
0.10750374 & -0.07386304 \\
-0.03611811 & 0.13257722
\end{bmatrix}
$$
\subsection{VAR(p) with $m=2$}
This is a generalization of the last two examples. As $AR(p)$ is well understood, $m=k=2$ is the natural next step. $\Psi=[(p, 2)]$ is the case of full rank regression, which is already well studied, so we will assume $d_p = 1$. As before, there are two sub-cases, $\Psi=[(p, 1), (\rho, 1)]$ and $\Psi=[(p, 1)]$. In the notebook on our code page we show the reduction of the search space for $\boldsymbol{G}^o$ by the $LQ$ factorization discussed here.
For the first case, the configuration space of $\boldsymbol{G}^o$ is a circle with $p-\rho-1$ tangent vectors: $\boldsymbol{G}^o_{p, 0}$ and $\boldsymbol{G}^o_{\rho, 0}$ form an orthonormal basis, and $\boldsymbol{G}^o_{p, 1},\cdots,\boldsymbol{G}^o_{p, p-\rho-1}$ are proportional to $\boldsymbol{G}^o_{\rho, 0}$. In this case the full search space of $\boldsymbol{G}$ is of dimension $2n_{\textsc{min}} = 2(p+\rho)$ while the reduced space for $\boldsymbol{G}^o$ has dimension $p-\rho$.
For the second case, $\boldsymbol{G}^o_{p, 0}$ could be made of norm $1$, and $\boldsymbol{G}^o_{p, 1},\cdots ,\boldsymbol{G}^o_{p, p-1}$ are orthogonal to it. So this case is a circle with $p-1$ tangent vectors. The minimal state-space dimension is $p$ and the search space for $\boldsymbol{G}$ is of dimension $2p$, while the search space for $\boldsymbol{G}^o$ is of dimension $p$.
This example illustrates the point that even when we can search on the full space of $\boldsymbol{G}$ for smaller $m$ and $p$, the reduced space is of just half the dimension, but it is more complex to describe. We do not implement an optimization routine here but since we can parameterize the search space for $\boldsymbol{G}^o$ explicitly, it could be done via the chain rule and a standard optimizer.
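The circle-with-tangents picture for the second case ($\Psi=[(p,1)]$, $m=2$) can be made concrete. The sketch below is our own illustration, with an assumed row ordering $\boldsymbol{G}^o_{p,p-1},\dots,\boldsymbol{G}^o_{p,1},\boldsymbol{G}^o_{p,0}$; it builds $\boldsymbol{G}^o$ from the angle $t$ and the $p-1$ tangent coefficients, then checks the orthogonality constraints:

```python
import numpy as np

def make_Go(t, c):
    """G^o for m = 2, Psi = [(p, 1)]: rows G^o_{p,p-1}, ..., G^o_{p,1}, G^o_{p,0},
    with G^o_{p,0} = v on the unit circle and G^o_{p,l} = c_l * v_perp."""
    v = np.array([np.cos(t), np.sin(t)])
    v_perp = np.array([-np.sin(t), np.cos(t)])
    rows = [cl * v_perp for cl in c[::-1]] + [v]   # descending l, then l = 0
    return np.vstack(rows)

p = 4
Go = make_Go(0.7, c=[1.3, -0.2, 2.1])   # p - 1 = 3 tangent coefficients
assert Go.shape == (p, 2)               # the full search space for G would be 2p = 8
v = Go[-1]
assert np.isclose(v @ v, 1.0)           # G^o_{p,0} is on the unit circle
assert np.allclose(Go[:-1] @ v, 0.0)    # every G^o_{p,l}, l > 0, is tangent at v
```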
\subsection{$\Psi=[(p, d)]$ and the Velu-Reinsel-Wichern model.}
In \parencite{VeluReinselWichern}, the authors introduced a model of the form
$$\boldsymbol{y}_t = A(L) B(L) L \boldsymbol{x}_t +\boldsymbol{\epsilon}_t$$
$\boldsymbol{x}_t$ in their paper is $\boldsymbol{x}_{t-1}$ in our notation. Here $A(L)$ is a $k\times d$ polynomial matrix of degree $p_1$ and $B(L)$ is a $d\times m$ polynomial matrix of degree $p_2$ (we switch $p_1$ and $p_2$ relative to their paper). Set $p = p_1+p_2+1$. Consider the state-space model with $\boldsymbol{F} = \boldsymbol{K}(p, d)$. Set $\boldsymbol{G}_{p, i}= B_{p_2-i}$ if $i \leq p_2$, and zero otherwise; set $\boldsymbol{H}_{p, i} = A_{p_1-i}$ if $i\leq p_1$, and zero otherwise. From \cref{eq:Phi}:
$$\boldsymbol{H}(\boldsymbol{I}-\boldsymbol{F} L)^{-1}\boldsymbol{G} = A(L) B(L) L$$
We note that a number of blocks are set to zero in this model. Their paper considers the case when $A(L)$ is constant, corresponding to the case where only $\boldsymbol{H}_{p, 0}$ is non-zero. We can modify our framework to obtain the likelihood function for non-constant $A(L)$. Noting that the regressor for $A_i$ is $\sum_j B_{j} L^{j+i+1}\boldsymbol{X}$, we get
$$[A_0,\cdots,A_{p_1}] = \boldsymbol{Y}\boldsymbol{X}'_B(\boldsymbol{X}_B\boldsymbol{X}'_B)^{-1}$$
where $\boldsymbol{X}_B = \upsilon(B)\boldsymbol{X}_{\textsc{LAG}}$ and $\upsilon(B)$ is a block matrix of $(p_1+1)$ block rows $\times (p_1+p_2+1)$ block columns:
\begin{equation}
\upsilon(B) = \begin{bmatrix} B_{p_2} & B_{p_2-1} &\cdots & B_{0} & 0 & \cdots & 0\\
0 & B_{p_2} &\cdots & B_{1} & B_{0} & \cdots & 0\\
\vdots & \vdots &\cdots & \vdots & \vdots & \cdots & \vdots\\
0 & 0 &\cdots & 0 & B_{p_2} & \cdots & B_0\\
\end{bmatrix}
\end{equation}
We could think of $\upsilon$ as a truncated $\kappa$. For the case $p_1=0$ considered in their paper, $\upsilon(B)$ has only one block row and $p_2+1$ block columns. With $\upsilon(B)$ in place of $\kappa(G)$, the result of \cref{th:likelihood} still applies if $\upsilon(B)$ is of full row rank. We will assume this is the case, which means $\begin{bmatrix}B_{p_2} & B_{p_2-1} & \cdots & B_{0}\end{bmatrix}$ is of full row rank.
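The block-Toeplitz structure of $\upsilon(B)$ is straightforward to construct; the sketch below (our own helper, not the package's) builds it from the coefficient list $[B_0,\dots,B_{p_2}]$ and checks the shapes, including the single block row when $p_1=0$:

```python
import numpy as np

def upsilon(B, p1):
    """Build v(B) from B = [B_0, ..., B_{p2}] (each d x m): a block-Toeplitz
    matrix with p1 + 1 block rows and p1 + p2 + 1 block columns; block row i
    carries B_{p2}, ..., B_0 starting at block column i."""
    p2 = len(B) - 1
    d, m = B[0].shape
    out = np.zeros(((p1 + 1) * d, (p1 + p2 + 1) * m))
    for i in range(p1 + 1):
        for j in range(p2 + 1):          # j-th slot of row i holds B_{p2 - j}
            out[i*d:(i+1)*d, (i+j)*m:(i+j+1)*m] = B[p2 - j]
    return out

rng = np.random.default_rng(3)
B = [rng.standard_normal((1, 2)) for _ in range(2)]   # B_0, B_1: p2 = 1, d = 1, m = 2
V = upsilon(B, p1=1)
assert V.shape == (2, 6)                              # (p1+1)*d by (p1+p2+1)*m
assert np.allclose(V[0:1, 0:2], B[1]) and np.allclose(V[0:1, 2:4], B[0])
assert np.allclose(V[1:2, 0:2], 0.0)                  # leading zero block in row 2
assert upsilon(B, p1=0).shape == (1, 4)               # single block row when p1 = 0
```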
\subsection{Likelihood estimation over different $\Psi$'s.}
For $m$ and $p$ sufficiently large, the total number of configurations $\binom{h+p-1}{p}$ increases fairly quickly. The number of possible minimal state-space dimensions increases only linearly, ranging between $p$ and $hp$. A suggested strategy is not to iterate over all possible $\Psi$, but rather to start with an $n_{\textsc{min}}$ and then search for $\boldsymbol{F}$ (nilpotent but not necessarily Jordan) using a continuous optimization method. However, it should be instructive to get a sense of how $\Psi$, $n_{\textsc{min}}$ and the likelihood function interact. We plot the number of configurations of $\Psi$ per state-space dimension for $h=10, p=5$. The total number of distinct $\Psi$'s is 2002. The minimal state-space dimension is between $5$ and $50$, and the number of $\Psi$'s per minimal state-space dimension follows a bell-shaped curve, with the middle dimensions having the most $\Psi$'s, as in \cref{fig:PsiMcMillan}.
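The counts quoted above can be reproduced under our reading of the configuration constraint ($d_1,\dots,d_p\geq 0$ with $d_p\geq 1$ and $\sum_i d_i\leq h$, McMillan degree $\sum_i i\, d_i$), which matches the stated total of 2002 and the degree range $[p, hp]$:

```python
from itertools import product
from math import comb

h, p = 10, 5
# Counting convention (our reading): d_1, ..., d_p >= 0 with d_p >= 1 and
# sum d_i <= h; the McMillan degree is sum_i i * d_i.
configs = [d for d in product(range(h + 1), repeat=p)
           if d[-1] >= 1 and sum(d) <= h]
assert len(configs) == comb(h + p - 1, p) == 2002

degrees = [sum((i + 1) * di for i, di in enumerate(d)) for d in configs]
assert min(degrees) == p and max(degrees) == h * p    # between 5 and 50
```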
\begin{figure}[h]
\includegraphics[scale=0.4]{n_of_psi_vs_mcmillan.png}
\caption{Number of $\Psi$ vs.\ McMillan degree}
\label{fig:PsiMcMillan}
\end{figure}
We also generate a stable matrix with $m=5, p=2, \Psi_{gen}=[(2, 2), (1, 2)]$. If we do not know $\Psi_{gen}$, we may need to search over the 15 possible $\Psi$ configurations; the optimization results are summarized in \cref{tab:all_psi}. From the table, once we reach the correct $\Psi$, iterating over more complex $\Psi$'s does not improve the likelihood. This suggests that we need not search over all $\Psi$, but rather aim to find a $\Psi$ with a sufficiently small state-space dimension whose likelihood is sufficiently close to the full regression likelihood.
\begin{table}[H]
\begin{tabular}{lrrrrr}
\hline
{} & $d_2$ & $d_1$ & org\_llk & fitted\_llk & success \\
\hline
0 & 1.0 & 0.0 & -8.746828 & -5.575984 & 1.0 \\
1 & 1.0 & 1.0 & -8.746828 & -6.715179 & 1.0 \\
2 & 1.0 & 2.0 & -8.746828 & -7.614981 & 1.0 \\
3 & 1.0 & 3.0 & -8.746828 & -8.290710 & 1.0 \\
4 & 1.0 & 4.0 & -8.746828 & -8.342102 & 1.0 \\
5 & 2.0 & 0.0 & -8.746828 & -7.745144 & 1.0 \\
6 & 2.0 & 1.0 & -8.746828 & -8.569492 & 1.0 \\
7 & 2.0 & 2.0 & -8.746828 & -8.754073 & 1.0 \\
8 & 2.0 & 3.0 & -8.746828 & -8.754607 & 1.0 \\
9 & 3.0 & 0.0 & -8.746828 & -8.754916 & 1.0 \\
10 & 3.0 & 1.0 & -8.746828 & -8.756864 & 1.0 \\
11 & 3.0 & 2.0 & -8.746828 & -8.759571 & 1.0 \\
12 & 4.0 & 0.0 & -8.746828 & -8.759829 & 1.0 \\
13 & 4.0 & 1.0 & -8.746828 & -8.760026 & 1.0 \\
14 & 5.0 & 0.0 & -8.746828 & -8.760028 & 1.0 \\
\hline
\end{tabular}
\caption{Likelihood function for different $\Psi$.}
\label{tab:all_psi}
\end{table}
\subsection{Other examples}
\label{subsec:other_examples}
We ran an example for $m=7, k=5$ with $\Psi=[(2, 2), (1, 2)]$; again the fitted and original likelihoods are close ($-19.87$ versus $-19.91$). In a final example, we ran $10$ tests with $k=8, p=3$, $\Psi=[(3, 1), (2, 1), (1, 2)]$. The fitted likelihood is smaller than the original likelihood, as seen in \cref{tab:m8p3}. This seems to be an issue with autoregressive noise in the noise series used to generate the sample. We also show the full (i.e. no reduced-rank assumption) regression likelihood; it agrees well with our minimal state-space likelihood.
\begin{table}[H]
\begin{tabular}{lrrrr}
\hline
{} & org\_llk & fitted\_llk & full\_llk & success \\
\hline
0 & -6.778974 & -7.333462 & -7.397742 & 1.0 \\
1 & -7.664920 & -8.280117 & -8.329034 & 1.0 \\
2 & -7.909143 & -9.150681 & -9.203443 & 1.0 \\
3 & -14.679623 & -15.425846 & -15.491684 & 1.0 \\
4 & -9.291283 & -10.052678 & -10.098918 & 1.0 \\
5 & -12.626511 & -13.004966 & -13.049997 & 1.0 \\
6 & -11.045051 & -12.250215 & -12.296307 & 1.0 \\
7 & -6.679380 & -7.898204 & -7.974920 & 1.0 \\
8 & -10.012286 & -10.811005 & -11.115286 & 1.0 \\
9 & -64.521764 & -65.580908 & -65.678105 & 0.0 \\
\hline
\end{tabular}
\caption{Case $m=8, p=3$}
\label{tab:m8p3}
\end{table}
\section{Discussion}
\subsection{An alternative state-space model}
We note that $L^{-1}\boldsymbol{T}(L^{-1})^{-1}$ is also strictly proper, so we have a state-space realization:
$$L^{-1}\boldsymbol{T}(L^{-1})^{-1} = \boldsymbol{H}_a (L\boldsymbol{I} -\boldsymbol{F}_a)^{-1}\boldsymbol{G}_a$$
From here
$$\boldsymbol{T}(L)^{-1} = \boldsymbol{H}_a(\boldsymbol{I} - \boldsymbol{F}_a L)^{-1}\boldsymbol{G}_a$$
For the Vector Autoregressive model, we have the representation:
$$\boldsymbol{T}(L)^{-1} = \boldsymbol{I} - \sum\boldsymbol{\Phi}_i L^{i} = \boldsymbol{H}_a (\boldsymbol{I} - \boldsymbol{F}_a L)^{-1}\boldsymbol{G}_a$$
We note also that zero is the only pole of $L^{-1}\boldsymbol{T}(L^{-1})^{-1}$, so $\boldsymbol{F}_a$ is again a Jordan matrix. Assume we have the Smith-McMillan form:
$$L^{-1}\boldsymbol{T}(L^{-1}) = A(L)S(L)B(L)$$
Here $A(L), B(L)$ are invertible polynomial matrices and $S(L)$ is diagonal, satisfying the Smith-McMillan divisibility requirement. Then
$$L^{-1}\boldsymbol{T}(L^{-1})^{-1} = B(L)^{-1}L^{-2}S(L)^{-1}A(L)^{-1}$$
$L^{-2}S(L)^{-1}$ is diagonal but does not necessarily satisfy the Smith-McMillan divisibility requirement; however, we can make it do so, as in the final step of the Smith-McMillan algorithm. So $L^{-1}\boldsymbol{T}(L^{-1})^{-1}$ and the traditional state-space form are intimately related. However, there is no direct link between our $\textsc{AR}$-state-space realization and the traditional one. This alternative model is harder to estimate, even for $p=1$.
\subsection{Rank condition on $\boldsymbol{H}$}
So far we recover $\boldsymbol{H}$ by regression. Per Kalman, we should confirm the rank of $\boldsymbol{H}_{:, 0}$. If it is not of full rank, the structure parameter $\Psi$ that we work with may not be minimal, and we can replace it by one with a further-reduced structure.
\subsection{Determining the structure parameters}
As $p$ and $m$ increase, the number of configurations for $\Psi$ increases, polynomially in $m$ and exponentially in $p$. Given that we have relatively fast convergence, a parallel search over configurations is certainly possible for a reasonable range of $p$ and $m$. However, it may be unnecessary. The objective of the search should be the configuration that balances parameter reduction against close approximation to the full likelihood. As pointed out in the earlier analysis, the parameter saving is affected most by decreasing $d_i$ for higher $i$. This motivates a search process where we do a full regression to obtain $\boldsymbol{\Phi}$, then apply a rank test to reduce the rank of $\boldsymbol{G}_{:, 0}$ while penalizing higher exponents of the Jordan matrix. This could be done sequentially in descending order of exponent, stopping after a number of steps based on a balance between likelihood and parameter count. The search on each exponent could be done using a bisection search if $m$ is sufficiently large, with a $\log(m)$ iteration cost, so this method should be applicable even for large values of $m$ and $p$. The analysis could be guided by an information criterion ($AIC$ or $BIC$) or a likelihood-ratio criterion. Another way is to formulate an objective function that penalizes a norm of $\boldsymbol{F}^i$ for higher $i$. This may be a future research direction.
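The per-exponent bisection step can be sketched generically. Here {\it score} is a placeholder for the fitted log-likelihood at a candidate rank (assumed nondecreasing in the rank); this is our own illustration, not the package's API:

```python
def smallest_adequate_rank(score, d_max, tol):
    """Bisection over a rank d in [0, d_max]: assuming score(d) is nondecreasing
    in d, return the smallest d whose score is within tol of score(d_max).
    score stands in for a fitted log-likelihood at that rank."""
    target = score(d_max) - tol
    lo, hi = 0, d_max
    while lo < hi:
        mid = (lo + hi) // 2
        if score(mid) >= target:
            hi = mid          # mid is already adequate; search lower ranks
        else:
            lo = mid + 1      # mid is too small; search higher ranks
    return lo

# Toy monotone score that saturates at rank 3; the search needs O(log d_max) calls.
toy = lambda d: -1.0 / (1 + min(d, 3))
assert smallest_adequate_rank(toy, d_max=10, tol=1e-9) == 3
```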
\subsection{Convergence Analysis}
It is well-known that the Rayleigh quotient of a positive definite matrix is convex and has a unique minimum. At present, we know little about the analytic properties of $\mathcal{R}(\boldsymbol{G}, \boldsymbol{A}, \boldsymbol{B})$, except that its Hessian is known and that it is bounded. For general $\boldsymbol{A}$ and $\boldsymbol{B}$, not necessarily constructed from the regression analysis here, it would be interesting to analyze the critical points of $\mathcal{R}$. One question is whether its minimal value is attained at a finite point, or in a direction where $\boldsymbol{G}_{r, l}$ goes to infinity ($l > 0$). A second question is whether it has more than one local minimum.
\subsection{Generalization to VARMA}
As Kalman's result addresses the multiple-root case of minimal realization, a natural question is whether the results presented here have an analogue of the full Gilbert-Kalman picture. The answer is yes, which we will address in a forthcoming article. We hope the full result will give a new effective method in Linear System Identification.
\subsection{Further directions}
The approach could be adjusted to address drift and seasonal adjustments. We have not addressed integration in this paper; however, it seems plausible that it could be handled with appropriate modifications. A motivation for this paper comes from Johansen's approach to integration. Fixing a structure $\Psi$, we can study other loss functions depending on $\boldsymbol{G}$ to go beyond the Gaussian assumption. Instead of $\boldsymbol{H}$ acting linearly on $\kappa(\boldsymbol{G})L^i\boldsymbol{X}$, we can assume a non-linear format. For example, we can use the kernel trick to replace $\boldsymbol{X}_{\textsc{LAG}}\Xlag', \boldsymbol{Y}\bY'$ and $\boldsymbol{Y}\boldsymbol{X}_{\textsc{LAG}}'$ with kernel values. We look forward to testing the model with real data. We also look to improve the optimization algorithms.
\begin{appendices}
\section{Vector bundle on flag manifolds}
To take full advantage of the invariance properties of the likelihood function, manifold optimization may be an attractive option. In this appendix we summarize the results in terms of flag manifolds. The uninterested reader can skip the appendix, considering it a discussion of a particular optimization technique that helps reduce the search space to a lower-dimensional set by taking advantage of the invariance under replacing $\boldsymbol{G}^o$ by $\boldsymbol{Q}\boldsymbol{G}^o$. On the other hand, the geometric picture could be thought of as a high-dimensional generalization of the configurations of pairs of a particle moving on a circle and its velocity vector, as explained in \cref{subsec:circle_tangent}.
Let us first fix a few notations.
\begin{itemize}
\item Recall $GL(m)$ is the group of all invertible matrices of size $m\times m$, $O(m)$ is the orthogonal group, $SO(m)$ is the special orthogonal group of all orthogonal matrices of size $m\times m$ with determinant $1$, $S(O(l_1)\times O(l_2)\times \cdots \times O(l_p) \times O(m -\mathfrak{l}))$ is the block diagonal subgroup of orthogonal group with block size $(l_1,\cdots, l_p, m -\mathfrak{l})$ and determinant $1$.
\item Let $f_1 = l_1, f_i = \sum_{j=1}^{i} l_j$ and $f_{g+1} = m$. Consider $\mathcal{F}(f_1, \cdots, f_{g+1}; \mathbb{R}) = O(m)/(O(l_1)\times O(l_2)\times \cdots \times O(l_p) \times O(m -\mathfrak{l}))$. It is called a real flag manifold. It has an equivalent representation: $SO(m)/S(O(l_1)\times O(l_2)\times \cdots \times O(l_p) \times O(m -\mathfrak{l}))$, see \parencite{ye2019optimization}. (In the literature, it can also be considered as a quotient of $GL(m)$ by a parabolic subgroup of $GL(m)$.)
\item Let $\mathcal{R}(G, \boldsymbol{A}, \boldsymbol{B}) = \frac{\det(\kappa(G)\boldsymbol{A}\kappa(G)')}{\det(\kappa(G)\boldsymbol{B}\kappa(G)')}$ be the generalized Rayleigh quotient corresponding to two symmetric matrices $\boldsymbol{A}, \boldsymbol{B}$, each of size $mp \times mp$.
\end{itemize}
The following theorem describes the manifold on which $\mathcal{R}$ is defined, based on the invariance properties above. It may be just a restatement of the results in \cref{sec:equivalence} in fancier language, but it allows us to apply manifold optimization techniques:
\begin{theorem} The configuration space $\mathcal{GO}$ and the group of orthogonal diagonal block matrices $\mathcal{Q}$ have the following properties:
\begin{itemize}
\item $\mathcal{Q}$ is isomorphic to $O(l_1)\times \cdots \times O(l_g)$.
\item The map from $\mathcal{GO}$ to $O(m)/ O(m-\mathfrak{l})$ given by first representing $\boldsymbol{G}^o$ as a pair $(\boldsymbol{C}, \boldsymbol{O})$, with $\boldsymbol{C} = (\boldsymbol{C}_{r,l})_{r,l}$ and $\boldsymbol{O}$ an orthogonal matrix as in \cref{prop:Gparam}, and then mapping $\boldsymbol{O}$ to the class $O(m-\mathfrak{l})\boldsymbol{O}\in O(m)/ O(m-\mathfrak{l})$ is well-defined: two representations $(\boldsymbol{C}, \boldsymbol{O})$ and $(\boldsymbol{C}_1, \boldsymbol{O}_1)$ of $\boldsymbol{G}^o$ give the same image in $O(m)/ O(m-\mathfrak{l})$.
\item The above map induces a fiber bundle projection $\boldsymbol{\pi}$ from $\mathcal{GO}/\mathcal{Q}$ to $\mathcal{F}(f_1, \cdots, f_{g+1}; \mathbb{R})$. Each fiber is a vector space isomorphic to $\oplus_{r=1}^p \Mat(d_r, \sum_{l=1}^{r-1}\sum_{j=0}^{r-l-1}d_j)$ where $d_0=m-\mathfrak{l}$. So $\mathcal{GO}/\mathcal{Q}$ is a vector bundle over $\mathcal{F}(f_1,\cdots, f_{g+1}; \mathbb{R})$. We will call it $\mathcal{K}(\Psi; m)$. When $p = 1$ or $\mathfrak{l}= m$, $\mathcal{K}$ could be identified with $\mathcal{F}$.
\item The dimension of $\mathcal{K}$ is given by $m\sum_{j>0} j d_j - \sum_{i\geq 1}(\sum_{j\geq i}d_j)^2$
\item It is also given by $\sum_{i>j\geq 0}d_i d_j + \sum_{r=1}^p d_r \sum_{l=1}^{r-1}\sum_{j=0}^{r-l-1}d_j$.
\item $\mathcal{R}$ is a bounded smooth function on $\mathcal{K}(\Psi, m)$.
\end{itemize}
\end{theorem}
\begin{proof}
The first statement follows from the block-diagonal form of the elements of $\mathcal{Q}$, together with $\boldsymbol{Q}_{\rho, l;\rho,l} = \boldsymbol{Q}_{\rho}$ for $p-1\geq l \geq 1$.
The second statement is clear, as another representation of $\boldsymbol{G}^o$ would have the form $(\boldsymbol{C}\boldsymbol{Q}_{\perp}^{\prime}, \begin{bmatrix}\boldsymbol{G}^o_{:, 0} \\ \boldsymbol{Q}_{\perp}\boldsymbol{G}^o_{\perp} \end{bmatrix})$ for some $\boldsymbol{Q}_{\perp}\in O(m-\mathfrak{l})$. For the next statement, if $\boldsymbol{G}^o$ is replaced by $\boldsymbol{Q}\boldsymbol{G}^o$ with $\boldsymbol{Q}\in \mathcal{Q}$, then $(\boldsymbol{C}, \begin{bmatrix}\boldsymbol{G}^o_{:, 0}\\ \boldsymbol{G}^o_{\perp}\end{bmatrix})$ is transformed to
$$\Big(\big(\boldsymbol{Q}_r\boldsymbol{C}_{r, l}\diag(\boldsymbol{Q}'_{r-l-1}\cdots \boldsymbol{Q}'_1,\boldsymbol{I}_{d_0})\big)_{p\geq r\geq 1,\, r-1\geq l \geq 0}, \begin{bmatrix}\boldsymbol{Q}_p \boldsymbol{G}_{p, 0}\\ \vdots \\ \boldsymbol{Q}_1 \boldsymbol{G}_{1, 0}\\ \boldsymbol{G}^o_{\perp} \end{bmatrix}\Big)$$
So $\boldsymbol{\pi}(\boldsymbol{Q}\boldsymbol{G}^o)$ is in the same equivalence class as $\boldsymbol{\pi}(\boldsymbol{G}^o)\in \mathcal{F}(f_1,\cdots, f_{g+1}; \mathbb{R})$. The fiber is isomorphic to the space of tuples $\boldsymbol{C}=\oplus \boldsymbol{C}_{r, l}$. The first expression for the dimension of $\mathcal{K}$ is just $\dim(\mathcal{G})-\dim(\mathcal{S})$; the second is the sum of the dimensions of the flag manifold and of the vector space fiber.
That $\mathcal{R}$ is bounded is already proved in \cref{prop:bounded}, and it is clearly smooth.
\end{proof}
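As a numerical sanity check (not part of the proof), the two dimension formulas stated in the theorem can be compared directly. In the sketch below, which is ours rather than the paper's code, `d` holds $(d_0,\dots,d_p)$ with $d_0=m-\mathfrak{l}$:

```python
import numpy as np

def dim_formula_1(d):
    """m * sum_{j>0} j d_j - sum_{i>=1} (sum_{j>=i} d_j)^2, with m = sum_j d_j."""
    d = np.asarray(d)
    p = len(d) - 1
    m = d.sum()
    return m * sum(j * d[j] for j in range(1, p + 1)) \
        - sum(d[i:].sum() ** 2 for i in range(1, p + 1))

def dim_formula_2(d):
    """sum_{i>j>=0} d_i d_j  +  sum_r d_r sum_{l=1}^{r-1} sum_{j=0}^{r-l-1} d_j."""
    d = np.asarray(d)
    p = len(d) - 1
    flag = sum(d[i] * d[j] for i in range(p + 1) for j in range(i))
    fiber = sum(d[r] * sum(d[j] for l in range(1, r) for j in range(r - l))
                for r in range(1, p + 1))
    return flag + fiber

# the two expressions agree, e.g. for d = (2, 3, 5)
assert dim_formula_1([2, 3, 5]) == dim_formula_2([2, 3, 5])
```

The agreement is an algebraic identity: writing $T_u=\sum_{j\leq u}d_j$ and $S_i=\sum_{j\geq i}d_j$, both expressions reduce to $\sum_r d_r \sum_{u=0}^{r-1} T_u$.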
Optimizing on $\mathcal{K}$ could be advantageous, as in many instances its dimension is much smaller than $\dim(\mathcal{G}) = m\sum jd_j$; in our examples the search space dimension could be reduced by half. If there were a manifold optimization package for $\mathcal{K}$ we could take advantage of it. To our knowledge such a package is not yet available (however, see \parencite{ye2019optimization}). On the other hand, we can optimize on $O(m)\times \oplus_{r=1}^p \Mat(d_r, \sum_{l=1}^{r-1}\sum_{j=0}^{r-l-1}d_j)$ instead of taking the quotient down to the flag manifold bundle level. We use the packages \parencite{manopt, JMLR:v17:16-177} to optimize on $O(m)$ for the case $\mathfrak{l} = h$, and offer a method {\it manifold\_fit} in our package.
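For intuition, the invariance that lets $\mathcal{R}$ descend to the quotient can be sketched numerically. The snippet below is a simplified stand-in (it replaces $\kappa(G)$ by a generic full-rank matrix, which is our simplification, not the paper's construction): the determinant-ratio Rayleigh quotient is unchanged under left multiplication by any invertible matrix, since $\det(\boldsymbol{Q})^2$ cancels between numerator and denominator.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 6, 3                                  # ambient dimension, rows of G

A = rng.standard_normal((m, m)); A = A + A.T                  # symmetric
B = rng.standard_normal((m, m)); B = B @ B.T + m * np.eye(m)  # symmetric positive definite

def rayleigh(G):
    """det(G A G') / det(G B G') for a full-rank k x m matrix G."""
    return np.linalg.det(G @ A @ G.T) / np.linalg.det(G @ B @ G.T)

G = rng.standard_normal((k, m))
Q = rng.standard_normal((k, k))              # generic invertible k x k matrix

# det(Q)^2 appears in both determinants and cancels in the ratio
assert np.isclose(rayleigh(Q @ G), rayleigh(G))
```

This is exactly why the quotient is well defined on equivalence classes, and why the optimization can be carried out over a smaller space than $\mathcal{G}$ itself.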
\end{appendices}
\printbibliography[title={Bibliography}]
\end{document}
\section{Preliminaries}
No solid theoretical principle prevents \hbox{neutrinos } from having mass.
Moreover, from the point of view of theory, it is rather
mysterious that \hbox{neutrinos } seem to be so special when compared
with the other fundamental fermions. Many attractive
extensions of the \hbox{standard model } require \hbox{neutrinos } to be massive \cite{fae}.
This is the case, for example, in SO(10) or left right
symmetric theories, where the presence of \hbox{right-handed } neutrinos
is required in order to realize the extra symmetry. On the
other hand there is, in these theories, a natural mechanism to
understand the relative smallness of \hbox{neutrino } masses \cite{GRS}.
In this case lepton number is part of the \hbox{gauge } symmetry and
its feeble violation is related to the observed smallness
of \hbox{neutrino } masses and to the V-A nature of the weak interactions
\cite{LR}.
This is by no means the only way to \hbox{neutrino } masses.
Indeed, it has been realized in the early days
that lepton number may be a spontaneously broken
global symmetry \cite{CMP0}. Since then there have
been many other attractive suggestions of how to realize
this idea in realistic scenarios \cite{fae}. In this
case, quite naturally, the observed smallness of \hbox{neutrino }
masses does not require any large mass scale.
The extra particles required to generate the \hbox{neutrino }
masses have masses at scales accessible to present
experiments. Such a low scale for lepton number breaking
could have important implications not only in astrophysics
and cosmology (e.g. electroweak baryogenesis) but also in
particle physics as we will discuss below.
Whichever path one adopts, present theory is not capable,
from general principles, of predicting the scale of \hbox{neutrino }
masses any better than it can fix the masses of the
quarks and charged leptons, say the muon. One should at this
point turn to experiment.
There are several limits on \hbox{neutrino } masses that follow
from observation. The laboratory bounds may be
summarized as \cite{PDG92}
\begin{equation}
\label{1}
m_{\nu_e} \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 10 \: \rm{eV}, \:\:\:\:\:
m_{\nu_\mu} \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 270 \: \rm{keV}, \:\:\:\:\:
m_{\nu_\tau} \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 31 \: \rm{MeV}
\end{equation}
These limits follow purely from kinematics
and have therefore the great advantage that they
are the most model-independent of the \hbox{neutrino } mass
limits. The experimental status of the limits on
the \hbox{$\nu_e$ } mass have been extensively discussed
here \cite{Erice}.
Note that the limit on the \hbox{$\nu_\tau$ } mass may
be substantially improved at a tau factory \cite{jj}.
In addition, there are limits
on neutrino masses that follow from the nonobservation of
neutrino oscillations. I refer the reader to ref.
\cite{granadaosc} for a detailed discussion
and compilation. As opposed to the limits in \eq{1},
\hbox{neutrino } oscillation limits are correlated ones, involving
\hbox{neutrino } mass differences versus mixing. Thus they rely on
the additional assumption, although quite natural in
\hbox{gauge } theories, that massive \hbox{neutrinos } do mix.
Apart from the above limits, there is an important
one derived from the non-observation of the
${\beta \beta}_{0\nu}$ nuclear decay process i.e.
the process by which nucleus $(A,Z-2)$ decays to
$(A,Z) + 2 \ e^-$.
This lepton number violating process would arise
via \hbox{neutrino } exchange and, although highly favoured by phase space
over the usual $2\nu$ mode, it proceeds only if the virtual neutrino
is a Majorana particle. The decay amplitude is
proportional to
\begin{equation}
\VEV{m} = \sum_{\alpha} {K_{e \alpha}}^2 m_{\alpha}
\label{AVERAGE}
\end{equation}
where $\alpha$ runs over the light neutrinos.
The non-observation of ${\beta \beta}_{0\nu}$
in $^{76} \rm{Ge}$ and other nuclei leads to the limit \cite{Avignone}
\begin{equation}
\label{bb}
\VEV{m} \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 1 - 2 \ \rm{eV}
\end{equation}
depending on nuclear matrix elements \cite{haxton_granada}.
Even better sensitivity is expected from the upcoming
enriched germanium experiments \cite{Avignone}.
Although rather stringent, the limit in \eq{bb}
is rather model-dependent, and does not apply
when total lepton number is an unbroken symmetry,
as is the case for Dirac \hbox{neutrinos. } Even if all \hbox{neutrinos }
are Majorana particles, $\VEV{m}$ may differ substantially
from the true neutrino masses $m_\alpha$ relevant for kinematical
studies, since in \eq{AVERAGE} the contributions of
different neutrino types may interfere destructively,
similarly to what happens in the simplest Dirac \hbox{neutrino } case,
where the lepton number symmetry enforces that
$\VEV{m}$ automatically vanishes \cite{QDN}.
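The possible destructive interference in \eq{AVERAGE} is easy to illustrate with made-up numbers (the mixing pattern below is purely illustrative): two degenerate Majorana states whose $K_{e\alpha}$ differ by a factor of $i$, i.e. the Dirac limit, give $\VEV{m}=0$ even though the kinematical masses are nonzero.

```python
import numpy as np

def m_eff(K_e, masses):
    """<m> = sum_a K_{e a}^2 m_a  (K may be complex; Majorana phases enter via K^2)."""
    K_e = np.asarray(K_e, dtype=complex)
    return np.sum(K_e**2 * np.asarray(masses))

theta = np.pi / 4
# Two degenerate Majorana states with opposite CP parities (K_e2 carries a factor i):
# this is the Dirac limit, and <m> cancels exactly even though each m_a = 1 eV.
val = m_eff([np.cos(theta), 1j * np.sin(theta)], [1.0, 1.0])
assert abs(val) < 1e-12
```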
The ${\beta \beta}_{0\nu}$ decay process may
also be engendered through the exchange of scalar
bosons, raising the question of which relationship the
${\beta \beta}_{0\nu}$ decay process bears
with the \hbox{neutrino } mass.
A simple but essentially rigorous proof shows that,
in a gauge theory, whatever the origin of ${\beta \beta}_{0\nu}$
is, it requires \hbox{neutrinos } to be Majorana particles, as illustrated in \fig{box}.
\begin{figure}
\vspace{4.5cm}
\caption{${\beta \beta}_{0\nu}$ decay and Majorana neutrinos.}
\label{box}
\end{figure}
Indeed, any generic ``black box'' mechanism inducing
neutrinoless double beta decay can be closed, by W exchange,
so as to produce a diagram generating a nonzero Majorana
neutrino mass, so the relevant neutrino will,
at some level, be a Majorana particle \cite{BOX}.
Gauge theories may lead to new varieties of
neutrinoless double beta decay involving the
emission of light superweakly interacting spin
zero particles \cite{GGN}. One of these, called
majoron, is the goldstone boson associated
to the spontaneous violation of a global
lepton number symmetry \cite{CMP0}
\footnote{A related light scalar
boson $\rho$ should also be emitted.}
\begin{equation}
(A,Z-2) \rightarrow (A,Z) + 2 \ e^- + J \:.
\end{equation}
The emission of such light scalars would only be detected
through their effect on the $\beta$ spectrum.
The simplest model with sizeable majoron
emission in $\beta\beta$ decays involving an
isotriplet majoron \cite{GR} leads to a new
invisible decay mode for the neutral \hbox{gauge }
boson with the emission of light scalars,
\begin{equation}
Z \rightarrow \rho + J,
\label{RHOJ}
\end{equation}
now ruled out by LEP measurements of the
invisible Z width \cite{LEP1}.
However it has been recently shown that a sizeable
majoron-neutrino coupling leading to observable
emission rates in neutrinoless double beta decay
can be reconciled with the LEP results in models
where the majoron is an isosinglet and lepton number
is broken at a very low scale \cite{ZU}. An alternative
possibility was discussed in \cite{Burgess93}.
Recently there have been negative searches for
the majoron emitting neutrinoless double beta decay
by the Irvine and Heidelberg-Moscow groups
which lead to a limit on the majoron-neutrino
coupling of about $10^{-4}$ \cite{klapdor_wein}.
In addition to laboratory limits, there is a cosmological
bound that follows from avoiding the overabundance of
relic neutrinos \cite{KT}
\begin{equation}
\sum_i m_{\nu_i} \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 50 \: \rm{eV}
\label{rho1}
\end{equation}
This limit is also model-dependent, as it only holds
if \hbox{neutrinos } are stable on cosmological time scales.
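The origin of the bound in \eq{rho1} can be sketched with the standard relation $\Omega_\nu h^2 \simeq \sum_i m_{\nu_i}/(94\ \mathrm{eV})$; the 94 eV normalization and the value of $h$ used below are textbook-level approximations, not numbers taken from this paper.

```python
def sum_mass_bound_eV(omega_nu_max=1.0, h=0.7):
    """Largest sum of light, stable neutrino masses (in eV) compatible with
    Omega_nu <= omega_nu_max, using Omega_nu h^2 ~ sum(m_nu) / 94 eV."""
    return 94.0 * omega_nu_max * h ** 2

# of order the ~50 eV quoted in the text
bound = sum_mass_bound_eV()
```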
There are many models where neutrinos decay into
a lighter \hbox{neutrino } plus a majoron \cite{fae},
\begin{equation}
\nu_\tau \rightarrow \nu_\mu + J \:\: .
\label{NUJ}
\end{equation}
Lifetime estimates in seesaw type majoron models have
been discussed in ref. \cite{V}. Here I borrow the estimate
of the model of ref. \cite{ROMA}, given by curve C in \fig{ntdecay}.
The solid line gives the lifetime required in order
to suppress the relic \hbox{$\nu_\tau$ } contribution.
The dashed line ensures that the universe has become
matter-dominated by a redshift of 1000 at the latest
so that fluctuations have grown by the same factor by
today \cite{ST}
\footnote{However, this lifetime limit
is less reliable than the one derived from the critical
density, as there is not yet an established theory for the
formation of structure in the universe.}.
Comparing curve C with the solid and dashed lines
one sees that the theoretical lifetimes can be shorter
than required. Moreover, since these decays are
$invisible$, they are consistent with all
astrophysical observations.
Recently Steigman and
collaborators have argued that many values of the \hbox{$\nu_\tau$ } mass
can be excluded by cosmological big-bang nucleosynthesis,
even when it decays \cite{BBNUTAU}. This, however, still
leaves open a wide region of theoretically interesting
\hbox{$\nu_\tau$ } lifetime-mass values.
\begin{figure}
\vspace{8.4cm}
\caption{
Estimated \hbox{$\nu_\tau$ } lifetime versus observational limits.
}
\label{ntdecay}
\end{figure}
It follows that any effort to improve present
\hbox{neutrino } mass limits is worthwhile. Such efforts include searches
for distortions in the energy distribution of the electrons
and muons coming from decays \hbox{such as }
$\pi, K \rightarrow e \nu$, $\pi, K \rightarrow \mu \nu$, as
well as kinks in nuclear $\beta$ decays \cite{Deutsch}.
\section{Positive Hints for Neutrino Mass}
\noindent
In addition to the {\sl limits} described in the
previous section, observation also provides us with
some positive {\sl hints} for neutrino masses.
These follow from cosmological, astrophysical
and laboratory observations which I now discuss.
Recent observations of cosmic background temperature
anisotropies on large scales by the COBE satellite,
when combined with smaller scale observations
(cluster-cluster correlations) indicate the need for
the existence of a hot {\sl dark matter} component,
contributing about 30\% to the total mass density,
i.e. $\Omega_{HDM} \sim 0.3$ \cite{cobe,cobe2}.
For this the most attractive particle candidate is a
massive neutrino, \hbox{such as } a \hbox{$\nu_\tau$ } with a mass of a few eV.
This suggests the possibility of having
observable \hbox{$\nu_e$ } to \hbox{$\nu_\tau$ } or \hbox{$\nu_\mu$ } to \hbox{$\nu_\tau$ } oscillations in the
laboratory. The next generation of experiments CHORUS
and NOMAD at CERN, and the P803 experiment proposed at
Fermilab will probe this possibility \cite{chorus}.
Second, the {\sl solar neutrino data} collected up to now by
the two high-energy experiments Homestake and Kamiokande,
as well as the low-energy data on pp neutrinos from
the GALLEX and SAGE experiments, still pose a persistent
puzzle \cite{Davis,granadasol}. Comparing the full data
of GALLEX including their most recent ones, with the
Kamiokande data, one can obtain the allowed one sigma
region for the $^7$Be and $^8$B fluxes as the intersection
of the region to the left of the line labelled 91 with
the region labelled KAMIOKA.
The lines are normalized with respect to the reference
solar model of Bahcall and collaborators. Including
the Homestake data of course only aggravates the
discrepancy \cite{Smirnov_wein}, as can be seen
from \fig{solardata}.
\begin{figure}
\vspace{9cm}
\caption{
Allowed one sigma bands for the $^7$Be and $^8$B fluxes
from all solar neutrino data.
}
\label{solardata}
\end{figure}
Thus the solar \hbox{neutrino } problem seems to be a genuine problem.
The simplest astrophysical solutions are highly disfavored
if {\sl all} data are taken simultaneously, leading to the
need of new physics in the \hbox{neutrino } sector \cite{NEEDNEWPHYSICS}.
The most attractive way to account for the data
is to assume the existence of \hbox{neutrino } conversions
involving very small \hbox{neutrino } masses $\sim 10^{-3}$ eV
\cite{MSW}. The region of parameters allowed by
present experiments is illustrated in \fig{msw}
\cite{Hata} (for similar analyses, see ref.
\cite{MSWPLOT}). Note that the fits favour the
non-adiabatic over the large mixing solution,
due mostly to the larger reduction of
the $^7$Be flux found in the former.
\begin{figure}
\vspace{9.5cm}
\caption{Region of solar \hbox{neutrino } oscillation parameters
allowed by experiment.}
\label{msw}
\end{figure}
Finally, there are hints for \hbox{neutrino } masses from studies involving
{\sl atmospheric neutrinos}. Although the predicted absolute
fluxes of \hbox{neutrinos } produced by cosmic-ray interactions in the
atmosphere are uncertain at the 20 \% level, their
ratios are expected to be accurate to within
5 \% \cite{atmsasso}.
An apparent decrease in the expected flux of atmospheric
$\nu_\mu$'s relative to $\nu_e$'s arising from the decays
of $\pi$'s and $K$'s produced in the atmosphere, and from
the secondary muon decays has been observed in three
underground experiments, Kamiokande, IMB and possibly
Soudan2 \cite{atm}. This atmospheric neutrino deficit
can be ascribed to \hbox{neutrino } oscillations.
Combining these experimental results with observations
of upward going muons made by Kamiokande, IMB and Baksan,
and with the negative Frejus and NUSEX results \cite{up}
leads to the following range of neutrino oscillation
parameters \cite{atmsasso}
\begin{equation}
\label{atm0}
\Delta m^2_{\mu \tau} \approx 0.005 \: - \: 0.5\ \rm{eV}^2,\
\sin^22\theta_{\mu \tau} \approx 0.5
\end{equation}
These recent analyses severely constrain the oscillation
parameters, apparently excluding oscillations of \hbox{$\nu_\mu$ } to \hbox{$\nu_\tau$ }
with maximal mixing, as expected in some theoretical models.
However, the underlying uncertainties are still so large that it is
unsafe to rule out maximal mixing with a high degree of confidence.
Similar analyses have also been performed for the case of
\hbox{$\nu_\mu$ } to \hbox{$\nu_S$ } as well as \hbox{$\nu_\mu$ } to \hbox{$\nu_e$ } channels, where matter
effects play an important role \cite{lipari}.
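The fits above rest on the standard two-flavor vacuum oscillation formula, which can be sketched as follows (the specific $L$ and $E$ values below are illustrative, not taken from the experimental analyses):

```python
import numpy as np

def p_osc(dm2_eV2, sin2_2theta, L_km, E_GeV):
    """Two-flavor vacuum oscillation probability
    P(nu_mu -> nu_tau) = sin^2(2 theta) sin^2(1.27 dm^2[eV^2] L[km] / E[GeV])."""
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# ballpark atmospheric parameters from the text: dm^2 ~ 10^-2 eV^2, sin^2 2theta ~ 0.5;
# upward-going neutrinos cross L ~ 10^4 km at E ~ 1 GeV, deep in the
# averaged-oscillation regime, so P fluctuates below sin^2(2 theta) = 0.5
p = p_osc(1e-2, 0.5, 1e4, 1.0)
```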
Taken at face value, the above astrophysical and cosmological
observations suggest an interesting theoretical puzzle, if one
insists in accounting for all three observations on solar,
dark matter and atmospheric \hbox{neutrinos } within a consistent theory.
Indeed, it is difficult to reconcile these three observations
simultaneously in the framework of the simplest seesaw model
with just the three known \hbox{neutrinos }. The only possibility is if
all three \hbox{neutrinos } are closely degenerate \cite{caldwell}.
We now turn to model building. Can we reconcile the present
hints from astrophysics and cosmology in the framework of a
consistent elementary particle physics model?
It is known that the general seesaw models have
two independent terms giving rise to the light \hbox{neutrino } masses.
The first is an effective triplet vacuum expectation value
\cite{2227} which is expected to be small in left-right
symmetric models \cite{LR}. Based on this, one can
construct extended seesaw models where the main
2 eV or so contribution to the light \hbox{neutrino } masses is universal,
due to a suitable horizontal symmetry, while the splittings
between \hbox{$\nu_e$ } and \hbox{$\nu_\mu$ } explain the solar \hbox{neutrino } deficit and that
between \hbox{$\nu_\mu$ } and \hbox{$\nu_\tau$ } explain the atmospheric \hbox{neutrino } anomaly \cite{DEG}.
The alternative way to fit all the data is to add a
fourth \hbox{neutrino } species which, from the LEP data on the
invisible Z width, we know must be of the sterile type,
call it \hbox{$\nu_S$ }. Two basic schemes have been suggested in
which the \hbox{$\nu_S$ } either lies at the dark matter scale
\cite{DARK92} or, alternatively, at the solar \hbox{neutrino }
scale \cite{DARK92B}.
In the first case the atmospheric
\hbox{neutrino } puzzle is explained by \hbox{$\nu_\mu$ } to \hbox{$\nu_S$ } oscillations,
while in the second it is explained by \hbox{$\nu_\mu$ } to \hbox{$\nu_\tau$ }
oscillations. Correspondingly, the deficit of
solar \hbox{neutrinos } is explained in the first case
by \hbox{$\nu_e$ } to \hbox{$\nu_\tau$ } oscillations, while in the second
it is explained by \hbox{$\nu_e$ } to \hbox{$\nu_S$ } oscillations. In both
cases it is possible to fit all observations together.
However, in the first case there is a clash with the
bounds from big-bang nucleosynthesis while, in the
latter case, where \hbox{$\nu_S$ } is at the MSW scale, these
limits can be used to single out the nonadiabatic
solution uniquely. Note however that, since the
mixing angle characterizing the \hbox{$\nu_\mu$ } to \hbox{$\nu_\tau$ }
oscillations is nearly maximal, the second
solution is in apparent conflict with \eq{atm0}.
Another theoretical possibility is that all active
\hbox{neutrinos } are very light but the sterile \hbox{neutrino } \hbox{$\nu_S$ } is
the single \hbox{neutrino } responsible for the dark matter
\cite{DARK92D}.
In short, \hbox{neutrino } masses, besides being suggested
by theory, seem to be required to fit present
astrophysical and cosmological observations.
The solid curves in \fig{osci} show the regions of
\hbox{$\nu_e$ } to \hbox{$\nu_\mu$ } and \hbox{$\nu_\mu$ } to \hbox{$\nu_\tau$ } oscillation parameters
that are excluded by present accelerator and
reactor experiments.
The next generation of accelerator
experiments at CERN may test for the existence
of \hbox{neutrino } oscillations involving the \hbox{$\nu_\tau$ }. This
is indicated by the dot-dashed line in \fig{osci}.
Finally, the regions suggested by present solar
and atmospheric \hbox{neutrino } data are sketched, for comparison.
Regions A and B are the allowed MSW solutions for solar
\hbox{neutrinos } while the unlabeled regions are for atmospheric \hbox{neutrinos }.
Similar plots can be made for the case of sterile \hbox{neutrinos }.
\begin{figure}
\vspace{9cm}
\caption{
Neutrino oscillation parameters, present hints,
limits, and future experimental sensitivities.
See text.}
\label{osci}
\end{figure}
Further progress will be achievable at the upcoming
long baseline experiments planned at BNL, Soudan,
Icarus and Kamiokande (dashed lines). Underground experiments
should also help to clarify whether or not solar
\hbox{neutrino } conversions exist, and to search for
neutrinoless double beta decay with enough
sensitivity to probe the quasidegenerate \hbox{neutrino } scenario
outlined above.
In addition to \hbox{neutrino } oscillations,
there are many other lepton flavour violating
processes whose existence would be related
to neutrino masses and neutrino properties
beyond the standard model. These include
$\mu \rightarrow e \gamma$,
$\mu \rightarrow 3 e $,
$\mu \rightarrow e$ conversion in nuclei,
$\tau \rightarrow e \pi^0$,
$\tau \rightarrow e \gamma$,
as well as two-body decays with the emission
of a superweakly interacting majoron,
e.g. $\mu \rightarrow e + J$ and $\tau \rightarrow e,\mu + J$.
The underlying physics may also be probed
at the high energies accessible at LEP, through related
Z decay processes, e.g.\ $Z \rightarrow \hbox{$N_\tau$ } \hbox{$\nu_\tau$ }$ or
$Z \rightarrow \chi \tau$, where \hbox{$N_\tau$ } denotes a neutral heavy
lepton, while $\chi$ denotes the lightest chargino.
All of these processes may occur at levels consistent with
present or planned experimental sensitivities, without
violating any experimental data. For recent discussions
see ref. \cite{fae,lfv94}.
\section{Invisible Higgs Decays}
We now turn to a much less usual and less
direct, but striking, possible manifestation of
\hbox{neutrino } masses in the symmetry breaking sector of the
electroweak theory.
In many models \cite{JoshipuraValle92} \hbox{neutrino } masses
are induced from the spontaneous violation of a global
$U(1)$ lepton number symmetry by an $SU(2) \otimes U(1)$ singlet vacuum
expectation value $\VEV{\sigma}$, in such a way that
$m_\nu \to 0$ as $\VEV{\sigma} \rightarrow 0$.
In contrast with the more usual seesaw majoron model
\cite{CMP0}, a low scale for the lepton number violation,
close to the electroweak scale, is {\sl preferred} in
these models, since it is required in order to obtain
small neutrino masses \cite{JoshipuraValle92}
\footnote{Another example is provided by the
RPSUSY models \cite{HJJ}.}.
Another cosmological motivation for low-scale
majoron models has been given in ref. \cite{Goran92}.
In these models, although the majoron has very tiny couplings to
matter, it can have significant couplings to the
Higgs bosons.
This implies that the Higgs boson may decay
with a substantial branching ratio into the
invisible mode \cite{JoshipuraValle92,Joshi92}
\begin{equation}
h \rightarrow J\;+\;J
\label{JJ}
\end{equation}
where $J$ denotes the majoron. The presence of
this invisible Higgs decay channel can affect
the corresponding Higgs mass bounds in an
important way, as well as lead to novel
search strategies at higher energies.
The production and subsequent decay of any Higgs boson
which may decay visibly or invisibly involves three independent
parameters: the Higgs boson mass $M_H$, its coupling
strength to the Z, normalized by that of the \hbox{standard model }, call
this factor $\epsilon^2$, and the invisible Higgs boson
decay branching ratio.
The results published by the LEP experiments on the
searches for various exotic channels can be used
in order to determine the regions in parameter space
that are ruled out already. The procedure was described
in \cite{alfonso,dproy}. Basically it combines the results
of the standard model Higgs boson searches with those one
can obtain for the invisible decay.
For each value of the Higgs mass, the lower bound on
$\epsilon^2$ can be calculated as a function of the
branching ratio $BR(H \rightarrow $ visible), both this
way as well as through the \hbox{standard model } Higgs search analyses
techniques. The weakest of such bounds for
$BR(H \rightarrow $ visible) in the range
between 0 and 1, provides the absolute bound on $\epsilon^2$.
This procedure can be repeated for each value of $M_H$, thus
providing an exclusion contour in the plane $\epsilon^2$
vs. $M_H$, shown in \fig{alfonso2}, taken from ref. \cite{alfonso}.
The region in $\epsilon^2$ vs. $M_H$ that is already excluded by the
present LEP analyses holds {\sl irrespective of the mode of Higgs decay},
visible or invisible.
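The combination logic can be made concrete with a small scan (the channel limits used below are invented for illustration; the real analysis uses the published LEP limits): if the visible search bounds $\epsilon^2\,BR$ and the invisible search bounds $\epsilon^2\,(1-BR)$, the weakest combined bound over $BR\in(0,1)$ is simply the sum of the two limits.

```python
import numpy as np

def absolute_bound(vis_limit, inv_limit, n_grid=10001):
    """Scan BR(H -> visible); at each BR the allowed region is
    eps^2 < min(vis_limit / BR, inv_limit / (1 - BR)); the weakest
    (largest) such bound is the BR-independent limit on eps^2."""
    br = np.linspace(1e-6, 1.0 - 1e-6, n_grid)
    per_br = np.minimum(vis_limit / br, inv_limit / (1.0 - br))
    return per_br.max()

# with illustrative limits the scan reproduces the analytic result a + b,
# attained at BR = a / (a + b)
a, b = 0.03, 0.05
combined = absolute_bound(a, b)
```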
\begin{figure}
\vspace{7cm}
\caption{Region in the $\epsilon^2$ vs. $m_H$ plane that can be
excluded by the present LEP1 analyses, independent of the
mode of Higgs decay, visible or invisible (solid curve).
Also shown are the LEP2 extrapolations (dashed).}
\label{alfonso2}
\end{figure}
Finally, one can also determine the additional
range of parameters that can be covered by LEP2
for a total integrated luminosity of 500 pb$^{-1}$
and centre-of-mass energies of 175 GeV and 190 GeV.
This is shown as the dashed and dotted curves in
\fig{alfonso2}.
The possibility of invisible Higgs decay
is also very interesting from the point of
view of a linear $e^+ e^-$ collider at higher
energy \cite{EE500}. Heavier, intermediate-mass,
invisibly decaying Higgs bosons can also be searched for at
high-energy hadron supercolliders such as the LHC/SSC \cite{granada}.
The limits from LEP discussed above should
serve as useful guidance for such future searches.
\section{Conclusion}
\noindent
Present cosmological and astrophysical observations,
as well as theory, suggest that neutrinos may be massive.
Neutrino masses might even affect the electroweak
symmetry breaking sector in a very important way.
Existing data do not preclude neutrinos from having
a wide variety of measurable implications in the laboratory.
These new phenomena would cover an impressive region of energy,
from $\beta$ and double $\beta$ decays, to neutrino oscillations,
to rare processes with lepton flavour violation, up to LEP
energies.
The next generation of \hbox{neutrino } oscillation searches
sensitive to \hbox{$\nu_\tau$ } as dark matter (CHORUS/NOMAD/P803),
$e^+ e^-$ collisions from ARGUS/CLEO to tau-charm and
B factories, as well as the experiments at LEP and the
future LHC could all be sensitive to \hbox{neutrino } properties!
It is therefore worthwhile to keep
pushing the underground experiments, for possible
confirmation of \hbox{neutrino } masses. The neutrinoless
$\beta\beta$ decay searches with enriched germanium
could test the quasidegenerate neutrino scenario for
the joint explanation of hot dark matter and
solar and atmospheric \hbox{neutrino } anomalies.
Further data from low energy pp neutrinos
as well as from Superkamiokande, Borexino, and
Sudbury will shed further light on the neutrino
sector. The same can be said of the ongoing
studies with atmospheric \hbox{neutrinos. }
Similarly, a new generation of experiments capable
of more accurately measuring the cosmological
temperature anisotropies at smaller angular scales than
COBE, would be good probes of different models of
structure formation, and presumably shed further
light on the need for hot \hbox{neutrino } dark matter.
All such endeavours should be gratifying!
\vskip .3cm
{\large Acknowledgements}
\vskip .3cm
I thank Fernando de Campos for reading the
manuscript and helping compose it. I also
thank the organizers for the kind invitation
and friendly organization.
\noindent
\vglue 0.6cm
{\large \bf References}
\bibliographystyle{ansrt}
\section{Acknowledgments}
\label{sect:Acknowledgments}
We thank the RHIC Operations Group and RCF at BNL, the NERSC Center at LBNL, and the Open Science Grid consortium for providing resources and support. This work was supported in part by the Office of Nuclear Physics within the U.S. DOE Office of Science, the U.S. National Science Foundation, the Ministry of Education and Science of the Russian Federation, National Natural Science Foundation of China, Chinese Academy of Science, the Ministry of Science and Technology of China and the Chinese Ministry of Education, the National Research Foundation of Korea, GA and MSMT of the Czech Republic, Department of Atomic Energy and Department of Science and Technology of the Government of India; the National Science Centre of Poland, National Research Foundation, the Ministry of Science, Education and Sports of the Republic of Croatia, RosAtom of Russia and the German Bundesministerium f\"ur Bildung, Wissenschaft, Forschung und Technologie (BMBF) and the Helmholtz Association.
\section{Corrections}
\label{sect:Corrections}
\begin{figure}[htbp]
\includegraphics[width=0.5\textwidth]{Jet_ME_sub_norm_err.pdf}
\caption{(Color online) Raw correlated jet yield distributions for \ensuremath{R}\ = 0.2 (upper) and \ensuremath{R}\ = 0.5 (lower) in central and peripheral Au+Au\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV. The uncorrelated background has been removed by subtraction of the scaled ME distribution from the SE distribution, but no other corrections have been applied. The gray shaded band shows the mixed event normalization uncertainty.
\label{fig:Raw_sub_spectra}}
\end{figure}
Figure~\ref{fig:Raw_sub_spectra} shows the raw correlated recoil jet yield
distributions for \ensuremath{R}\ = 0.2 and \ensuremath{R}\ = 0.5 in central and peripheral Au+Au\
collisions, determined by subtracting the \ensuremath{f^\mathrm{ME}}-normalized ME distribution from the
SE distribution. The SE-ME distributions for \ensuremath{R}\ = 0.3 and
\ensuremath{R}\ = 0.4 (not shown) are similar, with features that interpolate
between the distributions in the figure.
In the region where the SE and ME distributions have similar magnitude, their
difference can be negative due to statistical fluctuations. However, the
vertical axis of Fig.~\ref{fig:Raw_sub_spectra} is logarithmic, and negative
entries are not displayed. Negative values only occur in the region $\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}<0$
\ensuremath{\mathrm{GeV/}c}\ for peripheral Au+Au\ collisions, and in $\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}<-10$ to $-20$ \ensuremath{\mathrm{GeV/}c}\
(\ensuremath{R}-dependent) in central Au+Au\ collisions. The negative values after
subtraction are consistent with zero within statistical uncertainty in all
cases, and carry negligible weight in the correction and unfolding procedures
discussed below. All negative entries are therefore set to zero, to simplify the
unfolding procedure.
These distributions must still be corrected for the effects of local
fluctuations in background energy density and for
instrumental response. The corrections are carried out using regularized
unfolding methods~\cite{Cowan:2002in,Hocker:1995kb}. In this approach, the measured jet
distribution $M$ and true jet distribution $T$ are related by a response matrix,
\begin{widetext}
\begin{equation}
M(\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}) = \Big[ \ensuremath{R_{\mathrm{bkg}}}(\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}},\ensuremath{p_\mathrm{T,jet}^\mathrm{det,ch}}) \times \ensuremath{R_{\mathrm{det}}}(\ensuremath{p_\mathrm{T,jet}^\mathrm{det,ch}},\ensuremath{p_\mathrm{T,jet}^\mathrm{part,ch}}) \Big] \times T(\ensuremath{p_\mathrm{T,jet}^\mathrm{part,ch}}),
\label{eq:foldequation}
\end{equation}
\end{widetext}
\noindent
where the square brackets express the cumulative response matrix as the product of matrices separately encoding background and instrumental response effects; \ensuremath{p_\mathrm{T,jet}^\mathrm{part,ch}}\ is the particle-level charged jet \ensuremath{p_\mathrm{T}}; \ensuremath{p_\mathrm{T,jet}^\mathrm{det,ch}}\ is
the detector-level charged jet \ensuremath{p_\mathrm{T}}; and \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}\ the reconstructed jet \ensuremath{p_\mathrm{T}}\ at the detector level, including \ensuremath{p_\mathrm{T}}-smearing due to uncorrelated background. Factorization of the response into two separate matrices was studied in simulations and found to have negligible influence on the corrected distributions.
The corrected spectrum, which is a measurement of $T$, is determined by inverting Eq.~\ref{eq:foldequation}. However, exact inversion of Eq.~\ref{eq:foldequation} can result in a solution with large fluctuations in central values and large variance, due to statistical noise in $M(\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}})$~\cite{Cowan:2002in}. A physically interpretable solution can be obtained by regularized unfolding, which imposes an additional smoothness constraint on the solution.
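The structure of the forward fold in Eq.~\ref{eq:foldequation} can be sketched with toy matrices; the binning, matrix values, and the column-normalized convention below are illustrative assumptions only.

```python
import numpy as np

# Toy dimensions for illustration only; the real matrices are binned in pT.
n = 4
rng = np.random.default_rng(0)

def column_normalize(m):
    """Normalize so element [i, j] is the probability that a jet in true
    bin j is reconstructed in measured bin i."""
    return m / m.sum(axis=0, keepdims=True)

R_det = column_normalize(rng.random((n, n)) + np.eye(n))  # instrumental response
R_bkg = column_normalize(rng.random((n, n)) + np.eye(n))  # background smearing

T = np.array([8.0, 4.0, 2.0, 1.0])   # toy true spectrum
M = (R_bkg @ R_det) @ T              # forward fold, as in the equation above

# With column-normalized responses the fold conserves the total yield.
print(np.isclose(M.sum(), T.sum()))
```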
\subsection{Uncorrelated background response matrix \ensuremath{R_{\mathrm{bkg}}}}
\label{sect:dpT}
Central Au+Au\ collisions have large uncorrelated background energy
density, with significant local fluctuations. While the scalar quantity $\rho$ accounts
approximately for the event-to-event variation of
uncorrelated background energy, it
does not account for local background fluctuations that smear \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}.
Full background correction requires unfolding of these fluctuations.
The response matrix for fluctuations in uncorrelated energy density is calculated by embedding detector-level simulated jets into real events at the track level,
reconstructing the hybrid events, and matching each embedded jet with
a reconstructed jet. The matching is carried out in the same way as for \ensuremath{R_{\mathrm{det}}}, described below. The response matrix elements are the probability distribution of \ensuremath{\delta{\pT}}, the \ensuremath{p_\mathrm{T}}-shift from the embedding procedure:
\begin{equation}
\ensuremath{\delta{\pT}}=\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}-\ensuremath{p_\mathrm{T}^\mathrm{embed}}.
\label{eq:dpT}
\end{equation}
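The construction of the \ensuremath{\delta{\pT}}\ probability distribution from matched embedded jets can be sketched as below; the Gaussian stand-in for background smearing and the function names are hypothetical, since the real shift comes from the embedding procedure.

```python
import numpy as np

def delta_pt(pt_reco_matched, pt_embed):
    """pT shift of embedded jets: reconstructed minus embedded pT,
    as in the definition of delta-pT above (toy sketch)."""
    return np.asarray(pt_reco_matched) - np.asarray(pt_embed)

# Hypothetical toy: background fluctuations smear a 20 GeV/c embedded track.
rng = np.random.default_rng(1)
pt_embed = np.full(100000, 20.0)
pt_reco = pt_embed + rng.normal(0.0, 5.0, pt_embed.size)  # stand-in smearing

dpt = delta_pt(pt_reco, pt_embed)
# Normalized histogram as a stand-in for the delta-pT probability distribution.
hist, edges = np.histogram(dpt, bins=50, range=(-25, 25), density=True)
print(abs(dpt.mean()) < 0.1)
```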
High-\ensuremath{p_\mathrm{T}}\ hadrons can be correlated in azimuth with the EP
orientation. The strength of this correlation is characterized by \ensuremath{v_2}, the
second-order coefficient of the Fourier expansion of
the azimuthal distribution between the hadron and the EP~\cite{Adare:2014bga}. If \ensuremath{v_2}\ is non-zero for $\ensuremath{p_\mathrm{T}}>9$ \ensuremath{\mathrm{GeV/}c}, selection of a trigger hadron will bias
the EP orientation in the accepted event population, thereby biasing the level of
uncorrelated background in the recoil acceptance opposite to the trigger. This
bias is taken into account in the calculation of the \ensuremath{\delta{\pT}}\ probability distribution by weighting the relative
orientation of the trigger axis and EP
orientation according to $1+\ensuremath{v_2}\cdot\cos\left(2\ensuremath{\Delta\varphi_\mathrm{trig,jet}}\right)$.
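The event-plane bias weight has a simple closed form; the sketch below assumes the quoted modulation, with the $v_2$ value and function name being illustrative.

```python
import numpy as np

def ep_bias_weight(dphi_trig_ep, v2=0.04):
    """Weight for the relative azimuth between trigger axis and event-plane
    orientation, 1 + v2*cos(2*dphi), modeling a trigger-hadron v2 bias."""
    return 1.0 + v2 * np.cos(2.0 * dphi_trig_ep)

# In-plane triggers (dphi = 0) are favored over out-of-plane (dphi = pi/2).
print(ep_bias_weight(0.0), ep_bias_weight(np.pi / 2))
```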
Observables based on reconstructed jets measure energy flow associated
with a high-$Q^2$ process, independent of the specific distribution of hadrons
arising from jet fragmentation. For accurate correction of local background
fluctuations, the background response matrix should likewise depend only on the
energy of the embedded object, and be independent of its specific distribution
of hadrons. To explore this variation we use two different jet models for
embedding: charged jets generated by PYTHIA, and single tracks carrying the
entire jet energy \ensuremath{p_\mathrm{T,jet}^\mathrm{part}}. Models with
softer fragmentation than PYTHIA have likewise been explored in simulations,
giving similar results~\cite{deBarros:2011ph}.
\begin{figure}[htbp]
\includegraphics[width=0.4\textwidth]{Delta_pt_momentum.pdf}
\includegraphics[width=0.4\textwidth]{Delta_pt_type.pdf}
\caption{(Color online) Probability distributions for \ensuremath{\delta{\pT}}\ in central Au+Au\ collisions.
Upper: single-track embedding with different values of \ensuremath{p_\mathrm{T}^\mathrm{embed}} ($p_{T}^{e}$).
Lower: \ensuremath{p_\mathrm{T}^\mathrm{embed}}\ = 20 \ensuremath{\mathrm{GeV/}c}\
with three different embedded-jet models: PYTHIA-generated detector-level jets,
single tracks, and single tracks with \ensuremath{v_2}\ modulation of average background
density. See text for details.
}
\label{fig:dpT}
\end{figure}
Figure~\ref{fig:dpT}, upper panel, shows the \ensuremath{\delta{\pT}}\ probability distribution
for different values of \ensuremath{p_\mathrm{T}^\mathrm{embed}}, the \ensuremath{p_\mathrm{T}}\ of the embedded track, in central Au+Au\
collisions. Negligible dependence on \ensuremath{p_\mathrm{T}^\mathrm{embed}}\ is observed.
The lower panel shows the \ensuremath{\delta{\pT}}\ probability distribution
for \ensuremath{p_\mathrm{T}^\mathrm{embed}}\ = 20 \ensuremath{\mathrm{GeV/}c}\ with three different
models for the embedded jet: PYTHIA-generated with no EP-bias; single particles
with no EP-bias; and single particles with EP-bias corresponding to
\ensuremath{v_2}\ = 0.04 for the trigger hadron, which is the largest \ensuremath{v_2}\ value for
hadrons with $\ensuremath{p_\mathrm{T}}>9$ \ensuremath{\mathrm{GeV/}c}\ that is compatible with the uncertainty band measured
in~\cite{Adare:2014bga}. The three distributions are similar, supporting this
approach to correction for background fluctuations. Unfolding is carried out
using all three distributions, with the variation between them contributing to
the systematic uncertainty. Measurements of \ensuremath{v_3}\ and higher
harmonics for high-\ensuremath{p_\mathrm{T}}\ hadrons are not presently available at RHIC energies.
However, non-zero \ensuremath{v_3}\ for the trigger hadron would only offset the
influence of trigger-hadron \ensuremath{v_2}\ in the recoil direction.
Figure~\ref{fig:pt_embed_vs_pt_rec}, upper panel, shows the full background response matrix \ensuremath{R_{\mathrm{bkg}}}, calculated by embedding single tracks.
\begin{figure}[htbp]
\includegraphics[width=0.5\textwidth]{pt_embed_vs_rec.png}
\includegraphics[width=0.5\textwidth]{pt_sim_vs_rec.png}
\caption{(Color online) Response matrices for \ensuremath{R}\ = 0.3 jets in central Au+Au\
collisions. Upper: uncorrelated background response matrix \ensuremath{R_{\mathrm{bkg}}}. Lower: instrumental response matrix \ensuremath{R_{\mathrm{det}}}.}
\label{fig:pt_embed_vs_pt_rec}
\end{figure}
\subsection{Instrumental response matrix \ensuremath{R_{\mathrm{det}}}}
\label{sect:Rdet}
The largest contribution to the instrumental response matrix \ensuremath{R_{\mathrm{det}}}\ is from
tracking efficiency, which shifts the spectrum lower in \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}. There is a smaller
contribution from track momentum resolution, which smears \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}.
The matrix \ensuremath{R_{\mathrm{det}}}\ is determined using PYTHIA-generated events for p+p\
collisions at \ensuremath{\sqrt{s}}\ = 200 GeV. Jet reconstruction is carried out at the particle
level
with the anti-\ensuremath{k_\mathrm{T}}\ algorithm. Detector-level jets are generated by fast
simulation,
applying the effects of tracking efficiency and track \ensuremath{p_\mathrm{T}}\ resolution on the
constituents of each
particle-level jet. Jet reconstruction is then carried out on the detector-level
event. Jets from this procedure are rejected if they lie outside the experimental acceptance, for both the particle-level and detector-level populations.
Tracks in particle-level jets are matched to detector-level
tracks. For each particle-level jet, the detector-level jet with the largest
fraction of the
particle-level jet energy is matched to it, with the additional requirement that
the fraction be greater than 15\%.
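The matching criterion can be sketched as a small selection function; the data structures below (a list of particle-level track \ensuremath{p_\mathrm{T}}\ values and a map from detector-level jet to shared \ensuremath{p_\mathrm{T}}) are hypothetical simplifications of the real matching bookkeeping.

```python
def match_jets(part_jet_track_pts, det_jets, min_fraction=0.15):
    """For one particle-level jet, pick the detector-level jet carrying the
    largest fraction of its energy, requiring the fraction to exceed 15%.
    det_jets: hypothetical map, jet id -> summed matched-track pT."""
    part_pt = sum(part_jet_track_pts)
    best_id, best_frac = None, 0.0
    for jet_id, shared_pt in det_jets.items():
        frac = shared_pt / part_pt
        if frac > best_frac:
            best_id, best_frac = jet_id, frac
    return best_id if best_frac > min_fraction else None

# Toy example: two candidate detector jets share 60% and 10% of the energy.
print(match_jets([5.0, 3.0, 2.0], {"A": 6.0, "B": 1.0}))  # -> A
```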
The elements of \ensuremath{R_{\mathrm{det}}}\ are the probability
for a particle-level jet with \ensuremath{p_\mathrm{T,jet}^\mathrm{part}}\ to have matched detector-level partner
with \ensuremath{p_\mathrm{T,jet}^\mathrm{det}}. Elements of \ensuremath{R_{\mathrm{det}}}\ are normalized such that, for each bin in
\ensuremath{p_\mathrm{T,jet}^\mathrm{part}}, the sum over all bins in \ensuremath{p_\mathrm{T,jet}^\mathrm{det}}\ is unity. The inefficiency
arising from particle-level jets without a detector-level match is corrected on
a statistical basis (Sect.~\ref{sect:JetMatching}), in a separate
correction step.
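The normalization convention for \ensuremath{R_{\mathrm{det}}}\ can be sketched as follows, assuming a toy counts matrix with detector-level bins as rows and particle-level bins as columns (an illustrative layout, not the analysis code).

```python
import numpy as np

def normalize_response(counts):
    """Normalize matched-pair counts so that, for each particle-level bin,
    the probabilities over all detector-level bins sum to unity.
    counts[i_det, j_part] = number of matched pairs in that (det, part) bin."""
    counts = np.asarray(counts, dtype=float)
    col_sums = counts.sum(axis=0, keepdims=True)
    # Unmatched particle-level jets are handled by a separate efficiency
    # correction, so empty columns are left at zero.
    return np.divide(counts, col_sums, out=np.zeros_like(counts),
                     where=col_sums > 0)

R = normalize_response([[8, 1], [2, 9]])
print(R.sum(axis=0))  # each particle-level bin sums to 1
```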
As discussed in Sect.~\ref{sect:Interp}, the approach of this analysis results in corrected distributions for $\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}>0$, while interpretation of such distributions in terms of parton showers and their modification in-medium is restricted to $\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}>10$ \ensuremath{\mathrm{GeV/}c}. In order to avoid the introduction of arbitrary cuts, \ensuremath{R_{\mathrm{det}}}\ is constructed as described above for $\ensuremath{p_\mathrm{T,jet}^\mathrm{part}}>0$, though jet-like objects with $\ensuremath{p_\mathrm{T,jet}^\mathrm{part}}<10$ \ensuremath{\mathrm{GeV/}c}\ should be interpreted with caution in terms of the fragmentation of quarks and gluons.
The contribution of secondary decays was determined using PYTHIA. The effect of
feed-down from weak decays is
negligible compared to other
systematic uncertainties, and no correction for this effect is applied.
Figure~\ref{fig:pt_embed_vs_pt_rec}, lower panel, shows the matrix \ensuremath{R_{\mathrm{det}}}\ for
central Au+Au\ collisions. Matrix elements with $\ensuremath{p_\mathrm{T,jet}^\mathrm{det}}<\ensuremath{p_\mathrm{T,jet}^\mathrm{part}}$ arise largely due
to tracking efficiency, which causes tracks to be lost from the jet. Matrix
elements with $\ensuremath{p_\mathrm{T,jet}^\mathrm{det}}>\ensuremath{p_\mathrm{T,jet}^\mathrm{part}}$, which is less probable, arise from the
effect of momentum resolution, for cases in which \ensuremath{p_\mathrm{T}}-loss due to tracking
efficiency is small.
\subsection{Unfolding}
\label{sect:Unfolding}
Unfolding is carried out using two different methods: an iterative method
based on Bayes's Theorem~\cite{D'Agostini:1994zf}, and a method based on Singular
Value Decomposition (SVD)~\cite{Hocker:1995kb}. For iterative Bayesian
unfolding, regularization is imposed by
limiting the number of iterations, while for
SVD unfolding, regularization is imposed by truncating the expansion to $k$ terms.
The unfolding procedure requires specification of a prior distribution. In order
to assess the dependence of the unfolded solution on the choice of prior,
several different prior distributions were used for both the Bayesian and SVD methods (see
Sect.~\ref{sect:SysUncertUnfold}).
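A minimal sketch of the iterative Bayesian (D'Agostini-style) scheme is given below; the toy matrix and the closure check are illustrative, and in practice regularization is imposed by stopping after only a few iterations.

```python
import numpy as np

def bayes_unfold(measured, response, prior, n_iter=4):
    """Iterative Bayesian unfolding sketch.
    response[i, j]: P(measured bin i | true bin j); regularization is
    imposed by limiting the number of iterations."""
    truth = np.asarray(prior, dtype=float)
    for _ in range(n_iter):
        folded = response @ truth                 # expected measurement
        ratio = np.divide(measured, folded,
                          out=np.zeros_like(folded), where=folded > 0)
        eff = response.sum(axis=0)                # per-true-bin efficiency
        truth = truth * (response.T @ ratio) / np.where(eff > 0, eff, 1.0)
    return truth

# Toy closure: fold a known truth, then unfold it back.
R = np.array([[0.8, 0.1], [0.2, 0.9]])
T = np.array([10.0, 5.0])
M = R @ T
print(np.allclose(bayes_unfold(M, R, prior=np.ones(2), n_iter=50), T, rtol=1e-3))
```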
\subsection{Jet reconstruction efficiency}
\label{sect:JetMatching}
The matching procedure between particle-level and detector-level jets in
Sect.~\ref{sect:Rdet} does not generate a match for every particle-level jet.
The corresponding detector-level jet can be lost due to fiducial cuts and
instrumental response, most notably tracking efficiency: for low-\ensuremath{p_\mathrm{T}}\ jets containing few tracks, there is a non-zero probability that none of the tracks is detected, since the tracking efficiency is less than unity. In addition, the jet area cut generates a small
inefficiency for $\ensuremath{p_\mathrm{T,jet}^\mathrm{part}}<4$ \ensuremath{\mathrm{GeV/}c}, with negligible inefficiency at larger
\ensuremath{p_\mathrm{T,jet}^\mathrm{part}}\ (Sect. \ref{sect:ME}).
\begin{figure}[htbp]
\includegraphics[width=0.49\textwidth]{Jet_match_eff.png}
\caption{(Color online) Jet reconstruction efficiency for peripheral
and central Au+Au\ collisions, as a function of particle-level \ensuremath{p_\mathrm{T,jet}^\mathrm{part}}. See
text for details.}
\label{fig:Jet_match_eff}
\end{figure}
Figure~\ref{fig:Jet_match_eff} shows the jet reconstruction efficiency
for central and peripheral Au+Au\ collisions, defined as the matching efficiency
between particle-level and detector-level jets. The efficiency is calculated for particle-level jets whose centroid is within the experimental acceptance, $|\eta_\mathrm{jet}|<1-\ensuremath{R}$. The systematic uncertainty in
efficiency, indicated by the bands, is due predominantly to uncertainty in the
tracking efficiency. The correction for inefficiency is applied bin-by-bin to
ensemble-averaged distributions, after the unfolding step.
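The bin-by-bin correction amounts to dividing the unfolded, ensemble-averaged spectrum by the jet reconstruction efficiency; a trivial sketch with toy values:

```python
import numpy as np

def efficiency_correct(unfolded, jet_reco_eff):
    """Bin-by-bin correction for jet reconstruction efficiency, applied to
    the ensemble-averaged spectrum after unfolding (toy values)."""
    return np.asarray(unfolded, dtype=float) / np.asarray(jet_reco_eff, dtype=float)

print(efficiency_correct([9.0, 4.5], [0.9, 0.9]))  # -> [10.  5.]
```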
\subsection{Estimated magnitude of corrections}
\label{sect:EstCorrections}
\begin{figure}[htbp]
\includegraphics[width=0.5\textwidth]{PYTHIA_pT_evolution.png}
\caption{(Color online) Estimation of the magnitude of corrections for jets with
\ensuremath{R}\ = 0.3, in central Au+Au\ collisions.}
\label{fig:RawDistributions}
\end{figure}
We conclude this section by estimating the magnitude of corrections.
The estimate, shown in
Fig.~\ref{fig:RawDistributions}, is based on the recoil
jet distribution (\ensuremath{R}\ = 0.3) for p+p\ collisions at \ensuremath{\sqrt{s}}\ = 200 GeV calculated by
PYTHIA at the particle level (blue stars), which is then modified by the inverse
of the corrections discussed above. The effects correspond to a measurement in
central Au+Au\ collisions. Instrumental effects, which are dominated by tracking
efficiency, shift the distribution to lower \ensuremath{p_{\mathrm{T,jet}}}\ (blue stars $\rightarrow$
green dashed). Fluctuations due to uncorrelated background, as characterized by the
\ensuremath{\delta{\pT}}\ distribution, smear \ensuremath{p_{\mathrm{T,jet}}}\ but do not change the integrated yield of the distribution (green dashed $\rightarrow$ grey solid). Finally, the large population of uncorrelated
background jet candidates in central Au+Au\ collisions modifies the spectrum
significantly for $\ensuremath{p_{\mathrm{T,jet}}}<10$ \ensuremath{\mathrm{GeV/}c}\ (grey solid $\rightarrow$ red circles). The
cumulative correction for instrumental response and uncorrelated
background therefore corresponds to the transformation from red circles to
blue stars. If considered on a bin-by-bin basis, the cumulative correction modifies the magnitude of the distribution by a factor less than two for $\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}>10$ \ensuremath{\mathrm{GeV/}c}.
\section{Experiment, Dataset, and Offline Analysis}
\label{sect:ExpDatasetOffline}
STAR is a large, multi-purpose experiment at RHIC, consisting of a solenoidal magnet
and detectors for triggering, tracking, particle identification, calorimetry,
and event categorization~\cite{Ackermann:2002ad}.
The data used in this analysis were recorded during the 2011 RHIC run with
Au+Au\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV. Events were accepted online with a
minimum-bias trigger requiring the coincidence of signals from the Zero
Degree Calorimeters (ZDC) and the Vertex Position Detectors (VPD)~\cite{Llope:2003ti}.
The trigger included the requirement that the $z$-position of the primary vertex of the event (\ensuremath{z_\mathrm{vtx}}) was within
$\pm$ 30 cm of the nominal center of the STAR detector.
Offline analysis was carried out using charged tracks measured by the STAR Time
Projection Chamber (TPC)~\cite{Anderson:2003ur}. The TPC has inner radius of 50 cm
and outer radius of 200 cm, with acceptance $|\eta|<1.0$ over the full azimuth. The TPC registers a maximum of 45 independent points for a charged track. The
primary vertex is defined using global tracks, based on fitting of TPC clusters.
The vertex position
resolution in the beam direction is
$\delta z_\mathrm{vtx} = 350~\mu\mathrm{m}$ for the highest-multiplicity events in the
analysis, which contain around 1000 primary tracks.
The analysis utilizes primary tracks, which are global tracks whose
distance of closest approach to the primary
vertex in the transverse plane ($\mathrm{DCA}_{xy}$) is less than 1 cm.
The primary track momentum is determined by a fit that includes the primary
vertex. Primary tracks
with $\ensuremath{p_\mathrm{T}}>0.2$ \ensuremath{\mathrm{GeV/}c}\ are accepted for further analysis.
The primary charged track transverse momentum resolution is
$\sigma_{p_T}/\ensuremath{p_\mathrm{T}}=0.01\times\ensuremath{p_\mathrm{T}}$ [\ensuremath{\mathrm{GeV/}c}]. The STAR tracking system momentum
resolution at high \ensuremath{p_\mathrm{T}}\ has been verified by matching tracks to a shower in the
Barrel Electromagnetic Calorimeter (BEMC) for electrons from W-decay in p+p\
collisions~\cite{Aggarwal:2010vc}. Tracks with primary \ensuremath{p_\mathrm{T}}\ larger than 30 \ensuremath{\mathrm{GeV/}c}\ are
excluded from the analysis. The probability for an event to have both a
track with $9<\ensuremath{p_\mathrm{T}}<30$ \ensuremath{\mathrm{GeV/}c}\ and a track with $\ensuremath{p_\mathrm{T}}>30$ \ensuremath{\mathrm{GeV/}c}\ is negligible.
Tracking efficiency is determined by embedding simulated tracks into real Au+Au\
events. Primary track efficiency for charged pions is 48\% at \ensuremath{p_\mathrm{T}}\ = 0.2 \ensuremath{\mathrm{GeV/}c}, 67\%
at \ensuremath{p_\mathrm{T}}\ = 0.4 \ensuremath{\mathrm{GeV/}c}, and 73\% at \ensuremath{p_\mathrm{T}}\ = 20 \ensuremath{\mathrm{GeV/}c}\ for central Au+Au\ collisions; and
66\% at \ensuremath{p_\mathrm{T}}\ = 0.2
\ensuremath{\mathrm{GeV/}c}, 86\% at \ensuremath{p_\mathrm{T}}\ = 0.4 \ensuremath{\mathrm{GeV/}c}, and 89\% at \ensuremath{p_\mathrm{T}}\ = 20 \ensuremath{\mathrm{GeV/}c}\ for peripheral Au+Au\
collisions. At high transverse momentum the tracking efficiency of charged pions, kaons,
and protons is similar, while the efficiency of protons and kaons is
significantly lower than that of pions
for $\ensuremath{p_\mathrm{T}}<0.5$ \ensuremath{\mathrm{GeV/}c}.
Pile-up events, due to high instantaneous
luminosity, are excluded offline by requiring at
least two tracks from the
primary vertex to be matched to cells of the Time-of-Flight (TOF) detector, which is a fast detector that can identify out-of-time tracks.
Quality assurance is carried out on a run-wise basis, with a run corresponding
to several hours of
online data-taking. A run was rejected if its deviation from global mean
values exceeded $5\sigma$
for the mean transverse momentum $\langle p_\mathrm{T} \rangle$ or $2\sigma$ for the mean
multiplicity $\langle M \rangle$, measured using uncorrected charged track
distributions in $|\eta|<0.5$; or $2.5 \sigma$ for the
interaction rate measured in the forward scintillator Beam-Beam Counters,
$\langle BBCx \rangle$.
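The run-wise rejection can be sketched as a deviation cut on per-run averages; the data layout and function name below are hypothetical, and for simplicity the global mean and width are computed including outlier runs.

```python
import numpy as np

def qa_reject_runs(runs, cuts=(("mean_pt", 5.0), ("mult", 2.0), ("bbcx", 2.5))):
    """Run-wise QA sketch: reject runs whose run-averaged observable
    deviates from the global mean by more than the quoted number of
    standard deviations. `runs` maps observable -> per-run mean values."""
    bad = set()
    for obs, n_sigma in cuts:
        vals = np.asarray(runs[obs], dtype=float)
        mu, sigma = vals.mean(), vals.std()
        bad |= {i for i, v in enumerate(vals) if abs(v - mu) > n_sigma * sigma}
    return sorted(bad)

# Toy example: one run (index 99) with an anomalous mean pT.
toy = {"mean_pt": [0.5] * 99 + [10.0],
       "mult": [500.0] * 100,
       "bbcx": [1.0] * 100}
print(qa_reject_runs(toy))  # -> [99]
```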
\begin{figure}[htbp]
\includegraphics[width=0.5\textwidth]{RefMult.pdf}
\caption{(Color online) Centrality selection for Au+Au\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\
= 200 GeV: distribution of uncorrected charged track multiplicity in
$|\eta|<0.5$ (black histogram), with comparison to the result of a Glauber
model~\cite{Miller:2007ri} calculation (red points). The shaded regions show the windows for 0\%--10\%
(central) and 60\%--80\% (peripheral) Au+Au\ collisions.}
\label{fig:RefMult}
\end{figure}
Figure~\ref{fig:RefMult} shows the distribution of uncorrected multiplicity of
charged particle tracks within $|\eta|<0.5$. Events are classified offline
using percentile intervals of this distribution, with the 0\%--10\%
(``central'') and 60\%--80\% (``peripheral'') intervals shown in the figure.
The figure also shows the charged particle
multiplicity distribution from a Monte Carlo Glauber calculation~\cite{Miller:2007ri}. Comparison of the distributions from the Monte Carlo
calculation and data gives an online trigger efficiency of 100\% for
central collisions and 70\% for peripheral collisions.
After event selection
cuts, the data set consists of 56.5 M central (0\%--10\%) and 106.7 M peripheral
(60\%--80\%) events. The effect of trigger inefficiency in peripheral collisions is accounted for by a multiplicity-dependent weighting of events.
Simulated events are generated using PYTHIA 6.416 tune A~\cite{Sjostrand:2006za}
folded with a detector response based on GEANT3~\cite{GEANT3}. Distributions
calculated without incorporating detector response are denoted ``particle
level'', while distributions that include detector response are denoted
``detector level.''
Fast generation of detector-level events from particle-level PYTHIA
simulations is carried out by random rejection of charged
tracks to model tracking efficiency, and smearing of track \ensuremath{p_\mathrm{T}}\ to model
momentum
resolution, with \ensuremath{p_\mathrm{T}}-dependent efficiency and resolution.
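The fast-generation step can be sketched as below; the constant efficiency value is a stand-in for the \ensuremath{p_\mathrm{T}}-dependent one, and the smearing uses the quoted $\sigma_{p_T}/\ensuremath{p_\mathrm{T}}=0.01\times\ensuremath{p_\mathrm{T}}$ parametrization (function name and track layout are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(2)

def fast_sim(track_pts, efficiency=0.73, res_coeff=0.01):
    """Fast detector-simulation sketch: randomly drop tracks to model a
    (here constant, in reality pT-dependent) tracking efficiency, and
    smear each surviving track with sigma(pT)/pT = 0.01 * pT [GeV/c]."""
    pts = np.asarray(track_pts, dtype=float)
    kept = pts[rng.random(pts.size) < efficiency]
    sigma = res_coeff * kept**2          # sigma_pT = 0.01 * pT^2 / (GeV/c)
    return kept + rng.normal(0.0, sigma)

det_tracks = fast_sim([0.5, 1.2, 3.0, 8.0])
print(len(det_tracks) <= 4)
```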
Hybrid events for embedding studies are constructed by generating PYTHIA events
for p+p\ collisions at \ensuremath{\sqrt{s}}\ = 200 GeV, selecting events containing a high-\ensuremath{p_\mathrm{T}}\ hadron in the trigger acceptance (Sect. \ref{sect:Observables}), and
applying the ``fast generation'' detector-level effects. Each simulated event is
combined with a real Au+Au\
event at the track level from the central or peripheral population, without requiring a track
in the trigger acceptance in the real event. Since embedding is carried out at the track level, tracks are specified in terms of $(\ensuremath{p_\mathrm{T}},\eta,\phi)$, with no need to specify a vertex position. The hybrid events are analyzed using the
same procedure used for real
data analysis.
We also compare these measurements to theoretical expectations for p+p\
collisions at \ensuremath{\sqrt{s}}\ = 200 GeV based on an NLO pQCD calculation~\cite{deFlorian:2009fw} (Sect.~\ref{sect:pQCD}).
\section{Introduction}
\label{sect:intro}
The interaction of energetic jets with hot QCD matter provides unique
probes of the Quark-Gluon Plasma (QGP) generated in high-energy
collisions of heavy nuclei (``jet quenching''; see~\cite{Majumder:2010qh} and references therein). Jet quenching was first observed experimentally as the suppression of inclusive hadron production and hadron correlations
at high transverse momentum (high \ensuremath{p_\mathrm{T}})~\cite{Adare:2010ry,Adler:2002xw,Adler:2002tq,
Adams:2003kv,Adams:2006yt,Adamczyk:2013jei,Adcox:2001jp,Adare:2012wg,Adare:2012qi,
Abelev:2012hxa, CMS:2012aa, Aamodt:2011vg, Chatrchyan:2012wg}. Jet quenching is calculable theoretically, using approaches based on perturbative QCD and on strong coupling. Comparison of theoretical
calculations with measurements of inclusive hadron suppression at the Relativistic Heavy Ion Collider (RHIC)
and Large Hadron Collider (LHC) has been used to constrain the jet transport parameter \ensuremath{\hat{q}}\ in the
QGP~\cite{Burke:2013yra}.
Measurements based on high-\ensuremath{p_\mathrm{T}}\ hadrons, which are leading fragments
of jets, bias towards jets that have lost relatively little energy in the
medium~\cite{Baier:2002tc}. These are therefore {\it disappearance} measurements, in which the contribution of jets that interact most strongly in the medium is suppressed. Such measurements have limited
sensitivity to the detailed dynamics of parton shower
modification and the response of the medium to the passage of the
jet. Comprehensive exploration of jet quenching therefore requires measurements of reconstructed jets and their correlations. Jet measurements in the high-multiplicity environment of heavy ion
collisions are challenging, however, because of the large and dynamically fluctuating backgrounds in such events.
Heavy ion jet measurements at the LHC have reported medium-induced suppression
in inclusive jet production~\cite{Abelev:2013kqa,Aad:2014bxa,Adam:2015ewa,Khachatryan:2016jfl}, as
well as modification of di-jet and $\gamma$-jet correlations~\cite{Aad:2010bu,Chatrchyan:2012nia,Chatrchyan:2012gt}. These measurements
suppress the contribution of uncorrelated background to the jet signal by
rejecting reconstructed jets on a jet-by-jet basis based on measured jet
\ensuremath{p_\mathrm{T}}\ adjusted by an estimate of the uncorrelated background contribution, which may
induce bias in the accepted jet population.
The ALICE Collaboration at the LHC has measured
jet quenching in central Pb+Pb\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 2.76 TeV with a
different approach to the suppression of uncorrelated background, using the
semi-inclusive distribution of reconstructed jets recoiling from a
high-\ensuremath{p_\mathrm{T}}\ trigger hadron~\cite{Adam:2015doa}. In the ALICE approach,
correction for large uncorrelated jet background is carried out at the
level of ensemble-averaged distributions, without discrimination on a jet-by-jet basis of correlated jet signal from uncorrelated background jets. This background
suppression procedure, which does not impose bias on the reported jet
population, enables heavy ion jet measurements over a broad kinematic range,
including large jet radius \ensuremath{R}\ and low \ensuremath{p_{\mathrm{T,jet}}}.
This manuscript reports new measurements of jet quenching in central and
peripheral Au+Au\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV, by the STAR
Collaboration at RHIC. These
measurements are also based on the semi-inclusive distribution of
reconstructed charged-particle jets recoiling from a high-\ensuremath{p_\mathrm{T}}\ trigger
hadron. We apply a novel mixed-event technique for correcting
uncorrelated jet background, and compare it to the
approach used in the ALICE measurement~\cite{Adam:2015doa}.
Distributions of charged particle recoil jets with $\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}<30$ \ensuremath{\mathrm{GeV/}c}\
and jet resolution parameters (or jet radius) \ensuremath{R}\ = 0.2, 0.3, 0.4 and 0.5 are reported as a function of \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\ and \ensuremath{\Delta\phi}, the
azimuthal angle of the jet centroid relative to that of the trigger axis.
These measurements probe medium-induced modification of jet production and
internal jet structure in several ways. Suppression of jet yield in central
compared to the yield in peripheral collisions with the same jet cone radius \ensuremath{R}\
measures the energy transported to angles larger than \ensuremath{R}.
Comparison of recoil jet
yield at different \ensuremath{R}\ measures medium-induced modification of
jet shape (intra-jet
broadening~\cite{Wang:2013cia,Kurkela:2014tla,Casalderrey-Solana:2016jvj}).
The distribution of \ensuremath{\Delta\phi}\ measures medium-induced acoplanarity (inter-jet
broadening). Yield enhancement in the tail of the \ensuremath{\Delta\phi}\ distribution could
indicate medium-induced Moli{\`e}re scattering off quasi-particles in the hot QCD
medium~\cite{D'Eramo:2012jh,Wang:2013cia}. The acoplanarity distribution of low energy
jets is sensitive to $\langle\ensuremath{\hat{q}}\cdot{L}\rangle$, where \ensuremath{\hat{q}}\ is the jet transport parameter and
$L$ is the in-medium path length~\cite{Chen:2016vem}.
We compare these results with
those from ALICE~\cite{Adam:2015doa}, providing a direct comparison of jet quenching
measured by reconstructed jets at RHIC and the LHC.
Comparison of these measurements to distributions from p+p\ collisions
at \ensuremath{\sqrt{s}}\ = 200 GeV can be used to identify nuclear effects that
are present in peripheral Au+Au\ collisions. However, due to
the lack of a measured reference distribution for p+p\ collisions
at present, we compare the Au+Au\ measurements to expectations for p+p\
collisions at
\ensuremath{\sqrt{s}}\ = 200 GeV from the PYTHIA Monte Carlo event generator, tune A~\cite{Sjostrand:2006za}, and from a perturbative QCD calculation at
Next-to-Leading Order (NLO)~\cite{deFlorian:2009fw}.
The paper is organized as follows: Sect. II, experiment, dataset, and offline
analysis; Sect. III, jet reconstruction; Sect. IV, semi-inclusive hadron+jet
distributions; Sect. V, uncorrelated background and event mixing; Sect. VI, raw
distributions; Sect. VII, corrections; Sect. VIII, systematic uncertainties;
Sect. IX, closure test; Sect. X, perturbative QCD calculation; Sect. XI,
results; and Sect. XII, summary.
\section{Perturbative QCD calculation}
\label{sect:pQCD}
The semi-inclusive recoil jet distribution is the ratio of
cross sections for h+jet and inclusive hadron production
(Eq.~\ref{eq:hJetDefinition}). The spin-dependent cross section for h+jet
production in p+p\ collisions at \ensuremath{\sqrt{s}}\ = 200 GeV has been calculated
perturbatively at NLO~\cite{deFlorian:2009fw}. We utilize this NLO approach to
calculate the spin-averaged h+jet and inclusive hadron cross sections, and
their ratio.
This measurement reports charged-particle jets. Although charged-particle jets
are not infrared-safe in perturbation theory, non-perturbative track functions
have been defined that represent the energy fraction of a parton
carried by charged tracks and that account for infrared divergences, enabling
calculation of infrared-safe charged-jet observables~\cite{Chang:2013rca}.
PYTHIA-based calculations have been compared to such
track functions and have similar evolution~\cite{Chang:2013rca}.
For comparison of these measurements to NLO pQCD calculations, we therefore
utilize PYTHIA to transform perturbatively calculated distributions from the
parton to the charged-particle level.
\begin{figure}[htbp]
\includegraphics[width=0.45\textwidth]{NLO_R04.png}
\caption{(Color online) Calculation of the semi-inclusive recoil jet distribution in p+p\ collisions at \ensuremath{\sqrt{s}}\ = 200 GeV, for jets
with \ensuremath{R}\ = 0.4. The parton-level distribution
is calculated perturbatively at NLO~\cite{deFlorian:2009fw}. The band shows the theoretical uncertainty due to scale variations. The
charged-jet distribution is the transformation of the parton-level
jet distribution using PYTHIA.}
\label{fig:NLO}
\end{figure}
Figure~\ref{fig:NLO} shows the distribution of \ensuremath{Y\left(\pTjetch\right)} (Eq.~\ref{eq:hJetYield}) for jets
with \ensuremath{R}\ = 0.4 in p+p\ collisions at \ensuremath{\sqrt{s}}\ = 200 GeV (Eq.~\ref{eq:hJetDefinition},
RHS). The NLO pQCD formalism in~\cite{deFlorian:2009fw} is used for both the h+jet and inclusive hadron cross
sections, with CTEQ6M parton distribution functions~\cite{Pumplin:2002vw} and DSS
fragmentation functions~\cite{deFlorian:2007ekg}.
Variation of a factor 2 in the renormalization and factorization scales gives a
variation in the ratio of 30\%--40\%, which represents the theoretical
uncertainty. The figure also shows \ensuremath{Y\left(\pTjetch\right)}\ at the
charged-particle level, obtained by transforming the NLO distribution of recoil
jets to charged-particle jets using PYTHIA, in this case version
6.4.26, tune Perugia-0~\cite{Skands:2010ak}.
At LO, the trigger hadron threshold of 9 \ensuremath{\mathrm{GeV/}c}\ sets a lower bound for \ensuremath{p_{\mathrm{T,jet}}}\ of
the recoil jet. The parton-level recoil jet distribution at NLO indeed exhibits
a peak around
\ensuremath{p_{\mathrm{T,jet}}}\ = 9 \ensuremath{\mathrm{GeV/}c}, reflecting this kinematic constraint. However, yield at lower
\ensuremath{p_{\mathrm{T,jet}}}\ is also observed, indicating a contribution from higher-order processes. The peak is significantly reduced by the transformation from
parton-level to charged-particle level, which both reduces and smears
\ensuremath{p_{\mathrm{T,jet}}}. We note that, in this calculation, each parton-level
jet is transformed into only one particle-level jet. The transformation from parton-level to
particle-level distributions based on PYTHIA therefore does not account for jet splitting,
which may contribute at low \ensuremath{p_{\mathrm{T,jet}}}\ and for small \ensuremath{R}.
Comparison of these distributions to measurements is made in the
following section.
\section{Jet reconstruction}
\label{sect:JetReco}
The analysis utilizes charged jets, which are composed of charged tracks. Jet reconstruction is carried out with the \ensuremath{k_\mathrm{T}}~\cite{Cacciari:2011ma}
and anti-\ensuremath{k_\mathrm{T}}~\cite{FastJetAntikt} algorithms applied to all accepted charged
tracks using the E-recombination scheme~\cite{Cacciari:2011ma}. Jet distributions are corrected to the charged particle level for the effects of uncorrelated background and instrumental response.
Jet area is determined using the Fastjet area algorithm~\cite{FastJetArea} with
ghost particle area of 0.01. Ghost particles are randomly generated particles
with negligible \ensuremath{p_\mathrm{T}}\ that are distributed uniformly in the acceptance with known
density, and are clustered during jet reconstruction together with real tracks.
The
number of ghost particles in a jet thereby provides an infrared and
collinear-safe (IRC-safe) measurement of jet area, for jets of arbitrary
shape~\cite{FastJetArea}.
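The ghost-counting idea can be illustrated with a short Python sketch (illustrative only, not part of the analysis code; the function name and inputs are hypothetical): ghosts are scattered uniformly over the acceptance with known density, and the area of an idealized circular jet is estimated by counting the ghosts it contains.

```python
import math
import random

def ghost_area_estimate(jet_eta, jet_phi, R, ghost_area=0.01,
                        eta_max=1.0, seed=1):
    # Scatter ghosts uniformly over the (eta, phi) acceptance with
    # density 1/ghost_area, then count those falling inside an
    # idealized circular jet of radius R centered at (jet_eta, jet_phi);
    # area ~ n_ghosts_in_jet * ghost_area.
    rng = random.Random(seed)
    acceptance = (2 * eta_max) * (2 * math.pi)
    n_ghosts = int(acceptance / ghost_area)
    inside = 0
    for _ in range(n_ghosts):
        eta = rng.uniform(-eta_max, eta_max)
        phi = rng.uniform(0.0, 2 * math.pi)
        dphi = abs(phi - jet_phi)
        dphi = min(dphi, 2 * math.pi - dphi)  # wrap azimuthal distance
        if (eta - jet_eta) ** 2 + dphi ** 2 < R ** 2:
            inside += 1
    return inside * ghost_area
```

For an isolated circular jet the estimate converges to $\pi R^2$; in FastJet the same counting is applied to the ghosts actually clustered into each jet, which is what makes the area measurement well defined for jets of arbitrary shape.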
We utilize the following notation to distinguish \ensuremath{p_\mathrm{T}}\ of various types of jet in
the analysis: \ensuremath{p_\mathrm{T,jet}^\mathrm{raw,ch}}\ is \ensuremath{p_\mathrm{T}}\ of jets generated by the jet reconstruction
algorithm; \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}\ is \ensuremath{p_\mathrm{T,jet}^\mathrm{raw,ch}}\ adjusted by an estimate of the uncorrelated
background contribution; and \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\ is \ensuremath{p_\mathrm{T}}\ of jets after full
correction for the effects of instrumental response and background fluctuations.
For the simulation of p+p\ collisions, \ensuremath{p_{\mathrm{T,jet}}^{\mathrm{part}}}\ is the reconstructed jet
transverse momentum at the particle level, and \ensuremath{p_{\mathrm{T,jet}}^{\mathrm{det}}}\ is that at
the detector level, with no correction for uncorrelated background applied;
i.e. these are equivalent to \ensuremath{p_\mathrm{T,jet}^\mathrm{raw,ch}}\ at the two levels of simulation.
Discrimination of correlated jet signal from uncorrelated background in
this analysis is carried out at the level of ensemble-averaged
distributions. Specifically, we do not discriminate the individual objects
generated by the jet reconstruction algorithm based on features that may indicate
contribution from high-Q$^2$ partonic scattering processes.
We therefore refer to all such objects as ``jet candidates'',
rather than simply as ``jets'', to denote that a
significant fraction of such objects are purely combinatoric in origin, i.e.
without a component arising from a high-Q$^2$ scattering process, in contrast to
what is conventionally meant by the term ``jet'' in QCD.
Jet reconstruction is carried out multiple times for each event. The first jet
reconstruction pass uses the
\ensuremath{k_\mathrm{T}}\ algorithm with \ensuremath{R}\ = 0.3 to estimate
the background transverse energy density $\rho$ in the event~\cite{FastJetPileup},
\begin{equation}
\rho=\mathrm{median}\left\{ \frac{\ensuremath{p_\mathrm{T,jet}^\mathrm{raw,i}}}{\ensuremath{A_\mathrm{jet}^\mathrm{i}}} \right\},
\label{eq:rho}
\end{equation}
\noindent
where $i$ labels the jet candidates in the event, and \ensuremath{p_\mathrm{T,jet}^\mathrm{raw,i}}\ and
\ensuremath{A_\mathrm{jet}^\mathrm{i}}\ are the transverse momentum and area of jet candidate $i$. The median is
calculated by excluding the two hardest jets in the event for peripheral Au+Au\
collisions, and the three hardest jets for central Au+Au\ collisions.
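In schematic form, the estimate of Eq.~\ref{eq:rho} amounts to the following Python sketch (illustrative only; the list-of-tuples input format is hypothetical):

```python
def estimate_rho(jets, n_exclude=2):
    # Median of pT/area over the kT jet candidates in one event,
    # excluding the n_exclude hardest candidates (2 for peripheral,
    # 3 for central Au+Au in the text).
    # jets: list of (pt_raw, area) tuples.
    kept = sorted(jets, key=lambda j: j[0], reverse=True)[n_exclude:]
    densities = sorted(pt / area for pt, area in kept)
    n = len(densities)
    mid = n // 2
    if n % 2:
        return densities[mid]
    return 0.5 * (densities[mid - 1] + densities[mid])
```

Excluding the hardest candidates reduces the pull of true hard jets on the median, so the estimate tracks the soft, uncorrelated component of the event.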
\begin{figure}[htbp]
\includegraphics[width=0.5\textwidth]{Rho_distribution.png}
\caption{(Color online) Upper panel: distribution of $\rho$
for central and peripheral Au+Au\ collisions (SE),
and for mixed events (ME, see Sect.~\ref{sect:ME}). Lower panel: ratio of distributions SE/ME for central Au+Au\ collisions. Blue points show the ME distribution used in the analysis; red points show the same distribution shifted by 60 MeV/($c$ sr). See discussion in Sect.~\ref{sect:ME}.}
\label{fig:rho}
\end{figure}
Figure \ref{fig:rho} shows the distribution of $\rho$ for central and peripheral
Au+Au\ collisions. Distributions are shown for STAR data (SE) and for
mixed events (ME, see Sect.~\ref{sect:ME}). The term SE refers to ``same events'',
in contrast to mixed events. The value of $\rho$ varies event-to-event due to
variation in gross event features within each centrality class, in particular
multiplicity and transverse energy. There are peripheral Au+Au\ events with
$\rho=0$, which can occur for low multiplicity events since $\rho$ is calculated
as the median of the jet energy density distribution.
Successive jet reconstruction passes are then carried out using the
anti-\ensuremath{k_\mathrm{T}}\ algorithm, with \ensuremath{R}\ = 0.2, 0.3, 0.4, and 0.5. For each jet candidate
generated in these passes, the
value of \ensuremath{p_\mathrm{T,jet}^\mathrm{raw,i}}\ is adjusted by the estimated background
energy density scaled by jet area~\cite{FastJetPileup},
\begin{equation}
\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,i}}=\ensuremath{p_\mathrm{T,jet}^\mathrm{raw,i}} - \ensuremath{\rho\cdot{A_\mathrm{jet}^\mathrm{i}}}.
\label{eq:pTraw}
\end{equation}
The jet candidate acceptance is $|\eta_{\rm{jet}}|<(1.0-\ensuremath{R})$, where
$\eta_{\mathrm{jet}}$ is the pseudo-rapidity of the jet
centroid. A jet area cut suppresses jets comprising uncorrelated background,
while preserving high efficiency for jet candidates containing a true jet. Jet candidates are rejected if $\ensuremath{A_\mathrm{jet}^\mathrm{i}}<0.05$ for \ensuremath{R}\ = 0.2; $\ensuremath{A_\mathrm{jet}^\mathrm{i}}<0.20$
for \ensuremath{R}\ = 0.3; $\ensuremath{A_\mathrm{jet}^\mathrm{i}}<0.35$ for \ensuremath{R}\ = 0.4; and $\ensuremath{A_\mathrm{jet}^\mathrm{i}}<0.65$ for \ensuremath{R}\ =
0.5. The jet area cut is discussed further in Sect.~\ref{sect:ME}.
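The per-candidate adjustment of Eq.~\ref{eq:pTraw}, combined with the fiducial and jet-area cuts above, can be sketched as follows (a schematic Python illustration; the input format is hypothetical and detector effects are omitted):

```python
# Minimum jet-area cut for each resolution parameter R, from the text.
AREA_CUT = {0.2: 0.05, 0.3: 0.20, 0.4: 0.35, 0.5: 0.65}

def accepted_candidates(jets, rho, R, eta_max=1.0):
    # jets: list of (pt_raw, area, eta) for anti-kT candidates.
    # Returns pT_reco = pT_raw - rho * A for candidates passing the
    # fiducial cut |eta_jet| < eta_max - R and the area cut.
    out = []
    for pt_raw, area, eta in jets:
        if abs(eta) >= eta_max - R:   # jet centroid outside fiducial acceptance
            continue
        if area < AREA_CUT[R]:        # suppress combinatorial candidates
            continue
        out.append(pt_raw - rho * area)
    return out
```

Note that no cut is placed on the adjusted value itself: candidates with negative pT_reco are retained, consistent with the statement below that no jet candidates are excluded based on \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,i}}.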
\begin{figure}[htbp]
\includegraphics[width=0.5\textwidth]{JER.pdf}
\caption{(Color online) Distribution of jets with \ensuremath{R}=0.3 in p+p\ collisions at \ensuremath{\sqrt{s}}=200 GeV, generated by PYTHIA: \ensuremath{p_{\mathrm{T,jet}}^{\mathrm{det}}}\ (detector level) for fixed values of \ensuremath{p_{\mathrm{T,jet}}^{\mathrm{part}}}\ (particle level). Detector-level effects are for the environment of central Au+Au\ collisions. The red lines are Gaussian fits to the narrow peak, with relative width given as $\delta\ensuremath{p_\mathrm{T}}/\ensuremath{p_\mathrm{T}}$.}
\label{fig:JER}
\end{figure}
Figure~\ref{fig:JER} shows the distribution of jets simulated by PYTHIA for fixed values of particle-level \ensuremath{p_{\mathrm{T,jet}}^{\mathrm{part}}}, as a function of detector-level \ensuremath{p_{\mathrm{T,jet}}^{\mathrm{det}}}. The detector-level effects correspond to conditions in central Au+Au\ collisions. These distributions represent the instrumental response to charged jets, and are non-Gaussian. Correction for these instrumental effects is carried out by an unfolding procedure~\cite{Cowan:2002in,Hocker:1995kb} utilizing an
instrumental response matrix. It is nevertheless illustrative to quantify the
main features of the instrumental response. For charged jets in the range
$5<\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}<30$ \ensuremath{\mathrm{GeV/}c}, the jet energy resolution (JER) due to instrumental effects has a narrow peak with
$\sigma=5{-}10\%$ and a tail towards low jet energy. The complete JER distribution has
RMS = 25\%, with negligible dependence of the JER on \ensuremath{R}. The jet energy
scale (JES) uncertainty due to instrumental effects, which arises predominantly
from uncertainty in tracking efficiency, is 5\%, likewise with negligible
\ensuremath{R}-dependence.
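The role of the instrumental response matrix can be illustrated with a toy Python sketch (schematic only; in the analysis the matrix is built from full detector simulation, and unfolding inverts the smearing rather than applying it): matched particle-level/detector-level jet pairs are histogrammed, each truth column is normalized to unit probability, and folding with the matrix smears a particle-level spectrum.

```python
def build_response(pairs, n_bins, pt_max):
    # Toy response matrix R[d][p] = P(det bin d | part bin p), built
    # from matched (pt_part, pt_det) jet pairs with uniform binning.
    R = [[0.0] * n_bins for _ in range(n_bins)]
    width = pt_max / n_bins
    for pt_part, pt_det in pairs:
        p = min(int(pt_part / width), n_bins - 1)
        d = min(int(pt_det / width), n_bins - 1)
        R[d][p] += 1.0
    for p in range(n_bins):               # normalize each truth column
        tot = sum(R[d][p] for d in range(n_bins))
        if tot > 0:
            for d in range(n_bins):
                R[d][p] /= tot
    return R

def fold(R, truth):
    # Apply the response to a particle-level spectrum (forward smearing).
    n = len(truth)
    return [sum(R[d][p] * truth[p] for p in range(n)) for d in range(n)]
```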
\begin{figure*}[htbp]
\includegraphics[width=1.0\textwidth]{Jet_event_display_eta_phi.png}
\caption{(Color online) Event display showing the distribution of
charged tracks
and jets (anti-\ensuremath{k_\mathrm{T}},\ensuremath{R}=0.3) in one Au+Au\ collision from the central event
population, as
a function of $\eta$ and
$\phi$.
Filled circles show charged tracks, open circles show ghost
particles, and the centroid of each accepted jet is indicated by ``x''.
Charged tracks
and ghost particles clustered into each reconstructed jet have the same
color. The shaded text boxes give \ensuremath{p_\mathrm{T,jet}^\mathrm{raw,ch}}\ for all jet candidates with
$\ensuremath{p_\mathrm{T,jet}^\mathrm{raw,ch}}>5$ \ensuremath{\mathrm{GeV/}c}. The outer dashed rectangle is the tracking acceptance, while the red shaded area is the region of the tracking acceptance that is excluded by the \ensuremath{R}-dependent jet fiducial cut. The trigger particle is indicated by the star, while the blue
shaded area is the recoil jet acceptance. The trigger particle in this event is associated
with the jet candidate with largest \ensuremath{p_\mathrm{T,jet}^\mathrm{raw,ch}}.}
\label{fig:Eta_phi}
\end{figure*}
There is no absolute definition of
uncorrelated background energy density in an event. The definition of
$\rho$ outlined above is not unique; different choices
of reconstruction algorithm, jet radius \ensuremath{R}, and number of excluded jets
provide equally valid background estimates. As discussed below, the jet-wise
adjustment in
Eq.~\ref{eq:pTraw} is the first step in a multi-step process in which
full correction
for uncorrelated background utilizes an instrumental response matrix
incorporating the same choice of $\rho$.
Since no jet candidates are excluded based on their value
of \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,i}}\ in this
analysis, the final corrected spectrum
is independent of the specific choices made in the definition of
$\rho$. The above choices for $\rho$ are made for technical reasons, to
ensure numerical stability of the unfolding procedures.
\section{Semi-inclusive hadron+jet distributions}
\label{sect:Observables}
\subsection{Specification of observables}
The analysis is based on the semi-inclusive distribution
of charged jets recoiling from a high-\ensuremath{p_\mathrm{T}}\ trigger hadron (``h+jet'')~\cite{deFlorian:2009fw,deBarros:2012ws,Adam:2015doa}. The trigger hadron is a
charged particle with \ensuremath{p_{\mathrm{T,trig}}}\ within a specified interval. The interval for the
primary analysis is $9<\ensuremath{p_{\mathrm{T,trig}}}<30$ \ensuremath{\mathrm{GeV/}c}, while lower \ensuremath{p_{\mathrm{T,trig}}}\ is used for
systematic studies.
The trigger hadron is selected inclusively: if a charged hadron is
observed within the \ensuremath{p_{\mathrm{T,trig}}}\ interval, the event is accepted;
otherwise it is rejected. The
probability per central Au+Au\ collision to find a hadron within the interval
$9<\ensuremath{p_{\mathrm{T,trig}}}<30$ \ensuremath{\mathrm{GeV/}c}\
is about 0.1\%, while the probability to observe multiple trigger
hadron candidates is negligible. The resulting \ensuremath{p_\mathrm{T}}-distribution\ of trigger
hadrons is therefore the same as that of the inclusive charged hadron
distribution.
The trigger hadron is not
necessarily the highest-\ensuremath{p_\mathrm{T}}\ hadron in the event, because neutral
hadrons are not considered in the analysis.
Figure \ref{fig:Eta_phi} is an event display for an Au+Au\ collision in
the central event population, showing charged tracks, ghost
particles, and reconstructed jet candidates. The acceptance
is densely populated with tracks, and all tracks shown are associated
with an accepted jet candidate. Voids in the track distribution occur near the
edges of the jet fiducial acceptance, where the region occupied by a
jet candidate lies partially within the tracking acceptance but its centroid
lies outside the jet acceptance. The most energetic jet in this
event happens to contain the trigger hadron, but that is not required. The
recoil acceptance contains two jets with $\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}>5$ \ensuremath{\mathrm{GeV/}c}.
The measured observable is the number of recoil jets observed in a phase space bin, normalized by the number of trigger hadrons. Because the trigger hadron is chosen inclusively, the resulting
distribution is semi-inclusive and is equivalent to the ratio
of production cross sections,
\begin{widetext}
\begin{equation}
\frac{1}{\ensuremath{\mathrm{N}^\mathrm{AA}_{\rm{trig}}}}\cdot\ensuremath{\frac{\rm{d}^{3}N^\mathrm{AA}_{jet}}{\mathrm{d}\pTjetch\mathrm{d}\ensuremath{\Delta\phi}\mathrm{d}\ensuremath{\eta_\mathrm{jet}}}}\Bigg\vert_{\ensuremath{p_{\mathrm{T,trig}}}}
= \left(
\frac{1}{\sigma^{\mathrm{AA}\rightarrow\rm{h}+X}} \cdot
\frac{\rm{d}^3\sigma^{\mathrm{AA}\rightarrow\rm{h}+{jet}+X}}{\mathrm{d}\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\mathrm{d}\ensuremath{\Delta\phi}\mathrm{d}\ensuremath{\eta_\mathrm{jet}}}\right)
\Bigg\vert_{\ensuremath{p_{\mathrm{T,trig}}}},
\label{eq:hJetDefinition}
\end{equation}
\end{widetext}
\noindent
where AA denotes p+p\ or Au+Au\ collisions; \ensuremath{\mathrm{N}^\mathrm{AA}_{\rm{trig}}}\ is the number of
trigger hadrons; $\sigma^{\mathrm{AA}\rightarrow\rm{h}+X}$ is the cross
section to generate a hadron within the \ensuremath{p_{\mathrm{T,trig}}}\ interval;
$\rm{d}^3\sigma^{\mathrm{AA}\rightarrow\rm{h}+{jet}+X}/\rm{d}\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\mathrm{d}\ensuremath{\Delta\phi}\mathrm{d}\ensuremath{\eta_\mathrm{jet}}$ is the differential cross section for coincidence production of a trigger
hadron and recoil jet; \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\ and \ensuremath{\eta_\mathrm{jet}}\ are the charged jet
transverse momentum and pseudo-rapidity; and \ensuremath{\Delta\phi}\ is the azimuthal
separation between trigger hadron and recoil jet.
We report two projections of Eq. \ref{eq:hJetDefinition}: the jet
yield integrated over a recoil region in azimuth relative to the
trigger hadron direction,
\begin{widetext}
\begin{equation}
\ensuremath{Y\left(\pTjetch\right)}=\int_{3\pi/4}^{5\pi/4}\mathrm{d}\ensuremath{\Delta\phi}
\left[\frac{1}{\ensuremath{\mathrm{N}^\mathrm{AA}_{\rm{trig}}}}\cdot\ensuremath{\frac{\rm{d}^{3}N^\mathrm{AA}_{jet}}{\mathrm{d}\pTjetch\mathrm{d}\ensuremath{\Delta\phi}\mathrm{d}\ensuremath{\eta_\mathrm{jet}}}}\Bigg\vert_{\ensuremath{p_{\mathrm{T,trig}}}>\ensuremath{p_{\mathrm{T,thresh}}}}\right];
\label{eq:hJetYield}
\end{equation}
\end{widetext}
\noindent
and the azimuthal distribution of recoil jets in an interval of \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}},
\begin{widetext}
\begin{equation}
\ensuremath{\Phi\left(\ensuremath{\Delta\phi}\right)}=\int_{\ensuremath{p_\mathrm{T,jet;low}^\mathrm{ch}}}^{\ensuremath{p_\mathrm{T,jet;high}^\mathrm{ch}}}\mathrm{d}\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}
\left[\frac{1}{\ensuremath{\mathrm{N}^\mathrm{AA}_{\rm{trig}}}}\cdot\ensuremath{\frac{\rm{d}^{3}N^\mathrm{AA}_{jet}}{\mathrm{d}\pTjetch\mathrm{d}\ensuremath{\Delta\phi}\mathrm{d}\ensuremath{\eta_\mathrm{jet}}}}\Bigg\vert_{\ensuremath{p_{\mathrm{T,trig}}}>\ensuremath{p_{\mathrm{T,thresh}}}}\right].
\label{eq:hJetPhi}
\end{equation}
\end{widetext}
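The construction of the per-trigger yield in Eq.~\ref{eq:hJetYield} from event-wise inputs can be sketched in Python (schematic, with a hypothetical input format; background corrections and the \ensuremath{\eta_\mathrm{jet}}\ normalization are omitted):

```python
import math

def recoil_yield(events, pt_thresh=9.0, n_bins=5, pt_max=25.0):
    # events: list of (trigger_pt, [(jet_pt, dphi), ...]).
    # Count recoil jets with |dphi - pi| < pi/4 (i.e. dphi in
    # [3pi/4, 5pi/4]) in bins of jet pT, normalized per trigger
    # hadron and per unit pT.
    n_trig = 0
    hist = [0.0] * n_bins
    width = pt_max / n_bins
    for trig_pt, jets in events:
        if trig_pt <= pt_thresh:
            continue                      # event not triggered
        n_trig += 1
        for jet_pt, dphi in jets:
            if abs(dphi - math.pi) < math.pi / 4 and 0.0 <= jet_pt < pt_max:
                hist[int(jet_pt / width)] += 1.0
    return [h / (n_trig * width) for h in hist] if n_trig else hist
```

Because every triggered event contributes to the denominator whether or not a recoil jet is found, the quantity is semi-inclusive, mirroring the cross-section ratio of Eq.~\ref{eq:hJetDefinition}.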
\subsection{Discussion of observables}
The semi-inclusive observable defined in Eq.
\ref{eq:hJetDefinition} isolates a single high-$Q^2$
process in each event by the requirement of a high-\ensuremath{p_\mathrm{T}}\ hadron, and then measures
the distribution of correlated recoil jets. The main considerations for this
choice
of observable are as follows (see also~\cite{Adam:2015doa}).
The observable in Eq.
\ref{eq:hJetDefinition} is equivalent to the
ratio of inclusive cross sections, which we first discuss from a theoretical
perspective. Inclusive high-\ensuremath{p_\mathrm{T}}\
hadron production in p+p\ collisions at \ensuremath{\sqrt{s}}\ = 200 GeV is
well-described by pQCD calculations at NLO~\cite{deFlorian:2005yj,d'Enterria:2013vba}, and the h+jet cross section in
p+p\ collisions at \ensuremath{\sqrt{s}}\ = 200 GeV has also
been calculated in pQCD at NLO~\cite{deFlorian:2009fw}. For p+p\
collisions at \ensuremath{\sqrt{s}}\ = 200 GeV, the observable in Eq. \ref{eq:hJetDefinition} is
therefore calculable in pQCD at NLO (Sect.
\ref{sect:pQCD}). In Au+Au\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV,
hadrons with $\ensuremath{p_\mathrm{T}}>5$ \ensuremath{\mathrm{GeV/}c}\ are expected to arise predominantly from jet
fragmentation~\cite{Renk:2011gj}, and pQCD calculations incorporating
medium-evolved fragmentation functions and other techniques are in good agreement with
measurements of inclusive
hadron suppression at high \ensuremath{p_\mathrm{T}}~\cite{Armesto:2007dt,Chang:2014fba,Burke:2013yra}.
Inclusive hadron production in Au+Au\ collisions is therefore well-understood in the
trigger interval of this analysis, using perturbative approaches.
Any procedure to accept a subset of events from the Minimum Bias
distribution
imposes bias on the accepted event population. Event selection in this
analysis is simple, requiring only the presence of a high-\ensuremath{p_\mathrm{T}}\ charged hadron in
the event, with no requirement that a jet satisfying certain criteria be found
in the recoil acceptance. Specifically, no rejection of jet candidates is
carried out based
on \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,i}}, and
discrimination of correlated from uncorrelated yield is carried
out at the level of ensemble-averaged distributions. All jet
candidates in the recoil acceptance therefore contribute to the recoil jet
distribution, and no selection bias is imposed on the correlated recoil jet
population by the
procedure to discriminate correlated jet signal from background.
Trigger hadron selection is carried out inclusively, resulting in
the same \ensuremath{p_\mathrm{T}}-distribution as that of inclusive hadron production~\cite{Adams:2003kv,Adare:2012wg}. Although the same kinematic selection
is used for central and peripheral Au+Au\
collisions ($9<\ensuremath{p_{\mathrm{T,trig}}}<30$ \ensuremath{\mathrm{GeV/}c}), the selected distribution of underlying hard processes
may differ between collision centralities because of jet quenching
effects on high-\ensuremath{p_\mathrm{T}}\ hadron production, resulting in different trigger bias. However, selection of high-\ensuremath{p_\mathrm{T}}\ hadrons is expected from model studies to bias towards
leading fragments of jets that have experienced little quenching, due to the
interplay of jet energy loss, the shape of the jet production spectrum, and jet
fragmentation~\cite{Baier:2002tc}, thereby limiting the effects of quenching
on the trigger bias.
Insight into the centrality dependence of the trigger bias can be obtained
from measurements of inclusive high-\ensuremath{p_\mathrm{T}}\ hadron production, whose yield is
strongly suppressed in central Au+Au\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV~\cite{Adare:2012wg,Adams:2003kv}.
Yield suppression of \ensuremath{\pi^0}\ production, measured by the ratio of the
inclusive yield in Au+Au\ to that in p+p\ collisions (\ensuremath{R_\mathrm{AA}}), has a rate of change with \ensuremath{p_\mathrm{T}}\ in central Au+Au\ collisions
(0\%-5\%) of $0.01\pm0.003\ (\ensuremath{\mathrm{GeV/}c})^{-1}$, over the
range $7<\ensuremath{p_\mathrm{T}}<20$ \ensuremath{\mathrm{GeV/}c}~\cite{Adare:2012wg}.
Similar \ensuremath{p_\mathrm{T}}-dependence is observed for peripheral collisions, though
with larger uncertainty.
In other words, while inclusive hadron production is strongly
suppressed in central relative to peripheral Au+Au\ collisions, the shape of the
inclusive \ensuremath{p_\mathrm{T}}-distribution is the same within uncertainties for the two
centralities.
This supports the conjecture of high-\ensuremath{p_\mathrm{T}}\ trigger hadrons being
generated preferentially by non-interacting jets,
thereby selecting a similar distribution of hard processes for peripheral and
central
collisions, though at a suppressed rate for central collisions.
Further exploration of the trigger bias
in this measurement requires theoretical calculations that incorporate jet
quenching. Since inclusive hadron \ensuremath{R_\mathrm{AA}}\ is modeled accurately by such
calculations (\cite{Burke:2013yra} and references therein), they will likewise
model the trigger bias accurately by including effects of jet quenching on the
generation of trigger hadrons.
\subsection{Interpretation of distributions}
\label{sect:Interp}
For jets in vacuum, a pQCD description is thought to be applicable for
$\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\gtrsim{10}$ \ensuremath{\mathrm{GeV/}c}, where jets are interpreted in terms of fragmentation of
quarks and gluons. In this analysis, in contrast, the terms ``jet'' and ``jet
candidate'' refer generically to objects reconstructed by the anti-\ensuremath{k_\mathrm{T}}\ algorithm
with specified \ensuremath{R}, without regard to the interpretability of such objects in
terms of quark or gluon fragmentation. The raw spectrum is measured as a
function of \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}, with contribution to each bin in \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}\ from a broad
range in \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\ due predominantly to large \ensuremath{p_\mathrm{T}}-smearing by background
fluctuations. No cuts are applied on \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}, in order not to bias the measured
\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\ distributions.
The corrected recoil jet distributions therefore contain entries for
the entire range that is formally allowed, $\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}>0$, and represent the
distribution in \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\ of all jet-like objects that are correlated with the
trigger. The per-trigger rate of such objects is finite for $\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\sim0$,
since jet-like objects with $\ensuremath{R}>0$ subtend finite area, and a finite number of
such objects fill the experimental acceptance.
The corrected recoil jet distributions are presented in
Sect.~\ref{sect:Results} over their full measured range, $\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}>0$. However,
for interpretation of these distributions in terms of parton showers and their
modification in-medium, we restrict consideration to $\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}>10$ \ensuremath{\mathrm{GeV/}c}, the
range over which a perturbative description of jets is commonly thought to be
applicable in vacuum.
\section{Uncorrelated background and event mixing}
\label{sect:ME}
Jet production in collisions of heavy nuclei occurs in a more complex
environment than in p+p\ collisions, due to the high multiplicity of hadrons
arising from copious soft interactions ($Q^2<$ few GeV$^2$) and the high rate
of multiple, incoherently generated jets. Collective effects in the evolution of
the system also shape the event structure. Hadrons from these various sources will contribute to the population within each phase-space region of
dimension \ensuremath{R}\ that is characteristic of jet reconstruction. This
renders jet measurements in
nuclear collisions especially complex, necessitating precise definition of
jet signal and uncorrelated background.
In this analysis, the raw jet yield distribution as a function of \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}\
requires correction for the large yield of
background jets that are uncorrelated with the trigger hadron, and for the
\ensuremath{p_\mathrm{T}}-smearing of correlated jets by the background. The
uncorrelated background jet yield is subtracted at the level of
ensemble-averaged distributions using mixed events (ME), described below.
Correction for \ensuremath{p_\mathrm{T}}-smearing due to background
fluctuations is carried out by the unfolding of ensemble-averaged distributions.
In the ME procedure, real events from the population without high-\ensuremath{p_\mathrm{T}}\ trigger bias are assigned to exclusive classes, with each class
corresponding to a narrow bin in $M$, the uncorrected charged particle
multiplicity; \ensuremath{z_\mathrm{vtx}}, the $z$-position of reconstructed vertex; and \ensuremath{\phi_{EP}}, the
azimuthal orientation of the event plane (EP) in the
laboratory frame. The EP orientation is an approximation of the reaction plane orientation, defined by the collision impact parameter and the beam axis. Event plane reconstruction is described in~\cite{Adamczyk:2013gw}.
There are 8 bins in $M$, 20 bins in \ensuremath{z_\mathrm{vtx}}, and 4
bins in \ensuremath{\phi_{EP}}, corresponding to 640 distinct event mixing
classes. Within each multiplicity bin the distribution of track
multiplicity is sampled from the SE data set, to accurately reproduce the
multiplicity distribution of real events. This procedure accounts
for the multiplicity bias in events containing a high-\ensuremath{p_\mathrm{T}}\ trigger hadron,
relative to the MB population.
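The binning into mixing classes amounts to a flat index over the three event variables; a minimal sketch (the bin-finding for $M$, \ensuremath{z_\mathrm{vtx}}, and \ensuremath{\phi_{EP}}\ itself is omitted):

```python
def mixing_class(m_bin, z_bin, ep_bin, n_z=20, n_ep=4):
    # Flat index over (multiplicity, z-vertex, event-plane) bins;
    # with 8 x 20 x 4 bins this yields 640 distinct mixing classes.
    return (m_bin * n_z + z_bin) * n_ep + ep_bin
```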
Each mixed event with $M$ tracks is
generated by drawing one track from each of $M$ different events in a mixing class. For
efficient construction of ME events, the event mixing algorithm draws from a buffer of
about 1000 real events, with the algorithm terminating when any event in the
buffer has had all its tracks used. All unused tracks remaining in the buffer
are discarded, the event buffer is refilled, and the procedure is repeated.
Tracks are therefore used at most once in the mixing procedure.
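The track-drawing step can be sketched as follows (a simplified Python illustration of the buffer logic described above; the data structures are hypothetical and detector details are omitted):

```python
import random

def mix_events(buffer_events, seed=7):
    # buffer_events: real events of one mixing class, each a list of
    # hashable tracks. Each mixed event of multiplicity M draws one
    # track from each of M distinct buffered events; mixing stops once
    # any buffered event is exhausted, so every track is used at most
    # once and remaining unused tracks are discarded.
    rng = random.Random(seed)
    pools = [list(ev) for ev in buffer_events]
    mixed = []
    while all(pools):
        # target multiplicity sampled from the real (SE) events
        m = len(rng.choice(buffer_events))
        if m > len(pools):
            break
        mixed.append([p.pop() for p in rng.sample(pools, m)])
    return mixed
```

Drawing each track from a different source event suppresses all multi-hadron correlations while preserving the single-track distributions of the class.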
The ME procedure generates an
event population without multi-hadron correlations, but with the detailed
features of real data in terms of non-uniformity in instrumental
response and variation in detector acceptance due to the \ensuremath{z_\mathrm{vtx}}\ distribution.
Incorporation of such detector effects in the ME population is required for accurate determination of the uncorrelated background
distribution in the recoil jet population.
\begin{figure}[htbp]
\includegraphics[width=0.4\textwidth]{Track_eta_phi_SE_ME.png}
\caption{(Color online) Distribution in ($\eta,\phi$) of charged particles from
central Au+Au\ collisions, with $\ensuremath{p_\mathrm{T}}<0.5$ \ensuremath{\mathrm{GeV/}c}. Top panel: real or ``same events'' (SE);
middle panel: mixed events (ME) for one event mixing class; lower panel:
projection of SE and ME distributions onto $\phi$.}
\label{fig:SE_ME_EtaPhi}
\end{figure}
Figure \ref{fig:SE_ME_EtaPhi} shows the distribution of tracks with
$\ensuremath{p_\mathrm{T}}<0.5$
\ensuremath{\mathrm{GeV/}c}\ for central Au+Au\ data, for SE events and for ME events from one mixing
class. The bottom panel shows the projection of the two distributions onto
$\phi$. The periodic structure in the $\phi$ projection is due to reduced
tracking efficiency near TPC sector boundaries, while the broad dip in the
region $-1.0<\phi<0$ is due to reduced overall efficiency in two TPC sectors in
this dataset. As noted above, only a subset of tracks from real events is used
in the ME population. Nevertheless, the SE and ME projections agree
in detail. Similar agreement is seen for all other ME mixing classes. This
level of agreement is likewise stable throughout the data-taking period, with
negligible time dependence.
The jet distribution due to uncorrelated background is determined by carrying
out the same jet reconstruction procedure on the ME events as is used for the
real data. However, no high-\ensuremath{p_\mathrm{T}}\ trigger hadron is required for the ME analysis;
rather, the trigger axis for ME events is chosen by selecting a random track,
resulting in an azimuthal distribution similar to that in the analysis of the SE population.
No jet candidates are excluded in the calculation of $\rho$ for ME
events, in contrast to the calculation of $\rho$ for SE events
(Sect.~\ref{sect:JetReco}). This choice is motivated by the fact that all
multi-hadron correlations, including those due to jets, are suppressed in ME
events.
Figure~\ref{fig:rho} shows the distribution of $\rho$ in one event-mixing class,
for both SE and ME events. The SE and ME $\rho$ distributions are in good
agreement for both peripheral and central collisions, thereby
validating the jet exclusion choices made for the various event
populations. The fit of a Gaussian function to the central peak of the
SE distribution gives $\sigma=3.7$ \ensuremath{\mathrm{GeV/}c}.
Looking in detail at the tails of the distribution, the SE/ME ratio for central Au+Au\ collisions
(lower panel, blue points) shows an excess in SE relative to ME of about 50\% in
the left tail (smaller $\rho$), where the rate is a factor $\sim10^{3}$ smaller than at the peak of the distribution. This excess, confined to a region of very low rate, suggests that the ME $\rho$
distribution is slightly narrower than the SE $\rho$ distribution. To quantify this
effect, the ME distribution is shifted towards smaller $\rho$ by 60 MeV/($c$ sr) (red
points), where a similar increase in SE/ME ratio is now seen instead in the
right tail at larger $\rho$. The width in the far tails of the ME $\rho$
distribution is therefore smaller than
the SE width by less than 60 MeV/($c$ sr). We discuss this effect below, in the context of Fig.~\ref{fig:RawLowTrigpT}.
\begin{figure}[htbp]
\includegraphics[width=0.49\textwidth]{jet_pt_vs_area_0_10.png}
\caption{(Color online) Distribution of SE and ME jet populations (\ensuremath{R}\ = 0.3) for
one event-mixing class in central Au+Au\ collisions, as a function of \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}\
and \mbox{$\mathrm{A_{jet}}$}. Top panel: real events (SE); middle panel: mixed events (ME); bottom panel:
projection of SE and ME distributions onto \mbox{$\mathrm{A_{jet}}$}. The lower panel also shows the
recoil jet area distribution for p+p\ collisions at \ensuremath{\sqrt{s}}\ = 200 GeV from
PYTHIA-simulated events at the particle level with $\ensuremath{p_{\mathrm{T,trig}}}>9$ \ensuremath{\mathrm{GeV/}c}, for all
recoil jets and for recoil $\ensuremath{p_\mathrm{T,jet}^\mathrm{part}}>5$ \ensuremath{\mathrm{GeV/}c}. The hatched region to the right
of the dashed line is the accepted region for the \mbox{$\mathrm{A_{jet}}$}\ cut.}
\label{fig:SE_ME_pT_Area}
\end{figure}
Figure \ref{fig:SE_ME_pT_Area} shows the distribution of jet candidates as a
function of \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}\ and \mbox{$\mathrm{A_{jet}}$}\ for one
event-mixing class, for SE events (top panel); ME events
(middle panel); and the projection of both
distributions onto \mbox{$\mathrm{A_{jet}}$}\ (bottom panel). The SE and ME distributions in Fig.
\ref{fig:SE_ME_pT_Area} agree in detail, with a peak in \mbox{$\mathrm{A_{jet}}$}\ centered near
$\pi\cdot\ensuremath{R}^2$. The bottom panel also shows
the \mbox{$\mathrm{A_{jet}}$}\ distribution from a PYTHIA particle-level
simulation of p+p\ collisions at \ensuremath{\sqrt{s}}\ = 200 GeV, for all reconstructed jets and
for jets with $\ensuremath{p_\mathrm{T,jet}^\mathrm{part}}>5$ \ensuremath{\mathrm{GeV/}c}\ recoiling from a trigger
hadron with $\ensuremath{p_\mathrm{T}}>9$ \ensuremath{\mathrm{GeV/}c}. The area distribution for $\ensuremath{p_\mathrm{T,jet}^\mathrm{part}}>5$ \ensuremath{\mathrm{GeV/}c}\
coincides with the main peak, without the tail to smaller area.
The detailed agreement of the \mbox{$\mathrm{A_{jet}}$}\ distributions for SE and ME events
seen in Fig. \ref{fig:SE_ME_pT_Area}, lower panel, shows that the \mbox{$\mathrm{A_{jet}}$}\
distribution for high-multiplicity events is driven predominantly by geometric
factors, specifically the experimental acceptance and \ensuremath{R}, together with the
response of the anti-\ensuremath{k_\mathrm{T}}\ algorithm to the high-multiplicity environment. The
correlated structure of true jets plays a less significant role. We note in
addition that \mbox{$\mathrm{A_{jet}}$}\ for true jets reconstructed with the anti-\ensuremath{k_\mathrm{T}}\ algorithm is
insensitive to the presence of uncorrelated
background~\cite{FastJetAntikt}. Reduction in the uncorrelated background jet
yield can therefore be carried out by a cut on \mbox{$\mathrm{A_{jet}}$}, as indicated by the vertical
dashed line. Based on the PYTHIA particle-level simulation, this cut suppresses
about 15\% of the yield of correlated jets for $\ensuremath{p_\mathrm{T,jet}^\mathrm{part}}<5$ \ensuremath{\mathrm{GeV/}c}, with
negligible suppression for $\ensuremath{p_\mathrm{T,jet}^\mathrm{part}}>5$ \ensuremath{\mathrm{GeV/}c}.
\section{Raw distributions}
\label{sect:pTcorrDistr}
\begin{figure*}[htbp]
\includegraphics[width=0.45\textwidth]{c_jet_pt_sub_leading_pt_9_pm_0_cent_0_r_0_norm_1.pdf}
\includegraphics[width=0.45\textwidth]{c_jet_pt_sub_leading_pt_9_pm_0_cent_1_r_0_norm_1.pdf}
\includegraphics[width=0.45\textwidth]{c_jet_pt_sub_leading_pt_9_pm_0_cent_0_r_1_norm_1.pdf}
\includegraphics[width=0.45\textwidth]{c_jet_pt_sub_leading_pt_9_pm_0_cent_1_r_1_norm_1.pdf}
\caption{(Color online) Distributions of \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}\ for Au+Au\ collisions at
\ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV. Left panels: central; right panels: peripheral. Upper
panels: \ensuremath{R}\ = 0.2; lower panels: \ensuremath{R}\ = 0.3. The upper sub-panel shows the
distributions for SE (red points)
and ME (shaded region), with the blue shaded region indicating the
range used for ME normalization. Error bars on SE distributions are
statistical. The lower sub-panel shows the ratio of
the SE and normalized ME distributions, while the insert shows the ratio in
the normalization region. See text for details.}
\label{fig:RawData_2_3}
\end{figure*}
Figures \ref{fig:RawData_2_3} and \ref{fig:RawData_4_5} show distributions of
the uncorrected recoil jet yield in Au+Au\ collisions projected onto \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}},
for \ensuremath{R}\ between 0.2 and 0.5. The upper sub-panels show the distributions separately for data (red points) and mixed-event background (shaded histogram). The lower sub-panels
are discussed below.
\begin{figure*}[htbp]
\includegraphics[width=0.45\textwidth]{c_jet_pt_sub_leading_pt_9_pm_0_cent_0_r_2_norm_1.pdf}
\includegraphics[width=0.45\textwidth]{c_jet_pt_sub_leading_pt_9_pm_0_cent_1_r_2_norm_1.pdf}
\includegraphics[width=0.45\textwidth]{c_jet_pt_sub_leading_pt_9_pm_0_cent_0_r_3_norm_1.pdf}
\includegraphics[width=0.45\textwidth]{c_jet_pt_sub_leading_pt_9_pm_0_cent_1_r_3_norm_1.pdf}
\caption{(Color online) Same as Fig. \ref{fig:RawData_2_3}, but for \ensuremath{R}\ = 0.4 and 0.5.}
\label{fig:RawData_4_5}
\end{figure*}
The number of jet candidates found in an event is necessarily bounded,
due to the area subtended by each jet candidate and by the total
experimental acceptance. Table \ref{Tab:Integral_fME} shows the integral over
\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}\ for the SE and ME distributions shown in Figs. \ref{fig:RawData_2_3}
and \ref{fig:RawData_4_5}. The integral is the average number of observed recoil
jet candidates per trigger hadron, including both correlated and uncorrelated
candidates. The integrals
decrease with increasing \ensuremath{R}, as expected since jets with larger \ensuremath{R}\ subtend
larger area. The integral values are larger for central than for
peripheral Au+Au\ collisions at the same \ensuremath{R}, corresponding to the larger jet
density in central collisions, as expected since peripheral collisions are more
sparsely populated.
The integrals of the SE and ME distributions in central Au+Au\
collisions agree to better than 1\% for each value of \ensuremath{R}. Invariance of such
integrals for event classes with differing jet-like correlations
has also been observed for high-multiplicity events in model
studies~\cite{deBarros:2012ws}, and in the analysis of Pb+Pb\
collisions at 2.76 TeV~\cite{Adam:2015doa}. At high multiplicity this integral,
like the \mbox{$\mathrm{A_{jet}}$}\ distribution, is evidently driven predominantly by
geometric factors, specifically the experimental acceptance,
characteristic jet size \ensuremath{R}, and the robustness of the shape of anti-\ensuremath{k_\mathrm{T}}\ jets in
the presence of background~\cite{FastJetAntikt}, but not by the presence of
multi-hadron correlations, whose contribution is different in different event
classes and is absent entirely in the ME population.
\begin{table*}
\caption{Integral of SE and ME distributions in Figs. \ref{fig:RawData_2_3} and \ref{fig:RawData_4_5}, together with the ME normalization factor \ensuremath{f^\mathrm{ME}}. The uncertainty of \ensuremath{f^\mathrm{ME}}\ is systematic.
\label{Tab:Integral_fME}}
\begin{tabular}{ |c|c|c|c|c| }
\hline
Au+Au\ centrality & \ensuremath{R} & \multicolumn{2}{|c|}{Integral} & \ensuremath{f^\mathrm{ME}} \\ \hline
\multicolumn{2}{|c|}{} & SE & ME & \\ \hline
\multirow{4}{*}{peripheral (60\%-80\%)} & 0.2 & 0.446 & 0.397 & $0.72 \pm 0.05$ \\
& 0.3 & 0.269 & 0.252 & $0.67 \pm 0.07$ \\
& 0.4 & 0.184 & 0.175 & $0.61 \pm 0.02$\\
& 0.5 & 0.094 & 0.089 & $0.49 \pm 0.07$ \\ \hline
\multirow{4}{*}{central (0\%-10\%)} & 0.2 & 1.26 & 1.26 & $0.86 \pm 0.01$ \\
& 0.3 & 0.392 & 0.391 & $0.85\pm 0.03$\\
& 0.4 & 0.228 & 0.227 & $0.80 \pm 0.03$ \\
& 0.5 & 0.119 & 0.119 & $0.80 \pm 0.02$ \\ \hline
\end{tabular}
\end{table*}
In each panel of Figs.
\ref{fig:RawData_2_3} and \ref{fig:RawData_4_5}, the shape of the ME distribution is very similar to that of the SE distribution in the
region $\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}<0$, where the yield is expected to arise predominantly from
uncorrelated background. The shapes differ significantly at large positive \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}},
where an appreciable contribution from correlated true jets is expected.
Additionally, the absolutely normalized ME distributions are observed to have larger
yield than the SE distributions in the region $\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}<0$, consistent with the smaller yield in ME
at large positive \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}\ and agreement of the SE and ME integrals within
better than 1\% for central collisions and within about 10\% for peripheral collisions.
These features have also been observed for high-multiplicity events in model studies~\cite{deBarros:2012ws} and in analysis of LHC data for Pb+Pb\ collisions~\cite{Adam:2015doa}.
In order to utilize the ME distribution to
determine the contribution of uncorrelated background in the SE
distribution, the absolutely normalized ME distribution is therefore scaled
downwards by a scalar factor \ensuremath{f^\mathrm{ME}}, determined by a fit in the blue
shaded regions in the upper sub-panels. The range in \ensuremath{p_\mathrm{T,jet}^\mathrm{corr,ch}}\ for determining
the central value of \ensuremath{f^\mathrm{ME}}\ is chosen as the left-most region of the
spectrum in
which the SE/ME yield ratio is uniform within 10\%. The lower sub-panels show
the SE/ME yield ratio after normalization by \ensuremath{f^\mathrm{ME}}, while the inserts show the
ratio in the fit region, also after normalization. Tab.~\ref{Tab:Integral_fME}
gives the values of \ensuremath{f^\mathrm{ME}}. The systematic uncertainty of \ensuremath{f^\mathrm{ME}}\ in
Tab.~\ref{Tab:Integral_fME} is determined by varying the normalization region.
For jets in central collisions and \ensuremath{R}\ = 0.5, the ratio of normalized ME and
SE distributions is within 10\% of unity in the region $-20<\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}<-5$ \ensuremath{\mathrm{GeV/}c},
over which the distributions themselves vary by two orders of magnitude
(Fig.~\ref{fig:RawData_4_5}, lower left).
Similarly good agreement of the shapes of the SE and ME distributions over a significant range in
\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}\ is observed for the other values of \ensuremath{R}. This good
agreement indicates that the normalized ME distributions represent the uncorrelated background accurately, and can therefore be used over the
full range of \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}\ for correction of uncorrelated background in the SE
distribution.
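The determination of \ensuremath{f^\mathrm{ME}}\ described above can be sketched as follows. This is only an illustration of the stated procedure, not the analysis code: the window-growing logic, tolerance handling, and function names are assumptions.

```python
import numpy as np

def fit_me_scale(se, me, edges, tol=0.10):
    """Illustrative f_ME determination: find the left-most contiguous
    set of bins in which the SE/ME yield ratio is uniform within `tol`,
    then take the scale factor matching ME to SE normalization there.

    se, me : per-bin yields of the same-event and absolutely
             normalized mixed-event distributions.
    edges  : bin edges in p_T,jet^{reco,ch} (used to report the range).

    Returns (f_me, lo, hi): scale factor and the fit range used.
    """
    ratio = se / me
    end = 1
    # grow the window from the left-most bin while every ratio in it
    # stays within `tol` of the window mean
    while end < len(ratio):
        window = ratio[:end + 1]
        if np.any(np.abs(window / window.mean() - 1.0) > tol):
            break
        end += 1
    # f_ME: yield-weighted SE/ME ratio over the accepted window
    f_me = se[:end].sum() / me[:end].sum()
    return f_me, edges[0], edges[end]
```

Varying the accepted window, as done here by construction, then gives the systematic uncertainty quoted for \ensuremath{f^\mathrm{ME}}\ in Tab.~\ref{Tab:Integral_fME}.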
For peripheral collisions, the SE distributions fall more rapidly in the region
$\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}<0$ and the ME distributions are overall much narrower than for central
collisions, as expected since the uncorrelated background level
is much lower. The width of the \ensuremath{f^\mathrm{ME}}\ normalization region is
correspondingly much narrower than for central collisions, with a weaker
constraint imposed on \ensuremath{f^\mathrm{ME}}. However, the precision required for \ensuremath{f^\mathrm{ME}}\ is much
reduced for peripheral collisions, precisely because of the much smaller
uncorrelated background contribution.
\begin{figure}[htbp]
\includegraphics[width=0.5\textwidth]{c_jet_pt_sub_leading_pt_3_pm_0_cent_0_r_1_norm_1.pdf}
\caption{(Color online) Same as Fig.~\ref{fig:RawData_2_3}, for central Au+Au\ collisions and \ensuremath{R}\ = 0.3. Upper panel: SE distribution is shown for two different ranges of \ensuremath{p_{\mathrm{T,trig}}}: $9<\ensuremath{p_{\mathrm{T,trig}}}<30$ \ensuremath{\mathrm{GeV/}c}\ (grey points), which is used in the primary analysis, and $3<\ensuremath{p_{\mathrm{T,trig}}}<30$ \ensuremath{\mathrm{GeV/}c}\ (red points). The ME distribution is the same as Fig.~\ref{fig:RawData_2_3}, lower left. Lower panel: ratio SE/ME for $9<\ensuremath{p_{\mathrm{T,trig}}}<30$ \ensuremath{\mathrm{GeV/}c}\ (grey points), and for $3<\ensuremath{p_{\mathrm{T,trig}}}<30$ \ensuremath{\mathrm{GeV/}c}\ with $\rho$ as defined in the primary analysis (red points) and $\rho$ shifted by 60 MeV/$c$ (dashed line). See text for details.
The insert shows the ratio for the $3<\ensuremath{p_{\mathrm{T,trig}}}<30$ \ensuremath{\mathrm{GeV/}c}\ SE distribution, in the region of \ensuremath{f^\mathrm{ME}}\ normalization.}
\label{fig:RawLowTrigpT}
\end{figure}
Figure \ref{fig:RawLowTrigpT} shows the uncorrected recoil jet distribution for
central Au+Au\ collisions and \ensuremath{R}\ = 0.3, for two different ranges in
\ensuremath{p_{\mathrm{T,trig}}}, $9<\ensuremath{p_{\mathrm{T,trig}}}<30$ \ensuremath{\mathrm{GeV/}c}\ and $3<\ensuremath{p_{\mathrm{T,trig}}}<30$ \ensuremath{\mathrm{GeV/}c}. The SE distribution for
the higher-\ensuremath{p_{\mathrm{T,trig}}}\ interval and the ME distribution are
the same as in Fig.~\ref{fig:RawData_2_3}, lower
left. Lower values of \ensuremath{p_{\mathrm{T,trig}}}\ are expected to select processes with
smaller $Q^2$ on average, and indeed are observed to generate a lower
rate of correlated recoil jets in both p+p\ and Pb+Pb\ collisions at LHC
energies~\cite{Adam:2015doa}. By measuring the SE distribution for different ranges of \ensuremath{p_{\mathrm{T,trig}}}, as in Fig. \ref{fig:RawLowTrigpT}, we therefore vary the rate of correlated jet yield in the recoil jet candidate population, while keeping the distribution of uncorrelated jet
candidates unchanged.
In Fig. \ref{fig:RawLowTrigpT}, upper panel, the ME distribution and the SE distribution with $3<\ensuremath{p_{\mathrm{T,trig}}}<30$ \ensuremath{\mathrm{GeV/}c}\ are very similar in the range
$-10<\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}<15$ \ensuremath{\mathrm{GeV/}c}, over which the distributions themselves vary
by more than five orders of magnitude. It is only in the region $\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}>20$ \ensuremath{\mathrm{GeV/}c}\ that this SE distribution exceeds the ME distribution by a significant factor, indicative of
a correlated recoil jet component whose yield, relative to that of all jet
candidates, is less than $10^{-6}$.
The SE distributions with different lower bound for \ensuremath{p_{\mathrm{T,trig}}}\ are likewise
similar in the region $-10<\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}<10$ \ensuremath{\mathrm{GeV/}c}, but differ for larger \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}, as
expected.
The good agreement of the ME distribution and both SE distributions for
negative and small positive \ensuremath{p_\mathrm{T,jet}^\mathrm{corr,ch}}\ confirms that the yield in this
region is dominated strongly by uncorrelated background. Their
ordering in magnitude at larger \ensuremath{p_\mathrm{T,jet}^\mathrm{corr,ch}}\ also shows that the SE distribution approaches the ME distribution as the lower bound of \ensuremath{p_{\mathrm{T,trig}}}\ is reduced towards zero.
Figure \ref{fig:RawLowTrigpT}, lower panel, shows ratios of the SE and
ME distributions for the two different trigger hadron \ensuremath{p_\mathrm{T}}\ ranges. The
distributions utilize the primary
analysis approach described in Sect.~\ref{sect:JetReco}, including the choices
specified there for determining the background density $\rho$
(Eq.~\ref{eq:rho}). The ratios
exhibit a variation of 20\%-30\% in the region $\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}<5$ \ensuremath{\mathrm{GeV/}c}. While this
variation is small relative to the several orders of magnitude over which the
distributions themselves vary in this range, it is nevertheless observable.
Variation in the ratio is related to the ambiguity in defining $\rho$ for
the SE and ME populations. In Sect.~\ref{sect:ME} we noted that the tails of the
$\rho$ distribution are slightly narrower for the ME than the SE population, by
less than 60 MeV/($c$ sr). To assess the influence of this difference, the red
dashed line in Fig.~\ref{fig:RawLowTrigpT}, lower panel, shows the ratio
of the SE and ME recoil jet distributions for $3<\ensuremath{p_{\mathrm{T,trig}}}<30$ \ensuremath{\mathrm{GeV/}c},
but with the value of $\rho$ for each event shifted systematically by 60 MeV/($c$
sr) as in Fig.~\ref{fig:rho}. In this
case, variation in the SE/ME recoil jet yield ratio is
reduced to less than 5\% for $\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}<15$ \ensuremath{\mathrm{GeV/}c}. The ratio increases
rapidly at larger \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}, due to significant correlated yield in the SE
distribution.
The influence of the slightly narrower $\rho$ distribution in the ME population
on correction of the recoil jet spectra was assessed by carrying out the full
analysis (described in the following sections) for representative cases, with
and without a 60
MeV/($c$ sr) shift in $\rho$. The resulting change in the fully corrected recoil
jet yield is significantly smaller than its systematic
uncertainties due to other sources. An effective shift in $\rho$ can
also arise from azimuthal anisotropy (\ensuremath{v_2}) of the trigger, which is considered
below. We therefore do not consider the effect of the narrower $\rho$
distribution in the ME population further in the analysis.
The ALICE Collaboration has measured semi-inclusive h+jet distributions
for Pb+Pb\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 2.76 TeV with a correction
procedure for uncorrelated background that utilizes the difference between
normalized recoil jet distributions for exclusive ranges of
\ensuremath{p_{\mathrm{T,trig}}}~\cite{deBarros:2012ws,Adam:2015doa}.
Compared to the current analysis, the ALICE analysis differs in its use of an SE
recoil jet distribution for a lower \ensuremath{p_{\mathrm{T,trig}}}\ interval, rather than the ME
distribution, to measure uncorrelated background. This approach results
in a different observable,
\ensuremath{\Delta_\mathrm{recoil}}~\cite{Adam:2015doa}, in which the small correlated component of the
lower threshold SE distribution is also removed by the subtraction. However, the
low-threshold SE and ME distributions in Fig. \ref{fig:RawLowTrigpT} are similar
in the current analysis, so that the difference between \ensuremath{\Delta_\mathrm{recoil}}\ calculated
with this choice of kinematics for the low-threshold SE and \ensuremath{Y\left(\pTjetch\right)}\ is expected
to be negligible. Direct comparison
of these related correction procedures will be explored in future analysis, with
larger data sets.
We note in addition that these two approaches differ in their treatment of
multiple partonic interactions (MPI). Background due to MPI arises when a
trigger hadron and a jet in the recoil acceptance are generated by two different,
incoherent high-$Q^2$ processes in the same collision. This background
is expected to be
independent of \ensuremath{\Delta\phi}, and to be larger in heavy ion than in p+p\
collisions. Since \ensuremath{\Delta_\mathrm{recoil}}\ is the difference of two SE distributions, which have the same MPI background by definition~\cite{Adam:2015doa}, the MPI background is removed from \ensuremath{\Delta_\mathrm{recoil}}\ by construction. In contrast, in the current analysis the event mixing procedure destroys all jet-like correlations, and the ME distribution does not contain an MPI component. However, comparison of the $3<\ensuremath{p_{\mathrm{T,trig}}}<30$ \ensuremath{\mathrm{GeV/}c}\ SE and the ME distribution in Fig. \ref{fig:RawLowTrigpT} shows that their difference, which contains the MPI background component, is negligible compared to the correlated yield for the SE $9<\ensuremath{p_{\mathrm{T,trig}}}<30$ \ensuremath{\mathrm{GeV/}c}\ distribution. Background due to MPI is therefore negligible in this measurement, and no correction for it is warranted in the analysis.
\begin{figure*}[htbp]
\includegraphics[width=0.8\textwidth]{Delta_phi_per.png}
\caption{(Color online) Upper panels: recoil jet distributions after mixed event subtraction
for peripheral
Au+Au\ STAR events (left)
and for p+p\ collisions generated by PYTHIA detector-level simulations (right).
Middle and lower panels: projections onto
\ensuremath{\Delta\varphi_\mathrm{trig,jet}}\ for two different ranges in \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}, indicated by
the blue and grey shaded areas in the upper plot. The projected distributions are
fitted with a function that is the sum of two Gaussian distributions, with
fit widths $\sigma_1$ and $\sigma_2$. The values of $\sigma_1$ and $\sigma_2$ are highly correlated, with negligible statistical error.}
\label{fig:Raw_Delta_phi_per}
\end{figure*}
The upper panels of Figure \ref{fig:Raw_Delta_phi_per} show distributions of the
background-subtracted recoil jet yield for \ensuremath{R}\ = 0.3 in peripheral Au+Au\
collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV
from STAR data, and in p+p\ events at \ensuremath{\sqrt{s}}\ = 200 GeV simulated with
PYTHIA at the detector level. The middle and lower panels show the projection
onto \ensuremath{\Delta\phi}\ for selected intervals in \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}. Correction of \ensuremath{\Phi\left(\ensuremath{\Delta\phi}\right)}\ for
uncorrelated background by subtraction of the ME distribution is discussed in
Sect.~\ref{sect:Acoplanarity}. No correction is carried out for the
effects of underlying event in the PYTHIA-generated p+p\ collision events.
\begin{figure*}[htbp]
\includegraphics[width=0.8\textwidth]{Delta_phi_cen.png}
\caption{(Color online) Same as Fig. \ref{fig:Raw_Delta_phi_per}, but for central Au+Au\ STAR
data (left) and detector-level PYTHIA simulations of p+p\ collisions at
\ensuremath{\sqrt{s}}\ = 200 GeV embedded into mixed events from central Au+Au\ STAR data at the track level (right).}
\label{fig:Raw_Delta_phi_cen}
\end{figure*}
Figure \ref{fig:Raw_Delta_phi_cen} shows the same distributions as in
Fig.~\ref{fig:Raw_Delta_phi_per}, but for central Au+Au\ STAR data with
background subtraction, and for PYTHIA-generated events at the
detector level for \ensuremath{\sqrt{s}}\ = 200 GeV p+p\ collisions embedded into central Au+Au\
STAR data at the track level.
The middle and lower panels of Figs.~\ref{fig:Raw_Delta_phi_per} and
\ref{fig:Raw_Delta_phi_cen} show fits to the \ensuremath{\Phi\left(\ensuremath{\Delta\phi}\right)}\ distributions with a
function that is the sum of two Gaussian distributions, both centered on
$\ensuremath{\Delta\phi}=\pi$, with fitted widths $\sigma_1$ and $\sigma_2$. The values of
$\sigma_1$ and $\sigma_2$ are correlated. The fit provides a
qualitative characterization of the azimuthal distributions. The widths of the
central peaks are seen to be similar in the
peripheral data and PYTHIA distributions, and in the central data and PYTHIA embedded in
central events. The recoil yield is
suppressed for both peripheral and central collisions relative to the yield
predicted by the PYTHIA calculation, with greater suppression for central
collisions. Quantitative analysis of these features is presented in Sect.
\ref{sect:Results}.
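The two-Gaussian characterization used above can be illustrated with a minimal fitter. The grid search below is only a sketch (the text specifies the functional form, not the fit procedure); the function names and the width grid are assumptions.

```python
import numpy as np

def double_gauss(dphi, a1, s1, a2, s2):
    """Sum of two Gaussians, both centred on dphi = pi."""
    x = dphi - np.pi
    return (a1 * np.exp(-0.5 * (x / s1) ** 2)
            + a2 * np.exp(-0.5 * (x / s2) ** 2))

def fit_double_gauss(dphi, y, widths):
    """Crude fit of y(dphi): grid-search the width pair
    (sigma1 < sigma2), solving the two amplitudes by linear least
    squares at each grid point.  Returns (a1, sigma1, a2, sigma2)."""
    best = None
    x = dphi - np.pi
    for s1 in widths:
        for s2 in widths:
            if s2 <= s1:
                continue  # keep sigma1 < sigma2 by convention
            basis = np.column_stack([np.exp(-0.5 * (x / s1) ** 2),
                                     np.exp(-0.5 * (x / s2) ** 2)])
            amps = np.linalg.lstsq(basis, y, rcond=None)[0]
            chi2 = float(np.sum((basis @ amps - y) ** 2))
            if best is None or chi2 < best[0]:
                best = (chi2, amps[0], s1, amps[1], s2)
    return best[1:]
```

The strong correlation of the two fitted widths noted in the figure captions is visible in such a fit: neighboring $(\sigma_1,\sigma_2)$ pairs give nearly degenerate $\chi^2$.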
\section{Results}
\label{sect:Results}
\subsection{Jet yield suppression}
\label{sect:YieldSuppress}
\begin{figure*}[htbp] %
\includegraphics[width=0.45\textwidth]{c_unfold_ratio_200GeV_R_02.png}
\includegraphics[width=0.45\textwidth]{c_unfold_ratio_200GeV_R_03.png}
\includegraphics[width=0.45\textwidth]{c_unfold_ratio_200GeV_R_04.png}
\includegraphics[width=0.45\textwidth]{c_unfold_ratio_200GeV_R_05.png}
\caption{(Color online) Fully corrected distributions of \ensuremath{Y\left(\pTjetch\right)} (upper panels)
and its ratio \ensuremath{I_\mathrm{CP}}\ (lower panels) for central and peripheral Au+Au\ collisions at
\ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV, for anti-\ensuremath{k_\mathrm{T}}\ jets with \ensuremath{R}\ = 0.2, 0.3, 0.4 and 0.5. The upper
panels also show \ensuremath{Y\left(\pTjetch\right)}\ for p+p\ collisions at
\ensuremath{\sqrt{s}}\ = 200 GeV, calculated using PYTHIA at the charged-particle level
and NLO pQCD transformed to the charged-particle level (Sect.~\ref{sect:pQCD}). The uncertainty of the NLO calculation is not shown.}
\label{fig:ICP}
\end{figure*}
The upper panels of Figure \ref{fig:ICP} show fully corrected distributions of \ensuremath{Y\left(\pTjetch\right)}\
for \ensuremath{R}\ = 0.2, 0.3, 0.4 and 0.5, in peripheral
and central Au+Au\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV. The lower panels show \ensuremath{I_\mathrm{CP}},
the ratio of \ensuremath{Y\left(\pTjetch\right)}\ in central to peripheral
distributions. The systematic
uncertainty of \ensuremath{I_\mathrm{CP}}\ takes into account the correlated uncertainties of
numerator and denominator.
The recoil jet yield in central collisions is strongly suppressed in the region $\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}>10$
\ensuremath{\mathrm{GeV/}c}\ for \ensuremath{R}\ between 0.2 and 0.5, with less suppression for
\ensuremath{R}\ = 0.5 than for \ensuremath{R}=0.2.
\begin{center}
\begin{table*}
\caption{Shift of \ensuremath{Y\left(\pTjetch\right)}\ in \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\ from peripheral to
central collisions in
Fig.~\ref{fig:ICP}. Statistical and systematic uncertainties are shown.
The systematic uncertainty takes
into account correlated uncertainties between the peripheral and central
distributions, in particular the tracking efficiency. Also shown is the equivalent
shift between p+p\ and central Pb+Pb\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 2.76 TeV~\cite{Adam:2015doa}. \label{Tab:pTjetShift}}
\begin{tabular}{ |c|c|c|c| }
\hline
\multicolumn{2}{|c|}{System} & Au+Au\ \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV & Pb+Pb\ \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 2.76
TeV \\
\hline
\multicolumn{2}{|c|}{\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\ range (\ensuremath{\mathrm{GeV/}c})} & [10,20] & [60,100] \\ \hline
\hline
\multicolumn{2}{|c|}{} & \multicolumn{2}{c|}{\ensuremath{p_\mathrm{T}}-shift of \ensuremath{Y\left(\pTjetch\right)}\ (\ensuremath{\mathrm{GeV/}c})} \\
\cline{3-4}
\multicolumn{2}{|c|}{} & peripheral$\rightarrow$central & p+p$\rightarrow$central \\
\hline
\multirow{4}{*}{\ensuremath{R}} & 0.2 & $-4.4\pm0.2\pm1.2$ & \\ \cline{2-4}
& 0.3 & $-5.0\pm0.5\pm1.2$ & \\ \cline{2-4}
& 0.4 & $-5.1\pm0.5\pm1.2$ & \\ \cline{2-4}
& 0.5 & $-2.8\pm0.2\pm1.5$ & $-8\pm2$ \\ \hline
\end{tabular}
\end{table*}
\end{center}
The upper panels also show \ensuremath{Y\left(\pTjetch\right)}\ distributions for p+p\
collisions at \ensuremath{\sqrt{s}}\ = 200 GeV, calculated by PYTHIA and by pQCD at NLO
transformed to charged jets (Sect.~\ref{sect:pQCD}). The uncertainty of the NLO calculation (Fig.~\ref{fig:NLO}) is not shown, for visual clarity.
The central value of the PYTHIA-generated
distribution lies about 20\% above the peripheral Au+Au\
distribution for all values of \ensuremath{R}. The NLO-generated distribution lies yet
higher for \ensuremath{R}\ = 0.2, but agrees better with PYTHIA for \ensuremath{R}\ = 0.5. A similar
comparison was carried out for p+p\ collisions at \ensuremath{\sqrt{s}}\ = 7
TeV, with PYTHIA found to agree better than NLO with data~\cite{Adam:2015doa}.
\begin{figure*}[htbp]
\includegraphics[width=0.48\textwidth]{c_radii_ratio_200GeV_R1_02_R2_05C_60_80.png}
\includegraphics[width=0.48\textwidth]{c_radii_ratio_200GeV_R1_02_R2_05C_0_10.png}
\caption{(Color online) Distributions of \ensuremath{Y\left(\pTjetch\right)}\ for \ensuremath{R}\ = 0.2 and 0.5 (upper
panels)
and their ratios (lower panels) in peripheral (left) and central (right) Au+Au\
collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV.}
\label{fig:Ratio_2_5}
\end{figure*}
Since the shape of the \ensuremath{Y\left(\pTjetch\right)}\ distributions is approximately exponential,
suppression of \ensuremath{I_\mathrm{CP}}\ in a range of \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\ in which \ensuremath{I_\mathrm{CP}}\ is constant can be expressed
equivalently as a shift of \ensuremath{Y\left(\pTjetch\right)}\ in \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\ between the peripheral and
central distributions. Tab.~\ref{Tab:pTjetShift} gives values of the shift
for the distributions in Fig.~\ref{fig:ICP}, together with the
shift measured for \ensuremath{R}\ = 0.5 between p+p\ and central Pb+Pb\
collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 2.76 TeV. The peripheral-central shifts are
consistent within uncertainties for the various \ensuremath{R}\ in Au+Au\ collisions at
\ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV, and are systematically smaller than the p+p\ to central Pb+Pb\
shift measured
at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 2.76 TeV.
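For an idealized, exactly exponential spectrum this equivalence can be made
explicit. Writing $Y^{\mathrm{per}}\!\left(\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\right)\propto e^{-\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}/b}$, where $b$
is the local inverse slope, a horizontal shift $s$ of the spectrum corresponds to
\[
\ensuremath{I_\mathrm{CP}}
=\frac{Y^{\mathrm{cen}}\!\left(\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\right)}{Y^{\mathrm{per}}\!\left(\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\right)}
=\frac{Y^{\mathrm{per}}\!\left(\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}-s\right)}{Y^{\mathrm{per}}\!\left(\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\right)}
=e^{s/b},
\]
which is independent of \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}; a negative shift $s$ is thus equivalent to a
constant $\ensuremath{I_\mathrm{CP}}<1$. This relation assumes a purely exponential shape and is given
here only to make the stated equivalence concrete.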
\begin{figure}[htbp]
\includegraphics[width=7.5cm]{Delta_phi_V5.png}
\caption{(Color online) Distribution of \ensuremath{\Phi\left(\ensuremath{\Delta\phi}\right)}\ at \ensuremath{\sqrt{s}}\ =
200 GeV, for Au+Au\ collisions measured by STAR and p+p\ collisions generated by PYTHIA (detector level). Vertical dashed lines show
limits of integration for \ensuremath{Y\left(\pTjetch\right)}. Top panel: peripheral Au+Au\
compared to p+p. Blue dashed curve shows PYTHIA
distribution scaled to have the same integral as data between the vertical dashed lines.
Middle panel: central Au+Au\ compared
to p+p\ detector-level events embedded
into central Au+Au\
mixed events. Shaded bands show systematic uncertainty due to
mixed-event normalization. Bottom panel: same as middle panel, but with PYTHIA
distribution scaled to have the same integral as data between the vertical dashed lines.}
\label{fig:dphi}
\end{figure}
In light of the low infrared cutoff of jet constituents
in this analysis (track $\ensuremath{p_\mathrm{T}}>0.2$ \ensuremath{\mathrm{GeV/}c}), we interpret the shift as the
charged-particle energy
transported to angles larger than \ensuremath{R}\ by interaction of the jet with the
medium, averaged over the recoil jet population.
In this interpretation, the spectrum shift represents the average out-of-cone
partonic energy loss for central relative to peripheral
collisions. Table~\ref{Tab:pTjetShift} presents the first quantitative
comparison of the quenching of reconstructed jets at RHIC and the LHC, indicating
reduced medium-induced energy transport to large angles at RHIC,
though the different ranges in \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\ and the different reference spectra (p+p\ vs. peripheral) should be noted.
\subsection{Modification of jet shape}
\label{sect:IntraJet}
The ratio of inclusive jet cross sections with small \ensuremath{R}\ relative to large \ensuremath{R}\
has been measured to be less than unity in p+p\ collisions at \ensuremath{\sqrt{s}}\ = 2.76 and
7
TeV~\cite{Abelev:2013fn,Chatrchyan:2014gia},
reflecting the distribution of jet energy transverse to the jet axis.
These measurements are well-described by pQCD
calculations at NLO and NNLO~\cite{Soyez:2011np,Dasgupta:2016bnd}.
Inclusive measurements of small-radius jets are also well-described by an approach based on soft collinear effective theory \cite{Kang:2017frl}.
The ratio of semi-inclusive recoil jet yields with small relative to large \ensuremath{R}\
is likewise less than unity in
p+p\ collisions at \ensuremath{\sqrt{s}}\ = 7 TeV~\cite{Adam:2015doa}, exhibiting sensitivity
to
the transverse distribution of jet energy in the recoil jet population.
PYTHIA provides a better description than NLO of this
ratio~\cite{Adam:2015doa,deFlorian:2009fw}.
A jet quenching calculation using a hybrid weak/strong-coupling
approach indicates that the ratio of (semi-)inclusive yields with different
values of \ensuremath{R}\ has smaller theoretical uncertainties than other jet shape
observables~\cite{Casalderrey-Solana:2016jvj}.
The \ensuremath{R}-dependent ratios of inclusive jet cross sections and semi-inclusive jet
yields therefore
provide discriminating jet shape observables that can be calculated
theoretically
for p+p\ collisions, and that provide sensitive probes of medium-induced
broadening of the jet shower. We note that this approach to measuring jet shapes differs from the differential jet shape observable employed by CMS to measure medium-induced modification of jet shapes in Pb+Pb\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}=2.76 TeV~\cite{Chatrchyan:2013kwa}.
Figure \ref{fig:Ratio_2_5} shows distributions of
\ensuremath{Y\left(\pTjetch\right)}\ for \ensuremath{R}\ = 0.2 and 0.5, for peripheral and central
Au+Au\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV. Their ratio, shown in the lower panels,
is less than
unity, also reflecting the intra-jet distribution of
energy transverse to the jet axis. Comparison of the distributions for peripheral and central collisions measures
medium-induced broadening of the jet shower at angles between 0.2 and 0.5 rad from the recoil
jet axis. For quantitative comparison, we again express the change in \ensuremath{Y\left(\pTjetch\right)}\
between \ensuremath{R}\ = 0.2 and 0.5 as a horizontal shift of the spectra.
In the range $10<\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}<20$ \ensuremath{\mathrm{GeV/}c}, the \ensuremath{p_\mathrm{T}}-shift in \ensuremath{Y\left(\pTjetch\right)}\
from \ensuremath{R}\ = 0.2 to \ensuremath{R}\ = 0.5 is $2.9\pm{0.4\mathrm{(stat)}}\pm{1.9\mathrm{(sys)}}$ \ensuremath{\mathrm{GeV/}c}\ in
peripheral collisions and $5.0\pm{0.5\mathrm{(stat)}}\pm{2.3\mathrm{(sys)}}$ \ensuremath{\mathrm{GeV/}c}\ in central
collisions, which are consistent within uncertainties. From this measurement
we find no evidence of broadening of the jet shower due to jet quenching. A
similar picture was obtained for Pb+Pb\ collisions at the LHC~\cite{Adam:2015doa}.
\subsection{Medium-induced acoplanarity}
\label{sect:Acoplanarity}
In this section we discuss the measurements of \ensuremath{\Phi\left(\ensuremath{\Delta\phi}\right)}\ (Eq.~\ref{eq:hJetPhi}), the
azimuthal distribution of the recoil jet centroid relative to the
axis of the trigger hadron. In p+p\ collisions, the azimuthal distribution of back-to-back di-jet pairs is peaked at
$\ensuremath{\Delta\phi}\sim\pi$, with initial-state and final-state radiative processes
generating acoplanarity that
broadens the \ensuremath{\Delta\phi}\ distribution. In nuclear collisions, additional acoplanarity may
be induced by jet interactions in hot QCD matter~\cite{D'Eramo:2012jh,Wang:2013cia,Mueller:2016gko,Chen:2016vem,Casalderrey-Solana:2016jvj}, with magnitude
related to the jet transport parameter
\ensuremath{\hat{q}}~\cite{Mueller:2016gko,Chen:2016vem,Casalderrey-Solana:2016jvj}. Acoplanarity from vacuum
radiation grows with both jet energy and \ensuremath{\sqrt{s}}, so that low energy jets may
have greatest sensitivity to \ensuremath{\hat{q}}~\cite{Chen:2016vem,Casalderrey-Solana:2016jvj}. The \ensuremath{R}\ dependence of
acoplanarity may probe the distribution of both vacuum and medium-induced gluon radiation within the jet shower~\cite{Chen:2016vem}, and may also probe different quenching effects for initially narrow or wide jets~\cite{Casalderrey-Solana:2016jvj}.
Scattering of a jet off quasi-particles in the hot QCD medium is conjectured
to dominate
the azimuthal distribution at large angles from the trigger axis
(QCD Moli{\`e}re scattering), with radiative processes and
soft multiple scattering making smaller
contributions in that region~\cite{D'Eramo:2012jh}.
Measurement of jet acoplanarity at large angles can potentially discriminate
between a
medium with distinct quasi-particles and one that is effectively continuous at
the length scale being probed by the scattering~\cite{D'Eramo:2012jh}. It is
important
to perform such large-angle scattering measurements over a large
range of jet energy, which varies the length scale of the probe. Such
measurements can only be carried out using reconstructed jets recoiling from a
trigger object; observables based on the distribution of single recoil
hadrons convolve the effects of intra-jet broadening and scattering of the
parent, and cannot discriminate between the two processes.
We note that the trigger hadron, with $\ensuremath{p_{\mathrm{T,trig}}}>9$ \ensuremath{\mathrm{GeV/}c}, most likely
arises from fragmentation of a jet, but that the direction of such a trigger hadron
and its parent jet centroid are not necessarily coincident. In order to quantify the
difference, the correlation between the axis defined by jet centroid and the
direction of the leading hadron in the jet was studied using PYTHIA-generated events for p+p\ collisions at \ensuremath{\sqrt{s}}\ = 200 GeV. The distribution of the angular difference between jet
centroid and leading hadron has RMS = 10 mrad for hadrons with $\ensuremath{p_\mathrm{T}}>9$ \ensuremath{\mathrm{GeV/}c}\ and jets with \ensuremath{R}\ = 0.3.
Since high-\ensuremath{p_\mathrm{T}}\ hadrons in Au+Au\ collisions are expected to be biased towards jets that have lost
relatively little energy due to quenching~\cite{Baier:2002tc}, we expect a similar correlation in
central Au+Au\ collisions. The trigger hadron direction in this analysis
therefore corresponds closely to the axis of the jet that generates it.
In order to measure the distribution of \ensuremath{\Phi\left(\ensuremath{\Delta\phi}\right)}, the contribution of
uncorrelated background must be removed
from the raw \ensuremath{\Delta\phi}\ distribution. As in the \ensuremath{Y\left(\pTjetch\right)}\ analysis, this correction is
carried out by subtracting the scaled ME distribution from the SE distribution.
However, to correct \ensuremath{\Phi\left(\ensuremath{\Delta\phi}\right)}\ we utilize an ME scaling factor that is determined
separately for each
bin in \ensuremath{\Delta\phi}, rather than applying \ensuremath{f^\mathrm{ME}}\ (Tab.~\ref{Tab:Integral_fME}), which
is the scale factor averaged over the
\ensuremath{\Delta\phi}\ range of the recoil acceptance for \ensuremath{Y\left(\pTjetch\right)}. This modified
procedure is used because the ME scale factor depends upon the interplay between
conservation of total jet number and the enhanced yield at large positive
\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}\ for the SE distribution relative to ME. At large angles to the
trigger axis the SE enhancement is small, and the ME scale factor approaches
unity in that region. By utilizing a \ensuremath{\Delta\phi}-dependent scaling of the ME
distribution we track this effect
accurately, resulting in an accurate ME normalization for correction of
uncorrelated background yield.
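The bin-wise subtraction described above can be sketched as follows. This is an illustrative numpy snippet only: the function and array names, and the choice of a background-dominated \ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}\ region for determining the per-bin scale factor, are our assumptions, not the exact STAR prescription.

```python
import numpy as np

def subtract_me(se, me, norm_slice):
    """Per-dphi-bin ME subtraction sketch.

    se, me: 2D arrays of counts with shape (n_dphi_bins, n_preco_bins).
    norm_slice: slice over a background-dominated preco region, used to
    fix the ME scale factor separately in each dphi bin (rather than one
    factor averaged over the recoil acceptance).
    """
    corrected = np.empty_like(se, dtype=float)
    for i in range(se.shape[0]):
        # bin-wise scale factor; approaches unity where the SE enhancement is small
        f_me = se[i, norm_slice].sum() / me[i, norm_slice].sum()
        corrected[i] = se[i] - f_me * me[i]
    return corrected
```

In the normalization region the corrected yield vanishes by construction, which is the behavior required of an uncorrelated-background estimate.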
Figure \ref{fig:dphi} shows \ensuremath{\Phi\left(\ensuremath{\Delta\phi}\right)}\ distributions
for \ensuremath{R}\ = 0.3 and $9<\ensuremath{p_\mathrm{T,jet}^\mathrm{reco,ch}}<13$ \ensuremath{\mathrm{GeV/}c}\ measured in peripheral and central
Au+Au\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV, compared to \ensuremath{\Phi\left(\ensuremath{\Delta\phi}\right)}\
distributions for p+p\ collisions at \ensuremath{\sqrt{s}}\ = 200 GeV generated by PYTHIA. The
data are the same as those in Figs.
\ref{fig:Raw_Delta_phi_per} and \ref{fig:Raw_Delta_phi_cen}.
The data are corrected for uncorrelated background yield using ME subtraction,
but no correction is applied for instrumental response or uncorrelated
background fluctuations. Rather, for comparison to data, the PYTHIA p+p\
distribution is used at the detector level, which incorporates the effects of
instrumental response. In addition, for comparison to the central Au+Au\ data,
the effects of
uncorrelated background fluctuations are imposed
by embedding the p+p\ events generated by PYTHIA at the detector level
into Au+Au\ mixed events. These reference events based on PYTHIA are analyzed in the same way as real data; in particular, the effect of correlated recoil jets on the calculation of $\rho$ is the same as that in real data analysis.
The top and middle panels of Figure \ref{fig:dphi} compare absolutely
normalized \ensuremath{\Phi\left(\ensuremath{\Delta\phi}\right)}\ distributions for Au+Au\ and p+p. The yield for the
PYTHIA-generated p+p\ distribution in this
region is significantly larger than that of the Au+Au\ data for both peripheral
and central collisions, with larger difference for central collisions. This is
in
qualitative agreement
with Fig.~\ref{fig:ICP}, though quantitative comparison is not possible because
these data are not fully corrected.
For detailed comparison of the shape of the central peaks of the
\ensuremath{\Phi\left(\ensuremath{\Delta\phi}\right)}\ distributions, we scale the PYTHIA-generated p+p\ distributions to have
the same integrated yield as the data in the range $|\pi-\ensuremath{\Delta\phi}|<\pi/4$.
The top panel of Figure \ref{fig:dphi} shows scaled p+p\ compared to peripheral
Au+Au, which agree well. The bottom panel shows the
scaled embedded p+p\ and central Au+Au\ distributions, indicating a slightly
broader central peak in data. A recent calculation suggests that such
comparisons may be used to constrain $\langle\ensuremath{\hat{q}}\cdot{L}\rangle$, where \ensuremath{\hat{q}}\
is the jet transport parameter and
$L$ is the in-medium path length~\cite{Chen:2016vem}. However, quantitative
comparison of such measurements and calculations requires correction of the data for
instrumental and background fluctuation effects, which requires higher
statistical precision than the data presented here and is beyond the scope of
the current analysis.
Finally, we turn to the search for large-angle Moli{\`e}re
scattering in the hot QCD medium~\cite{D'Eramo:2012jh}. Absolutely
normalized \ensuremath{\Phi\left(\ensuremath{\Delta\phi}\right)}\
distributions are required for this measurement. We focus on the \ensuremath{\Phi\left(\ensuremath{\Delta\phi}\right)}\
distribution at large angles
relative to the trigger axis, in the range $|\pi-\ensuremath{\Delta\phi}|>0.56$.
Fig.~\ref{fig:dphi}, upper panel, shows no
significant yield in this range for either peripheral
Au+Au\ events or PYTHIA-generated p+p\ events.
The inset in the middle panel shows the \ensuremath{\Phi\left(\ensuremath{\Delta\phi}\right)}\
distribution in this range for central
Au+Au\ collisions and PYTHIA-generated p+p\ events embedded into central Au+Au\
mixed events. Both distributions have
non-zero yield and are consistent with each other within the uncertainty band.
We therefore do not observe significant evidence for
large-angle Moli{\`e}re scattering in central Au+Au\ collisions. A similar measurement by the ALICE Collaboration for
Pb+Pb\ collisions
at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 2.76 TeV likewise found no evidence for large-angle Moli{\`e}re
scattering in nuclear collisions at the LHC~\cite{Adam:2015doa}.
The comparison of central Au+Au\ and embedded p+p\ distributions can however be
used to establish a limit on the magnitude of large-angle scattering, under two
assumptions. The first assumption
is that PYTHIA provides an accurate reference distribution. The second
assumption, which we make for simplicity, is that the distribution of excess
yield from large-angle scattering is a constant fraction of the p+p\ reference
yield, independent of \ensuremath{\Delta\phi}\ for $|\pi-\ensuremath{\Delta\phi}|>0.56$. We then form the
ratio
of the central Au+Au\ yield over that for PYTHIA-generated and
embedded p+p\ collisions. No scaling of the p+p\ distribution is applied, since
this measurement requires absolutely normalized distributions. This ratio is
indeed
independent of \ensuremath{\Delta\phi}\ within uncertainties, consistent with the second
assumption. Averaged over the eight data points shown in the inset of Fig.~\ref{fig:dphi}, the ratio is measured to be
$1.2\pm0.2\mathrm{(stat)}\pm0.3\mathrm{(sys)}$. In order to express
this measurement as a limit, we
consider only the statistical error to be Gaussian-distributed, and cite the
systematic uncertainty separately. At 90\% statistical confidence
level (one-sided), the excess yield due to medium-induced large-angle scattering
is less than
$50\pm30\mathrm{(sys)\%}$ of the large-angle yield for p+p\ collisions
predicted by PYTHIA.
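The arithmetic behind the one-sided limit can be sketched as follows. This is our illustrative reconstruction, assuming a Gaussian statistical error and the standard one-sided 90\% quantile; it reproduces the quoted limit only up to rounding of the published inputs.

```python
# Measured ratio of central Au+Au over embedded p+p large-angle yields,
# averaged over the eight data points: 1.2 +- 0.2 (stat) +- 0.3 (sys).
# Only the statistical error is treated as Gaussian-distributed.
z90 = 1.2816  # one-sided 90% Gaussian quantile (norm.ppf(0.90))

ratio, stat = 1.2, 0.2
excess = ratio - 1.0            # excess yield relative to the p+p reference
limit = excess + z90 * stat     # one-sided 90% CL upper limit on the excess
print(round(limit, 2))          # prints 0.46, i.e. roughly the quoted ~50%
```

The small difference from the quoted 50\% presumably reflects rounding of the published central value and statistical error.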
Future measurements, based on larger Au+Au\ data sets, will reduce the
statistical error and systematic uncertainty of this measurement. The two
assumptions used in the analysis can be relaxed by measurement of the reference
distribution in p+p\ collisions, and by theoretical calculations of the expected
distribution.
\section{Summary}
\label{sect:Summary}
We have reported the measurement of jet quenching in peripheral and central
Au+Au\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 200 GeV, based on the semi-inclusive distribution
of reconstructed charged jets
recoiling from a high-\ensuremath{p_\mathrm{T}}\ trigger hadron. Jets were reconstructed with a low
infrared cutoff for constituents, $\ensuremath{p_\mathrm{T}}>0.2$ \ensuremath{\mathrm{GeV/}c}. Uncorrelated background was
corrected at the level of ensemble-averaged distributions using a new
event-mixing method. Comparison is made to similar distributions for p+p\
collisions
at \ensuremath{\sqrt{s}}\ = 200 GeV, calculated using PYTHIA and NLO pQCD, and to similar
measurements for Pb+Pb\ collisions at \ensuremath{\sqrt{s_{\mathrm {NN}}}}\ = 2.76 TeV.
The recoil jet yield is suppressed in central Au+Au\ collisions for
jet radii \ensuremath{R}\ between 0.2 and 0.5. Taking into account the low IR-cutoff for
jet
constituents, the suppression corresponds to medium-induced energy transport to
large angles relative to the jet axis of $\sim3-5$ \ensuremath{\mathrm{GeV/}c}, smaller than that
measured
for central Pb+Pb\ collisions at the LHC. Comparison of recoil jet yields for
different \ensuremath{R}\
exhibits no evidence of significant intra-jet broadening within an angle of 0.5
relative to the jet axis.
Yield excess in the tail of the recoil jet azimuthal distribution would
indicate large-angle jet scattering in
the medium, which could probe its quasi-particle nature. However, no evidence
for such a process is seen within the current experimental precision.
The 90\% statistical confidence upper limit from this measurement for the excess jet
yield at large deflection angles is $50\pm30\mathrm{(sys)\%}$ of the large-angle yield in
PYTHIA-generated p+p\ events. This is the first quantitative limit on
large-angle Moli{\`e}re scattering of jets in heavy ion collisions at RHIC.
Future measurements, based on data sets with high integrated luminosity and
incorporating the
STAR electromagnetic calorimeter, will explore these observables with greater
statistical and systematic precision and with greater kinematic reach, providing
further quantification of
jet quenching effects and clarification of their underlying mechanisms.
\section{Systematic Uncertainties}
\label{sect:SysUncert}
Systematic uncertainties arise from the corrections for instrumental response and
uncorrelated background, and from the different algorithmic choices in the unfolding
procedure. This section discusses the significant systematic uncertainties, with representative
values given in Tab.~\ref{Tab:SysUncert}.
\subsection{Instrumental response}
\label{sect:SysUncertInstr}
The systematic uncertainty due to track reconstruction efficiency is determined
by varying the efficiency by $\pm$5\% relative to its
central value (Sect. \ref{sect:Rdet}). This variation generates a shift
in \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}, corresponding to variation in yield at fixed \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\ of less than 10\% for
all \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}, in both central and peripheral Au+Au\ collisions. Variation of other instrumental
response corrections, including track \ensuremath{p_\mathrm{T}}\
resolution and the contribution of secondary decays, generate smaller systematic
uncertainties. The systematic uncertainty due to instrumental effects is labeled ``Instr" in
Tab.~\ref{Tab:SysUncert}.
\subsection{Mixed events}
\label{sect:SysUncertME}
Correction for uncorrelated background by subtraction of the ME from the SE
distribution requires normalization of the ME distribution by the factor \ensuremath{f^\mathrm{ME}}\ (Tab.~\ref{Tab:Integral_fME}).
Variation of the normalization region for determining \ensuremath{f^\mathrm{ME}}\
results in a systematic uncertainty in corrected recoil jet yield of less than 10\%
(``ME norm" in Tab.~\ref{Tab:SysUncert}).
The track population used to generate the ME data set includes
high-\ensuremath{p_\mathrm{T}}\ tracks that arise predominantly from the fragmentation of jets, and
their inclusion means that not all jet-specific structure has been removed from
the ME distributions. In
order to assess the importance of this contribution, the ME events
were modified to remove all tracks with $\ensuremath{p_\mathrm{T}}>3$
\ensuremath{\mathrm{GeV/}c}\ and the analysis was repeated. No significant change in the
distribution of reconstructed jets was observed from this modification.
\subsection{\ensuremath{\delta{\pT}}}
\label{sect:dpT}
The probability distribution of \ensuremath{\delta{\pT}}, which represents the fluctuations in uncorrelated background energy, was varied
by using different models for embedded jets: single hadrons with the full jet
energy, distributed either uniformly in azimuth or with anisotropic azimuthal
distribution relative to the EP corresponding to \ensuremath{v_2}\ of the trigger
hadron~\cite{Adare:2014bga}, or PYTHIA-simulated jets at the particle level with
uniform azimuthal distribution. This variation of the \ensuremath{\delta{\pT}}\ distribution
generates a systematic uncertainty in corrected jet yield of up to 19\% for
central Au+Au\ collisions (``\ensuremath{\delta{\pT}}" in Tab.~\ref{Tab:SysUncert}).
\subsection{Unfolding}
\label{sect:SysUncertUnfold}
Systematic uncertainty due to the unfolding procedure was determined by
varying the choice of unfolding algorithm, choice of prior, and regularization cutoff.
Two different unfolding algorithms were used: iterative Bayesian and SVD.
Two different functional forms of the prior were used: the recoil jet distribution for
p+p\ collisions at \ensuremath{\sqrt{s}}\ = 200 GeV, calculated by PYTHIA, and a parameterized Levy distribution,
\begin{equation}
f(p_{T},T,n) = \frac{p_{T}B}{[1+(\sqrt{p_{T}^{2}+m_{\pi}^{2}}-m_{\pi})/(nT)]^{n}}
\label{func:Levy}
\end{equation}
\noindent
The parameters $T$ and $n$, which determine the spectrum shape at low and high \ensuremath{p_\mathrm{T}}\ respectively, were varied independently but constrained to $0.6<T<1.5$ GeV and $6<n<7$. These parameter ranges generate priors whose shapes bracket the resulting unfolded solutions, indicating convergence of the unfolding procedure.
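The Levy prior above can be written as a short function. This is a minimal sketch: the overall normalization $B$ and the use of the charged-pion mass for $m_{\pi}$ are our assumptions for illustration.

```python
import math

M_PI = 0.1396  # charged-pion mass in GeV/c^2 (assumed for m_pi in the formula)

def levy(pt, T, n, B=1.0):
    """Parameterized Levy distribution used as an unfolding prior.

    T controls the low-pT shape and n the high-pT power-law tail;
    in the analysis they are constrained to 0.6 < T < 1.5 GeV and 6 < n < 7.
    """
    mt = math.sqrt(pt**2 + M_PI**2)  # transverse mass
    return pt * B / (1.0 + (mt - M_PI) / (n * T)) ** n
```

Larger $n$ steepens the high-\ensuremath{p_\mathrm{T}}\ tail and larger $T$ hardens the spectrum, so scanning the two parameters independently brackets the plausible prior shapes.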
For iterative Bayesian unfolding, the regularization limit on the number of iterations was varied from 1 to 5. For SVD
unfolding, regularization is imposed by truncating the number of terms in the series expansion, which was varied from 2 to 5.
The systematic uncertainty in corrected recoil jet yield resulting from these variations in unfolding procedure is \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}-dependent, and is labeled ``Unfold" in Tab.~\ref{Tab:SysUncert}.
\subsection{Cumulative uncertainties}
\label{sect:SysUncertCumulative}
There is a complex interplay between the various components of the correction
procedure.
To determine the cumulative systematic uncertainty, each of the components was
varied independently, thereby sampling the parameter space of corrections. The
unfolding process was carried out multiple times, varying the
choices for tracking
efficiency, ME normalization, \ensuremath{\delta{\pT}}\ algorithm, unfolding
algorithm, prior, and regularization cutoff.
For each specific set of choices, convergence
of the unfolded distribution was evaluated by convoluting it
with the same set of corrections (``backfolding") and comparing the result to
the initial raw
distribution using a $\chi^{2}$ test. The errors used to calculate $\chi^{2}$
are the diagonal elements of the covariance matrix from the
unfolding procedure. The off-diagonal covariance elements, representing the
correlation between bins, were not considered in this test.
A set of choices was accepted if the comparison had $\chi^{2}/\mathrm{nDOF}$
less than a threshold which varied between 1.8 and 6.5, depending upon jet
radius
and collision centrality. For SVD unfolding, if an unfolded spectrum
with regularization parameter $k$ was accepted, variations with the same prior but larger value of $k$ were rejected.
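The backfolding acceptance test can be sketched as follows. This is an illustrative snippet, not the STAR implementation: the toy response matrix, the use of the raw counts as diagonal variances, and the single fixed threshold are our assumptions.

```python
import numpy as np

def accept(unfolded, response, raw, var_diag, threshold):
    """Backfolding test: fold the unfolded candidate through the response and
    compare to the raw distribution via chi^2/nDOF, using only the diagonal
    covariance elements (off-diagonal correlations are ignored, as in the text)."""
    refolded = response @ unfolded
    chi2 = np.sum((refolded - raw) ** 2 / var_diag)
    return chi2 / len(raw) < threshold

# Toy example: a diagonal-dominant response and a steeply falling truth spectrum.
resp = np.array([[0.8, 0.2, 0.0],
                 [0.1, 0.8, 0.1],
                 [0.0, 0.2, 0.8]])
truth = np.array([100.0, 50.0, 10.0])
raw = resp @ truth  # "measured" spectrum with perfect closure

assert accept(truth, resp, raw, var_diag=raw, threshold=1.8)
```

A candidate solution whose backfolded spectrum deviates strongly from the raw distribution fails the $\chi^{2}/\mathrm{nDOF}$ cut and is rejected from the envelope.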
\begin{center}
\begin{table*}
\caption{Representative values for components of the cumulative systematic
uncertainty in corrected recoil jet yield for \ensuremath{R}\ = 0.2 and 0.5 in central and
peripheral Au+Au\ collisions, for various ranges in \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}.
See text for details. \label{Tab:SysUncert}}
\begin{tabular}{ |c|c|c|c|c|c|c||c| }
\hline
\multicolumn{3}{|c|}{} & \multicolumn{5}{c|}{Systematic uncertainty (\%)} \\
\hline
\ensuremath{R} & \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}\ range [\ensuremath{\mathrm{GeV/}c}] & centrality & Instr & ME norm & \ensuremath{\delta{\pT}} & Unfold & Cumulative\\
\hline
\multirow{6}{*}{0.2} & \multirow{2}{*}{[5,10]} & peripheral (60\%-80\%) & 4 & 2 & 1 & 6 & 10 \\
& & central (0\%-10\%) & 7 & 10 & 19 & 41 & 47 \\ \cline{2-8}
& \multirow{2}{*}{[10,20]} & peripheral (60\%-80\%) & 6 & 2 & 2 & 12 & 18 \\
& & central (0\%-10\%) & 7 & 5 & 10 & 31 & 36 \\ \cline{2-8}
& \multirow{2}{*}{[20,25]} & peripheral (60\%-80\%) & 11 & 8 & 6 & 25 & 33 \\
& & central (0\%-10\%) & 10 & 7 & 16 & 47 & 49 \\ \hline
\multirow{6}{*}{0.5} & \multirow{2}{*}{[5,10]} & peripheral (60\%-80\%) & 4 & 3 & 4 & 22 & 23 \\
& & central (0\%-10\%) & 6 & 5 & 3 & 21 & 27 \\ \cline{2-8}
& \multirow{2}{*}{[10,20]} & peripheral (60\%-80\%) & 7 & 1 & 4 & 31 & 35 \\
& & central (0\%-10\%) & 4 & 2 & 7 & 28 & 34 \\ \cline{2-8}
& \multirow{2}{*}{[20,25]} & peripheral (60\%-80\%) & 9 & 3 & 5 & 29 & 35 \\
& & central (0\%-10\%) & 8 & 1 & 10 & 30 & 39 \\ \hline
\end{tabular}
\end{table*}
\end{center}
Due to the interplay between various components of the correction procedure, the
contribution of each component to the cumulative systematic uncertainty of the
recoil jet yield cannot be uniquely specified. Nevertheless, it is instructive to
identify the principal factors that drive the cumulative systematic uncertainty.
Table~\ref{Tab:SysUncert} shows representative values of each uncertainty
component, for \ensuremath{R}\ = 0.2 and 0.5 in central and peripheral Au+Au\
collisions. These values are calculated by varying only the specified
component, while keeping all other components of the correction procedure fixed.
The uncertainties are averaged over three different
ranges of \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}, weighted by the spectrum shape.
It is seen that the unfolding procedure generates the largest systematic
uncertainty in the recoil jet yield.
The rightmost column of Tab.~\ref{Tab:SysUncert} shows the cumulative
systematic uncertainty in recoil jet yield. However, the unfolding process generates
significant off-diagonal covariance, especially for large \ensuremath{R}, arising predominantly from correction of fluctuations in uncorrelated background. In order to indicate the significant correlation between different values of \ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}, in the following sections we represent the
unfolded distributions graphically as bands rather than as binned histograms,
with the width of the band representing the outer envelope of all distributions
that were accepted by the above procedure.
\section{Closure test}
\label{sect:SysUncertClosure}
Convergence of the full correction procedure was validated by a closure test on
simulated data, utilizing events for p+p\ collisions at \ensuremath{\sqrt{s}}\ = 200 GeV
generated by PYTHIA. Figure \ref{fig:Closure_test}, upper panel, shows the
particle-level distribution of these events for jets with \ensuremath{R}\ = 0.3, which is
similar in shape to the fully corrected distribution from data for peripheral
Au+Au\ collisions.
Detector-level events were generated with
tracking efficiency and \ensuremath{p_\mathrm{T}}-resolution\ corresponding to those of central Au+Au\
collisions. Each detector-level simulated event containing an
accepted trigger hadron was embedded into a mixed event from the central
Au+Au\ data set. The hybrid data set has the same number of trigger
hadrons as the real data set, so that effects arising from finite event
statistics are modeled accurately. The complete analysis chain, including
generation of \ensuremath{\delta{\pT}}\ and the full set of corrections via unfolding, was then run
on the hybrid events to generate the fully corrected recoil jet spectrum, as
shown in the upper panel.
\begin{figure}[htbp]
\includegraphics[width=0.5\textwidth]{Closure_test_R03_central_V2.png}
\caption{(Color online) Closure test for central Au+Au\ collisions. Upper panel:
particle-level input distribution from PYTHIA (red line), unfolded spectrum for
Au+Au\ detector effects and background (grey band), and central value for fully
corrected peripheral STAR data (blue dashed, systematic uncertainty not shown
for clarity). Lower panel: ratio of unfolded
over input distribution from upper panel. See text for details.}
\label{fig:Closure_test}
\end{figure}
Figure \ref{fig:Closure_test}, lower panel, shows the
ratio of the fully-corrected recoil jet distribution to the particle-level
input distribution. The band shows the systematic uncertainty of the corrected
distribution. For $\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}>20$ \ensuremath{\mathrm{GeV/}c}, fluctuations in the central
value arise from the finite number of
events in
the input spectrum of the simulation, since the
corrected distribution in the numerator is smoothed by regularized unfolding.
For $\ensuremath{p_{\rm{T,jet}}^{\rm{ch}}}<20$ \ensuremath{\mathrm{GeV/}c}, the
ratio is consistent with unity within the uncertainty of about 20\%, with no
indication of a \ensuremath{p_\mathrm{T}}-dependent bias in central value.
\section{Introduction}
\label{intro}
A few years ago Derrida et al.$^{\cite{derr3}, \cite{derr2}}$
suggested an intriguing ``matrix approach'' to the
one-dimensional Asymmetric Simple
Exclusion Process (ASEP). This approach
has since been used to treat variants of the
model$^{\cite{derr4}, \cite{hinrich1}, \cite{hone-pesch}}$,
extended to non-steady states by
Sch\"{u}tz et al.$^{\cite{schuetz1}, \cite{schuetz2}, \cite{schuetz4}}$,
used to study
fluctuations by Derrida et al.$^{\cite{derr5}}$, and applied to the
multispecies case by (among others)
Isaev et al.$^{\cite{ipr}}$.
The main aim of this paper is to study the difficulties that arise
in potential
applications of the matrix approach to cases in which
the nearest-neighbor interaction or particle conservation
(both present in the ASEP) is violated.
Further light on the applicability
of the matrix method is shed by the integrability criterion
illustrated by Popkov
et al.$^{\cite{schuetz4}}$.
In section \ref{mps} we provide
a general recipe (using the generator of the process)
to find the algebra of the matrix formalism
associated to both the steady state and the whole
dynamics of any one-dimensional interacting system
such that at each step the configuration changes only in two adjacent sites.
A more complete description, with a
pedagogical aim will be given elsewhere$^{\cite{DI}}$.
In section \ref{beyond} we apply the recipe to some important interacting
systems such as the contact and voter models and show that
the matrix algebra obtained is not useful to treat them.
We will consider only systems on the lattice $\{1,
\ldots , N\}$; this is an intrinsic limitation of the matrix approach.
The dynamics of an interacting particle system is
usually defined by giving
the generator of the process, the general form of which
can be found for instance in Liggett's book$^{\cite{liggett2}}$.
For example, the generator $\Omega$ of
the ASEP, if particles jump one site to the right (left) with
rate $p$ ($q=1-p$) and
enter the lattice from the left (right) at rate $\alpha$
($\delta$) and leave it at rate
$\gamma$ ($\beta$),
is defined by
$^{\cite{liggett2}}$:
\begin{multline}\label{liggen}
( \Omega f ) (\tau) =\\ \!\sum_{x=1}^{N-1} [p\tau(x) (1\!-\tau(x+1))
\! + q\tau(x+1) (1\!-\tau(x))]
[ f( \tau^{x,x+1} ) -\! f( \tau ) ] \\
+[\alpha(1-\tau(1))+\gamma\tau(1)][f(\tau^{1})-f(\tau)] \\ +
[\delta(1-\tau(N))+\beta\tau(N)][f(\tau^{N})-f(\tau)]
\end{multline}
where $\tau=\{\tau(x)\}_{x=1}^N$ is the configuration of the
system, $\tau^{x,y}$ is the
configuration obtained from $\tau$ by exchanging the content of
the sites $x$ and $y$, and $\tau^x$ is the
configuration obtained from $\tau$ by
changing the content of the $x$-th site.
In the following formulas, $|V\rangle\rangle$ is a
vector in an (as yet)
unspecified linear space equipped with an inner product,
$D$ and $E$ linear operators on the same
space, $\langle\langle W|$ is an element of the dual space.
So $\langle\langle W|A|V\rangle\rangle$ is the inner product
generally written as $({\mathbf W}, A{\mathbf V})$.
The formula of Derrida et al. to write the probability of a given
configuration in the stationary state of the ASEP is$^{\cite{derr1}}$
\begin{equation}\label{derrstst}
P_N(\tau_1,...,\tau_N)=\frac{1}{Z_N}\langle\langle
W|\prod_{j=1}^{N}[\tau_jD+(1-\tau_j)E]|V\rangle\rangle\ ,
\end{equation}
where $D$, $E$, $|V\rangle\rangle$, $\langle\langle W|$
are matrices and vectors that satisfy
\begin{eqnarray}\label{derralg}
(\beta D - \delta E)|V\rangle\rangle & = & |V\rangle\rangle \ ,\nonumber \\
pDE - qED & = & D+E \ , \\
\langle\langle W|(\alpha E - \gamma D) & = & \langle\langle W|\nonumber\
\end{eqnarray}
and $Z_N$ is a normalization factor.
One can check that these formulas provide a sufficient condition
for the measure to be stationary by observing that they
satisfy the recursion relations for the probabilities (first due
to Liggett$^{\cite{liggett3}}$) that relate the probabilities for the
system with $K$ sites to those for the system
with $K-1$ sites$^{\cite{derr1}}$.
\section{From the Generator to Matrix \\ Product States}
\label{mps}
Let us start by re-writing the generator by making use
of a formalism borrowed from quantum mechanics.
For all
$ j=1,...,N$ let us define the Hilbert space
${\cal H}_j :=span\left\{ \left| 0 \right\rangle _j,\left|
1 \right\rangle _j\right\} \cong {\mathbb C}^2\ .$
Consider the operators $a^+, a^-, n, m$ defined by:
$a^+|0\rangle =|1\rangle , a^-|0\rangle = 0\ ,
n|0\rangle = 0\ ,
m = \mathbb{I}-n\ ,
a^+|1\rangle =0\ ,
a^-|1\rangle =|0\rangle\ ,
n|1\rangle =|1\rangle$,
where $\mathbb{I}$ is the identity.
Interpreting $|0\rangle$ and $|1\rangle$ as empty
site and occupied site respectively, the role of $a^+, a^-, n$
as {\em creation, annihilation, number operators} respectively
is rather obvious.
The most immediate choice of an explicit expression for
the operators and vectors above is
$|0\rangle = \binom{0}{1}\ , |1\rangle = \binom{1}{0}\ ,
a^+ = \begin{pmatrix}
0 & 1 \\
0 & 0
\end{pmatrix},
n = \begin{pmatrix}
1 & 0 \\
0 & 0
\end{pmatrix} ,
a^- = \begin{pmatrix}
0 & 0 \\
1 & 0
\end{pmatrix},
m = \begin{pmatrix}
0 & 0 \\
0 & 1
\end{pmatrix}
$.
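As a quick consistency check (an illustrative numpy snippet, not part of the original formalism), one can verify that these explicit $2\times2$ matrices act on $|0\rangle$ and $|1\rangle$ exactly as stated above:

```python
import numpy as np

# Explicit 2x2 representation of the single-site operators.
ket0 = np.array([0, 1])           # |0> = (0, 1)^T, empty site
ket1 = np.array([1, 0])           # |1> = (1, 0)^T, occupied site
ap = np.array([[0, 1], [0, 0]])   # a^+ (creation)
am = np.array([[0, 0], [1, 0]])   # a^- (annihilation)
n  = np.array([[1, 0], [0, 0]])   # number operator
m  = np.eye(2) - n                # m = I - n

assert np.array_equal(ap @ ket0, ket1)        # a^+|0> = |1>
assert np.array_equal(ap @ ket1, 0 * ket1)    # a^+|1> = 0
assert np.array_equal(am @ ket1, ket0)        # a^-|1> = |0>
assert np.array_equal(am @ ket0, 0 * ket0)    # a^-|0> = 0
assert np.array_equal(n @ ket1, ket1)         # n|1> = |1>
assert np.array_equal(n @ ket0, 0 * ket0)     # n|0> = 0
```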
Now we take the tensor product
$\mathrm{H}_{N}=\bigotimes_{j=1}^{N}{\cal H}_j$ to describe
the system on all the $N$ sites.
If we consider for example the ASEP, in this
``quantum hamiltonian'' formalism$^{\cite{schuetz3}}$ the
generator is given by
\begin{align}\label{asepham}
H &= -\sum_{k}\left[p(a_k^-a_{k+1}^+-n_km_{k+1})+q(a_k^+a_{k+1}^--m_kn_{k+1})\right]\nonumber\\
&\quad-\gamma(a^-_{1} - n_{1}) - \alpha(a^{+}_{1} - m_{1})
- \beta(a^-_{N} - n_{N}) - \delta(a^{+}_{N} - m_{N})\nonumber\\
&= h_1^\partial+\sum_{k}h_k+h_N^\partial\ ,
\end{align}
where the superscript $\partial$ denotes a boundary term and
$$h_1^\partial=
\begin{pmatrix}
\gamma & -\alpha \\
-\gamma & \alpha
\end{pmatrix},\
h_k=\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & p & -q & 0 \\
0 & -p & q & 0 \\
0 & 0 & 0 & 0
\end{pmatrix},\ h_N^\partial=
\begin{pmatrix}
\beta & -\delta \\
-\beta & \delta
\end{pmatrix}.$$
For any given operator or vector $b$ in the space ${\cal H}_k$
we use the notation
$b_k\equiv \mathbb I\otimes\cdots\otimes\mathbb I\otimes b
\otimes\mathbb I\cdots\otimes\mathbb I$
with $b$ as $k$-th factor.
Using a different $h_.$, this formulation can be used for any process
(like the voter and contact, e.g.) such that the occupation
number of each site is either 0 or 1, and such that the dynamics involves
a pair of neighboring sites at a time (slight generalizations
can be treated as well$^{\cite{hinrich1}, \cite{hone-pesch}}$).
The generator in (\ref{asepham}) is the same as in (\ref{liggen})
as can be checked by computing the Dirichlet Form
for both and verifying that they coincide
(the same holds for processes with different $h$).
It is however easier to look closely at each
part and see what it does. For instance $a_k^-a_{k+1}^+$
represents a jump to the right and $n_km_{k+1}$ takes into account
the complementary event (the particle stays where it is).
We now look for a stationary solution of the master equation
\begin{equation}\label{master}
\dot{|P(t)\rangle}=-H|P(t)\rangle\
\end{equation}
which describes the dynamics of the system by giving the time evolution
of the vector of probabilities of configurations, i.e.
we look for a distribution $|P_s\rangle$ such that
$H|P_s\rangle = 0$.
In order to show where the general idea can be guessed from, let us
consider again the case of the ASEP, to show$^{\cite{hinrich1}}$
that under special conditions (namely
$(\alpha+\beta+\gamma+\delta)
(\alpha\beta-\gamma\delta)/(\alpha+\delta)(\beta+\gamma)=p-q$)
the stationary state is a product state:
$|P_s\rangle=\frac{1}{Z_N}\binom{d}{e}^{\otimes N}$
(where
$d=(\alpha+\delta)/(\alpha\beta-\gamma\delta)$ ,
$e=(\beta+\gamma)/(\alpha\beta-\gamma\delta)$ and
the normalization constant is clearly
$Z_N=(e+d)^N$).
To prove that $H|P_s\rangle=0$, one should first check
\begin{equation}\label{tmr}
h_i\left[\binom{d}{e}\otimes\binom{d}{e}\right]=
\binom{d}{e}\otimes\binom{-1}{1}
-\binom{-1}{1}\otimes\binom{d}{e}\ .\end{equation}
This makes the sum through which $H$ is defined telescopic
(recall that we are omitting the factors of the tensor product
on which the operators act trivially as the identity),
and since
$$
h_1^\partial\binom{d}{e}= \binom{-1}{1}\ ,\
h_N^\partial\binom{d}{e}= -\binom{-1}{1}\ ,
$$
the cancellation of the first term with the last is assured by
the boundary terms.
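Relation (\ref{tmr}) and the boundary cancellations can be checked numerically. The following numpy snippet is illustrative: the rates are our choice, picked (with $\gamma=\delta=0$, so that $\alpha+\beta=p-q$) to satisfy the product-state condition stated above.

```python
import numpy as np

# Rates chosen to satisfy
# (alpha+beta+gamma+delta)(alpha*beta-gamma*delta)/((alpha+delta)(beta+gamma)) = p-q.
p, q = 0.7, 0.3
alpha, beta, gamma, delta = 0.25, 0.15, 0.0, 0.0   # alpha + beta = p - q

d = (alpha + delta) / (alpha * beta - gamma * delta)
e = (beta + gamma) / (alpha * beta - gamma * delta)

# Bulk generator h_k in the basis (|11>, |10>, |01>, |00>).
h = np.array([[0,  0,  0, 0],
              [0,  p, -q, 0],
              [0, -p,  q, 0],
              [0,  0,  0, 0]], dtype=float)

v = np.array([d, e])        # single-site vector (d, e)^T
s = np.array([-1.0, 1.0])   # "telescopic" vector (-1, 1)^T

# Telescopic relation: h [v x v] = v x s - s x v
lhs = h @ np.kron(v, v)
rhs = np.kron(v, s) - np.kron(s, v)
assert np.allclose(lhs, rhs)

# Boundary terms cancel the ends of the telescopic sum:
h1 = np.array([[gamma, -alpha], [-gamma, alpha]])
hN = np.array([[beta, -delta], [-beta, delta]])
assert np.allclose(h1 @ v, s)    # h_1^d (d,e)^T = (-1,1)^T
assert np.allclose(hN @ v, -s)   # h_N^d (d,e)^T = -(-1,1)^T
```

Together these assertions confirm that $H|P_s\rangle=0$ for the product state under the stated condition on the rates.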
In other words, $H|P_s\rangle = 0$ would be solved for instance
if we had zero for all $i$ in the r.h.s. of (\ref{tmr}); but
this is too restrictive, so we look for the first nontrivial possibility:
instead of zero, we impose a ``telescopic term''. This is inspired by
the dynamics, which acts with the same $h_.$ on every pair of
adjacent sites, so the generator acts twice on each site.
We will now try to make the above approach work
for non-product states by imposing a similar
telescopic property.
The idea is to move into a richer context,
substituting the numbers 1, $e$,
$d$ appearing in (\ref{tmr}) with some time-dependent
operators (non commuting
and acting on an auxiliary space of generally infinite dimension)
$S$, $E$, $D$ to be determined, aiming to get the
weights of each possible configuration
through a bracket with a couple of vectors $\langle\langle W|$
and
$|V \rangle\rangle$ to be introduced in the same space.
For instance for a system consisting of a single site we would impose
$
\langle\langle W|\binom{D}{E}|V \rangle\rangle
=\binom{\langle\langle W|D|V \rangle\rangle}
{\langle\langle W|E|V \rangle\rangle}
= \binom{d}{e}
$
and clearly, in the case of a product measure,
$
\langle\langle W|\binom{D}{E}^{\otimes N}|V \rangle\rangle =
\binom{d}{e}^{\otimes N}
$.
We can also write
$
H \langle\langle W|\binom{D}{E}^{\otimes N}|V \rangle\rangle =
\langle\langle W|H \binom{D}{E}^{\otimes N}|V \rangle\rangle
$.
Let us now write
$
|P \rangle=\frac{1}{Z_N} \langle\langle
W|\binom{D}{E}^{\otimes N}|V \rangle\rangle
$ for the probability vector
and plug it into the master equation (\ref{master}).
Clearly $Z_N=\langle\langle
W|C^N|V \rangle\rangle$, with $C=D+E$, which does not
depend on time by conservation of probability.
It is easy to show that the master equation (\ref{master}) is satisfied
if the following equalities hold
(thanks to the same telescopic cancellation mechanism we used for
the product state)
\begin{align}\label{algebra}
& \left(\frac{1}{2}\frac{d}{dt}_.+h_.\right)
\binom{D}{E}\otimes\binom{D}{E} =\nonumber \\
& \quad \binom{D}{E}\otimes\binom{-S}{S}-
\binom{-S}{S}\otimes \binom{D}{E} \ , \\
& \langle\langle W|\left[(\frac{1}{2}\frac{d}{dt}+h_1^\partial)
\binom{D}{E}-\binom{-S}{S}\right]=0\ ,\nonumber\\
& \left[(\frac{1}{2}\frac{d}{dt}+h_L^\partial)
\binom{D}{E}+\binom{-S}{S}\right] |V\rangle\rangle =0\nonumber\ .
\end{align}
These are the relations of the matrix algebra of the process.
If we choose, for example, the $h_.$ of the ASEP, these equations take the
explicit form of the algebra found by Stinchcombe
and Sch\"{u}tz$^{\cite{schuetz1}, \cite{schuetz2}}$
that includes as a special
case the stationary one (\ref{derralg}) of Derrida et al.
(taking $S=\mathbb{I}$ and putting all the time derivatives equal to zero).
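For the totally asymmetric case, the stationary algebra reduces to the well-known relation $DE = D + E$ of Derrida et al.; assuming this is the content of (\ref{derralg}), the standard bidiagonal representation can be checked numerically (the truncation size below is our arbitrary choice; the true representation is infinite-dimensional, so the relation fails only at the truncation boundary):

```python
import numpy as np

N = 8  # truncation size (ours); the representation is really infinite-dimensional
# Standard bidiagonal representation: D = I + upper shift, E = I + lower shift
D = np.eye(N) + np.eye(N, k=1)
E = np.eye(N) + np.eye(N, k=-1)

# DE = D + E holds exactly away from the truncation boundary (all rows but the last)
assert np.allclose((D @ E)[:-1], (D + E)[:-1])
# Truncation spoils only the bottom-right corner entry
assert (D @ E)[-1, -1] != (D + E)[-1, -1]
```

This is only a consistency sketch for the stationary relation, not a representation of the full time-dependent algebra (\ref{algebra}).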
With this procedure we can exhibit an algebra
for all the models with a dynamics involving
only a pair of neighboring sites at a time$^{\cite{schuetz3}, \cite{schuetz7}}$
(see the same works for a classification of the models
with different $h_.$).
If one found an explicit expression for all the operators and vectors, the model
could in principle be solved exactly (provided the algebra is not empty).
Unfortunately, this is in general very difficult
to accomplish (a purely algebraic treatment can also
be used$^{\cite{schuetz4}}$).
In the case of the ASEP, thanks to the preservation
of the number of particles in the bulk
dynamics, the local generator $h_.$ has a block form,
with zero entries in the first and last row and column.
This special form of $h_.$ is such that
in stationary conditions the four equations
(\ref{algebra}) collapse to just one: (\ref{derralg}).
But this great simplification may not occur for
different models.
In many cases the algebra
can be empty (or too complicated to deal with),
as we are going to show
for the contact and voter models.
We can say that the method works for the processes,
such as the ASEP, the probability measures of which are either
product, or a generalization that we can classify as
``matrix product measures". If one distinguished only
between product and non-product states, the choice would be
in general only
between a numerical tensor product and a convex combination
of as many such products as the cardinality of the configuration
space. If the states of a process are matrix product, one can choose
to deal again with a single tensor product, thanks to the richer
nature of the entries, matrices instead of numbers.
Algebras defined by conditions like (\ref{algebra}) are called
\emph{Diffusion Algebras} $^{\cite{ipr}}$.
\section{The Matrix Approach Beyond \\ Simple Exclusion}
\label{beyond}
\subsection{Exclusion process with double jumps}
The method to write the matrix algebra of the process
can also be extended to the
case of dynamics not limited
to neighboring sites, such as for instance
the exclusion process with jumps
of length two permitted. The generator, in the case of
symmetric dynamics, is (up to boundary terms):
\begin{align*}\begin{split}
H = &
-\sum_{k}(a_k^-a_{k+1}^+-n_km_{k+1})+(a_k^+a_{k+1}^--m_kn_{k+1})+\\
& (a_k^-a_{k+2}^+-n_km_{k+2})+(a_k^+a_{k+2}^--m_kn_{k+2})=\sum_{k}h_k,
\end{split}\end{align*}
$$
h_.=\begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & -1 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & -1 & 0 & 0 & 0\\
0 & 0 & 0 & 2 & 0 & -1 & -1 & 0\\
0 & -1 & -1 & 0 & 2 & 0 & 0 & 0\\
0 & 0 & 0 & -1 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & -1 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\
\end{pmatrix}
$$
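As a sanity check (ours, not part of the original derivation), the matrix above is a valid generator: every column sums to zero, so that probability is conserved, and for symmetric dynamics $h_.$ is a symmetric matrix:

```python
import numpy as np

h = np.array([
    [0,  0,  0,  0,  0,  0,  0, 0],
    [0,  1,  0,  0, -1,  0,  0, 0],
    [0,  0,  1,  0, -1,  0,  0, 0],
    [0,  0,  0,  2,  0, -1, -1, 0],
    [0, -1, -1,  0,  2,  0,  0, 0],
    [0,  0,  0, -1,  0,  1,  0, 0],
    [0,  0,  0, -1,  0,  0,  1, 0],
    [0,  0,  0,  0,  0,  0,  0, 0],
])

# Probability conservation: the flat row vector <1| is a left null vector of h
assert np.allclose(h.sum(axis=0), 0)
# Symmetric dynamics: h is symmetric, so <1| is a right null vector as well
assert np.allclose(h, h.T)
```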
For this system we impose the telescopic property to solve
the master equation in the following way:
\begin{align*}
&\left(\frac{1}{3}\frac{d}{dt}_.+h_.\right)\binom{D}{E}\otimes\binom{D}{E}\otimes\binom{D}{E}=
\binom{D}{E}\otimes\binom{-S}{S}\otimes\binom{D}{E}-\\
&\qquad 2\binom{-S}{S}\otimes \binom{D}{E}\otimes\binom{D}{E}+
\binom{D}{E}\otimes\binom{D}{E}\otimes\binom{-S}{S}
\end{align*}
which is the same as
\begin{equation*}
\frac{1}{3}(2\dot{D}D^2+D\dot{D}D+D^2\dot{D})+ 0
= -DSD+2SD^2-D^2S
\end{equation*}
\begin{equation*}
\frac{1}{3}(2\dot{D}DE+D\dot{D}E+D^2\dot{E})+ D^2E-ED^2
= -DSE+2SDE+D^2S
\end{equation*}
\begin{equation*}
\begin{split}
\frac{1}{3}(2\dot{D}ED+D\dot{E}D+DE\dot{D})+ DED-ED^2
=\\ DSD+2SED-DES
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\frac{1}{3}(2\dot{D}E^2+D\dot{E}E+DE\dot{E})+
2DE^2-EDE-E^2D = \\ DSE+2SE^2+DES
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\frac{1}{3}(2\dot{E}D^2+E\dot{D}D+ED\dot{D})
-D^2E-DED+2ED^2 =\\ -ESD-2SD^2-EDS
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\frac{1}{3}(2\dot{E}DE+E\dot{D}E+ED\dot{E})-DE^2-EDE
=\\ -ESE-2SDE+EDS
\end{split}
\end{equation*}
\begin{equation*}
\frac{1}{3}(2\dot{E}ED+E\dot{E}D+E^2\dot{D})-DE^2+E^2D
=ESD-2SED-E^2S
\end{equation*}
\begin{equation*}
\frac{1}{3}(2\dot{E}E^2+E\dot{E}E+E^2\dot{E})- 0 =ESE-2SE^2+E^2S
\end{equation*}
These relations now define a cubic algebra, as opposed to a quadratic one,
which is therefore not a Diffusion Algebra in the sense of $^{\cite{ipr}}$.
Unfortunately, algebras of degree higher than two are very difficult
to handle (see, e.g., Vershik$^{\cite{vershik}}$).
However, algebras of degree higher than two do appear, e.g., in
$^{\cite{schuetz4}}$.
\subsection{Voter and Contact Models}
For a description of the voter and contact models
see Liggett$^{\cite{liggett2}}$.
It is easy to see that the local
generator for the voter model can be written
in the form of the r.h.s. of (\ref{asepham})
with
$$
h_.=\begin{pmatrix}
0 & -1 & -1 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 2 & 0 \\
0 & -1 & -1 & 0 \
\end{pmatrix}
$$
and
$$
h_1=\begin{pmatrix}
0 & -\lambda \\
0 & \lambda \
\end{pmatrix}\ ,\ h_N=\begin{pmatrix}
\mu & 0 \\
-\mu & 0 \
\end{pmatrix}
$$
where $\lambda$ and $\mu$ are the rates for opinion changing in
the boundary sites. Notice that there are nonzero entries in the
first and last row.
It is easy to compute that
$$
h_.\binom{D}{E}\otimes\binom{D}{E}=\left(\begin{array}{c}
-\{D,E\} \\
2DE \\
2ED \\
-\{D,E\} \
\end{array}\right)
$$
and so we can conclude that the algebra and its stationary limit
are given by
\begin{align*}
\frac{1}{2}(\dot{D}D+D\dot{D})-\{ D,E \}
&= [S,D] \longrightarrow \{D,E\}=0 \\
\frac{1}{2}(\dot{D}E+D\dot{E})+2DE &= SE+DS \longrightarrow
2DE=C \\
\frac{1}{2}(\dot{E}D+E\dot{D})+2ED &= -(SD+ES) \longrightarrow
2ED=-C \\
\frac{1}{2}(\dot{E}E+E\dot{E})-\{D,E\}
&= [E,S]\longrightarrow \{D,E\}=0
\end{align*}
Hence in stationary conditions
$$
[D,E]=C\equiv D+E\ ,\ \{D,E\} = 0\ ,\
\mu D|V\rangle=|V\rangle\ ,\ \langle W|\lambda E
=\langle W|\ .
$$
Notice that the relations are similar to the ones of the ASEP, but
there is an additional condition: $D$ and $E$ anticommute.
The local generator of the contact model is
$$
h_.=\begin{pmatrix}
0 & -\alpha & -\alpha & 0 \\
0 & \alpha+\beta & 0 & -\alpha \\
0 & 0 & \alpha+\beta & -\alpha \\
0 & -\beta & -\beta & 2\alpha \
\end{pmatrix}
$$
so that
$$
h\binom{D}{E}\otimes\binom{D}{E}=\left(\begin{array}{c}
-\alpha\{D,E\} \\
(\alpha+\beta)DE-\alpha E^2 \\
(\alpha+\beta)ED-\alpha E^2 \\
-\beta\{D,E\}+2\alpha E^2 \
\end{array}\right)
$$
and so we can conclude that the algebra
is given by
\begin{align*}
\frac{1}{2}(\dot{D}D+D\dot{D})-\alpha\{D,E\}
&= [S,D] \\
\frac{1}{2}(\dot{D}E+D\dot{E})+(\alpha+\beta)DE-\alpha E^2
&= SE+DS \\
\frac{1}{2}(\dot{E}D+E\dot{D})+(\alpha+\beta)ED-\alpha E^2
&= -(SD+ES) \\
\frac{1}{2}(\dot{E}E+E\dot{E})-\beta\{D,E\}+2\alpha E^2
&= [E,S]
\end{align*}
so that in stationary conditions $E^2=0\ ,\ [D,E]=C\ ,\ \{D,E\} = 0$
if we assume $\alpha=\beta=1$.
Clearly these relations define a subalgebra
of the one for the voter model.
\begin{theorem}
In stationary conditions, the algebra of the voter model is empty
(and {\em a fortiori} so is the one of the contact process and so
are the ones for the whole time evolution).
\end{theorem}
{\bf Proof }
The algebra is
$$
DE=(D+E)/2\ ,\ ED=-(D+E)/2\ , $$
$$
DE=-ED\ ,\
\mu D|V\rangle=|V\rangle\ ,\
\langle W|\lambda E=\langle W|\ .
If
$\vartheta_.\in\{D,\ E\}$, we get from the first two conditions
$$ \langle W|\prod_{k=1}^N\vartheta_k|V\rangle=
\langle W|[P(D)+Q(E)]|V\rangle=
[P(1/\mu)+Q(1/\lambda)]\langle W|V\rangle
$$
with some polynomials $P$ and $Q$; but the third condition
(anticommutation) also implies
$$
\langle W|\prod_{k=1}^N\vartheta_k|V\rangle=(\pm)
\langle W|E^mD^n|V\rangle=(\pm)(1/\lambda)^m(1/\mu)^n
\langle W|V\rangle
$$
where $m+n=N$. The two expressions cannot be equal for all
values of $\lambda$ and $\mu$. $\Box$
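The contradiction can be made concrete at a sample parameter point (the numerical values are our choice, with $\langle W|V\rangle$ normalized to one): evaluating $\langle W|DE|V\rangle$ once via $DE=(D+E)/2$ and once via $DE=-ED$ together with the boundary eigenvalue relations gives incompatible answers.

```python
from fractions import Fraction

lam, mu = Fraction(1), Fraction(2)   # sample rates (ours); <W|V> normalized to 1

# <W| D E |V> computed via  DE = (D+E)/2  and the eigenvalue relations
via_product_rule = Fraction(1, 2) * (1 / mu + 1 / lam)
# ... and via  DE = -ED  together with  <W|E = (1/lam)<W|  and  D|V> = (1/mu)|V>
via_anticommutation = -(1 / lam) * (1 / mu)

assert via_product_rule != via_anticommutation   # the algebra is inconsistent
```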
This shows that, following the recipe of Section~\ref{mps},
we cannot use the matrix approach. However, the l.h.s. of
(\ref{algebra}) reflects directly the dynamics of the process
and does not depend on the matrix formalism, but the telescopic r.h.s.
is only inspired by the nearest neighbor nature of the dynamics
and it is more ``artificial".
In other words, if another way to solve the master equation were developed,
some kind of matrix approach could still be productive also for
those models that cannot be treated with the current matrix approach
illustrated in this paper.
\section*{Acknowledgments}
The authors thank G. M. Sch\"{u}tz for useful discussions
and making available reference$^{\cite{schuetz7}}$ before publication;
and an anonymous referee for pointing out
references$^{\cite{schuetz4}, \cite{ipr}}$.
Useful suggestions from two referees, as well as the editor, led to an
improved presentation of our results.
L.D.S. would like to thank G. Jona-Lasinio for his encouragement to
complete this work.
\section{Introduction}
Non-orthogonal multiple access (NOMA) is envisaged as one of the potential technologies in the next generation wireless communication systems. Users in NOMA systems can share the non-orthogonal resources, e.g., the frequency spectrum and the time slot. From a unified perspective, NOMA consists of code-domain NOMA and power-domain NOMA~\cite{wang2018non}.
Both code-domain and power-domain NOMA have been extensively studied in the existing literature. In the power-domain NOMA systems, the signals of different users are assigned different powers. Then, one major challenge is the optimal power allocation as discussed, for example, in~\cite{liu2017downlink,choi2016power}. The optimal power can be determined according to the channel conditions to maximize users' achievable rates. Superposition coding and successive interference cancellation (SIC) techniques are utilized at the transmitter and the receiver, respectively. Again, there are many studies on how to perform these techniques efficiently, for example~\cite{vanka2012superposition,zhang2011unified}.
The code-domain NOMA has its origin in code division multiple access (CDMA), including sparse code multiple access (SCMA)~\cite{nikopour2013sparse} and trellis coded multiple access (TCMA)~\cite{aulin1999trellis}. The signals of multiple users are separated by user-specific features, e.g., the uniquely assigned codeword of each user. In the code-domain NOMA, the main efforts are devoted to the multi-user detection, for example, the design of multidimensional constellations~\cite{nikopour2013sparse,di2019tcm}. To the best of our knowledge, the joint design of the code-domain and power-domain NOMA has never been studied.
In this work, we apply trellis coded modulation (TCM) to the power-domain NOMA, taking advantage of the coding gain and the power optimization. Utilizing superposition coding, the signals for multiple users are superimposed on different power levels. Compared with~\cite{di2019tcm}, the main contribution of this work is introducing the power allocation to code-domain NOMA. The performance can be improved by allocating proper powers to the signals of different users. Instead of utilizing TCM purely for codeword design in~\cite{di2019tcm}, TCM is employed in this work to jointly optimize the error control coding and modulation. Therefore, the Viterbi algorithm can be directly applied to the proposed scheme. By interpreting the modulating process via the tensor product of trellises~\cite{jafarkhani1999design,jafarkhani1999multiple}, we implement the maximum likelihood sequence detection (MLSD) based on the Viterbi algorithm~\cite{viterbi1971convolutional}. Furthermore, we derive the optimal power allocation between the two users by maximizing the free distance of the tensor product trellis.
The key difference between the trellis-coded NOMA and the traditional TCMA lies in the multiple access scheme. In TCMA, the signals of multiple users are differentiated by their unique features, for example, convolutional encoder, constellation, or interleaver~\cite{brannstrom2002iterative}. However, in the trellis-coded NOMA, the signals are differentiated only by the power levels. Furthermore, for the first time, we provide insight into the power optimization for the superimposed TCM signals.
\section{System Model}\label{sec_sys_model}
In this letter, we consider a downlink NOMA system consisting of one base station (BS) and two users. Superposition coding is employed at the transmitter. The power allocated to User~$i$'s signal is denoted as $P_i$, $i = 1, 2$. The channel coefficient between the BS and User~$i$ is represented by $h_i$. We adopt the block fading channel model, i.e., the channel remains static within each block and changes independently from one block to another~\cite{liu2017downlink,choi2016power}. We assume that the channel state information is perfectly known by the BS and users. Without loss of generality, we assume that $|h_1|^2 > |h_2|^2$. To ensure user fairness, we set $P_2 > P_1$.
In what follows, the 8-phase-shift keying (PSK) 4-state TCM serves as an example of TCM~\cite{ungerboeck1987trellis}, which is depicted in Fig.~\ref{fig_conv_encoder}. The trellis diagram and the 8-PSK mapping are shown in Figs.~\ref{fig_4_state_trellis} (a) and (b), respectively. In Figs.~\ref{fig_conv_encoder} and \ref{fig_4_state_trellis}, $x_1$ and $x_2$ represent the uncoded bits while $z_0$ and $z_1$ denote the coded bits via the convolutional encoder. For the sake of brevity, we employ the signal constellation with unit signal power, i.e., $\mathrm{E_b} = 1$. Note that the proposed scheme can be applied to the case where two users employ different modulations/trellises and also the case of more than two users.
In the proposed trellis-coded NOMA, the signals for Users~1 and 2 are first modulated by TCM, as shown in Figs.~\ref{fig_conv_encoder} and \ref{fig_4_state_trellis}, and then superimposed on different power levels. Using superposition coding, the $n$th transmitted symbol at the BS is given by $\sqrt{P_1} a_1(n) + \sqrt{P_2} a_2(n)$ where $a_i(n)$ is the $n$th symbol for User~$i$ after TCM. Then, the $n$th received sample at User~$i$ is given by
\begin{align}\label{eq_superposition_coding}
y_i(n) = h_i\left[\sqrt{P_1} a_1(n) + \sqrt{P_2} a_2(n)\right] + w_i(n),
\end{align}
where $w_i(n)\sim \mathcal{CN}(0, \sigma_i^2)$ is the additive noise. At users, the modulated symbols are detected and then the binary information bits are recovered from the modulated symbols, which will be explained in the next section.
\begin{figure}[t b]
\centering
\includegraphics[width=3in]{ZOU_WCL20191219R1_fig1.eps}
\caption{Illustration of an 8-PSK 4-state TCM encoder.}
\label{fig_conv_encoder}
\end{figure}
\begin{figure}[t b]
\centering
\includegraphics[width=3.5in]{ZOU_WCL20191219R1_fig2.eps}
\caption{(a) Trellis representation of 8-PSK 4-state TCM. (b) The mapping of 8-PSK constellation.}
\label{fig_4_state_trellis}
\end{figure}
\section{Tensor Product of Trellises and Detection Design}\label{sec_tensor_product_and_detector}
In this section, we first present the separate detection method with SIC. Then, we propose the joint detection method based on a novel trellis structure known as ``tensor product of trellises''.
\subsection{Separate Detection with SIC}
In the separate detection scheme, the signals for Users~1 and 2 are detected separately. User~2 (the weak user) detects its own signal by considering User~1's signal as noise. User~1 (the strong user) utilizes SIC, i.e., first detects User~2's signal, removes it from the superimposed signal, and then detects its own signal. The Viterbi algorithm~\cite{viterbi1971convolutional} can be employed to determine the sequence with the minimum Euclidean distance from the received sequence using the 4-state trellis in Fig.~\ref{fig_4_state_trellis}~(a).
\begin{figure}[t b]
\centering
\includegraphics[width=2.5in]{ZOU_WCL20191219R1_fig3.eps}
\caption{Underlying tensor product of trellises.}
\label{fig_16_state_trellis}
\end{figure}
\subsection{Joint Detection with Tensor Product of Trellises}\label{subsec_tensor_product}
First, we review the concept of the tensor product of trellises~\cite{jafarkhani1999multiple,jafarkhani1999design}. Let us consider trellises $T_1$ and $T_2$ with $r_1$ and $r_2$ states, respectively, and $S_i^{(l)}$, $i=1,\cdots,r_l$, denotes the $i$th state of $T_l$. The tensor product of $T_1$ and $T_2$, denoted as $T_1 \otimes T_2$, can be represented as a trellis with $r_1\times r_2$ states. Each state in $T_1 \otimes T_2$ is given by $S_i^{(1)} S_j^{(2)}$, $i = 1, \cdots, r_1$, $j = 1, \cdots, r_2$. The state transition from $S_i^{(1)} S_j^{(2)}$ to $S_k^{(1)} S_l^{(2)}$ exists if and only if there exist transitions from $S_i^{(1)}$ to $S_k^{(1)}$ in $T_1$ and from $S_j^{(2)}$ to $S_l^{(2)}$ in $T_2$. One can easily extend the definition of the tensor product trellis to the case of more than two trellises.
Let us revisit the modulating process of two users' signals in Section~\ref{sec_sys_model}. The symbols for Users~1 and 2 are modulated independently through the 4-state trellis, shown in Fig.~\ref{fig_4_state_trellis} (a). Let $T_1$ and $T_2$ stand for the trellises employed to modulate the symbols for Users~1 and 2, respectively. The tensor product trellis $T_1 \otimes T_2$ is the 16-state trellis in Fig.~\ref{fig_16_state_trellis}. Every pair of state transitions in $T_1$ and $T_2$ can be represented by a unique transition path in $T_1 \otimes T_2$. For example, let us assume that the state of $T_1$ transits from $S_i^{(1)}$ to $S_k^{(1)}$ producing the modulated symbol $a_1$ and the state of $T_2$ transits from $S_j^{(2)}$ to $S_l^{(2)}$ generating the modulated symbol $a_2$. From the perspective of $T_1 \otimes T_2$, the state transits from $S_i^{(1)} S_j^{(2)}$ to $S_k^{(1)} S_l^{(2)}$ and the superimposed symbol $\sqrt{P_1}a_1 + \sqrt{P_2}a_2$ is produced. Since every state transition can be realized by two parallel paths in $T_1$ and $T_2$, as shown in Fig.~\ref{fig_4_state_trellis} (a), every state transition in $T_1 \otimes T_2$ includes $2\times 2 = 4$ parallel paths.
The description of the tensor product trellis demonstrates the equivalence of the trellis-coded NOMA and the TCM using the tensor product trellis. The joint detection is to detect both users' signals jointly by treating the trellis-coded NOMA as a regular TCM with the tensor product trellis. In the joint detection, the Viterbi algorithm is implemented using the tensor product trellis. It is worth mentioning that there is no necessity to modulate the signals for Users~1 and 2 jointly using the tensor product trellis at the transmitter. The transmitted symbols for each user can be modulated independently according to its own trellis by applying an appropriate power allocation scheme to ensure a good decoding performance (as shown in Section~V).
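The tensor-product construction can be sketched in a few lines of code; here a trellis is simply a list of edges $(s, s', a)$ and the product edge set is formed pairwise. The toy two-state trellis with antipodal labels below is purely illustrative (ours) and is not the 4-state TCM trellis of the paper:

```python
import itertools

def tensor_product(T1, T2, p1, p2):
    """Edges of T1 (x) T2: pair every T1 edge with every T2 edge;
    the output symbol is the power-weighted superposition."""
    return [((s1, s2), (t1, t2), p1**0.5 * a1 + p2**0.5 * a2)
            for (s1, t1, a1), (s2, t2, a2) in itertools.product(T1, T2)]

# Toy two-state trellis with antipodal BPSK labels (illustrative only)
T = [(0, 0, 1), (0, 1, -1), (1, 0, -1), (1, 1, 1)]
TT = tensor_product(T, T, p1=0.2, p2=0.8)

# State and edge counts multiply: K1*K2 states and L1*L2 edges
assert len(TT) == len(T) ** 2
assert len({s for (s, _, _) in TT} | {t for (_, t, _) in TT}) == 4
```

This makes the complexity counts quoted below immediate: the Viterbi recursion over $T_1\otimes T_2$ visits $K_1K_2$ states and $L_1L_2$ edges per symbol.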
Since the Viterbi algorithm can be employed in joint decoding, the computational complexity increases linearly with the number of decoded symbols, $N$. More specifically, if the number of states in $T_i$ ($i=1, 2$) is $K_i$ and the total number of edges in $T_i$ is $L_i$, the computational complexity of the joint detection method is given by $O(N(K_1K_2 + L_1L_2))$ while that of the separate detection method with SIC is $O(N(K_1 + K_2 + L_1 + L_2))$.
\section{Power Optimization}\label{sec_power_ratio}
In this section, we study the power allocation to optimize the performance of the joint detection scheme. The power allocation is optimized under two power constraints. One is the sum power constraint, i.e., $P_1 + P_2 \le P$ where $P$ is the total transmit power. The other constraint is $P_1 < P_2$ which is added with no loss of generality. We adopt the free distance of the tensor product trellis, $d_{\mathrm{free}}$, to measure the performance, which is widely used in the existing TCM studies, for example~\cite{benedetto1999principles}. A larger free distance results in a better performance at high signal-to-noise ratio (SNR). As will be illustrated later, the free distance is a function of the power coefficients $P_1$ and $P_2$. We obtain the optimal powers by maximizing the free distance.
The free distance is defined as the minimum Euclidean distance between any pair of valid and distinct sequences produced by a given trellis, i.e.,
$d_{\mathrm{free}} = \min_{\mathbf{a}_1, \mathbf{a}_2 \in V, \mathbf{a}_1\neq\mathbf{a}_2} ||\mathbf{a}_1 - \mathbf{a}_2||$ where $V$ is the set of all valid sequences. The free distance can be determined by choosing the minimum of two candidates: the minimum Euclidean distance between the symbols produced by the parallel paths, i.e., $d_{\mathrm{parallel}}$, and that between the sequences which diverge from the same state and then merge at the same state, i.e., $d_{\mathrm{D\&M}}$. The subscript $\mathrm{D\&M}$ is the acronym for ``diverging and merging''. In what follows, we analyze these two distances separately. Assume that there are two different paths in $T_1 \otimes T_2$ producing $\sqrt{P_1}u_1 + \sqrt{P_2}v_1$ and $\sqrt{P_1}u_2 + \sqrt{P_2}v_2$, where $u_1$ and $u_2$ are the modulated symbols of $T_1$ and $v_1$ and $v_2$ are those of $T_2$.
\vspace{-2mm}
\subsection{Parallel Paths}\label{subsubsec_paralel_path}
First, we study the case where $\sqrt{P_1}u_1 + \sqrt{P_2}v_1$ and $\sqrt{P_1}u_2 + \sqrt{P_2}v_2$ are produced by the parallel paths in $T_1\otimes T_2$.
Fig.~\ref{fig_parallel_dist} illustrates the possible positions of $\sqrt{P_1}u_1 + \sqrt{P_2}v_1$ and $\sqrt{P_1}u_2 + \sqrt{P_2}v_2$ in the superimposed constellation when $v_1$ and $v_2$ are chosen from $\{1, -1\}$. Because of symmetry, the minimum Euclidean distance for all the other choices will be the same. In Fig.~\ref{fig_parallel_dist}, there are four different markers, hollow/solid square/circle. The superimposed symbols depicted by the same marker are the symbols produced by the parallel paths for a specific state transition in $T_1\otimes T_2$. Every state transition in $T_1\otimes T_2$ can be realized by four parallel paths. Therefore, there are four positions for every marker. The minimum Euclidean distance between parallel paths can be found by calculating the Euclidean distance between the points sharing the same marker. It is clear from Fig.~\ref{fig_parallel_dist} that the minimum Euclidean distance is either $\delta_1$ or $\delta_2$. Thus,
\begin{align}\label{eq_d_parallel}
d_{\mathrm{parallel}} \!=\! \min\left\{\delta_1, \delta_2\right\} \!=\! \min\left\{2\sqrt{P_2} \!-\! 2\sqrt{P_1}, 2\sqrt{P_1}\right\}.
\end{align}
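Equation (\ref{eq_d_parallel}) can be confirmed by brute force over the four superimposed symbols generated by parallel paths; as in Fig.~\ref{fig_parallel_dist}, the parallel branches carry antipodal symbols on a common axis (the power values below are arbitrary, with $P_1 < P_2$):

```python
import itertools, math

P1, P2 = 0.3, 1.0            # arbitrary powers with P1 < P2
u_set = v_set = [1.0, -1.0]  # antipodal symbols labelling parallel branches

points = [math.sqrt(P1) * u + math.sqrt(P2) * v
          for u, v in itertools.product(u_set, v_set)]
d_min = min(abs(a - b) for a, b in itertools.combinations(points, 2))

expected = min(2 * math.sqrt(P2) - 2 * math.sqrt(P1), 2 * math.sqrt(P1))
assert math.isclose(d_min, expected)
```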
\begin{figure}[t b]
\centering
\includegraphics[width=2.8in]{ZOU_WCL20191219R1_fig4.eps}
\caption{Illustration of the minimum Euclidean distance in the superimposed constellation.}
\label{fig_parallel_dist}
\end{figure}
\begin{figure}[t b]
\centering
\includegraphics[width=2.8in]{ZOU_WCL20191219R1_fig5.eps}
\caption{Illustration of the diverging-and-merging paths with the minimum Euclidean distance.}
\label{fig_diverge_merge_paths}
\end{figure}
\vspace{-5mm}
\subsection{Diverging-and-Merging Paths}
Second, we study the Euclidean distance between the sequences which diverge from the same state and then merge at the same state. It can be shown that if two sequences diverge from any state, it takes at least three transitions to merge at the same state. We utilize the exhaustive search to find a pair of sequences with the minimum Euclidean distance among all pairs of distinct sequences, which is shown in Fig.~\ref{fig_diverge_merge_paths}. Note that all valid codewords start and end at state zero. However, any common sub-sequence will not contribute to $d_\mathrm{free}$. Therefore, to calculate $d_\mathrm{free}$ in Fig.~\ref{fig_diverge_merge_paths}, we need to consider the state transitions $1100\rightarrow 1000 \rightarrow 0100 \rightarrow 1100$ and $1100\rightarrow 1001 \rightarrow 0110 \rightarrow 1100$. As shown in Fig.~\ref{fig_diverge_merge_paths}, the squared Euclidean distance between the diverging-and-merging paths is given by
\begin{align}
d_{\mathrm{D\&M}}^2 = d_{\mathrm{diverge}}^2 + d_{\mathrm{mid}}^2 + d_{\mathrm{merge}}^2.
\end{align}
First, let us focus on the diverging paths in Fig.~\ref{fig_diverge_merge_paths}. According to Fig.~\ref{fig_4_state_trellis}, the superimposed symbol produced by the path $1100\rightarrow 1000$ is given by $\sqrt{P_1}u_1 + \sqrt{P_2}v_1$, where $u_1\in \{e^{j3\pi/4}, e^{j7\pi/4}\}$ and $v_1\in \{1, -1\}$. Similarly, the superimposed symbol produced by $1100\rightarrow 1001$ is given by $\sqrt{P_1}u_2 + \sqrt{P_2}v_2$, where $u_2\in \{e^{j3\pi/4}, e^{j7\pi/4}\}$ and $v_2\in \{e^{j\pi/2}, e^{j3\pi/2}\}$. The positions of the superimposed symbols can be shown in Fig.~\ref{fig_diverge_merge_branch}. The minimum Euclidean distance between the diverging paths is given by
\begin{align*}
d_{\mathrm{diverge}} = \delta_3 = |\sqrt{2P_2} - 2\sqrt{P_1}|.
\end{align*}
One can employ the same approach to derive the minimum Euclidean distance between the merging paths and find that $d_{\mathrm{merge}} = d_{\mathrm{diverge}}$.
Second, we investigate the Euclidean distance $d_{\mathrm{mid}}$ in Fig.~\ref{fig_diverge_merge_paths}. The superimposed symbol produced by the path $1000\rightarrow 0100$ is given by $\sqrt{P_1}u_1 + \sqrt{P_2}v_1$, where $u_1\in \{1, -1\}$ and $v_1\in \{1, -1\}$. Similarly, the superimposed symbol produced by the path $1001\rightarrow 0110$ is given by $\sqrt{P_1}u_2 + \sqrt{P_2}v_2$, where $u_2\in \{1, -1\}$ and $v_2\in \{e^{j\pi/4}, e^{j5\pi/4}\}$. The positions of the superimposed symbols can be shown in Fig.~\ref{fig_mid_branch}. According to Fig.~\ref{fig_mid_branch}, the minimum Euclidean distance $d_{\mathrm{mid}}$ is given by
\begin{align*}
d_{\mathrm{mid}}^2 &= \min\{\delta_4^2,\delta_5^2\} \notag\\
&= (2\!-\!\sqrt{2})P_2 + \min\left\{0, 4P_1 + 2\sqrt{P_1P_2}\left(\sqrt{2} - 2\right)\right\}.
\end{align*}
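The expression for $d_{\mathrm{mid}}^2$ can likewise be verified by brute force over the two four-point sets just described. The sample powers below are ours, chosen with $2P_1 \le P_2$ and so that both sign branches of the inner $\min$ are exercised:

```python
import cmath, itertools, math

def d_mid_sq(P1, P2):
    """Brute-force minimum squared distance between the two symbol sets."""
    set1 = [math.sqrt(P1) * u + math.sqrt(P2) * v
            for u in (1, -1) for v in (1, -1)]
    set2 = [math.sqrt(P1) * u + math.sqrt(P2) * cmath.exp(1j * ang)
            for u in (1, -1) for ang in (math.pi / 4, 5 * math.pi / 4)]
    return min(abs(a - b) ** 2 for a, b in itertools.product(set1, set2))

def formula(P1, P2):
    r2 = math.sqrt(2)
    return (2 - r2) * P2 + min(0.0, 4 * P1 + 2 * math.sqrt(P1 * P2) * (r2 - 2))

for P1, P2 in [(0.05, 1.0), (0.3, 1.0), (0.45, 1.0)]:
    assert math.isclose(d_mid_sq(P1, P2), formula(P1, P2))
```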
\begin{figure}[t b]
\centering
\includegraphics[width=2.2in]{ZOU_WCL20191219R1_fig6.eps}
\caption{Illustration of the minimum Euclidean distance between the symbols produced by diverging paths.}
\label{fig_diverge_merge_branch}
\end{figure}
\begin{figure}[t b]
\centering
\includegraphics[width=2.2in]{ZOU_WCL20191219R1_fig7.eps}
\caption{Illustration of the minimum Euclidean distance between the symbols in the intermediate stage of the diverging-and-merging paths.}
\label{fig_mid_branch}
\end{figure}
To summarize, the minimum Euclidean distance between the diverging-and-merging paths is given by
\begin{align}\label{eq_d_dm}
d_{\mathrm{D\&M}}^2 =& d^2_{\mathrm{diverge}} + d^2_{\mathrm{mid}} + d^2_{\mathrm{merge}}\notag\\
=& \left(6-\sqrt{2}\right)P_2 + 8P_1 - 8\sqrt{2P_1P_2} \notag\\
&+ \min\left\{0, 4P_1 + 2\sqrt{P_1P_2}\left(\sqrt{2} - 2\right)\right\}.
\end{align}
\subsection{Free Distance}
The free distance of $T_1 \otimes T_2$ is determined by finding the minimum of $d_\mathrm{parallel}$ and $d_\mathrm{D\&M}$, i.e.,
\begin{align}\label{eq_d_free}
d_{\mathrm{free}}^2 =& \min\{d_{\mathrm{parallel}}^2, d_{\mathrm{D\&M}}^2\}\notag\\
=& \min\!\left\{\!4P_1, \!4\left(\sqrt{P_2} - \sqrt{P_1}\right)^2, \left(6-\sqrt{2}\right)P_2 + 8P_1\right.\notag\\
&\left.\!- 8\sqrt{2P_1\!P_2} \!+\! \min\!\left\{\!0,\! 4P_1 \!+\! 2\sqrt{P_1\!P_2}\left(\!\sqrt{2}\! -\! 2\right)\!\right\}\!\right\}.
\end{align}
The optimal powers can be derived by maximizing the free distance, i.e.,
\begin{align}\label{eq_0.24}
\left[P_1^*, P_2^*\right] &= \arg\max_{P_1, P_2} d_{\mathrm{free}}^2,\ \mathrm{s.t.}\ P_1 + P_2 \le P,
\end{align}
where $P$ is the total transmit power. According to \eqref{eq_d_free}, one can derive that $d_{\mathrm{free}}$ is maximized when $4P_1 = \left(6-\sqrt{2}\right)P_2 + 8P_1 - 8\sqrt{2P_1P_2}$, which then results in $\frac{P_1^*}{P_2^*} = \left(\frac{2\sqrt{2} - \sqrt{2+\sqrt{2}}}{2}\right)^2 \approx 0.2404$. Besides, to combat the channel noise, $P_1 + P_2$ should be maximized. As a result, $P_1^* = \frac{0.2404}{1+0.2404} P \approx 0.1938P$ and $P_2^* \approx 0.8062P$.
While we presented the results for a two-user scenario with 8-PSK 4-state TCM, our approach can be generalized to any TCM.
\section{Simulation Results}
In this section, we present the simulation results of the 8-PSK 4-state trellis-coded NOMA (TC-NOMA), TCMA, and the uncoded NOMA (UC-NOMA) with 4-PSK. We ensure a fair comparison among these schemes since the TCM is implemented without consuming extra bandwidth compared with the uncoded modulation~\cite{ungerboeck1987trellis}. In our simulation, we employ bit error ratio (BER) as the measure of performance. In the uncoded NOMA, the maximum likelihood detection is employed. We also present the results for the TCMA where the signals for Users~1 and 2 are modulated by the identical trellis shown in Fig.~\ref{fig_4_state_trellis} but differentiated by constellation~\cite{aulin1999trellis}. In TCMA, the constellation used by one user is the other user's constellation rotated by $\pi/8$. In contrast to the trellis-coded NOMA, the transmitted signal in TCMA is given by $\sqrt{\left(P_1+P_2\right)/2}[a_1(n) + a_2(n)]$, which ensures a fair comparison by using the same sum transmit power.
\begin{figure}[t b]
\centering
\includegraphics[width=3.3in]{ZOU_WCL20191219R1_fig8.eps}
\caption{BER vs. SNR for TCMA, uncoded and trellis-coded NOMA when $P_1 = 0.1$, $P_2 = 1$, $|h_1|^2 = 2$, $|h_2|^2 = 1$.}
\label{fig_P1=0.1}
\end{figure}
\begin{figure}[t b]
\centering
\includegraphics[width=3.3in]{ZOU_WCL20191219R1_fig9.eps}
\caption{BER vs. SNR for TCMA, uncoded and trellis-coded NOMA when $P_1 = 0.3$, $P_2 = 1$, $|h_1|^2 = 2$, $|h_2|^2 = 1$.}
\label{fig_P1=0.3}
\end{figure}
\begin{figure}[t b]
\centering
\includegraphics[width=3.3in]{ZOU_WCL20191219R1_fig10.eps}
\caption{BER vs. $P_1/P_2$ for TCMA, uncoded and trellis-coded NOMA schemes at users employing the joint detection when $P_1 + P_2 = 1$.}
\label{fig_power_allocation}
\end{figure}
First, we show the BER as a function of SNR for NOMA and TCMA schemes in Figs.~\ref{fig_P1=0.1} and \ref{fig_P1=0.3} when $P_2 = 1$ and $P_1 = 0.1$ or 0.3, respectively. SNR is given by $\frac{1}{\sigma^2}$ where $\sigma^2$ is the variance of noise. For $P_1 = 0.1$ or 0.3, it is evident that at high SNR, similar to conventional TCM~\cite{ungerboeck1987trellis}, the trellis-coded NOMA using the joint detection outperforms the uncoded NOMA. Besides, the trellis-coded NOMA using the separate detection achieves a similar performance to that using the joint detection when $P_1 = 0.1$. In contrast, there is a huge gap between the BER curves of the separate detection and those of the joint detection when $P_1 = 0.3$. This is because of the severe inter-user interference when detecting the two users' signals separately and the error propagation problem in SIC.
Furthermore, using the joint detection, the trellis-coded NOMA outperforms TCMA at high-SNR in Figs.~\ref{fig_P1=0.1} and \ref{fig_P1=0.3}. Moreover, in the trellis-coded NOMA, the signals of different users can also employ different constellations. The curves with ``TC-NOMA, Joint, Rotate'' in Fig.~\ref{fig_P1=0.3} are for the case where the constellation used by one user is the other user's constellation rotated by $\pi/8$. The trellis-coded NOMA with constellation rotation achieves a better performance compared with the trellis-coded or uncoded NOMA without constellation rotation and TCMA. It can be explained intuitively by considering how the constellation rotation affects the Euclidean distance between superimposed symbols. According to Figs.~\ref{fig_parallel_dist} and \ref{fig_diverge_merge_branch}, the minimum Euclidean distance may increase if the constellation of User~1's signal rotates by $\pi/8$, which then improves the performance.
Fig.~\ref{fig_power_allocation} shows how the average BER varies with the power ratio $P_1/P_2$ under joint detection at SNRs of 16\,dB and 18\,dB, for the 4-state trellis in Fig.~\ref{fig_4_state_trellis} and the 8-state trellis in Fig.~12.8 of \cite{benedetto1999principles}. The minimum BER is achieved at $P_1/P_2 \approx 0.25$ for the uncoded NOMA and the 8-state trellis-coded NOMA. The optimal power ratio for the 4-state trellis-coded NOMA is 0.24 at ${\rm SNR}=16$\,dB and 0.22 at ${\rm SNR}=18$\,dB, both close to the optimal ratio of 0.2404 derived in Section~\ref{sec_power_ratio}. Moreover, the trellis-coded NOMA at its optimal power ratio outperforms the uncoded NOMA at its optimal power ratio. The performance of TCMA does not vary with $P_1/P_2$; with a proper power allocation, the trellis-coded NOMA outperforms TCMA.
\section{Conclusions}
In this letter, we study trellis-coded NOMA and propose a joint detection method based on the tensor product of trellises. In addition, we derive the optimal power allocation between the two users by maximizing the free distance of the tensor product trellis. Simulation results demonstrate that, with an appropriate power allocation, the trellis-coded NOMA outperforms both the uncoded NOMA and TCMA. The study of trellis-coded NOMA systems with more than two users is left for future work.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
{\small
\bibliographystyle{IEEEtran}
\section{Review of some derived algebraic geometry}
\label{sec:dag}
In this appendix, we summarize the deformation theory relevant to us, using the language of derived algebraic geometry. Our primary goal is to explain certain functorialities in the usual deformation-obstruction theory of varieties by interpreting everything in terms of {\em derivations} in the derived category\footnote{We hasten to remark that all statements written here are well-known to the experts, and have been written down simply to provide a convenient reference.}. We do so by first discussing the deformation-obstruction theory for derivations (see \S \ref{derobs} and \S \ref{compat}), then explaining how to realize the theory of square-zero extensions as a special case of the theory of derivations (see \S \ref{sqzeroext}), and then finally recording the corresponding statements for the deformation-obstruction theory for square-zero extensions (see \S \ref{obssqzero}). The format adopted is that of short numbered paragraphs, each one discussing an algebraic problem and its solution first, and then stating the corresponding scheme-theoretic result (with references). To avoid mentioning derived Deligne-Mumford stacks in the {\em statements} of various theorems below, we impose flatness hypotheses in the statements. We hope that this sacrifice of generality will make the statements more readily accessible. Our primary references will be \cite{LurieDAG} and \cite{IllusieCC1}, though occasionally we refer to \cite[Chapter 8]{LurieHA} as well; we freely use the language of \cite{LurieHT} and \cite[Chapter 1]{LurieHA}.
\subsection{Conventions} We use the term $\infty$-groupoid when referring to a mapping space in an $\infty$-category. Given an $\infty$-category $\mathcal{C}$ and objects $X,Y \in \mathcal{C}$, we let $\operatorname{Hom}_{\mathcal{C}}(X,Y)$ denote the $\infty$-groupoid of maps in $\mathcal{C}$ between $X$ and $Y$; we drop the subscript $\mathcal{C}$ from the notation when the category is clear from context. Fix a Grothendieck abelian category $\mathcal{A}$, and consider the stable $\infty$-category $\mathcal{D}$ of (unbounded) chain complexes over $\mathcal{A}$ with its usual t-structure; see \cite[Section 1.3.5]{LurieHA} for more. Given an object $K \in \mathcal{D}$ and an integer $j$, the complex $K[j]$ denotes the complex $K$ with homological degree increased by $j$. We freely identify $\mathcal{D}_{\leq 0}$ with the $\infty$-category of simplicial objects in $\mathcal{A}$ via the Dold-Kan correspondence, and we use the term {\em connective} to refer to such chain complexes. We sometimes denote the shift functor $K \mapsto K[-1]$ by $\Omega$. Since $\mathcal{A}$ has enough injectives, for any two (bounded above) complexes $C$ and $D$ in $\mathcal{D}$, the $\infty$-groupoid $\operatorname{Hom}(C,D)$ of maps $C \to D$ in $\mathcal{D}$ can also be realized as
\[ \operatorname{Hom}(C,D) = \tau_{\leq 0} \operatorname{Hom}^\bullet(C,\tilde{D}), \]
where $D \to \tilde{D}$ is a quasi-isomorphism between $D$ and a complex $\tilde{D}$ of $K$-injectives, and $\operatorname{Hom}^\bullet(C,\tilde{D})$ is the mapping chain complex in the usual sense of homological algebra; note that $\operatorname{Hom}(C,D)$ is a simplicial abelian group. If $f:M \to N$ is a morphism in a stable $\infty$-category $\mathcal{C}$, then $N/M$ denotes the pushout of $f$ along $M \to 0$ and is called the {\em homotopy cokernel} of $f$. Dually, the pullback of $f$ along $0 \to N$ is called the {\em homotopy kernel} of $f$. Note that $\Omega N$ is simply the homotopy kernel of $0 \to N$. We denote by
\[ M \to N \to N/M \]
the exact triangle defined by $f$ in the homotopy category of $\mathcal{C}$; we exclude the boundary map $N/M \to M[1]$ in our depiction of the exact triangle simply for notational convenience.
\subsection{The basic setup} We will work in the setting of derived algebraic geometry provided by the $\infty$-category $ {\operatorname{SAlg}}_k$ of simplicial commutative $k$-algebras over some fixed (ordinary) base ring $k$ rather than any more sophisticated variants; the full subcategory of $ {\operatorname{SAlg}}_k$ spanned by discrete $k$-algebras is ordinary and will be denoted $ {\operatorname{Alg}}_k$. All tensor products are assumed to be derived and relative to $k$ unless otherwise specified; in particular, the subcategory $ {\operatorname{Alg}}_k \subset {\operatorname{SAlg}}_k$ is not closed under the $\otimes$-product unless $k$ is a field. The reference \cite{LurieHA} works in the setting of $E_\infty$-rings rather than $ {\operatorname{SAlg}}_k$ (the two coincide when $\mathbf{Q} \subset k$, up to connectivity constraints), but can also be adapted to work for $ {\operatorname{SAlg}}_k$; we choose to ignore this issue when referring to \cite{LurieHA} below.
\subsection{Stable $\infty$-categories of modules and their functorialities} Given an $A \in {\operatorname{SAlg}}_k$, one can define a stable $\infty$-category ${\operatorname{Mod}}(A)$ of $A$-modules which comes equipped with a natural $\otimes$-product structure. When $A$ is discrete, this $\infty$-category realizes the derived category of the abelian category of $A$-modules as its homotopy category; we stress that ${\operatorname{Mod}}(A)$ is {\em not} the ordinary category of $A$-modules when $A$ is discrete. The association $A \mapsto {\operatorname{Mod}}(A)$ obeys the expected functorialities. For example, if $f:A \to B$ is a map in $ {\operatorname{SAlg}}_k$, there is an extension of scalars functor ${\operatorname{Mod}}(A) \to {\operatorname{Mod}}(B)$ induced by tensoring with $B$, and there is a restriction functor ${\operatorname{Mod}}(B) \to {\operatorname{Mod}}(A)$ obtained by remembering only the $A$-structure. These functors are adjoint: given $M \in {\operatorname{Mod}}(A)$ and $N \in {\operatorname{Mod}}(B)$, there exist functorial equivalences
\[ \operatorname{Hom}_{{\operatorname{Mod}}(A)}(M,N) \simeq \operatorname{Hom}_{{\operatorname{Mod}}(B)}(M \otimes_A B,N). \]
\subsection{The cotangent complex} \label{cotcomplex} Let $A$ be an ordinary $k$-algebra. The cotangent complex $L_A$ of $A$ relative to $k$, sometimes denoted by $L_{A/k}$ when the ring $k$ is not clear from context, is an object in ${\operatorname{Mod}}(A)$ constructed as follows: pick an $A_\bullet \in {\operatorname{SAlg}}_k$ with an equivalence $f:A_\bullet \to A$ such that each $A_n$ is a free $k$-algebra. Then the $A$-modules $\Omega^1_{A_n/k} \otimes_{A_n} A$ assemble naturally to form a simplicial $A$-module. The corresponding object in ${\operatorname{Mod}}(A)$ is called the cotangent complex $L_A$. This construction can be generalized to an arbitrary $A \in {\operatorname{SAlg}}_k$, and also generalizes to simplicial rings in an arbitrary topos. Note that in each case the cotangent complex is actually {\em connective}; non-connective cotangent complexes arise if one works with Artin stacks, but these do not concern us. A non-abelian derived functor approach to the cotangent complex can be found in \cite{QuillenCRC}; the book \cite{IllusieCC1} is the original source, and contains the details of the above construction.
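For orientation, we record two standard computations, both easily verified from the definition. If $A = k[x_1,\dots,x_n]$ is a free $k$-algebra, the constant simplicial resolution shows that $L_A \simeq \Omega^1_{A/k}$ is a free $A$-module of rank $n$. If $A = P/(f)$ with $P = k[x_1,\dots,x_n]$ and $f$ a nonzerodivisor, then $L_A$ is represented by the two-term complex
\[ L_A \simeq \left( (f)/(f^2) \xrightarrow{\ d\ } \Omega^1_{P/k} \otimes_P A \right) \]
with $(f)/(f^2)$ placed in homological degree $1$; in particular, $\pi_0 L_A \simeq \Omega^1_{A/k}$.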
\subsection{Derivations}\label{defder} Let $A \in {\operatorname{SAlg}}_k$, and let $M$ be an $A$-module. A $k$-linear derivation $A \to M$ is, by definition, a $k$-algebra section of the projection map $A \oplus M \to A$, where $A \oplus M$ is given an $A$-algebra structure via the usual $A$-action on $M$, and with $M \subset A \oplus M$ being a square-zero ideal; one can easily check that this recovers the usual notion when $A$ and $M$ are discrete. Let ${\operatorname{Der}}_k(A,M)$ denote the $\infty$-groupoid of all $k$-linear derivations $A \to M$ (we drop the subscript $k$ from the notation when the base ring $k$ is fixed). By construction of the cotangent complex, one has a derivation $d:A \to L_A$. It is a theorem that this derivation is the universal one:
\begin{theorem}
\label{thm:dag-der}
With notation as above, composition with $d$ induces a functorial equivalence
\begin{equation} \label{defdereq} \operatorname{Hom}(L_A,M) \simeq {\operatorname{Der}}(A,M). \end{equation}
\end{theorem}
The case when $k$, $A$, and $M$ are discrete can be found in \cite[Corollary II.1.2.4.3]{IllusieCC1}, while the case where $M$ is allowed to be a complex can be found in \cite[Proposition II.1.2.6.7]{IllusieCC1}. In \cite{LurieDAG}, the cotangent complex is {\em defined} using the preceding property (see the discussion preceding \cite[Remark 3.2.8]{LurieDAG}). To see that this construction agrees with Illusie's construction as explained in \S \ref{cotcomplex}, one observes that there is a map from Lurie's cotangent complex to Illusie's. Moreover, this map is an isomorphism when the algebra is free (use \cite[Lemma 3.2.13]{LurieDAG}), and therefore always an isomorphism by passage to free resolutions. A model-categorical approach to the universal properties of the cotangent complex can be found in \cite{QuillenCRC}.
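As a sanity check of Theorem \ref{thm:dag-der}, take $k$, $A = k[x]$, and $M$ all discrete. A $k$-linear derivation $D:A \to M$ is determined by the element $D(x) \in M$, so $\operatorname{Der}_k(A,M) \simeq M$. On the other side, $L_A \simeq \Omega^1_{A/k} = A\,dx$ is free of rank one, so
\[ \operatorname{Hom}_A(L_A,M) \simeq \operatorname{Hom}_A(A\,dx,M) \simeq M \]
with no higher homotopy, and the equivalence \eqref{defdereq} sends $D$ to the $A$-linear map determined by $dx \mapsto D(x)$.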
\subsection{Functoriality of derivations} \label{derfunc} Let $f:A \to B$ be a map in $ {\operatorname{SAlg}}_k$, and let $N$ be a $B$-module. Using formula \eqref{defdereq}, we see
\[ {\operatorname{Der}}(A,N) \simeq \operatorname{Hom}_A(L_A,N) \simeq \operatorname{Hom}_B(L_A \otimes_A B,N). \]
In other words, the natural derivation $A \to L_A \to L_A \otimes_A B$ is the universal derivation from $A$ into a $B$-module.
\subsection{The transitivity triangle} \label{dertrans} Let $f:A \to B$ be a map in $ {\operatorname{SAlg}}_k$. Composing the natural derivation $B \to L_B$ with $f$ defines a derivation $A \to L_B$ and, by \S \ref{derfunc}, a map $L_A \otimes_A B \to L_B$. One can show that this map induces an identification
\[ L_B / (L_A \otimes_A B) \simeq L_{B/A} \]
in the stable $\infty$-category of $B$-modules. At the level of triangulated categories, this gives rise to an exact triangle (see \cite[Proposition 3.2.12]{LurieDAG} and \cite[Proposition II.2.1.2]{IllusieCC1}), called the transitivity triangle, of the form
\[ L_A \otimes_A B \to L_B \to L_{B/A}. \]
The associated boundary map $L_{B/A}[-1] \to L_A \otimes_A B$ is called the {\em Kodaira-Spencer class} of $f$, and is denoted by $\kappa(f)$.
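A standard example: suppose $A$ is discrete and $B = A/I$ with $I \subset A$ generated by a regular sequence, so that $L_{B/A} \simeq (I/I^2)[1]$. The long exact homotopy sequence of the transitivity triangle then recovers the usual conormal sequence
\[ I/I^2 \to \Omega^1_{A/k} \otimes_A B \to \Omega^1_{B/k} \to 0, \]
and $\pi_0$ of the Kodaira-Spencer class $\kappa(f)$ is the familiar map $I/I^2 \to \Omega^1_{A/k} \otimes_A B$ sending $\bar{a}$ to $da \otimes 1$.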
\subsection{Base change} \label{derbasechange} Let $A,B \in {\operatorname{SAlg}}_k$. Then the composite map
\[ L_A \otimes B \to L_{A \otimes B} \to L_{A \otimes B/B}\]
is an isomorphism. One way to see this is by passage to free resolutions; see \cite[Proposition 3.2.9]{LurieDAG} for the corresponding scheme-theoretic statement. Alternately, one can also prove this directly using \S \ref{dertrans}. This fact (for $k$, $A$, and $B$ discrete) can be found in \cite[Proposition II.2.2.1]{IllusieCC1}.
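As a sanity check, take $A = k[x]$; since $A$ is free over $k$, the (derived) tensor product is simply $A \otimes B \simeq B[x]$, and the displayed isomorphism reads
\[ L_{k[x]/k} \otimes_{k[x]} B[x] \simeq B[x]\,dx \simeq L_{B[x]/B}, \]
exactly as one expects from the behavior of ordinary K\"ahler differentials under base change.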
\subsection{Extending derivations across morphisms } \label{derobs} Let $f:A \to B$ be a map in $ {\operatorname{SAlg}}_k$, and let $M$ be an $A$-module. Given a derivation $D:A \to M$, a natural question to ask is if the following diagram can be filled
\begin{equation} \label{derfuncdiag} \xymatrix{ A \ar[r]^D \ar[d] & M \ar[d] \\
B & M \otimes_A B } \end{equation}
using a $k$-linear derivation $B \to M \otimes_A B$. Formally speaking, we are asking the following: given a $k$-algebra section $s_D:A \to A \oplus M$ of the projection map $A \oplus M \to A$, when can the diagram
\[ \xymatrix{ A \ar[r]^{s_D} \ar[d] & A \oplus M \ar[d] \\
B & B \oplus M \otimes_A B }\]
be filled with a $k$-algebra homomorphism $B \to B \oplus M \otimes_A B$ splitting the projection $B \oplus M \otimes_A B \to B$? By Theorem \ref{thm:dag-der} and its functoriality in $A$ and $M$, the preceding question is equivalent to asking if
\[ \xymatrix{ L_A \ar[r] \ar[d] & M \ar[d] \\
L_B & M \otimes_A B }\]
can be filled using a $B$-module map $L_B \to M \otimes_A B$; here the horizontal map $L_A \to M$ is the map defined by $D$. We may refine this diagram to obtain
\[ \xymatrix{ L_A \ar[r] \ar[d] & M \ar[d] \\
L_A \otimes_A B \ar[r] \ar[d] & M \otimes_A B \ar@{=}[d] \\
L_B & M \otimes_A B. }\]
Thus, requiring the existence of a $k$-linear derivation $B \to M \otimes_A B$ filling diagram \eqref{derfuncdiag} is equivalent to requiring that the map $L_A \otimes_A B \to M \otimes_A B$ induced by the original derivation $A \to M$ factors through $L_A \otimes_A B \to L_B$. Moreover, the space of all possible ways of filling the diagram above is tautologically the homotopy-fiber of
\[ \operatorname{Hom}_B(L_B,M \otimes_A B) \simeq {\operatorname{Der}}_k(B,M \otimes_A B) \to {\operatorname{Der}}_k(A,M) \simeq \operatorname{Hom}_B(L_A \otimes_A B, M \otimes_A B) \]
over the point corresponding to $D:A \to M$. Using the rotated transitivity triangle
\[ L_{B/A}[-1] \to L_A \otimes_A B \to L_B \]
we see that such a factorization exists if and only if the induced map $L_{B/A}[-1] \to M \otimes_A B$ is trivial. We denote this last map by $\mathrm{ob}(f,D)$ and refer to it as the obstruction to extending $D$ across $f$. When $\mathrm{ob}(f,D)$ vanishes, the description as a homotopy-fiber above shows that the $\infty$-groupoid of all possible ways of filling in diagram \eqref{derfuncdiag} by a $k$-linear derivation $B \to M \otimes_A B$ (together with the relevant homotopy) is naturally a torsor under $\operatorname{Hom}(L_{B/A},M \otimes_A B)$; in particular, the set of all possible extensions (up to homotopy) of $D$ across $f$ is a torsor under $\pi_0(\operatorname{Hom}(L_{B/A},M \otimes_A B))$. Generalizing this discussion to simplicial rings in a topos, and then specializing to the case of Deligne-Mumford stacks, we obtain:
\begin{theorem}
\label{thm:dag-obs-der}
Let $f:X \to Y$ be a flat morphism of Deligne-Mumford stacks, and let $D_Y:L_Y \to \mathcal{M}$ be a derivation on $Y$ into a connective quasi-coherent complex $\mathcal{M}$ of $\mathcal{O}_Y$-modules. Then the obstruction to the existence of a derivation $D_X:L_X \to f^* \mathcal{M}$ commuting with $f^* D_Y$ is the map
\[ \mathrm{ob}(f,f^* D_Y): L_{X/Y}[-1] \stackrel{\kappa(f)}{\to} f^* L_Y \stackrel{f^* D_Y}{\to} f^*\mathcal{M} \]
where the map $\kappa(f)$ is the Kodaira-Spencer class for $f$. When $\mathrm{ob}(f,f^* D_Y)$ vanishes, the set of all pairs $(D_X:L_X \to f^*\mathcal{M},H:D_X \to f^* D_Y)$ (where $D_X$ is a derivation, and $H$ is a homotopy expressing the commutativity of $D_X$ with $f^*D_Y$) up to homotopy is a torsor for $\operatorname{Ext}^0_X(L_{X/Y},f^*\mathcal{M})$. Moreover, the $\infty$-groupoid of automorphisms of any such pair is equivalent to $\operatorname{Hom}_X(L_{X/Y},f^*\mathcal{M}[-1])$.
\end{theorem}
Theorem \ref{thm:dag-obs-der} can be easily checked in the case where $\mathcal{M}$ is discrete (in which case the automorphisms mentioned at the end of Theorem \ref{thm:dag-obs-der} are all trivial). The general case comes at no extra cost, and the additional flexibility of allowing $\mathcal{M}$ to be a genuine complex instead of a sheaf will allow us later to treat the obstruction theory of square-zero extensions as a special case of the obstruction theory of derivations as presented in Theorem \ref{thm:dag-obs-der}; see \S \ref{obssqzero}, especially Theorem \ref{thm:dag-sqzero-obs}, which is essentially equivalent to the case of Theorem \ref{thm:dag-obs-der} when $\mathcal{M}$ is taken to be a sheaf placed in homological degree $1$. Theorem \ref{thm:dag-obs-der} is not stated explicitly in \cite{IllusieCC1} for the simple reason that Illusie chooses not to develop an obstruction theory for derivations.
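A degenerate but instructive special case of Theorem \ref{thm:dag-obs-der}: if $f$ is \'etale, then $L_{X/Y} \simeq 0$, so the obstruction vanishes for trivial reasons and the torsor in question is a torsor under the zero group. In other words, a derivation $D_Y:L_Y \to \mathcal{M}$ extends uniquely (up to contractible ambiguity) along an \'etale morphism, mirroring the classical fact that derivations and infinitesimal deformations pull back uniquely along \'etale maps.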
\subsection{Compatibilities for obstructions with respect to a morphism} \label{compat} Let $A \stackrel{f}{\to} B \stackrel{g}{\to} C$ be composable maps in $ {\operatorname{SAlg}}_k$. Given an $A$-module $M$ and a derivation $D:A \to M$, the discussion in \S \ref{derobs} produces maps
\[ \mathrm{ob}(f,D):L_{B/A}[-1] \to M \otimes_A B \quad \textrm{and} \quad \mathrm{ob}(g \circ f,D):L_{C/A}[-1] \to M \otimes_A C\]
which are obstructions to extending the derivation $D$ across $f$ and $g\circ f$ respectively. These obstructions are compatible in the sense that the following diagram commutes:
\[ \xymatrix{ L_{B/A} \otimes_B C \ar[rr]^-{\mathrm{ob}(f,D) \otimes_B C} \ar[d] & & M \otimes_A B \otimes_B C[1] \simeq M \otimes_A C[1] \ar[d] \\
L_{C/A} \ar[rr]^-{\mathrm{ob}(g\circ f,D)} & & M \otimes_A C [1]. } \]
This compatibility follows formally from the commutativity of the following diagram (which we leave to the reader to verify)
\[ \xymatrix{ L_{B/A}[-1] \otimes_B C \ar[r] \ar[d] & L_A \otimes_A B \otimes_B C \ar[r] \ar[d]^{\simeq} & L_B \otimes_B C \ar[d] \\
L_{C/A}[-1] \ar[r] \ar[rd] & L_A \otimes_A C \ar[r] \ar[d] & L_C \\
& M \otimes_A C. & } \]
Here the first row is the exact triangle induced by tensoring the (rotated) transitivity triangle for $A \to B$ with $C$, while the second row is the transitivity triangle for $A \to C$. Generalizing this discussion to simplicial rings in a topos and then specializing to the case of Deligne-Mumford stacks, we obtain:
\begin{theorem}
\label{thm:dag-obs-der-compat}
Let $g:Y \to S$ and $f:X \to S$ be flat morphisms of Deligne-Mumford stacks, and let $\pi:Y \to X$ be an $S$-morphism. Let $D_S:L_S \to \mathcal{M}$ be a derivation on $S$ into a connective quasi-coherent complex $\mathcal{M}$ of $\mathcal{O}_S$-modules. Then the obstructions $\mathrm{ob}(f,f^* D_S)$ and $\mathrm{ob}(g,g^* D_S)$ (as defined in Theorem \ref{thm:dag-obs-der}) are compatible in the sense that
\[ \xymatrix{ \pi^* L_{X/S}[-1] \ar[rr]^{\pi^*} \ar[rrd]_{\pi^* \mathrm{ob}(f,f^* D_S)} & & L_{Y/S}[-1] \ar[d]^{\mathrm{ob}(g,g^* D_S)} \\
& & \pi^* f^* \mathcal{M} \simeq g^* \mathcal{M} } \]
is commutative, i.e., a canonical homotopy expressing the commutativity exists. In particular, the cohomology classes
\[ \mathrm{ob}(f,f^* D_S) \in \operatorname{Ext}^1_X(L_{X/S},f^*\mathcal{M}) \quad \textrm{and} \quad \mathrm{ob}(g,g^* D_S) \in \operatorname{Ext}^1_Y(L_{Y/S},g^* \mathcal{M}) \]
map to the same class in
\[ \operatorname{Ext}^1_Y(\pi^* L_{X/S}, \pi^* f^* \mathcal{M}) \simeq \operatorname{Ext}^1_Y(\pi^* L_{X/S}, g^* \mathcal{M}) \]
under the natural maps.
\end{theorem}
This theorem is not stated explicitly in \cite{LurieDAG} or \cite{IllusieCC1}, but follows from the formula given in Theorem \ref{thm:dag-obs-der} and the functoriality in $A$ of Theorem \ref{thm:dag-der}.
\subsection{Square-zero extensions} \label{sqzeroext} A square-zero extension of an $A \in {\operatorname{SAlg}}_k$ by an $A$-module $M$ is, by definition, a derivation $A \to M[1]$. In order to see the connection with the usual definition, we use the following construction: a derivation $D:A \to M[1]$ gives rise to, by definition, a $k$-algebra section $s_D:A \to A \oplus M[1]$ of the projection map $A \oplus M[1] \to A$. Hence, we can form the following pullback square
\[ \xymatrix{ A^D \ar[r] \ar[d] & A \ar[d]^{s_D} \\
A \ar[r]^-{s_0} & A \oplus M[1] } \]
where $s_0$ is the map associated to the $0$ derivation $A \to M[1]$, i.e., the standard section. When $A$ and $M$ are discrete, one calculates that $A^D$ is also discrete, and that the algebra map $A^D \to A$ is surjective with kernel a square-zero ideal isomorphic to $M$, justifying the choice of terminology. There also exists an intrinsic definition of square-zero extensions, and it is a theorem of Lurie (see the $k = \infty$ and $n = 0$ case of \cite[Theorem 8.4.1.26]{LurieHA}) that the preceding construction produces all such square-zero extensions when certain connectivity assumptions on $M$ (harmless for applications we have in mind) are satisfied. Hence, we will often abuse notation and denote a square-zero extension of $A$ by $M$ via an algebra map $\tilde{A} \to A$ with kernel $M$. Generalizing this discussion to rings in a topos and then specializing to the case of Deligne-Mumford stacks, we obtain:
\begin{theorem}
\label{thm:dag-sq-zero}
Let $X$ be a Deligne-Mumford stack over some base scheme $S$, and let $\mathcal{I} \in {\operatorname{QCoh}}(X)$. Then the construction above defines an equivalence between the groupoid $\operatorname{Hom}(L_{X/S},\mathcal{I}[1])$ and the groupoid of all square-zero extensions of $X$ by $\mathcal{I}$ over $S$.
\end{theorem}
The notion of ``square-zero extensions'' used in Theorem \ref{thm:dag-sq-zero} coincides with that of \cite[\S III.1]{IllusieCC1}. Theorem \ref{thm:dag-sq-zero} can be deduced from $k = 0$ case of \cite[Proposition 3.3.5]{LurieDAG}, and can also be found in \cite[Theorem III.1.2.3]{IllusieCC1}.
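For example, if $X$ is smooth over $S$, then $L_{X/S} \simeq \Omega^1_{X/S}$ is a vector bundle, so by Theorem \ref{thm:dag-sq-zero} the isomorphism classes of square-zero extensions of $X$ by $\mathcal{I}$ over $S$ are classified by
\[ \pi_0 \operatorname{Hom}(L_{X/S},\mathcal{I}[1]) \simeq \operatorname{Ext}^1_X(\Omega^1_{X/S},\mathcal{I}). \]
In particular, if $X$ is also affine, this group vanishes and every such square-zero extension is trivial, recovering the formal smoothness of smooth affine schemes.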
\subsection{Extending square-zero extensions across morphisms and compatibilities} \label{obssqzero} Let $f:A \to B$ be a map in $ {\operatorname{SAlg}}_k$, and let $M$ be an $A$-module. Given a square-zero extension $\tilde{A} \to A$ of $A$ by $M$, a natural question to ask is if there exists a square-zero extension $\tilde{B} \to B$ of $B$ by $M \otimes_A B$ and a map $\tilde{A} \to \tilde{B}$ such that the following diagram commutes and is a pushout\footnote{Note that when $A, M$ and $B$ are discrete and $f$ is flat, the rings $\tilde{A}$ and $\tilde{B}$ are necessarily discrete with the map $\tilde{A} \to \tilde{B}$ being flat by the local flatness criterion. Hence, the preceding question generalizes the ordinary deformation-theoretic question of extending square-zero deformations of the target of a flat morphism of Deligne-Mumford stacks to that of the source.}:
\[ \xymatrix{ \tilde{A} \ar[r] \ar[d] & A \ar[d] \\
\tilde{B} \ar[r] & B } \]
Using our definition of square-zero extensions from \S \ref{sqzeroext}, this question is equivalent to the following: given a derivation $D:A \to M[1]$, when does there exist a derivation $D':B \to M \otimes_A B[1]$ such that the following diagram commutes?
\[ \xymatrix{ A \ar[r]^D \ar[d] & M[1] \ar[d] \\
B \ar[r]^-{D'} & M \otimes_A B[1]. } \]
Using the obstruction theory explained in \S \ref{derobs}, we find that such an extension exists if and only if $\mathrm{ob}(f,D)$ vanishes. When $\mathrm{ob}(f,D)$ does vanish, the $\infty$-groupoid of all possible extensions is naturally a torsor under $\operatorname{Hom}(L_{B/A},M \otimes_A B[1])$; in particular, the set of all possible extensions (up to homotopy) of $\tilde{A} \to A$ across $f$ is a torsor under $\pi_0(\operatorname{Hom}(L_{B/A},M \otimes_A B[1]))$. Generalizing this discussion to rings in a topos and then specializing to the case of Deligne-Mumford stacks, we obtain:
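In the situation of most classical interest, $A$, $B$, and $M$ are discrete and $f$ is flat, so that $\tilde{B} \to B$ is an ordinary flat square-zero deformation. The obstruction $\mathrm{ob}(f,D)$ then defines a class in
\[ \pi_0 \operatorname{Hom}_B(L_{B/A}[-1], M \otimes_A B[1]) \simeq \operatorname{Ext}^2_B(L_{B/A}, M \otimes_A B). \]
If moreover $f$ is smooth and $B$ is affine, then $L_{B/A} \simeq \Omega^1_{B/A}$ is a projective $B$-module, so both this $\operatorname{Ext}^2$ and the $\operatorname{Ext}^1$ governing the set of choices vanish: a smooth affine scheme deforms, uniquely up to isomorphism, along any square-zero extension of its base.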
\begin{theorem}
\label{thm:dag-sqzero-obs}
Let $f:X \to S$ be a flat morphism of Deligne-Mumford stacks. Fix a quasi-coherent $\mathcal{O}_S$-module $\mathcal{I}$, and a square-zero thickening $S \hookrightarrow S'$ of $S$ by $\mathcal{I}$ classified by a derivation $D_S:L_S \to \mathcal{I}[1]$. The obstruction to finding a square-zero thickening $X \hookrightarrow X'$ of $X$ by $f^* \mathcal{I}$ lying above $S \hookrightarrow S'$ (via a flat map $X' \to S'$) is the map
\[ \mathrm{ob}(f,f^* D_S):L_{X/S}[-1] \stackrel{\kappa(f)}{\to} f^* L_S \stackrel{f^* D_S}{\to} f^*\mathcal{I}[1] \]
where the map $\kappa(f)$ is the Kodaira-Spencer class of $f$. When $\mathrm{ob}(f,f^* D_S)$ vanishes, the set of all pairs $(X \hookrightarrow X', f':X' \to S')$ (where $X'$ is a thickening of $X$ by $f^* \mathcal{I}$, and $f'$ is a flat map deforming $f$) up to isomorphism is a torsor for $\operatorname{Ext}^1_X(L_{X/S},f^*\mathcal{I})$. Moreover, the group of automorphisms of any such pair is canonically identified with $\operatorname{Ext}^0_X(L_{X/S},f^* \mathcal{I})$.
\end{theorem}
Theorem \ref{thm:dag-sqzero-obs} follows from \cite[Proposition 8.4.2.5]{LurieHA}. Everything except the formula for $\mathrm{ob}(f,f^* D_S)$ can also be found in \cite[Proposition III.2.3.2]{IllusieCC1}, and the formula can be found in \cite[\S III.2.3.4]{IllusieCC1}. Finally, combining Theorem \ref{thm:dag-sqzero-obs} with Theorem \ref{thm:dag-obs-der-compat}, we obtain:
\begin{theorem}
\label{thm:dag-sqzero-obs-compat}
Let $g:Y \to S$ and $f:X \to S$ be flat morphisms of Deligne-Mumford stacks, and let $\pi:Y \to X$ be an $S$-morphism. Fix a quasi-coherent $\mathcal{O}_S$-module $\mathcal{I}$, and a square-zero thickening $S \hookrightarrow S'$ of $S$ by $\mathcal{I}$ classified by a derivation $D_S:L_S \to \mathcal{I}[1]$. Then the obstructions $\mathrm{ob}(f,f^* D_S)$ and $\mathrm{ob}(g,g^* D_S)$ (as defined in Theorem \ref{thm:dag-sqzero-obs}) are compatible in the sense that
\[ \xymatrix{ \pi^* L_{X/S}[-1] \ar[rr]^{\pi^*} \ar[rrd]_{\pi^* \mathrm{ob}(f,f^* D_S)} & & L_{Y/S}[-1] \ar[d]^{\mathrm{ob}(g,g^* D_S)} \\
& & \pi^* f^* \mathcal{I}[1] \simeq g^* \mathcal{I}[1] } \]
is commutative, i.e., a canonical homotopy expressing the commutativity exists. In particular, the cohomology classes
\[ \mathrm{ob}(f,f^* D_S) \in \operatorname{Ext}^2_X(L_{X/S},f^*\mathcal{I}) \quad \textrm{and} \quad \mathrm{ob}(g,g^* D_S) \in \operatorname{Ext}^2_Y(L_{Y/S},g^* \mathcal{I}) \]
map to the same class in
\[ \operatorname{Ext}^2_Y(\pi^* L_{X/S}, \pi^* f^* \mathcal{I}) \simeq \operatorname{Ext}^2_Y(\pi^* L_{X/S}, g^* \mathcal{I}) \]
under the natural maps.
\end{theorem}
\section{Introduction}
\label{sec:intro}
The moduli space $\mathcal{M}_g$ of curves and its Deligne-Mumford compactification $\overline{\mathcal{M}_g}$ are two fundamental objects of modern mathematics with wide-ranging applications. A key to their utility is the modularity of the compactification $\overline{\mathcal{M}_g}$: the compactification itself parametrizes curves, possibly with mild singularities. Recent advances in the minimal model program \cite{BC_CP_HCD_MJ_EOM} have provided us with a good higher dimensional analogue of this phenomenon: after fixing the necessary numerical invariants, one now has access to a compact moduli space $\overline{\mathcal{M}_h}$ that contains the space $\mathcal{M}_h$ of smooth objects as an open subspace, with the space $\overline{\mathcal{M}_h}$ itself parametrizing mildly singular varieties called {\em stable} varieties \cite{VE_QPM, KJ_SO, KollarModuliSurvey}. Although $\overline{\mathcal{M}_h}$ shares many nice properties of $\overline{\mathcal{M}_g}$, e.g., it is a DM-stack of finite type over the base field, it may possibly have many connected components that behave very differently \cite{CF_CCO, VR_MLI}. Hence, almost all available results on the global geometry of $\overline{\mathcal{M}_h}$ pertain to specific components of the moduli of surfaces (e.g., \cite{VanOpstallModProd, VOMA_SDIP, LW_SD, RS_CM, AV_PR_ECO, LY_ACO}) or special components of the moduli of log-stable varieties (e.g., \cite{HP_KS_TJ_COT, HP_KS_TJ_SPT, AV_CMI, HB_SLS, HP_CM}).
Our goal in this paper is to produce results applicable to every component of $\overline{\mathcal{M}_h}$ for any $h$ --- and in particular, to any dimension --- by generalizing the work of Van Opstall \cite{VanOpstallModProd}.
Specifically, we explore the behavior of these moduli spaces under the operation of taking products. To explain our results, let us fix some notation first (precise definitions will be given later). Let $k$ be a field of characteristic $0$. Given a stable variety $Z$ over $k$, let $\mathcal{M}(Z)$ denote the connected component of the appropriate moduli space $\overline{\mathcal{M}_h}$ spanned by $Z$; this space is a Deligne-Mumford stack. Given stable varieties $X$ and $Y$, we show that taking products defines a morphism
\[ \Prod_{X,Y}:\mathcal{M}(X) \times \mathcal{M}(Y) \to \mathcal{M}(X \times Y).\]
Our main local result is
\begin{theorem}
\label{mainthm:local}
The map $\Prod_{X,Y}$ is a finite \'etale cover of Deligne-Mumford stacks for any stable varieties $X$ and $Y$.
\end{theorem}
Going one step further, one may ask when the map $\Prod_{X,Y}$ is an isomorphism. By Theorem \ref{mainthm:local}, it suffices to find a {\em single point} of $\mathcal{M}(X \times Y)$ where $\Prod_{X,Y}$ has degree $1$. If $X$ and $Y$ are isomorphic or even simply deformation equivalent, then $\Prod_{X,Y}$ cannot be an isomorphism due to the symmetry of the source. Our main global result is that this is essentially the only obstruction, provided we work with {\em smooth} varieties.
\begin{theorem}
\label{mainthm:global}
Let $X$ and $Y$ be two stable varieties such that $X \times Y$ is smooth and neither
$X$ nor $Y$ can be written nontrivially as a product of two stable varieties. If $X$ and $Y$ are not deformation equivalent, then $\Prod_{X,Y}$ is an isomorphism. Otherwise, the map $\Prod_{X,Y}$ is an $S_2$-torsor.
\end{theorem}
The (slightly technical) notion of deformation equivalence above will be discussed more carefully in \S \ref{sec:definitions}. A generalization of Theorem \ref{mainthm:global} applies to smooth stable varieties admitting a product decomposition, as explained in Theorem \ref{theorem:main}. We expect these results to remain true without the smoothness assumption, but we do not know a proof.
Theorem \ref{mainthm:global} is a consequence of the following more general result
about canonically polarized manifolds (i.e., compact complex manifolds with ample
canonical bundle), whose proof occupies \S\ref{sec:globaltheory} below.
\begin{theorem}
\label{mainthm:productirreducible}
Every canonically polarized manifold decomposes uniquely into
a product of irreducible factors.
\end{theorem}
\subsection*{Comments on proofs}
Granting the existence of a proper moduli stack of stable varieties, Theorem \ref{mainthm:local} immediately reduces to a statement about the deformation theory of stable varieties. We approach this statement via the Abramovich-Hassett theory of canonical covering stacks which relates the {\em admissible} deformation theory of a stable variety $X$ with the usual deformation theory of an associated stack $X^ {\mathrm{can}}$. The key point then (following obvious notation) is to show that ${\operatorname{Def}}_{X^ {\mathrm{can}}} \times {\operatorname{Def}}_{Y^ {\mathrm{can}}}$ is equivalent to ${\operatorname{Def}}_{ (X \times Y)^ {\mathrm{can}} }$; we show this by equating both sides with ${\operatorname{Def}}_{X^ {\mathrm{can}} \times Y^ {\mathrm{can}}}$ via a detailed study of the relevant cotangent complexes.
Theorem \ref{mainthm:global} and Theorem \ref{mainthm:productirreducible} are proven
using differential-geometric methods. The main input is the polystability of the
tangent bundle on a canonically polarized manifold (ensured by two theorems of Yau
and Uhlenbeck-Yau), and the fact that a direct sum decomposition of the tangent
bundle induces a product decomposition of the universal cover (from a theorem of Beauville).
\subsection*{Organization of the paper}
We set up the problem at hand, in \S \ref{sec:stablevarieties}, by describing the appropriate moduli functors for families of stable varieties and constructing the product map. Theorem \ref{mainthm:local} is proven in \S \ref{sec:localtheory} by first considering a general theorem about deformations of products in \S \ref{sec:defthyabstractprod} and then specializing to our moduli spaces in \S \ref{sec:qgordefthy}. These proofs use the language of derived algebraic geometry, which is reviewed in Appendix \ref{sec:dag}. Finally, \S \ref{sec:globaltheory} explains how many ways stable varieties can decompose as products, under assumptions about smoothness; in the many cases for which stable varieties decompose uniquely as products, the associated product map is injective.
\subsection*{Notation}
Throughout this paper, we use $k$ to denote a field of characteristic $0$ with two exceptions: in \S \ref{sec:localtheory}, we allow $k$ to have positive characteristic unless otherwise indicated, and in Appendix \ref{sec:dag}, we allow $k$ to be an arbitrary ring.
\subsection*{Acknowledgements} We thank Dan Abramovich for suggesting the directions pursued here, and the AMS and NSF for providing wonderful working conditions at the MRC program at Snowbird in July 2010, where this project was initiated.
In addition, we would like to thank Stefan Kebekus and Chenyang Xu for help with proving Lemma \ref{stablevarinfaut}, and Zhiyu Tian for working with us when the project started.
\section{Stable varieties and construction of the product map} \label{sec:stablevarieties}
In this section, we define the moduli functor parametrizing stable varieties, and show that it is representable by a proper Deligne-Mumford stack; the properness uses recent results in the minimal model program due to Hacon-McKernan-Xu (unpublished). We then show there is a well-defined product map, which we will investigate in the sequel.
\subsection{Definitions of stable varieties and moduli functors} \label{sec:definitions}
As stated before, the moduli space of smooth projective curves of genus at least $2$ may be compactified by adding points representing some mildly singular curves obtained from smooth curves by a limiting procedure; the resulting curves are called {\em stable curves}. To compactify the space of birational equivalence classes of varieties of general type in higher dimensions, one is then confronted with the problem of determining the singular varieties that should be allowed at the boundary. Mori theory solves this problem by providing a viable candidate definition for higher dimensional {\em stable varieties} and {\em stable families}; the robustness of the solution ensures that the moduli functor thus defined is automatically separated (by an old result of \cite{MatsusakaMumford}) and also proper, granting standard conjectures in higher dimensional geometry that are now theorems.
Our goal in this section is to review the definitions of stable varieties and stable families, and also to say a few words about the resulting moduli space; more information can be found in the survey articles \cite{KovacsYPG,KollarModuliSurvey}. First, we recall some basic definitions. A variety $X$ is said to have {\em log canonical singularities} if $X$ is normal, $\mathbf{Q}$-Gorenstein, and satisfies the following: for a log resolution of singularities $g: \widetilde{X} \to X$ with exceptional divisor $E = \cup_i E_i$, if we write $K_{\widetilde{X}} = g^* K_X + \sum_i a_i E_i$, then we have $a_i \geq -1$ for all $i$. The notion of {\em semi-log canonical singularities} is a non-normal generalization of log canonical singularities. Its definition is almost verbatim the same as that of log canonical singularities, but the log resolution is replaced by a good semi-resolution. We refer the reader to \cite[\S 6.5]{KovacsYPG} and \cite[Definition-Lemma 5.1]{KJ_SO} for more, and simply remark here that such singularities are automatically reduced, satisfy Serre's $S_2$ condition, are $\mathbf{Q}$-Gorenstein, and are Gorenstein in codimension $1$. For a coherent sheaf $\mathcal{F}$ on a noetherian scheme $X$ such that ${\operatorname{Supp}}(\mathcal{F}) = X$, the {\em reflexive hull} $\mathcal{F}^{\ast\ast}$ is defined to be the double dual of $\mathcal{F}$. If $X$ is $S_2$ and $G_1$ (Gorenstein in codimension one) and $\mathcal{F}$ is a line bundle in codimension one, say over $U \subseteq X$, then $\mathcal{F}^{**} \cong j_* (\mathcal{F}|_U)$ \cite[Theorem 1.12]{HR_GD}, where $j : U \to X$ is the natural embedding. The {\em reflexive powers} $\mathcal{F}^{[i]}$ are then defined to be $\big(\mathcal{F}^{\otimes i}\big)^{\ast\ast}$ for any integer $i$ with the convention that $\mathcal{F}^{\otimes i} := \mathcal{H}\mathrm{om}(\mathcal{F},\mathcal{O}_X)^{\otimes -i}$ for $i < 0$; these definitions will typically be applied when $\mathcal{F}$ has generic rank $1$.
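As an illustration of these conventions (a remark, not needed in the sequel): if $X$ is $S_2$ and $G_1$, and $j : U \to X$ is the inclusion of the open locus where $\omega_X$ is a line bundle, then the cited description of reflexive hulls gives
\[
\omega_X^{[m]} = \big(\omega_X^{\otimes m}\big)^{\ast\ast} \cong j_* \big( (\omega_X|_U)^{\otimes m} \big) \qquad \text{for every } m \in \mathbf{Z},
\]
so the reflexive powers of $\omega_X$ are computed entirely on the locus where $\omega_X$ is invertible.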
The main object of study in this paper is contained in the following definition:
\begin{definition}
A proper geometrically connected $k$-variety $X$ is called {\em stable} if $X$ has semi-log canonical singularities and $K_X$ is $\mathbf{Q}$-Cartier and ample.
\end{definition}
Next, we define families. The naive definition of a family of stable varieties (namely, a flat family with stable fibers) leads to pathologies as there are ``too many'' such families; see \cite[\S 7]{KovacsYPG} for examples. The correct definition, given below, imposes certain global constraints on the family. In the sequel, we sometimes refer to such families as {\em admissible} families. The condition appearing below is known as {\em Koll\'ar's condition}.
\begin{definition} \label{def:stablefamily}
Given a $k$-scheme $S$, a {\em stable family} $f: X \to S$ is a proper flat morphism whose fibers are stable varieties and such that, for every $m \in \mathbf{Z}$, the sheaf $\omega_{X/S}^{[m]}$ is flat over $S$ and its formation commutes with base change.
\end{definition}
Finally, we are ready to define the moduli functor of stable varieties. The functor $SlcMod_h$ in \cite{KollarModuliSurvey} is similar but we choose to keep track of automorphisms.
\begin{definition}
Let $h(m)$ be an integer-valued function. The moduli functor $\overline{\mathcal{M}_h}$ of stable varieties with Hilbert function $h$ is defined by setting $\overline{\mathcal{M}_h}(S)$ to be the groupoid of stable families $f: X \to S$ whose fibers have Hilbert function $h$ with respect to $\omega_{X/S}$. Given a stable variety $X$ over $k$, we let $\mathcal{M}(X)$ denote the connected component of $\overline{\mathcal{M}_{h(X)}}$ that contains $[X]$, where $h(X)$ is the Hilbert function of $X$. Then two varieties $X$ and $Y$ are {\em deformation equivalent} if $\mathcal{M}(X)$ and $\mathcal{M}(Y)$ coincide.
\end{definition}
\subsection{Automorphisms of stable varieties}
In this section, we show that $\mathcal{M}(X)$ is a Deligne-Mumford stack for stable varieties $X$, although this fact is surely known to the experts. We start with a lemma that bounds how negative the canonical line bundle on a resolution of singularities of a stable variety can be.
\begin{lemma}
\label{lem:lc-bigness}
Let $X$ be a stable variety over $k$, and let $\pi:Y \to X$ be a semi-resolution with (reduced) exceptional divisor $E$. Then $K_Y + E$ is big.
\end{lemma}
\begin{proof}
Let $E = \sum_i E_i$ be the reduced union of the $\pi$-exceptional divisors. As $X$ has semi-log canonical singularities, we can write
\[ K_Y = \pi^* K_X + \sum_i a_i E_i \]
with $a_i \geq -1$, or equivalently, we can write
\[ K_Y + E = \pi^* K_X + \sum_i b_i E_i\]
with $b_i = a_i + 1 \geq 0$. The stability of $X$ implies that $K_X$ is ample. The preceding formula then expresses $K_Y + E$ as the sum of a big divisor and an effective one, proving bigness.
\end{proof}
We now show that stable varieties do not admit infinitesimal automorphisms; this fact was stated in \cite{VanOpstallModProd}, but the proof was incomplete.
\begin{lemma} \label{stablevarinfaut}
Let $X$ be a stable variety over a field $k$ of characteristic $0$. Then $X$ has no infinitesimal automorphisms.
\end{lemma}
We give two proofs of this result: the first is cohomological and relies on recent work \cite{GD_KS_KSJ_PT_DF}.
\begin{proof}[Proof 1]
We wish to show that
$\operatorname{Hom}_X( L_X, \mathcal{O}_X)=0$. Consider the usual exact triangle
\begin{equation*}
\xymatrix{
\tau_{< 0} L_X \ar[r] & L_X \ar[r] & \Omega_X \ar[r]^-{+1} &
}
\end{equation*}
As $\operatorname{Ext}^i_X(\tau_{< 0} L_X, \mathcal{O}_X) = 0$ for $i = 0, -1$, one has $\operatorname{Hom}_X( L_X, \mathcal{O}_X) \simeq \operatorname{Hom}_X( \Omega_X, \mathcal{O}_X)$, so it is enough to show that the latter is zero. We will show the vanishing of this group when $X$ is normal; the general case is similar but requires an analysis of how $\Omega_{\overline{X}}(\log D)$ relates to $\Omega_X$, where $\mathrm{n} :\overline{X} \to X$ is the normalization of $X$ and $D$ is the divisor of the double locus of $\mathrm{n}$. Because restriction to the smooth locus is fully faithful on the category of reflexive sheaves on a normal scheme, we have
\begin{equation*}
\operatorname{Hom}_X(\Omega_X, \mathcal{O}_X) \simeq
\operatorname{Hom}_{X_\mathrm{sm}}(\Omega_X, \mathcal{O}_X) \simeq
\operatorname{Hom}_{X_\mathrm{sm}} (\omega_X, \Omega_X^{n-1})\simeq
\operatorname{Hom}_X(\omega_X, \Omega_X^{[n-1]}),
\end{equation*}
which vanishes by \cite[Theorem 7.2]{GD_KS_KSJ_PT_DF}.
\end{proof}
The second proof of Lemma \ref{stablevarinfaut} is more direct and geometric.
\begin{proof}[Proof 2]
We give a proof in the case that $X$ is normal and of index $1$, leaving the rest for the reader. Since $X$ is assumed to have an ample canonical bundle, the group sheaf $T \mapsto \operatorname{Aut}(X_T)$ is represented by a closed subgroup scheme $\underline{\operatorname{Aut}}(X) \subset \operatorname{PGL}_n$ for suitable $n$, which allows us to talk about its identity component $\underline{\operatorname{Aut}}^0(X)$. Now assume towards contradiction that $X$ has nontrivial infinitesimal automorphisms, i.e., that $\underline{\operatorname{Aut}}^0(X)$ has a nonzero tangent space at the identity. By Chevalley's theorem, $\underline{\operatorname{Aut}}^0(X)$ either contains a linear algebraic subgroup, or is itself an abelian variety. We treat these cases separately; the idea in either case is to show that the presence of a positive dimensional group action forces $X$ to be fibered over a lower dimensional base with fibers of Kodaira dimension $\leq 0$ (up to an alteration), which is then shown to contradict stability.
Assume first that $\underline{\operatorname{Aut}}^0(X)$ has a nonzero linear algebraic subgroup. Since $\mathrm{char}(k) = 0$, we can pick a one-dimensional connected smooth group scheme $G \subset \underline{\operatorname{Aut}}^0(X)$, necessarily either $\mathbf{G}_m$ or $\mathbf{G}_a$. Let $Z \subset X$ denote the singular locus, and choose a $G$-equivariant resolution of singularities $f:Y \to X$ with exceptional locus $E = f^{-1}(Z)_\mathrm{red}$. Now consider the diagram
\[ \xymatrix{ G \times Y \ar[r]^a \ar[d]^\pi & Y \\ Y } \]
where $a$ is the map defining the group action, while $\pi$ is a projection map. Since the representation $G \to \underline{\operatorname{Aut}}(Y)$ is faithful with $\dim(G) > 0$ and $G$ is smooth, we can choose a smooth divisor $H \subset Y$ such that the restriction of $a$ to $G \times H$ is dominant and generically \'etale. By compactifying $\pi|_H$ and resolving singularities, we obtain a diagram
\[ \xymatrix{ G \times H \ar@{^{(}->}[r]^j \ar[d]^{\pi|_H} & \mathcal{C} \ar[d]^{\overline{\pi}} \ar[r]^{q} & Y \\
H \ar@{=}[r] & H & } \]
where $\mathcal{C}$ is smooth, $\overline{\pi}$ is a proper surjective morphism of relative dimension $1$, $j$ is a dense open immersion, and $q$ is a proper, surjective, generically \'etale map extending $a$. In particular, the map $\overline{\pi}$ restricts to the trivial $\overline{G}$-bundle over some dense open subset of $H$, where $\overline{G} \simeq \P^1$ is the natural projective compactification of $G$. We then have the following possibilities for $G$ and the corresponding intersection numbers of $\omega_{\mathcal{C}}(q^{-1}E)$ with a general fiber of $\overline{\pi}$.
\begin{itemize}
\item $G = \mathbf{G}_m$: The general fiber of $\overline{\pi}$ is a $\P^1$ that passes through a general point of $Y$ and meets $E$ in at most two points: its image in $X$ contains the $\mathbf{G}_m$-orbit through a smooth point, and hence meets $\mathrm{Sing}(X) = f(E)$ in at most $2$ points. Since $\omega_{\mathcal{C}}$ restricts to $\mathcal{O}_{\P^1}(-2)$ on the general fiber of $\overline{\pi}$, we find that $\omega_{\mathcal{C}}(q^{-1} E)$ has degree $\leq 0$ on a general fiber of $\overline{\pi}$.
\item $G = \mathbf{G}_a$: Exactly as above, we find that $\omega_{\mathcal{C}}(q^{-1}E)$ has degree $\leq -1$ on a general fiber of $\overline{\pi}$.
\end{itemize}
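In both cases, the bound comes from the same count: if $F \simeq \P^1$ denotes a general fiber of $\overline{\pi}$, then
\[
\deg\big(\omega_{\mathcal{C}}(q^{-1}E)\big|_F\big) = \deg\big(\omega_{\mathcal{C}}\big|_F\big) + \deg\big(\mathcal{O}_{\mathcal{C}}(q^{-1}E)\big|_F\big) \leq -2 + 2 = 0,
\]
with the second term at most $1$ (rather than $2$) when $G = \mathbf{G}_a$.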
Hence, the bundle $\omega_{\mathcal{C}}(q^{-1}E)$ always has degree $\leq 0$ on a general fiber of $\overline{\pi}$. On the other hand, Lemma \ref{lem:lc-bigness} shows that $\omega_Y(E)$ is big, and thus so is $\omega_{\mathcal{C}}(q^{-1} E) = q^* \omega_Y(E) \otimes \omega_{\mathcal{C}/Y}$ since $\omega_{\mathcal{C}/Y}$ is effective. Thus, the degree of $\omega_{\mathcal{C}}(q^{-1}E)$ on a general fiber of $\overline{\pi}$ should be positive, which is a contradiction, proving the claim in this case.
If $\underline{\operatorname{Aut}}^0(X)$ does not contain a linear algebraic subgroup, then $\underline{\operatorname{Aut}}^0(X)$ is an abelian variety $A$, say of dimension $g$. Assume first that $g \geq \dim(X)$. Since $A$ acts faithfully on $X$, there is an open subset $U \subset X$ consisting of points with trivial $A$-stabilizers. Translating a closed point in $U$ using $A$ then shows that $g = \dim(X)$, and that $X = A$. In particular, $\omega_X$ is trivial, contradicting the ampleness of $\omega_X$. If $g < \dim(X)$, then we argue as in the case of $\mathbf{G}_m$ above, subject to the following changes: use a codimension $g$ subvariety $H \subset Y$ instead of a divisor; use that the restriction of $\omega_{\mathcal{C}}$ to the appropriate general fiber is trivial, as abelian varieties have trivial canonical bundle; and observe that the general fibers of $\overline{\pi}$ map to $A$-orbits of smooth points in $X$, and so miss $E$ entirely when mapped to $Y$.
\end{proof}
\begin{remark}
Lemma \ref{stablevarinfaut} admits a simple cohomological proof when $X$ is itself smooth: it suffices to check that $H^0(X,T_X) = 0$, which follows by Serre duality and Kodaira vanishing (using the ampleness of $\omega_X$). An advantage of the cohomological approach is that it also works in characteristic $p$ as long as $X$ lifts to $W_2$ and $\dim(X) < p$. We do not know what happens if either of these assumptions is dropped. The geometric argument in the second proof of Lemma \ref{stablevarinfaut} runs into problems immediately as infinitesimal group actions cannot usually be integrated to positive dimensional group actions in positive characteristic.
\end{remark}
Next, we prove a separation result for the moduli functor:
\begin{lemma}
\label{lemma_separable}
Let $X \to S$ and $Y \to S$ be two families of stable schemes with normal generic fibers over a curve $S$. Let $0 \in S$ be a point such that, setting $U:=S \setminus \{0\}$, we have $X_U \cong Y_U$ as schemes over $U$. Then $X \cong Y$ as schemes over $S$.
\end{lemma}
\begin{proof}
First, we may assume that $S$ is affine, by throwing out a point if necessary. Choose a common resolution $Z$ of $X$ and $Y$. Since $X_U \cong Y_U$, $Z$ can be chosen so that it maps isomorphically to both $X$ and $Y$ over $U$. Let $f : Z \to X$ and $g : Z \to Y$ be the birational morphisms obtained this way. Since $X$ and $Y$ are families of stable schemes over a smooth curve, they are $S_2$ by \cite[Proposition 6.3.1]{GA_EGA_IV_II}. Since both have normal generic fiber, both $X$ and $Y$ are $R_1$. Hence, they are both normal. Also, since both $\omega_{X/U}$ and $\omega_{Y/U}$ are $\mathbf{Q}$-line bundles, so are $\omega_X$ and $\omega_Y$. Therefore, by \cite[Theorem]{KM_IOA}, $(X,X_s)$ and $(Y,Y_s)$ are log canonical for every $s \in S$. In particular, so are $X$ and $Y$, and furthermore, every divisor with negative discrepancy dominates $S$. That is, the canonical divisors of $X$, $Y$ and $Z$ are related by the equations
\begin{equation}
\label{eq:common_resolution_canonical_divisors}
K_Z + M = f^* K_X + F \qquad K_Z + N = g^* K_Y + G,
\end{equation}
where $F$, $G$, $M$ and $N$ are effective $\mathbf{Q}$-divisors, exceptional with respect to the appropriate morphisms, such that every prime divisor in $M$ and $N$ has coefficient at most $1$ and dominates $S$. Furthermore, since $M$ and $N$ are determined by their restrictions over $U$, in fact $M=N$. We use $M$ to denote both divisors.
Let $r$ be the least common multiple of the indices of $K_X$ and $K_Y$. For a divisor $D$ on a scheme $X$, let $R(X, D):= \bigoplus_j H^0(X, \mathcal{O}_X(jD))$ denote the Cox ring of $D$. Then by \eqref{eq:common_resolution_canonical_divisors}, we have
\begin{equation*}
R(X,r K_X) \simeq R(Z, r(f^* K_X + F)) \simeq R(Z,r(K_Z + M)) \simeq R(Z,r(g^*K_Y + G)) \simeq R(Y, r K_Y)
\end{equation*}
where the first isomorphism follows from $F$ being effective and $f$-exceptional (similarly for the last isomorphism, using $G$).
Since both $rK_X$ and $rK_Y$ are ample line bundles (as they are relatively ample over an affine base), we obtain $$X \simeq \mathrm{Proj} \ R(X, rK_X) \simeq \mathrm{Proj} \ R(Y, rK_Y) \simeq Y.$$ Furthermore, since $S$ is affine, this isomorphism is an isomorphism of schemes over $S$.
\end{proof}
The main existence result concerning the moduli functor is
\begin{theorem}
\label{thm:proper_DM}
For a fixed Hilbert function $h$, the moduli functor $\overline{\mathcal{M}_h}$ is a proper Deligne-Mumford stack.
\end{theorem}
\begin{proof}
That $\overline{\mathcal{M}_h}$ is a locally algebraic Artin stack follows from \cite{AbramovichHassett} and Artin's method. Lemma \ref{stablevarinfaut} then shows that $\overline{\mathcal{M}_h}$ is actually a Deligne-Mumford stack. For generically normal families, the separatedness of $\overline{\mathcal{M}_h}$ follows from Lemma \ref{lemma_separable}, and the general case can be proven by similar techniques. Finally, properness follows from recently announced results of Hacon-McKernan-Xu (unpublished).
\end{proof}
\subsection{The stability of products}
This section is devoted to constructing the map $\Prod_{X,Y}$ alluded to in \S \ref{sec:intro}. First, we check that a product of stable varieties is stable.
\begin{lemma}
\label{lem:slc}
The product of varieties with only semi-log canonical singularities has semi-log
canonical singularities.
\end{lemma}
\begin{proof}
This is proved in \cite[Theorem~3.2]{VanOpstallModProd}, but we give another argument here. We
use the criterion that $X$ has semi-log canonical singularities if and only if the pair
$(X', D)$ is log canonical, where $X' \to X$ is the normalization
and $D$ the double point divisor.
Let $X_1$ and $X_2$ be two varieties with only semi-log canonical singularities, and set
$X = X_1 \times X_2$. Then we have $X' = X_1' \times X_2'$, and $D = (D_1 \times X_2')
\cup (X_1' \times D_2)$, and therefore $(X', D) = (X_1', D_1) \times (X_2', D_2)$. By
assumption, $X_1$ and $X_2$ are reduced and $\mathbf{Q}$-Gorenstein, and so the same is
clearly true for $X'$. Now let $f_i \colon Y_i \to X_i'$ be log resolutions for the
two pairs; by assumption on the singularities,
\[
K_{Y_i} \equiv f_i^{\ast} \bigl( K_{X_i'} + D_i \bigr) + \sum_j a_{i,j} E_{i,j}
\]
with $a_{i,j} \geq -1$. Setting $Y = Y_1 \times Y_2$ and $f = f_1 \times f_2$,
the morphism $f \colon Y \to X'$ is a log resolution for the pair
$(X', D)$. We compute that
\[
K_Y \equiv p_1^{\ast} K_{Y_1} + p_2^{\ast} K_{Y_2}
\equiv f^{\ast} \bigl( K_{X'} + D \bigr)
+ \sum_j \bigl( a_{1,j} E_{1,j} \times Y_2 + a_{2,j} Y_1 \times E_{2,j} \bigr),
\]
which shows that $(X', D)$ is indeed log canonical.
\end{proof}
If $\mathcal{F}$ is a sheaf on $X$, we let $\mathcal{F}^* = \mathcal{H}\mathrm{om}_{\mathcal{O}_X}(\mathcal{F}, \mathcal{O}_X)$ denote
the $\mathcal{O}_X$-linear dual. Next, we record an elementary algebraic fact that will be used repeatedly in the sequel.
\begin{lemma}
\label{lem:reflex-pullback}
Let $f:X \to S$ be a flat morphism of noetherian schemes, and let $\mathcal{F}$ be a coherent sheaf on $S$. If $\mathcal{F}$ is reflexive, so is $f^* \mathcal{F}$. If $f$ is surjective, the converse is also true.
\end{lemma}
\begin{proof}
The formation of $\mathcal{H}\mathrm{om}_S(\mathcal{E},\mathcal{G})$ commutes with flat base change on $S$ for any pair of coherent sheaves $\mathcal{E}$ and $\mathcal{G}$. In particular, the formation of $\mathcal{F}^*$ commutes with flat base change. Now consider the biduality map $\mathcal{F} \to (\mathcal{F}^*)^*$. Since the reflexivity of $\mathcal{F}$ is precisely the condition that this map is an isomorphism, all claims follow from basic properties of flatness.
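Spelled out, the flat base change identification just mentioned, applied twice, gives
\[
f^*\big(\mathcal{F}^{\ast}\big) \simeq \big(f^*\mathcal{F}\big)^{\ast} \qquad \text{and hence} \qquad f^*\big(\mathcal{F}^{\ast\ast}\big) \simeq \big(f^*\mathcal{F}\big)^{\ast\ast},
\]
under which the biduality map of $f^*\mathcal{F}$ is identified with the pullback of the biduality map of $\mathcal{F}$; pullback along the flat map $f$ preserves isomorphisms, and detects them when $f$ is also surjective, i.e., faithfully flat.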
\end{proof}
Next, we show that exterior products of reflexive sheaves remain reflexive.
\begin{lemma} \label{lem:reflexive}
Let $f \colon X \to B$ and $g \colon Y \to B$ be two flat morphisms, and let $Z = X
\times_B Y$ be their fiber product. Let $\mathcal{F}$ and $\mathcal{G}$ be reflexive sheaves on $X$
and $Y$, respectively. If $\mathcal{F}$ and $\mathcal{G}$ are $B$-flat, then $\mathcal{F} \boxtimes \mathcal{G}$ is a reflexive sheaf on $Z$.
\end{lemma}
\begin{proof}
Let $p_X:Z \to X$ and $p_Y:Z \to Y$ be the two projection maps. By Lemma \ref{lem:reflex-pullback}, the sheaves $p_X^* \mathcal{F}$ and $p_Y^* \mathcal{G}$ are reflexive. Then we have
\begin{align*}
(p_X^* \mathcal{F}^* \otimes p_Y^* \mathcal{G}^*)^*
&= \mathcal{H}\mathrm{om}(p_X^* \mathcal{F}^* \otimes p_Y^* \mathcal{G}^*,\mathcal{O}_Z) \\
&\simeq \mathcal{H}\mathrm{om}( p_X^* \mathcal{F}^*, \mathcal{H}\mathrm{om}(p_Y^* \mathcal{G}^*, \mathcal{O}_Z) ) \qquad (\textrm{by adjunction})\\
&\simeq \mathcal{H}\mathrm{om}( p_X^* \mathcal{F}^*, p_Y^* \mathcal{G}) \qquad (\textrm{by reflexivity of } \mathcal{G}) \\
&\simeq \mathcal{H}\mathrm{om}( p_X^* \mathcal{F}^*, \mathcal{O}_Z) \otimes p_Y^* \mathcal{G} \qquad (\textrm{by flatness of } p_X \textrm{ and } p_Y )\\
&\simeq p_X^* (\mathcal{F}^*)^* \otimes p_Y^* \mathcal{G} \\
&\simeq p_X^* \mathcal{F} \otimes p_Y^* \mathcal{G} \qquad (\textrm{by reflexivity of } \mathcal{F}).
\end{align*}
Thus, $p_X^* \mathcal{F} \otimes p_Y^* \mathcal{G}$ is the dual of a coherent sheaf on $Z$ and, therefore, reflexive.
\end{proof}
We now show that the product of stable families is stable.
\begin{proposition} \label{prop:product}
The fiber product of two stable families is again a stable family.
\end{proposition}
\begin{proof}
Let $f \colon X \to B$ and $g \colon Y \to B$ be two stable families, and set $Z = X
\times_B Y$ and $h \colon Z \to B$. Since $f$ and $g$ are flat, projective, and have
connected fibers, the same is true for $h$. Lemma~\ref{lem:slc} shows that each fiber
$Z_b = X_b \times Y_b$ has semi-log canonical singularities. Next, we verify
Koll\'ar's condition. By assumption, the formation of $\omega_{X/B}^{[m]}$ commutes
with arbitrary base change, and so by \cite[Theorem~5.1.4]{AbramovichHassett}, we may conclude that $\omega_{X/B}^{[m]}$ is flat over $B$; we also reproduce the essential part of this argument below as Lemma \ref{modreflexflat} and Corollary \ref{reflexbcflat} for the convenience of the reader. Since $f$ and
$g$ are flat morphisms, Lemma~\ref{lem:reflexive} shows that
\[
p_X^{\ast} \omega_{X/B}^{[m]} \otimes p_Y^{\ast} \omega_{Y/B}^{[m]}
\]
is again a reflexive sheaf on $Z$. Arguing as in \cite[Lemma~7.3]{KovacsYPG}, we see that it agrees with the
reflexive sheaf $\omega_{Z/B}^{[m]}$ on an open set whose complement has relative codimension
at least two in $Z$. We must therefore have
\[
\omega_{Z/B}^{[m]}
\simeq p_X^{\ast} \omega_{X/B}^{[m]} \otimes p_Y^{\ast} \omega_{Y/B}^{[m]}.
\]
This formula implies that the formation of $\omega_{Z/B}^{[m]}$ commutes with
arbitrary base change, and so Koll\'ar's condition holds for the family $h \colon Z
\to B$. Also, when $m$ is the least common multiple of the index of $X$ and the index
of $Y$, the formula shows that $\omega_{Z/B}^{[m]}$ is a relatively
ample line bundle, proving that $\omega_{Z/B}$ is an ample $\mathbf{Q}$-line bundle. This
concludes the proof that $h \colon Z \to B$ is a stable family.
\end{proof}
The next lemma and following corollary are here for the reader's convenience, as they are used in the proof of Proposition \ref{prop:product}; see \cite{KollarFlatness} for more results like these.
\begin{lemma}
\label{modreflexflat}
Let $f:(R,\mathfrak{m}) \to (S,\mathfrak{n})$ be an essentially finitely presented flat local map of noetherian local rings. Let $M$ be a finitely presented $S$-module. Assume the following:
\begin{enumerate}
\item The locus of points on ${\operatorname{Spec}}(S)$ where $M$ is flat over ${\operatorname{Spec}}(R)$ is dense in the fiber ${\operatorname{Spec}}(S/\mathfrak{m})$.
\item The support of any nonzero $m \in M/\mathfrak{m} M$ contains a generic point of ${\operatorname{Spec}}(S/\mathfrak{m})$.
\end{enumerate}
Then $M$ is $R$-flat.
\end{lemma}
\begin{proof}
By the local flatness criterion, it suffices to check that the natural surjective maps
\[ a_n: \mathfrak{m}^n / \mathfrak{m}^{n+1} \otimes_{R/\mathfrak{m}} M / \mathfrak{m} M \twoheadrightarrow \mathfrak{m}^{n} M / \mathfrak{m}^{n+1} M \]
are isomorphisms for all $n$. Let $K_n = \ker(a_n)$. The assumption that the flat locus is dense in the fibers tells us that $K_n$ is not supported at any of the generic points of ${\operatorname{Spec}}(S/\mathfrak{m})$. Since the source of $a_n$ can be identified with a direct sum of copies of $M/\mathfrak{m} M$, it follows that if $K_n \neq 0$, then $M / \mathfrak{m} M$ admits sections not supported at the generic points of ${\operatorname{Spec}}(S/\mathfrak{m})$. However, this contradicts the second assumption, so $K_n = 0$, proving flatness.
\end{proof}
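For the reader's convenience, we recall the form of the local flatness criterion used above (a standard fact in commutative algebra): for $(R,\mathfrak{m})$ noetherian local and a module $M$ as in the lemma, $M$ is $R$-flat if and only if the natural surjections
\[
\mathfrak{m}^n / \mathfrak{m}^{n+1} \otimes_{R/\mathfrak{m}} M / \mathfrak{m} M \twoheadrightarrow \mathfrak{m}^{n} M / \mathfrak{m}^{n+1} M
\]
are isomorphisms for all $n \geq 0$, or equivalently if $\operatorname{Tor}_1^R(R/\mathfrak{m}, M) = 0$.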
\begin{corollary}
\label{reflexbcflat}
Let $f:X \to S$ be a locally finitely presented flat map of noetherian schemes with fibers that are $S_2$ and of pure dimension $d$. Let $U \subset X$ be an open subset dense in all the fibers. Let $\mathcal{F}$ be a coherent sheaf on $X$ such that $\mathcal{F}|_U$ is $S$-flat. Assume that $\mathcal{F}|_{X_s}$ is reflexive for any $s \in S$. Then $\mathcal{F}$ is $S$-flat.
\end{corollary}
\begin{proof}
There is nothing to show when $d = 0$ as $U = X$ in that case by density, so we may assume $d > 0$. To show the $S$-flatness of $\mathcal{F}$, we will check that the conditions of Lemma \ref{modreflexflat} hold locally on $X$. The first condition is satisfied by assumption on $U$. For the second one, given a point $s \in S$, the reflexivity of $\mathcal{F}|_{X_s}$ tells us that, locally on $X_s$, we may realize $\mathcal{F}|_{X_s}$ as a subsheaf of a direct sum of copies of $\mathcal{O}_{X_s}$. Since $X_s$ is a pure and positive dimensional $S_2$ scheme, all nonzero local sections of $\mathcal{O}_{X_s}$ are supported at some generic point of $X_s$, and so the same is true for $\mathcal{F}|_{X_s}$, showing the second condition is satisfied. By Lemma \ref{modreflexflat}, we conclude that $\mathcal{F}$ is $S$-flat, as desired.
\end{proof}
Proposition \ref{prop:product} shows that the fiber product of two stable families is again a stable family. Hence, we may define the desired product map as follows:
\begin{definition}
For any two stable varieties $X$ and $Y$, let $\Prod_{X,Y}$ be the morphism
\begin{equation*}
\Prod_{X,Y} : \mathcal{M}(X) \times \mathcal{M}(Y) \to \mathcal{M}(X \times Y)
\end{equation*}
defined by taking fiber products of stable families.
\end{definition}
\section{The local theory} \label{sec:localtheory}
Our goal in this section is to explain why taking products of stable varieties defines a finite \'etale morphism on moduli spaces. The two main steps of the proof are: (a) showing that the deformation theory of products behaves in the expected way for a fairly large class of algebro-geometric objects, and (b) dealing with the slightly subtle issues related to the deformation theory of stable varieties, stemming ultimately from Koll\'ar's condition in Definition \ref{def:stablefamily} of admissible stable families. We first study (a) in \S \ref{sec:defthyabstractprod}. Then \S \ref{sec:defmorphisms} contains some general results on deformations of morphisms, which form the key technical ingredients of the proofs in \S \ref{sec:qgordefthy}, where we carry out step (b).
The two main tools used in our proofs are the Abramovich-Hassett description of the admissible deformation theory of stable varieties in terms of the (usual) deformation theory of certain associated stacks (see \cite{AbramovichHassett}), and derived algebraic geometry. The former permits us to use the cotangent complex. The main advantage of the derived perspective is an {\em explicit} construction of deformations and obstructions which makes calculations feasible, especially in the singular case (see the proof of Proposition \ref{deformds}). The relevant background is summarized in Appendix \ref{sec:dag}; we note here that all derived rings that occur in the discussion below are especially mild: they are simplicial $k$-algebras with finite dimensional homology.
\subsection{The deformation theory of products}
\label{sec:defthyabstractprod}
Fix a field $k$. The main result of this section, Theorem \ref{mainthm:defthy}, is a general theorem about the deformation theory of products of two Deligne-Mumford stacks. Under some mild hypotheses on the two stacks, the main one being lack of infinitesimal automorphisms, we show that the deformations of the product are given uniquely by products of deformations of the factors. The meat of the proof is a rather thankless task: we check that obstructions behave predictably under taking products. We will use this result in \S \ref{sec:qgordefthy} to understand the infinitesimal behavior of our global product map $\Prod_{X,Y}$. We remark that the aforementioned stack-theoretic description of the admissible deformation theory of a stable variety necessitates formulating and proving results in the present section for stacks rather than varieties.
We introduce two pieces of notation first.
\begin{notation}
Let ${\operatorname{SArt}}_k$ denote the $\infty$-category of derived local artinian $k$-algebras, i.e., those $A \in {\operatorname{SAlg}}_k$ with $\pi_0(A)$ local with residue field $k$, and $\oplus_i \pi_i(A)$ finite dimensional as a $k$-vector space. The category ${\operatorname{SArt}}_k$ provides test objects for deformation-theoretic questions in derived algebraic geometry, and we call its objects small derived algebras. Any map $A \to B$ in ${\operatorname{SArt}}_k$ that is surjective on $\pi_0$ can be factored as $A = A_0 \to A_1 \to \dots \to A_n = B$ with $A_i \to A_{i+1}$ a square-zero extension of $A_{i+1}$ by $k[j]$ for some $j$ (see \cite[Lemma 6.2.6]{LurieDAG}). We let ${\operatorname{Art}}_k$ denote the full subcategory of ${\operatorname{SArt}}_k$ spanned by discrete small derived algebras. Note that ${\operatorname{Art}}_k$ is simply the ordinary category of artinian local $k$-algebras with residue field $k$; we refer to its objects as small algebras.
\end{notation}
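To illustrate the factorization statement above with a standard discrete example: for $A = k[x]/(x^{n+1})$ with its surjection onto $B = k$, the truncation maps give a factorization
\[
k[x]/(x^{n+1}) \to k[x]/(x^{n}) \to \cdots \to k[x]/(x^{2}) \to k
\]
in which each map is a square-zero extension by $k = k[0]$, since each kernel $(x^{j})/(x^{j+1})$ is one-dimensional over $k$ and squares to zero.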
\begin{notation}
For a Deligne-Mumford $k$-stack $X$, let ${\operatorname{Def}}_X$ be the $\infty$-groupoid-valued functor which associates to $A \in {\operatorname{SArt}}_k$ the $\infty$-groupoid of all pairs $(f:\mathcal{X} \to {\operatorname{Spec}}(A),i:X \to \mathcal{X})$ where $f$ is a flat morphism, and $i$ identifies $X$ with the special fiber of $f$. We will refer to such pairs $(f,i)$ as flat deformations of $X$. When restricted to ${\operatorname{Art}}_k \subset {\operatorname{SArt}}_k$, this definition recovers the ordinary groupoid-valued functor of flat deformations of $X$. For a morphism $\pi:Y \to X$, let ${\operatorname{Def}}_\pi(A)$ be the $\infty$-groupoid of quadruples $(f:\mathcal{X} \to {\operatorname{Spec}}(A),g:\mathcal{Y} \to {\operatorname{Spec}}(A),\pi_A:\mathcal{Y} \to \mathcal{X},\phi)$ where $f$ and $g$ are flat deformations of $X$ and $Y$ respectively to $A$, $\pi_A$ is an $A$-map deforming $\pi$, and $\phi$ is an identification of $\pi_A \otimes_A k$ with $\pi$.
\end{notation}
Given two Deligne-Mumford $k$-stacks $X$ and $Y$, there is a natural morphism
\[ \mathfrak{p}\mathrm{rod}_{X,Y}:{\operatorname{Def}}_X \times {\operatorname{Def}}_Y \to {\operatorname{Def}}_{X \times Y} \]
given by taking fiber products. Our basic theorem concerns the behavior of the map $\mathfrak{p}\mathrm{rod}_{X,Y}$:
\begin{theorem}
\label{mainthm:defthy}
Fix a field $k$. Let $X$ and $Y$ be proper, geometrically connected, and geometrically reduced Deligne-Mumford $k$-stacks with no infinitesimal automorphisms. Then the map $\mathfrak{p}\mathrm{rod}_{X,Y}$ considered above is an isomorphism of functors on ${\operatorname{Art}}_k$.
\end{theorem}
Let us record certain vanishings that are available to us; these results enable fluid passage between the product and its factors.
\begin{lemma}
\label{cotpull}
Fix a field $k$. Let $X$ and $Y$ be proper Deligne-Mumford $k$-stacks. Assume that $X$ admits no infinitesimal automorphisms, and that $H^0(Y,\mathcal{O}_Y) = k$. Then the natural map
\[ \operatorname{Ext}_X^i(L_X,\mathcal{O}_X) \to \operatorname{Ext}^i_{X \times Y}({p_1}^*L_X,\mathcal{O}_{X \times Y}) \]
induced by pulling back along the projection $p_1:X \times Y \to X$ is bijective for $i = 0,1$, and injective for $i = 2$.
\end{lemma}
\begin{proof}
The projection formula and adjointness give natural identifications
\begin{eqnarray*}
\operatorname{Ext}^i_{X \times Y}({p_1}^*L_X,\mathcal{O}_{X \times Y}) &=& \operatorname{Ext}^i_X(L_X,\mathrm{R} {p_1}_* \mathcal{O}_{X \times Y}) \\
&=& \operatorname{Ext}^i_X(L_X,\mathrm{R} \Gamma(Y,\mathcal{O}_Y) \otimes \mathcal{O}_X).
\end{eqnarray*}
Now consider the exact triangle
\[ \mathcal{O}_X \stackrel{u}{\to} \mathrm{R}\Gamma(Y,\mathcal{O}_Y) \otimes_k \mathcal{O}_X \to \mathcal{Q} \]
where $\mathcal{Q}$ is defined to be the homotopy cokernel of $u$. Applying $\operatorname{Ext}^i(L_X,-)$ gives
\[ \operatorname{Ext}^i(L_X,\mathcal{Q}[-1]) \to \operatorname{Ext}^i(L_X,\mathcal{O}_X) \to \operatorname{Ext}^i_{X \times Y}({p_1}^*L_X,\mathcal{O}_{X \times Y}) \to \operatorname{Ext}^i_X(L_X,\mathcal{Q}). \]
Thus, it suffices to check that $\operatorname{Ext}^i(L_X,\mathcal{Q}) = 0$ for $i \leq 1$. Since $L_X$ is connective and $\mathcal{Q} \in D^{\geq 1}(X)$, we immediately see that $\operatorname{Ext}^0(L_X,\mathcal{Q}) = 0$. To check that $\operatorname{Ext}^1(L_X,\mathcal{Q})$ vanishes as well, note that the exact triangle
\[ \tau_{\geq 2} \mathcal{Q}[-1] \to \mathcal{H}^1(\mathcal{Q})[-1] \to \mathcal{Q} \]
shows that $\operatorname{Ext}^1(L_X,\mathcal{Q}) \simeq \operatorname{Ext}^0(L_X,\mathcal{H}^1(\mathcal{Q}))$. By construction, $\mathcal{H}^1(\mathcal{Q}) \simeq H^1(Y,\mathcal{O}_Y) \otimes \mathcal{O}_X$ is a free $\mathcal{O}_X$-module. The desired claim now follows from the stability assumption that $\operatorname{Ext}^0(L_X,\mathcal{O}_X) = 0$.
\end{proof}
We can now prove the desired result.
\begin{proof}[Proof of Theorem \ref{mainthm:defthy}]
We will show that
\[ \mathfrak{p}\mathrm{rod}_{X,Y}(A):{\operatorname{Def}}_X(A) \times {\operatorname{Def}}_Y(A) \to {\operatorname{Def}}_{X \times Y}(A) \]
is an equivalence of groupoids for $A \in {\operatorname{Art}}_k$ by working inductively on $\dim_k(A)$. As $X$ and $Y$ lack infinitesimal automorphisms, the groupoids in question are discrete, and will be viewed as sets. When $\dim_k(A) = 1$, we have $A = k$ and there is nothing to show as both sides are reduced to points. By induction, we may assume that the desired claim is known for all $A \in {\operatorname{Art}}_k$ with $\dim_k(A) \leq n$ for some fixed integer $n$. Given an $\tilde{A} \in {\operatorname{Art}}_k$ with $\dim_k(\tilde{A}) = n + 1$, we can find a surjection $\tilde{A} \to A$ with kernel $k$ as an $A$-module. This gives a diagram
\[ \xymatrix{ {\operatorname{Def}}_X(\tilde{A}) \times {\operatorname{Def}}_Y(\tilde{A}) \ar[rr]^-{\mathfrak{p}\mathrm{rod}_{X,Y}(\tilde{A})} \ar[d] && {\operatorname{Def}}_{X \times Y}(\tilde{A})\ar[d] \\
{\operatorname{Def}}_X(A) \times {\operatorname{Def}}_Y(A) \ar[rr]^-{\mathfrak{p}\mathrm{rod}_{X,Y}(A)}& & {\operatorname{Def}}_{X \times Y}(A) } \]
with $\mathfrak{p}\mathrm{rod}_{X,Y}(A)$ bijective by induction. We will show that $\mathfrak{p}\mathrm{rod}_{X,Y}(\tilde{A})$ is bijective. As there is nothing to show if the bottom row is empty, we may fix a base point of the bottom row, i.e., we fix flat deformations $f:\mathcal{X} \to {\operatorname{Spec}}(A)$ and $g:\mathcal{Y} \to {\operatorname{Spec}}(A)$ of $X$ and $Y$ to ${\operatorname{Spec}}(A)$. Let $\pi_{f,g}:\mathcal{X} \times_{{\operatorname{Spec}}(A)} \mathcal{Y} \to {\operatorname{Spec}}(A)$ denote their fiber product, and let $p:\mathcal{X} \times_{{\operatorname{Spec}}(A)} \mathcal{Y} \to \mathcal{X}$ and $q:\mathcal{X} \times_{{\operatorname{Spec}}(A)} \mathcal{Y} \to \mathcal{Y}$ be the two projection maps.
We first show that all fibers of $\mathfrak{p}\mathrm{rod}_{X,Y}(\tilde{A})$ are non-empty, i.e., if $\pi_{f,g}$ admits a deformation across ${\operatorname{Spec}}(A) \hookrightarrow {\operatorname{Spec}}(\tilde{A})$, then the same is true for $f$ and $g$. Let $D_A:L_A \to k[1]$ be the derivation classifying the surjection $\tilde{A} \to A$ (see Theorem \ref{thm:dag-sq-zero}). Associated to this derivation, we have obstruction classes
\[ \mathrm{ob}(f,f^* D_A):L_{\mathcal{X}/A}[-1] \to \mathcal{O}_X[1] \quad \textrm{and} \quad \mathrm{ob}(g,g^* D_A):L_{\mathcal{Y}/A}[-1] \to \mathcal{O}_{Y}[1] \]
on $\mathcal{X}$ and $\mathcal{Y}$, and the obstruction class
\[ \mathrm{ob}(\pi_{f,g},\pi_{f,g}^* D_A): L_{\mathcal{X} \times_{{\operatorname{Spec}}(A)} \mathcal{Y}/{\operatorname{Spec}}(A)}[-1] \to \mathcal{O}_{\mathcal{X} \times_{{\operatorname{Spec}}(A)} \mathcal{Y}}[1] \]
on the product given by Theorem \ref{thm:dag-sqzero-obs}. By Theorem \ref{thm:dag-obs-der-compat}, these classes are compatible in the sense that the following diagram commutes
\[ \xymatrix{ p^* L_{\mathcal{X}/A}[-1] \ar[rrrr]^-{p^* \mathrm{ob}(f,f^* D_A)} \ar[d] & & & & \mathcal{O}_{\mathcal{X} \times_{{\operatorname{Spec}}(A)} \mathcal{Y}}[1] \ar@{=}[d] \\
L_{\mathcal{X} \times_{{\operatorname{Spec}}(A)} \mathcal{Y}/{\operatorname{Spec}}(A)}[-1] \ar[rrrr]^-{\mathrm{ob}(\pi_{f,g},\pi_{f,g}^* D_A)} & & & & \mathcal{O}_{\mathcal{X} \times_{{\operatorname{Spec}}(A)} \mathcal{Y}}[1] \\
q^* L_{\mathcal{Y}/A}[-1] \ar[rrrr]^-{q^* \mathrm{ob}(g,g^* D_A)} \ar[u] & & & & \mathcal{O}_{\mathcal{X} \times_{{\operatorname{Spec}}(A)} \mathcal{Y}}[1] \ar@{=}[u]. } \]
The assumption that $\pi_{f,g}$ admits a deformation across ${\operatorname{Spec}}(A) \hookrightarrow {\operatorname{Spec}}(\tilde{A})$ ensures that the middle horizontal arrow in the above diagram is $0$. It follows by the commutativity that the same is true for the other horizontal arrows, i.e., that $p^* \mathrm{ob}(f,f^* D_A) = 0$, and similarly for $Y$. To show that $\mathrm{ob}(f,f^* D_A) = 0$, it now suffices to show that the pullback
\[ \pi_0(\operatorname{Hom}_{\mathcal{X}}(L_{\mathcal{X}/{\operatorname{Spec}}(A)}[-1],k \otimes_A \mathcal{O}_\mathcal{X}[1])) \to \pi_0(\operatorname{Hom}_{\mathcal{X} \times_{{\operatorname{Spec}}(A)} \mathcal{Y}}(p_1^*(L_{\mathcal{X}/{\operatorname{Spec}}(A)})[-1],p_1^*(k \otimes_A \mathcal{O}_{\mathcal{X}})[1])) \]
is injective, and similarly for $Y$. Simplifying, this amounts to showing that the pullback
\[ \operatorname{Ext}^2_\mathcal{X}(L_{\mathcal{X}/{\operatorname{Spec}}(A)},\mathcal{O}_X) \to \operatorname{Ext}^2_{\mathcal{X} \times_{{\operatorname{Spec}}(A)} \mathcal{Y}}(p_1^*L_{\mathcal{X}/{\operatorname{Spec}}(A)},\mathcal{O}_{X \times Y}) \]
is injective, and similarly for $Y$. By base change (see \S \ref{derbasechange}) and adjunction, it is enough to check that the pullback
\[ \operatorname{Ext}^2_X(L_X,\mathcal{O}_X) \to \operatorname{Ext}^2_{X \times Y}(p_1^*L_X,\mathcal{O}_{X \times Y})\]
is injective, which follows from Lemma \ref{cotpull}; similarly for $Y$.
Next, we show that all fibers of $\mathfrak{p}\mathrm{rod}_{X,Y}(\tilde{A})$ are reduced to a point, i.e., we will check that all possible deformations of $\mathcal{X} \times_{{\operatorname{Spec}}(A)} \mathcal{Y} \to {\operatorname{Spec}}(A)$ across ${\operatorname{Spec}}(A) \hookrightarrow {\operatorname{Spec}}(\tilde{A})$ are obtained {\em uniquely} by taking products of deformations of each factor. By the above, we may assume that both $\mathcal{X} \to {\operatorname{Spec}}(A)$ and $\mathcal{Y} \to {\operatorname{Spec}}(A)$ admit deformations across ${\operatorname{Spec}}(A) \hookrightarrow {\operatorname{Spec}}(\tilde{A})$. Following the same method used above to linearize the problem, we immediately reduce to verifying that the natural map
\[ \operatorname{Ext}^1_X(L_X,\mathcal{O}_X) \times \operatorname{Ext}^1_Y(L_Y,\mathcal{O}_Y) \to \operatorname{Ext}^1(L_{X \times Y},\mathcal{O}_{X \times Y}) \]
is bijective. This, in turn, results from the base change formula (see \S \ref{derbasechange}) and Lemma \ref{cotpull}.
\end{proof}
\begin{warning}
The conclusion of Theorem \ref{mainthm:defthy} fails if we consider both sides as functors on the larger category ${\operatorname{SArt}}_k$ of all small derived algebras rather than simply the ordinary ones. Indeed, the data of the functor ${\operatorname{Def}}_X$ on ${\operatorname{SArt}}_k$ is equivalent to the data of the object $\mathrm{R}\operatorname{Hom}(L_X,\mathcal{O}_X)$ (with its extra structure coming from Lie theory; see \cite[Theorem 5.2]{LurieICM}) at least in characteristic $0$. The failure of the product map
\[ \mathrm{R}\operatorname{Hom}(L_X,\mathcal{O}_X) \times \mathrm{R}\operatorname{Hom}(L_Y,\mathcal{O}_Y) \to \mathrm{R}\operatorname{Hom}(L_{X \times Y},\mathcal{O}_{X \times Y}) \]
to be an isomorphism then explains the failure of $\mathfrak{p}\mathrm{rod}_{X,Y}$ to be an equivalence as functors on ${\operatorname{SArt}}_k$. For example, let $X$ and $Y$ be genus $g$ curves for $g > 0$. One then computes that $\operatorname{Ext}^2(L_{X \times Y},\mathcal{O}_{X \times Y}) \neq 0$, but $\operatorname{Ext}^2(L_X,\mathcal{O}_X) = \operatorname{Ext}^2(L_Y,\mathcal{O}_Y) = 0$. What this means is that $X \times Y$ has a nontrivial deformation over the derived local artinian $k$-algebra $k \oplus k[1]$, while $X$ and $Y$ do not.
\end{warning}
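The nonvanishing asserted in the preceding warning can be checked by a K\"unneth computation; we sketch it assuming $X$ and $Y$ are smooth proper curves over $k$. Smoothness gives $L_{X \times Y} \simeq \Omega^1_{X \times Y}$ and $T_{X \times Y} \simeq p_1^* T_X \oplus p_2^* T_Y$, so
\[ \operatorname{Ext}^2(L_{X \times Y},\mathcal{O}_{X \times Y}) \simeq H^2(X \times Y, T_{X \times Y}) \supseteq H^1(X,T_X) \otimes_k H^1(Y,\mathcal{O}_Y), \]
which is nonzero for $g > 0$ since $\dim_k H^1(Y,\mathcal{O}_Y) = g$, while $\dim_k H^1(X,T_X)$ equals $1$ for $g = 1$ and $3g - 3$ for $g \geq 2$. On the other hand, $\operatorname{Ext}^2(L_X,\mathcal{O}_X) \simeq H^2(X,T_X) = 0$ as $X$ is a curve.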
\begin{remark}
The main input from derived algebraic geometry in our proof of Theorem \ref{mainthm:defthy} is an explicit construction of the deformation and obstruction classes associated to a morphism $\pi:Y \to X$; having access to the construction renders the functoriality transparent. It is tempting to deduce this functoriality directly from Illusie's formula for the obstruction class in terms of the cup product of the Kodaira-Spencer class for $\pi$ and the $\operatorname{Ext}^1$ class describing the relevant deformation of $X$. One can implement this strategy given a sufficiently good understanding of the functoriality of that $\operatorname{Ext}^1$ class.
\end{remark}
\begin{remark}
The proof of Theorem \ref{mainthm:defthy} has two essential parts: showing that the map $\mathfrak{p}\mathrm{rod}_{X,Y}$ is injective, and showing that $\mathfrak{p}\mathrm{rod}_{X,Y}$ is surjective. The injectivity of $\mathfrak{p}\mathrm{rod}_{X,Y}$ is a standard verification with tangent spaces that holds under fairly general hypotheses. The surjectivity of $\mathfrak{p}\mathrm{rod}_{X,Y}$, on the other hand, crucially needs the stability assumption that $X$ and $Y$ have no infinitesimal automorphisms. For example, if $X$ and $Y$ are elliptic curves, then the product variety $X \times Y$ admits a $4$-dimensional space of first order deformations, while the first order deformations which are products span a $2$-dimensional subspace (and both sides are unobstructed).
\end{remark}
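The dimension count in the preceding remark can be made explicit via K\"unneth. For elliptic curves $X$ and $Y$ we have $T_X \simeq \mathcal{O}_X$ and $T_Y \simeq \mathcal{O}_Y$, hence $T_{X \times Y} \simeq \mathcal{O}_{X \times Y}^{\oplus 2}$, and
\[ H^1(X \times Y, T_{X \times Y}) \simeq \big( H^1(X,\mathcal{O}_X) \otimes_k H^0(Y,\mathcal{O}_Y) \oplus H^0(X,\mathcal{O}_X) \otimes_k H^1(Y,\mathcal{O}_Y) \big)^{\oplus 2} \simeq k^4, \]
while the first order deformations that are products span the image of $H^1(X,T_X) \oplus H^1(Y,T_Y) \simeq k^2$ under pullback along the two projections.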
\subsection{Some general results on deformations of morphisms}
\label{sec:defmorphisms}
The general theme of the results discussed in this section is the deformation theory of morphisms. Our goal is to write down some natural conditions on a morphism $\pi:Y \to X$ which allow one to transfer deformation-theoretic information from $X$ to $Y$, and vice versa. These results constitute the heart of the proof of Proposition \ref{defthyindqcov} in \S \ref{sec:qgordefthy}, but may be read independently of the rest of the paper.
We first need the following algebraic lemma:
\begin{lemma}
\label{gorextvan}
Let $R$ be a noetherian ring, and let $M$ be a finitely generated $R$-module. If $M$ vanishes at all points of codimension $\leq N$ of ${\operatorname{Spec}}(R)$ and $R$ satisfies Serre's condition $S_{N+1}$ at all points of ${\operatorname{Supp}}(M)$, then $\operatorname{Ext}^i_R(M,R) = 0$ for $0 \leq i \leq N$.
\end{lemma}
\begin{proof}
Let $d = \dim(R)$, let $X = {\operatorname{Spec}}(R)$, let $\mathcal{F}$ be the coherent $\mathcal{O}_X$-module defined by $M$, let $Z = {\operatorname{Supp}}(\mathcal{F})$, let $U = X - Z$, and let $j:U \to {\operatorname{Spec}}(R)$ be the natural open immersion. Then we have the exact triangle
\[ \mathcal{O}_X \to \mathrm{R} j_* \mathcal{O}_U \to \mathcal{Q} \]
where $\mathcal{Q}$ is the homotopy cokernel, and is identified with the complex $\mathrm{R}\Gamma_Z(\mathcal{O}_X)[1]$. Applying $\mathrm{R} \operatorname{Hom}(\mathcal{F},-)$ and taking homology, we obtain a long exact sequence
\[ \dots \operatorname{Ext}_X^{i-1}(\mathcal{F},\mathcal{Q}) \to \operatorname{Ext}_X^i(\mathcal{F},\mathcal{O}_X) \to \operatorname{Ext}_X^i(\mathcal{F},\mathrm{R} j_* \mathcal{O}_U) \dots . \]
The term on the right is $0$ by adjunction and the fact that $j^* \mathcal{F} = 0$. Hence, it suffices to show that $\operatorname{Ext}_X^{i-1}(\mathcal{F},\mathcal{Q}) = 0$ for $i \leq N$. By connectivity estimates, it suffices to check that $\mathcal{Q}[N-1] \in \mathrm{D}^{[1,\infty]}(\mathcal{O}_X)$, i.e. it suffices to check that $\mathcal{H}^{i-1}(\mathcal{Q}) = 0$ for $i \leq N$. Translating to local cohomology, it suffices to check that $H^i_Z(\mathcal{O}_X) = 0$ for $i \leq N$. Since the codimension of any point occurring in $Z$ is at least $N+1$, the claim now follows from the assumption that $X$ satisfies Serre's condition $S_{N+1}$ at all points of $Z$ coupled with the fact that the $I$-depth of a module $P$ over a ring $R$ with ideal $I$ can be recovered as the infimum of the depths of the localizations of $P$ at all points of ${\operatorname{Spec}}(R/I)$; see, for example, \cite[\S 15, page 105]{MatCA}.
\end{proof}
The following proposition gives some conditions on a map $\pi:Y \to X$ which ensure that deformations of $X$ can be followed by deformations of $Y$ and of $\pi$.
\begin{proposition}
\label{defcms}
Let $\pi:Y \to X$ be an essentially finitely presented morphism of noetherian Deligne-Mumford stacks. Assume that the following conditions hold:
\begin{enumerate}
\item The map $\pi$ is \'etale on an open set $U \subset Y$ that contains all the codimension $\leq 2$ points of $Y$.
\item The stack $Y$ satisfies Serre's condition $S_3$ at points of $Z = Y - U$.
\end{enumerate}
Then $\operatorname{Ext}^i(L_\pi,\mathcal{O}_X) = 0$ for $i \leq 2$. If $X$ is essentially finitely presented over a field $k$, then the natural map ${\operatorname{Def}}_\pi \to {\operatorname{Def}}_X$ is an equivalence of functors on ${\operatorname{Art}}_k$.
\end{proposition}
\begin{proof}
We first show the $\operatorname{Ext}$ vanishing claim. By the local-to-global spectral sequence for $\operatorname{Ext}$, it suffices to show that $\mathcal{E}\mathrm{xt}^i(L_\pi,\mathcal{O}_Y) = 0$ for $i \leq 2$. Since the latter is a local statement, we may \'etale localize on $Y$ and reduce to the case that $Y$ is a noetherian local scheme. In this local setup, we will check that $\operatorname{Ext}^i(L_\pi,\mathcal{O}_Y) = 0$ for $i \leq 2$. We first filter $L_\pi$ using the filtration in the derived category arising from the standard $t$-structure. This filtration of $L_\pi$ has associated graded pieces of the form $\mathcal{H}^{-j}(L_\pi)[j]$. Hence, the groups $\operatorname{Ext}^i(L_\pi,\mathcal{O}_Y)$ are filtered with graded pieces contained in $\operatorname{Ext}^{i-j}(\mathcal{H}^{-j}(L_\pi),\mathcal{O}_Y)$ for $0 \leq j \leq i$. Thus, to show $\operatorname{Ext}^i(L_\pi,\mathcal{O}_Y) = 0$ for $i \leq 2$, it suffices to show that $\operatorname{Ext}^k(\mathcal{H}^{-j}(L_\pi),\mathcal{O}_Y) = 0$ for $0 \leq k \leq 2$, and any $j$. However, this follows from the $N = 2$ case of Lemma \ref{gorextvan} once we observe that the sheaves $\mathcal{H}^{-j}(L_\pi)$ vanish at all codimension $\leq 2$ points of $Y$ by the assumption that $\pi$ is \'etale at such points.
The claim about deformation functors is deduced in a standard manner from the relative $\operatorname{Ext}$ vanishing proven above. Fix an $A \in {\operatorname{Art}}_k$, and consider the induced map of groupoids $f:{\operatorname{Def}}_\pi(A) \to {\operatorname{Def}}_X(A)$. The vanishing of $\operatorname{Ext}^2(L_\pi,\mathcal{O}_Y)$ implies that $f$ is surjective on $\pi_0$, the vanishing of $\operatorname{Ext}^1(L_\pi,\mathcal{O}_Y)$ implies that $f$ is injective on $\pi_0$, and the vanishing of $\operatorname{Ext}^i(L_\pi,\mathcal{O}_Y)$ for $i \leq 1$ implies that $f$ is bijective on $\pi_1$. To make these assertions precise, one climbs up a tower of small extensions as in the proof of Theorem \ref{mainthm:defthy}; we leave the details to the reader.
\end{proof}
Next, we study the dual question of conditions on a map $\pi:Y \to X$ that ensure that deformations of $Y$ can be followed by deformations of $X$ and $\pi$.
\begin{proposition}
\label{deformds}
Let $\pi:Y \to X$ be a morphism of essentially finitely presented Deligne-Mumford stacks over a field $k$ satisfying $\pi_* \mathcal{O}_Y \simeq \mathcal{O}_X$ and $\operatorname{Ext}^0_X(\Omega_X, \mathrm{R}^1 \pi_* \mathcal{O}_Y) = 0$. Then the forgetful morphism $q:{\operatorname{Def}}_\pi \to {\operatorname{Def}}_Y$ of functors on ${\operatorname{Art}}_k$ is formally smooth with discrete fibers; it is an equivalence if $X$ has no infinitesimal automorphisms.
\end{proposition}
\begin{proof}
Let $f:X \to {\operatorname{Spec}}(k)$ and $g:Y \to {\operatorname{Spec}}(k)$ denote the structure maps. Fix an $A \in {\operatorname{Art}}_k$ and a flat deformation $\pi_A:\mathcal{Y} \to \mathcal{X}$ of $\pi$ to ${\operatorname{Spec}}(A)$. Given a surjection $A' \to A$ with kernel isomorphic to $k$, we obtain a diagram
\begin{equation}
\label{diag:defmor}
\xymatrix{ {\operatorname{Def}}_\pi(A') \ar[r]^a \ar[d]^b & {\operatorname{Def}}_Y(A') \ar[d]^c \\
{\operatorname{Def}}_\pi(A) \ar[r]^d & {\operatorname{Def}}_Y(A).}
\end{equation}
By induction on $\dim_k(A)$, we may assume that $d$ is surjective on $\pi_0$ and has discrete fibers. Furthermore, if $X$ has no infinitesimal automorphisms we may also assume that $d$ is an equivalence. We will show the following: (a) $a$ is surjective on $\pi_0$ and has discrete fibers, and (b) $a$ is an equivalence if $X$ has no infinitesimal automorphisms.
Fix a flat deformation $\mathcal{Y}' \to {\operatorname{Spec}}(A')$ of $\mathcal{Y} \to {\operatorname{Spec}}(A)$ corresponding to a point $p_{Y'} \in {\operatorname{Def}}_Y(A')$. Let $\mathrm{Fib}(a,p_{Y'})$ denote the homotopy fiber of the map $a$ at the point $p_{Y'}$; this $\infty$-groupoid can be thought of as parametrizing triples $(\mathcal{X}' \to {\operatorname{Spec}}(A'), \pi_{A'}:\mathcal{Y}' \to \mathcal{X}',\phi)$ where $\mathcal{X}' \to {\operatorname{Spec}}(A')$ is a flat deformation of $X$ to ${\operatorname{Spec}}(A')$, $\pi_{A'}$ is a deformation of $\pi$ to ${\operatorname{Spec}}(A')$, and $\phi$ is an identification of the restriction $(\mathcal{X}',\pi_{A'})|_A$ with $(\mathcal{X},\pi_A)$. We will now check that $\mathrm{Fib}(a,p_{Y'})$ is discrete and non-empty, and furthermore it is contractible when $X$ has no infinitesimal automorphisms. First, we record a relation between maps on $X$ and $Y$:
\begin{claimex}
\label{claim:extpullback}
The natural map
\begin{equation*}
\operatorname{Ext}^i_\mathcal{X}(L_{\mathcal{X}/A},\mathcal{O}_X) \to \operatorname{Ext}^i_\mathcal{Y}(\pi_A^* L_{\mathcal{X}/A},\mathcal{O}_Y)
\end{equation*}
is an isomorphism for $i \leq 1$ and injective for $i = 2$.
\end{claimex}
\begin{proof}[Proof of claim] The natural map above is the composition of the adjunction isomorphism $\operatorname{Ext}^i_\mathcal{Y}(\pi_A^* L_{\mathcal{X}/A},\mathcal{O}_Y) \cong \operatorname{Ext}^i_\mathcal{X}( L_{\mathcal{X}/A},\mathrm{R} \pi_{A,*}\mathcal{O}_Y)$ with the natural map $\operatorname{Ext}^i_\mathcal{X}( L_{\mathcal{X}/A},\mathcal{O}_X) \to \operatorname{Ext}^i_\mathcal{X}( L_{\mathcal{X}/A},\mathrm{R} \pi_{A,*}\mathcal{O}_Y)$. Since the former is an isomorphism, it suffices to prove the claimed properties for the latter map. Consider the following exact triangle, guaranteed by the condition $\pi_{A,*} \mathcal{O}_Y \cong \mathcal{O}_X$ (a consequence of the hypothesis $\pi_* \mathcal{O}_Y \simeq \mathcal{O}_X$).
\begin{equation*}
\xymatrix{
\mathcal{O}_X \ar[r] & \mathrm{R} \pi_{A,*} \mathcal{O}_Y \ar[r] & \tau_{\geq 1} \mathrm{R} \pi_{A,*} \mathcal{O}_Y
}
\end{equation*}
Applying $\operatorname{Ext}^i_{\mathcal{X}}(L_{\mathcal{X}/A}, - )$ shows that it is enough to prove that $\operatorname{Ext}^{i}_{\mathcal{X}}(L_{\mathcal{X}/A}, \tau_{\geq 1} \mathrm{R} \pi_{A,*} \mathcal{O}_Y )=0$ for $i \leq 1$. Since $L_{\mathcal{X}/A}$ is supported in non-positive cohomological degrees, while $\tau_{\geq 1} \mathrm{R} \pi_{A,*} \mathcal{O}_Y$ is supported only in positive degrees, this vanishing is immediate for $i \leq 0$. For $i=1$, again by a degree argument, it suffices to show that the following $\operatorname{Ext}$ group vanishes:
\begin{equation*}
\operatorname{Ext}^0_{\mathcal{X}}(\mathcal{H}^0(L_{\mathcal{X}/A}), \mathcal{H}^1( \tau_{\geq 1} \mathrm{R} \pi_{A,*} \mathcal{O}_Y)) \cong \operatorname{Ext}^0_{\mathcal{X}}(\Omega_{\mathcal{X}/A}, \mathrm{R}^1 \pi_{A,*} \mathcal{O}_Y)
\cong \operatorname{Ext}^0_X(\Omega_X, \mathrm{R}^1 \pi_* \mathcal{O}_Y) ,
\end{equation*}
which is exactly one of the assumptions of the proposition. This finishes the proof of the claim.
\end{proof}
To show that $\mathrm{Fib}(a,p_{Y'})$ is non-empty and discrete, we will first construct a deformation of $X$ to $A'$ lifting $\mathcal{X}$, and then show that this deformation admits a morphism from the chosen deformation of $Y$ to $A'$ lifting $\mathcal{Y}$.
We now show the existence of a flat deformation $\mathcal{X'} \to {\operatorname{Spec}}(A')$ of $\mathcal{X} \to {\operatorname{Spec}}(A)$ across ${\operatorname{Spec}}(A) \subset {\operatorname{Spec}}(A')$. The obstruction to the existence of such a deformation is the map $\mathrm{ob}(f,f^* D_A): L_{\mathcal{X}/A}[-1] \to \mathcal{O}_X[1]$. Since $\mathcal{Y}$ already admits such a deformation, the corresponding obstruction $\mathrm{ob}(g,g^* D_A) : L_{\mathcal{Y}/A}[-1] \to \mathcal{O}_Y[1]$ is homotopic to zero. Furthermore, by Theorem \ref{thm:dag-sqzero-obs-compat}, these two obstructions are related via the following diagram (which is commutative in a specified manner):
\begin{equation*}
\xymatrix{
\pi_A^* L_{\mathcal{X}/A}[-1] \ar[r] \ar[dr]_{\pi_A^* \mathrm{ob}(f, f^* D_A)} & L_{\mathcal{Y}/A}[-1] \ar[d]^{ \mathrm{ob}(g, g^* D_A)} \\
& \mathcal{O}_Y[1]
}
\end{equation*}
In particular, $\pi_A^* \mathrm{ob}(f, f^* D_A)$ is nullhomotopic. By Claim \ref{claim:extpullback}, $\mathrm{ob}(f, f^* D_A)$ is also nullhomotopic, so there exists a deformation $\mathcal{X}' \to {\operatorname{Spec}}(A')$ of $\mathcal{X} \to {\operatorname{Spec}}(A)$, as claimed above; we fix one such deformation.
Next, we show that the deformation $\mathcal{X}' \to {\operatorname{Spec}}(A')$ chosen above can be modified to allow for an $A'$-linear map $\pi_{A'}:\mathcal{Y}' \to \mathcal{X}'$ extending $\pi_A$. Let $D_X : L_{\mathcal{X}} \to \mathcal{O}_X[1]$ (resp. $D_Y:L_{\mathcal{Y}} \to \mathcal{O}_Y[1]$) be the derivation corresponding to the deformation $\mathcal{X}' \to {\operatorname{Spec}}(A')$ constructed above (resp. to the deformation $\mathcal{Y}' \to {\operatorname{Spec}}(A')$ that we started with). We obtain a diagram
\begin{equation}
\label{eq:cotdiag2}
\xymatrix{
g^* L_A \ar[d]^{D_A} \ar[r] & \pi^* L_{\mathcal{X}} \ar[r] \ar[d]^{D_X} & L_{\mathcal{Y}} \ar[d]^{D_Y} \\
g^* k[1] \ar@{=}[r] & \pi^* \mathcal{O}_X[1] \ar@{=}[r] & \mathcal{O}_Y[1]}
\end{equation}
where the square on the left commutes in a specified way by construction of $\mathcal{X}'$, and the outer square commutes in a specified way as $\mathcal{Y}' \to {\operatorname{Spec}}(A')$ lifts $g$. We must replace $D_X$ by a suitable map so that the square on the right also commutes in a manner compatible with the other two squares. The failure of the commutativity of the square on the right is measured by the difference $\delta$ of the two paths $\pi^* L_{\mathcal{X}} \to \mathcal{O}_Y[1]$ in the square on the right. Since the outer square commutes, this obstruction $\delta$ factors as a map $\pi^* L_{\mathcal{X}/A} \to \mathcal{O}_Y[1]$. By Claim \ref{claim:extpullback}, this map is obtained as the pullback of a map $\delta':L_{\mathcal{X}/A} \to \mathcal{O}_X[1]$. Replacing $D_X$ with $D'' := D_X + \delta' \circ {\mathrm{can}}$ (where $ {\mathrm{can}}:L_{\mathcal{X}} \to L_{\mathcal{X}/A}$ is the canonical map) as the middle vertical arrow in diagram \eqref{eq:cotdiag2} then makes all squares commute compatibly. This derivation $D''$ and the commutativity of the left hand square give rise to a deformation $\mathcal{X}'' \to {\operatorname{Spec}}(A')$ of $\mathcal{X} \to {\operatorname{Spec}}(A)$ across ${\operatorname{Spec}}(A) \subset {\operatorname{Spec}}(A')$, while the commutativity of the right hand square gives rise to the promised map $\pi_{A'}:\mathcal{Y}' \to \mathcal{X}''$. In particular, this proves that $\mathrm{Fib}(a,p_{Y'})$ is non-empty.
Next, we check that $\mathrm{Fib}(a,p_{Y'})$ is discrete. For a point $(\mathcal{X'} \to {\operatorname{Spec}}(A'), \pi_{A'}:\mathcal{Y}' \to \mathcal{X}',\phi)$ of this groupoid, an automorphism is given by an automorphism $\sigma$ of $\mathcal{X'}$ that is compatible with $\pi_{A'}$ and $\phi$. Since topoi do not change under deformations, it suffices to prove that $\sigma$ acts as the identity on $\mathcal{O}_{\mathcal{X}'}$. By definition, the induced action on $\pi^{-1} \mathcal{O}_{\mathcal{X}'}$ commutes with the map $\pi_{A'}^*:\pi^{-1} \mathcal{O}_{\mathcal{X}'} \to \mathcal{O}_{\mathcal{Y}'}$. Since the latter map is injective (which can be checked, for instance, by filtering both sides using powers of the maximal ideal of $A'$ to reduce to the known injectivity over $k$), it follows that $\sigma = \operatorname{id}$, proving discreteness.
The conclusion of the preceding paragraphs is that the map $a$ from diagram \eqref{diag:defmor} is surjective with discrete fibers, and consequently that the map $q:{\operatorname{Def}}_\pi \to {\operatorname{Def}}_Y$ is formally smooth with discrete fibers.
Finally, we show that $\mathrm{Fib}(a,p_{Y'})$ is contractible when $X$ has no infinitesimal automorphisms. For $i = 1,2$, let $(\mathcal{X}_i' \to {\operatorname{Spec}}(A'),\pi_{i,A'}:\mathcal{Y}' \to \mathcal{X}_i',\phi_i)$ be two possibly distinct points of $\mathrm{Fib}(a,p_{Y'})$; we will show they are connected. First, we show that $\mathcal{X}_1'$ and $\mathcal{X}_2'$ are isomorphic as deformations of $\mathcal{X} \to {\operatorname{Spec}}(A)$ across ${\operatorname{Spec}}(A) \subset {\operatorname{Spec}}(A')$; this part will not use the assumption on $X$. Let $D_i:L_{\mathcal{X}} \to \mathcal{O}_X[1]$ be the derivation classifying the deformation $\mathcal{X}_i'$. Then the information of $\pi_{i,A'}$ gives, for each $i$, a commutative diagram
\[ \xymatrix{
g^* L_A \ar[d]^{D_A} \ar[r] & \pi^* L_{\mathcal{X}} \ar[r] \ar[d]^{\pi^*D_i} & L_{\mathcal{Y}} \ar[d]^{D_Y} \\
g^* k[1] \ar@{=}[r] & \pi^* \mathcal{O}_X[1] \ar@{=}[r] & \mathcal{O}_Y[1].} \]
The commutativity shows that $\pi^* D_1$ and $\pi^* D_2$ are homotopic maps $\pi^* L_{\mathcal{X}} \to \mathcal{O}_Y[1]$: they are both homotopic to the composition $\pi^* L_{\mathcal{X}} \to L_{\mathcal{Y}} \stackrel{D_Y}{\to} \mathcal{O}_Y[1]$. By Claim \ref{claim:extpullback}, $D_1$ and $D_2$ are also homotopic, which proves that $\mathcal{X}_1'$ and $\mathcal{X}_2'$ are isomorphic as deformations of $\mathcal{X} \to {\operatorname{Spec}}(A)$ across ${\operatorname{Spec}}(A) \subset {\operatorname{Spec}}(A')$. Hence, to show that $\mathrm{Fib}(a,p_{Y'})$ is contractible, it suffices to check: given deformations $\mathcal{X}' \to {\operatorname{Spec}}(A')$ of $\mathcal{X} \to {\operatorname{Spec}}(A)$, and $\mathcal{Y}' \to {\operatorname{Spec}}(A')$ of $\mathcal{Y} \to {\operatorname{Spec}}(A)$, there exists at most one extension of $\pi_A:\mathcal{Y} \to \mathcal{X}$ to an $A'$-map $\pi_{A'}:\mathcal{Y}' \to \mathcal{X}'$. The $\infty$-groupoid of choices for such extensions is easily verified to be a torsor for
\[ \Omega \operatorname{Hom}_{\mathcal{Y}}(\pi_A^* L_{\mathcal{X}/A},\mathcal{O}_Y[1]) \simeq \operatorname{Hom}_{\mathcal{Y}}(\pi_A^* L_{\mathcal{X}/A},\mathcal{O}_Y).\]
By Claim \ref{claim:extpullback}, the $\infty$-groupoid on the right is equivalent to $\operatorname{Hom}_{\mathcal{X}}(L_{\mathcal{X}/A},\mathcal{O}_X)$. By adjunction (see the proof of Theorem \ref{mainthm:defthy}), this $\infty$-groupoid is identified with $\operatorname{Hom}_X(L_X,\mathcal{O}_X)$ which, by assumption, is contractible.
\end{proof}
\begin{remark}
The methods used to show Proposition \ref{deformds} also show that (under the same hypotheses) one has a natural equivalence $e:{\operatorname{Def}}_{\operatorname{id}_X} \times_{{\operatorname{Def}}_X} {\operatorname{Def}}_\pi \simeq {\operatorname{Def}}_\pi \times_{{\operatorname{Def}}_Y} {\operatorname{Def}}_\pi$ where we view ${\operatorname{Def}}_{\operatorname{id}_X}$ as a space fibered over ${\operatorname{Def}}_X$ with fibers given by the automorphism groups of the corresponding deformation, and the map $e$ is given by $(a,b) \mapsto (a \circ b, b)$.
\end{remark}
\begin{remark}
The technique used in Proposition \ref{deformds} can be used to show the following refinement (under the same hypotheses): the map $q:{\operatorname{Def}}_\pi \to {\operatorname{Def}}_Y$ has a distinguished section $s$. Indeed, in the notation of the proof of Proposition \ref{deformds}, constructing $s$ amounts to constructing a canonical base point of $\mathrm{Fib}(a,p_{Y'})$; such a base point is provided by the deformation of $\mathcal{X}$ coming from the derivation $D_X:L_{\mathcal{X}} \to \mathcal{O}_X[1]$ whose pullback along $\pi^*$ is the derivation $\pi^* L_{\mathcal{X}} \to L_{\mathcal{Y}} \stackrel{D_Y}{\to} \mathcal{O}_Y[1]$. We leave the details to the reader.
\end{remark}
This next lemma relates infinitesimal automorphisms of the source and target of a given morphism under favorable conditions; this will be used in the sequel to move information about discreteness of the automorphism group of a stable variety to its covering stack (see Theorem \ref{mainthm:localagain}).
\begin{proposition}
\label{infautstabvarqcov}
Let $\pi:Y \to X$ be an essentially finitely presented morphism of noetherian Deligne-Mumford stacks (over some base ring $k$). Assume the following:
\begin{enumerate}
\item The map $\pi$ is \'etale on an open subset $U \subset Y$ that contains all the codimension $1$ points of $Y$.
\item The stack $Y$ satisfies Serre's $S_2$ condition at points of $Y - U$.
\item The map $\pi$ satisfies $\pi_* \mathcal{O}_Y \simeq \mathcal{O}_X$.
\end{enumerate}
Then the infinitesimal automorphisms of $X$ and $Y$ coincide, i.e., there is a natural isomorphism $\operatorname{Ext}^0(L_X,\mathcal{O}_X) \simeq \operatorname{Ext}^0(L_Y,\mathcal{O}_Y)$ (where all cotangent complexes are computed relative to $k$).
\end{proposition}
\begin{proof}
The transitivity triangle for $\pi$ and the assumption that $\pi_* \mathcal{O}_Y \simeq \mathcal{O}_X$ give a long exact sequence
\[ 0 \to \operatorname{Ext}^0(L_\pi,\mathcal{O}_Y) \to \operatorname{Ext}^0(L_Y,\mathcal{O}_Y) \to \operatorname{Ext}^0(L_X,\mathcal{O}_X) \to \operatorname{Ext}^1(L_\pi,\mathcal{O}_Y) \to \dots \]
Thus, it suffices to show that $\operatorname{Ext}^i(L_\pi,\mathcal{O}_Y) = 0$ for $i \leq 1$. This follows by the exact same method used in the proof of Proposition \ref{defcms}; we omit the details.
\end{proof}
\subsection{The $\mathbf{Q}$-Gorenstein deformation theory}
\label{sec:qgordefthy}
We now return to the product map for moduli spaces of stable varieties. Our goal is to show that the global product map $\Prod_{X,Y}$ is finite \'etale for two stable varieties $X$ and $Y$. To understand the local behavior of this map, we cannot simply consider the local product map $\mathfrak{p}\mathrm{rod}_{X,Y}$ described in \S \ref{sec:defthyabstractprod} because Koll\'ar's condition restricts the allowable deformations on both sides. Instead, we introduce the {\em canonical covering stack} $Z^ {\mathrm{can}}$ of a variety $Z$ for the reasons explained in \S \ref{sec:intro}. We simply remark here that Proposition \ref{defthyindqcov} below, which equates ${\operatorname{Def}}_{X^ {\mathrm{can}} \times Y^ {\mathrm{can}}}$ with ${\operatorname{Def}}_{(X \times Y)^ {\mathrm{can}}}$ under favorable assumptions, is proven using the results of \S \ref{sec:defmorphisms}.
\begin{definition} \label{def:canonicalcoveringstack}
Fix a field $k$, and let $X$ be an essentially finitely presented $\mathbf{Q}$-Gorenstein $k$-scheme satisfying Serre's condition $S_2$. Then we define its {\em canonical covering stack} $\pi:X^ {\mathrm{can}} \to X$ by the formula
\[ X^ {\mathrm{can}} = [{\underline{\Spec}}(\oplus_{i \in \mathbf{Z}} \omega_X^{[i]})/\mathbf{G}_m] \]
where ${\underline{\Spec}}$ denotes the relative spectrum of a quasi-coherent $\mathcal{O}_X$-algebra, $\omega_X^{[i]}$ is the $i$-th reflexive power of the dualizing sheaf $\omega_X$, and the $\mathbf{G}_m$-action is given by the evident grading.
\end{definition}
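To orient the reader, we record two basic examples; the first is a special case of Lemma \ref{cmsgorsch} below, and we state the second without proof.
\begin{remark}
(1) If $X$ is Gorenstein, then each $\omega_X^{[i]} = \omega_X^{\otimes i}$ is a line bundle, so ${\underline{\Spec}}(\oplus_{i \in \mathbf{Z}} \omega_X^{[i]}) \to X$ is a $\mathbf{G}_m$-torsor, and the structure map $\pi:X^ {\mathrm{can}} \to X$ is an isomorphism.
(2) If $X = \mathbf{A}^2/\mu_3$ is the cyclic quotient surface singularity of type $\frac{1}{3}(1,1)$ over a field of characteristic $0$, then $\omega_X$ has index $3$, the index one cover of $X$ is $\mathbf{A}^2$ itself, and one can check that $X^ {\mathrm{can}} \simeq [\mathbf{A}^2/\mu_3]$.
\end{remark}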
We now describe some properties of canonical covering stacks.
\begin{lemma}
\label{cmsgorsch}
Fix a field $k$. Let $X$ be an essentially finitely presented $\mathbf{Q}$-Gorenstein $k$-scheme satisfying Serre's condition $S_2$, and let $\pi:X^ {\mathrm{can}} \to X$ denote the structure morphism of the canonical cover. Then the following are true:
\begin{enumerate}
\item The stack $X^ {\mathrm{can}}$ is an essentially finitely presented Artin $k$-stack satisfying Serre's condition $S_2$. If $k$ has characteristic $0$, then $X^ {\mathrm{can}}$ is Deligne-Mumford.
\item The formation of $\pi$ commutes with relatively Gorenstein essentially finitely presented flat base changes $f:U \to X$.
\item The map $\pi$ is a coarse moduli space that is an isomorphism on the Gorenstein locus of $X$.
\item The natural map $\mathcal{O}_X \to \mathrm{R} \pi_* \mathcal{O}_{X^ {\mathrm{can}}}$ is an isomorphism.
\end{enumerate}
\end{lemma}
\begin{proof}
We first observe that the formation of $X^ {\mathrm{can}} \to X$ commutes with localization on $X$ as the same is true for the sheaves $\omega_X$ and their reflexive powers. By the $\mathbf{Q}$-Gorenstein assumption, we may pick an integer $n > 0$ such that $\omega_X^{[n]}$ is actually a line bundle. After localizing on $X$ if necessary, we can pick an isomorphism $\mathcal{O}_X \simeq \omega_X^{[n]}$ defined by a section $s \in \omega_X^{[n]}$. Such a choice allows us to define the structure of an $\mathcal{O}_X$-algebra with a $\mu_n$-action on the coherent $\mathcal{O}_X$-module
\[ \mathcal{A} = \oplus_{i \in \mathbf{Z}/n} \omega_X^{[i]} \]
in the obvious way: we view $\mathcal{A}$ as the quotient algebra of the algebra $\oplus_{i \in \mathbf{Z}} \omega_X^{[i]}$ by the equation $s = 1$, and the $\mu_n$-action corresponds to the induced $\mathbf{Z}/n$-grading. We set $Y = {\underline{\Spec}}(\mathcal{A})$ and observe that the natural map $Y \to X^ {\mathrm{can}}$ is $\mu_n$-equivariant and therefore descends to a map
\[ g:[Y/\mu_n] \to X^ {\mathrm{can}}.\]
We leave it to the reader to check that $g$ is an isomorphism; the key point is that the defining map $X^ {\mathrm{can}} \to B(\mathbf{G}_m)$ factors through $B(\mu_n) \to B(\mathbf{G}_m)$ via the choice of $s$, and the scheme $Y$ is simply the fiber of the resulting map $X^ {\mathrm{can}} \to B(\mu_n)$. This presentation shows that $X^ {\mathrm{can}}$ is an essentially finitely presented Artin $k$-stack if $X$ is so; if $k$ has characteristic $0$, then the presentation gives rise to a Deligne-Mumford stack since $\mu_n$ is discrete. To finish checking property (1), we observe that, by construction, the sheaves $\omega_X^{[i]}$ are $S_2$. Hence, the same is true for the scheme $Y$ and the stack $X^ {\mathrm{can}}$.
Property (2) follows from the next Lemma \ref{gorbc} and Lemma \ref{lem:reflex-pullback}. Indeed, if $\mathcal{F}$ is any coherent $\mathcal{O}_U$-module and $\mathcal{L}$ is a line bundle on $U$, then there is a natural isomorphism of $U$-stacks
\[ [{\underline{\Spec}}\big(\oplus_{i \in \mathbf{Z}} (\mathcal{F} \otimes \mathcal{L})^{[i]} \big) / \mathbf{G}_m] \simeq [{\underline{\Spec}}\big(\oplus_{i \in \mathbf{Z}} \mathcal{F}^{[i]}\big) / \mathbf{G}_m]. \]
This observation applies here with $\mathcal{F} = f^* \omega_X$ and $\mathcal{L} = \omega_f$.
For property (3), we note that $\mathcal{O}_X$ is the sheaf of $\mu_n$-invariants of $\mathcal{A}$, which shows that $\pi$ is a coarse moduli space. The claim concerning the behavior over the Gorenstein locus follows from property (2).
For property (4), observe that the formula $X^ {\mathrm{can}} = [Y/\mu_n]$ identifies the category ${\operatorname{QCoh}}(X^ {\mathrm{can}})$ with the category ${\operatorname{QCoh}}(Y)^{\mu_n}$ of $\mu_n$-equivariant quasi-coherent sheaves on $Y$. The functor $\pi_*:{\operatorname{QCoh}}(X^ {\mathrm{can}}) \to {\operatorname{QCoh}}(X)$ is then identified with the functor of $\mu_n$-invariants, which is exact because $\mu_n$ is linearly reductive, showing that $\mathrm{R}^i \pi_* \mathcal{O}_{X^ {\mathrm{can}}} = 0$ for $i > 0$. Since the claim for $i = 0$ was already shown, the result follows.
\end{proof}
The following lemma is used in the proof of property (2) above:
\begin{lemma}
\label{gorbc}
Let $f:U \to X$ be a flat relatively Gorenstein morphism between essentially finitely presented schemes over some field $k$, and assume that $X$ admits a dualizing complex $\omega_X^\bullet$. Then there is a natural isomorphism of sheaves
\[ f^* \omega_X \otimes \omega_f \simeq \omega_U. \]
\end{lemma}
\begin{proof}
We normalize dualizing complexes so that the dualizing sheaf of a scheme sits inside the dualizing complex in homological degree equal to the dimension of the scheme. After spreading out $U$ and $X$, we may assume that $f$ is a map between finite type separated $k$-schemes. Choose compatible compactifications $U \subset \overline{U}$ and $X \subset \overline{X}$ together with a map $\overline{f}: \overline{U} \to \overline{X}$ extending $f$. By \cite[Theorem 5.4]{NeemanGD} (which applies because $\overline{U}$ and $\overline{X}$ are noetherian, and because $\mathrm{R} \overline{f}_*$ preserves coproducts by \cite[Lemma 1.4]{NeemanGD}), we have a canonical isomorphism
$$ \overline{f}^* \omega_{\overline{X}}^\bullet \otimes \omega_{\overline{f}} \simeq \omega_{\overline{U}}^\bullet. $$
Note that the dualizing complexes furnished by \cite{NeemanGD} agree with the usual ones for proper $k$-schemes. Restricting to $U$, using the relatively Gorenstein assumption on $f$, and applying $\mathcal{H}^{-\dim(U)}$ now gives the desired claim.
\end{proof}
\begin{remark}
The only place where the characteristic $0$ assumption was used in Lemma \ref{cmsgorsch} was to conclude that $X^ {\mathrm{can}}$ was a Deligne-Mumford stack rather than an Artin stack. This distinction is crucial to our proofs as Deligne-Mumford stacks have connective cotangent complexes, and the connectivity makes the proofs of Proposition \ref{defcms} and Proposition \ref{deformds} work.
\end{remark}
We record the following lemma here for use in Proposition \ref{defthyindqcov}.
\begin{lemma}
\label{depthprod}
Let $(R,\mathfrak{m})$ and $(S,\mathfrak{n})$ be two essentially finitely presented $k$-algebras over some algebraically closed field $k$, and let $(T,\mathfrak{p})$ be the Zariski localization of $R \otimes_k S$ at the maximal ideal generated by $\mathfrak{m}$ and $\mathfrak{n}$. Then we have
\[ \operatorname{depth}_\mathfrak{p}(T) = \operatorname{depth}_\mathfrak{m}(R) + \operatorname{depth}_\mathfrak{n}(S). \]
\end{lemma}
\begin{proof}
The map $(R,\mathfrak{m}) \to (T,\mathfrak{p})$ is an essentially finitely presented flat local homomorphism of noetherian local rings with fiber $T/\mathfrak{m} T \simeq S$. The addition formula for depth (see \cite[\S 21.C, Corollary 1]{MatCA}) now implies the claim.
\end{proof}
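We note a consequence of Lemma \ref{depthprod} that may help orient the reader.
\begin{remark}
Since depth is bounded above by dimension, and $\dim_\mathfrak{p}(T) = \dim_\mathfrak{m}(R) + \dim_\mathfrak{n}(S)$ in the situation of Lemma \ref{depthprod}, the local ring $T$ is Cohen-Macaulay if and only if both $R$ and $S$ are. In the same vein, if $R$ and $S$ each have depth $\geq 2$, then $T$ has depth $\geq 4$; it is this kind of additivity estimate that is exploited below.
\end{remark}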
Finally, we show that the deformations of products of canonical covering stacks are the same as those for the canonical covering stack of a product provided there are no infinitesimal automorphisms in sight.
\begin{proposition}
\label{defthyindqcov}
Fix a field $k$ of characteristic $0$. Let $X$ and $Y$ be two essentially finitely presented $\mathbf{Q}$-Gorenstein $k$-schemes that are both Gorenstein in codimension $\leq 1$ and satisfy Serre's condition $S_2$. Assume that $X \times Y$ has no infinitesimal automorphisms. Then one has a natural equivalence of deformation functors ${\operatorname{Def}}_{X^ {\mathrm{can}} \times Y^ {\mathrm{can}}} \to {\operatorname{Def}}_{(X \times Y)^ {\mathrm{can}}}$ as functors on ${\operatorname{Art}}_k$.
\end{proposition}
\begin{proof}
Let $\pi:X^ {\mathrm{can}} \times Y^ {\mathrm{can}} \to (X \times Y)^ {\mathrm{can}}$ denote the canonical map, and let ${\operatorname{Def}}_\pi$ denote the deformation functor associated to $\pi$. Forgetting information defines morphisms $a:{\operatorname{Def}}_\pi \to {\operatorname{Def}}_{X^ {\mathrm{can}} \times Y^ {\mathrm{can}}}$ and $b:{\operatorname{Def}}_\pi \to {\operatorname{Def}}_{(X \times Y)^ {\mathrm{can}}}$. We will show that each of these maps is an equivalence.
To show that the map $b$ is an equivalence, we apply Proposition \ref{defcms}. Let $U \subset X^ {\mathrm{can}} \times Y^ {\mathrm{can}}$ denote the locus where $\pi$ is \'etale. We will check that $U$ contains all the codimension $2$ points, and that $X^ {\mathrm{can}} \times Y^ {\mathrm{can}}$ satisfies Serre's condition $S_3$ on the complement of $U$. Since both conditions are local on $X \times Y$, we localize on the latter whenever necessary. Moreover, we freely identify points on a Deligne-Mumford stack and those on the coarse space.
Both $X^ {\mathrm{can}} \times Y^ {\mathrm{can}}$ and $(X \times Y)^ {\mathrm{can}}$ are \'etale over $X \times Y$ at the Gorenstein points of the latter, which include all the codimension $1$ points. Hence, it suffices to check that $\pi$ is \'etale at the codimension $2$ points of $X^ {\mathrm{can}} \times Y^ {\mathrm{can}}$. We first observe that this last claim is clear if one of $X$ or $Y$ is Gorenstein itself: the formation of $X^ {\mathrm{can}} \to X$ commutes with flat relatively Gorenstein base changes on $X$ by property (2) in Lemma \ref{cmsgorsch}. Now a point of $X^ {\mathrm{can}} \times Y^ {\mathrm{can}}$ is given by a product $(x,y)$. Such a product has codimension $2$ if either both $x$ and $y$ have codimension $1$, or one has codimension $2$ and the other has codimension $0$. In either case, one of the factors appearing in the product is Gorenstein, and hence the map is \'etale by the preceding observation; this verifies that $U$ contains all the codimension $\leq 2$ points.
Next, we check Serre's condition. The same reasoning used above also shows that a point $(x,y)$ in the complement of $U$ defines points $x \in X$ and $y \in Y$ each with codimension $\geq 2$. Property (1) from Lemma \ref{cmsgorsch} implies that each of $X^ {\mathrm{can}}$ and $Y^ {\mathrm{can}}$ satisfies Serre's condition $S_2$. Hence, any point $(x,y) \in X^ {\mathrm{can}} \times Y^ {\mathrm{can}} - U$ automatically satisfies Serre's condition $S_3$ by Lemma \ref{depthprod}. By applying Proposition \ref{defcms}, we may now conclude that $b$ is an equivalence.
To show that the map $a$ is an equivalence, we apply Proposition \ref{deformds}. In order to apply this proposition, we first need to check that $(X \times Y)^ {\mathrm{can}}$ has no infinitesimal automorphisms. This follows from Proposition \ref{infautstabvarqcov} applied to the map $(X \times Y)^ {\mathrm{can}} \to X \times Y$ and the assumption that $X \times Y$ has no infinitesimal automorphisms. Next, we need to verify that $\mathcal{O}_{ (X \times Y)^ {\mathrm{can}} } \stackrel{\simeq}{\to} \pi_* \mathcal{O}_{X^ {\mathrm{can}} \times Y^ {\mathrm{can}}}$ and that $\mathrm{R}^1 \pi_* \mathcal{O}_{X^ {\mathrm{can}} \times Y^ {\mathrm{can}}} = 0$. We may localize to assume that both $X$ and $Y$ are affine. Note that we have a commutative diagram
\[ \xymatrix{X^ {\mathrm{can}} \times Y^ {\mathrm{can}} \ar[r]^\pi \ar[d]^f & (X \times Y)^ {\mathrm{can}} \ar[d]^g \\
B(\mathbf{G}_m \times \mathbf{G}_m) \ar[r]^{p} & B(\mathbf{G}_m) } \]
where the vertical maps classify the defining quotient stack structure, and the map $p$ is induced by the multiplication map $\mathbf{G}_m \times \mathbf{G}_m \to \mathbf{G}_m$. Since we are working with affines, the vertical maps are affine faithfully flat and finitely presented maps and, thus, the corresponding pushforward functors are faithful. Now observe that the category ${\operatorname{QCoh}}(X^ {\mathrm{can}} \times Y^ {\mathrm{can}})$ can be identified as the category of $(\mathbf{G}_m \times \mathbf{G}_m)$-equivariant objects on the fiber of $f$, and similarly for $(X \times Y)^ {\mathrm{can}}$. It is then easy to see that the functor $\pi_*$ is identified with the functor of taking invariants under the antidiagonal $\mathbf{G}_m \subset \mathbf{G}_m \times \mathbf{G}_m$: it suffices to check the analogous claim for the map $p$ since pushing forward along the vertical maps is faithful, and then the claim follows from the basic formalism of classifying stacks. In particular, since $\mathbf{G}_m$ is linearly reductive, the higher direct images $\mathrm{R}^i \pi_* \mathcal{O}_{X^ {\mathrm{can}} \times Y^ {\mathrm{can}}}$ vanish for $i > 0$. The claim for $i = 0$ is an easy exercise in local coordinates, and we leave this to the reader.
\end{proof}
We now may put together all of our results on deformation theory to obtain our main theorem.
\begin{theorem} \label{mainthm:localagain}
Let $X$ and $Y$ be two stable varieties over a field $k$ of characteristic $0$. Then the natural map
\[ \Prod_{X,Y}:\mathcal{M}(X) \times \mathcal{M}(Y) \to \mathcal{M}(X \times Y) \]
is finite \'etale.
\end{theorem}
\begin{proof}
We first note that the morphism is well-defined by Proposition \ref{prop:product}. By Theorem \ref{thm:proper_DM}, the stacks $\mathcal{M}(X)$ are proper Deligne-Mumford stacks. Hence, by Zariski's main theorem for Deligne-Mumford stacks, it suffices to check that the map $\Prod_{X,Y}$ is \'etale at each point of $\mathcal{M}(X) \times \mathcal{M}(Y)$. Moreover, since each point of $\mathcal{M}(X) \times \mathcal{M}(Y)$ is given as a pair of stable varieties, we may without loss of generality restrict our attention to the canonical point of $\mathcal{M}(X) \times \mathcal{M}(Y)$ defined by $X$ and $Y$. In this case, by the main result of \cite{AbramovichHassett}, the deformation theory of $\mathcal{M}(X)$ at the point defined by $X$ is controlled by the functor ${\operatorname{Def}}_{X^ {\mathrm{can}}}$ on ${\operatorname{Art}}_k$, and similarly for $Y$ and $X \times Y$. Hence, it suffices to check that the natural transformation
\[ {\operatorname{Def}}_{X^ {\mathrm{can}}} \times {\operatorname{Def}}_{Y^ {\mathrm{can}}} \to {\operatorname{Def}}_{ (X \times Y)^ {\mathrm{can}} } \]
is an equivalence of functors on ${\operatorname{Art}}_k$. The result now follows from the factorization
\[ {\operatorname{Def}}_{X^ {\mathrm{can}}} \times {\operatorname{Def}}_{Y^ {\mathrm{can}}} \stackrel{a}{\to} {\operatorname{Def}}_{X^ {\mathrm{can}} \times Y^ {\mathrm{can}}} \stackrel{b}{\to} {\operatorname{Def}}_{ (X \times Y)^ {\mathrm{can}} } \]
coupled with the fact that $a$ is an equivalence by Theorem \ref{mainthm:defthy}, while $b$ is an equivalence by Proposition \ref{defthyindqcov} (which applies by Lemma \ref{stablevarinfaut} and Lemma \ref{cmsgorsch}).
\end{proof}
\section{The global theory} \label{sec:globaltheory}
In this section, we prove Theorem \ref{mainthm:productirreducible} and Theorem
\ref{mainthm:global}. Note that, by the Lefschetz principle, we may assume that the
base field is the field of complex numbers, which we shall do from now on.
\subsection{Canonically polarized manifolds}
Recall that a \emph{canonically polarized manifold} is a compact complex manifold
whose canonical line bundle is ample; such a manifold is automatically a smooth
complex projective variety. We shall use two important theorems from
differential geometry -- Yau's theorem about the existence of K\"ahler-Einstein
metrics, and the Uhlenbeck-Yau theorem -- to show that any canonically polarized
manifold can be uniquely decomposed into a product of ``irreducible'' factors.
\begin{definition}
A canonically polarized manifold $X$ is called \emph{irreducible} if it
does not admit a nontrivial product decomposition $X \cong X_1 \times X_2$.
\end{definition}
It is easy to see that, in any product decomposition of a canonically polarized
manifold, every factor is again canonically polarized. By Chow's theorem, such a
decomposition is then automatically also a decomposition in the category of smooth
complex projective varieties.
\begin{theorem}
\label{theorem:main}
Let $X$ be a canonically polarized manifold. Then there is a product decomposition
\begin{equation*}
X \cong X_1 \times \dotsb \times X_r
\end{equation*}
into irreducible canonically polarized manifolds, and this decomposition is unique
up to the order of the factors.
\end{theorem}
\subsection{Products of stable varieties}
The following statements are immediate consequences of Theorem \ref{theorem:main}.
\begin{corollary}
\label{cor:isomorphism}
If $Z$ is a canonically polarized manifold with irreducible decomposition $Z = Z_1 \times Z_2$, such that $\mathcal{M}(Z_1) \neq \mathcal{M}(Z_2)$ as components of the moduli stack of all stable varieties, then the product map
\begin{equation*}
\Prod_{Z_1,Z_2} : \mathcal{M}(Z_1) \times \mathcal{M}(Z_2) \to \mathcal{M}(Z)
\end{equation*}
is an isomorphism. Furthermore, the image of $\Prod_{X,Y}$ intersects $\mathcal{M}(Z)$ if and only if either $X \in \mathcal{M}(Z_1)$ and $Y \in \mathcal{M}(Z_2)$, or $X \in \mathcal{M}(Z_2)$ and $Y \in \mathcal{M}(Z_1)$.
\end{corollary}
\begin{corollary}
\label{cor:exact_description}
If $X$ is a canonically polarized manifold with irreducible decomposition
\begin{equation*}
X \cong \prod_{i=1}^r \left( \prod_{j=1}^{n_i} X_{ij} \right)
\end{equation*}
such that $\mathcal{M}(X_{ij}) = \mathcal{M}(X_{i'j'})$ if and only if $i=i'$, then
\begin{equation*}
\mathcal{M}(X) \cong \prod_{i=1}^r \left[ \factor{\mathcal{M}(X_{i1})^{\times n_i}}{S_{n_i}} \right] ,
\end{equation*}
where the symmetric group $S_{n_i}$ acts on $\mathcal{M}(X_{i1})^{\times n_i}$ by
permuting the factors, and the quotient is taken in the stack sense.
\end{corollary}
We also have the following general formula for the degree of the fibers.
\begin{proposition}
\label{prop:product_map_fiber}
If $X$ and $Y$ are stable varieties, then the fiber of the map $\Prod_{X,Y} : \mathcal{M}(X) \times \mathcal{M}(Y) \to \mathcal{M}(X \times Y)$ over $X \times Y$ contains as many points as
\begin{equation*}
\sum_{V \in \mathcal{M}(X), W \in \mathcal{M}(Y), V \times W \cong X \times Y }
\left| \factor{\operatorname{Aut}(X \times Y)}{\operatorname{Aut}(V) \times \operatorname{Aut}(W)} \right|
\end{equation*}
\end{proposition}
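To illustrate the count in Proposition \ref{prop:product_map_fiber} in a toy situation (not part of the proof), suppose $X \cong Y$ is irreducible with trivial automorphism group, so that $\operatorname{Aut}(X \times X)$ is generated by the swap of the two factors and has order $2$, while $(V,W) = (X,X)$ is the only decomposition appearing in the sum. The following Python sketch evaluates the displayed sum for this model; the group orders are hypothetical inputs, not computed from geometry.

```python
from fractions import Fraction

def fiber_size(aut_product_order, decompositions):
    """Evaluate the sum over decompositions (V, W) of
    |Aut(X x Y)| / (|Aut(V)| * |Aut(W)|)."""
    total = Fraction(0)
    for aut_v, aut_w in decompositions:
        total += Fraction(aut_product_order, aut_v * aut_w)
    return total

# Toy model: X = Y irreducible with trivial Aut(X), so
# Aut(X x X) = (Aut(X) x Aut(X)) extended by the swap has order 2,
# and (V, W) = (X, X) is the unique decomposition.
print(fiber_size(2, [(1, 1)]))  # → 2: the point (X, X) and its swap
```

In this toy case the fiber has two points, reflecting the familiar degree-$2$ behavior of the product map over the locus where the two factors are abstractly isomorphic.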
\subsection{Polystability of the tangent bundle}
The goal of this and the next section is to show that the tangent bundle of a
canonically polarized manifold is polystable (with respect to the ample line bundle
$\omega_X$).
\begin{theorem}
\label{thm:canonically_polarized_tangent}
If $X$ is a canonically polarized manifold, then $\mathcal{T}_X$ is polystable with
respect to $\omega_X$. More precisely, $\mathcal{T}_X$ uniquely decomposes into a
direct sum of stable, pairwise non-isomorphic subbundles of slope $\mu(\mathcal{T}_X)$.
\end{theorem}
We shall briefly recall the definition of stability and polystability. For a
torsion-free coherent sheaf $\mathcal{F}$ on $X$, we define the \emph{slope}
$\mu(\mathcal{F})$ with respect to the ample line bundle $\omega_X$ by the formula
\begin{equation} \label{eq:slope}
\mu(\mathcal{F}) =
\frac{c_1(\mathcal{F}) \cdot c_1(\omega_X)^{\dim X-1}}{\operatorname{rk} \mathcal{F}},
\end{equation}
see for example \cite[Definition 1.2.11]{HD_LM_TG}. If $i \colon U \to X$ is the
inclusion of the open subset where $\mathcal{F}$ is locally free, then $\operatorname{rk}
\mathcal{F}$ means the rank of $i^* \mathcal{F}$, and $c_1(\mathcal{F})$ is defined
as the first Chern class of the line bundle $\det \mathcal{F} = i_* \det(i^*
\mathcal{F})$.
\begin{definition}
\label{defn:semi_poly_stable}
Let $\mathcal{F}$ be a torsion-free sheaf on a canonically polarized complex manifold
$X$.
\begin{enumerate}
\item \label{itm:semi_poly_stable:stable} $\mathcal{F}$ is \emph{stable} if for every
subsheaf $\mathcal{G} \subseteq \mathcal{F}$ with $0<\operatorname{rk} \mathcal{G}< \operatorname{rk}
\mathcal{F}$, one has $\mu(\mathcal{G}) < \mu(\mathcal{F})$,
\item \label{itm:semi_poly_stable:poly_stable} $\mathcal{F}$ is \emph{polystable} if
it is the direct sum of stable sheaves of the same slope.
\end{enumerate}
\end{definition}
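Once the relevant intersection numbers are known, the slope inequality in the definition above is an elementary numerical check. The following Python sketch treats a sheaf as a pair (degree, rank), where ``degree'' stands for the intersection number $c_1(\mathcal{F}) \cdot c_1(\omega_X)^{\dim X - 1}$; the specific numbers are hypothetical and serve only to illustrate the comparison.

```python
from fractions import Fraction

def slope(degree, rank):
    # mu(F) = (c1(F) . c1(omega_X)^{dim X - 1}) / rk(F), as exact rationals
    return Fraction(degree, rank)

def violates_stability(sheaf, subsheaf):
    """True if the subsheaf witnesses non-stability,
    i.e. mu(subsheaf) >= mu(sheaf)."""
    return slope(*subsheaf) >= slope(*sheaf)

# A rank-3 sheaf of degree -3 (slope -1) with a rank-1
# subsheaf of degree -2 (slope -2): no violation here.
print(violates_stability((-3, 3), (-2, 1)))  # → False
# A rank-1 subsheaf of degree 0 has slope 0 >= -1: violation.
print(violates_stability((-3, 3), (0, 1)))   # → True
```

Of course, actual stability requires the inequality for every saturated subsheaf of intermediate rank; the sketch only checks one candidate at a time.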
The following simple lemma will be used in two places below.
\begin{lemma} \label{lem:simple}
Let $\mathcal{E}= \mathcal{E}_1 \oplus \dotsb \oplus \mathcal{E}_n$ be a polystable vector bundle, with
$\mathcal{E}_i$ stable and pairwise non-isomorphic. If $\mathcal{E} = \mathcal{F} \oplus \mathcal{G}$ for two
subsheaves $\mathcal{F}, \mathcal{G} \subseteq \mathcal{E}$, then there is a subset $I \subseteq \{1,
\dotsc, n\}$ with the property that
\[
\mathcal{F} = \bigoplus_{i \in I} \mathcal{E}_i \quad \text{and} \quad
\mathcal{G} = \bigoplus_{i \not\in I} \mathcal{E}_i.
\]
\end{lemma}
\begin{proof}
Since $\mathcal{E}_i$ are stable and pairwise non-isomorphic, \cite[Proposition
1.2.7]{HD_LM_TG} shows that we have
\begin{equation} \label{eq:EiEj}
\operatorname{Hom}(\mathcal{E}_i, \mathcal{E}_j) = \begin{cases}
\mathbb{C} &\text{for $i = j$,} \\
0 &\text{for $i \neq j$.}
\end{cases}
\end{equation}
Now consider the composition $i_{\mathcal{F}} p_{\mathcal{F}} \colon \mathcal{E} \to \mathcal{E}$ of the
projection $p_{\mathcal{F}} \colon \mathcal{E} \to \mathcal{F}$ and the inclusion $i_{\mathcal{F}} \colon \mathcal{F}
\to \mathcal{E}$. It is naturally represented by an $n \times n$-matrix; by \eqref{eq:EiEj} this
matrix is diagonal with entries in $\mathbb{C}$. Moreover, all diagonal entries are
either $0$ or $1$, on account of the identity $(i_{\mathcal{F}} p_{\mathcal{F}})(i_{\mathcal{F}} p_{\mathcal{F}})
= i_{\mathcal{F}} p_{\mathcal{F}}$. The same is true for the matrix representing $i_{\mathcal{G}}
p_{\mathcal{G}}$; since we have $i_{\mathcal{F}} p_{\mathcal{F}} + i_{\mathcal{G}} p_{\mathcal{G}} = \operatorname{id}_{\mathcal{E}}$, the
assertion follows.
\end{proof}
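The combinatorial heart of the proof above — two complementary idempotent diagonal matrices with entries in $\{0,1\}$ partition the index set — can be made concrete in the following Python sketch, a toy model in which scalars stand in for the $\operatorname{Hom}$ spaces of \eqref{eq:EiEj}.

```python
def partition_from_projector(p_diag):
    """Given the 0/1 diagonal of the idempotent i_F p_F, return the
    index set I (factors landing in F) and its complement (factors in G)."""
    assert all(x * x == x for x in p_diag)  # idempotency forces each entry to be 0 or 1
    I = {i for i, x in enumerate(p_diag) if x == 1}
    complement = set(range(len(p_diag))) - I
    return I, complement

# Toy: E = E_0 + E_1 + E_2 + E_3 with F picking out the factors 0 and 2;
# the complementary projector i_G p_G = id - i_F p_F then picks out 1 and 3.
I, J = partition_from_projector([1, 0, 1, 0])
print(sorted(I), sorted(J))  # → [0, 2] [1, 3]
```

The identity $i_{\mathcal{F}} p_{\mathcal{F}} + i_{\mathcal{G}} p_{\mathcal{G}} = \operatorname{id}$ is what forces the second index set to be the exact complement of the first, as in the lemma.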
\subsection{Differential geometry}
We shall now use two results from differential geometry to prove
Theorem \ref{thm:canonically_polarized_tangent}.
Let $(X, \omega)$ be a compact K\"ahler manifold; by a slight abuse of notation, we
shall use the symbol $\omega$ both for the K\"ahler metric and for its associated
real closed $(1,1)$-form. We can use the same formula as in \eqref{eq:slope} to
define the slope of torsion-free coherent sheaves on $X$, replacing $c_1(\omega_X)$
by the cohomology class $\lbrack \omega \rbrack \in H^2(X, \mathbb{R})$ of the K\"ahler
form. We therefore have the notion of stability and polystability with respect to
$\omega$.
Now recall that the K\"ahler metric $\omega$ is called \emph{K\"ahler-Einstein} if
$\Ric \omega= \lambda \omega$ for some real number $\lambda$. Here $\Ric \omega$ is
the Ricci curvature form of $\omega$, or equivalently the Chern curvature $\sqrt{-1}
\Theta(\det \mathcal{T}_X, \det \omega)$ of the naturally induced metric on the
holomorphic line bundle $\det \mathcal{T}_X$; the constant $\lambda$ is called the
\emph{scalar Ricci curvature} of $\omega$.
\begin{theorem}[\cite{AT_EDT, YST_OTR}]
If $X$ is a canonically polarized complex manifold, and $\lambda<0$ a real number,
then $X$ admits a unique K\"ahler-Einstein metric with scalar Ricci curvature
$\lambda$.
\end{theorem}
In the following, we shall normalize the K\"ahler-Einstein metric $\omega$ on a
canonically polarized manifold $X$ by taking its scalar Ricci curvature equal to $- 2
\pi$; in other words, we shall assume that $\Ric \omega = - 2 \pi \omega$. With this
convention,
\begin{equation} \label{eq:Kahler_Einstein_constant:chern_equals_metric}
c_1(\omega_X, (\det \omega)^{-1}) = \omega,
\end{equation}
where $c_1(\omega_X, (\det \omega)^{-1})$ is the Chern form of the induced metric on the canonical line
bundle. Indeed,
\begin{equation*}
c_1(\omega_X, (\det \omega)^{-1}) = - c_1(\det \mathcal{T}_X, \det \omega)
= - \frac{\sqrt{-1}}{2 \pi} \Theta(\det \mathcal{T}_X, \det \omega)
= - \frac{1}{2 \pi} \Ric \omega = \omega, \end{equation*}
where the second equality is by \cite[Example 4.4.8.i]{HD_CG}. This ensures that the
slope with respect to the ample line bundle $\omega_X$ is the same as the slope with
respect to the K\"ahler form $\omega$; in particular, the two notions of stability
(and polystability) coincide.
\begin{proposition}[{e.g.,\cite[Definitions 4.B.1 and 4.B.11]{HD_CG}}]
Let $(X, \omega)$ be a compact K\"ahler-Einstein manifold. Then the
induced metric on the holomorphic tangent bundle $\mathcal{T}_X$ is
Hermite-Einstein.
\end{proposition}
Recall that a Hermitian metric $h$ on a holomorphic vector bundle $\mathcal{E}$
is called \emph{Hermite-Einstein} if
\begin{equation}
\sqrt{-1} \Lambda_{\omega} \Theta(\mathcal{E},h) = \lambda \operatorname{id}_{\mathcal{E}};
\end{equation}
here $\Lambda_{\omega}$ is the metric contraction on the space of complex-valued
two-forms, induced by the K\"ahler metric $\omega$. The Uhlenbeck-Yau theorem relates
this differential-geometric condition back to algebraic geometry.
\begin{theorem}[\cite{UK_YST_OTE}]
\label{theorem:metric_polystable}
On a compact K\"ahler manifold $(X,\omega)$, a holomorphic vector bundle
admits a Hermite-Einstein metric if and only if it is polystable with respect to $\omega$.
\end{theorem}
Here is the proof that the tangent bundle of a canonically polarized manifold is
polystable.
\begin{proof}[Proof of Theorem \ref{thm:canonically_polarized_tangent}]
Since $X$ is canonically polarized, it admits a unique K\"ahler-Einstein metric
$\omega$ with $\Ric \omega = - 2 \pi \omega$. The induced metric on the tangent
bundle is Hermite-Einstein, and by the Uhlenbeck-Yau theorem, $\mathcal{T}_X$ is
polystable with respect to $\omega$, hence also polystable with respect to
$\omega_X$. This means that we have a decomposition
\[
\mathcal{T}_X = \mathcal{E}_1 \oplus \dotsb \oplus \mathcal{E}_n
\]
into stable subbundles $\mathcal{E}_i$ of slope
\[
\mu(\mathcal{E}_i) = \mu(\mathcal{T}_X) = -\frac{c_1(\omega_X)^{\dim X}}{\dim X} < 0.
\]
The argument in \cite[Lemma~1.3]{BA_CM} now shows that the $\mathcal{E}_i$ must be pairwise
non-isomorphic: indeed if $\mathcal{E}_i \simeq \mathcal{E}_j$ for $i \neq j$, then $\mathcal{E}_i$ would
carry a flat connection, which is not possible because $\mu(\mathcal{E}_i) < 0$. Finally,
the uniqueness of the decomposition follows from Lemma~\ref{lem:simple}.
\end{proof}
\subsection{Proof of the theorem}
\label{sec:proof}
We now come to the proof of Theorem \ref{theorem:main}. It is easy to see (by
induction on the dimension) that every canonically polarized
manifold has at least one decomposition
\[
X \cong X_1 \times \dotsm \times X_r
\]
into irreducible canonically polarized manifolds $X_i$. It remains to show
that this decomposition is unique, up to the order of the factors. For this, it is
clearly enough to prove that any two product decompositions of a canonically
polarized manifold admit a common refinement. This, in turn, is implied by the
following special case.
\begin{lemma}
Let $X \cong Y \times Z \cong Y' \times Z'$ be two product decompositions of a canonically
polarized manifold. Then there is a common refinement $X \cong W_1 \times W_2 \times
W_3 \times W_4$, with the property that
\begin{equation} \label{eq:refinement}
Y \cong W_1 \times W_2, \quad
Z \cong W_3 \times W_4, \quad
Y' \cong W_1 \times W_3, \quad
Z' \cong W_2 \times W_4.
\end{equation}
\end{lemma}
\begin{proof}
By Theorem \ref{thm:canonically_polarized_tangent}, the tangent bundle of $X$ is
polystable, and in fact, decomposes uniquely as
\begin{equation} \label{eq:splitting_TX}
\mathcal{T}_X = \mathcal{E}_1 \oplus \dotsb \oplus \mathcal{E}_n
\end{equation}
with $\mathcal{E}_i$ stable and pairwise non-isomorphic. To simplify the notation, we put
\[
\mathcal{E}(I) = \bigoplus_{i \in I} \mathcal{E}_i
\]
for any subset $I \subseteq \{1, \dotsc, n\}$. The decompositions $X \cong Y \times Z
\cong Y' \times Z'$ of the manifold $X$ induce decompositions $\mathcal{T}_X = p_Y^* \mathcal{T}_Y
\oplus p_Z^* \mathcal{T}_Z = p_{Y'}^* \mathcal{T}_{Y'} \oplus p_{Z'}^* \mathcal{T}_{Z'}$ of its tangent
bundle. It then follows from Lemma~\ref{lem:simple} that the set $\{1, \dotsc, n\}$
can be partitioned into four disjoint subsets $I_1$, $I_2$, $I_3$, and $I_4$, in such
a way that
\[
p_Y^* \mathcal{T}_Y = \mathcal{E}(I_1 \cup I_2), \quad
p_Z^* \mathcal{T}_Z = \mathcal{E}(I_3 \cup I_4), \quad
p_{Y'}^* \mathcal{T}_{Y'} = \mathcal{E}(I_1 \cup I_3), \quad
p_{Z'}^* \mathcal{T}_{Z'} = \mathcal{E}(I_2 \cup I_4).
\]
Let $\pi \colon \tilde{X} \to X$ be the universal covering space of $X$; note that
$\tilde{X}$ will usually be non-compact. The splitting $\mathcal{T}_X = \mathcal{E}(I_1) \oplus \mathcal{E}(I_2)
\oplus \mathcal{E}(I_3) \oplus \mathcal{E}(I_4)$ lifts to a splitting of $\mathcal{T}_{\tilde{X}}$, and
therefore induces a decomposition
\[
\tilde{X} \cong M_1 \times M_2 \times M_3 \times M_4
\]
into integral submanifolds of the foliations $\pi^* \mathcal{E}(I_k)$, according to
\cite[Theorem~A]{BA_CM}. By the same result, the fundamental group $G = \pi_1(X)$ acts
compatibly on each factor $M_k$, in such a way that the natural action on $\tilde{X}$ is
diagonal. In particular, this means that we have an embedding of groups
\[
G \to \operatorname{Aut}(M_1) \times \operatorname{Aut}(M_2) \times \operatorname{Aut}(M_3) \times \operatorname{Aut}(M_4),
\]
where $\operatorname{Aut}(M_k)$ denotes the group of biholomorphic automorphisms of the complex
manifold $M_k$. Let us denote the preimage of $\operatorname{Aut}(M_k)$ under this embedding by the
letter $G_k$, the preimage of $\operatorname{Aut}(M_k) \times \operatorname{Aut}(M_{\ell})$ by the letter
$G_{k\ell}$, and so on. We claim that $G \cong G_1 \times G_2 \times G_3 \times G_4$.
To prove this claim, we observe that $M_1 \times M_2$ is a simply connected
integral submanifold of the foliation $\pi^* p_Y^* \mathcal{T}_Y$, and must therefore be
the universal covering space of $Y$; consequently, $\pi_1(Y)$ embeds into $\operatorname{Aut}(M_1)
\times \operatorname{Aut}(M_2)$. The same is of course true in the other three cases. Since we
have $\pi_1(X) \cong \pi_1(Y) \times \pi_1(Z) \cong \pi_1(Y') \times \pi_1(Z')$
compatibly with the above decompositions, it follows that
\[
G \cong G_{12} \times G_{34} \cong G_{13} \times G_{24}.
\]
From this, it is easy to deduce that $G \cong G_1 \times G_2 \times G_3 \times G_4$.
To conclude the proof, we define $W_k = M_k / G_k$. We then have $X \cong W_1 \times
W_2 \times W_3 \times W_4$, and so each $W_k$ must be a compact complex manifold;
because $X$ is canonically polarized, each $W_k$ is also canonically polarized, and
therefore a smooth complex projective variety by Chow's theorem. It is clear from the
construction that \eqref{eq:refinement} is satisfied, and so the lemma is proved.
\end{proof}
\newpage
\input{derivedAGreview}
\newpage
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{O}{bject} detection is a classical task in computer vision: it captures target objects in images (or video) and returns the category and location of each object. Recent object detectors achieve high speed and accuracy. For example, YOLO~\cite{redmon2016yolo9000} runs at more than 40 frames per second (FPS) with 78\% mean average precision (mAP) on the PASCAL Visual Object Classes challenge 2007 (VOC2007). As a sub-field of object detection, pedestrian detection is often applied to video surveillance, automotive safety, and robotics applications. Pedestrians, as a special instance of object detection, have unique traits in video. Their appearance varies widely in body pose, clothing, lighting, and occlusion, while the background changes only within a limited range. This wide intra-class variety against relatively small background change has a negative effect on detectors. Above all, many detectors that work well on common objects suffer heavily from occlusion in pedestrian detection, which degrades the localization quality represented by bounding boxes. Thus, occlusion handling is required to help detectors recall test samples at different levels of occlusion.
\begin{figure}[t]
\centering
\centerline{\includegraphics[width = 0.5\linewidth]{figures/psp.png}}
\caption{Proposal shift problem in pedestrian detection. The colored boxes are the detection proposals, while the black boundaries are their ground truth.}
\label{fig:shiftP}
\end{figure}
\begin{figure*}[t]
\centering
\centerline{\includegraphics[width = 0.8\linewidth]{figures/detection_framework_2.png}}
\caption{Whole framework of the proposed method. The proposed pedestrian network consists of two sub-networks: detection and alignment. }
\label{fig:framework}
\end{figure*}
Felzenszwalb \emph{et al.}~\cite{felzenszwalb2008a, felzenszwalb2010object} proposed a star model that searches the whole image for body parts with a multi-scale sliding window technique. This work has inspired researchers to consider part detection in deep learning~\cite{ouyang2012discriminative, tian2015deep, ouyang2013joint, ouyang2013modeling, luo2014switchable}. Ouyang and Wang~\cite{ouyang2013joint} designed a unique part detection layer with 20 convolutional filters of different sizes to detect body parts of the corresponding size ratio. These deep learning-based methods assume that the detection proposals are given by conventional detectors such as SquaresChnFtrs~\cite{benenson2013seeking}. Thus, recent CNN-based pedestrian detectors~\cite{ouyang2012discriminative, tian2015deep, ouyang2013joint, ouyang2013modeling, luo2014switchable, tian2015pedestrian, angelova2015real, li2015scale, hosang2015taking} have transformed pedestrian detection into classification of the detection proposals, which avoids a redundant exhaustive search over whole images. JointDeep~\cite{ouyang2013joint} and SDN~\cite{luo2014switchable} used ``HOG+CSS'' as features and a linear SVM as a classifier to generate detection proposals (HOG: histogram of oriented gradients, CSS: color self-similarity). The ``HOG+CSS+SVM'' proposer recalled most pedestrian candidates from images, and the performance of the CNN detector was improved by hard negatives generated by this proposer. Other detection proposals were generated by ACF~\cite{dollar2014fast}, LDCF~\cite{nam2014local}, SquaresChnFtrs~\cite{benenson2013seeking}, and Checkerboards~\cite{zhang2015filtered}. Two-stage detectors that combine detection proposal and classification are influenced significantly by the performance of the detection proposers, especially in terms of the intersection over union (IoU) of the bounding boxes.
In this paper, we propose a part-level CNN for pedestrian detection using fully convolutional networks (FCN) and class activation maps (CAM). The proposed network consists of two sub-networks: detection and alignment. In the detection sub-network, we use saliency to assign different weights to pedestrians and background. Based on saliency, we remove false positives such as lamp posts and trees from pedestrians. We adopt the alignment sub-network to recall the lost body parts caused by the detection sub-network. In the alignment sub-network, we utilize the localization capability of CNN features such as FCN and CAM to produce confidence maps and infer accurate pedestrian locations, i.e. bounding box alignment. Although the FCN-based feature maps in our previous work~\cite{wang2017part} preserved the localization capability of CNN well, their output resolution was relatively low for bounding box alignment. Therefore, it was hard to obtain accurate feature maps even with upsampling. To address this resolution problem, we add CAM to the alignment sub-network. With the help of CAM, we produce high resolution feature maps for bounding box alignment. In this work, considering efficiency, we divide the human body into three parts for training the proposed CNN detector: head, torso and legs. In our previous work~\cite{wang2017part}, we divided it into five parts: head, left torso, right torso, left leg and right leg. Moreover, we utilize the detection sub-network to obtain pedestrian proposals, while our previous work~\cite{wang2017part} used SquaresChnFtrs~\cite{benenson2013seeking}, which is based on a combination of conventional hand-crafted features. Experimental results show that the proposed method effectively removes false positives by saliency and successfully recalls the lost body parts by bounding box alignment. The proposed method achieves a 10\% performance improvement in pedestrian detection over our previous work~\cite{wang2017part}.
Fig.~\ref{fig:framework} illustrates the whole framework of the proposed method.
Compared with existing methods, the main contributions of this paper are as follows:
\begin{itemize}
\item[$\bullet$] We use saliency in the detection sub-network to remove background areas such as lamp posts and trees from pedestrians.
\item[$\bullet$] We combine FCN and CAM into the alignment sub-network to enhance the resolution of confidence maps and successfully recall the lost body parts.
\end{itemize}
The rest of this paper is organized as follows.
Section~\ref{sec:RelatedWorks} reviews relevant research trends. In Section~\ref{sec:ProposedMethod}, the proposed method is described in detail. Section~\ref{sec:Experiments} experimentally compares the proposed method with existing methods. Section~\ref{sec:Conclusion} draws conclusions.
\section{Related Work}
\label{sec:RelatedWorks}
Up to the present, researchers have proposed many outstanding works for pedestrian detection; in this section we mainly focus on deep learning models. The first deep model was an unsupervised one proposed by Sermanet \emph{et al.}~\cite{sermanet2013pedestrian} to cope with limited training data. This model used a few tricks: 1) multi-stage features, 2) skip connections to integrate global shape information with local distinctive motif information, and 3) an unsupervised method based on convolutional sparse coding to pre-train the filters at each stage. A series of methods~\cite{ouyang2012discriminative, ouyang2013joint, ouyang2013modeling, luo2014switchable} combined part detection and deep models to improve the detection accuracy under body part occlusion. DBN-Isol~\cite{ouyang2012discriminative} built on the deformable part model (DPM)~\cite{felzenszwalb2010object} and used a deep belief network to estimate the visibility of pedestrians. JointDeep~\cite{ouyang2013joint} was a deep model composed of feature extraction, occlusion handling, deformation and classification in a single network. MultiSDP~\cite{zeng2013multi} built a multi-stage classifier to deal with the complex sample distributions in pedestrian datasets. SDN~\cite{luo2014switchable} used switchable restricted Boltzmann machines (RBMs) to extract high-level features for body parts, dividing the human body into three parts: head-shoulder, upper body and lower body.
Tian \emph{et al.}~\cite{tian2015pedestrian} introduced scene labeling datasets containing city street scenes to help the detector distinguish background from the proposals; the idea was that the scene labeling datasets contain information similar to the background in pedestrian datasets. Considering part detection, Tian \emph{et al.}~\cite{tian2015deep} also proposed DeepParts to handle occlusion with an extensive body part pool. In this method, an SVM detector on top of the CNN output was not used because it brought only a small improvement. Moreover, general object detectors~\cite{girshick2014rich} have been applied to pedestrian detection. Hosang \emph{et al.}~\cite{hosang2015taking} analyzed the feasibility of the region-based CNN~\cite{girshick2014rich} (R-CNN) framework for the pedestrian detection task. They adopted SquaresChnFtrs~\cite{benenson2013seeking}, a stand-alone pedestrian detector, as the detection proposer and an R-CNN model for classification. Following R-CNN, the region proposal network (RPN) built into Faster R-CNN~\cite{ren2015faster} produces detection proposals by the network itself.
\section{Proposed Method}
\label{sec:ProposedMethod}
The proposed pedestrian detection framework consists of two sub-networks: detection and alignment. We use a proposal-and-classification approach to detect pedestrians at multiple scales. To get detection proposals, we perform fast pedestrian detection in the detection sub-network based on a region proposal network (RPN). To remove false positives, we use saliency in the detection sub-network. Then, we align bounding boxes in the alignment sub-network to recall the lost body parts caused by the detection sub-network. We combine FCN and CAM in the alignment sub-network for accurate pedestrian localization.
\subsection{Detection Framework}
\label{subsec:DetectionFramework}
\textbf{Network Architecture:} The first stage is to generate detection proposals. As shown in Fig.~\ref{fig:detection}, the detection sub-network consists of five convolutional units, one fully-connected (FC) layer, and one global max pooling (GMP) layer for classification and localization. The five convolutional units are configured similarly to the VGG-16 network~\cite{simonyan2014very}. Each convolutional unit consists of two or three $3 \times 3$ convolutional layers and one max-pooling layer. The fifth convolutional unit is followed by a global max pooling layer instead of a max pooling layer. These convolutional units produce a feature map of size $1 \times 1 \times 512$. The feature map is connected to the FC layer, which is split into two output layers. The first output layer is the classification layer, while the second output layer is the bounding box regression layer. This output layer architecture is taken from Faster R-CNN~\cite{ren2015faster}. For the network training, the loss ($L_{d}$) is defined as follows:
\begin{equation}
L_{d} = L_{d}^{cls} + L_{d}^{bbox}
\end{equation}
where $L_{d}^{cls}$ is the classification loss, i.e. softmax-log loss, and $L_{d}^{bbox}$ is the bounding box regression loss, i.e. smooth L1 loss.
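As a minimal sketch (not the authors' implementation), the two loss terms can be written in plain numpy, assuming a standard softmax negative log-likelihood for $L_{d}^{cls}$ and the usual smooth L1 form for $L_{d}^{bbox}$:

```python
import numpy as np

def softmax_log_loss(logits, label):
    # L_d^cls: negative log-likelihood of the true class under softmax
    z = logits - np.max(logits)               # stabilize the exponentials
    log_probs = z - np.log(np.sum(np.exp(z)))
    return float(-log_probs[label])

def smooth_l1(pred, target):
    # L_d^bbox: smooth L1 loss, summed over the 4 box coordinates
    d = np.abs(np.asarray(pred) - np.asarray(target))
    return float(np.sum(np.where(d < 1.0, 0.5 * d ** 2, d - 0.5)))

def detection_loss(logits, label, bbox_pred, bbox_target):
    # L_d = L_d^cls + L_d^bbox
    return softmax_log_loss(logits, label) + smooth_l1(bbox_pred, bbox_target)
```
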
\begin{figure*}[t]
\centering
\centerline{\includegraphics[width = 0.7\linewidth]{figures/detection_part.png}}
\caption{Architecture of the proposed detection sub-network. }
\label{fig:detection}
\end{figure*}
Also, to obtain saliency maps for pedestrians, we add three convolutional layers and five deconvolutional blocks to the saliency network after the last pooling layer of the detection sub-network. Each deconvolutional block consists of one bilinear upsampling layer and one to three convolutional layers. The layer configuration of the deconvolutional blocks for the saliency network is described in Table~\ref{table:deconv_block_details}. In the last deconvolutional block, the output value is limited to the range $[0, 1]$ using a sigmoid function. For the network training, we calculate the saliency loss $L_{s}$ as the Euclidean distance from the ground truth.
\begin{table}[t]
\caption{Layer configuration of the deconvolutional block for the saliency network. Input size: $600 \times 800$. Change of in/out channels: $\to$. Change of layer size: $\downarrow, \uparrow$. Data flow: $\leftarrow$.}
\small\addtolength{\tabcolsep}{-3pt}
\begin{center}
\begin{tabular}[c]{|c|c|c|c|c|}
\hline
\bf{Layer} & \bf{Filter} & \bf{Size ($w\times h$)} & \bf{Output} & \bf{etc.} \\
\hline
$pool$ 5 & 2 $\times$ 2 & $18 \times 25 $ & 512 & $\leftarrow conv$ 5-3, $\downarrow$\\
$conv$ 6-1 & 3 $\times$ 3 & $18 \times 25$ & 512$\to$1024 & \\
$conv$ 6-2 & 3 $\times$ 3 & $18 \times 25$ & 1024 & \\
$conv$ 6-3 & 3 $\times$ 3 & $18 \times 25$ & 1024 & \\
\hline
$upsample$ 1 & - & $35 \times 49$ & 1024 & size $\uparrow$ \\
$conv$ 7-1 & 3 $\times$ 3 & $35 \times 49$ & 1024$\to$512 & \\
$conv$ 7-2 & 3 $\times$ 3 & $35 \times 49$ & 512 & \\
\hline
$upsample$ 2 & - & $69 \times 97$ & 512 & size $\uparrow$ \\
$conv$ 8-1 & 3 $\times$ 3 & $69 \times 97$ & 512 $\to$ 256 & \\
$conv$ 8-2 & 3 $\times$ 3 & $69 \times 97$ & 256 & \\
\hline
$upsample$ 3 & - & $137 \times 193$ & 256 & size $\uparrow$ \\
$conv$ 9-1 & 3 $\times$ 3 & $137 \times 193$ & 256$\to$128 & \\
\hline
$upsample$ 4 & - & $273 \times 385$ & 128 & size $\uparrow$ \\
$conv$ 10-1 & 3 $\times$ 3 & $273 \times 385$ & 128$\to$64 & \\
\hline
$upsample$ 5 & - & $ 545\times 769$ & 64 & size $\uparrow$ \\
$conv$ 11-1 & 3 $\times$ 3 & $545 \times 769$ & 64 $\to$ 32 & \\
$conv$ 11-2 & 3 $\times$ 3 & $545 \times 769$ & 32 $\to$ 1 & \\
\hline
\end{tabular}
\end{center}
\label{table:deconv_block_details}
\end{table}
For detection proposals, we train the detection sub-network jointly with the saliency network by optimizing the following combined loss function:
\begin{equation}
L = L_{d} + L_{s}
\end{equation}
where $L_{d}$ and $L_{s}$ are losses of the detection network and of the saliency network, respectively. \\
\textbf{Detection Proposal:} We use Faster R-CNN~\cite{ren2015faster} to extract detection proposals for pedestrians. However, the detection results include some false positives such as vehicle parts, trees, and lamp posts. To remove them, we apply different weights to the background and foreground so that the detector focuses on the pedestrian area. To determine the weight, we obtain pedestrian saliency maps from the input image using the saliency network. We update the class probability (score) using the saliency map as follows:
\begin{equation}
f_{w}(b) = f(b) \cdot w_{f}
\end{equation}
The weight $w_{f}$ is defined as follows:
\begin{equation}
w_{f} = \left\{
\begin{array}{cl}
1 & \mbox{if} \quad f(b) > th_{b} \\
\frac{1}{N}\sum_{x,y \in b}s(x,y) & \mbox{otherwise,}
\end{array}
\right.
\end{equation}
\begin{figure}[t]
\centering
\subfloat[]{
\label{Fig.sub.1}
\includegraphics[width = 0.15\textwidth]{figures/_1020_img.png}}
\subfloat[]{
\label{Fig.sub.1}
\includegraphics[width = 0.15\textwidth]{figures/_1020_keep_img_roi.png}}
\subfloat[]{
\label{Fig.sub.1}
\includegraphics[width = 0.15\textwidth]{figures/_1020_keep_img.png}}
\subfloat[]{
\label{Fig.sub.1}
\includegraphics[width = 0.15\textwidth]{figures/_1020_sal_img.png}}
\subfloat[]{
\label{Fig.sub.1}
\includegraphics[width = 0.15\textwidth]{figures/_1020_non_keep_img.png}}
\subfloat[]{
\label{Fig.sub.1}
\includegraphics[width = 0.15\textwidth]{figures/_1020_keep_sal_img.png}}
\caption{Examples of detection proposal with saliency weight. (a) Input image, (b) detection proposal (w/o $w_{f}$), (c) NMS result of (b), (d) Saliency map, (e) detection proposal (with $w_{f}$), (f) NMS result of (e) }
\label{fig:withSal}
\end{figure}
where $b$ is the bounding box of a proposal, $s(x,y)$ is the saliency score at position $(x, y)$, $f(b)$ is the class score of the selected bounding box, and $N$ is the number of pixels in $b$. $th_{b}$ is the threshold value for distinguishing between foreground and background. The new class score $f_{w}(b)$ is calculated as the product of the weight value $w_{f}$ and the bounding box score $f(b)$. Then, we use non-maximum suppression (NMS)~\cite{ren2015faster} to determine the final detection proposal samples. Fig.~\ref{fig:withSal} shows some examples of the detection proposal samples generated by the proposed method.
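The re-scoring step can be illustrated with a short numpy sketch; the threshold value `th_b = 0.5` and the `(x1, y1, x2, y2)` box format below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def saliency_weight(score, box, saliency_map, th_b=0.5):
    # w_f = 1 for confident proposals, otherwise the mean saliency inside the box
    if score > th_b:
        return 1.0
    x1, y1, x2, y2 = box
    region = saliency_map[y1:y2, x1:x2]   # saliency values s(x, y) inside b
    return float(region.mean())           # (1/N) * sum_{x,y in b} s(x, y)

def weighted_score(score, box, saliency_map, th_b=0.5):
    # f_w(b) = f(b) * w_f: proposals on non-salient background are suppressed
    return score * saliency_weight(score, box, saliency_map, th_b)
```
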
\subsection{Alignment Framework}
\label{subsec:AlignmentFramework}
\textbf{Network Architecture:} The second stage is to align the bounding boxes using the part-level detector. Our part-level detector is a combination of one root detector, which detects the root position of pedestrians, and three part detectors, which detect the body parts of head, torso, and legs. The root/part detector networks are configured similarly to the VGG-16 network. As shown in Fig.~\ref{fig:partD}, the alignment sub-network has two output layers: one to obtain the FCN output and the other to obtain the CAM output with global average pooling.
\begin{figure*}[t]
\centering
\centerline{\includegraphics[width = 0.7\linewidth]{figures/part_d.png}}
\caption{Network architecture of the proposed part-level detector based on VGG-16 network with class activation map}
\label{fig:partD}
\end{figure*}
Our root detector produces a confidence score and a root position for each detection proposal. Bounding box alignment is performed by the root detector, and we treat the updated position of the aligned bounding box as the anchor position, i.e. the final position. Similarly, part confidence scores and part positions are produced by each part detector; note that the part detection stage operates on the updated position. In principle, bounding box alignment helps the proposed detector by providing better detection proposals as well as by recalling lost body parts that fall outside the proposal region. We compute a weighted sum of the confidence scores with a spatial distance penalty term as the final confidence score of a detection proposal. \\
\textbf{Converting CNN into FCN/CAM:} Detectors such as R-CNN generally suffer from low detection IoU, which causes poor localization quality of the detection proposals; in this work, we call this the proposal shift problem. Hosang \emph{et al.}~\cite{hosang2015taking} reported that the best detection proposal method, SpatialPooling+~\cite{paisitkriangkrai2016pedestrian}, recalled 93\% of the samples at a 0.5 IoU threshold while recalling only 10\% at a 0.9 IoU threshold. Zhang \emph{et al.}~\cite{Shanshan2016CVPR} clustered all false positives into three categories, and localization quality is one of the main sources of false positives. Detection proposals shift the position of samples in various directions and by various distances. As shown in Fig.~\ref{fig:shiftP}, body parts frequently appear outside the region of the detection proposal, which leads to a bad detection response: a low confidence score and/or IoU. Thus, we introduce a novel technique based on FCN and CAM to align the bounding boxes. From the responses of FCN and CAM, we generate much larger heat maps and then predict the new position of pedestrians.
To perform bounding box alignment, a larger detection region is needed as the input of the detector. In this larger detection region, our root detector outputs a coarse position of a pedestrian. We simply convert the root/part networks into FCN versions and generate root/part CAMs to get coarse position information; we call these networks root/part-net. In root/part-net, the last pooling layer is fully connected with \verb"FC1" by an inner product weight matrix, so the size of the input image is supposed to be fixed. With the trained root/part-net, we change the shape and dimension of the parameters between the last pooling layer and \verb"FC1" so that this weight matrix convolves over the larger feature map. By expanding the bounding box by 25\% and changing the size of the input image to $160 \times 96$, we obtain a confidence score heat map ($C_{fcn}$) of size $5 \times 3$. According to studies on visualizing deep networks~\cite{ZeilerF13Vis1, MahendranV14Vis2}, the deeper the layer, the more abstract the extracted information: neurons gradually transform simple edges into high-level information, which can be used to identify categories in input images~\cite{zhou2016learning}. As shown in Fig.~\ref{fig:partD}, the global average pooling (GAP) layer produces the spatial average of the feature map of each unit in the 4th convolutional layer, and a weighted sum of these values is used to output the final object position. The weighted-sum confidence class activation map ($C_{cam}$) is as follows:
\begin{equation}
C_{cam} = \sum_{x,y}\sum_{k}w_{k}^{c}f_{k}(x,y)
\end{equation}
where $f_{k}(x,y)$ denotes the activation of the unit $k$ in the 4th convolutional layer for the input images, and $w_{k}^{c}$ is the weighted value corresponding to the class position in the unit $k$. Based on the previous research~\cite{zhou2016learning}, it is expected that each unit in the convolutional layer is activated by the visual pattern within the receptive field. \\
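The equation above is a per-pixel weighted sum over the $K$ feature channels; a minimal numpy sketch (illustrative shapes, not the paper's code):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    # C_cam(x, y) = sum_k w_k^c * f_k(x, y)
    # feature_maps:  (K, H, W) activations of the 4th convolutional layer
    # class_weights: (K,) GAP-to-output weights w_k^c for the pedestrian class
    return np.tensordot(class_weights, feature_maps, axes=(0, 0))
```
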
\textbf{Shift-and-Stitch for a Larger Confidence Map:} To predict a coarse position of a pedestrian in the large detection region, a higher resolution of $C_{fcn}$ and $C_{cam}$ is needed. We use a simple trick to obtain it as follows. Since the network stride is $s=32$ pixels, we shift the proposal window by $f$ steps along the horizontal and vertical axes uniformly, keeping the total distance within 32 pixels; this means that the shift distance of every stride is $s/f$. Taking root-FCN as an example, root-FCN generates a $5 \times 3$ heat map at every step, and we interlace all $f^{2}$ outputs according to the relative offset of each shift. As a result, a $(5\cdot{f})\times(3\cdot{f})$ heat map is generated.
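The interlacing step can be sketched as follows; `run_net` is a hypothetical callable standing in for root-FCN, and the shift is implemented with `np.roll` purely for illustration (a real implementation would crop the shifted window):

```python
import numpy as np

def shift_and_stitch(run_net, image, s=32, f=4, out_h=5, out_w=3):
    # Shift the input f times per axis by s/f pixels, run the net on each
    # shifted copy, and interlace the f*f coarse outputs into one
    # (out_h*f) x (out_w*f) heat map.
    step = s // f                                   # shift distance per stride
    stitched = np.zeros((out_h * f, out_w * f))
    for dy in range(f):
        for dx in range(f):
            shifted = np.roll(image, (-dy * step, -dx * step), axis=(0, 1))
            heat = run_net(shifted)                 # coarse (out_h, out_w) map
            stitched[dy::f, dx::f] = heat           # place by relative offset
    return stitched
```
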
\begin{figure*}[t]
\centering
\centerline{\includegraphics[width = 0.8\linewidth]{figures/bb_alg.png}}
\caption{Pipeline for bounding box alignment. Origin: Original bounding box. The pedestrian is localized at the top left corner of a bounding box. Extend: Enlarged bounding box. Confidence map: Output of FCN and CAM. Better: Aligned bounding box. The lost head part is recalled and thus the pedestrian is accurately localized. }
\label{fig:example}
\end{figure*}
Once larger $C_{fcn}$ and $C_{cam}$ maps are obtained, we apply a simple up-sampling method to produce a score heat map whose aspect ratio equals that of the input region. In this way, the shift direction toward the target position is calculated without a stretch operation. A coarse body position is estimated by selecting the region having the largest average value in the up-sampled $C_{fcn}$ and $C_{cam}$. We use an enlarging ratio parameter $L$ to determine the size of the target bounding box: the width/height of the rectangle $w/h$ is obtained by multiplying $L$ with the width/height of the input region $W$/$H$.
\begin{equation}
w/h=L\cdot W/H
\end{equation}
Let the coarse position in the large input region be $(x_p, y_p)$ and the original position be $(x_o, y_o)$.
Then, we update $x$ by
\begin{equation}
\Delta{x}_{fcn}=\frac{2\times\sum_{i=1}^{n} (C_{fcn,i}^{t}-C_{fcn,i}^{o})^{2}}{\sum_{i=1}^n {C_{fcn,i}^{t}}^{2}+\sum_{i=1}^{n}{C_{fcn,i}^{o}}^{2}}\ast (x_{p}-x_{o})
\end{equation}
where $C_{fcn,i}^{t}$ is the value of the $i$-th element of the target rectangle in the confidence score heat map, $C_{fcn,i}^{o}$ is the value of the $i$-th element of the original rectangle, and $n$ is the total number of elements in the rectangles. $\Delta{x}_{cam}$, $\Delta{y}_{fcn}$, and $\Delta{y}_{cam}$ are obtained in the same way. The position of the detection proposal is updated by
\begin{equation}
x_a=x_o+\frac{\Delta{x}_{fcn}+\Delta{x}_{cam}}{2}
\end{equation}
$y_a$ is updated in the same way. The updated position of the detection proposal $(x_a, y_a)$ is called the anchor position. Based on the anchor position $(x_a, y_a)$, our part-level detector is operated to yield part scores and part positions. \\
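A numpy sketch of the update, directly transcribing the two equations above (box coordinates and map sizes are illustrative):

```python
import numpy as np

def alignment_shift(c_target, c_orig, p, o):
    # Delta = 2*sum((C^t_i - C^o_i)^2) / (sum(C^t_i^2) + sum(C^o_i^2)) * (p - o)
    c_target = np.asarray(c_target, dtype=float)
    c_orig = np.asarray(c_orig, dtype=float)
    num = 2.0 * np.sum((c_target - c_orig) ** 2)
    den = np.sum(c_target ** 2) + np.sum(c_orig ** 2)
    return num / den * (p - o)

def anchor_position(x_o, y_o, dx_fcn, dx_cam, dy_fcn, dy_cam):
    # x_a = x_o + (dx_fcn + dx_cam) / 2, and likewise for y_a
    return x_o + (dx_fcn + dx_cam) / 2.0, y_o + (dy_fcn + dy_cam) / 2.0
```
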
\textbf{Part Merging:} Part detection is considered in the alignment sub-network. Each part detector has a filter with a different receptive field size for the aligned bounding box generated by the root detector. A part score $score_p$ and a part position $(x_p, y_p)$, which indicate the likelihood and the region of the part's appearance, respectively, are produced by each of the part detectors. The final detection score is defined as:
\begin{equation}
score = score_{root}+\sum_{i=\{parts\}}{w_i}\ast(score_i + P_i)
\end{equation}
where $score_{root}$ is the output score of the root detector; $score_i$ is the output score of each of the three body parts; $w_i$ is the weight that indicates the importance of the part scores, and we set $\sum_{i=\{parts\}}w_{i}=1$ in this work. $P_i$ is the penalty term on the spatial distance between the anchor position and the part position:
\begin{equation}
P=a\ast(|x_{p}-x_{a}|+|y_{p}-y_{a}|)+b\ast(|x_{p}-x_{a}|^{2}-|y_{p}-y_{a}|^{2})
\end{equation}
where $a$ and $b$ are weights of the penalty term that balance the orientation and geometric shifting distance, and $(x_a, y_a)$ is the anchor position, i.e. the position of an aligned detection proposal. For the position of the detection, we simply use the anchor position as the final position.
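The merging rule transcribes directly into code; the weights `w_i` and penalty coefficients `a`, `b` below are illustrative placeholders, and the sign conventions follow the equations exactly as printed:

```python
def spatial_penalty(xp, yp, xa, ya, a, b):
    # P = a*(|dx| + |dy|) + b*(|dx|^2 - |dy|^2)
    dx, dy = abs(xp - xa), abs(yp - ya)
    return a * (dx + dy) + b * (dx ** 2 - dy ** 2)

def final_score(score_root, part_scores, part_positions, anchor,
                weights, a=0.1, b=0.01):
    # score = score_root + sum_i w_i * (score_i + P_i), with sum_i w_i = 1
    xa, ya = anchor
    total = score_root
    for w, s, (xp, yp) in zip(weights, part_scores, part_positions):
        total += w * (s + spatial_penalty(xp, yp, xa, ya, a, b))
    return total
```
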
\subsection{Implementation Details}
\label{subsec:Implementation}
\textbf{Target Labels for Training Data:} Currently, datasets such as Caltech~\cite{Dollar2012PAMI}, INRIA~\cite{dalal2005histograms} and ETH~\cite{eth_biwi_00534} do not provide part-level and saliency annotations. Inspired by~\cite{felzenszwalb2008a,felzenszwalb2010object}, we crop every ground truth box into three parts uniformly and assign the corresponding part labels automatically to generate training data for our part detectors. We train part detectors for three body parts: head, torso and legs. In the Caltech pedestrian dataset, every frame in which a given sample is visible has two bounding boxes: one indicates the full extent of the entire body (BB-full), while the other covers the visible region (BB-vis). For the part detectors, we only select BB-vis for part division to avoid collecting background regions as positives. To generate training data for saliency, we draw white rectangles on a black background using the ground truth bounding boxes. \\
\textbf{Initialization and Settings for Training:} We implement the entire network using TensorFlow~\cite{tensorflow16tensorflow} and train it on a PC with an NVIDIA GTX 1080ti with 11GB memory. We initialize the parameters of the convolutional units from VGG-16~\cite{simonyan2014very}, which is pre-trained on the ImageNet dataset. For layers that do not belong to the VGG-16 network, the Xavier initialization method~\cite{Xavier10init} is used. For optimization, we use the ADAM optimizer~\cite{KingmaB14Adam} with a learning rate of 0.001 for 15 epochs. To avoid overfitting, we apply dropout~\cite{srivastava14dropout} with probability 0.5 to the final fully-connected layer for regularization.
\section{Experimental Results}
\label{sec:Experiments}
\subsection{Datasets and Benchmark}
\label{subsec:Dataset}
As shown in Fig.~\ref{fig:dataset}, we evaluate the performance of the proposed method on three datasets: Caltech~\cite{Dollar2012PAMI}, INRIA~\cite{dalal2005histograms} and ETH~\cite{eth_biwi_00534}.
\begin{figure}[t]
\centering
\begin{minipage}[b]{0.32\linewidth}
\centering
\centerline{\includegraphics[width = 1.0\textwidth]{figures/USA.jpg}}
\centerline{(a) }
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\centerline{\includegraphics[width = 1.0\textwidth]{figures/INRIA.png}}
\centerline{(b) }
\end{minipage}
\begin{minipage}[b]{0.32\linewidth}
\centering
\centerline{\includegraphics[width = 1.0\textwidth]{figures/ETH.png}}
\centerline{(c) }
\end{minipage}
\caption{Three datasets for experiments. (a) Caltech-USA. (b) INRIA. (c) ETH. }
\label{fig:dataset}
\end{figure}
\textbf{Caltech-USA:} This dataset~\cite{Dollar2012PAMI} consists of approximately 10 hours of $640 \times 480$, 30Hz video taken from a vehicle driving through regular traffic in an urban environment. About 250,000 frames (in 137 approximately one-minute-long segments) with a total of 350,000 bounding boxes and 2,300 unique pedestrians have been annotated. Following~\cite{hosang2015taking} and~\cite{nam2014local}, we use every 3rd frame to extract training data. The standard test set of 4,024 images (sampled every 30th frame from the test videos) is used for evaluation. \\
\textbf{INRIA:} This dataset~\cite{dalal2005histograms} consists of 1,382 training images and 288 testing images taken from personal digital image collections or from the web using Google Images. Only upright persons (with height $> 100$ pixels) have been annotated. The original positive images are of very high resolution (approximately $2592 \times 1944$ pixels), so we crop these images to highlight the persons. Our model is trained with all training images and evaluated on the 288 testing images. \\
\textbf{ETH:} This dataset~\cite{eth_biwi_00534} consists of 1,450 training images and 354 testing images with a resolution of $640 \times 480$ (bayered). The dataset provides the camera calibration and annotations of pedestrian bounding boxes. \\
To evaluate the proposed pedestrian detection method, we mainly use the reasonable subset~\cite{Dollar2012PAMI, li2018scale}, which contains pedestrians that are over 50 pixels in height and over 65\% visible. We perform evaluations on the final output: a list of detected bounding boxes with category scores. We use the standard parameter setting on the Caltech dataset. We use the log-average miss rate to evaluate the detector's performance, computed by averaging the miss rate at false positives per image (FPPI) rates evenly spaced in log-space in the range $10^{-2}$ to $10^{0}$. A detection whose overlap with the ground truth exceeds 50\% is counted as a true positive:
\begin{equation}
overlap = \frac{area(BB_{dt} \bigcap BB_{gt})}{area(BB_{dt} \bigcup BB_{gt})} > 0.5
\end{equation}
where $BB_{dt}$ and $BB_{gt}$ are detection bounding box and ground truth bounding box, respectively.
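The overlap criterion is the standard intersection-over-union test; a small self-contained sketch for boxes in `(x1, y1, x2, y2)` format:

```python
def iou(bb_dt, bb_gt):
    # area(BB_dt intersect BB_gt) / area(BB_dt union BB_gt)
    ix1, iy1 = max(bb_dt[0], bb_gt[0]), max(bb_dt[1], bb_gt[1])
    ix2, iy2 = min(bb_dt[2], bb_gt[2]), min(bb_dt[3], bb_gt[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_dt = (bb_dt[2] - bb_dt[0]) * (bb_dt[3] - bb_dt[1])
    area_gt = (bb_gt[2] - bb_gt[0]) * (bb_gt[3] - bb_gt[1])
    union = area_dt + area_gt - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(bb_dt, bb_gt, threshold=0.5):
    # A detection counts as a true positive when IoU exceeds the threshold
    return iou(bb_dt, bb_gt) > threshold
```
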
\begin{table}[t]
\caption{Performance evaluation on Caltech dataset (Unit=\%). Proposed I: "Detection Proposal + Saliency". Proposed II: "Proposed I + Shift Handling + Part Detectors". }
\begin{center}
\begin{tabular}[c]{|c|c|c|c|}
\hline
\bf{Subset} & \cite{wang2017part} & Proposed I & Proposed II \\
\hline
Reasonable & 22.52 & 18.82 & 12.40 \\
\hline
Scale=Large & 8.87 & 8.70 & 4.50 \\
\hline
Scale=Near & 11.96 & 10.98 & 6.03 \\
\hline
Scale=Medium & 65.54 & 53.71 & 53.71 \\
\hline
Occ=None & 19.69 & 16.03 & 11.43 \\
\hline
Occ=Partial & 43.74 & 36.32 & 16.68 \\
\hline
\end{tabular}
\end{center}
\label{table:result1}
\end{table}
\subsection{Performance of Part-Level Detectors}
\label{subsec:per-part}
We conduct a set of experiments on the Caltech dataset to investigate the detection accuracy of the proposed method. We evaluate the effect of the saliency weights, shift handling, and part merging on pedestrian detection. When the saliency weights are applied to the detection proposals, the log-average miss rate is 18.82\% ('Proposed I' in Table~\ref{table:result1}). Compared with the previous results, the saliency weights help to ensure correct detection proposals, as shown in Fig.~\ref{fig:withSal}. We also confirm that the miss rate decreases to 12.40\% when the bounding box alignment is applied to solve the proposal shift problem ('Proposed II' in Table~\ref{table:result1}). We apply part-level detection to the larger detection region, so the part detectors are able to recall lost body parts beyond the detection proposals. With the aligned anchor positions, part positions become more accurate by localizing the region with the largest average score. The spatial distance penalty term between the anchor and part positions is very effective against the proposal shift problem.
\begin{figure}[t]
\centering
\subfloat[]{
\label{Fig.sub.10}
\includegraphics[width = 0.11\textwidth]{figures/result_2_1.png}
\includegraphics[width = 0.11\textwidth]{figures/result_2_2.png}} \quad
\subfloat[]{
\label{Fig.sub.11}
\includegraphics[width = 0.11\textwidth]{figures/result_7_1.png}
\includegraphics[width = 0.11\textwidth]{figures/result_7_2.png}}\\
\subfloat[]{
\label{Fig.sub.12}
\includegraphics[width = 0.11\textwidth]{figures/result_13_1.png}
\includegraphics[width = 0.11\textwidth]{figures/result_13_2.png}}\quad
\subfloat[]{
\label{Fig.sub.13}
\includegraphics[width = 0.11\textwidth]{figures/result_16_1.png}
\includegraphics[width = 0.11\textwidth]{figures/result_16_2.png}}
\caption{Some successful detection results. The left and right images show the detection results of ``basic (without saliency)'' and ``proposed (with saliency)'', respectively. Blue box: False positive. Best viewed in color.}
\label{fig:result3}
\end{figure}
\begin{figure}[t]
\centering
\subfloat[]{
\label{Fig.sub.8}
\includegraphics[width = 0.11\textwidth]{figures/result_9_1.png}
\includegraphics[width = 0.11\textwidth]{figures/result_9_2.png}} \quad
\subfloat[]{
\label{Fig.sub.9}
\includegraphics[width = 0.11\textwidth]{figures/result_12_1.png}
\includegraphics[width = 0.11\textwidth]{figures/result_12_2.png}}
\caption{Some successful detection results. The left and right images show the detection results of Basic (without saliency) and Proposed (with ``Saliency + Shift Handling + Part Detectors''), respectively. Green box: True positive. Best viewed in color.}
\label{fig:result2}
\end{figure}
\begin{figure*}[t]
\centering
\subfloat[]{
\label{Fig.sub.1}
\includegraphics[width = 0.12\textwidth]{figures/result_3_1.png}
\includegraphics[width = 0.12\textwidth]{figures/result_3_2.png}}
\subfloat[]{
\label{Fig.sub.2}
\includegraphics[width = 0.12\textwidth]{figures/result_6_1.png}
\includegraphics[width = 0.12\textwidth]{figures/result_6_2.png}}
\subfloat[]{
\label{Fig.sub.3}
\includegraphics[width = 0.12\textwidth]{figures/result_10_1.png}
\includegraphics[width = 0.12\textwidth]{figures/result_10_2.png}}
\subfloat[]{
\label{Fig.sub.4}
\includegraphics[width = 0.12\textwidth]{figures/result_4_1.png}
\includegraphics[width = 0.12\textwidth]{figures/result_4_2.png}}\\
\subfloat[]{
\label{Fig.sub.5}
\includegraphics[width = 0.12\textwidth]{figures/result_8_1.png}
\includegraphics[width = 0.12\textwidth]{figures/result_8_2.png}}\quad
\subfloat[]{
\label{Fig.sub.6}
\includegraphics[width = 0.17\textwidth]{figures/result_11_1.png}
\includegraphics[width = 0.17\textwidth]{figures/result_11_2.png}}\quad
\subfloat[]{
\label{Fig.sub.7}
\includegraphics[width = 0.17\textwidth]{figures/result_15_1.png}
\includegraphics[width = 0.17\textwidth]{figures/result_15_2.png}}
\caption{Successful detection results by the proposed method. The left and right images show the detection results of Basic (without saliency) and Proposed (with ``Saliency + Shift Handling''), respectively. Blue box: Basic detection result. Green box: Proposed detection result.}
\label{fig:result1}
\end{figure*}
We present some successful detection results obtained by adding saliency (Figs.~\ref{fig:result3} and~\ref{fig:result1}), shift handling (Fig.~\ref{fig:result1}), and the part-level detector (Fig.~\ref{fig:result2}). Saliency helps to distinguish background components similar to pedestrians. Without saliency, car parts (Figs.~\ref{Fig.sub.10} and \ref{Fig.sub.11}) or trees (Figs.~\ref{Fig.sub.12} and \ref{Fig.sub.13}) are easily misdetected as pedestrians because cars and trees have shapes similar to pedestrians. The proposed method improves the detection performance by separating one box containing two pedestrians (Fig.~\ref{Fig.sub.5}) and by detecting pedestrians blurred by motion (Fig.~\ref{Fig.sub.7}). Moreover, the proposed method recalls lost body parts by bounding box alignment, as shown in Figs.~\ref{Fig.sub.1}-\ref{Fig.sub.4}. The part-level detector is able to detect partially-occluded or low-resolution pedestrians whose upper body is visible (Fig.~\ref{Fig.sub.8}) or whose body parts are occluded (Fig.~\ref{Fig.sub.9}).
\begin{table}[t]
\caption{Performance comparison between different methods on Caltech dataset (MR: Miss rate). }
\begin{center}
\begin{tabular}[c]{|c|c|}
\hline
\textbf{Method} & MR(\%) \\
\hline
JointDeep~\cite{ouyang2013joint} & 39.3 \\
\hline
SDN~\cite{luo2014switchable} & 37.9 \\
\hline
CifarNet~\cite{hosang2015taking} & 28.4 \\
\hline
LDCF~\cite{nam2014local} & 24.8 \\
\hline
AlexNet~\cite{hosang2015taking} & 23.3 \\
\hline
TA-CNN~\cite{tian2015pedestrian} & 20.9 \\
\hline
Checkerboards+~\cite{zhang2015filtered} & 17.1 \\
\hline
SA-FasterRCNN~\cite{li2018scale} & 9.7 \\
\hline
Proposed & 12.4 \\
\hline
\end{tabular}
\end{center}
\label{table:result2}
\end{table}
\begin{table}[t]
\caption{Performance comparison between different methods on INRIA dataset (MR: Miss rate). }
\begin{center}
\begin{tabular}[c]{|c|c|}
\hline
\textbf{Method} & MR(\%) \\
\hline
InformedHaar~\cite{zhang2014informed} & 14.43 \\
\hline
LDCF~\cite{nam2014local} & 13.79 \\
\hline
Franken~\cite{mathias2013handling} & 13.70 \\
\hline
Roerei~\cite{benenson2013seeking} & 13.53 \\
\hline
SA-FasterRCNN~\cite{li2018scale} & 8.04 \\
\hline
RPN+BF~\cite{zhang2016faster} & 6.88 \\
\hline
Proposed & 10.34 \\
\hline
\end{tabular}
\end{center}
\label{table:result3}
\end{table}
\begin{table}[t]
\caption{Performance comparison between different methods on ETH dataset (MR: Miss rate). }
\begin{center}
\begin{tabular}[c]{|c|c|}
\hline
\textbf{Method} & MR(\%) \\
\hline
JointDeep~\cite{ouyang2013joint} & 45 \\
\hline
LDCF~\cite{nam2014local} & 45 \\
\hline
Franken~\cite{mathias2013handling} & 40 \\
\hline
Roerei~\cite{benenson2013seeking} & 43 \\
\hline
TA-CNN~\cite{tian2015pedestrian} & 35 \\
\hline
RPN+BF~\cite{zhang2016faster} & 30 \\
\hline
Proposed & 31.12 \\
\hline
\end{tabular}
\end{center}
\label{table:result4}
\end{table}
\subsection{Comparisons with Other Deep Models}
\label{subsec:compare}
\textbf{Caltech:} We compare the performance of the proposed method with those of other deep models: JointDeep~\cite{ouyang2013joint}, SDN~\cite{luo2014switchable}, LDCF~\cite{nam2014local}, TA-CNN~\cite{tian2015pedestrian}, Checkerboards+~\cite{zhang2015filtered}, and SA-FasterRCNN~\cite{li2018scale}. Table~\ref{table:result2} shows the performance comparison between different methods on the Caltech dataset. The proposed method achieves the second-best miss rate of 12.4\% based on saliency and bounding box alignment, which is slightly higher than that of SA-FasterRCNN~\cite{li2018scale}. \\
\textbf{INRIA:} We also conduct a performance comparison on the INRIA dataset with InformedHaar~\cite{zhang2014informed}, LDCF~\cite{nam2014local}, Franken~\cite{mathias2013handling}, Roerei~\cite{benenson2013seeking}, and SA-FasterRCNN~\cite{li2018scale}. Table~\ref{table:result3} shows their performance on the INRIA dataset. The INRIA dataset is people-centric rather than captured on real roads in complex environments, which makes it quite different from ETH or Caltech. It includes various types of data covering body parts, and is suitable for evaluating body part detection and pedestrian detection against complex backgrounds. We evaluate the performance of the proposed method with part-level detection. As shown in Table~\ref{table:result3}, the proposed method achieves a miss rate of 10.34\%, comparable to the state of the art on this partially-occluded dataset. \\
\textbf{ETH:} The ETH dataset is not a road environment, but it is worth assessing pedestrian detection performance on it because it contains a large number of pedestrians. The proposed method shows a relatively low miss rate of 31.12\%. We compare our detector with JointDeep~\cite{ouyang2013joint}, LDCF~\cite{nam2014local}, Franken~\cite{mathias2013handling}, Roerei~\cite{benenson2013seeking}, TA-CNN~\cite{tian2015pedestrian} and RPN+BF~\cite{zhang2016faster}. Table~\ref{table:result4} shows the performance comparison between them on the ETH dataset. As shown in the table, the proposed method achieves the second-best MR (RPN+BF is the best) and performance comparable to the state of the art.
\section{Conclusions}
\label{sec:Conclusion}
In this paper, we have proposed a part-level CNN for pedestrian detection using saliency and bounding box alignment. We have used saliency in the detection sub-network to remove false positives such as lamp posts and trees. We have utilized bounding box alignment in the alignment sub-network to recall lost body parts. We have generated confidence maps using FCN and CAM, and estimated accurate positions of pedestrians based on them. Experimental results demonstrate that the proposed method achieves performance competitive with state-of-the-art deep models for pedestrian detection on the Caltech, INRIA, and ETH datasets in terms of MR.
In future work, we will investigate pedestrian detection in low-light conditions such as night time with the help of near-infrared (NIR) data.
\bibliographystyle{IEEEtran}
\section{INTRODUCTION}
\label{sec:intro}
Electromagnetic quantum fluctuation phenomena lie at the heart of many processes where the interface between classical and quantum physics plays a prominent role. Among these one may cite areas of quantum optics, micro-cavity physics, micro-fluidics, photonic structures, early Universe cosmo-genesis, dark energy and cold-atom technology. In these systems one is often confronted with phenomena that interrelate classical continuum mechanics, classical electromagnetism, cavity quantum electrodynamics and fundamental issues relating to fluctuation-dissipation mechanisms \cite{Loudon,Milton} \!\!. In particular, dynamic (material) fluctuations induced by quantum fluctuations of the electromagnetic field have experimental consequences and offer an exciting opportunity to confront the limitations of basic theory with observable data.
In technology such fluctuations may manifest themselves as quantum induced stresses. For example Casimir stresses cannot be ignored as nano-structures develop ever smaller miniaturizations.
In micro-fluidics, physical processes can be confined to (deformable) micro-cavities that are guided by electromagnetic fields. Such micro-laboratories offer new possibilities to explore cavity QED experimentally as well as enhancing the control features of micro-fluidic design. Indeed it has even been suggested that chemical processes in such an environment may shed light on the mechanism that evolves inert matter into living cells.
The role of quantum fluctuations in determining the behavior of fabricated micro-structures is becoming increasingly important in a wide area of science and technology. Such fluctuations are also at the heart of many fundamental problems in physics ranging from the stability of fundamental constituents of matter to the lack of progress in unifying quantum field theory with gravitation. Many of the problems arise due to a lack of knowledge of basic interactions between fields and matter at some scale and the need to regularize current theories in order to make experimental predictions. For renormalizable theories such as QED in the vacuum these are remarkably accurate. However macroscopic predictions directly from QED that are affected by the presence of bulk macroscopic matter and material interfaces can be made with far less confidence since they depend critically on both geometric and constitutive modeling at macroscopic scales \cite{Inui1,Inui2} \!\!. In particular the role of quantum states of the electromagnetic field on the detailed behavior of isolated closed material micro-domains of polarisable matter remains an unsolved problem \cite{Nester} \!\!.
The quantization of the \EM field in the presence of media satisfying linear electromagnetic constitutive relations relies on a knowledge of a regular basis of eigen-solutions to a modified Helmholtz type equation that determines the electromagnetic fields. If the electromagnetic properties of the media are discontinuous across surfaces in space such solutions must satisfy jump conditions across them dictated by Maxwell's equations for the fields. The nature of the eigen-spectrum of the vacuum Helmholtz operator on time-parameterized differential forms on space is determined by the global topology of spatial domains with boundaries.
Mathematical procedures exist for analyzing such problems using the Hodge-Weyl-Friedrichs decomposition of forms on manifolds with boundary. In principle they can be used to construct a Hilbert space with a real basis of {\it transverse (divergence-less) } forms (i.e. in the kernel of the co-derivative $\delta$) satisfying Dirichlet and Neumann boundary conditions. The split of this space into mutually orthogonal subspaces is responsible for the classification of electromagnetic fields into TE, TM and TEM modes in certain domains.
For example, if the vacuum of 3-dimensional Euclidean space is partitioned into interior and exterior regions by a closed perfectly conducting boundary surface one may establish an ortho-normal basis of transverse $1$-forms $\PHI N(\rr),\,\PSI N(\rr)$ satisfying
appropriate boundary conditions in each region. For a hollow closed cavity these basis modes can be defined in terms of the eigen-$1$-forms $\Phi_N(\rr)$ and $\Psi_N(\rr)$ of the Hodge-de Rham operator (Laplacian) $\Delta$ on forms in space satisfying different boundary conditions:
\begin{eqnarray*}
{\Delta\Phi_N=\mu^2_N\Phi_N}, \quad {\Delta\Psi_N=\lambda^2_N\Psi_N},
\end{eqnarray*}\vspace{-0.7cm}
\begin{eqnarray*}
\PHI N \equiv \frac{1}{\mu^2_N} \delta\, d \Phi_N,\quad \PSI N \equiv\frac{1}{\lambda^2_N}\delta\,d \Psi_N, \quad \delta\PHI N=0,\quad \delta\PSI N=0
\end{eqnarray*}
where $\delta\equiv -\#\, d \, \#$ is the Euclidean exterior co-derivative on $1$-forms on a simply-connected domain ${\cal U}\subset {\bf R}^3 $, $d$ denotes the exterior derivative, $-\Delta=d\delta + \delta\, d$ and $N$ denotes a triplet of discrete numbers labelling the real non-zero eigen-values and associated eigen-forms. In a simply-connected domain such $1$-forms can be employed to represent the Coulomb gauge Maxwell vector-potential $1$-form ${\bm A}={\bm A}^{\sTE}+{\bm A}^{\sTM}$ where:
\begin{eqnarray*}
{\bm A}^{\sTE}(t,\rr)=\sum_N {\cal A}^{\sTE}_N(t)\, \PHI N(\rr), \qquad {\bm A}^{\sTM}(t,\rr)=\sum_N {\cal A}^{\sTM}_N(t)\, (\# d\PSI N(\rr))
\end{eqnarray*}
and $\#$ denotes the Euclidean Hodge map on forms in space.
The eigen-values $\mu_N,\lambda_N$ determine the normal-mode frequencies of electromagnetic fields in the cavity.
A similar analysis can be performed in the exterior region (which may be non-simply connected and involve TEM modes). The computation of the eigenvalues $\lambda_N, \,\mu_N$ is the key precursor to all quantum computations since they characterize the extrinsic domain geometry and determine the spectral content of the infinite number of quantum oscillators that represent the electromagnetic field.
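For orientation, in the simplest case of an empty ($\eps=\epsilon_0$) rectangular box with perfectly conducting walls and edge lengths $L_x,L_y,L_z$, separation of variables gives the standard textbook eigen-values in closed form,
\begin{eqnarray*}
\lambda_N = \pi \sqrt{ \frac{n_x^2}{L_x^2} + \frac{n_y^2}{L_y^2} + \frac{n_z^2}{L_z^2} }, \qquad N=(n_x,n_y,n_z),
\end{eqnarray*}
with non-negative integers $n_x,n_y,n_z$, at least two of which are non-zero, and corresponding normal-mode frequencies $\omega_N=\lambda_N/\sqrt{\epsilon_0\mu_0}$.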
However it is often difficult to determine analytically such bases for generic domains.
Furthermore if the partition involves electrically neutral domains containing linear media that may be inhomogeneous, anisotropic, magneto-electric or dispersive this program involves a modified Helmholtz classical boundary value problem \cite{LeonBook,LeonInhom,LeonInhom2} \!\!.
In this paper the modification necessitated by the presence of an inhomogeneous but non-dispersive and non-conducting medium contained in a closed rectangular 3-dimensional perfectly conducting cavity is explored. This is a precursor to a regularization scheme needed to extract finite quantum expectation values for stresses in the medium induced by quantum electromagnetic fluctuations.
\section{Inhomogeneous Dielectric Media}
\label{ch1}
Consider a smooth open region of space containing a stationary non-dispersive medium characterized by an inhomogeneous permittivity scalar $\eps(\rr)$ and constant permeability scalar $\mu=\mu_0$. Denoting time derivatives with an over-dot the classical source-free Maxwell system to be solved is:
\begin{alignat}{2}
d\,\ee &=-\dot\BB, \qquad &\frac{1}{\mu} d\, \bb &=\eps \dot \EE \\
d\,\BB &=0, \quad & d\,(\eps\EE)& =0 \\
d\,\eps\ne 0, \,&\, d\,\mu = 0, \qquad &\dot\eps=0, \,&\,\dot\mu=0
\end{alignat}
where $\ee,\,\,\bb$ denote time dependent electric and magnetic 1-forms respectively and $\EE=\#\ee, \,\, \BB=\#\bb$.
If the (time-dependent) spatial 1-form $\AAA$ and spatial 0-form $\phi$ belong to a class of gauge-equivalent potentials defining the electric and magnetic fields by
\beqa \ee=-\dot \AAA - d\,\phi, \qquad \bb=\# d\, \AAA \label{FIELDS} \eeqa
then the above system reduces to
\begin{eqnarray*}
\delta(\eps \dot\AAA) + \delta (\eps d\,\phi) &=& 0 \\
\delta d\,\AAA + \eps\mu(d\,\dot\phi + \ddot\AAA) &=& 0
\end{eqnarray*}
In a particular gauge with $$\delta\,(\eps\AAA)=0$$
the equation for the scalar potential decouples:
\begin{eqnarray*}
\delta d\,\AAA+ \eps\mu \ddot\AAA &=& -\eps\mu d\,\dot\phi \\
\delta (\eps d\,\phi) &=& 0 .
\end{eqnarray*}
Furthermore, we may set $d\,\phi=0$ for systems without free charge\cite{Glauber} \!\!. Hence in this gauge the local Maxwell system above is solved in terms of spatial 1-forms satisfying $\delta (\eps\AAA)=0$ and the equation:
\beqa \delta d\,\AAA+\eps\mu\ddot\AAA=0. \label{HELM} \eeqa
This gives rise to a modified Helmholtz equation for time harmonic fields.
Across any non-conducting interface where the dielectric scalar is discontinuous one has two conditions on $\ee$ and $\bb$ restricted to the interface. At each point on the interface the jump in the normal component of $\bb$ and the tangential component of $\ee$ must vanish. Furthermore if there are no real charges or electric currents on the interface the jump in the normal component of $\dd$ where $\dd=\eps \ee$ and the tangential component of $\hh$ where $\hh=\mu^{-1}\bb$ must also vanish on the interface. If the interface is perfectly conducting one assumes that all fields vanish on the side of the interface that is free of the material medium when calculating the jump. It is worth noting that in a bounded inhomogeneous medium where $\eps$ is a continuous function of position one cannot exploit translational invariance in space and spatial Fourier transforms\cite{Brevik} to simply express normal modes in terms of eigen-forms of $ {\bm R}^3 $ spatial translation operators.
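In conventional vector notation these interface conditions take the familiar form
\begin{eqnarray*}
\hat{n}\cdot[\,\mathbf{B}\,]=0,\qquad \hat{n}\times[\,\mathbf{E}\,]=0,\qquad
\hat{n}\cdot[\,\mathbf{D}\,]=0,\qquad \hat{n}\times[\,\mathbf{H}\,]=0,
\end{eqnarray*}
where $\hat{n}$ is a unit normal to the interface, $[\,\cdot\,]$ denotes the jump across it, and the last two conditions assume the interface carries no free surface charge or current.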
If a bounded domain $U \subset {\bm R}^3 $ contains a dielectric with a piecewise inhomogeneous permittivity one writes the general real 1-form solution to (\ref{HELM}) on $U$ as
\beqa \AAA(t,\rr)=\sum_N \calA_N(t,\rr) \label{GEN}\eeqa
where $N$ denotes a triple of discrete labels. Suppose the dielectric is composed of $M$ sub-domains where the permittivity is $\eps_m(\rr)$ in the $m$-th sub-domain. Thus
\begin{eqnarray*}
\eps(\rr)=\sum_{m=1}^M \eps_m(\rr) {\cal Y}_m(\rr)
\end{eqnarray*}
where ${\cal Y}_m(\rr)$ is unity in the subdomain $U_{m}\subset U$ and zero elsewhere. For stationary electrically neutral dielectrics in domains $U$ bounded by conducting surfaces the electromagnetic jump conditions at interfaces above yield a homogeneous system of equations that determine a collection of eigen-modes (up to normalization) and the associated eigen-frequencies $\omega_N$. The number of distinct eigen-spaces and the degeneracies of the associated eigen-frequencies depends on the rank and symmetry of the homogeneous system which in turn reflects how the boundary geometry and boundary conditions affect the nature of the global topology of the domain. For a rank $S$ system the eigen-modes may be written:
\beqa {\cal A}_N(t,\rr)=\sum_{s=1}^S\sum_{m=1}^M \left(
{\cal A}^{(+),s,m}_N(\rr){\cal Y}_m(\rr)\, e^{-i \omega_N^s t} + {\cal A}^{(-),s,m}_N(\rr){\cal Y}_m(\rr) \,e^{i \omega_N^s t}\right)
\eeqa
where the $ \{{\cal A}^{(+),s,m}_N(\rr)\}= \{{\cal A}^{(-),s,m }_N(\rr)\}^{*} $ constitute a basis of solutions to (\ref{HELM}) subject to $ \delta (\eps{\cal A}^{(+),s,m }_N ) =0$ and the above jump conditions in the domain $U$.
\section{Electromagnetic Energy and Stresses}
\label{ch2}
Classical forces (and torques) transmitted by electromagnetic fields through the vacuum can be encoded into a covariant stress-energy-momentum tensor. The modification of such a tensor for fields in a medium has long been a subject of debate and experimental investigation. This debate has continued when the fields become operators subject to quantum laws. In this article we adopt a symmetrization of the stress-energy-momentum tensor for media at rest advocated by Minkowski. In terms of electromagnetic fields in any sub-domain $U_m$ the instantaneous classical electromagnetic energy is:
\beqa {\cal E}_m=\int_{U_m} \frac{1}{2}\left( \ee \wedge \# \dd + \bb \wedge \# \hh \right) \label{ENERGY}.\eeqa
The component of the instantaneous integrated electromagnetic stress \cite{TuckerWalton} \!\!, in a direction defined by a unit spacelike Killing vector field $K$ that generates spatial translations, transmitted across the side of any portion $\Sigma_m$ of an oriented surface in $U_m$ adjacent to the fields in the following integrand, is the force component:
\begin{eqnarray*}\label{FORCE}
{\cal F}_{K,m}=\frac{1}{2}\int_{\Sigma_m}\left( i_K\#\hh \ww \bb - \ee \ww i_K \# \dd - \# \bb \ww \hh(K) - \dd(K) \ww \# \ee\right)
\end{eqnarray*}
\subsection{Computation of Induced Dielectric Stresses by Electromagnetic Mode Fluctuations in a Cavity}
\label{ch3}
The above generalities will now be illustrated for a system comprised of a simply-connected inhomogeneous dielectric medium bounded by a perfectly conducting stationary, inextensible rectangular box with sides of length $L_x,L_y,L_z$. Finding exact analytic solutions to (\ref{HELM}) is difficult in general. However, if $\eps(x,y,z)=\epsilon_0 \,\beta \exp(\alpha z/L_z)$ in Cartesian coordinates with real dimensionless positive constants $\beta,\alpha$, then general solutions satisfying the above boundary conditions can be expressed in terms of Bessel and trigonometric functions. Since the interior $U$ of the box is simply connected the boundary conditions yield $S=2$ and a decomposition into orthogonal $TE$ and $TM$ modes with respect to the $z$-axis is possible. With opposite faces of the box at $z=0$ and $z=L_z$ respectively the $TE$ mode structure is given in the above gauge by:
\begin{eqnarray*}
{\cal A}_N^\TE(\rr)=\frac{ {\calN}_N^{\sTE} }{\epsilon_0}\Phi_N^{\sTE}\!\[\eta^{\sTE}_N(z)\] \left\{ \df k_x \sin(k_x x)\cos(k_y y) d\,y -k_y \cos(k_x x) \sin(k_y y) d\, x \right\}
\end{eqnarray*}
where $k_x=\kx, k_y=\ky$, $N$ stands for the triple $(\NN)$ with $n_x,n_y$ positive integers (including zero) and ${\calN}^{\sTE}_N$
denotes a normalization constant. Furthermore
\begin{eqnarray*}
\Phi_N^{\sTE}\[\eta^{\sTE}_N(z)\] &=& J_{\nu_N^{\sTE} }\!\[ \eta^{\sTE}_N(z)\] + \zeta_N^{\sTE} Y_{\nu_N^{\sTE} }\!\[ \eta^{\sTE}_N(z) \]
\end{eqnarray*}
where
\begin{eqnarray*}
\eta_N^{\sTE}(z)= \frac{ 2 L_z \omega_N^{\sTE} \sqrt{\beta} \exp( \frac{ \alpha z} {2 L_z } ) } { \alpha c_0 }, \quad \nu_N^{\sTE}= \frac{2 L_z } { \alpha}\sqrt{ k_x^2 +k_y^2 }, \quad \zeta_N^{\sTE}= -\frac{J_{\nu_N^{\sTE} }\!\[ \eta^{\sTE}_N(0)\] } { Y_{\nu_N^{\sTE} }\!\[ \eta^{\sTE}_N(0)\] }
\end{eqnarray*}
with $c_{0}^{2}=\frac{1}{\epsilon_{0}\mu_{0}}$ in these expressions and the $\omega_N^{\sTE}$ are the values of the $p$-th roots of the $TE$-mode spectrum generating equation:
\begin{eqnarray*}
J_{ \nu_N^{\sTE} }\!\[\eta^{\sTE}_N(0)\] Y_{ \nu_N^{\sTE}} \!\[ \eta^{\sTE}_N(L_z)\] - J_{ \nu_N^{\sTE} }\!\[ \eta^{\sTE}_N(L_z)\] Y_{ \nu_N^{\sTE} }\!\[ \eta^{\sTE}_N(0)\] &=& 0.
\end{eqnarray*}
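The roots $\omega_N^{\sTE}$ of this transcendental spectrum generating equation must be located numerically. A minimal sketch, assuming SciPy's Bessel functions and a bracketing root finder, is given below; the parameter values are arbitrary illustrations, not those of any cavity considered in this work.

```python
import numpy as np
from scipy.special import jv, yv
from scipy.optimize import brentq

def te_spectrum_roots(nu, eta0_of_w, etaL_of_w, w_max, n_scan=2000):
    """Roots w of J_nu(eta0(w)) Y_nu(etaL(w)) - J_nu(etaL(w)) Y_nu(eta0(w)) = 0,
    found by scanning for sign changes and refining each bracket with brentq."""
    def f(w):
        e0, eL = eta0_of_w(w), etaL_of_w(w)
        return jv(nu, e0) * yv(nu, eL) - jv(nu, eL) * yv(nu, e0)
    ws = np.linspace(1e-6, w_max, n_scan)
    vals = f(ws)
    roots = []
    for a, b, fa, fb in zip(ws[:-1], ws[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:                    # a sign change brackets a root
            roots.append(brentq(f, a, b))
    return roots

# illustrative parameters: alpha, beta, L_z, c0 all set to simple values
alpha, beta, L_z, c0 = 1.0, 1.0, 1.0, 1.0
eta = lambda z: lambda w: 2*L_z*w*np.sqrt(beta)*np.exp(alpha*z/(2*L_z))/(alpha*c0)
roots = te_spectrum_roots(nu=2.0, eta0_of_w=eta(0.0), etaL_of_w=eta(L_z), w_max=20.0)
```

The same scan-and-bracket scheme applies to the $TM$ spectrum generating equation below with $f$ replaced accordingly.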
The $TM$ mode structure is given by:
\begin{eqnarray*}
{\cal A}_N^\TM(\rr) &=& \frac{ \calN_N^{\sTM} \omega_N^{\sTM} }{\epsilon_0 c_0} \left\{ \left( \Phi_N^{\prime \sTM}\!\[\eta^{\sTM}_N(z)\]
+ \frac{ \Phi_N^{ \sTM}\!\[\eta^{\sTM}_N(z)\] } {\eta^{\sTM}_N(z) } \right)
\left( \df k_x \cos(k_x x)\sin(k_y y) d\,x +k_y \sin(k_x x) \cos(k_y y) d\, y\right) \right. \\
&& \hspace{2.5cm} \left. + \frac{ \alpha}{2L_z \eta_N^{\sTM}} \left( \df (\nu_N^{\sTM})^2-1 \right) \Phi_N^{\sTM}\!\[ \eta^{\sTM}_N(z)\] \sin(k_x x) \sin(k_y y)\,d\,z \right\}
\end{eqnarray*}
with normalization constant $\calN^{\sTM}_N$,
\begin{eqnarray*}
\Phi_N^{\sTM}\[ \eta^{\sTM}_N(z) \] &=& J_{ \nu_N^{\sTM} }\!\[\eta^{\sTM}_N(z)\] + \zeta_N^{\sTM} Y_{ \nu_N^{\sTM} }\!\[ \eta^{\sTM}_N(z) \]
\end{eqnarray*}
where
\begin{eqnarray*}
\eta_N^{\sTM}(z)= \frac{ 2L_z \omega_N^{\sTM} \sqrt{\beta} \exp( \frac{ \alpha z}{2 L_z } ) }{ \alpha c_0 }, \quad (\nu_N^{\sTM})^2= \frac{4 L_z^2 } { \alpha^2}{( k_x^2 +k_y^2) } + 1, \quad
\zeta_N^{\sTM}=- \frac{ \eta^{\sTM}_N(0) J_{ \nu_N^{\sTM} }^{\prime}\!\[ \eta^{\sTM}_N(0)\] + J_{ \nu_N^{\sTM} }\!\[ \eta^{\sTM}_N(0) \]}
{\eta^{\sTM}_N(0) Y_{ \nu_N^{\sTM} }^{\prime}\!\[ \eta^{\sTM}_N(0)\] + Y_{ \nu_N^{\sTM} }\!\[ \eta^{\sTM}_N(0) \] }.
\end{eqnarray*}
In this case the $\omega_N^{\sTM}$ are the values of the $p$-th roots of the $TM$-mode spectrum generating equation which may be written in the form:
\begin{eqnarray*}
\widetilde J_{ \nu_N^{\sTM} }\!\[ \eta^{\sTM}_N(0)\]\widetilde Y_{ \nu_N^{\sTM} }\!\[ \eta^{\sTM}_N(L_z)\] - \widetilde J_{ \nu_N^{\sTM} }\!\[ \eta^{\sTM}_N(L_z)\] \widetilde Y_{ \nu_N^{\sTM} }\!\[ \eta^{\sTM}_N(0)\] &=& 0
\end{eqnarray*}
where $\widetilde f(\eta)\equiv \eta f^\prime(\eta) + f(\eta) $ for any $f(\eta)$.
These expressions enable one to calculate the field modes for $\ee,\bb,\hh,\dd$ using (\ref{FIELDS}) and the constitutive relations.
The quantum description can be constructed by generalizing the methods used in vacuum cavity QED. A Fock space of quantised modes is introduced by introducing the annihilation and creation operators $ \aop_N^s $ and $ {\aop_{N^\prime}^{s^\prime}}^{\dagger} $ satisfying the commutator relations:
$$[ \aop_N^s, {\aop_{N^\prime}^{s^\prime}}^{\dagger} ]=\delta_{ N { N^\prime} }\, \delta ^{ s {s^\prime} }$$
for $s\in\{TE,TM\}$.
The Fock space vacuum state $\Lambda_0 $ is annihilated by all $ \aop_N^s $.
In the above gauge stationary quantum modes in a closed cavity are described by the Hermitian operator
\begin{eqnarray*}
\widehat{\cal A}_N(t,\rr)=\sum_{s\in\{TE,TM\}}\sum_{m=1}^M \left( {\aop_{N}^{s}}^{\dagger}
{\cal A}^{(+),s,m}_N(\rr){\cal Y}_m(\rr)\, e^{-i \omega_N^s t} + \aop_N^s {\cal A}^{(-),s,m}_N(\rr){\cal Y}_m(\rr) \,e^{i \omega_N^s t}\right).
\end{eqnarray*}
The quantum field modes for the Hermitian operators $\widehat\ee,\widehat\bb,\widehat\hh,\widehat\dd$ follow from (\ref{FIELDS}) but with this mode operator and the corresponding operator constitutive relations. Replacing the classical fields for the dielectric filled cavity in the classical expression for ${\cal E}$ by such operators yields the quantum hamiltonian $\widehat{\cal E}$ for quantum fields in the cavity. Its expectation value $ E_{\Lambda_0}[ \widehat{\cal E} ] $ in the Fock space vacuum state requires renormalization \cite{Santos,BordagBook} \!\!. This is effected by subtracting from an infinite mode sum an expectation value of the energy of a system with a homogeneous medium. To effect this subtraction both mode summations generally require a regularization scheme for their definition. Thus for the system with conducting boundaries and a discrete spectrum one defines:
\begin{eqnarray*}
\langle {\cal E}\rangle_{\text{reg}} &\equiv& \frac{\hbar }{2} \sum_{s \in \{TE,TM\}}\sum_N \omega_N^s \, \psi(\kappa,\omega_N^s)
\end{eqnarray*}
for some suitable smooth function satisfying $\psi(0,\omega_N^s)=1$ that renders the summations meaningful.
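As a toy illustration of such a regulator (with a fictitious spectrum $\omega_n=n$, not the cavity spectrum above), the exponential cutoff $\psi(\kappa,\omega)=e^{-\kappa\omega}$ satisfies $\psi(0,\omega)=1$ and renders the mode sum finite for every $\kappa>0$; in this toy case the regulated sum even has the closed form $\sum_{n\geq 1} n\, e^{-\kappa n} = 1/(4\sinh^{2}(\kappa/2))$, which diverges as $\kappa\to 0$:

```python
import math

def regulated_sum(kappa, n_terms=200_000):
    """Direct evaluation of sum_{n>=1} n * exp(-kappa * n) for a toy spectrum."""
    return sum(n * math.exp(-kappa * n) for n in range(1, n_terms + 1))

def closed_form(kappa):
    """Closed form of the same regulated sum: 1 / (4 sinh^2(kappa/2))."""
    return 1.0 / (4.0 * math.sinh(kappa / 2.0) ** 2)
```

For instance `closed_form(0.1)` is about 99.92 while `closed_form(0.01)` is about 9999.9, exhibiting the divergence as the regulator is removed.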
Each cavity mode labeled by $N,s$, with eigen-frequency $\omega_N^s$, contributes the factor $ \frac{\hbar}{2}\, \omega_N^s$ to the vacuum expectation of the (regularized) energy.
The well-defined condition
\begin{eqnarray*}\label{QENERGY}
{\cal E}_N^s\equiv \int_{U} \frac{1}{2}\left( \df \ee_N^s \wedge \# \dd_N^s + \bb_N^s \wedge \# \hh_N^s \right) \;\;=\;\; \frac{\hbar}{2} \;\omega_N^s
\end{eqnarray*}
fixes the normalizations ${\cal N }_N^s$ of the mode amplitudes ${\calA}_N^{(+), s}$ and their conjugates:
\begin{eqnarray*}
({\cal N}_N^{\sTE})^2 &=& \frac{16 \hbar \epsilon_0 }{\alpha^{2}\sqrt{\beta} c_{0}({\nu_N^{\sTE}})^2}\frac{L_z^2}{L_x L_y}\frac{1}{I^{\sTE}_{N}\Omega^{\sTE}_{N}}
\end{eqnarray*}
where, with $\Omega^{\sTE}_N=\frac{2 \omega_N^{\sTE} L_z \sqrt{\beta} } { \alpha\, c_0} $,
\begin{eqnarray*}
I^{\sTE}_{N} &=& e^\alpha \left( {\Phi^{\prime}}^{\sTE}_N\!\[\Omega^{\sTE}_N e^{\frac{\alpha}{2}}\] \right)^2 - \left( {\Phi^{\prime}}^{\sTE}_N\!\[ \Omega^{\sTE}_N\] \right)^2
\end{eqnarray*}
and
\begin{eqnarray*}
({\cal N}_N^{\sTM})^2 &=& \frac{64\hbar \epsilon_{0}\sqrt{\beta}}{\alpha^{4}c_{0}[({\nu_N^{\sTM}})^2-1]} \frac{ L_z^4}{L_x L_y} \frac{1}{I^{\sTM}_{N}\Omega_N^{\sTM}}
\end{eqnarray*}
where, with $\Omega^{\sTM}_{N}=\frac{2 \omega_N^{\sTM} L_z \sqrt{\beta} } { \alpha\, c_0} $,
\begin{eqnarray*}
I^{\sTM}_{N} &=& \left(\df 1 - (\nu^{\sTM}_{N})^{2} + (\Omega^{\sTM}_{N})^{2}e^{\alpha} \right) \left( \df \Phi^{\sTM}_{N}\!\[\Omega^{\sTM}_{N}e^{\frac{\alpha}{2}}\]\right)^{2} - \left(\df 1 - (\nu^{\sTM}_{N})^{2} + (\Omega^{\sTM}_{N})^{2} \right)\left(\df \Phi^{\sTM}_{N}\!\[\Omega^{\sTM}_{N}\]\right)^{2}.
\end{eqnarray*}
The mode contributions to the vacuum expectation values of the induced electromagnetic stress field in the dielectric can now be calculated from the Hermitian operator-valued stress 2-form:
\beqa \widehat{\cal \sigma}_{K}=\frac{1}{2}\left( i_K\#\widehat\hh \ww \widehat\bb - \widehat\ee \ww i_K \# \widehat\dd - \# \widehat\bb \ww \widehat\hh(K) -\widehat \dd(K) \ww \# \widehat\ee\right). \label{QFORCE} \eeqa
The quantum expectation value of the regularized force component in the Fock vacuum state $\Lambda_0$ acting perpendicular to a surface $\Sigma_{z_{0}}$ of the box at $z=z_{0}$ is
\begin{eqnarray*}
\langle {\cal F}_{z_{0}} \rangle_{\text{reg}} &\equiv& \left. E_{\Lambda_0}\left[ \int_{\Sigma_{z_{0}}}\, \widehat \sigma_{ \frac{ \partial} { \partial z}} \right]_{\text{reg}} \right|_{z=z_{0}}.
\end{eqnarray*}
In a box containing a homogeneous permittivity, the expectation values of the force at the end faces of the box at $z=0,z=L_z$ are equal. This is not the case for an inhomogeneous dielectric in general. Indeed one finds, after some calculation, the difference between the force expectations
\begin{eqnarray*}
\langle \Delta {\cal F} \rangle_{\text{reg}} \;\;\equiv\;\; \langle {\cal F}_{0} \rangle_{\text{reg}} - \langle {\cal F}_{L_z} \rangle_{\text{reg}} &=& \frac{ \hbar \alpha} { 4 L_z} \sum_{s \in \{ TE,TM\}} \sum_N \omega_N^{s} \, \psi(\kappa,\omega_N^s)
\end{eqnarray*}
This is a surprisingly simple result given the complexity of the mode structures involved. To effect a renormalization of this result requires a computation which will not be reported here.
\section{Conclusion}
\label{ch4}
It is expected that a non-zero $ \langle \Delta {\cal F} \rangle $ will survive renormalization when the regulator is removed and this indicates that any confined inhomogeneous material dielectric must sustain stresses induced by electromagnetic quantum fluctuations if the confining domain is rigid. If the medium remains static such stresses induce mechanical (elastic) stresses in the dielectric to maintain equilibrium. Unlike similarly induced classical stresses by the classical gravitational field in the laboratory (that vary with the orientation of the dielectric) the quantum induced electromagnetic stresses are permanent. In principle they could be detected experimentally by noting the variation of the induced stress field within the dielectric with variations of the permittivity inhomogeneities. Such variations might be detected using photo-elastic effects on the polarization of light passing through the medium.
\acknowledgements
The authors are grateful to STFC and the EPSRC for financial support for this research which is part of the Alpha-X collaboration.
\nocite{*}
\section{Introduction}
The Epoch of Reionisation (EoR) is an essential milestone in the formation and evolution of cosmic structure.
The first luminous objects produced in collapsed dark matter halos
in the early universe ($z \sim 20$) started to reionise the intergalactic medium (IGM) which
was neutral after recombination. Currently we have only
a few observational probes of the EoR.
The first one is the Ly-$\alpha$ absorption measurement
towards high redshift QSOs which probes the fraction of neutral hydrogen along the line of sight
\citep{2006AJ....132..117F},
and the second one is the large-scale CMB polarisation \citep{2010arXiv1001.4538K}.
These observations indicate that the
IGM was fully ionised by redshift $z \sim 6$.
Recent HST observations have found large samples of Lyman break
galaxies (LBGs) at high redshifts,
$7\lesssim z \lesssim 10$ \citep{2010arXiv1006.4360B}. \citet{2010arXiv1006.4360B} have studied reionisation
with a galaxy model based on these data. Their results suggested that, in addition to such high redshift LBGs, other
reionisation sources, for example, faint galaxies and population III
stars, are required to match the optical depth of the WMAP seven-year data.
However, current observational data are insufficient to study the details of the EoR.
In recent years, several observations of signals from the EoR have been suggested
to obtain further information about the EoR,
for example fluctuations of the neutral hydrogen 21~cm line
(\citealt{madau-meiksin-rees-1997}, for a review see \citealt{2006PhR...433..181F}),
small-scale CMB anisotropies due to the kinetic Sunyaev-Zel'dovich (kSZ; \citealt{1980MNRAS.190..413S,1986ApJ...306L..51O,1987ApJ...322..597V},
for a review see \citealt{2008RPPh...71f6902A}), and
Ly-$\alpha$ damping of high redshift QSOs and gamma ray bursts
\citep{1998ApJ...501...15M,2004ApJ...601...64B}.
While the latter can provide us with information about
the end of the EoR,
the former two are expected to probe the IGM during the EoR. The
LOFAR\footnote{http://www.lofar.org}, MWA\footnote{http://www.mwatelescope.org/}
and SKA\footnote{http://www.skatelescope.org} are being installed or designed for the measurement of
21~cm line fluctuations, while telescopes such as
ACT\footnote{http://www.physics.princeton.edu/act/}, SPT\footnote{http://pole.uchicago.edu/} and OLIMPO \citep{2008MmSAI..79..887M}
will be used to detect and measure the kSZ signal.
Although both auto-correlations of 21~cm lines and CMB anisotropies during the EoR are good probes of the EoR,
the cross-correlation between 21~cm fluctuations and the CMB anisotropies
created during the EoR is also expected to be useful to study the history of the EoR.
The cross-correlation has the potential to provide information beyond that contained in
the respective auto-correlations.
Moreover, the cross-correlation is less affected by the statistical errors caused by
foregrounds and systematic effects than the auto-correlations.
There are several analytical or numerical works about the cross-correlation
between CMB and 21 cm fluctuations during the EoR.
\citet{2006ApJ...647..840A} and \citet{2008MNRAS.384..291A}
computed the expected signal on large scales ($\ell \sim 100$)
by analytically calculating the cross-correlation between 21 cm fluctuations
and the CMB Doppler anisotropies in the linear regime of the cosmological perturbations.
\citet{2010MNRAS.402.2617T} studied the detectability of these signals by
LOFAR, MWA and SKA. On small scales ($\ell >1000$), because
the dominant contributions of CMB anisotropies come
from the kSZ effect due to the patchiness of the ionised medium,
\citet{2004PhRvD..70f3509C} has partially studied
the cross-correlation with kSZ anisotropies
and the second order 21 cm fluctuations in a simple reionisation model.
He has also investigated the higher order
cross-correlation by calculating the bispectrum.
\citet{2007MNRAS.377..168S} have also studied the
21~cm cross-correlation with the CMB SZ effect
which is caused by hot electrons in the first
supernovae remnants during the EoR.
Since reionisation is a complex physical process, numerical simulations play an important role
in the studies of the 21~cm cross-correlation
with CMB temperature anisotropies. Numerical works
by \citet{2010MNRAS.tmp...65J} and \citet{2005MNRAS.360.1063S}
focus especially on the small-scale cross-correlation due to patchy reionisation.
Additionally, the 21~cm cross-correlation with CMB polarisation has been calculated
by \citet{2008MNRAS.389..469T} and \citet{2009PhRvD..79j7302D}.
In this paper, we study the cross-correlation
between kSZ anisotropies and the second order 21 cm fluctuations during the EoR
analytically.
\citet{2004PhRvD..70f3509C} has studied this cross-correlation
in the simple analytical reionisation model where the fluctuations of the ionisation fraction
are linearly related to the density fluctuations. He concluded that the cross-correlation cannot appear
due to the geometric cancellation occurring between
the velocity and the density fluctuations.
However, the kSZ effect depends strongly on the evolution of the ionisation bubbles,
and numerical studies of the cross-correlation between 21~cm and kSZ anisotropies
also show that patchy reionisation generates signals
on small scales
\citep{2005MNRAS.360.1063S}.
Therefore, we revisit this issue with the analytical model
of \citet{2005ApJ...630..643M} which produces a reionisation
history similar to that found in recent numerical simulations.
The outline of our paper is the following.
In Sec.~II, we give the analytical form of the second order
cross-correlation between kSZ anisotropies and
21~cm fluctuations.
In Sec.~III, we give a short description of the analytical reionisation model
based on \citet{2005ApJ...630..643M}.
In Sec.~IV, we show the angular power spectrum of the second order cross-correlation
and we discuss the detectability in the case of the SKA sensitivity.
Section V is devoted to the conclusions.
Throughout the paper, we use the concordance cosmological parameters for a
flat cosmological model, i.e. $h=0.73 \ (H_0=h
\times 100 {\rm ~km/s / Mpc})$, $T_0 = 2.725$K, $\Omega _{\rm b}
=0.05$, $\Omega_{\rm m} =0.27$ and $\sigma_8=0.9$.
\section{The second order cross-correlation}
In this section, we calculate the angular power spectrum
of the cross-correlation
between 21~cm fluctuations and kSZ anisotropies
during the EoR at the second order in the fluctuations.
For simplicity, we assume that both fluctuation fields are
statistically isotropic. Under this assumption,
the angular power spectrum of the cross-correlation
$C_\ell$ is given by
\begin{equation}
\langle a_{\ell_1 m_1} ^{* {\rm kSZ} } a_{\ell_2 m_2}^{21}\rangle
=\delta_{\ell_1 \ell_2} ^D \delta_{m_1 m_2} ^D
C_{\ell_1},
\label{eq:defcrosscl}
\end{equation}
where $a_{\ell_1 m_1}^{{\rm kSZ}}$ and $a_{\ell_2 m_2}^{21}$
are the multipole components of the CMB temperature anisotropies
and 21~cm fluctuations during the EoR.
\subsection{kSZ CMB anisotropies}
During the EoR, secondary CMB temperature anisotropies are caused by
the kinetic SZ effect. Their expression is
\begin{equation}
T_{\rm kSZ} (\hat {\bm n})=-T_{\rm cmb} \int^{\eta_0} ~d\eta
~ g(\eta) \hat {\bm n} \cdot {\bm v} (\eta, \hat {\bm n}),
\label{eq:ksz}
\end{equation}
where ${\bm v}$ is the baryon velocity field,
$g(\eta)$ is the visibility function at
the conformal time $\eta$, and the present value of the conformal
time is $\eta_0$.
The visibility function is given by
$g(\eta) = \dot \tau e^{-\tau}$ where
$\tau$ is the optical depth of Thomson scattering
from $\eta$ to today and
$\dot \tau = \sigma_T x_i n_{\rm H}$
with $\sigma_T$ the cross section of Thomson scattering,
$x_i$ the ionised fraction, and $n_{\rm H}$ the total
hydrogen number density (we ignore the ionisation of helium).
We can decompose $x_i$ and $n_{\rm H}$
into the background and fluctuation values,
\begin{equation}
n_{H}=\bar n_{H} (1 +\delta), \qquad
x_i=\bar x_i (1+\delta_x),
\label{eq:flucdeco}
\end{equation}
where the symbols with a bar represent the background values.
In Eq.~(\ref{eq:flucdeco}), since we assume that the hydrogen density
follows the dark matter density on scales much bigger than the baryonic Jeans length,
$\delta$ is the total matter density fluctuation field.
Substituting Eq.~(\ref{eq:flucdeco}) into Eq.~(\ref{eq:ksz}),
we obtain
\begin{equation}
T_{\rm kSZ} (\hat {\bm n})=-T_{\rm cmb} \int ~d\eta ~ {\bar g} (\eta)
\hat {\bm n} \cdot {\bm v} (\eta, \hat {\bm n})
~(1+\delta (\eta, \hat {\bm n})+ \delta_x (\eta, \hat {\bm n})
+\delta (\eta, \hat {\bm n}) \delta_x (\eta, \hat {\bm n})).
\label{eq:ksz2}
\end{equation}
We focus on the second order part in Eq.~(\ref{eq:ksz2}),
which can be written in terms of the Fourier components of the fluctuations as
\begin{equation}
\delta T_{\rm kSZ} (\hat {\bm n})= -iT_{\rm cmb} \int d\eta \int {d^3 {\bm k} \over (2 \pi)^3}
\int{d^3 {\bm k'} \over (2 \pi)^3} {\bar g} (\eta)
\frac{\hat {\bm n} \cdot ({\bm k}-{\bm k}')}{|{\bm k}-{\bm k}'|^2}
\dot \delta (\eta,{\bm k}-{\bm k}')
(\delta (\eta, {\bm k}')+ \delta_x (\eta, {\bm k}'))
\exp[i (\eta_0-\eta) (\hat{\bm n} \cdot {\bm k})],
\label{eq:fourierksz}
\end{equation}
where we use the relation ${\bm r}= (\eta_0-\eta) \hat {\bm n }$,
and we relate the velocity to $\delta$ by the continuity equation in
the cosmological linear perturbation theory
\begin{equation}
{\bm v} = i \frac{\bm k}{k^2} \dot \delta (\eta, k),
\end{equation}
where the dot represents the derivative with respect to $\eta$.
Our final aim is to obtain the angular power spectrum of the cross-correlation.
Therefore, we consider the spherical harmonic decomposition of Eq.~(\ref{eq:fourierksz}),
$a_{\ell m} ^{\rm kSZ}=\int d \hat {\bm n}\, \delta T_{\rm kSZ} (\hat {\bm n})\, Y_{\ell} ^{m*}(\hat {\bm n})$.
The spherical harmonic coefficients of the kSZ are given by
\begin{eqnarray}
a_{\ell m}^{\rm kSZ}
&=&\sum_{\substack{\ell' m' \ell'' m'' \\ \ell''' m''' m''''}}
\int d\eta
\int {d^3 {\bm k}_1 \over (2 \pi)^3}
\int {d^3 {\bm k}_2 \over (2 \pi)^3}
\nonumber \\
&&\quad \times
A_{\ell \ell' \ell'' \ell''' 1 } ^{m m' m'' m''' m''''} (\eta)
\dot \delta (\eta,{\bm k}_1)
(\delta (\eta, {\bm k}_2)+\delta_{x}(\eta, {\bm k}_2) )
{j_{\ell'}(k_1 r) \over k_1} j_{\ell''}(k_2 r)
Y^{m'''}_{\ell'''}( \hat {\bm k}_1)
Y_{\ell''}^{m''}( \hat {\bm k}_2),
\label{eq:multi-ksz}
\end{eqnarray}
where we replaced ${\bm k}$ and ${\bm k}'$ by
${\bm k}_1 \equiv {\bm k} - {\bm k}'$ and ${\bm k}_2 \equiv {\bm k}'$, and
\begin{equation}
A_{\ell \ell' \ell'' \ell''' 1 } ^{m m' m'' m''' m''''}
=
-i
(-1)^{m+m'-m''+m''''} {64 \pi^3 \over 3 } i^{\ell' + \ell''}
\sqrt{ 3(2 \ell' +1) \over 4 \pi (2 \ell'''' +1)}
C^{\ell' \ell''' 1} _{-m' -m''' m''''}
C^{\ell' \ell''' 1} _{000}
M^{-m m' -m'' m''''}_{\ell \ell' \ell'' 1} T_{\rm cmb} \bar g(\eta).
\end{equation}
Here, $C^{\ell_1 \ell_2 \ell} _{m_1 m_2 m}$ are the Clebsch-Gordan
coefficients and $M^{m_1 m_2 m_3 m_4 } _{\ell_1 \ell_2 \ell_3 \ell_4}$
are the integrals of quadruple spherical harmonics,
\begin{eqnarray}
M^{m_1 m_2 m_3 m_4 } _{\ell_1 \ell_2 \ell_3 \ell_4}
& = &
\int d\hat n~
Y^{m_1 }_{\ell_1} (\hat n) Y^{m_2 }_{\ell_2} (\hat n) Y^{m_3 }_{\ell_3} (\hat n) Y^{m_4 }_{\ell_4} (\hat n)
\nonumber \\
& = &
(-1)^{m_1}
\sum_{\ell' m'}
\sqrt{ (2 \ell _2 +1)( 2 \ell _3 +1) (2 \ell_4 +1) \over 16 \pi^2 (2 \ell_1 +1)}
C^{\ell_3 \ell_4 \ell'} _{m_3 m_4 m'}
C^{\ell_3 \ell_4 \ell'} _{000}
C^{\ell_2 \ell' \ell_1} _{m_2 m' -m_1}
C^{\ell_2 \ell' \ell_1} _{000}.
\end{eqnarray}
\subsection{21~cm fluctuations}
The brightness temperature of the 21~cm line from a redshift $z$
is given as in \cite{madau-meiksin-rees-1997} by
\begin{equation}
T_{21} (z) = \frac{\tau_{21}}{(1+z)}
(T_{\rm s} -T_{\rm CMB})(z),
\label{eq:21cmline}
\end{equation}
where $T_{\rm CMB}$ is the CMB temperature and $T_{\rm s}$
is the spin temperature given by the ratio of the number
density of hydrogen in the excited state to that of hydrogen in the
ground state.
The optical depth for the 21~cm line absorption $\tau_{21}$ is
\begin{equation}
\tau_{21} (z)
= {3 c^3 \hbar A_{10} x_{\rm H} n_{\rm H} \over 16 k \nu_{21} ^2
T_{\rm s} H(z)}
,
\label{eq:tau21}
\end{equation}
where $A_{10}$ is the Einstein A-coefficient, $\nu_{21}$ is the
frequency corresponding to the 21~cm wavelength and
$x_{\rm H}$ is the fraction of neutral hydrogen, which is written as a
function of the ionised fraction $x_i = 1- x_{\rm H}$.
Note that we drop the redshift space distortion caused by the peculiar velocity
of neutral hydrogen in Eq.~(\ref{eq:tau21}),
although this effect enhances the 21~cm fluctuations \citep{bharadwaj-ali-2004}.
Combining Eq.~(\ref{eq:flucdeco}) with Eqs. (\ref{eq:21cmline}) and (\ref{eq:tau21}),
we can obtain the observed 21~cm fluctuations at the observed
frequency $\nu$. The second order fluctuations on which we focus here are given by
\begin{equation}
\delta T_{21} (\hat {\bm n}, \nu)= \int ~d\eta \int {d^3 {\bm k} \over (2 \pi)^3}
{d^3 {\bm k'} \over (2 \pi)^3}
W_{21}(\eta, \eta(z_{\rm obs})) T_{0}(z(\eta))
\delta (\eta,{\bm k}-{\bm k}')
\delta_{H} (\eta, {\bm k}') \exp[i (\eta_0-\eta) (\hat{\bm n} \cdot {\bm k})],
\end{equation}
where $W_{21}(\eta, \eta(z))$ is the spectral response function of the
observation experiment, normalised as $\int d \eta W_{21}(\eta, \eta(z)) =1$
and centred at $\eta(z)$,
the redshift $z_{\rm obs}$ is related to the frequency
$\nu$ as $\nu = \nu_{21}/(1+z_{\rm obs})$,
$\delta _{H} \equiv (x_H - \bar x_H)/\bar x_H$
and $T_{0}$ is a normalisation temperature factor given by
\begin{equation}\label{eq:tzero}
T_{0}(z) =23 \left({\Omega_{\rm b} h^2 \over 0.02} \right)
\left[ \left({0.15\over \Omega_{\rm m} h^2} \right)
\left( {1+z \over 10} \right) \right] ^{1/2} \left({T_{\rm s}-T_{\rm cmb} \over T_{\rm s}} \right)~{\rm mK}.
\end{equation}
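As a quick numerical illustration of Eq.~(\ref{eq:tzero}), the following sketch (not part of the original analysis) evaluates $T_0$ in the limit $T_{\rm s} \gg T_{\rm cmb}$ adopted below, with the fiducial parameters of this paper as defaults:

```python
import math

def t0_mK(z, omega_b_h2=0.05 * 0.73**2, omega_m_h2=0.27 * 0.73**2):
    """Normalisation temperature T_0(z) in mK, in the limit T_s >> T_cmb
    (so the last factor of the equation above is ~1)."""
    return 23.0 * (omega_b_h2 / 0.02) * math.sqrt(
        (0.15 / omega_m_h2) * (1.0 + z) / 10.0)

# With the fiducial parameters, T_0 is a few tens of mK during the EoR.
for z in (10, 11, 12):
    print(z, round(t0_mK(z), 1))
```

The slow $\sqrt{1+z}$ dependence means $T_0$ varies only mildly across the redshifts considered in this paper.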
The spin temperature is determined by three couplings with CMB, IGM gas and Ly-$\alpha$ photons.
In the EoR, Ly-$\alpha$ photons emitted from ionising sources couple
the spin temperature with the IGM gas temperature \citep{ciadri-madau-2003}.
Meanwhile, since the IGM gas is heated up quickly
by Ly-$\alpha$ and X-ray photons from stars and QSOs, the IGM gas temperature is much
higher than the CMB temperature during reionisation. Therefore, we can assume
$T_{\rm s} \gg T_{\rm cmb}$ during the EoR in Eq. (\ref{eq:tzero}).
Taking the harmonic decomposition, we obtain the
spherical harmonic coefficients of the 21~cm fluctuations,
\begin{equation}
a_{\ell m}^{21} =
\sum_{\ell' m'} \sum_{\ell'' m''}
\int ~d\eta
\int {d^3 {\bm k _1} \over (2 \pi)^3}
\int{d^3 {\bm k _2} \over (2 \pi)^3}
B^{m m' m''}(\eta)
\delta (\eta,{\bm k}_1)
\delta_{H} (\eta, {\bm k}_2)
j_{\ell'}(k_1 r) j_{\ell''}(k_2 r)
Y_{\ell'}^{m'*}(\hat {\bm k}_1) Y_{\ell''}^{m''*}( \hat {\bm k}_2),
\label{eq:multi-21cm}
\end{equation}
where
\begin{equation}
B^{m m' m''}(\eta)
=
16 \pi^2 i^{\ell' + \ell''} W_{21}(\eta) T_{0}(\eta)
M^{m' m'' -m}_{\ell' \ell'' \ell}.
\end{equation}
Here $M^{m_1 m_2 m_3} _{\ell_1 \ell_2 \ell_3}$ is the integral of triple spherical harmonics,
\begin{eqnarray}
M^{m_1 m_2 m_3} _{\ell_1 \ell_2 \ell_3}
& = &
\int d\hat n~
Y^{m_1}_{\ell_1} (\hat n) Y^{m_2}_{\ell_2} (\hat n) Y^{m_3}_{\ell_3} (\hat n)
\nonumber \\
&=&
(-1)^{m_1}
\sqrt{ (2 \ell _2 +1)( 2 \ell _3 +1) \over 4 \pi (2 \ell_1 +1)}
C^{\ell_2 \ell_3 \ell_1} _{m_2 m_3 -m_1}
C^{\ell_2 \ell_3 \ell_1} _{000},
\end{eqnarray}
where the integral is non-zero only when $m_1+m_2+m_3=0$.
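The closed form above can be checked independently against SymPy's Gaunt coefficient, which computes exactly this integral of three spherical harmonics. The sketch below (a verification aid, not part of the derivation) compares the Clebsch-Gordan expression with `sympy.physics.wigner.gaunt` for a few multipole combinations:

```python
from sympy import sqrt, pi
from sympy.physics.wigner import gaunt, clebsch_gordan

def triple_harmonic_integral(l1, l2, l3, m1, m2, m3):
    # Closed form quoted in the text; non-zero only when m1 + m2 + m3 = 0
    # and l1 + l2 + l3 is even (Clebsch-Gordan selection rules).
    return ((-1)**m1
            * sqrt((2*l2 + 1) * (2*l3 + 1) / (4*pi*(2*l1 + 1)))
            * clebsch_gordan(l2, l3, l1, m2, m3, -m1)
            * clebsch_gordan(l2, l3, l1, 0, 0, 0))

# gaunt(l1, l2, l3, m1, m2, m3) is the integral of Y_l1^m1 Y_l2^m2 Y_l3^m3.
for args in [(2, 1, 1, 0, 0, 0), (2, 1, 1, 1, -1, 0), (4, 2, 2, 2, -1, -1)]:
    assert abs(float(triple_harmonic_integral(*args)) - float(gaunt(*args))) < 1e-12
```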
\subsection{The cross-correlation}
The second order cross-correlation is given by
substituting Eqs.~(\ref{eq:multi-ksz}) and (\ref{eq:multi-21cm})
into Eq.~(\ref{eq:defcrosscl}).
We obtain
\begin{eqnarray}
C^{{\rm kSZ}-21} _{\ell}
&=&-
\sum_{\ell'_1 m'_1} \sum_{\ell''_1 m''_1}
\sum_{\substack{\ell'_2 m'_2 \\ \ell''_2 m''_2 m'''_2 \\ \ell''''_2 m''''_2}}
\int d\eta
\int d\eta'
\int {d^3 {\bm k_1} \over (2 \pi)^3}
\int {d^3 {\bm k_2} \over (2 \pi)^3}
\int {d^3 {\bm k_1'} \over (2 \pi)^3}
\int {d^3 {\bm k_2'} \over (2 \pi)^3}
\nonumber \\
&&
\langle
\delta^* (\eta,{\bm k}_1)
\delta_{x}^* (\eta, {\bm k}_2)
\dot \delta (\eta',{\bm k}'_1)
(\delta(\eta', {\bm k}'_2)+\delta_x(\eta', {\bm k}'_2))
\rangle
A_{\ell \ell'_2 \ell''_2 1 \ell''''_2 } ^{m m'_2 m''_2 m'''_2 m''''_2}
(\eta')
[B^{m -m'_1 -m''_1}(\eta)]^*
\nonumber \\
&&
\times
j_{\ell'_1}(k_1 r) j_{\ell''_1}(k_2 r)
{j_{\ell'_2}(k'_1 r') \over k'_1} j_{\ell''_2}(k'_2 r')
Y_{\ell'_1}^{m'_1*}(\hat {\bm k}_1) Y_{\ell''_1}^{m''_1*}( \hat {\bm k}_2)
Y^{m''''_2}_{\ell''''_2}( \hat {\bm k}'_1)
Y_{\ell''_2}^{m''_2}( \hat {\bm k}'_2),
\label{eq:pre-cross}
\end{eqnarray}
where $r'=\eta_0-\eta'$ and we use $\delta_x = -\delta_H$.
Under the assumption that all fluctuation fields are Gaussian,
the Wick theorem breaks the ensemble average in Eq.~(\ref{eq:pre-cross})
into components with $\langle \delta \delta\rangle$,
$\langle \delta_x \delta_x \rangle$ and $\langle \delta \delta_x \rangle$.
For the simplification of Eq.~(\ref{eq:pre-cross}),
we assume that $W_{21}(z)= \delta(z-z_{\rm obs})$. This is a good approximation
because, compared to the observed frequency, the spectral resolution is narrow
(for example, the spectral resolution in the LOFAR case is less than 1 MHz
while the observed frequency is about 150 MHz for $z_{\rm obs} \sim 10$).
We can simplify further by using
the approximation for the integration of spherical Bessel functions
with $\ell \gg 1$,
\begin{equation}
\int dr' \int dk ~k^2 F(k) j_\ell (kr) j_\ell(kr') \approx
\left. \int dr'{\pi \over 2} {\delta(r-r') \over r^2} F(k) \right|_{k=\ell/r}
={\pi \over 2} { F(\ell/r)\over r^2}.
\label{eq:int-bessel}
\end{equation}
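The accuracy of this flat-sky (Limber-type) approximation can be checked numerically. The sketch below evaluates both sides of Eq.~(\ref{eq:int-bessel}) for $\ell = 50$, $r = 1$; the Gaussian test spectrum $F(k)$ and the integration grids are illustrative choices, and the agreement improves for larger $\ell$ and smoother $F$:

```python
import numpy as np
from scipy.special import spherical_jn

ell, r = 50, 1.0
F = lambda k: np.exp(-(k - 50.0)**2 / (2.0 * 15.0**2))  # smooth test spectrum peaked at ell/r

dk, drp = 0.05, 0.005
k = np.arange(dk, 120.0, dk)            # wavenumber grid
rp = np.arange(drp, 10.0, drp)          # r' grid; the delta function lives near r' = r

jl_r = spherical_jn(ell, k * r)                       # j_ell(k r)
jl_rp = spherical_jn(ell, np.outer(k, rp))            # j_ell(k r'), shape (k, r')
inner = jl_rp.sum(axis=1) * drp                       # integral over r'
lhs = np.sum(k**2 * F(k) * jl_r * inner) * dk         # remaining integral over k
rhs = 0.5 * np.pi * F(ell / r) / r**2
print(lhs, rhs)  # the two sides agree at roughly the ten per cent level here
```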
Finally,
we can rewrite the cross-correlation as
\begin{eqnarray}
C^{{\rm kSZ}-21} _{\ell}
&=&-
\sum_{\ell_1 \ell_2}
{(2 \ell _2 +1) (2 \ell_1+1) \over 2 \pi^2(2 \ell+1)}
|C^{\ell_1 \ell_2 \ell}_{000}|^2
{T_{0} (\eta_{\rm obs}) T_{\rm cmb} \over H_{\rm obs} r_{\rm obs}^2} {\dot G(\eta_{\rm obs})
\over G(\eta_{\rm obs})} \bar g(\eta_{\rm obs})
\int { d { k } } {j_{\ell_1}(k r_{\rm obs}) }
\left. {d j_{ \ell _1}(k r) \over d r } \right|_{r=r_{\rm obs}}
\nonumber \\
&& \times
\left.
\left [
\left(
P_{\delta x} \left(\eta_{\rm obs}, {\ell_2 \over r} \right)
+P_{xx} \left(\eta_{\rm obs}, {\ell_2 \over r} \right)
\right)
P_{\delta \delta}(\eta_{\rm obs}, k)
+\left(
P_{\delta \delta} \left(\eta_{\rm obs}, {\ell_2 \over r} \right)
+ P_{\delta x}\left(\eta ,{\ell_2 \over r} \right)
\right)
P_{\delta x}(\eta_{\rm obs}, k)
\right] \right|_{r=r_{\rm obs}}
,
\label{eq:cross-ksz21}
\end{eqnarray}
where the power spectra $P_{\delta \delta}, P_{x x}$ and $P_{\delta x}$
are defined as
$\langle \delta(\eta,k_1) \delta(\eta,k_2)\rangle=(2 \pi)^3\delta(k_1-k_2)
P_{\delta \delta}(\eta,k_1)$,
$\langle \delta_x(\eta,k_1) \delta_x(\eta,k_2)\rangle=(2 \pi)^3\delta(k_1-k_2)P_{xx}(\eta,k_1)$,
and
$\langle \delta(\eta,k_1) \delta_x(\eta,k_2)\rangle=(2 \pi)^3\delta(k_1-k_2)P_{\delta x}(\eta,k_1)$.
In Eq.~(\ref{eq:cross-ksz21}), $G$ is the growth factor of the dark matter density fluctuations
defined by $\delta (k, \eta) = G(\eta) \delta(k)$ with the present density
fluctuations $\delta(k)$. Since the epoch we are interested in is matter dominated,
we can assume $G \propto 1/(1+z)$ in terms of the redshift $z$.
In order to calculate the cross-correlation, the power spectra $P_{xx}$
and $P_{\delta x}$ which are determined by the reionisation model
are essential. We discuss the analytical reionisation model in the following section.
\section{Reionisation model}
For an analytical reionisation model,
we adopt the approach of \citet{furlanetto-zaldarriaga-2004}
and \citet{2005ApJ...630..643M}.
Ionisation bubbles start to
evolve from high density galaxy regions into the voids,
as shown in recent numerical simulations \citep[e.g.][and references therein]{2009arXiv0906.4348T}.
Therefore, the mass of ionised gas $m_{\rm ion}$ is associated with the
mass of a collapsed object $m_{\rm gal}$ by the Ansatz, $m_{\rm ion}=\zeta m_{\rm gal}$ where
$\zeta$ is an ionising efficiency.
The condition for the full ionisation of a region of mass $m$
is that the region contains sufficient sources to self-ionise,
i.e. $f_{\rm coll}\ge\zeta^{-1}$, where $f_{\rm coll}$ is the
fraction of collapsed halos above the critical mass for collapse, $m_{\rm min}$ \citep{1993MNRAS.262..627L}.
This criterion gives the barrier (the density threshold) $\delta_x$
for ``self-ionisation'' which depends on $m$.
\citet{2004ApJ...613...16F} found a reasonable approximation
of the barrier
in the linear form of the
variance of the density fluctuations, $\sigma^2(m,z)$,
as $B(m,z)=B_0+B_1 \sigma^2(m,z)$ where $\sigma(m,z)$ is
obtained by smoothing the density field at the scale $m$. Here,
$B_0=\delta_c -\sqrt{2} K(\zeta) \sigma_{\rm min} (z)$
and $B_1=\partial \delta_x / \partial \sigma^2 |_{\sigma^2 =0}$
where $\sigma_{\rm min} (z)$ is the mass dispersion at
the minimum mass and redshift $z$ for the collapsed ionisation source.
For the linear barrier $B(m,z)$,
the bubble mass function is written as \citep{1998MNRAS.300.1057S}
\begin{equation}
\frac{d n(m)}{d m}dm=\sqrt{\frac{2}{\pi}}\frac{\bar{\rho}}{m^2}\left|\frac{d\log\sigma}{d\log m}
\right|\frac{B_0}{\sigma(m)}\exp\left[-\frac{B^2(m,z)}{2\sigma^2(m)}\right]dm\, ,
\label{eq:bubblemassfunction}
\end{equation}
where $\bar{\rho}$ is the mean mass density of the Universe.
The smallest bubble mass is given by $\zeta m_{\rm min}$. Therefore,
we can obtain the mean ionised fraction (volume averaged) $\bar x_i$ as
\begin{equation}
\bar x_i=\int_{\zeta m_{\rm min}} V(m) \frac{d n(m)}{d m}dm
=\frac{1}{2} e^{-2 B_0 B_1} {\rm
erfc} \left( \frac{B_0 - B_1 \sigma_{\zeta}^2}{\sqrt{2 \sigma_{\zeta
}^2}} \right)
+ \frac{1}{2} {\rm erfc} \left(\frac{B_0 + B_1 \sigma_{\zeta}^2}
{\sqrt{2 \sigma_{\zeta}^2}} \right),
\label{eq:ionhistory}
\end{equation}
where $\sigma_{\zeta}=\sigma(\zeta m_{\rm min} , z)$ and $V(m)$ is the comoving volume
of a bubble with mass $m$.
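For a given barrier, Eq.~(\ref{eq:ionhistory}) is straightforward to evaluate. The sketch below implements it directly; the values of $B_0$, $B_1$ and $\sigma_{\zeta}^2$ are illustrative placeholders (not the ones used for the figures), chosen to show how $\bar x_i$ grows as the variance of the density field increases towards low redshift:

```python
from math import exp, sqrt
from scipy.special import erfc

def mean_ionised_fraction(B0, B1, sig2):
    """Volume-averaged ionised fraction for a linear barrier
    B(m, z) = B0 + B1 * sigma^2(m, z), with sig2 = sigma_zeta^2."""
    s = sqrt(2.0 * sig2)
    return (0.5 * exp(-2.0 * B0 * B1) * erfc((B0 - B1 * sig2) / s)
            + 0.5 * erfc((B0 + B1 * sig2) / s))

# As structure grows (sigma_zeta^2 increases), the ionised fraction rises.
print([round(mean_ionised_fraction(1.0, 0.5, s2), 3) for s2 in (0.5, 1.0, 2.0)])
```

In practice $B_0$, $B_1$ and $\sigma_\zeta$ are computed from the cosmology and from $\zeta$ and $m_{\rm min}$ at each redshift, which yields the histories shown in Fig.~\ref{fig:ionhistory}.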
In the case of a linear barrier,
the linear bias of a source of mass $m$ is given by \citep{2005ApJ...630..643M}
\begin{equation}
b(m)=1+ {B(m)/\sigma^2(m) -1/B_0 \over D(z)}.
\end{equation}
Therefore, the mean bias of the bubble $\bar b(m)$ is
obtained from
\begin{equation}
\bar{b} = \bar x_i^{-1} \int dm \, b(m) V(m) \frac{dn(m)}{dm}.
\label{eq:bbar}
\end{equation}
In this reionisation model, the free parameters for the model
are $\zeta$ and $m_{\rm min}$. Here we take two parameter sets
which are motivated from numerical simulations: ``stars'' model
and ``QSOs'' model \citep{2010MNRAS.tmp...65J}.
In both models, the ionised fraction
reaches $\bar x_i= 0.5$ at $z=11$, in order to agree with the WMAP results.
In the ``stars'' model, we assume that stars are
responsible for reionisation.
We take a low efficiency $\zeta \approx 40$ which is reasonable for
normal star formation
and assume that the minimum mass corresponds to
a virial temperature of $10^4$ K, above which cooling by atomic hydrogen becomes efficient.
In the ``QSOs'' model, we assume that the reionisation history is faster
and the bubble size is larger compared to those in the ``stars'' model.
Therefore, we set a high virial temperature ($5 \times 10^{4}$ K)
and a high efficiency $\zeta \approx 200$. The candidates for the ionisation sources
are massive stars and QSOs.
We show the evolution of the ionised fraction for each model in Fig. \ref{fig:ionhistory}.
From Eq.~(\ref{eq:bubblemassfunction}), we can obtain the bubble size distribution
$V d n / dR$ as a function of the comoving size $R$ of a bubble under the assumption
that the bubbles are spherical.
We plot the results in Fig. \ref{fig:bubblesize}.
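As a rough consistency check of the normalisation $\bar x_i = 0.5$ at $z=11$, one can estimate the Thomson optical depth implied by such a history. The sketch below (an order-of-magnitude estimate only: it assumes, for simplicity, abrupt and complete reionisation at $z=11$ and neglects helium) integrates $\tau = \sigma_T c\, n_{\rm H,0} \int_0^{z_{\rm re}} \bar x_i (1+z)^2 / H(z)\, dz$ with the fiducial parameters of this paper:

```python
import numpy as np

# Fiducial cosmology of this paper
h, om, ob = 0.73, 0.27, 0.05
H0 = h * 100.0 * 1000.0 / 3.0857e22        # Hubble rate today, s^-1
sigma_T = 6.6524e-29                        # Thomson cross section, m^2
c = 2.9979e8                                # speed of light, m/s
rho_c = 1.8788e-26 * h**2                   # critical density, kg/m^3
n_H0 = 0.76 * ob * rho_c / 1.6726e-27       # hydrogen number density today, m^-3

z = np.linspace(0.0, 11.0, 2001)
E = np.sqrt(om * (1.0 + z)**3 + 1.0 - om)
integrand = (1.0 + z)**2 / E                # with x_i = 1 for z < z_re = 11
tau = sigma_T * c * n_H0 / H0 * np.sum(integrand) * (z[1] - z[0])
print(round(tau, 3))  # of order 0.1, in the range favoured by CMB data
```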
\begin{figure}
\begin{center}
\includegraphics[keepaspectratio=true,height=60mm]{ionhistory.eps}
\end{center}
\caption{Evolution of the mean ionised fraction. The solid and
dotted lines represent $\bar x_i$ in the ``stars'' and ``QSOs'' models, respectively.}
\label{fig:ionhistory}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[keepaspectratio=true,height=60mm]{bubblesize.eps}
\end{center}
\caption{Ionised bubble comoving size distribution. The dotted, solid and dashed
lines represent the distributions in the ``stars'' model
at $z=10$, $z=11$ and $z=12$, respectively.
The ionised fractions are $\bar x_i=0.8$ at $z=10$, $\bar x_i=0.5$ at $z=11$
and $\bar x_i=0.3$ at $z=12$.
We also plot the distribution
in the ``QSOs'' model at $z=11$ as the thin solid line. The left side of each line
ends at $R(\zeta m_{\rm min})$ where $\zeta m_{\rm min}$
is the minimum mass of the
ionised region.}
\label{fig:bubblesize}
\end{figure}
\subsection{The two-point correlation function $\xi_{xx} (r)$}
In order to obtain the power spectra $P_{xx}$ and $P_{\delta x}$ in Eq.~(\ref{eq:cross-ksz21}),
we need to compute the correlation function $\xi_{xx} (r)=\langle x_i ({\bm x}_1) x_i ({\bm x}_2)
\rangle-\bar x_i^2$ and $\xi_{\delta x} (r) = \langle \delta({\bm x}_1) x_i ({\bm x}_2) \rangle$
where the points ${\bm x}_1$ and ${\bm x}_2$ are separated by $r=|{\bm x}_1-{\bm x}_2|$.
Here we utilize the analytical correlation functions
of \citet{2005ApJ...630..643M}.
As in the case of the density correlation function
in the halo formalism, the correlation function of the ionised fraction $\xi_{xx} (r)$
receives two contributions. One is the one-bubble term $P_1$,
which gives the two-point correlation for the case where
the two points separated by $r$ are ionised by one and the same ionisation source;
the other is the two-bubble term $P_2$, which corresponds to the case where
the two points are ionised by two separate sources.
As shown in Fig.~\ref{fig:bubblesize}, the typical size of an ionisation bubble becomes
larger than 5 Mpc when the ionised fraction reaches one half.
In such a regime, where the ionisation bubbles become large,
$P_1$ is largely dominant and $P_2$ can be ignored.
Thus, \citet{2005ApJ...630..643M} divide the reionisation process into
two phases: the early phase and the late phase.
In the early phase, both $P_1$ and $P_2$ are important,
while in the late phase, $P_1$ is dominant and $P_2$ can be
ignored. The criterion for these phases is set as $\bar x_i > 0.5$
in order to be in agreement with results from the hybrid
approach of analytic modeling and numerical simulations of
\citet{2005ApJ...630..657Z}.
They define the correlation function $\xi_{xx} (r)$ by
\begin{eqnarray}
\xi_{xx} (r) & = \left \{
\begin{array}{l l}
(1-\bar x_i) \,P_1(r) & {\rm when ~} \bar x_i >0.5, \\
P_1(r) + P_2(r)-\bar x_i^2 &{\rm otherwise},
\end{array} \right.
\label{eq:xx}
\end{eqnarray}
where
\begin{eqnarray}
P_1(r) & = & \int dm \, {dn(m) \over dm} V_0(m, r),
\label{eq:onebub} \\
P_2(r) &=& \int dm_1 {dn(m_1) \over dm_1} \int d^3{\bm r}_1
\int dm_2 {dn(m_2) \over dm_2}
\int d^3{\bm r}_2 [1 + \xi({\bm r}_1 - {\bm r}_2 | m_1, m_2)].
\label{eq:twobub}
\end{eqnarray}
Here, $ \xi(r | m_1, m_2)$ is the excess probability to have
a bubble of mass $m_1$ at the distance $r$ from a bubble of mass $m_2$.
For the simplicity of the calculation,
it is assumed that $\xi(r | m_1, m_2)$ can be written in terms of the correlation
function of the matter density $\xi_{\delta \delta}$
as $\xi(r | m_1, m_2) = \bar{b}
\xi_{\delta \delta} ({\rm max}(r, R_1 + R_2))$
where $R_1(m_1)$ and $R_2(m_2)$ are the bubble radii.
In order to calculate the volume in Eqs.~(\ref{eq:onebub}) and (\ref{eq:twobub})
analytically, all ionisation bubbles are assumed spherical.
Therefore, $V_0(m, r)$ is the volume within a sphere of mass $m$ that can
encompass two points separated by a distance $r$.
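For spherical bubbles, $V_0(m,r)$ is simply the intersection volume of two spheres of the bubble radius $R(m)$ whose centres sit at the two points, a distance $r$ apart. The sketch below implements the standard lens formula for this overlap and checks it against a Monte Carlo estimate (the radius and separation are arbitrary test values):

```python
import numpy as np

def v0_sphere(R, r):
    """Volume of positions for a bubble centre of radius R that covers two
    points separated by r: the intersection of two spheres of radius R."""
    if r >= 2.0 * R:
        return 0.0
    return np.pi * (2.0 * R - r)**2 * (4.0 * R + r) / 12.0

# Monte Carlo check: sample centres uniformly in a sphere of radius R
# around the first point, count those that also cover the second point.
rng = np.random.default_rng(42)
R, r = 1.0, 1.0
n = 200_000
u = rng.random(n)
d = rng.normal(size=(n, 3))
pts = (R * u[:, None]**(1.0 / 3.0)) * d / np.linalg.norm(d, axis=1)[:, None]
frac = np.mean(np.linalg.norm(pts - np.array([r, 0.0, 0.0]), axis=1) <= R)
v_mc = frac * (4.0 / 3.0) * np.pi * R**3
print(v0_sphere(R, r), v_mc)  # the two estimates agree at the per-cent level
```

At $r=0$ the formula reduces to the full bubble volume $(4/3)\pi R^3$, and it vanishes for $r \ge 2R$, as it must.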
For the volume integration in Eq.~(\ref{eq:twobub}),
\citet{2005ApJ...630..643M} adopt the overlapping conditions: (1) $m_1$ cannot ionize $r_2$,
and $m_2$ cannot ionize $r_1$; (2) the center of $m_2$ cannot lie inside $m_1$, but any
other part of $m_2$ {\it can} touch $m_1$, and vice versa.
\subsection{The two-point cross-correlation function $\xi_{\delta x} (r)$}
As in the case of $\xi_{x x} (r)$,
the two-point cross-correlation $\xi_{\delta x} (r)$ has two contributions, $P_{\rm in}$ and $P_{\rm out}$.
The contribution $P_{\rm in}$ corresponds to the case of both points being contained within the same ionised bubble. Following \citet{2005ApJ...630..643M}, it is
written as
\begin{eqnarray}
P_{\rm in}(r) &=& \int dm {dn(m) \over dm} V_0(m, r) \int dm_h
\frac{m_h}{\rho}
{d n_h(m_h | m) \over dm_h} \nonumber \\
& = & \int dm {dn(m) \over dm} V_0(m, r)
[1 + B(m,z)],
\label{eq:pin}
\end{eqnarray}
where
the last line in Eq.~(\ref{eq:pin}) is obtained by using the fact
that the inner integral is the mean over-density of the bubble
$1+ \delta_B $ and $\delta_B$ is $B(m,z)$ at linear order.
The contribution $P_{\rm out}$ corresponds to the case when one point
is outside the ionised bubble of the other point.
\citet{2005ApJ...630..643M} give $P_{\rm out}$ in terms of
the mean bias for halos $\bar{b}_h$,
\begin{eqnarray}
P_{\rm out}(r) & = & \bar{x}_i - \int dm {dn(m) \over dm} V_0(m,r)
+ \int dm {dn(m) \over dm}
\int
d^3{\bm r}_ {\rm b} [\bar{b}_h \bar{b} \xi_{\delta \delta}
({\bm r} - {\bm r}_{\rm b})].
\label{eq:pout}
\end{eqnarray}
where $d n_h(m_h | m)/dm_h$ in Eq.~(\ref{eq:pin}) is the conditional halo mass function.
In Eq.~(\ref{eq:pout}), the integration range of ${\bm r}_{\rm b}$ is
over all bubbles which ionise the point ${\bm r}_{\rm b}$ but not the other point separated
by ${\bm r}$ from ${\bm r}_{\rm b}$.
For simplicity, $\xi_{\delta \, \delta}$ is evaluated at the separation ${\rm max}[R(m), r ]$.
As reionisation proceeds and the typical size of an ionised bubble
becomes large, the term $P_{\rm out}$ becomes unimportant compared to $P_{\rm in}$.
Therefore, the computation of $\xi_{\delta x}$ is divided
into two phases again,
\begin{eqnarray}
\xi_{\delta x} (r) & = \left \{ \begin{array}{l l}
P_{\rm in} - P_1 &{\rm when ~} \bar{x}_i > 0.5, \\
P_{\rm in} + P_{\rm out} - \bar{x}_i
& {\rm otherwise}, \end{array} \right.
\label{eq:deltax}
\end{eqnarray}
where we assume that $P_{\rm in}$ is dominant at large
$\bar{x}_i$ ($\bar{x}_i>0.5$) and we subtract $P_1$, given by
Eq.~(\ref{eq:onebub}), from $P_{\rm in}$, which is the correlation
between $x_i$ and $\rho/ \bar{\rho}$.
\section{Results and discussion}
We calculate the angular power spectrum of the cross-correlation described
in Eq.~(\ref{eq:cross-ksz21})
in the two models, ``stars'' and ``QSOs''.
First, we show the results in the ``stars'' model in Fig.~\ref{fig:llcl-star.eps}.
In this model, the mean ionised fraction is $0.3$, $0.5$ and $0.8$
at $z=12$, $z=11$ and $z=10$, respectively.
The signal of the cross-correlation between kSZ and 21 cm fluctuations
exhibits an anti-correlation on small scales ($\ell > 1000$).
As mentioned in \citet{2004PhRvD..70f3509C},
there is a geometric cancellation in the cross-correlation.
This cancellation is responsible for a suppression of the amplitude of the cross-correlation.
However, the cross-correlation has a distinctive oscillatory shape.
In particular, we find that the peak position of the anti-correlation
traces the typical size of an ionised bubble at each redshift.
For example, at $z=11$, the typical size of an ionised bubble
is almost $6$ Mpc, as shown in Fig.~\ref{fig:bubblesize},
and the anti-correlation at $z_{\rm obs}=11$ is maximal
at the corresponding multipole $\ell \sim 4000$.
As the Universe evolves, the typical scale of an ionised bubble becomes larger.
The peak position of the anti-correlation
shifts accordingly toward smaller multipoles.
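The correspondence between bubble size and peak multipole follows from projecting the comoving bubble scale to the comoving distance of the observed redshift, $\ell \sim \pi r_{\rm obs}/R$. The sketch below (an order-of-magnitude estimate with the fiducial cosmology; the factor $\pi$ is a rough conversion between a bubble diameter and a wavelength) recovers the few-thousand multipole scale quoted above:

```python
import numpy as np

h, om = 0.73, 0.27
c_over_H0 = 2.9979e5 / (100.0 * h)                 # Hubble distance, Mpc

def comoving_distance(z, n=4001):
    zz = np.linspace(0.0, z, n)
    E = np.sqrt(om * (1.0 + zz)**3 + 1.0 - om)
    return c_over_H0 * np.sum(1.0 / E) * (zz[1] - zz[0])

r_obs = comoving_distance(11.0)     # comoving distance to z = 11, in Mpc
R_bubble = 6.0                      # typical comoving bubble size at z = 11
ell_peak = np.pi * r_obs / R_bubble
print(round(r_obs), round(ell_peak))  # a multipole of a few thousand
```

This simple projection agrees with the peak of the anti-correlation at $\ell \sim 4000$ to within a factor of order unity, as expected for such an estimate.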
The evolution of the cross-correlation amplitude depends on the
evolution of $\delta_x$ through the power spectra of
$P_{xx}$ and $P_{x\delta}$ which evolve rapidly during the EoR.
Since the amplitudes of $P_{xx}$ and $P_{x\delta}$ increase as the redshift decreases,
the amplitude of the cross-correlation also becomes larger at low redshifts.
However, after the mean ionised fraction reaches $\bar x_i \sim 0.9$,
the signal of the 21 cm fluctuations
becomes weak and the cross-correlation amplitude also starts to decrease.
In Fig.~\ref{fig:llcl-star.eps}, we also plot the first order cross-correlation
between 21~cm and CMB Doppler anisotropies
calculated by using the same expression as Eq.~(15) of \citet{2006ApJ...647..840A}.
The sign of the first order cross-correlation depends on the evolution of $\delta_x$.
As long as $\delta_x$ is small, the ionisation process is homogeneous, and
the cross-correlation is negative. On the other hand, in the case of a highly inhomogeneous reionisation,
the sign of the cross-correlation is positive.
In our reionisation model, the first order cross-correlation at the early phase
of reionisation is negative at $\ell < 1000$ (see the top and middle panels in
Fig.~\ref{fig:llcl-star.eps}). We found that the amplitude of the first order
cross-correlation at $z_{\rm obs}=11$ is $300~\mu {\rm K}^2$ at the peak
position, $\ell \sim 100$, and decreases rapidly towards zero at large
multipoles. As we can see in Fig.~\ref{fig:llcl-star.eps}, the second order
kSZ-21 cm cross-correlation dominates the first order cross-correlation at
multipoles larger than $\ell = 1000$.
However, as the ionisation process proceeds, the ionisation fraction becomes highly
inhomogeneous and $\delta_x$ grows substantially. As a result, the first order
cross-correlation has a positive sign and a high amplitude as shown in the bottom
panel of Fig.~\ref{fig:llcl-star.eps}. The first order cross-correlation
becomes comparable to the second order kSZ-21~cm cross-correlation even at
$\ell \sim 1000$, while the kSZ cross-correlation still dominates the first
order cross-correlation and remains negative at multipoles higher
than $\ell =1000$.
\begin{figure}
\begin{center}
\includegraphics[keepaspectratio=true,height=150mm]{llcl-star-red.eps}
\end{center}
\caption{Angular power spectra of the second order cross-correlation
in the ``stars'' model. From top to bottom panels, we plot the angular power spectra at $z_{\rm obs}=12$, $z_{\rm obs}=11$ and
$z_{\rm obs}=10$, respectively. The mean ionised
fraction is $\bar x =0.3$ at $z =12$, $\bar x =0.5$ at $z=11$ and
$\bar x =0.8$ at $z=10$. For reference, we show the first order
cross-correlation between the CMB temperature and 21~cm fluctuations
as the dotted line in each panel.}
\label{fig:llcl-star.eps}
\end{figure}
Next we show the dependence of the angular cross-correlation power spectrum on
the ionisation model in Fig.~\ref{fig:llcl-qso.eps}.
In the ``QSOs'' model, the ionisation history is rapid and the typical size
of ionised bubbles is large.
The amplitude of $P_{xx}$ and $P_{x\delta}$ in the ``QSOs'' model
is larger than in the ``stars'' model.
As a result, in the ``QSOs'' model, the signal of the
cross-correlation is large and
the peak position of the anti-correlation appears at smaller
multipoles, as expected.
We can therefore conclude that the cross-correlation between kSZ and 21 cm fluctuations
at the second order is sensitive to the average size of an ionised bubble.
The first order cross-correlation also has a higher amplitude than in the ``stars'' model
because the amplitude depends on the evolution rate of the background ionisation fraction.
However, the inhomogeneous contribution coming from the term with $P_{x\delta}$ in
Eq.~(15) in \citet{2006ApJ...647..840A}
is partially canceled by the homogeneous one from the term with $P_{\delta \delta}$. As a result,
in the highly inhomogeneous ``QSOs'' reionisation model,
the cross-correlation between kSZ and second order 21 cm fluctuations
reaches a significant amplitude compared with
the first order cross-correlation even on relatively large scales ($\ell \lesssim 1000$).
\begin{figure}
\begin{center}
\includegraphics[keepaspectratio=true,height=60mm]{llcl-qso.eps}
\end{center}
\caption{Dependence of the cross-correlation on the ionisation model.
The solid and dashed lines show the power spectra for the ``stars'' model
and the ``QSOs'' model, respectively (see text). We set $z_{\rm obs}=11$, where $\bar x=0.5$ in
both models. For reference, we plot the first order cross-correlation in each
model as the thin lines.}
\label{fig:llcl-qso.eps}
\end{figure}
\subsection{Detectability}
In the previous section, we showed that the peak position of the
anti-correlation is related to the typical bubble size at the
observed redshift of 21 cm fluctuations.
Here, our concern is the detectability of such a negative
peak in the kSZ-21 cm cross-correlation.
In order to investigate the detectability, we calculate the signal-to-noise ratio ($S/N$).
For simplicity, we assume that CMB, 21 cm fluctuations and
instrumental noise are Gaussian.
The total $S/N$ can be calculated as
\begin{equation}
\left( {S \over N} \right) ^2 =
f_{\rm sky} \sum_{\ell = \ell_{\rm min}} ^{\ell_{\rm max}} (2 \ell +1)
{| C_\ell ^{21-{\rm CMB}}| ^2 \over | C_\ell ^{21-{\rm CMB}} |^2
+C_\ell ^{21} C_\ell ^{\rm CMB}},
\label{eq:SNratio}
\end{equation}
where $f_{\rm sky}$ is the sky fraction common to the two
cross-correlated signals, and $C_\ell ^{\rm CMB}$, $C_\ell ^{21}$ and
$C_\ell^{21-{\rm CMB}}$ are the angular power spectra of CMB, 21 cm
fluctuation and the cross-correlation between 21 cm and CMB,
respectively.
In order to focus on the detectability of the signal from the typical
bubble size, we set $\ell_{\rm min}=500$ and $\ell_{\rm max}=5000$.
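As an illustrative numerical sketch of Eq.~(\ref{eq:SNratio}), the sum can be evaluated as below; the three power spectra used here are placeholder power laws for illustration only, not the model predictions.

```python
import numpy as np

# Sketch of Eq. (eq:SNratio) with ell_min = 500, ell_max = 5000 and f_sky = 0.02.
# The three spectra are illustrative placeholders only.
ell = np.arange(500, 5001)
f_sky = 0.02

C_cross = 1e-3 / ell          # C_ell^{21-CMB} (placeholder)
C_21 = 1e-2 / ell             # C_ell^{21} including noise (placeholder)
C_cmb = 1e-1 / ell            # C_ell^{CMB} (placeholder)

sn2 = f_sky * np.sum((2 * ell + 1) * np.abs(C_cross)**2
                     / (np.abs(C_cross)**2 + C_21 * C_cmb))
sn = np.sqrt(sn2)
```

With the actual model spectra inserted in place of the placeholders, this evaluation would yield curves of the kind plotted in Fig.~\ref{fig:snratio}.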
At the multipoles that we are interested in ($\ell>1000$),
the dominant CMB signal is due to the thermal
SZ effect \citep{1969Ap&SS...4..301Z}. However, we can remove this
contribution because of the frequency dependence of the SZ effect.
Therefore with the assumption that the foreground can be completely
removed from the CMB map, the main contribution to $C_\ell^{\rm CMB}$
comes from the primordial CMB anisotropies $C_\ell ^{\rm pri}$ and
the noise of the instrument $N_\ell^{\rm CMB}$. We can write
$C_\ell^{\rm CMB}$ as
\begin{equation}
C_\ell^{\rm CMB} = C_\ell ^{\rm pri}
\exp (-\ell^2 \sigma_{ \rm CMB}^2/2 ) + N_\ell^{\rm CMB},
\label{eq:cmb-signal}
\end{equation}
where we assume that the beam profile of the CMB observation is Gaussian with
Full Width at Half Maximum (FWHM) $\theta_{\rm CMB}$, and
$\sigma_{ \rm CMB} = \theta_{\rm CMB}/\sqrt{8 \ln 2}$.
The effect of the beam size is a damping of the primordial
CMB signal on scales smaller than the FWHM.
The noise power spectrum $N_\ell ^{\rm CMB}$ is given by
\citep{knox-1995}
\begin{equation}
N_\ell ^{\rm CMB}= {\sigma_{\rm pix}^2 \Omega_{\rm pix}},
\end{equation}
where $\sigma_{\rm pix}$ is the sensitivity in each pixel and
$\Omega_{\rm pix}$ is the solid angle per pixel;
$\Omega_{\rm pix}= \theta_{\rm CMB}^2$.
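The beam damping and noise terms can be sketched numerically as follows; the primordial spectrum below is a placeholder, while $\theta_{\rm CMB}$ and $\sigma_{\rm pix}$ take the {\it Planck}-like values quoted later in the text.

```python
import numpy as np

# Sketch of Eq. (eq:cmb-signal) and N_ell^CMB = sigma_pix^2 * Omega_pix.
theta_cmb = 5.0 * (np.pi / 180.0) / 60.0        # 5 arcmin in radians
sigma_cmb = theta_cmb / np.sqrt(8.0 * np.log(2.0))
sigma_pix = 5e-6
N_cmb = sigma_pix**2 * theta_cmb**2             # Omega_pix = theta_cmb^2

ell = np.arange(2, 5001)
C_pri = 1e-9 / ell**2                           # placeholder primordial spectrum
C_cmb = C_pri * np.exp(-ell**2 * sigma_cmb**2 / 2.0) + N_cmb
```

The exponential factor suppresses the primordial term at high multipoles, so $C_\ell^{\rm CMB}$ approaches the white-noise floor $N_\ell^{\rm CMB}$ on scales smaller than the beam.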
As for the 21~cm fluctuations, the noise signal from the instruments
and foreground will dominate the intrinsic signal from the EoR.
Assuming that the foreground can be removed to the level below the
noise from instruments, we can write
\begin{equation}
C_\ell ^{21} =N_\ell^{21}=
{2 \pi \over t_{\rm obs} \Delta \nu} \left( {D \lambda \over A/T}
\right)^2,
\label{eq:21instnoise}
\end{equation}
where we use the noise power spectrum of 21~cm observation
estimated by \citet{2004ApJ...608..622Z}.
In Eq.~(\ref{eq:21instnoise}),
$\Delta \nu$ is the bandwidth, $t_{\rm obs}$ is the total
integration time, $A/T$ is the sensitivity (the effective area divided
by the system temperature) and $D$ is the baseline length associated with the FWHM of the 21~cm observation,
$\theta_{21} =\lambda /D $.
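A hedged numerical example of Eq.~(\ref{eq:21instnoise}): the bandwidth and integration time below are assumptions chosen for illustration, while $D$ and $A/T$ follow the SKA-like values adopted later in the text.

```python
import numpy as np

# Sketch of Eq. (eq:21instnoise). Assumed values: 1000 h integration, 1 MHz bandwidth.
t_obs = 1000.0 * 3600.0         # integration time in seconds (assumed)
dnu = 1e6                       # bandwidth in Hz (assumed)
D = 1000.0                      # 1 km baseline in metres
lam = 0.21 * (1.0 + 11.0)       # redshifted 21 cm wavelength at z = 11, in metres
AoverT = 1000.0                 # sensitivity A/T in m^2 K^{-1}

N_21 = (2.0 * np.pi / (t_obs * dnu)) * (D * lam / AoverT)**2   # in K^2
```

Note that the noise scales as $1/t_{\rm obs}$, which is why longer observation times directly improve the $S/N$ discussed below.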
In the calculation of the cross-correlation signal, we assume that
the foregrounds and noise of 21~cm fluctuations and CMB anisotropy
are not correlated. Therefore, the cross-correlation consists mainly
of the first order Doppler-21~cm cross-correlation and
the second order kSZ-21~cm one,
\begin{equation}
|C_\ell ^{21-{\rm CMB}}| ^2= (|C_\ell ^{21-{\rm Doppler}}|^2 +
|C_\ell ^{21-{\rm kSZ}}|^2) \exp [-\ell^2 (\sigma_{ \rm CMB}^2
+\sigma_{21}^2 )/2],
\end{equation}
where $\sigma_{21} = \theta_{21}/\sqrt{8 \ln 2}$ and
both signals are affected by the angular resolution of the
observations.
Our interest is the detectability of the cross-correlation signal
from the patchy reionisation by {\it Planck} and SKA.
Therefore, in the computation of Eq.~(\ref{eq:cmb-signal}), we adopt
typical values for {\it Planck}, which are
$\theta_{\rm CMB}= 5~$arcmin
and $\sigma_{\rm pix}= 5 \times 10^{-6}$.
The goal sensitivity of SKA is currently designed
as $A/T=5000~{\rm m^2 K^{-1}}$ at 200 MHz.
In the current design, 20 \% of the total collecting area lies within a
1 km baseline, 50 \% within a 5 km baseline, and 75 \%
within a 150 km baseline. Because we are interested in the scales
$\ell \sim 2000$, we take $D=1~$km and $A/T=1000~{\rm m^2 K^{-1}}$.
The sky fraction $f_{\rm sky}$ corresponds to the one of
SKA because we consider {\it Planck} as CMB observation, which is
almost full-sky.
We assume $200 {\rm deg}^2$ per field of view and 4 independent
survey fields for SKA. Therefore the total
sky fraction is $f_{\rm sky} \sim 0.02$.
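The quoted sky fraction follows from a one-line estimate, using the full sky area of about $41253~{\rm deg}^2$:

```python
import math

# 4 independent fields of 200 deg^2 each, compared with the full sky.
full_sky_deg2 = 4 * 180.0**2 / math.pi      # ~41253 deg^2
f_sky = 4 * 200.0 / full_sky_deg2           # ~0.019, i.e. f_sky ~ 0.02
```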
We plot $S/N$ as a function of $t_{\rm obs}$ in units of
hours for ``stars'' and ``QSOs'' models at $z=11$ in
Fig.~\ref{fig:snratio}. In both panels in Fig.~\ref{fig:snratio},
$S/N$ of SKA with {\it Planck} is represented by the solid lines.
Obviously, longer observation times make $S/N$ larger. Then,
since the cross-correlation amplitude in the ``QSOs'' model is
higher than in the ``stars'' model, $S/N$ in the former model
is larger than in the latter model. However, both $S/N$ are below
the detection level.
This difficulty of the detection is mainly due to the instrumental
noise of the 21 cm observation.
Although the primary CMB is one of the significant sources of
noise in the detection of the cross-correlation signal between CMB
and 21 cm from the EoR on large scales \citep{2010MNRAS.tmp...65J, 2010MNRAS.402.2617T}, the primary CMB suffers Silk damping on the
scales we are interested in here, and the noise of {\it Planck} is
also kept sufficiently low.
In order to clarify the impact of the improvement in the sensitivity
of 21~cm observation, we calculate $S/N$ in the case of a 5
times better sensitivity than that of SKA and plot the result as the
dotted line in Fig.~\ref{fig:snratio}. The improved
sensitivity of the 21~cm observation brings a substantially larger $S/N$. In particular,
the $S/N$ in the ``QSOs'' model can reach $S/N \sim 5$ for a 500-hour
observation.
Finally, while we use the same sky fraction $f_{\rm sky} \sim 0.02$
in all calculations, larger sky fractions also make
$S/N$ higher.
\begin{figure}
\begin{tabular}{cc}
\begin{minipage}{0.5\textwidth}
\begin{center}
\includegraphics[keepaspectratio=true,height=50mm]{sn-z11-1km-r.eps}
\end{center}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\begin{center}
\includegraphics[keepaspectratio=true,height=50mm]{sn-qso-1km-r.eps}
\end{center}
\end{minipage}
\end{tabular}
\caption{The $S/N$ ratio for the detection of the cross-correlation
signal at $z=11$ as a function of the observation time. The left
panel is for the ``stars'' model and the right panel is for the
``QSOs'' model. In both panels, the solid and dotted lines represent
$S/N$ for SKA and for the observation with a 5
times better sensitivity than that of SKA, respectively.
We set $f_{\rm sky} \sim 0.02$ in all plots.
}
\label{fig:snratio}
\end{figure}
\section{Conclusion}
We investigated the small scale cross-correlation between CMB anisotropies
and the 21~cm fluctuations during the EoR in harmonic space.
The CMB anisotropies at small scales are mainly caused by the kSZ effect which
is the second order fluctuation effect generated by the peculiar velocity and
the fluctuations of the visibility
function. We therefore calculated the cross-correlation with the second order
21~cm fluctuations.
The cross-correlation signal between kSZ and 21~cm fluctuations is negative on small scales.
This anti-correlation on small scales was found in the numerical
simulations of \citet{2005MNRAS.360.1063S} and
\citet{2010MNRAS.tmp...65J}.
We found that the position of the negative peak is
at the angular scale corresponding to
the typical size of an ionised bubble at the redshift probed by 21~cm fluctuation measurements.
This angular scale shifts to larger scales as ionised bubbles evolve.
The amplitude also increases with the reionisation process until
the average ionisation fraction reaches $\bar x_i \sim 0.9$.
The amplitude of the cross-correlation strongly depends on the typical bubble size.
The cross-correlation in the case of larger bubbles
has a higher amplitude than in the case of smaller bubbles,
even if in both cases the mean ionisation fractions are the same.
Moreover, the amplitude of the cross-correlation from large ionised bubbles
is comparable to that of the first order cross-correlation.
Those characteristic features of the cross-correlation could be used to distinguish
between different reionisation histories with future observations.
We also estimated the detectability of the small-scale cross-correlation by
the current design sensitivity of SKA. It is rather difficult
to detect the cross-correlation
signal even in rapid reionisation scenarios. However, if the sensitivity is
improved by a factor of 5, the detection or non-detection of the cross-correlation
signal will definitely provide information about the EoR.
\section*{Acknowledgments}
HT is supported by the Belgian Federal Office for Scientific,
Technical and Cultural Affairs through the Interuniversity Attraction Pole P6/11.
\section{Introduction}
Multipartite quantum systems play an important role in quantum mechanics and quantum information.
They are associated with deep quantum concepts like entanglement, correlations which are stronger than classical, violation of classical probabilistic inequalities, etc.
Various statistical quantities are used to describe these phenomena.
In this paper we combine ideas from Markov matrices, permutations with repetitions, and the Gini index in statistics, to develop novel tools for their study.
We first develop these ideas at a classical probabilistic level, and then pass to the quantum level.
Markov matrices play an important role in Markov chains \cite{M1,M2}, Artificial Intelligence, Engineering, etc.
A special case of Markov matrices are the doubly stochastic matrices which are intimately related to majorization\cite{MAJ} and have many applications.
An important theorem is the Birkhoff-von Neumann expansion of doubly stochastic matrices in terms of permutation matrices (without repetitions), which has many applications in Operational Research, in
Linear Programming, etc. In this paper we introduce analogous expansions for row Markov matrices, in terms of matrices related to permutations with repetitions of $d$ integers in ${\mathbb Z}_d$.
A nice physical interpretation of this mathematical formalism is given, in terms of sequences of integers that open random safes described by the Markov matrices.
Various statistical quantities like joint probabilities and correlations, are presented in this context.
Probability vectors multiplied by row Markov matrices on the right, are transformed into other probability vectors.
The sparsity of probability vectors is studied using Lorenz values and the Gini index\cite{Gini,Gini1,Gini2,Gini3,Gini4} which have been used extensively in Mathematical Economics.
We have recently used these concepts in the context of quantum physics\cite{V2,V3}.
In the context of random safes we introduce the Gini vector that describes the sparsity
in the local probability vector for each of the integers in the sequence that opens a random safe. We also introduce the total Gini index that describes the sparsity of the
joint probabilities for the sequence of integers that opens a random safe.
This formalism is applied to $d$-partite quantum systems where each component is described by a $d$-dimensional Hilbert space.
There is a natural link between such
quantum systems and permutations with repetitions of $d$ integers in ${\mathbb Z}_d$ (compare Eq.~(\ref{A7}) with Eqs.~(\ref{333a}) and (\ref{333})).
The present work enhances this link, and
the outcome is a plethora of statistical quantities that describe various aspects of multipartite quantum systems.
Local Fourier transforms are used to define locally dual statistical quantities.
Global Fourier transforms are used to define globally dual statistical quantities which
depend on off-diagonal elements that entangle (in general) the various components of the system.
In section 2 we define permutations with repetitions.
In section 3 we introduce Markov matrices and explain their role in describing the statistics of random permutations with repetitions.
In section 4, we introduce an expansion for row Markov matrices, in terms of matrices related to permutations with repetitions.
An important concept here is the scalar product of two row Markov matrices and its physical interpretation.
In section 5 we introduce the Lorenz values of probability vectors, and the related topic of majorization. In the present context majorization
provides an ordinal description of the sparsity in probability vectors.
In section 6 Lorenz values are used to define the Gini index as an indicator of the sparsity (certainty) of probability vectors.
Both the Gini index and the variance (standard deviation) are quantities that describe the variability in probability distributions.
Roughly speaking, the Gini index is the sum of the absolute values of the differences of the probabilities, while the variance
is the sum of the squares of the differences of the probabilities (remark \ref{rem1}).
In the context of random safes we introduce local and total Gini indices.
In section 7 we consider a quantum system with $d$-dimensional Hilbert space $H_d$, and variables in ${\mathbb Z}_d$.
We have shown in Ref.~\cite{V2} that
probability vectors related to positions and momenta (i.e., dual bases related through a Fourier transform) cannot both be very sparse,
and we quantified this using Gini indices. This is one way to express the uncertainty principle in this context.
Here we simply state this result without proof, because we generalise it later for multipartite systems.
In section 8 we consider $d$-partite quantum systems where each component is described by the $d$-dimensional Hilbert space $H_d$.
They can be viewed as quantum permutations with repetitions (or as quantum safes)
and they are extensions of their random counterparts, because they involve superpositions.
This point of view brings the statistical techniques related to
row Markov matrices, Gini indices, etc, into multipartite quantum systems.
It leads to many novel quantities that describe the statistics of quantum systems.
In quantum mechanics it is interesting to do statistics with respect to dual bases related through a Fourier transform, and study uncertainty relations that link them.
In multipartite systems we have local and global Fourier transforms which are used in sections 8,9, to define locally dual and also globally dual statistical quantities.
We conclude in section 10 with a discussion of our results.
\section{Generalisations of permutations}
\subsection{Permutations}
Let ${\mathbb Z}_d$ be the ring of integers modulo $d$, and $\pi$ a permutation of its elements:
\begin{eqnarray}
(0,1,...,d-1)\;\overset{\pi} \rightarrow\;(\pi(0),\pi(1),...,\pi(d-1)).
\end{eqnarray}
$\pi$ is a bijective map from ${\mathbb Z}_d$ to ${\mathbb Z}_d$, and is an element of the symmetric group ${\cal S}$ which has $d!$ elements.
Multiplication in this group is the composition:
\begin{eqnarray}
(0,1,...,d-1)\;\overset{\pi} \rightarrow\;(\pi(0),\pi(1),...,\pi(d-1))\overset{\wp} \rightarrow\;(\wp [\pi(0)],\wp[\pi(1)],...,\wp[\pi(d-1)]).
\end{eqnarray}
We will use the notation $(\wp \circ \pi)(i)=\wp[\pi(i)]$.
The unity is
\begin{eqnarray}
(0,1,...,d-1)\;\overset{\bf 1} \rightarrow\;(0,1,...,d-1).
\end{eqnarray}
A permutation matrix $M_\pi$ is a $d\times d$ matrix with elements
\begin{eqnarray}\label{PP}
M_\pi(i,j)=\delta(\pi(i),j);\;\;\;{\rm rank} (M_\pi)=d.
\end{eqnarray}
where $\delta$ is the Kronecker delta.
Each row and each column has one element equal to $1$ and the other $d-1$ elements equal to $0$.
Clearly
\begin{eqnarray}
M_\wp M_\pi=M_{\pi\circ \wp};\;\;\;M_{\bf 1}={\bf 1};\;\;\;M_{\pi^{-1}}=[M_\pi]^{-1}=[M_\pi]^T.
\end{eqnarray}
The matrices $M_\pi$ form a representation of the symmetric group ${\cal S}$\cite{SYM}.
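A small brute-force check of Eq.~(\ref{PP}) and the representation property, sketched in Python for $d=3$:

```python
import itertools
import numpy as np

# Build the permutation matrices M_pi(i,j) = delta(pi(i), j) for d = 3
# and check M_wp M_pi = M_{pi o wp} and that the transpose is the inverse.
d = 3

def perm_matrix(pi):
    M = np.zeros((d, d), dtype=int)
    for i in range(d):
        M[i, pi[i]] = 1
    return M

for pi in itertools.permutations(range(d)):
    for wp in itertools.permutations(range(d)):
        comp = tuple(pi[wp[i]] for i in range(d))   # (pi o wp)(i) = pi[wp(i)]
        assert np.array_equal(perm_matrix(wp) @ perm_matrix(pi),
                              perm_matrix(comp))
    M = perm_matrix(pi)
    assert np.array_equal(M @ M.T, np.eye(d, dtype=int))
```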
\subsection{Permutations with repetitions}
We enlarge the set of permutations by considering maps which might not be bijective.
This is equivalent to choosing $d$ integers from ${\mathbb Z}_d$, allowing repetitions.
Let ${\cal F}$ be the set of all functions
\begin{eqnarray}
f:\;(0,...,d-1)\;\rightarrow\;(f(0),...,f(d-1));\;\;\;f(i)\in{\mathbb Z}_d,
\end{eqnarray}
from ${\mathbb Z}_d$ into itself.
As an example of such a function $f$, consider a safe that opens with a sequence of $d$ integers in ${\mathbb Z}_d$ whose $i$-th entry is $f(i)$.
This example is discussed further below.
There are $d^d$ functions in ${\cal F}$ (because $0$ can be mapped to any of the $d$ elements in ${\mathbb Z}_d$, $1$ can be mapped to any of the $d$ elements in ${\mathbb Z}_d$, etc).
From them $d!$ are permutations.
The set ${\cal F}$ with the composition
\begin{eqnarray}
(f\circ g) (i)=f[g(i)]
\end{eqnarray}
as multiplication, is a semigroup (because the inverse does not exist in general).
In terms of the example with a safe, the composition represents a change in the sequence that opens the safe.
The function $f\in{\cal F}$ can be represented with the matrix $M_ f$ with elements
\begin{eqnarray}\label{6}
M_f(i,j)=\delta(f(i),j);\;\;\;{\rm rank} (M_f)\le d.
\end{eqnarray}
This is a generalisation of Eq.(\ref{PP}), and includes functions which might not be permutations.
Each row has one element equal to $1$ and the other $d-1$ elements equal to $0$.
But a column might have many elements equal to one.
There are $d^d$ matrices $M_f$ and we refer to them as permutation with repetition matrices. $d!$ of them are permutation matrices.
We note that
\begin{eqnarray}
M_gM_f=M_{f\circ g},
\end{eqnarray}
where $f\circ g$ is the composition of the two functions.
Therefore the set of these matrices with matrix multiplication, is a semigroup isomorphic to ${\cal F}$, and we denote it as ${\cal F}$.
We also consider the transposes of these matrices (which have the $(j,i)$ element equal to $\delta(f(i),j)$); they form another semigroup isomorphic to ${\cal F}$.
Here each column has one element equal to $1$ and the other $d-1$ elements equal to $0$.
But a row might have many elements equal to one.
\begin{lemma}\label{LL}
\begin{eqnarray}
\prod _{i=0}^{d-1}[M_fM_g^T](i,i)=\delta(f,g);\;\;\;M_f, M_g \in {\cal F}.
\end{eqnarray}
\end{lemma}
\begin{proof}
The $(i,i)$-element of the matrix $M_fM_g^T$ is
\begin{eqnarray}
[M_fM_g^T](i,i)=\sum _j\delta(f(i),j)\delta(g(i),j)
\end{eqnarray}
If $f\ne g$ at least one of these $(i,i)$-elements is $0$, and then their product is $0$.
Only in the case $f=g$ all $(i,i)$-elements are $1$, and then their product is $1$.
\end{proof}
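Both the semigroup law for the matrices of Eq.~(\ref{6}) and lemma \ref{LL} can be verified by brute force for small $d$; a sketch for $d=2$:

```python
import itertools
import numpy as np

# The d^d = 4 matrices M_f(i,j) = delta(f(i), j) for d = 2,
# the law M_g M_f = M_{f o g}, and prod_i [M_f M_g^T](i,i) = delta(f, g).
d = 2

def rep_matrix(f):
    M = np.zeros((d, d), dtype=int)
    for i in range(d):
        M[i, f[i]] = 1
    return M

funcs = list(itertools.product(range(d), repeat=d))    # all f: Z_d -> Z_d
for f in funcs:
    for g in funcs:
        comp = tuple(f[g[i]] for i in range(d))        # (f o g)(i) = f[g(i)]
        assert np.array_equal(rep_matrix(g) @ rep_matrix(f), rep_matrix(comp))
        diag_prod = np.prod(np.diag(rep_matrix(f) @ rep_matrix(g).T))
        assert diag_prod == (1 if f == g else 0)
```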
Below we will see that when the matrices $M_f$ (and the row Markov matrices which are more general) act on the right of probability vectors (written as rows) we get
other probability vectors.
\section{Random permutations with repetitions: random safes}
\subsection{Markov matrices }
A $d\times d$ matrix $q$ is called row Markov matrix if
\begin{eqnarray}\label{A7}
\sum _{j=0}^{d-1}q(i,j)=1;\;\;\;q(i,j)\in [0,1];\;\;\;i,j=0,...,d-1.
\end{eqnarray}
The elements of each row are probabilities and we
interpret them in terms of a random safe as follows.
We consider a `large' ensemble of safes each of which opens with a sequence of $d$ integers which take values in ${\mathbb Z}_d$.
Let $q(i,j)$ be the probability that the number in the $i$-position of the opening sequence, is $j$.
The matrix $q$ is a row Markov matrix, because the sum of probabilities that in the $i$-position is some number in ${\mathbb Z}_d$, is $1$.
We call the index $i$ `position index', and the index $j$ `number index'.
We denote as ${\cal M}$ the set of row Markov matrices.
We note that
\begin{itemize}
\item
The product of two row Markov matrices is a row Markov matrix.
\item
The inverse of a row Markov matrix might not exist, or if it exists it might not be a row Markov matrix.
\item
${\cal M}$ is a semigroup with respect to matrix multiplication.
\item
If $q_1, q_2$ are row Markov matrices, then $\lambda q_1+(1-\lambda) q_2$, where $0\le \lambda \le 1$, is a row Markov matrix.
The semigroup ${\cal M}$ of all $d\times d$ row Markov matrices is a convex polytope with the $d^d$ matrices $M_f$ (in the semigroup ${\cal F}$) as vertices.
This polytope is studied and used in this paper.
\item
A special case of row Markov matrices are the doubly stochastic matrices for which the following relation also holds:
\begin{eqnarray}
\sum _{i=0}^{d-1}q(i,j)=1.
\end{eqnarray}
They are intimately related to majorization\cite{MAJ}, which is a preorder that has been used in various areas, including quantum physics\cite{MA0,MA1,MA2,MA3}.
\item
The matrix ${\mathfrak U}$ with all elements
\begin{eqnarray}\label{UUU}
{\mathfrak U}_{ij}=\frac{1}{d};\;\;\;{\rm rank}({\mathfrak U})=1,
\end{eqnarray}
is a doubly stochastic matrix.
For any row Markov matrix $q$ we get
\begin{eqnarray}
q{\mathfrak U}={\mathfrak U},
\end{eqnarray}
${\mathfrak U}q$ is a row Markov matrix with all rows equal to each other, but in general ${\mathfrak U}q\ne{\mathfrak U}$.
\end{itemize}
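The listed properties are easy to check numerically on random row Markov matrices; a minimal sketch:

```python
import numpy as np

# Check the listed properties on random row Markov matrices.
rng = np.random.default_rng(0)
d = 4

def random_row_markov():
    q = rng.random((d, d))
    return q / q.sum(axis=1, keepdims=True)   # normalise each row to sum to 1

q1, q2 = random_row_markov(), random_row_markov()
U = np.full((d, d), 1.0 / d)                  # the matrix U with all elements 1/d

assert np.allclose((q1 @ q2).sum(axis=1), 1)            # product is row Markov
lam = 0.3
assert np.allclose((lam * q1 + (1 - lam) * q2).sum(axis=1), 1)  # convexity
assert np.allclose(q1 @ U, U)                           # q U = U
assert np.allclose((U @ q1).sum(axis=1), 1)             # U q is row Markov
```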
Let ${\cal P}$ be the set of probability vectors written as rows
\begin{eqnarray}\label{gt}
{\bf x}=(x(0),...,x(d-1));\;\;\;\sum _i x(i)=1;\;\;\;x(i)\ge 0.
\end{eqnarray}
If $q$ is a row Markov matrix then
${\bf x}q$ is a probability vector, but $q{\bf x}^T$ might not be a probability vector.
Examples of probability vectors are the `most uncertain probability vector' ${\bf u}$, and the `certain probability vectors' ${\bf c}M_\pi$, where
\begin{eqnarray}\label{3D}
{\bf u}=\frac{1}{d}(1,...,1);\;\;\;{\bf c}=(1,0...,0).
\end{eqnarray}
A probability vector is called sparse (`almost certain') if most of its elements are zero or almost zero.
Roughly speaking, sparse probability vectors are at the opposite end of the uncertain probability vector ${\bf u}$.
Below we make this precise with the preorder majorization, and also with the Gini index.
The Birkhoff-von Neumann theorem states that every doubly stochastic matrix ${\mathfrak D}$ can be expanded in terms of the $d!$ permutation matrices as
\begin{eqnarray}\label{hh}
{\mathfrak D}=\sum _{\pi}\lambda _{\mathfrak D}({\pi})M_{\pi};\;\;\;\sum _{\pi} \lambda _{\mathfrak D}({\pi})=1;\;\;\; \lambda _{\mathfrak D}({\pi})\ge 0;\;\;\;\pi \in {\cal S}.
\end{eqnarray}
Here $\lambda _{\mathfrak D}({\pi})$ are probabilities.
So a doubly stochastic matrix can be viewed as a random permutation (without repetitions).
In this paper we introduce expansions analogous to this for row Markov matrices, that involve permutations with repetitions.
They can be viewed as random permutations with repetitions.
Later we go one step further, and introduce quantum permutations with repetitions, that involve superpositions.
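Read in reverse, Eq.~(\ref{hh}) says that any convex combination of permutation matrices is doubly stochastic, which is immediate to verify numerically:

```python
import itertools
import numpy as np

# A random convex combination of the d! permutation matrices is doubly stochastic.
d = 3
rng = np.random.default_rng(1)

perms = list(itertools.permutations(range(d)))
lam = rng.random(len(perms))
lam /= lam.sum()                                   # probabilities over S

D = np.zeros((d, d))
for w, pi in zip(lam, perms):
    M = np.zeros((d, d))
    M[np.arange(d), np.array(pi)] = 1              # M_pi(i,j) = delta(pi(i), j)
    D += w * M

assert np.allclose(D.sum(axis=0), 1) and np.allclose(D.sum(axis=1), 1)
```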
\section{Expansions for row Markov matrices and their interpretation in terms of random safes}
In this section we introduce expansions for row Markov matrices, in terms of matrices related to permutations with repetitions of $d$ integers in ${\mathbb Z}_d$.
These expansions are interpreted in terms of the sequences of integers that open random safes described by the Markov matrices.
The expansion is not unique, and it depends on the correlations between the integers in the sequence that opens the random safe.
We first assume absence of correlations and then joint probabilities are equal to products of probabilities.
We show that the coefficients in the expansion are scalar products of row Markov matrices, a concept which is defined and plays an important role in this paper.
Later we consider the case of correlations.
The Markov matrix $q(i,j)$ describes the probability that in the position $i$ of the sequence that opens the random safe, is the integer $j$.
It does not give any information about joint probabilities and correlations.
We can have two different ensembles described by the same Markov matrix $q(i,j)$, one without any correlations between the integers in the sequence that opens the random safe, and the other with correlations.
In this case we have two different expansions of the same row Markov matrix.
\subsection{Expansion for row Markov matrices in the absence of correlations}\label{uncor}
In this subsection we assume independence (lack of correlations) between the integers in the various positions of the sequences that open the safes.
We consider two ensembles of safes (which are independent of each other), described by the row Markov matrices $q$, $p$.
We take randomly one safe from each ensemble, and then
the product $q(i,j)p(i,j)$ is the joint probability that in both safes the number in the $i$-position of the opening sequence is $j$.
Summation over the `number index' $j$, and multiplication over the `position index' $i$ gives
the probability that the safe from the first ensemble
has the same opening sequence as the safe from the second ensemble.
This motivates the following definition of the scalar product.
\begin{definition}
The scalar product of two row Markov matrices $q,p$ is the product of the diagonal elements of the matrix $qp^T$:
\begin{eqnarray}\label{17}
(q,p)=\prod _{i=0}^{d-1}\left[\sum_{j=0}^{d-1}q(i,j)p(i,j)\right];\;\;\;q,p\in{\cal M}.
\end{eqnarray}
\end{definition}
We note that in many cases a scalar product is a bilinear function, but this is not the case here. Also
\begin{eqnarray}
\prod _{i=0}^{d-1}\left[\sum_{j=0}^{d-1}q(i,j)p(i,j)\right]\ne \sum _{i=0}^{d-1}\left[\prod_{j=0}^{d-1}q(i,j)p(i,j)\right].
\end{eqnarray}
The first expression has $d^d$ products, while the second one has only $d$ products.
It is easily seen that
\begin{eqnarray}
(q,p)=(p,q);\;\;\;(q, {\mathfrak U})=\frac{1}{d^d};\;\;\;(q,{\bf 1})=\prod _{i=0}^{d-1}q(i,i).
\end{eqnarray}
Also
\begin{eqnarray}\label{46}
(q,q)=\prod _{i=0}^{d-1}\left[\sum_{j=0}^{d-1}q(i,j)^2\right].
\end{eqnarray}
If we take two safes from an ensemble described by the row Markov matrix $q$, then
$(q,q)$ is the probability that they will have the same opening sequence.
If $q(i,j)=\delta(j,n_i)$ (in which case the sequence that opens all safes is $(n_0,...,n_{d-1})$), then $(q,q)=1$.
$(q,q)$ takes small values when the distributions $q(i,j)$ are close to the uniform distribution, for all $i$.
In particular $({\mathfrak U}, {\mathfrak U})=\frac{1}{d^d}$.
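A sketch of the scalar product of Eq.~(\ref{17}) and the special values listed above:

```python
import numpy as np

# Scalar product (q, p) = prod_i [q p^T](i, i) of two row Markov matrices.
def scalar(q, p):
    return float(np.prod(np.diag(q @ p.T)))

d = 3
U = np.full((d, d), 1.0 / d)
I = np.eye(d)

assert np.isclose(scalar(U, U), 1.0 / d**d)       # (U, U) = 1/d^d
assert np.isclose(scalar(I, I), 1.0)              # deterministic safes: (q, q) = 1

rng = np.random.default_rng(2)
q = rng.random((d, d))
q /= q.sum(axis=1, keepdims=True)                 # a random row Markov matrix
assert np.isclose(scalar(q, U), 1.0 / d**d)       # (q, U) = 1/d^d
assert 0.0 <= scalar(q, q) <= 1.0
assert np.isclose(scalar(q, I), np.prod(np.diag(q)))   # (q, 1) = prod_i q(i, i)
```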
We can write the result in lemma \ref{LL} as
\begin{eqnarray}\label{RRR}
(M_f,M_g)=\delta(f,g);\;\;\;f,g\in {\cal F}.
\end{eqnarray}
\begin{lemma}
For row Markov matrices the following relations hold:
\begin{itemize}
\item[(1)]
\begin{eqnarray}
0\le (q,p)\le 1.
\end{eqnarray}
\item[(2)]
\begin{eqnarray}\label{21}
(\lambda q_1+(1-\lambda)q_2, p)\ge\lambda ^d(q_1, p)+(1-\lambda )^d(q_2, p);\;\;\;0\le \lambda \le 1.
\end{eqnarray}
\end{itemize}
\end{lemma}
\begin{proof}
\begin{itemize}
\item[(1)]
We get
\begin{eqnarray}
\prod _{i=0}^{d-1}\left [\sum_{j=0}^{d-1}q(i,j)p(i,j)\right ]\le \prod _{i=0}^{d-1}\left [\frac{1}{2}\sum_{j=0}^{d-1}\left (q(i,j)^2+p(i,j)^2\right )\right ]\le 1
\end{eqnarray}
We used here the fact that
\begin{eqnarray}
\sum_{j=0}^{d-1}q(i,j)=1\;\;\rightarrow\;\;\sum_{j=0}^{d-1}q(i,j)^2\le 1,
\end{eqnarray}
and similarly for $p(i,j)$.
\item[(2)]
We get
\begin{eqnarray}
[\lambda q_1+(1-\lambda)q_2] p^T=\lambda (q_1p^T)+(1-\lambda )(q_2p^T).
\end{eqnarray}
Taking the product of the diagonal elements of the matrix on the left-hand side, we obtain
the products of the diagonal elements of the two matrices on the right-hand side, plus non-negative `cross terms' that contain a factor $\lambda ^n(1-\lambda)^{d-n}$
times $n$ $q_1$-variables and $d-n$ $q_2$-variables.
This proves the inequality.
\end{itemize}
\end{proof}
\begin{lemma}\label{L25}
If $q$ is a $d\times d$ matrix and $f\in {\cal F}$, then
\begin{eqnarray}\label{gh0}
\sum _f\prod _{i=0}^{d-1} q(i,f(i))=\prod _{i=0}^{d-1} \left [\sum _{j=0}^{d-1} q(i,j)\right ].
\end{eqnarray}
\end{lemma}
\begin{proof}
We first point out that each side of this equality contains the same number of terms, namely $d^d$.
For each term $q(0,f(0))...q(d-1,f(d-1))$ on the left hand side, we find the same term on the right hand side written as $q(0,j_0)...q(d-1,j_{d-1})$
with $j_0=f(0),...,j_{d-1}=f(d-1)$.
The fact that $f$ is a function (as opposed to a general binary relation) implies that there is only one term in the right hand side equal to $q(0,f(0))...q(d-1,f(d-1))$.
\end{proof}
If $q$ is a row Markov matrix and $f\in {\cal F}$, we define the product of probabilities
\begin{eqnarray}\label{gh1}
{\mathfrak M}_q(f)=(q,M_f)=\prod _{i=0}^{d-1} q(i,f(i)).
\end{eqnarray}
In the absence of correlations, ${\mathfrak M}_q(f)$ is the joint probability that a random safe will open with the sequence $(f(0),...,f(d-1))$.
Indeed using lemma \ref{L25} and the fact that $q$ is here a row Markov matrix we prove that
\begin{eqnarray}\label{pro}
\sum _{f\in {\cal F}}{\mathfrak M}_q(f)=1.
\end{eqnarray}
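Eq.(\ref{pro}) can be checked directly by brute force over all $d^d$ functions (a Python sketch; the matrix below is an arbitrary row Markov example):

```python
from itertools import product

d = 3
q = [[0.2, 0.5, 0.3],
     [0.1, 0.1, 0.8],
     [0.4, 0.4, 0.2]]   # arbitrary row Markov matrix: each row sums to 1

# sum over all d^d functions f of M_q(f) = prod_i q(i, f(i))
total = 0.0
for f in product(range(d), repeat=d):
    term = 1.0
    for i in range(d):
        term *= q[i][f[i]]
    total += term

# the M_q(f) form a probability distribution over the opening sequences
assert abs(total - 1.0) < 1e-12
```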
The following proposition discusses an expansion of any row Markov matrix $q$ in terms of the matrices $M_f$ with the ${\mathfrak M}_q(f)$ as coefficients.
\begin{proposition}
\begin{itemize}
\item[(1)]
A row Markov matrix $q$ can be expanded in terms of the permutation with repetition matrices $M_f$, as
\begin{eqnarray}\label{gh}
q=\sum _{f\in {\cal F}}{\mathfrak M}_q(f)M_f=\sum _{f\in {\cal F}}(q,M_f)M_f.
\end{eqnarray}
\item[(2)]
For the matrix ${\mathfrak U}$ all the ${\mathfrak M}_{\mathfrak U}(f)$ are equal to each other, and the expansion in Eq.(\ref{gh}) gives
\begin{eqnarray}
{\mathfrak U}=\frac{1}{d^d}\sum _{f \in {\cal F}}M_f.
\end{eqnarray}
\item[(3)]
For $q=M_g$ we get ${\mathfrak M}_q(f)=\delta (f,g)$.
In this case the expansion in Eq.(\ref{gh}) has only one term.
\end{itemize}
\end{proposition}
\begin{proof}
\begin{itemize}
\item[(1)]
The proof is similar to the proof of lemma \ref{L25}.
Using ${\mathfrak M}_q(f)=q(0,f(0))...q(d-1,f(d-1))$ we calculate the $(i,j)$ element of the matrix in the right hand side of Eq.(\ref{gh}):
\begin{eqnarray}
&&\sum _{f\in {\cal F}} \left [q(0,f(0))...q(d-1,f(d-1))\right ]\delta(f(i),j)\nonumber\\&&=
\left (\sum _k q(0,k)\right )...\left (\sum _k q(i-1,k)\right )q(i,j)\left (\sum _k q(i+1,k)\right )...\left (\sum _k q(d-1,k)\right )=q(i,j).
\end{eqnarray}
Here we have the sum over all functions such that $f(i)=j$. So for all $k\ne i$ the $f(k)$ takes all possible values.
This proves Eq.(\ref{gh}).
\item[(2)]
In the special case that $q={\mathfrak U}$ we easily see that
\begin{eqnarray}
{\mathfrak M}_{\mathfrak U}({f})=\frac{1}{d^d}.
\end{eqnarray}
\item[(3)]
The proof of this is straightforward.
\end{itemize}
\end{proof}
The expansion in Eq.(\ref{gh}) says that in the ensemble of safes with probabilities described by the matrix $q$,
the probability that the opening sequence is $(f(0),...,f(d-1))$
(i.e., the percentage of safes with opening sequence $(f(0),...,f(d-1))$) is ${\mathfrak M}_q(f)$.
Special cases are:
\begin{itemize}
\item
If $q=M_f$, then all safes have the same opening sequence $(f(0),...,f(d-1))$.
\item
If $q={\mathfrak U}$, all opening sequences are equally probable (with probability $\frac{1}{d^d}$).
\end{itemize}
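The expansion in Eq.(\ref{gh}) is easy to verify numerically: accumulating ${\mathfrak M}_q(f)M_f$ over all $d^d$ functions must reproduce $q$ (a Python sketch; the matrix is arbitrary test data):

```python
from itertools import product

d = 3
q = [[0.2, 0.5, 0.3],
     [0.1, 0.1, 0.8],
     [0.4, 0.4, 0.2]]   # arbitrary row Markov matrix

recon = [[0.0] * d for _ in range(d)]
for f in product(range(d), repeat=d):
    w = 1.0
    for i in range(d):
        w *= q[i][f[i]]          # w = M_q(f)
    for i in range(d):
        recon[i][f[i]] += w      # add w * M_f (a single 1 in each row)

assert all(abs(recon[i][j] - q[i][j]) < 1e-12
           for i in range(d) for j in range(d))
```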
\begin{cor}
For row Markov matrices:
\begin{itemize}
\item[(1)]
\begin{eqnarray}\label{bb}
{\mathfrak M}_{\lambda q_1+(1-\lambda)q_2}(f)\ge \lambda ^d{{\mathfrak M}_{q_1}}(f)+(1-\lambda)^d{{\mathfrak M}_{q_2}}(f)
\end{eqnarray}
\item[(2)]
If
\begin{eqnarray}
q=\sum _{f\in {\cal F}}{\mathfrak M}_q(f)M_f;\;\;\;p=\sum _{f\in {\cal F}}{\mathfrak M}_p(f)M_f
\end{eqnarray}
then
\begin{eqnarray}\label{579}
(q, p)=\sum _{f\in {\cal F}}{\mathfrak M}_q(f){\mathfrak M}_p(f).
\end{eqnarray}
\end{itemize}
\end{cor}
\begin{proof}
\begin{itemize}
\item[(1)]
We insert $p=M_f$ in Eq.(\ref{21}), and use Eq.(\ref{gh1}).
\item[(2)]
Using lemma \ref{L25} we get:
\begin{eqnarray}
(q,p)=\prod _{i=0}^{d-1}\left[\sum_{j=0}^{d-1}q(i,j)p(i,j)\right]=\sum _f\prod _i q(i,f(i))p(i,f(i))=\sum _{f}{\mathfrak M}_q(f){\mathfrak M}_p(f)
\end{eqnarray}
\end{itemize}
\end{proof}
The $\lambda q_1+(1-\lambda)q_2$ with $0\le \lambda \le 1$,
describes the merger of an ensemble described by $q_1$ which has $\lambda N$ safes, with
an ensemble described by $q_2$ which has $(1-\lambda) N$ safes.
In this case the inequality in Eq.(\ref{bb}) holds.
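Part (2) of the corollary, Eq.(\ref{579}), admits the same kind of brute-force check (a Python sketch with random row Markov matrices as test data):

```python
from itertools import product
import random

random.seed(2)
d = 3

def rand_row_markov():
    """Random d x d row Markov matrix."""
    rows = [[random.random() for _ in range(d)] for _ in range(d)]
    return [[v / sum(row) for v in row] for row in rows]

def inner(q, p):
    """(q, p) = prod_i sum_j q(i,j) * p(i,j)."""
    out = 1.0
    for i in range(d):
        out *= sum(q[i][j] * p[i][j] for j in range(d))
    return out

def joint(q, f):
    """M_q(f) = prod_i q(i, f(i))."""
    out = 1.0
    for i in range(d):
        out *= q[i][f[i]]
    return out

q, p = rand_row_markov(), rand_row_markov()
rhs = sum(joint(q, f) * joint(p, f) for f in product(range(d), repeat=d))
assert abs(inner(q, p) - rhs) < 1e-12   # Eq. (579)
```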
\subsection{Expansions for row Markov matrices in the presence of correlations}
We consider an ensemble of many safes with correlations between the integers in the various positions of the sequences that open the safes, i.e., lack of independence.
Let $q(f)=q[f(0),...,f(d-1)]$ be $d^d$ joint probabilities (labelled with the functions $f\in {\cal F}$).
In terms of a random safe, they are the joint probabilities that the number in the $0$-position of the opening sequence is $f(0)$,
the number in the $1$-position is $f(1)$, etc.
We call them Markov tensors and
\begin{eqnarray}\label{42}
\sum _{f\in {\cal F}}q(f)=1.
\end{eqnarray}
The matrix
\begin{eqnarray}\label{ghd}
q=\sum _{f\in {\cal F}}q(f)M_f,
\end{eqnarray}
is a row Markov matrix. Indeed
\begin{eqnarray}
\sum _jq(i,j)=\sum _j\sum _{f\in {\cal F}}q(f)M_f(i,j)=\sum _{f\in {\cal F}}q(f)=1.
\end{eqnarray}
Eq.(\ref{ghd}) provides an expansion of $q$ in terms of $M_f$ with the joint probabilities $q(f)$ as coefficients.
Eq.(\ref{gh}) provides an alternative expansion of the same row Markov matrix $q$ in terms of $M_f$ with the products of probabilities ${\mathfrak M}_q(f)$ as coefficients,
and in general $q(f)\ne {\mathfrak M}_q(f)$.
This is because in the presence of correlations joint probabilities are not equal to the product of probabilities.
So the non-uniqueness of the expansion is related to various types of correlations.
Two ensembles might have different Markov tensors $q(f)$ (i.e. different correlations) but the same Markov matrix $q(i,j)$, and then we get two different expansions of the same Markov matrix.
The $d^d$ quantities
\begin{eqnarray}\label{COR}
{\cal C}_q(f)=q(f)-{\mathfrak M}_q(f)=q(f)-\prod _i q(i,f(i));\;\;\;-1\le {\cal C}_q(f)\le 1;\;\;\;\sum _{f\in {\cal F}}{\cal C}_q(f)=0,
\end{eqnarray}
is one possible way of quantifying correlations, and we call them correlation coefficients.
In the case of independence (absence of correlations) we get ${\cal C}_q(f)=0$ for all $f$.
\begin{example}\label{ex23}
We consider the following $3\times 3$ row Markov matrix
\begin{eqnarray}\label{AB1}
q=\begin{pmatrix}
a&1-a&0\\
0&a&1-a\\
0&1-b&b\\
\end{pmatrix}.
\end{eqnarray}
Assuming absence of correlations, we present in table \ref{t1} the probabilities ${\mathfrak M}_q(f)$ and the corresponding permutation with repetition matrices $M_f$ for the expansion in Eq.(\ref{gh}).
In general there are $d^d=27$ terms, but in this example only $8$ of them have non-zero probability.
In the interpretation in terms of safes, the opening sequence consists of $3$ integers which take one of the values $0,1,2$.
The probabilities that the first integer is $0,1,2$ are $a, 1-a, 0$ correspondingly, etc.
We assume independence between the integers in the various positions of the opening sequence.
Then the joint probability that the opening sequence is $(0,1,1)$ is $a^2(1-b)$,
the joint probability that the opening sequence is $(0,1,2)$ is $a^2b$, etc.
Using Eq.(\ref{46}) we also calculated the
\begin{eqnarray}
(q,q)=(2a^2-2a+1)^2(2b^2-2b+1).
\end{eqnarray}
This is the probability that two safes will have the same opening sequence of integers.
\end{example}
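For this example the closed form of $(q,q)$ can be confirmed numerically for any $a,b$ (a Python sketch with random test values):

```python
import random

random.seed(0)
for _ in range(100):
    a, b = random.random(), random.random()
    q = [[a, 1 - a, 0.0],
         [0.0, a, 1 - a],
         [0.0, 1 - b, b]]
    qq = 1.0
    for row in q:
        qq *= sum(v * v for v in row)     # (q, q) = prod_i sum_j q(i,j)^2
    closed_form = (2*a*a - 2*a + 1) ** 2 * (2*b*b - 2*b + 1)
    assert abs(qq - closed_form) < 1e-12
```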
\begin{example}\label{ex24}
We consider again the row Markov matrix in Eq.(\ref{AB1}) and for later use we assume that
\begin{eqnarray}\label{AB11}
0\le 2a\le b\le \frac{1}{2}.
\end{eqnarray}
A different expansion for this Markov matrix is
\begin{eqnarray}\label{kk}
q=a\begin{pmatrix}
1&0&0\\
0&1&0\\
0&0&1\\
\end{pmatrix}+(b-a)
\begin{pmatrix}
0&1&0\\
0&0&1\\
0&0&1\\
\end{pmatrix}+(1-b)
\begin{pmatrix}
0&1&0\\
0&0&1\\
0&1&0\\
\end{pmatrix}.
\end{eqnarray}
The corresponding joint probabilities $q(f)$ and correlation coefficients ${\cal C}_q(f)=q(f)-{\mathfrak M}_q(f)$ are shown in table \ref{t1}.
Here we have correlations between the integers in the various positions of the opening sequence, and the joint probability is not equal to the product of probabilities.
In an ensemble of these safes, the percentage of safes with opening sequence $(0,1,2)$ is $a$,
the percentage of safes with opening sequence $(1,2,2)$ is $b-a$, and the percentage of safes with opening sequence $(1,2,1)$ is $1-b$.
\end{example}
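The correlated expansion in Eq.(\ref{kk}) can be checked by rebuilding $q$ from the three opening sequences and their joint probabilities (a Python sketch; the values of $a,b$ are arbitrary test data satisfying Eq.(\ref{AB11})):

```python
a, b = 0.1, 0.3          # satisfies 0 <= 2a <= b <= 1/2
q = [[a, 1 - a, 0.0],
     [0.0, a, 1 - a],
     [0.0, 1 - b, b]]

# joint probabilities q(f) of the three opening sequences in Eq. (kk)
terms = {(0, 1, 2): a, (1, 2, 2): b - a, (1, 2, 1): 1 - b}

recon = [[0.0] * 3 for _ in range(3)]
for f, w in terms.items():
    for i in range(3):
        recon[i][f[i]] += w   # add w * M_f

assert all(abs(recon[i][j] - q[i][j]) < 1e-12
           for i in range(3) for j in range(3))
```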
\section{Lorenz values of probability vectors}
Lorenz values and the Gini index\cite{Gini,Gini1,Gini2,Gini3} are statistical quantities which have been used extensively in Mathematical Economics for the study of inequality in wealth distribution.
In our context they are used to quantify the sparsity (certainty) in a probability vector.
\subsection{Lorenz values of probability vectors}\label{majo}
Let $\pi$ be the permutation that orders the probabilities of a probability vector ${\bf x}$, in ascending order:
\begin{eqnarray}\label{pp}
x[\pi(0)]\le x[\pi(1)]\le ...\le x[\pi(d-1)].
\end{eqnarray}
We sometimes use the notation $\pi_{\bf x}$ for this permutation.
The Lorenz values of this probability vector are defined as
\begin{eqnarray}
{\cal L}(\ell;{\bf x})=x[\pi(0)]+...+ x[\pi(\ell)];\;\;\;{\cal L}(d-1;{\bf x})=1
\end{eqnarray}
where $\ell=0,...,d-1$.
The Lorenz values are cumulative probabilities with respect to the order $\pi$ in Eq.(\ref{pp}).
The Lorenz values for the `most uncertain probability vector' ${\bf u}$ and the `certain probability vector' ${\bf c}$, are
\begin{eqnarray}
{\cal L}(\ell;{\bf u})=\frac{\ell+1}{d};\;\;\;{\cal L}(\ell;{\bf c})=\delta(\ell, d-1),
\end{eqnarray}
where $\delta$ is the Kronecker delta.
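Lorenz values are simply cumulative sums after sorting in ascending order; a minimal Python sketch, which also samples the bound $0\le {\cal L}(\ell;{\bf x})\le \frac{\ell+1}{d}$ proved in proposition \ref{L1} below:

```python
import random

random.seed(7)

def lorenz(x):
    """Lorenz values: cumulative probabilities in ascending order."""
    out, acc = [], 0.0
    for v in sorted(x):
        acc += v
        out.append(acc)
    return out

d = 4
u = [1.0 / d] * d                 # most uncertain vector u
c = [0.0] * (d - 1) + [1.0]       # certain vector c
assert lorenz(u) == [0.25, 0.5, 0.75, 1.0]   # (l+1)/d
assert lorenz(c) == [0.0, 0.0, 0.0, 1.0]     # delta(l, d-1)

# bound 0 <= L(l; x) <= (l+1)/d for random probability vectors
for _ in range(200):
    v = [random.random() for _ in range(d)]
    t = sum(v)
    x = [w / t for w in v]
    for ell, L in enumerate(lorenz(x)):
        assert 0.0 <= L <= (ell + 1) / d + 1e-12
```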
It is easily seen that for any permutation matrix $M_{\pi}$
\begin{eqnarray}\label{hh1}
{\cal L}(\ell;{\bf x}M_\pi)={\cal L}(\ell;{\bf x})
\end{eqnarray}
But for a general permutation with repetition matrix $M_f$ the ${\cal L}(\ell;{\bf x}M_f)$ might be different from ${\cal L}(\ell;{\bf x})$.
\begin{proposition}\label{L1}
\begin{eqnarray}\label{24}
0\le {\cal L}(\ell;{\bf x})\le \frac{\ell+1}{d}.
\end{eqnarray}
\end{proposition}
\begin{proof}
If $x[\pi(\ell)]\le \frac{1}{d}$, then since $x[\pi(k)]\le x[\pi(\ell)]$ for $k<\ell$ we prove easily Eq.(\ref{24}) for this case.
If $x[\pi(\ell)]> \frac{1}{d}$, then since $x[\pi(k)]\ge x[\pi(\ell)]$ for $k>\ell$, we get
\begin{eqnarray}
1=x[\pi(0)]+...+x[\pi(\ell)]+...+x[\pi(d-1)]\ge {\cal L}(\ell;{\bf x})+(d-1-\ell)x[\pi(\ell)]\ge {\cal L}(\ell;{\bf x})+\frac{d-1-\ell}{d}
\end{eqnarray}
From this follows Eq.(\ref{24}) for this case.
\end{proof}
\begin{definition}
Two probability vectors ${\bf x}, {\bf y}$ are comonotonic if $\pi_{\bf x}=\pi_{\bf y}$.
In this case the probability vector $\lambda {\bf x}+(1-\lambda) {\bf y}$ where $0\le \lambda \le 1$, also has the same permutation of ordering of its elements.
\end{definition}
\begin{proposition}\label{pro78}
\mbox{}
\begin{itemize}
\item[(1)]
If ${\bf x}, {\bf y}$ are probability vectors, then for all $\ell$
\begin{eqnarray}\label{hh2}
{\cal L}[\ell;\lambda {\bf x}+(1-\lambda){\bf y}]\ge \lambda {\cal L}(\ell;{\bf x})+(1-\lambda){\cal L}(\ell;{\bf y});\;\;\;0\le \lambda\le 1.
\end{eqnarray}
For comonotonic vectors ${\bf x}, {\bf y}$, this becomes equality.
\item[(2)]
If ${\mathfrak D}$ is a doubly stochastic matrix, then ${\cal L}[\ell;{\bf x}{\mathfrak D}]\ge{\cal L}[\ell;{\bf x}]$, for all $\ell$.
This is not true for row Markov matrices, in general.
\end{itemize}
\end{proposition}
\begin{proof}
\mbox{}
\begin{itemize}
\item[(1)]
We consider the probability vector
\begin{eqnarray}
{\bf z}=\lambda {\bf x}+(1-\lambda){\bf y}.
\end{eqnarray}
Then
\begin{eqnarray}\label{V1}
{\cal L}(\ell;{\bf z})&=&z[\pi_z(0)]+...+ z[\pi_z(\ell)]\nonumber\\&=&\{\lambda x[\pi_z(0)]+(1-\lambda) y[\pi_z(0)]\}+...+
\{\lambda x[\pi_z(\ell)]+(1-\lambda) y[\pi_z(\ell)]\}
\end{eqnarray}
$\pi_z$ is the ordering permutation for ${\bf z}$, and in general it is different from the
ordering permutation $\pi_x$ for ${\bf x}$ and $\pi_y$ for ${\bf y}$. Therefore
\begin{eqnarray}\label{V2}
&&x[\pi_z(0)]+ ...+x[\pi_z(\ell)]\ge {\cal L}(\ell;{\bf x})\nonumber\\
&&y[\pi_z(0)]+ ...+y[\pi_z(\ell)]\ge {\cal L}(\ell;{\bf y})
\end{eqnarray}
Combining Eqs(\ref{V1}), (\ref{V2}) we prove Eq.(\ref{hh2}).
For comonotonic probability vectors ${\bf x}, {\bf y}$, the ${\bf z}$ has the same ordering permutation as the ${\bf x}, {\bf y}$.
Then Eqs.(\ref{V2}) become equalities and therefore Eq.(\ref{hh2}) becomes equality.
\item[(2)]
The proof for doubly stochastic matrices is based on Eq.(\ref{hh2}), taking into account Eqs.(\ref{hh}), (\ref{hh1}).
We note that Eq.(\ref{hh1}) is not valid for general permutation with repetition matrices $M_f$ that appear in the expansions of row Markov matrices.
The following example shows that ${\cal L}(\ell;{\bf x})$ can be greater than ${\cal L}(\ell;{\bf x}q)$.
For the row Markov matrix in Eq.(\ref{AB1}) with $a=b=\frac{1}{2}$, we find that
\begin{eqnarray}\label{AB3}
{\bf u}q=\left (\frac{1}{6}, \frac{1}{2}, \frac{1}{3}\right ).
\end{eqnarray}
Therefore
\begin{eqnarray}
{\cal L}(0;{\bf u}q)=\frac{1}{6};\;\;\;{\cal L}(1;{\bf u}q)=\frac{1}{2};\;\;\;{\cal L}(2;{\bf u}q)=1.
\end{eqnarray}
In this example ${\cal L}(\ell;{\bf u})=\frac{\ell+1}{3}$ is greater than ${\cal L}(\ell;{\bf u}q)$.
\end{itemize}
\end{proof}
\subsection{Majorization of probability vectors: `more sparse'}
Majorization is a preorder that has been used in various areas (e.g., \cite{MAJ}), including quantum physics\cite{MA0,MA1,MA2,MA3}.
It provides an ordinal description of the sparsity in probability vectors (i.e., that one probability vector is more sparse than another).
\begin{definition}
If ${\bf x}, {\bf y}$ are two probability vectors, then ${\bf x}\succ {\bf y}$ (${\bf x}$ majorizes ${\bf y}$, or ${\bf x}$ is `more sparse' than ${\bf y}$) if ${\cal L}(\ell;{\bf x})\le {\cal L}(\ell;{\bf y})$ for all $\ell$.
\end{definition}
In this case the large probabilities are larger in ${\bf x}$ than in ${\bf y}$, and therefore the `certainty' related to ${\bf x}$ is larger than the certainty related to ${\bf y}$.
In other words, the probability distribution in ${\bf x}$ is `more certain' than the distribution in ${\bf y}$.
Clearly for any vector ${\bf x}$
\begin{eqnarray}\label{19}
{\bf u}\prec{\bf x}\prec {\bf c}
\end{eqnarray}
where ${\bf u}$ and ${\bf c}$ have been defined in Eq.(\ref{3D}).
It is easily seen that
\begin{eqnarray}\label{mn}
&&{\bf x}\prec{\bf x}\nonumber\\
&&{\bf z}\prec{\bf y}{\;\;\rm and\;\;}{\bf y}\prec{\bf x}\;\;\rightarrow\;\;{\bf z}\prec {\bf x}
\end{eqnarray}
and therefore ${\bf x}\succ {\bf y}$ is a preorder in ${\cal P}$.
Majorization is intimately related to doubly stochastic matrices. ${\bf x}\succ {\bf y}$ if and only if there exists a doubly stochastic matrix ${\mathfrak D}$ such that ${\bf y}={\bf x}{\mathfrak D}$.
Indeed if ${\bf y}={\bf x}{\mathfrak D}$ then using Eqs(\ref{hh}),(\ref{hh1}),(\ref{hh2}) we prove that
${\cal L}(\ell;{\bf y})\ge {\cal L}(\ell;{\bf x})$ for all $\ell$, i.e., that ${\bf x}\succ {\bf y}$.
The converse is also true\cite{MAJ}.
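The `if' direction can be probed numerically: a convex combination of permutation matrices is doubly stochastic (by Birkhoff's theorem every doubly stochastic matrix arises this way), and the Lorenz values should satisfy ${\bf x}\succ {\bf x}{\mathfrak D}$ (a Python sketch with random test data):

```python
import random
from itertools import permutations

random.seed(3)
d = 4

def lorenz(x):
    """Lorenz values: cumulative probabilities in ascending order."""
    out, acc = [], 0.0
    for v in sorted(x):
        acc += v
        out.append(acc)
    return out

perms = list(permutations(range(d)))
for _ in range(50):
    # doubly stochastic D: random convex combination of permutation matrices
    w = [random.random() for _ in perms]
    tw = sum(w)
    w = [v / tw for v in w]
    D = [[sum(wk for wk, pk in zip(w, perms) if pk[i] == j)
          for j in range(d)] for i in range(d)]

    v = [random.random() for _ in range(d)]
    tv = sum(v)
    x = [u / tv for u in v]
    y = [sum(x[i] * D[i][j] for i in range(d)) for j in range(d)]   # y = x D

    # x majorizes y: L(l; x) <= L(l; y) for all l
    assert all(lx <= ly + 1e-12 for lx, ly in zip(lorenz(x), lorenz(y)))
```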
\begin{remark}\label{rem2}
Majorization is an ordinal approach to the concept `more certain' or `more sparse'.
Entropic quantities can be used to quantify these concepts.
There is a relationship between majorization and entropic quantities which involves the concept of
Schur concave functions, i.e., functions $\phi({\bf x})$ for which
\begin{eqnarray}
{\bf x}\succ{\bf y}\;\;\rightarrow\;\;\phi({\bf x})\le \phi ( {\bf y}).
\end{eqnarray}
Many entropic quantities are known to be Schur concave functions (e.g., \cite{BZ}), and for them the entropy corresponding to
${\bf x}$ is less than the entropy corresponding to ${\bf y}$. We do not pursue further the `entropic direction' in this paper.
\end{remark}
\section{The Gini index as an indicator of the sparsity of probability vectors}
The Gini index quantifies the variability in a probability distribution.
It is an alternative to variance and standard deviation, and below (in remark \ref{rem1}) we compare and contrast these two quantities.
It has been used in Mathematical Economics for the characterisation of wealth inequality.
In this section Lorenz values are used to define the Gini index which quantifies the sparsity in probability vectors.
It has attractive properties (e.g., proposition \ref{pro56}) which can be used to quantify the concepts `uncertainty increase' or `information loss'.
In this sense it is a complementary quantity to entropy.
\begin{proposition}\label{proG1}
The Gini index is defined by the following relations which are equivalent to each other:
\begin{itemize}
\item[(1)]
\begin{eqnarray}\label{85}
{\cal G}({\bf x})&=&1-\frac{2}{d+1}\sum _{\ell=0}^{d-1}{\cal L}(\ell;{\bf x})=\frac{d-1}{d+1}-\frac{2}{d+1}\sum _{\ell=0}^{d-2}{\cal L}(\ell;{\bf x})\nonumber\\&=&
1-\frac{2}{d+1}\{dx[\pi(0)]+(d-1)x[\pi(1)]+...+x[\pi(d-1)]\}
\end{eqnarray}
\item[(2)]
\begin{eqnarray}\label{3d}
{\cal G}({\bf x})=\frac{1}{2(d+1)}\sum _{r,s}|x(r)-x(s)|
\end{eqnarray}
\end{itemize}
\end{proposition}
\begin{proof}
Using the ordering of the probabilities in Eq.(\ref{pp}) we get
\begin{eqnarray}
\sum _{r,s}|x(r)-x(s)|&=&\sum _{r,s}|x[\pi(r)]-x[\pi(s)]|\nonumber\\&=&
2\sum _{r=1}^{d-1}\sum _{s< r}\{x[\pi(r)]-x[\pi(s)]\}\nonumber\\&=&
2\sum _{r=1}^{d-1} rx[\pi(r)]-2\sum _{r=1}^{d-1}{\cal L}(r-1;{\bf x})
\end{eqnarray}
We note that
\begin{eqnarray}
\sum _{r=1}^{d-1} rx[\pi(r)]+\sum _{r=1}^{d-1}{\cal L}(r-1;{\bf x})=d-1
\end{eqnarray}
and we get
\begin{eqnarray}
\sum _{r,s}|x(r)-x(s)|=4\sum _{r=1}^{d-1} rx[\pi(r)]-2(d-1).
\end{eqnarray}
From this follows Eq.(\ref{85}).
\end{proof}
From Eq.(\ref{24}) follows that
\begin{eqnarray}\label{www}
0\le {\cal G}({\bf x})\le \frac{d-1}{d+1}.
\end{eqnarray}
Large values of the Gini index indicate sparse (certain) probability vectors.
Small values of the Gini index indicate uncertain probability vectors.
\begin{example}
The Gini indices of the vectors ${\bf u}$ and ${\bf c}$ are
\begin{eqnarray}
{\cal G}({\bf u})=0;\;\;\;{\cal G}({\bf c})=\frac{d-1}{d+1}
\end{eqnarray}
\end{example}
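The equivalence of the two definitions in proposition \ref{proG1}, together with the bounds in Eq.(\ref{www}), can be sampled numerically (a Python sketch with random probability vectors):

```python
import random

random.seed(4)
d = 5

def gini_lorenz(x):
    """Gini index via Lorenz values, Eq. (85)."""
    acc, total = 0.0, 0.0
    for v in sorted(x):
        acc += v
        total += acc
    return 1.0 - 2.0 * total / (d + 1)

def gini_pairs(x):
    """Gini index via pairwise absolute differences, Eq. (3d)."""
    return sum(abs(r - s) for r in x for s in x) / (2.0 * (d + 1))

for _ in range(300):
    v = [random.random() for _ in range(d)]
    t = sum(v)
    x = [u / t for u in v]
    assert abs(gini_lorenz(x) - gini_pairs(x)) < 1e-12
    assert -1e-12 <= gini_pairs(x) <= (d - 1) / (d + 1) + 1e-12
```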
\begin{proposition}\label{pro56}
\mbox{}
\begin{itemize}
\item[(1)]
If ${\bf x}$, ${\bf y}$ are probability vectors, then
\begin{eqnarray}\label{pp5}
{\cal G}[\lambda {\bf x}+(1-\lambda){\bf y}]\le \lambda {\cal G}({\bf x})+(1-\lambda){\cal G}({\bf y});\;\;\;0\le \lambda\le 1.
\end{eqnarray}
For comonotonic vectors ${\bf x}, {\bf y}$, this becomes equality.
\item[(2)]
If ${\mathfrak D}$ is a doubly stochastic matrix, then
\begin{eqnarray}\label{P1}
{\cal G}({\bf x}{\mathfrak D})\le {\cal G}({\bf x})
\end{eqnarray}
If the doubly stochastic matrix is a permutation matrix $M_\pi$ then
\begin{eqnarray}\label{P2}
{\cal G}({\bf x}M_{\pi})={\cal G}({\bf x})
\end{eqnarray}
But for a row Markov matrix $q$ the ${\cal G}({\bf x}q)$ might be greater than ${\cal G}({\bf x})$.
\end{itemize}
\end{proposition}
\begin{proof}
\begin{itemize}
\item[(1)]
This follows from Eq.(\ref{85}) and the first part of proposition \ref{pro78}.
\item[(2)]
Eq.(\ref{P1}) follows from Eq.(\ref{85}) and the second part of proposition \ref{pro78}.
Eq.(\ref{P2}) follows from Eq.(\ref{hh1}).
An example where the ${\cal G}({\bf x}q)$ is greater than ${\cal G}({\bf x})$, is as follows.
For the uncertain probability vector in Eq.(\ref{3D}), we get ${\cal G}({\bf u})=0$.
For the probability vector ${\bf r}={\bf u}q$ in Eq.(\ref{AB3}), we get ${\cal G}({\bf r})=\frac{1}{6}$.
\end{itemize}
\end{proof}
The change in the Gini index of a probability vector ${\bf x}$ when it is multiplied by a row Markov matrix $q$, is:
\begin{eqnarray}\label{R}
\Delta {\cal G}({\bf x},{\bf x}q)={\cal G}({\bf x})-{\cal G}({\bf x}q).
\end{eqnarray}
The $\Delta {\cal G}({\bf x},{\bf x}q)$ quantifies the change in the uncertainty as we go from the probability vector ${\bf x}$ to the probability vector ${\bf x}q$.
If $q$ is a doubly stochastic matrix ${\mathfrak D}$ then $\Delta {\cal G}({\bf x},{\bf x}{\mathfrak D})\ge 0$.
Also
\begin{eqnarray}
\Delta {\cal G}({\bf x},{\bf x}{\mathfrak U})={\cal G}({\bf x}).
\end{eqnarray}
If $q$ is a row Markov matrix which is not doubly stochastic, then the probability vector ${\bf x}q$ might be more sparse (more certain) than ${\bf x}$, and
the $\Delta {\cal G}({\bf x},{\bf x}q)$ might be negative.
\begin{remark}\label{rem1}
Eq.(\ref{3d}) for the Gini index should be compared and contrasted with the variance $V$ related to these probabilities which is given by
\begin{eqnarray}
V=\frac{1}{2d^2}\sum _{r,s}[x(r)-x(s)]^2.
\end{eqnarray}
It is seen that the variance involves the squares of the differences, whilst the Gini index involves the absolute values of the differences.
It is easily seen that
\begin{eqnarray}
d^2V\le (d+1){\cal G}({\bf x}).
\end{eqnarray}
In our context properties like proposition \ref{pro56}, make the Gini index an attractive quantity for the characterisation of uncertainty.
\end{remark}
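The inequality $d^2V\le (d+1){\cal G}({\bf x})$ follows pointwise from $[x(r)-x(s)]^2\le |x(r)-x(s)|$, and can be sampled numerically (a Python sketch; the check of $V$ against the usual moment formula is an extra sanity test):

```python
import random

random.seed(8)
d = 5

for _ in range(300):
    v = [random.random() for _ in range(d)]
    t = sum(v)
    x = [u / t for u in v]
    gini = sum(abs(r - s) for r in x for s in x) / (2.0 * (d + 1))
    V = sum((r - s) ** 2 for r in x for s in x) / (2.0 * d * d)
    # sanity: V equals mean of squares minus square of mean
    m = sum(x) / d
    assert abs(V - (sum(r * r for r in x) / d - m * m)) < 1e-12
    # pointwise (x_r - x_s)^2 <= |x_r - x_s| since |x_r - x_s| <= 1
    assert d * d * V <= (d + 1) * gini + 1e-12
```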
\subsection{Bounds for the averages in terms of the Gini index}\label{sec12}
We consider a quantity that takes the values $0,...,d-1$ with probabilities $x(0),...,x(d-1)$, correspondingly.
The average value of this quantity is
\begin{eqnarray}\label{order}
\langle{\bf x}\rangle=0\cdot x(0)+1\cdot x(1)+...+(d-1)x(d-1).
\end{eqnarray}
In many cases with a finite number of items there is no `natural order'. The values $0,...,d-1$ are just labels and with
a permutation $\wp$ we can change the value $\ell$ into $\wp (\ell)$. Then the average value changes into
\begin{eqnarray}\label{ord}
\langle{\bf x}\rangle_\wp &=&0\cdot x[\wp (0)]+1\cdot x[\wp (1)]+...+(d-1) x[\wp (d-1)]\nonumber\\
&=&\wp^{-1}(0)x(0)+\wp^{-1}(1)x(1)+...+\wp^{-1}(d-1)x(d-1).
\end{eqnarray}
We note that the order $\pi$ of the values of the probabilities in Eq.(\ref{pp}) defines uniquely the Gini index.
In contrast, the average depends on another order $\wp$ that is defined by the physical problem.
The following proposition provides an interval where $\langle{\bf x}\rangle_\wp$ belongs.
\begin{proposition}\label{GG3}
For any permutation $\wp$, the average $\langle{\bf x}\rangle_\wp$ belongs to the interval
\begin{eqnarray}\label{39}
\frac{d-1}{2}-\frac{d+1}{2}{\cal G}({\bf x})\le \langle{\bf x}\rangle _\wp \le \frac{d-1}{2}+\frac{d+1}{2}{\cal G}({\bf x})
\end{eqnarray}
\end{proposition}
\begin{proof}
We first consider the sum
\begin{eqnarray}
S(\ell)=x[\wp (\ell)]+...+x[\wp (d-1)];\;\;\;S(0)=1.
\end{eqnarray}
Clearly
\begin{eqnarray}
{\cal L}(d-\ell-1;{\bf x})\le S(\ell)\le 1-{\cal L}(\ell -1;{\bf x})
\end{eqnarray}
because ${\cal L}(d-\ell -1;{\bf x})$ is the sum of the lowest $d-\ell$ probabilities, and $1-{\cal L}(\ell -1;{\bf x})$ is the sum of the highest $d-\ell $ probabilities. Therefore
\begin{eqnarray}\label{A1}
\langle{\bf x}\rangle_{\wp}=\sum _{r=1}^{d-1}S(r)\ge\sum _{r=1}^{d-1}{\cal L}(d-r-1;{\bf x})=A.
\end{eqnarray}
Also
\begin{eqnarray}\label{A2}
\langle{\bf x}\rangle_{\wp}=\sum _{r=1}^{d-1}S(r)\le\sum _{r=1}^{d-1}[1-{\cal L}(r-1;{\bf x})]=d-1-A.
\end{eqnarray}
Eq.(\ref{85}) shows that
\begin{eqnarray}
A=\frac{d+1}{2}\left [\frac{d-1}{d+1}-{\cal G}({\bf x})\right]
\end{eqnarray}
Using this with Eqs(\ref{A1}),(\ref{A2}) we prove the proposition.
\end{proof}
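Proposition \ref{GG3} can be verified exhaustively over all $d!$ relabellings $\wp$ of a random probability vector (a Python sketch):

```python
import random
from itertools import permutations

random.seed(5)
d = 4

v = [random.random() for _ in range(d)]
t = sum(v)
x = [u / t for u in v]
# Gini index via pairwise absolute differences, Eq. (3d)
G = sum(abs(r - s) for r in x for s in x) / (2.0 * (d + 1))

lo = (d - 1) / 2.0 - (d + 1) / 2.0 * G
hi = (d - 1) / 2.0 + (d + 1) / 2.0 * G
for wp in permutations(range(d)):
    avg = sum(ell * x[wp[ell]] for ell in range(d))   # <x>_wp, Eq. (ord)
    assert lo - 1e-12 <= avg <= hi + 1e-12            # Eq. (39)
```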
\subsection{Local and total Gini indices for random safes}
We consider the $d\times d$ row Markov matrix $q$, describing an ensemble of safes (random safe).
As explained earlier $q(i,j)$ is the probability that the integer in the $i$-position of the sequence that opens the safe, is $j$.
For a fixed row $i$, the $q(i,j)$ is a probability vector and we can calculate its Gini index.
We order the $i$-row of $q$ in ascending order as
\begin{eqnarray}
q[i,\pi_i(0)]\le...\le q[i,\pi_i(d-1)],
\end{eqnarray}
where $\pi _i$ is a permutation of the $d$ probabilities $q(i,j)$ in the $i$-row.
Then
\begin{eqnarray}\label{G1}
{\cal G}_i={\cal G}[q(i,j)]=
1-\frac{2}{d+1}\{dq[i,\pi_i(0)]+...+q[i,\pi_i(d-1)]\};\;\;\;0\le {\cal G}_i\le \frac{d-1}{d+1}.
\end{eqnarray}
This is a local Gini index describing the sparsity of the probability vector for the integer in the $i$-position of the sequence that opens the random safe.
Then the `Gini vector'
\begin{eqnarray}\label{G1V}
{\cal G}=({\cal G}_0,...,{\cal G}_{d-1}),
\end{eqnarray}
consists of all local Gini indices, and it does not depend on correlations between the various integers in the sequence that opens the random safe.
We also consider the joint probabilities $q(f_i)$ ($i=0,...,d^d-1$) which form a probability vector with $d^d$ components, and define its Gini index.
We order the $d^d$ joint probabilities $q(f_i)$ in ascending order as
\begin{eqnarray}
q[\pi(f_0)]\le...\le q[\pi(f_{d^d-1})],
\end{eqnarray}
where $\pi$ is a permutation of the $d^d$ functions $f_i\in {\cal F}$. Then
\begin{eqnarray}\label{G2}
{\cal G}_T={\cal G}[q(f)]=
1-\frac{2}{d^d+1}\left \{d^dq[\pi(f_0)]+...+q[\pi(f_{d^d-1})]\right \};\;\;\;0\le {\cal G}_T\le\frac{d^d-1}{d^d+1}.
\end{eqnarray}
${\cal G}_T$ is a total Gini index describing the sparsity of the joint probabilities describing the ensemble of safes.
${\cal G}_T$ depends on correlations between the various integers in the sequence that opens the random safe.
\begin{example}
This is a continuation of example \ref{ex24}.
We consider the row Markov matrix in Eq.(\ref{AB1}) and its expansion in Eq.(\ref{kk}).
The Gini vector is easily found to be
\begin{eqnarray}
{\cal G}=\left (\frac{1-a}{2}, \frac{1-a}{2}, \frac{1-b}{2}\right ).
\end{eqnarray}
The joint probabilities are $a$, $b-a$ and $1-b$, and using the inequalities in Eq.(\ref{AB11}) we see that
\begin{eqnarray}
0\le a\le b-a\le 1-b.
\end{eqnarray}
Therefore the total Gini index is
\begin{eqnarray}
{\cal G}_T=1-\frac{2}{28}[3a+2(b-a)+(1-b)]=\frac{13-a-b}{14}.
\end{eqnarray}
\end{example}
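The Gini vector and the total Gini index of this example can be reproduced numerically (a Python sketch; the values of $a,b$ are arbitrary test data satisfying Eq.(\ref{AB11})):

```python
a, b = 0.1, 0.3    # satisfies 0 <= 2a <= b <= 1/2

def gini(x):
    """Gini index of a probability vector of any length, Eq. (85)."""
    n = len(x)
    acc, total = 0.0, 0.0
    for v in sorted(x):
        acc += v
        total += acc
    return 1.0 - 2.0 * total / (n + 1)

q = [[a, 1 - a, 0.0],
     [0.0, a, 1 - a],
     [0.0, 1 - b, b]]

# local Gini indices (the Gini vector): one per row of q
gvec = [gini(row) for row in q]
expected = [(1 - a) / 2, (1 - a) / 2, (1 - b) / 2]
assert all(abs(g - e) < 1e-12 for g, e in zip(gvec, expected))

# total Gini index over the 27 joint probabilities: a, b-a, 1-b and 24 zeros
joint = [a, b - a, 1 - b] + [0.0] * 24
assert abs(gini(joint) - (13 - a - b) / 14) < 1e-12
```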
\section{Quantum uncertainties in terms of the Gini index}
We consider a quantum system (qudit) with variables in ${\mathbb Z}_d$ (e.g.,\cite{V1}).
$H_d$ is the $d$-dimensional Hilbert space describing this system.
$|j\rangle$ where $j\in {\mathbb Z}_d$, is an orthonormal basis in $H_d$.
With a finite Fourier transform $F$
\begin{eqnarray}\label{FF}
&&F=\frac{1}{\sqrt{d}}\sum _{j,k}\omega_d(jk) \ket{j}\bra{k};\;\;\;\omega_d(\alpha)=\exp \left (i\frac{2\pi\alpha}{d}\right);\;\;\;\alpha,j,k\in{\mathbb Z}_d\nonumber\\
&&F^4={\bf 1};\;\;\;FF^{\dagger}={\bf 1},
\end{eqnarray}
we introduce the dual basis
\begin{eqnarray}\label{FF}
&&\ket{j}_F=F\ket{j}.
\end{eqnarray}
The term dual refers to Fourier transform (in multipartite systems below we make the distinction between locally dual and globally dual for two different types of Fourier transform).
We call $\ket{j}$ positions and $\ket{j}_F$ momenta.
Using the relation
\begin{eqnarray}
\frac{1}{d}\sum _{k}\omega_d[(j+\ell)k]=\delta(j,-\ell),
\end{eqnarray}
we show that $F^2$ is the parity operator
\begin{eqnarray}\label{parity}
F^2=\frac{1}{d}\sum _{j,k,\ell}\omega_d[(j+\ell)k] \ket{j}\bra{\ell}=\sum _{j}\ket{j}\bra{-j}
\end{eqnarray}
For $d=2$ we get $F^2={\bf 1}$.
Let $\varpi(j)$ and ${\widetilde \varpi}(j)$ be the following orthogonal projectors and their `duals':
\begin{eqnarray}\label{333}
\varpi(j)=\ket{j}\bra{j};\;\;\;{\widetilde \varpi}(j)=F\varpi(j)F^\dagger=\ket{j}_F\; _F\bra{j}.
\end{eqnarray}
Measurements with these projectors on a system with density matrix $\rho$, will give the outcome `yes' with probabilities
\begin{eqnarray}\label{rr}
x_\rho (j)={\rm Tr}[\rho \varpi(j)];\;\;\;{\widetilde x}_\rho (j)={\rm Tr}[\rho {\widetilde \varpi}(j)]=x_{F^{\dagger}\rho F}(j),
\end{eqnarray}
correspondingly.
We refer to the ${\widetilde x}_\rho (j)$ as the dual probabilities.
The dual probabilities for the density matrix $\rho$, are the probabilities of the Fourier transformed density matrix $F^\dagger\rho F$.
The Gini indices ${\cal G}(\rho)$ and ${\widetilde {\cal G}}(\rho)$ for these two probability vectors are calculated using Eq.(\ref{85}).
They quantify the sparsity of these two probability vectors and it is easily seen that
\begin{eqnarray}
{\widetilde {\cal G}}(\rho)={\cal G}(F^{\dagger}\rho F);\;\;\;0\le {\cal G}(\rho), {\widetilde {\cal G}}(\rho)\le \frac{d-1}{d+1}.
\end{eqnarray}
In ref\cite{V2} we have used the Gini index to study the uncertainty principle for systems with finite dimensional Hilbert space.
We have shown that ${\cal G}(\rho)+{\widetilde {\cal G}}(\rho)$ cannot take values arbitrarily close to $2\frac{d-1}{d+1}$, i.e.,
we cannot have probability vectors $x_\rho (j)$, ${\widetilde x}_\rho (j)$ which are both `very sparse'.
Based on this we proved that the following `uncertainty coefficient' is greater than zero:
\begin{eqnarray}\label{Q1}
\eta _d=2\frac{d-1}{d+1}-\sup _{\rho }[{\cal G}(\rho)+{\widetilde {\cal G}}(\rho)]>0.
\end{eqnarray}
We use here the supremum of ${\cal G}(\rho)+{\widetilde {\cal G}}(\rho)$ over all density matrices.
$\eta _d$ does not depend on the density matrix.
The uncertainty relation is the fact that $\eta_d$ is non-zero, and it can be expressed as
\begin{eqnarray}\label{Q2}
\Delta(\rho)\ge \eta _d>0;\;\;\;\Delta(\rho)=2\frac{d-1}{d+1}-[{\cal G}(\rho)+{\widetilde {\cal G}}(\rho)].
\end{eqnarray}
In some sense $\eta_d$ is the analogue of $\frac{1}{2}$ in infinite-dimensional systems.
We also gave an upper bound for $\eta_d$ but this is not relevant to the uncertainty principle, and it is not used here.
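For $d=2$ this can be probed numerically over random pure states: Eq.(\ref{85}) reduces to ${\cal G}=\frac{1}{3}|x_0-x_1|$ for a qubit, and the sampled values of ${\cal G}(\rho)+{\widetilde {\cal G}}(\rho)$ stay below $2\frac{d-1}{d+1}=\frac{2}{3}$ (a Python sketch; it only samples pure states, so it illustrates, but does not prove, $\eta_2>0$):

```python
import cmath
import math
import random

random.seed(6)

def gini2(x0, x1):
    """Gini index of a 2-component probability vector (Eq. (85) with d = 2)."""
    return abs(x0 - x1) / 3.0

best = 0.0
for _ in range(5000):
    # random pure qubit state (cos t, e^{ip} sin t) in the position basis
    t = random.uniform(0.0, math.pi)
    p = random.uniform(0.0, 2.0 * math.pi)
    a0, a1 = math.cos(t), cmath.exp(1j * p) * math.sin(t)
    x0, x1 = abs(a0) ** 2, abs(a1) ** 2              # position probabilities
    # dual (momentum) amplitudes, F = 2^{-1/2} [[1, 1], [1, -1]]
    b0, b1 = (a0 + a1) / math.sqrt(2), (a0 - a1) / math.sqrt(2)
    y0, y1 = abs(b0) ** 2, abs(b1) ** 2              # dual probabilities
    best = max(best, gini2(x0, x1) + gini2(y0, y1))

assert best < 2.0 / 3.0   # the two probability vectors are never both 'very sparse'
```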
\section{Multipartite quantum systems as quantum permutations with repetitions and as quantum safes}\label{Q}
The interpretation of multipartite quantum systems as quantum permutations with repetitions and as quantum safes,
allows the use of the formalism in the previous sections, in the present context.
Furthermore, practical calculations of quantum quantities like partial traces, expectation values, etc., tacitly use permutations with repetitions.
\subsection{Local Fourier transforms}
We consider a $d$-partite system composed of $d$ components, each of which is a qudit.
This system is described by the $d^d$-dimensional Hilbert space ${\mathfrak H}=H_d\otimes ...\otimes H_d$, and let $\ket{j_0,...,j_{d-1}}$ be an orthonormal basis in it.
Also let $F_L$ be local Fourier transforms applied on each of the $d$ components of the system:
\begin{eqnarray}
F_L=F\otimes ...\otimes F;\;\;\;F_L^4={\bf 1};\;\;\;F_LF_L^{\dagger}={\bf 1}.
\end{eqnarray}
The index $L$ in the notation stands for local.
Acting with $F_L$ on the basis $\ket{j_0,...,j_{d-1}}$ we get the locally dual basis
\begin{eqnarray}\label{103}
\ket{j_0,...,j_{d-1}}_{\rm L}=F_L\ket{j_0,...,j_{d-1}}=\ket{j_0}_F\otimes...\otimes\ket{j_{d-1}}_F=\frac{1}{\sqrt{d^d}}\bigotimes _{r=0}^{d-1}\left [\sum _{k_r=0}^{d-1}\omega_{d}(j_rk_r)\ket{k_r}\right ].
\end{eqnarray}
We introduce `quantum permutations with repetitions' or `quantum safes', by labelling an orthonormal basis in $\mathfrak H$ with the $d^d$ functions in the set ${\cal F}$ (that describes permutations with repetitions):
\begin{eqnarray}\label{kkk}
\ket{f}=\ket{f(0),...,f(d-1)};\;\;\;\sum _f\ket{f}\bra{f}={\bf 1};\;\;\;f\in {\cal F};\;\;\;f(i)\in {\mathbb Z}_d.
\end{eqnarray}
The Hilbert space $H_d\otimes ...\otimes H_d$ describes superpositions of permutations with repetitions or
superpositions of safes.
For example, let $\rho _A, \rho_B$ be the density matrices
\begin{eqnarray}\label{108}
&&\rho _A=|a|^2\ket{f}\bra{f}+|b|^2\ket{g}\bra{g};\;\;\;|a|^2+|b|^2=1;\;\;\;f,g\in {\cal F}\nonumber\\
&&\rho _B=\rho_A+ab^*\ket{f}\bra{g}+a^*b\ket{g}\bra{f}.
\end{eqnarray}
$\rho _A$ describes a probabilistic combination of the permutations with repetitions $f,g$ (random safes).
$\rho _B$ describes a superposition of the permutations with repetitions $f,g$ (quantum safes).
The difference between the separable density matrix $\rho_A$ and the entangled state \cite{H} described by the density matrix $\rho _B$, lies in the off-diagonal terms.
We consider the commuting projectors and their locally dual counterparts
\begin{eqnarray}\label{333a}
&&\Pi(i,j)={\bf 1}\otimes...\otimes {\bf 1}\otimes \varpi(j)\otimes {\bf 1}\otimes...\otimes {\bf 1};\;\;\;\sum_j\Pi(i,j)={\bf 1};\;\;\;[\Pi(i,j),\Pi(k,\ell)]=0\nonumber\\
&&{\widetilde \Pi}(i,j)=F_L\Pi(i,j)F_L^{\dagger}={\bf 1}\otimes...\otimes {\bf 1}\otimes {\widetilde \varpi}(j)\otimes {\bf 1}\otimes...\otimes {\bf 1}
\end{eqnarray}
The index $i$ (with $i=0,...,d-1$) indicates the position of $\varpi(j)$.
They describe measurements with $\varpi(j)$ (and ${\widetilde {\varpi}} (j)$) on the $i$-component of the system.
$i$ is a `position index' and below it appears mainly in products, while $j$ is a `basis index' (the analogue of `number index' in previous sections) and below it appears mainly in sums.
If $f\in {\cal F}$ is a permutation with repetitions, we consider the $d^d$ projectors and their duals
\begin{eqnarray}\label{333b}
&&\Pi(f)=\Pi[f(0),...,f(d-1)]=\varpi[f(0)]\otimes...\otimes \varpi[f(d-1)]=\prod_i\Pi[i,f(i)];\;\;\;\sum_{f}\Pi(f)={\bf 1}\nonumber\\
&&{\widetilde \Pi}(f)=F_L\Pi(f)F_L^\dagger=\prod_i\widetilde \Pi[i,f(i)]
\end{eqnarray}
They describe uncorrelated or independent local measurements on the various components of the system.
The overall outcome is `yes', if the outcome of the measurement $\varpi[f(i)]$ on the $i$-component is `yes', for all components.
\subsection{The Markov matrices formalism in quantum context} \label{markov}
We interpret all quantities for row Markov matrices introduced earlier, in the present quantum context.
Let $\rho $ be a density matrix and
\begin{eqnarray}\label{red}
{\breve \rho}_i={\rm Tr}_{k\ne i}\rho,
\end{eqnarray}
be the reduced density matrix of the $i$-component of the system, which is found
by taking the partial trace with respect to all components except $i$ (denoted as ${\rm Tr}_{k\ne i}$).
Then:
\begin{itemize}
\item
The row Markov matrix $q_\rho$ has elements
\begin{eqnarray}\label{333}
q_\rho(i,j)={\rm Tr}[\rho\Pi(i,j)]={\rm Tr}[{\breve \rho}_i\varpi(j)];\;\;\;\sum _jq_\rho(i,j)=1;\;\;\;q_\rho \in {\cal M},
\end{eqnarray}
Usually the row Markov matrices are used to describe discrete time evolution of classical systems.
In multipartite quantum systems considered here, the row Markov matrices are used in connection with the property $\sum _j{\rm Tr}[\rho\Pi(i,j)]=1$.
The locally dual row Markov matrix has elements
\begin{eqnarray}
{\widetilde q}_\rho(i,j)={\rm Tr}[\rho\widetilde \Pi(i,j)]={\rm Tr}[F_L^\dagger \rho F_L \Pi(i,j)].
\end{eqnarray}
\item
For $\rho=\ket{f} \bra{f}$ where $\ket{f}$ is the state in Eq.(\ref{kkk}), we get
\begin{eqnarray}
q_f(i,j)=\bra{f}\Pi(i,j)\ket{f}=\bra{f(i)}\varpi(j)\ket{f(i)}=\delta(f(i),j)=M_f(i,j);\;\;\;f\in{\cal F}.
\end{eqnarray}
In the quantum context, the matrix $M_f$ describes the probabilities for measurements with $\varpi(j)$ on the state $\ket{f(i)}$.
\item
If $\rho$ is a density matrix and $f\in {\cal F}$ is a permutation with repetitions, the Markov tensor
\begin{eqnarray}
q_\rho(f)={\rm Tr}[\rho \Pi(f)]={\rm Tr}\{\rho[\varpi[f(0)]\otimes...\otimes \varpi[f(d-1)]]\};\;\;\;\sum _{f\in {\cal F}}q_\rho(f)=1,
\end{eqnarray}
is the joint probability that the measurement $\varpi [f(i)]$ on the $i$-component of the system will give `yes', for all components $i$.
We note that the $q_\rho(f)$ cannot detect entangling off-diagonal elements $\ket{f}\bra {g}$ in the density matrix.
For example, in Eq.(\ref{108}) they cannot distinguish the separable density matrix $\rho _A$, from the entangled $\rho _B$.
The Markov tensor $q_\rho(f)$ is related to the matrix $q_\rho$ as follows:
\begin{eqnarray}\label{RT1}
q_\rho=\sum _fq_\rho(f)M_f
\end{eqnarray}
We also introduce
the locally dual joint probabilities
\begin{eqnarray}\label{RT2}
{\widetilde q}_\rho(f)=q_{F_L^\dagger \rho F_L}(f)={\rm Tr}[F_L^\dagger \rho F_L \Pi(f)].
\end{eqnarray}
\item
From the $d\times d$ matrix $q_\rho\in{\cal M}$, we get the product of probabilities
\begin{eqnarray}
{\mathfrak M}_\rho(f)=\prod _i q_\rho(i,f(i)),
\end{eqnarray}
and the correlation coefficients
\begin{eqnarray}\label{cor}
{\cal C}_\rho (f)=q_\rho(f)-{\mathfrak M}_\rho(f);\;\;\;-1\le {\cal C}_\rho (f)\le 1;\;\;\;\sum _{f\in {\cal F}}{\cal C}_\rho(f)=0,
\end{eqnarray}
In a similar way we introduce the locally dual product of probabilities ${\widetilde {\mathfrak M}}_\rho(f)$
and the locally dual correlation coefficients ${\widetilde {\cal C}}_\rho (f)$.
\item
If $\rho, \sigma$ are two density matrices, then $(q_\rho, q_\sigma)$ is defined, in a way analogous to Eq.(\ref{17}), as
\begin{eqnarray}
(q_\rho,q_\sigma)=\prod _{i=0}^{d-1}\left[\sum_{j=0}^{d-1}{\rm Tr}[\rho\Pi(i,j)]{\rm Tr}[\sigma\Pi(i,j)]\right]=\prod _{i=0}^{d-1}\left[\sum_{j=0}^{d-1}{\rm Tr}[{\breve \rho}_i\varpi(j)]{\rm Tr}[{\breve \sigma}_i\varpi(j)]\right].
\end{eqnarray}
This is the probability that, for all $i,j$,
the measurement $\varpi(j)$ on the $i$-component of the system gives the same result for both density matrices $\rho$ and $\sigma$.
In other words, for all $i,j$ we perform the measurement $\varpi(j)$ on the $i$-component of a system from an ensemble of systems described by the density matrix $\rho$,
and also on a system from an ensemble of systems described by the density matrix $\sigma$.
We repeat that pair of experiments many times, and $(q_\rho,q_\sigma)$ is the fraction of times that the two experiments give the same result.
In particular
\begin{eqnarray}
(q_\rho,q_\rho)=\prod _{i=0}^{d-1}\left[\sum_{j=0}^{d-1}[{\rm Tr}({\breve \rho}_i\varpi(j))]^2\right].
\end{eqnarray}
Using Eq.(\ref{RRR}) we see that for the states in Eq.(\ref{kkk})
\begin{eqnarray}
(q_f,q_g)=(M_f,M_g)=\delta(f,g).
\end{eqnarray}
\end{itemize}
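As a numerical cross-check (not part of the derivation), the following Python sketch verifies the expansion of Eq.(\ref{RT1}) and the orthogonality $(q_f,q_g)=\delta(f,g)$ for a random two-qubit pure state, assuming $\varpi(j)=\ket{j}\bra{j}$ so that $q_\rho(f)=|\psi_{f(0)f(1)}|^2$:

```python
import itertools
import numpy as np

d = 2                       # d components, each d-dimensional (two qubits)
rng = np.random.default_rng(0)

# random pure state |psi> on H_d (x) H_d, indexed as psi[j0, j1]
psi = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
psi /= np.linalg.norm(psi)

# joint probabilities q_rho(f) = Tr[rho Pi(f)] = |psi[f(0), f(1)]|^2
q_joint = {f: abs(psi[f]) ** 2 for f in itertools.product(range(d), repeat=d)}

# row Markov matrix q_rho(i, j) = Tr[rho Pi(i, j)]: marginal on component i
q = np.array([np.sum(np.abs(psi) ** 2, axis=1 - i) for i in range(d)])

# M_f(i, j) = delta(f(i), j): the matrix of the permutation with repetitions f
def M(f):
    return np.array([[float(f[i] == j) for j in range(d)] for i in range(d)])

# overlap (A, B) = prod_i [ sum_j A(i, j) B(i, j) ]
overlap = lambda A, B: np.prod(np.sum(A * B, axis=1))

# Eq.(RT1): q_rho = sum_f q_rho(f) M_f
assert np.allclose(q, sum(p * M(f) for f, p in q_joint.items()))

# for the basis states, (q_f, q_g) = (M_f, M_g) = delta(f, g)
for f in q_joint:
    for g in q_joint:
        assert overlap(M(f), M(g)) == float(f == g)
```

The same check works for any $d$ by enlarging the index ranges; only the identification of $q_\rho(f)$ with the squared amplitudes relies on the projective measurements being in the computational basis.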
\subsection{Quantum uncertainties in multipartite systems in terms of the Gini index}
Using Eqs.(\ref{G1}), (\ref{G1V}) with the probabilities in Eqs.(\ref{333}),(\ref{333a}) we calculate
the Gini vector ${\cal G}(\rho)=({\cal G}_0,...,{\cal G}_{d-1})$ and the locally dual Gini vector
${\widetilde {\cal G}}(\rho)={\cal G}(F_L^\dagger\rho F_L)=({\widetilde {\cal G}}_0,...,{\widetilde {\cal G}}_{d-1})$.
Then Eqs.(\ref{Q1}),(\ref{Q2}) show that
\begin{eqnarray}\label{UN1}
\Delta_i(\rho)\ge \eta _d>0;\;\;\;\Delta_i(\rho)=2\frac{d-1}{d+1}-[{\cal G}_i(\rho)+{\widetilde {\cal G}}_i(\rho)],
\end{eqnarray}
This expresses local uncertainty relations in each of the components of this system.
From the joint probabilities $q_\rho(f)$ in Eq.(\ref{RT1}) we can calculate the total Gini index ${\cal G}_T(\rho)$ using Eq.(\ref{G2}). Also
from the locally dual joint probabilities in Eq.(\ref{RT2})
we calculate the locally dual total Gini index ${\widetilde {\cal G}}_T(\rho)$.
The uncertainty relation in Eqs.(\ref{Q1}),(\ref{Q2}) here becomes
\begin{eqnarray}\label{UN2}
&&\Delta_T(\rho)\ge \eta _{d^d}>0;\;\;\;\Delta_T(\rho)=2\frac{d^d-1}{d^d+1}-[{\cal G}_T(\rho)+{\widetilde {\cal G}}_T(\rho)]\nonumber\\
&&\eta _{d^d}=2\frac{d^d-1}{d^d+1}-\sup _{\rho }[{\cal G}_T(\rho)+{\widetilde {\cal G}}_T(\rho)].
\end{eqnarray}
\subsection{Example}\label{ex5}
We consider a bipartite system of two qubits and the density matrices
\begin{eqnarray}\label{cde}
&&\rho=\ket{u}\bra{u};\;\;\;\ket{u}=a\ket{0,0}+b\ket{1,1};\;\;\;|a|^2+|b|^2=1;\;\;\;|a|<|b|\nonumber\\
&&\sigma=\ket{v}\bra{v};\;\;\;\ket{v}=c\ket{0,0}+d\ket{1,0}+e\ket{0,1};\;\;\;|c|^2+|d|^2+|e|^2=1;\;\;\;|e|<|d|<|c|.
\end{eqnarray}
The reduced density matrices are
\begin{eqnarray}
&&{\breve \rho}_0=|a|^2\ket{0}\bra{0}+|b|^2\ket{1}\bra{1}\nonumber\\
&&{\breve \rho}_1=|a|^2\ket{0}\bra{0}+|b|^2\ket{1}\bra{1}\nonumber\\
&&{\breve \sigma}_0=(|c|^2+|e|^2)\ket{0}\bra{0}+|d|^2\ket{1}\bra{1}+cd^*\ket{0}\bra{1}+c^*d\ket{1}\bra{0}\nonumber\\
&&{\breve \sigma}_1=(|c|^2+|d|^2)\ket{0}\bra{0}+|e|^2\ket{1}\bra{1}+ce^*\ket{0}\bra{1}+c^*e\ket{1}\bra{0},
\end{eqnarray}
and from them we find the row Markov matrices of probabilities
\begin{eqnarray}
q_\rho=\begin{pmatrix}
|a|^2&|b|^2\\
|a|^2&|b|^2\\
\end{pmatrix};\;\;\;
q_\sigma=\begin{pmatrix}
|c|^2+|e|^2&|d|^2\\
|c|^2+|d|^2&|e|^2\\
\end{pmatrix}
\end{eqnarray}
The joint probabilities $q(f(0),f(1))$, the products of probabilities ${\mathfrak M}(f(0),f(1))$, and the correlations ${\cal C}(f(0),f(1))$
for the density matrices $\rho, \sigma$ are given in table \ref{t2}.
We also used Eqs.(\ref{G1}),(\ref{G2}) to calculate the Gini vectors
\begin{eqnarray}
{\cal G}(\rho)=\left (\frac{1-2|a|^2}{3}, \frac{1-2|a|^2}{3}\right);\;\;\;{\cal G}(\sigma)=\left (\frac{1-2|d|^2}{3}, \frac{1-2|e|^2}{3}\right)
\end{eqnarray}
and the total Gini indices
\begin{eqnarray}
{\cal G}_T(\rho)=\frac{3-2|a|^2}{5};\;\;\;{\cal G}_T(\sigma)=\frac{3-2|e|^2-|d|^2}{5}.
\end{eqnarray}
It is seen that for small values of $|a|$ the probability vectors $q_\rho(i,j)$ (with fixed arbitrary $i$) are very sparse, and also the joint probability vector $q_\rho(f)$ is very sparse.
Similarly for small values of $|e|$, $|d|$, the probability vectors $q_\sigma(i,j)$ (with fixed arbitrary $i$) are very sparse, and also the joint probability vector $q_\sigma(f)$ is very sparse.
Furthermore we calculated the quantities
\begin{eqnarray}
&&(q_\rho, q_\sigma)=|a|^2(|ac|^2+|b|^2-|bc|^2)+|ed|^2(|a|^2-|b|^2)^2,\nonumber\\
&&(q_\rho, q_\rho)=(|a|^4+|b|^4)^2,\nonumber\\
&&(q_\sigma, q_\sigma)=[(|c|^2+|d|^2)^2+|e|^4][(|c|^2+|e|^2)^2+|d|^4].
\end{eqnarray}
Some special cases are:
\begin{eqnarray}
&&|a|=|e|=0\;\rightarrow\;(q_\rho, q_\sigma)=0\nonumber\\
&&|a|=1,\;\;|b|=0\;\rightarrow\;(q_\rho, q_\rho)=1\nonumber\\
&&|a|=|b|=\frac{1}{\sqrt{2}}\;\rightarrow\;(q_\rho, q_\rho)=\frac{1}{4}\nonumber\\
&&|c|=|d|=|e|=\frac{1}{\sqrt{3}}\;\rightarrow\;(q_\sigma, q_\sigma)=\frac{25}{81}.
\end{eqnarray}
In the case $|a|=|e|=0$ the measurements $\varpi(j)$ on the various components of the system described by $\rho$ will never give the same results as the corresponding measurements
on the system described by $\sigma$.
In the case $|a|=1$ and $|b|=0$, these measurements on two states from the ensemble described by $\rho$ will always give the same result, etc.
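The closed-form overlaps above can be verified numerically. The sketch below (with illustrative amplitudes, assuming numpy) builds $q_\rho$ and $q_\sigma$ from the state amplitudes of Eq.(\ref{cde}) and checks the three expressions; note that the overlaps are products over symmetric factors and so do not depend on the ordering convention for the two components:

```python
import numpy as np

a = 0.4; b = np.sqrt(1 - a**2)                    # |a| < |b|, |a|^2 + |b|^2 = 1
c = 0.8; d = 0.5; e = np.sqrt(1 - c**2 - d**2)    # |e| < |d| < |c|

# real amplitudes psi[j0, j1] of the two-qubit pure states |u>, |v>
u = np.array([[a, 0.0], [0.0, b]])
v = np.array([[c, e], [d, 0.0]])

def markov(psi):
    # q(i, j): probability of outcome j for a measurement on component i
    p = psi ** 2
    return np.array([p.sum(axis=1), p.sum(axis=0)])

# (A, B) = prod_i [ sum_j A(i, j) B(i, j) ]
overlap = lambda A, B: np.prod(np.sum(A * B, axis=1))

q_rho, q_sigma = markov(u), markov(v)
assert np.isclose(overlap(q_rho, q_sigma),
                  a**2 * ((a*c)**2 + b**2 - (b*c)**2) + (e*d)**2 * (a**2 - b**2)**2)
assert np.isclose(overlap(q_rho, q_rho), (a**4 + b**4)**2)
assert np.isclose(overlap(q_sigma, q_sigma),
                  ((c**2 + d**2)**2 + e**4) * ((c**2 + e**2)**2 + d**4))
```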
\section{Global Fourier transforms}\label{GF}
The quantities in the previous section do not depend on entangling off-diagonal elements in the density matrix
(e.g., the quantities in the example in section \ref{ex5} depend only on $|a|^2, |b|^2,...$).
This motivates the introduction of the globally dual quantities in this section based on a `global Fourier transform'.
The quantities in this section depend on off-diagonal elements that entangle the various components of the system.
We note here that non-diagonal elements are a necessary but not sufficient condition for entanglement.
We consider a bijective map between $({\mathbb Z}_d)^d={\mathbb Z}_d\times ...\times {\mathbb Z}_d$ and ${\mathbb Z}_{d^d}$ as follows.
We first take each $j_r$ in the `period' $0\le j_r\le d-1$ and $\widehat j$ in the `period' $0\le \widehat j\le d^d-1$, and introduce the bijective map
\begin{eqnarray}\label{sss}
j=(j_0,...,j_{d-1})\;\leftrightarrow\;\widehat j=j_0+j_1d+...+j_{d-1}d^{d-1}.
\end{eqnarray}
We then take each $j_r$ modulo $d$ and the $\widehat j$ modulo $d^d$, and we get a bijective map from $({\mathbb Z}_d)^d$ to ${\mathbb Z}_{d^d}$.
We note that if $j=(j_0,...,j_{d-1})$ and $k=(k_0,...,k_{d-1})$ then
\begin{eqnarray}
\widehat j\widehat k=j_0k_0+d(j_0k_1+j_1k_0)+...+d^{d-1}(j_0k_{d-1}+...+j_{d-1}k_0)
\end{eqnarray}
The Hilbert space ${\mathfrak H}$ is isomorphic to $H_{d^d}$ (a $d^d$-dimensional Hilbert space describing systems with variables in ${\mathbb Z}_{d^d}$). But the $({\mathbb Z}_d)^d$ as a ring (with addition and multiplication componentwise), is not isomorphic to the ring ${\mathbb Z}_{d^d}$ because $\widehat j+\widehat k\ne \widehat {j+k}$ and $\widehat j\cdot\widehat k\ne \widehat {j\cdot k}$.
For example, in the case $d=3$ we get
\begin{eqnarray}
&&\widehat{(2,1,2)}+\widehat{(1,1,0)}=27=0;\;\;\;\widehat{(2,1,2)+(1,1,0)}=\widehat{(0,2,2)}=24\nonumber\\
&&\widehat{(2,1,2)}\cdot\widehat{(1,1,0)}=23\cdot4=92=11;\;\;\;\widehat{(2,1,2)\cdot (1,1,0)}=\widehat{(2,1,0)}=5.
\end{eqnarray}
It is seen that $({\mathbb Z}_{3})^3$ is non-isomorphic to ${\mathbb Z}_{27}$.
Consequently our `local formalism' in the phase space $({\mathbb Z}_d\times{\mathbb Z}_d)^d$ is different from our `global formalism' in the phase space ${\mathbb Z}_{d^d}\times{\mathbb Z}_{d^d}$.
In this paper we introduce Fourier transforms in both cases.
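The failure of the map to respect componentwise addition and multiplication is easy to check in code; the following Python sketch reproduces the $d=3$ example above:

```python
# d = 3: the map j = (j0, j1, j2) -> jhat = j0 + 3 j1 + 9 j2 is a bijection
# between (Z_3)^3 and Z_27, but not a ring isomorphism.
d = 3

def encode(j):
    return sum(jr * d**r for r, jr in enumerate(j)) % d**d

def add_mod(j, k):      # componentwise addition in (Z_3)^3
    return tuple((jr + kr) % d for jr, kr in zip(j, k))

def mul_mod(j, k):      # componentwise multiplication in (Z_3)^3
    return tuple((jr * kr) % d for jr, kr in zip(j, k))

j, k = (2, 1, 2), (1, 1, 0)
print((encode(j) + encode(k)) % d**d, encode(add_mod(j, k)))   # 0 24
print((encode(j) * encode(k)) % d**d, encode(mul_mod(j, k)))   # 11 5
```

The two printed pairs disagree, which is exactly the non-isomorphism used in the text.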
We introduce the global Fourier transform in ${\mathfrak H}$:
\begin{eqnarray}
&&F_G=\frac{1}{\sqrt{d^d}}\sum _{{\widehat j},{\widehat k}}\omega_{d^d}(\widehat j\widehat k) \ket{j_0,...,j_{d-1}}\bra{k_0,...,k_{d-1}};\;\;\;\omega_{d^d}(\alpha)=\exp \left (i\frac{2\pi\alpha}{d^d}\right);\;\;\;\alpha\in{\mathbb Z}_{d^d}\nonumber\\
&&\omega_{d^d}(\widehat j\widehat k)=\omega_{d^d}[j_0k_0+d(j_0k_1+j_1k_0)+...+d^{d-1}(j_0k_{d-1}+...+j_{d-1}k_0)]\nonumber\\
&&F_G^4={\bf 1};\;\;\;F_GF_G^{\dagger}={\bf 1};\;\;\;F_G\ne F_L.
\end{eqnarray}
The index $G$ in the notation stands for global.
Using the fact that
\begin{eqnarray}
\frac{1}{{d^d}}\sum _{ \widehat k}\omega_{d^d}[(\widehat j+\widehat \ell )\widehat k] =\delta (\widehat j+\widehat \ell ,0);\;\;\;\widehat j, \widehat \ell\in {\mathbb Z}_{d^d}
\end{eqnarray}
in conjunction with the fact that ${\widehat j}+{\widehat \ell}=0$ (${\rm mod}(d^d)$) implies $j_r+\ell _r=0$ (${\rm mod}(d)$),
we prove that $F_G^2$ is the parity operator
\begin{eqnarray}
&&F_G^2=\frac{1}{{d^d}}\sum _{\widehat j, \widehat k, \widehat \ell}\omega_{d^d}[(\widehat j+\widehat \ell )\widehat k] \ket{j_0,...,j_{d-1}}\bra{\ell_0,...,\ell_{d-1}}=
\sum_{j_0,...,j_{d-1}}\ket{j_0,...,j_{d-1}}\bra{-j_0,...,-j_{d-1}}.
\end{eqnarray}
Also using Eq.(\ref{parity}) we prove that $F_L^2$ is the parity operator. Therefore
\begin{eqnarray}
F_G^2=F_L^2.
\end{eqnarray}
Extra care with the modular arithmetic of the indices is required in practical calculations.
$F_G$ is a global transformation, in the sense that it cannot be written as $U_0\otimes ...\otimes U_{d-1}$ where the $U_r$ are local unitary transformations.
Acting with $F_G$ on the basis $\ket{j_0,...,j_{d-1}}$ we get the globally dual basis
\begin{eqnarray}
&&\ket{j_0,...,j_{d-1}}_{\rm G}=F_G\ket{j_0,...,j_{d-1}}
=\frac{1}{\sqrt{d^d}}\bigotimes _{r=0}^{d-1}\left [\sum _{k_r=0}^{d-1}\omega_{d^d}[(j_0d^r+...+j_{d-r-1}d^{d-1})k_r]\ket{k_r}\right ]
\end{eqnarray}
The states $\ket{j_0,...,j_{d-1}}_{\rm G}$ are `global', in the sense that the coefficients $\omega_{d^d}[(j_0d^r+...+j_{d-r-1}d^{d-1})k_r]$ of the vectors in the $r$-component depend on all $j_0,...,j_{d-1}$.
Information from all components $j_0,...,j_{d-1}$ is needed to determine each of these coefficients.
In the local Fourier transform of Eq.(\ref{103}), the coefficients $\omega_{d}(j_rk_r)$ in the $r$-component depend only on $j_r$.
We will use the term `globally Fourier transformed factorisable states' for the states $F_G\ket{s}$ where $\ket{s}$ are factorisable states.
The example below shows that a globally Fourier transformed factorisable state is in general an entangled state, but in special cases it can be factorisable.
It is easily seen that
\begin{eqnarray}
&&_L\langle\ell _0,...,\ell_{d-1}\ket{j_0,...,j_{d-1}}_{\rm G}=\frac{1}{d^d}\sum_{\widehat k}\omega_{d^d}(\widehat j \widehat k)\omega_d[-(\ell_0k_0+...+\ell_{d-1}k_{d-1})]\nonumber\\
&&|\langle\ell _0,...,\ell_{d-1}\ket{j_0,...,j_{d-1}}_{\rm G}|^2=|\langle\ell _0,...,\ell_{d-1}\ket{j_0,...,j_{d-1}}_{\rm L}|^2=\frac{1}{d^d}
\end{eqnarray}
\begin{example}
In the case $d=3$, we act with $F_G$ on the factorisable state $\ket{j_0,j_1,j_2}$ and we get
\begin{eqnarray}\label{dd}
&&\ket{j_0,j_1,j_2}_{\rm G}
=\frac{1}{\sqrt{27}}\sum _{k_0,k_1,k_2}\omega_{27}[j_0k_0+3(j_1k_0+j_0k_1)+9(j_2k_0+j_1k_1+j_0k_2)]\ket{k_0,k_1,k_2}\nonumber\\
&&=\frac{1}{\sqrt{27}}\sum _{k_0}\omega_{27}[j_0k_0+3j_1k_0+9j_2k_0]\ket{k_0}\otimes
\sum _{k_1}\omega_{27}(3j_0k_1+9j_1k_1)\ket{k_1}\otimes
\sum _{k_2}\omega_{27}(9j_0k_2)\ket{k_2}
\end{eqnarray}
This is an example of a globally Fourier transformed factorisable state which is factorisable.
Also we act with $F_G$ on the factorisable state
\begin{eqnarray}
\ket{s}=a\ket{j_0,j_1,j_2}+b\ket{r_0,j_1,j_2};\;\;\;|a|^2+|b|^2=1,
\end{eqnarray}
and we get the following globally Fourier transformed factorisable state which is entangled (when $a,b\ne 0$):
\begin{eqnarray}\label{dd2}
F_G\ket{s}&=&\frac{a}{\sqrt{27}}\sum _{k_0}\omega_{27}[j_0k_0+3j_1k_0+9j_2k_0]\ket{k_0}\otimes
\sum _{k_1}\omega_{27}(3j_0k_1+9j_1k_1)\ket{k_1}\otimes
\sum _{k_2}\omega_{27}(9j_0k_2)\ket{k_2}\nonumber\\
&+&\frac{b}{\sqrt{27}}\sum _{k_0}\omega_{27}[r_0k_0+3j_1k_0+9j_2k_0]\ket{k_0}\otimes
\sum _{k_1}\omega_{27}(3r_0k_1+9j_1k_1)\ket{k_1}\otimes
\sum _{k_2}\omega_{27}(9r_0k_2)\ket{k_2}.
\end{eqnarray}
\end{example}
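As a numerical illustration of this example (assuming numpy, and using Schmidt ranks across two bipartite cuts as the factorisability test), the sketch below confirms that $F_G$ maps a basis state to a factorisable state, while a superposition with $r_0\ne j_0$ is mapped to an entangled state:

```python
import numpy as np

d = 3; D = d**d
# global Fourier transform for d = 3: (F_G)_{jhat, khat} = omega_27(jhat khat)/sqrt(27)
idx = np.arange(D)
FG = np.exp(2j * np.pi * np.outer(idx, idx) / D) / np.sqrt(D)

def basis(j0, j1, j2):
    e = np.zeros(D, dtype=complex)
    e[j0 + 3*j1 + 9*j2] = 1.0        # khat = k0 + 3 k1 + 9 k2 ordering
    return e

def ranks(psi):
    # Schmidt ranks across the (2|1,0) and (2,1|0) cuts; (1, 1) means fully factorisable
    t = psi.reshape(3, 3, 3)         # axes (j2, j1, j0)
    return (int(np.linalg.matrix_rank(t.reshape(3, 9), tol=1e-10)),
            int(np.linalg.matrix_rank(t.reshape(9, 3), tol=1e-10)))

print(ranks(FG @ basis(1, 2, 1)))                     # (1, 1): still factorisable
s = (basis(1, 2, 1) + basis(2, 2, 1)) / np.sqrt(2)    # a = b = 1/sqrt(2), r0 != j0
print(ranks(FG @ s))                                  # (2, 2): entangled across both cuts
```

For a tripartite pure state, Schmidt rank one across two of the three bipartitions already implies full factorisability, so the two cuts used here suffice.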
\subsection{Globally dual quantities}
Acting with $F_G$ on the quantities in section \ref{Q} we get their duals denoted with a `hat'.
We introduce them because they depend on the entangling off-diagonal elements, which do not enter in the quantities in section \ref{Q}.
For example, we introduce the projectors
\begin{eqnarray}
\widehat \Pi(i,j)=F_G\Pi(i,j)F_G^{\dagger};\;\;\;
\widehat \Pi(f)=F_G\Pi[f(0),...,f(d-1)]F_G^{\dagger}=\prod_i\widehat \Pi[i,f(i)],
\end{eqnarray}
the row Markov matrix $\widehat q_\rho$ with the globally dual probabilities
\begin{eqnarray}
\widehat q_\rho(i,j)={\rm Tr}[\rho\widehat \Pi(i,j)]={\rm Tr}[F_G^\dagger\rho F_G \Pi(i,j)]=q_{F_G^\dagger\rho F_G}(i,j),
\end{eqnarray}
the globally dual products of probabilities
\begin{eqnarray}
{\widehat {\mathfrak M}}_\rho(f)=\prod _i \widehat q_\rho(i,f(i))={\mathfrak M}_{F_G^\dagger\rho F_G}(f),
\end{eqnarray}
and the globally dual joint probabilities
\begin{eqnarray}\label{qaz}
\widehat q_\rho(f)={\rm Tr}[\rho \widehat \Pi(f)]=q_{F_G^\dagger\rho F_G}(f).
\end{eqnarray}
So the globally dual probabilities for the density matrix $\rho$, are the probabilities of the Fourier transformed density matrix $F_G^\dagger\rho F_G$.
Using the globally dual probabilities we can calculate the globally dual correlation coefficients
${\widehat {\cal C}}_\rho (f)$ using Eq.(\ref{cor}), and we see that
\begin{eqnarray}
{\widehat {\cal C}}_\rho (f)={\cal C}_{F_G^\dagger\rho F_G} (f).
\end{eqnarray}
We also calculate the globally dual Gini vector ${\widehat{\cal G}}(\rho)=({\widehat {\cal G}}_0,...,{\widehat {\cal G}}_{d-1})$ using Eqs.(\ref{G1}), (\ref{G1V}) and the globally dual total Gini index $\widehat {\cal G}_T(\rho)$ using Eq.(\ref{G2}),
and we get
\begin{eqnarray}
{\widehat{\cal G}}(\rho)={\cal G}(F_G^\dagger\rho F_G);\;\;\;\widehat {\cal G}_T(\rho)={\cal G}_T(F_G^\dagger\rho F_G).
\end{eqnarray}
Since ${\widehat{\cal G}}_i(\rho)$ and ${\cal G}_i(\rho)$ are related through a Fourier transform, we have the uncertainty relations
\begin{eqnarray}\label{U1}
&&D_i(\rho)\ge {\widehat \eta} _d>0;\;\;\;D_i(\rho)=2\frac{d-1}{d+1}-[{\cal G}_i(\rho)+{\widehat {\cal G}}_i(\rho)]\nonumber\\
&&{\widehat \eta} _{d}=2\frac{d-1}{d+1}-\sup _{\rho }[{\cal G}_i(\rho)+{\widehat {\cal G}}_i(\rho)].
\end{eqnarray}
All the component systems are the same, and therefore ${\widehat \eta} _{d}$ does not depend on the index $i$.
Also the ${\widehat{\cal G}}_T(\rho)$ and ${\cal G}_T(\rho)$ are related through a Fourier transform, and we have the uncertainty relation
\begin{eqnarray}\label{U2}
&&D_T(\rho)\ge {\widehat \eta} _{d^d}>0;\;\;\;D_T(\rho)=2\frac{d^d-1}{d^d+1}-[{\cal G}_T(\rho)+{\widehat{\cal G}}_T(\rho)]\nonumber\\
&&{\widehat \eta} _{d^d}=2\frac{d^d-1}{d^d+1}-\sup _{\rho }[{\cal G}_T(\rho)+{\widehat {\cal G}}_T(\rho)].
\end{eqnarray}
\begin{example}
We consider a tripartite system of qutrits ($d=3$). In the Hilbert space $H_3\otimes H_3\otimes H_3$
we consider the density matrices
\begin{eqnarray}
&&\rho=\ket{r}\bra {r};\;\;\;\ket{r}=\frac{1}{\sqrt{3}}[\ket{0,0,0}+\ket{1,1,0}+\ket{2,2,1}]\nonumber\\
&&\sigma=\frac{1}{3}[\ket{0,0,0}\bra{0,0,0}+\ket{1,1,0}\bra{1,1,0}+\ket{2,2,1}\bra{2,2,1}],
\end{eqnarray}
Using the global Fourier transform described in Eq.(\ref{dd}), we calculated
the row Markov matrix $\widehat q_\rho$ that contains the globally dual probabilities:
\begin{eqnarray}
\widehat q_\rho=\begin{pmatrix}
0.325&0.422&0.253\\
0.394&0.322&0.284\\
0.333&0.333&0.333
\end{pmatrix};\;\;\;
\widehat q_\sigma=\begin{pmatrix}
0.333&0.333&0.333\\
0.333&0.333&0.333\\
0.333&0.333&0.333
\end{pmatrix}
\end{eqnarray}
We then put the probabilities in each row in ascending order, and we calculated the globally dual Gini vectors
\begin{eqnarray}
{\widehat{\cal G}}(\rho)=(0.085, 0.055,0);\;\;\;{\widehat{\cal G}}(\sigma)=(0,0,0).
\end{eqnarray}
We also calculated the $27$ globally dual joint probabilities in Eq.(\ref{qaz}), put them in ascending order,
and found the globally dual total Gini indices
\begin{eqnarray}
\widehat {\cal G}_T(\rho)=0.430;\;\;\;\widehat {\cal G}_T(\sigma)=0.
\end{eqnarray}
The results for $\rho$ are different from the results for $\sigma$, and the difference between these two density matrices is the off-diagonal entangling elements.
Therefore the globally dual quantities depend on the off-diagonal entangling elements.
\end{example}
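A sketch of this numerical computation is given below (assuming numpy; the row ordering follows the $\widehat j=j_0+j_1d+j_2d^2$ convention, and the row sums as well as the uniformity of $\widehat q_\sigma$ are convention-independent checks):

```python
import numpy as np

d = 3; D = d**d          # three qutrits, dim(H) = 27
idx = np.arange(D)
FG = np.exp(2j * np.pi * np.outer(idx, idx) / D) / np.sqrt(D)   # global Fourier transform

def encode(j0, j1, j2):                 # jhat = j0 + 3 j1 + 9 j2
    return j0 + 3*j1 + 9*j2

def dual_markov(psi):
    # qhat(i, j) = Tr[F_G^dag rho F_G Pi(i, j)] for rho = |psi><psi|
    p = np.abs((FG.conj().T @ psi).reshape(3, 3, 3)) ** 2       # axes (j2, j1, j0)
    return np.array([p.sum(axis=(0, 1)),                        # component 0
                     p.sum(axis=(0, 2)),                        # component 1
                     p.sum(axis=(1, 2))])                       # component 2

# |r> = (|0,0,0> + |1,1,0> + |2,2,1>)/sqrt(3)
r = np.zeros(D, dtype=complex)
r[[encode(0, 0, 0), encode(1, 1, 0), encode(2, 2, 1)]] = 1 / np.sqrt(3)
qhat_rho = dual_markov(r)
print(np.round(qhat_rho, 3))

# sigma is an equal mixture of the three basis states; its dual probabilities are uniform
qhat_sigma = sum(dual_markov(np.eye(D, dtype=complex)[:, encode(*f)])
                 for f in [(0, 0, 0), (1, 1, 0), (2, 2, 1)]) / 3
```

Every entry of `qhat_sigma` equals $1/3$, because each globally Fourier transformed basis state has all amplitudes of modulus $1/\sqrt{27}$.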
\subsection{Open problems}
In this subsection we mention briefly some open problems.
\begin{itemize}
\item
The `local formalism' in the phase space $({\mathbb Z}_d\times{\mathbb Z}_d)^d$ is different from the `global formalism' in the phase space ${\mathbb Z}_{d^d}\times{\mathbb Z}_{d^d}$.
In a multipartite system where the various parties are distinct physical systems that do not interact with each other, one can argue that the
phase space $({\mathbb Z}_d\times{\mathbb Z}_d)^d$ is more physical. But if the various parties interact with each other, this is no longer true.
In this paper we introduced Fourier transforms in both cases, and in further work other phase space quantities (e.g., displacement operators, Wigner and Weyl functions, etc.) can also be defined for the two cases.
Comparison of the two formalisms for various examples might shed light on the
relationship between global transformations and entanglement.
\item
We can define `globally Fourier transformed separable states' as $F_G\rho_{\rm sep}F_G^{\dagger}$ where $\rho_{\rm sep}$ is a separable mixed state.
Comparison of the two phase space formalisms for various examples might shed light on various aspects of entanglement for mixed states.
\item
In many multipartite systems there is no natural ordering of the various components.
Therefore instead of using Eq.(\ref{sss}) we can first perform a permutation $\pi$ on the various components:
\begin{eqnarray}
j=(j_0,...,j_{d-1})\;\rightarrow\;j_\pi=(j_{\pi(0)},...,j_{\pi(d-1)})\;\rightarrow\;\widehat j_\pi=j_{\pi(0)}+j_{\pi (1)}d+...+j_{\pi(d-1)}d^{d-1}.
\end{eqnarray}
So there are many global Fourier transforms (one for each permutation).
\item
A more general problem would be to consider
an $n$-partite system composed of $n$ components, each of which is a qudit, with $n\ne d$.
In the language of permutations with repetitions we have sequences of $n$ integers from ${\mathbb Z}_d$, and we deal with non-square matrices.
It is not clear how the material in section 4 can be generalised to non-square matrices.
\item
In this paper a mathematical structure (permutations with repetitions) is transformed with quantum techniques into a more general structure (the fundamental difference is the
concept of superpositions).
This could be applied to many algebraic structures.
\end{itemize}
\section{Discussion}
We have blended ideas from three different areas.
\begin{itemize}
\item
The first area is row Markov matrices.
We introduced expansions for row Markov matrices, in terms of matrices related to permutations with repetitions.
We interpreted this in terms of random safes described by the Markov matrices.
In the expansion of Eq.(\ref{ghd}) the coefficients are joint probabilities, and in the expansion in Eq.(\ref{gh}) the coefficients are products of probabilities.
The difference between the two are the correlations in Eq.(\ref{COR}).
\item
The second area is Lorenz values and the Gini index.
We used them to quantify the sparsity of probability vectors.
The properties of Lorenz values have been presented in propositions \ref{L1}, \ref{pro78}, and the properties of the Gini index in propositions \ref{proG1}, \ref{pro56}, \ref{GG3}.
In the context of random safes we introduced in Eqs.(\ref{G1}),(\ref{G1V}) the Gini vector that describes the sparsity
in the local probability vector for each of the integers in the sequence that opens a random safe.
We also introduced the total Gini index of Eq.(\ref{G2}) that describes the sparsity of the
joint probabilities.
\item
The third area is multipartite quantum systems.
We viewed them as quantum permutations with repetitions and as quantum safes, and then used the above two formalisms in a quantum context.
In section \ref{markov} we presented the Markov matrix formalism in a quantum context.
This led to novel statistical quantities that describe classical and quantum correlations in multipartite quantum systems.
Local Fourier transforms led to locally dual statistical quantities.
Global Fourier transforms led to globally dual statistical quantities which
depend on off-diagonal elements that entangle the various components of the system.
In Eqs.(\ref{UN1}),(\ref{UN2}),(\ref{U1}),(\ref{U2}) we gave uncertainty relations in terms of Gini indices.
\end{itemize}
The first two parts are related to classical probabilistic multipartite systems (random safes).
The third part is related to quantum multipartite systems (quantum safes).
The work introduces novel methods into multipartite quantum systems.
It also shows that the quantum concept of superposition can be introduced in some mathematical areas (in our case permutations with repetitions) and generalise them into new areas.
\newpage
\begin{table}
\caption{The expansion in Eq.(\ref{gh}) (which assumes independence) for the row Markov matrix in Eq.(\ref{AB1}).
Only $8$ of the terms are assigned non-zero probability, and the corresponding matrices $M_f$ together with their products of probabilities ${\mathfrak M}_q(f)$ for various functions $f\in{\cal F}$ are shown in the first three columns.
A different expansion for the same Markov matrix (in the presence of correlations) is given in Eq.(\ref{kk}) and the corresponding joint probabilities $q(f)$ and correlation coefficients ${\cal C}_q(f)=q(f)-{\mathfrak M}_q(f)$ are shown
in the last two columns.}
\def\arraystretch{2}
\begin{tabular}{|c|c|c||c|c|}\hline
$(f(0),f(1),f(2))$&$M_f$&${\mathfrak M}_q(f)$&$q(f)$&${\cal C}_q(f)$\\\hline
$(0,1,1)$&$\begin{pmatrix}
1&0&0\\
0&1&0\\
0&1&0
\end{pmatrix}$&
$a^2(1-b)$&$0$&$-a^2(1-b)$\\\hline
$(0,1,2)$&$\begin{pmatrix}
1&0&0\\
0&1&0\\
0&0&1
\end{pmatrix}$&
$a^2b$&$a$&
$a-a^2b$\\\hline
$(1,1,1)$&$\begin{pmatrix}
0&1&0\\
0&1&0\\
0&1&0
\end{pmatrix}$&
$a(1-a)(1-b)$&$0$&
$-a(1-a)(1-b)$\\\hline
$(1,1,2)$&$\begin{pmatrix}
0&1&0\\
0&1&0\\
0&0&1
\end{pmatrix}$&
$a(1-a)b$&$0$&
$-a(1-a)b$\\\hline
$(0,2,1)$&$\begin{pmatrix}
1&0&0\\
0&0&1\\
0&1&0
\end{pmatrix}$&
$a(1-a)(1-b)$&$0$&
$-a(1-a)(1-b)$\\\hline
$(0,2,2)$&$\begin{pmatrix}
1&0&0\\
0&0&1\\
0&0&1
\end{pmatrix}$&
$a(1-a)b$&$0$&
$-a(1-a)b$\\\hline
$(1,2,1)$&$\begin{pmatrix}
0&1&0\\
0&0&1\\
0&1&0
\end{pmatrix}$&
$(1-a)^2(1-b)$&$1-b$&
$(1-b)(2a-a^2)$\\\hline
$(1,2,2)$&$\begin{pmatrix}
0&1&0\\
0&0&1\\
0&0&1
\end{pmatrix}$&
$(1-a)^2b$&$b-a$&
$2ab-a-a^2b$\\\hline
\end{tabular} \label{t1}
\end{table}
\begin{table}
\caption{The joint probabilities $q(f(0),f(1))$, the products of probabilities ${\mathfrak M}(f(0),f(1))$, and the correlations ${\cal C}(f(0),f(1))$
for the density matrices $\rho, \sigma$ in Eq.(\ref{cde}).}
\def\arraystretch{2}
\begin{tabular}{|c||c|c|c||c|c|c|}\hline
$(f(0),f(1))$&$q_\rho(f)$&${\mathfrak M}_\rho(f)$&${\cal C}_\rho(f)$&$q_\sigma(f)$&${\mathfrak M}_\sigma(f)$&${\cal C}_\sigma(f)$\\\hline
$(0,0)$&$|a|^2$&$|a|^4$&$|ab|^2$&$|c|^2$&$|c|^2+|ed|^2$&$-|de|^2$\\\hline
$(0,1)$&$0$&$|ab|^2$&$-|ab|^2$&$|e|^2$&$|e|^2-|ed|^2$&$|ed|^2$\\\hline
$(1,0)$&$0$&$|ab|^2$&$-|ab|^2$&$|d|^2$&$|d|^2-|ed|^2$&$|ed|^2$\\\hline
$(1,1)$&$|b|^2$&$|b|^4$&$|ab|^2$&$0$&$|e|^2|d|^2$&$-|ed|^2$\\\hline
\end{tabular} \label{t2}
\end{table}
\section{Introduction}
The smallness of neutrino masses may be explained by the presence of right-handed neutrinos (RHNs) with
large Majorana mass realizing the seesaw mechanism \cite{rhn}.
It is conceivable that a dark matter (DM) candidate couples to a RHN and thus
its pair-annihilation to a RHN pair is responsible for the DM freeze-out.
Such a situation can be realized specifically when RHNs are introduced
in association with an extended ($B-L$) gauge symmetry \cite{Bandyopadhyay:2011qm,Bandyopadhyay:2017bgh}. In this scenario, an interesting feature arises in the process of DM thermal freeze-out.
Due to a tiny neutrino Yukawa coupling of a RHN with lepton and Higgs doublet, the RHN may not be fully thermalized and thus the observed DM relic density can be achieved by the DM annihilation rate different from the standard freeze-out value \cite{Bandyopadhyay:2011qm,Bandyopadhyay:2017bgh}.
Such a feature has been realized also in various scenarios \cite{Dror:2016rxc,Okawa:2016wrr,Kopp:2016yji}.
The RHN as a portal to DM was suggested in a simple setup assuming the coupling
$N \chi \phi$ where a fermion $\chi$ or a scalar $\phi$ can be a DM candidate \cite{posp07},
and studied extensively in Refs.~\cite{falk09,gonz16,esc16,tang16,camp17,batell17,Chianese:2018dsz}.
In this paper, we explore the enlarged parameter space including
the RHN Yukawa effect to investigate how it is constrained by
the thermal DM relic density, direct and indirect detections.
We will assume that DM is the fermion field $\chi$, and thus the nucleon-DM scattering arises at one-loop through the $\phi$-$\phi$-Higgs coupling.
The rest of the paper is organized as follows. In Sec.~\ref{sec:relic}, after describing the RHN portal structure with a fermionic DM candidate, we discuss the impact of neutrino Yukawa couplings to the thermal freeze-out condition of the DM pair annihilation to a RHN pair.
This allows us to identify parameter regions satisfying the observed DM relic density,
which are constrained by indirect detection experiments. Applying the latest Fermi-LAT and H.E.S.S. data on gamma-ray signals, produced by RHN decays in our scenario, we put combined constraints on the model parameter space in Sec.~\ref{sec:indirect}.
In Sec.~\ref{sec:Direct}, we consider a direct detection process arising from one-loop induced DM-DM-Higgs coupling and limits from the recent data and future sensitivity on spin-independent (SI) DM scatterings.
Finally we conclude in Sec.~\ref{sec:conclusion}.
\section{DM freeze-out including neutrino Yukawa effect} \label{sec:relic}
Let us consider a simple scenario for RHN-portal DM based on the Type-I seesaw. The Lagrangian of such a construct contains the following new terms
\begin{align}
\label{eq:Lag}
-\mathcal{L} \subset& \frac{1}{2} m_0^2 \phi^2 +\kappa \phi^2 |H|^2 + \Big\{ \frac{1}{2} m_\chi \chi \chi + \frac{1}{2} m_N NN \nonumber \\
+& y_N LHN + \lambda N \chi \phi + {\rm h.c.} \Big\}.
\end{align}
Here $L$ and $H$ are the SM $SU(2)_L$ doublets and $N$ is a Majorana fermion (RHN).
There are two new fields in the dark sector: a real scalar $\phi$ and a Majorana fermion $\chi$, which are singlets under the SM gauge group. For the stability of a DM candidate, we assign a $Z_2$ parity under which the dark sector fields are odd.
After the electroweak symmetry breaking, $H=(v+h)/\sqrt{2}$, we get the scalar mass $m_\phi^2=m_0^2 + \kappa v^2$ and the $h$-$\phi$-$\phi$ coupling $\kappa v$.
There are two couplings $\lambda$ and $\kappa$
which connect the dark sector ($\phi$ and $\chi$) to the visible sector.
When $\phi$ is a thermal DM candidate, the Higgs portal coupling $\kappa$ plays an important role.
In this case, the parameter space is highly constrained by various considerations including the latest XENON1T result \cite{Athron:2018ipf}. The RHN-portal process, $\phi\phi \to NN$ through the $t$-channel exchange of $\chi$, can also be operative to produce the right thermal relic density. Notice that a similar situation was studied in Ref.~\cite{Bandyopadhyay:2011qm} where $\phi$ corresponds to a right-handed sneutrino DM.
In this paper, we concentrate on the fermion $\chi$ as a DM candidate.
Our results on the RHN-portal property can also be applied to the case of the scalar $\phi$ as a DM candidate.
When $\chi$ is lighter than $\phi$, it becomes a viable DM candidate.
For $m_N<m_\chi$, the DM particle $\chi$ can annihilate to the RHN pair via a $t$-channel exchange of $\phi$ (Fig.~\ref{dia:relic}(a)). The thermal average annihilation cross section is given by,
\begin{align}\label{sigv}
\hspace*{-0.2cm}\langle\sigma v \rangle_{\chi\chi\to NN} = \frac{\lambda^4 \left(m_\chi + m_N \right)^2}{16 \pi \left( m_\chi^2 +m_\phi^2 -m_N^2 \right)^2} \left(1- \frac{m_N^2}{m_\chi^2}\right)^{1/2}\!\!\!\!\!.
\end{align}
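As a rough numerical cross-check, Eq.~\eqref{sigv} can be evaluated at the benchmark point used later in the figures ($m_\chi=n\,m_N$, $m_\phi=n\,m_\chi$ with $n=1.2$, $m_N=300$\,GeV, $\lambda=0.4$). The sketch below works in natural units (result in GeV$^{-2}$) and is purely illustrative:

```python
import math

def sigv_chichi_NN(m_chi, m_N, m_phi, lam):
    """s-wave <sigma v> for chi chi -> N N via t-channel phi exchange,
    Eq. (sigv), in natural units (result in GeV^-2)."""
    if m_N >= m_chi:
        return 0.0  # channel kinematically closed
    pref = lam**4 * (m_chi + m_N)**2 / (
        16.0 * math.pi * (m_chi**2 + m_phi**2 - m_N**2)**2)
    return pref * math.sqrt(1.0 - (m_N / m_chi)**2)

# benchmark: m_chi = n m_N = m_phi / n with n = 1.2, m_N = 300 GeV
n, m_N, lam = 1.2, 300.0, 0.4
sv = sigv_chichi_NN(n * m_N, m_N, n**2 * m_N, lam)
print(f"<sigma v> = {sv:.2e} GeV^-2")
```

At this benchmark the cross section comes out of the same order as the canonical thermal value $\sim2\times10^{-9}\,$GeV$^{-2}$, consistent with the relic-density curves discussed below.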
There are other relevant annihilation processes, such as $\phi \phi \to \chi \chi\,(NN)$, $\phi \phi \to $ SM particles, and the co-annihilation channel $\chi\phi\to N \to$ SM particles, which can contribute to the evaluation of the DM number density. We quote these expressions in~\ref{sec:appendix}. The co-annihilation channel is suppressed for two reasons: first, the tiny Yukawa coupling $y_N$; second, the choice of parameter space away from resonant $N$ production. It therefore has an insignificant effect on the freeze-out mechanism.
\begin{figure}[h]
\begin{center}
\includegraphics[width=1.\linewidth]{dias.pdf}
\caption{The Feynman diagrams for the DM particle $\chi$ annihilation to RHN $N$ pair (a) and the decay of $N$ to SM particles (b), (c) are shown. }\label{dia:relic}
\end{center}
\end{figure}
We start with the coupled Boltzmann equations written in terms of the variables $Y_i\equiv n_i/s$, the number of particles of species $i$ per comoving volume, where $n_i$ is the number density and $s$ is the entropy density of the Universe, and $x\equiv m_\chi/T$.
The Boltzmann equations relevant for our study are
\begin{align}
\label{eq:dYDM}
\frac{d Y_\chi}{dx}=& -\frac{1}{x^2} \frac{s(m_\chi)}{H(m_\chi)} \langle \sigma v \rangle_{\chi\chi\to NN}\!\! \left(\!\!Y_\chi^2 \!-\! \left(\frac{Y_\chi^{\text{eq}}}{Y_N^{\text{eq}}}\right)^{\!\!\!2}Y_N^2\!\right) \nonumber \\
+ \frac{1}{x^2}& \frac{s(m_\chi)}{H(m_\chi)} \langle \sigma v \rangle_{\phi\phi\to\chi\chi}\! \left(\!\!Y_\phi^2 \!-\! \left(\!\frac{Y_\phi^{\text{eq}}}{Y_\chi^{\text{eq}}}\right)^{\!\!\!2}\!Y_\chi^2\!\right)\!, \\
\label{eq:dYphi}
\frac{d Y_\phi}{dx}=& - \frac{1}{x^2} \frac{s(m_\chi)}{H(m_\chi)} \langle \sigma v \rangle_{\phi\phi\to\chi\chi} \left(\!Y_\phi^2 \!-\! \left(\!\frac{Y_\phi^{\text{eq}}}{Y_\chi^{\text{eq}}}\right)^{\!\!\!2}Y_\chi^2\right) \nonumber \\
-&\frac{1}{x^2} \frac{s(m_\chi)}{H(m_\chi)} \langle \sigma v \rangle_{\phi\phi\to NN} \left(\!Y_\phi^2 \!-\! \left(\frac{Y_\phi^{\text{eq}}}{Y_N^{\text{eq}}}\right)^{\!\!\!2} Y_N^2\!\right) \nonumber \\
-&\frac{1}{x^2} \frac{s(m_\chi)}{H(m_\chi)} \langle \sigma v \rangle_{\phi\phi\to {\rm SM}} \left(Y_\phi^2 - {Y_\phi^{\text{eq}}}^2\right), \\
\label{eq:dYN}
\frac{d Y_N}{dx} =& \frac{1}{x^2} \frac{s(m_\chi)}{H(m_\chi)} \langle \sigma v \rangle_{\chi\chi \to NN} \left(\!Y_\chi^2 \! -\! \left(\frac{Y_\chi^{\text{eq}}}{Y_N^{\text{eq}}}\right)^{\!\!\!2} Y_N^2\!\right) \nonumber \\
+& \frac{1}{x^2} \frac{s(m_\chi)}{H(m_\chi)} \langle \sigma v \rangle_{\phi\phi\to NN} \left(\!Y_\phi^2 \!-\! \left(\frac{Y_\phi^{\text{eq}}}{Y_N^{\text{eq}}}\right)^{\!\!\!2}Y_N^2\!\right) \nonumber \\
-&\frac{\Gamma}{H(m_\chi)} x \left(Y_N -Y_N^{\text{eq}}\right).
\end{align}
\noindent
The entropy density $s$ and the Hubble parameter $H$ evaluated at the DM mass are
$$
s(m_\chi)= \frac{2 \pi^2 }{45} g_*\, m_\chi^3, \quad H(m_\chi)= \frac{\pi}{\sqrt{90}} \frac{\sqrt{g_*}}{M^r_{pl}} m_\chi^2, $$ where $M^r_{pl}= 2.44\times {10}^{18}\ensuremath{\mathrm{Ge\kern -0.1em V}}$ is the reduced Planck mass and $Y_i^{\text{eq}}$ is the equilibrium yield of the $i$-th particle, given by
\begin{align}
\label{eq:Yi_eq}
Y_i^{\text{eq}}&\equiv\frac{n_i^{\text{eq}}}{s} =\frac{45}{2\pi^4} \sqrt{\frac{\pi}{8}}\left( \frac{g_i}{g_*}\right) \left({\frac{m_i}{T}}\right)^{3/2} e^{-\frac{m_i}{T}} \nonumber \\
&\simeq 0.145 \left( \frac{g_i}{100}\right) \left( \frac{m_i}{m_\chi}\right)^{3/2} x^{3/2} e^{-\frac{m_i}{m_\chi} x}.
\end{align}
Here in the last line of Eq.~\eqref{eq:Yi_eq} we use the effective number of relativistic degrees of freedom $g_*\simeq100$ and the internal degrees of freedom $g_{\chi,N}=2$ for the two Majorana particles $\chi$, $N$ and $g_\phi=1$ for $\phi$ being the real scalar.
The first terms on the right-hand side of Eqs.~\eqref{eq:dYDM} and \eqref{eq:dYN} denote the forward and backward reactions of $\chi\chi \to NN$ through $t$-channel $\phi$ exchange, shown in Fig.~\ref{dia:relic}(a). It can be seen from Eq.~\eqref{eq:Lag} that the Yukawa interaction of the right-handed neutrino allows it to decay to SM particles via the mixing with the SM neutrinos proportional to the coupling $y_N$. The third term of Eq.~\eqref{eq:dYN} describes the decay and the inverse decay of $N$, shown in Fig.~\ref{dia:relic}(b) and (c), where $\Gamma$ is the total decay width of $N$. Below we quote the partial decay widths of $N$ for the three possible channels $h\nu$, $\ell^\pm W^\mp$ and $Z\nu$, respectively.
\begin{align}
\label{eq:Ndecayh}
\Gamma(N\to h \nu)=&\Gamma(N\to h \bar{\nu}) \nonumber \\
=& \frac{y_N^2 m_N}{64\pi} \left(1- \frac{m_h^2}{m_N^2}\right)^2, \\
\Gamma(N\to \ell^- W^+)= &\Gamma(N\to \ell^+ W^-) \nonumber \\
= \frac{y_N^2 m_N}{32\pi} & \left(1- \!\frac{m_W^2}{m_N^2}\right)^2\!\! \left(1+ 2 \frac{m_W^2}{m_N^2}\right)\!, \\
\label{eq:NdecayZ}
\Gamma(N\to Z \nu ) = &\Gamma(N\to Z \bar{\nu}) \nonumber \\
= \frac{y_N^2 m_N}{64\pi} & \left(1- \frac{m_Z^2}{m_N^2}\right)^2 \!\! \left(1+ 2 \frac{m_Z^2}{m_N^2}\right).
\end{align}
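The partial widths \eqref{eq:Ndecayh}--\eqref{eq:NdecayZ}, which feed the decay term of Eq.~\eqref{eq:dYN}, can be coded directly. The rounded SM masses below are illustrative inputs of this sketch, not values taken from the paper's numerics:

```python
import math

M_H, M_W, M_Z = 125.1, 80.4, 91.2  # rounded SM masses in GeV (inputs assumed here)

def gamma_h_nu(yN, mN):
    """Gamma(N -> h nu) = Gamma(N -> h nubar), Eq. (eq:Ndecayh)."""
    return yN**2 * mN / (64.0 * math.pi) * (1.0 - M_H**2 / mN**2)**2

def gamma_W_l(yN, mN):
    """Gamma(N -> l- W+) = Gamma(N -> l+ W-)."""
    return (yN**2 * mN / (32.0 * math.pi)
            * (1.0 - M_W**2 / mN**2)**2 * (1.0 + 2.0 * M_W**2 / mN**2))

def gamma_Z_nu(yN, mN):
    """Gamma(N -> Z nu) = Gamma(N -> Z nubar), Eq. (eq:NdecayZ)."""
    return (yN**2 * mN / (64.0 * math.pi)
            * (1.0 - M_Z**2 / mN**2)**2 * (1.0 + 2.0 * M_Z**2 / mN**2))

def gamma_total(yN, mN):
    # factor 2: nu + nubar in the h and Z channels, both W charge states
    return 2.0 * (gamma_h_nu(yN, mN) + gamma_W_l(yN, mN) + gamma_Z_nu(yN, mN))
```

For $m_N=300$\,GeV the total width scales roughly as $\Gamma\simeq 11\,y_N^2$\,GeV, so $\Gamma\sim10^{-15}$\,GeV of Fig.~\ref{fig:density}(b) corresponds to $y_N\sim10^{-8}$, in the range shown in Fig.~\ref{fig:sigV}.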
The relic abundance of the DM candidate $\chi$ can be evaluated as
\begin{align}
\label{eq:relic}
\Omega h^2 = \frac{m_\chi s_0 Y_\chi(\infty)}{\rho_c/h^2},
\end{align}
where $s_0=2890$ cm$^{-3}$ is the current entropy density of the Universe and $\rho_c/h^2=1.05\times 10^{-5}\, \ensuremath{\mathrm{Ge\kern -0.1em V}}/ $cm$^3$ is the critical density. $Y_\chi(\infty)$ is the asymptotic value of the actual number of $\chi$ per comoving volume obtained from numerical solutions of the above Boltzmann equations. We illustrate the effect of decay and inverse decay of RHN in the evaluation of DM density, for a benchmark case, in Fig.~\ref{fig:density}. It can be seen that, in this case, the contribution of scalar DM $\phi$ to relic density is negligible compared to the Majorana fermion $\chi$.
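When the RHN stays in thermal equilibrium, the system effectively reduces to a single Boltzmann equation for $Y_\chi$ with the equilibrium yield of Eq.~\eqref{eq:Yi_eq}. A minimal implicit (backward-Euler) integration of that reduced equation, sketched below in pure Python, reproduces the order of magnitude $\Omega h^2\sim0.1$ for the benchmark above; it is a simplified cross-check, not the full coupled system of Eqs.~\eqref{eq:dYDM}--\eqref{eq:dYN}:

```python
import math

g_star, M_pl = 100.0, 2.44e18   # effective dof and reduced Planck mass (GeV)
s0, rho_c_h2 = 2890.0, 1.05e-5  # cm^-3 and GeV cm^-3 (values from the text)

def Y_eq(x, g_i=2.0):
    # Eq. (eq:Yi_eq) for m_i = m_chi; prefactor 45/(2 pi^4) sqrt(pi/8) ~ 0.145
    return (45.0 / (2.0 * math.pi**4) * math.sqrt(math.pi / 8.0)
            * (g_i / g_star) * x**1.5 * math.exp(-x))

def omega_h2(m_chi, sigv, x_max=500.0, h=0.01):
    """Backward-Euler solution of dY/dx = -(C/x^2)(Y^2 - Y_eq^2) with
    C = s(m_chi)/H(m_chi) * <sigma v>; the implicit step keeps the
    stiff equilibrium phase stable."""
    C = (2.0 * math.pi * math.sqrt(90.0) / 45.0) * math.sqrt(g_star) * m_chi * M_pl * sigv
    x, Y = 1.0, Y_eq(1.0)
    while x < x_max:
        x += h
        a = h * C / x**2  # solve a Y^2 + Y - (Y_old + a Y_eq^2) = 0 for Y
        Y = (-1.0 + math.sqrt(1.0 + 4.0 * a * (Y + a * Y_eq(x)**2))) / (2.0 * a)
    return m_chi * s0 * Y / rho_c_h2  # Eq. (eq:relic)

print(omega_h2(360.0, 2.4e-9))  # order 0.1 for this benchmark (rough check)
```

The implicit step solves a quadratic at each $x$, which avoids the tiny step sizes an explicit integrator would need while $Y$ tracks $Y^{\text{eq}}$.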
Depending on the flavor structure of the Yukawa coupling $y_N$, the RHN decays differently to each lepton flavor,
which will lead to a different prediction for indirect detection.
For our analysis of indirect detection, we will assume $N$ decaying equally to three lepton flavors.
\begin{figure*}[!h]
\begin{center}
\mbox{\hskip -20 pt \subfigure[]{\includegraphics[width=0.45\linewidth]{pl0.pdf}}
\subfigure[]{\includegraphics[width=0.45\linewidth]{pl1.pdf}}}
\mbox{\hskip -20 pt
\subfigure[]{\includegraphics[width=0.45\linewidth]{pl2.pdf}}
\subfigure[]{\includegraphics[width=0.45\linewidth]{pl3.pdf}}}
\caption{The number of $\chi$, $\phi$ and $N$ per comoving volume is shown by the blue dashed, green dot-dashed and red dotted curves, respectively. The panels (a) to (d) are obtained by solving the coupled Boltzmann equations (Eqs.~\eqref{eq:dYDM}--\eqref{eq:dYN}) with the total decay width $\Gamma$ of $N$ set to $10^{-10}\, \ensuremath{\mathrm{Ge\kern -0.1em V}}$, $10^{-15}\,\ensuremath{\mathrm{Ge\kern -0.1em V}}$, $10^{-20}\,\ensuremath{\mathrm{Ge\kern -0.1em V}}$ and $0\, \ensuremath{\mathrm{Ge\kern -0.1em V}}$, respectively. The effect of the decay term is evident from the plots. The masses of $\chi$, $N$, $\phi$ are assumed to follow $m_\chi=n\, m_N= 1/n\, m_\phi$ with $n=1.2$, $m_N=300$\,GeV, and the couplings are $\lambda=0.4$, $\kappa=1$. The observed relic density is satisfied in panel (b) with $\Gamma= 10^{-15}$\,GeV.}
\label{fig:density}
\end{center}
\end{figure*}
\section{Indirect Detection} \label{sec:indirect}
\begin{figure*}[!h]
\begin{center}
\includegraphics[width=0.49\linewidth]{mDM-sigV-R.pdf}
\includegraphics[width=0.49\linewidth]{plNew.pdf}
\caption{The left and right panels show the allowed parameter space in the
($m_\chi$, $\langle \sigma v\rangle$) and ($m_N$, $y_N$) planes, respectively.
The observed relic density is obtained for the DM coupling $\lambda=0.4$ (dashed), 0.6 (dot-dashed), 0.8 (dotted), 1.0 (long-dashed) and $\sqrt{4\pi}$ (solid).
The green and yellow shaded regions are excluded by the Fermi-LAT (at 90\% C.L.) and H.E.S.S. (at 95\% C.L.) data, respectively. The blue solid curve represents the future bound from CTA, where the region above (below) will be excluded at 90\% C.L. in the left (right) panel.
The gray region is forbidden by the perturbativity limit.
The masses of $\chi$, $N$, $\phi$ are assumed to follow $m_\chi=n\, m_N= 1/n\, m_\phi$ with $n=1.2$, $\kappa=1$,
and the RHN is assumed to decay equally to each lepton flavor.
}\label{fig:sigV}
\end{center}
\end{figure*}
Here we would like to mention that RHN-portal models can be probed by indirect detection experiments. The annihilation of a DM pair to RHNs, which then decay through weak interactions induced by active-sterile neutrino mixing, leads to gamma-ray signals that can be probed by experiments such as the Fermi-LAT and H.E.S.S. telescopes \cite{camp17,batell17}. In our work we employed the recipe described in \cite{camp17} to obtain the H.E.S.S. bounds and the results from \cite{Folgado:2018qlv} for the Fermi-LAT bound on the dark matter annihilation cross section for the $\chi\chi \to N N$ process, as depicted in Fig.~\ref{fig:sigV}. We emphasize that the H.E.S.S. and CTA limits rely on the current (projected) sensitivity to gamma-ray emission stemming from the Galactic Center. Since no excess has been observed, stringent constraints have been placed on the dark matter annihilation cross section. It is clear from the figure that the CTA limit is more constraining; this is a direct result of the CTA array containing Large-, Medium- and Small-Sized Telescopes that will significantly strengthen the CTA sensitivity to dark matter models
\cite{Acharya:2017ttl}. We focus our discussion on the benchmark scenario where $m_\chi=n\, m_N= 1/n\, m_\phi$.
The left panel of Fig.~\ref{fig:sigV}, in the $\langle \sigma v\rangle$--$m_\chi$ plane, shows the lines satisfying the observed relic abundance from Planck data, $\Omega h^2 = 0.1199 \pm 0.0027$~\cite{Ade:2015xua}, for different values of the coupling $\lambda$. The green and yellow shaded regions depict the 90\% C.L. limit on the annihilation cross section from Fermi-LAT~\cite{Folgado:2018qlv} and the 95\% C.L. bound from H.E.S.S. data~\cite{camp17}, respectively. The right panel shows the corresponding situation in the $m_N$--$y_N$ plane. One can observe an important feature: for a fixed value of $\lambda$, the observed relic abundance can be obtained over quite extended ranges of the DM mass $m_\chi$ by changing the neutrino Yukawa coupling $y_N$, i.e.\ by controlling the decay width $\Gamma$. This parameter space is currently allowed by the limits from indirect detection experiments; however, it can be probed by the projected bound from CTA in the future. The system of the coupled Boltzmann equations, \eqref{eq:dYDM} and \eqref{eq:dYN}, reduces to the conventional one, in which the RHN is assumed to be in thermal equilibrium, when $\langle \sigma v \rangle \simeq 2 \times 10^{-9}\, \ensuremath{\mathrm{Ge\kern -0.1em V}}^{-2}$; the result then becomes independent of $y_N$, as is nicely depicted in the right panel.
The gray shaded region is forbidden by the perturbativity limit on $\lambda$.
For higher values of $n$, the lines for $y_N \geq 10^{-7}$ in the left panel of Fig.~\ref{fig:sigV} would require higher values of $\lambda$ for a given $m_N$. This is because an increase in $n$ decreases $\langle\sigma v\rangle$, as can be read off from Eq.~\eqref{sigv}.
\section{Direct detection} \label{sec:Direct}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5\linewidth]{dias2.pdf}
\caption{The interaction of the DM $\chi$ with the Higgs $h$ induced at one-loop level. }\label{dia:hDD}
\end{center}
\end{figure}
Notice that the model contains no tree-level coupling of the fermionic DM to the Higgs boson, but an effective $h$-$\chi$-$\chi$ coupling arises from the one-loop diagram shown in Fig.~\ref{dia:hDD}:
\begin{align} \label{eq:hDD}
-{\cal L}_{h\chi\chi}=& \kappa' h \bar\chi \chi ~~\mbox{where} \nonumber \\
\kappa' \equiv& {\lambda^2 \kappa v \over 16\pi^2}\, {m_\chi c_1(x) -m_N c_0(x) \over m_\phi^2},
\end{align}
and $c_{1,0}(x)$ are loop-functions of $x\equiv m_N^2/m_\phi^2$ given by
\begin{eqnarray}
c_1(x) &=& \frac{1-4x+3x^2-2x^2 \ln x}{2(1-x)^3}, \nonumber \\
c_0(x) &=& \frac{1-x+x\ln x}{(1-x)^2} . \nonumber
\end{eqnarray}
The induced $h$-$\chi$-$\chi$ coupling $\kappa'$ (Eq.~\eqref{eq:hDD}) controls
the SI nucleonic cross-section
\begin{equation}
\sigma_{\rm SI} = \frac{4}{\pi} \mu_r^2
\left({ \kappa' g_{nnh} \over m_h^2 }\right)^2,
\end{equation}
where $\mu_r=m_\chi m_n/(m_\chi+m_n)$ is the reduced mass and $g_{nnh} \approx 0.0011$ is the nucleon--Higgs coupling. Measurements of the DM--nucleon SI cross section stringently constrain the effective Higgs--DM coupling; the result is depicted in Fig.~\ref{fig:DirectD}, which shows the latest bound from the XENON1T 2018 result \cite{Aprile:2018dbl} and the future limits from the LZ \cite{Mount:2017qzi} and XENONnT~\cite{Aprile:2015uzo} experiments. The region above each curve is excluded at 90\% confidence level.
It can be seen that the latest data from the XENON1T experiment exclude $|\lambda^2\kappa|\ge \mathcal{O}(1)$ for $m_\chi\le 150\,$GeV, and the future sensitivity of XENONnT can rule out such values of $|\lambda^2\kappa|$ up to a DM mass of $600\,{\rm GeV}$.
As the direct detection process arises at one-loop level with an additional coupling $\kappa$ irrelevant for the DM annihilation, there remains a wide range of parameter space to be probed by both direct and indirect detections.
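Putting Eq.~\eqref{eq:hDD} and the loop functions together, the SI cross section can be estimated numerically. The sketch below assumes $v=246$\,GeV, $m_h=125$\,GeV, $m_n\simeq0.939$\,GeV and the conversion $1\,{\rm GeV}^{-2}\simeq3.894\times10^{-28}\,{\rm cm}^2$, and is meant only to expose the $\sigma_{\rm SI}\propto(\lambda^2\kappa)^2$ scaling:

```python
import math

V_EW, M_HIGGS, M_NUC = 246.0, 125.0, 0.939  # GeV (illustrative inputs)
G_NNH = 0.0011                              # nucleon-Higgs coupling from the text
GEV2_TO_CM2 = 3.894e-28                     # (hbar c)^2 conversion factor

def c1(x):
    return (1.0 - 4.0 * x + 3.0 * x**2 - 2.0 * x**2 * math.log(x)) / (2.0 * (1.0 - x)**3)

def c0(x):
    return (1.0 - x + x * math.log(x)) / (1.0 - x)**2

def sigma_SI_cm2(m_chi, m_N, m_phi, lam, kappa):
    x = (m_N / m_phi)**2
    # loop-induced h-chi-chi coupling kappa', Eq. (eq:hDD)
    kp = (lam**2 * kappa * V_EW / (16.0 * math.pi**2)
          * (m_chi * c1(x) - m_N * c0(x)) / m_phi**2)
    mu = m_chi * M_NUC / (m_chi + M_NUC)  # reduced mass
    return 4.0 / math.pi * mu**2 * (kp * G_NNH / M_HIGGS**2)**2 * GEV2_TO_CM2
```

Because $\kappa'\propto\lambda^2\kappa$, rescaling $\lambda^2\kappa$ by a factor $r$ rescales $\sigma_{\rm SI}$ by $r^2$, which is why Fig.~\ref{fig:DirectD} can be presented as contours in the $m_\chi$--$|\lambda^2\kappa|$ plane.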
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{directDLogN.pdf}
\caption{The contour plot of the direct detection cross section through the loop-induced $h$-$\chi$-$\chi$ coupling is shown in the $m_\chi$--$|\lambda^2\kappa|$ plane.
The 2018 XENON1T bound~\cite{Aprile:2018dbl} is shown by the red-dashed curve.
The green- and orange-dotted curves are the expected bounds from LZ \cite{Mount:2017qzi}
and XENONnT~\cite{Aprile:2015uzo} experiments, respectively. The region above each curve is excluded at 90\% confidence level.
}\label{fig:DirectD}
\end{center}
\end{figure}
\section{Conclusion} \label{sec:conclusion}
The dark sector may be connected to the visible sector through heavy Majorana RHNs
which are introduced to explain the observed neutrino masses and mixing.
Assuming a fermionic DM candidate which pair-annihilates to a RHN pair,
we performed a comprehensive analysis of the parameter space considering the neutrino Yukawa effect in
the thermal freeze-out process and imposing the current results of indirect and direct detection experiments.
When the neutrino Yukawa coupling is too small to maintain the RHN in full thermal equilibrium, the DM annihilation cross-section needs to be larger than the standard freeze-out value to obtain the observed relic density. However, the allowed parameter region is
quite limited and lies well below the current limits from the Fermi-LAT and H.E.S.S. gamma-ray telescopes.
The CTA will be able to probe a large part of the region as shown in Fig.~\ref{fig:sigV}.
In this scenario, a DM-Higgs coupling arises at one loop and thus could be probed by direct detection experiments through spin-independent scattering. The 2018 XENON1T bound and future limits are illustrated in Fig.~\ref{fig:DirectD}.
\section*{Acknowledgments}
We thank Christoph Weniger for the discussion and encouragement.
EJC is supported by the NRF grant funded by the Korea government (MSIP) (No. 2009-0083526) through KNRC at Seoul National University.
FSQ acknowledges financial support from UFRN, MEC and the ICTP-SAIFR FAPESP grant 2016/01343-7.
The work of RM has been supported in part by Grants No. FPA2014-53631-C2-1-P, FPA2017-84445-P and SEV-2014-0398 (AEI/ERDF, EU) and by PROMETEO/2017/053 (GV, ES).
\section{Introduction}
The aim of this paper is to show that a direct correspondence exists
between the matrix elements \cite{footnote1} computed by the Algebraic
Bethe Ansatz in non-relativistic integrable models (in the following
simply referred to as Bethe Ansatz models) \cite{korbook} and the Form
Factors considered in relativistic integrable quantum field theories
\cite{smirnov}. As shown below, the relation between the two
quantities can be established along the lines of the recent studies on
non-relativistic limit of a quantum field theory \cite{KMT1,KMT2}.
The discovery of such a correspondence may greatly help deepen our
general knowledge of integrable models and, in particular, shed new
light on the calculation of their correlation functions. The reason is
the following: while the direct computation of Bethe Ansatz matrix
elements proves to be quite a difficult task (often carried out
successfully only for a few operators), the computation of Form Factors
is instead a simpler type of problem. In the latter case, for instance, one
can take advantage of additional constraints coming from the
relativistic invariance of the field theory and, as a result, explicit
expressions of the Form Factors can usually be found not only for a few
operators but for large classes of them: indeed, the
classification of all operators of a quantum field theory can be
obtained in terms of the different solutions of the Form Factor
equations \cite{cardymussardo,delfino}.
At the heart of this correspondence lies the $S$-matrix, both of the
relativistic field theory and the Bethe Ansatz model: if the
$S$-matrix of the latter is obtained by a suitable non-relativistic
limit of the $S$-matrix of the former, then the Form Factors of the
quantum field theory go smoothly to the Bethe Ansatz matrix
elements. Obviously one has to be sure that there is also a one-to-one
mapping between the Hilbert spaces and operators of the two
theories. But if one can prove that such a mapping exists, it is then
easy to understand why the field theory Form Factors reduce to the
Bethe Ansatz matrix elements: this happens because the analytic
properties of the Form Factors and the Bethe Ansatz matrix elements
are dictated by the $S$-matrices of the corresponding theories. In the
following we provide evidence for this correspondence by analyzing the
simplest models in which it occurs: the Quantum Non-Linear
Schr\"odinger (QNLS) model on one side, and the Sinh--Gordon (Sh-G)
model on the other. In particular, our approach provides a universal
method to compute matrix elements of any local operator in the QNLS
model.
\section{Matrix elements in the Algebraic Bethe Ansatz}
In this section we summarize without derivation some basic properties
and results of the Algebraic Bethe Ansatz solution of the QNLS system
associated to the Lieb--Liniger model \cite{LL}. The interested reader
is referred to the book \cite{korbook} and references therein.
The Hamiltonian of the model in volume $L$ with periodic boundary
conditions is given by
\be{HLL}
H_{\text{QNLS}}=\int_{0}^{L}\,\mathrm d x\left(\p_x\psid\p_x\psi+ c\psid\psid\psi\psi\right)\;,
\ee
where $\psi(x,t)$ and $\psi^\dagger(x,t)$ are canonical Bose
fields
\be{Psi}
[\psi(x,t),\psi^\dagger(y,t)]=\delta(x-y)\;,
\ee
and $c$ is the coupling constant. The Fock vacuum is defined as
\be{fock}
\psi\vac=0,\qquad\cav\psid=0\;.
\ee
The QNLS model can be solved via the Algebraic Bethe Ansatz (BA).
The monodromy matrix reads
\be{mon}
T(\l)=\left(
\begin{array}{cc} A(\l)&B(\l)\\C(\l)&D(\l)
\end{array}\right)
\ee
and its entries act in a space consisting of states
\be{ABA}
\vec{\l_1,\dots,\l_N}=\prod_{j=1}^N \mathbb{B}(\l_j)\vac\;,\qquad N=0,1,\dots\;.
\ee
where $\mathbb{B}(\l)=B(\l)\exp(-i\l L/2)$, $\{\l\}$ are arbitrary
complex parameters, and the pseudo-vacuum $\vac$ coincides with the
Fock-vacuum. Similarly, dual states can be constructed using the
operators $\mathbb{C}(\l)=C(\l)\exp(-i\l L/2)$.
The $R$-matrix describes the commutation relations of the monodromy
matrix entries and it satisfies the Yang--Baxter equations. For the
QNLS model it can be written in the form
\be{Rm}
R(\l,\mu)=\left(
\begin{array}{cccc}
f(\mu,\l) &0&0&0\\
0&g(\mu,\l)&1&0\\
0&1&g(\mu,\l)&0\\
0&0&0&f(\mu,\l)
\end{array}\right)
\ee
with
\be{fg}
f(\mu,\l)=\frac{\mu-\l+ic}{\mu-\l}\;,\qquad g(\mu,\l)=\frac{ic}{\mu-\l}\;.
\ee
The transfer matrix $\tau(\l)=\mathop{\rm tr} T(\l) =A(\l)+D(\l)$
generates the complete set of the conservation laws of the model. The
eigenstates of the transfer matrix have the form \erf{ABA}, however
the parameters $\{\l\}$ are not arbitrary but they satisfy the system
of Bethe equations
\be{BY}
e^{i\l_j L}\prod_{k=1\atop{k\ne j}}^N \tilde S_{\text{QNLS}}(\l_j,\l_k) = 1\;,
\ee
where the two-particle $S$-matrix is given by
\be{SLL}
\tilde S_{\text{QNLS}}(\l_j,\l_k)\equiv\frac {f(\l_k,\l_j)}{f(\l_j,\l_k)}=
\frac{\l_j-\l_k-ic}{\l_j-\l_k+ic}\;.
\ee
The $S$-matrix gives the phase factor by which the state gets
multiplied when the particles $i$ and $j$ are interchanged. Hence the
Bethe equations say that the total phase-shift acquired when a
particle of momentum $\l_j$ is taken to a round trip comes from the
usual phase which is proportional to the momentum plus the scattering
phase shifts picked up when the particle scatters through all the
other particles. Taking the logarithm leads us to
\be{BYlog}
\tilde Q_j
= \l_jL+\sum_{k\ne j}^N\frac1i\log \tilde S_{\text{QNLS}}(\l_j,\l_k)=
2\pi I_j\;,\qquad I_j\in\mathbb{Z}\;.
\ee
Using the algebra satisfied by the monodromy matrix, the scalar
products of the BA states \erf{ABA} can be worked out explicitly, as
well as the action of the operator $\psi$ on these states. However,
the calculation of the scalar products proved to be a highly
non-trivial combinatorial problem (see \cite{korbook} and references
therein). As a result, the norms of the states with parameters $\l$
that satisfy the Bethe equations \erf{BY} are
\be{norm}
\ceev{\l_1,\dots,\l_N}{\l_1,\dots,\l_N} = c^N
\left(\prod_{j,k=1\atop{j\ne k}}^Nf(\l_j,\l_k)\right)\tilde\rho_N\;,
\ee
where
\be{rhoNR}
\tilde\rho_N=\det\left(\frac{\p \tilde Q_j}{\p\l_k}\right)
\ee
is the Gaudin determinant associated to the Bethe equations
\erf{BY}.
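The entries of the Gaudin matrix follow from differentiating the logarithmic Bethe equations: with $K(\lambda)=2c/(c^2+\lambda^2)$ one finds $\partial \tilde Q_j/\partial\lambda_j = L + \sum_{m\ne j}K(\lambda_j-\lambda_m)$ and $\partial \tilde Q_j/\partial\lambda_k = -K(\lambda_j-\lambda_k)$ for $k\ne j$. A dependency-free sketch of $\tilde\rho_N$ (determinant via Gaussian elimination) is:

```python
import math

def kernel(lam, c):
    return 2.0 * c / (c**2 + lam**2)

def gaudin_det(lams, c, L):
    """Determinant of G_jk = dQ_j/dlam_k built from the QNLS Bethe equations."""
    N = len(lams)
    G = [[(L + sum(kernel(lams[j] - lams[m], c) for m in range(N) if m != j))
          if j == k else -kernel(lams[j] - lams[k], c)
          for k in range(N)] for j in range(N)]
    # plain Gaussian elimination with partial pivoting
    det = 1.0
    for i in range(N):
        p = max(range(i, N), key=lambda r: abs(G[r][i]))
        if p != i:
            G[i], G[p] = G[p], G[i]
            det = -det
        det *= G[i][i]
        for r in range(i + 1, N):
            f = G[r][i] / G[i][i]
            for col in range(i, N):
                G[r][col] -= f * G[i][col]
    return det
```

Each row of $G$ sums to $L$, so for real, well-separated roots the determinant is positive; for $N=1$ it reduces to $L$, recovering the free-particle normalization.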
Knowing the action of $\psi$ on the BA states and the scalar products,
its {\it unnormalized} matrix elements
\be{FF}
\tilde F_N^{\psi}(\l'_1,\dots,\l'_{N-1}|\l_1,\dots,\l_N) = \3pt{\l'_1,\dots,\l'_{N-1}}{\psi(0,0)}{\l_1,\dots,\l_N}
\ee
can be given explicitly. These matrix elements are originally defined
for states which solve the Bethe equations. However, one can define a
function $F_N$ such that the actual matrix elements will be given by
the value of this function taken at the particular set of momenta
which satisfy the Bethe equations. Hence, the function $F_N$ itself
does not carry any information about the system size $L$: in fact,
this only enters the Bethe equations satisfied by the physical
momenta. Note that the only non-zero matrix elements of $\psi$ are for
states where the difference of the particle numbers is one and that
the functions $F_N$ are symmetric separately in the momenta $\l$ and
$\l'$.
We will give explicit examples for matrix elements in section
\ref{last}. However, it is important to note that they satisfy the recursion
relation \cite{izkorresh}
\begin{multline}
\label{recurs}
\tilde F_N^{\psi}(\l'_1,\dots,\l'_{N-1}|\l_1,\dots,\l_N)\xrightarrow[\l'_1\to\l_1]{} \\
\frac{ic}{\l_1-\l'_1}\left(\prod_{j=2}^{N-1}f(\l'_1,\l'_j)\prod_{j=2}^{N}f(\l_j,\l_1)
- \prod_{j=2}^{N-1}f(\l'_j,\l'_1)\prod_{j=2}^{N}f(\l_1,\l_j)\right)\;\times\\
\times\;\tilde F_{N-1}^{\psi}(\l'_1,\dots,\l'_{N-2}|\l_1,\dots,\l_{N-1}) + \dots\;,
\end{multline}
where the dots stand for non-singular parts.
Similarly, matrix elements of the current operator
$j(x)=\psid(x)\psi(x)$ can also be determined. The current has
non-zero matrix elements only between states having the same number of
particles,
\be{FFj}
\tilde F_N^{j}(\l'_1,\dots,\l'_N|\l_1,\dots,\l_N) = \3pt{\l'_1,\dots,\l'_N}{\psid(0,0)\psi(0,0)}{\l_1,\dots,\l_N}\;,
\ee
and they also satisfy the recursion relations \erf{recurs} with the
obvious change in the number of particles of the dual vector $(N-1)\to
N$.
\section{Form Factors in the \sg model}
The Sh-G model is an integrable relativistic field
theory in $1+1$ dimensions defined by the Lagrangian density
\be{shgL}
\mc{L}= \frac12\left(\frac{\p\phi}{c_l\,\p
t}\right)^2-\frac12\left(\frac{\p\phi}{\p x}\right)^2 -
\frac{m_0^2c_l^2}{g^2\hbar^2}\left(\cosh(g\phi)-1\right)\,,
\ee
where $\phi=\phi(x,t)$ is a {\em real} scalar field, $m_0$ is a mass
scale and $c_l$ is the speed of light. The parameter $m_0$ is related to
the physical (renormalized) mass $m$ of the particle by \cite{babkar}
\be{m0}
m_0^2\,=\,m^2\frac{\pi\alpha}{\sin(\pi\alpha)}\,.
\ee
The integrability of the Sh-G model implies the absence of particle
production processes and its $n$-particle scattering amplitudes are
purely elastic. Moreover, they factorize into $n(n-1)/2$ two-body
$S$-matrices which can be determined exactly via the $S$-matrix
bootstrap \cite{zamzam}. The energy $E(\th)$ and the momentum $P(\th)$
of a particle can be written as $E(\th)=M c_l^2 \cosh\th$, $P(\th)=M
c_l \sinh\th$, where $\th$ is the rapidity. In terms of the rapidities
the exact two-body $S$-matrix is given by \cite{ari}
\be{shgSmat}
S_{\text{Sh-G}}(\th_1,\th_2)\equiv S(\th_1,\th_2)=
\frac{\sinh\th_{12}-i\,\sin(\alpha\pi)}{\sinh\th_{12}+i\,\sin(\alpha\pi)}\;,
\ee
where $\th_{12}=\th_1-\th_2$ and $\alpha$ is the renormalized coupling constant
\be{alpha}
\alpha\,=\,\frac{\hbar c_l\,g^2}{8\pi+\hbar c_l\,g^2}\;.
\ee
The key observation, made in Refs.~\cite{KMT1,KMT2}, is that the
QNLS model can be regarded as a suitable non-relativistic limit of the
Sh-G model, under which the two-particle $S$-matrices, the Hamiltonian
and the Thermodynamic Bethe Ansatz equations of the Sh-G model go into
the corresponding quantities of the QNLS model. The connection between
the two theories is realized by taking a double limit, in which the
speed of light $c_l$ goes to infinity, the coupling $g$ goes to zero,
but with their product kept fixed and given by
\be{lim}
c_l\to\infty\,\,\, ,\,\,\, g\to0\;,\;\quad g\,c_l=\frac{4\sqrt{c}}\hbar\;,
\ee
where $c$ is the coupling constant of the QNLS model. Taking such a
double limit of the $S$-matrix of the Sh-G model one arrives at the
$S$-matrix \erf{SLL} of the QNLS model, once we set $m=1/2$ and
$\hbar=1$. Note that $m_0\to m$ in the limit.
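This limit is easy to check numerically: fix the non-relativistic momenta $\lambda_{1,2}$ (with $m=1/2$, $\hbar=1$ as above), set the rapidities through $\lambda_j = m c_l\sinh\th_j$, and let $c_l$ grow at fixed $g\,c_l=4\sqrt c$. The sketch below verifies that $S_{\text{Sh-G}}\to\tilde S_{\text{QNLS}}$; the parameter choices are illustrative:

```python
import cmath, math

def S_shg(th12, alpha):
    """Sh-G two-body S-matrix, Eq. (shgSmat)."""
    return (cmath.sinh(th12) - 1j * math.sin(math.pi * alpha)) / \
           (cmath.sinh(th12) + 1j * math.sin(math.pi * alpha))

def S_qnls(lam12, c):
    """QNLS two-body S-matrix, Eq. (SLL)."""
    return (lam12 - 1j * c) / (lam12 + 1j * c)

def shg_limit_error(lam1, lam2, c, c_light, m=0.5, hbar=1.0):
    g = 4.0 * math.sqrt(c) / (hbar * c_light)  # double limit: g c_l held fixed
    alpha = hbar * c_light * g**2 / (8.0 * math.pi + hbar * c_light * g**2)
    th12 = math.asinh(lam1 / (m * c_light)) - math.asinh(lam2 / (m * c_light))
    return abs(S_shg(th12, alpha) - S_qnls(lam1 - lam2, c))

# the mismatch shrinks as c_l grows
print(shg_limit_error(0.4, -0.3, 1.0, 1e3), shg_limit_error(0.4, -0.3, 1.0, 1e6))
```

Analytically the mechanism is that $\sinh\th_{12}\to\lambda_{12}/(mc_l)$ and $\sin(\pi\alpha)\to 2c/(\hbar c_l)$, so the common $1/c_l$ factors cancel between numerator and denominator.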
Such a correspondence between $S$-matrices gives a hint that an exact
mapping exists between the two models. Given that in a relativistic
integrable model the two-particle $S$-matrix governs its entire
dynamics (its thermodynamic properties and the Form Factors of all its
operators), if this mapping exists then it has several interesting and
far-reaching consequences. For instance, in the past a mapping
between two relativistic $S$-matrices was used to establish the
correspondence between the form factors and, based on this, between
the operators of the theories \cite{ahnmusdel}. As discussed below,
in the present case the situation is more subtle, because the
mapping operates between a {\em relativistic} theory and a {\em
non-relativistic} model. The analysis in this case requires
additional care about the structure of states and operators of the
two Hilbert spaces.
Here we would like to shed light on the direct relation between the
Bethe Ansatz matrix elements of the QNLS and the Form Factors of the
Sh-G model. In a relativistic field theory defined in infinite volume
the elementary Form Factors of a local operator $\mc O$ are the matrix
elements of $\mc O(0,0)$ between the vacuum and a set of $n$-particle
asymptotic states:
\be{FFdef}
F_n^{\mc O}(\th_1,\th_2,\dots,\th_n)=\FF{\mc O(0,0)}_\text{in}\;.
\ee
The knowledge of all Form Factors of an operator is enough to
determine how it acts on any state of the theory. In fact, a generic
matrix element of the operator can be expressed in terms of its Form
Factors by using the translation operator and the crossing symmetry
which is implemented by an analytic continuation in the rapidity
variables \cite{footnote2}:
\be{cross}
\3pt{\th'_1,\dots,\th'_n}{\mc O(0,0)}{\th_1,\dots,\th_k}=
F_{n+k}^{\mc O}(\th'_1+i\pi,\dots,\th'_n+i\pi,\th_1,\dots,\th_k)\,.
\ee
The Form Factors satisfy a set of functional and recursive equations,
which for integrable models makes it possible to find in many cases
their explicit expressions (for a review, see \cite{smirnov}). For a
scalar operator, unitarity and crossing symmetry dictate the following
functional equations
\bes
\begin{align}
F_n(\th_1,\dots,\th_k,\th_{k+1},\dots,\th_n)&=S(\th_k-\th_{k+1})\,F_n(\th_1,\dots,\th_{k+1},\th_k,\dots,\th_n)\;, \\
F_n(\th_1+2\pi i,\dots,\th_n)&=F_n(\th_2,\dots,\th_n,\th_1)\;.
\end{align}
\esu
The Form Factors of integrable theories can have two kinds of simple
poles and, except for these singularities, they are analytic in the
strip $0<\mathrm{Im}\,\th_{ij}<2\pi$ (here
$\th_{ij}=\th_i-\th_j$). The first kind of poles corresponds to
kinematical singularities at $\th_{ij}=i\pi$ and their residues give
rise to a set of recursive equations between the $n$-particle and the
$(n+2)$-particle Form Factors
\be{reskin}
F_{n+2}(\th'+i\pi,\th,\th_1,\dots,\th_n)\xrightarrow[\th'\to\th]{}
\frac{i}{\th'-\th}\left(1-\prod_{j=1}^nS(\th-\th_j)\right)F_n(\th_1,\dots,\th_n)+\dots\;,
\ee
where the dots stand for regular parts.
The second kind of poles is related to the bound states of the theory,
but since there are no bound states in the Sh-G model, there are no
such poles in the Form Factors of this theory and we will not consider
them any further.
Note the striking similarity between the recursive equations
\erf{reskin} and \erf{recurs}: we will show below that, indeed, they
exactly correspond one to the other in the limit $\erf{lim}$. These
recursive equations, together with the requirement of the correct
asymptotic behavior and of the desired analyticity properties, are the
key tools in finding explicit solutions for the Form Factors. In the
Sh-G model a concise expression is provided by the Form Factor of the
exponential operator \cite{mussardo}
\be{koubek}
F_n(k)=\FF{e^{kg\phi}}=
\frac{\sin(k\pi\alpha)}{\sin(\pi\alpha)}\left(\frac{4\sin(\pi\alpha)}{F_\text{min}(i\pi)}\right)^{\frac{n}2}\det
M_n(k)\prod_{j<l}^n \frac{F_\text{min}(\th_j-\th_l)}{e^{\th_j}+e^{\th_l}}\;.
\ee
Here $k$ is an arbitrary real number, $F_{\text{min}}(\th)$ is the
minimal solution of the Form Factor bootstrap equations and $M_n$ is a
$(n-1)\times(n-1)$ matrix
\be{}
\left[M_n(k)\right]_{j,\,l}=\sigma^{(n)}_{2j-l}\,
\frac{\sin\left((j-l+k)\pi\alpha\right)}{\sin(\pi\alpha)}\;,
\ee
where $\sigma^{(n)}_j$ are the elementary symmetric polynomials of the
variables $e^{\th_{j}}$:
\be{}
\sigma^{(n)}_j=\sum_{i_1<\dots<i_j}^n e^{\th_{i_1}}\dots e^{\th_{i_j}}\;.
\ee
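Since the $\sigma^{(n)}_j$ are just the coefficients of $\prod_i(t+e^{\th_i})$, they are cheap to generate recursively; a small sketch with a brute-force cross-check is:

```python
import itertools
import math

def elem_sym_polys(xs):
    """Return [sigma_0, ..., sigma_n] for the variables xs, i.e. the
    coefficients of prod_i (t + x_i) read off in decreasing powers of t."""
    sig = [1.0]
    for x in xs:
        new = [1.0]
        for j in range(1, len(sig)):
            new.append(sig[j] + x * sig[j - 1])  # Pascal-like update
        new.append(x * sig[-1])
        sig = new
    return sig
```

Here $\sigma^{(n)}_j$ is the coefficient of $t^{n-j}$; the recursive update costs $O(n^2)$ overall, against the $O(2^n)$ brute-force sum over subsets.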
We will see that the Form Factors of $e^{kg\phi}$ act as generating
functions for the Form Factors of all the powers of the field $\phi$.
\section{QNLS matrix elements from Sh-G Form Factors}
\label{last}
Before going on with the analysis, it is worth emphasizing the strong
similarities of the QNLS and Sh-G models, in particular, the
similarity of the key equations for the Bethe Ansatz matrix elements
and the Form Factors. Both theories are integrable and they contain a
single type of massive particle without additional bound states. The
pseudo-vacuum, on which the BA states are built, coincides with the
Fock-vacuum, i.e. the zero-particle state. The Sh-G Form Factors are
built on the physical vacuum, but in the non-relativistic limit the
zero-point fluctuations disappear and we obtain the Fock-vacuum of the
QNLS model. The two-particle $S$-matrix, which in the relativistic
context governs every aspect of integrable theories, also maps
between the two models. From this it is clear that the Bethe equations
\erf{BYlog} and \erf{BYrel} are mapped to each other as well. The striking
similarity between the recursive equations \erf{recurs} and
\erf{reskin} is very important, because these are the key equations
that allow for the determination of the Form Factors and the matrix
elements. Moreover, the building block of the relativistic Form
Factors, $F_{\text{min}}(\th)$, has the following behavior in the
limit \erf{lim}:
\be{Fminlim}
F_{\text{min}}(\th_{jl})\to\frac{\l_j-\l_l}{\l_j-\l_l+ic}=f(\l_j,\l_l)\;,\qquad
F_{\text{min}}(\th_{jl}+i\pi)\to1 \qquad \text{for }
\th_{jl}\in\mathbb{R}\;,
\ee
thus it goes over into an important building block of the Bethe Ansatz
quantities. With all these correspondences it should be quite clear
that the Sh-G Form Factors will be mapped to the QNLS matrix
elements. In what follows we give the details of this mapping and
provide explicit examples.
In order to obtain QNLS matrix elements from the Sh-G Form Factors
using the limit \erf{lim}, we have to understand the relation both
between the operators and between the states of the two models. We also have
to take into account that the Bethe Ansatz matrix elements are defined
in finite volume whereas the relativistic ones discussed so far are
defined in infinite volume. The latter difference can be cured using
the results of \cite{balazs}. Even in relativistic quantum field
theories the states in a finite volume $L$ can be described as
multi-particle scattering states, where the rapidities of the
particles are solutions of the Bethe quantization conditions:
\be{BYrel}
Q_j = mc_lL\sinh\th_j+\sum_{k\ne j}^n\frac1i \log S(\th_j-\th_k) =
2\pi I_j\;,\qquad\quad j=1,\dots,n\;.
\ee
However, this procedure is to be understood as an ``asymptotic Bethe
Ansatz'' in the sense that it gives the correct results to all orders
in $1/L$ but it misses residual finite size effects which decay
exponentially with the volume. These corrections can be associated to
processes involving virtual particles with some of them ``travelling
around the world''. The main idea of \cite{balazs} is that this picture
of ``asymptotic BA'' applies to the form factors as well: the matrix
elements in a finite volume are given by the infinite volume Form
Factors at the particular set of rapidities which solve the
corresponding Bethe equations. In addition, one has to introduce
normalization factors given by the corresponding Gaudin determinants
\be{rho}
\rho_n(\th_1,\dots,\th_n)=\det \frac{\p Q_j}{\p\th_k}
\ee
which can be interpreted as the density of states (in rapidity-space)
in the corresponding sector, or alternatively as the norms of the BA
states like in \erf{norm}. Thus the {\em finite volume} Form Factors
are given by the infinite volume ones taken at the rapidities
satisfying the Bethe equations \erf{BYrel}, divided by the density of
states:
\be{FF_L}
\3pt{\th'_1,\dots,\th'_l}{\mc O(0,0)}{\th_1,\dots,\th_n}_L =
\frac{F^{\mc O}(\th'_1+i\pi,\dots,\th'_l+i\pi,\th_1,\dots,\th_n)}
{\sqrt{\rho_l(\th'_1,\dots,\th'_l)}\sqrt{\rho_n(\th_1,\dots,\th_n)}}
+\mc O(e^{-\mu L}) \;,
\ee
where $\mc O(e^{-\mu L})$ stands for the above-mentioned exponentially
small corrections in $L$. In the non-relativistic limit both the Bethe
equations and the norms of states go over to the QNLS quantities
\erf{BYlog}, \erf{rhoNR}. In particular, the relation between the norms
is given by
\be{rhos}
\rho_N \longrightarrow (mc_l)^N\tilde\rho_N\;.
\ee
We note that \eqref{FF_L} is valid as long as there are no coinciding
rapidities in the ``bra'' and ``ket'' vectors. When rapidities
coincide, disconnected terms arise; a proper treatment of such
contributions was given in \cite{fftcsa2}. In this paper we do not
elaborate on disconnected pieces, i.e.\ we assume that the two sets of
rapidities are completely distinct.
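The Gaudin determinant \erf{rho} is straightforward to evaluate numerically. The sketch below assumes the standard Lieb--Liniger form of the QNLS Bethe equations, $Q_j=L\l_j+\sum_{k\ne j}2\arctan\big((\l_j-\l_k)/c\big)$ (a common convention, adopted here only for illustration), and checks two generic properties of the Gaudin matrix: it is symmetric, and its determinant, i.e.\ the density of states, is positive.

```python
import numpy as np

def gaudin_matrix(lams, L, c):
    """Gaudin matrix G_{jk} = dQ_j/dlambda_k for Lieb-Liniger-type
    Bethe equations (illustrative convention, not taken from the paper)."""
    K = lambda l: 2.0*c/(l**2 + c**2)   # derivative of 2*arctan(l/c)
    n = len(lams)
    G = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            if j == k:
                G[j, j] = L + sum(K(lams[j]-lams[l]) for l in range(n) if l != j)
            else:
                G[j, k] = -K(lams[j]-lams[k])
    return G

lams = np.array([-0.8, 0.1, 1.3])       # sample rapidities
G = gaudin_matrix(lams, L=10.0, c=1.0)

assert np.allclose(G, G.T)              # Gaudin matrix is symmetric
assert np.linalg.det(G) > 0             # density of states is positive
```

Positivity follows from diagonal dominance: each row sums to $L>0$.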
Let us turn now to the relation between the operators in the two
theories which was given in \cite{KMT1}:
\be{phi-psi}
\phi(x,t)\sim\sqrt{\frac{\hbar^2}{2m}}\left(\psi(x,t)\,
e^{-i\frac{mc_l^2}\hbar\,t}+\psid(x,t) e^{+i\frac{mc_l^2}\hbar\,t}\right)\,.
\ee
The exponential terms have to be separated, because the relativistic
Hamiltonian contains also the rest energy which is absent in the
non-relativistic case. The sign $\sim$ means that in any functional
expression of $\phi$ the surviving exponential terms should be
dropped, because in the non-relativistic limit $c_l\to\infty$ they are
rapidly oscillating and give zero when integrated over any small but
finite time interval.
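The suppression of the rapidly oscillating factors can be made quantitative with a one-line estimate: the average of $e^{i\omega t}$ over a finite interval of length $T$ is bounded by $2/(\omega T)$ and hence vanishes as $\omega\sim mc_l^2/\hbar\to\infty$. A minimal numerical illustration:

```python
import numpy as np

def time_average(omega, T=1.0):
    """(1/T) * integral_0^T exp(i*omega*t) dt, evaluated in closed form."""
    return (np.exp(1j*omega*T) - 1.0) / (1j*omega*T)

# The modulus is bounded by 2/(omega*T) and so vanishes as omega grows.
assert abs(time_average(1e2)) < 2e-2
assert abs(time_average(1e6)) < 2.1e-6
```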
Similarly, we must compensate for the rest energy in the time
evolution of the states, so (without the proper normalizations)
\be{states1}
\vec{\th_1,\dots,\th_n}\;\longleftrightarrow\;
e^{-in\,mc_l^2 t}\vec{\l_1,\dots,\l_n}\;,
\qquad\quad \cev{\th_1,\dots,\th_n}\;\longleftrightarrow\;
e^{+in\,mc_l^2 t}\cev{\l_1,\dots,\l_n}\;.
\ee
The relation between the rapidities and the momenta in the
non-relativistic limit is $\th_i=\l_i/(mc_l)$. Note that the interplay
of the oscillating terms in the states \erf{states} and in the
operator \erf{phi-psi} (as well as in all its powers) will ensure that
in the non-relativistic limit a given operator (e.g. $\psi$) has
non-zero matrix elements only between states in which the number of
particles differs by a fixed amount (e.g. one).
Let us now deal with the question of the normalization of the
states. First of all, one should note that, in contrast with the BA
states, the relativistic asymptotic states are not symmetric in the
rapidities; rather, they obey
\be{}
\vec{\th_1,\dots,\th_k,\th_{k+1},\dots,\th_n}=
S(\th_k-\th_{k+1})\,\vec{\th_1,\dots,\th_{k+1},\th_k,\dots,\th_n}\;.
\ee
Hence, in order to establish the correspondence with the BA states,
these states should be symmetrized in the rapidities, which can be
done by multiplying with the appropriate phase factors:
\be{phase}
\vec{\th_1,\dots,\th_n}_{\text{symm}}=
\prod_{j>k}\sqrt{S(\th_j-\th_k)}\,\vec{\th_1,\dots,\th_n}\;.
\ee
The normalization \erf{norm} of the BA states should also be taken
into account. From \erf{rhos} it is clear that for the proper
normalization we have to include a factor of $\sqrt{mc_l}$ for every
particle.
Collecting everything we finally arrive at the relation
\be{}
\prod_{j>k}\sqrt{S(\th_j-\th_k)}\,\vec{\th_1,\dots,\th_n}\;\;\sim\;\;
\frac{(mc_l)^{n/2}}{c^{n/2}\prod\limits_{j,k=1\atop{j\ne k}}^n
\sqrt{f(\l_j,\l_k)}} \;e^{-in\,mc_l^2 t} \vec{\l_1,\dots,\l_n}\;.
\ee
Moving the $S$-matrices to the right-hand side and using eq.~\erf{SLL}
for the $S$-matrix recovered in the double limit of the
Sh-G model, this can be written as
\be{states}
\vec{\th_1,\dots,\th_n}\;\;\sim\;\;
\frac{ (mc_l)^{n/2}e^{-in\,mc_l^2 t}}{c^{n/2}\prod\limits_{j<k}^nf(\l_j,\l_k)}\;\vec{\l_1,\dots,\l_n}\;.
\ee
The relation between the dual vectors is given by the complex
conjugate expression, in particular, the sign of the time-dependent
phase \erf{states1} is opposite.
Using the $S$-matrix \erf{SLL} of the QNLS model and the
correspondence of the states \erf{states} it is easy to prove, as
shown below, that the relativistic recursive equations \erf{reskin}
transform exactly into the recursive equations \erf{recurs}. Note also
that due to \erf{Fminlim} the $f(\l_i,\l_j)$ factors in \erf{states}
will exactly cancel the limiting forms of $F_{\text{min}}(\th_{ij})$.
\bigskip
Now we are in a position to obtain a {\em generic QNLS matrix element} of the form
\be{}
\3pt{\l'_1,\dots,\l'_{N+p-q}}{\psid\,^p\psi^q}{\l_1,\dots,\l_N}
\ee
by performing the following steps:
\begin{enumerate}
\item We determine the infinite volume Sh-G Form Factor
\[\3pt{0}{\no{\phi^{p+q}}}{\th'_1+i\pi,\dots,\th'_{N+p-q}+i\pi,\th_1,\dots,\th_N}\]
by picking the $\mc O(k^{p+q})$ term in the expansion of
\erf{koubek} and (for $p+q>2$) by taking into account the normal
ordering issues \cite{footnote3}. The pre-factor in \erf{phi-psi} is
1 with the choice $m=1/2$, $\hbar=1$, but simple combinatorial factors
may arise. As we said, the number of crossed rapidities will {\em
automatically} select the correct combination $\psid\,^p\psi^q$
out of $\phi^{p+q}$.
\item We calculate the finite volume Form Factors using
\erf{FF_L}. However, in the double limit the norms of the
finite volume states will go to those of the BA states \erf{norm}
up to a factor of $\sqrt{mc_l}$ for every particle, which needs to
be included.
\item We take the double scaling limit \erf{lim} using \erf{alpha} as
well as the relation $\th_i\to\l_i/mc_l$ and the limit of the
minimal Form Factor $F_{\text{min}}$ \erf{Fminlim}.
\item We take into account the different normalizations
according to \erf{states}.
\item To get a proper matching with the Bethe Ansatz matrix elements
we include a factor of $-i$ for each $\psi$ and a $+i$ for each $\psid$.
\end{enumerate}
These steps can be summarized in the formula
\begin{multline}
\label{final}
\3pt{\l'_1,\dots,\l'_{N+p-q}}{\psid\,^p\psi^q}{\l_1,\dots,\l_N} \;=\;\\
i^{p-q}\binom{p+q}{p}^{-1} \left(\frac{2m}{\hbar^2}\right)^{\frac{p+q}2}
\;c^{\frac{2N+p-q}2}\prod\limits_{j<k}^Nf(\l_j,\l_k)\prod\limits_{j<k}^{N+p-q}f(\l'_j,\l'_k)\;\times\\
\times\;\widetilde\lim\left\{ (mc_l)^{-\frac{2N+p-q}2}
\3pt{0}{\no{\phi^{p+q}}}{\th'_1+i\pi,\dots,\th'_{N+p-q}+i\pi,\th_1,\dots,\th_N}\right\}\;,
\end{multline}
where $\widetilde\lim$ denotes the double scaling limit
\erf{lim}.
As a first check of this procedure let us explicitly show the
correspondence of the recursive equations \erf{reskin} and
\erf{recurs}. Starting from the relativistic case we have
\begin{multline}
F_{2N-1}(\th'_1+i\pi,\dots,\th'_{N-1}+i\pi,\th_1,\dots,\th_N)=\\
\prod_{j=2}^{N-1}S(\th'_j+i\pi-\th_1)F_{2N-1}(\th'_1+i\pi,\th_1,\th'_2+i\pi,\dots,\th'_{N-1}+i\pi,\th_2,\dots,\th_N)
\;\;\xrightarrow[\;\;\th'_1\to\th_1\;\;]{}\\
\frac{i}{\th'_1-\th_1}\left\{1-\prod_{j=2}^{N-1}S(\th'_1-\th'_j-i\pi)\prod_{j=2}^{N}S(\th_1-\th_j)\right\}
\prod_{j=2}^{N-1}S(\th'_j+i\pi-\th_1)
F_{2N-3}(\th'_2+i\pi,\dots,\th'_{N-1}+i\pi,\th_2,\dots,\th_N)=\\
\frac{i}{\th'_1-\th_1}\left\{\prod_{j=2}^{N-1}S(\th'_1-\th'_j)-\prod_{j=2}^{N}S(\th_1-\th_j)\right\}
F_{2N-3}(\th'_2+i\pi,\dots,\th'_{N-1}+i\pi,\th_2,\dots,\th_N)\;.
\end{multline}
In the limit, using \erf{final}, this relation becomes
\begin{multline}
i\left(\frac{mc_l}{c}\right)^{\frac{2N-1}2}\sqrt{\frac{\hbar^2}{2m}}
\frac1{\prod\limits_{j<k}^Nf(\l_j,\l_k)\prod\limits_{j<k}^{N-1}f(\l'_j,\l'_k)}
\tilde F_{N}(\l'_1,\dots,\l'_{N-1},\l_1,\dots,\l_N)
\;\;\xrightarrow[\;\;\l'_1\to\l_1\;\;]{}\\
\shoveleft{\frac{i\,mc_l}{\l'_1-\l_1}\left\{\prod_{j=2}^{N-1}
\tilde S_{\text{QNLS}}(\l'_1-\l'_j)-\prod_{j=2}^{N}\tilde S_{\text{QNLS}}(\l_1-\l_j)\right\}\;\times}\\
\times\;i\left(\frac{mc_l}{c}\right)^{\frac{2N-3}2}\sqrt{\frac{\hbar^2}{2m}}
\frac1{\prod\limits_{2<j<k}^Nf(\l_j,\l_k)\prod\limits_{2<j<k}^{N-1}f(\l'_j,\l'_k)}
\tilde F_{N-1}(\l'_2,\dots,\l'_{N-1},\l_2,\dots,\l_N)\;.
\end{multline}
Dividing out the common pre-factors and using relation \erf{SLL} between
the $S$-matrix and the function $f(\l,\mu)$, we arrive at equation \erf{recurs}.
\bigskip
Let us take now some explicit examples of the limit for the matrix
elements of the field operator $\psi$ and the current operator
$\psid\psi$. For the matrix elements of $\psi$ a nice determinant
representation was found in \cite{kojima}. They can be expressed also
as \cite{izkorresh}
\be{}
\tilde F_N^{\psi}(\l'_1,\dots,\l'_{N-1}|\l_1,\dots,\l_N)=
\frac{P_N(\l'_1,\dots,\l'_{N-1}|\l_1,\dots,\l_N)}
{\prod\limits_{k=1}^{N-1}\prod\limits_{j=1}^N(\l_j-\l'_k)}\;,
\ee
where $P_N$ are polynomials in $\{\l\}$. The first few examples are
given by
\bes
\label{psiFF}
\begin{align}
P_1&=-i\sqrt{c}\;,\\
P_2&=-2i\sqrt{c}\,c^2\;,\\
P_3&=-4i\sqrt{c}\,c^4\left[-c^2+\left((\l_1-\l_1')(\l_2-\l'_2)+
(\l_1-\l_2')(\l_3-\l'_1)+(\l_2-\l_1')(\l_3-\l'_2)\right)\right]\;.
\end{align}
\esu
The form factors for $\psid\psi$ can be extracted from the matrix
elements of the non-local operator
\be{Q1}
Q_1(x)=\int_0^x\mathrm d y\,j(y)
\ee
by differentiating with respect to $x$. Matrix elements of $Q_1(x)$
are listed for example in \cite{izkor}, and a determinant formula can
be found in \cite{slavnov}; these yield
\bes
\label{jFF}
\begin{align}
\tilde F_1^{j}(\l'_1|\l_1)&=c\;,\\
\tilde F_2^{j}(\l'_1,\l'_2|\l_1,\l_2)&=-2c^3\frac{(\l_1+\l_2-\l'_1-\l'_2)^2}{\prod\limits_{j,k=1}^2(\l'_j-\l_k)}\;,
\end{align}
\esu
and so on.
On the Sh-G side the Form Factors of the various powers of $\phi$ can
be obtained by expanding formula \erf{koubek} as a power series in the
auxiliary real variable $k$ \cite{footnote3}. For example, the Form Factors of
$\phi$ are given by
\be{FFphi}
\FF{\phi}=
\frac{\pi\alpha}g\left(\frac4{F_{\text{min}}(i\pi)}\right)^{\frac{n}2}(\sin(\pi\alpha))^{\frac{n}2-1}
\det M_n(0)\prod_{j<l}^n\frac{F_\text{min}(\th_j-\th_l)}{e^{\th_j}+e^{\th_l}}\,,
\ee
or explicitly
\begin{align}
\3pt{0}{\phi}{\th_1} &= \frac2{\sqrt{F_{\text{min}}(i\pi)}}\,
\frac{\pi\alpha}{g\,\sqrt{\sin(\pi\alpha)}}\;,\\
\3pt{0}{\phi}{\th_1,\th_2,\th_3} &=
\frac8{F_{\text{min}}(i\pi)^{\frac32}}\,\frac{\pi\alpha}g\sqrt{\sin(\pi\alpha)}
\,e^{\th_1+\th_2+\th_3}\prod_{j<l}^3\frac{F_{\text{min}}(\th_j-\th_l)}{e^{\th_j}+e^{\th_l}}\;.
\end{align}
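For $n=1$ the correspondence between \erf{koubek} and the first expression above can be verified symbolically: the determinant and the pair product are trivially $1$, so the $\mathcal{O}(k)$ coefficient of $F_1(k)$, divided by $g$ (from $e^{kg\phi}\supset kg\,\phi$), must reproduce $\3pt{0}{\phi}{\th_1}$. A short \texttt{sympy} sketch of this check (treating $\sin(\pi\alpha)$ as a positive symbol \texttt{s} and $F_{\text{min}}(i\pi)$ as \texttt{Fmin}):

```python
import sympy as sp

k, alpha, g, Fmin, s = sp.symbols('k alpha g Fmin s', positive=True)
# s stands for sin(pi*alpha), Fmin for F_min(i*pi)

# n = 1 case of the generating formula: det M_1 = 1, empty pair product
F1 = sp.sin(k*sp.pi*alpha)/s * sp.sqrt(4*s/Fmin)

coeff_k = F1.series(k, 0, 2).removeO().coeff(k, 1) / g   # divide by g from e^{k g phi}
target = 2/sp.sqrt(Fmin) * sp.pi*alpha/(g*sp.sqrt(s))    # one-particle FF of phi

# exact rational sample point; the difference must vanish identically
subs = {alpha: sp.Rational(3, 10), g: sp.Rational(17, 10),
        Fmin: sp.Rational(9, 10), s: sp.Rational(1, 2)}
assert sp.simplify((coeff_k - target).subs(subs)) == 0
```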
Now we can apply the rule \erf{final} with $p=0$, $q=1$ and
$N=1,2,\dots$. It is useful to note that
\be{}
c_l\,\alpha\to\frac{2c}{\pi\hbar}\;,\qquad \frac{\pi\alpha}{g}\to\frac{\sqrt{c}}{2\pi}\;.
\ee
From \erf{Fminlim} we see that $F_{\text{min}}(i\pi)\to1$ and that the
surviving $F_{\text{min}}$ factors exactly cancel the $f$-functions
appearing in \erf{final}. Taking care of the pre-factors the double
limit yields a final result which coincides exactly with the
expressions \erf{psiFF}.
Similarly, the first Form Factors of $\phi^2$ obtained from
\erf{koubek} are
\bes
\label{hopp}
\begin{align}
\3pt{0}{\phi^2}{\th_1,\th_2}&=\frac8{F_{\text{min}}(i\pi)}\frac{\pi^2\alpha^2}
{g^2\,\sin(\pi\alpha)}\,F_\text{min}(\th_1-\th_2)\;,\\
\3pt{0}{\phi^2}{\th_1,\th_2,\th_3,\th_4}&=
\frac{32}{F_{\text{min}}(i\pi)^2}\frac{\pi^2\alpha^2}{g^2}
\,\prod_{j<l}^4\frac{F_\text{min}(\th_j-\th_l)}{e^{\th_j}+e^{\th_l}}\;\times\\
&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\times
\left(e^{\th_1+\th_2+\th_3+\th_4}(e^{\th_1}+e^{\th_2}+e^{\th_3}+e^{\th_4})^2-
e^{2(\th_1+\th_2+\th_3+\th_4)}(e^{-\th_1}+e^{-\th_2}+e^{-\th_3}+e^{-\th_4})^2\right)\;.
\end{align}
\esu
Applying the double limit formula \erf{final} with $p=q=1$ to
\erf{hopp} one retrieves the matrix elements \erf{jFF}. We have also
checked the matrix elements with higher numbers of particles, finding
perfect agreement with the corresponding Bethe Ansatz matrix elements.
\section{Conclusions}
In this paper we have analyzed the close correspondence between matrix
elements of Bethe Ansatz models and Form Factors of relativistic
integrable field theories. If the former models can be regarded as
non-relativistic limits of the latter theories, then the Bethe Ansatz
matrix elements can be efficiently obtained as the non-relativistic
expressions of the corresponding Form Factors, whose explicit
computation is much simpler.
We have discussed in detail this correspondence between the Quantum
Non-Linear Schr\"odinger model and the Sinh--Gordon model, where we
gave a universal method to compute all the matrix elements of every
local operator, but there are strong arguments in favor of its
validity for other pairs of models as well. Indeed, both in Bethe
Ansatz models and quantum field theories the main properties of matrix
elements are dictated by their $S$-matrices: therefore, if there is a
mapping between the Hilbert spaces and operators of the two theories
and the expressions of the two $S$-matrices coincide in the
non-relativistic limit of the quantum field theory, these two facts
induce a mapping between the matrix elements of the two theories. The
use of Form Factors may help in solving most of the technical
obstacles that have prevented so far the computation of matrix
elements in non-relativistic Bethe Ansatz integrable models, thus
opening new perspectives on the computation of their correlation
functions. Along this direction it would be interesting to investigate
multi-component systems, where the BA results are limited due to the
nested nature of the Bethe Ansatz.
\vspace{3mm}
{\bf Acknowledgements} We thank G\'abor Tak\'acs and in particular
Jean-S\'ebastien Caux for useful discussions. M. K. and G. M. were
supported by the grants INSTANS (from ESF) and 2007JHLPEZ (from MIUR).
\section{Introduction}
With the discovery of the Higgs boson at the Large Hadron Collider (LHC) \cite{Aad:2012tfa,Chatrchyan:2012xdj}, the Standard Model of
Particle Physics (SM) is formally complete. While existing deviations between some SM predictions and experiment, such as for the anomalous
magnetic moment of the muon (see for example \cite{Bennett:2006fi, Jegerlehner:2018zrj}), are not conclusive, the SM is not a complete description of nature, as it neither accounts for astrophysical phenomena such as dark matter nor incorporates gravity.
Searches for physics beyond the SM have not been successful thus far. Exclusion limits for new particles introduced by SM extensions often exceed the TeV scale. These results suggest that new physics either interacts weakly with the SM, or that the masses of new particles are significantly above the electroweak scale. A well-known example is the Minimal
Supersymmetric Standard Model (MSSM)~\cite{Haber:1984rc}, which requires at least
TeV-scale stops in order to correctly predict the mass of the SM-like
Higgs boson of about $125\ensuremath{\;\text{GeV}}$, see for example
\cite{Allanach:2018fif,Bahl:2018zmf}.
The construction and phenomenological analysis of new physics models with heavy particles therefore constitute a suitable path to developing viable theories beyond the SM that are consistent with experimental results.
The observables predicted in models with large mass hierarchies,
however, usually suffer from large logarithmic quantum corrections,
which should be resummed in order to obtain precise predictions.
Effective Field Theories (EFTs) are a well-suited tool to resum these
large logarithmic corrections. Conventional matching procedures using
Feynman diagrams, however, are often cumbersome, in particular if the
new physics model contains many new heavy particles and/or complicated
interactions. The Universal One-Loop Effective Action (UOLEA)
\cite{Drozd:2015rsp,Ellis:2017jns,Summ:2018oko}, which has been
developed using functional methods
\cite{Gaillard:1985uh,Cheyette:1987qz,Haba:2011vi,Henning:2014wua,Henning:2016lyp,Ellis:2016enq,Fuentes-Martin:2016uol,Zhang:2016pja},
is a very promising tool to overcome these difficulties. It represents a
generic one-loop expression for the Wilson coefficients of an effective
Lagrangian for a given ultra-violet (UV) model with a large mass
hierarchy. Compared to the conventional matching using Feynman
diagrams, the calculation of the Wilson coefficients with the UOLEA is
straightforward, as it is expressed directly in terms of derivatives
of the UV Lagrangian w.r.t.\ the fields and simple rational functions.
In particular, no loop integration is necessary and spurious infrared
(IR) divergences are absent by construction.
To date, however, the UOLEA is not completely known: Only
contributions from scalar particles \cite{Drozd:2015rsp,Ellis:2017jns}
as well as conversion terms between dimensional regularization and dimensional reduction \cite{Summ:2018oko} have been
calculated at the generic one-loop level up to dimension 6. Whereas
some contributions from fermion loops can be calculated using these
results by squaring the fermionic trace, this treatment is incomplete
when the couplings depend on gamma matrices. Furthermore,
contributions from loops containing both scalars and fermions as well
as terms with open covariant derivatives are unknown.
In this publication we present all one-loop operators of the UOLEA up to
dimension 6 that involve both scalars and fermions in a generic form,
excluding contributions from open covariant derivatives.
Thus, our results go beyond the scope of
\cite{Drozd:2015rsp,Ellis:2017jns} and allow for an application of the
UOLEA to a broader set of new physics models. We publish our generic
expressions as a Mathematica ancillary file \texttt{UOLEA.m} in the
arXiv submission of this publication.
Due to their generic structure, the expressions are well suited to be
implemented into generic spectrum generators such as \texttt{SARAH}\@\xspace
\cite{Staub:2009bi,Staub:2010jh,Staub:2012pb, Staub:2013tta} or \texttt{FlexibleSUSY}\@\xspace
\cite{Athron:2014yba,Athron:2017fvs} or EFT codes in the spirit of
\texttt{CoDEx}\@\xspace \cite{Bakshi:2018ics,DasBakshi:2019vzr}.
This paper is structured as follows: In \secref{sec:calculation} we
present the calculation of the UOLEA involving both scalars and
fermions. We discuss the results in \secref{sec:results} and apply our generic expressions
to various EFTs of the SM and the MSSM in \secref{sec:applications}. Our conclusions are presented in \secref{sec:conclusions}, and the appendices collect further formulae and calculational details.
\section{Calculation of the scalar and fermionic UOLEA}
\label{sec:calculation}
\subsection{Functional matching in a scalar theory}
\label{sec: intro}
In this section we briefly review the most important steps in the
functional matching approach at one-loop level in a scalar theory and fix the notation
for the subsequent sections. Most of what is being discussed here is
well-documented in the literature and more details can be found in
\cite{Henning:2014wua,Henning:2016lyp,Fuentes-Martin:2016uol,Zhang:2016pja}. We
consider a generic UV theory that contains heavy real scalar fields,
collectively denoted by $\Phi$, with masses of the order $M$ and light
real scalar fields, denoted by $\phi$, with masses of the order $m$.
We assume that $m/M \ll 1$ such that an EFT expansion in the mass
ratio $m/M$ is valid. To perform the functional matching the
background field method is used to calculate the generator of
1-light-particle-irreducible (1LPI) Green's functions in the
UV-theory, $\Gamma_{\text{L,UV}}[\classicfield{\phi}]$, and the generator of
1-particle-irreducible (1PI) Green's functions in the EFT,
$\Gamma_{\ensuremath{\text{EFT}}\xspace}[\classicfield{\phi}]$, where $\classicfield{\phi}$ are light background
fields which obey the classical equation of motion. For the determination of these generating functionals beyond
tree-level a regularization scheme must be specified, which is chosen
to be dimensional regularization.\footnote{In principle the results
obtained in this paper can also be applied to a setting where
dimensional reduction is used as a regularization scheme, see \cite{Summ:2018oko}.} This
introduces a dependence on the unphysical renormalization scale $\mu$
in both generating functionals, and the matching condition becomes
\begin{align}
\Gamma_\text{L,UV}[\classicfield{\phi}]=\Gamma_\ensuremath{\text{EFT}}\xspace[\classicfield{\phi}],
\label{eq:scalar_matching_condition}
\end{align}
which is imposed at the matching scale $\mu$, order by order in perturbation theory. In principle the matching scale can be chosen arbitrarily; however, in order to avoid large logarithms, the choice $\mu=M$ is preferred. To calculate
$\Gamma_\text{L,UV}[\classicfield{\phi}]$ one starts from the generating functional
of Green's functions
\begin{align}
Z_\ensuremath{\text{UV}}\xspace[J_\Phi,J_\phi]=\int \mathcal{D}\Phi \mathcal{D}\phi \exp\left \{i \int \ensuremath{\mathrm{d}}^d x \, \big[\mathcal{L}_\ensuremath{\text{UV}}\xspace[\Phi,\phi]+J_{\Phi}(x) \Phi(x)+J_{\phi}(x) \phi(x) \big]\right\}
\end{align}
with sources $J_\Phi$ and $J_\phi$ and splits both the heavy and the
light fields into background parts $\classicfield{\Phi}$ and $\classicfield{\phi}$, respectively,
and fluctuations $\delta \Phi$ and $\delta \phi$, respectively, as
\begin{align}
\Phi&=\classicfield{\Phi}+\delta \Phi, \\
\phi&=\classicfield{\phi}+\delta \phi.
\end{align}
The background fields are defined to satisfy the classical equations
of motion,
\begin{align}
\frac{\delta \mathcal{L}_\ensuremath{\text{UV}}\xspace}{\delta \Phi}[\classicfield{\Phi},\classicfield{\phi}]+J_\Phi &= 0, &
\frac{\delta \mathcal{L}_\ensuremath{\text{UV}}\xspace}{\delta \phi}[\classicfield{\Phi},\classicfield{\phi}]+J_\phi &= 0.
\end{align}
The generating functional of the 1LPI Green's functions of the UV
model, $\Gamma_\text{L,UV}[\classicfield{\phi}]$, is then given by
\begin{align}
\Gamma_\text{L,UV}[\classicfield{\phi}]=-i \log Z_\ensuremath{\text{UV}}\xspace[J_\Phi=0,J_\phi]-\int \ensuremath{\mathrm{d}}^d x \, J_\phi(x) \classicfield{\phi}(x),
\end{align}
where $J_\Phi=0$ since we are only interested in Green's functions
with light external particles. Expanding the Lagrangian together with
the source terms around the background fields yields
\begin{align}
\mathcal{L}_\ensuremath{\text{UV}}\xspace[\Phi,\phi]+J_{\Phi}\Phi+J_{\phi}\phi &=
\mathcal{L}_\ensuremath{\text{UV}}\xspace[\classicfield{\Phi},\classicfield{\phi}] + J_{\Phi}\classicfield{\Phi} + J_{\phi}\classicfield{\phi}
-\frac{1}{2}\begin{pmatrix} \delta \Phi^T && \delta \phi^T \end{pmatrix}
\fluct
\begin{pmatrix}
\delta \Phi \\ \delta \phi
\end{pmatrix} + \cdots ,
\label{eq:actionExpansion} \\
\intertext{where the matrix}
\fluct &\equiv -
\begin{pmatrix}
\frac{\delta ^2 \mathcal{L}_\ensuremath{\text{UV}}\xspace}{\delta \Phi \delta \Phi}[\classicfield{\Phi},\classicfield{\phi}] && \frac{\delta ^2 \mathcal{L}_\ensuremath{\text{UV}}\xspace}{\delta \Phi \delta \phi}[\classicfield{\Phi},\classicfield{\phi}] \\
\frac{\delta ^2 \mathcal{L}_\ensuremath{\text{UV}}\xspace}{\delta \phi \delta \Phi}[\classicfield{\Phi},\classicfield{\phi}] && \frac{\delta ^2 \mathcal{L}_\ensuremath{\text{UV}}\xspace}{\delta \phi \delta \phi}[\classicfield{\Phi},\classicfield{\phi}]
\end{pmatrix}
\end{align}
is referred to as the fluctuation operator and the dots indicate
higher order terms in the expansion. Through the equations of motion
with $J_\Phi=0$ the heavy background fields can be expressed in terms
of the light ones such that $\classicfield{\Phi}=\classicfield{\Phi}[\classicfield{\phi}]$. In general,
$\classicfield{\Phi}[\classicfield{\phi}]$ is a non-local object and has to be expanded using a
local operator expansion. The one-loop part of
$\Gamma_\text{L,UV}[\classicfield{\phi}]$ is then found to be
\begin{align}
\Gamma^\text{1\ensuremath{\ell}}_\text{L,UV}[\classicfield{\phi}]= \frac{i}{2} \log \det \fluct.
\end{align}
The above can be re-written as \cite{Fuentes-Martin:2016uol}
\begin{align}
\Gamma^\text{1\ensuremath{\ell}}_\text{L,UV}[\classicfield{\phi}]&=\frac{i}{2} \log \det
\left(\fluct_{11} - \fluct_{12} \fluct_{22}^{-1} \fluct_{21}\right)
+\frac{i}{2}\log \det \fluct_{22}.
\end{align}
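The field-dependent entries of the fluctuation operator are obtained purely by differentiation. As a toy illustration (not the paper's model, and restricted to the potential part, i.e.\ ignoring the kinetic terms), consider $V = \tfrac12 M^2\Phi^2 + \tfrac12 m^2\phi^2 + \tfrac12\kappa\,\Phi\phi^2$; the blocks of the Hessian are then:

```python
import sympy as sp

Phi, phi, M, m, kappa = sp.symbols('Phi phi M m kappa')
# Toy potential: heavy scalar Phi, light scalar phi, cubic portal coupling kappa
V = M**2*Phi**2/2 + m**2*phi**2/2 + kappa*Phi*phi**2/2

fields = [Phi, phi]
# Potential part of the fluctuation operator: the Hessian of V
hessian = sp.Matrix([[sp.diff(V, a, b) for b in fields] for a in fields])

assert sp.simplify(hessian[0, 0] - M**2) == 0               # heavy-heavy block
assert sp.simplify(hessian[0, 1] - kappa*phi) == 0          # heavy-light mixing
assert sp.simplify(hessian[1, 1] - (m**2 + kappa*Phi)) == 0  # light-light block
```

The heavy-light mixing entry is precisely the kind of term that feeds into $\fluct_{12}\fluct_{22}^{-1}\fluct_{21}$ below.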
Using similar arguments for the Lagrangian of the EFT,
$\mathcal{L}_\ensuremath{\text{EFT}}\xspace[\phi]$, which only depends on the light fields, the
generator of 1PI Green's functions in the EFT can be calculated at
one-loop as
\begin{align}
\Gamma^\text{1\ensuremath{\ell}}_\ensuremath{\text{EFT}}\xspace[\classicfield{\phi}]=\int \ensuremath{\mathrm{d}}^d x \, \mathcal{L}_\ensuremath{\text{EFT}}\xspace^\text{1\ensuremath{\ell}}[\classicfield{\phi}]+\frac{i}{2} \log \det \left(-\frac{\delta ^2 \mathcal{L}_\ensuremath{\text{EFT}}\xspace^\ensuremath{\text{tree}}\xspace}{\delta \phi \delta \phi}[\classicfield{\phi}] \right),
\end{align}
where $\mathcal{L}_\ensuremath{\text{EFT}}\xspace^\text{1\ensuremath{\ell}}$ is the effective Lagrangian whose
couplings are given by the one-loop heavy or heavy/light field
contributions. The second term contains one-loop contributions
constructed from the tree-level part of the effective Lagrangian
$\mathcal{L}_\ensuremath{\text{EFT}}\xspace^\ensuremath{\text{tree}}\xspace$. The matching condition
\eqref{eq:scalar_matching_condition} then implies
\begin{align}
\label{eq:matchingCond}
\int \ensuremath{\mathrm{d}}^d x \, \mathcal{L}_\ensuremath{\text{EFT}}\xspace^\text{1\ensuremath{\ell}}[\phi] ={}& \frac{i}{2} \log \det
\left(\fluct_{11} - \fluct_{12} \fluct_{22}^{-1} \fluct_{21}\right) +\frac{i}{2} \log \det \fluct_{22} \nonumber \\
& -\frac{i}{2} \log \det \left(-\frac{\delta ^2 \mathcal{L}_\ensuremath{\text{EFT}}\xspace^\ensuremath{\text{tree}}\xspace}{\delta \phi \delta \phi}[\classicfield{\phi}] \right).
\end{align}
The functional determinants can be calculated by using the relation
$\log \det A = \Tr \log A$ and then evaluating the trace, which includes a trace over the Hilbert space as constructed
in \cite{Ball:1988xg}. It is convenient to calculate this trace in
position space and insert the identity in terms of a complete set of
momentum eigenstates. The calculation then involves an integral over
the four-momentum, and expansion by regions
\cite{Beneke:1997zp,Jantzen:2011nz} can be applied to the integrals
\cite{Fuentes-Martin:2016uol,Zhang:2016pja}. It can then be shown
\cite{Zhang:2016pja} that
\begin{align}
\mathcal{L}_\ensuremath{\text{EFT}}\xspace^\text{1\ensuremath{\ell}}[\phi]&=\frac{i}{2} \int \frac{\ensuremath{\mathrm{d}}^dq}{(2\pi)^d} \tr \log \left.
\left(\fluct_{11} - \fluct_{12} \fluct_{22}^{-1} \fluct_{21}\right)\right \rvert ^{P\rightarrow P-q} _\text{hard},
\label{eq:scalarres}
\end{align}
where the final result is given by the hard part of the integrals,
i.e.\ the part for which the integrands can be expanded in the region
$|q^2| \sim M^2 \gg m^2$ and where $P_\mu=i D_\mu$ with $D_\mu$ being
the gauge-covariant derivative. In \eqref{eq:scalarres} the trace over
the Hilbert space has already been performed and ``$\tr$'' designates a
trace over all indices. To derive the currently known form of the
purely scalar UOLEA \cite{Drozd:2015rsp,Ellis:2017jns} from
\eqref{eq:scalarres}, one expands the logarithm in a power series,
which is evaluated up to terms giving rise to operators of
mass dimension 6 and calculates the corresponding coefficients arising
from the momentum integral. In order to keep gauge-invariance
manifest in the resulting $\mathcal{L}_\ensuremath{\text{EFT}}\xspace^\text{1\ensuremath{\ell}}$ a covariant
derivative expansion~\cite{Gaillard:1985uh,Cheyette:1987qz} is used, where $P^\mu$ is kept as a whole
and not split into a partial derivative and gauge fields.
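The identity $\log\det A=\Tr\log A$ underlying these manipulations can be illustrated in a finite-dimensional setting (a numerical sketch only; the functional case requires the regularized momentum integral discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
A = X @ X.T + 4.0*np.eye(4)    # positive definite, so log A is well defined

lhs = np.log(np.linalg.det(A))
# Tr log A evaluated on the spectrum: log A has eigenvalues log(a_i)
rhs = np.sum(np.log(np.linalg.eigvalsh(A)))

assert abs(lhs - rhs) < 1e-8   # log det A = Tr log A
```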
\subsection{Fermionic contributions to the UOLEA}
\label{sec:calc}
In this section we consider a more general theory which contains both
scalar and fermionic fields and calculate their contributions to the
UOLEA.\footnote{As discussed in \cite{Zhang:2016pja} and
\secref{sec:results_vectors}, our final expression for the UOLEA can
also be used in a more general setting, including, for example,
massive vector fields.} This extends the results provided in
\cite{Ellis:2017jns} by including contributions to the matching from
loops containing both scalars and fermions as well as contributions
from purely fermionic loops. The latter are partially contained in the
results of \cite{Ellis:2017jns} since they can be computed by squaring
the purely fermionic trace. However, in this approach contributions
are missed whenever the interaction terms among fermions contain gamma
matrices. These terms would be classified as terms with open covariant
derivatives in the language used in \cite{Ellis:2017jns}. In our
treatment no assumptions are made about the spin structure of the
fermionic interactions. In principle, the calculation can be performed
using the method of covariant diagrams introduced in
\cite{Zhang:2016pja}; however, the calculation is presented starting
from first principles, for the following reason. There is some freedom
in choosing the degrees of freedom to integrate over in the path
integral. For complex scalar fields, for example, these can be the
real and imaginary parts of the field. Alternatively one can choose
the field and its conjugate as independent degrees of freedom. For
fermions similar choices can be made. The explicit form of the
fluctuation operator and the transformations necessary to bring the
Gaussian path integral into a form where it can be trivially
performed depend on this choice. To reduce
the number of these transformations we use a formalism where Dirac and
Majorana fermions are treated together in one multiplet in the
diagonalization step. Our formalism has the additional advantage
that the resulting expressions are more compact than when Dirac and
Majorana fermions are treated separately. In the following we present
our formalism in detail and introduce the notation used in the final
result.
As mentioned above, there is some freedom in the choice of degrees of
freedom to be integrated over. In order to treat real and complex
scalar fields on the same footing one could split all complex fields
into a real part and an imaginary part and perform the calculation
using these as the fundamental fields. However, for scalars it is
often desirable to maintain the complex fields as they might have some
physical interpretation in the effective theory. We therefore use the
field and its complex conjugate as independent degrees of freedom. Similarly, in order to
treat Dirac and Majorana fermions simultaneously without diagonalizing
the fluctuation operator among these it is convenient to treat any
Dirac fermion and its charge conjugate as independent degrees of
freedom. We collect all light and heavy scalars into the multiplets
$\phi$ and $\Phi$, respectively, and all light and heavy fermions into
the multiplets $\xi$ and $\Xi$, respectively, see table~\ref{table1}.
\begin{table}[tb]
\centering
\begin{tabular}{cll}
\toprule
Multiplet & Components & Description \\
\midrule
$\Xi$ & $\big(\Omega, \ccfield{\Omega}, \Lambda\big)^T$ & \pbox{20cm}{$\Omega$, $\ccfield{\Omega}$: heavy Dirac fermions \\
$\Lambda$: heavy Majorana fermions} \\
\midrule
$\Phi$ & $\big(\Sigma, \Sigma^*, \Theta\big)^T$ & \pbox{20cm}{$\Sigma$, $\Sigma^*$: heavy complex scalars \\
$\Theta$: heavy real scalars} \\
\midrule
$\xi$ & $\big(\omega, \ccfield{\omega}, \lambda\big)^T$ & \pbox{20cm}{$\omega$, $\ccfield{\omega}$: light Dirac fermions \\
$\lambda$: light Majorana fermions} \\
\midrule
$\phi$ & $\big(\sigma, \sigma^*, \theta\big)^T$ & \pbox{20cm}{$\sigma$, $\sigma^*$: light complex scalars \\
$\theta$: light real scalars} \\
\bottomrule
\end{tabular}
\caption{Contents of the different multiplets appearing in the calculation.}
\label{table1}
\end{table}
The charge conjugate of the Dirac spinor $\Omega$ is denoted as
$\ccfield{\Omega}=\mathcal{C}\bar{\Omega}^T$, with $\mathcal{C}$ being the charge
conjugation matrix. Similarly, we define for a light Dirac spinor
$\omega$, $\ccfield{\omega}=\mathcal{C}\bar{\omega}^T$. With these
definitions we may write the second variation of the Lagrangian as
follows
\begin{align}
\delta^2 \mathcal{L} &= \delta^2 \mathcal{L}_\text{S} +\frac{1}{2}\delta \Xi ^T \mathbf{\Delta}_\Xi \delta \Xi - \frac{1}{2} \delta \Xi ^T \tilde{\mathbf{X}}_{\Xi \Phi} \delta \Phi+\frac{1}{2} \delta \Phi ^T \tilde{\mathbf{X}}_{\Phi \Xi} \delta \Xi -\frac{1}{2} \delta \Xi ^T \tilde{\mathbf{X}}_{\Xi \phi} \delta \phi+\frac{1}{2} \delta \phi ^T \tilde{\mathbf{X}}_{\phi \Xi} \delta \Xi \nonumber \\
& \quad+\frac{1}{2} \delta \xi ^T \tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi +\frac{1}{2}\delta \Xi ^T \tilde{\mathbf{X}}_{\Xi \xi} \delta \xi+\frac{1}{2}\delta \xi ^T \mathbf{\Delta}_\xi \delta \xi -\frac{1}{2} \delta \xi ^T \tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi +\frac{1}{2} \delta \Phi ^T \tilde{\mathbf{X}}_{\Phi \xi} \delta \xi \nonumber \\ & \quad -\frac{1}{2} \delta \xi ^T \tilde{\mathbf{X}}_{\xi \phi} \delta \phi+\frac{1}{2} \delta \phi ^T \tilde{\mathbf{X}}_{\phi \xi} \delta \xi,
\label{eq:variation-part1}
\end{align}
where the pure scalar part is given by
\begin{align}
\delta^2 \mathcal{L}_\text{S}=-\frac{1}{2} \delta \Phi ^T \mathbf{\Delta}_{\Phi} \delta \Phi -\frac{1}{2} \delta \phi ^T \mathbf{\Delta}_{\phi} \delta \phi -\frac{1}{2} \delta \Phi ^T \tilde{\mathbf{X}}_{\Phi \phi} \delta \phi-\frac{1}{2}\delta \phi ^T \tilde{\mathbf{X}}_{\phi \Phi} \delta \Phi.
\label{eq:variation-part2}
\end{align}
In eqs.~\eqref{eq:variation-part1} and \eqref{eq:variation-part2} we
introduced the following abbreviations:
\begin{align}
\mathbf{\Delta}_\Xi &= \begin{pmatrix} X_{\Omega \Omega} && \mathcal{C}(\slashed{P}_{\ccfield{\Omega}}-M_\Omega+\mathcal{C}^{-1} X_{\Omega \bar{\Omega}}\mathcal{C}^{-1}) && X_{\Omega \Lambda} \\
\mathcal{C} (\slashed{P}_\Omega-M_\Omega+X_{\bar{\Omega} \Omega}) && \mathcal{C} X_{\bar{\Omega} \bar{\Omega}} \mathcal{C}^{-1} && \mathcal{C} X_{\bar{\Omega} \Lambda} \\
X_{\Lambda \Omega} && X_{\Lambda \bar{\Omega}}\mathcal{C} ^{-1} && \mathcal{C} (\slashed{P}_\Lambda-M_\Lambda+\mathcal{C} ^{-1} X_{\Lambda \Lambda})
\end{pmatrix}
\label{eq:initialDeltaXi}, \\
\tilde{\mathbf{X}}_{\Xi \Phi} &= \begin{pmatrix}
X_{\Omega \Sigma} && X_{\Omega \Sigma^{*}} && X_{\Omega \Theta} \\
\mathcal{C} X_{\bar{\Omega} \Sigma} && \mathcal{C} X_{\bar{\Omega} \Sigma ^*} && \mathcal{C} X_{\bar{\Omega} \Theta} \\
X_{\Lambda \Sigma} && X_{\Lambda \Sigma ^*} && X_{\Lambda \Theta}
\end{pmatrix}, \\
\tilde{\mathbf{X}}_{\Phi \Xi} &= \begin{pmatrix}
X_{\Sigma \Omega} && X_{\Sigma \bar{\Omega}} \mathcal{C} ^{-1} && X_{\Sigma \Lambda} \\
X_{\Sigma ^* \Omega} && X_{\Sigma ^* \bar{\Omega}} \mathcal{C} ^{-1} && X_{\Sigma ^* \Lambda} \\
X_{\Theta \Omega} && X_{\Theta \bar{\Omega}} \mathcal{C} ^{-1} && X_{\Theta \Lambda}
\end{pmatrix}, \\
\tilde{\mathbf{X}}_{\Xi \xi} &= \begin{pmatrix}
X_{\Omega \omega} && X_{\Omega \bar{\omega}}\mathcal{C} ^{-1} && X_{\Omega \lambda} \\
\mathcal{C} X_{\bar{\Omega} \omega} && \mathcal{C} X_{\bar{\Omega} \bar{\omega}} \mathcal{C}^{-1} && \mathcal{C} X_{\bar{\Omega} \lambda} \\
X_{\Lambda \omega} && X_{\Lambda \bar{\omega}} \mathcal{C} ^{-1} && X_{\Lambda \lambda}
\end{pmatrix}, \\
\mathbf{\Delta}_{\Phi} &= \begin{pmatrix}
X_{\Sigma \Sigma} && -P_{\Sigma^{*}}^2+M_\Sigma^2+X_{\Sigma \Sigma ^{*}} && X_{\Sigma \Theta} \\
-P_{\Sigma}^2+M_\Sigma^2+X_{\Sigma ^* \Sigma} && X_{\Sigma ^* \Sigma ^*} && X_{\Sigma^* \Theta} \\
X_{\Theta \Sigma} && X_{\Theta \Sigma ^*} && -P_\Theta^2+M_\Theta^2+X_{\Theta \Theta}
\end{pmatrix},
\label{eq:DeltaPhi}
\end{align}
with similar definitions for $\Phi\rightarrow \phi$ and
$\Xi\rightarrow \xi$. Here $P^\mu \equiv i D^\mu$, with $D^\mu$ being
the gauge-covariant derivative, is a matrix that is diagonal in field
space; the subscript indicates which gauge group generators are to be
used. Furthermore we have defined
\begin{align}
(X_{A B})_{ij}\equiv -\frac{\delta ^2 \Lag_{\text{UV,int}}}{\delta A_i \delta B_j},
\end{align}
where $\Lag_{\text{UV,int}}$ is the interaction Lagrangian of the UV theory and,
unless stated otherwise, $A$ and $B$ designate arbitrary (scalar or
fermionic) fields. The indices $i$ and $j$ collectively denote all of
the indices carried by the fields $A$ and $B$. Note that if $P^\mu _\Omega$ contains
generators $T^a _r$ of a representation $r$, then
$P^\mu_{\ccfield{\Omega}}$ contains the generators of the conjugate
representation $\bar{r}$, denoted by $T^{a} _{\bar{r}}$. The same
holds for the generators contained in $P^\mu _\Sigma$ and
$P^\mu _{\Sigma^*}$. Note also that \eqref{eq:variation-part2} is in
principle equivalent to the quadratic term in
\eqref{eq:actionExpansion} with the difference being that in
\eqref{eq:actionExpansion} all scalar fields are assumed to be real,
while in \eqref{eq:variation-part2} complex and real fields are
separate. The different signs in the fermionic terms in
\eqref{eq:variation-part1} result from using the anti-commutation
relation between fermions and derivatives w.r.t.\ fermions.
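The origin of this sign can be illustrated schematically with two
Grassmann numbers $\eta_1$ and $\eta_2$ (a toy example, independent of
the multiplets defined above); with a left-acting derivative one has
\begin{align}
\frac{\partial}{\partial \eta_1}\left(\eta_2\,\eta_1\right)
= -\frac{\partial}{\partial \eta_1}\left(\eta_1\,\eta_2\right)
= -\eta_2,
\end{align}
i.e.\ commuting a fermionic derivative past a fermionic field flips
the sign of the corresponding term.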
Before proceeding it is convenient to define
\begin{align}
\tilde{\mathds{1}}\equiv \begin{pmatrix}
0 && \mathds{1} && 0 \\
\mathds{1} && 0 && 0 \\
0 && 0 && \mathds{1}
\end{pmatrix},
\label{eq:fermID}
\end{align}
and rewrite \eqref{eq:initialDeltaXi} as
\begin{align}
\mathbf{\Delta}_\Xi &= \mathcal{C} \tilde{\mathds{1}} (\slashed{P}-M_\Xi) +\tilde{\mathbf{X}}_{\Xi \Xi},
\end{align}
where
\begin{align}
\slashed{P}-M_\Xi &= \begin{pmatrix}
\slashed{P}_{\Omega}-M_\Omega && 0 && 0 \\
0 && \slashed{P}_{\ccfield{\Omega}}-M_\Omega && 0 \\
0 && 0 && \slashed{P}_\Lambda-M_\Lambda
\end{pmatrix},\\
\tilde{\mathbf{X}}_{\Xi \Xi} &= \begin{pmatrix}
X_{\Omega \Omega} && X_{\Omega \bar{\Omega}}\mathcal{C}^{-1} && X_{\Omega \Lambda} \\
\mathcal{C} X_{\bar{\Omega} \Omega} && \mathcal{C} X_{\bar{\Omega} \bar{\Omega}} \mathcal{C}^{-1} && \mathcal{C} X_{\bar{\Omega} \Lambda} \\
X_{\Lambda \Omega} && X_{\Lambda \bar{\Omega}}\mathcal{C} ^{-1} && X_{\Lambda \Lambda}
\end{pmatrix}.
\end{align}
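Since $\tilde{\mathds{1}}$ merely exchanges the first two entries of a
multiplet, it is its own inverse,
\begin{align}
\tilde{\mathds{1}}^2 = \mathds{1}, \qquad \tilde{\mathds{1}}^{-1} = \tilde{\mathds{1}},
\end{align}
a property that is used repeatedly below when the fluctuation
operators are inverted and the explicit factors of
$\tilde{\mathds{1}}$ and $\mathcal{C}$ are cancelled.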
We rewrite \eqref{eq:DeltaPhi} in a similar way as
\begin{align}
\mathbf{\Delta}_\Phi=
\tilde{\mathds{1}}(-P^2+M^2_\Phi)+\tilde{\mathbf{X}}_{\Phi \Phi},
\end{align}
with
\begin{align}
-P^2+M^2_\Phi &= \begin{pmatrix}
-P_{\Sigma} ^2 + M^2_\Sigma && 0 && 0 \\
0 && -P_{\Sigma^*} ^2 + M^2_{\Sigma^*} && 0 \\
0 && 0 && -P_{\Theta} ^2 + M^2_{\Theta}
\end{pmatrix}, \\
\tilde{\mathbf{X}}_{\Phi \Phi} &= \begin{pmatrix} X_{\Sigma \Sigma} && X_{\Sigma \Sigma ^{*}} && X_{\Sigma \Theta} \\
X_{\Sigma ^* \Sigma} && X_{\Sigma ^* \Sigma ^*} && X_{\Sigma^* \Theta} \\
X_{\Theta \Sigma} && X_{\Theta \Sigma ^*} && X_{\Theta \Theta}\end{pmatrix}.
\end{align}
The calculation now proceeds by diagonalizing the quadratic variation
in terms of statistics in order to be able to perform the (Gaussian)
path integral. We first eliminate terms that mix scalar fluctuations
and fluctuations of light fermions $\xi$ by rewriting the second
variation as
\begin{align}
\delta ^2 \mathcal{L}_\xi ={}& \frac{1}{2} \delta \xi ^T \tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi +\frac{1}{2}\delta \Xi ^T \tilde{\mathbf{X}}_{\Xi \xi} \delta \xi+\frac{1}{2}\delta \xi ^T \mathbf{\Delta}_\xi \delta \xi -\frac{1}{2} \delta \xi ^T \tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi +\frac{1}{2} \delta \Phi ^T \tilde{\mathbf{X}}_{\Phi \xi} \delta \xi \nonumber \\ & -\frac{1}{2} \delta \xi ^T \tilde{\mathbf{X}}_{\xi \phi} \delta \phi+\frac{1}{2} \delta \phi ^T \tilde{\mathbf{X}}_{\phi \xi} \delta \xi \\
={}& \frac{1}{2} \left(\delta \xi^T+\left[\delta \Xi^T \tilde{\mathbf{X}}_{\Xi \xi}+\delta \Phi^T \tilde{\mathbf{X}}_{\Phi \xi}+\delta \phi^T \tilde{\mathbf{X}}_{\phi \xi}\right]\overleftarrow{\mathbf{\Delta}}_\xi^{-1}\right)\mathbf{\Delta}_\xi \nonumber \\ & \times \left(\delta \xi+\mathbf{\Delta}_\xi^{-1}\left[\tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\tilde{\mathbf{X}}_{\xi \phi} \delta \phi\right]\right) \nonumber \\
& -\frac{1}{2} \left[\delta \Xi^T \tilde{\mathbf{X}}_{\Xi \xi}+\delta \Phi^T \tilde{\mathbf{X}}_{\Phi \xi}+\delta \phi^T \tilde{\mathbf{X}}_{\phi \xi}\right]\mathbf{\Delta}_\xi^{-1}\left[\tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\tilde{\mathbf{X}}_{\xi \phi} \delta \phi\right].
\end{align}
In the last step we have introduced $\mathbf{\Delta}_\xi ^{-1}$, which
is the matrix-valued Green's function of $\mathbf{\Delta}_\xi$. The
matrix multiplication here also implies an integration, that is
\begin{multline}
\left(\mathbf{\Delta}_\xi^{-1}\left[\tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\tilde{\mathbf{X}}_{\xi \phi} \delta \phi\right]\right)(x)\equiv \\
\int \ensuremath{\mathrm{d}}^dy \; \mathbf{\Delta}_\xi^{-1}(x,y)\left[\tilde{\mathbf{X}}_{\xi \Xi}(y) \delta \Xi(y)-\tilde{\mathbf{X}}_{\xi \Phi}(y) \delta \Phi(y)-\tilde{\mathbf{X}}_{\xi \phi}(y) \delta \phi(y)\right].
\end{multline}
Similar to $\mathbf{\Delta}_\xi ^{-1}$ we define
$\overleftarrow{\mathbf{\Delta}}_\xi^{-1}$ in such a way that
\begin{align}
\int \ensuremath{\mathrm{d}}^dy \; f(y) \overleftarrow{\mathbf{\Delta}}_\xi ^{-1}(y,x) \overleftarrow{\mathbf{\Delta}} _\xi (x)=f(x),
\end{align}
where
$\overleftarrow{\mathbf{\Delta}} _\xi
(x)=-\overleftarrow{\slashed{P}}-M_{\xi}$. Next, we shift the light
fermion field as
\begin{align}
\delta \xi' &= \delta \xi+\mathbf{\Delta}_\xi^{-1}\left[\tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\tilde{\mathbf{X}}_{\xi \phi} \delta \phi\right],
\label{eq:xishift} \\
\delta \xi'^T &= \delta \xi^T+\left[\delta \Xi^T \tilde{\mathbf{X}}_{\Xi \xi}+\delta \Phi^T \tilde{\mathbf{X}}_{\Phi \xi}+\delta \phi^T \tilde{\mathbf{X}}_{\phi \xi}\right]\overleftarrow{\mathbf{\Delta}}_\xi^{-1},
\label{eq:xishift_T}
\end{align}
under which the path integral measure is invariant. Since $\xi$ is a
multiplet of Majorana-like spinors, the two shifts \eqref{eq:xishift}
and \eqref{eq:xishift_T} are not independent. The required relation
between the two shifts is proven in \appref{sec:shifts}. After the
shifts have been performed we arrive at
\begin{align}
\delta ^2 \mathcal{L}_\xi
={}& \frac{1}{2} \delta \xi'^{T} \mathbf{\Delta}_\xi \delta \xi'-\frac{1}{2}\delta \Xi^T \tilde{\mathbf{X}}_{\Xi \xi} \mathbf{\Delta}_\xi^{-1} \tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi+\frac{1}{2}\delta \Xi ^T \tilde{\mathbf{X}}_{\Xi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi+\frac{1}{2}\delta \Xi ^T \tilde{\mathbf{X}}_{\Xi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \phi} \delta \phi \nonumber \\
& +\frac{1}{2} \delta \Phi^T \tilde{\mathbf{X}}_{\Phi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\frac{1}{2} \delta \Phi^T \tilde{\mathbf{X}}_{\Phi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\frac{1}{2} \delta \Phi^T \tilde{\mathbf{X}}_{\Phi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \phi} \delta \phi \nonumber \\
& +\frac{1}{2} \delta \phi^T \tilde{\mathbf{X}}_{\phi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \Xi} \delta \Xi-\frac{1}{2} \delta \phi^T \tilde{\mathbf{X}}_{\phi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \Phi} \delta \Phi-\frac{1}{2} \delta \phi^T \tilde{\mathbf{X}}_{\phi \xi} \mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi \phi} \delta \phi.
\label{eq:original_2nd_variation}
\end{align}
We proceed by eliminating terms
that mix scalar fluctuations and fluctuations of heavy fermions $\Xi$.
It is convenient to first introduce
\begin{align}
\bar{\mathbf{X}}_{A B}&\equiv \tilde{\mathbf{X}}_{A B}-\tilde{\mathbf{X}}_{A \xi}\mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi B},
\label{eq:quantitiesWithTilde_1} \\
\bar{\mathbf{\Delta}}_A&\equiv \mathbf{\Delta}_A-\tilde{\mathbf{X}}_{A \xi}\mathbf{\Delta}_\xi ^{-1} \tilde{\mathbf{X}}_{\xi A},
\label{eq:quantitiesWithTilde_2}
\end{align}
and write the second variation as
\begin{align}
\delta^2 \mathcal{L}= \delta^2 \bar{\mathcal{L}}_\text{S}+\frac{1}{2}\delta \Xi ^T \bar{\mathbf{\Delta}}_{\Xi} \delta \Xi- \frac{1}{2} \delta \Xi ^T \bar{\mathbf{X}}_{\Xi \Phi} \delta \Phi+\frac{1}{2} \delta \Phi ^T \bar{\mathbf{X}}_{\Phi \Xi} \delta \Xi -\frac{1}{2} \delta \Xi ^T \bar{\mathbf{X}} _{\Xi \phi} \delta \phi+\frac{1}{2} \delta \phi ^T \bar{\mathbf{X}} _{\phi \Xi} \delta \Xi.
\label{eq:d2_Lag_step_1}
\end{align}
In \eqref{eq:d2_Lag_step_1} the first term on the r.h.s.,
$\delta^2 \bar{\mathcal{L}}_\text{S}$, is obtained by replacing
$\tilde{\mathbf{X}}_{A B}$ and $\mathbf{\Delta}_A$ in
$\delta^2 \mathcal{L}_\text{S}$ via the relations
\eqref{eq:quantitiesWithTilde_1}--\eqref{eq:quantitiesWithTilde_2}.
By shifting $\delta \Xi$ in a similar way,
\begin{align}
\delta \Xi' &= \delta \Xi-\bar{\mathbf{\Delta}}_\Xi^{-1}\left[\bar{\mathbf{X}}_{\Xi \Phi} \delta \Phi+\bar{\mathbf{X}}_{\Xi \phi} \delta \phi\right], \\
\delta \Xi'^T &= \delta \Xi ^T+\left[\delta \Phi^T \bar{\mathbf{X}}_{\Phi \Xi} +\delta \phi ^T \bar{\mathbf{X}}_{\phi \Xi} \right] \overleftarrow{\bar{\mathbf{\Delta}}}_\Xi^{-1},
\label{eq:Xishift}
\end{align}
one finds
\begin{align}
\delta^2 \mathcal{L} ={}&
-\frac{1}{2} \delta \Phi ^T (\bar{\mathbf{\Delta}}_{\Phi}-\bar{\mathbf{X}}_{\Phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \Phi}) \delta \Phi
-\frac{1}{2} \delta \phi ^T (\bar{\mathbf{\Delta}}_{\phi}-\bar{\mathbf{X}}_{\phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \phi}) \delta \phi \nonumber \\
& -\frac{1}{2} \delta \Phi ^T (\bar{\mathbf{X}}_{\Phi \phi}-\bar{\mathbf{X}}_{\Phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \phi}) \delta \phi \nonumber \\
& -\frac{1}{2}\delta \phi ^T (\bar{\mathbf{X}}_{\phi \Phi}-\bar{\mathbf{X}}_{\phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \Phi}) \delta \Phi+\frac{1}{2} \delta \xi'^{T} \mathbf{\Delta}_\xi \delta \xi'+\frac{1}{2} \delta \Xi'^{T} \bar{\mathbf{\Delta}}_\Xi \delta \Xi' \\
={}& -\frac{1}{2} \begin{pmatrix}
\delta \Phi^T && \delta \phi^T
\end{pmatrix}
\begin{pmatrix}
\bar{\mathbf{\Delta}}_{\Phi}-\bar{\mathbf{X}}_{\Phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \Phi} && \bar{\mathbf{X}}_{\Phi \phi}-\bar{\mathbf{X}}_{\Phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \phi} \\
\bar{\mathbf{X}}_{\phi \Phi}-\bar{\mathbf{X}}_{\phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \Phi} && \bar{\mathbf{\Delta}}_{\phi}-\bar{\mathbf{X}}_{\phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \phi}
\end{pmatrix}
\begin{pmatrix}
\delta \Phi \\
\delta \phi
\end{pmatrix}\nonumber \\
& +\frac{1}{2} \delta \xi'^{T} \mathbf{\Delta}_\xi \delta \xi'+\frac{1}{2} \delta \Xi'^{T} \bar{\mathbf{\Delta}}_\Xi \delta \Xi' \\
\equiv{}& -\frac{1}{2}\begin{pmatrix}
\delta \Phi^T && \delta \phi^T
\end{pmatrix} \fluctS
\begin{pmatrix}
\delta \Phi \\
\delta \phi
\end{pmatrix} +\frac{1}{2} \delta \xi'^{T} \mathbf{\Delta}_\xi \delta \xi'+\frac{1}{2} \delta \Xi'^{T} \bar{\mathbf{\Delta}}_\Xi \delta \Xi' \\
\equiv{}& \delta^2\mathcal{L}_{\text{SF}} + \delta^2\mathcal{L}_{\text{F}},
\label{eq:second_var}
\end{align}
with
\begin{align}
\delta^2\mathcal{L}_{\text{SF}} &= -\frac{1}{2}
\begin{pmatrix}\delta \Phi^T && \delta \phi^T\end{pmatrix} \fluctS
\begin{pmatrix}\delta \Phi \\ \delta \phi\end{pmatrix}, \\
\delta^2\mathcal{L}_{\text{F}} &= \frac{1}{2} \delta \xi'^{T} \mathbf{\Delta}_\xi \delta \xi'
+ \frac{1}{2} \delta \Xi'^{T} \bar{\mathbf{\Delta}}_\Xi \delta \Xi'.
\end{align}
At this point there are no terms including both a scalar and a
fermionic fluctuation and the path integrals over scalars and fermions
can be performed separately. As has been pointed out in
\cite{Fuentes-Martin:2016uol}, it is convenient to diagonalize the
scalar part such that
\begin{align}
\fluctS = \begin{pmatrix}
\hat{\mathbf{\Delta}}_\Phi-\hat{\mathbf{X}}_{\Phi \phi} \hat{\mathbf{\Delta}}_{\phi}^{-1}\hat{\mathbf{X}}_{\phi \Phi} && 0 \\
0 && \hat{\mathbf{\Delta}}_\phi
\end{pmatrix},
\end{align}
where
\begin{align}
\hat{\mathbf{\Delta}}_A &= \bar{\mathbf{\Delta}}_{A}-\bar{\mathbf{X}}_{A \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi A},\\
\hat{\mathbf{X}}_{A B} &= \bar{\mathbf{X}}_{A B}-\bar{\mathbf{X}}_{A \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi B},
\end{align}
with $A,B \in \{\phi, \Phi\}$. The contribution from this mixed
scalar/fermionic part to the effective action is then given by
\begin{align}
\mathcal{L}_{\ensuremath{\text{EFT}}\xspace\text{,SF}}^\text{1\ensuremath{\ell}}&=\frac{i}{2}\int \frac{\ensuremath{\mathrm{d}}^d q}{(2\pi)^d}\left.\left[ \tr \log
\left(\hat{\mathbf{\Delta}}_\Phi-\hat{\mathbf{X}}_{\Phi \phi} \hat{\mathbf{\Delta}}_{\phi}^{-1}\hat{\mathbf{X}}_{\phi \Phi}\right) + \tr \log
\hat{\mathbf{\Delta}}_\phi\right]\right\rvert ^{P\rightarrow P-q} _\text{hard}
\label{eq:scalarcontribution}
\end{align}
and it can be calculated using a covariant
derivative expansion as outlined in e.g.\
\cite{Zhang:2016pja}. However, care has to be taken since
$\hat{\mathbf{\Delta}}_\phi$ contains contributions from heavy
fermions and hence does not vanish completely in the hard region of
the momentum integration. The corresponding contributions can be
calculated by using
\begin{align}
\log \det \left(\bar{\mathbf{\Delta}}_{\phi}-\bar{\mathbf{X}}_{\phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \phi}\right)=\log \det \left(\bar{\mathbf{\Delta}}_{\phi}\right)+\log \det \left(\mathds{1}-\bar{\mathbf{\Delta}}_{\phi}^{-1}\bar{\mathbf{X}}_{\phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \phi}\right),
\end{align}
where the first term on the right hand side vanishes in the hard
region as it only contains contributions from light fields.
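Using $\log\det = \tr\log$, the second term can then be expanded as a
power series (shown here schematically, before the barred quantities
are themselves expanded):
\begin{align}
\log \det \left(\mathds{1}-\bar{\mathbf{\Delta}}_{\phi}^{-1}\bar{\mathbf{X}}_{\phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \phi}\right)
= -\sum_{n=1}^{\infty}\frac{1}{n}\tr\left[\bar{\mathbf{\Delta}}_{\phi}^{-1}\bar{\mathbf{X}}_{\phi \Xi}\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi \phi}\right]^n,
\end{align}
so that every term contains at least one heavy propagator
$\bar{\mathbf{\Delta}}^{-1}_{\Xi}$ and therefore contributes in the
hard region.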
Since a large number of terms is generated when re-expressing the
hatted and barred quantities in terms of the quantities appearing in
the original variation \eqref{eq:original_2nd_variation}, we refrain
from writing out the result explicitly. It is, however, useful to
consider the
expansion of the hatted operators in order to understand the
ingredients entering the final result. In particular we will show that
it is possible to absorb all explicit factors of $\tilde{\mathds{1}}$
and $\mathcal{C}$ by appropriate re-definitions of
$\tilde{\mathbf{X}}_{A B}$. In order to achieve that we first
expand
$(\mathbf{\Delta}_\xi ^{-1})_{P_\mu\to P_\mu-q_\mu}\equiv
\mathbf{\Delta}_\xi ^{-1}(q)$ as
\begin{align}
\mathbf{\Delta}_\xi ^{-1}(q)&=\left[\mathcal{C} \tilde{\mathds{1}}(\slashed{P}-\slashed{q}-M_\xi) +\tilde{\mathbf{X}}_{\xi \xi}\right]^{-1} \\
&= \left[\mathds{1}-\left(-\slashed{q}-M_{\xi}\right)^{-1}\tilde{\mathds{1}}\mathcal{C}^{-1}\left(-\mathcal{C} \tilde{\mathds{1}}\slashed{P}-\tilde{\mathbf{X}}_{\xi \xi}\right)\right]^{-1}\left(-\slashed{q}-M_{\xi}\right)^{-1}\tilde{\mathds{1}}\mathcal{C}^{-1} \\
&= \sum_{n=0} ^\infty \left[\left(-\slashed{q}-M_{\xi}\right)^{-1}\tilde{\mathds{1}}\mathcal{C}^{-1}\left(-\mathcal{C} \tilde{\mathds{1}}\slashed{P}-\tilde{\mathbf{X}}_{\xi \xi}\right) \right]^n \left(-\slashed{q}-M_{\xi}\right)^{-1}\tilde{\mathds{1}}\mathcal{C}^{-1} \\
&= \sum_{n=0} ^\infty \left[\left(-\slashed{q}-M_{\xi}\right)^{-1}\left(-\slashed{P}-\mathbf{X}_{\xi \xi}\right) \right]^n \left(-\slashed{q}-M_{\xi}\right)^{-1}\tilde{\mathds{1}}\mathcal{C}^{-1},
\label{eq:DeltaxiInv}
\end{align}
where we defined
\begin{align}
\mathbf{X}_{\xi \xi}\equiv \tilde{\mathds{1}}\mathcal{C} ^{-1} \tilde{\mathbf{X}}_{\xi \xi}.
\end{align}
Then
\eqref{eq:quantitiesWithTilde_1}--\eqref{eq:quantitiesWithTilde_2}
become
\begin{align}
\bar{\mathbf{X}}_{A B}&= \tilde{\mathbf{X}}_{A B}-\tilde{\mathbf{X}}_{A \xi}\sum _{n=0} ^{\infty} \left[\left(-\slashed{q}-M_{\xi}\right) ^{-1} \left(-\mathbf{X} _{\xi \xi}-\slashed{P}\right) \right]^n \left(-\slashed{q}-M_{\xi}\right) ^{-1} \mathbf{X}_{\xi B}, \\
\bar{\mathbf{\Delta}}_A&= \mathbf{\Delta}_A-\tilde{\mathbf{X}}_{A \xi}\sum _{n=0} ^{\infty} \left[\left(-\slashed{q}-M_{\xi}\right) ^{-1} \left(-\mathbf{X} _{\xi \xi}-\slashed{P}\right) \right]^n \left(-\slashed{q}-M_{\xi}\right) ^{-1} \mathbf{X}_{\xi A},
\end{align}
where we introduced
$\mathbf{X}_{\xi B}\equiv \mathcal{C} ^{-1}\tilde{\mathds{1}}
\tilde{\mathbf{X}}_{\xi B}$. Next we consider
\begin{align}
\bar{\mathbf{\Delta}}_\Xi ^{-1}(q) &= \Bigg[\mathcal{C} \tilde{\mathds{1}}\left(-\slashed{q}-M_{\Xi}\right) +\mathcal{C} \tilde{\mathds{1}} \slashed{P}+\tilde{\mathbf{X}}_{\Xi \Xi} \nonumber \\
&~~~~~~~ -\tilde{\mathbf{X}}_{\Xi \xi}\sum _{n=0} ^{\infty} \left[\left(-\slashed{q}-M_{\xi}\right) ^{-1} \left(-\mathbf{X} _{\xi \xi}-\slashed{P}\right) \right]^n \left(-\slashed{q}-M_{\xi}\right) ^{-1} \mathbf{X}_{\xi \Xi} \Bigg]^{-1} \\
&=\sum _{m=0} ^{\infty} \left\{\mathcal{K}_\Xi ^{-1} \left(-\mathbf{X} _{\Xi \Xi}-\slashed{P}\right) + \mathcal{K}_\Xi ^{-1} \mathbf{X}_{\Xi \xi}\sum _{n=0} ^{\infty} \left[\mathcal{K}_\xi ^{-1} \left(-\mathbf{X} _{\xi \xi}-\slashed{P}\right) \right]^n \mathcal{K}_\xi ^{-1} \mathbf{X}_{\xi \Xi} \right\}^m \mathcal{K}_\Xi ^{-1} \mathcal{C} ^{-1} \tilde{\mathds{1}},
\label{eq:DeltaXiTildeInv}
\end{align}
where
\begin{align}
\mathcal{K}_A &\equiv \left(-\slashed{q}-M_A\right), \\
\mathbf{X}_{\Xi \xi} &\equiv \mathcal{C} ^{-1} \tilde{\mathds{1}} \tilde{\mathbf{X}}_{\Xi \xi}.
\end{align}
Note that in \eqref{eq:DeltaxiInv} and \eqref{eq:DeltaXiTildeInv} the
expressions for $\mathbf{\Delta}_\xi ^{-1}$ and
$\bar{\mathbf{\Delta}}^{-1}_{\Xi}$ contain the factor
$\mathcal{C} ^{-1} \tilde{\mathds{1}}$ on the very right. This means that in
the combination
\begin{align}
\bar{\mathbf{\Delta}}^{-1}_{\Xi}\bar{\mathbf{X}}_{\Xi B}&=\bar{\mathbf{\Delta}}^{-1}_{\Xi} (\tilde{\mathbf{X}}_{\Xi B}-\tilde{\mathbf{X}}_{\Xi \xi}\mathbf{\Delta}_\xi ^{-1} \tilde{\mathds{1}} \mathcal{C} \mathcal{C}^{-1} \tilde{\mathds{1}} \tilde{\mathbf{X}} _{\xi B}) \\
&=\bar{\mathbf{\Delta}}^{-1}_{\Xi} \tilde{\mathds{1}} \mathcal{C} \mathcal{C}^{-1} \tilde{\mathds{1}} (\tilde{\mathbf{X}}_{\Xi B}-\tilde{\mathbf{X}}_{\Xi \xi}\mathbf{\Delta}_\xi ^{-1} \tilde{\mathds{1}} \mathcal{C} \mathcal{C}^{-1} \tilde{\mathds{1}} \tilde{\mathbf{X}} _{\xi B}) \\
&=\bar{\mathbf{\Delta}}^{-1}_{\Xi} \tilde{\mathds{1}} \mathcal{C} (\mathbf{X}_{\Xi B}-\mathbf{X}_{\Xi \xi}\mathbf{\Delta}_\xi ^{-1} \tilde{\mathds{1}} \mathcal{C} \mathbf{X}_{\xi B}),
\end{align}
all appearances of $\mathcal{C}$ and $\tilde{\mathds{1}}$ cancel once
$\bar{\mathbf{\Delta}}^{-1}_{\Xi}$ and $\mathbf{\Delta}_\xi ^{-1}$ are
inserted and $\tilde{\mathbf{X}}_{A B}$ is expressed in terms of
$\mathbf{X}_{A B}$ with
$\mathbf{X}_{A B}=\mathcal{C}^{-1} \tilde{\mathds{1}}
\tilde{\mathbf{X}}_{A B}$. A similar property holds for
$\tilde{\mathbf{X}}_{\Phi B}$ and $\tilde{\mathbf{X}}_{\phi B}$,
which only appear as
$\mathbf{X}_{\Phi B}=\tilde{\mathds{1}}\tilde{\mathbf{X}}_{\Phi
B}$ and
$\mathbf{X}_{\phi B}=\tilde{\mathds{1}}\tilde{\mathbf{X}}_{\phi
B}$.
Hence, the result can be expressed
entirely through the matrices $\mathbf{X}_{A B}$ and neither
$\tilde{\mathds{1}}$ nor $\mathcal{C}$ explicitly appears in the final
operator structures.
To complete the calculation we need to compute the purely fermionic
part of the second variation \eqref{eq:second_var}, which reads
\begin{align}
\delta^2 \mathcal{L}_\text{F} = \frac{1}{2}\delta \Xi'^T \bar{\mathbf{\Delta}}_\Xi \delta \Xi'
+ \frac{1}{2} \delta \xi'^T \mathbf{\Delta}_\xi \delta \xi'.
\end{align}
Again, we are only interested in the contribution from the hard
region, where the light-only part $\mathbf{\Delta} _\xi$ does not
contribute. Hence we only need to consider
$\bar{\mathbf{\Delta}}_\Xi$. We find
\begin{align}
\tr \log \Big(&\mathbf{\Delta}_\Xi(q) - \tilde{\mathbf{X}}_{\Xi \xi}\mathbf{\Delta}_\xi ^{-1}(q) \tilde{\mathbf{X}}_{\xi \Xi}\Big)\nonumber \\ &= \tr \log \left(\mathcal{C} \tilde{\mathds{1}} \mathcal{K}_\Xi+\mathcal{C} \tilde{\mathds{1}} \slashed{P}+\tilde{\mathbf{X}}_{\Xi \Xi}-\tilde{\mathbf{X}}_{\Xi \xi}\mathbf{\Delta}_\xi ^{-1}(q) \tilde{\mathbf{X}}_{\xi \Xi}\right) \\
&= \tr \log \left( \mathcal{C} \tilde{\mathds{1}}\mathcal{K}_\Xi \right) + \tr \log \left[ \mathds{1}-\mathcal{K}_\Xi^{-1} \left(-\slashed{P}-\mathbf{X}_{\Xi \Xi}+\mathbf{X}_{\Xi \xi}\mathbf{\Delta}_\xi ^{-1}(q) \tilde{\mathbf{X}}_{\xi \Xi} \right) \right],
\label{eq:trlog_fermionic}
\end{align}
where the first term on the r.h.s.\ of \eqref{eq:trlog_fermionic} is
absorbed in the normalization of the path integral. Inserting
$\mathbf{\Delta}_\xi ^{-1}(q)$ from \eqref{eq:DeltaxiInv} yields
\begin{align}
\mathcal{L}_\text{\ensuremath{\text{EFT}}\xspace,F}^{1\ensuremath{\ell}} &= \frac{i}{2} \sum_{n=1}^{\infty} \frac{1}{n} \tr \left[\mathcal{K}_\Xi^{-1}\left(-\slashed{P}-\mathbf{X}_{\Xi \Xi}+\mathbf{X} _{\Xi \xi}\sum _{m=0} ^{\infty} \left[\mathcal{K} _\xi ^{-1} \left(-\slashed{P}-\mathbf{X} _{\xi \xi}\right) \right]^m \mathcal{K} _\xi ^{-1} \mathbf{X} _{\xi \Xi} \right) \right]^n.
\end{align}
In order to obtain the final UOLEA from the sum
\begin{align}
\mathcal{L}_{\ensuremath{\text{EFT}}\xspace}^{1\ensuremath{\ell}} = \mathcal{L}_{\ensuremath{\text{EFT}}\xspace\text{,SF}}^{1\ensuremath{\ell}} + \mathcal{L}_\text{\ensuremath{\text{EFT}}\xspace,F}^{1\ensuremath{\ell}}
\label{eq:UOLEA_final}
\end{align}
one needs to expand all functional traces on the r.h.s.\ of
\eqref{eq:UOLEA_final} to a given mass dimension and calculate the
coefficients and operator structures. In this expansion we keep
$P^\mu$ as a whole to obtain a manifestly gauge-invariant effective
Lagrangian.
It can be shown, using the Baker--Campbell--Hausdorff formula, that
every $P_\mu$ appears in commutators of the form
$[P_\mu,\bullet]$ \cite{Gaillard:1985uh,Cheyette:1987qz}. To combine
all $P^\mu$ operators into commutators one can either apply the
Baker--Campbell--Hausdorff formula explicitly in the calculation, as
was done in \cite{Drozd:2015rsp}, or construct a basis for these
commutators and then solve a system of equations to fix the
coefficients of the basis elements, as pointed out in
\cite{Zhang:2016pja}. In the present work we employ the second
method.
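The commutators of $P^\mu$ are manifestly gauge covariant; for
example, in the convention $D_\mu = \partial_\mu - i g A^a_\mu T^a$
(an assumed convention, chosen here only for illustration), the
commutator of two $P$'s reproduces the field-strength tensor,
\begin{align}
[P_\mu, P_\nu] = i g F^a_{\mu\nu} T^a,
\end{align}
so that operator structures built from such commutators yield a
gauge-invariant effective Lagrangian.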
Our final expression for $\mathcal{L}_{\ensuremath{\text{EFT}}\xspace}^{1\ensuremath{\ell}}$ is contained in the
ancillary file \texttt{UOLEA.m} in the arXiv submission of this
publication and will be described further in the next section.
\section{Discussion of the result}
\label{sec:results}
\subsection{Published operators and coefficients}
\label{sec:results_ops_coeffs}
In the following we describe the calculated scalar/fermionic
operators, which we publish in the ancillary file \texttt{UOLEA.m} in
the arXiv submission of this publication. The file contains the
following four lists:
\begin{itemize}
\item \texttt{mixedLoopsNoP}: Mixed scalar/fermionic operators without
$P^\mu$.
\item \texttt{mixedLoopsWithP}: Mixed scalar/fermionic operators with
$P^\mu$.
\item \texttt{fermionicLoopsNoP}: Purely fermionic operators without
$P^\mu$.
\item \texttt{fermionicLoopsWithP}: Purely fermionic operators with
$P^\mu$.
\end{itemize}
For convenience, the additional list \texttt{uolea} is defined, which
is the union of the four lists above. The lists contain the
calculated operators in the form
$\{F^\alpha(M_i,M_j,\dots),\mathcal{O}^\alpha_{ij\cdots}\}$, where $F^\alpha(M_i,M_j,\dots)$
is the coefficient of the operator $\mathcal{O}^\alpha_{ij\cdots}$, which is
expressed through the integrals
$\tilde{\mathcal{I}} [q^{2n_c}]^{n_i n_j \dots n_L} _{i j \dots 0}$ defined in
\appref{sec:loop_functions}. The operators $\mathcal{O}^\alpha_{ij\cdots}$ are
expressed in terms of the symbols
$X[\text{A},\text{B}][i,j]$, with
$\text{A}, \text{B}\in \{\text{S},\text{s},\text{F},\text{f}\}$, which
correspond to the matrices defined in \secref{sec:calc} as follows:
\begin{align*}
X[\text{S},\text{F}] &\equiv \mathbf{X}_{\Phi \Xi}= \begin{pmatrix}
X_{\Sigma ^* \Omega} && X_{\Sigma ^* \bar{\Omega}} \mathcal{C} ^{-1} && X_{\Sigma ^* \Lambda} \\
X_{\Sigma \Omega} && X_{\Sigma \bar{\Omega}} \mathcal{C} ^{-1} && X_{\Sigma \Lambda} \\
X_{\Theta \Omega} && X_{\Theta \bar{\Omega}} \mathcal{C} ^{-1} && X_{\Theta \Lambda}
\end{pmatrix}, \nonumber \\
X[\text{s},\text{F}] &\equiv \mathbf{X}_{\phi \Xi}= \begin{pmatrix}
X_{\sigma ^* \Omega} && X_{\sigma ^* \bar{\Omega}} \mathcal{C} ^{-1} && X_{\sigma ^* \Lambda} \\
X_{\sigma \Omega} && X_{\sigma \bar{\Omega}} \mathcal{C} ^{-1} && X_{\sigma \Lambda} \\
X_{\theta \Omega} && X_{\theta \bar{\Omega}} \mathcal{C} ^{-1} && X_{\theta \Lambda}
\end{pmatrix}, \nonumber \\
X[\text{S},\text{f}] &\equiv \mathbf{X}_{\Phi \xi}=\begin{pmatrix}
X_{\Sigma ^* \omega} && X_{\Sigma ^* \bar{\omega}} \mathcal{C} ^{-1} && X_{\Sigma ^* \lambda} \\
X_{\Sigma \omega} && X_{\Sigma \bar{\omega}} \mathcal{C} ^{-1} && X_{\Sigma \lambda} \\
X_{\Theta \omega} && X_{\Theta \bar{\omega}} \mathcal{C} ^{-1} && X_{\Theta \lambda}
\end{pmatrix}, \nonumber \\
X[\text{s},\text{f}] &\equiv \mathbf{X}_{\phi \xi}= \begin{pmatrix}
X_{\sigma ^* \omega} && X_{\sigma ^* \bar{\omega}} \mathcal{C} ^{-1} && X_{\sigma ^* \lambda} \\
X_{\sigma \omega} && X_{\sigma \bar{\omega}} \mathcal{C} ^{-1} && X_{\sigma \lambda} \\
X_{\theta \omega} && X_{\theta \bar{\omega}} \mathcal{C} ^{-1} && X_{\theta \lambda}
\end{pmatrix}, \nonumber \\
X[\text{F},\text{S}] &\equiv \mathbf{X}_{\Xi \Phi} = \begin{pmatrix}
X_{\bar{\Omega} \Sigma} && X_{\bar{\Omega} \Sigma ^*} && X_{\bar{\Omega} \Theta} \\
\mathcal{C}^{-1} X_{\Omega \Sigma} && \mathcal{C}^{-1} X_{\Omega \Sigma^{*}} && \mathcal{C}^{-1} X_{\Omega \Theta} \\
\mathcal{C}^{-1} X_{\Lambda \Sigma} && \mathcal{C}^{-1} X_{\Lambda \Sigma ^*} && \mathcal{C}^{-1} X_{\Lambda \Theta}
\end{pmatrix}, \nonumber \\
X[\text{f},\text{S}] &\equiv \mathbf{X}_{\xi \Phi} = \begin{pmatrix}
X_{\bar{\omega} \Sigma} && X_{\bar{\omega} \Sigma ^*} && X_{\bar{\omega} \Theta} \\
\mathcal{C}^{-1} X_{\omega \Sigma} && \mathcal{C}^{-1} X_{\omega \Sigma^{*}} && \mathcal{C}^{-1} X_{\omega \Theta} \\
\mathcal{C}^{-1} X_{\lambda \Sigma} && \mathcal{C}^{-1} X_{\lambda \Sigma ^*} && \mathcal{C}^{-1} X_{\lambda \Theta}
\end{pmatrix}, \nonumber \\
X[\text{F},\text{s}] &\equiv \mathbf{X}_{\Xi \phi} = \begin{pmatrix}
X_{\bar{\Omega} \sigma} && X_{\bar{\Omega} \sigma ^*} && X_{\bar{\Omega} \theta} \\
\mathcal{C}^{-1} X_{\Omega \sigma} && \mathcal{C}^{-1} X_{\Omega \sigma^{*}} && \mathcal{C}^{-1} X_{\Omega \theta} \\
\mathcal{C}^{-1} X_{\Lambda \sigma} && \mathcal{C}^{-1} X_{\Lambda \sigma ^*} && \mathcal{C}^{-1} X_{\Lambda \theta}
\end{pmatrix}, \nonumber \\
X[\text{f},\text{s}] &\equiv \mathbf{X}_{\xi \phi} = \begin{pmatrix}
X_{\bar{\omega} \sigma} && X_{\bar{\omega} \sigma ^*} && X_{\bar{\omega} \theta} \\
\mathcal{C}^{-1} X_{\omega \sigma} && \mathcal{C}^{-1} X_{\omega \sigma^{*}} && \mathcal{C}^{-1} X_{\omega \theta} \\
\mathcal{C}^{-1} X_{\lambda \sigma} && \mathcal{C}^{-1} X_{\lambda \sigma ^*} && \mathcal{C}^{-1} X_{\lambda \theta}
\end{pmatrix}, \nonumber \\
X[\text{F},\text{F}] &\equiv \mathbf{X}_{\Xi \Xi}=\begin{pmatrix}
X_{\bar{\Omega} \Omega} && X_{\bar{\Omega} \bar{\Omega}} \mathcal{C}^{-1} && X_{\bar{\Omega} \Lambda} \\
\mathcal{C}^{-1} X_{\Omega \Omega} && \mathcal{C} ^{-1} X_{\Omega \bar{\Omega}}\mathcal{C}^{-1} && \mathcal{C} ^{-1} X_{\Omega \Lambda} \\
\mathcal{C} ^{-1} X_{\Lambda \Omega} && \mathcal{C} ^{-1} X_{\Lambda \bar{\Omega}}\mathcal{C} ^{-1} && \mathcal{C} ^{-1} X_{\Lambda \Lambda}
\end{pmatrix}, \nonumber \\
X[\text{f},\text{f}] &\equiv \mathbf{X}_{\xi \xi}=\begin{pmatrix}
X_{\bar{\omega} \omega} && X_{\bar{\omega} \bar{\omega}} \mathcal{C} ^{-1} && X_{\bar{\omega} \lambda} \\
\mathcal{C} ^{-1} X_{\omega \omega} && \mathcal{C} ^{-1} X_{\omega \bar{\omega}}\mathcal{C}^{-1} && \mathcal{C} ^{-1} X_{\omega \lambda} \\
\mathcal{C} ^{-1} X_{\lambda \omega} && \mathcal{C} ^{-1} X_{\lambda \bar{\omega}}\mathcal{C} ^{-1} && \mathcal{C} ^{-1} X_{\lambda \lambda}
\end{pmatrix}, \nonumber \\
X[\text{F},\text{f}] &\equiv \mathbf{X}_{\Xi \xi}=\begin{pmatrix}
X_{\bar{\Omega} \omega} && X_{\bar{\Omega} \bar{\omega}} \mathcal{C}^{-1} && X_{\bar{\Omega} \lambda} \\
\mathcal{C} ^{-1} X_{\Omega \omega} && \mathcal{C} ^{-1} X_{\Omega \bar{\omega}}\mathcal{C} ^{-1} && \mathcal{C}^{-1} X_{\Omega \lambda} \\
\mathcal{C} ^{-1} X_{\Lambda \omega} && \mathcal{C} ^{-1} X_{\Lambda \bar{\omega}} \mathcal{C} ^{-1} && \mathcal{C}^{-1} X_{\Lambda \lambda}
\end{pmatrix}, \nonumber \\
X[\text{f},\text{F}] &\equiv \mathbf{X}_{\xi \Xi}=\begin{pmatrix}
X_{\bar{\omega} \Omega} && X_{\bar{\omega} \bar{\Omega}} \mathcal{C} ^{-1} && X_{\bar{\omega} \Lambda} \\
\mathcal{C} ^{-1} X_{\omega \Omega} && \mathcal{C} ^{-1} X_{\omega \bar{\Omega}}\mathcal{C} ^{-1} && \mathcal{C}^{-1} X_{\omega \Lambda} \\
\mathcal{C} ^{-1} X_{\lambda \Omega} && \mathcal{C} ^{-1} X_{\lambda \bar{\Omega}} \mathcal{C} ^{-1} && \mathcal{C}^{-1} X_{\lambda \Lambda}
\end{pmatrix}, \nonumber \\
X[\text{S},\text{S}] &\equiv \mathbf{X}_{\Phi \Phi}=\begin{pmatrix}
X_{\Sigma ^* \Sigma} && X_{\Sigma ^* \Sigma ^*} && X_{\Sigma^* \Theta} \\
X_{\Sigma \Sigma} && X_{\Sigma \Sigma ^{*}} && X_{\Sigma \Theta} \\
X_{\Theta \Sigma} && X_{\Theta \Sigma ^*} && X_{\Theta \Theta}\end{pmatrix}, \nonumber \\
X[\text{S},\text{s}] &\equiv \mathbf{X}_{\Phi \phi}=\begin{pmatrix}
X_{\Sigma ^* \sigma} && X_{\Sigma ^* \sigma ^*} && X_{\Sigma^* \theta} \\
X_{\Sigma \sigma} && X_{\Sigma \sigma ^{*}} && X_{\Sigma \theta} \\
X_{\Theta \sigma} && X_{\Theta \sigma ^*} && X_{\Theta \theta}\end{pmatrix}, \nonumber \\
X[\text{s},\text{S}] &\equiv \mathbf{X}_{\phi \Phi}=\begin{pmatrix}
X_{\sigma ^* \Sigma} && X_{\sigma ^* \Sigma ^*} && X_{\sigma^* \Theta} \\
X_{\sigma \Sigma} && X_{\sigma \Sigma ^{*}} && X_{\sigma \Theta} \\
X_{\theta \Sigma} && X_{\theta \Sigma ^*} && X_{\theta \Theta}\end{pmatrix}, \nonumber \\
X[\text{s},\text{s}] &\equiv \mathbf{X}_{\phi \phi}=\begin{pmatrix}
X_{\sigma ^* \sigma} && X_{\sigma ^* \sigma ^*} && X_{\sigma^* \theta} \\
X_{\sigma \sigma} && X_{\sigma \sigma ^{*}} && X_{\sigma \theta} \\
X_{\theta \sigma} && X_{\theta \sigma ^*} && X_{\theta \theta}\end{pmatrix} .
\end{align*}
The indices $i,j\in\mathbb{N}$ label a specific element of the
respective matrix. The full one-loop effective action is then obtained
as
\begin{align}
\mathcal{L}_{\ensuremath{\text{EFT}}\xspace}^{1\ensuremath{\ell}} = \kappa\sum_\alpha \sum _{ij \cdots} F^\alpha(M_i,M_j,\dots) \mathcal{O}^\alpha_{ij\cdots},
\label{eq:L_all_generic}
\end{align}
where $\kappa=1/(4\pi)^2$ and the sum over $\alpha$ runs over all operators and their
corresponding coefficients.
Several comments regarding the use of the operators of
\eqref{eq:L_all_generic} are in order. First, no assumptions have
been made about the gamma-matrix structure of the second derivatives
$X_{A B}$. The result is therefore valid for any spin $1/2$ spinor
structure appearing in these derivatives.
Second, care has to be taken to retain the poles of the coefficients,
since the gamma algebra has to be performed in $d = 4 - \epsilon$
dimensions and may generate finite contributions when combined with
the poles. The function \texttt{ExpandEps}, contained in
the ancillary Mathematica file \texttt{LoopFunctions.m} in the arXiv
submission of this paper, can be used to extract these finite
contributions.
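The effect can be illustrated with a short symbolic computation: a factor of $d = 4 - \epsilon$ produced by the gamma algebra, multiplied by a $1/\epsilon$ pole, leaves a finite remainder. The following SymPy sketch is illustrative only and is not the actual \texttt{ExpandEps} routine:

```python
import sympy as sp

# L stands for log(m^2/mu^2); a typical divergent loop function has
# the form 2/eps - L, cf. the explicit values quoted later in the text.
eps = sp.symbols('epsilon', positive=True)
L = sp.symbols('L', real=True)
pole = 2 / eps - L

# The gamma algebra in d = 4 - eps dimensions produces factors of
# d = g^mu_mu; combined with the pole these generate finite terms.
d = 4 - eps
expanded = sp.series(sp.expand(d * pole), eps, 0, 1).removeO()

# Relative to the naive 4 * pole, a finite shift of -2 remains.
finite_shift = sp.simplify(expanded - 4 * pole)
print(finite_shift)  # -2
```

The finite shift $-2$ is exactly the kind of contribution that would be lost if the $1/\epsilon$ poles were discarded before the gamma algebra is performed.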
Third, some of the coefficients diverge in the case of degenerate
masses if the degenerate limit is not taken carefully. The most
convenient way to deal with degenerate masses may be to first set the
masses equal, which modifies the integrals appearing in the
coefficients $F^\alpha(M_i,M_j,\dots)$, and to then calculate these
integrals using the reduction algorithm implemented in the ancillary
Mathematica file \texttt{LoopFunctions.m}.
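The origin of such spurious divergences can be seen in a minimal SymPy sketch (this is not the reduction algorithm of \texttt{LoopFunctions.m}, and the loop-function normalization used below, with finite parts $\tilde{\mathcal{I}}^1_i \to m_i^2(1-\log m_i^2/\mu^2)$ and $\tilde{\mathcal{I}}^2_i \to -\log m_i^2/\mu^2$, is an assumption made purely for illustration): the partial-fraction identity $\tilde{\mathcal{I}}^{11}_{ij} = (\tilde{\mathcal{I}}^1_i - \tilde{\mathcal{I}}^1_j)/(m_i^2-m_j^2)$ becomes $0/0$ when the masses are naively set equal, while the careful limit is finite.

```python
import sympy as sp

mi2, mj2, mu2 = sp.symbols('mi2 mj2 mu2', positive=True)

# Finite part of the tadpole integral I~^1 (the 2/eps pole is dropped);
# this normalization is an assumption for the sketch.
def I1(m2):
    return m2 * (1 - sp.log(m2 / mu2))

# Partial fractioning 1/((q^2-mi^2)(q^2-mj^2)) gives
# I~^{11}_{ij} = (I~^1_i - I~^1_j)/(mi^2 - mj^2): 0/0 for equal masses.
I11 = (I1(mi2) - I1(mj2)) / (mi2 - mj2)

# The careful degenerate limit is finite and reproduces the finite
# part of I~^2_i = -log(mi^2/mu^2).
deg_limit = sp.limit(I11, mj2, mi2)
I2_finite = -sp.log(mi2 / mu2)
print(sp.simplify(deg_limit - I2_finite))  # 0
```

Setting the masses equal before the loop integration, as suggested above, sidesteps this $0/0$ structure altogether.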
Last, no $c_s$ or $c_F$ factors appear in the final
result, in contrast to \cite{Drozd:2015rsp,Ellis:2017jns,Summ:2018oko}. In our
formulation these prefactors have been fixed by our treatment of the
different kinds of fields and are absorbed in the coefficients.
\subsection{Infrared and ultraviolet divergences}
The operator coefficients contain infrared divergences, which may
seem surprising since the
infrared physics should cancel in the matching. These poles appear
because expansion by regions was
used to perform the calculation, as discussed in
\secref{sec: intro}. For a heavy-light loop this means that the one-loop integral
$I_\text{full}$ in the full integration region is split into a part
$I_\text{soft}$, calculated in the soft region, and a part $I_\text{hard}$,
calculated in the hard region,
\begin{align}
I_\text{full} = I_\text{soft} + I_\text{hard}.
\end{align}
Only the hard part remains, since the soft part is canceled in the
matching by the EFT contribution. If, for example, $I_\text{full}$ is
finite, a UV-divergence in the soft part of the integration region
cancels against an IR-divergence in the hard part under the condition
\begin{align}
\frac{1}{\eps_{\text{UV}}}=\frac{1}{\eps_{\text{IR}}},
\label{eq: epsrel}
\end{align}
which ensures that scaleless integrals vanish in dimensional
regularization. Since the soft part is removed in the matching, the
IR-divergence of the hard part remains. However, such an IR-divergence
should be interpreted as a subtracted UV-divergence coming from the
EFT, as indicated by \eqref{eq: epsrel}. It is not surprising that
these divergences do not cancel in the matching, since the UV behavior
of the EFT differs from that of the UV-theory. However, since
these genuine UV-divergences may still combine with an $\epsilon$ from
the gamma algebra to yield finite contributions, they must be treated
in the same way as $1/\epsilon$ poles stemming from the UV behavior of
the UV-theory. After performing the trace and the gamma algebra,
remaining terms containing $1/\epsilon$ poles can be discarded, which
amounts to performing a matching calculation in the $\ensuremath{\overline{\text{MS}}}\xspace$ scheme.
\subsection{Application to models with massive vector fields}
\label{sec:results_vectors}
The operators calculated in
this paper can be used to treat massive vector fields in Feynman gauge
as described in \cite{Zhang:2016pja}. Furthermore, couplings of
fermions to massless gauge bosons can be correctly accounted for using
the same technique, and the treatment is complete as long as the
UV-theory is renormalizable. This follows from the fact that the
gauge-kinetic term of a fermion $\psi$ is linear in the covariant
derivative so that $X_{A_\mu \psi}$ is independent of $P_\mu$. This is
not the case for scalar fields, since the kinetic term is quadratic in
$P_\mu$, which means that even for a renormalizable UV-theory there
are further operators stemming from the coupling of scalar fields to
massless gauge bosons. Of course, once one considers the matching of a
UV-theory that already contains higher-dimensional operators with
covariant derivatives to an EFT, further operators arise also for
fermions. These missing operators all stem from open covariant
derivatives and are currently unknown.
\subsection{Extraction of $\beta$-functions}
As was pointed out in \cite{Henning:2016lyp}, functional methods can be
used to calculate $\beta$-functions since they allow for the
computation of the loop-corrected generator of 1PI Green's
functions. At one loop we have
\begin{align}
\Gamma[\Phi]=\Gamma^\ensuremath{\text{tree}}\xspace[\Phi]+\Gamma^{1\ensuremath{\ell}}[\Phi],
\end{align}
where $\Gamma^\ensuremath{\text{tree}}\xspace[\Phi]=S[\Phi]$ is the tree-level generator of 1PI
Green's functions, which is simply the classical action. Assume that
$\Gamma^\ensuremath{\text{tree}}\xspace[\Phi]$ contains a kinetic term $\mathcal{O}_K[\Phi]$ and
an interaction term $g \mathcal{O}_g[\Phi]$. Then, in general, the
one-loop contribution will contain corrections to these, which depend
on the renormalization scale $\mu$, so that
\begin{align}
\Gamma[\Phi] \supset \int \ensuremath{\mathrm{d}}^4 x \; \big\{a_K(\mu) \mathcal{O}_K[\Phi]+a_g(\mu) \mathcal{O}_g[\Phi]\big\}.
\end{align}
Canonically normalizing the kinetic term
for the field $\Phi$ yields
\begin{align}
\Gamma[\Phi] \supset \int \ensuremath{\mathrm{d}}^4 x \; \big\{\mathcal{O}_K[\Phi]+a'_g(\mu) \mathcal{O}_g[\Phi]\big\},
\end{align}
where
\begin{align}
\mu \frac{\ensuremath{\mathrm{d}}}{\ensuremath{\mathrm{d}}\mu}a'_g(\mu)=0
\label{eq: running equation}
\end{align}
due to the Callan-Symanzik equation
\cite{Callan:1970yg,Symanzik:1970rt}. Eq.~\eqref{eq: running equation}
can be solved for the one-loop $\beta$-function of the coupling $g$.
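As a toy illustration (all names and the coefficient $b$ below are hypothetical), suppose the canonically normalized coefficient reads $a'_g(\mu) = g(\mu) + \kappa\, b\, g^3(\mu) \log(\mu^2/M^2)$; imposing \eqref{eq: running equation} then yields the familiar one-loop form $\beta_g = -2\kappa b g^3$:

```python
import sympy as sp

mu, M, kappa, b, beta_g = sp.symbols('mu M kappa b beta_g', positive=True)
g = sp.Function('g')  # the running coupling g(mu)

# Hypothetical one-loop corrected coefficient after canonical normalization.
a_g = g(mu) + kappa * b * g(mu)**3 * sp.log(mu**2 / M**2)

# Impose mu d/dmu a'_g = 0 and substitute beta_g = mu dg/dmu.
eq = sp.expand(mu * sp.diff(a_g, mu)).subs(sp.Derivative(g(mu), mu),
                                           beta_g / mu)

# Solve and keep the leading O(kappa) piece: the one-loop beta function.
beta = sp.solve(eq, beta_g)[0]
beta_1l = sp.series(beta, kappa, 0, 2).removeO()
print(beta_1l)  # the one-loop beta function, -2 b kappa g^3
```

The subleading terms dropped by the expansion in $\kappa$ correspond to higher-loop contributions, which are beyond the accuracy of the one-loop effective action.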
In a specific sense, the UOLEA represents an expression for
$\Gamma^{1\ensuremath{\ell}}$ of a model with operators up to dimension
6, and it can thus be used to calculate the one-loop $\beta$-functions
of all dimension 6 operators for any given Lagrangian as described
above. In order to calculate $\Gamma^{1\ensuremath{\ell}}$, the UOLEA operators
\eqref{eq:L_all_generic} must be re-interpreted as follows: since one
is interested in the full $\Gamma^{1\ensuremath{\ell}}$, no distinction between
heavy and light fields is made and all fields are treated
as ``heavy'' fields. As a consequence, the one-loop effective action
of a scalar theory is given by
\begin{align}
\Gamma[\Phi] = S[\Phi]
+ \frac{i}{2} \log\det\left(-\frac{\delta^2 \mathcal{L}_\text{int}}{\delta\Phi\delta\Phi}\right),
\label{eq:gamma_1L_heavy}
\end{align}
where $\Phi$ represents the collection of all scalar fields contained
in the model. The expression on the r.h.s.\ of
\eqref{eq:gamma_1L_heavy} can be expanded as outlined e.g.\ in
\cite{Drozd:2015rsp,Henning:2016lyp,Fuentes-Martin:2016uol} and one
arrives at the heavy-only part of the UOLEA \eqref{eq:L_all_generic},
which contains only operators built out of derivatives of the
Lagrangian with respect to ``heavy'' $\Phi$ fields.
This procedure is not restricted to a theory with only
scalars and can also be applied to models with both scalars and
fermions using the heavy-only part of \eqref{eq:L_all_generic}.
However, higher-dimensional operators with covariant derivatives have
not been treated in this work and hence their influence on the running
of the couplings cannot be determined using our result.
\section{Applications}
\label{sec:applications}
\subsection{Integrating out the top quark from the Standard Model}
As a simple first example we consider the corrections to the Higgs
tadpole and mass parameter that arise when integrating out the top
quark from the Standard Model. The interaction Lagrangian considered
here contains only one coupling,
\begin{align}
\mathcal{L}_\ensuremath{\text{SM}}\xspace \supset -\frac{g_t}{\sqrt{2}}h \bar{t}t,
\end{align}
where $h$ denotes the physical Higgs field, $t$ is the top quark and
$g_t$ is the top Yukawa coupling. The relevant operators of the UOLEA
\eqref{eq:UOLEA_final} are given by
\begin{align}
\frac{1}{\kappa} \mathcal{L}_\ensuremath{\text{EFT}}\xspace^{1\ensuremath{\ell}} = \tr \Bigg\lbrace & \frac{1}{4} m_{\Xi i} m_{\Xi j}^3 \tilde{\mathcal{I}} ^{13} _{ij} [P_\mu,(\mathbf{X}_{\Xi \Xi})_{ij}][P^\mu,(\mathbf{X}_{\Xi \Xi})_{ji}]
\nonumber \\ & -\frac{1}{2} \tilde{\mathcal{I}}[q^4] ^{22} _{ij} \gamma^\nu [P_\mu,(\mathbf{X}_{\Xi \Xi})_{ij}]\gamma_\nu[P^\mu,(\mathbf{X}_{\Xi \Xi})_{ji}]
\nonumber \\ & - \tilde{\mathcal{I}}[q^4] ^{22} _{ij} \gamma^\nu [P_\nu,(\mathbf{X}_{\Xi \Xi})_{ij}]\gamma_\mu[P^\mu,(\mathbf{X}_{\Xi \Xi})_{ji}]
\nonumber \\ & +\frac{1}{2} m_{\Xi i} \tilde{\mathcal{I}} ^1 _i (\mathbf{X}_{\Xi \Xi})_{ii}
\nonumber \\ & -\frac{1}{4} m_{\Xi i} m_{\Xi j} \tilde{\mathcal{I}} ^{11} _{ij} (\mathbf{X}_{\Xi \Xi})_{ij} (\mathbf{X}_{\Xi \Xi})_{ji}
\nonumber \\ & -\frac{1}{4} \tilde{\mathcal{I}}[q^2] ^{11} _{ij} \gamma ^\mu (\mathbf{X}_{\Xi \Xi})_{ij} \gamma_\mu (\mathbf{X}_{\Xi \Xi})_{ji}
\Bigg\rbrace,
\label{eq:UOLEALAG-topout}
\end{align}
where $m_{\Xi i}$ denotes the mass of the $i$th component of $\Xi$. The matrix $(\mathbf{X}_{\Xi \Xi})$ is given by
\begin{align}
(\mathbf{X}_{\Xi \Xi})_{\alpha \beta ij} = \begin{pmatrix}
(X_{\bar{t}t})_{\alpha \beta ij} & 0 \\
0 & \mathcal{C}^{-1} _{\alpha \rho} (X_{t\bar{t}})_{\rho \sigma ij} \mathcal{C}^{-1} _{\sigma \beta}
\end{pmatrix}
=
-\frac{g_t}{\sqrt{2}}h \delta_{\alpha \beta} \delta_{ij} \mathbf{1}_{2\times 2},
\label{eq:top-derivative}
\end{align}
with $\alpha,\beta = 1,\ldots,4$ being spinor indices and
$i,j = 1,2,3$ being color indices. In \eqref{eq:UOLEALAG-topout} we
included terms with two covariant derivatives in order to obtain the
field redefinition that is necessary to canonically normalize the
Higgs field $\hat{h}$ in the effective
theory. Since this redefinition arises from the correction to the
kinetic term only, we can set $P^\mu = i\partial ^\mu$. Inserting
\eqref{eq:top-derivative} into \eqref{eq:UOLEALAG-topout} and
calculating the trace yields
\begin{align}
\frac{1}{\kappa}\mathcal{L}_\ensuremath{\text{EFT}}\xspace ^{1\ensuremath{\ell}} ={}& -3g_t^2 \left(m_t^4 \tilde{\mathcal{I}} ^4 _t-2d\tilde{\mathcal{I}} [q^4]^4 _{t}-4 \tilde{\mathcal{I}} [q^4] ^4 _{t} \right) (\partial_\mu h) (\partial^\mu h) \nonumber \\
& -3g_t^2 \left(\tilde{\mathcal{I}} ^2_t m_t^2+d \tilde{\mathcal{I}} [q^2]^2_t \right)h^2-\frac{12}{\sqrt{2}} g_t m_t \tilde{\mathcal{I}}^1_t h,
\label{eq: HiggsEFT}
\end{align}
where $d = 4 - \epsilon = g^\mu _\mu$ has to be retained since the
integrals contain poles in $1/\epsilon$. The loop functions $\tilde{\mathcal{I}}$ are
defined in \appref{sec:loop_functions}. It is customary to introduce
the canonically normalized field $\hat{h}$, which is related to $h$
through
\begin{align}
\hat{h}=\left(1+\frac{1}{2}\delta Z_h \right) h.
\end{align}
From \eqref{eq: HiggsEFT} one can read off $\delta Z_h$ to be
\begin{align}
\delta Z_h = -6g_t^2\left(m_t^4 \tilde{\mathcal{I}} ^4 _t-2d\tilde{\mathcal{I}} [q^4]^4 _{t}-4 \tilde{\mathcal{I}} [q^4] ^4 _{t}\right)
= -6g_t^2\left(m_t^4 \tilde{\mathcal{I}} ^4_t - 12 \tilde{\mathcal{I}} [q^4] ^4_{t} + \frac{1}{6}\right).
\label{eq:delta_Zh_top}
\end{align}
The loop functions that appear in \eqref{eq: HiggsEFT} and
\eqref{eq:delta_Zh_top} can be calculated with the Mathematica file
\texttt{LoopFunctions.m} and read
\begin{align}
\tilde{\mathcal{I}}^1_t &= 2 \tilde{\mathcal{I}}[q^2]^2_t = m_t^2 \left(\frac{2}{\epsilon} + 1 - \log\frac{m_t^2}{\mu^2}\right), \\
\tilde{\mathcal{I}}^2_t &= 24 \tilde{\mathcal{I}}[q^4]^4_t = \frac{2}{\epsilon}-\log\frac{m_t^2}{\mu^2}, \\
\tilde{\mathcal{I}}^4_t &= \frac{1}{6 m_t^4}.
\end{align}
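The second equality in \eqref{eq:delta_Zh_top} can be verified symbolically: with $\tilde{\mathcal{I}}[q^4]^4_t = \tilde{\mathcal{I}}^2_t/24$ and $d = 4-\epsilon$, the combination $-2d\,\tilde{\mathcal{I}}[q^4]^4_t - 4\tilde{\mathcal{I}}[q^4]^4_t$ equals $-12\,\tilde{\mathcal{I}}[q^4]^4_t + 1/6$ up to terms of order $\epsilon$. A short SymPy check:

```python
import sympy as sp

eps = sp.symbols('epsilon', positive=True)
L = sp.symbols('L', real=True)  # L = log(m_t^2/mu^2)

d = 4 - eps
Iq4 = (2 / eps - L) / 24        # I~[q^4]^4_t = I~^2_t / 24

# Both forms of the q^4 terms in delta Z_h (common factors stripped off).
lhs = -2 * d * Iq4 - 4 * Iq4
rhs = -12 * Iq4 + sp.Rational(1, 6)

# The two expressions agree up to O(eps), which drops out of delta Z_h.
remainder = sp.series(sp.expand(lhs - rhs), eps, 0, 1).removeO()
print(remainder)  # 0
```

The shift by $1/6$ is precisely the finite contribution generated by the factor of $d$ from the gamma algebra hitting the $1/\epsilon$ pole of $\tilde{\mathcal{I}}[q^4]^4_t$.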
\subsection{MSSM threshold correction to the quartic Higgs coupling}
\label{sec:lambdacalc}
As a first nontrivial application and as a check, we reproduce the one-loop
threshold correction to the quartic Higgs coupling $\lambda$ obtained when
matching the MSSM to the SM \cite{Bagnaschi:2014rsa} in the unbroken phase. As
discussed in \cite{Bagnaschi:2014rsa} there are several contributions
of distinct origins. The scalar contribution
$\Delta \lambda^{1\ensuremath{\ell},\phi}$ arises from interactions of the SM-like
Higgs with heavy Higgs bosons, squarks and sleptons, and the relevant
interaction Lagrangian is given by
\begin{align}
\mathcal{L}_{\phi} ={}& - \frac{g_t^2}{2} h^2 (\st{L}^* \st{L} + \st{R}^*\st{R})-\frac{1}{\sqrt{2}} g_t X_t h (\st{L}^* \st{R} + \st{L}\st{R}^*) \nonumber \\ & -\frac{1}{8} c_{2\beta} h^2\sum_{i} \left[\left(g_2^2 - \frac{g_1^2}{5}\right) \tilde{u}^* _{Li} \tilde{u} _{Li}+ \frac{4}{5} g_1^2 \tilde{u}_{Ri}^* \tilde{u}_{Ri}- \left(g_2^2 + \frac{g_1^2}{5}\right) \tilde{d}_{Li}^* \tilde{d}_{Li}- \frac{2}{5} g_1^2 \tilde{d}_{Ri}^* \tilde{d}_{Ri}\right]
\nonumber \\ &-\frac{1}{8} c_{2\beta} h^2\sum_{i} \left[\left(g_2^2 + g_1^2\frac{3}{5}\right) \tilde{\nu}^* _{Li} \tilde{\nu} _{Li}- \left(g_2^2 - g_1^2\frac{3}{5}\right) \tilde{e}_{Li}^* \tilde{e}_{Li}- \frac{6}{5} g_1^2 \tilde{e}_{Ri}^* \tilde{e}_{Ri}\right]
\nonumber \\ & +\frac{1}{16} c_{2\beta}^2 \left(\frac{3}{5} g_1^2 + g_2^2\right) h^2 A^2- \frac{1}{8} \left((1 + s_{2\beta}^2) g_2^2 - \frac{3}{5} g_1^2 c_{2\beta}^2\right) h^2 H^{-} H^{+} \nonumber \\ & - \frac{1}{16} \left(\frac{3}{5} g_1^2 + g_2^2\right) (3 s_{2\beta}^2 - 1) h^2 H^2- \frac{1}{8} \left(\frac{3}{5} g_1^2 + g_2^2\right) s_{2\beta} c_{2\beta} h^3 H \nonumber \\ & + \frac{1}{8} \left(\frac{3}{5} g_1^2 + g_2^2\right) s_{2\beta} c_{2\beta} h^2 (G^{-} H^{+} + H^{-} G^{+})+ \frac{1}{8} \left(\frac{3}{5} g_1^2 + g_2^2\right) s_{2\beta} c_{2\beta} h^2 G^0 A.
\end{align}
Here $g_1$ and $g_2$ are the GUT-normalized electroweak gauge
couplings, $X_t$ is the stop mixing parameter, and $g_t = y_t s_\beta$
with $y_t$ being the MSSM top Yukawa coupling and $s_\beta=\sin (\beta)$. The three generations
of left- and right-handed squarks and sleptons are denoted as
$\tilde{u}_{Li}$, $\tilde{u}_{Ri}$, $\tilde{d}_{Li}$, $\tilde{d}_{Ri}$, $\tilde{e}_{Li}$,
$\tilde{e}_{Ri}$, $\tilde{\nu}_{Li}$ ($i=1,2,3$), respectively, where
$\st{L} \equiv \tilde{u}_{L3}$ and $\st{R} \equiv \tilde{u}_{R3}$ are the left-
and right-handed stops. Furthermore we have defined
$h=\sqrt{2}\, \ensuremath{\Re\mathfrak{e}} (\mathcal{H}^0)$, where $\mathcal{H}^0$ is the
neutral component of the SM-like Higgs doublet $\mathcal{H}$ related
to the Higgs doublets $H_u$ and $H_d$ through
\begin{align}
\mathcal{H} = - c_\beta \varepsilon H^{*}_d + s_\beta H_u,
\label{eq:rot_H}
\end{align}
where $\varepsilon$ is the antisymmetric tensor with
$\varepsilon_{12}=1$ and $c_\beta = \cos(\beta)$, $s_{2\beta} = \sin(2\beta)$ and
$c_{2\beta} = \cos(2\beta)$. The fields $G^0$ and $G^{\pm}$ are
Goldstone bosons arising from the same Higgs doublet. The heavy Higgs
bosons $H$, $A$ and $H^\pm$ arise from the heavy doublet
$\mathcal{A}$, which is related to the MSSM doublets through
\begin{align}
\mathcal{A} = s_\beta \varepsilon H^{*}_d + c_\beta H_u.
\label{eq:rot_A}
\end{align}
Note that, since we work in the unbroken phase, $\beta$ should not be
regarded as a ratio of vacuum expectation values, but as the
fine-tuned mixing angle which rotates the two MSSM Higgs doublets
$H_u$ and $H_d$ into $\mathcal{H}$ and $\mathcal{A}$ as given in
\eqref{eq:rot_H}--\eqref{eq:rot_A} \cite{Bagnaschi:2014rsa}.
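Equations \eqref{eq:rot_H} and \eqref{eq:rot_A} define an orthogonal rotation of the two doublets, so the inverse relations follow immediately; this can be confirmed with a trivial SymPy check:

```python
import sympy as sp

beta = sp.symbols('beta', real=True)
cb, sb = sp.cos(beta), sp.sin(beta)

# Rotation from (eps H_d^*, H_u) to (H, A), cf. eqs. (rot_H) and (rot_A).
R = sp.Matrix([[-cb, sb],
               [ sb, cb]])

# The rotation is orthogonal, so the rotated doublets inherit canonical
# kinetic terms and the transformation is trivially inverted.
print(sp.simplify(R * R.T))  # identity matrix
```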
The fermionic contribution $\Delta \lambda ^{1\ensuremath{\ell},\chi}$ to the
threshold correction of $\lambda$ originates from interactions of the
Higgs boson with charginos $\tilde{\chi}^{+}_i$ ($i=1,2$) and
neutralinos $\tilde{\chi}^0_i$ ($i=1,\ldots,4$) described by the
interaction Lagrangian
\begin{align}
\mathcal{L}_\chi ={}& - \frac{g_2}{\sqrt{2}} h c_\beta (\overline{\tilde{\chi}^{+}_1} P_R \tilde{\chi}^{+} _2 + \overline{\tilde{\chi}^{+}_2} P_L \tilde{\chi}^{+} _1)- \frac{g_2}{\sqrt{2}} h s_\beta (\overline{\tilde{\chi}^{+}_2} P_R \tilde{\chi}^{+} _1 + \overline{\tilde{\chi}^{+}_1} P_L \tilde{\chi}^{+}_2 )\nonumber
\\ & +i \frac{g_Y}{2\sqrt{2}} (c_\beta - s_\beta) h \overline{\tilde{\chi}^0_1} \gamma^5 \tilde{\chi}^0_3-\frac{g_Y}{2\sqrt{2}} (c_\beta + s_\beta)h \overline{\tilde{\chi}^0_1} \tilde{\chi}^0_4 \nonumber
\\ & -i \frac{g_2}{2\sqrt{2}} (c_\beta - s_\beta) h \overline{\tilde{\chi}^0_2} \gamma^5 \tilde{\chi}^0_3+ \frac{g_2}{2\sqrt{2}} (c_\beta + s_\beta) h \overline{\tilde{\chi}^0_2} \tilde{\chi}^0_4 ,
\end{align}
where $\overline{\tilde{\chi}^0_i} = (\tilde{\chi}^0_i)^T \mathcal{C}$ and
$g_Y = \sqrt{3/5}\, g_1$.
To calculate the one-loop threshold correction for $\lambda$, the
following contributions with purely scalar and purely fermionic
operators from our generic UOLEA \eqref{eq:UOLEA_final} are relevant,
\begin{align}
\frac{1}{\kappa} \mathcal{L}_\ensuremath{\text{EFT}}\xspace^{1\ensuremath{\ell}} = \tr \Bigg\lbrace & \frac{1}{2} \tilde{\mathcal{I}} ^{1} _{i} (\mathbf{X}_{\Phi \Phi})_{ii}+\frac{1}{2} \tilde{\mathcal{I}} [q^2]^{22} _{ij} [P_\mu, (\mathbf{X}_{\Phi \Phi})_{ij}] [P^\mu, (\mathbf{X}_{\Phi \Phi})_{ji}]\nonumber \\ & +\frac{1}{4} \tilde{\mathcal{I}} ^{11} _{ij} (\mathbf{X}_{\Phi \Phi})_{ij}(\mathbf{X}_{\Phi \Phi})_{ji}+\frac{1}{6} \tilde{\mathcal{I}}^{111} _{ijk}(\mathbf{X}_{\Phi \Phi})_{ij}(\mathbf{X}_{\Phi \Phi})_{jk}(\mathbf{X}_{\Phi \Phi})_{ki}
\nonumber \\ &+\frac{1}{8} \tilde{\mathcal{I}} ^{1111} _{ijkl} (\mathbf{X}_{\Phi \Phi})_{ij}(\mathbf{X}_{\Phi \Phi})_{jk} (\mathbf{X}_{\Phi \Phi})_{kl} (\mathbf{X}_{\Phi \Phi})_{li} + \frac{1}{2} \tilde{\mathcal{I}} ^{1} _{i} (\mathbf{X}_{\Phi \phi})_{ij}(\mathbf{X}_{\phi \Phi})_{ji} \nonumber \\ & -\frac{1}{8} m_{\Xi i}m_{\Xi j} m_{\Xi k} m_{\Xi l} \tilde{\mathcal{I}} ^{1111} _{ijkl}(\mathbf{X}_{\Xi \Xi})_{ij}(\mathbf{X}_{\Xi \Xi})_{jk} (\mathbf{X}_{\Xi \Xi})_{kl} (\mathbf{X}_{\Xi \Xi})_{li}
\nonumber \\ & -\frac{1}{2} m_{\Xi i}m_{\Xi j} \tilde{\mathcal{I}} [q^2] ^{1111} _{ijkl}(\mathbf{X}_{\Xi \Xi})_{ij}(\mathbf{X}_{\Xi \Xi})_{jk}\gamma^\mu (\mathbf{X}_{\Xi \Xi})_{kl} \gamma_\mu (\mathbf{X}_{\Xi \Xi})_{li}
\nonumber \\ & -\frac{1}{4} m_{\Xi i}m_{\Xi k} \tilde{\mathcal{I}} [q^2] ^{1111} _{ijkl}(\mathbf{X}_{\Xi \Xi})_{ij}\gamma^\mu(\mathbf{X}_{\Xi \Xi})_{jk} (\mathbf{X}_{\Xi \Xi})_{kl} \gamma_\mu (\mathbf{X}_{\Xi \Xi})_{li}
\nonumber \\ & -\frac{1}{8} g_{\mu \nu \rho \sigma} \tilde{\mathcal{I}} [q^4] ^{1111} _{ijkl}\gamma^\mu (\mathbf{X}_{\Xi \Xi})_{ij}\gamma^\nu(\mathbf{X}_{\Xi \Xi})_{jk} \gamma^\rho (\mathbf{X}_{\Xi \Xi})_{kl} \gamma^\sigma (\mathbf{X}_{\Xi \Xi})_{li}
\nonumber \\ & +\frac{1}{4} m_{\Xi i} m_{\Xi j}^3 \tilde{\mathcal{I}} ^{13} _{ij} [P_\mu,(\mathbf{X}_{\Xi \Xi})_{ij}][P^\mu,(\mathbf{X}_{\Xi \Xi})_{ji}]
\nonumber \\ & -\frac{1}{2} \tilde{\mathcal{I}}[q^4] ^{22} _{ij} \gamma^\nu [P_\mu,(\mathbf{X}_{\Xi \Xi})_{ij}]\gamma_\nu[P^\mu,(\mathbf{X}_{\Xi \Xi})_{ji}]
\nonumber \\ & - \tilde{\mathcal{I}}[q^4] ^{22} _{ij} \gamma^\nu [P_\nu,(\mathbf{X}_{\Xi \Xi})_{ij}]\gamma_\mu[P^\mu,(\mathbf{X}_{\Xi \Xi})_{ji}]
\Bigg\rbrace,
\label{eq:UOLEAToLambda}
\end{align}
where $\kappa=1/(4\pi)^2$. The operators containing covariant
derivatives can be removed by a field-strength renormalization of the
Higgs field to canonically normalize the kinetic term. This field
renormalization propagates into every Higgs coupling that has a
non-vanishing tree-level contribution and hence also into the quartic
coupling.
Next, we compute the $\mathbf{X}_{AB}$ matrices as the
second derivatives of the Lagrangian with respect to the different
kinds of fields. We start with
\begin{align}
\mathbf{X}_{\Phi \Phi}=\begin{pmatrix}
X_{\Sigma ^* \Sigma} && X_{\Sigma ^{*} \Sigma ^{*}} && X_{\Sigma^* \Theta} \\
X_{\Sigma \Sigma} && X_{\Sigma \Sigma ^{*}} && X_{\Sigma \Theta} \\
X_{\Theta \Sigma} && X_{\Theta \Sigma ^*} && X_{\Theta \Theta}\end{pmatrix}
\label{eq: heavy-scalar-heavy-scalar}
\end{align}
and define
\begin{align}
\Sigma &=
\begin{pmatrix}
\tilde{u}_{Li} & \tilde{u}_{Ri} & \tilde{d}_{Li} & \tilde{d}_{Ri} & \tilde{e}_{Li} & \tilde{e}_{Ri} & \tilde{\nu}_{Li} & H^{+}
\end{pmatrix}^T, &
\Theta &= \begin{pmatrix}
A & H
\end{pmatrix}^T ,
\end{align}
where $i=1,2,3$ denotes the generation index. The non-vanishing
derivatives with respect to two heavy scalar fields read
\begin{align}
X_{\tilde{u}_{Li}^* \tilde{u}_{Lj}}&=X_{\tilde{u}_{Li} \tilde{u}_{Lj}^*}=\frac{1}{8}c_{2\beta}h^2 \delta_{ij}\left(g_2^2-\frac{1}{5}g_1^2\right)+\delta_{3i}\delta_{3j}\frac{g_t^2}{2}h^2, \\
X_{\tilde{u}_{Ri}^* \tilde{u}_{Rj}}&=X_{\tilde{u}_{Ri} \tilde{u}_{Rj}^*}=\frac{1}{10}c_{2\beta}h^2 \delta_{ij}g_1^2+\delta_{3i}\delta_{3j}\frac{g_t^2}{2}h^2, \\
X_{\tilde{d}_{Li}^* \tilde{d}_{Lj}}&=X_{\tilde{d}_{Li} \tilde{d}_{Lj}^*}=-\frac{1}{8}c_{2\beta}h^2 \delta_{ij}\left(g_2^2+\frac{1}{5}g_1^2\right), \\
X_{\tilde{d}_{Ri}^* \tilde{d}_{Rj}}&=X_{\tilde{d}_{Ri} \tilde{d}_{Rj}^*}=\frac{1}{20}c_{2\beta}h^2 \delta_{ij}g_1^2, \\
X_{\tilde{e}_{Li}^* \tilde{e}_{Lj}}&=X_{\tilde{e}_{Li} \tilde{e}_{Lj}^*}=\frac{1}{8}c_{2\beta}h^2 \delta_{ij}\left(g_2^2-\frac{3}{5}g_1^2\right), \\
X_{\tilde{e}_{Ri}^* \tilde{e}_{Rj}}&=X_{\tilde{e}_{Ri} \tilde{e}_{Rj}^*}=-\frac{1}{20}c_{2\beta}h^2 \delta_{ij}g_1^2, \\
X_{\tilde{\nu}_{Li}^* \tilde{\nu}_{Lj}}&=X_{\tilde{\nu}_{Li} \tilde{\nu}_{Lj}^*}=\frac{1}{8}c_{2\beta}h^2 \delta_{ij}\left(g_2^2+\frac{3}{5}g_1^2\right), \\
X_{H^+ H^-}&=X_{H^- H^+}=\frac{1}{8}h^2 \left[(1+s_{2\beta}^2)g_2^2-\frac{3}{5}g_1^2 c_{2\beta}^2\right], \\
X_{AA}&=-\frac{1}{16}c_{2\beta}^2\left(\frac{3}{5}g_1^2+g_2^2\right)h^2, \\
X_{HH}&=\frac{1}{16}(2s_{2\beta}^2-1)\left(\frac{3}{5}g_1^2+g_2^2\right)h^2, \\
X_{\tilde{u}_{Li}^* \tilde{u}_{Rj}}&=X_{\tilde{u}_{Li} \tilde{u}_{Rj}^*}=\frac{1}{\sqrt{2}}\delta_{3i}\delta_{3j}g_t X_t h.
\end{align}
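These entries can be cross-checked by direct differentiation of $\mathcal{L}_\phi$; for instance, with the convention $X_{AB} = -\delta^2\mathcal{L}/\delta A\,\delta B$ (assumed here for the sketch, consistent with the signs above), the charged-Higgs entry follows as:

```python
import sympy as sp

h, Hp, Hm = sp.symbols('h Hp Hm', real=True)
g1, g2, s2b, c2b = sp.symbols('g1 g2 s2b c2b', real=True)

# Relevant term of the interaction Lagrangian L_phi:
#   -(1/8) [(1 + s_{2b}^2) g2^2 - (3/5) g1^2 c_{2b}^2] h^2 H^- H^+
L_term = -sp.Rational(1, 8) * ((1 + s2b**2) * g2**2
                               - sp.Rational(3, 5) * g1**2 * c2b**2) * h**2 * Hm * Hp

# Assumed convention for this sketch: X_{AB} = -d^2 L / (dA dB).
X_HpHm = -sp.diff(L_term, Hp, Hm)

expected = sp.Rational(1, 8) * h**2 * ((1 + s2b**2) * g2**2
                                       - sp.Rational(3, 5) * g1**2 * c2b**2)
print(sp.simplify(X_HpHm - expected))  # 0
```

The remaining entries listed above can be reproduced in the same way from the corresponding terms of $\mathcal{L}_\phi$.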
Given these derivatives we find that $\mathbf{X}_{\Phi \Phi}$ is
block-diagonal with the blocks being
\begin{align}
X_{\Sigma^* \Sigma}&=\begin{pmatrix}
X_{\tilde{u}_{Li}^* \tilde{u}_{Lj}} & X_{\tilde{u}_{Li}^* \tilde{u}_{Rj}} & \mathbf{0}_{1\times 6} \\
X_{\tilde{u}_{Ri}^* \tilde{u}_{Lj}} & X_{\tilde{u}_{Ri}^* \tilde{u}_{Rj}} & \mathbf{0}_{1\times 6} \\
\mathbf{0}_{6\times 1} & \mathbf{0}_{6\times 1} & X_{\Pi^* \Pi}
\end{pmatrix}, \\
X_{\Pi^* \Pi}&=\mathop{\rm diag}(X_{\tilde{d}_{Li}^* \tilde{d}_{Lj}},X_{\tilde{d}_{Ri}^* \tilde{d}_{Rj}},X_{\tilde{e}_{Li}^* \tilde{e}_{Lj}},X_{\tilde{e}_{Ri}^* \tilde{e}_{Rj}},X_{\tilde{\nu}_{Li}^* \tilde{\nu}_{Lj}},X_{H^+ H^-}), \\
X_{\Sigma \Sigma^*}&=\begin{pmatrix}
X_{\tilde{u}_{Li} \tilde{u}_{Lj}^*} & X_{\tilde{u}_{Li} \tilde{u}_{Rj}^*} & \mathbf{0}_{1\times 6} \\
X_{\tilde{u}_{Ri} \tilde{u}_{Lj}^*} & X_{\tilde{u}_{Ri} \tilde{u}_{Rj}^*} & \mathbf{0}_{1\times 6} \\
\mathbf{0}_{6\times 1} & \mathbf{0}_{6\times 1} & X_{\Pi \Pi^*}
\end{pmatrix}, \\
X_{\Pi \Pi^*}&=\mathop{\rm diag}(X_{\tilde{d}_{Li} \tilde{d}_{Lj}^*},X_{\tilde{d}_{Ri} \tilde{d}_{Rj}^*},X_{\tilde{e}_{Li} \tilde{e}_{Lj}^*},X_{\tilde{e}_{Ri} \tilde{e}_{Rj}^*},X_{\tilde{\nu}_{Li} \tilde{\nu}_{Lj}^*},X_{H^- H^+}), \\
X_{\Theta \Theta}&=\mathop{\rm diag}(X_{AA},X_{HH}),
\end{align}
where $\mathbf{0}_{m\times n}$ denotes the $m \times n$ zero
matrix. We next calculate $\mathbf{X}_{\phi \Phi}$ and
$\mathbf{X}_{\Phi\phi}$, which contain derivatives with respect to one
heavy and one light scalar field. We define the light scalar field
multiplets as
\begin{align}
\sigma &= (G^+), &
\theta &=
\begin{pmatrix}
h & G^0
\end{pmatrix}^T.
\end{align}
As discussed in \secref{sec: intro} the derivatives w.r.t.\ the fields
are evaluated at the background field configurations, and the heavy
background fields are expressed in terms of the light ones using a
local operator expansion.\footnote{An explicit example is given in
\secref{sec: gluinoOut} in the treatment of dimension 5
operators.} This corresponds to an expansion in $\Box/M^2$ for a
heavy scalar field of mass $M$ and hence it leads to contributions
suppressed by at least $1/M^2$. Since we are not interested in these
suppressed contributions here, we only consider derivatives of the
Lagrangian which exclusively contain light background fields and set
all other derivatives to zero. The non-vanishing derivatives are given
by
\begin{align}
X_{Hh} &= X_{hH}=\frac{3}{8}\left(\frac{3}{5}g_1^2+g_2^2\right)s_{2\beta}c_{2\beta}h^2 ,\\
X_{AG^0} &= X_{G^0A}=-\frac{1}{8}\left(\frac{3}{5}g_1^2+g_2^2\right)s_{2\beta}c_{2\beta}h^2, \\
X_{H^{+}G^{-}} &= X_{H^{-}G^{+}}=-\frac{1}{8}\left(\frac{3}{5}g_1^2+g_2^2\right)s_{2\beta}c_{2\beta}h^2.
\end{align}
We then find that $\mathbf{X}_{\Phi \phi}$ is block-diagonal with the blocks being
\begin{align}
X_{\Sigma^* \sigma}&=\begin{pmatrix} \mathbf{0}_{7 \times 1} \\ X_{H^{-} G^{+}}
\end{pmatrix}, \\
X_{\Sigma \sigma^*}&=\begin{pmatrix} \mathbf{0}_{7 \times 1} \\ X_{H^{+} G^{-}}
\end{pmatrix}, \\
X_{\Theta \theta}&=\begin{pmatrix} 0 & X_{A G^0}\\
X_{Hh} & 0
\end{pmatrix}.
\end{align}
Similarly, $\mathbf{X}_{\phi \Phi}$ is block-diagonal with diagonal entries
\begin{align}
X_{\sigma^* \Sigma}&=\begin{pmatrix} \mathbf{0}_{1 \times 7} & X_{G^{-} H^{+}}
\end{pmatrix}, \\
X_{\sigma \Sigma^*}&=\begin{pmatrix} \mathbf{0}_{1 \times 7} & X_{G^{+} H^{-}}
\end{pmatrix}, \\
X_{\theta \Theta}&=\begin{pmatrix} 0 & X_{h H}\\
X_{G^0 A} & 0
\end{pmatrix}.
\end{align}
Finally, we need the derivatives with respect to two heavy fermions to
construct the matrix $\mathbf{X}_{\Xi \Xi}$. We define
\begin{align}
\Omega &= \begin{pmatrix}
\tilde{\chi}^+ _1 & \tilde{\chi}^+ _2
\end{pmatrix}^T, & \Lambda &= \begin{pmatrix}
\tilde{\chi}^0 _1 & \tilde{\chi}^0 _2 & \tilde{\chi}^0 _3 & \tilde{\chi}^0 _4
\end{pmatrix}^T
\end{align}
and the matrix $\mathbf{X}_{\Xi \Xi}$ is again block-diagonal with the
non-vanishing entries
\begin{align}
X_{\bar{\Omega} \Omega}&=\mathcal{C} ^{-1} X^T_{\Omega \bar{\Omega}} \mathcal{C}^{-1}=-\frac{g_2}{\sqrt{2}}h\begin{pmatrix}
0 & c_\beta P_R+ s_\beta P_L \\
c_\beta P_L+s_\beta P_R & 0
\end{pmatrix}, \\
\mathcal{C}^{-1} X_{\Lambda \Lambda}&=\frac{h}{2\sqrt{2}}\begin{pmatrix}
0 & 0 & i g_Y(c_\beta-s_\beta)\gamma^5 & -g_Y(c_\beta+s_\beta) \\
0 & 0 & -ig_2(c_\beta-s_\beta)\gamma^5 & g_2(c_\beta+s_\beta) \\
i g_Y(c_\beta-s_\beta)\gamma^5 & -ig_2(c_\beta-s_\beta)\gamma^5 & 0 & 0 \\
-g_Y(c_\beta+s_\beta) & g_2(c_\beta+s_\beta) & 0 & 0
\end{pmatrix},
\end{align}
where the relations of \appref{sec: spinor algebra} were used to
simplify the expressions. Note that in the calculation of
$X_{\Lambda \Lambda}$, for a given Majorana fermion $\lambda$, the two
fields $\bar{\lambda}$ and $\lambda$ are not independent but are
related via $\bar{\lambda}=\lambda^T \mathcal{C}$. Inserting all of the
derivatives into \eqref{eq:UOLEAToLambda}, summing over all indices
and canonically normalizing the kinetic term for the SM-like Higgs
boson as
\begin{align}
h ={}& \left(1 - \frac{1}{2} \delta Z_h\right) \hat{h}, \\
\delta Z_h ={}& -6 g_t^2 X_t^2 \tilde{\mathcal{I}}[q^2]_{\tilde{q}\tilde{u}}^{22}+\frac{s_{2 \beta}}{2} \mu \left(g^2_Y M_1 \mu^2 \tilde{\mathcal{I}} ^{13}_{1\mu}+g^2_Y M_1^3 \tilde{\mathcal{I}} ^{31}_{1\mu}-3g^2_2 M_2 \mu^2 \tilde{\mathcal{I}} ^{13}_{2\mu}-3g^2_2 M_2^3 \tilde{\mathcal{I}} ^{31}_{2\mu}\right)
\nonumber \\& +2(2+d)\left(-g_Y^2 \tilde{\mathcal{I}}[q^4]^{22}_{1\mu}+3g_2^2 \tilde{\mathcal{I}}[q^4]^{22}_{2\mu}\right) ,
\end{align}
one finds the following effective Lagrangian
\begin{align}
\mathcal{L}_{\ensuremath{\text{EFT}}\xspace}^{1\ensuremath{\ell}} = \frac{1}{2}(\partial \hat{h})^2
- \frac{\lambda}{8} \hat{h}^4 + \cdots
\end{align}
with
\begin{align}
\lambda &= \frac{1}{4} \left( \frac{3}{5} g_1^2 + g_2^2 \right) c_{2\beta}^2
+ \kappa \Delta\lambda^{1\ensuremath{\ell}} , \\
\Delta\lambda^{1\ensuremath{\ell}} &= \Delta\lambda^{1\ensuremath{\ell},\text{reg}}
+ \Delta\lambda^{1\ensuremath{\ell},\phi} + \Delta\lambda^{1\ensuremath{\ell},\chi},
\end{align}
and
\begin{align}
\Delta \lambda^{1\ensuremath{\ell},\phi} ={}&
g_t^4 \left[
-3 X_t^4 \tilde{\mathcal{I}}_{\tilde{q}\sq\tilde{u}\su}^{1111}
-6 X_t^2 \left(\tilde{\mathcal{I}}_{\tilde{q}\sq\tilde{u}}^{111} + \tilde{\mathcal{I}}_{\tilde{q}\tilde{u}\su}^{111}\right)
-3 \left(\tilde{\mathcal{I}}_{\tilde{q}\sq}^{11} + \tilde{\mathcal{I}}_{\tilde{u}\su}^{11}\right)
\right] \nonumber \\
&+\frac{3}{10} g_t^2 c_{2\beta} \Big\{ X_t^2 \left[
2 c_{2\beta} \left(3 g_1^2+5g_2^2\right) \tilde{\mathcal{I}}[q^2]_{\tilde{q}\tilde{u}}^{22}
+\left(g_1^2-5 g_2^2\right) \tilde{\mathcal{I}}_{\tilde{q}\sq\tilde{u}}^{111}
-4 g_1^2 \tilde{\mathcal{I}}_{\tilde{q}\tilde{u}\su}^{111}\right] \nonumber \\
&~~~~~~~~~~~~~~~~ + \left(g_1^2-5 g_2^2\right) \tilde{\mathcal{I}}_{\tilde{q}\sq}^{11}
-4 g_1^2 \tilde{\mathcal{I}}_{\tilde{u}\su}^{11}\Big\} \nonumber \\
& -\frac{c_{2\beta}^2}{200} \sum_{i=1}^3 \Big[
3 \left(g_1^4+25 g_2^4\right) \tilde{\mathcal{I}}_{\tilde{q}_i\tilde{q}_i}^{11}
+24 g_1^4 \tilde{\mathcal{I}}_{\tilde{u}_i\tilde{u}_i}^{11}
+6 g_1^4 \tilde{\mathcal{I}}_{\tilde{d}_i\tilde{d}_i}^{11} \nonumber \\
&~~~~~~~~~~~~~~~ +\left(9 g_1^4+25 g_2^4\right) \tilde{\mathcal{I}}_{\tilde{l}_i\tilde{l}_i}^{11}
+18 g_1^4 \tilde{\mathcal{I}}_{\tilde{e}_i\tilde{e}_i}^{11}
\Big] \nonumber \\
&+ \frac{1}{200} \Big\{6 c_{2\beta}^2 \left(c_{2\beta}^2-1\right) \left(3
g_1^2+5 g_2^2\right)^2 \tilde{\mathcal{I}}_{A0}^{11} - \Big[9 \left(3
c_{2\beta}^4-3 c_{2\beta}^2+1\right) g_1^4 \nonumber \\
&~~~~~~~~~~ +30 \left(3 c_{2\beta}^4-4
c_{2\beta}^2+1\right) g_1^2 g_2^2+25 \left(3 c_{2\beta}^4-5
c_{2\beta}^2+3\right) g_2^4\Big]
\tilde{\mathcal{I}}_{AA}^{11}\Big\},\\
\Delta\lambda^{1\ensuremath{\ell},\chi} ={}& -\frac{1}{4}
\Big\{-d \big(2 g_Y^4 M_1^2 \tilde{\mathcal{I}}[q^2] ^{22} _{1 \mu} +
2 g_2^4 M_2^2 \tilde{\mathcal{I}}[q^2] ^{22} _{2 \mu} +
g_Y^4 \mu^2 \tilde{\mathcal{I}}[q^2] ^{22} _{1 \mu} \nonumber \\ &\qquad ~~~~~~~~ -
g_Y^4 \mu^2 c_{4 \beta} \tilde{\mathcal{I}}[q^2] ^{22} _{1 \mu} +
g_2^4 \mu^2 \tilde{\mathcal{I}}[q^2] ^{22} _{2 \mu}
- g_2^4 \mu^2 c_{4 \beta} \tilde{\mathcal{I}}[q^2] ^{22} _{2 \mu} \nonumber \\ &\qquad ~~~~~~~~ +
4 g_Y^2 g_2^2 M_1 M_2 \tilde{\mathcal{I}}[q^2] ^{112} _{1 2 \mu} +
2 g_Y^2 g_2^2 \mu^2 \tilde{\mathcal{I}}[q^2] ^{112} _{1 2 \mu} -
2 g_Y^2 g_2^2 \mu^2 c_{4 \beta} \tilde{\mathcal{I}}[q^2] ^{112} _{1 2 \mu}\big) \nonumber \\ &\qquad~~ -
d (2 + d) \big(2 g_Y^4 \tilde{\mathcal{I}}[q^4] ^{22} _{1 \mu} +
2 g_2^4 \tilde{\mathcal{I}}[q^4] ^{22} _{2 \mu} +
4 g_Y^2 g_2^2 \tilde{\mathcal{I}}[q^4] ^{112} _{1 2 \mu}\big) \nonumber \\ &\qquad~~
- g_2^4 \big[2d (2 + d) (3 + c_{4 \beta}) \tilde{\mathcal{I}}[q^4] ^{22} _{2 \mu}
+ 16 c_{\beta} s_{\beta} (d M_2 \tilde{\mathcal{I}}[q^2] ^{22} _{2 \mu} (\mu + M_2 c_{\beta} s_{\beta})
\nonumber \\ &\qquad ~~~~~~~~~ +
\mu \{M_2^2 \mu c_{\beta} \tilde{\mathcal{I}} ^{22} _{2 \mu} s_{\beta} +
d \tilde{\mathcal{I}}[q^2] ^{22} _{2 \mu} (M_2 + \mu c_{\beta} s_{\beta})\})\big]
\nonumber \\ &\qquad~~ -
4 d \mu \big(2 g_Y^4 M_1 \tilde{\mathcal{I}}[q^2] ^{22} _{1 \mu} +
2 g_2^4 M_2 \tilde{\mathcal{I}}[q^2] ^{22} _{2 \mu} +
2 g_Y^2 g_2^2 M_1 \tilde{\mathcal{I}}[q^2] ^{112} _{1 2 \mu} \nonumber \\ &\qquad ~~~~~~~~~~~ +
2 g_Y^2 g_2^2 M_2 \tilde{\mathcal{I}}[q^2] ^{112} _{1 2 \mu}\big) s_{2 \beta}
\nonumber \\ &\qquad~~ -
2 \mu^2 \big(g_Y^4 M_1^2 \tilde{\mathcal{I}} ^{22} _{1 \mu} +
g_2^2 M_2 (g_2^2 M_2 \tilde{\mathcal{I}} ^{22} _{2 \mu} +
2 g_Y^2 M_1 \tilde{\mathcal{I}} ^{112} _{12 \mu})\big) s_{2 \beta}^2 \nonumber \\ &\qquad~~ -
2 g_2^2 (g_Y^2 + g_2^2) c_{2 \beta}^2 \big(-4 (2 + d) \tilde{\mathcal{I}}[q^4] ^{22} _{2 \mu} +
M_2 \mu (\mu^2 \tilde{\mathcal{I}} ^{13} _{2 \mu} +
M_2^2 \tilde{\mathcal{I}} ^{31} _{2 \mu}) s_{2 \beta}\big)\nonumber \\ &\qquad~~ - (g_Y^2 +
g_2^2) c_{2 \beta}^2 \big(-4 (2 + d) g_Y^2 \tilde{\mathcal{I}}[q^4] ^{22} _{1 \mu} -
4 (2 + d) g_2^2 \tilde{\mathcal{I}}[q^4] ^{22} _{2 \mu} \nonumber \\ &\qquad ~~~~~~ +
\mu \{g_Y^2 M_1 \mu^2 \tilde{\mathcal{I}} ^{13} _{1 \mu} +
g_Y^2 M_1^3 \tilde{\mathcal{I}} ^{31} _{1 \mu} +
g_2^2 M_2 (\mu^2 \tilde{\mathcal{I}} ^{13} _{2 \mu} +
M_2^2 \tilde{\mathcal{I}} ^{31} _{2 \mu})\} s_{2 \beta}\big)\Big\}.
\end{align}
The subscripts $1$ and $2$ of the loop functions are
shorthand for $M_1$ and $M_2$, respectively. The terms involving
$d = 4 - \epsilon$ originate from contractions of gamma matrices and
metric tensors, see \appref{sec:DREG_DRED}. Note that
$\lambda$ is expressed entirely in terms of the MSSM gauge couplings,
in contrast to \cite{Bagnaschi:2014rsa}.
It is sensible to regularize the MSSM using dimensional reduction
(DRED) \cite{Siegel:1979wq}, whereas the SM is more naturally
regularized in dimensional regularization (DREG)
\cite{Bollini:1972ui,Ashmore:1972uj,Cicuta:1972jf,tHooft:1972tcz,tHooft:1973mfk}.
Such a regularization scheme change leads to further contributions to
the threshold correction denoted by
$\Delta\lambda^{1\ensuremath{\ell},\text{reg}}$, which can be obtained using the
DRED--DREG regularization scheme translating operators presented in
\cite{Summ:2018oko}. This contribution originates from the operator
\begin{align}
\frac{1}{\kappa} \epsilon \mathcal{L}_{\ensuremath{\text{EFT}}\xspace,\epsilon}^{1\ensuremath{\ell}} =
\frac{1}{2} \tr \{\epsdim{X}^{\mu \nu}_{\epsilon \epsilon} \epsdim{X}_{\epsilon \epsilon \mu \nu} \},
\label{eq: epsilonconttolambda}
\end{align}
where on the r.h.s.\ $\epsilon$ denotes all epsilon scalars that couple to
the Higgs and
\begin{align}
\epsdim{X}^{\mu \nu}_{\epsilon \epsilon} = \epsdim{g}^\mu_\sigma \epsdim{g}^\nu_\rho \fourdim{X}^{\sigma\rho}_{\epsilon \epsilon}
\end{align}
is the projection of the $4$-dimensional
$\fourdim{X}^{\sigma\rho}_{\epsilon \epsilon}$ onto the
$\epsilon$-dimensional $Q\epsilon S$ space
\cite{Stockinger:2005gx,Summ:2018oko} with
$\epsdim{g}^{\mu\nu}\epsdim{g}_{\mu\nu} = \epsilon$, see
\appref{sec:DREG_DRED}.
In the MSSM the epsilon scalars have the following couplings to the
SM-like doublet $\mathcal{H}$,
\begin{align}
\mathcal{L}_{\epsilon \mathcal{H}}=\mathcal{H}^*_i \epsdim{g}_{\mu \nu}\left(g_2 ^2 T^a_{ij} T^b_{jl} a^{a\mu} a^{b\nu}+\sqrt{\frac{3}{5}} g_1 g_2 T^a_{il} a^{a\mu} b^\nu +\frac{3}{20}g_1^2 b^\mu b^\nu \delta_{il}\right)\mathcal{H}_l,
\end{align}
where the indices $i,j,l$ are $SU(2)_L$ indices of the fundamental
representation with the generators $T^a_{ij}$. The fields
$a^{a \mu}$ and $b^\mu$ denote the epsilon scalars
corresponding to $SU(2)_L$ and $U(1)_Y$, respectively. One obtains
the derivative
\begin{align}
\epsdim{X}^{\mu \nu}_{\epsilon \epsilon}=-\epsdim{g}^{\mu \nu} \begin{pmatrix}
\mathcal{H}^{*}_i g_2^2 \{T^a,T^b\}_{il} \mathcal{H}_l & \sqrt{\frac{3}{5}} g_1 g_2 \mathcal{H}^{*}_i T^a _{il} \mathcal{H}_l \\
\sqrt{\frac{3}{5}} g_1 g_2 \mathcal{H}^{*}_i T^a _{il} \mathcal{H}_l & \frac{3}{10} g_1^2 \mathcal{H}^{*}_i \mathcal{H}_i
\end{pmatrix}.
\end{align}
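The upper-left block involves the anticommutator $\{T^a, T^b\}$, which for the fundamental $SU(2)$ generators $T^a = \sigma^a/2$ reduces to $\tfrac{1}{2}\delta^{ab}\,\mathbf{1}_{2\times2}$. A minimal numerical sketch of this identity (Pauli-matrix conventions assumed):

```python
import numpy as np

# Fundamental SU(2)_L generators T^a = sigma^a / 2 (Pauli matrices).
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
T = [s / 2 for s in (s1, s2, s3)]

# {T^a, T^b} = (delta^{ab}/2) * 1_{2x2}, so the upper-left entry of the
# matrix above collapses to g_2^2 delta^{ab} |H|^2 / 2.
for a in range(3):
    for b in range(3):
        anti = T[a] @ T[b] + T[b] @ T[a]
        target = (0.5 if a == b else 0.0) * np.eye(2)
        assert np.allclose(anti, target)
print("ok")
```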
Inserting this into \eqref{eq: epsilonconttolambda} we obtain
\begin{align}
\Delta\lambda^{1\ensuremath{\ell},\text{reg}} &=
- \frac{9}{100}g_1^4 - \frac{3}{10} g_1^2 g_2^2
-\frac{3}{4} g_2^4.
\end{align}
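Obtaining this result requires the $SU(2)$ sums $\sum_a (\mathcal{H}^\dagger T^a \mathcal{H})^2 = |\mathcal{H}|^4/4$ and $\sum_{a,b} (\mathcal{H}^\dagger \{T^a,T^b\} \mathcal{H})^2 = 3|\mathcal{H}|^4/4$ when tracing the square of $\epsdim{X}_{\epsilon\epsilon}$. A quick numerical check of these two identities with a random doublet (a sketch only, not the full trace computation):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=2) + 1j * rng.normal(size=2)  # random SU(2) doublet

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
T = [s / 2 for s in (s1, s2, s3)]

H2 = (H.conj() @ H).real  # |H|^2

# sum_a (H^dagger T^a H)^2 = |H|^4 / 4  (SU(2) Fierz identity)
vals = [(H.conj() @ (Ta @ H)).real for Ta in T]
assert np.isclose(sum(v**2 for v in vals), H2**2 / 4)

# sum_{a,b} (H^dagger {T^a,T^b} H)^2 = 3 |H|^4 / 4  (from {T^a,T^b} = delta^{ab}/2)
sum_ab = sum((H.conj() @ ((T[a] @ T[b] + T[b] @ T[a]) @ H)).real**2
             for a in range(3) for b in range(3))
assert np.isclose(sum_ab, 3 * H2**2 / 4)
```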
We do not find the term proportional to $c_{2\beta}^2$ given in
\cite{Bagnaschi:2014rsa} since this term only arises once the
tree-level expression for $\lambda$ is expressed in terms of SM gauge
couplings, as opposed to MSSM parameters as in our case. Up to terms
arising from this conversion the one-loop threshold corrections agree
with the results of \cite{Bagnaschi:2014rsa}.
\subsection{Integrating out stops and the gluino from the MSSM}
\label{sec:matching_MSSM_to_SMEFT}
As a second nontrivial application we reproduce known threshold corrections from
the MSSM to the Standard Model Effective Field Theory (SMEFT) from
heavy stops and the gluino in the gaugeless limit ($g_1 = g_2 = 0$) in
the unbroken phase and for vanishing Yukawa couplings, except for
that of the top quark. In particular we reproduce the Wilson
coefficient of the higher-dimensional $\hat{h}^6$ operator calculated
in \cite{Drozd:2015rsp,Bagnaschi:2017xid}. Furthermore, this example
application again represents a scenario where a heavy Majorana
fermion is integrated out and the formalism introduced in
\secref{sec:calculation} must be carefully applied.
We consider the following part of the MSSM Lagrangian
\begin{align}
\begin{split}
\mathcal{L}_\ensuremath{\text{MSSM}}\xspace \supset{}&
|\partial\st{L}|^2 - m^2_{\sq} |\st{L}|^2
+ |\partial\st{R}|^2 - m^2_{\su} |\st{R}|^2
+ \frac{1}{2}(\gluino{a})^T \mathcal{C} (i\slashed{\partial} - m_{\gluino{}}) \gluino{a}\\
&
- \frac{y_t s_\beta}{\sqrt{2}} h \bar{t} t
- \frac{y_t^2 s_\beta^2}{2} h^2 \left(|\st{L}|^2 + |\st{R}|^2\right)
- \frac{y_t s_\beta X_t}{\sqrt{2}} h \left(\st{L}^* \st{R} + \text{h.c.}\right) \\
&
- \sqrt{2} g_3 \left[
\bar{t} P_R \gluino{a} T^a \st{L} - \bar{t} P_L \gluino{a} T^a \st{R}
+ \st{L}^* (\gluino{a})^T T^a \mathcal{C} P_L t - \st{R}^* (\gluino{a})^T T^a \mathcal{C} P_R t
\right] ,
\end{split}
\label{eq:LMSSM_stop}
\end{align}
where we use the same notation as in \secref{sec:lambdacalc} and $g_3$
is the strong gauge coupling. The top
quark is denoted as $t$ and is defined as a Dirac fermion built from
the upper component of the left-handed quark-doublet $q_L$ and the
right-handed top $t_R$. The gluino is denoted as $\gluino{a}$ and we
have used the relation
$\overline{\gluino{a}} = (\ccfield{(\gluino{a})})^T \mathcal{C} =
(\gluino{a})^T \mathcal{C}$ to express \eqref{eq:LMSSM_stop} in terms of the
gluino Majorana spinor $\gluino{a}$.
Upon integrating out the heavy stops and the gluino the Lagrangian of
the effective theory becomes
\begin{align}
\mathcal{L}_\ensuremath{\text{SMEFT}}\xspace \supset
- \frac{y_t s_\beta}{\sqrt{2}} h \bar{t} t
+ \mathcal{L}_\ensuremath{\text{SMEFT}}\xspace^\text{1\ensuremath{\ell}}.
\end{align}
In our limit the one-loop term $\mathcal{L}_\ensuremath{\text{SMEFT}}\xspace^\text{1\ensuremath{\ell}}$ receives
contributions from the following generic operators from
\eqref{eq:UOLEA_final}
\begin{align}
\frac{1}{\kappa} \mathcal{L}_\ensuremath{\text{EFT}}\xspace^{1\ensuremath{\ell}} \supset{}& \frac{1}{2} \tilde{\mathcal{I}} ^1 _i (\mathbf{X}_{\Phi \Phi})_{ii}+\frac{1}{4} \tilde{\mathcal{I}} ^{11} _{ik} (\mathbf{X}_{\Phi \Phi})_{ik} (\mathbf{X}_{\Phi \Phi})_{ki}+\frac{1}{6} \tilde{\mathcal{I}} ^{111} _{lik} (\mathbf{X}_{\Phi \Phi})_{ik} (\mathbf{X}_{\Phi \Phi})_{kl} (\mathbf{X}_{\Phi \Phi})_{li}\nonumber \\
& +\frac{1}{8} \tilde{\mathcal{I}} ^{1111} _{likn} (\mathbf{X}_{\Phi \Phi})_{ik} (\mathbf{X}_{\Phi \Phi})_{kl} (\mathbf{X}_{\Phi \Phi})_{ln} (\mathbf{X}_{\Phi \Phi})_{ni}\nonumber \\
& +\frac{1}{10} \tilde{\mathcal{I}} ^{11111} _{iklnp} (\mathbf{X}_{\Phi \Phi})_{ik} (\mathbf{X}_{\Phi \Phi})_{kl} (\mathbf{X}_{\Phi \Phi})_{ln} (\mathbf{X}_{\Phi \Phi})_{np} (\mathbf{X}_{\Phi \Phi})_{pi}\nonumber \\
& +\frac{1}{12} \tilde{\mathcal{I}} ^{111111} _{iklnpr} (\mathbf{X}_{\Phi \Phi})_{ik} (\mathbf{X}_{\Phi \Phi})_{kl} (\mathbf{X}_{\Phi \Phi})_{ln} (\mathbf{X}_{\Phi \Phi})_{np} (\mathbf{X}_{\Phi \Phi})_{pr} (\mathbf{X}_{\Phi \Phi})_{ri}\nonumber \\
& +\frac{1}{2} \tilde{\mathcal{I}} [q^2]^{22} _{ki} [P_\mu, (\mathbf{X}_{\Phi \Phi})_{ik}] [P^\mu, (\mathbf{X}_{\Phi \Phi})_{ki}]\nonumber \\
& -\tilde{\mathcal{I}} [q^2] ^{21}_{il} (\mathbf{X}_{\Phi \Xi})_{il} \gamma^\mu [P_\mu, (\mathbf{X}_{\Xi \Phi})_{li}]\nonumber \\
& - \frac{1}{2} m_{\Xi_{k}} \tilde{\mathcal{I}} ^{111} _{ikl} (\mathbf{X}_{\Phi \Xi})_{ik} (\mathbf{X}_{\Xi \Phi})_{kl} (\mathbf{X}_{\Phi \Phi})_{li} .
\label{eq:L_SMEFT_operators}
\end{align}
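The prefactors $1/2, 1/4, \ldots, 1/12$ of the scalar traces follow the $1/(2n)$ pattern expected from the functional-determinant expansion $-\tfrac{1}{2}\tr\log(1 - \Delta\,\mathbf{X})$. Assuming this standard origin of the coefficients, the pattern can be checked symbolically:

```python
import sympy as sp

x = sp.symbols("x")  # stands in for the combination Delta * X

# -1/2 log(1 - x) = sum_n x^n / (2n): reproduces the prefactors
# 1/2, 1/4, 1/6, 1/8, 1/10, 1/12 of the n-th order trace terms.
expansion = sp.series(-sp.log(1 - x) / 2, x, 0, 7).removeO()
coeffs = [expansion.coeff(x, n) for n in range(1, 7)]
assert coeffs == [sp.Rational(1, 2 * n) for n in range(1, 7)]
```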
We furthermore
set $P_\mu \equiv i \partial_\mu$ to omit contributions from gauge
bosons. In our scenario we identify $\Sigma = (\st{L}, \st{R})$ as
the vector of (complex) heavy stops and $\Lambda = \gluino{a}$ as the
heavy gluino. From \eqref{eq:LMSSM_stop} we then obtain the following
non-vanishing derivatives
\begin{align}
(X_{\st{L}^* \st{L}})_{ij} &= (X_{\st{L} \st{L}^*})_{ij} =
(X_{\st{R}^* \st{R}})_{ij} = (X_{\st{R} \st{R}^*})_{ij} = \frac{1}{2}(y_t s_\beta h)^2 \delta_{ij},\\
(X_{\st{L}^* \st{R}})_{ij} &= (X_{\st{L} \st{R}^*})_{ij} =
(X_{\st{R}^* \st{L}})_{ij} = (X_{\st{R} \st{L}^*})_{ij} = \frac{1}{\sqrt{2}} y_t s_\beta h X_t \delta_{ij},\\
(X_{\st{L} \gluino{a}})_{i\alpha}^a &= (X_{\gluino{a} \st{L}})_{i\alpha}^a = -\sqrt{2} g_3 (\bar{t}_j P_R)_{\alpha} T^a_{ji}, \label{eq:sign_1}\\
(X_{\st{R} \gluino{a}})_{i\alpha}^a &= (X_{\gluino{a} \st{R}})_{i\alpha}^a = \sqrt{2} g_3 (\bar{t}_j P_L)_{\alpha} T^a_{ji}, \label{eq:sign_2}\\
(X_{\gluino{a} \st{L}^*})_{i\alpha}^a &= (X_{\st{L}^* \gluino{a}})_{i\alpha}^a = \sqrt{2} g_3 T^a_{ij} (\mathcal{C} P_L t_j)_{\alpha},\\
(X_{\gluino{a} \st{R}^*})_{i\alpha}^a &= (X_{\st{R}^* \gluino{a}})_{i\alpha}^a = -\sqrt{2} g_3 T^a_{ij} (\mathcal{C} P_R t_j)_{\alpha},
\end{align}
where $i,j=1,2,3$ and $a=1,\ldots,8$ are color indices and
$\alpha=1,\ldots,4$ is a 4-component spinor index. Note the flipped
sign in eqs.\ \eqref{eq:sign_1}--\eqref{eq:sign_2} due to one
anti-commutation of the spinor $\bar{t}$ with the derivative w.r.t.\
the spinor $\gluino{a}$. The bold derivative matrices thus become
\begin{align}
\mathbf{X}_{\Phi \Phi} &=
\begin{pmatrix}
X_{\Sigma ^* \Sigma} & X_{\Sigma ^* \Sigma ^*} \\
X_{\Sigma \Sigma} & X_{\Sigma \Sigma ^{*}}
\end{pmatrix} =
\begin{pmatrix}
(X_{\st{L}^* \st{L}})_{ij} & (X_{\st{L}^* \st{R}})_{ij} & 0 & 0 \\
(X_{\st{R}^* \st{L}})_{ij} & (X_{\st{R}^* \st{R}})_{ij} & 0 & 0 \\
0 & 0 & (X_{\st{L} \st{L}^*})_{ij} & (X_{\st{L} \st{R}^*})_{ij} \\
0 & 0 & (X_{\st{R} \st{L}^*})_{ij} & (X_{\st{R} \st{R}^*})_{ij}
\end{pmatrix} \\
&= \delta_{ij} \; \mathbf{1}_{2\times 2} \otimes
\begin{pmatrix}
\frac{1}{2}(y_t s_\beta h)^2 & \frac{1}{\sqrt{2}} y_t s_\beta h X_t \\
\frac{1}{\sqrt{2}} y_t s_\beta h X_t & \frac{1}{2}(y_t s_\beta h)^2
\end{pmatrix}, \\
\mathbf{X}_{\Phi \Xi} &=
\begin{pmatrix}
X_{\Sigma ^* \Lambda} \\
X_{\Sigma \Lambda}
\end{pmatrix}
=
\begin{pmatrix}
(X_{\st{L}^* \gluino{a}})_{i\alpha}^a \\
(X_{\st{R}^* \gluino{a}})_{i\alpha}^a \\
(X_{\st{L} \gluino{a}})_{i\alpha}^a \\
(X_{\st{R} \gluino{a}})_{i\alpha}^a
\end{pmatrix}
= \sqrt{2} g_3
\begin{pmatrix}
T^a_{ij} (\mathcal{C} P_L t_j)_{\alpha} \\
-T^a_{ij} (\mathcal{C} P_R t_j)_{\alpha} \\
-(\bar{t}_j P_R)_{\alpha} T^a_{ji} \\
(\bar{t}_j P_L)_{\alpha} T^a_{ji}
\end{pmatrix},
\\
\mathbf{X}_{\Xi \Phi} &=
\begin{pmatrix}
\mathcal{C}^{-1} X_{\Lambda \Sigma}, && \mathcal{C}^{-1} X_{\Lambda \Sigma ^*}
\end{pmatrix} \\
&= (\mathcal{C}^{-1})_{\alpha\beta}
\begin{pmatrix}
(X_{\gluino{a} \st{L}})_{i\beta}^a, &&
(X_{\gluino{a} \st{R}})_{i\beta}^a, &&
(X_{\gluino{a} \st{L}^*})_{i\beta}^a, &&
(X_{\gluino{a} \st{R}^*})_{i\beta}^a
\end{pmatrix} \\
&= \sqrt{2} g_3 (\mathcal{C}^{-1})_{\alpha\beta}
\begin{pmatrix}
-(\bar{t}_j P_R)_{\beta} T^a_{ji}, &&
(\bar{t}_j P_L)_{\beta} T^a_{ji}, &&
T^a_{ij} (\mathcal{C} P_L t_j)_{\beta}, &&
-T^a_{ij} (\mathcal{C} P_R t_j)_{\beta}
\end{pmatrix} \\
&= \sqrt{2} g_3
\begin{pmatrix}
-(\bar{t}_j P_R (\mathcal{C}^{-1})^T)_{\alpha} T^a_{ji}, &&
(\bar{t}_j P_L (\mathcal{C}^{-1})^T)_{\alpha} T^a_{ji}, &&
T^a_{ij} (P_L t_j)_{\alpha}, &
-T^a_{ij} (P_R t_j)_{\alpha}
\end{pmatrix} .
\end{align}
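The manipulations above use only generic properties of the charge-conjugation matrix: $\mathcal{C}^T = -\mathcal{C}$ (hence also $(\mathcal{C}^{-1})^T = -\mathcal{C}^{-1}$), $\mathcal{C}^{-1}\mathcal{C} = 1$ and $\mathcal{C}^{-1}\gamma^\mu\mathcal{C} = -(\gamma^\mu)^T$. A numerical sketch in the Dirac representation (a convention choice for this check; the appendix fixes its own conventions):

```python
import numpy as np

# Dirac-representation gamma matrices
g0 = np.diag([1, 1, -1, -1]).astype(complex)
s2 = np.array([[0, -1j], [1j, 0]])
g2 = np.block([[np.zeros((2, 2)), s2], [-s2, np.zeros((2, 2))]])

C = 1j * g2 @ g0          # charge-conjugation matrix C = i gamma^2 gamma^0
Cinv = np.linalg.inv(C)

assert np.allclose(C.T, -C)              # C antisymmetric, so (C^{-1})^T = -C^{-1}
assert np.allclose(Cinv, -C)             # C^2 = -1  =>  C^{-1} = -C
assert np.allclose(Cinv @ C, np.eye(4))  # C^{-1} C P t = P t
assert np.allclose(Cinv @ g0 @ C, -g0.T) # C^{-1} gamma^mu C = -(gamma^mu)^T
assert np.allclose(Cinv @ g2 @ C, -g2.T)
print("charge-conjugation identities hold")
```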
By inserting the $\mathbf{X}_{AB}$ operators into
\eqref{eq:L_SMEFT_operators} and summing over all fields and colors we
obtain
\begin{align}
\mathcal{L}_\ensuremath{\text{EFT}}\xspace^\text{1\ensuremath{\ell}} &=
c_t h\bar{t}t + c_L\bar{t}i\slashed{\partial}P_Lt + c_R \bar{t}i\slashed{\partial}P_Rt
+ c_2' (\partial h)^2 + c_2 h^2 + c_4 h^4 + c_6 h^6 + \cdots,
\end{align}
where
\begin{align}
c_t &= -\frac{4 \sqrt{2}}{3}\kappa g_3^2 y_t s_\beta m_{\gluino{}} X_t \tilde{\mathcal{I}} ^{111} _{\gluino{}\tilde{q}\tilde{u}},\\
c_L &= \frac{16}{3}\kappa g_3^2 \tilde{\mathcal{I}} [q^2] ^{21} _{\tilde{q} \gluino{}},\\
c_R &= c_L|_{\tilde{q} \to \tilde{u}},\\
c_2' &= -3 \kappa (y_t s_\beta)^2 X_t^2 \tilde{\mathcal{I}} [q^2] ^{22} _{\tilde{q} \tilde{u}},\\
c_2 &= \frac{3}{2}\kappa (y_t s_\beta)^2 \left[\tilde{\mathcal{I}} ^{1} _{\tilde{q}} + \tilde{\mathcal{I}} ^{1} _{\tilde{u}} + X_t^2 \tilde{\mathcal{I}} ^{11} _{\tilde{q}\tilde{u}}\right], \\
c_4 &= \frac{3}{8}\kappa (y_t s_\beta)^4 \left[
\tilde{\mathcal{I}} ^{11} _{\tilde{q}\sq} + \tilde{\mathcal{I}} ^{11} _{\tilde{u}\su} + 2 X_t^2 (\tilde{\mathcal{I}} ^{111} _{\tilde{q}\sq\tilde{u}} + \tilde{\mathcal{I}} ^{111} _{\tilde{q}\tilde{u}\su}) + X_t^4 \tilde{\mathcal{I}} ^{1111} _{\tilde{q}\sq\tilde{u}\su}\right],\\
\begin{split}
c_6 &= \frac{1}{8}\kappa (y_t s_\beta)^6 \big[
\tilde{\mathcal{I}} ^{111} _{\tilde{q}\sq\tilde{q}} + \tilde{\mathcal{I}} ^{111} _{\tilde{u}\su\tilde{u}} + 3 X_t^2 ( \tilde{\mathcal{I}} ^{1111} _{\tilde{q}\sq\tilde{q}\tilde{u}} + \tilde{\mathcal{I}} ^{1111} _{\tilde{q}\sq\tilde{u}\su} + \tilde{\mathcal{I}} ^{1111} _{\tilde{q}\tilde{u}\su\tilde{u}} ) \\
& ~~~~~~~~~~~~~~~~~~
+ 3 X_t^4 ( \tilde{\mathcal{I}} ^{11111} _{\tilde{q}\sq\tilde{q}\tilde{u}\su} + \tilde{\mathcal{I}} ^{11111} _{\tilde{q}\sq\tilde{u}\su\tilde{u}} )
+ X_t^6 \tilde{\mathcal{I}} ^{111111} _{\tilde{q}\sq\tilde{q}\tilde{u}\su\tilde{u}} \big] .
\end{split}
\end{align}
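As a cross-check, the entries of $\mathbf{X}_{\Phi\Phi}$ used above can be reproduced as second derivatives of the interaction Lagrangian \eqref{eq:LMSSM_stop}. A sympy sketch for one color component ($\delta_{ij} \to 1$), assuming the sign convention $X_{AB} = \partial^2(-\mathcal{L})/\partial A\,\partial B$ that matches the quoted entries:

```python
import sympy as sp

h, stL, stLc, stR, stRc = sp.symbols("h stL stLc stR stRc")
y, s, Xt = sp.symbols("y_t s_beta X_t", positive=True)

# h-dependent scalar terms of the Lagrangian (one color component):
L = (-y**2 * s**2 / 2 * h**2 * (stLc*stL + stRc*stR)
     - y * s * Xt / sp.sqrt(2) * h * (stLc*stR + stRc*stL))

# Second derivatives of -L reproduce the X entries quoted in the text:
assert sp.simplify(sp.diff(-L, stLc, stL) - (y*s*h)**2 / 2) == 0
assert sp.simplify(sp.diff(-L, stRc, stR) - (y*s*h)**2 / 2) == 0
assert sp.simplify(sp.diff(-L, stLc, stR) - y*s*h*Xt/sp.sqrt(2)) == 0
assert sp.simplify(sp.diff(-L, stRc, stL) - y*s*h*Xt/sp.sqrt(2)) == 0
```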
To canonically normalize the kinetic terms
of $\mathcal{L}_\ensuremath{\text{SMEFT}}\xspace$ we re-define the Higgs and the top quark field as
\begin{align}
h &= \left(1 - \frac{1}{2} \delta Z_h\right) \hat{h} , \\
t_L &= \left(1 - \frac{1}{2} \delta Z_L\right) \hat{t}_L , \\
t_R &= \left(1 - \frac{1}{2} \delta Z_R\right) \hat{t}_R ,
\end{align}
where the field renormalizations $\delta Z_{h/L/R}$ are given by
\begin{align}
\delta Z_h &= 2c_2', \\
\delta Z_L &= c_L, \\
\delta Z_R &= c_R.
\end{align}
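With these choices the one-loop kinetic corrections cancel at first order: e.g.\ the Higgs kinetic prefactor $(\tfrac{1}{2} + c_2')(1 - c_2')^2$ is canonical up to $\mathcal{O}(c_2'^2)$, and analogously for the fermion fields. A short symbolic check:

```python
import sympy as sp

c = sp.symbols("c")  # stands for c_2' (or c_L, c_R)

# Higgs kinetic prefactor: (1/2 + c_2') (partial h)^2 with
# h = (1 - c_2') h_hat, i.e. delta Z_h = 2 c_2':
higgs = (sp.Rational(1, 2) + c) * (1 - c)**2
assert sp.expand(higgs).coeff(c, 1) == 0   # canonical at O(c)

# Fermion kinetic prefactor: (1 + c_L) with t_L = (1 - c_L/2) t_hat_L:
fermion = (1 + c) * (1 - c/2)**2
assert sp.expand(fermion).coeff(c, 1) == 0
```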
If we parameterize the \ensuremath{\text{SMEFT}}\xspace Lagrangian as
\begin{align}
\mathcal{L}_\ensuremath{\text{SMEFT}}\xspace \supset
- \frac{g_t}{\sqrt{2}} \hat{h} \bar{\hat{t}} \hat{t}
+ \frac{m^2}{2} \hat{h}^2 - \frac{\lambda}{8} \hat{h}^4
- \frac{\tilde{c}_6}{8} \hat{h}^6,
\end{align}
then the SMEFT parameters $g_t$, $\lambda$ and $m^2$ are given by
\begin{align}
g_t &= y_t s_\beta \left[1 - \frac{1}{2}(c_L + c_R) - c_2' - \frac{\sqrt{2}c_t}{y_t s_\beta} \right], \\
m^2 &= 2 c_2,\\
\lambda &= -8 c_4 ,\\
\tilde{c}_6 &= -8 c_6,
\end{align}
which agrees with the results calculated in
\cite{Bagnaschi:2014rsa,Bagnaschi:2017xid,Huo:2015nka,Drozd:2015rsp}.\footnote{It
was noted in \cite{Bagnaschi:2017xid} that the logarithmic term in
the last line of eq.~(D.4) in \cite{Drozd:2015rsp} should come with a
minus sign.}
\subsection{Integrating out the gluino from the MSSM with light stops}
\label{sec: gluinoOut}
In this section we calculate some of the terms that arise when
integrating out the gluino from the MSSM. This \ensuremath{\text{EFT}}\xspace\ scenario is
relevant when there is a large hierarchy between the gluino mass and
the stop masses in the MSSM. This example is also a direct
application of most of the operators calculated in \secref{sec:calc},
in particular operators where Majorana and Dirac fermions appear in
loops at the same time.
We consider the following part of the MSSM Lagrangian
\begin{align}
\mathcal{L}_\ensuremath{\text{MSSM}}\xspace \supset{}&
|\partial\st{L}|^2 - m^2_{\sq} |\st{L}|^2
+ |\partial\st{R}|^2 - m^2_{\su} |\st{R}|^2
+ \frac{1}{2}(\gluino{a})^T \mathcal{C} (i\slashed{\partial} - m_{\gluino{}}) \gluino{a} \nonumber \\
& -\sqrt{2} g_3 \left(
\bar{t} P_R \gluino{a} T^a \st{L} - \bar{t} P_L \gluino{a} T^a \st{R}
+ \st{L}^* (\gluino{a})^T T^a \mathcal{C} P_L t - \st{R}^* (\gluino{a})^T T^a \mathcal{C} P_R t
\right)\nonumber \\
& +\left(-y_t^2+\frac{g_3^2}{2}\right)(\st{L}^*\st{R})(\st{L}\st{R}^*)-\frac{g_3^2}{6}|\st{L}|^2|\st{R}|^2,
\end{align}
where we use the same notation as in \secref{sec:matching_MSSM_to_SMEFT} with $t$
being the top quark, defined as a Dirac fermion, and
$\gluino{a} = \ccfield{(\gluino{a})}$ denotes the gluino, which is a
Majorana fermion. The complex scalar fields $\st{L}$ and $\st{R}$
represent the stops.
In the following we determine the one-loop Wilson coefficients of the
following operators in the \ensuremath{\text{EFT}}\xspace:
\begin{align}
\mathcal{L}_\ensuremath{\text{EFT}}\xspace^{1\ensuremath{\ell}} & \supset c_{t_L} \bar{t}_Li\slashed{\partial} t_L+c_{t_R} \bar{t}_Ri\slashed{\partial} t_R+c_{\st{L}} \partial_\mu\st{L}^*\partial^\mu \st{L}-\delta m_{\tilde{q}} ^2 |\st{L}|^2+c_{\st{R}} \partial_\mu\st{R}^*\partial^\mu \st{R}-\delta m_{\tilde{u}} ^2 |\st{R}|^2
\nonumber \\ & \quad +c^L_{41} \left(\st{Li}^* \st{Li} \right)^2+c^L_{42} \left(\st{Li}^* \st{Lj} \right) \left(\st{Lj}^* \st{Li}\right)+c^R_{4} \left(\st{R}^* \st{R}\right)^2\nonumber
\\ & \quad +c^{LR}_{41} \left(\st{Li}^* \st{Li}\right)\left(\st{Rj}^* \st{Rj}\right)+c^{LR}_{42} \left(\st{Li}^* \st{Lj}\right)\left(\st{Rj}^* \st{Ri}\right)+ c_G G^a_{\mu \nu} G_a ^{\mu \nu}\nonumber
\\ & \quad +[c^{LL} _{51} (\bar{t}_{Li} T^a _{ij} \st{Lj})(\ccfield{t}_{Rk} T^a _{kl} \st{Ll})+c^{LL} _{52} (\st{Li}^* T^a _{ij} \overline{\ccfield{t_{Rj}}})(\st{Lk}^* T^a _{kl} t_{Ll})+(L \leftrightarrow R)]\nonumber
\\ & \quad +[c^{LR} _{51} (\bar{t}_{Li} T^a _{ij} \st{Lj})(\st{Rk}^* T^a_{kl} t_{Rl})+c^{LR} _{52} (\st{Li} \st{Ri}^*) (\bar{t}_{Lj} t_{Rj})+(L \leftrightarrow R)]
\nonumber \\ & \quad +c_{61} ^L (\tilde{t}_{Li}^* \tilde{t}_{Li})^3+c_{62} ^L (\tilde{t}_{Li}^* \tilde{t}_{Li})(\tilde{t}_{Lj}^* \tilde{t}_{Lk})(\tilde{t}_{Lk}^* \tilde{t}_{Lj})+c_{63} ^L (\tilde{t}_{Li}^* \tilde{t}_{Lj})(\tilde{t}_{Lj}^* \tilde{t}_{Lk})(\tilde{t}_{Lk}^* \tilde{t}_{Li})+c_6^R (\tilde{t}_{Ri}^* \tilde{t}_{Ri})^3
\nonumber \\ & \quad +[c_{61} ^{LR} (\tilde{t}_{Li}^* \tilde{t}_{Li})^2(\tilde{t}_{Ri}^* \tilde{t}_{Ri}) +c_{62} ^{LR} (\tilde{t}_{Li}^* \tilde{t}_{Li})(\tilde{t}_{Lj}^* \tilde{t}_{Lk})(\tilde{t}_{Rk}^* \tilde{t}_{Rj})+c_{63} ^{LR} (\tilde{t}_{Li}^* \tilde{t}_{Lj})(\tilde{t}_{Lj}^* \tilde{t}_{Li})(\tilde{t}_{Rk}^* \tilde{t}_{Rk}) \nonumber \\ & \quad + c_{64} ^{LR} (\tilde{t}_{Li}^* \tilde{t}_{Lj})(\tilde{t}_{Lj}^* \tilde{t}_{Lk})(\tilde{t}_{Rk}^* \tilde{t}_{Ri})+c_{61} ^{RL} (\tilde{t}_{Ri}^* \tilde{t}_{Ri})^2(\tilde{t}_{Li}^* \tilde{t}_{Li}) +c_{62} ^{RL} (\tilde{t}_{Ri}^* \tilde{t}_{Ri})(\tilde{t}_{Rj}^* \tilde{t}_{Rk})(\tilde{t}_{Lk}^* \tilde{t}_{Lj})]
\nonumber \\ & \quad +[c_{61} ^{L^\mu L_\mu}\left(\bar{t}_{Li} \gamma^\mu t_{Li}\right)\left(\bar{t}_{Lj} \gamma_\mu t_{Lj}\right)+c_{62} ^{L^\mu L_\mu}\left(\bar{t}_{Li} \gamma^\mu t_{Lj}\right)\left(\bar{t}_{Lj} \gamma_\mu t_{Li}\right)+(L \leftrightarrow R)]\nonumber
\\ & \quad +c_{61}^{(LR)^\mu (RL)_\mu} \left(\overline{\ccfield{t_{Ri}}} \gamma^\mu t_{Rj}\right)\left(\bar{t}_{Rj} \gamma_\mu \ccfield{t}_{Ri}\right)+ c_{62}^{(LR)^\mu (RL)_\mu} \left(\overline{\ccfield{t_{Rj}}} \gamma^\mu t_{Ri}\right)\left(\bar{t}_{Rj} \gamma_\mu \ccfield{t}_{Ri}\right)\nonumber \\
& \quad + [c_{61} ^{LL}\left(\overline{\ccfield{t_{Ri}}} t_{Li}\right)\left(\bar{t}_{Lj} \ccfield{t}_{Rj}\right)+c_{62}^{LL}\left(\overline{\ccfield{t_{Ri}}} t_{Lj}\right)\left(\bar{t}_{Lj} \ccfield{t}_{Ri}\right)+(L\leftrightarrow R)]\nonumber
\\ & \quad +c_{61} ^{(LR)(RL)} \left(\bar{t}_{Ri} t_{Lj}\right)\left(\bar{t}_{Lj} t_{Ri}\right) +c_{62} ^{(LR)(RL)} \left(\bar{t}_{Rj}t_{Li}\right)\left(\bar{t}_{Lj} t_{Ri}\right).
\label{eq: gluonOutFirstEFTLag}
\end{align}
These operators represent all derived one-loop stop interactions in
the gaugeless limit and in the unbroken phase,
without contributions from higher-dimensional operators with covariant derivatives. Terms which
involve SUSY particles beyond the stop are omitted for brevity. In
\eqref{eq: gluonOutFirstEFTLag} the color indices $i,j,k=1,2,3$ and
$a=1,\ldots,8$ are written out explicitly. Note that in general
$\mathcal{L}_\ensuremath{\text{EFT}}\xspace^{1\ensuremath{\ell}}$ contains $SU(2)_L$ and $SU(3)_C$ invariant terms of the
form
$(\tilde{q}^\dagger_{Li}\tilde{q}_{Li})(\tilde{q}^\dagger_{Lj}\tilde{q}_{Lj})$
and
$(\tilde{q}^\dagger_{Li}\tilde{q}_{Lj})(\tilde{q}^\dagger_{Lj}\tilde{q}_{Li})$,
where the $SU(2)_L$ indices are contracted within parentheses, but the
color indices are contracted differently among the terms. In
\eqref{eq: gluonOutFirstEFTLag}, however, the corresponding terms with
the couplings $c_{41}^L$ and $c_{42}^L$ have the same structure,
because we have omitted the sbottom quark.
The dimension-5 operators receive contributions already at tree level,
which stem from the insertion of the gluino background field
$\classicfield{\tilde{g}}$ into the Lagrangian of the MSSM. The necessary part of the
gluino background field can be extracted from the equation of motion
\begin{align}
[\mathcal{C} (i\slashed{\partial}-m_{\gluino{}})]_{\alpha \beta} (\classicfield{\tilde{g}})_\beta^a=\sqrt{2} g_3 \left(-
\bar{t}_{L \alpha} T^a \st{L} + \bar{t}_{R \alpha} T^a \st{R}
+ \st{L}^* T^a (\mathcal{C} t_L)_\alpha - \st{R}^* T^a (\mathcal{C} t_R)_\alpha
\right),
\end{align}
which yields
\begin{align}
(\classicfield{\tilde{g}})_\beta^a &= \sqrt{2} g_3 (i\slashed{\partial}-m_{\gluino{}})_{\beta \alpha}^{-1} \left[-
(\bar{t}_L \mathcal{C}) _\alpha T^a \st{L} + (\bar{t}_R \mathcal{C})_\alpha T^a \st{R}
+ \st{L}^* T^a t_{L\alpha} - \st{R}^* T^a t_{R\alpha} \right]
\\ &= \frac{\sqrt{2} g_3}{m_{\gluino{}}} \left[
(\bar{t}_L \mathcal{C}) _\beta T^a \st{L} - (\bar{t}_R \mathcal{C})_\beta T^a \st{R}
- \st{L}^* T^a t_{L\beta} + \st{R}^* T^a t_{R\beta} + \cdots \right] ,
\label{eq: ClassicalGluinoField}
\end{align}
where the ellipsis denotes higher-order terms of
$\order{\partial/m_{\gluino{}}}$ with at least one
derivative. Inserting \eqref{eq: ClassicalGluinoField} into both the
kinetic term of the gluino and the interaction Lagrangian one finds
the tree-level values of $c_{5i} ^{AB}$ ($A,B\in \{L,R\}$) to be
\begin{align}
c_{51} ^{LL,\ensuremath{\text{tree}}\xspace}&=c_{52} ^{LL,\ensuremath{\text{tree}}\xspace}=c_{51} ^{RR,\ensuremath{\text{tree}}\xspace}=c_{52} ^{RR,\ensuremath{\text{tree}}\xspace}=\frac{g^2_3}{m_{\gluino{}}}, \\
c_{51} ^{LR,\ensuremath{\text{tree}}\xspace}&=c_{51} ^{RL,\ensuremath{\text{tree}}\xspace}=-\frac{2g^2_3}{m_{\gluino{}}}, \\
c_{52} ^{LR,\ensuremath{\text{tree}}\xspace}&=c_{52} ^{RL,\ensuremath{\text{tree}}\xspace}=0.
\end{align}
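The step from the formal inverse to \eqref{eq: ClassicalGluinoField} is the expansion $(i\slashed{\partial} - m_{\gluino{}})^{-1} = -\tfrac{1}{m_{\gluino{}}}(1 + i\slashed{\partial}/m_{\gluino{}} + \cdots)$, which produces the overall sign flip and the $1/m_{\gluino{}}$ prefactor above. With a commuting placeholder for $i\slashed{\partial}$ this can be sketched as:

```python
import sympy as sp

x, m = sp.symbols("x m", positive=True)  # x stands in for i*dslash, m for m_gluino

series = sp.series(1 / (x - m), x, 0, 3).removeO()
# Leading term -1/m flips the sign of the source terms and supplies the
# 1/m_gluino prefactor; the higher orders are the derivative terms
# collected in the ellipsis.
assert sp.expand(series + 1/m + x/m**2 + x**2/m**3) == 0
```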
At one loop the relevant contributions from the UOLEA are
\begin{align}
\frac{1}{\kappa}\mathcal{L}_\ensuremath{\text{EFT}}\xspace^{1\ensuremath{\ell}} = \tr \Big\{&(-\tilde{\mathcal{I}} [q^4]^{31} _{\gluino{} 0} +\frac{m^2_{\gluino{}}}{12} \tilde{\mathcal{I}} [q^2] ^{22} _{\gluino{} 0}) \gamma_\mu [P^\nu,(\mathbf{X}_{\Xi \xi})^a_i] \gamma ^\mu [P_\nu,(\mathbf{X}_{\xi \Xi})^a_i] \nonumber \\
& +(-2\tilde{\mathcal{I}} [q^4]^{31} _{\gluino{} 0} +\frac{m^2_{\gluino{}}}{6} \tilde{\mathcal{I}} [q^2] ^{22} _{\gluino{} 0}) \gamma_\mu [P^\mu,(\mathbf{X}_{\Xi \xi})^a_i] \gamma ^\nu [P_\nu,(\mathbf{X}_{\xi \Xi})^a_i] \nonumber \\ & + (-\tilde{\mathcal{I}} [q^{2}]^{12} _{\gluino{}0}-2 m^2_{\phi_i} \tilde{\mathcal{I}}[q^2] ^{13} _{\gluino{}0}) (\mathbf{X}_{\phi \Xi})_i \gamma ^\mu [P_\mu,(\mathbf{X}_{\Xi \phi})_i]\nonumber
\\
& +\frac{1}{4} \tilde{\mathcal{I}} [q^2] ^{22}_{\gluino{}0} (\mathbf{X}_{\phi \Xi})_i \gamma^\mu (\mathbf{X}_{\Xi \phi})_j (\mathbf{X}_{\phi \Xi})_j \gamma_\mu (\mathbf{X}_{\Xi \phi})_i
\nonumber \\ & -\frac{1}{2}m_{\gluino{}} \tilde{\mathcal{I}} ^{12}_{\gluino{}0}(\mathbf{X}_{\phi \phi})_{ij} (\mathbf{X}_{\phi \Xi})_j (\mathbf{X}_{\Xi \phi})_i
\nonumber \\ & +\frac{1}{4} m^2_{\gluino{}} \tilde{\mathcal{I}} ^{22}_{\gluino{}0} (\mathbf{X}_{\phi \Xi})_i (\mathbf{X}_{\Xi \phi})_j (\mathbf{X}_{\phi \Xi})_j (\mathbf{X}_{\Xi \phi})_i-\frac{1}{2} \tilde{\mathcal{I}} [q^2]^{11} _{{\gluino{}} 0} \gamma^\mu (\mathbf{X}_{\Xi \xi})_i \gamma_\mu (\mathbf{X}_{\xi \Xi})_i \nonumber
\\ & -\frac{1}{4}m^2_{\gluino{}} \tilde{\mathcal{I}} [q^2] ^{22} _{\gluino{}0} (\mathbf{X}_{\Xi \xi})^a_i \gamma^\mu (\mathbf{X}_{\xi \Xi})^b_i (\mathbf{X}_{\Xi \xi})^b_j \gamma_\mu (\mathbf{X}_{\xi \Xi})^a_j
\nonumber
\\
& -\frac{1}{4} \tilde{\mathcal{I}} [q^4] ^{22} _{\gluino{}0} g_{\mu \nu \rho \sigma} (\mathbf{X}_{\Xi \xi})^a_i \gamma^\mu (\mathbf{X}_{\xi \Xi})^b_i \gamma^\nu (\mathbf{X}_{\Xi \xi})^b_j \gamma^\rho (\mathbf{X}_{\xi \Xi})^a_j \gamma^\sigma
\nonumber
\\
& -\frac{1}{2}m^2 _{\gluino{}} \tilde{\mathcal{I}} [q^4] ^{33} _{\gluino{}0} g_{\mu \nu \rho \sigma} (\mathbf{X}_{\Xi \xi})^a_i \gamma^\mu (\mathbf{X}_{\xi \Xi})^b_i (\mathbf{X}_{\Xi \xi})^b_j \gamma^\nu (\mathbf{X}_{\xi \Xi})^c_j \gamma^\rho (\mathbf{X}_{\Xi \xi})^c_k \gamma^\sigma (\mathbf{X}_{\xi \Xi})^a_k
\nonumber
\\
& -\frac{1}{6} \tilde{\mathcal{I}} [q^6] ^{33} _{\gluino{}0} g_{\mu \nu \rho \sigma \kappa \lambda} (\mathbf{X}_{\Xi \xi})^a_i \gamma^\mu (\mathbf{X}_{\xi \Xi})^b_i \gamma^\nu (\mathbf{X}_{\Xi \xi})^b_j \gamma^\rho (\mathbf{X}_{\xi \Xi})^c_j \gamma^\sigma (\mathbf{X}_{\Xi \xi})^c_k \gamma^\kappa (\mathbf{X}_{\xi \Xi})^a_k \gamma^\lambda
\nonumber \\ & +\frac{1}{6}\tilde{\mathcal{I}} ^{2} _{\gluino{}}[P_\mu,P_\nu][P^\mu,P^\nu]\Big\},
\label{eq: UOLEAContrGluino}
\end{align}
where $g_{\mu \nu \cdots}$ is the combination of metric tensors which
is totally symmetric in all indices, see
\appref{sec:loop_functions}. The derivatives with respect to the stops
and the gluino have already been calculated in
\secref{sec:matching_MSSM_to_SMEFT} and are given by
\begin{align}
\mathbf{X}_{\phi \Xi} &=
\begin{pmatrix}
X_{\sigma ^* \Lambda} \\
X_{\sigma \Lambda}
\end{pmatrix}
=
\begin{pmatrix}
(X_{\st{L}^* \gluino{a}})_{i\alpha}^a \\
(X_{\st{R}^* \gluino{a}})_{i\alpha}^a \\
(X_{\st{L} \gluino{a}})_{i\alpha}^a \\
(X_{\st{R} \gluino{a}})_{i\alpha}^a
\end{pmatrix}
= \sqrt{2} g_3
\begin{pmatrix}
T^a_{ij} (\mathcal{C} P_L t_j)_{\alpha} \\
-T^a_{ij} (\mathcal{C} P_R t_j)_{\alpha} \\
-(\bar{t}_j P_R)_{\alpha} T^a_{ji} \\
(\bar{t}_j P_L)_{\alpha} T^a_{ji}
\end{pmatrix},
\\
\mathbf{X}_{\Xi \phi} &=
\begin{pmatrix}
\mathcal{C}^{-1} X_{\Lambda \sigma}, & \mathcal{C}^{-1} X_{\Lambda \sigma ^*}
\end{pmatrix} \\
&= (\mathcal{C}^{-1})_{\alpha\beta}
\begin{pmatrix}
(X_{\gluino{a} \st{L}})_{i\beta}^a, &
(X_{\gluino{a} \st{R}})_{i\beta}^a, &
(X_{\gluino{a} \st{L}^*})_{i\beta}^a, &
(X_{\gluino{a} \st{R}^*})_{i\beta}^a
\end{pmatrix} \\
&= \sqrt{2} g_3
\begin{pmatrix}
-(\bar{t}_j P_R \mathcal{C})_{\alpha} T^a_{ji}, &
(\bar{t}_j P_L \mathcal{C})_{\alpha} T^a_{ji}, &
T^a_{ij} (P_L t_j)_{\alpha}, &
-T^a_{ij} (P_R t_j)_{\alpha}
\end{pmatrix},
\end{align}
the difference being that the stops are now considered to be light
fields.
For the purpose of this application we also need the derivatives with
respect to a top and a gluino, which read
\begin{align}
(X_{\bar{t} \gluino{a}})_{i \alpha \beta}^a&=-\sqrt{2}g_3T^a_{ij}\left[(P_R)_{\alpha\beta}\st{Lj}-(P_L)_{\alpha\beta}\st{Rj}\right],\\
(X_{t \gluino{a}})_{i \alpha \beta}^a&=-\sqrt{2}g_3T^a_{ji}\left[-\st{Lj}^*(\mathcal{C} P_L)_{\beta \alpha}+\st{Rj}^*(\mathcal{C} P_R)_{\beta \alpha}\right],\\
(X_{\gluino{a}\bar{t}})_{i \alpha \beta}^a&=\sqrt{2}g_3T^a_{ij}\left[(P_R)_{\beta \alpha}\st{Lj}-(P_L)_{\beta \alpha}\st{Rj}\right],\\
(X_{ \gluino{a} t})_{i \alpha \beta}^a&=\sqrt{2}g_3T^a_{ji}\left[-\st{Lj}^*(\mathcal{C} P_L)_{ \alpha \beta}+\st{Rj}^*(\mathcal{C} P_R)_{\alpha \beta}\right],
\end{align}
and are collected into
\begin{align}
\mathbf{X}_{\Xi \xi}&=\begin{pmatrix}
\mathcal{C} ^{-1} X_{\Lambda \omega}, & \mathcal{C} ^{-1} X_{\Lambda \bar{\omega}} \mathcal{C} ^{-1}
\end{pmatrix}\\
&=\begin{pmatrix}
(\mathcal{C} ^{-1}X_{ \gluino{a} t})_{i \alpha \beta}^a , & (\mathcal{C} ^{-1} X_{\gluino{a}\bar{t}} \mathcal{C} ^{-1})_{i \alpha \beta}^a
\end{pmatrix} \\
&= \begin{pmatrix}
-\sqrt{2}g_3T^a_{ji}\left[\st{Lj}^*(P_L)_{ \alpha \beta}-\st{Rj}^*(P_R)_{ \alpha \beta}\right] , & -\sqrt{2}g_3T^a_{ij}\left[(P_R)_{\alpha \beta}\st{Lj}-(P_L)_{\alpha \beta}\st{Rj}\right]
\end{pmatrix}, \\
\mathbf{X}_{\xi \Xi}&=\begin{pmatrix}
X_{\bar{\omega} \Lambda} \\
\mathcal{C}^{-1} X_{\omega \Lambda}
\end{pmatrix}=\begin{pmatrix}
(X_{\bar{t} \gluino{a}})_{i \alpha \beta}^a \\
(\mathcal{C}^{-1}X_{t \gluino{a}})_{i \alpha \beta}^a
\end{pmatrix}=\begin{pmatrix}
-\sqrt{2}g_3T^a_{ij}\left[(P_R)_{\alpha\beta}\st{Lj}-(P_L)_{\alpha\beta}\st{Rj}\right]\\
-\sqrt{2}g_3T^a_{ji}\left[\st{Lj}^*(P_L)_{\alpha \beta}-\st{Rj}^*(P_R)_{\alpha \beta}\right]
\end{pmatrix}.
\end{align}
Finally, we give the derivatives with respect to two stops
\begin{align}
\mathbf{X}_{\phi \phi} &= \begin{pmatrix}
\mathbf{Y}_{\phi \phi} & \mathbf{0}_{2\times2} \\
\mathbf{0}_{2\times2} & (\mathbf{Y}_{\phi \phi})^*
\end{pmatrix},\\
\mathbf{Y}_{\phi \phi} &=
\begin{pmatrix}
x_t \st{Rj}^* \st{Ri}-\frac{g_3^2}{6}\st{R}^* \st{R} \delta_{ij} & x_t \delta_{ij} \st{L} \st{R}^*-\frac{g_3^2}{6}\st{Li} \st{Rj}^* \\
x_t \delta_{ij} \st{L}^* \st{R}-\frac{g_3^2}{6}\st{Ri} \st{Lj}^* & x_t \st{Lj}^* \st{Li}-\frac{g_3^2}{6}\st{L}^* \st{L} \delta_{ij}
\end{pmatrix},
\end{align}
where we have introduced the abbreviation $x_t \equiv y_t^2-g_3 ^2/2$.
Substituting these derivatives into \eqref{eq: UOLEAContrGluino} and
summing over all indices one finds
\begin{align}
c_{t_L}&=\frac{16}{3}g^2_3\left(\tilde{\mathcal{I}} [q^{2}]^{12} _{\gluino{}0}+2 m^2_{\tilde{q}} \tilde{\mathcal{I}}[q^2] ^{13} _{\gluino{}0}\right), \\
c_{t_R}&=\frac{16}{3}g^2_3\left(\tilde{\mathcal{I}} [q^{2}]^{12} _{\gluino{}0}+2 m^2_{\tilde{u}} \tilde{\mathcal{I}}[q^2] ^{13} _{\gluino{}0}\right), \\
c_{\st{L}}&=c_{\st{R}}=\frac{32}{3}g^2_3(d+2)\left(-\tilde{\mathcal{I}} [q^{4}]^{31} _{\gluino{}0}+ \frac{m^2_{\tilde{q}}}{2} \tilde{\mathcal{I}}[q^2] ^{22} _{\gluino{}0}\right),\\
c_{61}^{L^\mu L_\mu}&=c_{61} ^{R^\mu R_\mu}=\frac{7}{6}g^4_3 \tilde{\mathcal{I}} [q^2] ^{22}_{\gluino{}0}, \\
c_{62} ^{L^\mu L_\mu}&=c_{62} ^{ R^\mu R_\mu}=\frac{1}{18}g^4_3 \tilde{\mathcal{I}} [q^2] ^{22}_{\gluino{}0}, \\
c_{61} ^{(LR)^\mu (RL)_\mu}&=\frac{10}{9}g_3^4 \tilde{\mathcal{I}} [q^2] ^{22}_{\gluino{}0}, \\
c_{62}^{(LR)^\mu (RL)_\mu}&=-\frac{2}{9}g_3^4 \tilde{\mathcal{I}} [q^2] ^{22}_{\gluino{}0}, \\
c_{61} ^{LL}&=c_{61} ^{ R R}=\frac{5}{18}g^4_3 m^2_{\gluino{}}\tilde{\mathcal{I}} [q^2] ^{22}_{\gluino{}0}, \\
c_{62} ^{LL}&=c_{62} ^{ R R}=-\frac{1}{6}g^4_3 m^2_{\gluino{}}\tilde{\mathcal{I}} [q^2] ^{22}_{\gluino{}0}, \\
c_{61} ^{(LR)(RL)}&=\frac{7}{6}g_3^4 m^2_{\gluino{}}\tilde{\mathcal{I}} [q^2] ^{22}_{\gluino{}0}, \\
c_{62} ^{(LR) (RL)}&=\frac{1}{18}g_3^4 m^2_{\gluino{}}\tilde{\mathcal{I}} [q^2] ^{22}_{\gluino{}0}, \\
\delta m_{\tilde{q}} ^2 &= \delta m_{\tilde{u}} ^2 =\frac{16}{3}dg^2 _3 \tilde{\mathcal{I}}[q^2] ^{11} _{\gluino{}0}, \\
c^L _{41} &= -\frac{40}{9}m^2_{\gluino{}}g_3^4 \tilde{\mathcal{I}}[q^2] ^{22} _{\gluino{}0}-\frac{1}{9}d(d+2)g_3^4 \tilde{\mathcal{I}}[q^4] ^{22} _{\gluino{}0}, \\
c^R _{4} &= -\frac{16}{3}m^2_{\gluino{}}g_3^4 \tilde{\mathcal{I}}[q^2] ^{22} _{\gluino{}0}-\frac{22}{9}d(d+2)g_3^4 \tilde{\mathcal{I}}[q^4] ^{22} _{\gluino{}0}, \\
c^L _{42} &= \frac{8}{3}m^2_{\gluino{}}g_3^4 \tilde{\mathcal{I}}[q^2] ^{22} _{\gluino{}0}-\frac{7}{3}d(d+2)g_3^4 \tilde{\mathcal{I}}[q^4] ^{22} _{\gluino{}0}, \\
c^{LR} _{41} &= -\frac{8}{9}m^2_{\gluino{}}g_3^4 \tilde{\mathcal{I}}[q^2] ^{22} _{\gluino{}0}-\frac{20}{9}d(d+2)g_3^4 \tilde{\mathcal{I}}[q^4] ^{22} _{\gluino{}0}, \\
c^{LR} _{42} &= -\frac{56}{3}m^2_{\gluino{}}g_3^4 \tilde{\mathcal{I}}[q^2] ^{22} _{\gluino{}0}+\frac{4}{9}d(d+2)g_3^4 \tilde{\mathcal{I}}[q^4] ^{22} _{\gluino{}0}, \\
c^{L} _{61} &= \frac{1}{54}d(d+2)g_3^6 m^2_{\gluino{}} \tilde{\mathcal{I}}[q^4] ^{33} _{\gluino{}0}+\frac{2}{81}d(d^2+6d+8)g_3^6 \tilde{\mathcal{I}}[q^6] ^{33} _{\gluino{}0}, \\
c^{L} _{62} &=- \frac{2}{3}d(d+2)g_3^6 m^2_{\gluino{}} \tilde{\mathcal{I}}[q^4] ^{33} _{\gluino{}0}-\frac{2}{9}d(d^2+6d+8)g_3^6 \tilde{\mathcal{I}}[q^6] ^{33} _{\gluino{}0}, \\
c^{L} _{63} &= \frac{1}{2}d(d+2)g_3^6 m^2_{\gluino{}} \tilde{\mathcal{I}}[q^4] ^{33} _{\gluino{}0}-\frac{4}{3}d(d^2+6d+8)g_3^6 \tilde{\mathcal{I}}[q^6] ^{33} _{\gluino{}0}, \\
c^{R} _{6} &=-\frac{4}{27}d(d+2)g_3^6 m^2_{\gluino{}} \tilde{\mathcal{I}}[q^4] ^{33} _{\gluino{}0}-\frac{124}{81}d(d^2+6d+8)g_3^6 \tilde{\mathcal{I}}[q^6] ^{33} _{\gluino{}0}, \\
c^{LR} _{61} &= \frac{1}{18}d(d+2)g_3^6 m^2_{\gluino{}} \tilde{\mathcal{I}}[q^4] ^{33} _{\gluino{}0} +\frac{2}{27}d(d^2+6d+8)g_3^6 \tilde{\mathcal{I}}[q^6] ^{33} _{\gluino{}0},\\
c^{LR} _{62} &=- \frac{12}{9}d(d+2)g_3^6 m^2_{\gluino{}} \tilde{\mathcal{I}}[q^4] ^{33} _{\gluino{}0}-\frac{10}{9}d(d^2+6d+8)g_3^6 \tilde{\mathcal{I}}[q^6] ^{33} _{\gluino{}0}, \\
c^{LR} _{63} &= -\frac{1}{6}d(d+2)g_3^6 m^2_{\gluino{}} \tilde{\mathcal{I}}[q^4] ^{33} _{\gluino{}0}-\frac{14}{9}d(d^2+6d+8)g_3^6 \tilde{\mathcal{I}}[q^6] ^{33} _{\gluino{}0}, \\
c^{LR} _{64} &= \frac{2}{9}d(d^2+6d+8)g_3^6 \tilde{\mathcal{I}}[q^6] ^{33} _{\gluino{}0}, \\
c^{RL} _{61} &= -\frac{1}{9}d(d+2)g_3^6 m^2_{\gluino{}} \tilde{\mathcal{I}}[q^4] ^{33} _{\gluino{}0}-\frac{40}{27}d(d^2+6d+8)g_3^6 \tilde{\mathcal{I}}[q^6] ^{33} _{\gluino{}0},\\
c^{RL} _{62} &=- \frac{12}{9}d(d+2)g_3^6 m^2_{\gluino{}} \tilde{\mathcal{I}}[q^4] ^{33} _{\gluino{}0}+\frac{8}{9}d(d^2+6d+8)g_3^6 \tilde{\mathcal{I}}[q^6] ^{33} _{\gluino{}0}, \\
c_{51} ^{LR\text{,1\ensuremath{\ell}}}&=c_{51} ^{RL\text{,1\ensuremath{\ell}}}=-\frac{g_3^4}{3}m_{\gluino{}} \tilde{\mathcal{I}} ^{12}_{\gluino{}0}, \\
c_{52} ^{LR\text{,1\ensuremath{\ell}}}&=c_{52} ^{RL\text{,1\ensuremath{\ell}}}=-\frac{8}{3}g_3^4 x_t m_{\gluino{}} \tilde{\mathcal{I}} ^{12}_{\gluino{}0}, \\
c_G&=-\frac{g_3^2}{2}\tilde{\mathcal{I}} ^{2} _{\gluino{}}.
\end{align}
In the calculation of these corrections the relations
$g^{\mu\nu}g_{\mu\nu} = d = 4 - \epsilon$ and \eqref{eq:genrel} were used
repeatedly. The one-loop corrections $\delta m_{\tilde{q}}^2$ and
$\delta m_{\tilde{u}}^2$ to the third generation squark mass parameters have
already been calculated in \cite{Aebischer:2017aqa} and our results
agree with the expressions found there.
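As an aside, the strictly four-dimensional parts of the Dirac-algebra identities used in this matching can be cross-checked numerically with an explicit representation of the gamma matrices. The sketch below is our own illustration, not part of the calculation above: it verifies the chirality-projector algebra, the trace identity $\tr[\gamma^\mu\gamma^\nu]=4g^{\mu\nu}$, and $g^{\mu\nu}g_{\mu\nu}=4$, while the evanescent $d = 4 - \epsilon$ pieces must of course be handled analytically.

```python
import numpy as np

# Dirac representation of the gamma matrices
I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gam = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
g5 = 1j * gam[0] @ gam[1] @ gam[2] @ gam[3]
PL, PR = (np.eye(4) - g5) / 2, (np.eye(4) + g5) / 2

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # mostly-minus metric

# chirality-projector algebra used in the matching
assert np.allclose(PL @ PL, PL) and np.allclose(PR @ PR, PR)
assert np.allclose(PL @ PR, np.zeros((4, 4)))

# tr[gamma^mu gamma^nu] = 4 g^{mu nu}
for mu in range(4):
    for nu in range(4):
        assert np.isclose(np.trace(gam[mu] @ gam[nu]), 4 * eta[mu, nu])

# g^{mu nu} g_{mu nu} = d, evaluated here strictly at d = 4
assert np.isclose(np.trace(eta @ eta), 4.0)
print("all four-dimensional identities verified")
```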
Since supersymmetry is only softly broken in the MSSM it is convenient
to use DRED as a regulator. Once the gluino is
integrated out from the theory, supersymmetry is explicitly broken and
it is natural to regularize the EFT in
DREG\@. This switch in the regularization scheme introduces further
contributions to the couplings of the EFT coming from the epsilon
scalars. In the formalism of the UOLEA the relevant operators which
contribute here are given by \cite{Summ:2018oko}
\begin{align}
\begin{split}
\frac{\epsilon}{\kappa} \mathcal{L}^{1\ell}_\text{reg} =
&-\sum _{i} (m^2_{\epsilon})_{i} (\epsdim{X}^\mu _{\epsilon \epsilon \mu})_{ii}
+ \frac{1}{2} \sum_{ij} (\epsdim{X}^{\mu}_{\epsilon \epsilon \nu})_{ij} (\epsdim{X}^{\nu}_{\epsilon \epsilon \mu})_{ji} \\
&+\sum_{ij} 2^{c_{F_j}} \left\{2 m_{\psi j} (\epsdim{X}^\mu_{\epsilon \psi})_{ij} (\epsdim{X} _{\bar{\psi} \epsilon \mu})_{ji} + (\epsdim{X}^\mu_{\epsilon \psi})_{ij} \gamma^\nu \left[P_\nu,(\epsdim{X}_{\bar{\psi} \epsilon \mu})_{ji}\right]\right\} \\
&-\sum_{i j k} 2^{c_{F_j}+c_{F_k}-1} (\epsdim{X}^\mu_{\epsilon \psi})_{ij} \gamma ^\nu (X_{\bar{\psi} \psi})_{jk} \gamma_{\nu} (\epsdim{X}_{\bar{\psi} \epsilon \mu})_{ki} \\
& + \frac{\epsilon}{12} \tr\left[ G'_{\mu \nu} G'^{\mu \nu} \right].
\end{split}
\label{eq:epsilon-scalar contributions}
\end{align}
The $\epsdim{X}$ operators are projections of the corresponding
$4$-dimensional ones $\fourdim{X}$ onto the $\epsilon$-dimensional
$Q\epsilon S$ space, i.e.
\begin{align}
\epsdim{X}^\mu &= \epsdim{g}^\mu_\sigma \fourdim{X}^\sigma, \\
\epsdim{X}^{\mu\nu} &= \epsdim{g}^\mu_\sigma \epsdim{g}^\nu_\rho \fourdim{X}^{\sigma\rho},
\end{align}
see \appref{sec:DREG_DRED}. Furthermore,
$G'_{\mu\nu} = -ig_3 G^a_{\mu\nu} T^a$ is the gluon field strength
tensor. For the top quark (a Dirac fermion) we have $c_F = 0$, and
for the gluino (a Majorana fermion) $c_F = 1$. From
\eqref{eq:epsilon-scalar contributions} we obtain the following
additional contributions to the couplings of the EFT
\begin{align}
(\delta m^2 _{\tilde{q}})_\epsilon &=(\delta m ^2 _{\tilde{u}})_\epsilon=-\frac{4}{3} g_3^2 m_\epsilon ^2, \label{eq:delta_m2_eps} \\
(c_{t_L})_\epsilon &= (c_{t_R})_\epsilon=\frac{4}{3}g_3^2, \\
(c^L_{41})_\epsilon &= \frac{1}{72}g_3^4, \\
(c^L_{42})_\epsilon&=\frac{7}{24}g_3^4, \\
(c^R_{4})_\epsilon &= \frac{11}{36}g_3^4, \\
(c^{LR}_{41})_\epsilon &= \frac{1}{36}g_3^4, \\
(c^{LR}_{42})_\epsilon &= \frac{7}{12}g_3^4, \\
(c^{LL}_{51})_\epsilon &= (c^{LL}_{52})_\epsilon = (c^{RR}_{51})_\epsilon=(c^{RR}_{52})_\epsilon = \frac{3g_3^4 }{2 m_{\gluino{}}}d, \\
(c^{LR}_{51})_\epsilon &= (c^{RL}_{52})_\epsilon = -\frac{3g_3^4 }{m_{\gluino{}}}d, \\
(c_G)_\epsilon &= -\frac{g_3^2}{4}.
\end{align}
The term $\propto m_\epsilon^2$ on the r.h.s.\ of
\eqref{eq:delta_m2_eps} can be removed by switching from the \ensuremath{\overline{\text{DR}}}\xspace\ to
the \ensuremath{\overline{\text{DR}}'}\xspace\ scheme \cite{Jack:1994rk}, which involves shifting
$m^2_{\tilde{q}}$ and $m ^2_{\tilde{u}}$ by finite terms.
Notice also that the one-loop DRED--DREG conversion corrections to the
coefficients of the dimension 5 operators arise from the third line of
\eqref{eq:epsilon-scalar contributions}, which contains, among other
terms,
\begin{align}
(\epsdim{X}^\mu_{\epsilon t})\gamma ^\nu (X_{\bar{t} \gluino{}}) \gamma_{\nu} (\epsdim{X}_{\bar{\gluino{}} \epsilon \mu}).
\end{align}
Here $(\epsdim{X}_{\bar{\gluino{}} \epsilon \mu})$ has an explicit
dependence on the gluino spinor $\gluino{}$,
\begin{align}
(\epsdim{X}_{\bar{\gluino{}} \epsilon \mu})^{ba}=\frac{ig_3}{2}\epsdim{\gamma}^\mu f^{abc}\gluino{c},
\end{align}
which must be eliminated by inserting the background field from
\eqref{eq: ClassicalGluinoField}. As noted above, the threshold
corrections for the two stop masses agree with the results derived in
\cite{Aebischer:2017aqa} when the effect of the sbottoms is
neglected.
\section{Conclusions}\label{sec:conclusions}
In this paper we have presented an extension of the Universal One-Loop
Effective Action (UOLEA) by all one-loop operators up to dimension 6
for generic theories with scalar and fermionic fields, excluding
operators stemming from open covariant derivatives in the UV
Lagrangian. Our generic results can be used to derive the analytic
expressions of all one-loop Wilson coefficients up to dimension 6 of
an effective Lagrangian from a given UV theory with heavy scalar or
fermionic particles, as long as second derivatives of the UV
Lagrangian w.r.t.\ the fields do not contain covariant derivatives.
Thus, our new results allow for an application of the UOLEA to a
broader class of UV models than before.
To illustrate and test our generic results we have applied the UOLEA
to different EFTs of the SM and the MSSM, where parts of the spectrum are heavy.
We were able to reproduce known results from the
literature, including the prediction of some one-loop Wilson
coefficients of higher-dimensional operators of the SMEFT.
We have published our results in the form of two ancillary Mathematica
files, \texttt{UOLEA.m} and \texttt{LoopFunctions.m}, which allow for
direct use of our expressions and a potential implementation into
generic tools such as \texttt{CoDEx}\@\xspace or spectrum generator generators such as
\texttt{SARAH}\@\xspace\ and \texttt{FlexibleSUSY}\@\xspace.
\acknowledgments{%
We kindly thank Jérémie Quevillon for helpful discussions regarding
the UOLEA. BS would like to thank the Institute for Theoretical Physics in Heidelberg, where part of this work was completed, for its hospitality. This research was supported by the German DFG
Collaborative Research Centre \textit{P\textsuperscript{3}\!H: Particle Physics Phenomenology after the Higgs Discovery}
(CRC TRR 257).
}
\section{\label{sec:Introduction}Introduction}
Angle-resolved photoemission spectroscopy (ARPES) is one of the prime experimental techniques for determining the electronic band structure in crystalline solids. Apart from being able to capture the bare energy-momentum dispersion of electronic states in a material, ARPES directly probes the single-particle spectral function which contains information regarding band dispersion and many-body interactions\cite{damascelli2003angle}. For this reason, ARPES has become an indispensable tool for understanding emergent phenomena in quantum materials (QM) where quasiparticles or collective excitations are core ingredients and naturally invoke a description beyond the simple non-interacting single-particle picture \cite{hengsberger1999electron,lashell1996spin}.
Over the past decades, there have been great improvements in synchrotron-based light sources which, together with new and more versatile electron analyzers, have pushed the energy and momentum-space resolutions in ARPES experiments to unprecedented levels \cite{HighResARPES_Review_Iwasawa2020,kutnyakhov2020time}. In parallel to this development, ARPES has taken a leap into the time domain due to the advent of femtosecond high-power lasers that have enabled ultrafast extreme-ultraviolet (XUV) light sources based on high-harmonic generation (HHG) in noble gases\cite{lee2020high,cucini2020coherent,mills2019cavity,sie2019time,corder2018ultrafast} or non-linear crystals\cite{parham2017ultrafast,kuroda2017ultrafast,gauthier2020tuning,peli2020time}. While several successful implementations of time- and angle-resolved photoemission spectroscopy (tr-ARPES) systems have been demonstrated, the technique continues to rapidly evolve and there is still much progress to be made in terms of increased repetition rate, photon flux, improved time and energy resolutions, photon energy and momentum range coverage, as well as pump versatility. To date, most HHG-based tr-ARPES systems\cite{petersen2011clocking,eich2014time,rohde2016time,buss2019setup,puppin2019time} have focused on reaching a high time resolution with limited repetition rates, since this requires relatively modest average laser powers and therefore is more accessible. Some state-of-the-art examples of tr-ARPES setups are shown in Fig. \ref{fig:survey}.
Ideally, for conducting tr-ARPES experiments one would like an ultrafast, high-repetition-rate, laser-based light source that simultaneously provides high temporal resolution through short pulses and high energy resolution through a narrow bandwidth, as well as sufficiently energetic photons to cover a large region of momentum space. A high repetition rate of the probe, on one hand, mitigates space charge induced broadening\cite{SolidStateDynamics_book,zhou2005space,passlack2006space} of the ARPES spectra by reducing the number of photoelectrons generated per pulse while maintaining high average count rates. On the other hand, an increased repetition rate reduces the maximum achievable pulse energy, which affects the laser's ability to efficiently drive the HHG process. Furthermore, shorter pulses provide higher peak power for the same pulse energy and lead to both improved time resolution and HHG efficiency, but at the expense of reduced energy resolution. In addition, one would ideally also prefer a tunable pump, covering a large wavelength range, so that the photon energy can be selected to efficiently excite the process of interest.
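The space-charge trade-off can be made concrete with a back-of-the-envelope estimate: at a fixed average photon flux on the sample, the number of photons (and hence photoelectrons) generated per pulse scales inversely with the repetition rate. A minimal sketch, using the flux values quoted in Tab.~\ref{tab:parameters} and the 250~kHz repetition rate of this setup:

```python
def photons_per_pulse(flux_per_s, rep_rate_hz):
    # average number of photons delivered per probe pulse
    return flux_per_s / rep_rate_hz

# photon fluxes on sample (photons/s) from Tab. 1, at 250 kHz
fluxes = {10.8: 2e11, 18.1: 8e9, 25.3: 7e8, 32.5: 7e7}
for hv, flux in fluxes.items():
    print(f"{hv} eV: {photons_per_pulse(flux, 250e3):.1e} photons/pulse")
```

Raising the repetition rate at fixed average flux therefore directly reduces the per-pulse electron cloud responsible for space-charge broadening.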
Clearly, there is an inherent trade-off between the achievable time and energy resolution in tr-ARPES, intimately linked to the fact that short pulses in the time-domain have a broad spectral content. Designing a light source for tr-ARPES, therefore, requires careful considerations in order to reach the combination of repetition rate, available photon energies, and time and energy resolutions that match the time and energy scales of the electron dynamics that one ultimately wants to study.
Here, we present a laser-based XUV source that is integrated into the BALTAZAR facility\cite{berntsen2011experimental} for tr-ARPES. The light source has been designed to achieve high energy resolution at high repetition rates in the ARPES setup in order to resolve detailed electronic structure features in quantum materials such as superconducting gaps in superconductors. The key parameters of the light source and photoemission setup are listed in Tab. \ref{tab:parameters}. The light source uses a tabletop laser (Amplitude, Tangor 100), delivering 461~fs long pulses at a wavelength of 1030~nm, which are then frequency tripled, through second harmonic and sum frequency generation in non-linear crystals, to 343~nm. The 343~nm light pulses are in turn used to produce vacuum-ultraviolet (VUV) photons with energies 10.8~eV, 18.1~eV, 25.3~eV and 32.5~eV at a repetition rate of 250~kHz through high harmonic generation in argon gas. Static ARPES measurements of the Fermi edge in polycrystalline Au using 10.8~eV, 18.1~eV and 25.3 eV photon energies yield a total experimental energy resolution of 9~meV, 14~meV and 18~meV, respectively. This energy resolution is the total system resolution, including contributions from the light source, analyzer, space charge and stray fields. As such, it puts an upper bound on the spectral width of the harmonics. Time-resolved ARPES measurements on graphene using probe energies of 18.1~eV and 25.3~eV exhibit a time resolution of 204~fs and 165~fs, respectively. The photon energies available through the HHG process give a momentum coverage that extends beyond the first Brillouin zone in most QM and the high detection efficiency of the angle resolved time-of-flight (ARTOF) analyzer allows for measuring the electron dynamics over a two-dimensional region in momentum space in parallel, without the need for rotating the sample or deflecting the emitted photoelectrons. 
The 250~kHz repetition rate permits space charge broadening to be kept below a detectable level, while maintaining high average count rates on the detector. The HHG drive laser optically seeds a second 280~fs 1030~nm laser (Amplitude, Tangerine HP2) that drives a two-stage optical parametric amplifier (OPA; Fastlite, TwinStarzz), which in turn provides pump pulses in the $0.65-9$~$\mu m$ wavelength range. Together, the wide range of available pump and probe wavelengths permits tailoring of the pump-probe combination to the particular system under study.
\begin{figure}[htp]
\includegraphics[scale=0.1]{fig1_survey.jpg}
\caption{\label{fig:survey}State-of-the-art energy resolutions of current time-resolved ARPES setups plotted as a function of the photon energy in semi-logarithmic scale. The different techniques for generating the probe source are indicated by the shape of the markers.}
\end{figure}
\begin{table}
\caption{\label{tab:parameters} Key parameters for the tr-ARPES setup.}
\begin{ruledtabular}
\begin{tabular}{lcccr}
\qquad \qquad \qquad \qquad \quad System performance\\
\hline
Photon energy (eV) &10.8 &18.1&25.3&32.5\\
Energy resolution (meV)\footnotemark &9 & 14 & 18 & 111\\
Time resolution (fs) & - & 204 & 165& - \\
XUV pulse duration (fs) & - & 178 & 131 & - \\
Time-bandwidth product (meV$\cdot$fs) & - & 2492 & 2358 & - \\
Photon flux on sample (photons/s)\footnotemark & $2\times10^{11}$ & $8\times10^{9}$ & $7\times10^{8}$ & $7\times10^{7}$\\
XUV spot size ($\mu$m) & \multicolumn{4}{c}{96 (H) $\times$ 85 (V)}\\
Repetition rate (kHz)\footnotemark & \multicolumn{4}{c}{250}\\
\end{tabular}
\footnotetext[1]{Total system energy resolution deduced from ARPES measurements.}
\footnotetext[2]{Determined from the drain current measured on a tantalum foil with a picoammeter (Keithley, 6485). Yield efficiencies of 0.08, 0.20, 0.10 and 0.08 for the $3^{\mathrm{rd}}$, $5^{\mathrm{th}}$, $7^{\mathrm{th}}$ and $9^{\mathrm{th}}$ harmonics, respectively, are estimated from Refs. \onlinecite{feuerbacher1972experimental,diaz2019experimental}.}
\footnotetext[3]{For time-resolved measurements. Repetition rate for static ARPES is practically tunable from 250~kHz to 1~MHz.}
\end{ruledtabular}
\end{table}
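The time-bandwidth products in Tab.~\ref{tab:parameters} can be compared against the Fourier limit. Assuming Gaussian pulse shapes (an assumption on our part), the limit is $\Delta E\,\Delta t \geq 0.441\,h \approx 1824$~meV\,fs; since the quoted energy resolution is the total system resolution, the products below only upper-bound the source's true time-bandwidth product. A short sketch of this comparison:

```python
H_MEV_FS = 4135.667              # Planck constant h in meV*fs
GAUSS_LIMIT = 0.441 * H_MEV_FS   # Fourier limit dE*dt for Gaussian pulses

# (total energy resolution [meV], XUV pulse duration [fs]) from Tab. 1
pulses = {"5th (18.1 eV)": (14, 178), "7th (25.3 eV)": (18, 131)}
for name, (dE, dt) in pulses.items():
    tbp = dE * dt
    print(f"{name}: {tbp} meV*fs ({tbp / GAUSS_LIMIT:.2f}x Gaussian limit)")
```

Both harmonics come within roughly 30--40\% of the Gaussian Fourier limit, consistent with the quoted values of 2492 and 2358~meV\,fs.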
\section{System overview}
Figure \ref{fig:Setup}a) presents a schematic overview of the tr-ARPES setup. In this section, following the beam path, we describe the principal segments of the system.
\begin{figure*}[htp]
\includegraphics[scale=0.8]{fig2_setup.png}
\caption{\label{fig:Setup}Schematic overview of the HHG-based time-resolved ARPES setup. a) The layout of the setup, showing the pump and probe lines. Abbreviations: EP - Expander, AT - Attenuator, RF - Reflective axicons, BS - Beam stabilization, WP - Waveplate, PM - Parabolic mirror, MW - Mirror wheel, MI - Motorized iris, FW - Filter wheel, FC - Flux check. b) Drawings of the mirror- and filter-wheels that are used for wavelength selection. c) Temporal profile of the $\sim$461-fs-long ultraviolet (UV) pulse used for driving the HHG probe-line. d) OPA performance, depicting the average pulse energy as a function of the output wavelength in semi-logarithmic scale.}
\end{figure*}
\subsection{The probe-line}
A high-power femtosecond laser (Amplitude, Tangor 100) is used as the source, providing infrared (IR) pulses centered at 1030~nm, with an adjustable repetition rate from single shot up to 40~MHz. The maximum average output power exceeds 100~W. At 250~kHz repetition rate, the energy per pulse is 300~$\mu J$. We stress that the $\sim$461~fs pulse length adopted here, shown in Fig.\ref{fig:Setup}c), is the key to achieving high energy resolution. Following the laser amplifier, a third harmonic generation (THG) module uses nonlinear crystals to convert the IR into 343~nm (3.6~eV) light. The efficiency of this process is $\sim 30 \%$, corresponding to a maximum output power of $\sim$30~W and a pulse energy of 88~$\mu J$. For a practical pump-probe scheme, the repetition rate has to be a compromise. Considering measurement statistics and resolution, one favors a higher repetition rate, since space charge can be mitigated while count rates are kept high. However, the excited-state relaxation time and the thermal diffusion of pump energy in the sample after excitation set an upper bound on the repetition rate, as do the available laser power and resulting photon flux. The effects of thermal load and photoyield are also sample dependent, which makes optimizing these parameters a non-trivial problem. In the present case, 250~kHz was chosen as a reasonable trade-off for a large range of samples and pump conditions. The choice of OPA repetition rate was made at the design stage and the working point of 250~kHz has therefore not been subject to experimental optimization.
A critical step in the present setup is beam shaping before the high harmonic generation. The aim is to transform the intensity distribution of the beam from a Gaussian profile to an annular shape, in which the intensity is near zero in the central region. This approach permits the drive beam to be separated from the generated harmonics along the beam propagation direction, and forms the basis for the use of refocusing mirrors and filters to select the photon energy without having to handle the full power of the drive beam. Specifically, the 3.6~eV (343~nm) Gaussian drive beam, with a diameter of $\sim$3~mm (1/$e^2$), is first expanded to a diameter of $\sim$6~mm using a convex and a concave dielectric mirror. This pre-expansion is done in order to reduce the power load on the downstream optical elements, thus reducing thermal wavefront distortion and damage risk; the former is a particular problem for transmission optics such as wave-plates and windows. A power attenuator (EKSMA) placed after the beam expander is used to modulate the pulse energy externally, without altering the running conditions of the drive laser, which could otherwise change the pulse characteristics and beam pointing. Following the attenuator, a pair of convex and concave reflective axicons (Mflens Natsume) transforms the Gaussian beam into an annular beam. The inner and outer diameters of the annular beam are 3~mm and 9~mm, respectively. The beam profile after the reflective axicons is shown in the inset of Fig.\ref{fig:Setup}a). Note that there is remaining intensity in the beam center; this intensity is dumped by reflecting the beam off a mirror with a center hole at a 45$^\circ$ angle prior to passing the beam into the HHG chamber.
A delay stage (Newport, ESP301) carrying two plane dielectric mirrors is used to generate the optical delay between the probe and the pump. The delay stage is placed in the probe line, since the output of the OPA covers a wide wavelength range from the visible to the mid-infrared, with correspondingly different beam paths, which makes it more challenging to implement the delay stage in the pump path. In order to maintain HHG, probe-, and pump-beam stability, both beam paths are actively stabilized using piezo mirrors and beam sensors (Thorlabs, PDA90A).
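For reference, the mapping from stage travel to pump-probe delay in a double-pass geometry of this kind (the beam traverses the added path twice) is $\Delta t = 2\,\Delta x / c$; a minimal sketch:

```python
C_MM_PER_FS = 2.99792458e-4  # speed of light in mm/fs

def pump_probe_delay_fs(stage_move_mm):
    # double pass: the retro-reflected beam travels the added path twice
    return 2.0 * stage_move_mm / C_MM_PER_FS

# 15 um of stage travel corresponds to roughly 100 fs of delay
print(f"{pump_probe_delay_fs(0.015):.1f} fs")
```

At this conversion, micrometer-level stage repeatability comfortably resolves the $\sim$200~fs instrument response quoted in Tab.~\ref{tab:parameters}.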
\subsection{High harmonic generation}
HHG is by now a well-established technique for upconverting visible or infrared laser light into the vacuum-ultraviolet (VUV) and X-ray regions\cite{LaserHHG_KepteynMurnanePRL1996, HHG_CoherentSoftXrays_Murnane_PRL1997, HHG_CoherentXrays_WaterWindow_Krausz_Science1997}. The physical process behind HHG is well understood and well described theoretically\cite{gorlach2020quantum}. For an HHG source to be used as a photon source for ARPES, it should ideally be bright, have a high repetition rate ($>100$~kHz), have a narrow line width, and be tunable. Since HHG requires drive laser intensities on the order of $I_L \approx 10^{14}~\mathrm{W/cm^2}$, the drive laser pulse length is usually kept short in order to limit the necessary pulse energy; at high repetition rates, this also limits the necessary average power. In the present case, the goal has been to achieve a high-repetition-rate photon source with a line width on the order of 10~meV that can serve as a narrow-bandwidth source without the need for monochromatization. In order to achieve this, several trade-offs had to be made. The drive pulse length has to be substantially longer than commonly applied, and the pulse energy has to be kept low in order for the average power to remain reasonable. Long pulses of low pulse energy need a very tight focus to achieve the necessary intensity given above, and as a result the gas target pressure needs to be high\cite{rothhardt2014absorption}. Even under these conditions, the efficiency of the HHG process drops dramatically for longer pulses, being an order of magnitude lower for 450~fs pulses as compared to 45~fs pulses\cite{shiner2009wavelength,hadrich2016single}. To address this issue, a cascaded approach is used in which the drive laser is upconverted to the third harmonic, and this harmonic is then used to drive the HHG process.
The cascaded approach significantly improves the HHG efficiency, since the efficiency scales as $\lambda^{-(5-6)}$\cite{tate2007scaling,comby2019cascaded}. The use of a shorter drive wavelength, however, reduces the range of available harmonics, as seen from the single-atom high-energy cut-off relation\cite{HHG_UP_theoryCalc_Krause_PRL1992} $h\nu_\text{cutoff} = I_p + 3.17\,U_p$, where $h$ is Planck's constant, $I_p$ the ionization potential and $U_p$ the quiver energy of the electron. The cut-off energy scales as $\lambda^2$, since the quiver energy of the electron $U_p \propto I_L \lambda_L^2$, with $I_L$ being the drive laser intensity and $\lambda_L$ the wavelength, leading to a dramatically reduced cut-off energy for shorter wavelengths\cite{HHG_Ne_Ar_LHullier_PRA1993}.
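The cut-off relation can be evaluated directly. In the sketch below, the $U_p$ prefactor is the standard $9.33\times10^{-14}$~eV\,cm$^2$\,W$^{-1}$\,$\mu$m$^{-2}$, the argon ionization potential is 15.76~eV, and the drive intensity of $10^{14}$~W/cm$^2$ is an assumed round number for illustration, not a measured value for this setup:

```python
IP_AR = 15.76  # argon ionization potential (eV)

def ponderomotive_ev(intensity_w_cm2, wavelength_um):
    # standard formula: U_p [eV] = 9.33e-14 * I [W/cm^2] * lambda^2 [um^2]
    return 9.33e-14 * intensity_w_cm2 * wavelength_um**2

def cutoff_ev(ip_ev, intensity_w_cm2, wavelength_um):
    # single-atom cutoff: h*nu_cutoff = I_p + 3.17 * U_p
    return ip_ev + 3.17 * ponderomotive_ev(intensity_w_cm2, wavelength_um)

I0 = 1e14  # assumed drive intensity (W/cm^2)
for lam in (0.343, 1.030):  # THG drive vs fundamental wavelength (um)
    print(f"{lam} um: cutoff {cutoff_ev(IP_AR, I0, lam):.1f} eV")
```

The $\lambda^2$ scaling is evident: at the same intensity, the 343~nm drive yields a much lower single-atom cutoff than the 1030~nm fundamental, which is the price paid for the efficiency gain of the cascaded scheme.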
In order to be able to spatially separate most of the drive beam from the generated HHG beam, the intensity profile of the drive laser beam is transformed into an annular shape using reflective axicons as described above. The annular beam is focused into an argon gas jet\cite{HHG_GasJetOptimisation_KatsumiPRA2000} using an off-axis parabolic mirror (Thorlabs, MPD169) resulting in a Gaussian annular beam focus of $\sim$6.8 $\mu$m diameter (1/$e^2$) and $\sim$164 $\mu$m Rayleigh length. The theoretical radial intensity profile at the focus as well as the on-axis intensity profile across the beam focus are shown in Fig.\ref{fig:HHG}, demonstrating that the annular intensity profile will not significantly change the quality of the focus compared to a Gaussian beam.
In order to achieve phase matching at the focus while minimizing re-absorption, a very high-density gas target is required, with an extent on the order of the Rayleigh length. In the present setup, the target is a high-density gas jet provided by a 150 $\mu$m diameter de Laval nozzle with a 50 $\mu$m throat diameter. The experimentally determined phase-matching pressure is achieved with a 4.5 bar injection pressure. In order to maintain the best possible background pressure, the injection nozzle faces a counter nozzle with a 2 mm diameter opening situated approximately 200 $\mu$m in front of the injection nozzle. The counter nozzle is pumped by a high-capacity scroll pump (Edwards, XDS35i) and the vacuum chamber itself is pumped by a 500 l/s turbo pump (Pfeiffer, HiPace 700). The resulting vacuum chamber background pressure is $\sim 7\times10^{-7}$ mbar, rising to $\sim 6\times10^{-4}$ mbar during HHG operation.
\begin{figure}[htp]
\includegraphics[scale=0.75]{fig3_ann_cal.png}
\caption{\label{fig:HHG}Numerical simulation of the focusing annular beam. a) The geometry of the annular beam with the dimensions. b) The radial and c) on-axis intensities across the focus for a 9~mm (1/$e^2$) diameter Gaussian beam and an annular beam with 3~mm inner diameter. The focal length is set to 100~mm.}
\end{figure}
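A comparison of the kind shown in Fig.~\ref{fig:HHG} can be reproduced with a simple scalar-diffraction (Hankel-transform) calculation of the focal-plane intensity. The sketch below uses the pupil parameters from the text (9~mm outer and 3~mm inner 1/$e^2$ diameter, $f = 100$~mm), but the scalar Fraunhofer treatment and the grid choices are our own simplifications, not necessarily the model used for the figure:

```python
import numpy as np
from scipy.special import j0

lam = 343e-9   # drive wavelength (m)
f = 100e-3     # focal length, as in Fig. 3 (m)
w = 4.5e-3     # 1/e^2 intensity radius of the 9 mm Gaussian envelope (m)
r_in = 1.5e-3  # inner radius of the 3 mm central hole (m)

rho = np.linspace(0.0, 3.0 * w, 4000)        # pupil radial coordinate
drho = rho[1] - rho[0]
E_gauss = np.exp(-(rho / w) ** 2)            # full Gaussian pupil field
E_ann = np.where(rho >= r_in, E_gauss, 0.0)  # annular: central disc blocked

def focal_intensity(E_pupil, r):
    # scalar Fraunhofer (Hankel-transform) field at the focal plane
    k = 2.0 * np.pi / lam
    field = np.sum(E_pupil * j0(k * r * rho / f) * rho) * drho
    return abs(field) ** 2

r_axis = np.linspace(0.0, 15e-6, 600)
I_g = np.array([focal_intensity(E_gauss, r) for r in r_axis]); I_g /= I_g.max()
I_a = np.array([focal_intensity(E_ann, r) for r in r_axis]); I_a /= I_a.max()

def diameter_1e2(I):
    # twice the first radius at which the intensity drops below 1/e^2
    return 2.0 * r_axis[np.argmax(I < np.exp(-2))]

print(f"Gaussian focus: {diameter_1e2(I_g) * 1e6:.2f} um (1/e^2)")
print(f"Annular focus:  {diameter_1e2(I_a) * 1e6:.2f} um (1/e^2)")
```

Within this scalar model, the annular beam has an on-axis peak and a central lobe no wider than that of the full Gaussian, consistent with the conclusion drawn from Fig.~\ref{fig:HHG} that the annular profile does not significantly degrade the focus quality.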
\subsection{Monochromator}
Gratings are commonly adopted in ARPES setups to select the desired wavelength. In the present setup, where the available wavelengths are well separated by the HHG process, we instead perform the energy selection with band-pass mirrors and thin-film filters. This configuration brings advantages in at least three respects. First, we can use the photon flux of the direct reflection, instead of the limited flux in the first diffraction order of a grating. Second, temporal stretching of the pulse by diffraction is avoided, which is important for a time-resolved setup. Third, the XUV imaging properties benefit, as no angular dispersion is introduced.
For compactness and ease of alignment, a near-normal-incidence geometry with multiple mirrors mounted on a revolving wheel was chosen, cf.\ Fig.\ref{fig:Setup}b). The rotatable wheel is placed after the HHG interaction region and has a set of mirrors optimized for the different wavelengths. All mirrors are spherical in order to refocus the diverging HHG beam onto the sample position. The wheel is on a high-precision translation stage (Smaract), which enables nanometer-scale fine tuning of its position. One of the mirrors is SiC coated, providing high reflectivity for the 5th harmonic (18.1 eV). Two multi-layer coated mirrors are used for the 7th (25.3 eV) and 9th (32.5 eV) harmonics, and one MgF$_2$/Al mirror for the 3rd harmonic (10.8 eV). The mirrors for the 7th and 9th harmonics are coated by Ultrafast Innovations and provide peak reflectivities of $41\%$ and $27\%$, respectively. The angle of incidence on the refocusing mirrors is $\approx$1$^\circ$ and the radius of curvature is 1000~mm.
A set of thin-film filters (Lebow) is used to clean the spectrum from residual intensity of harmonics other than the one selected, as well as from the drive beam. This assembly is illustrated in Fig.\ref{fig:Setup}b). Matching the mirror selection, the filter wheel contains the following options: LiF (10.8 eV), Sn (18.1 eV), Si (25.3 eV) and Ti (32.5 eV) thin films for selecting the four different harmonics, as well as an Al filter (which cuts harmonics below 15 eV as well as the drive beam). The thickness of the filters, with the exception of LiF, is less than 200 nm, which improves transmission but makes them sensitive to high power loads. A motorized iris (Standa) is placed before the filters to further reduce the power load on the filters. Additionally, it can be used to regulate the photon flux without changing the power of the drive laser, thus keeping the HHG conditions fixed, which allows space charge effects to be directly monitored.
The selected harmonic is guided into the ARPES analysis chamber by a steering mirror. This mirror is mounted at a 19$^\circ$ grazing incidence angle and is coated with gold to provide high reflectivity in the XUV, as well as the MIR, wavelength range. The drain current from the steering mirror can be monitored during experiments and used as a photon-flux reference. The steering mirror is used to steer the beam onto the sample, as well as to align the beam spot on the sample with the focus of the electron energy analyzer.
\subsection{The pump-line}
An additional femtosecond laser (Amplitude, Tangerine HP2) is used as the pump light source. It is an Ytterbium-Doped Fiber Amplifier (YDFA) laser and delivers the same central wavelength of 1030 nm as the Tangor laser used for the probe line. The two lasers are optically synchronised by sharing a common oscillator. The pulse picker of the Tangerine is triggered by a pulse generator (Quantum Composers, 9200) to ensure that the Tangerine (pump) always picks the same pulse from the oscillator pulse train as the one picked by the Tangor (probe). The Tangerine output wavelength is 1030~nm (1.2 eV), with a pulse length of $\sim$280 fs and a maximum output power that exceeds 50 W. The Tangerine drives the OPA, which in turn delivers three tunable output modes: signal (0.65~$\mu$m - 0.95~$\mu$m), idler (1.15~$\mu$m - 2.4~$\mu$m) and difference frequency generation (DFG; 2.5~$\mu$m - 9~$\mu$m). The performance of the OPA is presented in Fig.~\ref{fig:Setup}d), where the adjustable wavelength range and the pulse energy for each mode are given. As noted above, the choice of a 250~kHz repetition rate for the OPA was made in the design phase and is practically non-tunable. Note that due to the strong absorption of mid-infrared light in air, the beam path of the pump line is fully enclosed and purged with nitrogen gas. The pump beam is coupled into the probe beam path by a 45$^{\circ}$ mirror with a center hole through which the probe beam can pass. This makes the pump and probe beams propagate close to collinearly through the last section of the beam path leading to the sample, and allows for simultaneous adjustment of the position of both beams on the sample using the aforementioned steering mirror. When focusing the pump onto the sample in the analysis chamber of the ARPES setup, the spot size is $\sim$0.8 mm for the signal and $\approx$2~mm for the idler.
The DFG beam size has not been characterized due to the difficulty of observing wavelengths in the IR range, but is expected to be 3-4~mm in the present focusing geometry. The pump-pulse duration is compressed to below 100~fs.
\subsection{Photoemission setup}
The photoemission setup consists of the analysis chamber, a preparation chamber, where sample preparation and characterisation can be done, and a load lock for fast sample entry. The details of the entire chamber layout and its functionalities can be found in Ref.~\cite{berntsen2011experimental}. Briefly, the functions of the preparation chamber include sputtering and annealing options for sample cleaning, low-energy electron diffraction (LEED) for determining the structure and quality of the sample surface, and thin-film deposition by electron-beam evaporation with up to three source cells.
The analysis chamber maintains a base pressure of $<1\times10^{-10}$~mbar. The analysis chamber is connected to the photon beamline via a differential ion pump (XIA, DP-03). This provides a windowless line-of-sight transition from high vacuum (HV) in the photon beamline to ultra-high vacuum (UHV) in the ARPES analysis chamber. The windowless solution has the advantage that it provides rapid switching between photon energies. A solution that uses a series of window valves, where different thin filters act as the vacuum barrier between the chambers, was initially considered but was deemed less flexible and robust. The current solution would, for example, permit future filter-less operation if other means for harmonic selection are developed, or if additional harmonics are added. A motorized 4-axis manipulator (SPECS) is mounted on the analysis chamber and equipped with a closed-cycle cryostat (ARS, 4K). The lowest sample temperature that can be reached is $\sim$8 K.
The ARTOF analyzer (SPECS, Themis 1000) is a line-of-sight analyzer consisting of an electrostatic lens system and a Delay-Line Detector (DLD, Surface Concept, DLD4040). The lens system provides several imaging modes, such as direct imaging or angle-resolving modes for ARPES measurements. The DLD is synchronized to the photon pulse by a fast photodiode. The flight time and position data of the photoemitted electrons acquired by the DLD are converted into a three-dimensional data set of $k_x$, $k_y$ and $E_k$, where $k_x,k_y$ are crystal momentum coordinates and $E_k$ is the kinetic energy of the photoelectron. This parallel, three-dimensional data collection is the major advantage of the ARTOF compared to a hemispherical analyzer and allows for efficient acquisition of ARPES spectra over an extended area in momentum space without the need for rotating the sample or deflecting the photoelectrons.
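To make the time-of-flight conversion concrete, the following minimal Python sketch illustrates the zeroth-order mapping from flight time and emission angle to kinetic energy and in-plane momentum. It assumes a field-free drift of illustrative length; the actual ARTOF transformation relies on a calibrated electron-optical model of the lens system.

```python
import numpy as np

M_E = 9.109e-31   # electron mass (kg)
EV = 1.602e-19    # J per eV

def tof_to_kinetic_energy(t, drift_length=1.0):
    """Kinetic energy (eV) from flight time t (s) over a field-free
    drift of length drift_length (m); zeroth-order approximation of
    the calibrated ARTOF transformation."""
    v = drift_length / np.asarray(t)
    return 0.5 * M_E * v**2 / EV

def angles_to_k(e_kin, theta_x, theta_y):
    """In-plane momenta (1/Angstrom) from kinetic energy (eV) and
    emission angles (rad): k_par = 0.5123*sqrt(E_k)*sin(theta)."""
    k0 = 0.5123 * np.sqrt(e_kin)
    return k0 * np.sin(theta_x), k0 * np.sin(theta_y)

# Example: flight time of a 20 eV electron over a 1 m drift (~377 ns)
t = 1.0 / np.sqrt(2 * 20 * EV / M_E)
print(round(float(tof_to_kinetic_energy(t)), 3))  # -> 20.0
```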
\section{\label{sec:Performance}Performance\protect }
In this section, we show results from incorporating the HHG light source into the ARPES setup. We characterize two important parameters of the combined system, namely the XUV spot size at the sample position in the photoemission analysis chamber and the achievable total energy resolution in ARPES measurements using the different harmonics of the light source. We then present a few examples of ARPES measurements on quantum materials, thus directly demonstrating the practical capabilities of the source.
\subsection{XUV spot size}
\begin{figure}[htp]
\includegraphics[scale=0.45]{fig4_spotsize.png}
\caption{\label{fig:SpotSize}Spot size measurement for the 7th harmonic (25.3 eV). a) Camera image of the probe beam on a YAG crystal which is placed at the sample analysis position in the ARPES chamber. The spatial scale of the image in the horizontal and vertical directions are calibrated using the motorized sample-manipulator displacement. b) Gaussian fits of the vertical and horizontal profiles of the spot, yielding a spot size of approximately 96~$\mu$m and 85~$\mu$m in the horizontal and vertical directions, respectively.}
\end{figure}
The spot size of the 7th harmonic (25.3 eV) was determined to be $\sim$96~$\mu$m $\times$ 85~$\mu$m from photoluminescence on a YAG (Y$_3$Al$_5$O$_{12}$) crystal. The YAG crystal was placed at the sample analysis position in the ARPES chamber. The scale of the camera pixels in both directions was calibrated against the manipulator displacements, which can be precisely controlled using stepper motors. Figure \ref{fig:SpotSize}a) shows the YAG crystal together with the photoluminescence of the 25.3 eV probe spot. Figure \ref{fig:SpotSize}b) shows a zoom-in of the beam spot and the results of fitting the beam intensity distribution to a Gaussian profile along the horizontal (H) and vertical (V) directions, respectively. The beam size differs between the horizontal and vertical directions, as expected from the off-normal incidence geometry, but the profile is overall very close to a two-dimensional Gaussian.
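The profile fit can be sketched as follows. This is an illustrative Python snippet (not the actual analysis code) that fits a one-dimensional Gaussian to a beam cut and returns the FWHM; the synthetic 96~$\mu$m example is for demonstration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, x0, sigma, offset):
    return amp * np.exp(-(x - x0)**2 / (2 * sigma**2)) + offset

def fwhm_from_profile(x, intensity):
    """Fit a 1D Gaussian to a beam profile and return its FWHM
    in the same units as x."""
    p0 = [intensity.max() - intensity.min(),
          x[np.argmax(intensity)],
          (x[-1] - x[0]) / 10,
          intensity.min()]
    popt, _ = curve_fit(gaussian, x, intensity, p0=p0)
    return 2 * np.sqrt(2 * np.log(2)) * abs(popt[2])

# Synthetic horizontal cut with a 96 um FWHM spot
x = np.linspace(-300, 300, 601)            # position (um)
sigma = 96 / (2 * np.sqrt(2 * np.log(2)))  # ~40.8 um
profile = gaussian(x, 1.0, 5.0, sigma, 0.05)
print(round(fwhm_from_profile(x, profile), 1))  # ~96.0
```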
\subsection{Experimental energy resolutions}
\begin{figure}[htp]
\includegraphics[scale=0.7]{fig5_res.png}
\caption{\label{fig:Au_FL}Fermi edge measurements on polycrystalline Au taken at 8 K. a) Data acquired at 10.8~eV (blue), 18.1~eV (red) and 25.3~eV (yellow) plotted with a vertical offset. Solid black lines are fits using a convolution of the temperature dependent Fermi-Dirac function with a Gaussian function, where the full-width at half maximum of the Gaussian represents the overall system energy resolution. b) The same data without vertical offset. c) Results for 32.5 eV. Purple dots represent data, black line is the fit.}
\end{figure}
The experimental energy resolution of the system is characterized by Fermi-edge measurements on polycrystalline gold. Fresh gold is evaporated in the preparation chamber onto a gold foil mounted on a copper sample holder and transferred \textit{in situ} into the analysis chamber. In this way, the achievable energy resolution for the available harmonics was determined. Figure \ref{fig:Au_FL} shows the raw data (dots) from the measurements at the different harmonics together with Fermi-edge fits. The fitted curve consists of a convolution of the temperature-dependent Fermi-Dirac distribution and a Gaussian profile, the latter representing the system energy resolution. The temperature used for the measurements and the fitting is 8~K. The 3rd harmonic (10.8~eV) achieves an energy resolution of 8.9~meV, while the 5th (18.1~eV) and 7th (25.3~eV) harmonics yield energy resolutions of 13.9~meV and 18.5~meV, respectively. This overall energy resolution contains contributions not only from the harmonic linewidth but also from the analyzer resolution, space-charge effects and stray fields. The dominant contribution to the energy broadening is believed to come from the linewidth of the harmonics, given that the analyzer has previously demonstrated a resolution better than 5~meV \cite{berntsen2011experimental}, and that we observe neither an improvement of the resolution nor a shift of the Fermi edge when the photon flux is decreased. This is further supported by simulations of the expected analyzer resolution, which yield 0.9~meV, 0.9~meV, and 6.8~meV for the 3rd, 5th and 7th harmonic Fermi-edge measurements, respectively. We note that the intrinsic linewidth of the driving laser is 3.9~meV, which corresponds to 0.37~nm at 343.4~nm wavelength. Therefore, the overall energy resolution of our system is mainly limited by the HHG process, during which the pulse experiences temporal compression and energy broadening.
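The fit model, a Fermi-Dirac edge convolved with a resolution Gaussian, can be sketched as below. This is an illustrative Python implementation of the model function only (the actual fits additionally float amplitude, background and edge position); the grid and resolution value mimic the 10.8~eV measurement at 8~K.

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant (eV/K)

def fermi_dirac(E, T):
    """Fermi-Dirac occupation; E relative to E_F (eV), T in K."""
    return 1.0 / (np.exp(E / (KB * T)) + 1.0)

def broadened_edge(E, T, resolution_fwhm):
    """Fermi edge at temperature T convolved with a Gaussian whose
    FWHM (eV) represents the total instrumental resolution."""
    sigma = resolution_fwhm / (2 * np.sqrt(2 * np.log(2)))
    dE = E[1] - E[0]
    n = int(5 * sigma / dE)          # symmetric kernel grid
    kx = np.arange(-n, n + 1) * dE
    kernel = np.exp(-kx**2 / (2 * sigma**2))
    kernel /= kernel.sum()           # unit-area kernel
    return np.convolve(fermi_dirac(E, T), kernel, mode='same')

# Model of the 10.8 eV measurement: T = 8 K, 8.9 meV resolution
E = np.linspace(-0.1, 0.1, 2001)     # energy relative to E_F (eV)
edge = broadened_edge(E, T=8.0, resolution_fwhm=0.0089)
```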
Included in Fig.~\ref{fig:Au_FL} is also a Fermi-edge measurement using the 9th harmonic (32.5~eV), which exhibits an energy resolution of 110.5~meV. This harmonic has very low intensity, and the fitted Gaussian width in this case is limited not by the bandwidth of the harmonic but rather by the analyzer resolution ($\sim$68.4~meV) as well as by space-charge broadening due to the presence of lower harmonics in the beam during the measurement. The very limited photon flux of this harmonic also prevents time-resolved measurements from being performed.
\subsection{ARPES test cases}
\subsubsection{Resolving the superconducting gap}
\begin{figure*}[htbp]
\includegraphics[scale=0.67]{fig6_cuprate.png}
\caption{\label{fig:BISCO}Low temperature (8~K) static ARPES measurement of the high-$T_\mathrm{c}$ cuprate superconductor Bi-2212 using 18.1 eV photons. a) Constant energy contour at 20~meV below the Fermi level. b) Symmetrised EDCs (black curves) extracted at the position of the red circles in a) along with fits to the EDCs (red lines). c) The superconducting gap as a function of the FS angle and 0.5$\times$$|\mathrm{cos}(k_\mathrm{x}a)-\mathrm{cos}(k_\mathrm{y}a)|$. d) Example slices taken at various FS angles, as indicated by the blue lines in a).}
\end{figure*}
The first test case consists of static measurements on the copper-based high-$T_\mathrm{c}$ superconductor Bi$_2$Sr$_2$CaCu$_2$O$_{8+x}$ (Bi-2212), to show the capability and feasibility of resolving the band structure in the whole first Brillouin zone, as well as the superconducting gap, with an HHG source. Since the discovery of high-$T_\mathrm{c}$ superconductors (HTS) decades ago\cite{bednorz1986possible}, this group of materials has attracted considerable attention. Among the HTS, the copper-based compounds (cuprates) are typically known for their high transition temperatures and the comparative simplicity of their layered crystal structure. A complete understanding of the mechanism(s) underlying superconductivity in these systems is still lacking despite immense experimental and theoretical efforts\cite{imada1998metal,damascelli2003angle,sobota2021angle}. Although HHG-based light sources have progressed rapidly during recent years, access to the superconducting gap has been limited for HHG-based ARPES due to the relatively large bandwidth of these setups compared to synchrotron-radiation-based setups. However, HHG is so far the most practical approach for generating ultrafast XUV sources, which is favorable for time-resolved ARPES, especially considering the achievable time resolution and the availability of lab-based systems. Figure \ref{fig:BISCO} shows static ARPES results for optimally doped Bi-2212, measured with a photon energy of 18.1~eV. The data were acquired at a sample temperature of 8~K, well below the superconducting transition temperature ($T_\mathrm{c}=90$~K).
The wide-angle mode (WAM) of the ARTOF analyzer, with an acceptance angle of $\pm15^\circ$, was used to collect the data. Figure \ref{fig:BISCO}a) shows a constant energy contour at 20~meV below the Fermi level. The complete nodal to anti-nodal section lies within the analyzer acceptance window. The splitting of the main band along the off-nodal direction into a bonding (BB) and an anti-bonding (AB) band can be resolved, and the red circles in Fig.~\ref{fig:BISCO}a) are fits to the bonding band. Apart from the main band, the super-structure and shadow bands are also observed with relatively lower intensity. Figure \ref{fig:BISCO}b) presents symmetrized energy distribution curves (EDCs) taken along the bonding band for Fermi surface angles ($\phi$) in the range 10$^\circ$ to 44$^\circ$ at $k$-space positions given by the fitted red circles in Fig.~\ref{fig:BISCO}a). The red lines in Fig.~\ref{fig:BISCO}b) are fits of the symmetrized EDCs using a Norman function convoluted with a Gaussian\cite{norman1998phenomenology}. The superconducting gap is determined as the difference in peak positions of the fitted phenomenological form. The gap size is plotted as a function of Fermi surface (FS) angle and 0.5$\times|\mathrm{cos}(k_\mathrm{x}a)-\mathrm{cos}(k_\mathrm{y}a)|$ in Fig.~\ref{fig:BISCO}c). The black line represents the $d$-wave form, $30\times\mathrm{cos}(2\phi)$. The determined gap size and shape for the optimally doped Bi-2212 agree well with previously published results\cite{sun2018temperature}. The near-FS dispersions for a few selected FS angles, indicated by blue lines in Fig.~\ref{fig:BISCO}a), are presented in Fig.~\ref{fig:BISCO}d), where the phonon-coupling-induced kink ($E_\mathrm{b}\sim$70~meV) and the evolution of the mode-coupling strength with momentum can be clearly resolved.
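Two steps of this analysis are simple enough to sketch: the EDC symmetrization about $E_\mathrm{F}$ and the $d$-wave gap form used for the black line in Fig.~\ref{fig:BISCO}c). The Python snippet below is illustrative only; the 30~meV amplitude is the value quoted in the text.

```python
import numpy as np

def symmetrize_edc(E, intensity):
    """Symmetrize an EDC about E_F: I_sym(E) = I(E) + I(-E).
    Removes the Fermi cutoff so a gap appears as peaks at +/-Delta.
    Assumes an energy grid symmetric about E = 0 (E_F)."""
    return intensity + intensity[::-1]

def dwave_gap(phi_deg, delta_max=30.0):
    """d-wave gap (meV) vs Fermi-surface angle:
    Delta = Delta_max * cos(2*phi); phi = 45 deg is the node."""
    return delta_max * np.cos(2 * np.deg2rad(phi_deg))

print(round(float(dwave_gap(0.0)), 1))   # 30.0 (antinodal maximum)
print(round(float(dwave_gap(45.0)), 1))  # 0.0  (node)
```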
The presented data clearly demonstrate the capability of the present setup to resolve and study the momentum dependence of the near-FS electronic structure and superconducting gaps in cuprate systems, with future applications for the study of ultrafast dynamics in high-$T_\mathrm{c}$ superconductors\cite{zonno2021time}.
\subsubsection{Momentum coverage}
\begin{figure*}[]
\includegraphics[width=.85\textwidth]{fig7_WSe2.png}
\caption{\label{fig:WSe2_Bulk}Static WSe$_2$ band structure measured at room temperature with 25.3~eV photon energy. a) Constant energy contours taken at the $\bar{\Gamma}$ and $\bar{K}$ points and at 1, 2 and 3 eV below the valence band maximum. WSe$_2$ band structure b) at the $\bar{\Gamma}$ point along the $\bar{K}-\bar{\Gamma}-\bar{M}$ direction and c) at the $\bar{K}$ point along the $\bar{M}-\bar{K}-\bar{\Gamma}$ direction. A valence band spin-orbit splitting of 514~meV at the $\bar{K}$ point is clearly visible in c). The binding energy scale is set to 0~eV at the valence band maximum at $\bar{\Gamma}$.}
\end{figure*}
To showcase measurements at large in-plane momenta, we use 2H-WSe$_2$ as an example, a member of the transition metal dichalcogenide (TMDC) family. 2H-WSe$_2$ (from now on referred to as WSe$_2$) is a semiconducting TMDC with an indirect gap of 1.25 eV \cite{2H_WEs2_IndirectGap, WSe2_BandGap} that retains bulk inversion symmetry while still exhibiting a large spin polarization of its bulk electronic states\cite{2H_WSe2_PDCKing}. Due to the presence of large spin polarisation in inequivalent $K$ and $K'$ valleys \cite{WSe2_BerryCurvature_ParkPRL2018, 2H_WSe2_PDCKing}, WSe$_2$ is a prospective candidate for spin- and valleytronic devices\cite{Valleytronics_rev}, making it an interesting system both scientifically and technologically. The WSe$_2$ sample was cleaved \textit{in situ} using the top-post method and measured at a pressure of $1\times10^{-10}$~mbar. The WAM mode of the analyzer was used in this case as well, together with an off-normal emission geometry to reach the $\bar{K}$ point of the first Brillouin zone of WSe$_2$ (1.3 {\AA}$^{-1}$). Measurements were done using a photon energy of 25.3~eV, which allows us to access the $\bar{K}$ point within our available polar rotation range and has favourable matrix elements for WSe$_2$ at both the $\bar{\Gamma}$ and $\bar{K}$ points. The multilayer refocusing mirror (coated for 25.3~eV) and an Al filter were employed to clean the spectra of higher and lower harmonics. Measurements were performed at room temperature, and the total recording time was 6 hours. Figure \ref{fig:WSe2_Bulk}a) shows constant energy contours at the $\bar{\Gamma}$ and $\bar{K}$ points at 1, 2 and 3 eV below the valence band maximum (VBM), while Figs.~\ref{fig:WSe2_Bulk}b) and c) show example cuts through the data volume around the $\bar{\Gamma}$ and $\bar{K}$ points along the high-symmetry directions $\bar{K}-\bar{\Gamma}-\bar{M}$ and $\bar{M}-\bar{K}-\bar{\Gamma}$, respectively.
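The momentum reach can be checked with the free-electron final-state relation $k_\parallel = 0.5123\sqrt{E_\mathrm{k}}\sin\theta$. The Python sketch below is illustrative and assumes a work function of $\sim$4.5~eV (a typical value, not measured here) to estimate the emission angle needed to reach $\bar{K}$ at 1.3~{\AA}$^{-1}$ with 25.3~eV photons.

```python
import numpy as np

def k_parallel(e_kin, theta_deg):
    """In-plane momentum (1/Angstrom) for kinetic energy e_kin (eV)
    at emission angle theta from the surface normal."""
    return 0.5123 * np.sqrt(e_kin) * np.sin(np.deg2rad(theta_deg))

def angle_for_k(e_kin, k):
    """Emission angle (deg) needed to reach in-plane momentum k."""
    return np.rad2deg(np.arcsin(k / (0.5123 * np.sqrt(e_kin))))

# 25.3 eV photons, assumed ~4.5 eV work function, states near the VBM
e_kin = 25.3 - 4.5
print(round(float(angle_for_k(e_kin, 1.3)), 1))  # ~33.8 deg to reach K-bar
```

Combined with the $\pm15^\circ$ analyzer acceptance, a modest polar rotation of the sample therefore brings $\bar{K}$ into the detection window.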
The VBM for bulk WSe$_2$ is located at the $\bar{\Gamma}$ point, and the energy axis is referenced to the VBM. Valleys of WSe$_2$ are located at the $\bar{K}$ point, and a spin-orbit splitting of 514~meV at the $\bar{K}$ point is clearly resolved. These results show the feasibility of acquiring high statistics data at large in-plane momenta. The high detection efficiency and momentum coverage of the ARTOF analyzer becomes apparent at higher order harmonics where a large fraction of the Brillouin zone can be covered in one single measurement\cite{TOF_vs_Hemisphere_Mac_RevSciInstr2020}.
\subsubsection{Time-resolved experiment}
\begin{figure*}[htp]
\includegraphics[scale=0.86]{fig8_graphene.png}
\caption{\label{fig:TR_GR}Pump-probe measurements on p-doped graphene measured with 25.3~eV photons. a) Ultrafast dynamics of the graphene, excited with a pump beam of 1.2 $\mu$m wavelength and $\sim$100~fs pulse duration. The purple vertical line indicates the integration range in energy for the corresponding delay curve displayed in panel c). b) From left to right: energy dispersion cuts from a static measurement without pump (\textcircled{1}), at the excitation time ($t_0$, \textcircled{2}-\textcircled{1}) and 1~ps after $t_0$ (\textcircled{3}-\textcircled{1}). c) Fit to the decay data. The decay data are obtained by integrating over the energy window marked by the purple line on the right side of a). The fitting profile is a two-component exponential decay curve convoluted with a Gaussian function. d) Energy resolution ($\Delta E$) versus time resolution ($\Delta t$) for photon energies of 18.1~eV and 25.3~eV; the solid line shows the theoretical resolution limit assuming a Fourier-transform-limited Gaussian pulse.}
\end{figure*}
In order to determine the temporal resolution reachable with the current source, p-doped graphene was used as a test sample. P-doped graphene has fast enough intrinsic dynamics to reflect the system-limited temporal resolution\cite{gierz2013snapshots,johannsen2013direct}. The specific sample used here was a quasi-freestanding monolayer graphene on 6H-SiC (0001)\cite{forti2011large}, showing a hole pocket around the $\bar{K}$ point, as seen in Fig.~\ref{fig:TR_GR}b). Similar to the case of WSe$_{2}$, graphene has a small real-space unit cell, which corresponds to a large first Brillouin zone in reciprocal space, making it challenging to reach the zone boundary for most lab-based, non-HHG laser ARPES setups. Figure \ref{fig:TR_GR}a) illustrates the electronic response to the optical excitation, for which we utilized the idler mode of the OPA at 1.2 $\mu$m wavelength. Figure \ref{fig:TR_GR}a) is based on the momentum-integrated excitation spectrum of the graphene sample, plotted as a function of the delay time between the pump and probe beams. The probe photon energy was 25.3~eV. The data were measured at room temperature and the acquisition time for each delay point was 5 minutes. In Fig.~\ref{fig:TR_GR}b) we plot the electronic structure at the $\bar{K}$ point at different time delays. The leftmost window shows the static spectrum, whereas the middle and rightmost windows show the difference between the excited and static spectra at $t_0$ and $t_0$+1~ps, respectively. The incidence angle of the probe beam is 15$^\circ$ upward in the vertical direction, and the polarization is close to horizontal, resulting in unequal intensity of the two branches\cite{mucha2008characterization}. The energy window indicated by the purple line on the right side of Fig.~\ref{fig:TR_GR}a) was integrated in energy and is plotted in Fig.~\ref{fig:TR_GR}c) as a function of delay time.
The fit to the data (red line) is based on a two-component exponential decay curve convoluted with a Gaussian. The $\tau_1$ and $\tau_2$ of the exponential decay function represent contributions from electron scattering with optical and acoustic phonons, respectively\cite{gierz2014non}. The overall temporal resolution is determined from the FWHM of the Gaussian distribution, which represents the temporal resolution broadening, while the $\tau$-parameters describe the rate of decay after the excitation. For the 7th harmonic (\textit{h$\nu$} = 25.3~eV), the system temporal resolution is determined to be $\sim$165 fs. Corresponding data recorded using 18.1~eV photons (not shown here) indicate a time resolution of $\sim$204~fs. In order to compare the time-bandwidth product with that of a Fourier-transform-limited pulse, we combine these results with the energy resolution determined above and plot the energy resolution, $\Delta E$, as a function of the time resolution, $\Delta t$, in Fig.~\ref{fig:TR_GR}d). Note that in the plot, the temporal contribution from the pump beam has been removed. Overall, the results show $\Delta E \cdot\Delta t \sim 2400$~meV$\cdot$fs, compared to the transform limit of 1825~meV$\cdot$fs for a Gaussian pulse. This demonstrates that the present source is only 31\% above the transform limit. Several factors can contribute to the extra broadening both in time and in energy. The measured energy broadening can originate from space charge, the analyzer resolution, stray fields and the HHG conditions. Extra time broadening can, on the other hand, originate from the THG and HHG processes themselves, as well as from chirp induced by optical components, filters and windows. However, as noted previously, we do not observe any improved energy resolution with decreasing photon flux.
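The Gaussian transform limit quoted above follows from $\Delta E\cdot\Delta t = 4\ln 2\,\hbar \approx 1825$~meV$\cdot$fs (FWHM values). The Python sketch below verifies this number and gives an illustrative form of the fit model, using the standard analytic convolution of an exponential decay with a Gaussian response; the parameter values in the comments are examples, not the fitted ones.

```python
import numpy as np
from scipy.special import erf

HBAR_MEV_FS = 658.2119  # hbar in meV*fs

def gaussian_transform_limit():
    """FWHM time-bandwidth product of a Gaussian pulse: 4*ln2*hbar."""
    return 4 * np.log(2) * HBAR_MEV_FS

def decay_model(t, a1, tau1, a2, tau2, fwhm, t0=0.0):
    """Two-component exponential decay convolved (analytically) with
    a Gaussian instrument response of the given FWHM; t in fs."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    out = np.zeros_like(np.asarray(t, dtype=float))
    for a, tau in ((a1, tau1), (a2, tau2)):
        arg = sigma**2 / (2 * tau**2) - (t - t0) / tau
        out += 0.5 * a * np.exp(arg) * (
            1 + erf(((t - t0) - sigma**2 / tau) / (np.sqrt(2) * sigma)))
    return out

print(round(float(gaussian_transform_limit())))  # -> 1825 meV*fs
```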
Given the simulated analyzer energy resolutions for the settings used in our Fermi edge measurements (0.9 meV, 0.9 meV and 6.8 meV for photon energies 10.8 eV, 18.1 eV and 25.3 eV, respectively), we do not expect considerable analyzer contributions to the overall energy resolution and the determined time-bandwidth product is likely intrinsic to the light. The deviation from the ideal time-bandwidth product is not surprising as neither the time-bandwidth product of THG pulses nor the HHG generation conditions are expected to be ideal. Further improvements to both the drive pulses and the generation conditions could thus potentially improve the overall time-bandwidth product of the system further.
\section{\label{sec:Conclusion}Conclusion\protect }
In conclusion, we have designed a narrow-bandwidth, high-repetition-rate XUV source for time-resolved ARPES. The available photon energies cover a wide range from 10.8~eV to 32.5~eV, with an energy resolution of 9~meV at the lowest photon energy. The technical performance and suitability of the light source for time-resolved ARPES are demonstrated across test samples and typical quantum material systems such as gold, graphene, transition metal dichalcogenides and high-temperature superconductors. The pump line equipped with an OPA provides wavelengths from 0.65~$\mu$m to 9~$\mu$m with pulse durations < 100~fs, allowing for both above-the-gap pumping and sub-gap pumping of coherent phonons across a wide range of materials. The combination of a high repetition rate, a wide range of photon energies and a continuously tunable wide range of pump energies with a time-of-flight detector makes it possible to study ultrafast dynamics over the whole first Brillouin zone in most crystalline materials.
\begin{acknowledgments}
This work was financially supported by the Knut and Alice Wallenberg foundation (No.~2018-0104) and the Swedish research council VR (No.~2019-00701). Q.G. acknowledges a fellowship from the China Scholarship Council (No.~201907930007). M.D. acknowledges financial support from the Göran Gustafsson foundation. We thank Dr.~Qiang Gao and Professor Xingjiang Zhou from the Institute of Physics, Chinese Academy of Sciences, for providing the high-quality Bi-2212 sample, and Professor Youguo Shi (IOP, CAS) for the high-quality WSe$_2$ sample.
\end{acknowledgments}
\section*{DATA AVAILABILITY}
The data that support the findings of this study are available
from the corresponding author upon reasonable request.
\nocite{*}
\section{Introduction}
\vspace{-0.05in}
In recent years, we have made significant advances in standard recognition tasks such as image classification~\cite{he2016deep}, detection~\cite{ren2015faster} or segmentation~\cite{chen2016attention}. Most of these gains are a result of using feed-forward end-to-end learned ConvNet models. Unlike humans where visual reasoning about the space and semantics is crucial~\cite{biederman1982scene}, our current visual systems lack any context reasoning beyond convolutions with large receptive fields. Therefore, a critical question is how do we incorporate both \emph{spatial} and \emph{semantic} reasoning as we build next-generation vision systems.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{teaser}
\caption{{\small Current recognition systems lack the reasoning power beyond convolutions with large receptive fields, whereas humans can explore the rich space of spatial and semantic relationships for reasoning: \eg inferring the fourth ``window'' even with occlusion, or the ``person'' who drives the ``car''. To close this gap, we present a generic framework that also uses relationships to iteratively reason and build up estimates.}\label{fig:teaser}}
\vspace{-0.2in}
\end{figure}
Our goal is to build a system that can not only extract and utilize hierarchy of convolutional features, but also improve its estimates via spatial and semantic relationships. But what are spatial and semantic relationships and how can they be used to improve recognition? Take a look at Fig.~\ref{fig:teaser}. An example of spatial reasoning (top-left) would be: if three regions out of four in a line are ``window'', then the fourth is also likely to be ``window''. An example of semantic reasoning (bottom-right) would be to recognize ``school bus'' even if we have seen few or no examples of it -- just given examples of ``bus'' and knowing their connections. Finally, an example of spatial-semantic reasoning could be: recognition of a ``car'' on road should help in recognizing the ``person'' inside ``driving'' the ``car''.
A key recipe to reasoning with relationships is to \emph{iteratively} build up estimates.
Recently, there have been efforts to incorporate such reasoning via top-down modules~\cite{ronneberger2015u,wei2016convolutional} or using explicit memories~\cite{xiong2016dynamic,marino2016more}. In the case of top-down modules, high-level features which carry class-based information can be used in conjunction with low-level features to improve recognition performance. An alternative architecture is to use an explicit memory. For example, Chen \& Gupta~\cite{chen2017spatial} perform sequential object detection, where a \emph{spatial memory} is used to store previously detected objects, leveraging the power of ConvNets for extracting dense context patterns beneficial for follow-up detections.
However, there are two problems with these approaches: a) both approaches use stack of convolutions to perform \emph{local} pixel-level reasoning~\cite{divvala2009empirical}, which can lack a \emph{global} reasoning power that also allows regions farther away to directly communicate information; b) more importantly, both approaches assume enough examples of relationships in the training data -- so that the model can learn them from scratch, but as the relationships grow exponentially with increasing number of classes, there is not always enough data. A lot of semantic reasoning requires learning from few or no examples~\cite{fei2006one}. Therefore, we need ways to exploit additional structured information for visual reasoning.
In this paper, we put forward a generic framework for both spatial and semantic reasoning. Different from current approaches that rely solely on convolutions, our framework can also learn from structured information in the form of knowledge bases~\cite{chen2013neil,zhu2015building} for visual recognition. The core of our algorithm consists of two modules: the local module, based on spatial memory~\cite{chen2017spatial}, performs pixel-level reasoning using ConvNets. We make major improvements in efficiency through parallel memory updates. Additionally, we introduce a global module for reasoning beyond local regions. In the global module, reasoning is based on a \emph{graph} structure. It has three components: a) a knowledge graph where classes are represented as nodes and edges encode different types of semantic relationships; b) a region graph of the current image where regions in the image are nodes and spatial relationships between these regions are edges; c) an assignment graph that assigns regions to classes. Taking advantage of this structure, we develop a reasoning module specifically designed to pass information on this graph. Both the local module and the global module roll out iteratively and cross-feed predictions to each other in order to refine estimates. Note that local and global reasoning are not isolated: a good image understanding is usually a compromise between background knowledge learned \emph{a priori} and image-specific observations. Therefore, our full pipeline joins the forces of the two modules via an attention~\cite{chen2016attention} mechanism, allowing the model to rely on the most relevant features when making the final predictions.
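The flavor of passing information on such a graph can be conveyed by a generic sketch. The Python snippet below shows one step of normalized-adjacency message passing in the spirit of graph convolutions; it is a minimal illustration with random stand-in weights, not the formulation used in this paper.

```python
import numpy as np

def propagate(features, adjacency):
    """One step of message passing on a graph: each node averages its
    neighbors' features (row-normalized adjacency with self-loops),
    then applies a linear map + ReLU. The weight matrix is random
    here purely for illustration of the operation."""
    n, d = features.shape
    a_hat = adjacency + np.eye(n)                # add self-loops
    a_hat = a_hat / a_hat.sum(axis=1, keepdims=True)
    rng = np.random.default_rng(0)
    weight = rng.normal(scale=0.1, size=(d, d))  # stand-in for learned W
    return np.maximum(a_hat @ features @ weight, 0.0)

# Toy graph: 3 region nodes + 1 class node connected to regions 0 and 1
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 1],
                [0, 1, 0, 0],
                [1, 1, 0, 0]], dtype=float)
feats = np.eye(4)                 # one-hot initial node features
out = propagate(feats, adj)
print(out.shape)                  # (4, 4)
```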
We show strong performance over plain ConvNets using our framework. For example, we can achieve $8.4\%$ absolute improvements on ADE~\cite{zhou2016semantic} measured by per-class average precision, where by simply making the network deeper can only help ${\sim}1\%$.
\vspace{-0.05in}
\section{Related Work}
\vspace{-0.05in}
\noindent{\bf Visual Knowledge Base.} Whereas past five years in computer vision will probably be remembered as the successful resurgence of neural networks, acquiring visual knowledge at a large scale -- the simplest form being labeled instances of objects~\cite{russakovsky2015imagenet,lin2014microsoft}, scenes~\cite{zhou2016semantic}, relationships~\cite{krishna2016visual} \etc -- deserves at least half the credit, since ConvNets hinge on large datasets~\cite{chensun2017}. Apart from providing labels using crowd-sourcing, attempts have also been made to accumulate structured knowledge (\eg relationships~\cite{chen2013neil}, $n$-grams~\cite{divvala2014learning}) automatically from the web. However, these works fixate on building knowledge bases rather than using knowledge for reasoning. Our framework, while being more general, is along the line of research that applies visual knowledge base to end tasks, such as affordances~\cite{zhu2015building}, image classification~\cite{marino2016more}, or question answering~\cite{wu2016ask}.
\noindent{\bf Context Modeling.} Modeling context, or the interplay between scenes, objects and parts, is one of the central problems in computer vision. While various previous works (\eg scene-level reasoning~\cite{torralba2003context}, attributes~\cite{farhadi2009describing,parikh2011relative}, structured prediction~\cite{krahenbuhl2011efficient,desai2011discriminative,tu2010auto}, relationship graphs~\cite{johnson2015image,lu2016visual,xu2017scene}) have approached this problem from different angles, the breakthrough came from the idea of feature learning with ConvNets~\cite{he2016deep}. On the surface, such models hardly use any explicit context module for reasoning, but it is generally accepted that ConvNets are extremely effective at aggregating local pixel-level context through their ever-growing receptive fields~\cite{zeiler2014visualizing}. Even the most recent developments such as top-down modules~\cite{xie2016top,lin2016feature,tdm_cvpr17}, pairwise modules~\cite{santoro2017simple}, iterative feedback~\cite{wei2016convolutional,newell2016stacked,carreira2016human}, attention~\cite{yang2016stacked}, and memory~\cite{xiong2016dynamic,chen2017spatial} are motivated to leverage this power and depend on variants of convolutions for reasoning. Our work takes an important next step beyond these approaches in that it also incorporates learning from structured visual knowledge bases directly, to reason with spatial and semantic relationships.
\noindent{\bf Relational Reasoning.} The earliest forms of reasoning in artificial intelligence date back to symbolic approaches~\cite{newell1980physical}, where relations between abstract symbols are defined by the language of mathematics and logic, and reasoning takes place by deduction, abduction~\cite{hobbs1988interpretation}, \etc. However, symbols need to be grounded~\cite{harnad1990symbol} before such systems are practically useful. Modern approaches, such as the path ranking algorithm~\cite{lao2011random}, rely on statistical learning to extract useful patterns for relational reasoning on structured knowledge bases. As an active research area, recent works also apply neural networks to graph-structured data~\cite{scarselli2009graph,henaff2015deep,li2015gated,kipf2016semi,niepert2016learning,das2016chains,marino2016more}, or attempt to regularize the outputs of networks with relationships~\cite{deng2014large} and knowledge bases~\cite{hu2016deep}. However, we believe that for visual data, reasoning should be both local and global: discarding the two-dimensional image structure is neither efficient nor effective for tasks that involve regions.
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth]{local-global}
\caption{{\small Overview of our reasoning framework. Besides a plain ConvNet that gives predictions, the framework has two modules to perform reasoning: a local one (Sec.~\ref{conv}) that uses spatial memory $\mathcal{S}_i$ and reasons with another ConvNet $\mathcal{C}$; and a global one (Sec.~\ref{beyond}) that treats regions and classes as nodes in a graph and reasons by passing information among them. Both modules receive combined high-level and mid-level features, and roll out iteratively (Sec.~\ref{iter}) while cross-feeding beliefs. The final prediction $f$ is produced by combining all the predictions $f_i$ with attentions $a_i$ (Sec.~\ref{attend}).}\label{fig:overview}}
\vspace{-0.2in}
\end{figure*}
\vspace{-0.05in}
\section{Reasoning Framework}
\vspace{-0.05in}
In this section we build up our reasoning framework. Besides the plain predictions $p_0$ from a ConvNet, it consists of two core modules that reason to predict. The first, the local module, uses a spatial memory to store previous beliefs with parallel updates, and still falls within the regime of convolution-based reasoning (Sec.~\ref{conv}). Beyond convolutions, we present our key contribution -- a global module that reasons directly between regions and classes represented as nodes in a graph (Sec.~\ref{beyond}). Both modules build up estimates iteratively (Sec.~\ref{iter}), with beliefs cross-fed to each other. Finally, taking advantage of both local and global reasoning, we combine predictions from all iterations with an attention mechanism (Sec.~\ref{attend}) and train the model with sample re-weighting (Sec.~\ref{train}) that focuses on hard examples (see Fig.~\ref{fig:overview}).
\subsection{Reasoning with Convolutions\label{conv}}
Our first building block, the local module, is inspired by~\cite{chen2017spatial}. At a high level, the idea is to use a spatial memory $\mathcal{S}$ to store previously detected objects at the very locations they have been found. $\mathcal{S}$ is a tensor with three dimensions. The first two, height $H$ and width $W$, correspond to the reduced size ($1/16$) of the image. The third, depth $D$ (${=}512$), makes each memory cell $c$ a vector that stores potentially useful information at that location.
$\mathcal{S}$ is updated with both high-level and mid-level features. For high-level, information regarding the estimated class label is stored. However, just knowing the class may not be ideal -- more details about the shape, pose, \etc can also be useful for other objects. For example, it helps to know the pose of a ``person'' playing tennis when recognizing the ``racket''. In this paper, we use the logits $f$ before the soft-max activation, in conjunction with feature maps from a bottom convolutional layer $h$, to feed into the memory.
Given an image region $r$ to update, we first crop the corresponding features from the bottom layer and resize them to a predefined square ($7{\times}7$) with bi-linear interpolation, giving $h$. Since the high-level feature $f$ is a vector covering the entire region, we append it to all $49$ locations. Two $1{\times}1$ convolutions fuse the information~\cite{chen2017spatial} and form our input features $f_r$ for $r$. The same region in the memory $\mathcal{S}$ is also cropped and resized to $7{\times}7$, denoted $s_r$. After this alignment, we use a convolutional gated recurrent unit (GRU)~\cite{chung2014empirical} to write the memory:
\begin{equation}\label{gru}
s'_r = u \circ s_r + (1 - u) \circ \sigma(W_f f_r + W_s(z \circ s_r) + b),
\end{equation}
where $s'_r$ is the updated memory for $r$, $u$ is the update gate, $z$ is the reset gate, $W_f$, $W_s$ and $b$ are convolutional weights and biases, $\circ$ is the entry-wise product, and $\sigma(\cdot)$ is an activation function. After the update, $s'_r$ is placed back into $\mathcal{S}$ with another crop-and-resize operation\footnote{Different from previous work~\cite{chen2017spatial} that introduces an inverse operation to put the region back, we note that crop and resize \emph{itself}, with proper extrapolation, can simply meet this requirement.}.
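For concreteness, the memory write of Eq.~\ref{gru} can be sketched in NumPy as below, with the convolutions collapsed to per-cell linear maps over the flattened $7{\times}7$ crop. The update/reset gate parameterizations (the weights named with a leading gate letter, and their biases) follow the standard GRU and are our assumption -- the text only spells out the candidate-state weights $W_f$, $W_s$ and $b$ -- and we take $\sigma{=}\tanh$ for the candidate state:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv_gru_write(s_r, f_r, params):
    """Memory write of Eq. 1 with convolutions simplified to per-cell
    linear maps (the real model uses convs over the 7x7 crop).
    s_r, f_r: (49, D) flattened aligned crops of memory and input.
    Gate weights W_uf, W_us, W_zf, W_zs and biases b_u, b_z are assumed
    (standard GRU form); W_f, W_s, b are the candidate-state parameters
    that the paper writes out explicitly."""
    p = params
    u = sigmoid(f_r @ p["W_uf"] + s_r @ p["W_us"] + p["b_u"])  # update gate
    z = sigmoid(f_r @ p["W_zf"] + s_r @ p["W_zs"] + p["b_z"])  # reset gate
    cand = np.tanh(f_r @ p["W_f"] + (z * s_r) @ p["W_s"] + p["b"])
    return u * s_r + (1.0 - u) * cand  # Eq. 1
```

The returned $(49, D)$ array is the updated crop $s'_r$, which is then resized back into the full memory $\mathcal{S}$.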
\noindent {\bf Parallel Updates.} Previous work~\cite{chen2017spatial} made sequential updates to the memory. However, sequential inference is inefficient and GPU-intensive -- limiting it to only ten outputs per image~\cite{chen2017spatial}. In this paper we propose to update the regions in parallel as an approximation. Where regions overlap, a cell can be covered multiple times by different regions. When placing the regions back into $\mathcal{S}$, we therefore also compute a weight matrix $\Gamma$ whose entries $\gamma_{r,c}{\in}[0,1]$ track how much a region $r$ has contributed to a memory cell $c$: $1$ means the cell is fully covered by the region, $0$ means not covered. The final value of an updated cell is the weighted average over all regions.
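A minimal NumPy sketch of this coverage-weighted placement (array shapes and the helper name are ours, not from the paper):

```python
import numpy as np

def parallel_place(S, updates, coverages):
    """Place R updated region crops back into memory S in parallel.
    S: (H, W, D) memory. updates: list of (H, W, D) arrays, each the
    updated values for one region resized to full resolution (zeros
    outside the region). coverages: list of (H, W) weights gamma_{r,c}
    in [0, 1]. Cells touched by several regions get the coverage-
    weighted average; untouched cells keep their old value."""
    H, W, D = S.shape
    num = np.zeros((H, W, D))
    den = np.zeros((H, W, 1))
    for upd, gamma in zip(updates, coverages):
        num += gamma[..., None] * upd
        den += gamma[..., None]
    covered = den[..., 0] > 0
    out = S.copy()
    out[covered] = num[covered] / den[covered]
    return out
```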
The actual reasoning module, a ConvNet $\mathcal{C}$ of three $3{\times}3$ convolutions and two $4096$-D fully-connected layers, takes $\mathcal{S}$ as the input, and builds connections within the local window of its receptive fields to perform prediction. Since the two-dimensional image structure and the location information is preserved in $\mathcal{S}$, such an architecture is particularly useful for relationships with spatial reasoning.
\begin{figure}[t]
\centering
\includegraphics[width=.8\linewidth]{paths}
\caption{{\small Illustration of directly passing information on a graph with multiple edge types. Here four nodes are linked with two edge types. Each node is represented by an input feature vector $m_i$ (aggregated as $M$). A weight matrix $W_j$ is learned for edge type $j$ to transform the inputs. The adjacency matrix $A_j$ is then applied to pass information to linked nodes. Finally, the output $G$ is generated by accumulating over all edge types and applying the activation function.}\label{fig:paths}}
\vspace{-0.2in}
\end{figure}
\subsection{Beyond Convolutions\label{beyond}}
Our second module goes beyond local regions and convolutions for global reasoning. Here the meaning of \emph{global} is two-fold. First is \emph{spatial}: we want to let regions farther away directly communicate information with each other, not confined by the receptive fields of the reasoning module $\mathcal{C}$. Second is \emph{semantic}: we want to take advantage of visual knowledge bases, which can provide relationships between classes that are globally true (\ie commonsense) across images. To achieve both types of reasoning, we build a graph $\mathcal{G}=(\mathcal{N}, \mathcal{E})$, where $\mathcal{N}$ and $\mathcal{E}$ denote the node and edge sets, respectively. Two types of nodes are defined in $\mathcal{N}$: region nodes $\mathcal{N}_r$ for $R$ regions, and class nodes $\mathcal{N}_c$ for $C$ classes.
As for $\mathcal{E}$, three groups of edges are defined between nodes. First, for $\mathcal{N}_r$, a spatial graph is used to encode spatial relationships between regions ($\mathcal{E}_{r{\rightarrow}r}$). Multiple types of edges are designed to characterize the relative locations. We begin with basic relationships such as ``left/right'' and ``top/bottom'', and define edge weights by measuring the pixel-level distances between the two regions. Note that we do not use the raw distance $x$ directly, but instead normalize it to $[0,1]$ with a kernel $\kappa(x){=}\exp(-x/\Delta)$ (where $\Delta{=}50$ is the bandwidth), with the intuition that closer regions are more correlated. The edge weights are then used directly in the adjacency matrix of the graph. Additionally, we include edges to encode coverage patterns (\eg intersection over union, IoU~\cite{everingham2010pascal}), which can be especially helpful when two regions overlap.
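As a sketch, the distance kernel and the IoU coverage edge can be computed as follows (helper names are ours; boxes are assumed to be given as $(x_1, y_1, x_2, y_2)$):

```python
import numpy as np

def spatial_edge_weight(dist, delta=50.0):
    """Normalize a raw pixel distance x into an edge weight in (0, 1]
    with the kernel kappa(x) = exp(-x / Delta), Delta = 50 being the
    bandwidth from the paper. Closer regions get weights nearer 1."""
    return np.exp(-np.asarray(dist, dtype=float) / delta)

def iou(box_a, box_b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2), used for
    the coverage-pattern edges between overlapping regions."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```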
A second group of edges lies between regions and classes, where the assignment of a region to a class takes place. Such edges shoulder the responsibility of propagating beliefs from region to class ($e_{r{\rightarrow}c}$) or backwards from class to region ($e_{c{\rightarrow}r}$). Rather than linking only to the most confident class, we use the full soft-max score $p$ to define the edge weights of connections to all classes. The hope is that this delivers more information and is thus more robust to false assignments.
Semantic relationships from knowledge bases are used to construct the third group of edges, between classes ($\mathcal{E}_{c{\rightarrow}c}$). Again, multiple types of edges can be included here. Classical examples are ``is-kind-of'' (\eg between ``cake'' and ``food''), ``is-part-of'' (\eg between ``wheel'' and ``car''), and ``similarity'' (\eg between ``leopard'' and ``cheetah''), many of which are universally true and are thus regarded as commonsense knowledge for humans. Such commonsense can be either manually listed~\cite{russakovsky2015imagenet} or automatically collected~\cite{chen2013neil}. Interestingly, even relationships beyond these (\eg actions, prepositions) can help recognition~\cite{marino2016more}. Take ``person ride bike'' as an example, which is apparently more of an image-specific relationship. However, given less confident predictions of ``person'' and ``bike'', knowing the relationship ``ride'' along with the spatial configuration of the two can also help prune spurious explanations. To study both cases, we experimented with two knowledge graphs in this paper: one created in-house with mostly commonsense edges, and one that also includes more types of relationships accumulated at a large scale. For the actual graphs used in our experiments, please see Sec.~\ref{data} for details.
Now we are ready to describe the graph-based reasoning module $\mathcal{R}$. As the input to our graph, we use $M_r{\in}\mathbb{R}^{R\times D}$ to denote the combined features of all region nodes $\mathcal{N}_r$, where $D$ (${=}512$) is the number of feature channels. For each class node $n_c$, we choose off-the-shelf word vectors~\cite{joulin2016fasttext} as a convenient representation, denoted $M_c{\in}\mathbb{R}^{C\times D}$. We then extend previous works~\cite{scarselli2009graph,niepert2016learning} and pass messages directly on $\mathcal{G}$ (see Fig.~\ref{fig:paths}). Note that, because our end goal is to recognize regions better, all class nodes should only be used as intermediate ``hops'' toward better region representations. With this insight, we design two reasoning paths to learn the output features $G_r$: a \emph{spatial} path on which only region nodes are involved:
\begin{equation}\label{spatial}
G^{spatial}_r = \sum_{e{\in} \mathcal{E}_{r{\rightarrow}r}}{A_e M_r W_e},
\end{equation}
where $A_e{\in}\mathbb{R}^{R\times R}$ is the adjacency matrix of edge type $e$, and $W_e{\in}\mathbb{R}^{D\times D}$ is its weight matrix (biases are omitted for simplicity). The second reasoning path is a \emph{semantic} one, through the class nodes:
\begin{equation}\label{semantic}
G^{semantic}_c = \sum_{e{\in} \mathcal{E}_{c{\rightarrow}c}}{A_e \sigma(A_{e_{r{\rightarrow}c}} M_r W_{e_{r{\rightarrow}c}} + M_c W_c) W_e},
\end{equation}
where we first map regions to classes through $A_{e_{r{\rightarrow}c}}$ and $W_{e_{r{\rightarrow}c}}$, combine the intermediate features with the class features $M_c$, and again aggregate features over the multiple types of edges between classes.
Finally, the output for regions $G_r$ is computed by merging these two paths:
\begin{equation}\label{output}
G_r = \sigma(G^{spatial}_r + \sigma(A_{e_{c{\rightarrow}r}} G^{semantic}_c W_{e_{c{\rightarrow}r}})),
\end{equation}
which first propagates semantic information back to the regions and then applies a non-linear activation (see Fig.~\ref{fig:ss}).
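A shape-level NumPy sketch of the two paths (Eqs.~\ref{spatial}--\ref{output}), with a single edge type per group so that the sums over edge types collapse to one term; variable names are ours, and $\sigma$ is ReLU as in our implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def global_reasoning(M_r, M_c, A_rr, W_rr, A_rc, W_rc, W_c,
                     A_cc, W_cc, A_cr, W_cr):
    """One pass of the global module R (Eqs. 2-4), one edge type per
    group for brevity. M_r: (R, D) region features; M_c: (C, D) class
    word vectors. A_rr: (R, R), A_rc: (C, R), A_cc: (C, C),
    A_cr: (R, C) adjacency matrices; W_*: (D, D) learned weights."""
    # spatial path, Eq. 2: region-to-region message passing
    G_spatial = A_rr @ M_r @ W_rr
    # semantic path, Eq. 3: regions -> classes, then class-to-class
    G_semantic = A_cc @ relu(A_rc @ M_r @ W_rc + M_c @ W_c) @ W_cc
    # merge, Eq. 4: propagate semantic features back to regions
    return relu(G_spatial + relu(A_cr @ G_semantic @ W_cr))
```

In the full module these operations are stacked three times with residual connections before feeding the predictor.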
Just like convolution filters, the paths described above can also be stacked: the output $G_r$ can go through another set of graph operations, allowing the framework to perform joint spatial-semantic reasoning with deeper features. We use three stacks of operations with residual connections~\cite{he2016deep} in $\mathcal{R}$ before the output is fed to the predictor.
\begin{figure}[t]
\centering
\includegraphics[width=.9\linewidth]{spatial-semantic}
\caption{{\small Two reasoning paths used in our global reasoning module $\mathcal{R}$. Taking the region and class inputs $M_r$ and $M_c$, the spatial path directly passes information in the region graph with region-to-region edges $\mathcal{E}_{r{\rightarrow}r}$, whereas the semantic path first assigns regions to classes with $e_{r{\rightarrow}c}$, passes the information on to other classes with class-to-class edges $\mathcal{E}_{c{\rightarrow}c}$, and then propagates back. Final outputs are combined to generate output region features $G_r$.}\label{fig:ss}}
\vspace{-0.2in}
\end{figure}
\subsection{Iterative Reasoning\label{iter}}
A key ingredient of reasoning is to iteratively build up estimates. But how does information pass from one iteration to another? Our answer is \emph{explicit} memory, which stores all the history from previous iterations. The local module uses the spatial memory $\mathcal{S}$, and the global module uses another memory $\mathcal{M}$, but without spatial structure. At iteration $i$, $\mathcal{S}_i$ is followed by the convolutional reasoning module $\mathcal{C}$ to generate new predictions $f_i^l$ for each region. Similarly, the global module gives new predictions $f_i^g$ from $\mathcal{R}$. These new predictions, used as high-level features, then update the memories $\mathcal{S}_{i+1}$ and $\mathcal{M}_{i+1}$. The new memories lead to another round of updated $f_{i+1}$s, and the iteration goes on.
While one can perform local and global reasoning in isolation, the two modules work best in conjunction. Therefore, for our full pipeline we want to join forces of both modules when generating the predictions. To this end, we introduce \emph{cross-feed} connections: after reasoning, the local and global features are concatenated together to update the memories $\mathcal{S}_{i+1}$ and $\mathcal{M}_{i+1}$ using GRUs. In this way, the spatial memory can benefit from global knowledge of spatial and semantic relationships, and the graph can get a better sense of the local region layouts.
\subsection{Attention\label{attend}}
Inspired by recent work on attention~\cite{chen2016attention}, we make another modification at the model output. Specifically, instead of only generating scores $f$, the model also produces an ``attention'' value $a$ that denotes the relative confidence of the current prediction compared to those from other iterations or modules. The fused output is then a weighted combination of all predictions using these attentions. Mathematically, if the model rolls out $I$ times and outputs $N{=}2I{+}1$ predictions $f_n$ ($I$ local, $I$ global, and $1$ from the plain ConvNet) with attentions $a_n$, the final output $f$ is calculated as:
\begin{equation}\label{att}
f = \sum_{n}{w_n f_n}, \quad\mathrm{where}\quad w_n=\frac{\exp(-a_n)}{\sum_{n'}{\exp(-a_{n'})}}.
\end{equation}
Note again that $f_n$ here is the logits before soft-max, which are then activated to produce $p_n$. The introduction of attention allows the model to intelligently choose feasible predictions from different modules and iterations.
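The attention-weighted fusion of Eq.~\ref{att} amounts to a soft-max over the negated attentions; a minimal sketch (helper name ours):

```python
import numpy as np

def fuse_predictions(logits, attentions):
    """Fuse the N = 2I + 1 per-module logits f_n with attention values
    a_n via Eq. 5: w_n = softmax over -a_n, f = sum_n w_n f_n.
    logits: (N, C) array-like; attentions: (N,). Returns fused logits."""
    a = np.asarray(attentions, dtype=float)
    w = np.exp(-a)
    w = w / w.sum()  # soft-max over -a_n
    return (w[:, None] * np.asarray(logits, dtype=float)).sum(axis=0)
```

With equal attentions the fusion reduces to a plain average of the logits; a large $a_n$ suppresses that prediction's contribution.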
\subsection{Training\label{train}}
Finally, the overall framework is trained end-to-end, with a total loss consisting of: a) the plain ConvNet loss $\mathcal{L}_{0}$; b) the local module losses $\mathcal{L}^l_{i}$; c) the global module losses $\mathcal{L}^g_{i}$; and d) the final prediction loss with attentions, $\mathcal{L}_f$.
Since we want our reasoning modules to focus more on the harder examples, we propose to simply \emph{re-weight} the examples in the loss, based on predictions from previous iterations. Formally, for region $r$ at iteration $i{\ge}1$, the cross-entropy loss for both modules is computed as:
\begin{equation}\label{reweight}
\mathcal{L}_{i}(r) = -\frac{\max(1 - p_{i-1}(r), \beta)}{\sum_{r'}\max(1 - p_{i-1}(r'), \beta)}\log(p_{i}(r)),
\end{equation}
where $p_{i}(r)$ is the soft-max output for the ground-truth class, and $\beta{\in}[0,1]$ controls the entropy of the weight distribution: when $\beta{=}1$, the distribution is uniform; when $\beta{=}0$, the entropy is minimized. In our experiments, $\beta$ is set to $0.5$. $p_{i-1}(r)$ is used as a feature, without back-propagation. For both the local and the global module, $p_{0}(r)$ is the output of the plain ConvNet.
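A sketch of the re-weighted cross-entropy over a batch of regions, written with the explicit minus sign of the negative log-likelihood (helper name ours; $p_{i-1}$ is treated as a constant, i.e. no gradient flows through the weights):

```python
import numpy as np

def reweighted_loss(p_prev, p_curr, beta=0.5):
    """Hard-example re-weighted cross-entropy (Eq. 6), summed over
    regions. p_prev, p_curr: (R,) soft-max probabilities of the
    ground-truth class at iterations i-1 and i. beta in [0, 1] flattens
    the weight distribution: beta = 1 gives uniform weights."""
    w = np.maximum(1.0 - np.asarray(p_prev, dtype=float), beta)
    w = w / w.sum()  # normalized per-region weights
    return -(w * np.log(np.asarray(p_curr, dtype=float))).sum()
```

Regions the previous iteration already classified confidently (high $p_{i-1}$) are clipped to weight $\beta$, so the modules focus on the remaining hard examples.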
\vspace{-0.05in}
\section{Experiments}
\vspace{-0.05in}
In this section we evaluate the effectiveness of our framework. We begin with our experimental setup, which includes the datasets we work with (Sec.~\ref{data}), the task we evaluate on (Sec.~\ref{task}), and the details of our implementation (Sec.~\ref{details}). We then discuss and analyze our results in Sec.~\ref{results} and Sec.~\ref{ablative}, respectively.
\subsection{Datasets and Graphs\label{data}}
Datasets are biased~\cite{torralba2011unbiased}. For context reasoning we naturally prefer scene-focused datasets~\cite{zhou2016semantic} over object-focused ones~\cite{russakovsky2015imagenet}. To showcase the capabilities of our system, we need a densely labeled dataset with a large number of classes. Finally, one benefit of using a knowledge graph is transfer across classes, so a dataset with a \emph{long-tail} distribution is an ideal test-bed. Satisfying all these constraints, ADE~\cite{zhou2016semantic} and Visual Genome (VG)~\cite{krishna2016visual}, where regions are densely labeled in an open vocabulary, are the main picks of our study.
For ADE, we use the publicly released training set ($20,210$ images) for training, and split the validation set ($2,000$ images) into {\tt val-1k} and {\tt test-1k} with $1,000$ images each. The original raw names are used because of their more detailed categorization~\cite{zhou2016semantic}. We filter out classes with fewer than five instances, which leaves us with $1,484$ classes. With the help of the parts annotations in the dataset, a commonsense knowledge graph is created with five types of edges between classes: a) ``is-part-of'' (\eg ``leg'' and ``chair''); b) ``is-kind-of'' (\eg ``jacket'' and ``clothes''); c) ``plural-form'' (\eg ``tree'' and ``trees''); d) ``horizontal-symmetry'' (\eg ``left-arm'' and ``right-arm''); e) ``similarity'' (\eg ``handle'' and ``knob''). Note that the first four types are directed edges, hence we also include their inverted versions.
For VG, the latest release (v$1.4$) is used. We split the entire set of $108,077$ images into $100$K, $4,077$ and $4$K images as the {\tt train}, {\tt val} and {\tt test} sets. Similar pre-processing is done as on ADE, except that we use synsets~\cite{russakovsky2015imagenet} instead of raw names due to the less consistent labels from multiple annotators; $3,993$ classes are used. For the knowledge graph between classes, we take advantage of the relationship annotations in the set and select the top $10$ most frequent relationships to automatically construct edges beyond the commonsense relationships built for ADE. For each type of relationship, the edge weights are normalized so that each row of the adjacency matrix sums to one. While this approach results in a noisier graph, it also allows us to demonstrate that our approach is scalable and robust to noise.
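The per-relationship row normalization can be sketched as follows (helper name ours; rows with no edges are left at zero):

```python
import numpy as np

def row_normalize(A):
    """Row-normalize an adjacency matrix so each row sums to one.
    Rows with no outgoing edges stay all-zero instead of dividing by 0."""
    A = np.asarray(A, dtype=float)
    s = A.sum(axis=1, keepdims=True)
    return np.divide(A, s, out=np.zeros_like(A), where=s > 0)
```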
Finally, we also show experiments on COCO~\cite{lin2014microsoft}. However, since it is detection-oriented -- it has only $80$ classes picked to be mutually exclusive and covers a smaller percentage of labeled pixels -- we only report results a) without the knowledge graph and b) without a test split ({\tt trainval35k}~\cite{chen2017spatial} for training and {\tt minival} for evaluation). This setup is for analysis purposes only.
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.1}
\renewcommand{\tabcolsep}{1.2mm}
\caption{\label{tab:final}{Main results on ADE {\tt test-1k} and VG {\tt test}. AP is average precision, AC is classification accuracy. Superscripts show the improvement $\nabla$ over the baseline.}}
\resizebox{1.0\linewidth}{!}{
\begin{tabular}{@{} C{0.5cm} !{\vrule} L{2.5cm} !{\vrule} x{1.2cm} x{1.2cm} !{\vrule} x{1.2cm} x{1.2cm} @{}}
\Xhline{1pt}
\multirow{2}{*}{$\%$} & \multirow{2}{*}{\textbf{Method}} & \multicolumn{2}{c!{\vrule}}{per-instance} & \multicolumn{2}{c}{per-class} \\
\Xcline{3-6}{0.5pt}
& & AP\textsuperscript{$\nabla$} & AC\textsuperscript{$\nabla$} & AP\textsuperscript{$\nabla$} & AC\textsuperscript{$\nabla$} \\
\Xhline{1pt}
\parbox[t]{2.5mm}{\multirow{7}{*}{\rotatebox[origin=c]{90}{\small ADE}}} & Baseline & 67.0 & 67.0 & 40.1 & 33.2 \\
& ~~~~{\small w/ ResNet-101} & 68.2 & 68.3 & 40.8 & 34.4 \\
& ~~~~{\small w/ $800$-input} & 68.2 & 68.2 & 41.0 & 34.3 \\
& ~~~~{\small Ensemble} & 68.7 & 68.8 & 42.9 & 35.3 \\
\Xcline{2-6}{0.5pt}
& Ours\textsubscript{-Local} & 71.6\textsuperscript{+4.6} & 71.7\textsuperscript{+4.7} & 47.9\textsuperscript{+7.8} & 38.7\textsuperscript{+5.7} \\
& Ours\textsubscript{-Global} & 69.8\textsuperscript{+2.8} & 69.8\textsuperscript{+2.8} & 44.5\textsuperscript{+4.4} & 36.8\textsuperscript{+3.6} \\
& Ours\textsubscript{-Final} & {\bf 72.6}\textsuperscript{+5.6} & {\bf 72.6}\textsuperscript{+5.6} & {\bf 48.5}\textsuperscript{+8.4} & {\bf 39.5}\textsuperscript{+6.3} \\
\Xhline{0.5pt}
\parbox[t]{2.5mm}{\multirow{7}{*}{\rotatebox[origin=c]{90}{\small VG}}} & Baseline & 49.1 & 49.6 & 16.9 & 12.1 \\
& ~~~~{\small w/ ResNet-101} & 50.3 & 50.8 & 18.0 & {\bf 13.0} \\
& ~~~~{\small w/ $800$-input} & 49.5 & 50.0 & 17.0 & 12.2 \\
& ~~~~{\small w/ Ensemble} & 50.2 & 50.7 & 17.7 & 12.3 \\
\Xcline{2-6}{0.5pt}
& Ours\textsubscript{-Local} & 51.4\textsuperscript{+2.3} & 51.9\textsuperscript{+2.3} & 18.8\textsuperscript{+1.9} & 12.8\textsuperscript{+0.7} \\
& Ours\textsubscript{-Global} & 50.9\textsuperscript{+1.8} & 51.5\textsuperscript{+1.9} & 18.3\textsuperscript{+1.4} & 12.6\textsuperscript{+0.5} \\
& Ours\textsubscript{-Final} & {\bf 51.7}\textsuperscript{+2.6} & {\bf 52.2}\textsuperscript{+2.6} & {\bf 19.1}\textsuperscript{+2.2} & 12.9\textsuperscript{+0.8} \\
\Xhline{1pt}
\end{tabular}
}
\vspace{-0.2in}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth]{examples}
\caption{{\small Qualitative examples from ADE {\tt test-1k} (best if zoomed-in). For regions highlighted in blue, the predictions from baseline and our model are compared. Other regions are also listed to provide the context. For example, the ``right-leg'' is less confused with ``left-leg'' after reasoning (top-left); the ``mouse'' on the ``desk'' is predicted despite low resolution (top-third); and ``detergent-dispenser'' is recognized given the context of ``washing-machine'' (top-right). At bottom-right we show a failure case where context does not help ``remote-control'', probably because it has never appeared on the ``night-table'' before and no semantic relationship is there to help.}\label{fig:examples}}
\vspace{-0.2in}
\end{figure*}
\subsection{Task and Evaluation\label{task}}
We evaluate our system on the task of region classification, where the goal is to assign labels to designated regions denoted by rectangular bounding boxes. For both training and testing, we use the provided ground-truth locations. We picked this task for three reasons. The {\bf first} concerns evaluation. As the number of classes in the vocabulary increases, \emph{missing} labels are inevitable, which is especially severe for object parts (\eg ``rim'', ``arm'') and related classes (\eg ``shoes'' \vs ``sneakers''), where external knowledge is valuable. With missing labels, fair evaluation becomes much more difficult, since one cannot tell whether a prediction is wrong or the label itself is missing. Interestingly, the same issue arises in other research areas (\eg recommendation systems~\cite{sarwar2001item} and link prediction~\cite{liben2007link}). Borrowing ideas from them, a practical solution is to evaluate \emph{only} on what we already know -- in our case, the ground-truth regions. {\bf Second}, although region classification is a simplified version of object detection and semantic segmentation, it maintains a richer set of labels, in particular including ``stuff'' classes like ``road'' and ``sky'' alongside object instances. Modeling ``stuff-object'' and instance-level relationships is a crucial capability that would be missed in a pure detection/segmentation setting. {\bf Finally}, as our experiments will show (Sec.~\ref{ablative}), while object detectors can be used off-the-shelf, the additional manually defined parameters and components in their pipelines (\eg the overlap threshold for a region to be positive/negative, predefined scale/aspect-ratio sets of anchors~\cite{ren2015faster}) limit how much context can help.
For example, after non-maximum suppression (NMS), highly overlapping objects (\eg ``window'' and ``shutter'') are suppressed, and ironically this is exactly where context reasoning could have helped. By instead feeding fixed regions directly for end-to-end learning, we can at least factorize the \emph{recognition} error from the \emph{localization} error~\cite{hoiem2012diagnosing} and get a clean view of how context helps discriminate confusing classes.
Since ADE is a segmentation dataset, we convert segmentation masks to bounding boxes. For object classes (\eg ``person''), each instance gets a separate box. Parts (\eg ``head'') and parts-of-parts (\eg ``nose'') are also included. For VG and COCO, the provided boxes are used directly.
For evaluation, we use classification accuracy (AC) and average precision (AP)~\cite{everingham2010pascal}. Note that since all regions are fixed with known labels, there is no need to set a region overlap threshold for AP. Results can be aggregated in two ways: the first (``per-class'') computes metrics separately for each class in the set and takes the mean; since the final scores are all taken from a calibrated soft-max output, a second way (``per-instance'') computes metrics simultaneously over all classes. Intuitively, ``per-class'' assigns more weight to instances from rare classes.
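As a sketch, the two aggregation modes for classification accuracy (AC) differ only in where the averaging happens (helper name ours; AP would additionally rank the calibrated scores):

```python
import numpy as np

def accuracies(y_true, y_pred):
    """Per-instance vs per-class classification accuracy (AC).
    Per-instance pools all regions together; per-class computes accuracy
    separately for each ground-truth class and takes the mean, which
    up-weights instances from rare classes."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    per_instance = float((y_true == y_pred).mean())
    per_class = float(np.mean([
        (y_pred[y_true == c] == c).mean() for c in np.unique(y_true)
    ]))
    return per_instance, per_class
```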
\subsection{Implementation Details\label{details}}
A simplified version of \emph{tf-faster-rcnn}\footnote{\url{https://github.com/endernewton/tf-faster-rcnn}} is used to implement our baseline for region classification, with the region proposal branch and bounding-box regression components removed. Unless otherwise noted, ResNet-50~\cite{he2016deep} pre-trained on ImageNet~\cite{russakovsky2015imagenet} is used as our backbone image classifier, and images are enlarged so that the shorter side is $600$ pixels during both training and testing. Specifically, full-image shared convolutional feature maps are computed up to the last \emph{conv4} layer. The ground-truth boxes are then used as regions of interest to compute region-specific features (crop and resize to $7{\times}7$ without max-pooling). All layers of \emph{conv5} and up are then applied to obtain the final feature for the baseline prediction $p_0$. Batch normalization parameters are fixed.
For the local module, we use the last \emph{conv4} layer as the mid-level features that feed the spatial memory $\mathcal{S}$. For the global module, the mid-level features are the final \emph{conv5} ($2048$-D) layer after average pooling. Both features are fused with the logits $f$ before soft-max and then fed into the memory cells. Word vectors from fastText~\cite{joulin2016fasttext} are used to represent each class; fastText extracts sub-word information and generalizes well to out-of-vocabulary words. ReLU is selected as the activation function. We roll out the reasoning modules $3$ times and concurrently update all regions at each iteration, as more iterations do not offer further help.
We apply stochastic gradient descent with momentum to optimize all the models, and use the validation set to tune hyper-parameters. Our final setups are: $5e^{-4}$ as the initial learning rate, reduced once ($0.1{\times}$) during fine-tuning; $1e^{-4}$ as weight decay; $0.9$ as momentum. For ADE, we train $320$K iterations and reduce learning rate at $280$K. For VG and COCO the numbers are $640$K/$500$K and $560$K/$320$K, respectively\footnote{Training longer still reduces cross-entropy, but drops both AP and AC.}. We use a single image per step, and the only data augmentation technique used during training is left-right flipping\footnote{The labels for class pairs like ``left-hand'' and ``right-hand'' are swapped for flipped images.}. No augmentation is used in testing.
\subsection{Main Results\label{results}}
Quantitative results on ADE {\tt test-1k} and VG {\tt test} are shown in Tab.~\ref{tab:final}. Besides the plain ConvNet $p_0$, we add three more baselines. First, we use ResNet-101 as the backbone to see whether performance benefits from deeper networks. Second, we increase the input image size to a shorter side of $800$ pixels, which has been shown to help especially for small objects in context~\cite{lin2016feature}. Finally, to check whether our performance gain is merely a result of more parameters, we include a model ensemble as the third baseline, where the predictions of two separately trained baseline models are averaged.
As can be seen, our reasoning modules perform much better than all the baselines on ADE. The local module alone increases per-class AP by $7.8$ absolute points. Although the global module alone is not as effective ($4.4\%$ improvement), the performance gain it offers is \emph{complementary} to the local module, and combining both modules we arrive at an AP of $48.5\%$, compared to the baseline AP of $40.1\%$. On the other hand, a deeper network and a larger input size only help by ${\sim}1\%$, less than model ensembles. Additionally, our models achieve higher gains on the per-class metrics than on the per-instance ones, indicating that \emph{rare} classes are helped more -- a nice property for learning from few examples. Some qualitative results are shown in Fig.~\ref{fig:examples}.
We also report speed for future reference. On a Titan Xp, the final model on ADE trains at $0.344$s per iteration, compared to the baseline ResNet-50 at $0.163$s and ResNet-101 at $0.209$s. For testing, our model takes $0.165$s, whereas ResNet-50 takes $0.136$s and ResNet-101 $0.156$s. We believe the additional cost is minimal with regard to the extra accuracy.
We see a similar but less significant trend on VG. This can potentially be a result of \emph{noisier} labels -- for ADE (and COCO, shown later), the per-instance AP and AC values are within $0.1\%$ of each other, intuitively suggesting that \emph{higher} scores usually correspond to correct classifications. However, on VG the difference is ${\sim}0.5\%$, meaning more of the highly confident predictions are classified incorrectly, which is likely caused by missing ground-truths.
\subsection{Analysis\label{ablative}}
Our analysis is divided into two major parts. In the first part, we conduct a thorough ablative analysis of the framework we have built. Due to space limitations, we only report results on ADE here in Tab.~\ref{tab:contribute}; for more analysis on VG, please check our supplementary material.
As can be seen, re-weighting hard examples with Eq.~\ref{reweight} helps by around $0.5\%$ regardless of the reasoning modules. The spatial memory $\mathcal{S}$ is critical in the local module -- if it is replaced by feeding the last \emph{conv4} layer directly, performance drops almost to the baseline. The local context aggregator $\mathcal{C}$ is less influential for ADE, since the regions, including background, are densely labeled. A different story takes place in the global module: removing the reasoning module $\mathcal{R}$ steeply drops performance, whereas further removing the memory $\mathcal{M}$ does not hurt much. Finally, for our full pipeline, removing cross-feeding and reducing the number of iterations both result in worse performance.
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.1}
\renewcommand{\tabcolsep}{1.2mm}
\definecolor{LightGreen}{rgb}{0.75,1,0.75}
\definecolor{LightRed}{rgb}{1,0.75,0.75}
\definecolor{LightBlue}{rgb}{0.75,0.75,1}
\caption{\label{tab:contribute}{Ablative analysis on ADE {\tt test-1k}. In the first row of each block we repeat the Local, Global and Final results from Tab.~\ref{tab:final}. For the others, see Sec.~\ref{ablative} for details.}}
\resizebox{1.0\linewidth}{!}{
\begin{tabular}{@{} C{0.5cm} !{\vrule} L{2.5cm} !{\vrule} x{1.2cm} x{1.2cm} !{\vrule} x{1.2cm} x{1.2cm} @{}}
\Xhline{1pt}
\multirow{2}{*}{$\%$} & \multirow{2}{*}{\textbf{Analysis}} & \multicolumn{2}{c!{\vrule}}{per-instance} & \multicolumn{2}{c}{per-class} \\
\Xcline{3-6}{0.5pt}
& & AP & AC & AP & AC \\
\Xhline{1pt}
\parbox[t]{2.5mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\small Local}}} & Ours\textsubscript{-Local} & 71.6 & 71.7 & 47.9 & 38.7 \\
& ~~~~{\small w/o re-weight} & 71.3 & 71.3 & 46.7 & 37.9 \\
& ~~~~{\small w/o $\mathcal{C}$} & 70.9 & 71.0 & 46.1 & 37.5 \\
& ~~~~{\small w/o $\mathcal{S}$} & 67.6 & 67.6 & 42.1 & 34.4 \\
\Xhline{0.5pt}
\parbox[t]{2.5mm}{\multirow{6}{*}{\rotatebox[origin=c]{90}{\small Global}}} & Ours\textsubscript{-Global} & 69.8 & 69.8 & 44.5 & 36.8 \\
& ~~~~{\small w/o re-weight} & 69.2 & 69.2 & 43.8 & 36.7 \\
& ~~~~{\small w/o spatial} & 67.8 & 67.8 & 41.5 & 35.0 \\
& ~~~~{\small w/o semantic} & 69.1 & 69.2 & 43.9 & 35.9 \\
& ~~~~{\small w/o $\mathcal{R}$} & 67.1 & 67.2 & 41.5 & 34.5 \\
& ~~~~{\small w/o $\mathcal{M}$ \& $\mathcal{R}$} & 67.1 & 67.1 & 41.0 & 34.0 \\
\Xhline{0.5pt}
\parbox[t]{2.5mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\small Final}}} & Ours\textsubscript{-Final} & 72.6 & 72.6 & 48.5 & 39.5 \\
& ~~~~{\small w/o re-weight} & 72.1 & 72.2 & 47.3 & 38.6 \\
& ~~~~{\small w/o cross-feed} & 72.2 & 72.2 & 47.6 & 39.0 \\
& ~~~~{\small $2$ iterations} & 71.9 & 72.0 & 48.1 & 39.0 \\
\Xhline{1pt}
\end{tabular}
}
\vspace{-0.1in}
\end{table}
\noindent{\bf Missing Regions.} So far we have shown results when all the regions are present. Next, we want to analyze whether our framework is robust to missing regions, \ie when some percentage of regions is not used for reasoning. This will be a common scenario if we use our framework in the detection setting -- the underlying region proposal network~\cite{ren2015faster} may itself miss some regions. We perform this set of experiments on COCO, since its regions are object-focused.
We test three variations. In the first, the same region classification pipeline is applied as-is. In the other two, we drop regions. While we could have done this randomly, we simulate the real-world scenario by using region proposals from faster R-CNN~\cite{ren2015faster} ($1190$K/$900$K, {\tt minival} detection mAP $32.4\%$) for testing, where $300$ region proposals after NMS are applied to filter the ground-truth regions (max IoU${>}\delta$). Evaluation is only done on the remaining regions. Here we choose not to use region proposals directly, since the model has seen only ground-truth regions. The two dropping variations are: a) ``pre'', where the regions are filtered before inference, \ie only the remaining ground-truths are fed for reasoning; and b) ``post'', where regions are filtered after inference. Note that for the baseline, ``pre'' and ``post'' make no difference performance-wise.
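The IoU-based filtering step can be sketched as follows; the box format $(x_1, y_1, x_2, y_2)$ and the helper names are assumptions for illustration, not code released with the paper:

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def keep_ground_truths(gt_boxes, proposals, delta):
    """Keep a ground-truth region iff its max IoU with any of the region
    proposals exceeds delta; evaluation then runs on the survivors."""
    return [g for g in gt_boxes
            if max((iou(g, p) for p in proposals), default=0.0) > delta]
```

In the "pre" variant only the surviving boxes are fed to the reasoning modules; in "post", all ground truths are fed and the filtering is applied to the outputs before evaluation.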
\begin{table}[t]
\centering
\renewcommand{\arraystretch}{1.1}
\renewcommand{\tabcolsep}{1.2mm}
\caption{\label{tab:coco}{Results with missing regions when region proposals are used. COCO {\tt minival} is used since it is more detection oriented. {\bf pre} filters regions before inference, and {\bf post} filters after inference.}}
\resizebox{1.0\linewidth}{!}{
\begin{tabular}{@{} L{1.8cm} !{\vrule} C{0.6cm} C{0.6cm} !{\vrule} x{1.2cm} x{1.2cm} !{\vrule} x{1.2cm} x{1.2cm} @{}}
\Xhline{1pt}
\multirow{2}{*}{\textbf{Method}} & \multirow{2}{*}{\bf \small pre} & \multirow{2}{*}{\bf \small post} & \multicolumn{2}{c!{\vrule}}{per-instance} & \multicolumn{2}{c}{per-class} \\
\Xcline{4-7}{0.5pt}
& & & AP\textsuperscript{$\nabla$} & AC\textsuperscript{$\nabla$} & AP\textsuperscript{$\nabla$} & AC\textsuperscript{$\nabla$} \\
\Xhline{1pt}
Baseline & & & 83.2 & 83.2 & 83.7 & 75.9 \\
Ours\textsubscript{-Local} & & & 84.9\textsuperscript{+1.7} & 84.9\textsuperscript{+1.7} & 85.8\textsuperscript{+2.1} & 77.6\textsuperscript{+1.7} \\
Ours\textsubscript{-Global} & & & 85.6\textsuperscript{+2.4} & 85.7\textsuperscript{+2.5} & 86.9\textsuperscript{+3.2} & 78.2\textsuperscript{+2.3} \\
Ours\textsubscript{-Final} & & & {\bf 86.0}\textsuperscript{+2.8} & {\bf 86.0}\textsuperscript{+2.8} & {\bf 87.4}\textsuperscript{+3.7} & {\bf 79.0}\textsuperscript{+3.1} \\
\Xhline{0.5pt}
Baseline & - & - & 87.0 & 87.0 & 87.7 & 80.2 \\
Ours\textsubscript{-Final} & \cmark & & 88.6\textsuperscript{+1.6} & 88.6\textsuperscript{+1.6} & 89.9\textsuperscript{+2.2} & {\bf 82.6}\textsuperscript{+2.4} \\
Ours\textsubscript{-Final} & & \cmark & {\bf 88.8}\textsuperscript{+1.8} & {\bf 88.8}\textsuperscript{+1.8} & {\bf 90.1}\textsuperscript{+2.4} & 82.5\textsuperscript{+2.3} \\
\Xhline{1pt}
\end{tabular}
}
\vspace{-0.1in}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=1.\linewidth]{trend-new.pdf}
\caption{{\small Trends of recall and per-class AP when varying IoU threshold $\delta$ from $0$ to $.9$ to drop regions. See text for details.}\label{fig:trend}}
\vspace{-0.2in}
\end{figure}
The results are summarized in Tab.~\ref{tab:coco}. Interestingly, despite lacking a knowledge graph, our global module works better than the local module with the region graph alone, likely because it allows direct region-to-region communication even for far-apart pairs. Combining the two, we report a $3.7\%$ absolute advantage in per-class AP over the baseline, even with all classes being objects -- no ``stuff'' classes involved.
In Fig.~\ref{fig:trend}, we vary $\delta$ from $0$ to $.9$: $0$ keeps all regions and $0.9$ drops the most. As the trend shows, while the reasoning module suffers when regions are dropped, it is quite resilient and the performance degradation is smooth. For example (listed in Tab.~\ref{tab:coco}), with an IoU threshold $\delta$ of $0.5$ that recalls $78.1\%$ of the ground-truth boxes, we still outperform the baseline by $2.4\%$ in the ``post'' setting, and by $2.2\%$ in ``pre'', where not all regions can be fed for reasoning. The lower gap implies that a) region proposals usually correspond to easy examples where less context is needed, and b) context reasoning frameworks like ours benefit from more known regions. At $\delta{=}.8$ the recall ($30.5\%$) is so small that it cannot afford much reasoning, and at $\delta{=}.9$ (recall $3.9\%$), reasoning even hurts performance.
\vspace{-0.05in}
\section{Conclusion}
\vspace{-0.05in}
We presented a novel framework for iterative visual reasoning. Beyond convolutions, it uses a graph to encode spatial and semantic relationships between regions and classes, and passes messages on the graph. We show strong performance over plain ConvNets, \eg achieving an $8.4\%$ absolute gain on ADE and $3.7\%$ on COCO. Analysis also shows that our reasoning framework is resilient to missing regions caused by current region proposal approaches.
\noindent {\bf Acknowledgements}: This work was supported in part by ONR MURI N000141612007. XC would also like to thank Shengyang Dai and Google Cloud AI team for support during the internship.
{\small
\bibliographystyle{ieee}
}
\section{Detailed Architecture}\label{section:supplementary_material_A}
The detailed architecture of TranSTAM is given in Table~\ref{table:achitecture}.
\begin{table}[htp]
\begin{center}
\caption{Architecture of TranSTAM. $H_i$ and $D_i$ are the number of heads and the embedding feature dimension in the $i$th MHSA module. $R_i$ is the feature-dimension expansion ratio in the $i$th MLP layer. $L_i$ is the feature dimension in the $i$th Linear layer.}
\label{table:achitecture}
\begin{tabular}{l | c | c| c }
\hline
& Layer Name & Tracklet Path & Detection Path \\
\hline
\multirow{2}{*}{Proj.} & Linear & \multicolumn{2}{c}{$L_1 = 256$} \\
& MLP & \multicolumn{2}{c}{$L_1=256, N = 3$}\\
\hline
\multirow{4}{*}{Encoder \& Decoder} & MHSA & \multirow{4}{*}{
$\left[\begin{array}{c}
H_1 = 8, D_1 = 256 \\
\\
R_1 = 4 \\
(L_1=5) \times 8 \\
\end{array}\right]\times 2$} & \multirow{4}{*}{
$\left[\begin{array}{c}
H_1 = 8, D_1 = 256 \\
H_2 = 8, D_2 = 256 \\
R_1 = 4 \\
(L_1=5, L_2 = 5) \times 8 \\
\end{array}\right] \times 2 $} \\
& MHSA & & \\
& MLP & & \\
& Linear & & \\
\hline
Head & MLP & \multicolumn{2}{c}{$R_2 = 4$}\\
\hline
\multicolumn{2}{c}{Params} & \multicolumn{2}{c}{10.07M}\\
\hline
\end{tabular}
\end{center}
\end{table}
\clearpage
\bibliographystyle{splncs04}
\section{Introduction}
This document serves as an example submission. It illustrates the format
we expect authors to follow when submitting a paper to ECCV.
At the same time, it gives details on various aspects of paper submission,
including preservation of anonymity and how to deal with dual submissions,
so we advise authors to read this document carefully.
\section{Initial Submission}
\subsection{Language}
All manuscripts must be in English.
\subsection{Paper length}
Papers submitted for review should be complete.
The length should match that intended for final publication.
Papers accepted for the conference will be allocated 14 pages (plus additional pages for references) in the proceedings.
Note that the allocated 14 pages do not include the references. The reason for this policy
is that we do not want authors to omit references for sake of space limitations.
Papers with more than 14 pages (excluding references) will be rejected without review.
This includes papers where the margins and
formatting are deemed to have been significantly altered from those
laid down by this style guide. Do not use TIMES, or any font other than the default. The reason such papers will not be reviewed is that there is no provision for supervised revisions of manuscripts. The reviewing process cannot determine the suitability of the paper for presentation in 14 pages if it is reviewed in 16.
\subsection{Paper ID}
It is imperative that the paper ID is mentioned on each page of the manuscript.
The paper ID is a number automatically assigned to your submission when
registering your paper submission on the submission site.
All lines should be numbered in the initial submission, as in this example document. This makes reviewing more efficient, because reviewers can refer to a line on a page. Line numbering is removed in the camera-ready.
\subsection{Mathematics}
Please number all of your sections and displayed equations. Again,
this makes reviewing more efficient, because reviewers can refer to a
line on a page. Also, it is important for readers to be able to refer
to any particular equation. Just because you didn't refer to it in
the text doesn't mean some future reader might not need to refer to
it. It is cumbersome to have to use circumlocutions like ``the
equation second from the top of page 3 column 1''. (Note that the
line numbering will not be present in the final copy, so is not an
alternative to equation numbers). Some authors might benefit from
reading Mermin's description of how to write mathematics:
\url{www.pamitc.org/documents/mermin.pdf}.
\section{Policies}
To avoid confusion, in case of discrepancies between policies mentioned here and those in the ECCV 2022 webpage, the web page is the one that is updated regularly and its policies shall overrule those appearing here.
\subsection{Review Process}
By submitting a paper to ECCV, the authors agree to the review process and understand that papers are processed by the Toronto system to match each manuscript to the best possible chairs and reviewers.
\subsection{Confidentiality}
The review process of ECCV is confidential. Reviewers are volunteers not part of the ECCV organisation and their efforts are greatly appreciated. The standard practice of keeping all information confidential during the review is part of the standard communication to all reviewers. Misuse of confidential information is a severe professional failure and appropriate measures will be taken when brought to the attention of ECCV organizers. It should be noted, however, that the organisation of ECCV is not and cannot be held responsible for the consequences when reviewers break confidentiality.
Accepted papers will be published by Springer (with appropriate copyrights) electronically up to three weeks prior to the main conference. Please make sure to discuss this issue with your legal advisors as it pertains to public disclosure of the contents of the papers submitted.
\subsection{Dual and Double Submissions}
By submitting a manuscript to ECCV 2022, authors acknowledge that it has not been previously published or accepted for publication in substantially similar form in any peer-reviewed venue including journal, conference, or workshop. Furthermore, no paper substantially similar in content has been or will be submitted to a journal, another conference or workshop during the review period (March 07, 2022 – July 3, 2022). The authors also attest that they did not submit substantially similar submissions to ECCV 2022. Violation of any of these conditions will lead to rejection and the violation will be reported to the other venue or journal, which will typically lead to rejection there as well.
The goals of the dual submission policy are (i) to have exciting new work be published for the first time at ECCV 2022, and (ii) to avoid duplicating the efforts of the reviewers.
Therefore, all papers under review are checked for dual submissions and this is not allowed, independent of the page size of submissions.
For already published papers, our policy is based upon the following particular definition of ``publication''. A publication, for the purposes of the dual submission policy, is defined to be a written work longer than four pages that was submitted for review by peers for either acceptance or rejection, and, after review, was accepted. In particular, this definition of publication does not depend upon whether such an accepted written work appears in a formal proceedings or whether the organizers declare that such work ``counts as a publication''.
An arXiv.org paper does not count as a publication because it was not peer-reviewed for acceptance. The same is true for university technical reports. However, this definition of publication does include peer-reviewed workshop papers, even if they do not appear in a proceedings, if their length is more than 4 pages including citations. Given this definition, any submission to ECCV 2022 should not have substantial overlap with prior publications or other concurrent submissions. As a rule of thumb, the ECCV 2022 submission should contain no more than 20 percent of material from previous publications.
\subsection{Requirements for publication}
Publication of the paper in the ECCV 2022 proceedings of Springer requires that at least one of the authors registers for the conference and presents the paper there. It also requires that a camera-ready version that satisfies all formatting requirements is submitted before the camera-ready deadline.
\subsection{Double blind review}
\label{sec:blind}
ECCV reviewing is double blind, in that authors do not know the names of the area chair/reviewers of their papers, and the area chairs/reviewers cannot, beyond reasonable doubt, infer the names of the authors from the submission and the additional material. Avoid providing links to websites that identify the authors. Violation of any of these guidelines may lead to rejection without review. If you need to cite a different paper of yours that is being submitted concurrently to ECCV, the authors should (1) cite these papers, (2) argue in the body of your paper why your ECCV paper is non-trivially different from these concurrent submissions, and (3) include anonymized versions of those papers in the supplemental material.
Many authors misunderstand the concept of anonymizing for blind
review. Blind review does not mean that one must remove
citations to one's own work. In fact it is often impossible to
review a paper unless the previous citations are known and
available.
Blind review means that you do not use the words ``my'' or ``our''
when citing previous work. That is all. (But see below for
technical reports).
Saying ``this builds on the work of Lucy Smith [1]'' does not say
that you are Lucy Smith, it says that you are building on her
work. If you are Smith and Jones, do not say ``as we show in
[7]'', say ``as Smith and Jones show in [7]'' and at the end of the
paper, include reference 7 as you would any other cited work.
An example of a bad paper:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of our
previous paper [1], and show it to be inferior to all
previously known methods. Why the previous paper was
accepted without this analysis is beyond me.
[1] Removed for blind review
\end{quote}
An example of an excellent paper:
\begin{quote}
\begin{center}
An analysis of the frobnicatable foo filter.
\end{center}
In this paper we present a performance analysis of the
paper of Smith [1], and show it to be inferior to
all previously known methods. Why the previous paper
was accepted without this analysis is beyond me.
[1] Smith, L. and Jones, C. ``The frobnicatable foo
filter, a fundamental contribution to human knowledge''.
Nature 381(12), 1-213.
\end{quote}
If you are making a submission to another conference at the same
time, which covers similar or overlapping material, you may need
to refer to that submission in order to explain the differences,
just as you would if you had previously published related work. In
such cases, include the anonymized parallel
submission~\cite{Authors14} as additional material and cite it as
\begin{quote}
1. Authors. ``The frobnicatable foo filter'', BMVC 2014 Submission
ID 324, Supplied as additional material {\tt bmvc14.pdf}.
\end{quote}
Finally, you may feel you need to tell the reader that more
details can be found elsewhere, and refer them to a technical
report. For conference submissions, the paper must stand on its
own, and not {\em require} the reviewer to go to a techreport for
further details. Thus, you may say in the body of the paper
``further details may be found in~\cite{Authors14b}''. Then
submit the techreport as additional material. Again, you may not
assume the reviewers will read this material.
Sometimes your paper is about a problem which you tested using a tool which
is widely known to be restricted to a single institution. For example,
let's say it's 1969, you have solved a key problem on the Apollo lander,
and you believe that the ECCV audience would like to hear about your
solution. The work is a development of your celebrated 1968 paper entitled
``Zero-g frobnication: How being the only people in the world with access to
the Apollo lander source code makes us a wow at parties'', by Zeus.
You can handle this paper like any other. Don't write ``We show how to
improve our previous work [Anonymous, 1968]. This time we tested the
algorithm on a lunar lander [name of lander removed for blind review]''.
That would be silly, and would immediately identify the authors. Instead
write the following:
\begin{quotation}
\noindent
We describe a system for zero-g frobnication. This
system is new because it handles the following cases:
A, B. Previous systems [Zeus et al. 1968] didn't
handle case B properly. Ours handles it by including
a foo term in the bar integral.
...
The proposed system was integrated with the Apollo
lunar lander, and went all the way to the moon, don't
you know. It displayed the following behaviours
which show how well we solved cases A and B: ...
\end{quotation}
As you can see, the above text follows standard scientific convention,
reads better than the first version, and does not explicitly name you as
the authors. A reviewer might think it likely that the new paper was
written by Zeus, but cannot make any decision based on that guess.
He or she would have to be sure that no other authors could have been
contracted to solve problem B. \\
For sake of anonymity, it's recommended to omit acknowledgements
in your review copy. They can be added later when you prepare the final copy.
\section{Manuscript Preparation}
This is an edited version of Springer LNCS instructions adapted
for ECCV 2022 first paper submission.
You are strongly encouraged to use \LaTeX2$_\varepsilon$ for the
preparation of your
camera-ready manuscript together with the corresponding Springer
class file \verb+llncs.cls+.
We would like to stress that the class/style files and the template
should not be manipulated and that the guidelines regarding font sizes
and format should be adhered to. This is to ensure that the end product
is as homogeneous as possible.
\subsection{Printing Area}
The printing area is $122 \; \mbox{mm} \times 193 \;
\mbox{mm}$.
The text should be justified to occupy the full line width,
so that the right margin is not ragged, with words hyphenated as
appropriate. Please fill pages so that the length of the text
is no less than 180~mm.
\subsection{Layout, Typeface, Font Sizes, and Numbering}
Use 10-point type for the name(s) of the author(s) and 9-point type for
the address(es) and the abstract. For the main text, please use 10-point
type and single-line spacing.
We recommend using Computer Modern Roman (CM) fonts, which is the default font in this template.
Italic type may be used to emphasize words in running text. Bold
type and underlining should be avoided.
With these sizes, the interline distance should be set so that some 45
lines occur on a full-text page.
\subsubsection{Headings.}
Headings should be capitalized
(i.e., nouns, verbs, and all other words
except articles, prepositions, and conjunctions should be set with an
initial capital) and should,
with the exception of the title, be aligned to the left.
Words joined by a hyphen are subject to a special rule. If the first
word can stand alone, the second word should be capitalized.
The font sizes
are given in Table~\ref{table:headings}.
\setlength{\tabcolsep}{4pt}
\begin{table}
\begin{center}
\caption{Font sizes of headings. Table captions should always be
positioned {\it above} the tables. The final sentence of a table
caption should end without a full stop}
\label{table:headings}
\begin{tabular}{lll}
\hline\noalign{\smallskip}
Heading level & Example & Font size and style\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
Title (centered) & {\Large \bf Lecture Notes \dots} & 14 point, bold\\
1st-level heading & {\large \bf 1 Introduction} & 12 point, bold\\
2nd-level heading & {\bf 2.1 Printing Area} & 10 point, bold\\
3rd-level heading & {\bf Headings.} Text follows \dots & 10 point, bold
\\
4th-level heading & {\it Remark.} Text follows \dots & 10 point,
italic\\
\hline
\end{tabular}
\end{center}
\end{table}
\setlength{\tabcolsep}{1.4pt}
Here are some examples of headings: ``Criteria to Disprove Context-Freeness of
Collage Languages'', ``On Correcting the Intrusion of Tracing
Non-deterministic Programs by Software'', ``A User-Friendly and
Extendable Data Distribution System'', ``Multi-flip Networks:
Parallelizing GenSAT'', ``Self-determinations of Man''.
\subsubsection{Lemmas, Propositions, and Theorems.}
The numbers accorded to lemmas, propositions, and theorems etc. should
appear in consecutive order, starting with the number 1, and not, for
example, with the number 11.
\subsection{Figures and Photographs}
\label{sect:figures}
Please produce your figures electronically and integrate
them into your text file. For \LaTeX\ users we recommend using package
\verb+graphicx+ or the style files \verb+psfig+ or \verb+epsf+.
Check that in line drawings, lines are not
interrupted and have constant width. Grids and details within the
figures must be clearly readable and may not be written one on top of
the other. Line drawings should have a resolution of at least 800 dpi
(preferably 1200 dpi).
For digital halftones 300 dpi is usually sufficient.
The lettering in figures should have a height of 2~mm (10-point type).
Figures should be scaled up or down accordingly.
Please do not use any absolute coordinates in figures.
Figures should be numbered and should have a caption which should
always be positioned {\it under} the figures, in contrast to the caption
belonging to a table, which should always appear {\it above} the table.
Please center the captions between the margins and set them in
9-point type
(Fig.~\ref{fig:example} shows an example).
The distance between text and figure should be about 8~mm, the
distance between figure and caption about 5~mm.
\begin{figure}
\centering
\includegraphics[height=6.5cm]{eijkel2}
\caption{One kernel at $x_s$ ({\it dotted kernel}) or two kernels at
$x_i$ and $x_j$ ({\it left and right}) lead to the same summed estimate
at $x_s$. This shows a figure consisting of different types of
lines. Elements of the figure described in the caption should be set in
italics,
in parentheses, as shown in this sample caption. The last
sentence of a figure caption should generally end without a full stop}
\label{fig:example}
\end{figure}
If possible (e.g. if you use \LaTeX) please define figures as floating
objects. \LaTeX\ users, please avoid using the location
parameter ``h'' for ``here''. If you have to insert a pagebreak before a
figure, please ensure that the previous page is completely filled.
\subsection{Formulas}
Displayed equations or formulas are centered and set on a separate
line (with an extra line or halfline space above and below). Displayed
expressions should be numbered for reference. The numbers should be
consecutive within the contribution,
with numbers enclosed in parentheses and set on the right margin.
For example,
\begin{align}
\psi (u) & = \int_{0}^{T} \left[\frac{1}{2}
\left(\Lambda_{0}^{-1} u,u\right) + N^{\ast} (-u)\right] dt \; .
\end{align}
Please punctuate a displayed equation in the same way as ordinary
text but with a small space before the end punctuation.
\subsection{Footnotes}
The superscript numeral used to refer to a footnote appears in the text
either directly after the word to be discussed or, in relation to a
phrase or a sentence, following the punctuation sign (comma,
semicolon, or full stop). Footnotes should appear at the bottom of
the
normal text area, with a line of about 2~cm in \TeX\ and about 5~cm in
Word set
immediately above them.\footnote{The footnote numeral is set flush left
and the text follows with the usual word spacing. Second and subsequent
lines are indented. Footnotes should end with a full stop.}
\subsection{Program Code}
Program listings or program commands in the text are normally set in
typewriter font, e.g., CMTT10 or Courier.
\noindent
{\it Example of a Computer Program}
\begin{verbatim}
program Inflation (Output)
{Assuming annual inflation rates of 7%, 8%, and 10%,
 respectively, for the next 10 years};
const
MaxYears = 10;
var
Year: 0..MaxYears;
Factor1, Factor2, Factor3: Real;
begin
Year := 0;
Factor1 := 1.0; Factor2 := 1.0; Factor3 := 1.0;
WriteLn('Year  7% 8% 10%');
repeat
Year := Year + 1;
Factor1 := Factor1 * 1.07;
Factor2 := Factor2 * 1.08;
Factor3 := Factor3 * 1.10;
WriteLn(Year:5,Factor1:7:3,Factor2:7:3,Factor3:7:3)
until Year = MaxYears
end.
\end{verbatim}
\noindent
{\small (Example from Jensen K., Wirth N. (1991) Pascal user manual and
report. Springer, New York)}
\subsection{Citations}
The list of references is headed ``References" and is not assigned a
number
in the decimal system of headings. The list should be set in small print
and placed at the end of your contribution, in front of the appendix,
if one exists.
Please do not insert a pagebreak before the list of references if the
page is not completely filled.
An example is given at the
end of this information sheet. For citations in the text please use
square brackets and consecutive numbers: \cite{Alpher02},
\cite{Alpher03}, \cite{Alpher04} \dots
\section{Submitting a Camera-Ready for an Accepted Paper}
\subsection{Converting Initial Submission to Camera-Ready}
To convert a submission file into a camera-ready for an accepted paper:
\begin{enumerate}
\item First comment out \begin{verbatim}
\usepackage{ruler}
\end{verbatim} and the line that follows it.
\item The anonymous title part should be removed or commented out, and a proper author block should be inserted, for which a skeleton is provided in a commented-out version. These are marked in the source file as \begin{verbatim}
\end{verbatim} and \begin{verbatim}
\end{verbatim}
\item Please write out author names in full in the paper, i.e. full given and family names. If any authors have names that can be parsed into FirstName LastName in multiple ways, please include the correct parsing in a comment to the editors, below the \begin{verbatim}\author{}\end{verbatim} field.
\item Make sure you have inserted the proper Acknowledgments.
\end{enumerate}
\subsection{Preparing the Submission Package}
We need all the source files (LaTeX files, style files, special fonts, figures, bib-files) that are required to compile papers, as well as the camera ready PDF. For each paper, one ZIP-file called XXXX.ZIP (where XXXX is the zero-padded, four-digit paper ID) has to be prepared and submitted via the ECCV 2022 Submission Website, using the password you received with your initial registration on that site. The size of the ZIP-file may not exceed the limit of 60 MByte. The ZIP-file has to contain the following:
\begin{enumerate}
\item All source files, e.g. LaTeX2e files for the text, PS/EPS or PDF/JPG files for all figures.
\item PDF file named ``XXXX.pdf" that has been produced by the submitted source, where XXXX is the four-digit paper ID (zero-padded if necessary). For example, if your paper ID is 24, the filename must be 0024.pdf. This PDF will be used as a reference and has to exactly match the output of the compilation.
\item PDF file named ``XXXX-copyright.PDF": a scanned version of the signed copyright form (see ECCV 2022 Website, Camera Ready Guidelines for the correct form to use).
\item If you wish to provide supplementary material, the file name must be in the form XXXX-supp.pdf or XXXX-supp.zip, where XXXX is the zero-padded, four-digit paper ID as used in the previous step. Upload your supplemental file on the ``File Upload'' page as a single PDF or ZIP file of 100 MB in size or less. Only PDF and ZIP files are allowed for supplementary material. You can put anything in this file -- movies, code, additional results, accompanying technical reports -- anything that may make your paper more useful to readers. If your supplementary material includes video or image data, you are advised to use common codecs and file formats. This will make the material viewable by the largest number of readers (a desirable outcome). ECCV encourages authors to submit videos using an MP4 codec such as DivX contained in an AVI. Also, please submit a README text file with each video specifying the exact codec used and a URL where the codec can be downloaded. Authors should refer to the contents of the supplementary material appropriately in the paper.
\end{enumerate}
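As a sanity check before uploading, the zero-padded file names required above can be derived from the paper ID with a small script (ours, not part of the ECCV tooling):

```python
def expected_names(paper_id):
    """Return the file names required for a given ECCV paper ID,
    zero-padded to four digits (e.g. paper 24 -> 0024.pdf)."""
    pid = f"{int(paper_id):04d}"
    return {
        "camera_ready": f"{pid}.pdf",
        "copyright": f"{pid}-copyright.PDF",
        "supplementary": f"{pid}-supp.pdf",  # or f"{pid}-supp.zip"
    }
```

For paper ID 24 this yields 0024.pdf, 0024-copyright.PDF and 0024-supp.pdf, matching the conventions listed above.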
Check that the upload of your file (or files) was successful either by matching the file length to that on your computer, or by using the download options that will appear after you have uploaded. Please ensure that you upload the correct camera-ready PDF, renamed to XXXX.pdf as described in the previous step, as your camera-ready submission. Every year there is at least one author who accidentally submits the wrong PDF as their camera-ready submission.
Further considerations for preparing the camera-ready package:
\begin{enumerate}
\item Make sure to include any further style files and fonts you may have used.
\item References are to be supplied as BBL files to avoid omission of data during conversion from BIB to BBL.
\item Please do not send any older versions of papers. There should be one set of source files and one XXXX.pdf file per paper. Our typesetters require the author-created pdfs in order to check the proper representation of symbols, figures, etc.
\item Please remove unnecessary files (such as eijkel2.pdf and eijkel2.eps) from the source folder.
\item You may use sub-directories.
\item Make sure to use relative paths for referencing files.
\item Make sure the source you submit compiles.
\end{enumerate}
Springer is the first publisher to implement the ORCID identifier for proceedings, ultimately providing authors with a digital identifier that distinguishes them from every other researcher. ORCID (Open Researcher and Contributor ID) hosts a registry of unique researcher identifiers and a transparent method of linking research activities to these identifiers. This is achieved through embedding ORCID identifiers in key workflows, such as research profile maintenance, manuscript submissions, grant applications and patent applications.
\subsection{Most Frequently Encountered Issues}
Please kindly use the checklist below to deal with some of the most frequently encountered issues in ECCV submissions.
{\bf FILES:}
\begin{itemize}
\item My submission package contains ONE compiled pdf file for the camera-ready version to go on Springerlink.
\item I have ensured that the submission package has all the additional files necessary for compiling the pdf on a standard LaTeX distribution.
\item I have used the correct copyright form (with editor names pre-printed), and a signed pdf is included in the zip file with the correct file name.
\end{itemize}
{\bf CONTENT:}
\begin{itemize}
\item I have removed all \verb|\vspace| and \verb|\hspace| commands from my paper.
\item I have not used \verb|\thanks| or \verb|\footnote| commands and symbols for corresponding authors in the title (which is processed with scripts) and (optionally) used an Acknowledgement section for all the acknowledgments, at the end of the paper.
\item I have not used \verb|\cite| command in the abstract.
\item I have read the Springer author guidelines, and complied with them, including the point on providing full information on editors and publishers for each reference in the paper (Author Guidelines – Section 2.8).
\item I have entered a correct \verb|\titlerunning{}| command and selected a meaningful short name for the paper.
\item I have entered \verb|\index{Lastname,Firstname}| commands for names that are longer than two words.
\item I have used the same name spelling in all my papers accepted to ECCV and ECCV Workshops.
\item I have inserted the ORCID identifiers of the authors in the paper header (see http://bit.ly/2H5xBpN for more information).
\item I have not decreased the font size of any part of the paper (except tables) to fit into 14 pages; I understand Springer editors will remove such commands.
\end{itemize}
{\bf SUBMISSION:}
\begin{itemize}
\item All author names, titles, and contact author information are correctly entered in the submission site.
\item The corresponding author e-mail is given.
\item At least one author has registered by the camera ready deadline.
\end{itemize}
\section{Conclusions}
The paper ends with a conclusion.
\clearpage\mbox{}Page \thepage\ of the manuscript.
\clearpage\mbox{}Page \thepage\ of the manuscript.
This is the last page of the manuscript.
\par\vfill\par
Now we have reached the maximum size of the ECCV 2022 submission (excluding references).
References should start immediately after the main text, but can continue on p.15 if needed.
\clearpage
\bibliographystyle{splncs04}
\section{Introduction}\label{section:introduction}
Multiple object tracking (MOT) in videos is an important problem in many application domains. In particular, estimating the locations and motion of humans is of great interest in surveillance, business analytics, robotics and autonomous driving.
There is a significant amount of research in this domain~\cite{braso2020learning, kim2015multiple, schulter2017deep}, and most state-of-the-art MOT works follow the tracking-by-detection (TBD) paradigm which divides the MOT task into two sub-tasks: first, obtaining frame-by-frame object detections; second, associating the set of detections into trajectories.
In this paper, we also follow the TBD strategy, and focus on the data association part.
Both the appearance features and the spatial-temporal relationships are crucial cues for MOT. Traditional data association methods~\cite{Wojke2017simple} usually rely on hand-crafted rules to fuse these affinities. Also, the spatial-temporal relationships are modeled with manually designed models~\cite{luo2017multiple}, such as linear motion model~\cite{Breitenstein2009}, social force model~\cite{FengPengming2017}, or crowd motion pattern model~\cite{HuMin2008}.
The recent trend in MOT is heading towards leveraging deep learning to boost the tracking performance.
In particular, with the success of Transformer in various computer vision tasks, such as object detection~\cite{DETR} and segmentation~\cite{strudel2021}, there are a few pioneering investigations~\cite{chu2021transmot, meinhardt2021trackformer, transtrack, xu2021transcenter} in applying Transformer to MOT.
Both TrackFormer~\cite{meinhardt2021trackformer} and TransTrack~\cite{transtrack} are built on top of the DETR architecture~\cite{DETR}, and follow the joint-detection-and-tracking framework. To mitigate the occlusion problem inherent to anchor-based bounding-box tracking methods, TransCenter~\cite{xu2021transcenter} uses dense pixel-level multi-scale queries to replace the sparse queries.
As mentioned in~\cite{chu2021transmot}, the tracking performance of the above DETR-based trackers is not satisfactory, because they fail to model the long-term spatial-temporal dependencies.
Different from~\cite{meinhardt2021trackformer, transtrack, xu2021transcenter}, Chu \emph{et al.}~\cite{chu2021transmot} propose a spatial-temporal graph Transformer (TransMOT) model, which follows the TBD strategy and exploits the Transformer architecture to learn the affinities between the historical tracklets and the new detections.
In TransMOT, to effectively model the spatial relationship of the objects, an extra spatial graph convolutional neural network is added within the Transformer, which inevitably increases the model complexity.
In this paper, we propose a simple yet effective solution named TranSTAM to solve the MOT problem with Transformer. Similar to TransMOT~\cite{chu2021transmot}, our method also follows the TBD strategy and utilizes Transformer to learn the affinities between the tracked objects and the detections in the current frame.
Moreover, our method is solely based on the encoder-decoder architecture, and this compact network design achieves good performance at a lower computational cost than TransMOT.
To apply Transformer in MOT, we propose three simple yet effective positional encoding methods.
First, we propose an \textit{Absolute Spatial Positional Encoding} (ASPE) and a \textit{Relative Spatial-Temporal Positional Encoding} (RSTPE) method for representation learning.
As shown in Figure~\ref{fig:picture001}, ASPE seeks to obtain a representation that encodes the bounding box coordinates of each detection, and RSTPE captures the relative spatial-temporal relation among detections.
Equipped with these encodings, Transformer can better model the spatial-temporal relationships of detections and effectively integrate them with the appearance features.
Second, we propose an \textit{Assignment Positional Encoding} (APE) for affinity modeling.
With APE, the encoded features of all tracklets can be used when calculating the affinity between a certain detection-tracklet pair in the decoder, which enlarges the query's field of view to the global context and thereby improves the final tracking performance.
The main contributions of this paper are threefold:
(1) We propose a novel Transformer based method named TranSTAM for MOT, which enjoys a compact network design and is computationally efficient.
(2) We propose three simple yet effective positional encoding methods on the basis of the Transformer for representation learning and affinity modeling.
(3) Our method achieves significantly improved state-of-the-art results on multiple MOTChallenge benchmarks.
\section{Related Work}\label{section:relatedwork}
Most state-of-the-art MOT trackers follow the TBD paradigm.
The TBD framework generates tracklets by associating detections on a frame-by-frame basis for online applications~\cite{hu2020multi, Wojke2017simple, xu2019spatial, zhou2018deep, zhu2018online} or a batch basis for offline scenarios~\cite{berclaz2011multiple, braso2020learning, milan2015multi}.
Traditional data-association methods differ in the specific optimization methods, including network flow \cite{Pirsiavash2011Globally}, generalized maximum multi clique \cite{dehghan2015gmmcp}, linear programming \cite{jiang2007linear}, conditional random field \cite{yang2012online}, \emph{etc}.
However, the authors in \cite{bergmann2019tracking} showed that the significantly higher computational cost of these over-complicated optimization methods does not translate to significantly higher accuracy.
Recently, deep learning-based association algorithms have been gaining popularity in MOT~\cite{braso2020learning, chu2019famnet, peng2021lpc, schulter2017deep, xu2020train}.
Chu \emph{et al.}~\cite{chu2019famnet} proposed an end-to-end model, named FAMNet, to refine feature representation, affinity model and multi-dimensional assignment in a single deep network.
Xu \emph{et al.}~\cite{xu2020train} presented a differentiable Deep Hungarian Net (DHN) to approximate the Hungarian matching algorithm and provided a soft approximation of the optimal prediction-to-ground-truth assignment.
Schulter \emph{et al.}~\cite{schulter2017deep} designed a bi-level optimization framework which frames the optimization of a smoothed network flow problem as a differentiable function of the pairwise association costs.
Bras\'o \emph{et al.}~\cite{braso2020learning} modeled the non-learnable data-association problem as a differentiable edge classification task.
Dai \emph{et al.}~\cite{peng2021lpc} proposed a proposal-based learnable framework, which is similar to the two-stage object detector Faster RCNN and models MOT as a proposal generation, proposal scoring and trajectory inference paradigm. The proposal scoring can be solved by a learnable graph convolutional network.
Recently, Transformer based architectures are applied to various tasks such as object detection~\cite{DETR} and segmentation~\cite{strudel2021}.
There are also a few pioneering investigations~\cite{chu2021transmot, meinhardt2021trackformer, transtrack, xu2021transcenter} in applying Transformer to MOT.
Meinhardt \emph{et al.}~\cite{meinhardt2021trackformer} and Sun \emph{et al.}~\cite{transtrack} built their Transformer-based trackers on the DETR architecture~\cite{DETR}, and followed the joint-detection-and-tracking framework. One common feature of these two methods is that the data association is achieved with attention operations over the feature map of the current frame and track queries from the previous frames.
To mitigate the occlusion problem inherent to anchor-based bounding-box tracking methods, Xu \emph{et al.}~\cite{xu2021transcenter} replaced the sparse queries with dense pixel-level multi-scale queries.
As mentioned in~\cite{chu2021transmot}, the tracking performance of the above DETR-based trackers is not state-of-the-art, because they cannot model long-term spatial-temporal dependencies.
Different from~\cite{meinhardt2021trackformer, transtrack, xu2021transcenter}, Chu \emph{et al.}~\cite{chu2021transmot} proposed a spatial-temporal graph Transformer (TransMOT) model, which follows the TBD strategy and exploits the Transformer architecture to learn the affinities between the historical tracklets and the new detections.
In TransMOT, an extra spatial graph convolutional neural network is added within the Transformer to model the spatial relationship of the objects at each timestamp independently, which may limit the Transformer's ability in modeling global attention and be less computationally efficient.
Meanwhile, due to the lack of Positional Encoding (PE) in the decoder, the affinity estimation of each detection-tracklet pair can only rely on the local features within this tracklet.
It may reduce the ability to capture long-range dependencies among all tracklets through cross-attention, which is arguably the main source of the success of Transformer.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.90\textwidth]{framework_overview.png}
\caption{Overview of the proposed TranSTAM for online MOT. It follows the typical encoder and decoder architecture of Transformer. Given a set of $N$ tracklets till frame $t-1$ and $M$ detections at frame $t$, a CNN model and an absolute spatial positional encoding (ASPE) model is first adopted to extract appearance and spatial features for each detection, respectively. Then, the encoder learns discriminative features for each tracklet with the help of a relative spatial temporal positional encoding (RSTPE) module. All the features are concatenated to form the memory in decoder. Next, an assignment positional encoding (APE) is proposed to generate positional encodings for $N\times M$ queries and the decoder adopts the standard cross-attention mechanism to calculate the assignment matrix $\boldsymbol{A} \in \mathbb{R}^{N \times M}$. Finally, a simple Hungarian algorithm is adopted to generate the association results.}
\label{fig:picture001}
\end{figure*}
\section{Method}\label{section:method}
\subsection{Framework Overview}\label{subsectionFO}
As shown in Figure~\ref{fig:picture001}, our framework is built upon the tracking-by-detection paradigm, and aims at tracking multiple objects in an online fashion.
Assume that, at time step $t$, the framework maintains a set of $N$ tracklets $\Xi =\{\mathcal{T}_{1}, \cdots ,\mathcal{T}_{N}\}$, each of which represents a tracked object and is generated by linking detections over the previous $T$ image frames.
It should be noted that not every tracklet has a detection in each of the previous $T$ frames, due to occlusion, missed detections, \emph{etc}.
For illustration purposes, we consider the situation in which there are no missing detections for any tracklet; the extension of our method to cases with missing detections is straightforward.
Meanwhile, a set of $M$ detections $\mathcal{D}=\{\boldsymbol{d}_{1}, \cdots ,\boldsymbol{d}_{M}\}$ is obtained by applying an object detector on frame $t$.
The task of online MOT is to associate detections for the existing tracklets, determine whether any tracklets should be terminated, and generate new tracklets for new objects that enter the scene.
As shown in Figure~\ref{fig:picture001}, our framework consists of two major parts:
(1) utilize the powerful self-attention mechanism of Transformer encoder to jointly model the spatial-temporal information and fuse with the appearance feature for each tracklet;
(2) calculate the $N\times M$ assignment matrix with the help of the standard multi-head self- and cross-attention mechanisms in Transformer decoder.
In the first part, a CNN with shared weights is used to extract appearance feature $\boldsymbol{a}_{i} \in \mathbb{R}^{d}$ directly from RGB data of each detection $\boldsymbol{d}_{i}$.
To model the spatial-temporal information of each tracklet more effectively, we decompose the spatial-temporal modeling into absolute spatial modeling of each detection and relative spatial-temporal modeling between detections.
For absolute spatial modeling, we introduce an ASPE, a 3-layer MLP with shared weights, to extract feature embeddings $\boldsymbol{p}_{i} \in \mathbb{R}^{d}$ from the normalized bounding box coordinates $(\Bar{x}_{i}, \Bar{y}_{i}, \Bar{w}_{i}, \Bar{h}_{i})$ of $\boldsymbol{d}_{i}$.
Then, for each detection $\boldsymbol{d}_{i}$ and each tracklet $\mathcal{T}_{i}$, its feature embedding can be represented as $\boldsymbol{f}_{i}=\boldsymbol{a}_{i} + \boldsymbol{p}_{i}$ and $\mathcal{T}_{i}=[\boldsymbol{f}_{j}]^{t-1}_{j=t-T}$, respectively.
For relative spatial-temporal modeling, as shown in Figure~\ref{fig:picture002} and Figure~\ref{fig:picture003}, we propose a RSTPE to capture the spatial-temporal relationships between detections and then use it as a bias term in the self- and cross-attention module.
Concretely, for each detection pair, RSTPE computes the dot-product of the relative spatial-temporal features and the learnable embeddings.
Incorporating with the spatial information obtained by ASPE and a better attention mechanism with RSTPE, the Transformer encoder could learn a more discriminative feature representation for each tracklet.
The details of joint spatial-temporal and appearance feature learning with Transformer encoder will be explained in Sec.~\ref{subsectionEncoder}.
In the second part, we aim at computing an assignment matrix $\boldsymbol{A} \in \mathbb{R}^{N \times M}$ between the existing $N$ tracklets and $M$ detections with the help of Transformer decoder.
Two ingredients are essential for generating the assignment matrix $\boldsymbol{A}$:
(1) an APE that enforces the order of the predicted assignment results;
(2) an RSTPE that captures the spatial-temporal relationships between tracklets and detections.
As shown in Figure~\ref{fig:picture003}, for $(i,j)$-th query $\boldsymbol{q}_{ij}$, its query feature is represented as the feature embedding of $j$-th detection, and its corresponding positional encoding is extracted from $i$-th tracklet with APE.
By fusing the query feature with its corresponding positional encoding, we can make $A_{ij}$ (\emph{i.e.}, the $(i,j)$-th element of $\boldsymbol{A}$) explicitly correspond to the affinity between tracklet $\mathcal{T}_{i}$ and detection $\boldsymbol{d}_{j}$.
Meanwhile, with APE, each query detection can attend to all tracklets, hence taking full advantage of Transformer's global receptive field to improve the association accuracy.
In Transformer decoder, RSTPE is utilized in the cross-attention module to encode the relative spatial-temporal relationships between tracklets and detections.
Different from TransMOT~\cite{chu2021transmot}, which first models the spatial relationship of different objects at each timestamp independently and then encodes the temporal dimension for each tracklet independently, RSTPE can encode both the spatial and temporal relationships of any two detections simultaneously, which makes our model more accurate.
The details of data association with Transformer decoder will be explained in Sec.~\ref{subsectionDecoder}.
\subsection{Feature Learning with Transformer Encoder}\label{subsectionEncoder}
\begin{figure}[tb]
\centering
\includegraphics[width=0.49\textwidth]{transformer_encoder.png}
\caption{Architecture of spatial-temporal transformer encoder.}
\label{fig:picture002}
\end{figure}
The Transformer encoder aims at learning a more discriminative feature representation for each tracklet.
Most of the existing Transformer-based trackers~\cite{meinhardt2021trackformer, transtrack, xu2021transcenter} heavily rely on appearance features, and cannot model long-term spatial-temporal dependencies.
In this paper, we will simultaneously model the appearance and the spatial-temporal features with one Transformer model.
The input of Transformer encoder is a set of tracklets $\Xi =\{\mathcal{T}_{1}, \cdots ,\mathcal{T}_{N}\}$ for the past $T$ frames.
Since all the tracklets are processed by the encoder independently, we take one tracklet $\mathcal{T}_{i} = \{\boldsymbol{d}^{i}_{t-1}, \cdots ,\boldsymbol{d}^{i}_{t-T}\}$ as an example for illustration. Without introducing ambiguity, we omit index $i$ to ease reading in the following, \emph{i.e.}, $\mathcal{T} = \{\boldsymbol{d}_{t-1}, \cdots ,\boldsymbol{d}_{t-T}\}$.
As described in Sec.~\ref{subsectionFO}, the appearance and spatial features for each detection are first embedded through a CNN and an MLP network, respectively.
Then, a simple ``add'' fusion operator is adopted to fuse the appearance and spatial features for each detection. These detection features are arranged into a feature tensor $\mathcal{F}=[\boldsymbol{f}_{t-1}^T, \cdots, \boldsymbol{f}_{t-T}^T]^T \in \mathbb{R}^{T \times d}$, where $d$ is the output dimension of the embedding layer.
It is further passed to the Transformer encoder module, as shown in Figure~\ref{fig:picture002}.
The Transformer encoder module consists of $L \times$ Transformer layers~\cite{vaswani2017attention}, where each Transformer layer has two parts: a self-attention module and a feed-forward
network (FFN).
The standard self-attention is calculated as:
\begin{equation}\label{equa:self-attention1}
Q=\mathcal{F} W_{Q}, \quad K=\mathcal{F} W_{K}, \quad V=\mathcal{F} W_{V}
\end{equation}
\begin{equation}\label{equa:self-attention2}
A^{attn}=\frac{QK^T}{\sqrt{d_{k}}}, \quad Attention(\mathcal{F}) = softmax(A^{attn})V
\end{equation}
where $W_{Q}\in \mathbb{R}^{d \times d_{k}}$, $W_{K}\in \mathbb{R}^{d \times d_{k}}$, $W_{V}\in \mathbb{R}^{d \times d_{v}}$ are projection matrices, $d_{k}$ and $d_{v}$ denote the dimensions of the keys and values (with $\sqrt{d_{k}}$ acting as the scaling factor), and $A^{attn}$ is a matrix capturing the similarity between queries and keys.
For simplicity of illustration, we consider the single-head self-attention and assume $d_{k}=d_{v}=d$. The extension to the multi-head attention is standard and straightforward.
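As a concrete reference, the single-head attention defined above can be sketched in a few lines of NumPy. The function and variable names are ours, and the sketch deliberately ignores multi-head projections and batching.

```python
import numpy as np

def self_attention(F, W_q, W_k, W_v):
    """Single-head scaled dot-product attention over one tracklet.

    F: (T, d) fused appearance + spatial features; W_q, W_k, W_v: (d, d)
    projection matrices (here d_k = d_v = d, as in the simplified setting).
    """
    Q, K, V = F @ W_q, F @ W_k, F @ W_v
    logits = Q @ K.T / np.sqrt(K.shape[-1])        # similarity matrix A^{attn}
    logits -= logits.max(axis=-1, keepdims=True)   # for numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ V                             # attended features (T, d)
```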
To encode the relative spatial-temporal information between detections, we propose RSTPE.
As shown in Figure~\ref{fig:picture002}, for each detection pair $(\boldsymbol{d}_{i}, \boldsymbol{d}_{j})$, we utilize the relative spatial-temporal feature $[\delta t, \delta x, \delta y, \delta w, \delta h]^T$ as its edge feature and compute the dot-product of the edge feature and a learnable embedding.
Then, the dot-product is used as a bias term to the attention module.
Concretely, we modify the $(i, j)$-th element of $A^{attn}$ in Eq.~\ref{equa:self-attention2} with the relative spatial-temporal encoding $a_{i,j}$ as:
\begin{equation}\label{equa:RSTPE1}
A_{i,j}^{attn}=\frac{(\boldsymbol{f}_{i} W_{Q})(\boldsymbol{f}_{j} W_{K})^{T}}{\sqrt{d}} + a_{i,j}
\end{equation}
\begin{equation}\label{equa:RSTPE2}
a_{i,j}= \boldsymbol{w} \cdot [\delta t, \delta x, \delta y, \delta w, \delta h]^T
\end{equation}
where $\boldsymbol{w}$ represents the learnable embedding, which is shared across all layers.
The attention-weighted feature tensor is projected through an FFN and a normalization layer. The features of all the tracklets are concatenated to obtain the final output of the Transformer encoder $\mathcal{F}^{en} \in \mathbb{R}^{NT \times d}$.
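The RSTPE bias described above can be sketched as follows; the dense double loop and the box/time layout are illustrative assumptions rather than the actual implementation. The resulting matrix would be added to the attention logits before the softmax.

```python
import numpy as np

def rstpe_bias(boxes, times, w):
    """Relative spatial-temporal bias a_{i,j} for every detection pair.

    boxes: (n, 4) normalized (x, y, w, h); times: (n,) frame indices;
    w: (5,) learnable embedding. Returns an (n, n) bias matrix to be
    added to Q K^T / sqrt(d) before the softmax.
    """
    n = len(boxes)
    bias = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dt = times[i] - times[j]
            dx, dy, dw, dh = boxes[i] - boxes[j]
            bias[i, j] = w @ np.array([dt, dx, dy, dw, dh])
    return bias
```

Because the edge feature is linear in coordinate differences, the bias of this sketch is antisymmetric in $(i, j)$.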
\subsection{Data Association with Transformer Decoder}\label{subsectionDecoder}
\begin{figure}[tb]
\centering
\includegraphics[width=0.49\textwidth]{transformer_decoder.png}
\caption{Architecture of spatial-temporal transformer decoder.}
\label{fig:picture003}
\end{figure}
The Transformer decoder generates the assignment matrix $\boldsymbol{A}$ from the output of the Transformer encoder $\mathcal{F}^{en}$ and the features of $M$ detections $\mathcal{F}^{det} \in \mathbb{R}^{M \times d}$.
First, $\mathcal{F}^{det}$ is duplicated $N$ times such that $\mathcal{F}^{det} \to \mathcal{F}^{det'} \in \mathbb{R}^{NM \times d}$.
Then, $\mathcal{F}^{det'}$ is utilized as the queries and is passed into the Transformer decoder module.
Since the decoder is permutation-invariant, the $N\times M$ queries must be different to produce different results.
As mentioned above, when estimating the elements of the same column (\emph{e.g.}, the $j$-th column) of $\boldsymbol{A}$, their corresponding query features are exactly the same (the feature of the $j$-th detection).
In order to solve this issue, we propose our APE.
For the $(i, j)$-th query, which is used to calculate the affinity between the $i$-th tracklet and the $j$-th detection, APE extracts a feature $\boldsymbol{f}_{i}^{PE}$ from the $i$-th tracklet as the positional encoding and then fuses it with the feature embedding $\boldsymbol{f}_{j}$ of the $j$-th detection to form the final feature $\boldsymbol{q}_{ij}$.
\begin{equation}\label{equa:APE1}
\boldsymbol{f}_{i}^{PE} = \frac{1}{T} \sum_{k=1}^{T} \boldsymbol{a}_{t-k} + \boldsymbol{p}_{t-1}
\end{equation}
\begin{equation}\label{equa:APE2}
\boldsymbol{q}_{ij} = \phi (\boldsymbol{f}_{j}, \boldsymbol{f}_{i}^{PE})
\end{equation}
where $\{\boldsymbol{a}_{t-1}, \cdots, \boldsymbol{a}_{t-T}\}$ denotes the appearance features of the $i$-th tracklet in the past $T$ frames, $\boldsymbol{p}_{t-1}$ represents the latest spatial feature of the $i$-th tracklet, and $\phi$ is a fusion function.
As for the feature fusion $\phi$, we have tried three different fusion operators:
(1) add the query feature and positional encoding together;
(2) concatenate the query feature and positional encoding;
(3) subtract positional encoding from the query feature.
We will show how different fusion operators affect the performance in the experiments.
The ``subtract'' fusion operator is adopted in our final network for its best performance.
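A possible vectorized reading of the APE query construction with the ``subtract'' fusion is sketched below; the tensor layout and names are our assumptions, not the released code.

```python
import numpy as np

def build_queries(det_feats, trk_app, trk_pos_latest):
    """Build the N x M decoder queries with assignment positional encoding.

    det_feats: (M, d) detection embeddings f_j; trk_app: (N, T, d)
    appearance features of the tracklets; trk_pos_latest: (N, d) latest
    spatial embedding p_{t-1} of each tracklet.
    """
    # f^{PE}_i: mean appearance over the window plus the latest spatial feature
    f_pe = trk_app.mean(axis=1) + trk_pos_latest
    # "subtract" fusion: query (i, j) pairs tracklet i with detection j
    return det_feats[None, :, :] - f_pe[:, None, :]   # (N, M, d)
```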
The decoder takes the detection features of the current frame $\mathcal{F}^{det'} \in \mathbb{R}^{NM \times d}$ as the query, and uses the encoded features of all the tracklets $\mathcal{F}^{en} \in \mathbb{R}^{NT \times d}$ as the key and value.
The decoder follows the standard architecture of the Transformer~\cite{DETR}, transforming $N\times M$ embeddings of size $d$ using multi-head self- and cross-attention mechanisms.
The difference with the original Transformer~\cite{DETR} is that our model utilizes RSTPE to encode the relative spatial-temporal distance between the query detection and the tracklet detections, and uses it as a bias term in the cross-attention module.
By using RSTPE, each query in a single Transformer layer can adaptively attend to all tracklet detections according to the learned attention weights.
For example, if $a_{i,j}$ in Eq.~\ref{equa:RSTPE1} is learned to be a decreasing function of the spatial distance, then for each query the model will likely pay more attention to nearby tracklet detections and less attention to distant ones.
One benefit of using Transformer decoder to calculate the assignment matrix is that the Transformer layer could provide a global receptive field.
Therefore, all the tracklet features can be utilized when calculating the affinity of a specific detection-tracklet pair, which is critical to avoid ID switches in MOT.
The output of the Transformer decoder $\mathcal{F}^{de} \in \mathbb{R}^{NM \times d}$ is passed through an FFN and a softmax layer to generate the assignment matrix $\boldsymbol{A} \in \mathbb{R}^{N \times M}$.
\subsection{Training and Inference}\label{subsectionTraining}
The proposed model is trained end-to-end with the guidance of the groundtruth assignment matrix $\boldsymbol{A}^{g}$.
We formulate the prediction of the assignment matrix as a binary classification problem. Therefore, the cross-entropy loss is applied to optimize the network.
\begin{equation}\label{equa:loss}
\begin{aligned}
\mathcal{L} = \frac{-1}{MN}\sum_{i=1}^{N}\sum_{j=1}^{M} & A_{i, j}^{g}\log(A_{i, j}) \ + \\ & (1 - A_{i, j}^{g})\log{(1-A_{i, j})}
\end{aligned}
\end{equation}
where $A_{i, j}$ and $A_{i, j}^{g}$ indicate the $(i,j)$-th element of the predicted assignment matrix $\boldsymbol{A}$ and the groundtruth assignment matrix $\boldsymbol{A}^{g}$, respectively.
In MOT, each detection in frame $t$ can have either one matched tracklet or no match at all. In other words, each row and column of $\boldsymbol{A}^{g}$ can only be a one-hot vector (\emph{i.e.}, a vector with 1 in a single entry and 0 in all other entries) or an all-zero vector. This causes a serious data imbalance issue.
To mitigate the imbalance, we downsample the negative pairs to keep the numbers of positive and negative pairs comparable.
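The binary cross-entropy loss above reduces to a few lines; the clipping constant is our addition for numerical safety, and in practice the negative terms would first be downsampled as described.

```python
import numpy as np

def assignment_bce(A_pred, A_gt, eps=1e-7):
    """Mean binary cross-entropy between the predicted and groundtruth
    N x M assignment matrices (entries in [0, 1])."""
    A_pred = np.clip(A_pred, eps, 1.0 - eps)  # avoid log(0)
    ll = A_gt * np.log(A_pred) + (1.0 - A_gt) * np.log(1.0 - A_pred)
    return -ll.mean()
```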
In each training iteration, a continuous sequence of $T+1$ frames is randomly sampled from the training set. The bounding boxes and their corresponding IDs are collected from each frame.
Similar to~\cite{chu2021transmot}, the groundtruth bounding boxes are replaced by the bounding boxes generated from the object detector by matching their IoUs, making the model more robust to detection noise.
We do data augmentation by randomly removing detections from tracklets to simulate missing detections.
During inference, our model first calculates the assignment matrix $\boldsymbol{A}$.
To reduce the computational cost and accelerate inference, a simple filtering strategy based on spatial-temporal distances is adopted to reduce the number of queries: all the tracklet-detection pairs whose implied average moving speed is larger than a threshold are dropped.
Then, a threshold $\tau_{th}$ is adopted on $\boldsymbol{A}$ for binarization.
In other words, only the tracklet-detection pairs with affinity larger than $\tau_{th}$ can be associated.
Next, a simple Hungarian algorithm~\cite{MunkresJames} is adopted to generate tracking output, while complying with the typical tracking constraints like no detection assigned to more than one tracklet.
The detections that do not match any tracklet will be assigned with a new ID, and the tracklets that do not match any detections in the past consecutive $T$ frames will be terminated.
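The inference steps above can be sketched as follows; for brevity we use a greedy matching as a stand-in for the Hungarian algorithm and omit the speed filtering and ID management, so this is illustrative only.

```python
import numpy as np

def associate(A, tau=0.5):
    """Greedily binarize an (N, M) affinity matrix at threshold tau.

    Repeatedly takes the best remaining tracklet-detection pair above tau;
    each tracklet and each detection is used at most once.
    """
    A = A.astype(float).copy()
    matches = []
    while True:
        i, j = np.unravel_index(np.argmax(A), A.shape)
        if A[i, j] <= tau:
            break
        matches.append((int(i), int(j)))
        A[i, :] = -np.inf  # a tracklet matches at most one detection
        A[:, j] = -np.inf  # and vice versa
    return matches
```

Detections left unmatched would then start new tracklets, as described above.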
\section{Experiment}
\subsection{Experimental Setup}
\subsubsection{Datasets and Metrics}
\
\newline
\indent
All experiments are done on the multiple object tracking benchmark MOTChallenge, which consists of several challenging pedestrian tracking sequences with frequent occlusions and crowded scenes.
We choose three separate tracking benchmarks, namely MOT16~\cite{milan2016mot16}, MOT17~\cite{milan2016mot16} and MOT20~\cite{dendorfer2020mot20}.
These three benchmarks consist of challenging video sequences with varying viewing angle, size, number of objects, camera motion, illumination and frame rate in unconstrained environments.
To ensure a fair comparison with other methods, we use the public detections provided by MOTChallenge, and regress them by using the same method as in~\cite{Stadler_2021_CVPR}.
This regression strategy is widely used in published methods~\cite{braso2020learning, inproceedingsLiu, Stadler_2021_CVPR}.
For the performance evaluation, we use the widely accepted MOT metrics~\cite{bernardin2008evaluating, 2020HOTA, ristani2016performance, wu2006tracking}, including Multiple Object Tracking Accuracy (MOTA), ID F1 score (IDF1), Higher Order Tracking Accuracy (HOTA), Mostly Tracked targets (MT), Mostly Lost targets (ML), False Positives (FP), False Negatives (FN), ID switches (IDs), \emph{etc}.
Among these metrics, MOTA, IDF1 and HOTA are the most important ones.
MOTA and IDF1 quantify two of the main aspects of MOT, namely, detection and association.
HOTA balances the effect of performing accurate detection and association into a single unified metric.
IDF1 and HOTA are preferred over MOTA for evaluation due to their ability to measure association accuracy.
\subsubsection{Implementation Details}
\
\newline
\indent
\textbf{ReID Model.}
Following LPC~\cite{peng2021lpc}, we employ a variant of ResNet50, named ResNet50-IBN~\cite{luo2019strong}, to extract ReID features.
The ResNet50-IBN model is trained on two publicly available datasets: ImageNet~\cite{deng2009imagenet} and Market1501~\cite{zheng2015scalable}.
\textbf{Parameter Setting.}
Both the encoder and the decoder consist of 2 layers with feature width 256. Each attention layer uses multi-head attention~\cite{vaswani2017attention} with 8 attention heads.
The feature dimension of FFN is set to 1024.
We set the length of the temporal window to $T=150$, which means that tracklets not matched to any detection for 150 consecutive frames are terminated.
\textbf{Training.}
The Transformer model is trained end-to-end with the SGD optimizer, where the weight decay is set to $1 \times 10^{-4}$ and the momentum $\beta_{1}$ to 0.9.
The batch size is set to 4.
We train for 10 iterations in total with a learning rate $1 \times 10^{-3}$.
\textbf{Post Processing.}
We perform simple bilinear interpolation along missing frames to fill gaps in our trajectories.
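A sketch of this gap filling: each box coordinate is interpolated linearly between the surrounding observed frames (names are illustrative, not from any released code):

```python
import numpy as np

def fill_gaps(frames, boxes):
    """Interpolate missing per-frame boxes (x, y, w, h) of a trajectory.
    `frames` are the observed frame indices, `boxes` the observed boxes."""
    frames = np.asarray(frames)
    boxes = np.asarray(boxes, dtype=float)
    full = np.arange(frames[0], frames[-1] + 1)
    # np.interp handles one channel at a time; stack the four coordinates
    filled = np.stack([np.interp(full, frames, boxes[:, i])
                       for i in range(boxes.shape[1])], axis=1)
    return full, filled
```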
\subsection{Ablation Study}
The ablation experiments are evaluated on the training sequences of the MOT17 dataset with a 3-fold cross-validation split as described in~\cite{braso2020learning}.
\textbf{Importance of ASPE and RSTPE.}
There are two kinds of positional encodings in feature learning: ASPE and RSTPE. To validate their effectiveness, we perform experiments with various combinations of positional encodings; the results are reported in Table~\ref{table:abs_sptmha}.
Two conclusions can be drawn: (i) both ASPE and RSTPE play an important role in identity preservation, improving the IDF1 score by 0.7 and 0.8 points, respectively; (ii) combining the two positional encodings further improves the tracking performance.
These ablations confirm that both ASPE and RSTPE contribute substantially to the final tracking performance.
\begin{table}[tp]
\begin{center}
\caption{Results for different combinations of positional encoding in feature learning.}
\label{table:abs_sptmha}
\begin{tabular}{p{1.25cm}<{\centering} p{1.25cm}<{\centering} c c}
\hline
ASPE & RSTPE & IDF1 & MOTA \\
\hline
& & 67.7 & 62.7\\
\checkmark & & 68.4 & 62.8 \\
& \checkmark & 68.5 & 62.8 \\
\checkmark & \checkmark & 69.2 & 63.2 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}[htp]
\centering
\includegraphics[width=0.90\textwidth]{qualitative_results.png}
\caption{A qualitative example showing a failure case, as shown in column (a), without using ASPE and RSTPE, which leads to a spatial-temporal jumping case when one person is fully occluded. Using ASPE and RSTPE can effectively handle this case, as shown in column (b). The numbers are the object IDs. Best viewed in color.}
\label{fig:picture004}
\end{figure*}
To further illustrate the effectiveness of ASPE and RSTPE, we also present qualitative results with and without them. As shown in Figure~\ref{fig:picture004}~(a), without ASPE and RSTPE there is an abnormal identity transfer when the person with ID 35 is fully occluded. Introducing ASPE and RSTPE resolves this case, as shown in Figure~\ref{fig:picture004}~(b). This demonstrates that ASPE and RSTPE effectively model spatial-temporal information and improve identity preservation under occlusion.
\textbf{Effectiveness of Fusion Operators in Decoder.}
As discussed in Section~\ref{subsectionDecoder}, a fusion operator is needed in the Transformer decoder to fuse the query feature with its corresponding positional encoding.
We perform an experiment to study the impact of different fusion operators. Table~\ref{table:abs_DEM} lists the quantitative comparison of the add, concatenate, and subtract operators.
The results show that the subtract operator, \emph{i.e.}, subtracting the positional encoding from the query feature, achieves the best performance in both IDF1 and MOTA.
Hence, we use the subtract operator in our final configuration.
\begin{table}[tp]
\begin{center}
\caption{Results for different fusion strategies in decoder.}
\label{table:abs_DEM}
\begin{tabular}{p{2.8cm} c c }
\hline
Fusion Strategy & IDF1 & MOTA \\
\hline
Add & 55.9 & 61.0 \\
Concatenate & 61.6 & 61.2 \\
Subtract & 69.2 & 63.2 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[tp]
\begin{center}
\caption{Effect of the number of encoder and decoder layers.}
\label{table:abs_layer}
\begin{tabular}{p{2.8cm}<{\centering} c c }
\hline
Layer Num & IDF1 & MOTA \\
\hline
1 & 68.6 & 62.9 \\
2 & 69.2 & 63.2 \\
3 & 67.8 & 62.7 \\
4 & 67.3 & 62.7 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[!tp]
\begin{center}
\caption{Influence of the length of the temporal window $T$.}
\label{table:history-length}
\begin{tabular}{p{2.8cm}<{\centering} c c }
\hline
Temporal Window & IDF1 & MOTA \\
\hline
50 & 68.9 & 63.0 \\
100 & 69.6 & 63.1 \\
150 & 69.2 & 63.2 \\
300 & 68.4 & 62.0 \\
\hline
\end{tabular}
\end{center}
\end{table}
\textbf{Number of Encoder and Decoder Layers.}
We evaluate the tracking performance for various numbers of encoder and decoder layers. As shown in Table~\ref{table:abs_layer}, both IDF1 and MOTA improve when the number of layers is increased from 1 to 2. However, the performance decreases when the number of layers increases further. One possible reason is that the training data is insufficient to train such a large model. We set the number of layers to 2 in the following experiments.
\textbf{Influence of the Length of the Temporal Window.}
The length of the temporal window $T$, which determines the maximum frame gap across which tracklets and detections can be associated, is critical for the final tracking performance.
Intuitively, increasing the length of the temporal window $T$ allows our method to handle longer occlusions. Hence, one would expect higher values to yield better performance.
We test this hypothesis in Table~\ref{table:history-length} by showing the detailed IDF1 and MOTA scores with respect to $T$.
As expected, enlarging the temporal window from $T=50$ to $T=150$ improves the overall performance.
However, the performance decreases when the temporal window is unduly large ($T=300$). One possible reason might be that the reliability of the spatial-temporal modeling decreases with the increase of the temporal window, thus degrading the tracking performance. Hence, we use $T=150$ in our final configuration.
\begin{table*}[tp]
\begin{center}
\caption{Performance comparison with the state of the art on three MOTChallenge benchmarks. Online methods are indicated with ($\mathbb{O}$). Arrows indicate whether lower or higher metric values are better. The best number is shown in bold.}
\label{table:mot_eval}
\begin{tabular}{p{3.5cm} c c c c c c c c c }
\hline
Method & MOTA $\uparrow$ & IDF1$\uparrow$ & HOTA$\uparrow$ & MT $\uparrow$ & ML$\downarrow$ & FP $\downarrow$ & FN $\downarrow$ & IDs $\downarrow$ & Hz $\uparrow$ \\
\hline
& & & & MOT16\\
\hline
BLSTM$\textunderscore$MTP$\textunderscore$O~\cite{Kim_2021_CVPR} ($\mathbb{O}$) & 48.3 & 53.5 & 39.7 & 17.0 & 38.7 & 9792 & 83707 & 735 & \textbf{21.0} \\
Tracktor++v2~\cite{Bergmann_2019_ICCV} ($\mathbb{O}$) & 56.2 & 54.9 & 44.6 & 20.7 & 35.8 & \textbf{2394} & 76844 & 1068 & 1.6 \\
MPNTrack~\cite{braso2020learning} & 58.6 & 61.7 & 48.9 & 27.3 & 34.0 & 4949 & 70252 & \textbf{354} & 6.5 \\
LPC~\cite{peng2021lpc} & 58.8 & 67.6 & 51.7 & 27.3 & 35.0 & 6167 & 68432 & 435 & 4.3 \\
Aplift~\cite{HorKai2021} & 61.7 & 66.1 & 51.3 & \textbf{34.3} & 31.2 & 9168 & 60180 & 495 & 0.6 \\
TMOH~\cite{Stadler_2021_CVPR} ($\mathbb{O}$) & 63.2 & 63.5 & 50.7 & 27.0 & 31.0 & 3122 & 63376 & 1486 & 0.7 \\
TranSTAM ($\mathbb{O}$) & \textbf{63.8} & \textbf{70.6} & \textbf{54.7} & 30.3 & \textbf{30.6} & 7412 & \textbf{57975} & 629 & 11.2 \\
\hline
& & & & MOT17\\
\hline
BLSTM$\textunderscore$MTP$\textunderscore$O~\cite{Kim_2021_CVPR} ($\mathbb{O}$) & 51.5 & 54.9 & 41.3 & 20.4 & 35.5 & 29616 & 241619 & 2566 & \textbf{20.1} \\
Tracktor++v2~\cite{Bergmann_2019_ICCV} ($\mathbb{O}$) & 56.3 & 55.1 & 44.8 & 21.1 & 35.3 & \textbf{8866} & 235449 & 1987 & 1.5 \\
MPNTrack~\cite{braso2020learning} & 58.8 & 61.7 & 49.0 & 28.8 & 33.5 & 17413 & 213594 & 1185 & 6.5 \\
LPC~\cite{peng2021lpc} & 59.0 & 66.8 & 51.5 & 29.9 & 33.9 & 23102 & 206848 & \textbf{1122} & 4.8 \\
Aplift~\cite{HorKai2021} & 60.5 & 65.6 & 51.1 & \textbf{33.9} & 30.9 & 30609 & 190670 & 1709 & 1.8 \\
CenterTrack~\cite{zhou2020tracking} ($\mathbb{O}$) & 61.5 & 59.6 & 48.2 & 26.4 & 31.9 & 14076 & 200672 & 2583 & 17.0 \\
TMOH~\cite{Stadler_2021_CVPR} ($\mathbb{O}$) & 62.1 & 62.8 & 50.4 & 26.9 & 31.4 & 10951 & 201195 & 1897 & 0.7 \\
TranSTAM ($\mathbb{O}$) & \textbf{63.0} & \textbf{69.9} & \textbf{54.6} & 30.4 & \textbf{30.6} & 23022 & \textbf{183659} & 1842 & 10.8 \\
\hline
& & & & MOT20\\
\hline
Tracktor++v2~\cite{Bergmann_2019_ICCV} ($\mathbb{O}$) & 52.6 & 52.7 & 42.1 & 29.4 & 26.7 & \textbf{6930} & 236680 & 1648 & 1.2 \\
LPC~\cite{peng2021lpc} & 56.3 & 62.5 & 49.0 & 34.1 & 25.2 & 11726 & 213056 & 1562 & 0.7\\
MPNTrack~\cite{braso2020learning} & 57.6 & 59.1 & 46.8 & 38.2 & 22.5 & 16953 & 201384 & \textbf{1210} & \textbf{6.5} \\
Aplift~\cite{HorKai2021} & 58.9 & 56.5 & 46.6 & 41.3 & 21.3 & 17739 & 192736 & 2241 & 0.4 \\
TMOH~\cite{Stadler_2021_CVPR} ($\mathbb{O}$) & \textbf{60.1} & 61.2 & 48.9 & \textbf{46.7} & \textbf{17.8} & 38043 & 165899 & 2342 & 0.6 \\
TranSTAM ($\mathbb{O}$) & \textbf{60.1} & \textbf{66.7} & \textbf{51.7} & 46.5 & \textbf{17.8} & 37657 & \textbf{165866} & 2926 & 4.0 \\
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Benchmark Evaluation}
We report the quantitative results obtained by our TranSTAM on the test sets of the three MOTChallenge benchmarks MOT16, MOT17 and MOT20 in Table~\ref{table:mot_eval}, and follow the standard evaluation practice of comparing to methods that are officially published and peer reviewed on these benchmarks.
Note that both \textit{online} and \textit{offline} approaches are presented; online methods are indicated with ($\mathbb{O}$).
All results are sorted by ascending MOTA.
As shown in Table~\ref{table:mot_eval}, our approach surpasses the state-of-the-art online methods on all evaluated benchmarks by a large margin, improving in particular the IDF1 measure by 7.1, 7.1, and 5.5 percentage points and the HOTA measure by 4.0, 4.2, and 2.8 percentage points, respectively.
This demonstrates that our method achieves strong performance in identity preservation.
It is worth noting that our method even outperforms all the offline methods.
Moreover, owing to its compact network and the absence of complex post-processing, our approach is faster than most online and offline methods.
\begin{table*}[tp]
\begin{center}
\caption{Performance comparison with TransMOT on the MOT17 benchmark. Arrows indicate whether lower or higher metric values are better. The best number is shown in bold.}
\label{table:comparison_with_tra}
\begin{tabular}{p{2.0cm} c c c c c c c c c }
\hline
Method & MOTA $\uparrow$ & IDF1$\uparrow$ & HOTA$\uparrow$ & MT $\uparrow$ & ML$\downarrow$ & FP $\downarrow$ & FN $\downarrow$ & IDs $\downarrow$ & Hz $\uparrow$ \\
\hline
& & & & Overall\\
\hline
TransMOT~\cite{chu2021transmot} & \textbf{76.7} & \textbf{75.1} & \textbf{61.7} & 1200 & 387 & 36231 & \textbf{93150} & \textbf{2346} & 1.1 \\
TranSTAM & 76.1 & 74.0 & 60.7 & \textbf{1203} & 387 & \textbf{36213} & 93243 & 5343 & \textbf{10.4} \\
\hline
& & & & MOT17-01\\
\hline
TransMOT~\cite{chu2021transmot} & \textbf{65.9} & 66.1 & 52.6 & 10 & 3 & 303 & 1876 & \textbf{18} & - \\
TranSTAM & 65.6 & \textbf{77.3} & \textbf{57.1} & 10 & 3 & \textbf{295} & 1876 & 46 & - \\
\hline
& & & & MOT17-03\\
\hline
TransMOT~\cite{chu2021transmot} & \textbf{90.6} & \textbf{87.1} & \textbf{71.2} & \textbf{134} & 3 & \textbf{2820} & 6898 & \textbf{86} & - \\
TranSTAM & 90.5 & 84.2 & 69.8 & 133 & 3 & 2843 & \textbf{6924} & 222 & - \\
\hline
& & & & MOT17-06\\
\hline
TransMOT~\cite{chu2021transmot} & \textbf{58.5} & \textbf{61.4} & \textbf{50.3} & 100 & 46 & 1722 & 3016 & \textbf{155} & - \\
TranSTAM & 57.1 & 50.1 & 42.6 & \textbf{101} & 46 & \textbf{1718} & \textbf{3012} & 327 & - \\
\hline
& & & & MOT17-07\\
\hline
TransMOT~\cite{chu2021transmot} & \textbf{67.8} & 56.2 & 47.6 & 32 & 2 & 1783 & 3539 & \textbf{120} & - \\
TranSTAM & 67.1 & \textbf{67.9} & \textbf{53.0} & 32 & 2 & \textbf{1781} & \textbf{3537} & 232 & - \\
\hline
& & & & MOT17-08\\
\hline
TransMOT~\cite{chu2021transmot} & \textbf{55.7} & 47.9 & \textbf{41.9} & \textbf{28} & 9 & 1847 & \textbf{7297} & \textbf{220} & - \\
TranSTAM & 53.9 & \textbf{50.1} & 41.7 & 27 & 9 & \textbf{1835} & 7310 & 595 & - \\
\hline
& & & & MOT17-12\\
\hline
TransMOT~\cite{chu2021transmot} & \textbf{51.4} & \textbf{66.1} & \textbf{53.1} & 45 & 24 & 1791 & 2383 & \textbf{35} & - \\
TranSTAM & 51.0 & 62.3 & 51.8 & \textbf{46} & 24 & 1791 & 2383 & 70 & - \\
\hline
& & & & MOT17-14\\
\hline
TransMOT~\cite{chu2021transmot} & \textbf{56.7} & \textbf{66.6} & \textbf{49.5} & 51 & 42 & 1811 & 6041 & \textbf{148} & - \\
TranSTAM & 56.0 & 63.6 & 47.7 & \textbf{52} & 42 & \textbf{1808} & \textbf{6039} & 289 & - \\
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Additional Comparison with TransMOT }\label{section:additional_experiment}
Note that, at the time of writing, there are no peer-reviewed Transformer-based MOT methods.
In this subsection, we therefore provide an extended comparison of our method with TransMOT~\cite{chu2021transmot}, a top-performing Transformer-based MOT method.
We directly use the TransMOT results officially published on the MOTChallenge benchmark.
Since TransMOT uses private detections, we adopt the same set of detections for a fair comparison.
The results are summarized in Table~\ref{table:comparison_with_tra}.
Overall, TransMOT outperforms our method by 0.6 percentage points in MOTA measure and 1.1 percentage points in IDF1 measure.
Broken down by video sequence, our method outperforms TransMOT by a large margin in terms of IDF1 on MOT17-01, MOT17-07, and MOT17-08, while obtaining a lower IDF1 than TransMOT on MOT17-06, MOT17-12, and MOT17-14.
Note that MOT17-01, MOT17-07, and MOT17-08 are recorded by static cameras, whereas MOT17-06, MOT17-12, and MOT17-14 are recorded by moving cameras.
This suggests that our method achieves better identity preservation than TransMOT on static sequences, while performing worse on moving-camera sequences.
One possible reason is that the effectiveness of the relative spatial-temporal positional encoding (RSTPE) decreases on moving-camera sequences.
TransMOT requires several additional modules for data association, such as a Kalman-filter-based motion predictor and modules for handling long-term occlusions and duplicated detections, which increase the computational cost.
In contrast, our method does not require any additional modules and is therefore up to one order of magnitude faster than TransMOT, i.e., 10.4\,Hz vs.\ 1.1\,Hz.
\section{Conclusion}\label{section:conclusion}
In this paper, we propose a novel Transformer-based method named TranSTAM for online MOT.
The key innovations of TranSTAM are three simple yet effective positional encoding methods. TranSTAM has two major advantages: (1) its architecture is compact and computationally efficient; (2) it effectively learns spatial-temporal and appearance features jointly within a single model, achieving better tracking accuracy.
The superiority of the proposed TranSTAM is demonstrated on three popular benchmarks, where we achieve state-of-the-art results.
\clearpage
\bibliographystyle{splncs04}
\section{Introduction}
Many-body phenomena in ultracold quantum gases are a subject of extensive
ongoing research. Interaction between atoms plays a
crucial role in many situations and is responsible for some of the most
striking experimental observations, such as solitons in
Bose-Einstein condensates trapped by harmonic~\cite{becsol} and periodic~\cite{markus}
potentials. The typical number of atoms in such solitons
varies from several hundred to tens of thousands.
Therefore these structures can be well described by the mean-field Gross-Pitaevsky
equation. However, the mean-field theory never provides an exact description
of interacting quantum systems (e.g., due to unavoidable depletion) and it becomes
interesting to investigate quantum effects in the soliton propagation~\cite{bth}.
If the atoms are loaded into an optical lattice, the interaction effects become more
important, and in the limit of a small number of atoms an exact quantum analysis
of the solitons~\cite{Scott,el}
reveals strong deviations from the results provided by the
Gross-Pitaevsky equation with lattice potential~\cite{skryabin} or
its discrete version~\cite{Scott,discr}.
The use of Feshbach resonances to control interaction between
ultracold atoms in optical potentials is a widely spread technique allowing
transformation of atoms into molecules and changing magnitude and
sign of the effective scattering length of the atoms (see, e.g.,~\cite{MVA,fesh}).
Coherent solitons in condensed atomic-molecular mixtures
without optical lattice were studied in several papers, see,
e.g.,~\cite{par1,par2}. The mathematical model previously used for the coupled
atomic-molecular condensates is equivalent to the model describing
parametric interaction of photons in quadratically nonlinear
crystals~\cite{rev}. It was demonstrated in the mean-field
limit that the resonant atomic-molecular interaction serves as a
mechanism responsible for supporting bright solitons in the case
of repulsive bosons and for preventing collapse in the case of
attractive bosons~\cite{par2,rev}. The quantum atomic-molecular
solitons in the system without periodic potential were also
studied~\cite{par1,quant}.
Optical parametric solitons in
the system of coupled waveguides, playing the role of a periodic potential
for photons, were recently observed experimentally~\cite{steg}.
Atomic-molecular solitons in a
deep optical lattice have been theoretically considered in the
mean-field approximation~\cite{konotop}, which is mathematically equivalent
to the system studied in~\cite{steg}.
In this work, we demonstrate existence and study
properties of the quantum atomic-molecular solitons in an optical
lattice near Feshbach resonance.
Under the quantum lattice soliton we understand the quantum state of the system
of interacting particles with the localized eigenfunction and the discrete
energy level belonging to a spectral interval forbidden for the spatially extended
periodic states~\cite{Scott}.
Note that the discrete energy levels belonging to the intervals forbidden
for the linear waves are also a generic feature of the classical lattice solitons.
Advances in manipulation of ultracold atomic systems with small number of particles per
lattice site~\cite{Greiner} as well as in cooling and trapping of single
atoms~\cite{single} allow one to hope that quantum lattice
solitons will soon become relevant for experimental research.
\section{Hamiltonian}
We consider two atoms of mass $m$ in an optical lattice created by a
far-detuned standing laser wave. If the laser wavelength is $\lambda_{\rm L}=2\pi/k_{\rm L}$,
then the lattice constant $d=\lambda_{\rm L}/2$. It is convenient
to represent the amplitude of the periodic potential in the form $\hbar\omega_{\rm R} s$,
where $\omega_{\rm R}=\hbar k_{\rm L}^2/(2 m)$ is the recoil frequency and
$s$ is a dimensionless parameter.
In the case of a deep optical lattice every lattice site can be described by a harmonic potential
with the frequency $\omega=2\omega_{\rm R}\sqrt{s}$
and the lowest-band atomic Wannier function
is well approximated by a Gaussian with the characteristic length
$l_{\rm a}=\sqrt{\hbar/m \omega}$.
The atoms in the lattice are subject to the magnetic field $B$,
with $B=B_0$ corresponding to the Feshbach resonance of the width $\Delta B$.
There are several processes which are to be
taken into account in such a system: atomic interaction, molecule production and
atomic and molecular hopping.
Taking into account only the hopping between the nearest lattice sites
as well as on-site atomic interactions and in the lowest-band approximation
the Hamiltonian of the system is given by~\cite{DKOS,comment}
\begin{eqnarray}
\label{bh}
H
&=&
-
t_{\rm a}
\sum_{\langle i,j \rangle}
a^{\dagger}_{i}
a^{\phantom \dagger}_{j}
-
t_{\rm m}
\sum_{\langle i,j \rangle}
b^{\dagger}_{i}
b^{\phantom \dagger}_{j}
+
\left(
\delta-\frac{3}{2}\hbar\omega
\right)
\sum_{i}
b^{\dagger}_{i}
b^{\phantom \dagger}_{i}
+
\frac{U_{\rm bg}}{2}
\sum_{i}
a^{\dagger}_{i}
a^{\dagger}_{i}
a^{\phantom \dagger}_{i}
a^{\phantom \dagger}_{i}
\nonumber\\
&+&
\tilde g
\sum_{i}
\left(
b^{\dagger}_{i}
a^{\phantom \dagger}_{i}
a^{\phantom \dagger}_{i}
+
a^{\dagger}_{i}
a^{\dagger}_{i}
b^{\phantom \dagger}_{i}
\right)
\;,
\end{eqnarray}
where
$a^{\dagger}_{i}$~($b^{\dagger}_{i}$)
and
$a^{\phantom \dagger}_{i}$~($b^{\phantom \dagger}_{i}$)
are creation and annihilation operators of a single atom (molecule) at a lattice site $i$,
$\delta=\Delta\mu(B-B_0)$ is a detuning from the Feshbach resonance.
Here, $\Delta\mu$ is the difference in magnetic moments of the two atoms and a molecule.
The atom-molecule conversion is determined by
$
\tilde g
=
\hbar
\sqrt{2 \pi a_{\rm bg}
\Delta B \Delta \mu /m}
/(2 \pi l_{\rm a}^2)^{3/4}
$
and the background on-site atomic interaction parameter is
$
U_{\rm bg}
=
\sqrt{2/\pi}
\hbar \omega
\left(
a_{\rm bg} / l_{\rm a}
\right)
$
with $a_{\rm bg}$ being the background scattering length.
In the Gaussian approximation, the atomic and molecular tunneling matrix elements are given by
$
t_{\rm a,m}
=
\frac{\hbar \omega}{2}
\left[
1 - \left( \frac{2}{\pi} \right)^{2}
\right]
\left(
\frac{\lambda_{\rm L}}{4 l_{\rm a,m}}
\right)^{2}
e^{
-
\left(
\lambda_{\rm L}/4 l_{\rm a,m}
\right)^{2}
}
$.
Since
$l_{\rm m} = l_{\rm a}/\sqrt{2}$,
the molecular tunneling rate is much smaller than the atomic one.
\section{Solution of the on-site problem}
The on-site problem for the Hamiltonian (\ref{bh}) can be easily solved
analytically. In the case when the atoms are on the same lattice site
there are two eigenmodes which are superpositions of the two-atom and
molecular states with the energies
\begin{equation}
\label{e}
E_\pm
=
\frac
{\delta'+U_{\rm bg}}
{2}
\pm
\sqrt
{
\left(
\frac
{\delta'-U_{\rm bg}}
{2}
\right)^2
+
2
\tilde g^2
}
\;,
\end{equation}
and the probability of finding a molecule
\begin{equation}
\label{pm}
p_{{\rm m}\pm}
=
\frac{1}{2}
\left[
1
\pm
\frac
{\delta'-U_{\rm bg}}
{
\sqrt
{
\left(
\delta'-U_{\rm bg}
\right)^2
+
8
\tilde g^2
}
}
\right]
\;,
\end{equation}
where
$\delta'=\delta-\frac{3}{2}\hbar\omega$
is an effective detuning.
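As a quick numerical check, the on-site two-state Hamiltonian in the basis $\{|2\rangle, |1_{\rm m}\rangle\}$ can be diagonalized directly; the following sketch (parameter values are arbitrary illustrations, energies in units of $\hbar\omega$) reproduces Eqs.~(\ref{e}) and (\ref{pm}):

```python
import numpy as np

# On-site two-state Hamiltonian in the basis {|2>, |1_m>}:
# diagonal: background interaction U_bg and effective detuning delta',
# off-diagonal: atom-molecule coupling sqrt(2)*g (units of hbar*omega).
U_bg, dp, g = -0.3, 0.7, 0.25          # illustrative values
H = np.array([[U_bg, np.sqrt(2) * g],
              [np.sqrt(2) * g, dp]])
E, V = np.linalg.eigh(H)               # ascending: E[0] = E_-, E[1] = E_+

# Closed-form energies, Eq. (2)
root = np.sqrt(((dp - U_bg) / 2) ** 2 + 2 * g ** 2)
E_pm = np.array([(dp + U_bg) / 2 - root, (dp + U_bg) / 2 + root])

# Molecular probabilities, Eq. (3): weight of |1_m> in each eigenvector
p_m = V[1, :] ** 2
p_pm = 0.5 * (1 + np.array([-1, 1]) * (dp - U_bg)
              / np.sqrt((dp - U_bg) ** 2 + 8 * g ** 2))
```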
The two-atom on-site problem was solved exactly in Ref.~\cite{DKOS} for an infinite number of bands
neglecting the atom-atom interaction.
The eigenenergies $E$ are shown to be determined by the equation
\begin{equation}
\label{emo}
E - \delta'
=
\frac{2\sqrt{\pi}\tilde g^2}{\hbar \omega}
\frac
{\Gamma (- E / 2 \hbar \omega)}
{\Gamma (- E / 2 \hbar \omega - 1/2)}
\;.
\end{equation}
The eigenenergies given by Eq.~(\ref{e}) for $U_{\rm bg}=0$ and Eq.~(\ref{emo})
are plotted in Fig.~\ref{ee}. As we see, our lower-branch solution $E_-$ in Eq.~(\ref{e})
is in excellent agreement with the corresponding branch of Eq.~(\ref{emo}) for arbitrary $\delta$.
The upper-branch solution $E_+$ fails to reproduce the second branch of Eq.~(\ref{emo})
if $\delta$ is far above the Feshbach resonance, where the contribution of the second band
becomes significant, remaining, however, in very good agreement near the resonance
and below it. This implies
that the lowest-band approximation is valid if the effective detuning
$\delta'$ is less than
the gap between the two lowest Bloch bands, which is the quantity of the order of $\hbar\omega$,
and/or if we are interested in the eigenmodes of the Hamiltonian (\ref{bh})
with the energies less than the energy of the second Bloch band.
The latter is always the case in the present work.
In addition, the parameters $U_{\rm bg}$ and $\tilde g$ must be much smaller than
the band separation, which is also fulfilled here.
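Although Eq.~(\ref{emo}) is transcendental, it is straightforward to solve numerically with a bracketing root finder. A sketch (the coupling $2\sqrt{\pi}\tilde g^2/(\hbar\omega)^2=0.1$ follows the caption of Fig.~\ref{ee}; $\delta'=0$ is an arbitrary illustration; energies in units of $\hbar\omega$):

```python
from scipy.optimize import brentq
from scipy.special import gamma

c = 0.1            # 2*sqrt(pi)*g^2 in units of (hbar*omega)^2, as in Fig. 1
dp = 0.0           # effective detuning delta' (illustrative)

# f(E) = 0 is Eq. (4) in units hbar*omega = 1
def f(E):
    return E - dp - c * gamma(-E / 2) / gamma(-E / 2 - 0.5)

# For dp = 0 the lower branch lies in (-1, 0): f changes sign on that
# interval and both Gamma functions are pole-free there, so brentq converges.
E_lower = brentq(f, -0.9, -0.01)
```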
\begin{figure}[tb]
\centering
\hspace{-3cm}
\includegraphics[width=8cm]{figure1.eps}
\caption{Eigenenergies in the case of two atoms on the same lattice site.
Solid lines show the results given by Eq.~(\ref{e}), which correspond
to the lowest-band approximation. The results for the infinite number of bands [Eq.~(\ref{emo})]
are shown by dashed lines.
$U_{\rm bg}=0$,
$2 \sqrt{\pi} \tilde g^2 / \left(\hbar\omega\right)^2=0.1$.
}
\label{ee}
\end{figure}
\section{Eigenmodes of the complete Hamiltonian and the soliton band}
We consider a one-dimensional model with $L$ lattice sites
and assume that $L$ is odd\footnote{In the case of even $L$ there will be only inessential modifications
in the equations.}.
Under periodic boundary conditions the eigenstates of the Hamiltonian~(\ref{bh}) are
\begin{eqnarray}
\label{psi}
|\psi_k\rangle
&=&
c^{\rm m}_{k}
\sum_{j=1}^L
\left(
\hat T/\tau_k
\right)^{j-1}
|1_{\rm m} 0 \dots 0\rangle
+
c^{\rm a}_{0k}
\sum_{j=1}^L
\left(
\hat T/\tau_k
\right)^{j-1}
|2 0 \dots 0\rangle
\nonumber\\
&+&
c^{\rm a}_{1k}
\sum_{j=1}^L
\left(
\hat T/\tau_k
\right)^{j-1}
|1 1 0 \dots 0\rangle
+
\dots
\nonumber\\
&+&
c^{\rm a}_{(L-1)/2,k}
\sum_{j=1}^L
\left(
\hat T/\tau_k
\right)^{j-1}
|1 0 \dots 0 1 0 \dots 0\rangle
\;,
\end{eqnarray}
where $|1_{\rm m} 0 \dots 0\rangle$ is a state with one molecule
on the first lattice site and all the other sites being unoccupied,
$|n_1 \dots n_L\rangle$ is a state with $n_i$ atoms on site $i$, $i=1,\dots,L$.
$\hat T$ is the translation operator which has the eigenvalues
$
\tau_k
=
\exp
\left(
i \pi k/k_{\rm L}
\right)
$
with the wave number
$k=k_{\rm L} 2 \nu/L$, $\nu=0,\pm 1,\dots,\pm (L-1)/2$~\cite{Scott}.
The eigenvalue problem for the Hamiltonian~(\ref{bh}) can be written down in the matrix form
\begin{equation}
\label{evp}
\left(
\begin{array}{cc}
\epsilon_k^{\rm m} & A^T \\
A & Q_k
\end{array}
\right)
\left(
\begin{array}{c}
{c}_k^{\rm m} \\
{\bf c}_k^{\rm a}
\end{array}
\right)
=
E_k
\left(
\begin{array}{c}
{c}_k^{\rm m} \\
{\bf c}_k^{\rm a}
\end{array}
\right)
\;,
\end{equation}
where
$
\epsilon_{k}^{\rm m}
=
\delta'
-
2 t_{\rm m}
\cos
\left(
\pi k/k_{\rm L}
\right)
$.
The vector $A$ has a length $(L+1)/2$ and its nonvanishing
element is $A_{1}=\sqrt{2}\tilde g$.
The nonvanishing elements of the tridiagonal $(L+1)/2 \times (L+1)/2$ matrix $Q_k$
are given by~\cite{Scott}
\begin{eqnarray}
&&
Q_{11}=U_{\rm bg}
\;,\;
Q_{21} = Q_{12}^* = - t_{\rm a} \sqrt{2} (1+\tau_k)
\;,
\\
&&
Q_{i+1,i} = Q_{i,i+1}^* = - t_{\rm a} (1+\tau_k)
\;,\;
i=2,\dots,(L-1)/2
\;,
\nonumber\\
&&
Q_{(L+1)/2,(L+1)/2}
=
- t_{\rm a}
\left[
\tau_k^{(L+1)/2}
+
\tau_k^{(L-1)/2}
\right]
\;.
\nonumber
\end{eqnarray}
The eigenvectors in Eq.~(\ref{evp}) consist of two parts
${c}_k^{\rm m}$,
$
{\bf c}_k^{\rm a}
=
{\rm col}
\left[
c^{\rm a}_{0k},\dots,c^{\rm a}_{(L-1)/2,k}
\right]
$,
and satisfy the normalization condition
\begin{eqnarray}
\label{norm}
\left|
c_{k}^{\rm m}
\right|^2
+
\sum_{i=0}^{(L-1)/2}
\left|
c_{ik}^{\rm a}
\right|^2
&=&
1
\;.
\end{eqnarray}
In the absence of the molecular mode, the eigenvalue problem~(\ref{evp}) reduces
to the one solved in Ref.~\cite{Scott}, where it was shown that in the case of attractive
interaction the energy spectrum always consists of a (quasi)continuum band
and a discrete level below the (quasi)continuum, which corresponds to the bright soliton.
Its characteristic feature is that
$
\left|
c_{0k}^{\rm a}
\right|^2
\gg
\left|
c_{ik}^{\rm a}
\right|^2
$,
$i=1,\dots,(L-1)/2$, i.e., the probability of finding two atoms on the same lattice site
is much higher than all the other ones.
This localization corresponds to the soliton solution of the discrete nonlinear
Schr\"odinger equation and, therefore, the discrete level can be called
a ``soliton band''~\cite{Scott}. Our aim is to investigate
the influence of the molecular mode on the soliton band.
After the eigenvalue problem~(\ref{evp}) is solved, one can calculate
the soliton binding energy $E_{\rm b}$
which is defined as the difference of the energy at the bottom
of the (quasi)continuum and the soliton level at $\nu=0$ which corresponds to $k=k_0=0$.
The effective mass $m^*$ can be worked out using a quadratic approximation for the eigenenergy
at some small value of $\nu$ (e.g., $\nu=1$)
$
E_{k_1}
=
E_{k_0}
+
\hbar^2 k_1^2
/
\left(
2 m^*
\right)
$,
which leads to
\begin{eqnarray}
m^*
&=&
2 \hbar^2 k_{\rm L}^2
/
\left[
\left(
E_{k_1}
-
E_{k_0}
\right)
L^2
\right]
\;.
\end{eqnarray}
According to Eq.~(\ref{psi}) the distance between the atoms $w_k$
is a random variable which takes the values
$w_{ki}=0,1,\dots,(L-1)/2$,
with the probabilities
$
\left|
c_{ik}^{\rm a}
\right|^2
/
\left(
1
-
\left|
c_{k}^{\rm m}
\right|^2
\right)
$.
Thus, it is necessary to calculate
not only the mean interatomic distance $\langle w_k \rangle$ but also its standard deviation
\begin{eqnarray}
\sigma_{wk}
&=&
\sqrt
{
\langle w_k^2 \rangle
-
\langle w_k \rangle^2
}
\;,
\langle w_k^l \rangle
=
\sum_{i=0}^{(L-1)/2}
\frac
{
i^l
\left|
c_{ik}^{\rm a}
\right|^2
}
{
1
-
\left|
c_{k}^{\rm m}
\right|^2
}
\;,
\end{eqnarray}
and the soliton width can be defined as
$
\sqrt
{
\langle w_k^2 \rangle
}
$.
We have solved the eigenvalue problem~(\ref{evp}) numerically for finite values of $L$
and analytically in the limit of infinite lattice.
The results are presented below.
We consider the cases of attractive and repulsive atomic interactions and
concentrate on the properties of the lower-energy modes.
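The finite-$L$ diagonalization itself is elementary; a minimal numpy sketch of the $k$-block of Eq.~(\ref{evp}) reads as follows (parameter values are illustrative only; energies in units of $t_{\rm a}$):

```python
import numpy as np

def spectrum(L=41, t_a=1.0, t_m=0.1, U_bg=-1.0, g=0.3, dp=3.0, nu=0):
    """Eigenvalues of the k-block of the eigenvalue problem;
    k = 2*nu/L in units of k_L. Parameter values are illustrative."""
    n = (L + 1) // 2                          # size of the atomic block
    k = 2.0 * nu / L
    tau = np.exp(1j * np.pi * k)              # translation eigenvalue
    H = np.zeros((n + 1, n + 1), dtype=complex)
    H[0, 0] = dp - 2.0 * t_m * np.cos(np.pi * k)   # molecular energy eps_k^m
    H[0, 1] = H[1, 0] = np.sqrt(2.0) * g           # Feshbach coupling A_1
    H[1, 1] = U_bg                                 # on-site interaction Q_11
    H[2, 1] = -t_a * np.sqrt(2.0) * (1 + tau)      # Q_21
    H[1, 2] = np.conj(H[2, 1])
    for i in range(2, n):                          # Q_{i+1,i}, i = 2..(L-1)/2
        H[i + 1, i] = -t_a * (1 + tau)
        H[i, i + 1] = np.conj(H[i + 1, i])
    # Q_{(L+1)/2,(L+1)/2}; real for the allowed wave numbers
    H[n, n] = (-t_a * (tau ** ((L + 1) // 2) + tau ** ((L - 1) // 2))).real
    return np.linalg.eigvalsh(H)              # ascending real eigenvalues

E = spectrum()   # discrete levels split off below the (quasi)continuum
```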
\subsection{Attractive atomic interaction}
\begin{figure}[tb]
\centering
\hspace{-3cm}
\includegraphics[width=8cm]{figure2.eps}
\caption{Energy eigenvalues of the Hamiltonian~(\ref{bh}).
The parameters are $s=5$,
$2 \sqrt{\pi} \tilde g^2 / \left(\hbar\omega\right)^2=0.1$,
$\delta'/\hbar\omega=3$,
$a_{\rm bg}/\lambda_L=-0.005$.
Dots are the results of numerical solution of Eq.~(\ref{evp}) for $L=41$ and
the solid lines correspond to the limit $L\to\infty$.
The spectrum is truncated from above in order to be consistent
with the lowest-band approximation.
}
\label{s}
\end{figure}
\begin{figure}[tb]
\centering
\hspace{-3cm}
\includegraphics[width=8cm]{figure3.eps}
\caption{Probabilities of molecular and atomic states corresponding to the soliton band:
$
\left|
c_{0}^{\rm m}
\right|^2
$ (m),
$
\left|
c_{i0}^{\rm a}
\right|^2
$ ($i$),
$i=0,1,2$
[$
\left|
c_{20}^{\rm a}
\right|^2
$
is shown by the dashed line].
The parameters are the same as in Fig.~\ref{s}.
}
\label{pn}
\end{figure}
\begin{figure}[tb]
\centering
\hspace{-3cm}
\includegraphics[width=8cm]{figure4.eps}
\caption{Soliton binding energy.
The parameters are the same as in Fig.~\ref{s} and $k=0$.
}
\label{ebn}
\end{figure}
\begin{figure}[tb]
\centering
\hspace{-3cm}
\includegraphics[width=8cm]{figure5.eps}
\caption{The ratio of the soliton effective mass $m^*$ to the effective mass
$\tilde m$ at the bottom of (quasi)continuum band.
The parameters are the same as in Fig.~\ref{s}.
}
\label{mn}
\end{figure}
\begin{figure}[tb]
\centering
\hspace{-3cm}
\includegraphics[width=10cm]{figure6.eps}
\caption{Mean interatomic distance $\langle w_k \rangle$ (left panel)
and its standard deviation $\sigma_{wk}$ (right panel).
The parameters are the same as in Fig.~\ref{s}.
}
\label{wdn}
\end{figure}
We consider first bosons with attractive interactions ($U_{\rm bg}<0$).
If $\delta'$ is negative and its absolute value is very large,
the coupling between the molecular
mode and the atomic mode is negligible and we have two discrete levels below the (quasi)continuum.
The lower one corresponds to the pure molecule and the other to the atomic bright soliton.
If $\delta'$ increases, i.e., we come closer to the Feshbach resonance,
both discrete levels approach the (quasi)continuum.
At some critical value of $\delta'=\delta'_-$
the upper level merges with the (quasi)continuum.
In the numerical calculations it is not obvious how to determine $\delta'_-$
exactly, because several definitions are possible.
Analytical analysis in the case of the infinite lattice shows that the merging occurs
if at least one of the inequalities
\begin{equation}
\label{ineq}
\left|
c_{1k}^{\rm a}
\right|
>
\left|
c_{ik}^{\rm a}
\right|
\;,
i=2,3,\dots,
\end{equation}
is violated. We adopt this as the definition of $\delta'_-$; performing the numerical diagonalization
for $k=0$ with the values of the parameters given in the caption of Fig.~\ref{s}, we obtain
$\delta'_-=-1.531\,\hbar\omega$. In order to have inequalities (\ref{ineq}) again fulfilled,
one has to increase $\delta'$ up to $\delta'_+$. Using the same values of the parameters
we get $\delta'_+=-1.490\,\hbar\omega$. If $\delta'>\delta'_+$, a discrete level
appears above the (quasi)continuum, while the lower one,
which becomes a linear combination of atomic and molecular states, remains below (see Fig.~\ref{s}).
If we increase $\delta'$ further and go far away from the Feshbach resonance
($\delta' \gg \hbar\omega$),
the contribution of the molecular mode to the lowest-energy eigenstate becomes negligible~(Fig.~\ref{pn})
and we have a pure atomic bright soliton below the (quasi)continuum~\cite{Scott}.
The upper discrete level is located very far above the (quasi)continuum and cannot be interpreted
within the lowest-band approximation.
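The merging criterion~(\ref{ineq}) is straightforward to test numerically. A minimal Python sketch, taking as input an assumed list of coefficient magnitudes $|c_{1k}^{\rm a}|, |c_{2k}^{\rm a}|, \dots$:

```python
def soliton_merged(c_a_abs):
    """True if the inequalities |c_1| > |c_i| (i >= 2) are violated,
    i.e. the discrete level has merged with the (quasi)continuum.

    c_a_abs : list [|c_1|, |c_2|, ...] of coefficient magnitudes
    """
    c1 = c_a_abs[0]
    return any(ci >= c1 for ci in c_a_abs[1:])
```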
The soliton binding energy $E_{\rm b}$ is shown in Fig.~\ref{ebn}.
Due to the large contribution of the molecular mode
near the resonance, the binding energy is larger than its asymptotic value at $\delta'\to\infty$.
The effective mass $m^*$ is also larger at smaller values of $\delta'$ (see Fig.~\ref{mn})
because $t_{\rm m} \ll t_{\rm a}$ implies that
the effective mass of the molecule is much larger than the atomic effective mass.
The corresponding contributions of the molecular and atomic states to the soliton band
are shown in Fig.~\ref{pn}.
The mean interatomic distance $\langle w_k \rangle$ as well as its standard deviation $\sigma_{wk}$
are shown in Fig.~\ref{wdn}. The interatomic distance is well below the lattice constant $d$
and the maximal localization is achieved at the edges of the Brillouin zone.
However, quantum fluctuations are very strong and $\sigma_{wk}>\langle w_k \rangle$.
\subsection{Repulsive atomic interaction}
\begin{figure}[tb]
\centering
\hspace{-3cm}
\includegraphics[width=8cm]{figure7.eps}
\caption{Probabilities of molecular and atomic states corresponding to the soliton band:
$
\left|
c_{0}^{\rm m}
\right|^2
$ (m),
$
\left|
c_{i0}^{\rm a}
\right|^2
$ ($i$),
$i=0,1,2$
[$
\left|
c_{20}^{\rm a}
\right|^2
$
is shown by the dashed line].
$a_{\rm bg}/\lambda_L=0.005$ and the other
parameters are the same as in Fig.~\ref{s}.
}
\label{pp}
\end{figure}
\begin{figure}[tb]
\centering
\hspace{-3cm}
\includegraphics[width=8cm]{figure8.eps}
\caption{The ratio of the soliton effective mass $m^*$ to the effective mass
$\tilde m$ at the bottom of (quasi)continuum band.
The parameters are the same as in Fig.~\ref{pp}.
}
\label{mp}
\end{figure}
\begin{figure}[tb]
\centering
\hspace{-3cm}
\includegraphics[width=10cm]{figure9.eps}
\caption{Mean interatomic distance $\langle w_k \rangle$ (left panel)
and its standard deviation $\sigma_{wk}$ (right panel).
The parameters are the same as in Fig.~\ref{pp}.
}
\label{wdp}
\end{figure}
In the case of repulsive atomic interaction ($U_{\rm bg}>0$), the situation is quite different.
There can be only one discrete level below the (quasi)continuum which is occupied
by the molecule as long as $\delta'$ is negative and its absolute value remains very large.
Above the (quasi)continuum, there is another discrete level corresponding to the atomic
bright soliton. If we increase $\delta'$,
the lower discrete level approaches the (quasi)continuum, and the probabilities of the atomic
states become larger, meaning that the system enters the bright soliton regime
supported by the molecule creation (see Fig.~\ref{pp}).
Inequalities (\ref{ineq}) are satisfied in this regime.
If we increase $\delta'$ further and reach the value $\delta'_-$,
the probability of the molecular state becomes very small.
Inequalities (\ref{ineq}) are violated and
the discrete level merges with the (quasi)continuum, i.e.,
the bright soliton is destroyed.
The probability
$
\left|
c_{k}^{\rm m}
\right|^2
$
can also be interpreted as the relative population of the molecular component.
It is a decreasing function of $\delta'$ like in the case of classical
atomic-molecular solitons~\cite{konotop}.
If the detuning is further increased up to $\delta'_+$,
the soliton band appears above the (quasi)continuum.
For the values of parameters used in our numerical estimations,
$\delta_-'=1.479\,\hbar\omega$ and
$\delta_+'=1.521\,\hbar\omega$.
The soliton binding energy $E_{\rm b}$ is again
a decreasing function of $\delta'$ which vanishes at $\delta'=\delta_-'$.
The effective mass $m^*$ equals the effective mass of the molecule for large
negative $\delta'$ and reaches the value $\tilde m$ at $\delta'=\delta_-'$ (Fig.~\ref{mp}).
If we come closer to $\delta_-'$, the solitons become less localized, especially at $k=0$,
and the interatomic-distance fluctuations increase (Fig.~\ref{wdp}).
According to our definition of the soliton width, its behavior is similar to that of
$\langle w_k \rangle$ and $\sigma_{wk}$ shown in Fig.~\ref{wdp}.
\subsection{The limit $L\to\infty$}
\begin{figure}[tb]
\centering
\hspace{-3cm}
\includegraphics[width=8cm]{figure10.eps}
\caption{Deviations of the eigenvalues of Eq.~(\ref{evp}) for finite $L$ from those
obtained in the limit $L\to\infty$.
$\Delta_t$ is shown by the dashed line.
The parameters are the same as in Fig.~\ref{s}.
}
\label{d}
\end{figure}
In the limit $L\to\infty$, the wave number $k$ becomes a continuous variable.
The energies of the continuum band are enclosed in the interval
$
\left|
E_k
\right|
\le
q_k
$,
where
$
q_k
=
4 t_{\rm a}
\cos
\left(
\frac{\pi}{2}
\frac{k}{k_{\rm L}}
\right)
$,
and the coefficients
$c_{0k}^{\rm a}$, $c_{k}^{\rm m}$,
in Eq.~(\ref{evp}) become negligibly small.
Outside of the continuum band, the solutions have the form
$
c_{jk}^{\rm a}
=
a_k b_k^j
\exp
\left(
i
\frac{\pi}{2}
\frac{k}{k_{\rm L}}
j
\right)
$,
$j=1,2,\dots,\infty$.
Substituting this ansatz into Eq.~(\ref{evp}), we obtain the equation
for the eigenenergy ${\cal E}_k = \lim_{L\to\infty}E_k$
\begin{eqnarray}
\label{E}
{\cal E}_k^2
&=&
\left(
U_{\rm bg}
+
U_k
\right)^2
+
q_k^2
\;,\;
U_k
=
2
\tilde g^2
/
\left(
{\cal E}_k-\epsilon_{k}^{\rm m}
\right)
\;.
\end{eqnarray}
Note that the quantity $U_{\rm bg} + U_k$ plays the role of an effective atomic
interaction.
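Equation~(\ref{E}) is implicit in ${\cal E}_k$. One simple way to locate a discrete level numerically is bisection on $f({\cal E}) = {\cal E}^2 - (U_{\rm bg}+U_k)^2 - q_k^2$ over a bracket that avoids the pole at ${\cal E}=\epsilon_k^{\rm m}$; bisection is our illustrative choice here, not the treatment used in the text, and all parameter values in the example are placeholders (in units of $\hbar\omega$).

```python
def energy_root(U_bg, g2, eps_m, q, lo, hi, tol=1e-12):
    """Bisection for a discrete eigenenergy E outside the continuum,
    solving E^2 = (U_bg + 2*g2/(E - eps_m))^2 + q^2.

    lo, hi must bracket a sign change of f and avoid the pole E = eps_m.
    """
    def f(E):
        return E * E - (U_bg + 2.0 * g2 / (E - eps_m)) ** 2 - q * q

    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fm = f(mid)
        if fm == 0.0 or hi - lo < tol:
            return mid
        if (fm > 0.0) == (flo > 0.0):
            lo, flo = mid, fm       # keep the sign change bracketed
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For instance, with $U_{\rm bg}=-1$, $\tilde g^2=0.25$, $\epsilon_k^{\rm m}=-3$, and $q_k=1$, a root below the lower band edge ${\cal E}=-q_k$ is found inside the bracket $(-2.5,-1)$.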
The values of $a_k$ and $b_k$ corresponding to a certain ${\cal E}_k$ are given by
$
a_k
=
\sqrt{2}
c_{0k}^{\rm a}
$,
$
b_k
=
\left(
U_{\rm bg}
+
U_k
-
{\cal E}_k
\right)/q_k
$,
and the expressions for the probabilities of the state with two atoms
on the same lattice site and the molecular state take the form
\begin{eqnarray}
\label{c0a}
\left|
c_{0k}^{\rm a}
\right|^2
&=&
\left(
1-b_k^2
\right)
/
\left[
1
+
b_k^2
+
\left(
1 - b_k^2
\right)
S_k
\right]
\;,
\nonumber\\
\left|
c_{k}^{\rm m}
\right|^2
&=&
2
\tilde g^2
\left|
c_{0k}^{\rm a}
\right|^2
/
\left(
{\cal E}_k
-
\epsilon_{k}^{\rm m}
\right)^2
\;,
\end{eqnarray}
where
$
S_k
=
2
\tilde g^2
/
{
\left(
{\cal E}_k
-
\epsilon_{k}^{\rm m}
\right)^2
}
$.
Eq.~(\ref{E}) can be multiplied by
$
\left(
{\cal E}_k
-
\epsilon_{k}^{\rm m}
\right)^2
$
and treated as a quartic equation for ${\cal E}_k$, which always has four roots.
However, depending on the values of the parameters, only one or two roots are real
and provide normalized eigenstates, implying that the others are unphysical
and should be rejected. The normalization condition (\ref{norm}) requires
$
a_k^2/2<1
$
as well as
$
\left|
b_k
\right|
< 1
$.
One can easily show that in the special case $t_{\rm a}=t_{\rm m}=0$ the physical solutions
of Eq.~(\ref{E}) are given by (\ref{e}).
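The normalizability constraints translate into a one-line filter for the quartic roots; a minimal sketch:

```python
def is_physical(a_k, b_k):
    """Normalizability filter for roots of the quartic:
    requires a_k^2 / 2 < 1 and |b_k| < 1 (a decaying tail)."""
    return a_k * a_k / 2.0 < 1.0 and abs(b_k) < 1.0
```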
We substitute ${\cal E}_{k\pm} = \pm q_k$ corresponding to the edges of the continuum band
into Eq.~(\ref{E}) and get
\begin{equation}
\label{b}
U_{\rm bg}+U_{k\pm}=0
\;,
\end{equation}
which leads to the identity
$
\left|
b_k
\right|
= 1
$
and as a consequence to the violation of inequalities~(\ref{ineq}).
Eq.~(\ref{b}) allows us to obtain the boundaries $\delta'_-$ and $\delta'_+$ of the interval
of $\delta'$ within which there is only one physical solution:
\begin{equation}
\label{dpm}
\delta'_\pm
=
\pm q_k
+
2 t_{\rm m}
\cos
\left(
\pi k/k_{\rm L}
\right)
+
2 \tilde g^2/U_{\rm bg}
\,.
\end{equation}
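Equation~(\ref{dpm}) is simple to evaluate; the following sketch returns the pair $(\delta'_-, \delta'_+)$, with all quantities measured in units of $\hbar\omega$ (the parameter values in any example are arbitrary illustrations, not those of the figures).

```python
import math

def delta_pm(t_a, t_m, g2, U_bg, k_over_kL):
    """Band-merging detunings (delta'_-, delta'_+) from Eq. (dpm),
    with q_k = 4 t_a cos(pi k / 2 k_L)."""
    q_k = 4.0 * t_a * math.cos(0.5 * math.pi * k_over_kL)
    base = 2.0 * t_m * math.cos(math.pi * k_over_kL) + 2.0 * g2 / U_bg
    return base - q_k, base + q_k
```

Note that the interval width is $\delta'_+-\delta'_-=2q_k$, i.e. exactly the width of the continuum band at the given $k$.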
For the values of parameters used in the numerical diagonalization, we find
$\delta'_-=-1.531\,\hbar\omega$
and
$\delta'_+=-1.479\,\hbar\omega$
in the case of attractive interaction, and
$\delta'_-=1.479\,\hbar\omega$
and
$\delta'_+=1.532\,\hbar\omega$
in the case of repulsive interaction.
The values of $\delta_-'$ are in perfect agreement with the results of numerical
calculations for $L=41$, while the values of $\delta_+'$ deviate slightly from the corresponding
numerical estimates.
In the special case $t_{\rm a}=t_{\rm m}=0$, $\delta'_-=\delta'_+=\delta'_*$ and Eq.~(\ref{b})
leads to the condition
\begin{equation}
\label{cond}
U_{\rm bg}
-
2
\tilde g^2/\delta'_*
=
0
\;.
\end{equation}
This is equivalent to the requirement that the effective scattering length
$
a_{\rm bg}(1-\Delta B \Delta\mu/\delta')
$,
which appears in the mean-field theory as a result of the adiabatic elimination
of the molecular field~\cite{MVA}, vanishes.
The calculations presented above show that in the interval of the detunings
$\delta'_- < \delta' <\delta'_+$
the effective atomic interaction is gradually switched from the attractive
to the repulsive one.
The probabilities
$
\left|
c_{jk}^{\rm a}
\right|^2
$,
$j=1,2,\dots$,
of the atomic states in Eq.~(\ref{psi}) decrease with $j$ and have the form
\begin{eqnarray}
\left|
c_{ik}^{\rm a}
\right|^2
=
\left(
1 - b_k^2
\right)
b_k^{2(i-1)}
\left[
1
-
\left|
c_{0k}^{\rm a}
\right|^2
\left(
1
+
S_k
\right)
\right]
\;.
\end{eqnarray}
The soliton effective mass
\begin{equation}
m^*
=
\hbar^2
\left(
\left.
\frac
{\partial^2 {\cal E}_k}
{\partial k^2}
\right|_{k=0}
\right)^{-1}
=
\hbar^2
\left.
\frac
{
{\cal E}_k
+
\left(
U_{\rm bg}
+
U_k
\right)
S_k
}
{
2
t_{\rm m}
\left(
U_{\rm bg}
+
U_k
\right)
S_k
- 4 t_{\rm a}^2
}
\right|_{k=0}
\end{equation}
is smaller than that at the bottom of the continuum
\begin{equation}
\tilde m
=
\hbar^2
\left(
-
\left.
\frac
{\partial^2 q_k}
{\partial k^2}
\right|_{k=0}
\right)^{-1}
=
\frac
{\hbar^2 k_{\rm L}^2}
{\pi^2 t_{\rm a}}
\;.
\end{equation}
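The closed form for $\tilde m$ can be checked against a finite-difference second derivative of $q_k$; a small Python sketch, assuming units with $\hbar = k_{\rm L} = 1$ for illustration:

```python
import math

def curvature_mass(t_a, k_L=1.0, hbar=1.0, h=1e-5):
    """Finite-difference check of the band-bottom effective mass
    m~ = hbar^2 k_L^2 / (pi^2 t_a), using q_k = 4 t_a cos(pi k / 2 k_L)."""
    def q(k):
        return 4.0 * t_a * math.cos(0.5 * math.pi * k / k_L)

    # central second difference of q_k at k = 0
    d2 = (q(h) - 2.0 * q(0.0) + q(-h)) / (h * h)
    return -hbar ** 2 / d2
```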
The first two moments of the interatomic-distance distribution can be shown to be
\begin{eqnarray}
\langle
w_k
\rangle
&=&
2
\left|
c_{0k}^{\rm a}
\right|^2
b_k^2
/
\left[
\left(
1 - b_k^2
\right)^2
\left(
1
-
\left|
c_{k}^{\rm m}
\right|^2
\right)
\right]
\;,
\\
\langle
w_k^2
\rangle
&=&
2
\left|
c_{0k}^{\rm a}
\right|^2
b_k^2
\left(
1 + b_k^2
\right)
/
\left[
\left(
1 - b_k^2
\right)^3
\left(
1
-
\left|
c_{k}^{\rm m}
\right|^2
\right)
\right]
\;.
\nonumber
\end{eqnarray}
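These closed forms rest on the geometric sums $\sum_{i\ge1} i\, b_k^{2(i-1)} = (1-b_k^2)^{-2}$ and $\sum_{i\ge1} i^2 b_k^{2(i-1)} = (1+b_k^2)(1-b_k^2)^{-3}$, which can be verified numerically by partial summation:

```python
def geometric_moments(b2, terms=2000):
    """Partial sums of S1 = sum_{i>=1} i b^{2(i-1)} and
    S2 = sum_{i>=1} i^2 b^{2(i-1)}, whose closed forms
    1/(1-b^2)^2 and (1+b^2)/(1-b^2)^3 underlie <w_k> and <w_k^2>.

    b2 : the value of b_k^2, assumed |b2| < 1 for convergence.
    """
    s1 = s2 = 0.0
    p = 1.0                      # running power b^{2(i-1)}
    for i in range(1, terms + 1):
        s1 += i * p
        s2 += i * i * p
        p *= b2
    return s1, s2
```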
In order to demonstrate the convergence to the limit $L\to\infty$,
we have plotted in Fig.~\ref{d} the quantities
$
\Delta_{\rm s}
=
\sup_k
\left|
E_k^{({\rm s})}
-
{\cal E}_k
\right|
/
\hbar\omega
$
as well as
$
\Delta_{\rm t}
=
\sup_k
\left|
E_k^{({\rm t})}
-
q_k
\right|
/
\hbar\omega
$
and
$
\Delta_{\rm b}
=
\sup_k
\left|
E_k^{({\rm b})}
+
q_k
\right|
/
\hbar\omega
$
for different $L$, where
$E_k^{({\rm s})}$, $E_k^{({\rm t})}$, and $E_k^{({\rm b})}$
are the eigenenergies of Eq.~(\ref{evp}) corresponding to the soliton band,
the top and the bottom of the quasi-continuum band, respectively.
$\Delta_{\rm s}$ decreases exponentially with increasing $L$ and is very small
even for small $L$. The convergence for the boundaries of the continuum
band is slower, but the limit $L\to\infty$ describes the results of
the numerical diagonalization quite well already for a few tens of lattice sites.
In addition, we have compared the results of the calculations
obtained on the basis of numerical solution
of the eigenvalue problem~(\ref{evp}) for $L=41$
which are presented in Figs.~\ref{s}-\ref{wdp}
with those obtained in the limit $L\to\infty$,
and found no noticeable discrepancies.
This is consistent with the exponential decrease of $\Delta_{\rm s}(L)$.
In the absence of the magnetic field, the molecule creation is impossible
and one has to set $\tilde g=0$, $\delta'=0$, and $t_{\rm m}=0$ in all the equations.
In this special case, the normalizable solution of Eq.~(\ref{E}) is given by
$
{\cal E}_k^{(0)}
=
{\rm sign}(U_{\rm bg})
\sqrt{
U_{\rm bg}^2
+
q_k^2
}
$,
which leads to the following expression for the effective mass
$
m^{*(0)}
=
-
\hbar^2
{\rm sign}(U_{\rm bg})
\sqrt{
U_{\rm bg}^2
+
16 t_{\rm a}^2
}
/
\left(
4 t_{\rm a}^2
\right)
$.
These are exactly the results presented in Ref.~\cite{Scott}.
The soliton band exists again for repulsive as well as attractive atomic
interaction, but in the case of repulsive interaction it appears to be
a highly excited mode with the energy above the continuum band.
\section{Conclusion}
Summarizing, we have investigated quantum lattice solitons in a system of two
ultracold bosons near the Feshbach resonance.
The binding energy, effective mass, and spatial width of the solitons
can be manipulated by varying the detuning from
the Feshbach resonance.
In the case of attractive atomic interactions, the molecule creation stabilizes
the solitons, increasing their effective mass as well as the binding energy
and decreasing the width.
In the case of repulsive interactions, the molecule creation makes
bright solitons possible within a certain interval of detunings,
in analogy with the corresponding classical system.
Quantum fluctuations render the interatomic distance a random quantity,
with a standard deviation even larger than the mean value.
The classical limit of the problem studied in the present work was considered
in~\cite{konotop}. Our results for the relative populations of the atomic
and molecular components are in agreement with the corresponding classical results.
In order to understand the transition from quantum to classical solitons,
it is necessary to perform analogous calculations for larger numbers of atoms.
This can be done employing the same method as in the present study.
However, one has to keep in mind
that the dimension of the Hilbert space increases rapidly with the particle number
and the number of lattice sites.
\ack
This work was partly supported by the INTAS (Project No. 01-855) and SFB/TR 12.
K.V.K. would like to thank the University of Bath for kind hospitality.
\section*{References}
\section{Introduction}
RCW 86 (also known as MSH 14$-$63 or G315.4$-$2.3) is one of the supernova remnants (SNRs) that has been detected in the whole electromagnetic spectrum, from the radio continuum, optical, and infrared domains to the energetic X-rays and GeV/TeV $\gamma$-rays \citep[e.g.,][]{1987A&A...183..118K,1997AJ....114.2664S,2011ApJ...741...96W,2014MNRAS.441.3040B,2016ApJ...819...98A,2016arXiv160104461H}.
Of particular interest are the bright TeV $\gamma$-rays and non-thermal X-rays, which are tightly related to the production of cosmic rays (CRs) via the
diffusive shock acceleration (DSA) mechanism in SNRs \citep{1978MNRAS.182..147B,1978ApJ...221L..29B}. RCW 86 is therefore suitable for studying the origin of Galactic CRs in an energy range $E < 3 \times 10^{15}$ eV and their relationship with the surrounding interstellar medium (ISM) by using multi-wavelength datasets.
RCW 86 is a relatively young SNR, first recorded in AD 185 in the Chinese historical book $Houhanshu$ \citep{1975Obs....95..190C,2006ChJAA...6..635Z}. The SNR is located slightly away from the Galactic plane ($l$, $b$) $\sim$ ($315\fdg4$, $-2\fdg3$), at only
$\sim2.5$ kpc from us \citep[e.g.,][by association with the edge of the molecular supershell GS 314.8$-$0.1$-$34 discovered by Matsunaga et al. 2001]{1969AJ.....74..879W,1996A&A...315..243R,2013MNRAS.435..910H}. The shell-like morphology of RCW 86 was first discovered
in radio continuum observations \citep{1961AuJPh..14..497M,1967AuJPh..20..297H}.
After half a century, such morphology has been
confirmed at all wavelengths, including $\gamma$-rays. The observed diameter is approximately 40 arcmin, corresponding to a diameter of $\sim30$ pc at 2.5 kpc. The progenitor system of RCW 86 (Type Ia or core-collapse (CC)) remains contentious. The CC hypothesis is supported by the presence of several B-type stars in
the neighbourhood of RCW 86 \citep{1969AJ.....74..879W}.
However, recent optical and X-ray studies reporting Fe-rich ejecta and
Balmer filaments encircling the shell suggest a Type Ia explosion \citep[e.g.][]{1997AJ....114.2664S,2011PASJ...63S.837Y,2011ApJ...741...96W,2014MNRAS.441.3040B}.
Besides, RCW 86 lacks a central compact object such as a neutron star or region of O-rich ejecta. Therefore, it is unlikely that RCW 86 is a CC SNR. According to numerical simulations, the progenitor system is also consistent with an off-centered Type Ia explosion \citep[e.g.,][]{2011ApJ...741...96W}.
\begin{figure}
\begin{center}
\includegraphics[width=87mm,clip]{rcw86_f01_cmyk.eps}
\caption{Three-color image of the SNR RCW 86 observed by $XMM$-$Newton$. The red, green, and blue colors represent the energy bands, 0.5--1.0, 1.0--2.0, and 2.0--4.5 keV, respectively. The white solid line indicates the region observed with the MOS and PN detectors. Contours represent the MOST radio continuum at a frequency of 843 MHz \citep{1996A&AS..118..329W}. The contour levels are 5, 10, 20, 40, 80, and 160 mJy beam$^{-1}$.}
\label{f01}
\end{center}
\end{figure}
RCW 86 has received much attention since the discovery of TeV $\gamma$-ray emission with the high-energy stereoscopic system (H.E.S.S.) by \cite{2009ApJ...692.1500A}. The TeV $\gamma$-ray flux of RCW 86 is ten times lower than that of the
Crab nebula, but its origin is not yet settled. Subsequently, \cite{2012A&A...545A..28L} and \cite{2014ApJ...785L..22Y}
obtained GeV $\gamma$-ray images and spectra with the $Fermi$
Large Area Telescope (LAT). By using a broad-band spectral energy distribution
(SED) fitting, they also discussed whether the $\gamma$-rays are hadronic or
leptonic in origin. They concluded that the leptonic origin was more reasonable,
but the low photon statistics did not rule out a hadronic origin. Recently,
\cite{2016arXiv160104461H} analyzed the new H.E.S.S. dataset and revealed the
shell-like morphology in TeV $\gamma$-rays, the origin of which is not yet discerned. Most recently,
\cite{2016ApJ...819...98A} obtained new GeV $\gamma$-ray images and spectra from
a 6.5-year dataset of $Fermi$ LAT. They concluded that the broad-band SED favors
the leptonic origin under the two-zone model. If the process is hadronic, the
$\gamma$-rays should spatially correspond to the interstellar gas \citep[e.g.,][]{2008A&A...481..401A,2012ApJ...746...82F,2013ApJ...768..179Y,2014ApJ...788...94F}. Therefore, a detailed spatial comparison between the interstellar gas and $\gamma$-rays is highly desirable in order to establish the origin of the high-energy emission.
\begin{deluxetable*}{cccccccc}[]
\tablewidth{\linewidth}
\tablecaption{Basic information of $XMM$-$Newton$ observations}
\tablehead{
&&&&&\multicolumn{3}{c}{Exposure}\\
\cline{6-8}\\
Observation ID & $\alpha_{\mathrm{J2000}}$ & $\delta_{\mathrm{J2000}}$ & Start Date & End Date & MOS1 & MOS2 & PN \\
& (degree) & (degree) & (yyyy-mm-dd hh:mm:ss) & (yyyy-mm-dd hh:mm:ss) & (ks) & (ks) & (ks) \\
}
\startdata
0110010701 & 220.73 & $-62.63$ & 2000-08-16 04:04:38 & 2000-08-16 10:43:07 & 17 & 16 & 15\\
0110011301 & 221.31 & $-62.41$ & 2000-08-16 12:03:46 & 2000-08-16 17:37:28 & 11 & 11 & \phantom{0}5\\
0110011401 & 220.51 & $-62.22$ & 2000-08-16 20:18:03 & 2000-08-17 01:36:33 & \phantom{0}9 & 10 & \phantom{0}6\\
0110010501 & 220.14 & $-62.60$ & 2001-08-17 11:47:26 & 2001-08-17 16:25:47 & \phantom{0}9 & \phantom{0}7 & \phantom{0}3\\
0110012501 & 220.24 & $-62.72$ & 2003-03-04 09:46:14 & 2003-03-04 13:11:34 & \phantom{0}8 & \phantom{0}9 & \phantom{0}6\\
0208000101 & 221.26 & $-62.34$ & 2004-01-26 22:30:59 & 2004-01-27 15:12:51 & 46 & 47 & 44\\
0504810101 & 221.57 & $-62.30$ & 2007-07-28 07:45:25 & 2007-07-29 16:12:53 & 95 & 98 & 76\\
0504810601 & 221.57 & $-62.30$ & 2007-07-30 15:45:31 & 2007-07-31 01:52:21 & 18 & 19 & 16\\
0504810201 & 221.40 & $-62.47$ & 2007-08-13 17:42:42 & 2007-08-14 14:37:56 & 50 & 55 & 37\\
0504810401 & 220.15 & $-62.60$ & 2007-08-23 03:17:26 & 2007-08-23 23:33:12 & 62 & 62 & 50\\
0504810301 & 220.50 & $-62.22$ & 2007-08-25 02:49:31 & 2007-08-25 23:34:05 & 61 & 62 & 44\\
0724940101 & 221.22 & $-62.68$ & 2014-01-27 18:48:07 & 2014-01-29 00:03:07 & 96 & 95 & 77
\enddata
\tablecomments{All exposure times correspond to the flare-filtered exposure.}
\label{tab_ex}
\end{deluxetable*}
Studies of the ISM in SNR environments have improved
our understanding of SNR evolution, shock heating/ionization, acceleration of CRs, and high-energy radiation \citep[e.g.,][]{2012ApJ...746...82F,2012ApJ...744...71I,2013ApJ...768..179Y,2013ApJ...778...59S}. In RCW 86, however, deep studies of the ISM have not
been reported. \cite{2011ApJ...741...96W} revealed the interstellar dust distribution of RCW 86 using the $Spitzer$ $Space$ $Telescope$ and the $Wide$-$Field$
$Infrared$ $Survey$ $Explorer$ ($WISE$). They noted the distribution of thin
dust filaments in the east region, which appear to trace the SNR shockwaves. In contrast, neutral atomic gas (H{\sc i}) forms a cavity-like structure at radial velocities of approximately $-34$ km s$^{-1}$ \citep{2016ApJ...819...98A,2016BAAA...58..212D}, although the detailed velocity structure and its
relationship with the SNR shockwaves have not been presented. In
particular, observations of molecular clouds traced by carbon monoxide (CO)
emission have not been attempted to date. In proper-motion measurements, the shock velocity was found to differ from region to region perhaps owing to the inhomogeneous interstellar environment and/or different stages of interaction with the surroundings \citep[e.g.,][]{1997A&A...328..628V,2013MNRAS.435..910H}. The highest shock velocity ($\sim3,000$ km s$^{-1}$) occurs in the northeast region, which mainly comprises non-thermal X-rays \citep{2013MNRAS.435..910H,2016ApJ...820L...3Y}. Conversely, the lowest shock velocities (500--900 km s$^{-1}$) are observed in the southwest and northwest regions, which strongly emit thermal X-rays \citep{1990ApJ...358L..13L,2001ApJ...547..995G}. Moreover, according to \cite{2002ApJ...581.1116R} and \cite{2011PASJ...63S.837Y}, interactions between the dense clouds and SNR shockwaves manifest as reverse or secondary shocks in some parts of the shell.
In the present study, we aim to identify the interstellar molecular/atomic gas distribution associated with RCW 86 and to compare it with the thermal/non-thermal X-rays, radio continuum, and H$\alpha$ datasets. We seek the physical connection between the surrounding gas components, and pursue the origin of the thermal/non-thermal X-rays, the shock properties, and the progenitor system of the SNR. In a subsequent paper, we will compare the shock-interacting gas and TeV $\gamma$-rays (Sano et al. 2017, in preparation). Section 2 presents the observations and data reduction of NANTEN2 CO, ATCA $\&$ Parkes H{\sc i}, $XMM$-$Newton$ X-rays, and the datasets at the other wavelengths. Section 3 comprises four subsections. Subsection 3.1 overviews the CO, H{\sc i}, and X-ray distributions; subsections 3.2 and 3.3 present a
detailed analysis of the distributions and physical conditions of CO and H{\sc i}, respectively; and subsection 3.4 presents a detailed comparison between these and the X-ray distributions. Discussion and conclusions are presented in Sections 4 and 5, respectively.
\section{OBSERVATIONS $\&$ DATA REDUCTIONS}
\subsection{CO}
We performed $^{12}$CO($J$ = 1--0, 2--1) observations with the NANTEN2 4 m millimeter/sub-millimeter telescope at Pampa la Bola in northern Chile (4,865 m above sea level).
Observations of the $^{12}$CO($J$ = 1--0) emission line at 115 GHz were conducted from December 2012 to January 2013. The front end was a 4-K cooled superconductor-insulator-superconductor (SIS) mixer receiver. The double-sideband (DSB) system temperature was $\sim110$ K toward the zenith including the atmosphere. The back end was a digital Fourier transform spectrometer (DFS) with 16,384 channels of 1 GHz bandwidth, corresponding to a velocity coverage of $\sim2,600$ km s$^{-1}$. Frequency and velocity resolutions were 61 kHz and $\sim0.16$ km s$^{-1}$ ch$^{-1}$, respectively. We used the on-the-fly (OTF) mode with Nyquist sampling, and the observed area was one square degree. After convolving the datacube with a Gaussian kernel of $\sim90$ arcsec (FWHM), the typical noise level was 0.42 K ch$^{-1}$. The final beam size was $\sim180$ arcsec (FWHM). The pointing accuracy was checked every 3 hours. An offset better than $\sim$25 arcsec was achieved. The absolute intensity was calibrated by observing IRAS 16293$-$2422 [$\alpha$(J2000) = $16^{\mathrm{h}}32^{\mathrm{m}}23.3^{\mathrm{s}}$, $\delta$(J2000) = $-24{^\circ}28\arcmin39\farcs2$] \citep{2006AJ....131.2921R}.
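For reference, the smoothing step converts the quoted FWHM of the Gaussian kernel to a standard deviation via $\sigma = {\rm FWHM}/(2\sqrt{2\ln 2})$; a minimal sketch (the pixel size used in any example is an assumed illustrative value, not stated above):

```python
import math

def fwhm_to_sigma_pix(fwhm_arcsec, pix_arcsec):
    """Gaussian-kernel sigma in pixels for a given FWHM,
    sigma = FWHM / (2 sqrt(2 ln 2)), as used when convolving
    a datacube to a target beam size."""
    return fwhm_arcsec / (2.0 * math.sqrt(2.0 * math.log(2.0))) / pix_arcsec
```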
\begin{figure*}
\begin{center}
\includegraphics[width=168mm,clip]{rcw86_f02_rgb.eps}
\caption{Three-color images of the SNR RCW 86 and its surroundings. The red, blue, and green colors represent the $XMM$-$Newton$ broad-band X-rays (0.5--4.5 keV), NANTEN2 $^{12}$CO($J$ = 2--1), and the ATCA $\&$ Parkes H{\sc i}, respectively. The velocity range of CO and H{\sc i} is from $-46.0$ to $-28.0$ km s$^{-1}$. The contours indicate the H{\sc i} integrated intensity. The lowest contour level and the contour interval are 806.4 K km s$^{-1}$ ($\sim 240 \sigma$) and 33.6 K km s$^{-1}$ ($\sim 10 \sigma$), respectively. }
\label{f02}
\end{center}
\end{figure*}%
\begin{figure*}
\begin{center}
\includegraphics[width=168mm,clip]{rcw86_f03_cmyk.eps}
\caption{Maps of the (a) associated and (b) non-associated $^{12}$CO($J$ = 1--0) clouds shown by colored contours. White contours indicate the X-ray intensity in the energy band from 0.5 to 4.5 keV. The contour levels are 6.00$\times$10$^{-5}$, 1.42$\times$10$^{-4}$, 3.87$\times$10$^{-4}$, 7.95$\times$10$^{-4}$, 1.37$\times$10$^{-3}$, 2.10$\times$10$^{-3}$, and 3.00$\times$10$^{-3}$ photons s$^{-1}$ pixel$^{-1}$. The integration velocity ranges are as follows: $-39.0$ to $-34.4$ km s$^{-1}$ for CO $-37$ E (contours: lowest $\sim$1.4 K km s$^{-1}$, intervals $\sim$2.6 K km s$^{-1}$), $-34.0$ to $-35.6$ km s$^{-1}$ for CO $-35$ SE (contours: lowest $\sim$0.9 K km s$^{-1}$, intervals $\sim$0.6 K km s$^{-1}$), $-35.9$ to $-29.3$ km s$^{-1}$ for CO $-33$ S (contours: lowest $\sim$1.8 K km s$^{-1}$, intervals $\sim$2.5 K km s$^{-1}$), $-44.3$ to $-35.2$ km s$^{-1}$ for CO $-40$ NW (contours: lowest and intervals $\sim$2.1 K km s$^{-1}$), $-56.0$ to $-54.4$ km s$^{-1}$ for CO $-55$ N (contours: lowest $\sim$0.9 K km s$^{-1}$, intervals $\sim$1.6 K km s$^{-1}$), $-48.6$ to $-46.5$ km s$^{-1}$ for CO $-48$ NE (contours: lowest $\sim$0.7 K km s$^{-1}$, intervals $\sim$1.1 K km s$^{-1}$), $-57.8$ to $-55.7$ km s$^{-1}$ for CO $-57$ SW (contours: lowest $\sim$1.1 K km s$^{-1}$, intervals $\sim$1.8 K km s$^{-1}$), and $-4.9$ to $-4.1$ km s$^{-1}$ for CO $-5$ SW (contours: lowest and intervals $\sim$0.8 K km s$^{-1}$). The positions of CO peaks detected at different radial velocities (see Table \ref{tab1}) are identified by letters (A--L).}
\label{f03}
\end{center}
\end{figure*}%
Observations of the $^{12}$CO($J$ = 2--1) emission line at 230 GHz were conducted in November 2008. The front end was a 4-K cooled SIS mixer. The system temperature in DSB was $\sim120$ K toward the zenith including the atmosphere. We used an acousto-optical spectrometer with 2,048 channels of 250 MHz bandwidth corresponding to a velocity coverage of $\sim385$ km s$^{-1}$. The frequency and velocity resolutions were 250 kHz and $\sim0.38$ km s$^{-1}$ ch$^{-1}$, respectively. We used the OTF mode with Nyquist sampling, and the observed area was $\sim0.88$ square degrees. After convolving the datacube with a Gaussian kernel of $\sim45$ arcsec (FWHM), the typical one sigma noise fluctuations were less than 0.3 K ch$^{-1}$. The final smoothed beam size was $\sim90$ arcsec (FWHM). The pointing error was less than 15 arcsec, and the intensity calibration was applied by observing Oph EW4 [$\alpha$(J2000) = $16^{\mathrm{h}}26^{\mathrm{m}}21.92^{\mathrm{s}}$, $\delta$(J2000) = $-24{^\circ}25\arcmin40\farcs4$(J2000)] \citep{2005ApJ...625..194K}.
\subsection{\rm H{\sc i}}
We performed H{\sc i} observations at 1420 MHz using the Australia Telescope Compact Array (ATCA), which consists of six 22-m dishes located at Narrabri, Australia. Observations were conducted for 13 hours on March 24--25, 2002, with the ATCA in the EW 367 configuration (baselines from 46 to 367 m, or from 0.3 to 1.75 k$\lambda$, excluding the 6$^{\rm th}$ antenna). We employed the mosaicking technique, with 45 pointings covering an area of $\sim 4$ square degrees. The absolute flux density scale was determined by observing PKS B1934$-$638, which was used as the primary amplitude and bandpass calibrator. We also periodically observed PKS 1352$-$63 for gain and phase calibration. Data reduction was performed by using the MIRIAD software package \citep{1995ASPC...77..433S}. The images were retrieved using a superuniform weighting and keeping only visibilities shorter than 1.1 k$\lambda$. To include extended emission, we combined the ATCA dataset with single-dish observations performed with the Parkes 64 m telescope. The final beam size is 160 arcsec $\times$ 152 arcsec with a position angle of $-3^\circ$. The typical noise level is 1.0 K at a velocity resolution of 0.82 km s$^{-1}$. The data are identical to those published by \citet{2016ApJ...819...98A}.
\subsection{X-rays}
Twelve $XMM$-$Newton$ pointed observations of RCW~86 are available, as
summarized in Table \ref{tab_ex}. We analyzed both the EPIC-pn and EPIC-MOS
datasets by using the HEAsoft version 6.18 and the $XMM$-$Newton$ Science
Analysis System (SAS) version 15.0. We reprocessed the observation data files
following standard procedures provided by the $XMM$-$Newton$ extended source
analysis software \citep[ESAS,][]{2008A&A...478..575K}. In order to create
instrumental background-subtracted, exposure-corrected, adaptively-smoothed
images, we prepared exposure maps and quiescent particle background (QPB) images
for each observation by using the mos-/pn-filter and mos-/pn-back tasks. Then,
we combined the images after subtracting the QPB, and the combined images were
divided by the merged exposure maps. An adaptive smoothing process was also
applied to emphasize diffuse components with the pixel size of $4''$. Finally,
we obtained QPB-subtracted, exposure-corrected, adaptively-smoothed images in
the energy bands of 0.5--1.0 (soft) /1.0--2.0 (middle) / 2.0--4.5 (hard) /
0.5--4.5 (broad) keV. In this analysis, high background periods are removed and
the net exposure time is shown in Table \ref{tab_ex}.
Figure \ref{f01} shows an X-ray tricolor image of RCW 86, where red, green, and blue correspond to the 0.5--1.0 keV (soft), 1.0--2.0 keV (medium), and 2.0--4.5 keV (hard) bands, respectively. The soft and hard bands are dominated by continuum radiation from thermal plasma and by synchrotron X-rays produced by TeV CR electrons, respectively \citep{2002ApJ...581.1116R,2016ApJ...819...98A}.
In the present paper, we hereafter refer to the emission seen in the soft band as ``thermal X-rays'' and that in the hard band as ``non-thermal X-rays,'' because these energy bands are dominated by continuum radiation from thermal plasma and by non-thermal synchrotron X-rays, respectively \citep[e.g.,][]{2002ApJ...581.1116R,2016ApJ...819...98A}. Moreover, the thermal X-rays are dominated by the ISM plasma component, whose distribution differs significantly from that of the ejecta component except in the SW region \citep{2011PASJ...63S.837Y}.
\subsection{Astronomical Data at the Other Wavelengths}
H$\alpha$ and radio continuum data are used to derive the spatial distribution of the ionized gas and low-energy CR electrons. We used the H$\alpha$ and 843 MHz radio continuum data that appear in the Southern H-Alpha Sky Survey Atlas \citep[SHASSA;][]{2001PASP..113.1326G} and the Molonglo Observatory Synthesis Telescope (MOST) Supernova Remnant Catalogue \citep[MSC;][]{1996A&AS..118..329W}, in addition to the CO/H{\sc i} and X-ray data. The angular resolutions of H$\alpha$ and radio continuum are 48 arcsec and 43 arcsec, respectively.
\section{RESULTS}
\subsection{Overview of CO, H{\sc i}, and X-ray Distributions}
To determine the velocity range of the atomic and molecular gas associated with the SNR RCW 86, we carried out the following steps:
\begin{enumerate}
\item Searching by visual inspection for a good spatial correspondence between the ISM and X-ray intensities in the velocity channel distribution of CO/H{\sc i} overlaid upon the X-ray contours (see Appendix and Figure \ref{fa1});
\item Investigating the physical conditions of associated molecular clouds using the $^{12}$CO $J$ = 2--1/1--0 intensity ratio maps (see Section 3.2);
\item Exploring possible evidence of expanding motions of H{\sc i} and CO due to the SNR shockwaves and/or stellar winds from the progenitor of RCW 86 (see Section 3.3).
\end{enumerate}
This analysis led us to conclude that the gas associated with RCW 86 is most likely found at a velocity range from $-46$ to $-28$ km s$^{-1}$. The ATCA $\&$ Parkes H{\sc i} and NANTEN2 $^{12}$CO($J$ = 2--1) emissions integrated in this velocity range are displayed in green and blue, respectively, in Figure \ref{f02}, together with the $XMM$-$Newton$ X-ray image (red: 0.5--4.5 keV) of RCW 86.
Towards the north, there is an H{\sc i} intensity gradient increasing from east to west, and the most prominent features, with intensities above 1,000 K km s$^{-1}$, lie in the northwest region. The overall distribution of the H{\sc i} clouds
tends to encircle the X-ray shell-like structure. We also find that the diffuse
H{\sc i} gas, with an intensity of $\sim700$ K km s$^{-1}$, fills the interior
of the SNR shell. To the east, a large CO cloud with diffuse H{\sc i} emission
is located toward the X-ray filaments. The high angular resolution of CO
allowed us to see that the X-ray emission of the filament located around ($l$,
$b$) $\sim$ ($315\fdg7$, $-2\fdg5$) is higher where the emission of the CO cloud is lower.
The CO clouds are located not only in the east but also in the south and the northwest. Four additional CO clouds are visible toward the SNR: CO $-$57 SW, CO $-$55 N, CO $-$48 NE, and CO $-$5 SW (Figure \ref{f03}b). These are probably not interacting with the SNR because their radial velocities do not coincide with those of the associated CO clouds. Hereafter, we focus on the velocity range from $-$46 to $-$28 km s$^{-1}$, which contains the associated CO clouds.
\begin{deluxetable*}{lccccccccccc}
\tabletypesize{\scriptsize}
\tablecaption{Physical Properties of $^{12}$CO($J$ = 1--0) Clouds}
\tablewidth{0pt}
\tablehead{
\multicolumn{1}{c}{Name} & Position & $l$ & $b$ & $T_R^\ast $ & $V_{\mathrm{peak}}$ & $\Delta V_{\mathrm{LSR}}$ & $N_\mathrm{p}$(H$_2$) & Size & Mass & Associated\\
& &(deg) & (deg) & (K) & (km $\mathrm{s^{-1}}$) & (km $\mathrm{s^{-1}}$) & ($\times 10^{21}$ cm$^{-2}$) & (pc) & ($M_\sun $) & \\
\multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11)
}
\startdata
\multirow{3}{*}{CO $-37$ E} & A & 315.57 & $-2.52$ & 6.9 & $-35.5$ & 2.2 & 5.7 & \multirow{3}{*}{12.4} & \multirow{3}{*}{3699} & \multirow{3}{*}{Yes} \\
& B & 315.56 & $-2.47$ & 4.2/3.5 & $-35.7$/$-37.6$ & 2.0/2.8 & 5.8\\
& C & 315.73 & $-2.40$ & 5.6 & $-36.3$ & 2.4 & 5.3\\
\hline
CO $-35$ SE & D & 315.43 & $-2.70$ & 2.0 & $-34.7$ & 1.6 & 3.4 & 5.1 & 170 & Yes \\
\hline
\multirow{2}{*}{CO $-33$ S} & E & 315.22 & $-2.73$ & 2.7 & $-33.7$ & 4.6 & 4.6 & \multirow{2}{*}{$>$8.6} & \multirow{2}{*}{$>$1520} & \multirow{2}{*}{Yes} \\
& F & 315.17 & $-2.78$ & 3.8 & $-30.8$ & 3.0 & 4.5 & \\
\hline
\multirow{2}{*}{CO $-40$ NW} & G & 315.36 & $-1.93$ & 1.9 & $-35.8$ & 1.3 & 1.9 & \multirow{2}{*}{$>$9.0} & \multirow{2}{*}{$>$1070} & \multirow{2}{*}{Yes} \\
& H & 315.28 & $-1.92$ & 2.6 & $-43.0$ & 2.4 & 2.9 & \\
\hline
CO $-55$ N & I & 315.36 & $-2.10$ & 3.3 & $-55.1$ & 1.7 & 1.9 & & & No \\
\hline
CO $-48$ NE & J & 315.70 & $-2.23$ & 1.6 & $-47.5$ & 1.9 & 1.1 & & & No \\
\hline
CO $-57$ SW & K & 315.02 & $-2.25$ & 5.2 & $-56.7$ & 2.0 & 3.8 & & & No \\
\hline
CO\phantom{0} $-5$ SW & L & 314.93 & $-2.32$ & 4.5 & \phantom{0}$-4.5$ & 0.8 & 1.4 & & & No
\enddata
\label{tab1}
\tablecomments{Col. (1): Cloud name. Col. (2): Position name. Cols. (3--4): Position of the maximum CO intensity for each velocity component. Cols. (5--8): Physical properties of the $^{12}$CO($J$ = 1--0) emission obtained at each position. Col. (5): Peak radiation temperature, $T_R^{\ast}$. Col. (6): Peak velocity, $V_{\mathrm{peak}}$, derived from a single Gaussian fit. Col. (7): Full-width half-maximum (FWHM) line width, $\Delta V_{\mathrm{LSR}}$. Col. (8): Proton column density $N_\mathrm{p}$(H$_2$) derived from the CO integrated intensity, $W$($^{12}$CO), via $N$($\mathrm{H_2}$) = 2 $\times$ $10^{20}$ [$W$($^{12}$CO)/(K km $\mathrm{s^{-1}}$)] ($\mathrm{cm^{-2}}$) \citep{1993ApJ...416..587B}. Col. (9): Cloud size, defined as ($A$/$\pi$)$^{0.5} \times 2$, where $A$ is the total cloud surface area enclosed by the 3$\sigma$ level of the integrated intensity of each CO cloud. Col. (10): Mass of the cloud derived using the relation between the molecular hydrogen column density, $N$($\mathrm{H_2}$), and the $^{12}$CO($J$ = 1--0) integrated intensity, $W$($^{12}$CO), given in Col. (8). Col. (11): Whether the cloud is associated with the SNR.}
\end{deluxetable*}
\begin{figure*}[h]
\begin{center}
\includegraphics[width=168mm,clip]{rcw86_f04_cmyk.eps}
\caption{(a--f) Velocity channel maps of the line intensity ratio $^{12}$CO $J$ = 2--1/1--0 at an interval of 3 km s$^{-1}$ in a velocity range from $-46.0$ to $-28.0$ km s$^{-1}$. The white contours show the X-ray intensity distributions shown in Figure \ref{f03}. (a$'$, b$'$, d$'$) Enlarged views toward dashed regions in Figures \ref{f04}a, b, and d. The yellow contours indicate the MOST radio continuum at a frequency of 843 MHz. The contour levels are 2.5, 5, 10, 20, 40, 80, and 160 mJy beam$^{-1}.$}
\label{f04}
\end{center}
\end{figure*}%
Figure \ref{f03}a shows the distribution of the molecular clouds associated
with the SNR. These clouds are named CO $-40$ NW, CO $-37$ E, CO $-35$ SE, and
CO $-33$ S, and their peak radial velocities are derived from single Gaussian fits. The basic physical properties of the CO clouds are listed in Table \ref{tab1}. All physical parameters were estimated assuming a distance of 2.5 kpc. We assume that the CO clouds lie at the same distance, since they have similar radial velocities of $\sim -35$ km s$^{-1}$. There are no broad-line features with velocity widths above 10 km s$^{-1}$ in the CO spectra. In order to estimate the mass of the CO clouds, $M_\mathrm{cloud}$, we used the following equation:
\begin{eqnarray}
M_{\mathrm{cloud}} = \mu m_{\mathrm{H}} \sum_{i} [D^2 \Omega N_i(\mathrm{H}_2)],
\label{eq1}
\end{eqnarray}
where $\mu$ is the mean molecular weight, $m_{\mathrm{H}}$ is the mass of atomic hydrogen, $D$ is the distance to the CO cloud, $\Omega$ is the solid angle of a square pixel, and $N_i(\mathrm{H}_2)$ is the molecular hydrogen column density of pixel $i$ in the Galactic longitude--latitude plane. We used $\mu = 2.8$ to account for a helium abundance of 20\%. The hydrogen column density $N(\mathrm{H}_2)$ is derived using the relationship
\begin{eqnarray}
X = N(\mathrm{H}_2) / W(^{12}\mathrm{CO}),
\label{eq2}
\end{eqnarray}
where $X$ is the CO-to-H$_2$ conversion factor (X-factor) in units of cm$^{-2}$ (K km s$^{-1}$)$^{-1}$. We adopted $X$ = 2.0 $\times$ 10$^{20}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$ in the present paper \citep{1993ApJ...416..587B}. We estimate the total mass of the CO clouds to be at least $\sim$6,500 $M_{\odot}$.
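As an illustrative numerical sketch (not the analysis pipeline used in this work), equations (\ref{eq1}) and (\ref{eq2}) can be evaluated as follows; the cloud map, pixel size, and intensities below are hypothetical placeholders.

```python
import numpy as np

# Physical constants (cgs)
M_H = 1.6738e-24   # mass of atomic hydrogen [g]
M_SUN = 1.989e33   # solar mass [g]
PC = 3.0857e18     # parsec [cm]

def cloud_mass(w_co, distance_pc, pixel_arcsec, x_factor=2.0e20, mu=2.8):
    """Cloud mass [Msun] from a map of 12CO(J=1-0) integrated intensity.

    w_co         : 2-D array of W(12CO) per pixel [K km/s]
    distance_pc  : distance to the cloud [pc]
    pixel_arcsec : angular pixel size [arcsec]
    x_factor     : X-factor [cm^-2 (K km/s)^-1], eq. (2)
    mu           : mean molecular weight (2.8 accounts for He)
    """
    n_h2 = x_factor * np.asarray(w_co)      # N(H2) per pixel [cm^-2]
    omega = (pixel_arcsec / 206265.0) ** 2  # pixel solid angle [sr]
    area = omega * (distance_pc * PC) ** 2  # physical pixel area [cm^2]
    return mu * M_H * area * n_h2.sum() / M_SUN  # eq. (1)

# Hypothetical cloud: uniform 10 K km/s over 20x20 pixels at 2.5 kpc
w = np.full((20, 20), 10.0)
mass = cloud_mass(w, distance_pc=2500.0, pixel_arcsec=30.0)
```

Since the mass scales as $D^2$, a 10\% uncertainty in the adopted distance propagates to a $\sim$20\% uncertainty in the derived mass.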
\begin{figure*}
\begin{center}
\includegraphics[width=168mm,clip]{rcw86_f05_cmyk.eps}
\caption{(a) Velocity-Latitude diagram of H{\sc i}. The integration range in Galactic longitude is from $315\fdg48$ to $315\fdg56$, as shown in Figure \ref{f02}. Black contours indicate the intensity distribution of $^{12}$CO($J$ = 2--1). The lowest contour level and intervals are 0.03 K degree and 0.02 K degree, respectively. Dashed circle and vertical solid lines show an asymmetrically expanding spherical shell and the velocity integration range of Figure \ref{f02}. (b) H{\sc i} spectra at ($l$, $b$) = ($315\fdg42$, $-2\fdg45$). Velocity range and vertical lines are the same as Figure \ref{f05}a.}
\label{f05}
\end{center}
\end{figure*}%
\subsection{Physical Conditions of Molecular Gas}
In order to investigate the physical conditions of the associated CO clouds, we
calculated a line intensity ratio map using the $^{12}$CO($J$ =
2--1) and $^{12}$CO($J$ = 1--0) emission lines. The intensity ratio traces
the degree of rotational excitation of the molecules, which reflects the gas
density and/or temperature. Both datasets were smoothed to
an angular resolution of $\sim$180 arcsec (FWHM) and binned to 1 km s$^{-1}$
velocity resolution. Only data points above
the $3\sigma$ noise level in both lines were used in the analysis.
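A minimal sketch of this masking procedure, assuming maps already smoothed to a common resolution (the array values and noise levels below are illustrative, not the survey data):

```python
import numpy as np

def ratio_map(t21, t10, sigma21, sigma10, nsigma=3.0):
    """12CO J=2-1/1-0 intensity ratio, keeping only pixels detected
    above nsigma in BOTH lines (all other pixels are set to NaN).

    t21, t10          : intensity maps on a common grid/resolution [K]
    sigma21, sigma10  : 1-sigma noise levels of each map [K]
    """
    t21 = np.asarray(t21, dtype=float)
    t10 = np.asarray(t10, dtype=float)
    good = (t21 > nsigma * sigma21) & (t10 > nsigma * sigma10)
    out = np.full(t21.shape, np.nan)
    out[good] = t21[good] / t10[good]
    return out

# Toy example: one well-detected pixel (ratio 0.9) and one below 3 sigma
r = ratio_map([[4.5, 0.2]], [[5.0, 0.3]], sigma21=0.1, sigma10=0.1)
```

Masking both lines jointly ensures that low or high ratios cannot be produced by noise in either the numerator or the denominator.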
Figure \ref{f04} shows the velocity channel distributions of the line intensity
ratio $^{12}$CO $J$ = 2--1/1--0 every 3 km s$^{-1}$. We found that part of the
CO $-37$ E cloud shows an intensity ratio significantly higher than 0.8 (Figure \ref{f04}c), while the region in
the immediate vicinity of the cloud shows values smaller than 0.6 (Figures \ref{f04}c, \ref{f04}d, and \ref{f04}e). This may be
due to an external influence that affects only the surfaces of
the clouds, since an intensity ratio of $<$ 0.6 is typical
of dark molecular clouds in the Milky Way
without extra heating \citep[e.g.,][]{1997ApJ...486..276S}. In addition to the
CO $-37$ E cloud, we note that the edges of the CO $-40$ NW, CO $-35$ SE, and CO
$-33$ S clouds also have intensity ratios
higher than 0.8. Figures \ref{f04}a$'$, b$'$, and d$'$ show the line intensity ratio maps toward these clouds superposed with
the same radio continuum contours as in Figure \ref{f01}. The regions with intensity ratios higher than 0.8 are located along the radio shell of RCW 86. This is unlikely to be
due to stellar feedback, since there are
no $IRAS$/$AKARI$ infrared point sources or OB-type stars in these regions \citep[e.g.,][]{1969AJ.....74..879W,1988iras....7.....H,2010A&A...514A...1I}. Therefore, the enhanced ratio indicates shock heating/compression due to the forward shock and/or stellar winds from the progenitor of RCW 86, which supports the association between the SNR and the CO clouds.
\begin{figure*}
\begin{center}
\includegraphics[width=166mm,clip]{rcw86_f06_cmyk.eps}
\caption{Top left: $XMM$-$Newton$ X-ray image in the energy band of 0.5--1.0 keV overlaid with the H$\alpha$ intensity contours (lowest: 500 dR, interval: 250 dR) taken from SHASSA \citep{2001PASP..113.1326G}. Other panels: Distributions of H{\sc i} and $^{12}$CO($J$ = 2--1) obtained with ATCA $\&$ Parkes and NANTEN2 (rainbow scale) superposed with the $XMM$-$Newton$ X-ray contours in the energy band of 0.5--1.0 keV. The three regions toward the north, east, and southwest are shown as insets in the X-ray image in the top left panel. The velocity range spans from $-32$ to $-29$ km s$^{-1}$ in the north; from $-36$ to $-34$ km s$^{-1}$ in the east; and from $-42$ to $-28$ km s$^{-1}$ in the southwest. The contour levels of the X-rays are 6.00$\times$10$^{-5}$, 1.42$\times$10$^{-4}$, 3.87$\times$10$^{-4}$, 7.95$\times$10$^{-4}$, 1.37$\times$10$^{-3}$, 2.10$\times$10$^{-3}$, and 3.00$\times$10$^{-3}$ photons s$^{-1}$ pixel$^{-1}$ for the north and the southwest; 2.50$\times$10$^{-5}$, 3.54$\times$10$^{-5}$, 6.67$\times$10$^{-5}$, 1.19$\times$10$^{-4}$, 1.92$\times$10$^{-4}$, 2.85$\times$10$^{-4}$, and 4.00$\times$10$^{-4}$ photons s$^{-1}$ pixel$^{-1}$ for the east.}
\label{f06}
\end{center}
\end{figure*}%
\subsection{Expanding Structure and Physical Properties of {\rm H}{\sc i} and CO}
Figure \ref{f05}a shows the H{\sc i} velocity-latitude diagram. The integration
range in Galactic longitude is from $315\fdg48$ to $315\fdg56$, as shown in
Figure \ref{f02}. We found an H{\sc i} cavity-like structure in the radial velocity range from $-46$ to $-28$ km s$^{-1}$, whose extent in Galactic latitude ($\sim40$ arcmin, or $\sim30$ pc at a distance of 2.5 kpc) is similar to that of RCW 86.
The large velocity span involved, nearly 20 km s$^{-1}$, cannot be explained by Galactic rotation. We suggest that this feature represents an expanding structure driven by the stellar feedback of the progenitor of RCW~86.
The H{\sc i} expanding motion is also seen in the velocity channel distributions from $-45$ to $-25$ km s$^{-1}$ (see Appendix
Figure \ref{fa1}) and in the H{\sc i} line profile in Figure \ref{f05}b. We also show the $^{12}$CO($J$ = 2--1) contours in black. At $b > -2\fdg5$, the CO cloud spans velocities from $-40$ to $-35$ km s$^{-1}$, whereas at $b < -2\fdg5$ it spans
from $-37$ to $-32$ km s$^{-1}$.
\begin{figure*}
\begin{center}
\includegraphics[width=174mm,clip]{rcw86_f07_cmyk.eps}
\caption{Radial profiles of the atomic proton column density $N_{\mathrm{p}}$(H{\sc i}) (gray filled areas), thermal X-rays (red), and H$\alpha$ emission (light blue) for each rectangular region shown in Figure \ref{f06}. Black, red, and light-blue dashed lines indicate the intensity peaks of $N_{\mathrm{p}}$(H{\sc i}), thermal X-rays, and H$\alpha$, respectively.}
\label{f07}
\end{center}
\end{figure*}%
The bright region of the H{\sc i} image shifts toward the center of the SNR as the velocity increases from $-45$ to $-25$ km s$^{-1}$. We interpret the H{\sc i} components at $-46$ km s$^{-1}$ and $-28$ km s$^{-1}$ as the blue- and red-shifted sides of the expanding H{\sc i} wall, respectively. We note that the H{\sc i} intensity
of the red-shifted side is approximately twice that of the blue-shifted side. If the emission is optically thin, the H{\sc i} intensity
is proportional to the column density, and hence to the mass.
Taking into account this inhomogeneous gas distribution, the central velocity, $V_\mathrm{center}$, and the expansion velocity, $\Delta V$, were estimated to be $V_\mathrm{center} \sim -35$ km s$^{-1}$ and $\Delta V \sim 7$--11 km s$^{-1}$, respectively. The central velocity corresponds to a kinematic distance of $\sim 2.4 \pm 0.2$ kpc, adopting
the Galactic rotation curve model of \citet{1993A&A...275...67B}. The error was derived from the uncertainty in the central velocity intrinsic to this method.
The estimated distance is consistent with previous studies \citep[e.g.,][]{1969AJ.....74..879W,1996A&A...315..243R,2013MNRAS.435..910H,2016ApJ...819...98A}. The total mass and mean density of the neutral atomic gas are estimated to be $\sim2\times10^4$ $M_{\odot}$ and $\sim70$ cm$^{-3}$, respectively, assuming a shell radius of $\sim15$ pc and a shell thickness of $\sim5$ pc \citep[cf.][]{2016arXiv160104461H}. H{\sc i} gas is generally assumed to be optically thin (optical depth $\tau \ll 1$), with a column density $N_\mathrm{p}$(H{\sc i})$'$ given by \citep[e.g.,][]{1990ARA&A..28..215D}:
\begin{eqnarray}
N_\mathrm{p}(\mathrm{H}{\textsc{i}})'= 1.823 \times 10^{18} \int T_\mathrm{L}(V) dV (\mathrm{cm}^{-2}),
\label{eq3}
\end{eqnarray}
where $T_\mathrm{L}(V)$ is the observed H{\sc i} brightness temperature in units of K. On the other hand, according to \cite{2015ApJ...798....6F}, 85\% of the H{\sc i} gas in the Milky Way is optically thick ($\tau \sim 0.5$--3), and the average column density is approximately 2--2.5 times higher than that derived under the optically thin assumption described by equation (\ref{eq3}). Subsequently, the authors established a more accurate relationship that takes a dust growth model into account (Fukui et al. 2017, in preparation). Therefore, we used the following relationship to calculate the ``true'' H{\sc i} column density, $N_\mathrm{p}(\mathrm{H}{\textsc{i}})$, instead of equation (\ref{eq3}):
\begin{eqnarray}
N_\mathrm{p}(\mathrm{H}{\textsc{i}}) = S \times N_\mathrm{p}(\mathrm{H}{\textsc{i}})' (\mathrm{cm}^{-2}),
\label{eq4}
\end{eqnarray}
where $S$ is the conversion factor from $N_\mathrm{p}(\mathrm{H}{\textsc{i}})'$ to $N_\mathrm{p}(\mathrm{H}{\textsc{i}})$. In the region around RCW 86, the conversion factor $S$ is estimated to be 2.3. Unless otherwise noted, we used equation (\ref{eq4}) with $S$ = 2.3 to calculate the H{\sc i} column density in this article. In the SNR RCW 86, $N_\mathrm{p}(\mathrm{H}{\textsc{i}})$ is determined to within $\sim8\%$, while the integrated intensity of H{\sc i} varies from 600 to 1,000 K km s$^{-1}$.
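Equations (\ref{eq3}) and (\ref{eq4}) combine into a one-line conversion from a brightness-temperature spectrum to the corrected column density; a sketch with a hypothetical flat spectrum follows (the spectrum and channel width are illustrative).

```python
import numpy as np

def hi_column_density(t_l, dv_kms, scale=2.3):
    """'True' H I column density [cm^-2] from a brightness-temperature
    spectrum: optically thin relation (eq. 3) times the optical-depth
    correction factor S (eq. 4; S = 2.3 around RCW 86).

    t_l    : array of H I brightness temperatures [K]
    dv_kms : velocity channel width [km/s]
    """
    w_hi = np.sum(t_l) * dv_kms        # integrated intensity W(HI) [K km/s]
    n_thin = 1.823e18 * w_hi           # optically thin column, eq. (3)
    return scale * n_thin              # corrected column, eq. (4)

# Hypothetical flat 50 K profile over 16 km/s -> W(HI) = 800 K km/s
spec = np.full(20, 50.0)
n_p = hi_column_density(spec, dv_kms=0.8)
```

For $W$(H{\sc i}) in the observed 600--1,000 K km s$^{-1}$ range, this yields corrected columns of roughly 2.5--4 $\times 10^{21}$ cm$^{-2}$.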
\subsection{Detailed Comparison with X-rays}
In order to establish a more detailed correspondence between the ISM and
X-ray filaments in the velocity range from $-46$ km s$^{-1}$ to $-28$ km s$^{-1}$, we compare the integrated CO/H{\sc i} intensity map with the thermal and non-thermal X-rays.
Figure \ref{f06} shows the intensity distributions of the thermal X-rays, H{\sc i}, and CO. The H$\alpha$ contours at 500 dR or higher are also shown in the upper left panel of Figure \ref{f06}. We focused on the eastern, northern, and southwestern regions, where the thermal X-rays show filamentary distributions. In the eastern region, the thermal X-ray filaments are distributed along the CO $-$37 E cloud. The X-ray distribution cannot be explained by interstellar photoelectric absorption of the low-energy X-rays, because the thermal X-rays are not superposed on the intensity peak of the CO cloud. We also found that the X-ray filament complex TX-E1, ($l$, $b$) $\sim$ ($315\fdg68$, $-2\fdg46$), is slightly aligned with the clumpy CO structure, while another filament complex, TX-E2, ($l$, $b$) $\sim$ ($315\fdg60$, $-2\fdg55$), is less well correlated with the CO cloud. This trend suggests that the degree of interaction between the SNR shocks and the CO cloud differs between the two regions. In the northern and southwestern regions, the distribution of the thermal X-rays shows a good spatial correlation with that of the H{\sc i} cavity wall at a scale of $\sim1$ pc, where the H{\sc i} intensity increases significantly outwards from the SNR.
\begin{deluxetable*}{lccccccccc}
\tabletypesize{\scriptsize}
\tablecaption{Results of the Projected Profile towards the Thermal X-ray Peaks}
\tablewidth{0pt}
\tablehead{
&\multicolumn{3}{c}{Thermal X-rays} & & \multicolumn{2}{c}{Separation}\\
\cline{2-4}\cline{6-7}
\multicolumn{1}{c}{Name} & $l$ & $b$ & Peak Intensity & $<$$N_\mathrm{p}$(H{\sc i})$>$ & H{\sc i} Peak & H$\alpha$ Peak\\
& (deg) & (deg) & ($\times10^{-4}$ counts s$^{-1}$ pixel$^{-1}$) & ($\times10^{21}$ cm$^{-2}$) & (arcsec) & (arcsec) \\
\multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6) & (7)}
\startdata
TX-N1 & 315.58 & $-2.11$ & $1.30 \pm 0.07$ & $0.92 \pm 0.05$ & $-48$ & ------ \\
\hline
TX-N2 & 315.43 & $-2.06$ & $3.79 \pm 0.11$ & $0.96 \pm 0.07$ &$+32$ & 16 \\
\hline
TX-N3 & 315.23 & $-2.16$ & $3.94 \pm 0.07$ & $1.05 \pm 0.05$ &$-48$ & ------ \\
\hline
TX-SW1 & 315.14 & $-2.32$ & $12.6\phantom{0} \pm 0.3$\phantom{0}\phantom{0} & $3.03 \pm 0.15$ &$-304$\phantom{0} & 16 \\
\hline
TX-SW2 & 315.12 & $-2.47$ & $20.6\phantom{0} \pm 0.8$\phantom{0}\phantom{0}& $3.29 \pm 0.06$ & $-16$ & \phantom{0}0 \\
\hline
TX-SW3 & 315.24 & $-2.51$ & $\phantom{0}4.8\phantom{0} \pm 0.2\phantom{0}\phantom{0}$ & \multirow{2}{*}{$3.24 \pm 0.08$} & $+144$\phantom{0} & ------ \\
TX-SW4 & 315.22 & $-2.57$ & $\phantom{0}6.0\phantom{0} \pm 0.2\phantom{0}\phantom{0}$ & & $-68$ & ------
\enddata
\label{tab2}
\tablecomments{Col. (1): X-ray peak name. Cols. (2--4): Physical properties of the thermal X-rays. Cols. (2--3): Position of the X-ray peak. Col. (4): Peak intensity of the X-ray. Col. (5): Mean H{\sc i} column density, $<$$N_\mathrm{p}$(H{\sc i})$>$, within each region shown by Figure \ref{f06}. Col. (6): Separation between each intensity peak of the X-ray and H{\sc i} emissions. Col. (7): Separation between each intensity peak of the X-ray and H$\alpha$ emissions.}
\end{deluxetable*}
Figure \ref{f07} shows the radial profiles of the proton column density $N_\mathrm{p}(\mathrm{H}{\textsc{i}})$ (gray filled areas) and the thermal X-ray intensity (red) for each dashed rectangle, oriented perpendicular to the shell as shown in Figure \ref{f06}. Each region has a size of $670'' \times 160''$, corresponding to 8 pc $\times$ 2 pc, and is centered on an X-ray filament (see Table \ref{tab2}). We defined the origin of each radial profile as the position of the maximum X-ray intensity along the projected distance. Positive and negative values correspond to the outer and inner sides of the SNR shell, respectively. The regions are spaced every $\sim3$ pc in the azimuthal direction, and all of them cross local X-ray peaks. We also added the H$\alpha$ distribution (light blue) in the TX-N2, TX-SW1, and TX-SW2 regions, which have H$\alpha$ fluxes of 500 dR or higher. The intensity scales of the thermal X-rays and H$\alpha$ are normalized by their maximum values, and the positions of the intensity peaks are indicated by the vertical dashed lines. We find that the positions of the H{\sc i} intensity peaks correspond well with those of the X-rays and H$\alpha$, except in TX-SW1. In order to evaluate this trend quantitatively, we measured the positions of the intensity peaks on the radial profiles. Table \ref{tab2} summarizes the radial profile towards each thermal X-ray peak. We defined the separation from the X-ray intensity peak to the H{\sc i} or H$\alpha$ intensity peak in the radial distribution such that a positive (negative) value
implies that the X-ray peak lies to the left (right) of the other peak in the diagram. The separations between the thermal X-ray peak and the H{\sc i}/H$\alpha$ peaks are smaller than the beam size of the H{\sc i} data, $\sim156$ arcsec, except for the H{\sc i} peak of TX-SW1.
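Measuring such a separation amounts to comparing the peak positions of two profiles sampled on a common offset axis; a sketch with synthetic Gaussian profiles follows (the widths, offsets, and sampling below are hypothetical, not the measured profiles).

```python
import numpy as np

def peak_separation(offset, xray, other):
    """Separation (same units as offset) between the peak of an X-ray
    radial profile and that of another tracer (H I or H-alpha) sampled
    on a common offset axis.  A positive value means the X-ray peak
    lies at smaller offset than the other tracer's peak."""
    dx = offset[np.argmax(xray)]    # offset of the X-ray peak
    do = offset[np.argmax(other)]   # offset of the other tracer's peak
    return do - dx

# Synthetic profiles on a 16-arcsec grid: H I peaks 32 arcsec outside X-ray
off = np.linspace(-400.0, 400.0, 51)                 # arcsec
xray = np.exp(-(off - 0.0) ** 2 / (2 * 60.0 ** 2))
hi = np.exp(-(off - 32.0) ** 2 / (2 * 80.0 ** 2))
sep = peak_separation(off, xray, hi)
```

In practice the measurable separation is limited by the sampling and by the beam sizes of the respective datasets.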
We also estimated the mean H{\sc i} column density $<$$N_\mathrm{p}(\mathrm{H}{\textsc{i}})$$>$ within each rectangle. The double-logarithmic plot in Figure \ref{f08} shows the correlation
between $<$$N_\mathrm{p}(\mathrm{H}{\textsc{i}})$$>$ and the
peak intensity of the thermal X-rays. The solid line shows the linear regression
from a least-squares fit, with a correlation coefficient of $\sim0.76$. We conclude that the thermal X-ray intensity increases roughly as a power law with the column density of the neutral atomic gas at a pc scale.
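A linear regression in the double-logarithmic plane is equivalent to a power-law fit; a sketch is given below with synthetic sample values (not the measurements of Table \ref{tab2}).

```python
import numpy as np

def loglog_fit(n_p, x_peak):
    """Least-squares linear fit between column density and X-ray peak
    intensity in log-log space, i.e. a power-law fit.

    Returns (slope, intercept, correlation coefficient of the logs)."""
    lx, ly = np.log10(n_p), np.log10(x_peak)
    slope, intercept = np.polyfit(lx, ly, 1)
    r = np.corrcoef(lx, ly)[0, 1]
    return slope, intercept, r

# Synthetic example: an exact power law I = 1e-25 * N gives r = 1
n = np.array([0.9, 1.0, 3.0, 3.3]) * 1e21
slope, icpt, r = loglog_fit(n, 1e-25 * n)
```

The fitted slope gives the power-law index, and the correlation coefficient of the logs quantifies how well a single power law describes the sample.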
\begin{figure}
\begin{center}
\includegraphics[width=87mm,clip]{rcw86_f08_cmyk.eps}
\caption{Correlation plot between the averaged H{\sc i} column density $<$$N_\mathrm{p}$(H{\sc i})$>$, and the peak intensity of thermal X-rays. The error bars are also indicated. The solid line shows the linear regression of the double logarithm plot applying a least squares fit, where the correlation coefficient is $\sim 0.76$.}
\label{f08}
\end{center}
\end{figure}%
Figure \ref{f09} shows the intensity distributions of the non-thermal X-rays and
H{\sc i}. We focused on the northern, southwestern, and northeastern regions,
where the non-thermal X-rays are prominent. In the northern and southwestern regions, the non-thermal X-ray filaments are spatially well correlated with the H{\sc i} bright rim at a pc scale, as are the thermal X-rays in Figure \ref{f07}. In contrast, the X-ray peaks NTX-NE1 and NTX-SW5 are located inside the bright H{\sc i} wall, although the shape of the NTX-NE1 filament roughly follows the H{\sc i} distribution. In addition, we find
that the non-thermal X-ray complex NTX-NE4, ($l$, $b$) $\sim$ ($315\fdg57$, $-2\fdg24$), is located inwards with respect to NTX-NE1, while the NTX-SW6 complex, ($l$, $b$) $\sim$ ($315\fdg44$, $-2\fdg66$), lies outwards from NTX-SW5.
\begin{figure*}
\begin{center}
\includegraphics[width=166mm,clip]{rcw86_f09_cmyk.eps}
\caption{Top left: $XMM$-$Newton$ X-ray image in the energy band of 2.0--4.5 keV overlaid with MOST radio continuum contours as shown in Figure \ref{f01}. Other three panels: distributions of H{\sc i} obtained with ATCA $\&$ Parkes (rainbow scale) superposed with the $XMM$-$Newton$ X-ray contours in the energy band of 2.0--4.5 keV. The three regions toward the north, the northeast, and the southwest are shown as insets in the X-ray image in the top left panel. The velocity range spans from $-32$ to $-29$ km s$^{-1}$ in the north; from $-38$ to $-33$ km s$^{-1}$ in the northeast; and from $-42$ to $-28$ km s$^{-1}$ in the southwest. The contour levels of the X-rays are 8.0$\times$10$^{-8}$, 1.7$\times$10$^{-5}$, 4.4$\times$10$^{-5}$, 8.8$\times$10$^{-5}$, and 1.5$\times$10$^{-4}$ photons s$^{-1}$ pixel$^{-1}$.}
\label{f09}
\end{center}
\end{figure*}%
We analyzed each non-thermal X-ray peak in a manner similar to the thermal X-ray case. Figure \ref{f10} shows the radial profiles of the proton column density, $N_\mathrm{p}(\mathrm{H}{\textsc{i}})$ (gray filled areas), and the non-thermal X-ray intensity (green) for each rectangle shown in Figure \ref{f09}. Here, we also considered the molecular hydrogen column density, $N(\mathrm{H}_2)$, toward the NTX-NE2 and NTX-NE3 regions, where there is a significant amount of molecular mass. We then estimated the total proton column density, $N_\mathrm{p}$(H$_2 + $H{\sc i}), by
\begin{eqnarray}
N_\mathrm{p}(\mathrm{H}_2 + \mathrm{H}{\textsc{i}}) = 2 \times N(\mathrm{H}_2) + N_\mathrm{p}(\mathrm{H}{\textsc{i}}),
\label{eq5}
\end{eqnarray}
\begin{figure*}
\begin{center}
\includegraphics[width=174mm,clip]{rcw86_f10_cmyk.eps}
\caption{Radial profiles of the proton column density (black), non-thermal X-rays (green), and radio continuum (blue) for each rectangular region shown in Figure \ref{f09}. Gray and black filled areas represent the atomic proton column density, $N_{\mathrm{p}}$(H{\sc i}), and the molecular proton column density, $N_{\mathrm{p}}$(H$_2$), respectively. Green and blue dashed lines indicate the intensity peaks of the non-thermal X-rays and the radio continuum, respectively.}
\label{f10}
\end{center}
\end{figure*}%
We also added the radio continuum distribution (blue) in all the regions. The difference between the distributions of the non-thermal X-rays and the radio continuum indicates the difference in energy of the CR electrons responsible for each. The X-ray and radio peaks are located around the H{\sc i} intensity peaks, but we did not find a specific trend among them, such as the correlation seen between the thermal X-rays and H{\sc i}. The positions of both the X-ray and radio intensity peaks are indicated by vertical dashed lines. We note that the relative radial positions of the X-ray and radio peaks show significant offsets from each other. Specifically, the non-thermal X-ray intensity peaks NTX-N1--N4, NTX-NE1, and NTX-SW2 are positioned farther outside than the nearest radio peaks, while the NTX-NE2--3, NTX-SW1, and NTX-SW3--5 peaks are located farther inside than the nearest radio peaks. Table \ref{tab3} quantifies this trend. Positive (negative) values of the separation correspond to cases in which the non-thermal X-ray intensity peak is positioned inwards (outwards) of the nearest radio peak. Most of the separations are larger than half the beam size of both the radio and X-ray data ($\Delta \theta \sim 20$ arcsec). Considering the extended radial distributions of the radio peaks, all the separations are regarded as significant. Figure \ref{f11} shows a logarithmic correlation plot between the averaged total proton column density $<$$N_\mathrm{p}(\mathrm{H}_2 + \mathrm{H}{\textsc{i}})$$>$ and the peak intensity of the non-thermal X-rays. Filled circles represent positive separations, while open triangles represent negative ones. The vertical dashed line indicates $<$$N_\mathrm{p}(\mathrm{H}_2 + \mathrm{H}{\textsc{i}})$$>$ = $1 \times 10^{21}$ cm$^{-2}$. 
In contrast to the thermal X-ray case, there is no significant correlation between the non-thermal X-rays and $<$$N_\mathrm{p}(\mathrm{H}_2 + \mathrm{H}{\textsc{i}})$$>$: a least-squares fit to the double-logarithmic plot yields a correlation coefficient of only $\sim0.03$. We find that, with the sole exception of NTX-SW2, the negative separations are clustered in the low-density region ($\sim1 \times 10^{21}$ cm$^{-2}$), whereas the positive separations are found only in regions of density higher than $\sim3 \times 10^{21}$ cm$^{-2}$.
\begin{deluxetable*}{lcccccccc}[h]
\tabletypesize{\scriptsize}
\tablecaption{Results of the Projected Profile towards the Non-Thermal X-ray Peaks}
\tablewidth{0pt}
\tablehead{
&\multicolumn{3}{c}{Non-Thermal X-rays} & & Separation\\
\cline{2-4}\cline{6-6}
\multicolumn{1}{c}{Name} & $l$ & $b$ & Peak Intensity & $<$$N_\mathrm{p}$(H$_2+$H{\sc i})$>$ & Radio Peak \\
& (deg) & (deg) & ($\times10^{-5}$ counts s$^{-1}$ pixel$^{-1}$) & ($\times10^{21}$ cm$^{-2}$) & (arcsec) \\
\multicolumn{1}{c}{(1)} & (2) & (3) & (4) & (5) & (6)}
\startdata
NTX-N1 & 315.53 & $-2.08$ & $2.56\phantom{0} \pm 0.11\phantom{0}$ & \multirow{2}{*}{$0.95 \pm 0.04$} & $-16$\\
NTX-N2 & 315.56 & $-2.02$ & $2.45 \phantom{0} \pm 0.06\phantom{0}$ & & $-32$\\
\hline
NTX-N3 & 315.34 & $-2.01$ & $1.22 \phantom{0} \pm 0.05\phantom{0}$ & \multirow{2}{*}{$0.96 \pm 0.03$} & $-48$\\
NTX-N4 & 315.37 & $-1.96$ & $0.401 \pm 0.009$ & & $-48$\\
\hline
NTX-NE1 & 315.72 & $-2.25$ & $9.4\phantom{0}\phantom{0} \pm 0.2\phantom{0}\phantom{0}$ & $0.94 \pm 0.07$ & $-16$\\
\hline
NTX-NE2 & 315.66 & $-2.41$ & $2.06\phantom{0} \pm 0.10\phantom{0}$ & \multirow{2}{*}{$3.6\phantom{0} \pm 1.6\phantom{0}$} & $+64$\\
NTX-NE3 & 315.70 & $-2.42$ & $0.52\phantom{0} \pm 0.02\phantom{0}$ & & $+48$\\
\hline
NTX-SW1 & 315.12 & $-2.38$ & $8.8 \phantom{0}\phantom{0}\pm 0.7\phantom{0}\phantom{0}$ & \multirow{2}{*}{$3.00 \pm 0.15$} & $+96$\\
NTX-SW2 & 315.05 & $-2.44$ & $0.490 \pm 0.008$ & & $-16$\\
\hline
NTX-SW3 & 315.17 & $-2.45$ & $13.0\phantom{0}\phantom{0} \pm 0.5$\phantom{0}\phantom{0}\phantom{0}& $3.04\pm 0.06$ & $+32$\\
\hline
NTX-SW4 & 315.25 & $-2.56$ & $16.6\phantom{0}\phantom{0} \pm 1.2$\phantom{0}\phantom{0}\phantom{0} & $3.19 \pm 0.09$ & $+16$\\
\hline
NTX-SW5 & 315.40 & $-2.51$ & $3.92\phantom{0} \pm 0.09\phantom{0}$& $2.45\pm 0.16$ & $+240$\phantom{0}
\enddata
\label{tab3}
\tablecomments{Col. (1): X-ray peak name. Cols. (2--4): Physical properties of the non-thermal X-rays. Cols. (2--3): Position of the X-ray peak. Col. (4): Peak intensity of the X-ray. Col. (5): Mean proton column density $<$$N_\mathrm{p}$(H$_2+$H{\sc i})$>$ within each region shown by Figure \ref{f09}. Col. (6): Separation between each intensity peak of the X-ray and radio continua.}
\end{deluxetable*}
\begin{figure}
\begin{center}
\includegraphics[width=87mm,clip]{rcw86_f11_cmyk.eps}
\caption{Correlation plot between the averaged proton column density $<$$N_\mathrm{p}$(H$_2 + $H{\sc i})$>$, and the peak intensity of non-thermal X-rays.
Positive and negative separations (see text) are represented by circles and triangles, respectively, where the separation is defined as the distance between the non-thermal X-rays and radio continuum intensity peaks. The vertical dashed line indicates $<$$N_\mathrm{p}$(H$_2+$H{\sc i})$>$ = $1 \times10^{21}$ cm$^{-2}$.}
\label{f11}
\end{center}
\end{figure}%
\section{DISCUSSION}
\subsection{Progenitor System of RCW 86}
There has been considerable debate on the progenitor system (CC or Type Ia) of RCW 86 since its discovery \citep{1969AJ.....74..879W,1989ApJ...337..399C,1992A&A...264..654K,1997A&A...328..628V,2000PASJ...52.1157B,2011PASJ...63S.837Y,2011ApJ...741...96W,2014MNRAS.441.3040B}. Recent multi-wavelength observations, as well as theoretical studies in the last several years, reveal that the progenitor was a Type Ia SN. \cite{2007PASJ...59S.171U} and \cite{2008PASJ...60S.123Y} found that the abundant Fe ejecta and the absence of rich O ejecta are consistent with a Type Ia SNR. \cite{2011ApJ...741...96W} argued that the H$\alpha$ filamentary distributions are created by the interaction between the SNR shocks and the neutral gas, and suggested that the interaction between the SNR shock and the ambient gas was weak \citep[e.g.,][]{1980ApJ...235..186C,1997AJ....114.2664S}. This is consistent with the accretion wind from a Type Ia progenitor
\citep[e.g.,][]{1996ApJ...470L..97H,2007ApJ...663.1269N}. A central compact stellar remnant, such as a neutron star or a pulsar wind nebula, has not yet been detected, again favoring a Type Ia origin \citep[e.g.,][]{2004ApJS..153..269K}. \cite{2011ApJ...741...96W} calculated an off-center explosion by
using a 2D hydrodynamic model, applying parameters given by the Type Ia progenitor accretion-wind model proposed by \cite{2007ApJ...662..472B}. The authors showed that the size of the SNR, the shock velocity, and the post-shock gas density are well reproduced, concluding that RCW 86 is a Type Ia explosion in an accretion-wind bubble.
In this section we discuss the progenitor system of RCW 86 based on the results of the associated interstellar gas.
\begin{figure*}
\begin{center}
\includegraphics[width=166mm,clip]{rcw86_f12_cmyk.eps}
\caption{Large-scale integrated intensity map of the $^{12}$CO($J$ = 1--0) toward RCW 86, taken with NANTEN \citep{2001PASJ...53.1003M}. The velocity range is from $-40$ to $-30$ km s$^{-1}$. The lowest contour level and contour interval of CO are 10 K km s$^{-1}$ and 4 K km s$^{-1}$, respectively. Magenta contours indicate the MOST radio continuum at a frequency of 843 MHz as shown in Figure \ref{f01}. The dashed lines represent the boundary of the CO supershell identified by \citet{2001PASJ...53.1003M}.}
\label{f12}
\end{center}
\end{figure*}%
First, we shall argue that the H{\sc i}/CO expanding structure is inconsistent with acceleration by the accretion wind of the progenitor alone. The expansion velocity of H{\sc i}, $\sim7$ km s$^{-1}$ on the red-shifted (gas-rich) side, and the total H{\sc i} mass of $2\times10^4$ $M_{\odot}$ lead to a momentum of $\sim3 \times 10^{38}$ N$\cdot$s for the H{\sc i} shell. Figure \ref{f05}a shows that the CO peak velocity is shifted by $\sim$3 km s$^{-1}$ towards the interior of the SNR. The momentum of this shifted component is only 5\% of that of the whole CO $-37$ E cloud, and the
CO kinetic energy is negligible compared to
the H{\sc i} kinetic energy. \cite{1996ApJ...470L..97H} showed that the accretion wind of a Type Ia progenitor has a typical duration of $3 \times 10^5$ yr, with a mass-loss rate of $\sim10^{-6} M_{\odot}$ yr$^{-1}$ and a wind velocity of $\sim1,000$ km s$^{-1}$. This means that the momentum released by the accretion wind amounts to $\sim6 \times 10^{35}$ N$\cdot$s, far too small to explain the observed momentum of the H{\sc i} shell. Moreover, it is difficult to explain the shell formation in terms of the SN shock waves. RCW 86 has an age of $\sim1,800$ yr, and the duration of the shock interaction with the ambient medium is too short to transfer the momentum significantly. This is consistent with the absence of wing-like emission spanning more than 10 km s$^{-1}$ in the CO spectra \citep[e.g.,][]{1998ApJ...505..286S,2013ApJ...768..179Y} and with the fact that only a thin surface layer of the CO gas is heated by the shock interaction (see also Figure \ref{f04}). The short duration of the interaction is also suggested by the X-ray spectroscopy. \cite{1997A&A...328..628V} showed that the thermal plasma in RCW 86 deviates dramatically from thermal equilibrium and noted that there is a spot of extremely short ionization timescale in the SNR. In particular, the time elapsed since the Fe ejecta were heated by the reverse shock is estimated to be $<$ 380 yr, a fourth of the SNR age \citep[e.g.,][]{2008PASJ...60S.123Y}. We suggest that the H{\sc i}/CO expanding structure could instead have been formed by the stellar winds from nearby OB stars \citep[e.g.,][]{1969AJ.....74..879W}. Figure \ref{f12} shows that RCW 86 lies at the inner edge of a molecular supershell created by multiple supernova remnants in the Galactic plane. The expansion in RCW 86 may originate in the supershell.
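For reference, the shell and wind momenta quoted above follow directly from these parameters:
\begin{eqnarray}
p_\mathrm{shell} \sim M_\mathrm{shell}\,v_\mathrm{exp} \sim (2\times10^{4}\;M_{\odot})\,(7\;\mathrm{km\;s^{-1}}) \sim 3\times10^{38}\;\mathrm{N\cdot s}, \nonumber \\
p_\mathrm{wind} \sim \dot{M}\,v_\mathrm{w}\,t_\mathrm{w} \sim (10^{-6}\;M_{\odot}\;\mathrm{yr^{-1}})\,(1,000\;\mathrm{km\;s^{-1}})\,(3\times10^{5}\;\mathrm{yr}) \sim 6\times10^{35}\;\mathrm{N\cdot s}, \nonumber
\end{eqnarray}
i.e., the wind momentum falls short of the observed shell momentum by more than two orders of magnitude.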
\begin{figure*}
\begin{center}
\includegraphics[width=180mm,clip]{rcw86_f13_cmyk.eps}
\caption{(a) Thermal X-ray image of RCW 86 in the energy range from 0.5 to 1.0 keV superposed with H{\sc i} intensity contours (light blue) toward the southwest of the SNR. The color scale is linear, in units of $10^3$ photons s$^{-1}$ pixel$^{-1}$. The H{\sc i} velocity is $V_\mathrm{LSR}$ = $-$35 km s$^{-1}$. The lowest contour level and contour interval of H{\sc i} are 54 K ($\sim 72 \sigma$) and 0.75 K ($\sim 1 \sigma$), respectively. (b) Non-thermal X-ray image in the energy range from 2.0 to 4.5 keV, superposed with H{\sc i} intensity contours. The color scale, unit, and contour levels are the same as in the left panel. (c) The same thermal X-ray image as Figure \ref{f13}a, superposed with the non-thermal X-ray contours (orange). The lowest contour level and contour interval of the non-thermal X-rays are $5 \times 10^{-5} $ photons s$^{-1}$ pixel$^{-1}$ and $2.5 \times 10^{-5} $ photons s$^{-1}$ pixel$^{-1}$, respectively.}
\label{f13}
\end{center}
\end{figure*}%
Comparison with other CC SNRs of similar ages and properties reinforces the conclusion that RCW 86 is a Type Ia SNR with a low-velocity wind. Because of their similarities,
RX J1713.7$-$3946, a CC shell SNR with an age of $\sim1,600$ years that emits bright TeV $\gamma$-rays and non-thermal X-rays \citep{2003PASJ...55L..61F,2004A&A...427..199C, 2005ApJ...631..947M}, is the best target for comparison with RCW 86. In RX J1713.7$-$3946, molecular clouds of $n$ $> 10^4$ cm$^{-3}$ remained without being swept
up by the SNR shock wave, whereas the intercloud and diffuse H{\sc i} gas were evacuated by the strong stellar winds from the massive progenitor \citep[e.g.,][]{2003PASJ...55L..61F,2005ApJ...631..947M,2010ApJ...724...59S,2013ApJ...778...59S,2013PASA...30...55M}. As a result, in RX J1713.7$-$3946 we do not see an H{\sc i} envelope around the CO clouds, and strong thermal X-rays are not detected \citep[e.g.,][]{2008PASJ...60S.131T,2012ApJ...746...82F,2015ApJ...799..175S}. In RCW 86, instead, we see diffuse H{\sc i} toward the CO clouds. In particular, we can clearly see an H{\sc i} envelope around CO $-37$ E (see Figure \ref{f02}). This suggests that the progenitor had a weaker wind than the massive progenitor of the CC SNR RX J1713.7$-$3946 and is consistent with the accretion-wind hypothesis. The CC scenario with an early B-star, nevertheless, cannot be ruled out. The thermal X-rays observed over the whole of RCW 86 suggest that a large amount of H{\sc i} gas is distributed inside the shell.
These thermal X-rays are produced by the interaction of shock waves with the pre-existing neutral and ionized gas, even though observational evidence for the interaction had not been obtained \citep[e.g.,][]{2002ApJ...581.1116R,2011PASJ...63S.837Y}. Figure \ref{f07} shows that the thermal X-ray peaks coincide with the H{\sc i} peaks, indicating that the shock waves have collided with the H{\sc i} cavity-wall and radiate the thermal X-rays. Only in TX-S1 does the H{\sc i} not coincide with the X-ray peak. This difference is explained if we assume that the H{\sc i} associated with TX-S1 is already ionized, because the X-rays peak at the H$\alpha$ peak and the electron density is estimated to be $\sim100$ cm$^{-3}$ in the South \citep{1981ApJ...243..814R}. In addition, the intensity of the thermal X-rays increases with the neutral gas density, suggesting that the gas is thermalized by the shock passage. A detailed spectral analysis of the X-rays, compared with the interstellar gas distribution, will reveal the correlation between the proton column density and the thermal X-ray intensity more clearly. Based on the considerations above, we conclude that RCW 86 is the remnant of a Type Ia explosion in
a wind bubble, and that a thorough investigation of the neutral gas is an important tool for probing the progenitor system and the origin of the thermal X-rays.
\subsection{Efficient CR Acceleration}
\cite{2015ApJ...799..175S} argued that in RX J1713.7$-$3946, the efficient CR electron acceleration up to $\sim10$ TeV currently at
work has a tight physical connection with the ambient ISM. The
authors showed that the photon index $\Gamma$ of the non-thermal (synchrotron) X-rays is small, with $\Gamma < 2.4$, in both the gas-rich and gas-poor regions, and suggested that these regions correspond to the sites of high roll-off energy of the synchrotron emission. If the synchrotron cooling is efficient, the roll-off energy $\varepsilon_0$ of the synchrotron photons is given by the following equation \citep{2007A&A...465..695Z}:
\begin{eqnarray}
\varepsilon_0 = 0.55 \times (v_\mathrm{sh}\:/\:3000 \;\mathrm{km\; s^{-1}})^2 \:\eta^{-1} \;\mathrm{(keV)},
\label{eq6}
\end{eqnarray}
where $v_\mathrm{sh}$ is the shock velocity and $\eta$ = $B^2$/$\delta B^2$ ($> 1$) is the degree of magnetic field fluctuations (the gyro-factor). The case $\eta = 1$ is called the Bohm diffusion limit and corresponds to maximum magnetic turbulence. In gas-rich/clumpy sites, the shock-cloud interaction amplifies the turbulent magnetic field around dense gas clumps and the synchrotron X-rays are enhanced \citep[e.g.,][]{2012ApJ...744...71I}. As a consequence, $\eta \sim 1$, which increases $\varepsilon_0$. On the other hand, in gas-poor/diffuse sites the shock waves are not decelerated, and the high $v_\mathrm{sh}$ results also in this case in a high $\varepsilon_0$ and, accordingly, in an enhancement of the X-ray emission. Sano et al. argued that the ambient conditions of the neutral ISM play a role in increasing the roll-off energy in the CR acceleration and the non-thermal X-rays. In what follows, we discuss the properties of the non-thermal X-rays in RCW 86 by comparing them with the X-ray properties of RX J1713.7$-$3946. The most intense non-thermal X-ray filaments in RCW 86 (Figure \ref{f01}) are seen at the NE and SW. The average proton column density $<$$N_\mathrm{p}(\mathrm{H}_2 + \mathrm{H}{\textsc{i}})$$>$ is 0.94 $\pm$ 0.07 $ \times 10^{21}$ cm$^{-2}$ in NTX-NE1 and 3.19 $\pm$ 0.09 $\times$ 10$^{21}$ cm$^{-2}$ in NTX-SW4, i.e., they differ by a factor of three. In RCW 86 the gas-rich/clumpy region corresponds to the SW, and the gas-poor/diffuse region to the NE. At the SW, however, the H{\sc i} cloud does not have a clumpy CO
counterpart. Unlike in RX~J1713.7$-$3946, no cold H{\sc i} clumps were detected as self-absorption features. Figure \ref{f13} shows the X-rays and the H{\sc i} clump in the SW. We see that the thermal and non-thermal X-rays are enhanced around the cold H{\sc i} clump. This suggests that the shock-cloud interaction with the cold H{\sc i} clump amplifies the turbulence and magnetic field, causing the rim-brightened non-thermal X-rays. A similar enhancement of the thermal X-rays is seen, indicating that the shock waves are heating up the surface of the cold H{\sc i} clump. The H{\sc i} peak brightness temperature of the clump is low, $\sim58$ K, suggesting that the clump has a density of $\sim150$ cm$^{-3}$ \citep{2014ApJ...796...59F,2015ApJ...798....6F}. The same complementary spatial distribution between the cold H{\sc i} and the
X-rays due to shock-cloud interaction is also observed in RX~J1713.7$-$3946 \citep[see Figure 4 of ][]{2013ApJ...778...59S}. It is also expected that the synchrotron X-ray flux will vary on a timescale of several years owing to the strong magnetic field \citep[e.g.,][]{2007Natur.449..576U}.
\begin{figure*}
\begin{center}
\includegraphics[width=180mm,clip]{rcw86_f14_cmyk.eps}
\caption{(a) Non-thermal X-ray image of RCW 86 in the energy range from 2.0 to 4.5 keV superposed with $^{12}$CO($J$ = 2--1) intensity contours (white) toward the east of the SNR. The color scale is linear, and is given in units of $10^5$ photons s$^{-1}$ pixel$^{-1}$. The CO velocity range is the same as Figure \ref{f06} East. The lowest contour level and the contour interval of CO are 2.1 K ($\sim 15 \sigma$) and 1.0 K ($\sim 10 \sigma$), respectively. (b) Radio continuum image superposed on $^{12}$CO($J$ = 2--1) intensity contours (white). The linear color scale is given in mJy beam$^{-1}$. The contour levels are the same as in the left panel. (c) Same non-thermal X-ray image as in Figure \ref{f14}a, superposed with radio continuum contours (orange). The lowest contour level and the contour interval of the radio continuum are 7 mJy beam$^{-1}$ and 10 mJy beam$^{-1}$, respectively.}
\label{f14}
\end{center}
\end{figure*}%
Within the gas-poor/diffuse region at the NE, the
acceleration by the fast shock is highly efficient. Figure \ref{f09} also shows that the ISM density
at the NE is lower than at the SW and the N, as indicated by the lower H{\sc i} intensity. H$\alpha$ and X-ray observations indicate that the maximum shock velocity at the NE is $\sim3,000$ km s$^{-1}$ (with an
average of $\sim1,200$ km s$^{-1}$), 3--6 times larger than that at the SW and NW \citep{1990ApJ...358L..13L,2001ApJ...547..995G,2013MNRAS.435..910H}. A difference in velocity by a factor of 3 corresponds to an $\varepsilon_0$ larger by
an order of magnitude. We thus suggest that in the NE, the fast
shock waves increased $\varepsilon_0$ and the intensity of the synchrotron radiation. It has been suggested that the shock velocity at the NE has been slowing down rapidly over the last 200 years \citep{2013MNRAS.435..910H}. This is consistent with the deformation of the shock front toward NTX-NE1 along the curved H{\sc i} cavity-wall (see Northeast in Figure \ref{f09}). In spite of that, the shock velocity remains three times higher than in the SW, suggesting that the
H{\sc i} gas is physically associated with the SNR. Within 1,000 years, we expect the shock waves in the NE to come fully into contact with the H{\sc i} cavity wall, whereupon the X-rays will be enhanced by the shock-cloud interaction.
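From Equation (\ref{eq6}), the factor of $\sim$3 difference in shock velocity between the NE and the SW translates, for a fixed gyro-factor $\eta$, into
\begin{eqnarray}
\frac{\varepsilon_0(\mathrm{NE})}{\varepsilon_0(\mathrm{SW})} \sim \left(\frac{v_\mathrm{sh,NE}}{v_\mathrm{sh,SW}}\right)^{2} \sim 3^{2} \approx 10, \nonumber
\end{eqnarray}
consistent with the order-of-magnitude enhancement of the roll-off energy discussed above.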
\begin{figure*}
\figurenum{A.1}
\begin{center}
\includegraphics[width=174mm,clip]{rcw86_fa1_cmyk.eps}
\caption{Velocity channel distributions of the $^{12}$CO($J$ = 1--0) and H{\sc i} brightness temperatures superposed with the same X-ray intensity contours as in Figure \ref{f03}a. Each panel of CO/H{\sc i} shows intensity distributions averaged every 5 km s$^{-1}$ in a velocity range from $-70$ to +5 km s$^{-1}$. The scale and color bars for H{\sc i} and CO are shown on top of the set of panels.}
\label{fa1}
\end{center}
\end{figure*}%
\subsection{Forward and Reverse Shocks}
In this section we discuss the forward and reverse shocks in RCW 86. Based on $Chandra$ observations toward the SW shell, \cite{2002ApJ...581.1116R} showed that relativistic CR electrons are accelerated by the reverse shock, since the non-thermal X-rays are located interior to the heated H{\sc i} traced by the thermal X-rays. In addition,
the different spatial distributions of the radio continuum emission and the non-thermal X-rays reflect the different energy ranges of the emitting CR electrons. For a magnetic field of 10 $\mu$G, the electrons whose synchrotron radiation peaks at $h\nu$ = 4 keV lose their energy on a decay timescale of only 900 years, while the CR electrons emitting the 1 GHz radio continuum can radiate for over $10^7$ years. The synchrotron X-rays therefore originate from
the high-energy electrons close to the shock front, and the radio emission from
lower-energy electrons downstream \citep{2002ApJ...581.1116R}. \cite{2007PASJ...59S.171U} and \cite{2011PASJ...63S.837Y} studied the Fe ejecta distribution over the SNR and showed that the highly ionized Fe ejecta are likely heated up by the reverse shock.
Our comparative study of multi-wavelength observations of the ISM provides a tool to discriminate between the reverse shock and the forward shock in different regions. In Figure \ref{f11} we showed the relative locations of the non-thermal X-ray peaks and the radio peaks in the radial distribution. Positive values of the separation (Separation $> 0$) correspond to the case in which the non-thermal X-ray peaks are located farther inside than the nearest radio peak, while negative values (Separation $< 0$) correspond to the opposite case. \cite{2002ApJ...581.1116R} interpreted the former case as corresponding to the reverse shock and the latter to the forward shock.
The reverse shock is located only in regions with an average gas column density $<$$N_\mathrm{p}(\mathrm{H}_2 + \mathrm{H}{\textsc{i}})$$>$ $> 10^{21}$ cm$^{-2}$, whereas the forward shock is located, except for one point, in regions with $<$$N_\mathrm{p}(\mathrm{H}_2 + \mathrm{H}{\textsc{i}})$$>$ $< 10^{21}$ cm$^{-2}$. This supports the idea that the reverse shock recoiled after the collision with the ISM. The exceptional point may be explained if the shock slipped through the clumpy gas-rich region. Conversely, in the gas-poor region the reverse shock has not been detected.
In particular, toward the dense clumpy cloud CO $-37$ E the forward/reverse shock has a complicated distribution. Figure \ref{f14} shows the
non-thermal X-ray and radio continuum distributions of CO $-37$ E. The upper right corner of the images points in the direction of the SNR center. In addition to the NTX-NE2--3 complexes, we find various non-thermal X-ray filaments, with NTX-E1--2 as a representative case, and the filaments show distributions complementary to the clumpy CO clouds. We see a trend in which the non-thermal X-rays are located more toward the interior than the radio continuum. In the typical region NTX-NE2--3, the separation between the non-thermal X-rays and the radio continuum is 48--64 arcsec, which is equivalent to 0.6--0.8 pc at 2.5 kpc. The shock speed in this area was estimated to be $1,653 \pm 228$ km s$^{-1}$ by \cite{2013MNRAS.435..910H}. If we assume that the reverse shock and the forward shock are moving at the same velocity, the shock wave collided with the molecular cloud $\sim$400 yr ago. This agrees well with the shock age $< 380$ yr of the Fe ejecta \citep[e.g.,][]{2008PASJ...60S.123Y}.
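The elapsed time since the collision follows from the mean projected separation of $\sim$0.7 pc and the measured shock speed:
\begin{eqnarray}
t \sim \frac{0.7\;\mathrm{pc}}{1,653\;\mathrm{km\;s^{-1}}} \sim 4\times10^{2}\;\mathrm{yr}. \nonumber
\end{eqnarray}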
In addition, the reverse shock may hold the key to understanding the efficient acceleration mechanism of CR electrons to $\sim$1 TeV or higher. According to numerical simulations, turbulence in the downstream regions can create strong magnetic fields, up to mG locally or $\sim$50 $\mu$G on average \citep[e.g.,][]{2012ApJ...744...71I}. In this unusual situation, additional acceleration mechanisms may become important, including acceleration with magnetic reconnection \citep[e.g.,][]{2012PhRvL.108m5003H}, reverse shock acceleration \citep[e.g.,][]{2001SSRv...99..305E}, non-linear effects of diffusive shock acceleration (DSA) \citep[e.g.,][]{2001RPPh...64..429M}, and second-order Fermi acceleration \citep{1949PhRv...75.1169F}. Detailed X-ray spectroscopy and comparative studies with the interstellar gas will reveal the efficient acceleration mechanisms of CR electrons.
To summarize, a thorough investigation of the ISM is extremely important for studying the progenitor system, the origin of the thermal X-rays, the acceleration mechanism of CR electrons, and the shock dynamics of SNRs. Observations of the ISM at angular resolutions better than 45 arcsec will allow us to compare small-scale structures of the ISM with observations at other wavelengths, and will enable us to pursue the physical processes in more detail. A spectral analysis of the X-ray data is indispensable to derive the distributions of the photon index and the roll-off energy, and will provide a firm basis for elucidating the relationship between the CR acceleration and the ISM. The synchrotron radiation above 10 keV from the electrons accelerated by the reverse shock will be accessible to hard X-ray imaging with $NuSTAR$. $Chandra$ high-resolution measurements of the proper motion will reveal the kinematics of the X-ray filaments in detail.
\section{CONCLUSIONS}
We summarize the present work as follows.
\begin{enumerate}
\item We have revealed atomic and molecular gas associated with the young TeV $\gamma$-ray SNR RCW 86 by using NANTEN2 CO and ATCA \& Parkes H{\sc i} datasets. The H{\sc i} gas is distributed surrounding the X-ray shell and shows a cavity-like distribution with an expansion velocity of $\sim7$ km s$^{-1}$, while the CO clouds are located only in the east, south, and northwest, showing a high CO $J$ = 2--1/1--0 intensity ratio ($> 0.8$) enhanced by the shock heating and/or compression at the surface of the clouds.
\item Thermal X-ray filaments show a good spatial correspondence with the H{\sc i} wall and small-scale structures of CO clouds. We also found a correlation between the total proton column density and the thermal X-ray intensity. This indicates that the atomic/molecular gas of density 10--100 cm$^{-3}$ is associated with the SNR shockwaves.
\item Non-thermal X-rays are bright in both the gas-rich and gas-poor regions. We interpret that the shock-cloud interaction with the cold H{\sc i} clumps and the high shock velocity could enhance the non-thermal X-rays, a situation similar to that discussed by \cite{2015ApJ...799..175S} for the SNR RX J1713.7$-$3946. In addition, the reverse shock is detected only in the gas-rich region with a total proton column density of $\sim10^{21}$ cm$^{-2}$ or higher.
\item Our study confirms that the progenitor of RCW 86 was a system consisting of a white dwarf and a low-mass star with low-velocity accretion winds, as suggested by \cite{2011ApJ...741...96W}.
\end{enumerate}
\section*{ACKNOWLEDGEMENTS}
We are grateful to Aya Bamba, Takaaki Tanaka, Hiroyuki Uchida, and Hiroya Yamaguchi for thoughtful comments and their contributions on the X-ray properties. We acknowledge Anne Green for her valuable support during the H{\sc i} observations and reduction, and Gloria M. Dubner, PI of the ATCA project C1011 carried out to obtain the H{\sc i} data reported here, who provided them to Yasuo Fukui. We also thank Shinya Tabata, Momo Hattori, Shigeki Shimizu, Sho Soga, Daichi Nakashima, Shingo Otani, Yutaka Kuroda, Masashi Wada, Ryo Kaji, Keisuke Hasegawa, and Rey Enokiya for their contributions to the observations of the $^{12}$CO($J$ = 1--0) data. This work was financially supported by Grants-in-Aid for Scientific Research (KAKENHI) of the Japan Society for the Promotion of Science (JSPS, grant Nos. 22740119, 12J10082, 24224005, 15H05694, and 16K17664). This work was also supported by ``Building of Consortia for the Development of Human Resources in Science and Technology'' of the Ministry of Education, Culture, Sports, Science and Technology (MEXT, grant No. 01-M1-0305). This research was based on observations obtained with $XMM$-$Newton$, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. We also utilized data from MOST and SHASSA. The Molonglo Observatory Synthesis Telescope (MOST) is operated by The University of Sydney with support from the Australian Research Council and the Science Foundation for Physics within The University of Sydney. The Southern H-Alpha Sky Survey Atlas (SHASSA) is supported by the National Science Foundation. EMR is a member of the Carrera del Investigador Cient\'\i fico of CONICET, Argentina, and is partially supported by CONICET grant PIP 112-201207-00226.
\section*{APPENDIX: VELOCITY CHANNEL MAPS OF CO AND H{\sc i}}
Figure \ref{fa1} shows the velocity channel distributions of the $^{12}$CO($J$ = 1--0) and H{\sc i} brightness temperatures every 5 km s$^{-1}$ from $-70$ km s$^{-1}$ to +5 km s$^{-1}$, superposed with the X-ray intensity contours. First, we investigated the spatial correlation and anti-correlation between the X-ray and interstellar gas (CO and H{\sc i}) distributions. We found that the X-ray shell is complementary to the CO/H{\sc i} structure at a radial velocity of $\sim -35$ km s$^{-1}$. In particular, the H{\sc i} cavity and its expanding motion showed evidence for an association with the SNR shocks (see also Figure \ref{f05} and Section 3.3). \cite{2016ApJ...819...98A} presented an image of the H{\sc i} cavity-like structure, but did not mention the existence of an expanding shell motion. Apparently, the H{\sc i} cloud at radial velocities from $-10$ to 0 km s$^{-1}$ is also well correlated with the X-ray shell; however, as it is a local cloud component, we excluded it from the standpoint of interaction with the SNR.
\section{Introduction}
Topologically nontrivial phases are exotic states of matter that have an
electronic band gap in their bulk and protected gapless excitations at their
boundaries.~\cite{REF:Hasan10,REF:Qi11,REF:BookFranz13} Superconductors, being
quasiparticle insulators, also feature topological phases with a quasiparticle
gap in the bulk and excitations at their edges. For 1D systems, these edge
states are fermionic zero-energy modes called Majorana
states.~\cite{REF:Alicea12,REF:Leijnse12a,REF:Beenakker13,REF:BookBernevig13,REF:Elliott15}
These states attracted intense attention owing to their non-Abelian nature,
which led to proposals to use them as topological qubits immune to
decoherence.~\cite{REF:Kitaev03,REF:Nayak08} Although predicted to appear in
exotic condensed matter systems with unconventional superconducting
pairing,~\cite{REF:Jackiw81,REF:Salomaa88,REF:Moore91,REF:Read00,REF:Ivanov01,REF:Kitaev01}
recent proposals~\cite{REF:Alicea10,REF:Lutchyn10,REF:Sau10,REF:Oreg10}
involving hybrid structures of more conventional materials have
appeared.~\footnote{Note1} This led to the recent conductance measurements done
on a proximity-coupled InSb nanowire,~\cite{REF:Mourik12} which showed possible
evidence of Majorana end states in the form of zero bias conductance peaks (ZBPs).
Other experiments reported further observations of zero bias peaks in similar
settings.~\cite{REF:Das12,REF:Deng12,REF:Finck13,REF:Churchill13,REF:Lee14}
Very recently, scanning-tunneling spectroscopy experiments carried out on
magnetic adatom chains on a conventional superconductor reported ZBPs at the
ends of the chains.~\cite{REF:Nadj-Perge14} While it is compelling to interpret
the observation of these ZBPs as signatures of Majorana states, the issue is
still under intense discussion.~\footnote{Note2}
Semiconductor nanowire structures that are proximity-coupled to superconductors
are technologically attractive platforms for Majorana physics. However,
disorder has been prominently present in all such experimental samples to date.
This led to a renewed interest in disordered superconducting wires,
particularly focusing on the effects of disorder on Majorana
states.~\cite{REF:Motrunich01,REF:Gruzberg05,REF:Akhmerov11,REF:Fulga11,
REF:Potter11a,REF:Potter11b,REF:Stanescu11,REF:Brouwer11a,REF:Brouwer11b,REF:Sau12,
REF:Lobos12,REF:Pientka13b,REF:DeGottardi13a,REF:Neven13,REF:Sau13,
REF:Rieder13,REF:Chevallier13,REF:DeGottardi13b,REF:Jacquod13,REF:Adagideli14,
REF:Hui14a} These works focused mostly on disordered \textit{p}-wave
superconducting wires (PW wires) and showed that disorder is detrimental to the
spectral gap as well as to the formation of Majorana fermions in both strictly
1D systems~\cite{REF:Motrunich01,REF:Gruzberg05,
REF:Brouwer11a,REF:Sau13,REF:Adagideli14,REF:Hui14a} and in multichannel
wires.~\cite{REF:Stanescu11,REF:Pientka12,REF:Neven13,REF:Rieder13} In a recent
study on the experimentally relevant hybrid structures with Rashba spin-orbit
interaction (SOI) proximity coupled to an \textit{s}-wave superconductor (RSW
nanowires for short), some of us showed that disorder need not be detrimental
to and in fact can even \textit{create} topological order in strictly 1D
wires.~\cite{REF:Adagideli14} We are not aware of a systematic study of the
effects of disorder on the phase diagram of multichannel RSW nanowires.
In Majorana experiments, the subband spacing is typically considerably larger
than the Zeeman splitting. For example, in InSb nanowires a subband spacing of
order 15 meV has been measured~\cite{REF:vanWeperen12,REF:Kammhuber16} together
with a g-factor of 40 to 58. Zero bias peaks that might signal Majorana
fermions in these works are typically measured at magnetic fields from 0.1 mT
to 1 T~\cite{REF:Mourik12,REF:Zhang16} and exceptionally up to 2.5 T. In all of
these cases the Zeeman splitting remains smaller than the level spacing. Hence,
one can argue that RSW nanowires are more experimentally relevant than PW
nanowires, which require the Zeeman splitting to be much larger than the level spacing.
In this Manuscript, we investigate topological properties of disordered
multichannel RSW and PW superconductor nanowires. The usual expectation for
these nanowires is that if their topological state is switched by modifying
certain external parameters (such as gate potential or magnetic field), the
spectral gap will close and open concomitantly with this transition. We show
that for disordered nanowires, the closing and opening of a \textit{transport}
gap can cause further topological transitions, even in the presence of finite
density of states (DOS), extending our earlier work on single channel
wires~\cite{REF:Adagideli14} to multichannel wires. We derive analytical
expressions for the boundaries of the topological phases of a disordered
multichanneled RSW nanowire and find new topological regions in the phase
diagram that show up as additional reentrant behavior in the experimentally
relevant parameter regimes. In particular, new topological regions that show up
in the low magnetic field limit, requires full description of all spin bands as
shown by our analytical results (see
Fig.~\ref{FIG:SWave_TB_Analytical_Numerical_Combined}). Hence, our results go
beyond a simple \textit{p}-wave description that requires a fully spin
polarized wire. Finally we perform numerical simulations using a tight-binding
(TB) approach and find very good agreement with our analytical formulae.
This Manuscript is organized as follows: We begin the next section by
specifying the system in question. We then derive the topological index in
terms of the Lyapunov exponents and the effective superconducting length of the
disordered multichannel RSW wire in
subsection~\ref{SUBSECT:Topological_index_RSW}. In
subsection~\ref{SUBSECT:Calculation_Topological_index_RSW}, we analytically
calculate this topological index using experimentally relevant system and
transport parameters and compare our results with numerical tight-binding
simulations. We then present our conclusions, finding that in disordered
multichannel RSW nanowires with experimentally relevant parameters, the
topological phase diagram is fragmented and previously unreported reentrant
topologically nontrivial regions appear. In the Appendices, we detail the
calculation of the mean free path of the system
(Appendix~\ref{SECT:Appendix_MFP_FermiGoldenRule}), describe our numerical
simulations (Appendix~\ref{SECT:Appendix_TB}), present full-bandwidth versions
of the plots in the main text as opposed to only the low-energy region
(Appendix~\ref{SECT:Appendix_FullBWPlots}), and finally, for completeness and
comparison, present plots similar to those for the RSW system but produced for
a disordered \textit{p}-wave nanowire, a system previously studied in the
literature (Appendix~\ref{SECT:Appendix_PWave}).
\section{Topological order in disordered multichannel wires}\label{SECT:DisorderedTSWires}
\begin{figure}
\centering \includegraphics[width=0.9\columnwidth]{{fig_One}.pdf}
\caption{The multichannel nanowire of width $W$, which is an RSW
topological superconductor with a Gaussian disorder having an average value
$\left\langle V\right\rangle=0$. a) In the leads, we take
$\alpha_\textrm{SO}$, $\Delta$ and $V(x,y)$ to be zero, making the leads
metallic. Our analytical results assume a semi-infinite wire
($L\rightarrow\infty$), whereas in our numerical full tight-binding
calculations we use wires of length $L\gg l_\text{MFP}, \xi,
l_\textrm{SO}$. b) The form of the wire used to construct the Majorana
solutions in section \ref{SUBSECT:Topological_index_RSW}. The part of the
wire left of the scattering region is again metallic.}\label{FIG:2DWire}
\end{figure}
In this section, we investigate the topological properties of multichannel
topological superconductor nanowires. Such wires are experimentally realized
by proximity coupling a semiconductor nanowire
with Rashba spin-orbit interaction to an \textit{s}-wave superconductor (RSW,
see Fig.~\ref{FIG:2DWire} (a)). The quasiparticles in RSW nanowires are described
by the following Bogoliubov--de Gennes (BdG)
Hamiltonian:~\cite{REF:Lutchyn10,REF:Oreg10,REF:deGennes99}
\begin{align}\label{EQN:SWaveHamiltonian}
H &= \int \Psi^\dagger \, \mathcal{H}_\mathrm{BdG} \, \Psi \, d\mathbf{r} \nonumber\\
\mathcal{H}_\mathrm{BdG} &=\left(h_0 +\alpha_\textrm{SO}(\mathbf{p}\times\mathbf{\sigma})\right)\tau_z+B\sigma_x+\Delta\tau_x,
\end{align}
where $h_0 = \varepsilon(p) + V(\mathbf{r})$, $\Psi^\dagger =
[\psi_\uparrow^\dagger, \psi_\downarrow^\dagger, \psi_\downarrow,
-\psi_\uparrow]$ is the Nambu spinor with $\psi_{\uparrow(\downarrow)}$ being
the destruction operator for an electron with spin up(down). The kinetic energy
term $\varepsilon(p)$ is given by $\frac{\mathbf{p}^2}{2m}-\mu$ in a continuum
system. We consider a 2D wire with $\mathbf{p}=(p_x,p_y)$. The on-site
potential is given by $V(\mathbf{r})$, $\mu$ is the chemical potential measured
from the bottom of the band, $\alpha_\textrm{SO}$ is the spin-orbit coupling
(SOC) strength, $B$ is the Zeeman field and $\Delta$ is the proximity-induced
\textit{s}-wave superconducting gap. The Pauli matrices $\sigma_i$ ($\tau_i$)
act on the spin (electron-hole) space.
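For concreteness, the momentum-space BdG matrix of Eq.~(\ref{EQN:SWaveHamiltonian}) can be assembled from Kronecker products of Pauli matrices. The sketch below is purely illustrative (it is not the simulation code used in this work): it sets $\hbar=m=1$ and takes the Rashba term as the in-plane combination $p_x\sigma_y-p_y\sigma_x$, both assumptions made here for definiteness.

```python
# Illustrative sketch: build the 4x4 BdG matrix at fixed momentum (p_x, p_y)
# from tau (x) sigma Kronecker products, with hbar = m = 1 (assumed units).
S0 = [[1, 0], [0, 1]]
SX = [[0, 1], [1, 0]]
SY = [[0, -1j], [1j, 0]]
SZ = [[1, 0], [0, -1]]

def kron(a, b):
    """Kronecker product of two 2x2 matrices -> 4x4 nested list."""
    return [[a[i // 2][j // 2] * b[i % 2][j % 2] for j in range(4)]
            for i in range(4)]

def add(*ms):
    return [[sum(m[i][j] for m in ms) for j in range(4)] for i in range(4)]

def scale(c, m):
    return [[c * m[i][j] for j in range(4)] for i in range(4)]

def h_bdg(px, py, mu, alpha, b, delta, v=0.0):
    """BdG matrix: (h0 + Rashba) tau_z + B sigma_x + Delta tau_x."""
    eps = (px * px + py * py) / 2.0 - mu + v
    return add(scale(eps, kron(SZ, S0)),
               scale(alpha * px, kron(SZ, SY)),    # Rashba: p_x sigma_y ...
               scale(-alpha * py, kron(SZ, SX)),   # ... - p_y sigma_x
               scale(b, kron(S0, SX)),
               scale(delta, kron(SX, S0)))

H = h_bdg(0.3, 0.0, mu=1.0, alpha=0.2, b=0.4, delta=0.1)
print(len(H), H[0][0])   # 4x4 matrix; upper-left entry is eps(p) = p^2/2 - mu
```

The resulting matrix is Hermitian by construction; the electron-hole (Nambu) structure sits in the $\tau$ factor of each product.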
In the limit of large $B$, the wire is completely spin polarized. Then the
low-energy quasiparticles are described by an effective \textit{p}-wave
Hamiltonian as discussed in previous literature.~\cite{REF:Adagideli14,
REF:Akhmerov11,REF:Brouwer11b,REF:DeGottardi13a,REF:Fulga11,REF:Hui14a,
REF:Lobos12,REF:Rieder13,REF:Potter11a,REF:Potter11b,REF:Rieder12,REF:Sau12,
REF:Sau13} For completeness, we discuss this limit in
Appendix~\ref{SECT:Appendix_PWave}.
The Hamiltonian in Eq.~(\ref{EQN:SWaveHamiltonian}) is in the
Altland-Zirnbauer (AZ) symmetry class D (\textit{class D} for short) in
two dimensions~\cite{REF:Altland97} with a topological number
$Q_\textrm{D} \in \mathbb{Z}_2$. In the absence of SOC along the
$y$-direction, i.e. when the $\alpha_\textrm{SO} \,p_y \,\sigma_x
\tau_z$ term is set to zero, this Hamiltonian also possesses a chiral
symmetry, placing it into AZ symmetry class BDI (\textit{class BDI} for
short) with an integer topological number $Q_\textrm{BDI}\in
\mathbb{Z}$.~\cite{REF:Tewari12,REF:Rieder13} In the thin wire limit,
i.e. $W\ll l_\textrm{SO}$, chiral symmetry breaking terms are
$\mathcal{O}\left(( W/l_\textrm{SO})^2\right)$. Hence, the system in
Eq.~(\ref{EQN:SWaveHamiltonian}) has an approximate chiral
symmetry.~\cite{REF:Rieder12,REF:Tewari12,REF:Diez12} We show in the
next section that the class-BDI (chiral) topological number
$Q_\textrm{BDI}\in \mathbb{Z}$ and the class-D topological number are
related as $Q_\textrm{D} = (-1)^{Q_\textrm{BDI}}$ (see
Eq.~(\ref{EQN:Q_D})).~\cite{REF:Fulga11}
\subsection{Topological index for a disordered multichannel \textit{s}-wave wire}\label{SUBSECT:Topological_index_RSW}
To obtain the relevant topological index that counts the number of the Majorana
end states for a RSW wire with disorder, we start with the BdG Hamiltonian
$\mathcal{H}_\mathrm{BdG}$ in Eq.~(\ref{EQN:SWaveHamiltonian}). Following
Adagideli \textit{et al.},~\cite{REF:Adagideli14} we perform the unitary
transformation $\mathcal{H}_\mathrm{BdG}\rightarrow
\mathcal{H}_\mathrm{BdG}'=\mathcal{U}^\dagger
\mathcal{H}_\mathrm{BdG}\mathcal{U} $, where
$\mathcal{U}=(1+i\sigma_x)(1+i\tau_x) \left[1+\sigma_z+(1-\sigma_z)
\tau_x\right]/4$. Having thus rotated the Hamiltonian to the basis that
off-diagonalizes its dominant part and leaves the small chiral symmetry
breaking terms $\tau_z\sigma_z$ in the diagonal block, we obtain
\begin{align}\label{EQN:OffDiagonalizedHamiltonian}
\mathcal{H}_\mathrm{BdG}' &= -\tau_y \left(\sigma_z\,h_0+\alpha_\textrm{SO}\,p_x\right) + \tau_x \left(B\,\sigma_x+\Delta\right)\nonumber\\
&\quad +\,\tau_z\sigma_y\,\alpha_\textrm{SO}\,p_y\,.
\end{align}
We first set the chiral symmetry breaking term $\tau_z\,\sigma_y\,\alpha_\textrm{SO}\,
p_y$ to zero and focus on $E=0$. The eigenvalue equation then decouples into
the upper and lower spinor components. The solutions are of the form
$\chi_+=(\phi_+,0)^T$ and $\chi_-=(0, \phi_-)^T$ where $\phi_\pm$ obey the
following equation:
\begin{align}\label{EQN:Appendix:NormalStateHamiltonian}
\left(\varepsilon(p) \sigma_z - i\, p_x \alpha_\textrm{SO} \sigma_x \mp B \mp \Delta \sigma_x \right)\, \phi_\pm &= 0.
\end{align}
Here, we have performed an additional rotation $\sigma_z \rightarrow \sigma_y$,
$\sigma_y \rightarrow -\sigma_z$ and premultiplied with $\pm \sigma_x$. We note
that the operator acting on $\phi_\pm$ is not Hermitian.
We now perform a gauge transformation $\phi_\pm(x,y) \rightarrow
e^{-\kappa_\alpha x} \phi_\pm(x, y)$ with a purely imaginary parameter
$i\kappa_\alpha$. We take $\kappa_\alpha$ to be of first order in
$\alpha_\textrm{SO}$ and identify the following terms in the nonhermitian
operator in Eq.~(\ref{EQN:Appendix:NormalStateHamiltonian}) in order of
increasing power of $\alpha_\textrm{SO}$:
\begin{align}\label{EQN:ExpandForSmallAlpha}
H_0 &= h_0(p; x,y)\sigma_z \mp B \mp \Delta \sigma_x \nonumber\\
H_1 &= \frac{i \hbar \kappa_\alpha p_x}{m} \sigma_z - i \alpha_\textrm{SO} p_x \sigma_x \nonumber\\
H_2 &= -\frac{\hbar^2 \kappa_\alpha^2}{2m} \sigma_z + \hbar \alpha_\textrm{SO} \kappa_\alpha \sigma_x,
\end{align}
where we have indicated the $(x,y)$ dependence of $h_0(p; x,y)$ through the
potential $V(x,y)$. We absorb $H_2$ into $H_0$ by redefining $\mu$ and
$\Delta$. We now identify $\kappa_\alpha$ with the inverse of the effective
superconducting length $\xi_\textrm{eff}$, setting $\kappa_\alpha =
\mp \xi_\textrm{eff}^{-1} = \mp m \alpha_\textrm{SO} \Delta/\hbar \epsilon$ with
$\epsilon = \sqrt{B^2-\Delta^2}$. With this choice, $\{H_0,H_1\}_+=0$, which
allows us to write the local solutions as follows:
\begin{align}\label{EQN:GeneralEigensolution}
\phi_\pm &= \sum_n\xi_\pm(\epsilon) {\rm e}^{\pm \kappa_\alpha x} \big(A_n f_n(x,y;\epsilon) + B_n g_n(x,y;\epsilon) \big) \nonumber \\
&\quad+ \xi_\pm(-\epsilon) {\rm e}^{\mp \kappa_\alpha x} \big( C_n f_n(x,y;-\epsilon) + D_n g_n(x,y;-\epsilon) \big),
\end{align}
where $\xi_\pm(\epsilon)$ are the eigenvectors of the $2\times 2$ matrix
$\epsilon \sigma_z \mp \Delta \sigma_x$ with eigenvalue $\pm |B|$ and $f_n$ and
$g_n$ are the local solutions of the equation $h_0 \psi=\epsilon \psi$. The
presence of multiple local solutions, which is the new aspect of
the present problem, reflects the multichannel nature of the wire.
We then consider a semi-infinite wire ($x>0$, $0<y<W$) described by the Hamiltonian in
Eq.~(\ref{EQN:SWaveHamiltonian}) with Gaussian disorder. After going through the
steps described above, we choose without loss of generality $f_n$ to be the
decaying and $g_n$ the increasing function of $x$. We invoke a well known
result from disordered multichannel normal state wires and express the
asymptotic solutions as $f_n = e^{-\Lambda_n x} u_n(x,y)$ and $g_n =
e^{\Lambda_n x} v_n(x,y)$ where $u_n(x,y), v_n(x,y)$ are ${\cal O}(1)$
functions as $x\rightarrow \infty$ and $\Lambda_n>0$ are the \textit{Lyapunov
exponents}.~\cite{REF:Beenakker97,REF:Fulga11,REF:DeGottardi13a,REF:Rieder13,
REF:Adagideli14}
We now focus on a tight-binding system, where the number of Lyapunov exponents
$N_\textrm{max}$ is finite. (In the continuum limit, we have
$N_\textrm{max}\rightarrow \infty$.) For the boundary conditions at $x=0$, we
first extend the hardwall back to $x=-L'$ with $L'$ a small value, and consider
a normal metal in the strip $-L'<x<0$ and $0<y<W$ (see Figure \ref{FIG:2DWire}
(b); in Eq.~(\ref{EQN:SWaveHamiltonian}), $\alpha_\textrm{SO}=0$, $\Delta=0$,
$V(x,y)=0$). The hardwall boundary condition at $x=-L'$ can be expressed as
$\underline{\underline{R}} \cdot \underline{b}_+ = \underline{b}_-$ with
$\underline{b}_+ \equiv (\ldots, A_n, C_n, \ldots)^T$, $\underline{b}_- \equiv
(\ldots, B_n, D_n, \ldots)^T$ and $\underline{\underline{R}}$ as the extended
reflection matrix.~\cite{REF:Mello04} We therefore have $2N_\textrm{max}$
boundary conditions, leaving $2N_\textrm{max}$ of the $4N_\textrm{max}$
parameters undetermined.
The boundary conditions at $x\rightarrow\infty$ require that $\phi_\pm$ have
only exponentially decaying solutions. We focus on the $B>\Delta$ case,
yielding real $\kappa_\alpha$ and $\epsilon$. (As discussed in References
\cite{REF:Sau10} and \cite{REF:Oreg10}, the $B<\Delta$ case yields no
solutions.) We take $\kappa_\alpha>0$ for definiteness. (The following
arguments can be extended trivially to the $\kappa_\alpha<0$ case.) The
exponential asymptotic factors in the solutions contain a factor of $e^{\pm
\kappa_\alpha x}$ in various sign combinations, affecting the overall
convergence at $x\rightarrow\infty$. In particular, the solutions $\phi_+$ have
exponential factors of $e^{(\kappa_\alpha-\lambda_n(\epsilon))x}$,
$e^{(\kappa_\alpha+\lambda_n(\epsilon))x}$,
$e^{(-\kappa_\alpha-\lambda_n(-\epsilon))x}$ and
$e^{(-\kappa_\alpha+\lambda_n(-\epsilon))x}$, whereas the $\phi_-$ solutions
have the same form of exponential factors with the sign of $\kappa_\alpha$
switched. For $|\kappa_\alpha|$ smaller than all Lyapunov exponents, all $B_n$
and $D_n$ are set to zero as they would represent diverging solutions at $x
\rightarrow \infty$. There are therefore $2N_\textrm{max}$ more conditions,
bringing the total up to $4N_\textrm{max}$, to determine a total of
$4N_\textrm{max}$ parameters, yielding only accidental solutions. However, for
a given $n=n_*$, if $\textrm{min}\,(\lambda_{n_*}(\epsilon),
\lambda_{n_*}(-\epsilon)) < \kappa_\alpha <
\textrm{max}\,(\lambda_{n_*}(\epsilon), \lambda_{n_*}(-\epsilon))$, there are
three growing solutions for one of the $\phi_\pm$ sectors and only one for the
other sector. (If $\lambda_{n_*}(\epsilon)<\lambda_{n_*}(-\epsilon)$, the
$\phi_+$ sector has the three growing solutions and vice versa.) The sector
with three growing solutions thus has the number of boundary conditions
increased by one and the other sector has the number of boundary conditions
decreased by one. If any sector has more than $4N_\textrm{max}$ boundary
conditions in total, then there are no solutions for that sector. Therefore,
the BDI topological number $Q_\textrm{BDI}\in \mathbb{Z}$ is given by the
number of free parameters, which is equal to $4N_\textrm{max}$ minus the total
number of equations arising from the boundary condition at $x=-L'$. We obtain:
\begin{align}\label{EQN:Q_BDI}
Q_\textrm{BDI} &= \sum_{n}\Theta\left(\xi_\text{eff}^{-1} - \Lambda_n(\epsilon) \right)\, \Theta\left(\Lambda_n(-\epsilon) - \xi_\text{eff}^{-1} \right) \nonumber\\
&\quad -\sum_{n}\Theta\left(\xi_\text{eff}^{-1} - \Lambda_n(-\epsilon) \right)\, \Theta\left(\Lambda_n(\epsilon) - \xi_\text{eff}^{-1} \right).
\end{align}
We see that each Lyapunov exponent pair $\Lambda_n(\pm\epsilon)$ contributes a
topological charge $Q^{(n)}_\textrm{BDI}$ to the overall topological charge.
Hence $Q_\textrm{BDI}=\sum_n Q^{(n)}_\textrm{BDI}$, where
\[
Q^{(n)}_\textrm{BDI} =
\begin{cases}
+1 \quad \textrm{if } \Lambda_n(-\epsilon) > \xi_\textrm{eff}^{-1} > \Lambda_n(\epsilon) \\
-1 \quad \textrm{if } \Lambda_n(-\epsilon) < \xi_\textrm{eff}^{-1} < \Lambda_n(\epsilon) \\
{\phantom +}0 \quad \textrm{otherwise.}
\end{cases}
\]
We thus generalize the results of Ref.~\cite{REF:Adagideli14} to a multichannel
RSW wire. We note, however, that the total number of Majorana end states for a
multichannel RSW wire in class BDI, given by $|Q_\textrm{BDI}|$, is not equal
to sum of the Majorana states per Lyapunov exponent pair,
i.e.~$|Q_\textrm{BDI}| \ne \sum_n |Q^{(n)}_\textrm{BDI}|$.
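The counting rule above can be evaluated directly. The following sketch (with arbitrary illustrative values for the Lyapunov exponents, not the code used for our simulations) implements the per-pair charge and its sum, Eq.~(\ref{EQN:Q_BDI}):

```python
# Illustrative sketch: per-pair BDI topological charge from the Lyapunov
# exponent pair (Lambda_n(+eps), Lambda_n(-eps)) and the inverse effective
# superconducting length 1/xi_eff, following the case distinction above.
def q_bdi_pair(lam_plus, lam_minus, inv_xi_eff):
    """Charge of one pair: +1, -1 or 0."""
    if lam_minus > inv_xi_eff > lam_plus:
        return +1
    if lam_minus < inv_xi_eff < lam_plus:
        return -1
    return 0

def q_bdi(pairs, inv_xi_eff):
    """Total BDI index: the sum of the per-pair charges."""
    return sum(q_bdi_pair(lp, lm, inv_xi_eff) for lp, lm in pairs)

# Two pairs (arbitrary units); 1/xi_eff lies between the exponents of the
# first pair only, so only that pair carries a nonzero charge.
pairs = [(0.2, 0.8), (1.5, 1.9)]
print(q_bdi(pairs, 0.5))   # 1
```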
We now consider the full Hamiltonian in Eq.~(\ref{EQN:SWaveHamiltonian}) with
the chiral symmetry breaking term included. This Hamiltonian in two dimensions
is in class D and only approximately in class BDI. The chiral symmetry breaking
term pairwise hybridizes the Majorana states described above, moving them away
from zero energy. However, because of the particle-hole symmetry of the
topological superconductor, any perturbation that is higher order in
$\alpha_\textrm{SO}$ can only move solutions away from the zero-energy
eigenvalue in pairs; i.e.\ for any solution moving towards a positive
eigenvalue, a matching solution must move to a negative eigenvalue. Therefore,
the number of zero-eigenvalue solutions changes in pairs, and the parity does
not change. The parity changes, however, every
time one of the Lyapunov exponents passes through the value of
$\xi_\text{eff}^{-1}$. We therefore arrive at the class D topological index
$Q_\textrm{D} = (-1)^{Q_\textrm{BDI}}$ as~\cite{REF:Fulga11}
\begin{align}\label{EQN:Q_D}
Q_\textrm{D} &= \prod_{n,\pm}{\rm sgn}\big(\Lambda_n(\pm\epsilon)\,\xi_\text{eff} -1\big),
\end{align}
indicating that there is a class D Majorana solution at zero energy
($Q_\textrm{D} = -1$) if there are an odd number of BDI Majorana states per
edge. Therefore, for the topological state of the RSW wire to change from
trivial to nontrivial or vice versa, it is necessary and sufficient to have
$Q_\textrm{BDI}$ described in Eq.~(\ref{EQN:Q_BDI}) change by one. The above
equation thus constitutes the multichannel generalization of Eq.(7) of
Ref.~\cite{REF:Adagideli14}.
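As a quick illustration of Eq.~(\ref{EQN:Q_D}), the class D index is a product of signs over all Lyapunov exponents; the values below are arbitrary and serve only to demonstrate the sign rule (this is not the code used for our simulations):

```python
# Illustrative sketch of Eq. (Q_D): Q_D = prod sgn(Lambda * xi_eff - 1),
# taken over the full list of exponents Lambda_n(+eps) and Lambda_n(-eps).
def sign(x):
    return 1 if x > 0 else -1

def q_d(lyapunovs, xi_eff):
    q = 1
    for lam in lyapunovs:
        q *= sign(lam * xi_eff - 1.0)
    return q

# With xi_eff = 2, exactly one exponent (0.2) satisfies Lambda*xi_eff < 1,
# so a single sign flip yields the topologically nontrivial value Q_D = -1.
lams = [0.2, 0.8, 1.5, 1.9]   # arbitrary units
print(q_d(lams, xi_eff=2.0))  # -1
```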
To calculate the topological index $Q_\textrm{D}$ in Eq.~(\ref{EQN:Q_D}), we relate
the Lyapunov exponents in Eq.~(\ref{EQN:Q_BDI}) to transport properties, namely
the mean free path, of a disordered wire. We first note that as $L \rightarrow
\infty$, the Lyapunov exponents $\Lambda_n$ are self-averaging, with a mean
value $\bar{\Lambda}_n$ given by
\begin{align}\label{EQN:LyapunovExponent_from_MFP}
\bar{\Lambda}_{n}(\mu_\textrm{eff})&=\frac{n}{(\bar{N}(\mu_\textrm{eff})+1)\,l_\text{MFP}},
\end{align}
where $\mu_\textrm{eff}=\mu\pm\epsilon$, $\bar{N}(\mu_\textrm{eff}) = \lfloor W
k_F(\mu_\textrm{eff})/\pi \rfloor$, $k_F=\sqrt{2m\mu_\textrm{eff}/\hbar^2}$,
$n\in\{1,\ldots,\bar{N}(\mu_\textrm{eff})\}$ and $l_\text{MFP}$ is the mean free path (MFP) of the
disordered wire.~\cite{REF:Beenakker97} We use Fermi's Golden Rule to
approximate the mean free path $l_\textrm{MFP}$ by calculating the lifetime of
a momentum state and multiplying it with the Fermi speed. We obtain, for a
quadratic dispersion relation $\varepsilon(p) = p^2/2m-\mu$,
\begin{align}\label{EQN:inverseMFP_FreeElectron_EndResult}
l_\text{MFP}^{-1}&=\frac{4m^2\gamma}{\hbar^4\pi k_F}\,\zeta_N^{-1},
\end{align}
where $\zeta_N^{-1}$ is a dimensionless number whose detailed form is given in
Eq.~(\ref{EQN:Appendix:inverseZetaN_FreeElectron}). The details of this
calculation can be found in Appendix~\ref{SECT:Appendix_MFP_FermiGoldenRule}.
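Equation~(\ref{EQN:LyapunovExponent_from_MFP}) is straightforward to evaluate; the sketch below (illustrative only, in units with $\hbar=m=1$, which is an assumption made here) computes the channel count $\bar{N}$ and the self-averaged exponents:

```python
import math

# Illustrative sketch of Eq. (LyapunovExponent_from_MFP), with hbar = m = 1
# (assumed units): N_bar = floor(W k_F / pi), Lambda_n = n / ((N_bar+1) l_MFP).
def n_bar(mu_eff, width):
    """Number of open channels at effective chemical potential mu_eff."""
    k_f = math.sqrt(2.0 * mu_eff)
    return math.floor(width * k_f / math.pi)

def lyapunov_mean(n, mu_eff, width, l_mfp):
    """Self-averaged Lyapunov exponent of channel n."""
    return n / ((n_bar(mu_eff, width) + 1) * l_mfp)

mu, W, l = 2.0, 10.0, 50.0          # illustrative values
print(n_bar(mu, W))                  # floor(10 * 2 / pi) = 6
print(lyapunov_mean(1, mu, W, l))    # smallest exponent, 1 / (7 * 50)
```

The smallest exponent $\bar{\Lambda}_1$ sets the longest localization length and hence dominates the phase boundaries.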
In order to compare our numerical tight-binding results with the analytical
results obtained through Eqs.~(\ref{EQN:Q_D}) and (\ref{EQN:Q_BDI}), we also
calculate the mean free path $l_{\text{MFP}}^\text{TB}$ for a tight-binding
(TB) dispersion relation $\varepsilon(k_{x,n}) = 2t \,
\left(2-\cos{(k_{x,n}a)}-\cos{(n\pi a/W)}\right)$, where $t$ is the hopping
parameter, $a$ is the lattice parameter for the TB lattice, $W$ is the width of
the lattice and $k_{x,n}$ is defined through $k_{x,n}^2+k_{y,n}^2=k_F^2$ with
$k_{y,n}=n\pi/W$. We obtain
\begin{align}\label{EQN:inverseMFP_TB_EndResult}
(l_{\text{MFP}}^\text{TB})^{-1}&=\frac{\gamma}{\bar{N}^\textrm{TB} Wa^2t^2}\,(\zeta_N^{\text{TB}})^{-1},
\end{align}
where $\bar{N}^\textrm{TB}$ is given by $\lfloor (W/\pi a)
\arccos{(1-\varepsilon/2t)} \rfloor$ for $0<\varepsilon <4t$ and $\lfloor
(W/\pi a) \arccos{(1-(4-\varepsilon/2t))} \rfloor$ for $4t<\varepsilon <8t$.
The details of the calculation and the dimensionless constant
$\zeta_N^{\text{TB}}$ are again found in
Appendix~\ref{SECT:Appendix_MFP_FermiGoldenRule}.
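The tight-binding channel count $\bar{N}^\textrm{TB}$ entering this expression can be evaluated directly from the two band branches; the following sketch (illustrative, with energies in units of $t$ and the width in units of $a$) implements the formula above:

```python
import math

# Illustrative sketch of N_bar^TB for the tight-binding dispersion:
# lower half-band (0 < eps < 4t) and upper half-band (4t < eps < 8t).
def n_bar_tb(eps_over_t, w_over_a):
    if 0.0 < eps_over_t < 4.0:
        arg = 1.0 - eps_over_t / 2.0
    elif 4.0 < eps_over_t < 8.0:
        arg = eps_over_t / 2.0 - 3.0   # = 1 - (4 - eps/2t)
    else:
        return 0                        # outside the band: no open channels
    return math.floor((w_over_a / math.pi) * math.acos(arg))

print(n_bar_tb(2.0, 10.0))   # arccos(0) = pi/2 -> floor(10/2) = 5
```

Note the particle-hole-like symmetry of the count about the band center $\varepsilon=4t$, a lattice feature absent in the continuum formula.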
The topological phase boundaries, shown in
Figures~\ref{FIG:SWave_TB_NumericalAnalytical_MuvsB_Zoomed} and
\ref{FIG:SWave_TB_Analytical_Numerical_Combined} as the bold black lines, are
calculated by equating $\xi_\textrm{eff}^{-1}$ to the $\Lambda_n$ obtained from
Eqs.~(\ref{EQN:LyapunovExponent_from_MFP}) and
(\ref{EQN:inverseMFP_TB_EndResult}). We thus obtain the critical field $B^*$ at
which the system goes through a topological phase transition via the following
implicit equation:
\begin{align}\label{EQN:Topological_phase_boundaries_TB}
B^* &= \Delta\,\sqrt{\beta \, \Gamma_n^\textrm{TB}\left(\mu_\textrm{eff}(B^*)\right) +1}
\end{align}
where $\beta = (Wa^2t^2/\gamma l_\textrm{SO})^2$, $\mu_\textrm{eff}(B^*) = \mu \pm
\sqrt{(B^*)^2-\Delta^2}$ and
\begin{align*}
\Gamma_n^\textrm{TB}\left(\mu_\textrm{eff}\right) &= \left(\frac{\bar{N}^\textrm{TB}\left(\mu_\textrm{eff}\right)}{n}\right)^2\nonumber\\
&\qquad \times\left(\zeta_N^{\text{TB}}\left(\mu_\textrm{eff}\right)\right)^2 \left(\bar{N}^\textrm{TB}\left(\mu_\textrm{eff}\right)+1\right)^2.
\end{align*}
Equation~(\ref{EQN:Topological_phase_boundaries_TB}) constitutes the central
finding of our paper. It is an analytical expression that determines all
topological phase boundaries of a multichannel disordered wire.
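Because $B^*$ appears on both sides, the equation is naturally solved by fixed-point iteration. The sketch below is schematic: the smooth $\Gamma$ used here is a hypothetical stand-in, not the $\Gamma_n^\textrm{TB}$ of the appendix, and only the upper branch $\mu_\textrm{eff}=\mu+\epsilon$ with $\epsilon=\sqrt{B^2-\Delta^2}$ is iterated.

```python
import math

# Illustrative fixed-point iteration for B* = Delta*sqrt(beta*Gamma(mu_eff)+1).
# gamma_fn is a hypothetical placeholder for the channel factor Gamma_n^TB.
def solve_b_star(delta, beta, gamma_fn, mu, b0, tol=1e-10, max_iter=500):
    b = b0                                       # initial guess, must exceed delta
    for _ in range(max_iter):
        eps = math.sqrt(b * b - delta * delta)   # epsilon = sqrt(B^2 - Delta^2)
        b_new = delta * math.sqrt(beta * gamma_fn(mu + eps) + 1.0)
        if abs(b_new - b) < tol:
            return b_new
        b = b_new
    return b

gamma_fn = lambda mu_eff: 2.0 + 0.1 * mu_eff     # hypothetical smooth Gamma
b_star = solve_b_star(delta=0.2, beta=4.0, gamma_fn=gamma_fn, mu=1.0, b0=0.3)
print(b_star)   # converged critical field, in units of the hopping t
```

Since the right-hand side is bounded below by $\Delta$, the iterate always stays in the domain $B>\Delta$ where $\epsilon$ is real.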
An experimentally interesting point is the largest values of various system
parameters that allow a topological transition. Using
Equations~(\ref{EQN:Q_BDI}) and (\ref{EQN:Q_D}), we estimate the upper critical
field $B^*|_\gamma\,$, i.e. the minimum value of $B$ above which the system is
always in a topologically trivial state at a given disorder strength $\gamma$,
as
\begin{align}\label{EQN:Bmax}
B^*|_\gamma &\sim \Delta \, \frac{l_\textrm{tr}^\textrm{max}}{l_\textrm{SO}},
\end{align}
where $l_\textrm{tr}^\textrm{max} = \textrm{max}(\{\Lambda_n^{-1}\})$ is the
maximum localization length achievable in the system. For a fixed nonzero
disorder, $B^*|_{\gamma> 0}$ is infinite for a continuum system as the
localization length increases indefinitely with increasing Fermi energy. For a
TB system, the upper critical field $B^*|_{\gamma> 0}$ is finite because the
localization length is bounded in TB systems. For a clean wire,
$B^*|_{\gamma=0}$ is infinite for both the TB and the continuum models.
\subsection{Numerical simulations}\label{SUBSECT:Calculation_Topological_index_RSW}
\begin{figure}
\hspace*{-1cm}
\includegraphics[width = 0.98\columnwidth]{{fig_Two}.pdf}
\caption{(Color online) $\mu$ vs. $B$ vs. $Q_\textrm{D}$ for a five-channel
system (compare with
Figs.~\ref{FIG:Appendix:SWave_TB_NumericalAnalytical_MuvsB_FullBandwidth}
and \ref{FIG:Appendix:SWave_TB_Analytical_MuvsB_FourChannels}.) The
background red-white colors are obtained using a numerical
tight-binding simulation with $L=30000a$ and $W=5a$, while the black
lines, which represent the topological phase boundaries, are obtained
analytically using Eq.~(\ref{EQN:Q_D}). Here, $V_0=\sqrt{\gamma/
a^2}=0.2 t$, $\alpha_\textrm{SO}=0.02\hbar/ma$ ($l_\textrm{SO}=4.08\mu
m$) and $\Delta=0.164t$, where $t=\hbar^2/2ma^2$ and $a=0.01
l_\textrm{SO}$ is the tight-binding lattice spacing. The fragmented
nature of the topological phase diagram seen in (b) cannot be
explained in a \textit{p}-wave picture. See
Appendix~\ref{SECT:Appendix_TB} for a discussion of corresponding
experimental parameters.}
\label{FIG:SWave_TB_NumericalAnalytical_MuvsB_Zoomed}
\end{figure}
\begin{figure}
\hspace*{-1cm}
\includegraphics[width=1.0\columnwidth]{{fig_Three}.pdf}
\caption{(Color online)
$\mu$ vs. $V_0 = \sqrt{\gamma/a^2}$ vs. $Q_\textrm{D}$ for a multichannel
RSW wire. The black lines, which represent topological phase boundaries, are
obtained analytically using Eq.~(\ref{EQN:Q_D}). The background
red-white colors are obtained using tight-binding numerical simulations
with $L=60000a$. In both cases, $W=4a$,
$\alpha_\textrm{SO} =0.015\hbar/ma$, $\Delta = 0.20t$ and
$B=0.35t$, where $t=\hbar^2/2ma^2$ is the tight-binding hopping parameter and
$a$ is the TB lattice spacing. See Appendix~\ref{SECT:Appendix_TB} for
a discussion of corresponding experimental parameters.}
\label{FIG:SWave_TB_Analytical_Numerical_Combined}
\end{figure}
In this section, we obtain the topological index of a disordered multichannel
wire numerically and compare it with our analytical results from the previous
section. For our numerical simulations, we take the TB form of the Hamiltonian
in Eq.~(\ref{EQN:SWaveHamiltonian}) whose details can be found in the
Appendix~\ref{SECT:Appendix_TB}. We consider a wire of length $L\gg
l_\text{MFP}$, $\xi$, and $l_\textrm{SO}$, with metallic leads
($\alpha_\textrm{SO}=0$, $\Delta=0$ and $V(x,y)=0$ in the leads). We use the
results of Fulga \textit{et al.} to obtain the topological quantum number of
the disordered multichannel wire from the scattering matrices of the
wires.~\cite{REF:Fulga11} For a semi-infinite wire in the symmetry class D, the
topological charge is given by $Q_\textrm{D} = \text{det}(r)$ where $r$ is the
reflection matrix. For a quasiparticle insulator, this determinant can only
take the values $\pm 1$. However, for a finite system this determinant can in
general have any value in the $[-1,1]$ interval. We obtain the reflection
matrix of the TB system in our numerical TB simulations using the Kwant
library~\cite{REF:Kwant14} and then use this relation to calculate
$Q_\textrm{D}$. We plot the topological phase diagram in
Figures~\ref{FIG:SWave_TB_NumericalAnalytical_MuvsB_Zoomed} and
\ref{FIG:SWave_TB_Analytical_Numerical_Combined}, where the red and white
colors represent $Q_\textrm{D}=-1$ and $Q_\textrm{D}=+1$ respectively.
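The determinant criterion itself is easy to state in code. The toy sketch below uses a hand-written $2\times2$ reflection matrix in place of one extracted from a Kwant scattering calculation, purely to illustrate the sign rule:

```python
# Toy illustration of Q_D = det(r): for a quasiparticle insulator the
# reflection matrix is unitary and det(r) = +/-1; det(r) = -1 signals the
# topologically nontrivial phase. The 2x2 matrices here are stand-ins, not
# output of an actual scattering calculation.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

r_trivial = [[1.0, 0.0], [0.0, 1.0]]        # identity: det = +1
r_topological = [[0.0, 1.0], [1.0, 0.0]]    # channel swap: det = -1
print(det2(r_trivial), det2(r_topological))  # 1.0 -1.0
```

In the finite-length numerics, $\det(r)$ drifts continuously within $[-1,1]$, approaching $\pm1$ as $L$ grows beyond the localization length.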
Figure~\ref{FIG:SWave_TB_NumericalAnalytical_MuvsB_Zoomed} exemplifies our
central result given in Eq.~(\ref{EQN:Topological_phase_boundaries_TB}). We find
that for a nearly depleted wire
(Fig.~\ref{FIG:SWave_TB_NumericalAnalytical_MuvsB_Zoomed}a), the topological
phase merely shifts to the higher values of the chemical potential in agreement
with Ref.~\cite{REF:Adagideli14}. For higher chemical potentials/doping, we
observe a fragmented topological phase diagram
(Fig.~\ref{FIG:SWave_TB_NumericalAnalytical_MuvsB_Zoomed}b). We find good
agreement with our analytical results from
Eq.~(\ref{EQN:Topological_phase_boundaries_TB}). We note in passing that this
fragmentation cannot be explained by a simple \textit{p}-wave picture as these
topological phases arise despite the incomplete spin-polarization of the wire
under a low magnetic field. For a full phase diagram over the entire bandwidth,
but for slightly different material parameters, see
Figure~\ref{FIG:Appendix:SWave_TB_NumericalAnalytical_MuvsB_FullBandwidth},
where the reentrant phases are apparent.
In Fig.~\ref{FIG:SWave_TB_Analytical_Numerical_Combined}, we plot the
topological number $Q_\textrm{D}$ as a function of $\mu$ and the disorder
strength $\sqrt{\gamma/a^2}$ for a constant $B_\textrm{Zeeman}$ over the full
TB bandwidth. The reentrant nature of the topological phase diagram can also be
seen in this plot, for example, by following the $\mu=1.5$ line as $\gamma$ is
increased. As the disorder strength increases, successive topological
transitions occur, similar to the PW wire.~\cite{REF:Rieder13} However, unlike
the PW wire, the number of transitions is given by
$\bar{N}(\mu+\epsilon)+\bar{N}(\mu-\epsilon)$ rather than $\bar{N}(\mu)$, with
$\bar{N}(\mu)$ defined as $\bar{N}(\mu_\textrm{eff}) = \lfloor W
k_F(\mu_\textrm{eff})/\pi \rfloor$. For further discussion of the emergence of
the effective \textit{p}-wave picture at high magnetic fields, see
Appendix~\ref{SECT:Appendix_FullBWPlots}.
\section{Conclusion}
In summary, we investigate the effect of disorder in multichannel Rashba SOC
proximity-induced topological superconductor nanowires (RSW nanowires) at
experimentally relevant parameter ranges. We derive formulae that determine all
topological phase boundaries of a multichannel disordered RSW wire. We test
these formulae with numerical tight-binding simulations at experimentally
relevant parameter ranges and find good agreement without any fitting
parameters. We show that there are additional topological transitions for the
RSW wires, leading to a richer phase diagram with further fragmentation
beyond that of the \textit{p}-wave models.
\begin{acknowledgments}
This work was supported by funds of the Erdal {\.I}n{\"o}n{\"u} chair, by
T{\"U}B{\.I}TAK under grant No. 110T841, by the Foundation for Fundamental
Research on Matter (FOM) and by Microsoft Corporation Station Q. \.{I}A is a
member of the Science Academy---Bilim Akademisi---Turkey; BP, AT and \"{O}B thank
The Science Academy---Bilim Akademisi---Turkey for the use of their facilities
throughout this work.
\end{acknowledgments}
\section{ATLAS ITk upgrade}
To cope with the increased luminosity, data rate and radiation damage at the HL-LHC \cite{HL-LHC_PDR}, major upgrades of the \mbox{ATLAS} experiment \cite{ATLAS:Experiment} are foreseen \cite{ATLAS:Phase_II_LoI,ATLAS:Phase_II_Scoping}.
One upgrade will be the replacement of the current tracking detector.
The proposed new inner tracker (ITk) \cite{ATLAS:TDR-strips} is designed to operate under conditions featuring an increased track density, following from approximately 200 inelastic proton-proton collisions per beam crossing, and a high radiation dose estimated from the expected integrated luminosity of \SI{3000}{fb$^{-1}$} over ten years of operation.
The ITk will be an all-silicon tracker with five pixel and four strip layers in the central region, enclosed by end-caps with six strip disks and a number of pixel rings on each side \cite{ATLAS:TDR-strips}.
Quad chip modules are foreseen in the outer layers and the rings of the pixel region. A quad module consists of four read-out chips which are bump bonded to one silicon sensor.
\section{n-in-n quad modules}
The two quad module prototypes presented here consist of oxygenated n-doped
float zone silicon sensors which carry highly n-doped pixel implants with
moderated p-spray isolation and a p-doped backside.
The n-bulk thickness is \SI{285}{\textmum}.
This technology was selected since the current ATLAS pixel detector also consists of such planar n-in-n pixel sensors \cite{Pixel:Electronics_Sensor}.
The pixels are connected to a front-end chip with the help of a flip-chip
process employing tin-lead-bumps.
The used FE-I4 front-end chip \cite{GARCIASCIVERES2011S155} was developed for the Insertable B-Layer (IBL) \cite{ATLAS:IBL_TDR,ATLAS:IBL_TDR2} of the ATLAS experiment.
It is produced in a \SI{130}{nm} process, resulting in \SI{$250\times50$}{\textmum$^2$} pixel cells, arranged in 80 columns and 336 rows.
Four FE-I4 chips with a bulk thickness of \SI{700}{\textmum} are flip-chipped to one sensor.
For technological reasons, two chips should not touch each other, which causes gaps in the horizontal and vertical directions.
To cover the gap in the direction of the long pixel side between two read-out chips, two `long pixels' are extended to a length of \SI{450}{\textmum} in every row.
To cover the gap in the direction of the short pixel side between two read-out chips, in every column four `ganged pixels' are connected via a metal trace to four pixels without dedicated read-out channels, with three `inter-ganged pixels' in-between.
`Inter-ganged pixels' are standard \SI{$250\times50$}{\textmum$^2$} pixel cells with a dedicated read-out channel. Because they are enclosed by `ganged pixels', they are treated separately.
The layout of the sensor's central region is shown in \Fig{fig:inter-chip}, where the combination of these designs leads to special pixel cells like `long-ganged pixels'.
To reduce inactive sensor area, the first and the last pixel column is shifted completely beneath the guard rings. These pixel columns are referred to as `edge pixels'.
If all these pixels are taken into account, the sensor consists of $160 \times 680$ individual pixel cells with the external dimensions of \SI{$40.4 \times 17.0$}{mm$^2$}, resulting in a total sensitive area of \SI{13.736}{cm$^2$}.
Each quad module assembly is mounted on a PCB. Sensor and front-end pads were wire-bonded to allow calibration and read-out.
The PCB thickness is \SI{1.5}{mm}. It has four rectangular openings of \SI{$1.0\times0.8$}{cm$^2$} in the middle of each read-out chip and an additional opening in the center of the sensor.
After investigation in lab and testbeam measurements, one quad module was irradiated at CERN-PS IRRAD \cite{Ravotti:2014} in a first step up to a fluence of $5 \times 10^{14}$\,n$_\text{eq}$cm$^{-2}$.
\begin{figure}[htbp]
\centering
\includegraphics[width=.65\textwidth]{Quad_Zwischenchip.png}
\caption{\label{fig:inter-chip} Layout of the sensor's central region with `ganged', `inter-ganged', `long ganged' and `long inter-ganged' pixel cells. The two grey crosses and dots are alignment marks for the flip-chip process. The orange circles represent openings for the bump bond connections.}
\end{figure}
\section{IV measurements}
\label{sec:IV}
Current-voltage characteristics (IV curves) reveal important sensor properties and are also a main criterion for quality control.
Certain acceptance criteria must be fulfilled by the sensors in order to be considered for the ITk.
For unirradiated planar pixel sensors, the leakage current at \SI{20}{$^\circ$C} should be less than \SI{0.75}{\textmu\amperecm$^{-2}$} at the operating bias voltage.
The breakdown voltage should be at least \SI{20}{V} higher than the operating voltage.
After irradiation to $2 \times 10^{15}$\,n$_\text{eq}$cm$^{-2}$, for a sensor of \SI{150}{\textmum} thickness, the leakage current at \SI{-25}{$^\circ$C} should be less than \SI{20}{\textmu\amperecm$^{-2}$} at \SI{400}{V}, the breakdown voltage should be higher than \SI{400}{V}.
Before irradiation, the current was measured up to a maximum voltage of \SI{300}{V} in a climate chamber at \SI{20}{$^\circ$C}. The diode-like curve is shown in \Fig{fig:IV_unirrad}. At \SI{100}{V}, where the plateau starts, a current of \SI{2.1}{\textmuA} is measured. The slope is determined to be \SI{1.9}{nA\perV}. Thus, the acceptance criteria are fulfilled, the current is far below the \SI{10.3}{\textmuA} requested for a sensor with an area of \SI{13.7}{cm$^2$} and no breakdown occurs.
After irradiation, the maximum voltage was increased to \SI{1000}{V}. Measurements were performed at multiple temperatures in a climate chamber.
To ensure reproducibility, a second curve was recorded immediately after the first.
As shown in \Fig{fig:IV_irrad}, again no breakdown occurred. Since only a fluence of $5 \times 10^{14}$\,n$_\text{eq}$cm$^{-2}$ was investigated, no statement about the acceptance criteria can be made.
The given temperature corresponds to the set temperature of the climate chamber. An offset in sensor temperature caused by self-heating is expected but not taken into account.
The humidity inside the climate chamber was controlled and kept low by its circuit, but it was not monitored.
\begin{figure}[htbp]
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=1.0\textwidth]{IV_quad_unirrad.pdf}
\caption{\label{fig:IV_unirrad} IV curve of the quad module taken before irradiation in a climate chamber at \SI{20}{$^\circ$C}.}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=1.0\textwidth]{IV_quad_irrad1.pdf}
\caption{\label{fig:IV_irrad} IV curves of the quad module taken after irradiation in a climate chamber. Repeated measurements are displayed with a similar colour and a rotated triangle.}
\end{minipage}
\end{figure}
\section{Tuning of front-end chips}
\label{sec:tuning}
Each pixel read-out cell of the FE-I4 chip contains a discriminator with an adjustable threshold.
If a signal in the sensor exceeds this threshold, the time over threshold (ToT) is measured.
This ToT is therefore directly related to the induced charge in the sensor.
The threshold value and the ToT response at an injected reference charge can be adjusted by various DACs. This is necessary to guarantee homogeneous responses under different conditions because the FE-I4 electronics are susceptible to influences such as temperature and radiation. The procedure of adjusting the response of all pixels is called \emph{tuning}.
Before irradiation, measurements were performed using a tuning with a threshold of \SI{$(3182 \pm 65)$}{e} and a response of \SI{$(6.00 \pm 0.09)$}{ToT} units at a reference charge of \SI{20}{ke}.
The results for the tuning used for measurements after irradiation are a threshold of \SI{$(3190 \pm 70)$}{e} and a response of \SI{($6.1 \pm 0.3$)}{ToT} units at a reference charge of \SI{20}{ke}. These distributions are shown in \Fig{fig:tuning}.
\begin{figure}[htbp]
\centering
\includegraphics[width=.49\textwidth]{DO-Q02_27th_Threshold_Scan.png}
\includegraphics[width=.49\textwidth]{DO-Q02_27th_Tot_Verif.png}
\caption{\label{fig:tuning} Threshold and ToT distributions of an irradiated quad module after tuning.}
\end{figure}
\section{Source measurements}
Before irradiation, measurements were performed with a Sr-90 source. Emitted $\beta$-particles pass through the sensor, the read-out chip, the PCB or its openings and are finally detected in a scintillator, which sends a read-out trigger to the chip. This methodology reduces noise from scattered particles, but if a backscattered particle passes through the sensor in coincidence with a later triggered particle, both deposited charges are measured.
Starting with the raw data, \textit{fei4Analyzer}\footnote{\url{https://github.com/terzo/fei4Analyzer}} is used to match hits in adjacent pixels to a hit cluster.
A combined hit map of five different source positions is shown in \Fig{fig:hitmap}.
Most particles passing through the PCB are stopped or scattered before reaching the trigger scintillator, but the beam spot is clearly visible in the PCB openings.
Apart from the beam spots, pixels with an increased hit count correspond to pixels with an increased area, i.e.\ the `ganged' and `long' pixels in the border region between the front-end chips. This is a purely geometric effect.
\begin{figure}[tbp]
\centering
\includegraphics[width=.70\textwidth]{DO-B02_unirrad_HitMap_bunched.png}
\caption{\label{fig:hitmap} Combined hit map of five different source positions on the unirradiated quad module. PCB openings are clearly visible as rectangular shapes. `Ganged' and `long' pixels have an increased hit count due to their increased area.}
\end{figure}
These measurements were performed at a bias voltage of \SI{150}{V}.
The ToT information, which corresponds to the collected charge, was obtained from these data and histogrammed, sorted by cluster size.
The resulting distribution for clusters of size 1 was fitted using a Landau-Gauss-convolution provided by \textit{pyLandau}.\footnote{\url{https://pypi.python.org/pypi/pyLandau}} To represent non-suppressed background from backscattered particles, a Gaussian is added.
The fit for the unirradiated module is shown in \Fig{fig:langauss+gauss}. The fit result for the MPV of the Landau-Gauss-convolution is \SI{$(5.87 \pm 0.01)$}{ToT} units.
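A fit of this kind can be sketched as follows. The authors use the Landau-Gauss convolution from \textit{pyLandau}; to stay self-contained, the sketch below instead uses the Moyal approximation to the Landau shape (an assumption, not the exact \textit{pyLandau} implementation) plus a Gaussian background, fitted to purely synthetic ToT data.

```python
# Sketch of a Landau-like signal plus Gaussian background fit on a ToT
# spectrum. The Moyal function approximates the Landau shape; all data
# below are synthetic and illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def moyal(x, mpv, width):
    # Moyal approximation: peaks at x = mpv
    t = (x - mpv) / width
    return np.exp(-0.5 * (t + np.exp(-t)))

def model(x, mpv, width, a_sig, mu_bkg, sigma_bkg, a_bkg):
    signal = a_sig * moyal(x, mpv, width)
    background = a_bkg * np.exp(-0.5 * ((x - mu_bkg) / sigma_bkg) ** 2)
    return signal + background

# Synthetic ToT spectrum peaked near 6 ToT units (hypothetical values).
x = np.linspace(0, 14, 141)
rng = np.random.default_rng(0)
y = model(x, 5.9, 0.8, 1000.0, 9.0, 1.5, 60.0) + rng.normal(0.0, 5.0, x.size)

popt, pcov = curve_fit(model, x, y, p0=[5.5, 1.0, 900.0, 8.0, 2.0, 50.0])
mpv, mpv_err = popt[0], np.sqrt(pcov[0, 0])  # MPV and its fit uncertainty
print(f"MPV = {mpv:.2f} +/- {mpv_err:.2f} ToT units")
```

The MPV uncertainty is read off the fit covariance matrix, mirroring how the error bars in \Fig{fig:charge_quad_irrad} are obtained.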
After irradiation, the module's own radiation caused by activation was measured, using FE self-triggering, since the $\beta$-source setup was not available. This method is more susceptible to noise. A bias scan was performed. The data was evaluated using the method described above.
The result of MPV vs. bias voltage for the irradiated module is shown in \Fig{fig:charge_quad_irrad}. The error bars result from the fit covariance matrix. The average MPV is \SI{2.18}{ToT} units.
As described in \autoref{sec:tuning}, the same tuning was chosen for all measurements.
This comparability allows the determination of a reduction of the ToT signal to \SI{40}{\%} of the value before irradiation. This degradation can easily be compensated by adjusting the tuning, leading to a larger ToT response at a similar charge in the sensor.
Due to the power dissipation of four FE chips, the external temperature sensors did not provide reliable values, but the sensor temperature was estimated to be \SI{-23}{\textdegree C} by comparing the leakage current during operation with the results obtained in \autoref{sec:IV}.
\begin{figure}[hbt]
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=0.94\textwidth]{pytest.pdf}
\caption{\label{fig:langauss+gauss} Collected ToT signal of the non-irradiated quad module for clusters of size 1, fitted with a Langauss distribution and additional Gaussian background.}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=0.94\textwidth]{charge_quad_new.png}
\caption{\label{fig:charge_quad_irrad} Collected ToT signal vs. bias voltage of the irradiated quad module. The error bars result from the fit. The MPV obtained before irradiation is also shown.}
\end{minipage}
\end{figure}
\section{Testbeam measurements}
Before irradiation, measurements were performed with a pion beam of \SI{120}{GeV} at CERN-SPS beamline H6. High tracking resolution is provided by six \mbox{MIMOSA26} sensors of ACONITE, an EUDET-type telescope \cite{Jansen2016}.
An unirradiated FE-I4 planar pixel module was used as the reference plane.
A sketch of the setup is shown in \Fig{fig:testbeam_sketch}. A bias voltage of \SI{150}{V} was applied to the quad sensor during all measurements. Unless stated otherwise, its front-end chips were tuned to a threshold of \SI{3200}{e} and a response of \SI{6}{ToT} at a reference charge of \SI{20}{ke}.
\begin{figure}[htbp]
\centering
\includegraphics[width=.5\textwidth]{TB-Anordnung_Okt_16_Batch4.png}
\caption{\label{fig:testbeam_sketch} Sketch of the testbeam setup at CERN-SPS, showing the positions of the quad module and the reference plane relative to the Mimosa planes (blue).}
\end{figure}
Track reconstruction was performed with \textit{EUTelescope}.\footnote{\url{http://eutelescope.web.cern.ch/}}
For timing reasons, only telescope tracks matched to hits in the reference plane are counted in the number of tracks $n_\text{tracks}$. Each such track that is also matched to a hit in the quad is counted in the number of hits $n_\text{hits}$.
The pixel matching margin was set to \SI{125}{\textmum} (\SI{50}{\textmum}) in X (Y),
the cluster matching margin was set to \SI{400}{\textmum} (\SI{250}{\textmum}) in X (Y).
This analysis was performed with \textit{TBmon2}.\footnote{\url{https://bitbucket.org/TBmon2/tbmon2}}
The efficiency $\varepsilon$ is defined as
\begin{equation}
\varepsilon = \frac{n_\text{hits}}{n_\text{tracks}},
\end{equation}
and its statistical uncertainty $\sigma_\varepsilon$ is defined as
\begin{equation}
\sigma_\varepsilon = \sqrt{\frac{\varepsilon \cdot (1 - \varepsilon)}{n_\text{tracks}}}.
\end{equation}
The recorded data were divided into runs of \SI{16}{k} events each. The efficiency and its uncertainty were calculated for every run and every pixel geometry. A weighted mean was determined over all runs taken under the same conditions.
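The per-run efficiency, its binomial uncertainty, and the weighted mean over runs can be sketched as follows; the run counts are hypothetical, not the measured ones.

```python
# Binomial hit efficiency per run and inverse-variance weighted mean over
# runs, following the definitions above. Run counts are illustrative.
import numpy as np

def efficiency(n_hits, n_tracks):
    eps = n_hits / n_tracks
    sigma = np.sqrt(eps * (1.0 - eps) / n_tracks)
    return eps, sigma

# Per-run (n_hits, n_tracks) for runs taken under the same condition.
runs = [(15970, 15990), (15955, 15975), (15980, 15998)]
eps, sig = np.array([efficiency(h, t) for h, t in runs]).T

weights = 1.0 / sig**2
eps_mean = np.sum(weights * eps) / np.sum(weights)
sigma_mean = 1.0 / np.sqrt(np.sum(weights))
print(f"efficiency = {eps_mean:.4f} +/- {sigma_mean:.4f}")
```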
\Fig{fig:eff_DUT20} shows the combined efficiency map for three different measurement positions. No distinction can be observed between the `standard', `long', `ganged' or `inter-ganged' pixel designs.
\Fig{fig:eff_Geo2} shows the in-pixel efficiency map of the `standard' pixel design. The resulting efficiency is \SI{$(99.94 \pm 0.04)$}{\%}.
\Fig{fig:eff_Geo47} shows the in-pixel efficiency map of one `ganged' pixel design. The resulting efficiency is \SI{$(99.8 \pm 0.6)$}{\%}.
It is overlaid with the layout of the two pixels, which are \SI{300}{\textmum} apart and connected via a metal trace. The layout of the pixel in between is not drawn.
Comparable results with efficiencies well above \SI{99.5}{\%} were obtained for all other `ganged/inter-ganged pixels' as well as `long pixels'.
\begin{figure}[htbp]
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=01.0\textwidth]{efficiency_map_DUT20.png}
\caption{\label{fig:eff_DUT20} Efficiency map for the complete sensor for three positions.}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=01.0\textwidth]{GEOID_8_Eff_DUT20_Geo_2.png}
\caption{\label{fig:eff_Geo2} In-pixel efficiency map for a standard pixel.}
\end{minipage}
\end{figure}
\begin{figure}[htbp]
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=01.0\textwidth]{GEOID_9_Eff_DUT20_Geo_47_overlayed.png}
\caption{\label{fig:eff_Geo47} In-pixel efficiency map for a ganged pixel, overlaid with its layout.}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\centering
\includegraphics[width=01.0\textwidth]{GEOID_9_Eff_DUT20_Geo_3_overlayed.png}
\caption{\label{fig:eff_Geo3} In-pixel efficiency map for an edge pixel, overlaid with its layout and the guard-ring layout.}
\end{minipage}
\end{figure}
\Fig{fig:eff_Geo3} shows the in-pixel efficiency map of the `edge' pixel design. The resulting efficiency is \SI{$(76 \pm 6)$}{\%}, but it is revealed that the sensor is fully efficient up to \SI{150}{\textmum} beneath the guard rings.
By lowering the threshold to \SI{1600}{e}, an improvement could be reached. The resulting efficiency of the `edge' pixel for this threshold is \SI{$(84 \pm 6)$}{\%}, and the sensor is fully efficient up to \SI{200}{\textmum} beneath the guard rings.
This is consistent with results of IBL pre-\mbox{studies \cite{Wittig2013}}, where \SI{500}{\textmum} long pixels were partially shifted beneath the guard rings.
\section{Summary}
Fully working n-in-n quad modules were assembled. Promising results have been obtained for these modules in laboratory and testbeam measurements. MIP-like particles deposit a charge signal corresponding to \SI{20}{ke} at a threshold of \SI{3200}{e}. A tracking efficiency of more than \SI{99.9}{\%} has been measured for pixels with the standard design, which constitute \SI{95}{\%} of all read-out channels.
A successful analysis of pixel efficiency for `ganged', `inter-ganged' and `long' designs was also performed, revealing a tracking efficiency well above \SI{99.5}{\%} for these special read-out channels.
After a first irradiation step, a slight degradation is visible, as expected: the leakage current increases and the collected charge decreases, but no failures have been observed.
Testbeam results after the next irradiation step are expected in October, followed by extensive lab investigations.
\acknowledgments
The possibility to irradiate detector samples at the CERN Proton Synchrotron Radiation
Test Facility IRRAD \cite{Ravotti:2014} is kindly acknowledged, especially the help by F. Ravotti and G. Pezzullo.
Special thanks go to K. Wraight for the organization of this irradiation campaign, for the opportunity, and for the support to perform measurements with the irradiated quad module at the University of Glasgow.
The work presented here is carried out within the framework of Forschungsschwerpunkt FSP103 and supported by
the Bundesministerium f\"ur Bildung und Forschung BMBF under Grants 05H15PECAA and 05H15PECA9.
\section{Introduction}
\baselineskip 18pt
Recently, the D0 Collaboration
has announced the appearance of a like-sign
dimuon charge asymmetry in the semileptonic $b$-hadron decays measurement:
$A_{sl}^b = - 0.00957 \pm 0.00251 ({\rm stat}) \pm
0.00146 ({\rm syst})$ \cite{Abazov:2010hv}.
In the Standard Model (SM),
the prediction for the asymmetry is
$A_{sl}^b ({\rm SM}) = (-2.3^{+0.5}_{-0.6})\times 10^{-4}$,
and the D0 experimental result differs from this by 3.2 standard deviations.
In the absence of CP violation this quantity clearly vanishes;
hence the D0 result points to new physics (NP) that induces
CP violating flavor changing interactions beyond the SM.
The $B\bar B$ mesons created in $p\bar p$ collisions
include both $B_d (d\bar b)$ and $B_s (s\bar b)$.
The quantities of the $B_d$ system are well measured by $B$-factories,
and the unitarity triangle seems to be closed:
$V_{ud} V_{ub}^* + V_{cd} V_{cb}^* + V_{td} V_{tb}^* = 0$.
In that case, the asymmetry from the $B_d$-$\bar B_d$ oscillation
is expected to be tiny
and a new CP violating phase should show up
in the $B_s$ system instead, namely in the $b$-$s$ transition.
The D0 and CDF Collaborations have
reported the existence of a CP violating phase, $\phi_s$, in the $B_s$ system
from the $B_s \to J/\psi \phi$ decay \cite{Aaltonen:2007he,CDF}.
In fact, their results differ from the SM prediction in a direction which is consistent with the
signature of the like-sign dimuon asymmetry reported by the D0.
This provides an encouraging guide, pointing towards a source of
new CP violation in the $b$-$s$ transitions.
Supersymmetry (SUSY) is a very attractive candidate to build NP models.
As is well known, SUSY models provide a natural dark matter candidate, the neutralino, as the lightest
SUSY particle (LSP).
Moreover, the gauge hierarchy problem can be solved, and the theory remains natural
from the weak scale up to ultra-high energy scales.
In fact, the gauge coupling constants of the Standard Model gauge symmetries
can unify at a high scale using the renormalization group equations
(RGEs)
of the minimal SUSY standard
model (MSSM). This indicates the existence of a grand unified
theory (GUT) as the underlying principle for physics at the very high energy.
Well-motivated SUSY GUT models
have long been subjected to intense experimental and theoretical investigation.
Identifying the correct GUT model will be a major focus of upcoming experiments.
In SUSY models, the SUSY breaking mass terms for the squarks and sleptons
must be introduced, and they provide sources for
flavor changing neutral currents (FCNCs) and CP violation beyond the
Kobayashi-Maskawa theory.
In general, soft breaking terms generate excessively large FCNCs;
hence flavor universality is often assumed
in the squark and slepton mass matrices
to avoid large FCNCs
in the meson mixings and lepton flavor violations (LFV) \cite{Gabbiani:1988rb}.
The flavor universality is expected to be realized by the Planck scale physics.
However,
even if the universality is realized at a high energy scale such as the GUT scale
or the Planck scale,
non-universality in the SUSY breaking sfermion masses is still generated
through the evolutions of the RGEs,
and this can lead to some small flavor violating transitions
which could possibly be observed in the ongoing experiments.
In some MSSM models
with right-handed neutrinos,
the induced FCNCs from the RGE effects are not large in the quark sector,
while sizable effects can be generated in the lepton sector due to
the large neutrino mixing angles \cite{Borzumati:1986qx}.
Within GUTs, however, loop effects due to the large neutrino mixings
can induce sizable FCNCs also in the quark sector
since the GUT scale particles which connect the quark and lepton sectors can propagate in the loops~\cite{Barbieri:1994pv}.
As a result, the patterns of the induced FCNCs
highly depend on the unification scenario
and the heavy particle contents.
Therefore, it is important to investigate
FCNC effects to obtain a footprint of the GUT physics.
If the quark-lepton unification is manifested
in a GUT model,
the flavor violation in the $b$-$s$ transition
can be responsible for the large atmospheric neutrino mixing \cite{Moroi:2000tk},
and
thus, the amount of flavor violation in the $b$-$s$ transition
(the second and the third generation mixing),
which is related to the $B_s$-$\bar B_s$ mixing and its phase,
has to be related to the $\tau \to \mu\gamma$ decay
\cite{Dutta:2006gq,Parry:2005fp,Dutta:2008xg,Hisano:2008df,Dutta:2009iy,Parry:2010ce}
for a given particle spectrum.
The branching ratio of the $\tau \to \mu\gamma$ decay
is being measured at the $B$-factory,
and thus, the future results on LFV and from the ongoing measurement of the
$B_s$-$\bar B_s$ mixing phase
will provide important information to probe
the GUT scale physics.
In Refs.\cite{Dutta:2006gq,Dutta:2008xg,Dutta:2009iy},
the authors studied the correlation between
the branching ratio of $\tau\to\mu\gamma$ and
the phase of the $b$-$s$ transition
in the frameworks of SU(5) and SO(10) GUT models.
While performing the analysis, it is important to pay attention to
the dependence on $\tan\beta$,
which is the ratio of the vacuum expectation values of
up- and down-type Higgs bosons.
In the case of $\tan\beta \alt 20$,
the gluino box diagram can dominate the SUSY contribution
to the $B_s$-$\bar B_s$ mixing amplitude,
while for large $\tan\beta \agt 30$
it can be dominated by the double
penguin diagram contribution \cite{Hamzaoui:1998nu,Buras:2001mb,Foster:2004vp}.
When the Dirac neutrino Yukawa coupling is the
origin of the FCNC (we call this case the minimal type of SU(5)),
the $\tau\to\mu\gamma$ constraint gives
a strong bound on the phase of the $B_s$-$\bar B_s$ mixing
for smaller $\tan\beta$.
On the other hand,
when the Majorana neutrino Yukawa coupling
is the source for the FCNC, both left- and right-handed
squark mass matrices can have off-diagonal elements
(we call this case the minimal type of SO(10)),
the gluino box contribution is enhanced and larger
$B_s$-$\bar B_s$ phase is possible compared to the SU(5) case.
The double penguin contribution is proportional to
$\tan^4\beta$,
while the Br($\tau\to\mu\gamma$) is proportional to
$\tan^2\beta$.
As a result, for both SU(5) and SO(10) cases,
a large phase of $b$-$s$ transition is allowed.
In that case, however, the Br($B_s\to\mu\mu$) constraint
is more important
since it is proportional to
$\tan^6\beta$ \cite{Choudhury:1998ze}.
In other words, when the phase of the $b$-$s$ transition
is large due to the double penguin contribution,
Br($B_s\to\mu\mu$) has to be also enhanced.
In fact, in \cite{Dutta:2009iy}
it was shown that a lower bound from Br($B_s\to\mu\mu$)
is obtained in the case of SU(5) GUT.
In Ref.\cite{Dutta:2009hj},
we also investigated the implications for dark matter detection
of a large $B_s$-$\bar B_s$ mixing.
Assuming that the neutralino LSP is the dark matter candidate,
the SUSY parameters are restricted by the relic density constraint.
It was shown that the funnel region, in which the relic density constraint is
satisfied through annihilation near the heavy Higgs pole, is favored by the flavor
solution. Moreover, there is a correlation between flavor changing processes
and the neutralino direct detection cross section through their common dependence on the
CP-odd Higgs mass, $m_A$.
In this paper,
we will sort out the GUT models,
where the FCNC is due to the atmospheric neutrino mixing,
to obtain a large CP asymmetry of the $B$ decays.
This investigation is important if the reported size of the like-sign
dimuon charge asymmetry persists in the future
with a smaller error.
We show that it prefers the SO(10) type boundary condition where
Majorana neutrino couplings induce the FCNC
and
both left- and right-handed squark mass matrices
have off-diagonal elements.
In particular, for the SU(5) boundary condition,
where the Dirac neutrino Yukawa coupling
induces the FCNC, producing a large CP asymmetry
requires a large value of $\tan\beta$
and restricts the SUSY mass spectrum.
We will also study the implications for direct and indirect dark matter detection
of the constraints on the SUSY mass spectrum.
This paper is organized as follows: in section 2 we discuss
the recent results of CP violation in $B_s$ decays; in section 3,
we discuss the sources of flavor changing neutral currents (FCNC)
in the context of SUSY GUTs; in section 4, we show constraints
arising from the experimental constraints on different FCNC processes;
in section 5, we discuss the constraints from the dark matter content
of the universe and predictions related to the direct and indirect detection
experiments; and we conclude in section 6.
\section{CP violation in $B_s$ decays}
The dimuon like-sign asymmetry $A_{\rm sl}^b$ by D0
deviates from the SM prediction by 3.2 $\sigma$.
The $B\bar B$ samples created at the Tevatron
include both $B_d$ and $B_s$,
and the asymmetry can be written as
$A_{\rm sl}^b = (0.506\pm 0.043)a_{\rm sl}^d +
(0.494\pm 0.043) a_{\rm sl}^s$ \cite{Abazov:2010hv}.
The pieces of $a_{\rm sl}^{q}$ $(q=d,s)$ can be defined as
$a_{\rm sl}^{q} = (r_q - \bar r_q)/(r_q+\bar r_q)$
where $r_q$ and $\bar r_q$ are the
ratios of $B$-$\bar B$ mixings:
$r_q= P(\bar B\to B)/P(\bar B\to \bar B)$
and
$\bar r_q= P(B\to \bar B)/P(B\to B)$.
Since the $B_d$ system is consistent with experiments,
we assume that the CP asymmetry in the $B_d$-$\bar B_d$ mixing
is negligible.
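Under this assumption, the admixture formula above can be inverted directly; the small sketch below reproduces the D0-only value quoted later in this section.

```python
# With the B_d asymmetry set to zero as assumed above, the D0 dimuon result
# alone translates into a_sl^s via the quoted admixture coefficients.
A_sl_b = -0.00957                 # D0 like-sign dimuon asymmetry (central value)
f_d, f_s = 0.506, 0.494           # admixture coefficients from D0
a_sl_d = 0.0                      # assumption: no CP violation in B_d mixing

a_sl_s = (A_sl_b - f_d * a_sl_d) / f_s
print(f"a_sl^s = {a_sl_s * 1e3:.1f} x 10^-3")   # about -19.4 x 10^-3
```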
When we take into account
the experimental data on the dimuon asymmetry by CDF (1.6 fb$^{-1}$) and on the ``wrong-charge'' asymmetry in the semileptonic $B_s$ decay by D0,
the asymmetry in $B_s$-$\bar B_s$ mixing is extracted as follows \cite{Dobrescu:2010rh}
\begin{equation}
a_{\rm sl}^s = (-12.7 \pm 5.0) \times 10^{-3},
\end{equation}
which deviates from the SM prediction by about 2.5 $\sigma$.
When $\Gamma_{12}^s \ll M_{12}^s$ ($M_{12}^s$ is the mixing amplitude
and $\Gamma_{12}^s$ is the absorptive part of the mixing),
$a_{\rm sl}^s$ is given as \cite{Hagelin:1981zk}
\begin{equation}
a_{\rm sl}^s = {\rm Im}\, \frac{\Gamma_{12}^s}{M_{12}^s}
= \left|\frac{\Gamma_{12}^s}{M_{12}^s}\right| \sin\phi_s,
\end{equation}
where $\phi_s$ is the argument of $\Gamma_{12}^s/M_{12}^s$.
In many NP models, the
$\Delta B =2$ ($B$ is beauty) Hamiltonian
can be easily modified
(e.g.
see for recent works motivated by the D0 results \cite{Dobrescu:2010rh,recent}).
We parameterize the $M_{12}^s$ as
\begin{equation}
M_{12}^s = M_{12,\rm SM}^s + M_{12,\rm NP}^s
=C_s \, M_{12,\rm SM}^s \, e^{2i \phi_{B_s}},
\end{equation}
where $C_s$ is a real positive number. From
the measurement of the mass difference, $\Delta M_s = 2|M_{12}^s|$,
the experimental result is consistent with $C_s =1$.
When the $\Delta B =1$
Hamiltonian is the same as in the SM
(allowing modification at the 10\% level),
even in the presence of new physics,
the phase of $\Gamma_{12}^s$ is almost the same as in the SM,
which is tiny $\sim 0.04$
(in usual phase convention where $V_{cb} V_{cs}^*$ is almost real).
In that case, $\phi_s$ is the same as the phase
($-2\beta_s = -2(\beta_s^{\rm SM}+ \phi_{B_s})$) measured by
$B_s\to J/\psi \phi$ decay observation.
Using the $B_s\to J/\psi \phi$ decay, the decay width difference
$\Delta \Gamma_s = 2 |\Gamma_{12}^s| \cos \phi_s$ is also measured.
The parameters of $B_s$-$\bar B$ oscillations,
$\beta_s$ and $\Delta \Gamma_s$, have been measured
at the Tevatron \cite{Aaltonen:2007he},
and the CDF Collaboration showed their recent analysis
till 5.2 fb$^{-1}$ of data \cite{CDF}.
It appears that the data statistics differ considerably between the two periods ($0-2.8$ fb$^{-1}$ and $2.8-5.2$ fb$^{-1}$).
We will adopt their analysis for $0-5.2$ fb$^{-1}$.
The CDF result on the phase $2\beta_s$ differs from the SM prediction $2\beta_s^{\rm SM} \sim 0.04$
at 1 $\sigma$ level (D0 shows about 2 $\sigma$ deviation for
the same measurement for 2.8 fb$^{-1}$ data~\cite{Aaltonen:2007he}),
and the signature of the phase is consistent with the sign required to explain
the anomalous like-sign dimuon charge asymmetry by D0.
The SM prediction on $\Gamma_{12}^s/M_{12}^s$ is given by
Lenz-Nierste \cite{Lenz:2006hd}
\begin{equation}
\left|\frac{\Gamma_{12}^s}{M_{12}^s}\right|_{\rm SM} =
(4.97 \pm 0.94) \times 10^{-3}.
\label{Lenz-Nierste}
\end{equation}
It was pointed out that the theoretical prediction
for the absolute value of $a_{\rm sl}^s$ is bounded from above,
and that the bound is slightly too small
to explain the dimuon asymmetry observed by D0 if $\Gamma_{12}^s$ is not modified
\cite{Dobrescu:2010rh,Hou:2007ps}.
This is because the experimental measurement of
$\Delta M_s = 2|M_{12}^s|$ is consistent with the SM prediction (which means
$C_s \simeq 1$).
Using the simple relation:
\begin{equation}
\left(\frac{\Delta \Gamma_s}{\Delta M_s}\right)^2
+ (a_{\rm sl}^s)^2
= \left|\frac{\Gamma_{12}^s}{M_{12}^s} \right|^2
= \frac{1}{C_s^2} \left|\frac{\Gamma_{12}^s}{M_{12}^s} \right|^2_{\rm SM},
\label{parameter}
\end{equation}
we illustrate the current situation in Figure 1.
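As a numerical sanity check of Eq.(\ref{parameter}), one can verify that for $C_s = 1$ the point $(\Delta\Gamma_s/\Delta M_s,\, a_{\rm sl}^s)$ lies on a circle of radius $|\Gamma_{12}^s/M_{12}^s|_{\rm SM}$; the phase value below is illustrative.

```python
# Check of Eq. (parameter): for C_s = 1 the observables lie on a circle of
# radius |Gamma_12^s/M_12^s|_SM, with tan(phi_s) = a_sl^s/(DGamma_s/DM_s).
import numpy as np

r_sm = 4.97e-3                 # |Gamma_12^s / M_12^s|_SM (Lenz-Nierste)
C_s = 1.0
phi_s = np.radians(-60.0)      # an O(1) phase, chosen for illustration

a_sl_s = (r_sm / C_s) * np.sin(phi_s)
dGamma_over_dM = (r_sm / C_s) * np.cos(phi_s)

lhs = dGamma_over_dM**2 + a_sl_s**2
rhs = (r_sm / C_s)**2
print(np.isclose(lhs, rhs))                                    # relation holds
print(np.isclose(np.tan(phi_s), a_sl_s / dGamma_over_dM))      # phase relation
```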
The solid circles correspond to the 1 $\sigma$ boundaries
in the case of $C_s = 1$ by using the
SM prediction by Lenz-Nierste.
The colored (blue and red) solid curves correspond to the
measurement from the $B_s \to J/\psi \phi$ decay by CDF ($0-5.2$ fb$^{-1}$).
We assume that $\phi_s = -2\phi_{B_s}$, which means that
${\rm arg}\,\Gamma_{12}^s = {\rm arg}\,\Gamma_{12,\rm SM}^s$.
The horizontal colored (yellow) band is the 1 $\sigma$ region of $a_{\rm sl}^s$.
As one can see, the 1 $\sigma$ regions do not match well,
independent of the choice of $\phi_s$,
where $\tan\phi_s = a_{\rm sl}^s/(\Delta \Gamma_s/\Delta M_s)$.
Since the combined analysis currently has a large error on $a_{\rm sl}^s$,
the discrepancy is not very serious if the phase $\phi_s$ is $O(1)$ rad.
However, the dimuon asymmetry measured by D0 alone has a large central value
(which corresponds to $a_{\rm sl}^s = (-19.4\pm 6.1)\times 10^{-3}$),
and if the measurement of the dimuon asymmetry
(or of the ``wrong-charge'' muon asymmetry in the semileptonic $B_s$ decays)
becomes more accurate in the future while keeping this large central value of $a_{\rm sl}^s$,
this issue will have to be resolved.
To implement such possible future constraints,
one can consider the following three typical remedies.
\begin{figure}[tbp]
\center
\includegraphics[viewport = 10 10 230 220,width=8cm]{gamma12.eps}
\caption{
The experimental and theoretical regions in
the $a_{\rm sl}^s$-$\Delta \Gamma_s/\Delta M_s$ plane.
The yellow region is the 1 $\sigma$ region of
the combined data from the semi-leptonic $B$ decays.
The red and blue lines are 95\% and 68\% boundaries
from the $B_s\to J/\psi \phi$ decay, assuming that
the phase of $\Gamma_{12}^s$ is the same as the phase of
$\Gamma_{12,\rm SM}^s$.
The solid circles are the theoretical 1 $\sigma$ boundaries
using the numerical calculation in Ref.\cite{Lenz:2006hd}.
The dotted circle corresponds to the conservative theoretical region
when one implements the remedies 2 and 3 in the text.
}
\end{figure}
\begin{enumerate}
\item
Add a $\Delta B =1$ effective
Hamiltonian to modify $\Gamma_{12}^s$ or $\Gamma_{12}^d$
\cite{Dighe:2010nj,Bauer:2010dg,Deshpande:2010hy,Bai:2010kf}.
This is a direct resolution of this issue.
If future measurements show that the phase of $B_s \to J/\psi \phi$ is
really diminished, this type of resolution will be needed.
(In fact, the recent CDF data for the period $2.8-5.2$ fb$^{-1}$
may indicate that the $B_s\to J/\psi \phi$ phase is almost zero.)
\item
Non-perturbative effects \cite{Hou:2007ps,Deshpande:2010hy}.
In this case, the numerical value in Eq.(\ref{Lenz-Nierste})
is obtained from a two-parameter
expansion in $\Lambda_{\rm QCD}/m_b$ and $\alpha_s (m_b)$,
using the operator product expansion
and the heavy quark expansion.
In fact, it is known that non-perturbative effects may dominate
in $D^0$-$\bar D^0$ meson mixing,
and there may likewise be a large long-distance contribution
to $\Gamma_{12}^s$ (e.g. the intermediate states include $D_s^+ D_s^-$,
which may lead to large non-perturbative effects).
In the case of $B_s$-$\bar B_s$ mixing, each next-to-leading-order term
is about 30\% of the leading order,
and the expansion may be more reliable
than in the case of $D$-$\bar D$ mixing.
However, a careful treatment is needed since the series may not converge.
Actually, the term with the largest uncertainty in the next-to-leading-order
calculation gives
a negative contribution to $\Gamma_{12}^s$,
and the true numerical value may be larger than in Eq.(\ref{Lenz-Nierste}).
In that sense, the discrepancy is not so serious
unless the $B_s \to J/\psi\phi$ phase turns out to be tiny with a small error
in the future.
\item
Use the uncertainty of the Bag parameter $B_{B_s}$
and the decay constants $f_{B_s}$.
The mixing amplitudes are proportional to $B_{B_s} f_{B_s}^2$,
which has an uncertainty of about 40\%.
This factor is canceled in the ratio of $\Gamma_{12}^s/M_{12}^s$,
and the SM prediction in Eq.(\ref{Lenz-Nierste}) does not have the ambiguity.
The parameter $C_s$ in Eq.(\ref{parameter}) may thus have a 40\% uncertainty.
However, since the ratio of the hadronic quantities for $B_d$ and $B_s$,
related to the SU(3) flavor violation,
is more accurate \cite{Aubin:2009yh}
\begin{equation}
\xi = \frac{f_{B_s}\sqrt{B_{B_s}}}{f_{B_d}\sqrt{B_{B_d}}}= 1.23\pm 0.04,
\end{equation}
the mass difference of $B_d$ restricts the uncertainty of $C_s$
to less than 10\%
because of the relation
\begin{equation}
\left|\frac{M_{12}^s}{M_{12}^d}\right|_{\rm SM} =
\frac{M_{B_s}}{M_{B_d}} \left|\frac{V_{ts}}{V_{td}}\right|^2 \xi^2.
\end{equation}
Therefore,
if one uses the full uncertainty of
$B_{B_s} f_{B_s}^2$,
one also has to modify $|M_{12}^d|$ at the same rate as $|M_{12}^s|$.
In general, it is possible to do that, but one should be careful about the
argument of $M_{12}^d$ since the $\sin2\beta$ measurement
is consistent with the SM.
In SUSY models, the
argument of $M_{12}^d$ is related to the 13 mixing/23 mixing of the
squark mass matrices, and in models where the FCNC is induced by
the Dirac/Majorana neutrino Yukawa couplings, it is related to the
size of the 13 neutrino mixing.
\end{enumerate}
In this paper, we consider the case where
$\Gamma_{12}^s \simeq \Gamma_{12,\rm SM}^s$
and the phase $\phi_s$ comes from the $M_{12}^s$ phase
$\phi_s = -2\phi_{B_s}$, in SUSY models
with $R$-parity conservation.
The dotted circle in Figure 1 corresponds
to the region obtained when we use the 2 $\sigma$ error of
Eq.(\ref{Lenz-Nierste}), the 40\% error from $B_{B_s} f_{B_s}^2$,
and $\Gamma_{12}^s = \Gamma_{12,\rm SM}^s$.
The absolute value of the phase $\phi_s$ should be large,
$\sim 50^\circ$--$70^\circ$,
to explain the large CP asymmetry $a_{\rm sl}^s$.
By definition in Eq.(\ref{parameter}), we obtain
\begin{equation}
\sin^2\phi_{B_s} =
\frac{\left(\frac{A^{\rm NP}_s}{A^{\rm SM}_s}\right)^2-(1-C_s)^2}{4C_s},
\label{relation-phiBs}
\end{equation}
where $A_s^{\rm NP} = | M_{12,\rm NP}^s |$ and
$A_s^{\rm SM} = | M_{12,\rm SM}^s |$.
When $C_s \simeq 1$, we obtain $2\sin\phi_{B_s} \simeq A^{\rm NP}_s/A^{\rm SM}_s$.
Therefore, the NP contribution of $M_{12}^s$
should be comparable to the SM contribution
to obtain the large phase $\phi_s \simeq -2\phi_{B_s}$.
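Eq.(\ref{relation-phiBs}) can be illustrated numerically; the ratio $A_s^{\rm NP}/A_s^{\rm SM}$ below is a chosen example, not a fitted value.

```python
# Sketch of Eq. (relation-phiBs): for C_s ~ 1 the NP amplitude must be
# comparable to the SM one to produce a large phase, since
# 2 sin(phi_Bs) ~ A_NP/A_SM.
import numpy as np

def sin2_phi_Bs(r_np, C_s):
    """sin^2(phi_Bs) as a function of r_np = A_NP/A_SM and C_s."""
    return (r_np**2 - (1.0 - C_s)**2) / (4.0 * C_s)

r_np = 1.0    # example: NP amplitude equal to the SM one
phi_Bs = np.degrees(np.arcsin(np.sqrt(sin2_phi_Bs(r_np, C_s=1.0))))
print(f"phi_Bs = {phi_Bs:.0f} deg")   # 30 deg, i.e. phi_s = -2 phi_Bs = -60 deg
```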
In SUSY models
(for an early study of the dimuon asymmetry
in SUSY models, see \cite{Randall:1998te}), we need to realize
$A_s^{\rm NP} \sim A_s^{\rm SM}$
in order to explain the current combined data of CP asymmetry in $B$ decay.
As already mentioned, in GUT models
the Dirac/Majorana neutrino Yukawa couplings
can be a source of FCNC even in the quark sector.
When quark-lepton unification is manifest,
the amount of $A_s^{\rm NP}$ is related to the
lepton flavor violation $\tau \to \mu\gamma$,
and will be bounded by the constraint on
Br($\tau \to \mu\gamma$).
The main purpose of this paper is
to study how to obtain a large $A_s^{\rm NP}$
when there is quark-lepton unification and after satisfying all the other experimental constraints.
\section{FCNC sources in SUSY GUTs}
In SUSY GUT theories,
it is often assumed that
the SUSY breaking sfermion masses
are flavor-universal,
but the off-diagonal elements of the mass matrices
are generated by the loop effects.
The FCNC sources are the Dirac/Majorana neutrino Yukawa couplings,
which are responsible for the large neutrino mixings
\cite{Borzumati:1986qx,Barbieri:1994pv}.
Since the left-handed leptons $(L)$ and
the right-handed down-type quarks $(D^c)$
are unified in $\bar{\bf 5}$,
the Dirac neutrino Yukawa couplings can be written as
$Y_\nu{}_{ij} \bar{\bf 5}_i N^c_j H_{\bf 5}$,
where $N^c$ is the right-handed neutrino.
The flavor non-universality of the SUSY breaking $\tilde D^c$
masses is generated
by the colored Higgs and the $N^c$ loop diagram \cite{Moroi:2000tk},
and the non-universal part of the mass matrix is
$\delta M_{\tilde D^c}^2 \simeq
- \frac1{8\pi^2} (3m_0^2+A_0^2) Y_\nu Y_\nu^\dagger \ln(M_*/M_{H_C})$,
where $M_*$ is a cut-off scale (e.g. the Planck scale), $M_{H_C}$ is
a colored Higgs mass,
$m_0$ is the universal scalar mass and
$A_0$ is the universal scalar trilinear coupling.
The left-handed Majorana neutrino coupling $LL\Delta_L$
($\Delta_L$ is an SU(2)$_L$ triplet)
can also provide contributions to the light neutrino mass
(type II seesaw \cite{Schechter:1980gr}),
and can generate the FCNC in the sfermion masses
when the fermions are unified.
As a convention in this paper,
we will refer to the model with the FCNC source arising from
the Dirac neutrino Yukawa coupling as the minimal type of SU(5).
In this case, the off-diagonal elements of $\bf 10$
($Q,U^c,E^c$) representations are small because they originate from
the CKM mixings.
In a competing model, which we call the minimal type of SO(10),
the Majorana couplings, which contribute to the neutrino mass,
generate the off-diagonal elements for all sfermion species
since the Majorana couplings $f_{ij} L_iL_j\Delta_L$ can be unified to the
$f_{ij}{\bf 16}_i\ {\bf 16}_j\ \overline{\bf 126}$ coupling \cite{Babu:1992ia}.
We note that
when the FCNC source is not specified, as it is here for the Dirac Yukawa coupling,
the off-diagonal elements (of the $\bf 10$ multiplets in SU(5)) can in general be large.
In our convention of the SU(5) and SO(10) models,
the source of the off-diagonal elements is specified,
and only the right-handed down-type squark
mass matrix has sizable off-diagonal elements in SU(5),
while both left- and right-handed squark mass matrices can have
sizable off-diagonal elements in SO(10).
The non-universal part generated
from the Dirac/Majorana couplings, $Y_\nu$ and $f$,
is proportional to $Y_\nu Y_\nu^\dagger$ and $f f^\dagger$.
Therefore, in general,
the squark and slepton mass matrices including the loop correction can be
parameterized as
\begin{equation}
M_{\tilde F}^2 = m_0^2 [{\bf 1} - \kappa_F U_F {\rm diag} (k_1,k_2,1) U_F^\dagger],
\end{equation}
where $F= Q,U^c,D^c,L,E^c$.
The unitary matrix $U_F$
is equal to the neutrino mixing matrix in a certain limit \cite{Dutta:2006zt}.
The quantity $\kappa_F$ denotes the amount of the off-diagonal elements
and it depends on the sfermion species.
In the minimal type of SU(5),
since the off-diagonal elements of the SUSY breaking mass matrix
for the left-handed lepton doublet get the correction,
$\delta M_{\tilde L}^2 \simeq -1/(8\pi^2) (3m_0^2+A_0^2) \sum_k
(Y_\nu)_{ik} (Y_\nu)_{jk} \ln (M_*/M_k)$ where $M_*$ is the scale
at which flavor universality is realized,
$\kappa_L$ can be written as $\kappa_L \sim (Y_\nu^{\rm diag})_{33}^2
(3+A_0^2/m_0^2)/(8\pi^2)\ln (M_*/M_R)$, where $M_R$ is a Majorana mass
of the right-handed neutrino.
If the GUT threshold effects are neglected, we have
$\kappa_{\bar {\bf5}} = \kappa_L = \kappa_{D^c}$,
and $U_{\bar{\bf 5}} = U_L = U_{D^c}$.
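To get a feel for the size of $\kappa_L$, one can insert illustrative numbers into the expression above. The values of $(Y_\nu^{\rm diag})_{33}$, $M_*$ and $M_R$ in the sketch below are assumptions made for this estimate only:

```python
import math

Ynu33 = 0.6          # (Y_nu^diag)_{33}, assumed value
A0_over_m0 = 0.0     # universal trilinear over scalar mass (assumed)
Mstar = 2.4e18       # GeV, universality (reduced Planck) scale (assumed)
MR = 1.0e14          # GeV, right-handed Majorana mass (assumed)

kappa_L = (Ynu33 ** 2 * (3.0 + A0_over_m0 ** 2) / (8.0 * math.pi ** 2)
           * math.log(Mstar / MR))
print(f"kappa_L ~ {kappa_L:.2f}")  # O(0.1) for these inputs
```

An $O(0.1)$ non-universality of this size is the typical magnitude of $\kappa$ discussed in the following sections.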
In general, the fermion mass matrices arise from the sum of the Yukawa terms and
the equality of $U_{D^c}$ and $U_L$ can be completely broken
when there are cancellations between the minimal Yukawa term and
additional Yukawa terms.
Here, we consider a model
where the (near) equality between $U_{D^c}$ and $U_L$ (especially for
23 mixing angle of them) is maintained
as a ``minimal type" assumption.
The assumption is natural if there is a dominant Yukawa contribution
and corrections to fit realistic masses and mixings are small.
The unitary matrices for
$Q$, $U^c$, $E^c$ species are related to the CKM matrix,
and can generate only negligible effects in the following discussion.
Therefore, the SU(5) boundary condition we assume is as follows:
\begin{equation}
{\rm SU(5)} : \quad \kappa_L = \kappa_{D^c}, \quad U_L = U_{D^c}, \quad \kappa_Q = \kappa_{U^c}= \kappa_{E^c}=0.
\label{SU(5)}
\end{equation}
This boundary condition will be used for discussions in the following sections.
In the minimal type of SO(10) model, all $U_F$ can have large mixings
responsible for the neutrino mixings.
If the threshold effects are neglected, one finds
$\kappa_{\bf 16} \simeq 15/4 (f_{33}^{\rm diag})^2 (3+A_0^2/m_0^2)/(8\pi^2)
\ln (M_*/M_U)$, where $M_U$ is the SO(10) unification scale.
In general, however, the equality of all $\kappa_F$ in the SO(10) boundary condition
is broken by threshold effects.
A detailed physical interpretation of this parameterization is given in
\cite{Dutta:2009iy,Dutta:2006zt}.
The SO(10) boundary condition
we assume is as follows:
\begin{equation}
{\rm SO(10)}: \quad \kappa_Q = \kappa_{U^c}= \kappa_{D^c}=\kappa_{L} = \kappa_{E^c},
\quad U_Q = U_{U^c}=U_{D^c}=U_{L}=U_{E^c}.
\label{SO(10)}
\end{equation}
When the Dirac neutrino Yukawa coupling $Y_\nu$ or the
Majorana coupling $f$ is hierarchical,
we obtain $k_1,k_2 \ll 1$ and
then the 23 element of the sfermion mass matrix
is $-1/2 m_0^2 \kappa \sin 2\theta_{23} e^{i \alpha}$.
The magnitude of the FCNC between 2nd and 3rd generations
is controlled by $\kappa \sin2\theta_{23}$,
where $\theta_{23}$ is the mixing angle in the unitary matrix.
The phase parameter $\alpha$ also originates from the unitary matrix,
and it will be the origin of a phase of the FCNC contribution.
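The quoted 23 element follows directly from the parameterization $M_{\tilde F}^2 = m_0^2[{\bf 1} - \kappa_F U_F {\rm diag}(k_1,k_2,1) U_F^\dagger]$ in the hierarchical limit $k_1 = k_2 = 0$. The sketch below is our own check, with arbitrary illustrative values of $\theta_{23}$, $\alpha$, $\kappa$ and $m_0$:

```python
import numpy as np

theta23, alpha, kappa, m0 = 0.7, 1.1, 0.12, 1000.0  # illustrative values
c, s = np.cos(theta23), np.sin(theta23)
# A 23 rotation carrying the phase alpha of the mixing matrix:
U = np.array([[1, 0, 0],
              [0, c, s * np.exp(1j * alpha)],
              [0, -s * np.exp(-1j * alpha), c]])
M2 = m0 ** 2 * (np.eye(3) - kappa * U @ np.diag([0.0, 0.0, 1.0]) @ U.conj().T)

# The 23 element reduces to -(1/2) m0^2 kappa sin(2 theta23) e^{i alpha}:
expected = -0.5 * m0 ** 2 * kappa * np.sin(2.0 * theta23) * np.exp(1j * alpha)
assert np.allclose(M2[1, 2], expected)
```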
As mentioned above,
it is preferable to modify the absolute value of the $B_d$-$\bar B_d$
mixing amplitude $|M_{12}^d|$ without modifying its argument
in order to enhance the asymmetry $a_{\rm sl}^s$.
The 13 element of the sfermion mass matrix is
$\kappa (-1/2\, k_2 \sin2\theta_{12}\sin\theta_{23}
+ e^{i\delta} \sin\theta_{13} \cos\theta_{23})$ \cite{Dutta:2006zt},
where $\theta_{ij}$ are the mixing angles and $\delta$ is a
phase in the unitary matrix.
Choosing small values for the parameters $k_2$ and $\theta_{13}$,
one can realize the preferred situation.
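The same parameterization makes the preferred situation explicit: for $k_2 = \theta_{13} = 0$ the 13 element vanishes exactly, so the argument of $M_{12}^d$ is untouched. The check below uses our own PDG-like ordering and phase convention for the unitary matrix (so the overall sign and the placement of $\delta$ may differ from the expression quoted above); all numerical values are illustrative:

```python
import numpy as np

def rot(i, j, th, delta=0.0):
    """Complex rotation in the (i, j) plane with CP phase delta."""
    U = np.eye(3, dtype=complex)
    c, s = np.cos(th), np.sin(th)
    U[i, i] = U[j, j] = c
    U[i, j] = s * np.exp(-1j * delta)
    U[j, i] = -s * np.exp(1j * delta)
    return U

def m2_13(k2, th12, th13, th23, delta, kappa=0.1):
    """13 element of -kappa U diag(0, k2, 1) U^dagger, in units of m0^2."""
    U = rot(1, 2, th23) @ rot(0, 2, th13, delta) @ rot(0, 1, th12)
    A = U @ np.diag([0.0, k2, 1.0]) @ U.conj().T
    return -kappa * A[0, 2]

# k2 = theta13 = 0: the 13 element vanishes exactly ...
assert abs(m2_13(0.0, 0.6, 0.0, 0.7, 1.0)) < 1e-14
# ... and it is regenerated once either parameter is switched on:
assert abs(m2_13(0.05, 0.6, 0.1, 0.7, 1.0)) > 1e-3
```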
\section{Constraints from the FCNC processes in SUSY GUTs}
In the MSSM with flavor universality, the chargino box diagram dominates
the SUSY contribution to $M_{12}^s$.
In the general parameter space of the soft SUSY breaking terms,
the gluino box diagram can dominate the SUSY contribution for a lower $\tan\beta$
(i.e. $\tan\beta \alt 20-30$).
The gluino box contribution is enhanced if both left- and right-handed
down-type squark
mass matrices have off-diagonal elements \cite{Ball:2003se},
and
therefore,
it is expected that the SUSY contribution to the $B_s$-$\bar B_s$
mixing amplitude is large for the SO(10) model with type II seesaw,
compared to the minimal type of SU(5) model \cite{Dutta:2008xg}.
When the lepton flavor violation is correlated to the flavor violation
in the right-handed down-type squark,
the $\tau\to \mu\gamma$ decay gives us the most important constraint
to obtain the large $B_s$-$\bar B_s$ phase \cite{Dutta:2008xg,Hisano:2008df}.
Furthermore, the squark masses are raised much more than
the slepton masses due to the gaugino loop contribution
since the gluino is heavier compared to the Bino and the Wino at low energy (assuming the gaugino mass universality at a high energy scale such as the GUT scale),
and thus the lepton flavor violation will be more sizable compared to the quark
flavor violation.
The current experimental bound on the branching ratio of $\tau\to\mu\gamma$
is \cite{Hayasaka:2007vc}
\begin{equation}
{\rm Br}(\tau\to\mu\gamma) < 4.4 \times 10^{-8}.
\label{taumugamma}
\end{equation}
In order to allow for a large phase in the $B_s$-$\bar B_s$ mixing,
a large flavor-universal scalar mass
at the cutoff scale
is preferable.
The reasons are as follows.
The gaugino loop effects are flavor blind
and they enhance the diagonal elements of the scalar mass matrices
while keeping the off-diagonal elements unchanged.
If the flavor universal
scalar masses at the cutoff scale become larger,
both Br($\tau\to\mu\gamma)$ and $\phi_{B_s}$ are suppressed.
However, Br($\tau\to\mu\gamma)$ is much more suppressed compared to $\phi_{B_s}$
for a given $\kappa \sin2\theta_{23}$
because the low energy slepton masses are sensitive to $m_0$
while the squark masses are not so sensitive
due to the gluino loop contribution to their masses.
A large Higgsino mass $\mu$ is also helpful
to suppress the dominant chargino contribution
to $\tau\to\mu\gamma$.
The subdominant neutralino contribution, however, will
become large when $\mu$ is large due to the large left-right
stau mixing.
\begin{figure}[tbp]
\center
\includegraphics[viewport = 0 10 290 220,width=9cm]{Graph1.eps}
\caption{
The possible SUSY contributions are plotted when
the $\tau\to\mu\gamma$ bound is saturated.
The SO(10) boundary condition can give a larger
SUSY contribution than the SU(5) boundary condition.
We choose $m_{1/2} = 300$ GeV and $\tan\beta =10$ for this plot.
The details of how the plot is drawn are given in the text.
The relation between the CP phase $\phi_s = -2\phi_{B_s}$
and $A_s^{\rm NP}/A_s^{\rm SM}$ is given in
Eq.(\ref{relation-phiBs}).
}
\end{figure}
In figure 2,
we plot the magnitude of $A_s^{\rm NP}/A_s^{\rm SM}$
as a function of $m_5$ (the SUSY breaking mass of
$\bar{\bf 5} = (D^c,L)$ at the unification scale),
when the $\tau\to\mu\gamma$ bound, Eq.(\ref{taumugamma}),
is saturated,
for various mass parameters in the case of $\tan\beta=10$.
We choose the unified gaugino mass as $m_{1/2} = 300$ GeV,
and
the universal scalar trilinear coupling as $A_0=0$.
In the case of SU(5), the SUSY breaking mass of
${\bf 10} = (Q,U^c,E^c)$, $m_{10}$, can be
different from $m_{5}$.
As one can see,
in order to achieve $A_s^{\rm NP}/A_s^{\rm SM} \sim 1$,
the mass parameters should be around 2 TeV.
In the case of SO(10), the sfermion masses are unified,
$m_0 = m_5 = m_{10}$, and thus we only change
$\mu$ in the two plots in the figure.
As one can see,
the NP contribution in SO(10)
can be much bigger than in the SU(5) case.
This is the consequence of the fact
that both the left- and right-handed squark
mass matrices can have sizable off-diagonal elements
from the Majorana neutrino coupling
in the SO(10) case.
In the lower $\tan\beta$ case, however,
the amount of non-universality $\kappa$
has to be large $\agt 0.3$ to achieve
$A_s^{\rm NP}/A_s^{\rm SM} \sim 1$, especially in SU(5).
Such a sizable $\kappa$ is possible if $A_0/m_0$ is large,
but such a large $\kappa$ is difficult to realize
as long as it has an RGE-induced origin between the Planck scale
and the GUT scale.
Besides, the muon $g-2$ anomaly \cite{g-2} is also suppressed
when $\tau\to\mu\gamma$ is suppressed.
This is not good since the deviation of $g-2$ from the SM prediction
is now estimated to be at the 3.2 \cite{Davier}--4 $\sigma$ level \cite{Teubner:2010ah}.
When $\tan\beta$ is larger ($> 30-40$),
the so-called double Higgs penguin diagram
dominates over the box diagram,
and $\kappa$ can be smaller $\alt 0.1$ to achieve
$A_s^{\rm NP}/A_s^{\rm SM} \sim 1$.
In this case, we do not need to suppress $\tau\to\mu\gamma$,
and the muon $g-2$ anomaly can be explained.
The double Higgs penguin (flavor changing neutral Higgs interaction)
is generated as follows \cite{Choudhury:1998ze}.
In SUSY models, only the holomorphic Yukawa couplings are allowed.
When SUSY is broken,
the non-holomorphic coupling is generated by the finite corrections.
For the down-type quark, the Yukawa coupling is
\begin{equation}
{\cal L}^{\rm eff} = Y_d Q D^c H_d + Y_d^\prime Q D^c H_u^*.
\end{equation}
The second term is the non-holomorphic term,
and $Y_{d}^\prime{}_{23}$ and $Y_{d}^\prime{}_{32}$
are roughly proportional to $\tan\beta$.
Since we work on the basis where the down-type quark mass matrix
($M_d = Y_d v_d + Y_d^\prime v_u$) is diagonal,
the following flavor changing Higgs coupling can be obtained:
\begin{equation}
Y_d^\prime Q D^c H_u^* - Y_d^\prime \frac{v_u}{v_d} Q D^c H_d.
\end{equation}
The dominant flavor changing neutral Higgs
coupling is roughly obtained from the second term,
and it is proportional to $\tan^2\beta$.
The $B_s$-$\bar B_s$ mixing can be generated from a
double penguin diagram, and
the mixing amplitude is proportional to $\tan^4\beta$.
Since the Br($\tau\to\mu\gamma$) is proportional to $\tan^2\beta$,
the double penguin contribution for large $\tan\beta$
is preferable to obtain $A_s^{\rm NP}/A_s^{\rm SM} \sim 1$
satisfying the $\tau\to\mu\gamma$ constraint in GUT models.
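The interplay of these scalings can be summarized in a toy estimate: at a fixed Br($\tau\to\mu\gamma$), the reachable double penguin amplitude grows as $\tan^2\beta$. The normalization below is arbitrary; this is a pure power-counting illustration, not a computation of the amplitudes:

```python
# Naive power counting quoted in the text:
#   double penguin amplitude ~ tan^4(beta),  Br(tau -> mu gamma) ~ tan^2(beta)
def reachable_amplitude(tan_beta, br_bound=1.0):
    """Max double penguin amplitude at a fixed Br(tau -> mu gamma) bound
    (arbitrary units): A ~ tan^4 while Br ~ tan^2, so A at fixed Br ~ tan^2."""
    return br_bound * tan_beta ** 2

# Going from tan(beta) = 10 to 40 gains a factor (40/10)^2 = 16:
assert reachable_amplitude(40.0) / reachable_amplitude(10.0) == 16.0
```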
The effective flavor changing Higgs couplings are written as
\begin{equation}
X_{RL}^{Sij} (\bar d_i P_R d_j) S^0 + X_{LR}^{Sij} (\bar d_i P_L d_j) S^0,
\end{equation}
where $S^0$ represents the neutral Higgs fields, $S = [H,h,A]$,
where $H$ and $h$ stand for heavier and lighter CP even neutral Higgs fields,
and $A$ is the CP odd neutral Higgs field (pseudoscalar Higgs field).
The couplings are
\begin{eqnarray}
X_{RL}^{Sij} &=& Y^\prime_d{}_{ij} \frac{1}{\sqrt2 \cos\beta} [\sin(\alpha-\beta),\cos(\alpha-\beta),-i], \\
X_{LR}^{Sij} &=& Y^\prime_d{}_{ji} \frac{1}{\sqrt2 \cos\beta} [\sin(\alpha-\beta),\cos(\alpha-\beta),i],
\end{eqnarray}
where $\alpha$ is a mixing angle for $h$ and $H$.
The double penguin diagram including both left- and right-handed
Higgs penguins is proportional to the factor
\begin{equation}
\frac{\sin^2(\alpha-\beta)}{m_{H}^2} + \frac{\cos^2(\alpha-\beta)}{m_{h}^2} + \frac{1}{m_{A}^2},
\end{equation}
and the double penguin contribution is naively proportional to
$X_{RL}^{23} X_{LR}^{23}/m_{A}^2$
\cite{Hamzaoui:1998nu,Buras:2001mb,Foster:2004vp}.
On the other hand, the double left-handed (or double right-handed)
penguin contribution $\propto (X_{LR}^{23})^2$ (or $(X_{RL}^{23})^2$)
is tiny because $\cos(\alpha-\beta) \simeq 0$ and $m_A \simeq m_H$
for $\tan\beta \gg 1$ and $m_A > m_Z$.
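The decoupling-limit statement can be verified numerically: with $\cos(\alpha-\beta)\to 0$ and $m_H \to m_A$, the propagator factor tends to $2/m_A^2$, so the double penguin is controlled by the pseudoscalar mass. The Higgs masses in the sketch below are illustrative assumptions:

```python
import math

def higgs_factor(alpha, beta, mH, mh, mA):
    """sin^2(a-b)/mH^2 + cos^2(a-b)/mh^2 + 1/mA^2, masses in GeV."""
    return (math.sin(alpha - beta) ** 2 / mH ** 2
            + math.cos(alpha - beta) ** 2 / mh ** 2
            + 1.0 / mA ** 2)

beta = math.atan(40.0)        # tan(beta) = 40
alpha = beta - math.pi / 2.0  # decoupling: cos(alpha - beta) = 0
mA = mH = 500.0               # GeV, illustrative
f = higgs_factor(alpha, beta, mH, mh=115.0, mA=mA)
assert abs(f - 2.0 / mA ** 2) < 1e-15  # the factor tends to 2/mA^2
```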
In the flavor universal SUSY breaking, the right-handed penguin coupling
$X_{RL}^{23}$ is tiny,
and the double penguin contribution cannot be sizable
even for a large $\tan\beta$.
However, when the right-handed mixing is generated in the SUSY GUT models,
the double penguin diagram can be sizable for large $\tan\beta$.
We note that if there is an FCNC source in the right-handed squark mass matrix,
we do not need the off-diagonal elements in the left-handed squark mass matrix
in order to generate the sizable double penguin contribution.
Therefore, even in the minimal type of SU(5) model,
the double penguin contribution can be sizable when $\tan\beta$ is large.
\begin{figure}[tbp]
\center
\includegraphics[viewport = 0 10 290 220,width=9cm]{Graph2.eps}
\caption{
The possible SUSY contributions are plotted as a function of
$\tan\beta$ when
the $\tau\to\mu\gamma$ bound is saturated.
The SO(10) boundary condition can give a larger
SUSY contribution than the SU(5) boundary condition.
We choose $m_{1/2} = 300$ GeV and $m_0 = 800$ GeV.
The details of how the plot is drawn are given in the text.
}
\end{figure}
In figure 3, we plot the $\tan\beta$ dependence of
$A_s^{\rm NP}/A_s^{\rm SM}$
when Br($\tau\to\mu\gamma$) saturates the experimental bound
for two Higgsino masses $\mu$.
We choose the unified gaugino mass as $m_{1/2} = 300$ GeV,
and the SUSY breaking scalar masses as $m_0 = m_5 = m_{10} = 800$ GeV, and
$A_0=0$.
We assume $m_{H_u}^2 = m_{H_d}^2$ for the SUSY breaking
Higgs masses at the unification scale, just to reduce the number of parameters.
As one can see, $A_s^{\rm NP}/A_s^{\rm SM}$ becomes smaller
for $\tan\beta \sim 20$.
This is because Br($\tau\to\mu\gamma$) is proportional to $\tan^2\beta$
while
the box contribution does not depend on $\tan\beta$.
The double penguin contribution
is proportional to $\tan^4\beta$,
and thus the amplitude can become larger
for $\tan\beta> 30$.
For large $\tan\beta > 40$,
the constraint from Br($B_s\to \mu\mu$) \cite{Bsmumu}
\begin{equation}
{\rm Br}(B_s \to\mu\mu) < 4.3 \times 10^{-8},
\end{equation}
becomes more important,
since it is proportional to $\tan^6\beta$.
In the plots,
the lines are terminated when the $B_s\to\mu\mu$ bound
is saturated.
The left-handed flavor changing Higgs coupling $X_{LR}^{23}$
can be generated by the chargino diagram
even if the only flavor violating source is the CKM mixing.
Therefore,
the mixing amplitude can be enhanced
when only the right-handed down-type squark
mass matrix has the off-diagonal element such as in the SU(5) case.
The left-handed source of FCNC is also helpful
to enhance the mixing amplitude
since it can give a constructive contribution
to the left-handed penguin.
In fact, in the SO(10) boundary condition,
there is additional phase freedom from the Yukawa matrix,
and the phases of the off-diagonal elements
for left- and right-handed squark mass matrices
are independent in the basis where
the down-type quark mass matrix is real and positive
diagonal matrix.
As a consequence,
even if we fix the phase of $M_{12,\rm SUSY}^s$,
there remains one more phase freedom.
In figure 2, actually,
we choose the
additional phase in the left-handed off-diagonal element
to make the constructive contribution to the mixing amplitude.
Therefore, the mixing amplitude
under the SO(10) boundary condition
can be larger than the case in SU(5).
It is interesting to note that
the chargino contribution to $b\to s\gamma$
is destructive
when the Higgs penguin contribution is constructive
\cite{Buras:2001mb,Dutta:2009iy}.
This is roughly because the electric charge of the
down quark is negative,
and the signs of the amplitudes for $b\to s\gamma$
and of the finite correction to the down-type quark mass matrix
are opposite.
As a result, the SO(10) boundary condition can
also be preferable
from the point of view of the $b\to s\gamma$ constraint.
We comment on the GUT threshold effects
for the boundary conditions, Eqs.(\ref{SU(5)}),(\ref{SO(10)}).
In the SO(10) case, the flavor violation pattern in the lepton sector
and the quark sector can depend on the SO(10) symmetry breaking vacua.
Actually, in order to forbid a rapid proton decay,
the quark flavor violation should be larger than the lepton flavor violation
among the symmetry breaking vacua \cite{Dutta:2007ai}.
Namely, it is expected that
$\kappa_{Q}$, $\kappa_{U^c}$, and $\kappa_{D^c}$ are much larger than
$\kappa_L$ and $\kappa_{E^c}$.
For example, if only the Higgs fields $({\bf 8},{\bf 2}, \pm1/2)$
are light compared to the breaking scale (which is the most suitable case),
one obtains $\kappa_Q = \kappa_{U^c} = \kappa_{D^c}$,
and only quark flavor violation is generated, while the lepton flavor
violation is not generated.
On the other hand, when the flavor violation is generated
from the minimal type of SU(5) vacua with the type I seesaw,
it is expected that $\kappa_{L}$ is always larger than $\kappa_{D^c}$
since the right-handed Majorana mass scale is lower than the
colored Higgs mass scale.
Therefore, the existence of the $b$-$s$ transition indicated by the
experimental results at Fermilab
predicts sizable lepton flavor violation in the minimal type of
SU(5) model.
In other words, if the result of a large $B_s$-$\bar B_s$ phase is really
evidence of NP,
the minimal type of SU(5) GUT models are restricted
severely \cite{Parry:2005fp,Dutta:2008xg,Hisano:2008df}.
\section{Constraints from the neutralino dark matter}
The cosmic microwave background anisotropy measurement by WMAP~\cite{WMAP} put a stringent constraint on the SUSY parameter space through the dark matter requirement. Within 2~$\sigma$, the neutralino relic density should be
$0.106 < \Omega h^2 < 0.121$.
This assumes that dark matter consists solely of neutralinos; a smaller relic density cannot be excluded a priori. In SUSY models with universal gaugino and sfermion masses, $m_{1/2}$ and $m_0$ respectively, it is well known that there are five distinct regions that satisfy the relic density constraint: (a) the bulk region
where both $m_{1/2}$ and $m_0$ are small, (b) the neutralino-stau coannihilation region where the lightest stau mass is almost degenerate with the neutralino mass, (c) the focus point (FP)/hyperbolic region at large $m_0$ where the $\mu$ parameter becomes small and the lightest neutralino gets more Higgsino content, (d) the funnel region where the heavy Higgs masses ($m_A$ and $m_H$) are about twice the neutralino mass, and (e) the neutralino-stop coannihilation region where the lightest stop mass is suppressed by large off-diagonal terms when the trilinear coupling parameter $A_0$ is large. In our analysis we assume that the Higgs soft masses are not tied to $m_0$, and from this assumption we have two more free parameters, $\mu$ and $m_A$. In such models there could be another dark matter region, in addition to the five regions above, i.e. the neutralino-sneutrino coannihilation region at large $m_A$ and/or $\mu$~\cite{sneutrinoNLSP}.
The neutralino dark matter hypothesis is very attractive, and there is a lot of activity, both experimental and theoretical, to discover the dark matter candidate. As a weakly interacting massive particle (WIMP), the lightest neutralino can in principle be detected directly by ultra-sensitive detectors. Such experiments are now reaching the $O(10^{-8}~{\rm pb})$ sensitivity level for certain values of the neutralino mass~\cite{CDMSII,Xenon100}. Neutralino particles in the galaxy can also give indirect signals through their annihilation, in particular from regions where neutralinos can accumulate due to gravitational potential attractors. It was pointed out that the high energy neutrino flux from the sun can potentially be a clear signal of dark matter annihilation in the sun~\cite{Silk:1985ax}, and this is currently being searched for by the IceCube experiment and its upgrade with DeepCore, which can lower the neutrino energy threshold of the detector~\cite{IceCube}.
To study the dark matter aspect of our model, we calculate the neutralino relic density, the neutralino-proton elastic scattering cross section, and the muon flux induced by solar neutrinos from neutralino annihilation. For the muon flux calculation we use the {\tt DarkSUSY} program version 5.0.5 \cite{DarkSUSY} which utilizes the results of {\tt WimpSim}~\cite{wimpsim}, interfaced with our own spectrum program. Solar neutrinos from WIMP dark matter in various models have recently also been analyzed in~\cite{EOSS,BKMS,solnu}. We assume the NFW profile~\cite{Navarro:1995iw} for our analysis. As mentioned in \cite{EOSS}, there are uncertainties in the neutralino-nucleon cross section due to the role of the strange quark in the interaction. We use their default values for $\Sigma_{\pi N}$ and $\Delta_s$, i.e. $\Sigma_{\pi N} = 64$~MeV, and $\Delta_s^{(p)} = -0.09$.
Since the parameter space in the minimal type of SU(5) is restrictive
(in other words, the mass spectrum is constrained),
the discovery potential of this region
appears to be very promising at the LHC,
and at the direct and indirect dark matter search experiments~\cite{Dutta:2009hj}.
Since small values of $\mu$ are not preferable
due to the $\tau\to\mu\gamma$ constraint,
the WMAP relic density
prefers the funnel solution
(i.e. the neutralinos annihilate through the heavy Higgs bosons pole).
It is also true
that a small value of $m_A$ is preferred to enhance
the double penguin contribution.
In that case, the
spin-independent neutralino-nucleon scattering cross section
can be enhanced.
\begin{figure}[tbp]
\center
\includegraphics[viewport = 10 0 220 200,width=8cm]{mAmu_P1-SU5-m01-m125_NFW.eps}
\includegraphics[viewport = 10 0 220 220,width=8cm]{mAmu_P1-SU5-m01-m128_NFW.eps}
\caption{(Left) The $m_A - \mu$ plane in SU(5) model with $\tan \beta = 40$, $m_{1/2} = 500$~GeV, $m_0 = 1$~TeV, $A_0 = 0$, and $A_s^{\rm NP}/A_s^{\rm SM} = 1$, showing various constraints from flavor and dark matter sectors as discussed in the text. (Right) Same plot for $m_{1/2} = 800$~GeV.
}\label{fig:mumA}
\end{figure}
In figure~\ref{fig:mumA},
we show the allowed region in the $m_A$-$\mu$ plane
when $A_s^{\rm NP}/A_s^{\rm SM} = 1$
in the SU(5) case, $\kappa_L = \kappa_{D^c}$.
For the left panel we choose as SUSY parameters $\tan \beta = 40$, $m_{1/2} = 500$~GeV, $m_0 = 1$~TeV and $A_0 = 0$. The same parameters are used for the right panel except for $m_{1/2} = 800$~GeV.
The yellow and gray regions are excluded by the $\tau\to\mu\gamma$
and $B_s\to\mu\mu$ constraints, respectively.
The red-brown region is excluded because the lightest stau is the LSP there, and therefore the neutralino cannot be the dark matter candidate. The WMAP relic density range is obeyed for the neutralino in the narrow blue bands. The solid green lines are contours for the scalar neutralino-proton elastic scattering cross section of $5, 1, 0.1 \times 10^{-8}$~pb respectively from left to right. The almost horizontal dashed green lines are for the spin-dependent cross section, $5, 10, 50 \times 10^{-8}$~pb
respectively from top to bottom. We also show contours of muon flux, labeled as Ex-y, where x is the assumed detector energy threshold in GeV and y is the flux in km$^{-2}$yr$^{-1}$. We show two cases for E(threshold): 100 and 10~GeV, dotted and solid respectively. As we can see, the Br($\tau\to\mu\gamma$) constraint is important. In the left plot, the funnel happens at relatively small $m_A$, and a large part is allowed. In the right plot, however, the funnel is shifted to the right due to the larger neutralino mass. In the latter case, we need much larger $\mu$ to satisfy the Br($\tau\to\mu\gamma$) constraint, although there would be an upper bound on $\mu$ due to the sneutrino LSP region~\cite{sneutrinoNLSP}.
The muon flux rate is correlated to the neutralino-proton scattering cross section since this cross section determines the number of neutralinos accumulated in the core of the sun, hence the neutralino annihilation rate there. Since protons constitute a large portion of the sun, both the spin dependent and independent neutralino-proton cross sections are comparably important. For most of the MSSM parameter space the spin dependent part is larger than the scalar part, and this leads to a widespread misconception that for solar neutrino flux calculation only the spin dependent cross section is important. However, there are some regions of the parameter space where the scalar and the spin dependent cross section are about the same order of magnitude, and in this case the scalar contribution is also significant in determining the flux. This was also pointed out by \cite{EOSS}. We can see in Fig.~\ref{fig:mumA} that for small $m_A$ the scalar cross section is quite large, and the
muon flux follows the scalar contour. For large $m_A$, however, the spin dependent part becomes larger than the scalar part, and the muon flux contour flattens out. Furthermore, the solar neutrino flux also depends on the branching fractions of the neutralino annihilation. Near the stau LSP region, the lighter stau mass is relatively small, enhancing the $\tau^+ \tau^-$ channel, which in turn increases the flux (visible on the left panel).
In the middle of the funnel region, the neutralino relic density is much below the lower bound of the WMAP range. We rescale down the muon flux accordingly, and this is seen as a drop in the muon flux rate contours.
IceCube with DeepCore can potentially detect neutrinos with energies down to 10~GeV~\cite{jenni}. It appears that for the left plot a large allowed region, i.e., the entire left band of the funnel region and part of the right band, can be detected. Note, however, that we should also consider the backgrounds to claim a clear discovery~\cite{BKMS}, and therefore a sufficiently large amount of data would need to be collected. For the plot on the right side, however, it would be very difficult for IceCube with DeepCore to probe the WMAP region not already excluded by the Br($\tau\to\mu\gamma$) constraint.
In the SO(10) case, the figures for the same parameter space remain unchanged qualitatively,
except for a lower $\mu$ region where
the chargino contribution of the Higgs penguin
from the left-handed squark FCNC is important.
As mentioned in the previous section,
when the SO(10) breaking vacuum is chosen to satisfy the
proton decay constraint while gauge unifications are maintained,
there is no stringent constraint from $\tau\to\mu\gamma$,
and a larger region of parameter space is allowed.
Consequently, the dark matter direct and indirect detection will play more significant roles in excluding the parameter space.
\section{Discussions}
In this paper, we investigated the effect of the recent dimuon CP asymmetry from $B$ decay modes,
observed at a 3.2 $\sigma$ deviation from the Standard Model by
the D0 collaboration, in the context of $R$-parity conserving SUSY GUT models,
and showed that a large amount of flavor violation between the second
and the third generation can be generated.
Not only does large flavor violation arise due to the large atmospheric mixing angle,
but new CP phases are also obtained from the Yukawa interactions
in grand unification,
and they are responsible for the observed large CP asymmetry.
Because of the quark-lepton unification,
the CP asymmetry is restricted due to the $\tau\to\mu\gamma$ bound.
This restriction depends on the
source of flavor violation (Dirac neutrino Yukawa coupling
in the minimal type of SU(5)
or Majorana neutrino Yukawa coupling in the minimal type of SO(10)),
SUSY mass spectrum (larger SUSY breaking masses are preferred),
and dominant diagram (box diagram for lower $\tan\beta$
or double Higgs penguin diagram for larger $\tan\beta$).
We find that large values of $\tan\beta$ ($=30-50$) are preferred
because the CP asymmetry is enhanced via the
double Higgs penguin diagram, whose contribution is
proportional to $\tan^4\beta$,
so that a sizable contribution to the flavor violating $b$-$s$ transition can be
easily realized.
We found that
the minimal type of SO(10) is preferred
due to the fact that both left- and right-handed
squark mass matrices can have FCNC sources.
The intermediate values of $\tan\beta$ ($\tan\beta = 20-30$)
are not very preferable.
In the case of the minimal type of SU(5),
$\tan\beta$ should be large to make the double penguin diagram
dominant.
In that case, $B_s\to\mu\mu$ is enhanced and
it provides a lower bound on Br($B_s\to\mu\mu$),
which is about $1\times 10^{-8}$,
in order to obtain a large CP asymmetry.
The symmetry breaking vacuum can be chosen to
relax the quark-lepton unification in the SO(10) case,
while in the SU(5) case where the neutrino Dirac Yukawa coupling is
the source of FCNCs, the leptonic FCNC is always larger
than the quark FCNC.
Therefore, the bound from the LFV in the SU(5) is more stringent,
and in other words, the spectrum is more predictive
to obtain the large CP asymmetry indicated by the like-sign
dimuon charge asymmetry.
In fact, in order to account for the dark matter content of the universe, the CP asymmetry prefers the funnel solution,
where the lightest neutralinos annihilate through the heavy Higgs boson poles.
Such a restriction on the spectrum from flavor physics
would make it easier to probe this parameter space at the LHC and at direct and indirect
dark matter detection facilities.
We showed that the high-energy neutrino flux from neutralino annihilation in the Sun
can be detected at IceCube with DeepCore for some regions of parameter space.
\section*{Acknowledgments}
We thank W.S. Hou, X.G. He, S. Khalil, S. Su and X. Tata
for useful discussions. Y.S. thanks J.~Edsjo, D.~Marfatia, K.A.~Richardson-McDaniel and E.~Sessolo for helpful information regarding the solar neutrino analysis. B.D. thanks the GGI, Florence, where part of this work was done, for its hospitality.
The work of B.D. is supported in part by the DOE grant DE-FG02-95ER40917.
The work of Y.S. is supported in part by the DOE grant DE-FG02-04ER41308.
The work of Y.M. is supported by the Excellent Research Projects of
National Taiwan University under grant number NTU-98R0526.
\section{Introduction}
Optimal power flow (OPF) is a fundamental optimization problem that aims to find a cost-minimizing operating point subject to the physical laws and safety limits of a power network.
The power sector is witnessing a rising need for fast and scalable OPF solvers, which can instruct a large network of controllable units (smart appliances, electric vehicles, energy storage devices, etc.) to make timely response to the growing variations of wind and solar generations.
Such a need is especially urgent in power distribution networks, where massive renewable energy sources and controllable units are being deployed.
However, distribution networks are also where the speed and scalability requirements are most challenging, as the high resistance-to-reactance ratios of distribution lines necessitate nonlinear and nonconvex alternating-current (AC) OPF rather than its simple direct-current approximation.
Numerous efforts have been made to overcome the computational challenges to AC-OPF. Many of them conducted convex relaxations, convex inner approximations, or linearizations of AC power flow; comprehensive reviews were provided in \cite{low2014convex, molzahn2019survey}.
Meanwhile, distributed OPF algorithms were developed and shown to be more scalable in terms of computation and communication and more robust to single-point failure, compared to their centralized counterparts \cite{dall2013distributed, erseghe2014distributed, zhang2014optimal, peng2016distributed}.
To further reduce computational efforts associated with solving AC power flow, some OPF algorithms were implemented by iteratively actuating the power system with intermediate decisions and updating the decisions based on system feedback \cite{bolognani2014distributed, gan2016online, bernstein2019real}.
From the vast literature, we bring to attention a hierarchical distributed primal-dual gradient algorithm \cite{zhou2019hierarchical} (and its extension to multi-phase networks \cite{zhou2019accelerated}).
It leveraged the radial structure of distribution networks to avoid repetitive computation and communication, and thus significantly accelerated large-scale OPF computations.
However, the algorithm in \cite{zhou2019hierarchical}
derived approximate gradients from the linearized distribution power flow model in \cite{baran1989optimalC, farivar2013branch}. Such linearization may cause the solver to optimistically estimate nodal voltages as safe while they actually exceed safety limits. The consequent risk of voltage violation is demonstrated later in this paper.
To prevent such violation, we develop an improved gradient evaluation method by approximating the partial derivatives of the quadratic terms associated with line currents and power losses.
Our analysis shows that with moderate extra computations, the proposed method returns more accurate gradient estimations than \cite{zhou2019hierarchical}.
Meanwhile, the proposed method preserves the structure in \cite{zhou2019hierarchical} for gradient evaluation, and thus enables us to develop an improved hierarchical OPF algorithm.
Numerical experiments on IEEE 37-node and 123-node networks demonstrate enhanced safety of voltage regulation achieved by the proposed algorithm with a moderate increase in computation time, compared to the previous method based on the linearized model.
The rest of this paper is organized as follows. Section \ref{sec:model} introduces the distribution network model, the OPF problem, and a primal-dual algorithm to solve it. Section \ref{sec:method} motivates and elaborates the improved gradient evaluation method. Section \ref{sec:algorithm} presents the hierarchical OPF algorithm based on the improved gradient evaluation.
Section \ref{sec:simulation} reports the numerical experiments. Section \ref{sec:conclusion} concludes this paper.
\section{Modeling and Preliminary Algorithm}\label{sec:model}
\subsection{Power network model and OPF problem}
We model a single-phase equivalent power distribution network as a directed \emph{tree} graph $\mathcal{T}:=\{\mathcal{N}^{+}, \mathcal{E}\}$, where $\mathcal{N}^{+}=\mathcal{N} \cup\{0\}$, with $0$ indexing the root node (the substation, also known as the slack bus), and $\mathcal{N}=\{1,...,N\}$ indexing other nodes.
The set $\mathcal{E}$ collects the ordered pairs of nodes representing the lines in the network.
We define the directions of lines as pointing away from the root, e.g., in line $(i,j)$, node $i$ is closer to the root than node $j$.
Let $p_{i}$ and $q_{i}$ denote the net active and reactive power injections (i.e., power supply minus consumption) and $v_{i}$ denote the squared voltage magnitude at each node $i\in \mathcal{N}^+$.
Let $\ell_{ij}$ denote the squared current magnitude, $z_{ij}=r_{ij}+\boldsymbol{i}x_{ij}$ denote the series impedance, and $P_{ij}$ and $Q_{ij}$ denote the sending-end active and reactive power on each line $(i,j)\in\mathcal{E}$.
We adopt the distribution power flow model \cite{baran1989optimalC, farivar2013branch}:
\begin{subequations}\label{powerequation}
\begin{alignat}{2}
P_{i j} &=-p_{j}+\sum_{k:(j, k) \in \mathcal{E}} P_{j k}+r_{i j} \ell_{i j},~\forall j\in\mathcal{N} \label{powerequation:1}\\
Q_{i j} &=-q_{j}+\sum_{k:(j, k) \in \mathcal{E}} Q_{j k}+x_{i j} \ell_{i j}, ~\forall j\in\mathcal{N} \label{powerequation:2}\\
v_{j} &=v_{i}\!-\!2\left(r_{i j} P_{i j}\!+\!x_{i j} Q_{i j}\right)\!+\!\left|z_{i j}\right|^{2} \ell_{i j}, \forall (i,j)\in\mathcal{E} \label{powerequation:3}\\
\ell_{i j} v_{i} &=P_{i j}^{2}+Q_{i j}^{2},~\forall (i,j)\in\mathcal{E}. \label{powerequation:4}
\end{alignat}
\end{subequations}
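As a quick numerical sanity check of \eqref{powerequation} (our illustration, not part of the paper; the two-node load data below are made up), the model can be solved on a single line by fixed-point iteration on the squared current magnitude:

```python
def solve_two_node(p1, q1, r, x, v0, iters=50):
    """Solve the branch flow model on a single line (0, 1) with fixed
    root voltage v0, by fixed-point iteration on the squared current ell."""
    ell = 0.0
    for _ in range(iters):
        P = -p1 + r * ell              # sending-end active power balance
        Q = -q1 + x * ell              # sending-end reactive power balance
        ell = (P ** 2 + Q ** 2) / v0   # ell * v0 = P^2 + Q^2
    v1 = v0 - 2 * (r * P + x * Q) + (r ** 2 + x ** 2) * ell  # voltage drop
    return P, Q, ell, v1

# A made-up load of 0.5 + j0.2 p.u. at node 1 (negative net injection).
P, Q, ell, v1 = solve_two_node(p1=-0.5, q1=-0.2, r=0.01, x=0.02, v0=1.0)
# The loss term r * ell makes P slightly larger than the load, and v1 < v0.
```

The fixed point converges quickly here because the loss terms are small relative to the line flows.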
Assume the power injections $(p_i,q_i)$ to be controllable within a given operating region:
\begin{alignat}{2} \label{region}
\mathcal{Y}_{i}=\left\{\left(p_{i}, q_{i}\right) \mid \underline{p}_{i} \leqslant p_{i} \leqslant \bar{p}_{i}, \underline{q}_{i} \leqslant q_{i} \leqslant \bar{q}_{i}\right\},~\forall i\in\mathcal{N}.
\end{alignat}
Define vectors $\boldsymbol{p}:=\left[p_{1}, \ldots, p_{N}\right]^{\top}$, $\boldsymbol{q}:=\left[q_{1}, \ldots, q_{N}\right]^{\top}$, $\boldsymbol{v}:=\left[v_{1}, \ldots, v_{N}\right]^{\top} \in \mathbb{R}^{N}$.
Following \cite{gan2016online}, we express the voltage vector as a function of power injections, i.e., $\boldsymbol{v}(\boldsymbol{p},\boldsymbol{q})$, implicitly defined by \eqref{powerequation}.
Consider the following OPF problem:
\begin{subequations} \label{OPF}
\begin{alignat}{2}
\min _{\boldsymbol{p}, \boldsymbol{q}} & \sum_{i \in \mathcal{N}} f_{i}\left(p_{i}, q_{i}\right) \label{OPF:obj}\\
\text { s.t. } & \underline{\boldsymbol{v}} \leqslant \boldsymbol{v}(\boldsymbol{p}, \boldsymbol{q}) \leqslant \overline{\boldsymbol{v}} \label{OPF:v}\\
& \left(p_{i}, q_{i}\right) \in \mathcal{Y}_{i}, ~\forall i \in \mathcal{N}
\end{alignat}
\end{subequations}
where $f_{i}$ is a \emph{strongly convex} cost function for the control of power injections at node $i\in\mathcal{N}$, and inequality \eqref{OPF:v} is taken element-wise, with constant squared voltage limits $\underline{\boldsymbol{v}}$ and $\overline{\boldsymbol{v}}$.
\subsection{Primal-dual gradient algorithm}
Let $\underline{\boldsymbol{\mu}}$ and $\overline{\boldsymbol{\mu}}$ be the dual variables associated with \eqref{OPF:v} to penalize the violation of voltage lower and upper bounds, respectively.
To design a convergent primal-dual algorithm, we consider the regularized Lagrangian of \eqref{OPF}:
\begin{alignat}{2}\label{RLagarangian} \nonumber
\mathcal{L}_{\epsilon}(\boldsymbol{p}, \boldsymbol{q} ; \overline{\boldsymbol{\mu}}, \underline{\boldsymbol{\mu}})= \sum_{i \in \mathcal{N}} f_{i}\left(p_{i}, q_{i}\right)\qquad\qquad\\
+\underline{\boldsymbol{\mu}}^{\top}(\underline{\boldsymbol{v}}-\boldsymbol{v}(\boldsymbol{p}, \boldsymbol{q}))+\overline{\boldsymbol { \mu }}^{\top}(\boldsymbol{v}(\boldsymbol{p}, \boldsymbol{q})-\overline{\boldsymbol{v}})-\frac{\epsilon}{2}\|\boldsymbol{\mu}\|_{2}^{2}
\end{alignat}
where $\boldsymbol{\mu}=\left(\underline{\boldsymbol{\mu}},\overline{\boldsymbol{\mu}}\right)$ and $\epsilon>0$ is a constant factor for the regularization term.
Function \eqref{RLagarangian} is strongly convex in primal variables $(\boldsymbol{p}, \boldsymbol{q})$ and strongly concave in dual variables $\boldsymbol{\mu}$. The saddle point of \eqref{RLagarangian} serves as an approximate solution to \eqref{OPF}, with a bounded error that is linear in $\epsilon$ \cite{zhou2019hierarchical,koshal2011multiuser}.
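Evaluating \eqref{RLagarangian} is a direct computation; the sketch below (our illustration, not part of the paper; the one-node numbers are made up and scalar voltage limits are assumed for brevity) shows the three pieces of the regularized Lagrangian:

```python
def regularized_lagrangian(f_val, v, mu_lo, mu_hi, v_min, v_max, eps):
    """Evaluate the regularized Lagrangian for a given cost value f(u),
    voltage profile v, and dual vectors (scalar voltage limits for brevity)."""
    lag = f_val
    lag += sum(ml * (v_min - vj) for ml, vj in zip(mu_lo, v))  # lower-bound term
    lag += sum(mh * (vj - v_max) for mh, vj in zip(mu_hi, v))  # upper-bound term
    lag -= 0.5 * eps * sum(m * m for m in mu_lo + mu_hi)       # -eps/2 ||mu||^2
    return lag

# Made-up one-node example: cost 1.0, voltage 0.96, active lower-bound dual 2.0.
val = regularized_lagrangian(1.0, [0.96], [2.0], [0.0], 0.95, 1.05, eps=0.1)
```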
The following primal-dual gradient algorithm has been commonly used to iteratively approach such a saddle point:
\begin{subequations} \label{iter}
\begin{alignat}{2} \nonumber
\boldsymbol{u}(t+1)=\bigg[\boldsymbol{u}(t)-\sigma_u \left(\frac{\partial f\left(\boldsymbol{u}(t)\right)}{\partial \boldsymbol{u}}\right)^{\top} \nonumber\\
-\sigma_u \left(\frac{\partial \boldsymbol{v}\left(\boldsymbol{u}(t)\right)}{\partial \boldsymbol{u} }\right)^{\top} \left(\overline{\boldsymbol{\mu}}(t)-\underline{\boldsymbol{\mu}}(t)\right)\bigg]_{\mathcal{Y}} \label{iter:u}\\
\underline{\boldsymbol{\mu}}(t+1)=\left[\underline{\boldsymbol{\mu}}(t)+\sigma_{\mu}\left(\underline{\boldsymbol{v}}-\boldsymbol{v}(t)-\epsilon \underline{\boldsymbol{\mu}}(t)\right)\right]_{+}\label{iter:underline}\\
\overline{\boldsymbol{\mu}}(t+1)=\left[\overline{\boldsymbol{\mu}}(t)+\sigma_{\mu}\left(\boldsymbol{v}(t)-\overline{\boldsymbol{v}}-\epsilon \overline{\boldsymbol{\mu}}(t)\right)\right]_{+}\label{iter:overline}
\end{alignat}
\end{subequations}
where $\boldsymbol{u}:=(\boldsymbol{p},\boldsymbol{q})$ collects the controllable power injections; $f(\boldsymbol{u}):=\sum_{i\in\mathcal{N}} f_i(p_i,q_i)$ is the objective \eqref{OPF:obj}; $\boldsymbol{v}(t)=\boldsymbol{v}\left(\boldsymbol{u}(t)\right)$ denotes the voltage profile under power injections $\boldsymbol{u}(t)$ at time step $t$. Note that $\boldsymbol{v}\left(\boldsymbol{u}(t)\right)$ does not have to be calculated by \eqref{powerequation} or any other mathematical model of power flow; instead, it can be measured from the power network after actuating the network with the intermediate decisions $\boldsymbol{u}(t)$ during the process \eqref{iter}. This \emph{feedback-based} implementation has been widely adopted to compensate for likely modeling errors \cite{bolognani2014distributed, gan2016online, bernstein2019real, zhou2019accelerated}.
The subscripts $\mathcal{Y}$ and $+$ represent the projections onto $\prod_{i\in\mathcal{N}} \mathcal{Y}_{i}$ and the nonnegative orthant, respectively. Without loss of generality, we use constant step sizes $\sigma_u, \sigma_{\mu}>0$ to update the primal and dual variables, respectively.
The gradient $\partial{f}/\partial{\boldsymbol{u}}$ in \eqref{iter:u} can be easily computed, and the voltage $\boldsymbol{v}(t)$ in \eqref{iter:underline}--\eqref{iter:overline} can be conveniently measured, both locally at each node $i\in\mathcal{N}$.
Therefore, the remaining key challenge is to compute the gradient $\partial{\boldsymbol{v}}/\partial{\boldsymbol{u}}$ that potentially couples all the nodes in the power network. This will be addressed in the next section.
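To make the iteration \eqref{iter} concrete, the following sketch (our illustration, not the paper's implementation) runs the projected primal-dual updates on a made-up two-node example, with a linear map $\boldsymbol{v}(\boldsymbol{u}) = \boldsymbol{v}_0 + R\boldsymbol{u}$ standing in for the measured voltage feedback:

```python
def clip(z, lo, hi):
    return max(lo, min(hi, z))

def primal_dual_iterate(v_fun, grad_f, dv_du, n, u_box, v_box,
                        sigma_u=0.1, sigma_mu=0.2, eps=1e-3, iters=20000):
    """Projected primal-dual iteration on an n-node toy system.
    v_fun(u) plays the role of the measured voltage profile, and
    dv_du[j][h] approximates dv_j/du_h (u collects active injections only)."""
    u = [0.0] * n
    mu_lo, mu_hi = [0.0] * n, [0.0] * n
    (u_min, u_max), (v_min, v_max) = u_box, v_box
    for _ in range(iters):
        v = v_fun(u)
        # Primal step: cost gradient plus sensitivity-weighted dual prices,
        # projected onto the box of feasible injections.
        g = [grad_f(u)[h] + sum(dv_du[j][h] * (mu_hi[j] - mu_lo[j])
                                for j in range(n)) for h in range(n)]
        u = [clip(u[h] - sigma_u * g[h], u_min, u_max) for h in range(n)]
        # Dual steps: projected ascent with regularization eps.
        mu_lo = [max(0.0, mu_lo[j] + sigma_mu * (v_min - v[j] - eps * mu_lo[j]))
                 for j in range(n)]
        mu_hi = [max(0.0, mu_hi[j] + sigma_mu * (v[j] - v_max - eps * mu_hi[j]))
                 for j in range(n)]
    return u, mu_lo, mu_hi

# Made-up 2-node example: quadratic cost f(u) = ||u||^2 / 2, linear voltages.
R = [[0.2, 0.1], [0.1, 0.3]]
v0 = [0.94, 0.93]
v_of_u = lambda u: [v0[j] + sum(R[j][h] * u[h] for h in range(2))
                    for j in range(2)]
u, mu_lo, mu_hi = primal_dual_iterate(
    v_of_u, grad_f=lambda u: list(u), dv_du=R, n=2,
    u_box=(-1.0, 1.0), v_box=(0.95, 1.05))
```

With the regularization $\epsilon$, the iterates approach an approximate saddle point, so the recovered voltages satisfy the bounds only up to an $O(\epsilon)$ error.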
\section{Improved Gradient Evaluation}
\label{sec:method}
\subsection{Prior work and motivation for improvement}
In power flow equations \eqref{powerequation}, the nonlinear and implicit dependence of voltage $\boldsymbol{v}$ on power injections $\boldsymbol{u}$ makes it difficult to quickly and precisely compute the gradient $\partial{\boldsymbol{v}}/\partial{\boldsymbol{u}}$.
Reference \cite{gan2016online} proposed a backward-forward sweep method for gradient calculation, which is computationally inefficient in large networks. In contrast, prior work \cite{zhou2019hierarchical} considered the linearized power flow model in \cite{baran1989optimalC, farivar2013branch}:
\begin{subequations}\label{simppowerequation}
\begin{alignat}{2}
\hat{P}_{i j} &=-p_{j}+\sum_{k:(j, k) \in \mathcal{E}} \hat{P}_{j k},~\forall j\in\mathcal{N} \label{simppowerequation:1}\\
\hat{Q}_{i j} &=-q_{j}+\sum_{k:(j, k) \in \mathcal{E}} \hat{Q}_{j k},~\forall j\in\mathcal{N} \label{simppowerequation:2}\\
\hat{v}_{j} &=\hat{v}_{i}-2\left(r_{i j} \hat{P}_{i j}+x_{i j} \hat{Q}_{i j}\right),~\forall (i,j)\in\mathcal{E} \label{simppowerequation:3}
\end{alignat}
\end{subequations}
which yields the following approximation of $\partial{\boldsymbol{v}}/\partial{\boldsymbol{u}}$:
\begin{subequations}\label{linear-gradient}
\begin{alignat}{2}
\frac{\partial \hat{v}_{j}(\boldsymbol{p},\boldsymbol{q})}{\partial{p_{h}}} = R_{j h}:=\sum_{(i, k) \in \mathbb{P}_{j \wedge h}} 2\cdot r_{i k},~\forall j,h \in \mathcal{N} \label{linear-gradient:p}\\
\frac{\partial \hat{v}_{j}(\boldsymbol{p},\boldsymbol{q})}{\partial{q_{h}}} = X_{j h}:=\sum_{(i, k) \in \mathbb{P}_{j \wedge h}} 2\cdot x_{i k},~\forall j,h \in \mathcal{N} \label{linear-gradient:q}
\end{alignat}
\end{subequations}
where $\mathbb{P}_{j \wedge h}$ denotes the common part of the unique paths from nodes $j$ and $h$ back to the root node.
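The path sums in \eqref{linear-gradient} are straightforward to compute from the tree's parent pointers. The sketch below (our illustration; the feeder topology and resistances are made up) builds $R_{jh}$ by intersecting the paths of nodes $j$ and $h$ back to the root:

```python
def path_to_root(parent, j):
    """Set of lines (i, k) on the unique path from node j to the root 0."""
    path = set()
    while j != 0:
        path.add((parent[j], j))
        j = parent[j]
    return path

def R_entry(parent, r, j, h):
    """R_{jh}: twice the total resistance of the lines shared by the
    paths of nodes j and h back to the root (X_{jh} is analogous)."""
    common = path_to_root(parent, j) & path_to_root(parent, h)
    return sum(2.0 * r[line] for line in common)

# Toy feeder 0-1-2 with a branch 1-3; resistances are made up.
parent = {1: 0, 2: 1, 3: 1}
r = {(0, 1): 0.01, (1, 2): 0.02, (1, 3): 0.03}
print(R_entry(parent, r, 2, 3))   # only line (0, 1) is shared: 2 * 0.01
```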
We illustrate the error of the linear model \eqref{simppowerequation} in a network of two nodes 0 and 1, with fixed voltage $v_0=\hat{v}_0$ at the root node 0.
Taking the difference between \eqref{powerequation:3} and \eqref{simppowerequation:3}:
\begin{eqnarray}
\hat{v}_{1}-v_{1} = 2\left(r_{01}(P_{01}\!-\!\hat{P}_{01})+x_{01}(Q_{01}\!-\!\hat{Q}_{01})\right)-\left|z_{01}\right|^{2} \ell_{01}&&\nonumber \\
=\left|z_{01}\right|^{2} \ell_{01}.\qquad\qquad\qquad\qquad\qquad\qquad\qquad && \nonumber
\end{eqnarray}
The equation above, which uses $P_{01}-\hat{P}_{01}=r_{01}\ell_{01}$ and $Q_{01}-\hat{Q}_{01}=x_{01}\ell_{01}$ from \eqref{powerequation:1}--\eqref{powerequation:2} and \eqref{simppowerequation:1}--\eqref{simppowerequation:2}, shows that the voltage computed with the linear model \eqref{simppowerequation} is higher than that under the accurate nonlinear model \eqref{powerequation}. In a more complex network, such errors accumulate as we trace the nodes farther away from the root.
Therefore, under the power injections $(\boldsymbol{p}, \boldsymbol{q})$ solved by \eqref{iter} with the linear model \eqref{simppowerequation} and the approximate gradient \eqref{linear-gradient}, the actual voltages may already drop below their lower bounds even though the model optimistically estimates them to be safe. The need to prevent such voltage violation motivates us to develop an improved gradient evaluation method, as elaborated below.
\subsection{Improved gradient evaluation}
Consider an arbitrary node $h\in\mathcal{N}$, and $u_h := (p_h,q_h)$. We take the partial derivatives of the variables in the nonlinear power flow model \eqref{powerequation}:
\begin{subequations}\label{derive}
\begin{alignat}{2}
\frac{\partial P_{i j}}{\partial u_{h}} =&-\frac{\partial p_{j}}{\partial u_{h}}+\sum_{k:(j, k) \in \mathcal{E}} \frac{\partial P_{j k}}{\partial u_{h}}+r_{i j} \frac{\partial \ell_{i j}}{\partial u_{h}}, ~\forall j\in\mathcal{N} \\
\frac{\partial Q_{i j}}{\partial u_{h}} =&-\frac{\partial q_{j}}{\partial u_{h}}+\sum_{k:(j, k) \in \mathcal{E}} \frac{\partial Q_{j k}}{\partial u_{h}}+x_{i j} \frac{\partial \ell_{i j}}{\partial u_{h}}, \forall j\in\mathcal{N} \\
\frac{\partial v_{j}}{\partial u_{h}} =&\frac{\partial v_{i}}{\partial u_{h}}\!-\!2\left(r_{i j}\frac{\partial P_{i j}}{\partial u_{h}} \!+\!x_{i j}\frac{\partial Q_{i j}}{\partial u_{h}} \right) \!+\!\left|z_{i j}\right|^{2} \frac{\partial \ell_{i j}}{\partial u_{h}} \label{derive:c}\\
\frac{\partial\ell_{i j}}{\partial u_{h}} =&\frac{2P_{i j}}{v_{i}}\frac{\partial P_{ij}}{\partial u_{h}}+\frac{2Q_{i j}}{v_{i}}\frac{\partial Q_{ij}}{\partial u_{h}}-\frac{\ell_{ij}}{v_{i}}\frac{\partial v_{i}}{\partial u_{h}},~\forall (i,j)\in\mathcal{E}. \label{derive:d}
\end{alignat}
\end{subequations}
The exact partial derivatives are hard to solve from \eqref{derive} due to their complex interdependence. To facilitate computation, we first simplify \eqref{derive:d} by replacing the partial derivatives with their approximations under the linear model \eqref{simppowerequation}:
\begin{alignat}{2} \label{deriveell}
\frac{\partial\hat{\ell}_{i j}}{\partial u_{h}} &=\frac{2P_{i j}}{v_{i}}\frac{\partial \hat{P}_{ij}}{\partial u_{h}}+\frac{2Q_{i j}}{v_{i}}\frac{\partial \hat{Q}_{ij}}{\partial u_{h}}-\frac{\ell_{ij}}{v_{i}}\frac{\partial \hat{v}_{i}}{\partial u_{h}}.
\end{alignat}
We recall the following results from \cite{gan2016online} for all the nodes $h\in\mathcal{N}$ and lines $(i,j)\in\mathcal{E}$:
\begin{subequations} \label{lineargradient-PQ}
\begin{alignat}{2}
\frac{\partial{\hat{P}_{i j}}}{\partial p_h} &=-\mathds{1}\left(j \in \mathbb{P}_{h}\right),\quad & \frac{\partial{\hat{P}_{i j}}}{\partial q_h} &=0 \\
\frac{\partial{\hat{Q}_{i j}}}{\partial q_h} &=-\mathds{1}\left(j \in \mathbb{P}_{h}\right), \quad & \frac{\partial{\hat{Q}_{i j}}}{\partial p_h} &=0
\end{alignat}
\end{subequations}
where $\mathds{1}(j \in \mathbb{P}_{h})$ is an indicator that equals 1 if node $j$ lies on the unique path from node $h$ to the root, and 0 otherwise.
By \eqref{lineargradient-PQ} and \eqref{linear-gradient}, we can calculate $\partial\hat{\boldsymbol{\ell}}/\partial {\boldsymbol{u}}$ in \eqref{deriveell} as follows:
\begin{subequations}\label{deriveellresult}
\begin{alignat}{2}
\frac{\partial\hat{\ell}_{i j}}{\partial p_{h}} &=-\frac{1}{v_{i}}\left(2 P_{i j}\cdot \mathds{1}(j \in \mathbb{P}_{h})+\ell_{ij}R_{i h}\right)\\
\frac{\partial\hat{\ell}_{i j}}{\partial q_{h}} &=-\frac{1}{v_{i}}\left(2 Q_{i j}\cdot \mathds{1}(j \in \mathbb{P}_{h})+\ell_{ij}X_{i h}\right).
\end{alignat}
\end{subequations}
The results in \eqref{linear-gradient}, \eqref{lineargradient-PQ}, \eqref{deriveellresult} can replace the partial derivatives on the right-hand side (RHS) of \eqref{derive:c} to obtain an \emph{improved gradient evaluation} $\partial{\boldsymbol{v}}/\partial{\boldsymbol{u}}$ as:
\begin{subequations}\label{derivesecond}
\begin{alignat}{2}
&\frac{\partial v_{j}}{\partial p_{h}} =\frac{\partial \hat{v}_{i}}{\partial p_{h}}-2\left(r_{i j}\frac{\partial \hat{P}_{i j}}{\partial p_{h}} +x_{i j}\frac{\partial \hat{Q}_{i j}}{\partial p_{h}} \right)+\left|z_{i j}\right|^{2} \frac{\partial \hat{\ell}_{i j}}{\partial p_{h}} \nonumber \\
& =\left(1\!-\!\frac{\left|z_{ij}\right|^{2}\ell_{ij}}{v_{i}}\right)R_{ih}+2\left(r_{ij}\!-\!\frac{\left|z_{ij}\right|^{2}P_{ij}}{v_{i}}\right)\cdot \mathds{1}(j \in \mathbb{P}_{h})\label{derivesecond:p} \\
&\frac{\partial v_{j}}{\partial q_{h}} =\frac{\partial \hat{v}_{i}}{\partial q_{h}}-2\left(r_{i j}\frac{\partial \hat{P}_{i j}}{\partial q_{h}} +x_{i j}\frac{\partial \hat{Q}_{i j}}{\partial q_{h}} \right)+\left|z_{i j}\right|^{2} \frac{\partial \hat{\ell}_{i j}}{\partial q_{h}} \nonumber \\
& =\left(1\!-\!\frac{\left|z_{ij}\right|^{2}\ell_{ij}}{v_{i}}\right)X_{ih}+2\left(x_{ij}\!-\!\frac{\left|z_{ij}\right|^{2}Q_{ij}}{v_{i}}\right)\cdot \mathds{1}(j \in \mathbb{P}_{h}) \label{derivesecond:q}
\end{alignat}
\end{subequations}
where node $i$ is the unique parent of node $j$ in the directed tree network.
By \eqref{linear-gradient:p}, we have $R_{jh} = R_{ih}+ 2r_{ij}\mathds{1}(j \in \mathbb{P}_{h})$, which converts the result in \eqref{derivesecond:p} into:
\begin{eqnarray}\label{derivesecond:rewrite:p}
\frac{\partial v_{j}}{\partial p_{h}} = R_{jh} -\frac{\left|z_{ij}\right|^{2}\ell_{ij}}{v_{i}} R_{ih}-\frac{2\left|z_{ij}\right|^{2}P_{ij}}{v_{i}} \mathds{1}(j \in \mathbb{P}_{h}).
\end{eqnarray}
The first term on the RHS of \eqref{derivesecond:rewrite:p} is the same as \eqref{linear-gradient:p} derived from the linear model \eqref{simppowerequation}, while the second and third terms compensate for the effects of the quadratic terms neglected in \eqref{simppowerequation}.
The improved partial derivatives \eqref{derivesecond:q} with respect to the reactive power injections $q_h$ have the same structure as \eqref{derivesecond:p}, so we omit the details.
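The improved partial derivative \eqref{derivesecond:p} uses only local line measurements $\ell_{ij}$, $P_{ij}$, $v_i$ on top of the path-resistance terms. The sketch below (our illustration; the feeder data are made up) implements it and checks the equivalent form \eqref{derivesecond:rewrite:p}:

```python
def path_lines(parent, n):
    """Lines (i, k) on the unique path from node n to the root 0."""
    lines = set()
    while n != 0:
        lines.add((parent[n], n))
        n = parent[n]
    return lines

def on_path(parent, j, h):
    """True iff node j lies on the unique path from node h to the root."""
    while h != 0:
        if h == j:
            return True
        h = parent[h]
    return False

def R_entry(parent, r, j, h):
    """R_{jh}: twice the total resistance of the shared path of j and h."""
    common = path_lines(parent, j) & path_lines(parent, h)
    return sum(2.0 * r[line] for line in common)

def dv_dp_improved(parent, r, x, ell, P, v, j, h):
    """Improved dv_j/dp_h, with i the unique parent of j; ell and P are
    line measurements and v holds squared nodal voltage magnitudes."""
    i = parent[j]
    z2 = r[(i, j)] ** 2 + x[(i, j)] ** 2          # |z_ij|^2
    lin = (1.0 - z2 * ell[(i, j)] / v[i]) * R_entry(parent, r, i, h)
    corr = 2.0 * (r[(i, j)] - z2 * P[(i, j)] / v[i]) * on_path(parent, j, h)
    return lin + corr

# Made-up two-line feeder 0-1-2 with measured ell, P and squared voltages v.
parent = {1: 0, 2: 1}
r = {(0, 1): 0.01, (1, 2): 0.02}
x = {(0, 1): 0.02, (1, 2): 0.01}
ell = {(0, 1): 0.3, (1, 2): 0.1}
P = {(0, 1): 0.5, (1, 2): 0.2}
v = {0: 1.0, 1: 0.98}
```

Note that the correction terms are subtractive for forward power flows, so the improved derivative is smaller than the purely linear sensitivity $R_{jh}$, consistent with the linear model's optimistic voltage estimates.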
\section{Hierarchical OPF Algorithm}\label{sec:algorithm}
Based on the primal-dual framework \eqref{iter} and the improved gradient evaluation \eqref{derivesecond}, we design a scalable hierarchical algorithm to solve the OPF problem \eqref{OPF}. The proposed algorithm utilizes the subtree structure of radial distribution networks discovered in \cite{zhou2019accelerated, zhou2019hierarchical}, while achieving more accurate and safer voltage regulation.
The distribution tree network $\mathcal{T}:=\{\mathcal{N}^{+}, \mathcal{E}\}$ is composed of subtrees
$\mathcal{T}_{k}=\left\{\mathcal{N}_{k}, \mathcal{E}_{k}\right\}$ indexed by $k \in \mathcal{K}=\{1, \ldots, K\}$ and a set of nodes $\mathcal{N}_{0}$ that are not clustered into any subtree. Let $n_{k}^{0}$ denote the root node of subtree $k$, which is the node in $\mathcal{N}_{k}$ that is nearest to the root node of the whole network.
Given a distribution network, the division of subtrees and unclustered nodes may not be unique, but we assume it always satisfies the following conditions.
\begin{assumption}\label{ass:nonoverlapping}
The subtrees are \emph{non-overlapping}, i.e., $\mathcal{N}_{k_1} \cap \mathcal{N}_{k_2}=\emptyset$ for any $k_1, k_2\in\mathcal{K}$, $k_1\neq k_2$.
\end{assumption}
\begin{assumption}\label{ass:notonpath}
For any subtree root node $n_{k}^{0}$, $k\in \mathcal{K}$ or any unclustered node $n\in \mathcal{N}_0$, its path to the network root node only goes through a subset of nodes in $\mathcal{N}_0$, but not any node in another subtree.
\end{assumption}
\begin{figure}
\centering
\includegraphics[width = 0.49\columnwidth]{ieee37a.png}
\hfil
\includegraphics[width = 0.49\columnwidth]{ieee37b.png}
\caption{The clustering of the IEEE 37-node network on the left satisfies Assumption \ref{ass:notonpath}. The one on the right does not, because the path from a subtree root node 730 to the network root 799 goes through another subtree.}
\label{ieee37}
\end{figure}
The left subfigure of Figure \ref{ieee37} shows a clustering of the IEEE 37-node network that satisfies Assumption \ref{ass:notonpath}, while the one in the right subfigure does not.
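Assumption \ref{ass:notonpath} is easy to verify programmatically for a proposed clustering. The sketch below (our illustration; it assumes each subtree is connected, and the clustering dictionaries are made up) walks each subtree root's and each unclustered node's path to the root:

```python
def satisfies_assumption2(parent, cluster):
    """cluster[n] = subtree index of node n, or 0 if n is unclustered.
    Check that the path from every subtree root (and every unclustered
    node) to the network root 0 passes only through unclustered nodes.
    Assumes each subtree is a connected subgraph of the feeder tree."""
    roots = {}
    for n, k in cluster.items():
        if k != 0 and cluster.get(parent[n], 0) != k:
            roots[k] = n   # clustered node whose parent is outside the subtree
    to_check = list(roots.values()) + [n for n, k in cluster.items() if k == 0]
    for n in to_check:
        m = parent[n]
        while m != 0:
            if cluster.get(m, 0) != 0:   # path crosses a clustered node
                return False
            m = parent[m]
    return True

# Made-up chain 0-1-2-3-4 with one subtree {2, 3, 4}: assumption holds.
parent_ok = {1: 0, 2: 1, 3: 2, 4: 3}
cluster_ok = {1: 0, 2: 1, 3: 1, 4: 1}
# Made-up chain 0-1-2-3 where subtree {3}'s path crosses subtree {1}: fails.
parent_bad = {1: 0, 2: 1, 3: 2}
cluster_bad = {1: 1, 2: 0, 3: 2}
```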
The distribution system operator plays the role of the central controller (CC). It is sufficient for the CC to know the structure and parameters of the \emph{backbone network}, which only connects the unclustered nodes $\mathcal{N}_{0}$ and the subtree root nodes $\left\{n_{k}^0,~ k\in \mathcal{K}\right\}$. The CC also maintains two-way communications to receive/send/relay information from/to/between the backbone network nodes.
Each subtree network $k\in\mathcal{K}$ is known to and managed by the $k$-th regional controller (RC $k$), which communicates with the nodes in subtree $k$, the parent node of the subtree root $n_k^0$, and the CC.
With the settings above, we now explain the hierarchical structure underlying the vector $\left(\frac{\partial \boldsymbol{v}}{\partial \boldsymbol{u} }\right)^{\top} \left(\overline{\boldsymbol{\mu}}-\underline{\boldsymbol{\mu}}\right)$ in \eqref{iter:u}.
Without loss of generality, we only consider the element of this vector corresponding to $p_h$ for a particular node $h\in\mathcal{N}$, which can be discussed in two cases below. The element corresponding to $q_h$ can be calculated in the same manner.
\textit{Case 1:} If $h\in \mathcal{N}_k$ is in a subtree $k\in\mathcal{K}$, we have:
\begin{eqnarray}
&&\quad \sum_{j \in \mathcal{N}}\frac{\partial v_{j}}{\partial p_{h}}\cdot\left(\overline{\mu}_{j}-\underline{\mu}_{j}\right) =\sum_{j \in \mathcal{N}_{k}}\frac{\partial v_{j}}{\partial p_{h}}\cdot\left(\overline{\mu}_{j}-\underline{\mu}_{j}\right) \nonumber\\
&&\qquad +\sum_{k' \in \mathcal{K}\backslash\{k\}}~\sum_{j \in \mathcal{N}_{k'}}\frac{\partial v_{j}}{\partial p_{h}}\cdot\left(\overline{\mu}_{j}-\underline{\mu}_{j}\right)
\nonumber
\\ && \qquad +\sum_{j \in \mathcal{N}_{0}}\frac{\partial v_{j}}{\partial p_{h}}\cdot\left(\overline{\mu}_{j}-\underline{\mu}_{j}\right).\label{eq:decomposed_dvdp}
\end{eqnarray}
The first term on the RHS of \eqref{eq:decomposed_dvdp} sums over nodes $j$ in the same subtree $k$ as node $h$. It can be calculated using \eqref{derivesecond:p} by RC $k$, and then sent to node $h$ which requires this information to compute \eqref{eq:decomposed_dvdp} and carry out \eqref{iter:u}. In particular, the calculation of $\partial v_j/\partial p_h$ requires RC $k$ to receive parameters $R_{ih}$, $z_{ij}$ and measurements $\ell_{ij}$, $P_{ij}$, $v_i$ from line $(i,j)$ that connects node $j$ to its parent node $i$.
The second term on the RHS of \eqref{eq:decomposed_dvdp} sums over nodes $j$ in all the subtrees $k'$ except subtree $k$ that hosts node $h$. By Assumption \ref{ass:notonpath}, we must have $j\notin \mathbb{P}_{h}$, which simplifies \eqref{derivesecond:p} and leads to:
\begin{eqnarray}
\sum_{k' \in \mathcal{K}\backslash\{k\}}~\sum_{j \in \mathcal{N}_{k'}}\frac{\partial v_{j}}{\partial p_{h}}\cdot\left(\overline{\mu}_{j}-\underline{\mu}_{j}\right)\qquad\qquad\qquad&& \nonumber \\
= \sum_{k' \in \mathcal{K}\backslash\{k\}}
R_{n_{k}^{0}n_{k'}^{0}}\sum_{j \in \mathcal{N}_{k'}}\left(1\!-\!\frac{\left|z_{ij}\right|^{2}\ell_{ij}}{v_{i}}\right)\cdot\left(\overline{\mu}_{j}\!-\!\underline{\mu}_{j}\right).&& \label{NkNt}
\end{eqnarray}
In \eqref{NkNt}, the sum over $j\in \mathcal{N}_{k'}$ can be calculated by RC $k'$. The CC receives such sums from all the RCs $k'\in \mathcal{K}\backslash\{k\}$, adds them up after weighting by $R_{n_{k}^{0}n_{k'}^{0}}$, and sends the result to RC $k$. Upon receiving this result, RC $k$ broadcasts it to all the nodes in subtree $k$, including node $h$ which requires this information to compute \eqref{eq:decomposed_dvdp} and carry out \eqref{iter:u}.
The third term on the RHS of \eqref{eq:decomposed_dvdp} sums over all the unclustered nodes $j\in\mathcal{N}_0$. It can be calculated using \eqref{derivesecond:p} by the CC and then relayed by RC $k$ to node $h$. In particular, $R_{ih} = R_{i n_k^0}$ and $\mathds{1}(j\in\mathbb{P}_{h}) = \mathds{1}(j\in\mathbb{P}_{n_k^0})$ are known to the CC for the calculation of \eqref{derivesecond:p}.
\textit{Case 2:} If $h\in \mathcal{N}_0$ is an unclustered node, we have:
\begin{eqnarray}
\sum_{j \in \mathcal{N}}\frac{\partial v_{j}}{\partial p_{h}}\cdot\left(\overline{\mu}_{j}-\underline{\mu}_{j}\right) &=& \sum_{k \in \mathcal{K}}~\sum_{j \in \mathcal{N}_{k}}\frac{\partial v_{j}}{\partial p_{h}}\cdot\left(\overline{\mu}_{j}-\underline{\mu}_{j}\right)
\nonumber
\\ && +\sum_{j \in \mathcal{N}_{0}}\frac{\partial v_{j}}{\partial p_{h}}\cdot\left(\overline{\mu}_{j}-\underline{\mu}_{j}\right). \label{eq:decomposed_dvdp_unclustered}
\end{eqnarray}
The first term on the RHS of \eqref{eq:decomposed_dvdp_unclustered} sums over nodes $j$ in all the subtrees $k\in\mathcal{K}$. By Assumption \ref{ass:notonpath}, we must have $j\notin \mathbb{P}_{h}$, which simplifies \eqref{derivesecond:p} and leads to:
\begin{eqnarray}
\sum_{k \in \mathcal{K}}~\sum_{j \in \mathcal{N}_{k}}\frac{\partial v_{j}}{\partial p_{h}}\cdot\left(\overline{\mu}_{j}-\underline{\mu}_{j}\right)\qquad\qquad\qquad&& \nonumber \\
= \sum_{k \in \mathcal{K}}
R_{h n_{k}^{0}}\sum_{j \in \mathcal{N}_{k}}\left(1-\frac{\left|z_{ij}\right|^{2}\ell_{ij}}{v_{i}}\right)\cdot\left(\overline{\mu}_{j}-\underline{\mu}_{j}\right).&& \label{NkNt_unclustered}
\end{eqnarray}
In \eqref{NkNt_unclustered}, the sum over $j\in \mathcal{N}_{k}$ can be calculated by RC $k$. The CC receives such sums from all the RCs $k\in \mathcal{K}$, adds them up after weighting by $R_{h n_{k}^{0}}$, and sends the result to node $h$ for subsequent computations of \eqref{eq:decomposed_dvdp_unclustered} and \eqref{iter:u}.
The second term on the RHS of \eqref{eq:decomposed_dvdp_unclustered} sums over all the unclustered nodes $j\in\mathcal{N}_0$. It can be calculated using \eqref{derivesecond:p} by the CC and then sent to node $h$. In particular, $R_{ih}$ and $\mathds{1}(j\in\mathbb{P}_{h})$ are known to the CC for the calculation of \eqref{derivesecond:p}, since node $j$, its parent node $i$, and node $h$ are all in the backbone network.
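The message pattern behind \eqref{NkNt} and \eqref{NkNt_unclustered} is that each RC reports a single scalar per iteration, which the CC reweights per destination node. A sketch (our illustration, with made-up toy data):

```python
def rc_report(lines_k, z2, ell, v, mu_hi, mu_lo):
    """Scalar that RC k sends to the CC: the subtree's dual variables
    weighted by (1 - |z_ij|^2 ell_ij / v_i), summed over the lines
    (i, j) connecting each node j in the subtree to its parent i."""
    return sum((1.0 - z2[(i, j)] * ell[(i, j)] / v[i]) * (mu_hi[j] - mu_lo[j])
               for (i, j) in lines_k)

def cc_aggregate(reports, R_h_root):
    """Weighted sum the CC sends back to a destination node h, where
    R_h_root[k] is the mutual resistance term between h and subtree k's root."""
    return sum(R_h_root[k] * s for k, s in reports.items())

# Made-up single-subtree example: subtree 1 = {1, 2} on lines (0,1), (1,2).
z2 = {(0, 1): 0.001, (1, 2): 0.001}
ell = {(0, 1): 0.1, (1, 2): 0.1}
v = {0: 1.0, 1: 1.0}
mu_hi = {1: 0.5, 2: 0.0}
mu_lo = {1: 0.0, 2: 0.2}
report = rc_report([(0, 1), (1, 2)], z2, ell, v, mu_hi, mu_lo)
total = cc_aggregate({1: report}, {1: 0.02})
```

Each RC thus compresses its subtree's dual information into one number per destination weight, which is what makes the hierarchical scheme communication-efficient.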
To summarize, the computation of the key term $\left(\frac{\partial \boldsymbol{v}}{\partial \boldsymbol{u} }\right)^{\top} \left(\overline{\boldsymbol{\mu}}-\underline{\boldsymbol{\mu}}\right)$ in \eqref{iter:u} can be performed through the coordination of the CC and the RCs in a hierarchical manner.
This inspires our design of Algorithm 1 to solve the OPF problem \eqref{OPF}.
Compared to the conventional centralized primal-dual gradient method, the hierarchical Algorithm 1 reduces computational complexity, as analyzed in \cite{zhou2019accelerated}. Due to space limitations, the convergence proof of Algorithm 1 is deferred to the journal version of this work.
\begin{algorithm*}
\caption{Hierarchical OPF Algorithm} \label{algCCRC}
\begin{algorithmic}[1]
\Repeat
\State At time step $t$, every node $h\in \mathcal{N}$ performs local update of primal variables:
\begin{subequations}
\begin{alignat}{2}
p_{h}(t+1)=\left[p_{h}(t)-\sigma_u\left(\frac{\partial f_{h}\left({p}_h(t), {q}_h(t)\right)}{\partial p_{h}}+\alpha_{h}(t)\right) \right]_{\mathcal{Y}_{h}} \nonumber\\
q_{h}(t+1)=\left[q_{h}(t)-\sigma_u\left(\frac{\partial f_{h}\left({p}_h(t), {q}_h(t)\right)}{\partial q_{h}}+\beta_{h}(t)\right) \right]_{\mathcal{Y}_{h}} \nonumber
\end{alignat}
\end{subequations}
where $\alpha_{h}(t)= \sum_{j \in \mathcal{N}}\frac{\partial v_{j}\left(\boldsymbol{u}(t)\right)}{\partial p_{h}}\cdot\left(\overline{\mu}_{j}(t)-\underline{\mu}_{j}(t)\right)$ is given by \eqref{eq:decomposed_dvdp} if $h\in\mathcal{N}_k$ is in subtree $k$, or \eqref{eq:decomposed_dvdp_unclustered} if $h\in\mathcal{N}_0$ is unclustered; and $\beta_{h}(t)= \sum_{j \in \mathcal{N}}\frac{\partial v_{j}\left(\boldsymbol{u}(t)\right)}{\partial q_{h}}\cdot\left(\overline{\mu}_{j}(t)-\underline{\mu}_{j}(t)\right)$. The updated decisions $\boldsymbol{u}(t+1)=\left(\boldsymbol{p}(t+1), \boldsymbol{q}(t+1)\right)$ are actuated by controllable devices.
\State Based on local voltage measurement $v_h(t)$, every node $h\in\mathcal{N}$ updates its dual variables locally:
\begin{alignat}{2}
\underline{\mu}_{h}(t+1)=\left[\underline{\mu}_{h}(t)+\sigma_{\mu}\left(\underline{v}_{h}-v_{h}(t)-\epsilon \underline{\mu}_{h}(t)\right)\right]_{+}, \quad \overline{\mu}_{h}(t+1)=\left[\overline{\mu}_{h}(t)+\sigma_{\mu}\left(v_{h}(t)-\overline{v}_{h}-\epsilon \overline{\mu}_{h}(t)\right)\right]_{+}. \nonumber
\end{alignat}
\State Every RC $k\in\mathcal{K}$ calculates the following weighted sum of dual variables and sends it to the CC:
\begin{alignat}{2}
\sum_{j \in \mathcal{N}_{k}}\left(1-\frac{\left|z_{ij}\right|^{2}\ell_{ij}}{v_{i}}\right)\cdot\left(\overline{\mu}_{j}(t+1)-\underline{\mu}_{j}(t+1)\right). \nonumber
\end{alignat}
\State The CC computes the second and third terms on the RHS of \eqref{eq:decomposed_dvdp} for each destination node $h\in\mathcal{N}_k$ in a subtree $k$, and computes the first and second terms on the RHS of \eqref{eq:decomposed_dvdp_unclustered} for each unclustered destination node $h\in\mathcal{N}_0$. The CC adds up those terms and sends the result to the corresponding RC $k$ that hosts the destination node $h\in\mathcal{N}_k$, or to $h\in\mathcal{N}_0$ directly if it is unclustered.
The terms needed for computing $\beta_h(t+1)$ are calculated and sent by the CC similarly.
\State If $h \in \mathcal{N}_0$ is unclustered, it obtains $\alpha_h(t+1)$ directly as \eqref{eq:decomposed_dvdp_unclustered}. Otherwise, if $h\in\mathcal{N}_k$, RC $k$ computes the first term on the RHS of \eqref{eq:decomposed_dvdp}, adds it to the result received from the CC in step 5, and sends the result $\alpha_h(t+1)$ to the destination node $h$. The term $\beta_h(t+1)$ is obtained similarly.
\Until $\left\|\boldsymbol{u}(t+1)-\boldsymbol{u}(t)\right\|_2 < \delta$ for some preset threshold $\delta>0$, or a maximum number of iterations is reached.
\State Otherwise $t\leftarrow (t+1)$ and go back to step 2.
\end{algorithmic}
\end{algorithm*}
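As an illustration of the local dual-variable update above, the following sketch applies that single step to made-up voltage measurements. The voltage limits and the dual step size $\sigma_\mu$ match the experiment setup below; the regularization constant $\epsilon$ and the node voltages are illustrative placeholders, not values from the paper.

```python
import numpy as np

def dual_update(mu_lo, mu_hi, v, v_lo=0.95, v_hi=1.05, sigma_mu=1e-3, eps=1e-4):
    """One regularized dual ascent step for the voltage-limit multipliers.

    Each node moves its multipliers along the measured constraint violation
    and projects onto [0, +inf). Limits and sigma_mu follow the experiment
    setup in this paper; eps is an illustrative placeholder.
    """
    mu_lo_next = np.maximum(mu_lo + sigma_mu * (v_lo - v - eps * mu_lo), 0.0)
    mu_hi_next = np.maximum(mu_hi + sigma_mu * (v - v_hi - eps * mu_hi), 0.0)
    return mu_lo_next, mu_hi_next

# Toy run: the first two voltages violate the lower limit, the third is safe.
v = np.array([0.93, 0.94, 1.00])
mu_lo = np.zeros(3)
mu_hi = np.zeros(3)
for _ in range(100):
    mu_lo, mu_hi = dual_update(mu_lo, mu_hi, v)
# Multipliers of the violated lower bounds become positive; the rest stay 0.
```

Only the multipliers of binding constraints activate, which is what steers the primal update toward restoring the violated voltages.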
\section{Numerical Experiments}\label{sec:simulation}
We conduct numerical experiments to demonstrate our improvement over the previous method \cite{zhou2019hierarchical} based on the linearized power flow model \eqref{simppowerequation}.
\subsection{Experiment setup}
\begin{figure}
\centering
\includegraphics[width = 1.00\columnwidth]{ieee123.png}
\caption{The clustering of IEEE 123-node network in our experiments.}
\label{ieee123}
\end{figure}
We consider the single-phase equivalent models of the IEEE 37-node and 123-node test networks. They are clustered into subtrees as shown by Figure \ref{ieee37} (left panel) and Figure \ref{ieee123}, respectively.
For ease of illustration, we make the following modifications to the original network models from the IEEE PES website (https://cmte.ieee.org/pes-testfeeders/resources/):
\begin{enumerate}
\item We model each multi-phase line as a single-phase line with the average impedance of multiple phases. We also convert the multi-phase wye- and delta-connected loads into single-phase loads by taking their average over multiple phases.
\item We multiply the original load data of the 37-node network by six, and those of the 123-node network by two, to create scenarios with serious voltage issues.
\item All the loads are treated as constant-power loads. Detailed models of capacitors, regulators, and breakers are not simulated.
\end{enumerate}
For each network, let $\mathcal{N}_L$ denote the set of nodes that host nonzero loads.
We denote the \emph{negative net} power injections from the IEEE load data by $(\underline{p}_{h},\underline{q}_{h})$ for all load nodes $h\in\mathcal{N}_L$, and treat them as the nominal injections or the most preferred injections.
The feasible power injection regions in \eqref{region} are defined as $\mathcal{Y}_h = \left\{(p_h,q_h) \mid \underline{p}_{h} \leq p_{h} \leq 0.3 \underline{p}_h<0, \underline{q}_{h} \leq q_{h} \leq 0.3\underline{q}_{h}<0\right\}$.
The OPF objective function in \eqref{OPF:obj} is defined as $f_{h}(p_h,q_h)=(p_{h}-\underline{p}_{h})^{2}+(q_{h}-\underline{q}_{h})^{2}$ for all $h\in\mathcal{N}_L$, which aims to minimize the disutility caused by the deviation from the nominal loads.
For each network, the voltage magnitude of the root (slack) node is fixed at $1.05$ per unit (p.u.). The lower and upper limits for safe voltage are set at $0.95$ p.u. and $1.05$ p.u., respectively.
The step sizes for primal and dual variable updates are empirically chosen as $\sigma_u = 2\times 10^{-3}$ and $\sigma_{\mu}=1\times 10^{-3}$, respectively.
For each given power injection $\boldsymbol{u}=\left(\boldsymbol{p},\boldsymbol{q}\right)$, an OpenDSS power flow simulation is performed to obtain the corresponding voltage $\boldsymbol{v}(\boldsymbol{u})$ and the associated line-flow quantities $(\boldsymbol{P},\boldsymbol{Q}, \boldsymbol{\ell})$, which are treated as the feedback signals measured from the power network. The Python 3.7 programs for OPF algorithms and the OpenDSS simulations are run on a laptop equipped with Intel Core i7-9750H CPU @ 2.6 GHz, 16 GB RAM, and Windows 10 Professional OS.
\subsection{Numerical results}
Due to space limitations, we only show the simulation results for the 123-node network model. The results for the 37-node network are similar.
\begin{figure}
\centering
\includegraphics[width=1.00\columnwidth]{Voltage_scatter_double_load.pdf}
\caption{The voltages in the IEEE 123-node network, in three cases: ``no control'' (using the nominal power injections); ``linear control'' (solving OPF with algorithm \cite{zhou2019hierarchical} based on linear power flow model); ``improved control'' (based on the proposed Algorithm 1 with improved gradient evaluation).}\label{fig:voltage_scatter}
\end{figure}
\textit{Voltage safety}: The voltage magnitudes at different nodes of the 123-node network are plotted in Figure \ref{fig:voltage_scatter}, for three cases. The first case, referred to as ``no control'', just takes the nominal power injections $(\underline {\boldsymbol{p}},\underline{\boldsymbol{q}})$ from the IEEE data, which causes severe violation of the voltage lower bound. The second case, referred to as ``linear control'', applies the primal-dual gradient algorithm in \cite{zhou2019hierarchical} based on the linearized power flow model \eqref{simppowerequation} to determine power injections, which enhances the overall voltage level but still leaves most nodes below the voltage lower bound. The third case, referred to as ``improved control (ours)'', solves the OPF with the proposed Algorithm 1 based on our improved gradient evaluation method, which effectively lifts all the voltages above the lower bound. This result verifies our analysis in Section \ref{sec:method} that the improved gradient evaluation can prevent voltage violation caused by the linear model that optimistically estimates the voltages to be safe.
\begin{figure}
\centering
\includegraphics[width=1.00\columnwidth]{Voltage_double_load.pdf}
\caption{The change of voltage at a node of the IEEE 123-node network, in the proposed Algorithm 1 (``improved control'') and the OPF algorithm in \cite{zhou2019hierarchical} based on the linear power flow model (``linear control'').}\label{fig:voltage_convergence_123}
\end{figure}
\textit{Computational efficiency}: Figure \ref{fig:voltage_convergence_123} shows the change of voltage at a node of the 123-node network (similar trends are displayed for the voltages at other nodes). This result verifies again that the voltage converges to a safe level under the proposed Algorithm 1, compared to the previous algorithm \cite{zhou2019hierarchical} that leads to a violation of voltage lower bound.
Moreover, Algorithm 1 converges in fewer iterations.
In terms of computation time, Algorithm 1 spends 205 seconds to complete 2,000 iterations, compared with 166 seconds for the previous algorithm in \cite{zhou2019hierarchical}. This is an acceptable increase, considering the improved voltage safety achieved by Algorithm 1.
\section{Conclusion}\label{sec:conclusion}
We proposed a hierarchical OPF algorithm based on a new gradient evaluation method that is more accurate than previous methods based on the linearized power flow model. The proposed method can prevent the previous voltage violation with moderate extra computations, as verified by numerical results on IEEE networks.
In future work, we will provide a formal convergence proof and complexity analysis of the proposed algorithm. We will also extend the proposed algorithm to multi-phase networks, as was done from \cite{zhou2019hierarchical} to \cite{zhou2019accelerated}.
\section{Introduction}
Arimoto \cite{ari} proposed a sequential algorithm for calculating the channel capacity $C$ of a discrete memoryless channel. Based on the Bayes probability, the algorithm performs an alternating minimization between the input probabilities and the reverse channel matrices. For an arbitrary channel matrix $\Phi$, the convergence of the Arimoto algorithm is proved and the convergence speed is evaluated. In the worst case, the convergence is of the $1/N$ order, and if the input distribution $\bm\lambda^\ast$ that achieves the channel capacity $C$ lies in the interior of the set $\Delta({\cal X})$ of input distributions, the convergence is exponential.
In this paper, we first consider the case of exponential convergence and evaluate the convergence speed. We show that there are cases of exponential convergence even if $\bm\lambda^\ast$ is on the boundary of $\Delta({\cal X})$. Moreover, we also consider convergence of the $1/N$ order, which is not dealt with in the previous studies. In particular, when the input alphabet size is $m=3$, we analyze the convergence of the $1/N$ order in detail and evaluate the convergence speed by the derivatives of the Kullback-Leibler divergence with respect to the input probabilities.
As a basic idea for evaluating the convergence speed, we consider that the function $F(\bm\lambda)$ which defines the Arimoto algorithm is a differentiable mapping from $\Delta(\cal X)$ to $\Delta(\cal X)$, and notice that the capacity achieving input distribution $\bm\lambda^\ast$ is the fixed point of $F(\bm\lambda)$. Then, the convergence speed is evaluated by analyzing the Taylor expansion of $F(\bm\lambda)$ about the fixed point $\bm\lambda=\bm\lambda^\ast$.
\section{Related works}
There have been many related works on the Arimoto algorithm, for example, extensions to different types of channels \cite{nai},\,\cite{rez},\,\cite{von}, acceleration of the Arimoto algorithm\,\cite{mat},\,\cite{yu}, and characterization of the Arimoto algorithm by divergence geometry \cite{csi2},\,\cite{mat},\,\cite{naj}. Focusing on the analysis of the convergence speed of the Arimoto algorithm, we see in \cite{ari},\cite{mat},\cite{yu} that the eigenvalues of the Jacobian matrix are calculated and the convergence speed is investigated in the case that $\bm\lambda^\ast$ is in the interior of $\Delta(\cal X)$.
In this paper, we consider the Taylor expansion of the defining function of the Arimoto algorithm. We calculate not only the Jacobian matrix of the first order term of the Taylor expansion, but also the Hessian matrix of the second order term, and examine the exponential or $1/N$-order convergence speed based on the Jacobian and Hessian matrices. Because our approach to the evaluation of the convergence speed is very fundamental, we hope that our results can be applied to all the existing works.
\section{Channel matrix and channel capacity}
Consider a discrete memoryless channel $X\rightarrow Y$ with the input source $X$ and the output source $Y$. Let ${\cal X}=\{x_1,\cdots,x_m\}$ be the input alphabet and ${\cal Y}=\{y_1,\cdots,y_n\}$ be the output alphabet.
The conditional probability that the output symbol $y_j$ is received when the input symbol $x_i$ was transmitted is denoted by $P^i_j=P(Y=y_j|X=x_i),\,i=1,\cdots,m, j=1,\cdots,n,$ and the row vector $P^i$ is defined by $P^i=(P^i_1,\cdots,P^i_n),\,i=1,\cdots,m$. The channel matrix $\Phi$ is defined by\\[-3mm]
\begin{align}
\label{eqn:thechannelmatrix}
\Phi=\begin{pmatrix}
\,P^1\,\\
\vdots\\
\,P^m\,
\end{pmatrix}
=\begin{pmatrix}
\,P^1_1 & \cdots & P^1_n\,\\
\vdots & & \vdots\\
\,P^m_1 & \cdots & P^m_n\,
\end{pmatrix}.
\end{align}
We assume that for any $j\,(j=1,\cdots,n)$ there exists at least one $i\,(i=1,\cdots,m)$ with $P^i_j>0$. This means that there are no useless output symbols.
The set of input probability distributions on the input alphabet ${\cal X}$ is denoted by $\Delta({\cal X})\equiv\{\bm\lambda=(\lambda_1,\cdots,\lambda_m)|\lambda_i\geq0,i=1,\cdots,m,\sum_{i=1}^m\lambda_i=1\}$. The interior of $\Delta({\cal X})$ is denoted by $\Delta({\cal X})^\circ\equiv\{\bm\lambda=(\lambda_1,\cdots,\lambda_m)\in\Delta({\cal X})\,|\,\lambda_i>0,\,i=1,\cdots,m\}$. Similarly, the set of output probability distributions on the output alphabet ${\cal Y}$ is denoted by $\Delta({\cal Y})\equiv\{Q=(Q_1,\cdots,Q_n)|Q_j\geq0,j=1,\cdots,n,\sum_{j=1}^nQ_j=1\}$.
Let $Q=\bm\lambda\Phi$ be the output distribution for the input distribution $\bm\lambda\in\Delta(\cal X)$, where the representation by components is $Q_j=\sum_{i=1}^m\lambda_iP^i_j,\,j=1,\cdots,n$, then the mutual information is defined by $I(\bm\lambda,\Phi)=\sum_{i=1}^m\sum_{j=1}^n\lambda_iP^i_j\log{P^i_j}/{Q_j}$. The channel capacity $C$ is defined by
\begin{align}
\label{eqn:Cdefinition}
C=\max_{\bm\lambda\in\Delta({\cal X})}I(\bm\lambda,\Phi).
\end{align}
The Kullback-Leibler divergence $D(Q\|Q')$ for two output distributions $Q=(Q_1,\cdots,Q_n),\,Q'=(Q'_1,\cdots,Q'_n)\in\Delta(\cal Y)$ is defined by
\begin{align}
D(Q\|Q')=\sum_{j=1}^nQ_j\log\ds\frac{Q_j}{Q'_j}.
\end{align}
The Kullback-Leibler divergence satisfies $D(Q\|Q')\geq0$, and $D(Q\|Q')=0$ if and only if $Q=Q'$ \cite{csi1}.
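These definitions translate directly into code. The following sketch (adopting the usual convention $0\log(0/q)=0$) also computes the mutual information via $I(\bm\lambda,\Phi)=\sum_i\lambda_i D(P^i\|\bm\lambda\Phi)$, which is just the definition of $I(\bm\lambda,\Phi)$ regrouped row by row:

```python
import numpy as np

def kl(Q, Qp):
    """Kullback-Leibler divergence D(Q||Q') in nats, with 0*log(0/q) = 0."""
    Q, Qp = np.asarray(Q, float), np.asarray(Qp, float)
    mask = Q > 0
    return float(np.sum(Q[mask] * np.log(Q[mask] / Qp[mask])))

def mutual_information(lam, Phi):
    """I(lambda, Phi) = sum_i lambda_i * D(P^i || lambda*Phi)."""
    lam, Phi = np.asarray(lam, float), np.asarray(Phi, float)
    Q = lam @ Phi
    return sum(l * kl(P, Q) for l, P in zip(lam, Phi) if l > 0)

# Binary symmetric channel with crossover 0.1: the uniform input achieves
# C = log 2 - H_b(0.1) = 0.36806... (in nats).
bsc = [[0.9, 0.1], [0.1, 0.9]]
I_uniform = mutual_information([0.5, 0.5], bsc)
```

The binary symmetric channel is used here only as a sanity check, since its capacity is known in closed form.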
An important proposition for investigating the convergence speed of the Arimoto algorithm is the Kuhn-Tucker condition on the input distribution $\bm\lambda=\bm\lambda^\ast$ to achieve the maximum of (\ref{eqn:Cdefinition}).
\medskip
{\it Theorem}\ (Kuhn-Tucker condition) In the maximization problem (\ref{eqn:Cdefinition}), a necessary and sufficient condition for the input distribution $\bm\lambda^\ast=(\lambda^\ast_1,\cdots,\lambda^\ast_m)\in\Delta({\cal X})$ to achieve the maximum is that there is a certain constant $\tilde{C}$ with
\begin{align}
\label{eqn:Kuhn-Tucker}
D(P^i\|\bm\lambda^\ast\Phi)\left\{\begin{array}{ll}=\tilde{C}, & {\mbox{\rm for}}\ i\ {\mbox{\rm with}}\ \lambda^\ast_i>0,\\
\leq \tilde{C}, & {\mbox{\rm for}}\ i\ {\mbox{\rm with}}\ \lambda^\ast_i=0.
\end{array}\right.
\end{align}
In (\ref{eqn:Kuhn-Tucker}), $\tilde{C}$ is equal to the channel capacity $C$.
\medskip
Since this Kuhn-Tucker condition is a necessary and sufficient condition, all the information about the capacity achieving input distribution $\bm\lambda^\ast$ can be derived from this condition.
\section{Arimoto algorithm for calculating channel capacity}
\subsection{Arimoto algorithm\,\cite{ari}}
A sequence of input distributions
\begin{align}
\{\bm\lambda^N=(\lambda^N_1,\cdots,\lambda^N_m)\}_ {N=0,1,\cdots}\subset\Delta({\cal X})
\end{align}
is defined by the Arimoto algorithm as follows. First, let $\bm\lambda^0=(\lambda^0_1,\cdots,\lambda^0_m)$ be an initial distribution taken in $\Delta(\cal X)^\circ$, i.e., $\lambda^0_i>0,\,i=1,\cdots,m$. Then, the Arimoto algorithm is given by the following recurrence formula;
\begin{align}
\lambda^{N+1}_i=\ds\frac{\lambda^N_i\exp D(P^i\|\bm\lambda^N\Phi)}{\ds\sum_{k=1}^m\lambda^N_k\exp D(P^k\|\bm\lambda^N\Phi)},\,i=1,\cdots,m,\,N=0,1,\cdots.\label{eqn:arimotoalgorithm}
\end{align}
On the convergence of this Arimoto algorithm, the following results are obtained in Arimoto\,\cite{ari};
By defining
\begin{align}
C(N+1,N)\equiv-\ds\sum_{i=1}^m\lambda^{N+1}_i\log\lambda^{N+1}_i+\ds\sum_{i=1}^m\sum_{j=1}^n\lambda^{N+1}_iP^i_j\log\ds\frac{\lambda^N_iP^i_j}{\ds\sum_{k=1}^m\lambda^N_kP^k_j},
\end{align}
the following theorems are obtained;
{\it Theorem A1:} If the initial input distribution $\bm\lambda^0$ is in $\Delta({\cal X})^\circ$, then
\begin{align}
\lim_{N\to\infty}C(N+1,N)=C.
\end{align}
{\it Theorem A2:} If $\bm\lambda^0\in\Delta({\cal X})^\circ$, then
\begin{align}
0\leq C-C(N+1,N)\leq\ds\frac{\log m-h(\bm\lambda^0)}{N},
\end{align}
where $h(\bm\lambda^0)$ is the entropy of $\bm\lambda^0$.
{\it Theorem A3:} If the capacity achieving input distribution $\bm\lambda^\ast$ is in $\Delta({\cal X})^\circ$, then
\begin{align}
0\leq C-C(N+1,N)<K\theta^N,\,N=0,1,\cdots,
\end{align}
where $0\leq\theta<1$ and $K$ is a constant.
In \cite{ari}, the Taylor expansion of $D(\bm\lambda^\ast\|\bm\lambda)$ in $\bm\lambda$ and the Taylor expansion of $D(Q^\ast\|Q)$ in $Q$ are considered, but not the Taylor expansion of the mapping $F:\Delta({\cal X})\to\Delta({\cal X})$, which will be considered in this paper. Further, the above Theorem A3 covers only the case $\bm\lambda^\ast\in\Delta({\cal X})^\circ$, where the convergence is exponential.
Yu\,\cite{yu} considers the mapping $F:\Delta({\cal X})\to\Delta({\cal X})$ and the Taylor expansion of $F(\bm\lambda)$ about $\bm\lambda=\bm\lambda^\ast$, and calculates the eigenvalues of the Jacobian matrix $J(\bm\lambda^\ast)$, but does not consider the Hessian matrix. Further, only the case $\bm\lambda^\ast\in\Delta({\cal X})^\circ$ is considered, as in \cite{ari}.
\subsection{Mapping from $\Delta({\cal X})$ to $\Delta({\cal X})$}
Let $F_i(\bm\lambda)$ be the defining function of the Arimoto algorithm (\ref{eqn:arimotoalgorithm}), i.e.,
\begin{align}
F_i(\bm\lambda)=\ds\frac{\lambda_i\exp D(P^i\|\bm\lambda\Phi)}{\ds\sum_{k=1}^m\lambda_k\exp D(P^k\|\bm\lambda\Phi)},\,i=1,\cdots,m.\label{eqn:Arimotofunction}
\end{align}
Define $F(\bm\lambda)=(F_1(\bm\lambda),\cdots,F_m(\bm\lambda))$, then we can consider that $F(\bm\lambda)$ is a differentiable mapping from $\Delta(\cal X)$ to $\Delta(\cal X)$, and (\ref{eqn:arimotoalgorithm}) is represented by
\begin{align}
\bm\lambda^{N+1}=F(\bm\lambda^N).\label{eqn:vectorrecurrence}
\end{align}
In this paper, for the analysis of the convergence speed, we assume
\begin{align}
{\rm rank}\,\Phi=m.\label{eqn:rankmdefinition}
\end{align}
\begin{lemma}
\label{lem:1}
The capacity achieving input distribution $\bm\lambda^\ast$ is unique.
\end{lemma}
{\bf Proof:} By Csisz\'{a}r\cite{csi1},\,p.137,\,eq.(37), for arbitrary $Q\in\Delta(\cal Y)$
\begin{align}
\ds\sum_{i=1}^m\lambda_iD(P^i\|Q)=I(\bm\lambda,\Phi)+D(\bm\lambda\Phi\|Q).\label{eqn:CKequality}
\end{align}
By the assumption (\ref{eqn:rankmdefinition}), we see that there exists $Q^0\in\Delta(\cal Y)$ \cite{nak2} with
\begin{align}
D(P^1\|Q^0)=\cdots=D(P^m\|Q^0)\equiv C^0.
\end{align}
Substituting $Q=Q^0$ into (\ref{eqn:CKequality}), we have $C^0=I(\bm\lambda,\Phi)+D(\bm\lambda\Phi\|Q^0)$. Because $C^0$ is a constant,
\begin{align}
\max_{\bm\lambda\in\Delta(\cal X)}I(\bm\lambda,\Phi)\Longleftrightarrow\min_{\bm\lambda\in\Delta(\cal X)}D(\bm\lambda\Phi\|Q^0).\label{eqn:maxequalmin}
\end{align}
Define $V\equiv\{\bm\lambda\Phi\,|\,\bm\lambda\in\Delta(\cal X)\}$, then $V$ is a closed convex set, thus by Cover\,\cite{cov},\,p.297,\,Theorem 12.6.1, $Q=Q^\ast$ that achieves $\min_{Q\in V}D(Q\|Q^0)$ exists and is unique. By the assumption (\ref{eqn:rankmdefinition}), the mapping $\Delta({\cal X})\ni\bm\lambda\mapsto\bm\lambda\Phi\in V$ is one to one, therefore, $\bm\lambda^\ast$ with $Q^\ast=\bm\lambda^\ast\Phi$ is unique.\hfill$\blacksquare$
\begin{remark}
\rm Due to the equivalence (\ref{eqn:maxequalmin}), the Arimoto algorithm can be obtained by Csisz\'{a}r \cite{csi2}, Chapter 4, ``Minimizing information distance from a single measure'', Theorem 5.
\end{remark}
\begin{lemma}
\label{lem:2}
The capacity achieving input distribution $\bm\lambda^\ast$ is the fixed point of the mapping $F(\bm\lambda)$ in $(\ref{eqn:vectorrecurrence})$. That is, $\bm\lambda^\ast=F(\bm\lambda^\ast)$.
\end{lemma}
{\bf Proof:} In the Kuhn-Tucker condition (\ref{eqn:Kuhn-Tucker}), let us define $m_1$ as the number of indices $i$ with $\lambda^\ast_i>0$, i.e.,
\begin{align}
\lambda^\ast_i\left\{\begin{array}{ll}>0, & i=1,\cdots,m_1,\\=0, & i=m_1+1,\cdots,m,\end{array}\right.\label{eqn:m1definition}
\end{align}
then
\begin{align}
D(P^i\|\bm\lambda^\ast\Phi)\left\{\begin{array}{ll}=C, & i=1,\cdots,m_1,\\\leq C, & i=m_1+1,\cdots,m.\end{array}\right.
\end{align}
We have
\begin{align}
\ds\sum_{k=1}^m\lambda^\ast_k\exp D(P^k\|\bm\lambda^\ast\Phi)=\ds\sum_{k=1}^{m_1}\lambda^\ast_ke^C=e^C,\label{eqn:yobitekikeisan}
\end{align}
hence by (\ref{eqn:Arimotofunction}),\,(\ref{eqn:m1definition}),\,(\ref{eqn:yobitekikeisan}),
\begin{align}
F_i(\bm\lambda^\ast)&=\left\{\begin{array}{ll}e^{-C}\lambda^\ast_ie^C, & i=1,\cdots,m_1,\\0, & i=m_1+1,\cdots,m,\end{array}\right.\\
&=\lambda^\ast_i,\,i=1,\cdots,m,
\end{align}
which shows $F(\bm\lambda^\ast)=\bm\lambda^\ast$.\hfill$\blacksquare$
\medskip
The sequence $\bm\lambda^N$ of the Arimoto algorithm converges to the fixed point $\bm\lambda^\ast$, i.e.,
\begin{align}
\bm\lambda^N\to\bm\lambda^\ast,\,N\to\infty.\label{eqn:lambdaNconverges}
\end{align}
We will investigate the convergence speed by using the Taylor expansion of $F(\bm\lambda)$ about $\bm\lambda=\bm\lambda^\ast$.
\subsection{Type of index}
Now, we classify the indices $i\,(i=1,\cdots,m)$ in the Kuhn-Tucker condition (\ref{eqn:Kuhn-Tucker}) in more detail into the following 3 types;
\begin{align}
\label{eqn:Kuhn-Tucker2}
D(P^i\|\bm\lambda^\ast\Phi)\left\{\begin{array}{ll}=C, & {\mbox{\rm for}}\ i\ {\mbox{\rm with}}\ \lambda^\ast_i>0\ {\rm (type\ I)},\\
=C, & {\mbox{\rm for}}\ i\ {\mbox{\rm with}}\ \lambda^\ast_i=0\ {\rm (type\ II)},\\
<C, & {\mbox{\rm for}}\ i\ {\mbox{\rm with}}\ \lambda^\ast_i=0\ {\rm (type\ III)}.
\end{array}\right.
\end{align}
Let us define the sets of indices as follows;
\begin{align}
&{\rm all\ the\ indices:}\ {\cal I}\equiv\{1,\cdots,m\},\label{eqn:allset}\\
&{\rm type\ I\ indices:}\ {\cal I}_{\rm I}\equiv\{1,\cdots,m_1\},\label{eqn:type1set}\\
&{\rm type\ II\ indices:}\ {\cal I}_{\rm II}\equiv\{m_1+1,\cdots,m_1+m_2\},\label{eqn:type2set}\\
&{\rm type\ III\ indices:}\ {\cal I}_{\rm III}\equiv\{m_1+m_2+1,\cdots,m\}.\label{eqn:type3set}
\end{align}
$|{\cal I}|=m$, $|{\cal I}_{\rm I}|=m_1$, $|{\cal I}_{\rm II}|=m_2$, $|{\cal I}_{\rm III}|=m-m_1-m_2\equiv m_3$. We have ${\cal I}={\cal I}_{\rm I}\cup{\cal I}_{\rm II}\cup{\cal I}_{\rm III}$ and $m=m_1+m_2+m_3$.
${\cal I}_{\rm I}$ is not empty and $|{\cal I}_{\rm I}|=m_1\geq2$ for any channel matrix, but ${\cal I}_{\rm II}$ and ${\cal I}_{\rm III}$ may be empty for some channel matrices.
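Given a channel matrix, a capacity-achieving $\bm\lambda^\ast$, and the capacity $C$, the three types can be identified numerically. The sketch below is illustrative only: the tolerance is arbitrary, and $\bm\lambda^\ast$ and $C$ are assumed to be known (here, the values of Example \ref{exa:3} below).

```python
import numpy as np

def classify_indices(lam_star, Phi, C, tol=1e-6):
    """Assign each index to type I, II, or III by the refined Kuhn-Tucker
    condition: type I if lambda*_i > 0, else II or III according to whether
    D(P^i||Q*) equals or falls below C (up to a numerical tolerance)."""
    Phi = np.asarray(Phi, float)
    Q = np.asarray(lam_star, float) @ Phi
    types = []
    for lam_i, P in zip(lam_star, Phi):
        D = float(np.sum(P * np.log(P / Q)))
        if lam_i > tol:
            types.append("I")      # lambda*_i > 0 (so D = C)
        elif D > C - tol:
            types.append("II")     # lambda*_i = 0 and D = C
        else:
            types.append("III")    # lambda*_i = 0 and D < C
    return types

# Phi^(3) of Example 3: lambda* = (0.5, 0.5, 0) and C = D(P^1||Q*).
Phi3 = np.array([[0.80, 0.10, 0.10], [0.10, 0.80, 0.10], [0.35, 0.35, 0.30]])
lam_star = [0.5, 0.5, 0.0]
Q_star = np.array(lam_star) @ Phi3
C = float(np.sum(Phi3[0] * np.log(Phi3[0] / Q_star)))
types = classify_indices(lam_star, Phi3, C)  # ['I', 'I', 'III']
```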
\subsection{Examples of convergence speed}
Let us consider the difference of convergence speed of the Arimoto algorithm depending on the channel matrices.
For many channel matrices $\Phi$, the convergence is exponential, but for some special $\Phi$ the convergence is very slow. Let us consider the following examples taking types I, II, III into account, where the input alphabet size $m=3$ and the output alphabet size $n=3$.
\begin{example}
\label{exa:1}
\rm (only type I) If only type I indices exist, then $\lambda^\ast_i>0,\,i=1,2,3$, hence $Q^\ast\equiv\bm\lambda^\ast\Phi$ is in the interior of $\triangle P^1P^2P^3$. As a concrete channel matrix of this example, let us consider
\begin{align}
\label{eqn:Phi1}
\Phi^{(1)}=\begin{pmatrix}
\,0.800 & 0.100 & 0.100\,\\
\,0.100 & 0.800 & 0.100\,\\
\,0.250 & 0.250 & 0.500\,
\end{pmatrix}.
\end{align}
For this $\Phi^{(1)}$, we have $\bm\lambda^\ast=(0.431,0.431,0.138)$ and $Q^\ast=(0.422,0.422,0.156)$. See Fig.\ref{fig:1}. The vertices of the large triangle in Fig.\ref{fig:1} are the output probability distributions $\bm{e}^1=(1,0,0),\,\bm{e}^2=(0,1,0),\,\bm{e}^3=(0,0,1)$. We have $D(P^i\|Q^\ast)=C,\,i=1,2,3$, then considering the analogy to Euclidean geometry, $\triangle P^1P^2P^3$ can be regarded as an ``acute triangle''.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=8.8cm]{./triangle1_3.eps}
\put(-5,-4){$\bm{e}^1$}
\put(101,-4){$\bm{e}^2$}
\put(48,89){$\bm{e}^3$}
\put(52,16){$Q^\ast$}
\put(9,5){$P^1$}
\put(86,5){$P^2$}
\put(48,46){$P^3$}
\end{overpic}
\medskip
\caption{Positional relation of row vectors $P^1,P^2,P^3$ of $\Phi^{(1)}$ and $Q^\ast$ in Example \ref{exa:1}}
\label{fig:1}
\end{center}
\end{figure}
\end{example}
\begin{example}
\label{exa:2}
\rm (types I and II) If there are type I and type II indices, we can assume $\lambda^\ast_1>0,\lambda^\ast_2>0,\lambda^\ast_3=0$ without loss of generality, hence $Q^\ast$ is on the side $P^1P^2$ and $D(P^i\|Q^\ast)=C,\,i=1,2,3$. As a concrete channel matrix of this example, let us consider
\begin{align}
\label{eqn:Phi2}
\Phi^{(2)}=\begin{pmatrix}
\,0.800 & 0.100 & 0.100\,\\
\,0.100 & 0.800 & 0.100\,\\
\,0.300 & 0.300 & 0.400\,
\end{pmatrix}.
\end{align}
For this $\Phi^{(2)}$, we have $\bm\lambda^\ast=(0.500,0.500,0.000)$ and $Q^\ast=(0.450,0.450,0.100)$. See Fig.\ref{fig:2}. Considering the analogy to Euclidean geometry, $\triangle P^1P^2P^3$ can be regarded as a ``right triangle''.
\begin{figure}[t]
\begin{center}
\medskip
\begin{overpic}[width=8.8cm]{./triangle2_3.eps}
\put(-5,-4){$\bm{e}^1$}
\put(101,-4){$\bm{e}^2$}
\put(48,89){$\bm{e}^3$}
\put(48,4){$Q^\ast$}
\put(9,5){$P^1$}
\put(86,5){$P^2$}
\put(48,37){$P^3$}
\end{overpic}
\medskip
\caption{Positional relation of row vectors $P^1,P^2,P^3$ of $\Phi^{(2)}$ and $Q^\ast$ in Example \ref{exa:2}}
\label{fig:2}
\end{center}
\end{figure}
\end{example}
\begin{example}
\label{exa:3}
\rm (types I and III) If there are type I and type III indices, we can assume $\lambda^\ast_1>0,\lambda^\ast_2>0,\lambda^\ast_3=0$ without loss of generality, hence $Q^\ast$ is on the side $P^1P^2$ and $C=D(P^1\|Q^\ast)=D(P^2\|Q^\ast)>D(P^3\|Q^\ast)$. As a concrete channel matrix of this example, let us consider
\begin{align}
\label{eqn:Phi3}
\Phi^{(3)}=\begin{pmatrix}
\,0.800 & 0.100 & 0.100\,\\
\,0.100 & 0.800 & 0.100\,\\
\,0.350 & 0.350 & 0.300\,
\end{pmatrix}.
\end{align}
For this $\Phi^{(3)}$, we have $\bm\lambda^\ast=(0.500,0.500,0.000)$ and $Q^\ast=(0.450,0.450,0.100)$. See Fig.\ref{fig:3}. Considering the analogy to Euclidean geometry, $\triangle P^1P^2P^3$ can be regarded as an ``obtuse triangle''.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=8.8cm]{./triangle3_3.eps}
\put(-5,-4){$\bm{e}^1$}
\put(101,-4){$\bm{e}^2$}
\put(48,89){$\bm{e}^3$}
\put(48,4){$Q^\ast$}
\put(9,5){$P^1$}
\put(86,5){$P^2$}
\put(48,28.5){$P^3$}
\end{overpic}
\medskip
\caption{Positional relation of row vectors $P^1,P^2,P^3$ of $\Phi^{(3)}$ and $Q^\ast$ in Example \ref{exa:3}}
\label{fig:3}
\end{center}
\end{figure}
\end{example}
For the above $\Phi^{(1)},\Phi^{(2)},\Phi^{(3)}$, Fig.\ref{fig:4} shows the convergence behavior of $|\lambda^N_1-\lambda^\ast_1|\to0$. From this figure, we see that in Examples 1 and 3 the convergence is exponential, while in Example 2 the convergence is slower than exponential.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=9cm]{./lambda1_3.eps}
\put(57,74){\rotatebox{60}{$\leftarrow$}}
\put(62,78){Example 2}
\put(62,74){(types I and II)}
\put(58,47.5){\rotatebox{60}{$\leftarrow$}}
\put(62,50){Example 3}
\put(62,46){(types I and III)}
\put(52,40){\rotatebox{60}{$\rightarrow$}}
\put(39,36){Example 1}
\put(39,31.5){(only type I)}
\put(54,-3){$N$}
\put(-6,45){\rotatebox{90}{$|\lambda^N_1-\lambda^\ast_1|$}}
\end{overpic}
\caption{Comparison of the convergence speed in Examples \ref{exa:1},\ref{exa:2},\ref{exa:3}}
\label{fig:4}
\end{center}
\end{figure}
From the above three examples, it is inferred that the Arimoto algorithm converges very slowly when a type II index exists, and converges exponentially when no type II index exists. We will analyze this phenomenon in the following.
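This behavior can be reproduced with a short numerical sketch (illustrative only; the iteration counts and tolerances are ad hoc): running the recurrence on $\Phi^{(1)}$, $\Phi^{(2)}$, $\Phi^{(3)}$ from the uniform initial distribution recovers the stated $\bm\lambda^\ast$ quickly in Examples 1 and 3, while in Example 2 the third component is still far from zero after the same number of iterations.

```python
import numpy as np

def arimoto_iterate(Phi, num_iter):
    """Plain Arimoto iteration from the uniform initial distribution."""
    Phi = np.asarray(Phi, float)
    lam = np.full(Phi.shape[0], 1.0 / Phi.shape[0])
    for _ in range(num_iter):
        Q = lam @ Phi
        D = np.sum(Phi * np.log(Phi / Q), axis=1)
        w = lam * np.exp(D)
        lam = w / w.sum()
    return lam

Phi1 = [[0.800, 0.100, 0.100], [0.100, 0.800, 0.100], [0.250, 0.250, 0.500]]
Phi2 = [[0.800, 0.100, 0.100], [0.100, 0.800, 0.100], [0.300, 0.300, 0.400]]
Phi3 = [[0.800, 0.100, 0.100], [0.100, 0.800, 0.100], [0.350, 0.350, 0.300]]

lam1 = arimoto_iterate(Phi1, 2000)  # only type I: fast, near (0.431, 0.431, 0.138)
lam2 = arimoto_iterate(Phi2, 2000)  # types I and II: lambda_3 decays very slowly
lam3 = arimoto_iterate(Phi3, 2000)  # types I and III: lambda_3 -> 0 exponentially
```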
\section{Taylor expansion of $F(\bm\lambda)$ about $\bm\lambda=\bm\lambda^\ast$}
We will examine the convergence speed of the Arimoto algorithm by the Taylor expansion of $F(\bm\lambda)$ about the fixed point $\bm\lambda=\bm\lambda^\ast$. The Taylor expansion of the function $F(\bm\lambda)=(F_1(\bm\lambda),\cdots,F_m(\bm\lambda))$ about $\bm\lambda=\bm\lambda^\ast$ is
\begin{align}
F(\bm\lambda)=F(\bm\lambda^\ast)+(\bm\lambda-\bm\lambda^\ast)J(\bm\lambda^\ast)+\ds\frac{1}{2!}(\bm\lambda-\bm\lambda^\ast)H(\bm\lambda^\ast)\,^t(\bm\lambda-\bm\lambda^\ast)+o(\|\bm\lambda-\bm\lambda^\ast\|^2),\label{eqn:Taylortenkai1}
\end{align}
where ${^t}\bm\lambda$ denotes the transpose of $\bm\lambda$ and $\|\bm\lambda\|$ denotes the Euclidean norm $\|\bm\lambda\|=\left(\lambda_1^2+\cdots+\lambda_m^2\right)^{1/2}$.
In (\ref{eqn:Taylortenkai1}), $J(\bm\lambda^\ast)$ is the Jacobian matrix at $\bm\lambda=\bm\lambda^\ast$, i.e.,
\begin{align}
J(\bm\lambda^\ast)&=\left(\left.\ds\frac{\partial F_i}{\partial\lambda_{i'}}\right|_{\bm\lambda=\bm\lambda^\ast}\right)_{i',i=1,\cdots,m}.\label{eqn:Jacobiseibun}
\end{align}
We consider in this paper that the input probability distribution $\bm\lambda$ is a row vector, thus the Jacobian matrix $J(\bm\lambda^\ast)$ takes the form
\begin{align}
&\hspace{30mm}\leftarrow i\rightarrow\nonumber\\[0mm]
J(\bm\lambda^\ast)&=\begin{array}{c}\uparrow\\ i'\\\downarrow\end{array}
\hspace{-1mm}\begin{pmatrix}
\,\left.\ds\frac{\partial F_1}{\partial\lambda_1}\right|_{\bm\lambda=\bm\lambda^\ast} & \cdots & \left.\ds\frac{\partial F_m}{\partial\lambda_1}\right|_{\bm\lambda=\bm\lambda^\ast}\,\\
\vdots & & \vdots\\
\,\left.\ds\frac{\partial F_1}{\partial\lambda_m}\right|_{\bm\lambda=\bm\lambda^\ast} & \cdots & \left.\ds\frac{\partial F_m}{\partial\lambda_m}\right|_{\bm\lambda=\bm\lambda^\ast}\,
\end{pmatrix}\in\mathbb R^{m\times m},\label{eqn:rowvectorJacobimatrix}
\end{align}
i.e., $\partial F_i/\partial\lambda_{i'}|_{\bm\lambda=\bm\lambda^\ast}$ is the $(i',i)$ component. Note that our $J(\bm\lambda^\ast)$ is the transpose of the usual Jacobian matrix corresponding to column vectors.
Because $\sum_{i=1}^mF_i(\bm\lambda)=1$ by (\ref{eqn:Arimotofunction}), we have by (\ref{eqn:rowvectorJacobimatrix}),
\begin{lemma}
\label{lem:rowsumofJis0}
Every row sum of $J(\bm\lambda^\ast)$ is equal to $0$.
\end{lemma}
\medskip
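This lemma is easy to check numerically: since $\sum_iF_i(\bm\lambda)=1$ identically, each row of a finite-difference approximation of $J$ sums to zero. The sketch below (illustrative; the test point $\bm\lambda$ and channel matrix are arbitrary) uses central differences on the extended domain of $F$ described in Remark \ref{rem:justify}.

```python
import numpy as np

def F(lam, Phi):
    """The defining map F of the Arimoto algorithm (assumes Phi > 0 entrywise)."""
    Q = lam @ Phi
    D = np.sum(Phi * np.log(Phi / Q), axis=1)
    w = lam * np.exp(D)
    return w / w.sum()

def jacobian_fd(lam, Phi, h=1e-6):
    """J[i', i] ~= dF_i/dlambda_{i'} via central differences (row-vector convention)."""
    m = len(lam)
    J = np.zeros((m, m))
    for ip in range(m):
        e = np.zeros(m)
        e[ip] = h
        J[ip] = (F(lam + e, Phi) - F(lam - e, Phi)) / (2.0 * h)
    return J

# Arbitrary interior test point and rank-3 channel matrix.
Phi = np.array([[0.80, 0.10, 0.10], [0.10, 0.80, 0.10], [0.25, 0.25, 0.50]])
J = jacobian_fd(np.array([0.40, 0.35, 0.25]), Phi)
row_sums = J.sum(axis=1)  # all (numerically) zero, as the lemma states
```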
In (\ref{eqn:Taylortenkai1}), $H(\bm\lambda^\ast)\equiv(H_1(\bm\lambda^\ast),\cdots,H_m(\bm\lambda^\ast))$, where $H_i(\bm\lambda^\ast)$ is the Hessian matrix of $F_i$ at $\bm\lambda=\bm\lambda^\ast$, i.e.,
\begin{align}
H_i(\bm\lambda^\ast)=\left(\left.\ds\frac{\partial^2F_i}{\partial\lambda_{i'}\partial\lambda_{i''}}\right|_{\bm\lambda=\bm\lambda^\ast}\right)_{i',i''=1,\cdots,m},\label{eqn:Hesseseibun}
\end{align}
and $(\bm\lambda-\bm\lambda^\ast)H(\bm\lambda^\ast)\,^t(\bm\lambda-\bm\lambda^\ast)$ is an abbreviated expression of the $m$ dimensional vector $((\bm\lambda-\bm\lambda^\ast)H_1(\bm\lambda^\ast)\,^t(\bm\lambda-\bm\lambda^\ast),\cdots,(\bm\lambda-\bm\lambda^\ast)H_m(\bm\lambda^\ast)\,^t(\bm\lambda-\bm\lambda^\ast)).$
\begin{remark}
\label{rem:justify}
\rm $\lambda_1,\cdots,\lambda_m$ satisfy the constraint $\sum_{i=1}^m\lambda_i=1$, but in (\ref{eqn:Taylortenkai1}),\,(\ref{eqn:Jacobiseibun}),\,(\ref{eqn:Hesseseibun}) we consider $\lambda_1,\cdots,\lambda_m$ as independent variables to have the Taylor series approximation (\ref{eqn:Taylortenkai1}). This approximation is justified as follows. By the Kuhn-Tucker condition (\ref{eqn:Kuhn-Tucker}), $D(P^i\|Q^\ast)\leq C<\infty,\,i=1,\cdots,m$, hence by the assumption put below (\ref{eqn:thechannelmatrix}), we have $Q^\ast_j>0,\,j=1,\cdots,n$. See \cite{ari}. For $\epsilon>0$, define ${\cal Q}^\ast_\epsilon\equiv\{Q=(Q_1,\cdots,Q_n)\in{\mathbb R}^n\,|\,\|Q-Q^\ast\|<\epsilon\}$, i.e., ${\cal Q}^\ast_\epsilon$ is an open ball in $\mathbb{R}^n$ centered at $Q^\ast$ with radius $\epsilon$. Note that $Q\in{\cal Q}^\ast_\epsilon$ is free from the constraint $\sum_{j=1}^nQ_j=1$. Taking $\epsilon>0$ sufficiently small, we can have $Q_j>0,j=1,\cdots,n$, for any $Q\in{\cal Q}^\ast_\epsilon$. The function $F(\bm\lambda)$ is defined for $\bm\lambda$ with $\left(\bm\lambda\Phi\right)_j>0,\,j=1,\cdots,n$, even if some $\lambda_i<0$. Therefore, the domain of definition of $F(\bm\lambda)$ can be extended to $\Phi^{-1}\left({\cal Q}^\ast_\epsilon\right)\subset\mathbb{R}^m$, where $\Phi^{-1}\left({\cal Q}^\ast_\epsilon\right)$ is the inverse image of ${\cal Q}^\ast_\epsilon$ by the mapping $\mathbb{R}^m\ni\bm\lambda\to\bm\lambda\Phi\in\mathbb{R}^n$. $\Phi^{-1}\left({\cal Q}^\ast_\epsilon\right)$ is an open neighborhood of $\bm\lambda^\ast$ in $\mathbb{R}^m$. Then $F(\bm\lambda)$ is a function of $\bm\lambda=(\lambda_1,\cdots,\lambda_m)\in\Phi^{-1}\left({\cal Q}^\ast_\epsilon\right)$ as independent variables (free from the constraint $\sum_{i=1}^m\lambda_i=1$). 
We can consider (\ref{eqn:Taylortenkai1}) to be the Taylor expansion by independent variables $\lambda_1,\cdots,\lambda_m$, then substituting $\bm\lambda\in\Delta({\cal X})\cap\Phi^{-1}\left({\cal Q}^\ast_\epsilon\right)$ into (\ref{eqn:Taylortenkai1}) to obtain the approximation for $F(\bm\lambda)$ about $\bm\lambda=\bm\lambda^\ast$.
\end{remark}
\medskip
Now, substituting $\bm\lambda=\bm\lambda^N$ into (\ref{eqn:Taylortenkai1}), then by $F(\bm\lambda^\ast)=\bm\lambda^\ast$ and $F(\bm\lambda^N)=\bm\lambda^{N+1}$, we have
\begin{align}
\bm\lambda^{N+1}=\bm\lambda^\ast+(\bm\lambda^N-\bm\lambda^\ast)J(\bm\lambda^\ast)+\ds\frac{1}{2!}(\bm\lambda^N-\bm\lambda^\ast)H(\bm\lambda^\ast)\,^t(\bm\lambda^N-\bm\lambda^\ast)+o(\|\bm\lambda^N-\bm\lambda^\ast\|^2).\label{eqn:Taylortenkai2}
\end{align}
Then, by putting $\bm\mu^N\equiv\bm\lambda^N-\bm\lambda^\ast$, (\ref{eqn:Taylortenkai2}) becomes
\begin{align}
\bm\mu^{N+1}=\bm\mu^NJ(\bm\lambda^\ast)+\ds\frac{1}{2!}\bm\mu^NH(\bm\lambda^\ast)\,{^t}\bm\mu^N+o\left(\|\bm\mu^N\|^2\right).\label{eqn:Taylortenkai3}
\end{align}
By (\ref{eqn:lambdaNconverges}), we will investigate the convergence
\begin{align}
\bm\mu^N\to\bm0,\,N\to\infty,
\end{align}
based on the Taylor expansion (\ref{eqn:Taylortenkai3}). Let
\begin{align}
\mu^N_i\equiv\lambda^N_i-\lambda^\ast_i,\,i=1,\cdots,m,\label{eqn:muislambdaminuslambda}
\end{align}
denote the components of $\bm\mu^N=\bm\lambda^N-\bm\lambda^\ast$, and write $\bm\mu^N$ by components as $\bm\mu^N=(\mu^N_1,\cdots,\mu^N_m)$, then we have
\begin{align}
\sum_{i=1}^m\mu^N_i=0,\,N=0,1,\cdots,\label{eqn:musumiszero}
\end{align}
because $\sum_{i=1}^m\lambda^N_i=\sum_{i=1}^m\lambda^\ast_i=1$.
\subsection{Basic analysis for fast and slow convergence}
To investigate the convergence speed, we first consider the following simple case.
Let us define a real sequence $\{\mu^N\}_{N=0,1,\cdots}\subset{\mathbb R}$ by the recurrence formula:
\begin{align}
\mu^{N+1}&=\theta\mu^N-\rho\left(\mu^N\right)^2,\,N=0,1,\cdots,\label{eqn:sequenceaN}\\
0&<\theta\leq1,\,\rho>0,\,0<\mu^0<\theta/\rho.
\end{align}
If $0<\theta<1$, then we have $0<\mu^{N+1}<\theta\mu^N<\cdots<\theta^{N+1}\mu^0$, hence $\mu^N$ decays exponentially.
On the other hand, if $\theta=1$, (\ref{eqn:sequenceaN}) becomes $\mu^{N+1}=\mu^N-\rho\left(\mu^N\right)^2,\,\rho>0$. This recurrence formula cannot be solved explicitly; however, the behavior of the convergence can be seen from Fig.\ref{fig:5}.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=8cm]{./slow_decay_3.eps}
\put(46,4){$\mu^N$}
\put(35,4){$\mu^{N+1}$}
\put(97,5){$x$}
\put(4,75){$y$}
\put(44,60){$y=x$}
\put(70,52){$y=x-\rho x^2$}
\put(4,5){$O$}
\end{overpic}
\medskip
\caption{Convergence of the sequence defined by $\mu^{N+1}=\mu^N-\rho\left(\mu^N\right)^2$}
\label{fig:5}
\end{center}
\end{figure}
Because the derivative of the function $y=x-\rho x^2$ at $x=0$ is $1$, the convergence is very slow; in fact, it is slower than exponential. From Lemma \ref{lem:6} in Section \ref{sec:m3narbitray} below, we will see that the convergence is of order $1/N$, with $\lim_{N\to\infty}N\mu^N=1/\rho$.
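As a numerical illustration (not part of the proof), the two decay regimes can be compared by iterating the recurrence (\ref{eqn:sequenceaN}) directly; the values of $\theta$, $\rho$, and $\mu^0$ in the following sketch are chosen arbitrarily, subject to $0<\mu^0<\theta/\rho$.

```python
import numpy as np

def iterate(theta, rho, mu0, N):
    # iterate mu_{k+1} = theta*mu_k - rho*mu_k^2 and return the whole trajectory
    mu = np.empty(N + 1)
    mu[0] = mu0
    for k in range(N):
        mu[k + 1] = theta * mu[k] - rho * mu[k] ** 2
    return mu

# arbitrarily chosen parameters satisfying 0 < mu0 < theta/rho
rho, mu0, N = 1.0, 0.4, 10000
fast = iterate(0.9, rho, mu0, N)   # theta < 1: exponential decay
slow = iterate(1.0, rho, mu0, N)   # theta = 1: decay of order 1/N

print(fast[100], slow[100])        # the first is already far smaller
print(N * slow[N])                 # approaches 1/rho = 1
```

For $\theta=0.9$ the sequence is negligible after a hundred steps, while for $\theta=1$ it decreases only like $1/N$, in accordance with the discussion above.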
\subsection{On Jacobian matrix $J(\bm\lambda^\ast)$}
Let us consider the Jacobian matrix $J(\bm\lambda^\ast)$ for any $m,n$. We are assuming ${\rm rank}\,\Phi=m$ in (\ref{eqn:rankmdefinition}), hence $m\leq n$.
We will calculate the components (\ref{eqn:Jacobiseibun}) of $J(\bm\lambda^\ast)$.
Defining
\begin{align}
&D_i\equiv D(P^i\|\bm\lambda\Phi),\,i=1,\cdots,m,\\
&F_i\equiv F_i(\bm\lambda),\,i=1,\cdots,m,
\end{align}
we can write (\ref{eqn:Arimotofunction}) as
\begin{align}
F_i=\ds\frac{\lambda_ie^{D_i}}{\ds\sum_{k=1}^m\lambda_ke^{D_k}},\,i=1,\cdots,m.\label{eqn:teigikansuFi}
\end{align}
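For concreteness, the map $\bm\lambda\mapsto F(\bm\lambda)$ of (\ref{eqn:teigikansuFi}) can be sketched numerically as follows; the channel matrix below is an arbitrary example chosen only for illustration, not one taken from the text.

```python
import numpy as np

def arimoto_map(lam, Phi):
    # F_i = lam_i * exp(D_i) / sum_k lam_k * exp(D_k),
    # where D_i = D(P^i || lam @ Phi) is the KL divergence of row i from the output distribution
    Q = lam @ Phi                                          # output distribution Q = lam * Phi
    D = np.where(Phi > 0, Phi * np.log(Phi / Q), 0.0).sum(axis=1)
    w = lam * np.exp(D)
    return w / w.sum()

# an arbitrary channel matrix with rows P^1, P^2, P^3 (illustration only)
Phi = np.array([[0.7, 0.2, 0.1],
                [0.1, 0.8, 0.1],
                [0.2, 0.2, 0.6]])
lam = np.full(3, 1 / 3)
for _ in range(1000):
    lam = arimoto_map(lam, Phi)
print(lam)   # an approximation of the fixed point lambda*, at which F(lambda*) = lambda*
```

Iterating the map from the uniform distribution drives $\bm\lambda$ to a fixed point of $F$, which is how the sequence $\bm\lambda^N$ of the preceding sections is generated.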
From (\ref{eqn:teigikansuFi}),
\begin{align}
F_i\ds\sum_{k=1}^m\lambda_ke^{D_k}=\lambda_ie^{D_i},\label{eqn:Fibunboharau}
\end{align}
then, differentiating both sides of (\ref{eqn:Fibunboharau}) with respect to $\lambda_{i'}$, we have
\begin{align}
\ds\frac{\partial F_i}{\partial\lambda_{i'}}\ds\sum_{k=1}^m\lambda_ke^{D_k}+F_i\ds\frac{\partial}{\partial\lambda_{i'}}\ds\sum_{k=1}^m\lambda_ke^{D_k}=\delta_{i'i}e^{D_i}+\lambda_ie^{D_i}\ds\frac{\partial D_i}{\partial\lambda_{i'}},\label{eqn:dFi}
\end{align}
where $\delta_{i'i}$ is the Kronecker delta.
Before substituting $\bm\lambda=\bm\lambda^\ast=(\lambda^\ast_1,\cdots,\lambda^\ast_m)$ into both sides of (\ref{eqn:dFi}), we define the following symbols. Recall that the integer $m_1$ was defined in (\ref{eqn:m1definition}). See also (\ref{eqn:type1set}).
Let us define
\begin{align}
Q^\ast&\equiv Q(\bm\lambda^\ast)=\bm\lambda^\ast\Phi,\\
Q_j^\ast&\equiv Q(\bm\lambda^\ast)_j=\ds\sum_{i=1}^m\lambda_i^\ast P_j^i=\ds\sum_{i=1}^{m_1}\lambda_i^\ast P_j^i,\,j=1,\cdots,n,\\
D_i^\ast&\equiv D(P^i\|Q^\ast),\,i=1,\cdots,m,\\
D_{i',i}^\ast&\left.\equiv\ds\frac{\partial D_i}{\partial\lambda_{i'}}\right|_{\bm\lambda=\bm\lambda^\ast},\,i',i=1,\cdots,m,\label{eqn:Diidefinition}\\
F_i^\ast&\equiv F_i(\bm\lambda^\ast),\,i=1,\cdots,m.
\end{align}
\begin{lemma}
\label{lem:3}
\begin{align}
&\left.\ds\sum_{k=1}^m\lambda_ke^{D_k}\right|_{\bm\lambda=\bm\lambda^\ast}=e^C,\label{eqn:lem3-1}\\[2mm]
&\ds\frac{\partial D_i}{\partial\lambda_{i'}}=-\ds\sum_{j=1}^n\ds\frac{P_j^{i'}P_j^i}{Q_j},\,i',i=1,\cdots,m,\label{eqn:lem3-2}\\[2mm]
&\left.\ds\frac{\partial}{\partial\lambda_{i'}}\ds\sum_{k=1}^m\lambda_ke^{D_k}\right|_{\bm\lambda=\bm\lambda^\ast}=e^{D_{i'}^\ast}-e^C,\,i'=1,\cdots,m,\label{eqn:lem3-3}\\[2mm]
&F^\ast_i=\lambda^\ast_i,\,i=1,\cdots,m.\label{eqn:lem3-4}
\end{align}
\end{lemma}
{\bf Proof:}
We have (\ref{eqn:lem3-1}),\,(\ref{eqn:lem3-2}) by simple calculation. See (\ref{eqn:yobitekikeisan}). (\ref{eqn:lem3-4}) follows from Lemma \ref{lem:2}. (\ref{eqn:lem3-3}) is proved as follows:
\begin{align*}
\left.\ds\frac{\partial}{\partial\lambda_{i'}}\ds\sum_{k=1}^m\lambda_ke^{D_k}\right|_{\bm\lambda=\bm\lambda^\ast}
&=\left.\ds\sum_{k=1}^m\left(\delta_{i'k}e^{D_k}+\lambda_ke^{D_k}\ds\frac{\partial D_k}{\partial\lambda_{i'}}\right)\right|_{\bm\lambda=\bm\lambda^\ast}\\
&=e^{D_{i'}^\ast}+\ds\sum_{k=1}^{m_1}\lambda_k^\ast e^C\left(-\ds\sum_{j=1}^n\ds\frac{P_j^kP_j^{i'}}{Q_j^\ast}\right)\\
&=e^{D_{i'}^\ast}-e^C\ds\sum_{j=1}^nP_j^{i'}\ds\frac{1}{Q_j^\ast}\ds\sum_{k=1}^{m_1}\lambda_k^\ast P_j^k\\
&=e^{D_{i'}^\ast}-e^C.
\end{align*}
Note that $Q^\ast_j>0,\,j=1,\cdots,n$, from Remark \ref{rem:justify}.\hfill$\blacksquare$
\medskip
Substituting the results of Lemma \ref{lem:3} into (\ref{eqn:dFi}), we have
\begin{align}
\left.\ds\frac{\partial F_i}{\partial\lambda_{i'}}\right|_{\bm\lambda=\bm\lambda^\ast}e^C+\lambda_i^\ast\left(e^{D_{i'}^\ast}-e^C\right)=\delta_{i'i}e^{D^\ast_i}+\lambda_i^\ast e^{D^\ast_i}D^\ast_{i',i}.
\end{align}
Consequently, we have
\begin{theorem}
\label{the:1}
\begin{align}
\left.\ds\frac{\partial F_i}{\partial\lambda_{i'}}\right|_{\bm\lambda=\bm\lambda^\ast}&=e^{D_i^\ast-C}\left(\delta_{i'i}+\lambda_i^\ast D^\ast_{i',i}\right)+\lambda^\ast_i\left(1-e^{D^\ast_{i'}-C}\right),\,i',i\in{\cal I},\nonumber\\[1mm]
&=\left\{\begin{array}{l}
\delta_{i'i}+\lambda_i^\ast\left(D^\ast_{i',i}+1-e^{D_{i'}^\ast-C}\right),\,i'\in{\cal I},\,i\in{\cal I}_{\rm I},\\[2mm]
\delta_{i'i},\,i'\in{\cal I},\,i\in{\cal I}_{\rm II},\\[2mm]
e^{D^\ast_i-C}\delta_{i'i},\,i'\in{\cal I},\,i\in{\cal I}_{\rm III},
\end{array}\right.\label{eqn:theorem1-1}
\end{align}
where the sets of indices ${\cal I}$, ${\cal I}_{\rm I}$, ${\cal I}_{\rm II}$, ${\cal I}_{\rm III}$ were defined in $(\ref{eqn:allset})$-$(\ref{eqn:type3set})$. Note that $D^\ast_i=C$ for $i\in{\cal I}_{\rm I}\cup{\cal I}_{\rm II}$ and $\lambda^\ast_i=0$ for $i\in{\cal I}_{\rm II}\cup{\cal I}_{\rm III}$.
\end{theorem}
\subsection{Eigenvalues of Jacobian matrix $J(\bm\lambda^\ast)$}
From (\ref{eqn:theorem1-1}), we see that the Jacobian matrix $J(\bm\lambda^\ast)$ is of the form
\begin{align}
&J(\bm\lambda^\ast)\equiv\begin{pmatrix}
\,J^{\rm I} & O & O\,\\[1mm]
\,\ast & J^{\rm II} & O\,\\[1mm]
\,\ast & O & J^{\rm III}
\end{pmatrix},\label{eqn:J1AJ2}\\
&J^{\rm I}\in{\mathbb R}^{m_1\times m_1},\label{eqn:Jstructure1}\\
&J^{\rm II}=I\,({\rm the\ identity\ matrix})\in{\mathbb R}^{m_2\times m_2},\label{eqn:Jstructure2}\\
&J^{\rm III}={\rm diag}\left(e^{D^\ast_{m_1+m_2+1}-C},\cdots,e^{D^\ast_m-C}\right)\in{\mathbb R}^{m_3\times m_3},\label{eqn:Jstructure3}\\
&\text{\rm where}\ D^\ast_{m_1+m_2+1}<C,\cdots,D^\ast_m<C\ \text{\rm by\ type III\ in}\ (\ref{eqn:Kuhn-Tucker2}),\nonumber\\
&O\ {\rm denotes\ the\ zero\ matrix\ of\ appropriate\ size.}\nonumber
\end{align}
Let $\{\theta_1,\cdots,\theta_m\}\equiv\{\theta_i\,|\,i\in{\cal I}\}$ be the set of eigenvalues of $J(\bm\lambda^\ast)$. By (\ref{eqn:J1AJ2}), the eigenvalues of $J(\bm\lambda^\ast)$ are the eigenvalues of $J^{\rm I}$, $J^{\rm II}$, $J^{\rm III}$, hence we can put
\begin{itemize}
\item[] $\{\theta_i\,|\,i\in{\cal I}_{\rm I}\}$: the set of eigenvalues of $J^{\rm I}$,
\item[] $\{\theta_i\,|\,i\in{\cal I}_{\rm II}\}$: the set of eigenvalues of $J^{\rm II}$,
\item[] $\{\theta_i\,|\,i\in{\cal I}_{\rm III}\}$: the set of eigenvalues of $J^{\rm III}$.
\end{itemize}
We will evaluate the eigenvalues of $J^{\rm I}$, $J^{\rm II}$ and $J^{\rm III}$ as follows:
\subsubsection{Eigenvalues of $J^{\rm I}$}
\label{subsubsec:J1}
Let $J^{\rm I}_{i'i}$ be the $(i',i)$ component of $J^{\rm I}$, then by (\ref{eqn:theorem1-1}),
\begin{align}
J^{\rm I}_{i'i}=\delta_{i'i}+\lambda_i^\ast D_{i',i}^\ast,\ i',i\in{\cal I}_{\rm I}.\label{eqn:J1seibun}
\end{align}
Let $I\in{\mathbb R}^{m_1\times m_1}$ denote the identity matrix and define $B\equiv I-J^{\rm I}$. Let $B_{i'i}$ be the $(i',i)$ component of $B$, then from (\ref{eqn:J1seibun}),
\begin{align}
B_{i'i}&=-\lambda^\ast_iD^\ast_{i',i}\\[1mm]
&=\lambda^\ast_i\ds\sum_{j=1}^n\ds\frac{P_j^{i'}P_j^i}{Q_j^\ast},\,i',i\in{\cal I}_{\rm I}.\label{eqn:componentofB}
\end{align}
Let $\{\beta_i\,|\,i\in{\cal I}_{\rm I}\}$ be the set of eigenvalues of $B$, then we have $\theta_i=1-\beta_i,\,i\in{\cal I}_{\rm I}$. In order to calculate the eigenvalues of $B$, we will define the following matrices. Similar calculations are performed in \cite{yu}.
Let us define
\begin{align}
\Phi_1&\equiv\begin{pmatrix}P^1\\\vdots\\P^{m_1}\end{pmatrix}\in\mathbb {R}^{m_1\times n},\\[3mm]
\Gamma&\equiv\left(-D_{i',i}^\ast\right)=\left(\ds\sum_{j=1}^n\ds\frac{P_j^{i'}P_j^i}{Q_j^\ast}\right)\in\mathbb {R}^{m_1\times m_1},\\[3mm]
\Lambda&\equiv{\rm diag}\left(\lambda_1^\ast,\cdots,\lambda_{m_1}^\ast\right)\in\mathbb {R}^{m_1\times m_1},\label{eqn:diagonalLambda}
\end{align}
where (\ref{eqn:diagonalLambda}) is the diagonal matrix with diagonal components $\lambda_1^\ast,\cdots,\lambda_{m_1}^\ast$. Furthermore,
\begin{align}
\sqrt{\Lambda}&\equiv{\rm diag}\left(\sqrt{\lambda_1^\ast},\cdots,\sqrt{\lambda_{m_1}^\ast}\right)\in\mathbb {R}^{m_1\times m_1},\\
\Omega&\equiv{\rm diag}\left((Q_1^\ast)^{-1},\cdots,(Q_n^\ast)^{-1}\right)\in\mathbb {R}^{n\times n},\\
\sqrt\Omega&\equiv{\rm diag}\left((Q_1^\ast)^{-1/2},\cdots,(Q_n^\ast)^{-1/2}\right)\in\mathbb {R}^{n\times n}.
\end{align}
Then, we have, by calculation,
\begin{align}
\sqrt\Lambda B\sqrt\Lambda^{-1}&=\sqrt\Lambda\Gamma\sqrt\Lambda\\
&=\sqrt\Lambda\Phi_1\Omega\ {^t}\Phi_1\ {^t}\sqrt\Lambda\\
&=\sqrt\Lambda\Phi_1\sqrt\Omega\ {^t}\sqrt\Omega\ {^t}\Phi_1\ {^t}\sqrt\Lambda\\
&=\sqrt\Lambda\Phi_1\sqrt\Omega\ {^t}\!\left(\sqrt\Lambda\Phi_1\sqrt\Omega\right).\label{eqn:L-1BL}
\end{align}
From (\ref{eqn:m1definition}), $\sqrt\Lambda$ is a nonsingular matrix, and from the assumption (\ref{eqn:rankmdefinition}), ${\rm rank}\,\Phi_1=m_1$. Therefore, by $m_1\leq m\leq n$, we have ${\rm rank}\,\sqrt\Lambda\Phi_1\sqrt\Omega=m_1$, and thus from (\ref{eqn:L-1BL}), $\sqrt\Lambda B\sqrt\Lambda^{-1}$ is symmetric and positive definite. In particular, all the eigenvalues $\beta_1,\cdots,\beta_{m_1}$ of $B$ are real and positive. Without loss of generality, let $\beta_1\geq\cdots\geq\beta_{m_1}>0$.
By (\ref{eqn:componentofB}), every component of $B$ is non-negative and by Lemma \ref{lem:rowsumofJis0}, every row sum of $B$ is equal to 1, hence by the Perron-Frobenius theorem
\begin{align}
1=\beta_1\geq\beta_2\geq\cdots\geq\beta_{m_1}>0.
\end{align}
Because $\theta_i=1-\beta_i,\,i\in{\cal I}_{\rm I}$, we have
\begin{align}
0=\theta_1\leq\theta_2\leq\cdots\leq\theta_{m_1}<1,
\end{align}
therefore,
\begin{theorem}
\label{the:2}
The eigenvalues of $J^{\rm I}$ satisfy
\begin{align}
0\leq\theta_i<1,\,i\in{\cal I}_{\rm I}.\label{eqn:J1nokoyuchi}
\end{align}
\end{theorem}
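Theorem \ref{the:2} can be checked numerically; the sketch below uses an arbitrary example channel, assumed to have an interior capacity-achieving input (so that ${\cal I}_{\rm I}={\cal I}$ and $m_1=m$), approximates $\bm\lambda^\ast$ by the Arimoto iteration, and computes the eigenvalues $\beta_i$ of $B$ and $\theta_i=1-\beta_i$.

```python
import numpy as np

# arbitrary example channel (rows P^1,...,P^m), assumed to have lambda*_i > 0 for all i
Phi = np.array([[0.7, 0.2, 0.1],
                [0.1, 0.8, 0.1],
                [0.2, 0.2, 0.6]])
m = Phi.shape[0]

lam = np.full(m, 1 / m)
for _ in range(2000):                  # Arimoto iteration to approximate lambda*
    D = (Phi * np.log(Phi / (lam @ Phi))).sum(axis=1)
    w = lam * np.exp(D)
    lam = w / w.sum()

Q = lam @ Phi                          # Q* = lambda* Phi
B = ((Phi / Q) @ Phi.T) * lam[None, :] # B_{i'i} = lambda*_i sum_j P^{i'}_j P^i_j / Q*_j
beta = np.sort(np.linalg.eigvals(B).real)[::-1]
print(beta)        # 1 = beta_1 >= ... >= beta_m > 0, as in the Perron-Frobenius argument
print(1 - beta)    # eigenvalues theta_i of J^I, all in [0, 1)
```

The largest eigenvalue equals $1$ (the rows of $B$ sum to $1$), and the remaining ones lie strictly between $0$ and $1$, so the $\theta_i$ fall in $[0,1)$ as the theorem asserts.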
\subsubsection{Eigenvalues of $J^{\rm II}$}
\label{subsubsec:J2}
From (\ref{eqn:J1AJ2}),\,(\ref{eqn:Jstructure2}), we have
\begin{theorem}
\label{the:3}
The eigenvalues of $J^{\rm II}$ satisfy
\begin{align}
\theta_i=1,\,i\in{\cal I}_{\rm II}.\label{eqn:J2nokoyuchi}
\end{align}
\end{theorem}
\subsubsection{Eigenvalues of $J^{\rm III}$}
\label{subsubsec:J3}
From (\ref{eqn:J1AJ2}),\,(\ref{eqn:Jstructure3}), we have
\begin{theorem}
\label{the:4}
The eigenvalues of $J^{\rm III}$ are $\theta_i=e^{D^\ast_i-C},\,D^\ast_i<C,\,i\in{\cal I}_{\rm III}$, hence
\begin{align}
0<\theta_i<1,\,i\in{\cal I}_{\rm III}.\label{eqn:J3nokoyuchi}
\end{align}
\end{theorem}
\begin{remark}
\rm From the above consideration, we know that all the eigenvalues of the Jacobian matrix $J(\bm\lambda^\ast)$ are real.
\end{remark}
\section{On convergence speed}
In Theorems \ref{the:2},\,\ref{the:3},\,\ref{the:4}, we obtained the evaluation of the eigenvalues of $J(\bm\lambda^\ast)$. Let $\theta_{\rm max}\equiv\max_{i\in{\cal I}}\theta_i$ be the maximum eigenvalue of $J(\bm\lambda^\ast)$; then, by Theorems \ref{the:2},\,\ref{the:3},\,\ref{the:4}, we have $0\leq\theta_{\rm max}<1$ if ${\cal I}_{\rm II}$ is empty and $\theta_{\rm max}=1$ if ${\cal I}_{\rm II}$ is not empty. In the following, we will see that the convergence $\bm\lambda^N\to\bm\lambda^\ast$, i.e., $\bm\mu^N\to\bm0$, is exponential if $0\leq\theta_{\rm max}<1$, and of order $1/N$ if $\theta_{\rm max}=1$.
\subsection{Convergence speed in case of $0\leq\theta_{\rm max}<1$}
\begin{theorem}
\label{the:5}
Suppose that the maximum eigenvalue $\theta_{\rm max}$ of the Jacobian matrix $J(\bm\lambda^\ast)$ satisfies $0\leq\theta_{\rm max}<1$. Then, for any $\theta$ with $\theta_{\rm max}<\theta<1$, there exist $\delta>0$ and $K>0$ such that for any initial vector $\bm\lambda^0$ with $\|\bm\lambda^0-\bm\lambda^\ast\|<\delta$, we have
\begin{align}
\|\bm\mu^N\|=\|\bm\lambda^N-\bm\lambda^\ast\|<K\theta^N,\,N=0,1,\cdots,
\end{align}
i.e., the convergence is exponential, where $\theta^N$ denotes the $N$th power of $\theta$.
\end{theorem}
{\bf Proof:}
See Appendix \ref{sec:proooftheorem4}.\hfill$\blacksquare$
\subsection{Convergence speed in case of $\theta_{\rm max}=1$}
In the case of $\theta_{\rm max}=1$, Theorem \ref{the:5} cannot be applied; the convergence $\bm\mu^N\to\bm0$ is not determined by the Jacobian matrix alone, and it is necessary to investigate the Hessian matrix in the second order term of the Taylor expansion.
\subsection{On Hessian matrix}
In previous studies, e.g., \cite{ari},\,\cite{mat},\,\cite{yu}, the Jacobian matrix is considered but the Hessian matrix is not. Let us calculate the components (\ref{eqn:Hesseseibun}) of the Hessian matrix of the function $F_i,\,i=1,\cdots,m$, at $\bm\lambda=\bm\lambda^\ast$. Define $D^\ast_{i,i',i''}\equiv\partial^2D_i/\partial\lambda_{i'}\partial\lambda_{i{''}}|_{\bm\lambda=\bm\lambda^\ast}.$ We have
\begin{theorem}
\label{the:6}
\begin{align}
\label{eqn:theorem5-1}
&\left.\ds\frac{\partial^2F_i}{\partial\lambda_{i'}\partial\lambda_{i''}}\right|_{\bm\lambda=\bm\lambda^\ast}=e^{D_i^\ast-C}\Big[(1-e^{D_{i'}^\ast-C}+D_{i,i'}^\ast)(\delta_{ii''}+\lambda_i^\ast(1-e^{D_{i''}^\ast-C}))\nonumber\\[3mm]
&\ \ +(1-e^{D_{i''}^\ast-C}+D_{i,i''}^\ast)(\delta_{ii'}+\lambda_i^\ast(1-e^{D_{i'}^\ast-C}))\nonumber\\
&\ \ +\lambda_i^\ast\Big(D_{i,i'}^\ast D_{i,i''}^\ast+D_{i,i',i''}^\ast+D_{i',i''}^\ast-e^{D_{i'}^\ast-C}D_{i',i''}^\ast-e^{D_{i''}^\ast-C}D_{i',i''}^\ast-\ds\sum_{k=1}^{m_1}\lambda_k^\ast D_{k,i'}^\ast D_{k,i''}^\ast\Big)\Big],\nonumber\\
&\ \ i,i',i''\in{\cal I}.
\end{align}
Especially, if ${\cal I}_{\rm III}$ is empty, then for $i\in{\cal I}_{\rm II}$,
\begin{align}
&\left.\ds\frac{\partial^2F_i}{\partial\lambda_{i'}\partial\lambda_{i''}}\right|_{\bm\lambda=\bm\lambda^\ast}=\delta_{ii'}D_{i,i''}^\ast+\delta_{ii''}D_{i,i'}^\ast,\,i',i''\in{\cal I},\label{eqn:theorem5-3}
\end{align}
which is a relatively simple form.
\end{theorem}
{\bf Proof:}
See Appendix \ref{sec:proofoftheorem5}.\hfill$\blacksquare$
\section{Convergence speed in case of $m=3$ and $n$ is arbitrary}
\label{sec:m3narbitray}
In Theorem \ref{the:6}, the Hessian matrix is very complicated, so it is difficult to analyze an arbitrary channel matrix. Therefore, in this section, we consider a special case, namely $m=3$ with arbitrary $n$. For $m=3$, without loss of generality, we have the following exhaustive classification:
\begin{itemize}
\item[(i)] $\lambda^\ast_1>0,\lambda^\ast_2>0,\lambda^\ast_3>0,$
\item[(ii)] $\lambda^\ast_1>0,\lambda^\ast_2>0,\lambda^\ast_3=0,D^\ast_3=C,$
\item[(iii)] $\lambda^\ast_1>0,\lambda^\ast_2>0,\lambda^\ast_3=0,D^\ast_3<C.$
\end{itemize}
(i) is the case of ``acute triangle'' in Example \ref{exa:1}. We have ${\cal I}_{\rm I}={\cal I}$, ${\cal I}_{\rm II}={\cal I}_{\rm III}=\emptyset$, thus by (\ref{eqn:J1AJ2}),\,(\ref{eqn:Jstructure1}),
\begin{align}
J(\bm\lambda^\ast)=J^{\rm I}.\label{eqn:3by3Jacobimatrix1}
\end{align}
By Theorem \ref{the:2}, we have $0\leq\theta_{\rm max}<1$; then, by Theorem \ref{the:5}, the convergence $\bm\mu^N\to\bm0$ is exponential.
Skipping (ii), let us consider (iii) first. (iii) is the case of ``obtuse triangle'' in Example \ref{exa:3}. We have ${\cal I}_{\rm I}=\{1,2\}$, ${\cal I}_{\rm II}=\emptyset$, ${\cal I}_{\rm III}=\{3\}$, thus by (\ref{eqn:J1AJ2}),\,(\ref{eqn:Jstructure3}),
\begin{align}
&J(\bm\lambda^\ast)=\begin{pmatrix}
\,J^{\rm I} & O\,\\
\,\ast & J^{\rm III}\,
\end{pmatrix},\label{eqn:3by3Jacobimatrix3}\\
&\ J^{\rm I}\in{\mathbb R}^{2\times2},\\
&\ J^{\rm III}=e^{D^\ast_3-C},\,0<J^{\rm III}<1.
\end{align}
By Theorems \ref{the:2},\,\ref{the:4}, we have $0<\theta_{\rm max}<1$, then by Theorem \ref{the:5}, the convergence $\bm\mu^N\to\bm0$ is exponential.
The rest is (ii), which is the case of ``right triangle'' in Example \ref{exa:2}. In this case, we have ${\cal I}_{\rm I}=\{1,2\}$, ${\cal I}_{\rm II}=\{3\}$, ${\cal I}_{\rm III}=\emptyset$, thus by (\ref{eqn:J1AJ2}),\,(\ref{eqn:Jstructure2}),
\begin{align}
&J(\bm\lambda^\ast)=\begin{pmatrix}
\,J^{\rm I} & O\,\\
\,\ast & J^{\rm II}\,
\end{pmatrix},\label{eqn:3by3Jacobimatrix2}\\
&\ J^{\rm I}\in{\mathbb R}^{2\times2},\\
&\ J^{\rm II}=1.
\end{align}
By Theorems \ref{the:2},\,\ref{the:3}, $\theta_{\rm max}=1$, thus we cannot apply Theorem \ref{the:5}. For the analysis of the convergence speed, we will investigate the Hessian matrix in the second order term of the Taylor expansion.
\subsection{Convergence of order $1/N$}
We will investigate the convergence speed of $\bm\mu^N\to\bm0$ in the case (ii) above and prove that it is of order $1/N$.
By (\ref{eqn:theorem1-1}) in Theorem \ref{the:1} and (\ref{eqn:theorem5-3}) in Theorem \ref{the:6}, we have $J(\bm\lambda^\ast)$ and $H_3(\bm\lambda^\ast)$ as
\begin{align}
J(\bm\lambda^\ast)&=\begin{pmatrix}
\,1+\lambda^\ast_1D^\ast_{1,1} & \lambda^\ast_2D^\ast_{1,2} & 0\,\\
\,\lambda^\ast_1D^\ast_{1,2} & 1+\lambda^\ast_2D^\ast_{2,2} & 0\,\\
\,\lambda^\ast_1D^\ast_{1,3} & \lambda^\ast_2D^\ast_{2,3} & 1\,
\end{pmatrix},\\
H_3(\bm\lambda^\ast)&=\begin{pmatrix}
\,0 & 0 & D^\ast_{1,3}\,\\
\,0 & 0 & D^\ast_{2,3}\,\\
\,D^\ast_{1,3} & D^\ast_{2,3} & 2D^\ast_{3,3}\,
\end{pmatrix}.
\end{align}
$H_1(\bm\lambda^\ast)$ and $H_2(\bm\lambda^\ast)$ do not directly affect the convergence speed.
Now, we show some properties of
\begin{align}
D^\ast_{i',i}=-\ds\sum_{j=1}^n\ds\frac{P^{i'}_jP^i_j}{Q^\ast_j},\,i',i=1,2,3,
\end{align}
defined by (\ref{eqn:Diidefinition}),\,(\ref{eqn:lem3-2}). We have
\begin{align}
&D^\ast_{i',i}=D^\ast_{i,i'},\,i',i=1,2,3,\label{eqn:property1}\\
&D^\ast_{i',i}\leq0,\,i',i=1,2,3,\label{eqn:property2}\\
&\lambda^\ast_1D^\ast_{1,i}+\lambda^\ast_2D^\ast_{2,i}=-\ds\sum_{j=1}^nP^i_j\ds\sum_{i'=1}^2\ds\frac{\lambda^\ast_{i'}P^{i'}_j}{Q^\ast_j}\nonumber\\
&\hspace{24mm}=-1,\,i=1,2,3.\label{eqn:property3}
\end{align}
Let us consider the first order term
\begin{align}
\bm\mu^{N+1}=\bm\mu^NJ(\bm\lambda^\ast)\label{eqn:taylor1ji}
\end{align}
of the Taylor expansion (\ref{eqn:Taylortenkai3}). See also (\ref{eqn:muislambdaminuslambda}),\,(\ref{eqn:musumiszero}). The representation by components of (\ref{eqn:taylor1ji}) is
\begin{align}
(\mu^{N+1}_1,\mu^{N+1}_2,\mu^{N+1}_3)=(\mu^N_1,\mu^N_2,\mu^N_3)\begin{pmatrix}
\,1+\lambda^\ast_1D^\ast_{1,1} & \lambda^\ast_2D^\ast_{1,2} & 0\,\\
\lambda^\ast_1D^\ast_{1,2} & 1+\lambda^\ast_2D^\ast_{2,2} & 0\\
\,\lambda^\ast_1D^\ast_{1,3} & \lambda^\ast_2D^\ast_{2,3} & 1\,
\end{pmatrix}.
\end{align}
Then, by calculation
\begin{align}
\mu_1^{N+1}&=(1+\lambda^\ast_1D^\ast_{1,1})\mu_1^N+\lambda^\ast_1D^\ast_{1,2}\,\mu^N_2+\lambda^\ast_1D^\ast_{1,3}\,\mu^N_3,\label{eqn:mu1}\\
\mu_2^{N+1}&=\lambda^\ast_2D^\ast_{1,2}\,\mu^N_1+(1+\lambda^\ast_2D^\ast_{2,2})\mu^N_2+\lambda^\ast_2D^\ast_{2,3}\,\mu^N_3,\label{eqn:mu2}\\
\mu_3^{N+1}&=\mu_3^N.
\end{align}
Substituting $\mu^N_3=-\mu^N_1-\mu^N_2$ into (\ref{eqn:mu1}),\,(\ref{eqn:mu2}),
\begin{align}
\mu_1^{N+1}&=(1+\lambda^\ast_1D^\ast_{1,1}-\lambda^\ast_1D^\ast_{1,3})\mu_1^N+(\lambda^\ast_1D^\ast_{1,2}-\lambda^\ast_1D^\ast_{1,3})\mu^N_2,\label{eqn:muhat1}\\
\mu_2^{N+1}&=(\lambda^\ast_2D^\ast_{1,2}-\lambda^\ast_2D^\ast_{2,3})\mu^N_1+(1+\lambda^\ast_2D^\ast_{2,2}-\lambda^\ast_2D^\ast_{2,3})\mu^N_2.\label{eqn:muhat2}
\end{align}
By defining
\begin{align}
&\hat{\bm\mu}^N\equiv(\mu^N_1,\,\mu^N_2),\\
&\hat{J}(\bm\lambda^\ast)\equiv\begin{pmatrix}
\,1+\lambda^\ast_1D^\ast_{1,1}-\lambda^\ast_1D^\ast_{1,3} & \lambda^\ast_2D^\ast_{1,2}-\lambda^\ast_2D^\ast_{2,3}\\
\lambda^\ast_1D^\ast_{1,2}-\lambda^\ast_1D^\ast_{1,3} & 1+\lambda^\ast_2D^\ast_{2,2}-\lambda^\ast_2D^\ast_{2,3}\,
\end{pmatrix},\label{eqn:hatJ}
\end{align}
(\ref{eqn:muhat1}) and (\ref{eqn:muhat2}) become
\begin{align}
\hat{\bm\mu}^{N+1}=\hat{\bm\mu}^N\hat{J}(\bm\lambda^\ast).\label{eqn:jacobi}
\end{align}
Let us calculate the eigenvalues and right eigenvectors of $\hat{J}(\bm\lambda^\ast)$. In the following calculation, (\ref{eqn:property3}) is often used. The characteristic polynomial $\varphi_{\hat{J}(\bm\lambda^\ast)}(\eta)\equiv\det\left(\hat{J}(\bm\lambda^\ast)-\eta I\right)$ of $\hat{J}(\bm\lambda^\ast)$ is
\begin{align*}
&\varphi_{\hat{J}(\bm\lambda^\ast)}(\eta)\\
&=\det\begin{pmatrix}\,1+\lambda^\ast_1D^\ast_{1,1}-\lambda^\ast_1D^\ast_{1,3}-\eta & \hspace{-3mm}\lambda^\ast_2D^\ast_{1,2}-\lambda^\ast_2D^\ast_{2,3}\\
\lambda^\ast_1D^\ast_{1,2}-\lambda^\ast_1D^\ast_{1,3} & \hspace{-3mm}1+\lambda^\ast_2D^\ast_{2,2}-\lambda^\ast_2D^\ast_{2,3}-\eta\,\end{pmatrix}\\
& {\rm (Add\ the\ 2nd\ column\ to\ the\ 1st\ column\ to\ have)}\\
&=\det\begin{pmatrix}\,1-\eta & \lambda^\ast_2D^\ast_{1,2}-\lambda^\ast_2D^\ast_{2,3}\\
1-\eta & 1+\lambda^\ast_2D^\ast_{2,2}-\lambda^\ast_2D^\ast_{2,3}-\eta\,\end{pmatrix}\\
& {\rm (Add\ (-1)\times the\ 1st\ row\ to\ the\ 2nd\ row\ to\ have)}\\
&=\det\begin{pmatrix}\,1-\eta & \lambda^\ast_2D^\ast_{1,2}-\lambda^\ast_2D^\ast_{2,3}\\
0 & 1+\lambda^\ast_2D^\ast_{2,2}-\lambda^\ast_2D^\ast_{1,2}-\eta\,\end{pmatrix}\\
&=\det\begin{pmatrix}\,1-\eta & \lambda^\ast_2D^\ast_{1,2}-\lambda^\ast_2D^\ast_{2,3}\\
0 & -D^\ast_{1,2}-\eta\,\end{pmatrix}\\
&=(\eta+D^\ast_{1,2})(\eta-1).
\end{align*}
Thus, the eigenvalues of $\hat{J}(\bm\lambda^\ast)$ are $\eta_1\equiv-D^\ast_{1,2}$ and $\eta_2\equiv1$.
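This factorization can be verified numerically; in the following sketch, the values of $\lambda^\ast_1,\lambda^\ast_2$ and $D^\ast_{1,2},D^\ast_{1,3}$ are chosen arbitrarily (negative, in accordance with (\ref{eqn:property2})), and $D^\ast_{1,1},D^\ast_{2,2},D^\ast_{2,3}$ are then forced by the constraint (\ref{eqn:property3}).

```python
import numpy as np

rng = np.random.default_rng(0)
l1 = rng.uniform(0.2, 0.8); l2 = 1.0 - l1   # lambda*_1 + lambda*_2 = 1
D12 = -rng.uniform(0.1, 2.0)                # arbitrary D*_{1,2} <= 0
D13 = -rng.uniform(0.1, 2.0)                # arbitrary D*_{1,3} <= 0
# the remaining entries are forced by lambda*_1 D*_{1,i} + lambda*_2 D*_{2,i} = -1
D11 = (-1 - l2 * D12) / l1
D22 = (-1 - l1 * D12) / l2
D23 = (-1 - l1 * D13) / l2

# hat{J}(lambda*) as defined in (eqn:hatJ)
Jhat = np.array([[1 + l1 * D11 - l1 * D13, l2 * D12 - l2 * D23],
                 [l1 * D12 - l1 * D13,     1 + l2 * D22 - l2 * D23]])
eig = np.sort(np.linalg.eigvals(Jhat).real)
print(eig, np.sort([-D12, 1.0]))   # the eigenvalues are -D*_{1,2} and 1
```

Note that the bound $\eta_1<1$ of Lemma \ref{lem:4} below uses the additional structure of an actual channel; the eigenvalue identity itself holds for any data satisfying (\ref{eqn:property1}),\,(\ref{eqn:property3}).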
\begin{lemma}
\label{lem:4}
$0\leq\eta_1<1$.
\end{lemma}
{\bf Proof:} First, $\eta_1\geq0$ by (\ref{eqn:property2}). Next, if $-D^\ast_{1,2}<-D^\ast_{2,2}$, then by (\ref{eqn:property3}), $1=\lambda^\ast_1(-D^\ast_{1,2})+\lambda^\ast_2(-D^\ast_{2,2})>\lambda^\ast_1(-D^\ast_{1,2})+\lambda^\ast_2(-D^\ast_{1,2})=-D^\ast_{1,2}$, which proves $\eta_1=-D^\ast_{1,2}<1$. Thus, it suffices to prove $-D^\ast_{1,2}<-D^\ast_{2,2}$. This inequality is equivalent to
\begin{align}
\ds\sum_{j=1}^n\frac{(P^2_j)^2}{Q^\ast_j}>\ds\sum_{j=1}^n\frac{P^1_jP^2_j}{Q^\ast_j}\label{eqn:lem1-1}
\end{align}
by (\ref{eqn:Diidefinition}),\,(\ref{eqn:lem3-2}). We will prove (\ref{eqn:lem1-1}).
Let $R^t$ be a point on the line segment $P^1P^2$ moving from $P^2$ to $P^1$, i.e.,
\begin{align}
R^t\equiv(1-t)P^2+tP^1,\,0\leq t\leq1,\label{eqn:lem1-2}
\end{align}
see Fig.\ref{fig:6}. Write $R^t$ by components as $R^t=(R^t_1,\cdots,R^t_n)$.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=8cm]{./fig3_3.eps}
\put(-5,-3){$P^1$}
\put(100,-3){$P^2$}
\put(67,52){$P^3$}
\put(47,-3){$Q^\ast$}
\put(61,-3){$R^t$}
\end{overpic}
\vspace{1mm}
\caption{Figure for the proof of Lemma \ref{lem:4}}
\label{fig:6}
\end{center}
\end{figure}
Define a function $g(t)$ by
\begin{align}
g(t)\equiv D(P^2\|R^t)=\ds\sum_{j=1}^nP^2_j\log\ds\frac{P^2_j}{R^t_j}.\label{eqn:lem1-3}
\end{align}
Then,
\begin{align}
g'(t)=\ds\sum_{j=1}^n\ds\frac{(P^2_j)^2}{R^t_j}-\ds\sum_{j=1}^n\ds\frac{P^1_jP^2_j}{R^t_j},\label{eqn:lem1-5}
\end{align}
and
\begin{align}
g''(t)=\ds\sum_{j=1}^nP^2_j\ds\frac{\left(P^2_j-P^1_j\right)^2}{\left(R^t_j\right)^2}>0.\label{eqn:gtwodash}
\end{align}
From (\ref{eqn:lem1-5}) and $R^0=P^2$, $g'(0)=0$. From (\ref{eqn:gtwodash}), $g'(t)$ is monotonically increasing, thus $g'(t)>g'(0)=0,\,0<t\leq1$. Since $R^{\lambda^\ast_1}=\lambda^\ast_2P^2+\lambda^\ast_1P^1=Q^\ast$, substituting $t=\lambda^\ast_1$ into (\ref{eqn:lem1-5}), we obtain
\begin{align}
0<g'(\lambda^\ast_1)=\ds\sum_{j=1}^n\ds\frac{(P^2_j)^2}{Q^\ast_j}-\ds\sum_{j=1}^n\ds\frac{P^1_jP^2_j}{Q^\ast_j},\label{eqn:lem1-6}
\end{align}
which proves (\ref{eqn:lem1-1}).\hfill$\blacksquare$
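The inequality (\ref{eqn:lem1-1}) can also be checked numerically for randomly generated distributions; in the sketch below, $P^1\neq P^2$ and $\lambda^\ast_1\in(0,1)$ are arbitrary, and $Q^\ast$ is taken on the segment $P^1P^2$ as in the proof.

```python
import numpy as np

rng = np.random.default_rng(1)
P1 = rng.dirichlet(np.ones(5))      # arbitrary distributions on 5 symbols
P2 = rng.dirichlet(np.ones(5))
l1 = rng.uniform(0.05, 0.95)        # arbitrary lambda*_1 in (0, 1)
Q = l1 * P1 + (1 - l1) * P2         # Q* = lambda*_1 P^1 + lambda*_2 P^2

lhs = np.sum(P2**2 / Q)             # sum_j (P^2_j)^2 / Q*_j
rhs = np.sum(P1 * P2 / Q)           # sum_j P^1_j P^2_j / Q*_j
print(lhs - rhs)                    # positive, in accordance with (eqn:lem1-1)
```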
\medskip
Next, we will calculate a right eigenvector $\bm a=\begin{pmatrix}a_1\\ a_2\end{pmatrix}$ of $\hat{J}(\bm\lambda^\ast)$ for the eigenvalue $\eta_1=-D^\ast_{1,2}$. The equation
\begin{align}
\hat{J}(\bm\lambda^\ast)\bm a=\eta_1\bm a\label{eqn:eigenvectorcalculation01}
\end{align}
is written by components as
\begin{align}
\begin{pmatrix}\,1+\lambda^\ast_1D^\ast_{1,1}-\lambda^\ast_1D^\ast_{1,3} & \lambda^\ast_2D^\ast_{1,2}-\lambda^\ast_2D^\ast_{2,3}\\
\lambda^\ast_1D^\ast_{1,2}-\lambda^\ast_1D^\ast_{1,3} & 1+\lambda^\ast_2D^\ast_{2,2}-\lambda^\ast_2D^\ast_{2,3}\,\end{pmatrix}\begin{pmatrix}a_1\\ a_2\end{pmatrix}=-D^\ast_{1,2}\begin{pmatrix}a_1\\ a_2\end{pmatrix}.\label{eqn:eigenvectorcalculation02}
\end{align}
From (\ref{eqn:eigenvectorcalculation02}),\,(\ref{eqn:property1}),\,(\ref{eqn:property3}), we have, by calculation
\begin{align}
\lambda^\ast_1(D^\ast_{1,2}-D^\ast_{1,3})a_1+\lambda^\ast_2(D^\ast_{1,2}-D^\ast_{2,3})a_2=0.\label{eqn:eigenvectorcalculation03}
\end{align}
By defining
\begin{align}
\tau_1\equiv\lambda^\ast_1(D^\ast_{1,2}-D^\ast_{1,3}),\ \tau_2\equiv\lambda^\ast_2(D^\ast_{1,2}-D^\ast_{2,3}),\label{eqn:eigenvectorcalculation04}
\end{align}
(\ref{eqn:eigenvectorcalculation03}) is written as
\begin{align}
\tau_1a_1+\tau_2a_2=0.\label{eqn:eigenvectorcalculation05}
\end{align}
Now, by (\ref{eqn:property3}) and Lemma \ref{lem:4}, we have
\begin{align}
\tau_1+\tau_2=1+D^\ast_{1,2}>0.\label{eqn:eigenvectorcalculation06}
\end{align}
We notice that $a_1\neq a_2$. In fact, if $a_1=a_2$, we have $a_1=a_2\neq0$ because $\bm a$ is an eigenvector, then (\ref{eqn:eigenvectorcalculation05}) and (\ref{eqn:eigenvectorcalculation06}) contradict each other. Hence, we can impose
\begin{align}
a_1-a_2=1\label{eqn:eigenvectorcalculation07}
\end{align}
as a normalizing condition of the eigenvector. By solving (\ref{eqn:eigenvectorcalculation05}) and (\ref{eqn:eigenvectorcalculation07}), we have
\begin{align}
&a_1=\dfrac{\tau_2}{\tau_1+\tau_2}=\dfrac{\lambda^\ast_2(D^\ast_{1,2}-D^\ast_{2,3})}{1+D^\ast_{1,2}},\\
&a_2=-\dfrac{\tau_1}{\tau_1+\tau_2}=-\dfrac{\lambda^\ast_1(D^\ast_{1,2}-D^\ast_{1,3})}{1+D^\ast_{1,2}}.\label{eqn:eigenvectorcalculation08}
\end{align}
Multiplying both sides of (\ref{eqn:jacobi}) by $\bm a$ from the right, we have
\begin{align}
\hat{\bm\mu}^{N+1}\bm a&=\hat{\bm\mu}^N\hat{J}(\bm\lambda^\ast)\bm a\nonumber\\
&=\eta_1\hat{\bm\mu}^N\bm a\nonumber\\
&=\cdots\nonumber\\
&=\left(\eta_1\right)^{N+1}\hat{\bm\mu}^0\bm a.
\end{align}
Putting $K\equiv\hat{\bm\mu}^0\bm a$, we have
\begin{align}
\hat{\bm\mu}^N\bm a=K\left(\eta_1\right)^N,
\end{align}
and by components
\begin{align}
a_1\mu^N_1+a_2\mu^N_2=K\left(\eta_1\right)^N.\label{eqn:eigenvectorcalculation09}
\end{align}
Then, from (\ref{eqn:eigenvectorcalculation09}) and $\mu^N_1+\mu^N_2=-\mu^N_3$, we have
\begin{align}
\mu_1^N&=a_2\mu^N_3+K\left(\eta_1\right)^N,\label{eqn:eigenvectorcalculation10}\\[1mm]
\mu_2^N&=-a_1\mu^N_3-K\left(\eta_1\right)^N.\label{eqn:eigenvectorcalculation11}
\end{align}
Defining $b_1\equiv-a_2,\,b_2\equiv a_1$, we obtain the following results:
\begin{align}
\mu_1^N&=-b_1\mu^N_3+K\left(\eta_1\right)^N,\label{eqn:mun1}\\[1mm]
\mu_2^N&=-b_2\mu^N_3-K\left(\eta_1\right)^N,\label{eqn:mun2}
\end{align}
where
\begin{align}
b_1&\equiv\frac{\lambda^\ast_1(D^\ast_{1,2}-D^\ast_{1,3})}{1+D^\ast_{1,2}},\label{eqn:b1}\\[1mm]
b_2&\equiv\frac{\lambda^\ast_2(D^\ast_{1,2}-D^\ast_{2,3})}{1+D^\ast_{1,2}}.\label{eqn:b2}
\end{align}
We have $b_1+b_2=1$ by (\ref{eqn:property3}).
\begin{remark}
\rm As for the eigenvalue $\eta_2=1$, an eigenvector is $\begin{pmatrix}\,1\,\\1\end{pmatrix}$ and $\hat{J}(\bm\lambda^\ast)\begin{pmatrix}\,1\,\\1\end{pmatrix}=1\begin{pmatrix}\,1\,\\1\end{pmatrix}$ only shows a trivial relation because of (\ref{eqn:property3}).
\end{remark}
\begin{remark}
\label{rem:5}
\rm We obtained (\ref{eqn:mun1})-(\ref{eqn:b2}) by regarding (\ref{eqn:taylor1ji}) as holding exactly. In fact, (\ref{eqn:taylor1ji}) holds only approximately, when $N$ is sufficiently large and the second and higher order terms of the Taylor expansion are sufficiently small. Therefore, (\ref{eqn:mun1})-(\ref{eqn:b2}) also hold approximately. In particular, the approximate value of $\eta_1$ in Lemma \ref{lem:4} is considered to be smaller than $1$. Refer to the proof of Theorem \ref{the:5}.
\end{remark}
\bigskip
Now, consider the third component of the Taylor expansion (\ref{eqn:Taylortenkai3}):
\begin{align}
&\mu^{N+1}_3=\bm\mu^N\begin{pmatrix}\,0\,\\0\\1\end{pmatrix}+\ds\frac{1}{2!}\bm\mu^NH_3(\bm\lambda^\ast)\,{^t}\bm\mu^N+o\left(\|\bm\mu^N\|^2\right)\nonumber\\
&=\mu^N_3+\ds\frac{1}{2!}\left(\mu^N_1,\mu^N_2,\mu^N_3\right)
\begin{pmatrix}
0 & 0 & D^\ast_{1,3}\\
0 & 0 & D^\ast_{2,3}\\
\,D^\ast_{1,3} & D^\ast_{2,3} & 2D^\ast_{3,3}\,
\end{pmatrix}
\begin{pmatrix}\,\mu^N_1\,\\[1mm]\mu^N_2\\[1mm]\mu^N_3\end{pmatrix}\nonumber\\
&\ \ \ \ +o\left(\|\bm\mu^N\|^2\right)\nonumber\\
&=\mu^N_3+D^\ast_{1,3}\mu^N_1\mu^N_3+D^\ast_{2,3}\mu^N_2\mu^N_3+D^\ast_{3,3}\left(\mu^N_3\right)^2\nonumber\\
&\ \ \ \ +o\left(\|\bm\mu^N\|^2\right)\nonumber\\
&=\mu^N_3-(D^\ast_{1,3}b_1+D^\ast_{2,3}b_2-D^\ast_{3,3})\left(\mu^N_3\right)^2+o\left((\mu^N_3)^2\right),\label{eqn:thirdcomponent}
\end{align}
where the last equality is obtained by (\ref{eqn:mun1}),\,(\ref{eqn:mun2}). Defining
\begin{align}
&\rho\equiv D^\ast_{1,3}b_1+D^\ast_{2,3}b_2-D^\ast_{3,3}\label{eqn:rhodefinition}\\[1mm]
&=\lambda^\ast_1\ds\frac{D^\ast_{1,3}(D^\ast_{1,2}-D^\ast_{1,3})}{1+D^\ast_{1,2}}+\lambda^\ast_2\ds\frac{D^\ast_{2,3}(D^\ast_{1,2}-D^\ast_{2,3})}{1+D^\ast_{1,2}}-D^\ast_{3,3},
\end{align}
we have by (\ref{eqn:thirdcomponent}),\,(\ref{eqn:rhodefinition}),
\begin{align}
\mu^{N+1}_3=\mu^N_3-\rho\left(\mu^N_3\right)^2+o\left((\mu^N_3)^2\right).\label{eqn:mu3zenkashiki}
\end{align}
Now we assume
\begin{align}
\rho>0.\label{eqn:rhopositiveassumption}
\end{align}
If $\rho<0$, then the sequence defined by the recurrence formula (\ref{eqn:mu3zenkashiki}) diverges, which contradicts the convergence $\bm\mu^N\to\bm0$; hence $\rho\geq0$ holds. Thus, the assumption (\ref{eqn:rhopositiveassumption}) is equivalent to $\rho\neq0$.
\begin{lemma}
\label{lem:5}
Consider the recurrence formula $(\ref{eqn:mu3zenkashiki})$. For a sufficiently small $\delta>0$ and any initial value $\mu^0_3$ with $0<\mu^0_3<\delta$, we have $\ds\lim_{N\to\infty}\mu^N_3=0$.
\end{lemma}
{\bf Proof:}
Consider the function $\mu-\rho\mu^2+o(\mu^2)$. If $\delta>0$ is sufficiently small, then for any $\mu$ with $0<\mu<\delta$, we have $\rho\mu+o(\mu)<1$ and $(\rho/2)\mu^2>o\left(\mu^2\right)$. Thus, for any initial value $\mu^0_3$ with $0<\mu^0_3<\delta$ we have $\mu^1_3=\mu^0_3\left(1-\rho\mu^0_3+o\left(\mu^0_3\right)\right)>0$ and $\mu^1_3<\mu^0_3-(\rho/2)\left(\mu^0_3\right)^2<\mu^0_3<\delta$. By mathematical induction, we have $0<\mu^N_3<\delta,\,N=0,1,\cdots$, hence
\begin{align}
0<\mu^{N+1}_3<\mu^N_3-\ds\frac{\rho}{2}(\mu^N_3)^2,\ N=0,1,\cdots.\label{eqn:zenkahutoushiki}
\end{align}
Since $0<\mu^{N+1}_3<\mu^N_3$ holds by (\ref{eqn:zenkahutoushiki}), the limit $\mu^\infty_3\equiv\ds\lim_{N\to\infty}\mu^N_3\geq0$ exists. Letting $N\to\infty$ in (\ref{eqn:zenkahutoushiki}), we have $\mu^\infty_3\leq\mu^\infty_3-(\rho/2)\left(\mu^\infty_3\right)^2$, which implies $\mu^\infty_3=0$.\hfill$\blacksquare$
\begin{lemma}
\label{lem:6}
For a sufficiently small $\delta>0$ and any initial value $\mu^0_3$ with $0<\mu^0_3<\delta$, we have
\begin{align}
\ds\lim_{N\to\infty}N\mu^N_3=\ds\frac{1}{\rho}.\label{eqn:convergence1overrho}
\end{align}
\end{lemma}
{\bf Proof:} From (\ref{eqn:mu3zenkashiki}),
\begin{align}
\ds\frac{1}{\mu^{l+1}_3}-\ds\frac{1}{\mu^l_3}&=\ds\frac{1}{\mu^l_3-\rho\left(\mu^l_3\right)^2+o\left((\mu^l_3)^2\right)}-\ds\frac{1}{\mu^l_3}\\[2mm]
&=\ds\frac{\rho+o\left((\mu^l_3)^2\right)/(\mu^l_3)^2}{1-\rho\mu^l_3+o\left((\mu^l_3)^2\right)/|\mu^l_3|},\label{eqn:theorem3-1}
\end{align}
hence taking the arithmetic mean of both sides of (\ref{eqn:theorem3-1}) for $l=0,1,\cdots,N-1$,
\begin{align}
\ds\frac{1}{N}\ds\sum_{l=0}^{N-1}\left(\ds\frac{1}{\mu^{l+1}_3}-\ds\frac{1}{\mu^l_3}\right)=\ds\frac{1}{N}\ds\sum_{l=0}^{N-1}\ds\frac{\rho+o\left((\mu^l_3)^2\right)/(\mu^l_3)^2}{1-\rho\mu^l_3+o\left((\mu^l_3)^2\right)/|\mu^l_3|}.\label{eqn:therem3-2}
\end{align}
Applying the proposition that ``the arithmetic mean of a convergent sequence converges to the same limit as the original sequence'' \cite{ahl},\,p.37, to the right hand side of (\ref{eqn:therem3-2}), and further, by Lemma \ref{lem:5},
\begin{align*}
\lim_{N\to\infty}\ds\frac{1}{N}\left(\ds\frac{1}{\mu^N_3}-\ds\frac{1}{\mu^0_3}\right)&=\lim_{N\to\infty}\ds\frac{\rho+o\left((\mu^N_3)^2\right)/(\mu^N_3)^2}{1-\rho\mu^N_3+o\left((\mu^N_3)^2\right)/|\mu^N_3|}\\
&=\rho,
\end{align*}
which proves (\ref{eqn:convergence1overrho}).\hfill$\blacksquare$
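The $1/N$ behavior in Lemma \ref{lem:6} is easy to check numerically. The following sketch iterates the model recurrence of (\ref{eqn:mu3zenkashiki}) with the $o(\cdot)$ term dropped; the value of $\rho$ is illustrative (close to that of Example \ref{exa:6}), not taken from a particular channel:

```python
# Model recurrence mu_{N+1} = mu_N - rho * mu_N^2 (the o() term dropped).
# rho is an illustrative value; here 1/rho = 1.00503...
rho = 0.995
mu = 0.01           # initial value with 0 < mu^0_3 < delta
N = 200000
for _ in range(N):
    mu = mu - rho * mu * mu
print(N * mu)       # close to 1/rho = 1.005 (the approach is slow, O(log N / N))
```

The product $N\mu^N$ creeps toward $1/\rho$ with a logarithmic correction, which is why large $N$ is needed to see the limit clearly.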
\medskip
From (\ref{eqn:mun1}),\,(\ref{eqn:mun2}) and Lemma \ref{lem:6}, we have
\begin{theorem}
\label{the:7}
Let $m=3$ and $n$ be arbitrary. Suppose that the capacity achieving $\bm\lambda^\ast=(\lambda^\ast_1,\lambda^\ast_2,\lambda^\ast_3)$ satisfies $\lambda^\ast_1>0,\lambda^\ast_2>0,\lambda^\ast_3=0$ and $D^\ast_3=D(P^3\|\bm\lambda^\ast\Phi)=C$ $($see the case {\rm (ii)} at the first part of section $\ref{sec:m3narbitray})$, and further, $\rho>0$ in $(\ref{eqn:rhopositiveassumption})$. Then for $\bm\mu^N=\bm\lambda^N-\bm\lambda^\ast$ with $\bm\mu^N=(\mu^N_1,\mu^N_2,\mu^N_3)$, the convergence $\bm\mu^N\to\bm0$ is of the $1/N$ order and we have
\begin{align}
&\ds\lim_{N\to\infty}N\mu^N_1=-\ds\frac{b_1}{\rho},\\[2mm]
&\ds\lim_{N\to\infty}N\mu^N_2=-\ds\frac{b_2}{\rho},\\[2mm]
&\ds\lim_{N\to\infty}N\mu^N_3=\ds\frac{1}{\rho},
\end{align}
where
$b_1=\ds\frac{\lambda^\ast_1(D^\ast_{1,2}-D^\ast_{1,3})}{1+D^\ast_{1,2}}$,\ $b_2=\ds\frac{\lambda^\ast_2(D^\ast_{1,2}-D^\ast_{2,3})}{1+D^\ast_{1,2}}$,\\[2mm]
$\rho=\lambda^\ast_1\ds\frac{D^\ast_{1,3}(D^\ast_{1,2}-D^\ast_{1,3})}{1+D^\ast_{1,2}}+\lambda^\ast_2\ds\frac{D^\ast_{2,3}(D^\ast_{1,2}-D^\ast_{2,3})}{1+D^\ast_{1,2}}-D^\ast_{3,3}$,\ and $D^\ast_{i',i}$ was defined by $(\ref{eqn:Diidefinition}),\,(\ref{eqn:lem3-2})$.
\end{theorem}
\subsection{Summary of Section \ref{sec:m3narbitray}}
We examined in this section the convergence speed of the Arimoto algorithm in the case that $m=3$ and $n$ is arbitrary. Based on the exhaustive classification (i), (ii), (iii) given at the first part of section \ref{sec:m3narbitray}, the convergence is exponential in cases (i) and (iii), and of the $1/N$ order in case (ii) under the assumption $\rho>0$. In case (ii), a type II index in (\ref{eqn:Kuhn-Tucker2}) exists; therefore, under the assumption $\rho>0$, we obtain the following equivalence;
\medskip
type II index exists $\Longleftrightarrow\theta_{\rm max}=1$ $\Longleftrightarrow$ the convergence is of the $1/N$ order
\medskip
\noindent We conjecture that the same equivalence holds also in the case $m>3$.
\section{Numerical Evaluation}
Based on the analysis in the previous sections, we will evaluate numerically the convergence speed of the Arimoto algorithm for several channel matrices with $m=n=3$.
In Examples \ref{exa:4} and \ref{exa:5} below, we will investigate the exponential convergence in the case (i) in section \ref{sec:m3narbitray}, where the capacity achieving $\bm\lambda^\ast$ is in $\Delta({\cal X})^\circ$ (the interior of $\Delta({\cal X})$). In Example \ref{exa:5}, we will discuss how the convergence speed varies depending on the choice of initial input distribution $\bm\lambda^0$. Next, in Examples \ref{exa:6} and \ref{exa:7}, we will consider the $1/N$ order convergence in the case (ii). It will be confirmed that the convergence speed is accurately approximated by the limit values obtained in Theorem \ref{the:7}. In Example \ref{exa:8}, we will investigate the exponential convergence in the case (iii), where $\bm\lambda^\ast$ is on $\partial\Delta({\cal X})$ (the boundary of $\Delta({\cal X})$).
Here, in the exponential convergence, we will evaluate the values of the function
\begin{align}
L(N)\equiv-\ds\frac{1}{N}\log\|\bm\mu^N\|.
\end{align}
Based on the results of Theorem \ref{the:5}, i.e., $\|\bm\mu^N\|=\|\bm\lambda^N-\bm\lambda^\ast\|<K\theta^N$, $\theta\doteqdot\theta_{\rm max}$, we will compare $L(N)$ for large $N$ with $-\log\theta_{\rm max}$ or other values.
On the other hand, in the $1/N$ order convergence, we will evaluate
\begin{align}
N\bm\mu^N=(N\mu^N_1,N\mu^N_2,N\mu^N_3).
\end{align}
We will compare $N\bm\mu^N$ for large $N$ with the limit values obtained in Theorem \ref{the:7}.
\subsection{Case (i): exponential convergence where $\bm\lambda^\ast\in\Delta({\cal X})^\circ$}
\begin{example}
\label{exa:4}
\rm Consider the channel matrix $\Phi^{(1)}$ of (\ref{eqn:Phi1}), i.e.,
\begin{align}
\Phi^{(1)}=
\begin{pmatrix}
0.800 & 0.100 & 0.100\\
0.100 & 0.800 & 0.100\\
0.250 & 0.250 & 0.500
\end{pmatrix}.
\end{align}
We have
\begin{align}
\bm\lambda^\ast&=(0.431,0.431,0.138),\\
Q^\ast&=(0.422,0.422,0.156),\\
J(\bm\lambda^\ast)&=
\begin{pmatrix}
\,0.308 & -0.191 & -0.117\,\cr
\,-0.191 & 0.308 & -0.117\,\cr
\,-0.369 & -0.369 & 0.738\,\cr
\end{pmatrix}.
\end{align}
The eigenvalues of $J(\bm\lambda^\ast)$ are $(\theta_1,\theta_2,\theta_3)=(0.000,0.500,0.855)$. Then, $\theta_{\rm max}=\theta_3=0.855$. If we choose $\bm\lambda^0=(1/3,1/3,1/3)$ as an initial distribution, then for $N=500$,
\begin{align}
L(500)=0.161\doteqdot-\log\theta_{\rm max}=0.157.\label{eqn:example4speedcomparison}
\end{align}
See Fig.\ref{fig:7}.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=8cm]{./absconv_c-0.00_2_3.eps}
\put(95.5,29){$\leftarrow-\log\theta_{\rm max}$}
\put(103,23){$=0.157$}
\put(51,-4){$N$}
\put(26,40){$L(N)\ \text{\rm with}\,\bm\lambda^0=(1/3,1/3,1/3)$}
\end{overpic}
\caption{Convergence of $L(N)$ in Example \ref{exa:4} with initial distribution $\bm\lambda^0=(1/3,1/3,1/3)$}
\label{fig:7}
\end{center}
\end{figure}
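The quoted eigenvalues and the comparison value $-\log\theta_{\rm max}$ can be reproduced from the rounded matrix entries above; a minimal sketch:

```python
import numpy as np

# Jacobian matrix J(lambda*) of Example 4 (rounded entries from the text)
J = np.array([[ 0.308, -0.191, -0.117],
              [-0.191,  0.308, -0.117],
              [-0.369, -0.369,  0.738]])
theta = np.sort(np.linalg.eigvals(J).real)
theta_max = float(theta[-1])
print(theta)                 # ≈ [0.000, 0.500, 0.855]
print(-np.log(theta_max))    # ≈ 0.157, the asymptote of L(N) in Fig. 7
```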
\end{example}
\begin{example}
\label{exa:5}
\rm Let us consider another channel matrix. Define
\begin{align}
\Phi^{(4)}\equiv
\begin{pmatrix}
\,0.793 & 0.196 & 0.011\,\\
0.196 & 0.793 & 0.011 \\
0.250 & 0.250 & 0.500
\end{pmatrix}.
\end{align}
We have
\begin{align}
\bm\lambda^\ast&=(0.352,0.352,0.296),\\
Q^\ast&=(0.422,0.422,0.156),\\
J(\bm\lambda^\ast)&=
\begin{pmatrix}
0.443 & -0.260 & -0.183\,\cr
-0.260 & 0.443 & -0.183\cr
\,-0.218 & -0.218 & 0.436
\end{pmatrix}.\label{eqn:example5J}
\end{align}
The eigenvalues of $J(\bm\lambda^\ast)$ are $(\theta_1,\theta_2,\theta_3)=(0.000,0.618,0.702)$. Then, $\theta_{\rm max}=\theta_3=0.702$. We write the second largest eigenvalue as $\theta_{\rm sec}$; thus $\theta_{\rm sec}=\theta_2=0.618$.
We show in Fig.\ref{fig:8} the graph of $L(N)$ with initial distribution $\bar{\bm\lambda}^0\equiv(1/3,1/3,1/3)$ by a solid line, and the graph with initial distribution $\bar{\bar{\bm\lambda}}^0\equiv(1/2,1/3,1/6)$ by a dotted line.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=8cm]{./example5_3.eps}
\put(27,53){$L(N)\ \text{\rm with}\,\bar{\bm\lambda}^0=(1/3,1/3,1/3)$}
\put(25,30){$L(N)\ \text{\rm with}\,\bar{\bar{\bm\lambda}}^0=(1/2,1/3,1/6)$}
\put(95.5,47){$\leftarrow-\log\theta_{\text{\rm sec}}$}
\put(102.5,41.5){$=0.481$}
\put(95.5,35.8){$\leftarrow-\log\theta_{\rm max}$}
\put(102.5,30.3){$=0.353$}
\put(51,-4){$N$}
\end{overpic}
\caption{Convergence of $L(N)$ in Example \ref{exa:5} with initial distributions $\bar{\bm\lambda}^0=(1/3,1/3,1/3)$ and $\bar{\bar{\bm\lambda}}^0=(1/2,1/3,1/6)$}
\label{fig:8}
\end{center}
\end{figure}
The larger $L(N)$ is, the faster the convergence; hence the convergence with $\bar{\bm\lambda}^0$ is faster than that with $\bar{\bar{\bm\lambda}}^0$. The convergence speed thus varies depending on the choice of initial distribution. What kind of initial distribution yields faster convergence? We will investigate this below.
First, we describe the initial vector by $\bm\mu$ rather than by $\bm\lambda$, and define
\begin{align}
\bar{\bm\mu}^0&\equiv\bar{\bm\lambda}^0-\bm\lambda^\ast=(-0.019,-0.019,0.038),\label{eqn:initialmu0}\\
\bar{\bar{\bm\mu}}^0&\equiv\bar{\bar{\bm\lambda}}^0-\bm\lambda^\ast=(0.148,-0.019,-0.129).\label{eqn:initialbarmu0}
\end{align}
Similarly to Remark \ref{rem:5}, we will execute the following calculation by regarding $\bm\mu^{N+1}=\bm\mu^NJ(\bm\lambda^\ast),\,N=0,1,\cdots$, as holding exactly.
Here, we will investigate the problem for general $m,n$. We assume for simplicity that all the eigenvalues of $J(\bm\lambda^\ast)$ are distinct. Let $\bm\nu_{\rm max}$ be the left eigenvector of $J(\bm\lambda^\ast)$ for $\theta_{\rm max}$, and let $\bm\nu_{\rm max}^\perp$ be the orthogonal complement of $\bm\nu_{\rm max}$, i.e., $\bm\nu_{\rm max}^\perp\equiv\{\bm\mu\,|\,\bm\mu{^t}\bm\nu_{\rm max}=0\}.$
\begin{lemma}
\label{lem:thetasec}
If
\begin{align}
\bm\mu^N\in\bm\nu_{\rm max}^\perp,\,N=0,1,\cdots,\label{eqn:innuperp}
\end{align}
then $\|\bm\mu^N\|<K\left(\theta_{\rm sec}\right)^N,\,K>0,\,N=0,1,\cdots$.
\end{lemma}
{\bf Proof:}
See Appendix \ref{sec:proofofthetasec}.\hfill$\blacksquare$
Because $\theta_{\text{\rm sec}}<\theta_{\rm max}$, if (\ref{eqn:innuperp}) holds, then by Lemma \ref{lem:thetasec} the convergence is faster than the rate $\theta_{\rm max}$ would give. The next lemma gives a necessary and sufficient condition for guaranteeing (\ref{eqn:innuperp}).
\begin{lemma}
\label{lem:migikoyuuvector}
A necessary and sufficient condition for $\bm\mu J(\bm\lambda^\ast)\in\bm\nu_{\rm max}^\perp$ to hold for any $\bm\mu\in\bm\nu_{\rm max}^\perp$ is that ${^t}\bm\nu_{\rm max}$ is a right eigenvector for $\theta_{\rm max}$.
\end{lemma}
{\bf Proof:} See Appendix \ref{sec:proofofmigikoyuuvector}.\hfill$\blacksquare$
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=8.8cm]{./chart_3.eps}
\put(8,91){Calculate the left eigenvector $\bm\nu_{\rm max}$ for}
\put(8,86){the maximum eigenvalue $\theta_{\rm max}$ of $J(\bm\lambda^\ast)$.}
\put(22,60.5){Is ${^t}\bm\nu_{\rm max}$ a right eigenvector}
\put(22,55.5){for $\theta_{\rm max}$?}
\put(25,25){Does the initial vector}
\put(25,20){$\bm\mu^0$ satisfy $\bm\mu^0{^t}\bm\nu_{\rm max}=0$?}
\put(90,62){No}
\put(90,26){No}
\put(50,6){Yes}
\put(50,40){Yes}
\put(102,59){$\theta_{\rm max}$}
\put(102,22){$\theta_{\rm max}$}
\put(44,-5){$\theta_{\rm sec}$}
\end{overpic}\\[5mm]
\caption{Flow chart for determining the exponential convergence speed}
\label{fig:9}
\end{center}
\end{figure}
If ${^t}\bm\nu_{\rm max}$ is a right eigenvector, then by Lemma \ref{lem:migikoyuuvector}, any $\bm\mu^0\in\bm\nu_{\rm max}^\perp$ yields (\ref{eqn:innuperp}), hence the convergence becomes faster. We will show in the flow chart in Fig.\ref{fig:9} how the convergence speed depends on the choice of initial vector.
Now, we will evaluate the convergence speed for the initial vectors (\ref{eqn:initialmu0}),\,(\ref{eqn:initialbarmu0}) by applying the flow chart. For $J(\bm\lambda^\ast)$ in (\ref{eqn:example5J}), $\theta_{\rm max}=0.702$ and $\theta_{\rm sec}=0.618$. The left eigenvector for $\theta_{\rm max}$ is $\bm\nu_{\rm max}=(-0.500,0.500,0.000)$. We can confirm that ${^t}\bm\nu_{\rm max}$ is a right eigenvector for $\theta_{\rm max}$ and $\bar{\bm\mu}^0{^t}\bm\nu_{\rm max}=0$, thus in Fig.\ref{fig:9} the answers are Yes-Yes, so we reach $\theta_{\rm sec}$. Then by the solid line in Fig.\ref{fig:8}, for $N=500$, we have
\begin{align}
L(500)=0.489\doteqdot-\log\theta_{\rm sec}=0.481.
\end{align}
On the other hand, we have $\bar{\bar{\bm\mu}}^0{^t}\bm\nu_{\rm max}\neq0$, thus the answers are Yes-No, so we reach $\theta_{\rm max}$. Then by the dotted line, for $N=500$, we have
\begin{align}
L(500)=0.360\doteqdot-\log\theta_{\rm max}=0.353.
\end{align}
Checking Example \ref{exa:4} in this way, we see that $\bm\nu_{\rm max}=(-0.431,-0.431,0.862)$ is a left eigenvector for $\theta_{\rm max}=0.855$, but ${^t}\bm\nu_{\rm max}$ is not a right eigenvector. Thus the first answer is No, so we reach $\theta_{\rm max}$ and obtain (\ref{eqn:example4speedcomparison}).
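The flow-chart checks of this example can be reproduced numerically from the rounded entries of (\ref{eqn:example5J}); a sketch (eigenvector signs and scaling follow NumPy's conventions):

```python
import numpy as np

# Jacobian matrix J(lambda*) of Example 5 (rounded entries from the text)
J = np.array([[ 0.443, -0.260, -0.183],
              [-0.260,  0.443, -0.183],
              [-0.218, -0.218,  0.436]])

w, V = np.linalg.eig(J.T)          # eigenvectors of J^T = left eigenvectors of J
i = int(np.argmax(w.real))
theta_max = float(w.real[i])       # ≈ 0.702
nu_max = V[:, i].real              # proportional to (-0.500, 0.500, 0.000)

# ^t nu_max is also a right eigenvector for theta_max:
print(np.linalg.norm(J @ nu_max - theta_max * nu_max))   # ≈ 0

# mu0 for lambda0 = (1/3, 1/3, 1/3) is orthogonal to nu_max (answers Yes-Yes):
mu0 = np.array([1/3, 1/3, 1/3]) - np.array([0.352, 0.352, 0.296])
print(mu0 @ nu_max)                # ≈ 0, so the observed rate is theta_sec
```

For $\bar{\bar{\bm\mu}}^0$ of (\ref{eqn:initialbarmu0}) the inner product is nonzero, so that path of the flow chart ends at $\theta_{\rm max}$ instead.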
\end{example}
\subsection{Case (ii): convergence of the $1/N$ order}
\begin{example}
\label{exa:6}
\rm Consider the channel matrix $\Phi^{(2)}$ of (\ref{eqn:Phi2}), i.e.,
\begin{align}
\Phi^{(2)}=\begin{pmatrix}
\,0.800 & 0.100 & 0.100\,\\
\,0.100 & 0.800 & 0.100\,\\
\,0.300 & 0.300 & 0.400\,
\end{pmatrix}.
\end{align}
We have
\begin{align}
\bm\lambda^\ast&=(0.500,0.500,0.000),\\
Q^\ast&=(0.450,0.450,0.100),\\
J(\bm\lambda^\ast)&=\begin{pmatrix}\,0.228 & -0.228 & 0.000\,\\\,-0.228 & 0.228 & 0.000\,\\\,-0.500 & -0.500 & 1.000\,\end{pmatrix},\\
H_3(\bm\lambda^\ast)&=\begin{pmatrix}\,0.000 & 0.000 & -1.000\,\\0.000 & 0.000 & -1.000\\\,-1.000 & -1.000 & -3.990\,\end{pmatrix}.
\end{align}
The eigenvalues of $J(\bm\lambda^\ast)$ are $(\theta_1,\theta_2,\theta_3)=(0.000,0.456,1.000)$.
We have $N\bm\mu^N$ for $N=500$ as
\begin{align}
&N\bm\mu^N=(-0.510,-0.510,1.019)\\
&\doteqdot\ds\lim_{N\to\infty}N\bm\mu^N=(-0.503,-0.503,1.005).\label{eqn:rironchiexample6}
\end{align}
The limit (\ref{eqn:rironchiexample6}) is obtained by Theorem \ref{the:7}. See Fig.\ref{fig:10}. We can confirm that $N\bm\mu^N$ for large $N$ is close to the limit value in Theorem \ref{the:7}.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=8cm]{./example6_3.eps}
\put(95,73.5){$\leftarrow 1/\rho$}
\put(102,68){$=1.005$}
\put(95,22){$\leftarrow$}
\put(101,22){$-b_1/\rho$}
\put(101,16.5){$=-b_2/\rho$}
\put(101,11){$=-0.503$}
\put(51,-4){$N$}
\put(45,80){$N\mu^N_3$}
\put(43,27){$N\mu^N_1=N\mu^N_2$}
\end{overpic}
\caption{Convergence of $N\mu^N_i$ in Example \ref{exa:6}}
\label{fig:10}
\end{center}
\end{figure}
\end{example}
\begin{example}
\label{exa:7}
\rm We will examine another example of slow convergence. Consider the channel matrix
\begin{align}
\Phi^{(5)}\equiv\begin{pmatrix}\,0.720 & 0.215 & 0.065\,\\\,0.013 & 0.431 & 0.556\,\\\,0.250 & 0.700 & 0.050\,\end{pmatrix}.
\end{align}
We have
\begin{align}
\bm\lambda^\ast&=(0.453,0.547,0.000),\\
Q^\ast&=(0.333,0.333,0.334),\\
J(\bm\lambda^\ast)&=\begin{pmatrix}\,0.227 & -0.227 & 0.000\,\\\,-0.188 & 0.188 & 0.000\,\\\,-0.453 & -0.547 & 1.000\,\end{pmatrix},\\
H_3(\bm\lambda^\ast)&=\begin{pmatrix}\,0.000 & 0.000 & -1.000\,\\\,0.000 & 0.000 & -1.000\,\\\,-1.000 & -1.000 & -3.330\,\end{pmatrix}.
\end{align}
The eigenvalues of $J(\bm\lambda^\ast)$ are $(\theta_1,\theta_2,\theta_3)=(0.000,0.416,1.000)$.
We have $N\bm\mu^N$ for $N=500$ as
\begin{align}
&N\bm\mu^N=(-0.684,-0.825,1.509)\\
&\doteqdot\ds\lim_{N\to\infty}N\bm\mu^N=(-0.682,-0.822,1.504).\label{eqn:rironchiexample7}
\end{align}
See Fig.\ref{fig:11}. We can confirm that $N\bm\mu^N$ for large $N$ is close to the limit value in Theorem \ref{the:7}.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=8cm]{./conv-1_3.eps}
\put(94.5,69.4){$\leftarrow 1/\rho$}
\put(102,64){$=1.504$}
\put(94.5,23.5){\rotatebox{60}{$\leftarrow$}}
\put(99,29){$-b_1/\rho$}
\put(99.5,23.5){$=-0.682$}
\put(94.1,20.3){\rotatebox{-60}{$\leftarrow$}}
\put(99,14){$-b_2/\rho$}
\put(99,8.5){$=-0.822$}
\put(51,-2){$N$}
\put(45,73){$N\mu^N_3$}
\put(45,27){$N\mu^N_1$}
\put(45,15){$N\mu^N_2$}
\end{overpic}
\caption{Convergence of $N\bm\mu^N$ in Example \ref{exa:7}}
\label{fig:11}
\end{center}
\end{figure}
\end{example}
\subsection{Case (iii): exponential convergence where $\bm\lambda^\ast\in\partial\Delta({\cal X})$}
\begin{example}
\label{exa:8}
\rm Consider the channel matrix $\Phi^{(3)}$ of (\ref{eqn:Phi3}), i.e.,
\begin{align}
\Phi^{(3)}=\begin{pmatrix}
\,0.800 & 0.100 & 0.100\,\\
\,0.100 & 0.800 & 0.100\,\\
\,0.350 & 0.350 & 0.300\,
\end{pmatrix}.
\end{align}
We have
\begin{align}
\bm\lambda^\ast&=(0.500,0.500,0.000),\\
Q^\ast&=(0.450,0.450,0.100),\\
J(\bm\lambda^\ast)&=
\begin{pmatrix}
\,0.228 & -0.228 & 0.000\,\cr
\,-0.228 & 0.228 & 0.000\,\cr
\,-0.428 & -0.428 & 0.856\,\cr
\end{pmatrix}.\label{eqn:example8Jacobimatrix}
\end{align}
The eigenvalues of $J(\bm\lambda^\ast)$ are $(\theta_1,\theta_2,\theta_3)=(0.000,0.456,0.856)$. Then, $\theta_{\rm max}=\theta_3=0.856$. With initial distribution $\bm\lambda^0=(1/3,1/3,1/3)$, we have for $N=500$
\begin{align}
L(500)=0.159\doteqdot-\log\theta_{\rm max}=0.155.
\end{align}
See Fig.\ref{fig:12}.
\begin{figure}[t]
\begin{center}
\begin{overpic}[width=8cm]{./example8_3.eps}
\put(95.5,32.5){$\leftarrow-\log\theta_{\rm max}$}
\put(103,26.5){$=0.155$}
\put(51,-4){$N$}
\put(27,43){$L(N)\ \text{\rm with}\,\bm\lambda^0=(1/3,1/3,1/3)$}
\end{overpic}
\caption{Convergence of $L(N)$ in Example \ref{exa:8} with initial distribution $\bm\lambda^0=(1/3,1/3,1/3)$}
\label{fig:12}
\end{center}
\end{figure}
We are here dealing with the exponential convergence in the case (iii) of section \ref{sec:m3narbitray}. In (iii), the Jacobian matrix $J(\bm\lambda^\ast)$ is given by (\ref{eqn:3by3Jacobimatrix3}). Let us consider the (3,3) component $J^{\rm III}=e^{D^\ast_3-C}$ of $J(\bm\lambda^\ast)$ in (\ref{eqn:3by3Jacobimatrix3}), where $0<J^{\rm III}<1$. Putting $\bm{e}_3=(0,0,1)$, we have $J(\bm\lambda^\ast){^t}\bm{e}_3=J^{\rm III}\,{^t}\bm{e}_3$, so $J^{\rm III}$ is an eigenvalue of $J(\bm\lambda^\ast)$ and ${^t}\bm{e}_3$ is a right eigenvector. On the other hand, $\bm{e}_3$ is not a left eigenvector for $J^{\rm III}$. In fact, since every row sum of $J(\bm\lambda^\ast)$ is equal to 0 by Lemma \ref{lem:rowsumofJis0}, putting $\bm{1}=(1,1,1)$, we have $J(\bm\lambda^\ast){^t}\bm{1}=\bm{0}$. Thus, if $\bm{e}_3$ were a left eigenvector for $J^{\rm III}$, then $0=\bm{e}_3J(\bm\lambda^\ast){^t}\bm{1}=J^{\rm III}\bm{e}_3{^t}\bm{1}=J^{\rm III}>0$, a contradiction. Therefore, if $J^{\rm III}=\theta_{\rm max}$, i.e., the maximum of the eigenvalues is achieved in ${\cal I}_{\rm III}$ and not in ${\cal I}_{\rm I}$, then by Lemma \ref{lem:migikoyuuvector} or the flow chart in Fig.\ref{fig:9}, we have $L(N)\doteqdot-\log\theta_{\rm max}$ for large $N$. The Jacobian matrix of (\ref{eqn:example8Jacobimatrix}) satisfies $J^{\rm III}=\theta_{\rm max}$.
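The three facts used above (that ${^t}\bm{e}_3$ is a right eigenvector for $J^{\rm III}=0.856$, that $\bm{e}_3$ is not a left eigenvector, and that every row sum vanishes) can be checked directly on the rounded matrix:

```python
import numpy as np

# J(lambda*) of (eqn:example8Jacobimatrix); J_III = 0.856 is its (3,3) component
J = np.array([[ 0.228, -0.228, 0.000],
              [-0.228,  0.228, 0.000],
              [-0.428, -0.428, 0.856]])
e3 = np.array([0.0, 0.0, 1.0])

print(J @ e3)           # = 0.856 * e3: t(e3) is a right eigenvector for J_III
print(e3 @ J)           # = (-0.428, -0.428, 0.856): e3 is NOT a left eigenvector
print(J.sum(axis=1))    # every row sum is 0, as the row-sum lemma asserts
```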
\end{example}
\section{Conclusion}
In this paper, we investigated the convergence speed of the Arimoto algorithm. First, we noted that the defining function $F(\bm\lambda)$ of the Arimoto algorithm is a differentiable mapping from the set $\Delta({\cal X})$ of all input distributions into itself. We showed that the capacity achieving input distribution $\bm\lambda^\ast$ is the fixed point of $F(\bm\lambda)$, and analyzed the convergence speed by the Taylor expansion of $F(\bm\lambda)$ about $\bm\lambda=\bm\lambda^\ast$. We concretely calculated the Jacobian matrix $J$ of the first order term of the Taylor expansion and the Hessian matrix $H$ of the second order term. We clarified that if the maximum eigenvalue $\theta_{\rm max}$ of $J(\bm\lambda^\ast)$ satisfies $0\leq\theta_{\rm max}<1$, then the convergence is exponential. Further, we investigated in detail the case where the input alphabet size is $m=3$ and the output alphabet size $n$ is arbitrary. We proved that, under the assumption $\rho>0$, where $\rho$ was defined in (\ref{eqn:rhodefinition}), the following three conditions are equivalent: a type II index in (\ref{eqn:Kuhn-Tucker2}) exists, $\theta_{\rm max}=1$, and the convergence is of the $1/N$ order. In this case, we determined the convergence speed by the derivatives of the Kullback-Leibler divergence with respect to the input probabilities. The analysis of the $1/N$-order convergence by means of the Hessian matrix $H$ was carried out for the first time in this paper.
Based on these analyses, the convergence speeds for several channel matrices were numerically evaluated. As a result, it was confirmed that the convergence speed of the Arimoto algorithm is very accurately approximated by the theoretical values obtained from our theorems.
\newpage
\section{Introduction}
Pulse timing is a common problem in high energy physics \cite{grzywacz2003applications}, optics \cite{samain1998timing}, telecommunication \cite{han2017timing} and many other applied physics disciplines. Among feasible methods, fast electronic readout systems provide a cost-effective and robust solution with relatively high timing resolution. In many engineering circumstances, we care more about availability and practicality than about raw technical specifications, and electronic timing systems are usually good candidates for these applications.
In high energy physics, accurate timing, along with energy and position information, is needed to reconstruct collision events so as to discriminate against backgrounds \cite{alice2014performance} and identify phenomena of interest \cite{aad2012observation}. Several kinds of detectors can provide the time information. For example, Time-of-Flight detectors can measure the time of incoming events directly; Time Projection Chambers (TPC) and calorimeters can measure the pulse signal and infer the time afterwards; silicon detectors and pixel sensors can measure the hit information and offer an auxiliary time stamp; and so on. The final reconstructed event is a combination and coincidence of multiple detector sources.
There are two major branches of timing systems: systems based on Analog-to-Digital Converters (ADC) and systems based on Time-to-Digital Converters (TDC). In general, TDC-based systems are specialized in time measurement and can achieve a precision of tens of picoseconds \cite{antonioli200320} when configured properly. In spite of their high precision, the major drawback of TDC-based systems is that they lack amplitude information, which is critical in some applications. If both time and amplitude are of interest, ADC-based systems are good alternatives to TDC-based systems. The empirical timing precision of ADC-based systems is on the order of nanoseconds.
For ADC-based systems, a typical work flow can be described as follows. The original signal from TPCs or calorimeters is preprocessed by Charge Sensitive pre-Amplifiers (CSA) to get a step-like signal. Afterwards, this signal is fed to Front-End Electronics (FEE). The signal conditioning on the FEE board includes buffering, amplifying and bandpass filtering by CR-RCn shapers. Finally, the signal is sampled by ADCs with the prescribed precision and data depth. The recorded ADC samples can serve multiple purposes. For a classification task, the shaped pulse signal can be used to discriminate between particles or physical events \cite{mauri2018pulse, mahata2018particle, akerib2018liquid, ashida2018separation}. For a regression task, timing or other pulse information is extracted from the digitized pulse signal \cite{kaspar2017design}.
To obtain the time from a finite set of ADC samples, we can choose a fitting function and perform curve fitting to obtain estimates of the underlying parameters. Curve fitting is a standard inference method in the time domain and shows promising properties under certain conditions (see section \ref{sec:adv-curve-fitting}). However, its applicability and accuracy rely heavily on the fitting function and on idealized noise assumptions. As a result, the actual performance of curve fitting is limited by the experimental conditions of ADC-based systems \cite{fish2017electronic}.
Recently, \emph{deep learning} \cite{lecun2015deep}, a resurgent machine learning technique, has progressed rapidly. It has been successfully used for particle/event discrimination and identification at the pulse level \cite{griffiths2018pulse}, the pixel level \cite{adams2018deep} and the voxel (three-dimensional) level \cite{ai2018three}. In view of the fact that neural networks are applicable to classification tasks as well as regression tasks, it is meaningful to explore the capability of deep learning in the above-mentioned pulse timing problem.
In this paper, we mainly discuss the deep learning approach to pulse timing based on a comparison between curve fitting and the proposed method. Section \ref{sec:alice-phos} briefly introduces the project background and the mathematical form of the researched pulse. Section \ref{sec:curve-fitting} explains the traditional curve fitting method by theoretical analysis and simulation studies. Section \ref{sec:deep-learning} gives a comparative study and the details of the new approach of deep learning. Section \ref{sec:exp-results} discusses the experiments we conduct and shows the experimental results. Finally, a conclusion is drawn in section \ref{sec:conclusion}.
\section{ALICE PHOS electronics}
\label{sec:alice-phos}
The ALICE PHOS detectors \cite{muller2006config} refer to the Photon Spectrometers designed for the ALICE experiment \cite{aamodt2008alice}. The detectors were produced in 2007 and scheduled for the first p+p collisions at LHC in 2008 \cite{torii2009phos}. The scintillator is made of lead tungstate crystals and mainly used to detect high energy photons (up to 80 GeV). An Avalanche Photo-Diode (APD) receives the scintillation and converts it to an electrical signal, which is applied to a CSA near the APD. The output of the CSA is connected to the FEE card via a flat cable.
The FEE card has 32 independent readout channels, each of which is connected to two shaper sections with high gain and low gain. The CR-RC2 signal shapers are made up of discrete components on a 12-layer Printed Circuit Board (PCB). For each channel, there are two overlapping 10-bit ADCs at the terminations of the two shapers, which give an equivalent dynamic range of 14 bits. The sampling rate of the ADCs is fixed to 10 MS/s. The same readout plan and PCB layout were adopted by the ALICE EMCal detectors \cite{fantoni2011emcal}, the ALICE Electromagnetic Calorimeters. The major difference in the FEE cards between PHOS and EMCal lies in the shaping time of the shapers. For PHOS, the designated shaping time is 1 $\mu$s; for EMCal, different resistors and capacitors are used to achieve a shaping time of 100 ns.
The CR-RC2 shaper is a bandpass filter in the frequency domain. In the time domain, its response to an ideal step signal can be formulated as the equation below:
\begin{equation} \label{equ:pulse}
f(t) = \begin{cases} K \left(\frac{t-t_0}{\tau_p}\right)^2 \cdot e^{-2 \cdot (\frac{t-t_0}{\tau_p})} + b, &\text{for } t \geq t_0 \\
b &\text{for } t < t_0 \end{cases}
\end{equation}
\noindent where $t_0$ is the start time, and $b$ is the pedestal. $K$ is originally defined as $\frac{2Q \cdot A^2}{C_f}$, a quantity related to the energy of the incoming photon, where $Q$ is the APD charge, $A$ is the shaper gain and $C_f$ is the charging capacitance of the CSA. In our simulations, without changing the nature of the problem, we use $K$ as a normalization factor for numerical purposes. $\tau_p$ is the peaking time, defined as the interval between the start of the semi-Gaussian pulse and the moment when $f(t)$ reaches its maximum value. The relation between the shaping time $\tau_0$ and the peaking time $\tau_p$ is $\tau_p = n \cdot \tau_0$. For the CR-RC2 shaper structure, $n$ equals 2, so the peaking times for PHOS and EMCal are 2 $\mu$s and 200 ns, respectively.
Since the CR-RCn shaper is representative for most applications in high energy physics, in the latter sections we center on the pulse function in equation \ref{equ:pulse} to discuss different timing methods.
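For reference, equation (\ref{equ:pulse}) can be transcribed directly; a minimal sketch with illustrative parameter values (a PHOS-like peaking time $\tau_p=2\ \mu$s, $t_0=1\ \mu$s, $K=1$, $b=0$):

```python
import numpy as np

def crrc2_pulse(t, t0=1.0, tau_p=2.0, K=1.0, b=0.0):
    """Semi-Gaussian CR-RC^2 response of equation (1); times in microseconds."""
    x = (t - t0) / tau_p
    return np.where(t >= t0, K * x**2 * np.exp(-2.0 * x) + b, b)

t = np.arange(0.0, 10.0, 0.1)      # 10 MS/s -> one sample every 0.1 us
y = crrc2_pulse(t)
print(t[np.argmax(y)])             # peak at t0 + tau_p = 3.0 us
print(y.max())                     # peak amplitude K * e^{-2} ≈ 0.135
```

The peak position $t_0+\tau_p$ follows from setting the derivative of $x^2e^{-2x}$ to zero at $x=1$.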
\section{Curve fitting method}
\label{sec:curve-fitting}
Curve fitting is a traditional model fitting technique mainly aimed at finding the parameterized mathematical relations between two or more variables. Classical linear curve fitting can be directly solved by the least squares method, and nonlinear curve fitting can be solved by the trust region and Levenberg-Marquardt methods \cite{marquardt1963algorithm}. In the pulse timing scenario, the main purpose of curve fitting is to determine the desired parameters related to the time information. In the following subsections, we analyze the curve fitting method in terms of its capability to reveal the ground-truth parameters under various conditions.
\subsection{Theoretical analysis}
\subsubsection{Summary and notations}
We consider the following nonlinear least squares problem:
\begin{equation}
\begin{aligned}
\text{minimize } & S \\
= \text{minimize } & \sum_{i=1}^n r_i^2 \\
= \text{minimize } & \sum_{i=1}^n \left[y_i - f(t_i; \bm{\beta}, \bm{\theta}) \right]^2
\end{aligned}
\end{equation}
\noindent where $S$ is the sum of squared residuals to minimize, $r_i$ is the $i$-th residual, $y_i$ is the $i$-th observed value (from the ADC), and $t_i$ is the $i$-th time value. There is some noise residing in the observed value $y_i$, and we denote this noise term as $n_i$. Besides, $\bm{\beta}$ denotes the \emph{fitting parameters} and $\bm{\theta}$ the \emph{system parameters}. The division into fitting parameters and system parameters is made according to our understanding of the problem and practical issues. It is not recommended to set two highly correlated parameters as fitting parameters at the same time, since this will destabilize the fitting process.
It should be noted that the above formulation is a general framework for the fitting problem. Usually we choose a function family $f(t; \bm{\beta}, \bm{\theta_m})$ for curve fitting. However, $f(t; \bm{\beta}, \bm{\theta_m})$ covers only a subset of the underlying possible functions $f(t; \bm{\beta}, \bm{\theta})$. We denote the reference fitting function as $f(t; \bm{\beta_0}, \bm{\theta_0})$ in section \ref{sec:adv-curve-fitting} and section \ref{sec:quan-analysis}.
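A minimal sketch of such a nonlinear fit, assuming SciPy is available and using the pulse shape of equation (\ref{equ:pulse}) with the peaking time held fixed (the noise level and parameter values are illustrative, not taken from the detector):

```python
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, t0, K, b):
    """CR-RC^2 template with the peaking time fixed to tau_p = 2 (arbitrary units)."""
    x = (t - t0) / 2.0
    return np.where(t >= t0, K * x**2 * np.exp(-2.0 * x) + b, b)

rng = np.random.default_rng(0)
t = np.arange(0.0, 10.0, 0.1)                                  # sampling grid
y = pulse(t, 1.0, 1.0, 0.1) + rng.normal(0.0, 0.005, t.size)   # Gaussian noise

popt, pcov = curve_fit(pulse, t, y, p0=[0.5, 0.8, 0.0])
print(popt)    # estimated (t0, K, b), close to the true (1.0, 1.0, 0.1)
```

Here $t_0$, $K$ and $b$ play the role of $\bm{\beta}$, while the fixed peaking time belongs to $\bm{\theta_m}$.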
\subsubsection{The advantage of curve fitting in the ideal condition}
\label{sec:adv-curve-fitting}
In this part, we assume that the selected fitting function is accurate (i.e. $\bm{\theta}$ is fixed to $\bm{\theta_0}$ and $\bm{\theta_m} = \bm{\theta_0}$), and the noise distribution is strictly Gaussian with a fixed variance $\sigma$. Under these assumptions, the distribution of the observed value can be written as:
\begin{equation}
y_i = f(t_i; \bm{\beta_0}, \bm{\theta_0}) + n_i \sim N\left( f(t_i; \bm{\beta_0}, \bm{\theta_0}), \sigma^2 \right)
\end{equation}
Since the Gaussian density is $P(x\,|\,\mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\sigma}\,e^{-(x-\mu)^2/(2\sigma^2)}$, the corresponding log-likelihood function is:
\begin{equation} \label{equ:log-likelihood}
\begin{aligned}
L(y_1, y_2, \ldots, y_n; \bm{\beta_0}, \bm{\theta_0}) & = \ln\prod_{i=1}^n P(y_i|f(t_i; \bm{\beta_0}, \bm{\theta_0}), \sigma^2) \\
& = -\frac{1}{2\sigma^2}\sum_{i=1}^n\left[y_i-f(t_i; \bm{\beta_0}, \bm{\theta_0})\right]^2 + const
\end{aligned}
\end{equation}
The equation \ref{equ:log-likelihood} implies that, in the ideal condition, minimizing the sum of squared residuals $S$ by curve fitting is equivalent to maximizing the log-likelihood function under the Gaussian noise model. In other words, curve fitting gives the \emph{maximum likelihood estimators} of the fitting parameters. This claim reveals the statistical properties of the curve fitting method. It is based on the hypothesis of Gaussian noise, which is a useful prior when our knowledge about the system is limited.
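The equivalence can also be seen numerically: scanning a grid of candidate $t_0$ values, the least-squares minimizer and the likelihood maximizer coincide by construction (a sketch with an assumed, known $\sigma$; template and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def pulse(t, t0):                        # simplified template: tau_p = 2, K = 1, b = 0
    x = np.clip((t - t0) / 2.0, 0.0, None)
    return x**2 * np.exp(-2.0 * x)

t = np.arange(0.0, 10.0, 0.1)
sigma = 0.01
y = pulse(t, 1.0) + rng.normal(0.0, sigma, t.size)

grid = np.arange(0.5, 1.5, 0.01)         # candidate start times t0
S = np.array([np.sum((y - pulse(t, g))**2) for g in grid])
logL = -S / (2.0 * sigma**2)             # log-likelihood up to an additive constant
print(grid[int(np.argmin(S))], grid[int(np.argmax(logL))])  # identical arguments
```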
\subsubsection{Quantitative analysis of drift, change and noise}
\label{sec:quan-analysis}
In reality, the assumptions in section \ref{sec:adv-curve-fitting} are usually not valid. Variations in the fitting function and the noise make the problem much more complicated. In this paper, we consider three types of variations which are representative in high energy physics:
\begin{enumerate}
\item \emph{Long-term drift}. This kind of variation refers to the deviation in the system parameters $\bm{\theta}$ after the circuit board is fabricated. It can also represent the persistent change between two calibration runs. It will affect the pulse function consistently so that the event-by-event characteristics stay the same for ADC sampling values.
\item \emph{Short-term change}. This kind of variation refers to the deviation in the system parameters $\bm{\theta}$ between two events. It will change according to the current status of the detector, but its effect is near-identical to all ADC sampling values in a single event. In other words, the event-by-event characteristics will change in the operation of the experiment.
\item \emph{Random noise}. This kind of variation refers to the randomized noise $n_i$ residing in the observed value $y_i$. It will vary between ADC samples in a single event. Since it is random, the actual value of the noise is not predictable. However, its statistical features can be determined in advance.
\end{enumerate}
Next, we will introduce these variations into the curve fitting. We only consider the variations that are near the reference point so that the fitting result will not be rejected by the fitting process (i.e. without increasing the chi-square criterion significantly). When the above variations are present, by using the first-order approximation we can formulate $y_i$ as:
\begin{equation} \label{equ:deviation-y}
y_i = f(t_i; \bm{\beta_0}, \bm{\theta_0}) + \sum_j \frac{\partial f(t_i; \bm{\beta_0}, \bm{\theta_0})}{\partial \theta_j}\Delta\theta_j + n_i
\end{equation}
Since we use the reference system parameters in the curve fitting, non-ideal $y_i$ will cause a change in the fitting parameters. By using the first-order approximation:
\begin{equation} \label{equ:deviation-ft}
f(t_i; \bm{\beta}, \bm{\theta_0}) = f(t_i; \bm{\beta_0}, \bm{\theta_0}) + \sum_j \frac{\partial f(t_i; \bm{\beta_0}, \bm{\theta_0})}{\partial \beta_j}\Delta\beta_j
\end{equation}
Curve fitting tries to minimize the sum of squared residuals by varying $\bm{\beta}$. By applying the first-order necessary condition for a minimum, we get the following equation:
\begin{equation} \label{equ:gradient_S}
\nabla_{\bm{\beta}}S = \nabla_{\bm{\beta}}\sum_{i=1}^n r_i^2 = \nabla_{\bm{\beta}}\left[ \sum_{i=1}^n \left(y_i - f(t_i; \bm{\beta}, \bm{\theta_0}) \right)^2 \right] = 0
\end{equation}
By substituting equation \ref{equ:deviation-y} and equation \ref{equ:deviation-ft} into equation \ref{equ:gradient_S} and solving the system of linear equations, we can get the following expression:
\begin{gather}
(\bm{J^T J})\bm{\Delta \beta} = \bm{J^T} (\bm{P \Delta\theta} + \bm{n}) \\
\text{where } J_{ij} = \frac{\partial f(t_i; \bm{\beta_0}, \bm{\theta_0})}{\partial \beta_j} \quad P_{ij} = \frac{\partial f(t_i; \bm{\beta_0}, \bm{\theta_0})}{\partial \theta_j} \nonumber
\end{gather}
If $\bm{J^T J}$ is nonsingular, the deviation in the fitting parameters can be solved by:
\begin{equation} \label{equ:dev-solve}
\bm{\Delta \beta} = (\bm{J^T J})^{-1} \bm{J^T} (\bm{P \Delta\theta} + \bm{n})
\end{equation}
Equation \ref{equ:dev-solve} generalizes linear curve fitting to the nonlinear case. It implies that, under first-order approximations, the deviation of the fitting parameters around the reference point depends linearly on the deviation of the system parameters and on the random noise.
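As an illustration, the sketch below evaluates equation \ref{equ:dev-solve} with numerical Jacobians and checks it against a direct nonlinear fit. The CR-RC$^2$ pulse shape used here is an assumption standing in for equation \ref{equ:pulse}; the split into $\bm{\beta} = (K, t_0)$ and $\bm{\theta} = (\tau_p, b)$ follows the next subsection.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.arange(33) * 0.1                      # 33 sampling points

def pulse(t, K, t0, tau_p, b):               # assumed CR-RC^2 shape
    x = np.clip((t - t0) / tau_p, 0.0, None)
    return K * x ** 2 * np.exp(-2.0 * x) + b

K0, t00, tau0, b0 = 5.12, 0.0, 2.0, 0.1
ref = (K0, t00, tau0, b0)

def col(i, h=1e-6):                          # central finite difference
    p1, p2 = list(ref), list(ref)
    p1[i] += h; p2[i] -= h
    return (pulse(t, *p1) - pulse(t, *p2)) / (2.0 * h)

J = np.stack([col(0), col(1)], axis=1)       # d f / d (K, t0)
P = np.stack([col(2), col(3)], axis=1)       # d f / d (tau_p, b)

dtheta = np.array([0.01, 0.002])             # small drift in (tau_p, b)
n = np.zeros(t.size)                         # noiseless, for clarity

# First-order prediction of the fitting-parameter deviation
dbeta = np.linalg.solve(J.T @ J, J.T @ (P @ dtheta + n))

# Direct curve fitting of the drifted pulse with the reference theta
y = pulse(t, K0, t00, tau0 + dtheta[0], b0 + dtheta[1])
fit = least_squares(lambda b: y - pulse(t, b[0], b[1], tau0, b0),
                    x0=[K0, t00])
assert np.allclose(dbeta, fit.x - np.array([K0, t00]), atol=2e-3)
```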
\subsection{Simulation studies}
\label{sec:curve-fitting-sim}
To demonstrate the accuracy of first-order approximations for our pulse function, we compare the results of evaluating equation \ref{equ:dev-solve} to the results of directly applying curve fitting. For the pulse function in equation \ref{equ:pulse}, we partition the parameters as follows, which avoids introducing a complicated function family:
\begin{equation}
\bm{\beta} = \{K, t_0 \}, \quad \bm{\theta} = \{\tau_p, b\}
\end{equation}
In the following simulations, we choose $K = 5.12$, $t_0 = 0.0$, $\tau_p = 2.0$, $b = 0.1$ as the reference point. The pulse is sampled from $t = 0.0$ to $t = 3.2$ with a period of $0.1$, so there are a total of 33 points. The value of $K$ renormalizes the amplitude into the interior of $(0, 1)$. This parameterization is consistent with the PHOS electronics with 1 $\mu$s shaping time (section \ref{sec:exp-1us}).
\begin{figure}[htbp]
\centering
\subfigure[$K$ vs. $\tau_p$]{
\includegraphics[width=0.48\textwidth]{./tau_p_K}}
\subfigure[$t_0$ vs. $\tau_p$]{
\includegraphics[width=0.48\textwidth]{./tau_p_t_0}}
\subfigure[$K$ vs. $b$]{
\includegraphics[width=0.48\textwidth]{./base_K}}
\subfigure[$t_0$ vs. $b$]{
\includegraphics[width=0.48\textwidth]{./base_t_0}}
\caption{\label{fig:curve-fitting-drift-change} Simulation results for drift and change. Each figure compares the result from first-order (linear) approximations with the result from applying curve fitting directly.}
\end{figure}
\paragraph{Long-term drift and short-term change} These two kinds of variations are associated with the system parameters $\bm{\theta}$. We separate $\tau_p$ and $b$ and study their influence on the fitting parameters $K$ and $t_0$ respectively. The simulation results are shown in figure \ref{fig:curve-fitting-drift-change}. The solid line is calculated from the first-order approximations, and the solid dots are generated by curve fitting. It can be seen that in a region near the reference point the first-order approximations are fairly accurate. This is especially true for the ($t_0$, $\tau_p$) and ($K$, $b$) pairs, which have high correlations. For the other two pairs, the discrepancy between the first-order approximations and curve fitting is determined by higher-order effects.
\begin{figure}[htbp]
\centering
\subfigure[$K$ vs. shift of clipped crystal ball]{
\includegraphics[width=0.48\textwidth]{./cb_clip_K}}
\subfigure[$t_0$ vs. shift of clipped crystal ball]{
\includegraphics[width=0.48\textwidth]{./cb_clip_t_0}}
\subfigure[$K$ vs. shift of clipped Moyal]{
\includegraphics[width=0.48\textwidth]{./moyal_clip_K}}
\subfigure[$t_0$ vs. shift of clipped Moyal]{
\includegraphics[width=0.48\textwidth]{./moyal_clip_t_0}}
\caption{\label{fig:curve-fitting-noise} Simulation results for noise. Each figure compares the result from first-order (linear) approximations with the result from applying curve fitting directly.}
\end{figure}
\paragraph{Random noise} According to equation \ref{equ:dev-solve}, if the per-sample noise is Gaussian, the linear mapping propagates the noise directly to the fitting parameters, so the distribution of the fitting parameters will also be Gaussian. If the per-sample noise is not Gaussian, the same linear mapping still applies. In order to study the distribution of the fitting parameters in these non-Gaussian cases, we select two representative noise distributions: the crystal ball distribution \cite{gaiserappendix} and the Moyal distribution \cite{walck1996hand}. The former has a long tail on the left-hand side and the latter on the right-hand side. Their probability density functions are:
\begin{equation}
\text{crystal ball: } f(x, \beta, m) = \begin{cases}
N \exp(-x^2 / 2), &\text{for } x > -\beta\\
N A (B - x)^{-m}, &\text{for } x \le -\beta
\end{cases}
\end{equation}
\begin{equation}
\text{Moyal: } f(x) = \exp(-(x + \exp(-x))/2) / \sqrt{2\pi}
\end{equation}
\noindent For the crystal ball distribution, we choose $\beta = 2$, $m = 3$, shift its center to $[2.0, 4.0]$ and scale it down by 0.01. For the Moyal distribution, we shift its center to $[3.2, 6.4]$ and scale it down by 0.00625. In addition, in order to study the non-negative effect in detector electronics, we clip the noise, forcing noise values to 0 if they become negative. The simulation results are shown in figure \ref{fig:curve-fitting-noise}. For each figure, we calculate the first-order approximations and run a Monte Carlo simulation with 1000 events. It can be seen that although the noise distributions have strong non-Gaussian features, the distributions of the fitting parameters have Gaussian shapes. The mean and standard deviation calculated from equation \ref{equ:dev-solve} characterize the distributions from curve fitting very well. This implies that, for a medium number of sampling points (33 in this case), the distributions of the fitting parameters follow the central limit theorem, a statistical property of sums of many independent random variables.
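The noise generation described above can be sketched with SciPy's built-in \texttt{crystalball} and \texttt{moyal} distributions; the exact shift and scale conventions below are an assumption based on the description.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Crystal ball: beta = 2, m = 3, shifted right and scaled down by 0.01
cb_noise = 0.01 * (stats.crystalball(beta=2.0, m=3.0)
                   .rvs(size=1000, random_state=rng) + 3.0)

# Moyal: shifted right and scaled down by 0.00625
mo_noise = 0.00625 * (stats.moyal()
                      .rvs(size=1000, random_state=rng) + 4.8)

# Clip: detector electronics cannot output negative values
cb_noise = np.clip(cb_noise, 0.0, None)
mo_noise = np.clip(mo_noise, 0.0, None)

# The Moyal's long right tail survives the clipping
assert mo_noise.min() >= 0.0 and stats.skew(mo_noise) > 0.0
```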
\paragraph{} In conclusion, the first-order approximations can describe curve fitting in a simple and convenient way. Different variations can be viewed as independent forces that drive the deviation of fitting parameters, and their relation is additive. This paves the way for the comparison in section \ref{sec:comparative-study} where we will demonstrate the potential of deep learning against this perspective.
\section{New approach --- deep learning method}
\label{sec:deep-learning}
Deep learning is a major breakthrough in recent years. It is based on neural networks, but its focus has shifted to building intricate network architectures for real-world applications (e.g. images, speech, natural language). It started with image classification tasks \cite{krizhevsky2012imagenet, he2016deep} and spread to other domains \cite{hinton2012deep, shen2014learning} in artificial intelligence. Furthermore, it has been applied to high energy physics in the recent literature \cite{de2016jet, racah2016revealing, renner2017background, acciarri2017convolutional}. In the following subsections, we discuss how to use deep learning to solve the pulse timing problem.
\subsection{Deep learning basics}
The concept of neural networks is fundamental in deep learning. The basic element of a neural network is called a \emph{neuron}. A neuron has N inputs and one output, and it has N weights and one bias as its parameters. It computes the products of the inputs and the weights in an element-wise manner, adds them together with the bias, and applies a nonlinear activation function to the sum. Many similar neurons can act on the same inputs and form a layer. In a neural network with one intermediate layer, the output unit is itself a neuron; the intermediate layer is called the hidden layer.
A deep neural network usually refers to a network with more than one hidden layer. By taking the output of the former layer as the input, hidden layers can be stacked. Increasing the depth of the network gains additional power to extract structured features and reduces the number of parameters needed to approximate some functions. In general, neural networks have promising mathematical characteristics. They are supported by the universal approximation theorem \cite{hornik1989multilayer, cybenko1989approximation}, which states that neural networks can approximate mathematical functions to arbitrary precision given enough neurons and layers.
One successful network structure is the \emph{convolutional neural network} \cite{krizhevsky2012imagenet}. It is based on the ideas of weight sharing and shift invariance. Instead of connecting a neuron to all inputs, each neuron is connected only to a local vicinity (e.g. a 2D patch) of the input. Besides, the weights that produce the output are shared across different places. With these measures, the number of parameters in a neural network can be greatly reduced and the efficiency improved dramatically.
To train a neural network, we need (input, label) pairs. The input is propagated through the neural network and compared to the label to compute a loss function. The loss is then used to update the parameters of the whole network by the back propagation algorithm. The update formula is usually based on the gradient descent method, i.e. moving the parameters in a direction that reduces the loss function. The loss function is usually the cross entropy along with the softmax function for a classification task \cite{krizhevsky2012imagenet}, and the mean squared error and its variants for a regression task.
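The training loop described above can be sketched end to end with numpy; the one-hidden-layer network, toy data and learning rate below are illustrative assumptions, not the architecture used later in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(256, 4))    # toy (input, label) pairs
y = X.sum(axis=1, keepdims=True)

W1 = rng.normal(0.0, 0.5, size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(2000):
    # forward pass: weighted sums, bias, nonlinear activation
    h = np.maximum(X @ W1 + b1, 0.0)         # ReLU hidden layer
    out = h @ W2 + b2
    loss = np.mean((out - y) ** 2)           # MSE loss for regression

    # back propagation: gradients of the loss w.r.t. all parameters
    g_out = 2.0 * (out - y) / len(X)
    gW2, gb2 = h.T @ g_out, g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (h > 0.0)
    gW1, gb1 = X.T @ g_h, g_h.sum(axis=0)

    # gradient descent: move parameters against the gradient
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

assert loss < 0.05                           # the network has learned
```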
\subsection{The potential of deep learning --- a comparative study}
\label{sec:comparative-study}
With the knowledge above, we are ready to discuss the potential of deep learning and compare it to the curve fitting method. The study is organized around the variations in section \ref{sec:quan-analysis}.
\paragraph{Long-term drift} From the analysis in section \ref{sec:curve-fitting}, the long-term drift introduces a bias to the fitting parameters. In a large detector system, correcting the bias is a tremendous task, and even impractical in some cases. For one thing, unlike the discussion in section \ref{sec:curve-fitting-sim}, system parameters are hidden in the function and sometimes take very sophisticated forms. For another, the non-uniformity of different cells makes the problem even more complicated. Furthermore, if we view the bias in the non-Gaussian noise as a kind of long-term drift, the total effect is a mixture of several aspects. To tackle the bias challenge, we can use a regression neural network to correct the influence of the long-term drift on the fitting parameters. Without loss of generality, we can assume that the last layer of the neural network has the form $y = f(\bm{x}; \bm{w}, b) = \sum_i w_i \cdot x_i + b$. Since the last layer has a bias parameter $b$, a persistent shift in the system will be counteracted by $b$ through the training process. As long as the training label is sufficiently accurate, the bias can be greatly reduced by the neural network.
\paragraph{Short-term change} For curve fitting, the short-term change has a direct impact on the precision of the fitting parameters. In equation \ref{equ:dev-solve}, it can be seen that the event-by-event variations of the system parameters $\bm{\theta}$ will result in the fluctuation of the fitting parameters. The primary cause for this phenomenon is that curve fitting treats each set of ADC samples as an independent and complete set of features. However, different sets of ADC samples belong to the same function family, and an overall understanding of the function family is beneficial to the explanation of the individual set of features. The optimization of neural networks is such a global process which is helpful to establish the overall understanding. To see this point, we can rewrite the mapping of the neural network as:
\begin{equation}
\bm{\beta^\prime} = \bm{g}(\bm{f}(\bm{t}; \bm{\beta}, \bm{\theta}) + \bm{n}; \bm{W}, \bm{B})
\end{equation}
\noindent where $\bm{f}(\bm{t}; \bm{\beta}, \bm{\theta}) = (f(t_1; \bm{\beta}, \bm{\theta}), f(t_2; \bm{\beta}, \bm{\theta}), \ldots, f(t_n; \bm{\beta}, \bm{\theta}))$ is the vector of sampling points, and $\bm{W}, \bm{B}$ are the weights and biases of the neural network. When we optimize the model, the training label will change consistently with the underlying fitting parameters $\bm{\beta}$ but remain the same when system parameters $\bm{\theta}$ vary. As a result of training, the weights $\bm{W}$ and biases $\bm{B}$ of the neural network follow a gradient descent direction so that the change of $\bm{\beta^\prime}$ is proportional to the change of $\bm{\beta}$ but orthogonal to the change of $\bm{\theta}$. In other words, training increases the sensitivity to variations of fitting parameters and reduces the sensitivity to variations of system parameters. In this way, the influence of the short-term change can be greatly alleviated.
\paragraph{Random noise} We have already analyzed the Gaussian noise with the accurate fitting function in section \ref{sec:adv-curve-fitting}. Here we focus on noise with more complex forms. According to the central limit theorem, the distributions of the fitting parameters will take Gaussian shapes when noise is present. This is a degenerative process that can lose original information. To help understand the claim, we may think of the development of modern physics. When instrumentation was not so advanced, people could only observe macroscopic phenomena, which were normally distributed according to statistical laws. Once the hardware improved, people could measure the microscopic mechanisms, and the fine structures could be found. In our problem, curve fitting does not utilize the information in each time point sufficiently, and the loss of information cannot be retrieved. On the other hand, we already know that neural networks have micro structures. This offers an opportunity to achieve better performance than curve fitting in the non-Gaussian settings. Since the nonlinear mapping in the activation function can implement a complicated function family, it is possible to use neural networks to recover the original information from noisy inputs.
\paragraph{} In conclusion, deep learning is a good alternative to the traditional curve fitting method in terms of drift, change and noise when used in an appropriate way.
\subsection{Network architecture}
\label{sec:nn-arch}
In this part, we will discuss the implementation issues of deep learning in the specific pulse timing problem. Although neural networks are promising according to the analysis in section \ref{sec:comparative-study}, it does not mean that any structure will perform well. When facing a new problem, practitioners need to customize the network structure to make it suitable for the problem settings.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{./Network}
\caption{\label{fig:network} A diagram of the network architecture.}
\end{figure}
We design our network architecture based on the ideas from \cite{isola2017image, kuleshov2017audio}. A diagram of the adopted architecture is shown in figure \ref{fig:network}. In principle, the network is comprised of two parts, a denoising autoencoder and a regression network.
The denoising autoencoder \cite{vincent2010stacked} is a network which tries to recover the original clean input from its noisy version. A typical autoencoder is made up of a pyramid structure which performs feature extraction (encoding), and an inverted pyramid structure which restores the original data (decoding). We add the following features to the prototype of the autoencoder to improve its performance:
\begin{enumerate}
\item \emph{Convolution and deconvolution}. In the encoder layers and decoder layers, we use convolution \cite{krizhevsky2012imagenet} and deconvolution \cite{noh2015learning} operations to replace the fully-connected layers. These operations can utilize the locality of input features and extract structured patterns from data. In the convolution, we use many groups of parameters (each called a \emph{filter} or \emph{kernel}) to compute the output (called a \emph{feature map}). Each filter has its own weights and bias, and it moves across the input feature map to produce a one-dimensional output. Many filters result in a feature map with many channels of one-dimensional data. In the deconvolution, the operation between the input and the output is transposed. For the same stride and padding, the output shape of deconvolution operations will be the same as the input shape of the corresponding convolution operations.
\item \emph{Skip connections}. Optimizing a deep neural network suffers from problems of vanishing/exploding gradients. Even when these problems are handled by normalization, a degrading problem still affects the performance of the model. In \cite{he2016deep}, a dedicated structure called the \emph{residual network} was suggested to solve the problem. Following this work, we implement skip connections between the encoder layers and decoder layers to overcome the issues when training a deep network. Except for the last layer, every layer in the encoder is directly copied to the corresponding layer in the decoder. At the decoding side, the channels from the encoder and the channels from the main passage of the network are concatenated. In this way, the relation between long-range layers is preserved so that it is easier for the network to learn valuable features from the input.
\item \emph{Leaky ReLU}. The Rectified Linear Unit (ReLU) \cite{nair2010rectified} is a kind of activation function which is widely used in deep learning. Since ReLUs force the output to zero when the input is negative, they block the flow of information for a considerable number of neurons in a network. The leaky ReLU \cite{xu2015empirical} was proposed to solve this problem. Unlike the ReLU, the leaky ReLU has a gradual slope on the negative x-axis and thus a non-zero gradient even when the input is negative. In our network, we use leaky ReLUs in the encoder layers.
\end{enumerate}
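The shape bookkeeping of the stride-2 convolution and deconvolution layers can be illustrated with a minimal numpy sketch (a single filter of width 4, no bias or activation; the filter values are arbitrary): with matching stride and padding, the deconvolution restores the input length of the corresponding convolution.

```python
import numpy as np

def conv1d(x, w, stride=2):
    """Strided 1D convolution, zero-padded so out_len = in_len // stride."""
    k = len(w)
    pad = (k - stride) // 2            # 'same'-style padding for k=4, s=2
    xp = np.pad(x, (pad, pad))
    out_len = len(x) // stride
    return np.array([np.dot(xp[i * stride:i * stride + k], w)
                     for i in range(out_len)])

def deconv1d(y, w, stride=2):
    """Transposed version of conv1d: scatters each input times the filter."""
    k = len(w)
    pad = (k - stride) // 2
    out = np.zeros(len(y) * stride + 2 * pad)
    for i, v in enumerate(y):
        out[i * stride:i * stride + k] += v * w
    return out[pad:len(out) - pad]     # crop the padding back off

x = np.arange(256.0)
w = np.array([0.25, 0.25, 0.25, 0.25])   # filter width 4, as in the table
h = conv1d(x, w)                          # the encoder halves the length
assert h.shape == (128,)
r = deconv1d(h, w)                        # the decoder restores the length
assert r.shape == (256,)
```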
\begin{table}[htbp]
\centering
\caption{Specification for the denoising autoencoder.}
\label{tab:denosing-autoencoder}
\scriptsize
\begin{tabular}{|lllll|}
\hline
\multicolumn{5}{|c|}{Convolution} \\
No. & stride & filter width & out channels & leaky ReLU \\
\hline
1 & 2 & 4 & 64 & No ReLU \\
2 & 2 & 4 & 128 & Yes (0.2) \\
3 & 2 & 4 & 256 & Yes (0.2) \\
4 & 2 & 4 & 512 & Yes (0.2) \\
5 & 2 & 4 & 512 & Yes (0.2) \\
6 & 2 & 4 & 512 & Yes (0.2) \\
7 & 2 & 4 & 512 & Yes (0.2) \\
8 & 2 & 4 & 512 & Yes (0.2) \\
\hline
\end{tabular}
{\begin{tabular}{|lllll|}
\hline
\multicolumn{5}{|c|}{Deconvolution} \\
No. & stride & filter width & out channels & dropout \\
\hline
8 & 2 & 4 & 1024 & Yes (0.5) \\
7 & 2 & 4 & 1024 & Yes (0.5) \\
6 & 2 & 4 & 1024 & Yes (0.5) \\
5 & 2 & 4 & 1024 & No \\
4 & 2 & 4 & 512 & No \\
3 & 2 & 4 & 256 & No \\
2 & 2 & 4 & 128 & No \\
1 & 2 & 4 & 1 & No \\
\hline
\end{tabular}}
\end{table}
Specifically, the denoising autoencoder is a network with 8 $\times$ 2 layers. First, we use cubic (or quadratic) interpolation to stretch the original input to the desired length. Then it goes through the network to get the output. The detailed specification is shown in table \ref{tab:denosing-autoencoder}. In the convolution part, leaky ReLUs are used except in the first layer. The number in the parentheses represents the slope on the negative x-axis. In the deconvolution part, the output channels include the channels from skip connections. Dropout \cite{srivastava2014dropout} is a regularization method to prevent overfitting; we use dropout in the first three deconvolution layers. The number in the parentheses represents the dropout ratio.
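The interpolation step can be sketched as follows; the smooth stand-in signal is an assumption for illustration.

```python
import numpy as np
from scipy.interpolate import interp1d

t_in = np.arange(32) * 0.1                   # raw ADC sampling times
y_in = np.sin(t_in)                          # stand-in for real samples

t_out = np.linspace(t_in[0], t_in[-1], 256)  # stretched input grid
y_out = interp1d(t_in, y_in, kind="cubic")(t_out)

assert y_out.shape == (256,)
# cubic interpolation tracks a smooth signal closely
assert np.allclose(y_out, np.sin(t_out), atol=1e-4)
```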
Upon the denoising autoencoder, we add a regression network to directly output the parameters of interest. The structure of the regression network is a traditional feedforward network with 2 hidden layers. Each layer, with 512 neurons, is fully connected to its input. A \emph{softmax} layer is used between the denoising autoencoder and the regression network.
Training such a network can be divided into the following two steps:
\begin{enumerate}
\item \emph{Autoencoder pre-training}. It is strongly recommended to pre-train the denoising autoencoder as the first step of the training process. Based on the function of the autoencoder, we need to estimate the form of the noise and generate (noisy input, clean input) pairs as the (input, label) to train the network. To be more specific, we first randomly generate a set of sampling points according to the pulse function. Then we add per-sample noise to the sampling points according to the probability distribution of the estimated form of the noise. If the expression of the short-term change is known, it can also be used. In fact, even a rough estimate can improve the final performance significantly (see section \ref{sec:exp-results}). In this stage, only simulation data is used.
\item \emph{End-to-end finetuning}. After pre-training, we can use experimental data (if available) to make an end-to-end finetuning of the whole network. A precise label indicating the ground-truth parameter is used at the far end of the network to generate a loss function. There are two options in finetuning. The first option is to keep the autoencoder unchanged and only finetune the regression network. If there are no distinct changes in the pulse function compared to the pre-training stage, this option can be used. The second option is to finetune the whole network together. For this option, the pre-trained network serves only as a good starting point for finetuning, and the capacity of the model is larger (which also implies overfitting issues).
\end{enumerate}
\subsection{Simulation studies}
\label{sec:nn-sim}
In this part, we run simulations of the proposed neural network regarding the variations discussed in section \ref{sec:quan-analysis}. Since the advantage of the neural network model on the long-term drift is evident according to the discussion in section \ref{sec:comparative-study}, we do not run simulations for this kind of variation.
In order to study the variations, we first need to generate the simulation dataset. The pulse function is the same as in section \ref{sec:curve-fitting-sim}. In the following simulations, $K$ is uniformly sampled in the range $[2.56, 5.12]$ and $t_0$ is uniformly sampled in the range $[-0.9, 0.1]$. The reference values for $\tau_p$ and $b$ are 2.0 and 0.1, respectively. The pulse for the noisy input (or the input with short-term change) is sampled from $t = 0.0$ to $t = 3.2$ with a period of $0.1$. We drop the last point when training, so there are 32 points. The same pulse for the label is sampled at a super-resolution ratio of 8 in the same interval, so there are a total of 256 points. We gather the simulation samples into two separate datasets: the training dataset has 40000 samples and the test dataset has 10000 samples.
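The dataset generation can be sketched as below, again with an assumed CR-RC$^2$ pulse standing in for the paper's pulse function; the sample counts are reduced here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
TAU_P, B = 2.0, 0.1                          # reference system parameters

def pulse(t, K, t0):                         # assumed CR-RC^2 shape
    x = np.clip((t - t0) / TAU_P, 0.0, None)
    return K * x ** 2 * np.exp(-2.0 * x) + B

def make_sample(noise_std=0.014):
    K = rng.uniform(2.56, 5.12)
    t0 = rng.uniform(-0.9, 0.1)
    t_in = np.arange(32) * 0.1               # 32 input points, period 0.1
    t_lab = np.arange(256) * 0.1 / 8         # super-resolution ratio of 8
    noisy = pulse(t_in, K, t0) + rng.normal(0.0, noise_std, 32)
    clean = pulse(t_lab, K, t0)              # dense, noise-free label
    return noisy, clean, t0

# 40000 / 10000 samples in the paper; fewer here for brevity
train_set = [make_sample() for _ in range(400)]
test_set = [make_sample() for _ in range(100)]
```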
To calculate the timing resolution, we test different methods on the test dataset and get the predicted values of the start time $t_0$. For curve fitting, the predicted values are the fitting parameters. For regression networks, the predicted values are the outputs of the networks. Then we use the difference between the predicted values and the ground-truth values to make a Gaussian fit. The standard deviation of the Gaussian fit is a measure of the timing resolution.
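The resolution measurement can be sketched as follows, with mock predicted values (an assumption) standing in for actual fit or network outputs:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
t0_true = rng.uniform(-0.9, 0.1, size=10000)       # ground-truth start times
t0_pred = t0_true + rng.normal(0.0, 0.012, 10000)  # mock predicted values

# Gaussian fit of the residuals; the fitted sigma is the timing resolution
mu, sigma = norm.fit(t0_pred - t0_true)
assert abs(sigma - 0.012) < 0.001
```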
\begin{figure}[htbp]
\centering
\subfigure[A typical figure of the inputs, the outputs and the targets (label) of the autoencoder.]{
\includegraphics[width=0.48\textwidth]{./sc_plot_000001}}
\hfill
\subfigure[The RMS of amplitude between the inputs/outputs and the ground-truth targets. The figure is plotted on the statistics of the whole test dataset.]{
\includegraphics[width=0.48\textwidth]{./sc_rms}}
\caption{\label{fig:nn-short-term-change} The simulation results of the denoising autoencoder for the short-term change. (\emph{left}) We choose a sample in the test dataset and plot the noisy input, the denoising outcome and the training label. (\emph{right}) We calculate the Root Mean Square (RMS) between the inputs/outputs of the neural network and the ground-truth label for each sample in the test dataset. Then we make a Gaussian fit for all the samples and plot the figure.}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Simulation results for the short-term change. The table compares different neural network models with curve fitting.}
\label{tab:nn-short-term-change}
\small
\begin{tabular}{|cccc|}
\hline model & note & converged & timing resolution ($\mu$s) \\
\hline fitting original data & --- & --- & 0.01217 \\
fitting autoencoder outputs & only base network & --- & 0.00296 \\
regression net v1 & base network fixed & successful & 0.00303 \\
regression net v2 & base network trainable & successful & 0.00182 \\
\hline
\end{tabular}
\end{table}
\paragraph{Short-term change} To study the effects of the short-term change, we introduce the baseline shift, i.e. variations of the pedestal $b$. The baseline shift is a common type of short-term change, especially when the event rate is high so that nearby events interfere. To construct the dataset, we add the same shift to all sampling points in an event. The shift is randomly sampled from a Gaussian distribution with mean 0.1 and standard deviation 0.014. The training targets of the denoising autoencoder are set to have the pedestal $b = 0.1$, which is the standard value used in curve fitting. The results are shown in figure \ref{fig:nn-short-term-change} and table \ref{tab:nn-short-term-change}. In the left figure, we can see that although the pedestal $b$ and the amplitude $K$ are both random and highly correlated, the denoising autoencoder can effectively perceive the change in the pedestal and cancel it. The right figure shows the distribution of the RMS based on the statistics of the whole test dataset. The average RMS is reduced from 0.01110 to 0.00310, a factor of 3.58. In the table, we compare the timing resolution achieved by curve fitting and by neural networks. From the first two lines, fitting the outputs of the denoising autoencoder is better than fitting the original data, which demonstrates the effectiveness of the neural network structure. The result of the regression network v1, with the base network fixed, is slightly worse than fitting the outputs of the denoising autoencoder. The best result (1.82 ns) comes from the regression network v2 with the base network trainable, and it outperforms the curve fitting results significantly. This implies that, for the short-term change, choosing a proper starting point and finetuning the whole network can give a result even better than the autoencoder alone.
\paragraph{Random noise} We analyze two representative kinds of noise: the Gaussian noise and the clipped Moyal noise (see section \ref{sec:curve-fitting-sim}).
\begin{figure}[htbp]
\centering
\subfigure[A typical figure of the inputs, the outputs and the targets (label) of the autoencoder.]{
\includegraphics[width=0.48\textwidth]{./plot_000001}}
\hfill
\subfigure[The RMS of amplitude between the inputs/outputs and the ground-truth targets. The figure is plotted on the statistics of the whole test dataset.]{
\includegraphics[width=0.48\textwidth]{./rms}}
\caption{\label{fig:nn-gaussian-noise} The simulation results of the denoising autoencoder for the Gaussian noise. The figures are plotted in the same way as figure \ref{fig:nn-short-term-change}.}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Simulation results for the Gaussian noise. The table compares different neural network models with curve fitting.}
\label{tab:nn-gaussian-noise}
\small
\begin{tabular}{|cccc|}
\hline model & note & converged & timing resolution ($\mu$s) \\
\hline fitting original data & maximum likelihood estimator & --- & 0.01206 \\
only regression net & no base network & failed & 0.26756 \\
fitting autoencoder outputs & only base network & --- & 0.01249 \\
regression net v1 & base network fixed & successful & 0.01530 \\
regression net v2 & base network trainable & successful & 0.01261 \\
\hline
\end{tabular}
\end{table}
First, we add Gaussian noise with zero mean and 0.014 standard deviation. This introduces a noise ratio $\frac{\text{noise std. deviation}}{\text{average amplitude}} \approx 2.7\%$. To make the effects visually evident, the chosen noise ratio is significantly larger than in reality. The results are shown in figure \ref{fig:nn-gaussian-noise} and table \ref{tab:nn-gaussian-noise}. In the left figure, we can see that although the input points carry obvious noise, the noise is suppressed by the denoising autoencoder so that the output points approximate the label. In this sample, as in the majority of samples in the test dataset, the difference between the output points and the label is very slight. The right figure displays the statistics of the whole test dataset. The average noise RMS is reduced from 0.01390 to 0.00372, a factor of 3.74. In the table, we use three neural network models and compare their performance with curve fitting. Since Gaussian noise is the most common case, in this analysis we add the regression network alone for comparison. According to section \ref{sec:adv-curve-fitting}, fitting the original data gives the maximum likelihood estimator, which is the theoretical lower bound. It can be seen that the network architecture is important to achieve the optimal performance. When we use only the regression network, the model fails to converge and gives a result worse than the sampling period. However, with the autoencoder-regression network architecture, the model converges successfully. The best neural network result comes from the regression network v2 with the base network trainable. This shows the advantage of the model capacity in this problem.
\begin{figure}[htbp]
\centering
\subfigure[A typical figure of the inputs, the outputs and the targets (label) of the autoencoder.]{
\includegraphics[width=0.48\textwidth]{./mo_plot_000001}}
\hfill
\subfigure[The RMS of amplitude between the inputs/outputs and the ground-truth targets. The figure is plotted on the statistics of the whole test dataset.]{
\includegraphics[width=0.48\textwidth]{./mo_rms}}
\caption{\label{fig:nn-moyal-noise} The simulation results of the denoising autoencoder for the clipped Moyal noise. The figures are plotted in the same way as figure \ref{fig:nn-short-term-change}.}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Simulation results for the clipped Moyal noise. The table compares different neural network models with curve fitting.}
\label{tab:nn-moyal-noise}
\small
\begin{tabular}{|cccc|}
\hline model & note & converged & timing resolution ($\mu$s) \\
\hline fitting original data & --- & --- & 0.01203 \\
fitting autoencoder outputs & only base network & --- & 0.00324 \\
regression net v1 & base network fixed & successful & 0.00463 \\
regression net v2 & base network trainable & successful & 0.00487 \\
\hline
\end{tabular}
\end{table}
Second, we analyze the clipped Moyal noise. The original Moyal distribution is shifted to location 0.004, rescaled by 0.006 and then clipped for noise generation. Again, the noise is more intense than in reality. The results are shown in figure \ref{fig:nn-moyal-noise} and table \ref{tab:nn-moyal-noise}. The left figure shows that the structure of the denoising autoencoder recovers the ground-truth target from the noisy input very well. To further illustrate this, we plot the distribution of RMS on the test dataset in the right figure. The average noise RMS is reduced from 0.01722 to 0.00093, a factor of 18.52, exceeding the results of the former simulations. In the table, we compare the timing resolution of curve fitting and neural networks. The first two lines show that curve fitting with the denoising autoencoder alone already improves the timing resolution significantly. Moreover, when regression networks are added, the models converge successfully and show competitive results. In this case, keeping the base network fixed (regression network v1) is slightly better than making the base network trainable (regression network v2), which demonstrates the good baseline provided by the autoencoder.
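The Moyal noise generation can be sketched with SciPy's `moyal` distribution. The clipping bounds below are placeholders, since the text does not specify them:

```python
import numpy as np
from scipy.stats import moyal

rng = np.random.default_rng(1)

# Moyal distribution shifted to location 0.004 and rescaled with 0.006,
# as described in the text.
dist = moyal(loc=0.004, scale=0.006)

# Draw one noise value per sampling point of a waveform.
raw = dist.rvs(size=256, random_state=rng)

# Clip the heavy upper tail; the actual clipping bounds used in the
# simulation are not given in the text, so these are assumptions.
noise = np.clip(raw, -0.05, 0.05)
```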
\paragraph{} To conclude the simulation results, the network architecture proposed in section \ref{sec:nn-arch} can very well tackle the non-ideal conditions. Finetuning the whole network together can achieve results better than fitting the outputs of autoencoder when the short-term change is applied, but slightly worse when the random noise is applied. Finetuning the regression network alone can sometimes achieve better results than finetuning the whole network, especially when the base network is accurate. In experimental conditions, it is not always possible to provide exact training targets for the denoising autoencoder as in the simulations. Thus, finetuning the regression network with the precise time label is vital to improve the performance of the whole network.
\section{Experimental results}
\label{sec:exp-results}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{./Platform}
\caption{\label{fig:hardware-platform} A photograph of the hardware test platform with the PHOS detector, AD9656 data acquisition board and HPDAQ.}
\end{figure}
We build a hardware test platform to study pulse timing in a real-world environment. A photograph of the platform is shown in figure \ref{fig:hardware-platform}. The test platform is based on the PHOS detector (section \ref{sec:alice-phos}). We use a pulse generator to produce pulses with $\sim$50 ns width at a $\sim$10 Hz repetition rate. This pulse signal drives an LED to produce light for the PHOS crystal. The scintillation light is collected by the APD and passed to the CSA, and then transmitted to the CR-RC2 shaper on the FEE card. The output of the CR-RC2 shaper is hardwired to the AD9656 data acquisition board, which is connected to the HPDAQ motherboard for TCP/IP communication. The AD9656 is a 4-channel ADC chip with 2.8 V dynamic range, 16-bit precision and 125 MHz sampling rate. Choosing such a high-speed ADC chip makes it possible to compare the performance of curve fitting and the neural network model with different numbers of sampling points.
To prepare the datasets, we record two signal channels simultaneously: the trigger signal driving the LED, and the output of the shaper on the FEE card. We randomly choose a fixed-interval section from the most salient part of the output pulse and label the pulse according to the interval between the trigger signal and the selected section. This label is used to train the neural network and serves as the baseline for curve fitting. We normalize the amplitude of the ADC sampling points to a range similar to that in section \ref{sec:curve-fitting-sim} and section \ref{sec:nn-sim}. We collect 80000 samples for the training dataset and 20000 samples for the test dataset.
\subsection{1 \texorpdfstring{$\mu$}{u}s shaping time}
\label{sec:exp-1us}
\begin{figure}[htbp]
\centering
\subfigure[timing resolution]{
\includegraphics[width=0.48\textwidth]{./exp_1u_sigma}}
\subfigure[system bias]{
\includegraphics[width=0.48\textwidth]{./exp_1u_mu}}
\caption{\label{fig:exp-1u} Experimental results for the 1 $\mu$s shaping time.}
\end{figure}
In this part, we conduct experiments with 1 $\mu$s shaping time (2 $\mu$s peaking time), which is the ALICE PHOS specification. The sampling section has a span of 3072 ns. We choose $2^k + 1$ points evenly distributed in the sampling section. These points are stretched to a fixed length of 256 points by cubic (for $k \geq 2$) or quadratic (for $k = 1$) interpolation when training the neural network. We pre-train the model under the assumption of Gaussian noise with the parameterization in section \ref{sec:nn-sim}. Then we finetune the whole network using the experimental data, with the base network trainable during finetuning.
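The stretching of $2^k + 1$ sampling points to a fixed length of 256 can be sketched as below; the coarse waveform here is a placeholder:

```python
import numpy as np
from scipy.interpolate import interp1d

def stretch_to_fixed_length(samples, out_len=256):
    """Resample a short waveform to `out_len` points by cubic (or, when
    only 3 points are available, quadratic) interpolation."""
    n = len(samples)
    kind = "cubic" if n >= 4 else "quadratic"
    x = np.linspace(0.0, 1.0, n)
    f = interp1d(x, samples, kind=kind)
    return f(np.linspace(0.0, 1.0, out_len))

# Example: k = 2 gives 2**2 + 1 = 5 points (placeholder values).
coarse = np.array([0.0, 0.3, 1.0, 0.5, 0.1])
fine = stretch_to_fixed_length(coarse)  # 256 interpolated points
```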
We analyze 6 different conditions with $k = 1, 2, 3, 4, 5, 6$, giving approximate sampling rates of 0.625 MHz, 1.25 MHz, 2.5 MHz, 5 MHz, 10 MHz and 20 MHz respectively. For each condition we perform an independent training process starting from the same pre-trained model. We then test the model on the corresponding test dataset and make a Gaussian fit of the residuals (differences between regression outputs and time labels) to obtain the mean and the standard deviation. The standard deviation of the Gaussian fit is a measure of the timing resolution, and the mean is a measure of the system bias. For curve fitting, we use the same sampling points and fit the residuals (differences between fitted parameters and time labels) to a Gaussian distribution.
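The extraction of timing resolution and system bias from the residuals can be sketched as follows; the residuals here are synthetic, with an assumed bias of 2 ns and resolution of 8 ns:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Synthetic residuals (regression output minus time label, in ns);
# the assumed bias and width are placeholders, not measured values.
residuals = rng.normal(loc=2.0, scale=8.0, size=20000)

# Gaussian fit: the mean measures the system bias, the standard
# deviation measures the timing resolution.
bias, resolution = norm.fit(residuals)
```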
We use a batch size of 16 when training the neural network, and the training proceeds for 10 epochs. The final result and error bar ($1\sigma$ error) for the neural network are calculated from test results evaluated at even-numbered training epochs.
The main result is shown in figure \ref{fig:exp-1u}. The left figure shows that the neural network consistently outperforms curve fitting. With as few as 3 sampling points, both methods already achieve relatively good performance. As the number of sampling points increases, the results improve slightly. With 17 or more sampling points, the performance of curve fitting hits a plateau, but the neural network still improves. The best performance achieved by the neural network is $8.22\pm0.11$ ns, which is $27.3\%$ better than curve fitting (11.31 ns).
In the right figure, the system bias of the neural network model is greatly reduced compared to curve fitting. From direct observation, the interval between the start of the trigger signal and the start of the shaped pulse is approximately 15 sampling points (120 ns), which is close to the results from curve fitting (137.94 ns to 148.11 ns). The bias of the neural network fluctuates around the horizontal axis. Since the bias is a fixed value for a given model, it can be calibrated in the same way as for curve fitting, and the burden of calibration is considerably alleviated.
\subsection{100 ns shaping time}
\label{sec:exp-100ns}
\begin{figure}[htbp]
\centering
\subfigure[timing resolution]{
\includegraphics[width=0.48\textwidth]{./exp_100n_sigma}}
\subfigure[system bias]{
\includegraphics[width=0.48\textwidth]{./exp_100n_mu}}
\caption{\label{fig:exp-100n} Experimental results for the 100 ns shaping time.}
\end{figure}
In this part, we conduct experiments with 100 ns shaping time (200 ns peaking time), which is the ALICE EMCal specification. We replace resistors and capacitors in the CR-RC2 shaper on the FEE card to achieve the shorter shaping time. The sampling section has a span of 256 ns. We choose $2^k + 1$ points and analyze 5 different conditions with $k = 1, 2, 3, 4, 5$, giving sampling rates of 7.8125 MHz, 15.625 MHz, 31.25 MHz, 62.5 MHz and 125 MHz respectively. Other experimental conditions and procedures are similar to section \ref{sec:exp-1us}.
To determine the label for curve fitting and the neural network with a precision superior to the sampling period, we fit the trigger signal to the square pulse response of a second-order system:
\begin{gather}
Y_{\text{step}}(t) = K\left(1 + \frac{T_1}{T_2 - T_1}e^{-(t-t_s)/T_1} - \frac{T_2}{T_2 - T_1}e^{-(t-t_s)/T_2}\right)u(t-t_s) \\
Y_{\text{square}}(t) = Y_{\text{step}}(t) - Y_{\text{step}}(t-w)
\end{gather}
\noindent where $u(t)$ is the step function and $Y_{\text{step}}(t)$ is the overdamped step response of a second-order system. $K$ and $t_s$ are parameters to be fitted; the other parameters are fixed according to the circuit specification and experimental observation. $t_s$ is used as the label to judge the quality of curve fitting and to train the neural network.
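A least-squares extraction of $t_s$ can be sketched with SciPy; the time constants, pulse width and noise level below are placeholder assumptions, and the code implements the monotonic overdamped step response and the square-pulse difference:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed fixed parameters (ns); in the experiment T1, T2 and the pulse
# width w are fixed from the circuit specification and observation.
T1, T2, W = 10.0, 40.0, 50.0

def y_step(t, K, ts):
    """Overdamped second-order step response: zero at t = ts, then a
    monotonic rise towards K."""
    dt = t - ts
    u = np.heaviside(dt, 0.0)
    return K * (1.0 + T1 / (T2 - T1) * np.exp(-dt / T1)
                - T2 / (T2 - T1) * np.exp(-dt / T2)) * u

def y_square(t, K, ts):
    """Square-pulse response: difference of two shifted step responses."""
    return y_step(t, K, ts) - y_step(t - W, K, ts)

# Synthetic trigger waveform with known K = 1.0, t_s = 12.0 plus noise.
t = np.arange(0.0, 200.0, 1.0)
rng = np.random.default_rng(3)
data = y_square(t, 1.0, 12.0) + rng.normal(0.0, 0.01, t.size)

# Fit only K and t_s, with T1, T2, w held fixed as in the text.
(K_fit, ts_fit), _ = curve_fit(y_square, t, data, p0=[0.8, 10.0])
```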
The main result is shown in figure \ref{fig:exp-100n}. In the left figure, the timing resolution has improved significantly compared to the 1 $\mu$s shaping time. Again, the neural network outperforms curve fitting. When the number of sampling points increases from 3 to 33, the precision of the neural network and curve fitting increases slightly, and the trend gradually slows down. The neural network achieves the optimal result $1.37\pm0.03$ ns at 17 sampling points, which is $24.7\%$ better than curve fitting (1.82 ns).
In the right figure, the system bias of the neural network model is much smaller than that of curve fitting. Curve fitting has a large system bias (90.16 ns to 91.73 ns), consistent with the approximately 96 ns from direct observation, but the neural network model suppresses the absolute value of the bias to less than 2 ns. This facilitates the calibration and improves the overall stability of the timing system.
\subsection{Discussion about the experimental results}
In the above experiments, we consider the relation between the shaping time of the shaper and the timing resolution. The experimental results show that decreasing the shaping time can potentially improve the timing resolution when other conditions are kept the same. In the frequency domain, a shorter shaping time means a bandpass filter with a higher cut-off frequency, so more information about the original event is kept. In the time domain, a shorter shaping time alleviates the long-range misfit problem. To be more specific, in the experiments with 1 $\mu$s shaping time, the sampling points are far away from the desired start time $t_0$; thus any slight discrepancy between the fitted model and the ideal model causes a large deviation in the value of $t_0$. A similar issue applies to the neural network if we view the discrepancy as an intrinsic error and a source of misunderstanding. With the 100 ns shaping time, the distance between the sampling points and the start time is shortened and the long-range problem is properly handled.
On the other hand, when the shorter shaping time is used, the influence of the three kinds of variations (especially short-term change and random noise) is relatively more significant. Besides, since the width of the LED pulse is less than 50 ns, signal integrity issues (especially overshooting) affect the precision of the fitted label. As a result, the improvement in timing resolution is worse than the estimate based on a proportional hypothesis ($\sim$0.8 ns). If auxiliary timing detectors were used to construct a coincidence measuring system, better results could be expected.
\section{Conclusion}
\label{sec:conclusion}
The classic curve fitting method relies on a Gaussian noise hypothesis, and its performance is guaranteed by its statistical properties. However, when long-term drift, short-term change and random noise are present in the pulse function, the limitations of curve fitting emerge. Among the possible alternatives, neural networks show strong resistance to these three kinds of variations through their carefully designed structure and optimization process. Simulations and experiments demonstrate their superiority over curve fitting.
Nevertheless, neural networks have special requirements which pose new challenges to the design of the detector system. Since most deep learning methods are based on supervised learning, an accurate label for training is needed. Sometimes acquiring the label is not an easy task, especially when the detector system has complex geometric structures and intricate components. This raises the demand for a traceable design, i.e.\ a design scheme in which the timing information can be traced back internally through the calibration process. From this perspective, we sincerely hope our work will provide a new way of thinking in the future design of timing systems.
\acknowledgments
This research is supported by the National Natural Science Foundation of China (Grant Number 11875146, 11505074, 11605051).
\input{pulse_manu.bbl}
\end{document} |
X-ray Variability}
The origin of UV/optical variability in AGN and its relationship to
X-ray variability has been a puzzle for some time and there are two
main possibilities for the origin of the UV/optical variability. The
UV/optical variability could result from reprocessing of X-ray
emission by the accretion disc or it could simply be the result of
intrinsic variability of the thermal emission from the disc. These two
models can, in principle, be distinguished simply by measuring the lag
between the X-ray and UV/optical wavebands. In the reprocessing model,
the UV/optical variations will lag behind the X-ray variations by the
light travel time between the two emission regions. For a typical AGN
this time will be a few hours. If the UV/optical variations are
produced by intrinsic disc variations there are two possible lag
timescales. If the UV/optical photons are the seed photons for the
X-ray emission, being Compton up-scattered in the central corona, then
if the X-ray variations are driven by seed photon variations, the
X-ray emission will lag behind the UV/optical variations by the light
travel time between the two emission regions, i.e.\ a few
hours. Alternatively, if the UV/optical variations are caused by
inwardly propagating accretion rate variations \citep{arevalouttley06},
these variations will eventually propagate inwards on the viscous timescale
and affect the X-ray emission region. In this case the X-ray variations
will lag the UV/optical variations by $\sim$years.
\subsection{UV/Optical inter-band lags}
The thin disc model, which has been the accepted model for the
temperature structure of optically thick accretion discs for over 40
years \cite[][SS]{shakura73}, predicts a disc temperature profile
$T \propto R^{-3/4}$. Therefore, in the X-ray reprocessing model, in
which incident X-ray emission boosts the existing thermal emission
from the disc, we expect the time lag of the UV/optical
variations behind the X-ray variations to scale as
$\mathrm{lag} \propto \lambda^{4/3}$, where $\lambda$ is the wavelength.
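The $\lambda^{4/3}$ scaling follows from combining the SS temperature profile with Wien's law; schematically:

```latex
\begin{align*}
  T(R) &\propto R^{-3/4} && \text{(SS thin disc)}\\
  \lambda_{\rm peak} &\propto T^{-1} && \text{(Wien's law)}\\
  \Rightarrow\ R(\lambda) &\propto T^{-4/3} \propto \lambda^{4/3}\\
  \Rightarrow\ \mathrm{lag} &\simeq R(\lambda)/c \propto \lambda^{4/3}
\end{align*}
```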
\cite{sergeev05} and \cite{cackett07} have measured the lags of the V,
R, R1 and I bands relative to the B band for a sample of
AGN. \citeauthor{cackett07} find that, although not a perfect fit, the lags
are broadly consistent with the prediction of a reprocessing
model. However, there were no accompanying X-ray measurements, so it is
unknown how the X-ray variations might be related to the optical
variations.
\subsection{RXTE and ground based optical observations}
To investigate the link between the X-ray emission and the UV/optical
emission in AGN a number of groups \cite[e.g.][]{uttley03_5548, suganuma06,
arevalo08_2251, arevalo09, breedt09, breedt10, breedt10_thesis,
lira11, lira15} have monitored AGN quasi-simultaneously in X-rays
with RXTE and in optical wavebands from the ground (e.g.\ Fig.~\ref{79lcs}).
\begin{figure}[h]
\includegraphics[width=50mm,height=80mm,angle=270]{mkn79xv_trend.ps}
\caption{RXTE X-ray (upper panel) and ground based V-band (lower
panel) lightcurves of Mkn79 from \protect\cite{breedt09}.}
\label{79lcs}
\end{figure}
\begin{figure}[h]
\includegraphics[width=80mm,height=40mm]{79_ccf.eps}
\caption{Interpolation cross-correlation function
\citep{white_peterson94} between the X-ray and V-band lightcurves
shown in Fig.~\ref{79lcs}. The V-band lags the X-rays by $\sim$1d
\protect\cite[from][]{breedt09}. Here, and throughout this paper, a
positive lag means that the longer wavelength lags behind the
shorter wavelength.}
\label{79ccf}
\end{figure}
Cross-correlation analysis shows, in all cases, either that the
optical lags the X-rays by $\sim1$d (e.g.\ Fig.~\ref{79ccf}) or that
there is no measurable lag. However, the average sampling was $\sim$2d,
so the lags were rarely measured to better than 0.5d and it was
not possible to be absolutely certain that the X-rays never led the
optical.
\subsection{XMM-Newton and Swift single band lags}
To refine the measurement of the X-ray/optical lag better sampling is
needed than was available with the RXTE and ground based optical
monitoring. Such sampling is possible with Swift and with XMM-Newton.
\begin{figure}[h]
\includegraphics[width=80mm,height=40mm]{x+b_short.eps}
\caption{Swift X-ray and B-band lightcurves of NGC4395
\protect\cite[from][]{cameron12}.}
\label{4395lcs}
\end{figure}
\begin{figure}[h]
\includegraphics[width=80mm,height=40mm]{4395_vshort_ccf.ps}
\caption{Discrete correlation function \citep[DCF;][]{edelson88}
derived from Swift event-mode UVOT data and X-ray photon counting
data. The UVW2 lags behind the X-rays by $\sim$400s, although at
low significance \protect\cite[from][]{cameron12}.}
\label{4395ccf}
\end{figure}
\begin{figure}[h]
\includegraphics[width=78mm,height=40mm]{4051_om_x_lcs_1pan.eps}
\caption{XMM-Newton OM UVW1 imaging observations (black dots) of NGC4051 with 1ks exposure superposed on a model X-ray lightcurve from EPIC and RXTE
observations, reprocessed by a ring of radius 0.14d
\protect\cite[from][]{mason02}.}
\label{4051om}
\end{figure}
Swift allows observations in the 0.5-10 keV X-ray band with the X-Ray
Telescope (XRT) and, with the UV-Optical Telescope (UVOT),
simultaneous observations in one of either 3 UV (UVW2, UVM2 and UVW1)
or 3 optical (U, B and V) bands, depending on filter
selection. XMM-Newton allows X-ray observations with EPIC in a
similar band to Swift, and also allows UV/optical observations with
the Optical Monitor (OM), which is similar to the Swift UVOT.
In Fig.~\ref{4395lcs} we show Swift X-ray and B-band variations in the low
black hole mass AGN NGC4395 (mass $3.6 \times 10^{5} M_{\odot}$; see Bentz
and Katz, http://www.astro.gsu.edu/AGNmass/, for all masses and
luminosities). The data here are presented with a resolution of one
satellite orbit (96 minutes).
We can see a strong correlation and cross-correlation
analysis (not shown here) reveals that the B-band lags the X-rays by
less than the orbital sampling time \citep{cameron12}.
To obtain still higher time resolution it is possible to
split the Swift orbital observations, of total duration
$\sim$1000-1500s, into smaller time bins, e.g.\ 100 or 200s.
In Fig.~\ref{4395ccf} we show the DCF derived from Swift X-ray and
UVW2-band observations with 200s time resolution (lightcurves not
shown here). Although with large errors,
this DCF suggests that the UVW2-band lags the X-rays by
$\sim$400s. This lag is approximately what we expect based on
X-ray reprocessing and formed the basis of the exciting XMM-Newton
observations which we describe in Section~\ref{4395xmm}.
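A minimal sketch of the DCF of \cite{edelson88} follows: the unbinned correlation coefficients of all pairs of points are binned by their pairwise lag. Measurement-error corrections and uneven weighting are omitted for brevity, and the two sinusoids are synthetic stand-ins for the lightcurves:

```python
import numpy as np

def dcf(ta, xa, tb, xb, lag_edges):
    """Discrete correlation function: bin the unbinned correlation
    coefficients UDCF_ij by the pairwise lag dt = tb_j - ta_i."""
    xa_n = (xa - xa.mean()) / xa.std()
    xb_n = (xb - xb.mean()) / xb.std()
    dt = tb[None, :] - ta[:, None]          # all pairwise lags
    udcf = xa_n[:, None] * xb_n[None, :]    # all pairwise coefficients
    centers = 0.5 * (lag_edges[:-1] + lag_edges[1:])
    vals = np.array([udcf[(dt >= lo) & (dt < hi)].mean()
                     for lo, hi in zip(lag_edges[:-1], lag_edges[1:])])
    return centers, vals

# Example: series b lags series a by 5 time units.
t = np.arange(0.0, 100.0, 1.0)
a = np.sin(0.2 * t)
b = np.sin(0.2 * (t - 5.0))
lags, vals = dcf(t, a, t, b, np.arange(-15.5, 16.5, 1.0))
best = lags[np.argmax(vals)]   # peak near a lag of +5
```

With the sign convention used here (and throughout this paper), a positive peak lag means the second series lags behind the first.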
With XMM-Newton the OM has, until the observations which we report
in Section~\ref{4395xmm}, usually been used in imaging mode. This mode provides
a minimum exposure time of 800s with a typical 300s readout time, thus
limiting time resolution to about 1100s. Observations in this mode
have not, in general, found significant correlations between the X-ray
and UV/optical bands \citep{smith07}. One exception is NGC4051
\citep{mason02} where, at $85\%$ confidence, the UVW1 band was seen to
lag the X-rays by $\sim$0.2d (Fig.~\ref{4051om}), broadly
consistent with reprocessing.
\subsection{Swift multi-band lags}
A number of programmes have been undertaken with Swift to measure the
lags between the X-ray band and the 6 UVOT bands. \cite{shappee14}
presented lag measurements of the Seyfert galaxy NGC2617 but
the results are slightly puzzling. The lags increase with wavelength,
but if the fit is forced to go through the X-ray point then, for
$lag \propto \lambda^{\beta}$, they find $\beta=0.37$, which is far
from the 4/3 expected from reprocessing. If the X-ray point is
ignored, a fit of $\beta=1.18$ is found between the other bands, but
if extrapolated to the X-ray wavelength, the fit is offset from the
X-ray point by a very large lag of 2.4d.
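Fitting the wavelength scaling $lag \propto \lambda^{\beta}$ can be sketched as a log-space linear fit. The lag values below are synthetic, generated with $\beta = 4/3$; the wavelengths are approximate UVOT band central wavelengths, not the NGC2617 measurements:

```python
import numpy as np

rng = np.random.default_rng(4)

# Approximate UVOT central wavelengths (Angstroms) for
# UVW2, UVM2, UVW1, U, B, V.
wavelengths = np.array([1928.0, 2246.0, 2600.0, 3465.0, 4392.0, 5468.0])

# Synthetic lags following lag = a * lambda**beta with beta = 4/3,
# plus 5% multiplicative scatter (placeholder values).
true_beta, a = 4.0 / 3.0, 1e-4
lags = a * wavelengths**true_beta * rng.normal(1.0, 0.05, wavelengths.size)

# log(lag) = log(a) + beta * log(lambda), so a straight-line fit in
# log-log space recovers beta as the slope.
beta_fit, log_a = np.polyfit(np.log(wavelengths), np.log(lags), 1)
```

Forcing the fit through an X-ray point, as \cite{shappee14} do, corresponds to pinning the intercept rather than leaving $\log a$ free.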
\begin{figure}[h]
\includegraphics[width=50mm,height=80mm,angle=270]{lc_2pan_x_w2_land.ps}
\caption{Long timescale Swift X-ray (lower panel) and UVW2 (upper
panel) observations of NGC5548 \protect\citep{mch14}.
}
\label{5548xw2long}
\end{figure}
\hspace*{-4mm}
\begin{minipage}{50mm}
\hspace*{-2mm}
\includegraphics[width=50mm,height=40mm,angle=0]{fig5_xw2_sub20.ps}
\captionof{figure}{Swift X-ray (lower panel) and UVW2 (upper panel)
for a $\sim$150d period of twice-daily observations of NGC5548
\protect\citep{mch14} with a 20d running mean subtracted from both.}
\label{5548intensive}
\hspace*{2mm}
\end{minipage}
\hspace*{1mm}
\begin{minipage}{25mm}
\includegraphics[width=25mm,height=40mm]{fig7a_jav_xw2.ps}
\captionof{figure}{Lag
of X-rays by UVW2 in NGC5548 from the
data shown in Fig.~\ref{5548intensive}.
}
\label{5548javelin}
\hspace*{2mm}
\end{minipage}
\cite{mch14} presented the result of almost 3 years of Swift
monitoring of NGC5548 with $\sim10 \times$ more observations than
those of \citeauthor{shappee14} In order to follow the Swift preference to
use the `filter of the day', most of the UVOT observations were in the
UVW2 band (Fig.~\ref{5548xw2long}) which shows a strong ($>99.99\%$
confidence) correlation with the X-rays. There are, however, trends in
the UVW2 emission, lasting for a few months, which are not present in
the X-ray emission. Such trends may arise from intrinsic disc
variations caused by inwardly propagating accretion rate fluctuations.
Such long term trends, which are not present in both lightcurves, can
distort cross correlation functions and so, to measure lags on shorter
timescales, it is recommended practice to remove such trends by
subtracting a running mean \citep{welsh99}. From a 150d period
containing 300 observations M$\rm^{c}$Hardy\,\, et al. therefore subtracted a 20d
running mean from both lightcurves and a strong X-ray/UVW2 correlation
is then seen (Fig.~\ref{5548intensive}).
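The detrending step can be sketched as a boxcar running mean with edge-normalized weights. The window length in samples is an assumption: with twice-daily sampling, a 20d mean corresponds to roughly 40 points, and the example series is synthetic:

```python
import numpy as np

def subtract_running_mean(x, window):
    """Subtract a boxcar running mean, normalizing at the edges so the
    average is taken only over the samples actually inside the window."""
    kernel = np.ones(window)
    norm = np.convolve(np.ones_like(x), kernel, mode="same")
    running_mean = np.convolve(x, kernel, mode="same") / norm
    return x - running_mean

# Example: a slow linear trend plus fast variations; subtracting the
# running mean removes the trend while keeping the fast variability.
t = np.arange(300.0)
signal = 0.01 * t + np.sin(2.0 * np.pi * t / 10.0)
detrended = subtract_running_mean(signal, 40)
```

The same operation applied to both lightcurves suppresses the months-long trends before the cross-correlation is computed.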
The lag between the two bands
was measured using a variety of techniques \citep[e.g. ZDCF,][]
{alexander13}, all showing that the UVW2 lags behind the X-rays by
about 1d. In Fig.~\ref{5548javelin} we show the lag
($0.70^{+0.24}_{-0.27}$d) as measured using Javelin
\citep{zu11_javelin}. These observations provided the first
unambiguous evidence that the UV variability was both strongly
correlated with the X-ray variations and lagged behind the X-ray
variations.
Javelin was designed to improve continuum-line lag measurements in
AGN. It assumes that the line lightcurve is a scaled, smoothed and
displaced version of the continuum. It models the variability as a
damped random walk (DRW), to interpolate between gaps, and directly
compares simulated line lightcurves with the observed line
lightcurve to recover the lag. \cite{pancoast14} show that
Javelin recovers simulated lags very well. It might be argued that
a DRW, which has a long timescale power spectral slope, $\alpha_{L}$,
of 0, does not describe the long timescale (months/years) X-ray
variability of AGN \cite[$\alpha_{L}= -1$, e.g.][]{mch04,mch05a} very
well. However on short timescales the DRW and X-ray power spectral
slopes are similar so there is no obvious reason why Javelin should be
less applicable here than in the measurement of continuum-line
lags.
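A DRW lightcurve is an Ornstein-Uhlenbeck process and can be simulated at arbitrary sample times with the exact conditional update rule; the damping timescale, amplitude and sampling pattern here are arbitrary placeholders:

```python
import numpy as np

def simulate_drw(times, tau, sigma, rng):
    """Simulate a damped random walk (OU process) at the given sample
    times using the exact conditional mean and variance."""
    x = np.empty(times.size)
    x[0] = sigma * rng.standard_normal()
    for i in range(1, times.size):
        r = np.exp(-(times[i] - times[i - 1]) / tau)
        # conditional mean r*x[i-1], conditional std sigma*sqrt(1 - r^2)
        x[i] = r * x[i - 1] + sigma * np.sqrt(1.0 - r * r) * rng.standard_normal()
    return x

rng = np.random.default_rng(5)
t = np.cumsum(rng.uniform(0.2, 1.0, size=2000))  # irregular sampling (days)
lc = simulate_drw(t, tau=20.0, sigma=1.0, rng=rng)
```

The exact update handles irregular sampling directly, which is what lets Javelin interpolate across the gaps in real monitoring data.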
\cite{mch14} also present observations of NGC5548 in the 5 other UVOT bands
thereby allowing the best measurement at that time of lag as a
function of wavelength (Fig.~\ref{5548lags}). In this case the fit
goes straight through the X-ray point with no offset, with
$\beta = 1.23 \pm 0.31$, in very good agreement with a reprocessing
model.
\subsection{Comparison of Lags with Models}
In Fig.~\ref{5548lags} we also show the expected model lags following
impulse X-ray illumination of a standard thin SS disc for the accepted
black hole mass, accretion rate and illuminating X-ray luminosity of
NGC5548. Here the model lags are defined by the time after the
initial X-ray impulse illumination for half of the reprocessed light
to be received. The observed lags are a factor of $\sim$3
larger than the model lags. Only by invoking a much hotter accretion
disc, eg by assuming a much higher accretion rate or illuminating X-ray flux
than currently accepted values, can we push the standard model close to
agreement with the observations. Later observations of NGC5548 by
\cite{edelson15} and \cite{fausnaugh15} with extended UV and optical
monitoring confirmed this result and
\cite{troyer15} find a similar result in NGC6814. Although
initially surprising, microlensing observations \citep{morgan10} have
also indicated that accretion discs might be factors of a few larger
than predicted for SS discs. A possible explanation is an
inhomogeneous disc temperature structure. Hotter clumps at large radii can
enhance the emission at those radii, making the disc appear larger by
factors of a few, depending on the degree of clumpiness
\citep{dexter11}.
\begin{figure}[h]
\includegraphics[width=80mm,height=40mm]{fig9_lags.ps}
\caption{Lags relative to the UVW2 band for NGC5548
\protect\cite[from][]{mch14}. The solid line is the best fit through
all of the data, including the X-ray point. The fit of $lag \propto
\lambda^{\beta}$ gives $\beta=1.23$, in good agreement with X-ray
reprocessing. However the expected lags, assuming a
standard thin disc \citep{shakura73}, are
$\sim3\times$ shorter (dashed red line) than observed.
}
\label{5548lags}
\end{figure}
\section{XMM-Newton and Ground Based Observations of NGC4395 }
\label{4395xmm}
As we only have good lag measurements for one AGN, it is very
important to make lag measurements on other AGN to determine
whether NGC5548 is just unusual or whether standard SS disc theory is
incomplete. Lag measurements on larger mass AGN require long
observational campaigns. However for smaller mass AGN such as NGC4395,
the expected lags can be very well measured in long observations with
XMM-Newton using EPIC for X-rays in combination with the OM in fast
readout mode for the UV/optical. The fast readout mode has not been
widely used for AGN observations but allows continuous readout with
sub-second resolution. The suggested 400s UV lag in NGC4395 cannot be
detected with standard OM imaging observations with $\sim1100$s time resolution.
On 28 and 30 December 2014 we therefore observed NGC4395 for
$\sim$53ks each time with XMM-Newton. We observed with the OM in the
UVW1 band thus extending our coverage to shorter wavelengths than can
be observed from the ground. This band has the highest sensitivity of
the UV bands and less host galaxy contamination than the optical
bands.
\begin{figure*}[h]
\includegraphics[width=170mm,height=80mm]{lc30ii.eps}
\caption{XMM-Newton EPIC X-rays (top panel), OM UVW1 (middle panel) and ground based g-band (bottom panel) from 30 December 2014 (Connolly et al, in prep). The X-rays are binned to 100s and the UVW1 and g-band to 200s.}
\label{xmmlcs}
\end{figure*}
During an XMM-Newton observation a source is typically
only observable from the ground for $\sim4$hr. To provide simultaneous
g-band observations we therefore made CCD imaging observations with
either 100 or 200s integrations depending on telescope size at 6
different ground based observatories (LCOGT McDonald Observatory,
Texas; Whipple Observatory, Arizona; LCOGT Haleakala Maui; Kanata
Observatory, Japan; ARIES observatory, India and the Wise Observatory,
Israel). In Fig.~\ref{xmmlcs} we show the X-ray, UVW1 and combined
g-band lightcurves from 30 December 2014. A good correlation is seen
between all bands. In Figs.~\ref{xmmdcfw1} and ~\ref{xmmdcfg} we show
the DCFs between the X-ray band and the UVW1 and g-band lightcurves
respectively, confirming high significance correlations.
To refine the lags relative to the X-rays we calculate
lag probability distributions using Javelin for both the UVW1
(Fig.~\ref{xmmjavelinw1}) and g-bands (Fig.~\ref{xmmjaveling}). The
resultant lags are $473^{+47}_{-98}$ and $788^{+44}_{-54}$s. In
Fig.~\ref{xmmlags} we plot both the UVW1 and g-band lags as a function
of wavelength. If we force the fit through zero, a simple linear fit
(i.e.\ $\beta=1$, red line) is best, although $\beta=4/3$ (blue line) is
also an acceptable fit. These observations indicate that reprocessing
of X-rays is also responsible for the UV/optical variability of
NGC4395.
\begin{minipage}{40mm}
\hspace*{-3mm}
\includegraphics[width=40mm,height=40mm]{uvw1DCF.eps}
\captionof{figure}{DCF between the X-ray and UVW1 data for NGC4395 shown in
Fig.~\ref{xmmlcs}. Simulation based 75\%, 90\% and 99\% confidence
levels are shown.\vspace*{10mm}}
\label{xmmdcfw1}
\end{minipage}
\hspace*{1mm}
\begin{minipage}{40mm}
\includegraphics[width=40mm,height=40mm]{gbandDCF.eps}
\captionof{figure}{DCF between the X-ray and g-band data for NGC4395
shown in Fig.~\ref{xmmlcs}. Simulation based 75\% and 90\%
confidence levels are shown.\vspace*{10mm} }
\label{xmmdcfg}
\end{minipage}
In Fig.~\ref{4395model} we compare the expected radial emissivity profile of
the disc with the distances derived from the lag measurements,
following \cite{lira11}. The distances derived from lag measurements
are close to the expected peak emissivity regions. Due to
computational problems we have not yet derived lags in the same way as
for NGC5548 (Fig.~\ref{5548lags}) but
Fig.~\ref{4395model} indicates that, for NGC4395, the agreement
between standard SS thin disc theory and model may be closer than for NGC5548.
\begin{minipage}{37mm}
\hspace*{-5mm}
\includegraphics[width=37mm,height=38mm]{uvw1lag.eps}
\captionof{figure}{Lag of X-rays by UVW1
for NGC4395 from the data shown in Fig.~\ref{xmmlcs} using Javelin.}
\label{xmmjavelinw1}
\end{minipage}
\hspace*{1mm}
\begin{minipage}{38mm}
\includegraphics[width=38mm,height=40mm]{glag.eps}
\captionof{figure}{Lag of X-rays by g-band in NGC4395 from the data shown in Fig.~\ref{xmmlcs} using Javelin.}
\label{xmmjaveling}
\end{minipage}
\begin{figure}[h]
\includegraphics[width=80mm,height=40mm]{xmmlags.eps}
\caption{Lags of UVW1 and g-band behind the X-rays in NGC4395 derived
from the data shown in Fig.~\ref{xmmlcs}. For
$\mathrm{lag} \propto \lambda^{\beta}$, the best fit, if forced through the
origin, gives $\beta=1.0$ (red line). However, $\beta=4/3$ (blue line)
is also an
acceptable fit.}
\label{xmmlags}
\end{figure}
\begin{figure}[h]
\includegraphics[width=78mm,height=40mm]{model0_1.eps}
\caption{Emissivity of the accretion disc as a function of radius in
NGC4395 for the g-band (dark green) and UVW1 (turquoise)
bands \protect\cite[following][]{lira11}. We assume
$M= 3.6\times10^{5}$ $M_{\odot}$, {$\dot{m}_{E}$}=0.0013 and $R_{in}=3R_{g}$. We also assume
$L_{2-10}=2.8\times10^{42}$ erg s$^{-1}$ and extrapolate from 0.1 to 500 keV
but also assume a high albedo so that the power absorbed by the disc
is approximately equal to just $L_{2-10}$. The solid lines are the
total luminosity released by the disc and the dotted lines are just the
gravitational emission.}
\label{4395model}
\end{figure}
\section{Conclusions }
To test whether the X-ray to UV/optical lags in an AGN which is quite
different to NGC5548 are also larger than expected from the standard
SS model, we made XMM-Newton EPIC and OM observations of NGC4395. This
AGN has a mass $\sim 100 \times$ less and accretion rate (in Eddington
units) $\sim 20 \times$ less than that of NGC5548. To obtain high
enough time resolution in the OM to sample the previously suggested
400~s UV lag, we used the fast readout mode, not previously used for AGN
observations. To obtain the UV lightcurve with the highest possible
signal-to-noise we used the UVW1 filter. To obtain a high-quality,
continuous, optical lightcurve we chose the g-band and made
observations from six different observatories around the globe.
All observations were very successful, and we measured lags, relative
to the X-rays, of $473^{+47}_{-98}$ and $788^{+44}_{-54}$~s for the UVW1
and g-bands, respectively. These lags are in good agreement with the
hypothesis that the UV/optical variability is driven by reprocessing
of X-ray emission. However, unlike in NGC5548, preliminary modelling
indicates that the measured lags are not too far different from those
expected from standard SS disc theory. We remain cautious in our
interpretation at present but we do note that the disc in NGC4395 is
$\sim50\%$ hotter than in NGC5548. Increased disc
temperature may lead to a more stable disc \cite[e.g.][]{churazov01},
less sensitive to radiation-induced perturbations \citep{pringle97}.
These observations demonstrate the potential of XMM-Newton for
X-ray/UV lag measurements in low mass AGN.
\vspace*{4mm}
\noindent
{\bf Acknowledgements}
\noindent
We thank the XMM-Newton operations team for considerable advice and
assistance in setting up the observations of NGC4395. We also thank
the staff and observers at the McDonald, FLWO, Haleakala, Kanata,
Aries and Wise observatories. IMcH thanks STFC for support under grant
ST/M001326/1.
\section{Introduction}
\label{Section:Intro}
In high-energy heavy-ion collisions, the high-transverse momentum ($p_{T}$) partons ($p_{T} \gtrapprox 10$~GeV) are generated almost at the instant at which the incoming nuclei overlap. Such high $p_{T}$ partons are generated in parton-parton exchanges with large momentum transfers $Q \gg \Lambda_{\mathrm{QCD}}$. They are typically produced far from their mass shell and engender multiple collinear emissions produced over a large time range. In the case of a heavy-ion collision, the propagation and development of these parton showers are strongly affected by the produced Quark Gluon Plasma (QGP).
Studying jet modification in nucleus-nucleus collisions relative to proton-proton collisions, together with constraints from model-to-data comparison provides unique opportunities to probe the properties of the QGP~\cite{Bjorken:1982tu,Appel:1985dq,Baier:1996kr,Baier:1996sk,Zakharov:1996fv,Gyulassy:1999zd,Gyulassy:2000fs,Gyulassy:2000er,Wiedemann:2000za,Wiedemann:2000tf,Guo:2000nz,Wang:2001ifa,Majumder:2009ge,Arnold:2001ba,Arnold:2002ja,Majumder:2010qh,Blaizot:2015lma,Qin:2015srf,Cao:2020wlm}.
The experimental attempts started at the Relativistic Heavy Ion Collider (RHIC) with the observation of suppression in the yield of single inclusive hadrons~\cite{PHENIX:2001hpc,PHENIX:2003djd,PHENIX:2003qdj,STAR:2002ggv,STAR:2003fka}
and associated hadrons (dihadrons)~\cite{STAR:2002svs, STAR:2005ryu,PHENIX:2007yjc}
produced with high transverse momentum relative to the yield in proton-proton collisions.
Since 2010, starting at the Large Hadron Collider (LHC) and later at RHIC, the ability of experiments evolved from single hadrons and dihadrons to jets~\cite{ALICE:2013dpt,ATLAS:2010isq,CMS:2011iwn}.
Over the last decade, experiments have attained the ability not just to study the energy-momentum and cross section of a jet but also to look at modifications of the internal properties of the jet, often referred to as \emph{jet substructure}. Based on current detector improvements and accumulated high-statistics data at RHIC and the LHC, it is possible to analyze a vast variety of observables revealing different aspects of the jet-medium interaction~\cite{Connors:2017ptx}.
For example, the yield suppression and internal structure of fully reconstructed jets, revealed in observables such as the jet fragmentation function and jet shape (respectively), provide details on the diffusion of jet energy and momentum in momentum or angular space due to the interaction with the medium~\cite{ATLAS:2010isq,CMS:2011iwn,ATLAS:2012tjt,ALICE:2013dpt,ATLAS:2014ipv,CMS:2016uxf,STAR:2016dfv,ATLAS:2018gwx,ALICE:2019qyj,CMS:2021vui,STAR:2020xiv,CMS:2013lhm,CMS:2016cvr,CMS:2018zze,CMS:2018jco,ALICE:2019whv,CMS:2021nhn,CMS:2014jjt,ATLAS:2014dtd,ATLAS:2017nre,ATLAS:2018bvp,ATLAS:2019dsv,ATLAS:2019pid}.
Even the structural modification of hard partonic branching is now potentially accessible through groomed jet observables~\cite{CMS:2017qlm,Kauder:2017cvz,CMS:2018fof,ALICE:2019ykw,ALargeIonColliderExperiment:2021mqf,ATLAS:2022vii}.
On the theory side, many studies have attempted to describe and understand the jet-medium interaction by constructing models that reproduce these various observables or propose predictions and new observables~
\cite{Vitev:2009rd,Qin:2010mn,Casalderrey-Solana:2011rbm,He:2011pd,Qin:2012gp,Blaizot:2014ula,Chien:2015hda,Chang:2016gjp,Mehtar-Tani:2016aco,Chen:2016vem,Chien:2016led,Mehtar-Tani:2017web,Chang:2017gkt,Tachibana:2017syd,Li:2018xuv,Chang:2019sae,Qiu:2019sfj,Ringer:2019rfk,Cao:2021rpv,Mehtar-Tani:2021fud,Sirimanna:2022zje}.
In particular, to obtain a universal understanding, it is essential to simultaneously explain multiple observables, ultimately all observables, with a consistent theoretical picture.
Therefore, Monte Carlo calculations, which can generate experiment-like events by a single model, are a powerful tool for theoretical approaches because they enable one to calculate a wide range of event-by-event defined jet observables~\cite{Lokhtin:2005px,Zapp:2008gi,Renk:2008xq,Armesto:2009fj,Schenke:2009gb,Li:2010ts,Young:2011qx,Lokhtin:2011qq,Zapp:2013vla,Casalderrey-Solana:2014bpa,He:2015pra,Casalderrey-Solana:2016jvj,Cao:2016gvr,KunnawalkamElayavalli:2017hxo,Milhano:2017nzm,Chen:2017zte,He:2018xjv,Luo:2018pto,Park:2018acg,Ke:2018jem,Casalderrey-Solana:2019ubu,Pablos:2019ngg,Caucal:2019uvr,Ke:2020clc,Dai:2020rlu,Chen:2020tbl,Zhao:2021vmu,Liu:2021dpm,Luo:2021hoo,Luo:2021voy,Caucal:2021cfb,Yazdi:2022bru,Yang:2022nei,Shi:2022rja}.
Jets evolve dynamically, moving through the expanding medium, and generating more partons from splits and interactions with the dense medium.
The original partons start at very high virtuality, and thus, the early splits have a small transverse size.
These splittings from the leading parton and the still highly-virtual daughters are driven by their individual virtualities, with minor medium correction via the scattering, strongly suppressed due to their small transverse size. We refer to these as Vacuum-Like Emissions (VLEs)~\cite{Caucal:2018dla}.
To simulate the VLEs, taking into account the reduction in the effective interaction rate with scale dependence, an event generator such as MATTER~\cite{Cao:2017qpx,Majumder:2013re} can be employed.
With repeated splittings, the virtuality of the partons reduces to the point that splits are widely separated in time.
With decreasing virtuality, the transverse size of the parton becomes larger, thereby increasing the rate of interaction with the medium, which in turn triggers more radiation.
Thus, the main mechanism causing parton splittings changes dynamically in the medium.
The evolution of such partons at lower virtuality but energy still large enough to treat the medium interaction perturbatively can be approximated by kinetic theory-based approaches for on-shell particles, as implemented by generators such as LBT~\cite{He:2015pra,Cao:2016gvr,He:2018xjv,Luo:2018pto}, or MARTINI~\cite{Schenke:2009gb,Yazdi:2022bru,Shi:2022rja}.
As partons transition to energies and virtualities close to those of the QGP, they begin to undergo strong coupling~\cite{Casalderrey-Solana:2014bpa} and thermalization with the medium~\cite{Tachibana:2020mtb}.
Thus, jets interact with the medium over a wide range of scales, which requires incorporating multiple generators at different scales for simulations~\cite{Majumder:2010qh}.
JETSCAPE is a general-purpose framework for Monte Carlo simulations of the complete evolution in high-energy heavy-ion collisions~\cite{Putschke:2019yrg,JETSCAPE:2019udz,JETSCAPE:2020shq,JETSCAPE:2020mzn,JETSCAPE:2021ehl,JETSCAPE:2022cob,JETSCAPE:2022jer,JETSCAPE:2022hcb}.
The framework is designed to be as general and extensive as possible while modularizing each physics element involved in a collision event, such as the generation of geometric initial conditions, hydrodynamic evolution of the soft sector, jet production by hard scattering, etc., so that users can employ a module based on their preferred physical description for each.
For the in-medium parton shower evolution, the most distinctive feature of the JETSCAPE framework is its support for multi-stage descriptions that, by stitching multiple models together, cover a broader range of scales. Depending on the virtuality or energy of a parton, each model becomes active to handle the parton shower evolution and its interactions with the medium.
Recently, in Refs.~\cite{JETSCAPE:2022jer,JETSCAPE:2022hcb}, we systematically studied the energy loss of large-transverse-momentum particles, jets, and charmed particles using a multi-stage model developed within the JETSCAPE framework, combining two modules: MATTER for the high-virtuality parton shower and LBT for low virtuality.
Our simulations indicate that the single high-$p_T$ particle spectra are dominated by the large virtuality phase simulated by the MATTER module.
On the other hand, to describe the suppression of reconstructed jets and D mesons, we found that the energy loss of soft daughter partons and heavy quarks is governed by the low-virtuality scattering dominated phase simulated by the LBT module.
One further important insight from our prior work is that the reduction of the interaction with the medium at high virtuality due to coherence effects plays a crucial role in explaining the weak suppression of single charged particles with $p_{T}\gtrapprox 10~\mathrm{GeV}$.
These coherence effects occur because the partons probing the medium have a small transverse size when the virtuality is large.
A section of QGP resolved at such a short distance scale appears more dilute, resulting in fewer interactions \cite{Kumar:2019uvu}.\footnote{In several other models, e.g., those in Refs.~\cite{Caucal:2019uvr,Ke:2020clc}, coherence effects are implicitly taken into account, without detailed formulations, by turning off the medium effect at high virtuality.}
Coherence effects implemented in MATTER drastically improve the description of the transverse momentum dependence of the nuclear modification factor for inclusive single-charged particles, even at the qualitative functional behavior level.
In contrast, for reconstructed jets at the currently available collision energies, coherence effects are not visible in the transverse momentum dependence of the nuclear modification factor, which only necessitated a readjustment of the overall medium coupling parameter $\alpha^{\mathrm{fix}}_{s}$.
Thus, it is essential to search for the role of coherence effects in the evolution of jet showering patterns by examining further inner jet structure modification.
In this paper, we systematically analyze the observables characterizing the internal structure of jets using the results of the exact same numerical simulations with MATTER+LBT that were used to study the nuclear modification factors for reconstructed jets and high $p_{T}$ single-charged particles in Ref.~\cite{JETSCAPE:2022jer}. The goal is to explore the details of the interaction strength at each scale on the internal structure of the jet.
In particular, we examine the groomed jet observables, which display the effect of jet-medium interactions at the early high-virtuality stage, and the jet fragmentation function, which shows the medium effect on partons throughout a wide range of scales.
In this work, we do not re-tune any parameters and employ those obtained in our previous work~\cite{JETSCAPE:2022jer}.
The paper is organized as follows.
In Sec.~\ref{Section:Model}, salient characteristics of the underlying model are presented.
In Subsec.~\ref{Subsection:ModelOverview}, an overview of the framework and setup is outlined.
Subsection~\ref{Subsection:CoherenceEffects} is devoted to formulating coherence effects.
This is followed by an investigation of the medium modification of jet substructure observables, focusing on coherence effects, by presenting results from our model calculations in Sec.~\ref{Section:Results}. Here, we also make predictions for the upcoming measurements of the jet substructure observables at RHIC.
A summary of our results and concluding remarks are presented in Sec.~\ref{Section:Summary}.
The {\hyperlink{Appen}{\textcolor{black}{Appendix}}} is dedicated to the presentation of our predictions of jet $R_{\mathrm{AA}}$ at the top RHIC energy for benchmarking purposes.
\section{Model}
\label{Section:Model}
JETSCAPE is a general-purpose event generator framework where different \emph{sub} event generators can be included in a modular fashion, producing an extensive end-to-end simulation of a heavy-ion collision. In this paper, we will use the results of simulations that were generated in Ref.~\cite{JETSCAPE:2022jer} to calculate all jet substructure observables. This is not just for convenience but rather to demonstrate how the exact same simulations can simultaneously describe both the jet and leading hadron suppression, as well as several jet substructure observables.
To that end, only a very brief overview of the components of the simulation will be provided in this section. The reader may refer to Ref.~\cite{JETSCAPE:2022jer} for specific details of the physics included in a MATTER+LBT simulation within the JETSCAPE framework. Computational aspects of the JETSCAPE framework are described in great detail in Ref.~\cite{Putschke:2019yrg}, while the basic physics of multi-stage simulators is described in Ref.~\cite{JETSCAPE:2017eso}.
\subsection{Overview}
\label{Subsection:ModelOverview}
To explore the medium modification of jet substructure, we perform simulations of jet events in high-energy nucleus-nucleus collisions utilizing the full framework of JETSCAPE in two separate steps.
First, we calculate the event-by-event space-time profiles of the QGP medium in nucleus+nucleus (A+A) collisions for the estimation of the local medium effect on parton shower evolution.
For this part, we perform simulations of (2+1)-D free-streaming pre-equilibrium evolution~\cite{Liu:2015nwa} and subsequent viscous hydrodynamic evolution by the (2+1)D VISHNU code package~\cite{Shen:2014vra} with the initial condition generated by T\raisebox{-0.3ex}{R}ENTo\ \cite{Moreland:2014oya}.
Here the MAP parameters obtained by Bayesian calibration in Ref.~\cite{Bernhard:2019bmu} are used for the LHC energy calculations, while hand-tuned parameters are used for the top RHIC energy.
In the second step, the binary collision distribution from the same T\raisebox{-0.3ex}{R}ENTo\ initial condition as for the medium is used to sample the transverse position of a hard scattering.
The hard scattering is produced by \textsc{Pythia}\ 8~\cite{Sjostrand:2019zhc} with initial state radiation (ISR) and multiparton interaction (MPI) turned on, and final state radiation (FSR) turned off.
The produced partons in the hard scattering then undergo the multi-stage in-medium parton shower evolution within the JETSCAPE framework.
In this study, we use a combination of MATTER and LBT modules as described in Ref.~\cite{JETSCAPE:2022jer}.
The partons produced by hard scattering are first passed to the MATTER module, which simulates virtuality-ordered splitting of high-energy partons incorporating medium effects~\cite{Majumder:2013re,Cao:2017qpx}.
This description by MATTER is valid for partons with virtuality sufficiently larger than the accumulated transverse momentum and virtuality generated by scattering from the medium.
Partons whose virtuality is reduced by showering in MATTER are then transferred to LBT at a transition scale.
In LBT, the kinetic theory for on-shell partons with elastic and inelastic scatterings with medium constituents is applied~\cite{Wang:2013cia,He:2015pra,Cao:2016gvr}.
The parton splittings under this description are entirely scattering-driven.
In the multi-stage approach of the JETSCAPE framework, virtuality-dependent switching between modules is done bi-directionally on a per-parton basis using a switching parameter $Q^2_{\mathrm{sw}}$.
If the virtuality of the parton $Q^2 = p^\mu p_\mu - m^2$ falls below $Q^2_{\mathrm{sw}}$, it is then sent from MATTER to LBT.
Conversely, the parton is returned to MATTER if its virtuality exceeds $Q^2_{\mathrm{sw}}$ again, or it goes out of the dense medium. The transition from medium-like back to vacuum-like emission takes place at a boundary with a temperature $T_{c} = 0.16$~GeV.
In this study, $Q^2_{\mathrm{sw}}$ is set to $4~{\mathrm{GeV}}^2$.
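The bidirectional, per-parton switching described above can be sketched as a toy routine (the function and its signature are hypothetical, not the actual JETSCAPE implementation; parameter values are those quoted in the text):

```python
def next_stage(q2, temperature, q2_sw=4.0, t_c=0.16):
    """Toy per-parton stage selection: q2 is the parton virtuality
    (GeV^2) and temperature is the local medium temperature (GeV).

    Outside the dense medium (T < T_c), or at virtuality above the
    switching scale, the parton is handled by MATTER; otherwise it
    is handed to LBT.
    """
    if temperature < t_c:
        return "MATTER"  # outside the QGP: vacuum-like showering
    return "MATTER" if q2 > q2_sw else "LBT"
```

A parton whose virtuality oscillates across $Q^2_{\mathrm{sw}}$ simply changes module at each evaluation, mirroring the bidirectional switching.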
After all the partons are outside the QGP medium and have virtuality smaller than the cut-off scale $Q_{\mathrm{min}}^2=1\,{\mathrm{GeV}}^2$, they are hadronized via the Colorless Hadronization module, in which the Lund string model of \textsc{Pythia}\ 8 is utilized.
In both MATTER and LBT modules, the medium response effect is taken into account via recoil partons~\cite{Li:2010ts,Zapp:2012ak,Zapp:2013vla,Cao:2017hhk,Luo:2018pto,Park:2018acg,Tachibana:2020mtb}.
In the \emph{recoil} prescription, the energy-momentum transfer is described by scatterings between jet partons and medium partons.
For each scattering, a parton is sampled from the thermal medium.
Then, the scattered parton is assumed to be on-shell and is passed to LBT for its in-medium evolution, assuming weak coupling with the medium.
These \emph{recoil} partons and further accompanying daughter partons are collectively hadronized with the other jet shower partons.
On the other hand, a deficit of energy and momentum in the medium is left for each recoil process, where a parton emanating from the medium is included, post scattering, as a part of the jet.
We treat this deficit as a freestreaming particle, referred to as a \emph{hole} parton, and track it.
The hole partons are hadronized separately from other jet partons, and their energy and momentum within each positive particle jet cone are subtracted in the jet clustering routine to ensure energy-momentum conservation.
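The cone-based hole subtraction can be sketched as follows (a toy routine with a hypothetical signature; in the actual analysis the subtraction is performed within the jet clustering routine):

```python
import math

def subtract_holes(jet_pt, jet_eta, jet_phi, holes, radius=0.4):
    """Subtract the transverse momentum of hole partons that fall
    inside the jet cone; `holes` is a list of (pt, eta, phi) tuples.
    """
    for pt, eta, phi in holes:
        # wrap the azimuthal difference into [-pi, pi)
        dphi = (phi - jet_phi + math.pi) % (2.0 * math.pi) - math.pi
        if math.hypot(eta - jet_eta, dphi) < radius:
            jet_pt -= pt
    return jet_pt
```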
In the later stages of evolution, where the energy of a jet shower parton reaches a scale comparable to the ambient temperature, the mean free path is no longer large enough to apply the kinetic theory-based approach with the recoil prescription. In principle, such soft components of jets are supposed to be thermalized and evolve hydrodynamically as part of the bulk medium fluid~\cite{Casalderrey-Solana:2004fdk,Stoecker:2004qu,Tachibana:2019hrn,Cao:2020wlm,Schlichting:2020lef,Luo:2021iay,Mehtar-Tani:2022zwf}. As in Refs.~\cite{Chaudhuri:2005vc,Renk:2005si,Satarov:2005mv,Neufeld:2008fi,Noronha:2008un,Qin:2009uh,Betz:2010qh,Neufeld:2011yh,Schulc:2014jma,Tachibana:2014lja,JETSCAPE:2020uew}, implementation of models based on such a description is proposed, and there are some studies of the hydrodynamic medium response to jets using it~\cite{Tachibana:2017syd,Okai:2017ofp,Chen:2017zte,Chang:2019sae,Tachibana:2020mtb,Casalderrey-Solana:2020rsj,Yang:2021qtl,Yang:2022nei,Pablos:2022piv,Yang:2022yfr}. However, with such an implementation of the hydrodynamic medium response, the computational cost for a systematic and exhaustive study covering various configurations, as presented in this paper, is prohibitively large.
Thus, in this paper, we mainly discuss the structure of the hard part of the jet, where the contributions of such very soft components are relatively small. A further comprehensive investigation with more detailed modeling of the medium response in jet modification is left for future work.
To investigate the modification of jet substructures by medium effects in A+A collisions, the calculations of the same observables for $p$+$p$ collisions are necessary as references.
For such calculations, the parton shower evolution modules are replaced entirely by MATTER with no in-medium scattering.
This setup for $p$+$p$ collisions of JETSCAPE, referred to as the JETSCAPE PP19 tune, is equivalent to the limit of no medium effect in the event and is detailed in Ref.~\cite{JETSCAPE:2019udz}.
\subsection{Coherence Effects at High Virtuality}
\label{Subsection:CoherenceEffects}
In this study, we focus on coherence effects~\cite{Mehtar-Tani:2010ebp,Mehtar-Tani:2011hma,Casalderrey-Solana:2011ule,Caucal:2018dla,Kumar:2019uvu} on the interaction of a highly virtual parton with the medium and explore their manifestation in jet substructure modification.
In Ref.~\cite{Kumar:2019uvu}, it was demonstrated that a hard parton with large virtuality resolves the very short-distance structure of the medium via the exchange of a gluon whose momentum is much larger than the medium temperature.
These coherence effects are formulated with the continuous evolution of the medium-resolution scale and give a gradual reduction of jet parton-medium interaction as a function of the virtuality.
For jet quenching calculations, coherence effects can be effectively implemented by introducing a modulation factor $f(Q^2)$, which diminishes as a function of the parent parton's virtuality $Q^2$, in the medium-modified splitting function:
\begin{multline}
\Tilde{P}_a(y,Q^2)=P^{\mathrm{vac}}_a(y)\\
\times
\left\{ 1 +
\int\limits^{\tau_{\mathrm{form}}^{+}}_{0}
d\xi^{+}
\hat{q}^a_{\mathrm{HTL}}
\frac{c^{a}_{\hat{q}} f(Q^2) \left[ 2 - 2\cos \left( \frac{\xi^{+} }{\tau_{\mathrm{form}}^{+}} \right) \right]}{y(1-y) Q^{2} (1+\chi_a)^{2}} \right\}.
\label{eq:HT-splitting function}
\end{multline}
In the equation above, $P^{\mathrm{vac}}_a(y)$ is the Altarelli-Parisi vacuum splitting function~\cite{Altarelli:1977zs} for the parent parton species $a=(g,q,\bar{q})$ with the forward light-cone momentum fraction of the daughter parton $y$,
$\chi_a=(\delta_{aq}+\delta_{a\bar{q}})y^2m_a^2/[y(1-y)Q^2 -y^2m_a^{2}]$ with $m_a$ being the parent parton mass, and $c^{a}_{\hat{q}}=\left[1-\frac{y}{2}\left(\delta_{a,q}+\delta_{a,\bar{q}}\right)\right] - \chi_a \left[1 - \left(1-\frac{y}{2}\right) \chi_a\right]$.
The integration in Eq.~(\ref{eq:HT-splitting function}) is taken over light-cone time $\xi^+$ with the upper bound $\tau_{\mathrm{form}}^{+}=2p^{+}/Q^{2}$ being the formation time of the radiated parton, where $p^+=p^\mu \hat{n}_{\mu}/\sqrt{2}$ [with $\hat{n}_{\mu}=\left(1,\mathbf{p}/\lvert \mathbf{p}\rvert\right)$] is the forward light-cone momentum of the parent parton.
The formulation of $\Tilde{P}_{a}(y,Q^2)$ in Eq.~(\ref{eq:HT-splitting function}) is obtained using soft collinear effective theory within the higher twist scheme~\cite{Abir:2014sxa,Abir:2015hta}.
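For orientation, a rough back-of-the-envelope estimate (ours, not from the source) of the formation time for a light parton near the switching scale $Q^2 = 4~\mathrm{GeV}^2$:

```latex
% Illustrative estimate, using \hbar c \approx 0.197~GeV fm:
\begin{align*}
p^{+} &\simeq \sqrt{2}\,p^{0} \approx 141~\mathrm{GeV}
\qquad (p^{0} = 100~\mathrm{GeV},\ m \approx 0),\\
\tau_{\mathrm{form}}^{+} &= \frac{2p^{+}}{Q^{2}}
\approx \frac{2 \times 141~\mathrm{GeV}}{4~\mathrm{GeV}^{2}}
\approx 71~\mathrm{GeV}^{-1} \approx 14~\mathrm{fm}.
\end{align*}
```

Since $\tau_{\mathrm{form}}^{+} \propto 1/Q^{2}$, the early, high-virtuality splittings form on much shorter scales than this.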
The parameterization of the virtuality-dependent modulation factor is given as~\cite{JETSCAPE:2022jer}
\begin{align}
f(Q^2) & =
\begin{cases}
\frac{1+10\ln^{2}(Q^2_\mathrm{sw}) + 100\ln^{4}(Q^2_\mathrm{sw})}{1+10\ln^{2}(Q^2) + 100\ln^{4}(Q^2)} &
\text{if } Q^2 > Q_{\rm sw}^2 \\
1 & \text{if } Q^2 \le Q_{\rm sw}^2
\end{cases}.
\label{eq:qhatSuppressionFactor}
\end{align}
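This parameterization is straightforward to code; a minimal sketch with $Q^2_{\mathrm{sw}}=4~\mathrm{GeV}^2$ as used in the simulations (a hypothetical helper, not JETSCAPE code):

```python
import math

def coherence_factor(q2, q2_sw=4.0):
    """Virtuality-dependent modulation factor f(Q^2), with q2 and
    q2_sw in GeV^2: f = 1 at and below the switching scale and
    decreases monotonically above it."""
    def poly(x):
        lx = math.log(x)
        return 1.0 + 10.0 * lx**2 + 100.0 * lx**4
    if q2 <= q2_sw:
        return 1.0
    return poly(q2_sw) / poly(q2)
```

The $\ln^4$ term dominates quickly, so the medium correction to high-virtuality splittings is strongly damped already an order of magnitude above $Q^2_{\mathrm{sw}}$.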
When this explicit virtuality dependence is eliminated, the strength of the medium effect is controlled solely by the conventional transport coefficient for a low virtuality (near on shell) parton from the hard-thermal-loop (HTL) calculation~\cite{He:2015pra},
\begin{align}
\hat{q}^{a}_{\mathrm{HTL}} =C_{a}\frac{42 \zeta(3)}{\pi} \alpha^{\mathrm{run}}_{s}(p^0 T) \alpha^{\mathrm{fix}}_{s} T^{3} \ln\left[\! \frac{2p^0T}{m^2_{D}}\! \right]. \label{eq:HTL-qhat-formula-C-2}
\end{align}
Here,
$C_{a}$ is the Casimir color factor for the hard parent parton,
$\zeta(3)\approx 1.20205$ is Ap\'{e}ry's constant,
$p^0$ is the energy of the hard parent parton,
$T$ is the temperature at its location,
and $m^2_{D}=\frac{4\pi\alpha^{\mathrm{fix}}_{s} T^2}{3} \left(N_{c}+\frac{N_{f}}{2}\right)$ is the Debye screening mass for a QCD plasma with $N_{c}= 3$ colors and $N_{f}= 3$ fermion flavors.
The coupling strength $\alpha^{\mathrm{run}}_{s}(p^0T)$
is evaluated at the scale $\mu^2=p^0T$ via the running coupling constant,
\begin{align}
\alpha^{\mathrm{run}}_{s}(\mu^2)
&=
\begin{cases}
\frac{4\pi}{11-2N_{f}/3} \frac{1}{\ln\left(\mu^2/\Lambda^2\right)}
& \text{if } \mu^2 > \mu^2_0\\
\alpha^{\mathrm{fix}}_{s} &
\text{if } \mu^2 \leq \mu^2_0
\end{cases},
\label{eq:running_coupling}
\end{align}
with $\Lambda$ being chosen such that $\alpha^{\mathrm{run}}_{s}(\mu_0^2)=\alpha^{\mathrm{fix}}_{s}$ at $\mu^2_0=1~\mathrm{GeV}^2$.
In this framework, $\alpha^{\mathrm{fix}}_{s}$ is the free parameter controlling the overall interaction strength and chosen to give the best fit to the experimental data of inclusive jet $R_{\mathrm{AA}}$~\cite{JETSCAPE:2022jer}.
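The two expressions above can be sketched together; the matching condition fixes $\Lambda^2 = \mu_0^2\, e^{-4\pi/(b_0 \alpha^{\mathrm{fix}}_{s})}$ with $b_0 = 11 - 2N_f/3$ (a schematic helper with all scales in GeV units, not JETSCAPE code):

```python
import math

ZETA3 = 1.2020569031595943      # Apery's constant, zeta(3)
N_C, N_F = 3, 3

def make_alpha_run(alpha_fix=0.3, mu0_sq=1.0, n_f=N_F):
    """One-loop running coupling, frozen to alpha_fix below mu0_sq
    and matched continuously at mu0_sq."""
    b0 = 11.0 - 2.0 * n_f / 3.0
    # Lambda^2 from the matching condition alpha_run(mu0_sq) = alpha_fix
    lambda_sq = mu0_sq * math.exp(-4.0 * math.pi / (b0 * alpha_fix))
    def alpha_run(mu_sq):
        if mu_sq <= mu0_sq:
            return alpha_fix
        return 4.0 * math.pi / (b0 * math.log(mu_sq / lambda_sq))
    return alpha_run

def qhat_htl(p0, temp, casimir=4.0 / 3.0, alpha_fix=0.3):
    """HTL transport coefficient (GeV^3) for a hard parton of energy
    p0 in a medium at temperature temp (both in GeV);
    casimir = 4/3 for quarks, 3 for gluons."""
    alpha_run = make_alpha_run(alpha_fix)
    m_d_sq = 4.0 * math.pi * alpha_fix * temp**2 / 3.0 * (N_C + N_F / 2.0)
    return (casimir * 42.0 * ZETA3 / math.pi
            * alpha_run(p0 * temp) * alpha_fix * temp**3
            * math.log(2.0 * p0 * temp / m_d_sq))
```

By construction, the gluon-to-quark ratio of $\hat{q}$ is simply the Casimir ratio $C_A/C_F = 9/4$.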
In this paper, we compare results from two different setups: with and without the virtuality-dependent coherence effects (referred to as Type-3 and Type-2 in Ref.~\cite{JETSCAPE:2022jer}, respectively).
For the case with coherence, $\Tilde{P}_a(y,Q^2)$ in Eq.~(\ref{eq:HT-splitting function}), with the virtuality-dependent modulation factor from Eq.~(\ref{eq:qhatSuppressionFactor}), is employed in the high virtuality phase by MATTER, with $\alpha^{\mathrm{fix}}_{s}=0.3$.\footnote{This configuration for MATTER+LBT with coherence effects is referred to as the JETSCAPE v3.5 AA22 tune, and its results are provided as defaults for comparisons with experimental and other data.}
In the setup without coherence effects, the modulation factor is fixed to unity [$f(Q^2)=1$] for any $Q^2$ to eliminate the explicit virtuality dependence.
The best fit with leading hadron and jet data is obtained with an $\alpha^{\mathrm{fix}}_{s}=0.25$ for this case. We will present results for jet substructure using events generated with the above parametrizations, both with and without coherence effects.
\section{Results}
\label{Section:Results}
In this section, we present the results
for jet substructure observables in Pb+Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02~\mathrm{TeV}$ based on the multi-stage (MATTER+LBT) jet quenching model described in the previous section.
A complementary study of the nuclear modification factor $R_{\mathrm{AA}}$ for reconstructed jets and charged particles using the same model has been presented in Ref.~\cite{JETSCAPE:2022jer}. Moreover, this same formalism has been applied to study the heavy-flavor observables and has been presented in Ref.~\cite{JETSCAPE:2022hcb}.
To show the capability of the JETSCAPE framework, we also provide predictions of the groomed jet observables, fragmentation function, and jet cone size dependence of inclusive jets and charged jets for the upcoming jet measurements at RHIC.
Throughout this work, the jet reconstruction and Soft Drop grooming are performed using the FastJet package~\cite{Cacciari:2005hq, Cacciari:2011ma} with FastJet Contrib~\cite{fjcontrib_code}.
\subsection{Groomed jet observables}
In this subsection, we present the observables obtained via the Soft Drop grooming algorithm~\cite{Larkoski:2014wba,Dasgupta:2013ihk,Larkoski:2015lea}.
The Soft Drop procedure removes the contributions from soft wide-angle radiation and enables access to the hard parton splittings during the jet evolution.
In this algorithm, first, jets are constructed by a standard jet finding algorithm such as the anti-$k_{t}$ algorithm~\cite{Cacciari:2008gp} with a definite jet cone size $R$. Then, the constituents of an anti-$k_{t}$ jet are again reclustered by the Cambridge-Aachen (C/A) algorithm~\cite{Dokshitzer:1997in,Wobisch:1998wt} to form a pairwise clustering tree.
The next step is to trace back the C/A tree. Here, one declusters the C/A jet by undoing the last step of the C/A clustering and selecting the resulting two prongs. The two prongs are checked to see if they satisfy the Soft Drop condition, given as:
\begin{align}
\label{eq:soft_drop}
\frac{\min\left(p_{T,1},
p_{T,2}\right)}{p_{T,1}+
p_{T,2}}>z_{\mathrm{cut}}\left(\frac{\Delta R_{12}}{R}\right)^{\beta},
\end{align}
where $p_{T,1}$ and $p_{T,2}$ are the transverse momenta of the prongs,
$\Delta R_{12} = \sqrt{\!\left(\eta_{1}-\eta_{2}\right)^{2}
+\left(\phi_{1}-\phi_{2}\right)^{2}}$ is the radial distance between the prongs in the rapidity-azimuthal angle plane, and $z_{\mathrm{cut}}$ and $\beta$ are parameters controlling the grooming procedure.
If the condition fails, the harder prong of the pair is further declustered into a pair of prongs.
This process is repeated until a pair of prongs satisfying the Soft Drop condition is found. The resulting pair of prongs is used to compute the groomed jet observables.
It is worth noting that there may exist cases in which no prong pair passing the Soft Drop condition is found even if the C/A tree is traversed back to its end; such cases are referred to as ``Soft Drop fail''.
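As an illustration of the declustering loop described above, the following pure-Python sketch applies the Soft Drop condition to a toy binary clustering tree (a simplified stand-in for the C/A tree; not the FastJet implementation):

```python
import math

class Prong:
    """Toy clustering-tree node carrying (pt, eta, phi); internal
    nodes also carry their two child prongs."""
    def __init__(self, pt, eta, phi, children=None):
        self.pt, self.eta, self.phi = pt, eta, phi
        self.children = children or []

def delta_r(a, b):
    # radial distance in the rapidity-azimuthal angle plane
    dphi = (a.phi - b.phi + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(a.eta - b.eta, dphi)

def soft_drop(jet, z_cut=0.2, beta=0.0, radius=0.4):
    """Decluster along the harder branch until a prong pair passes
    the Soft Drop condition; return (z_g, Delta R_12), or None for
    a Soft Drop fail."""
    node = jet
    while node.children:
        p1, p2 = sorted(node.children, key=lambda p: p.pt, reverse=True)
        z = p2.pt / (p1.pt + p2.pt)
        dr12 = delta_r(p1, p2)
        if z > z_cut * (dr12 / radius) ** beta:
            return z, dr12
        node = p1  # drop the softer prong and keep declustering
    return None
```

For $\beta = 0$ the angular factor is unity, so the condition reduces to $z_g > z_{\mathrm{cut}}$, the setting used in the measurements discussed below.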
\begin{figure*}[htb!]
\centering
\includegraphics[width=0.98\textwidth]{ZG_SD_ALICE_PP_logo.pdf}
\caption{(Color online) Distributions of jet splitting momentum fraction $z_{g}$ for charged jets
in $p$+$p$ collisions at $\sqrt{s}=5.02$~TeV and the ratios
for different jet cone size $R$, and $p^{\mathrm{ch,jet}}_{T}$ range.
The Soft Drop parameters are $z_{\mathrm{cut}}=0.2$ and $\beta = 0$.
The solid lines and circles with statistical error bars show the results from JETSCAPE and the experimental data from ALICE Collaboration~\cite{ALargeIonColliderExperiment:2021mqf}, respectively.
The bands indicate the systematic uncertainties of the experimental data.
}
\label{fig:alice_zg_pp}
\end{figure*}
\subsubsection{Jet splitting momentum fraction}
Here we study the medium modification of the jet splitting momentum fraction $z_{g}$, which is defined as the left-hand side of Eq.~(\ref{eq:soft_drop}) in the case with the prong pair passing the Soft Drop condition.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.98\textwidth]{ZG_SD_ALICE_PbPb_logo.pdf}
\caption{(Color online)
Ratios of $z_{g}$ distributions for charged jets
between Pb+Pb and $p$+$p$ collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$~TeV
for different centrality, jet cone size $R$, and $p^{\mathrm{ch,jet}}_{T}$ range.
The Soft Drop parameters are $z_{\mathrm{cut}}=0.2$ and $\beta = 0$.
The solid and dashed lines with statistical error bars show the results from MATTER+LBT of JETSCAPE with and without coherence effects, respectively.
For comparison, the experimental data from the ALICE Collaboration~\cite{ALargeIonColliderExperiment:2021mqf}
are shown by squares with statistical errors (bars) and systematic uncertainties (bands). }
\label{fig:alice_zg_pbpb}
\end{figure*}
Figure~\ref{fig:alice_zg_pp} shows $z_{g}$ distributions for charged jets in $p$+$p$ collisions at $\sqrt{s}=5.02$~TeV defined as
\begin{align}
\label{eq:zg_distribution_alice}
\frac{1}{\sigma_{\mathrm{jet}}}\frac{d\sigma_\mathrm{SD,jet}}{dz_{g}}&=\frac{1}{N_{\mathrm{jet}}}\frac{dN_\mathrm{SD,jet}}{dz_{g}},
\end{align}
where $N_{\mathrm{jet}}$ is the number of inclusive jets,
$N_\mathrm{SD,jet}$ is the number of jets passing the Soft Drop condition
and $\sigma_{\mathrm{jet}}$, $\sigma_\mathrm{SD,jet}$ are the corresponding cross sections.
The Soft Drop parameters are set as $z_{\mathrm{cut}} = 0.2$ and $\beta = 0$.
The results from the JETSCAPE PP19 tune for different $p^{\mathrm{ch,jet}}_{T}$ ranges and jet cone sizes
are compared with the experimental data from ALICE.
Some small discrepancies can be seen, but the results are mostly compatible with the data within uncertainties.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.75\textwidth]{ZG_SD_RHIC_logo.pdf}
\caption{(Color online)
Ratios of $z_{g}$ distributions for charged jets with $R=0.2$ and $|\eta_{\mathrm{ch,jet}}|<0.7$ (left), and $R=0.4$ and $|\eta_{\mathrm{ch,jet}}|<0.5$ (right) between 0-10\% Au+Au and $p$+$p$ collisions at $\sqrt{s_{\mathrm{NN}}}=200$~GeV from MATTER+LBT of JETSCAPE with coherence effects.
The Soft Drop parameters are $z_{\mathrm{cut}}=0.2$ and $\beta = 0$.
The solid and dashed lines with statistical error bars show the results for $10<p^{\mathrm{ch,jet}}_{T}<30$~GeV and $30<p^{\mathrm{ch,jet}}_{T}<50$~GeV, respectively.
}
\label{fig:rhic_zg}
\end{figure*}
In Fig.~\ref{fig:alice_zg_pbpb}, the modification of the $z_{g}$ distribution for charged jets is presented as the ratio of the distribution in Pb+Pb to that in $p$+$p$ collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$~TeV.
Both results, with and without consideration of coherence effects, do not exhibit significant modification and are consistent with the experimental data.
This indicates that the medium effects on the functional form for the momentum fraction $y$ of the splitting function are small in hard partonic splittings. To be clear, the entire ensemble of jets in Pb+Pb included in this analysis is indeed modified by the medium. Looking at these results and the experimental data, one could imagine two possibilities: (i) the sample of jets that pass the Soft Drop condition is biased towards jets that are unmodified, or (ii) the jets are modified, but this modification does not affect the momentum-fraction distribution of the prongs produced in the hardest split. In the subsequent subsection on the angle between the prongs, we will demonstrate that it is indeed the latter of the two possibilities. This indicates that most of the modification of the jet takes place at softer momenta, i.e., the hardest split is essentially unaffected by the medium.
Next, for upcoming measurements at RHIC, we present the prediction of the modification of the $z_{g}$ distribution for charged jets in 0-10\% Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}=200$~GeV from MATTER+LBT with coherence effects in Fig.~\ref{fig:rhic_zg}. The trend is similar to the results observed at the LHC collision energy and does not show any significant nuclear effects for the kinematic configurations considered.
\subsubsection{Jet splitting radius}
\begin{figure*}[htb!]
\centering
\includegraphics[width=0.98\textwidth]{RG_SD_ALICE_PP_logo.pdf}
\caption{(Color online) Distributions of jet splitting radius $r_{g}$ for charged jets in $p$+$p$ collisions at $\sqrt{s}=5.02$~TeV and the ratios
for different jet cone sizes $R$ and $p^{\mathrm{ch,jet}}_{T}$ ranges.
The Soft Drop parameters are $z_{\mathrm{cut}}=0.2$ and $\beta = 0$.
The solid lines and circles with statistical error bars show the results from JETSCAPE and the experimental data from the ALICE Collaboration~\cite{ALargeIonColliderExperiment:2021mqf}, respectively.
The bands indicate the systematic uncertainties of the experimental data. }
\label{fig:alice_rg_pp}
\end{figure*}
Next, we study the medium modification of jet splitting radius $r_{g}$, which is defined as the radial distance $\Delta R_{12}$ of the prong pair passing the Soft Drop condition.
In Fig.~\ref{fig:alice_rg_pp}, $r_{g}$ distributions defined as
\begin{align}
\frac{1}{\sigma_{\mathrm{jet}}}\frac{d\sigma_\mathrm{SD,jet}}{d\left(r_{g}/R\right)}&=\frac{1}{N_{\mathrm{jet}}}\frac{dN_\mathrm{SD,jet}}{d\left(r_{g}/R\right)},
\end{align}
are shown for charged jets in $p$+$p$ collisions at $\sqrt{s}=5.02$~TeV.
The results from the JETSCAPE PP19 tune show good agreement with the ALICE data, particularly for the cases with $z_{\mathrm{cut}}=0.2$.
Figure~\ref{fig:alice_rg_pbpb} shows the modification of $r_{g}$ distribution for charged jets in Pb+Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$~TeV.
Our full results with coherence effects capture the trend observed in the experimental data: enhancement at small $r_g$ and suppression at large $r_g$.
In particular, agreement within uncertainties can be seen for the case with $z_{\mathrm{cut}} = 0.2$.
For the 0-10\% most central bin, the result without coherence effects is shown for comparison.
It gives a slightly smaller slope, but no conclusion can be drawn within the current uncertainties.
Combined with the results for the $z_g$ distribution, we arrive at the clear conclusion that jets passing the Soft Drop condition are indeed modified, but predominantly in their softer components rather than in the hard partonic splittings.
For jets originally having a larger hard-splitting angle, the soft component diffused by the medium is more likely to leave the jet cone, resulting in larger energy loss.
Thus, jets with larger hard-splitting angles are less likely to be triggered, and a narrowing is observed as the relative yield of jets with smaller splitting angles increases.
\begin{figure*}[!htb]
\centering
\includegraphics[width=0.98\textwidth]{RG_SD_ALICE_PbPb_logo.pdf}
\caption{(Color online)
Ratios of $r_{g}$ distributions for charged jets
between Pb+Pb and $p$+$p$ collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$~TeV
for different centralities, jet cone sizes $R$, Soft Drop parameter $z_{\mathrm{cut}}$, and $p^{\mathrm{ch,jet}}_{T}$ ranges.
The solid and dashed lines with statistical error bars show the results from MATTER+LBT of JETSCAPE with and without coherence effects, respectively.
For comparison, the experimental data from the ALICE Collaboration~\cite{ALargeIonColliderExperiment:2021mqf}
are shown by squares with statistical errors (bars) and systematic uncertainties (bands). }
\label{fig:alice_rg_pbpb}
\end{figure*}
Motivated by the recent analysis by ATLAS~\cite{ATLAS:2022vii},
we also calculated the nuclear modification factor $R_{\mathrm{AA}}$ for full jets with different $r_{g}$.
Figures~\ref{fig:atlas_rg_afo_pt_pbpb} and~\ref{fig:atlas_rg_afo_rg_pbpb} show the $R_{\mathrm{AA}}$ results as a function of $p_{T}^{\mathrm{jet}}$ and $r_{g}$, respectively.
Here, $R_{\mathrm{AA}}$ is defined as
\begin{align}
R_{\mathrm{AA}}
=
\frac{
\left.\frac{1}{\langle N_{\mathrm{coll}} \rangle}\frac{d^2 N_{\mathrm{SD,jet}}}{dr_{g} dp_{T}^{\mathrm{jet}}}\right|_{\mathrm{AA}}
}{
\left.\frac{d^2 N_{\mathrm{SD,jet}}}{dr_{g} dp_{T}^{\mathrm{jet}}}\right|_{pp}
},
\end{align}
for jets passing the Soft Drop condition with a finite value of $r_g$, and
\begin{align}
R_{\mathrm{AA}}
=
\frac{
\left.\frac{1}{\langle N_{\mathrm{coll}} \rangle}\frac{d N^{\mathrm{incl/}r_{\!g}=0}_{\mathrm{jet}}}{ dp_{T}^{\mathrm{jet}}}\right|_{\mathrm{AA}}
}{
\left.\frac{d N^{\mathrm{incl/}r_{\!g}=0}_{\mathrm{jet}}}{ dp_{T}^{\mathrm{jet}}}\right|_{pp}
},
\end{align}
for inclusive jets and jets failing the Soft Drop
condition ($r_{g}=0$), where $N^{\mathrm{incl/}r_{\!g}=0}_{\mathrm{jet}}$ is the number of triggered jets for each condition.
The denominator is calculated for $p$+$p$ collisions, and the numerator is for a given centrality class of A+A collisions, where $\langle N_{\mathrm{coll}} \rangle$ is the average number of binary nucleon-nucleon collisions in the given centrality class.
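As a purely schematic illustration of how the ratio in the $R_{\mathrm{AA}}$ definition above is assembled from binned yields, consider the following sketch. All numbers ($\langle N_{\mathrm{coll}} \rangle$, yields, suppression factors) are toy values, not JETSCAPE output.

```python
# Hypothetical binned yields illustrating the R_AA ratio defined above.
# <N_coll> and the per-bin yields are toy numbers, not JETSCAPE output.
n_coll = 1600.0  # average number of binary collisions (assumed value)

# d^2N/(dr_g dpT) per pp event in three toy r_g bins
yield_pp = [40.0, 25.0, 10.0]
suppression = [0.9, 0.6, 0.4]  # assumed quenching factor per bin
# AA yields before N_coll scaling: binary-scaled pp yield times suppression
yield_aa = [n_coll * y * s for y, s in zip(yield_pp, suppression)]

# R_AA = (AA yield / <N_coll>) / (pp yield), bin by bin
r_aa = [(a / n_coll) / p for a, p in zip(yield_aa, yield_pp)]
print(r_aa)  # approximately [0.9, 0.6, 0.4]: stronger suppression at larger r_g
```

The $\langle N_{\mathrm{coll}} \rangle$ factor cancels the trivial binary scaling, so $R_{\mathrm{AA}}$ directly exposes the per-bin quenching.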
\begin{figure*}[htb!]
\centering
\includegraphics[width=0.98\textwidth]{SD_ATLAS_PT_PbPb0-10_beta0.00_zCut0.20_logo.pdf}
\caption{(Color online) Nuclear modification factor $R_{\mathrm{AA}}$ as a function of $p_{T}^{\mathrm{jet}}$ for inclusive jets,
jets failing the Soft Drop condition ($r_{g}=0$), and groomed jets with different splitting radius $r_{g}$ in 0-10\% Pb+Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$~TeV.
Jets are reconstructed with $R=0.4$ at midrapidity $|y_{\mathrm{jet}}|<2.1$.
The Soft Drop parameters are $z_{\mathrm{cut}}=0.2$ and $\beta = 0$.
The solid and dashed lines with statistical error bars show the results from MATTER+LBT of JETSCAPE with and without coherence effects, respectively.
The results are compared with the ATLAS data~\cite{ATLAS:2022vii} (squares)
and the CMS data for $\left\vert\eta_{\mathrm{jet}}\right\vert<2.0$~\cite{CMS:2021vui} (triangles),
shown with statistical errors (bars) and systematic uncertainties (bands).
}
\label{fig:atlas_rg_afo_pt_pbpb}
\end{figure*}
Figure~\ref{fig:atlas_rg_afo_pt_pbpb} shows jet $R_{\mathrm{AA}}$ as a function of $p_{T}^{\mathrm{jet}}$ for different $r_g$ intervals.
As already described in Ref.~\cite{JETSCAPE:2022jer}, for the case of inclusive jets (top left plot in Fig.~\ref{fig:atlas_rg_afo_pt_pbpb}), no clear differences due to coherence effects are observed in the jet $R_{\mathrm{AA}}$.
Note that the overall medium coupling parameter $\alpha^{\mathrm{fix}}_{s}$ is adjusted separately for each setup ($\alpha^{\mathrm{fix}}_{s}=0.3$ for the case with coherence effects, and $0.25$ for the case without coherence effects). It should also be noted that our simulations, which do not contain any nuclear shadowing effects, show a somewhat sharper rise than the ATLAS data and are broadly consistent with the data from CMS.
Moving to the case of Soft Drop fail (top middle plot in Fig.~\ref{fig:atlas_rg_afo_pt_pbpb}), one notices that the data clearly prefer the simulation with coherence as opposed to that without coherence. The reduced suppression for the case with coherence can be understood under the assumption that the prong structure is established in the high virtuality or MATTER stage. In this stage, the effective jet quenching strength with the virtuality dependence $\hat{q}\cdot f(Q^2)$ is smaller for the case with coherence effects compared to that without. For the case without coherence, the larger value of $\hat{q} \cdot f(Q^2)=\hat{q}$ in the MATTER stage leads to the formation of wider prongs and hence a reduction in the number of jets that fail the Soft Drop condition.
It bears repeating: the comparisons of simulations to data presented in this paper do \emph{not} include any parameter tuning to fit any substructure data. All parameter tuning was carried out in the calculation of single high-$p_{T}$ particle and jet suppression in Ref.~\cite{JETSCAPE:2022jer}. All simulation results presented in this paper are predictions.
Figure~\ref{fig:atlas_rg_afo_rg_pbpb} shows jet $R_{\mathrm{AA}}$ as a function of $r_g$ for different $p_{T}^{\mathrm{jet}}$ intervals.
The yellow-shaded regions in the figure indicate the areas of bins containing contributions from jets with a transverse scale $\mu_\perp\approxeq p_{T}^{\mathrm{jet}}r_{g} \lessapprox 1~\mathrm{GeV}$, where the perturbative description of parton splitting in the model is not valid.
To regulate the infrared singularity in the splitting function, the model needs to specify a minimum cut-off scale for resolvable splittings~\cite{Ellis:1996mzs}, which here is $Q_{\mathrm{min}}=1~\mathrm{GeV}$.
In other words, the jet structure in the yellow-shaded region is governed by non-perturbative dynamics, namely the hydrodynamic evolution of the soft thermalized portion of jets (not modeled in this study), hadronization, and subsequent dynamics, rather than by perturbative parton-level dynamics.
Note that one needs to examine the results shown in Figs.~\ref{fig:alice_rg_pbpb} and~\ref{fig:atlas_rg_afo_pt_pbpb} with the same considerations for regions with small $r_g$ or small $p_{T}^{\mathrm{jet}}$. In this regard, it should also be noted that the results in Fig.~\ref{fig:alice_rg_pbpb} are for charged jets.
\begin{figure*}[htb!]
\centering
\includegraphics[width=\textwidth]{SD_ATLAS_RG_PbPb0-10_beta0.00_zCut0.20_logo.pdf}
\caption{(Color online) Nuclear modification factor $R_{\mathrm{AA}}$ as a function of $r_{g}$ for jets with different $p_{T}^{\mathrm{jet}}$ in 0-10\% Pb+Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$~TeV.
Jets are reconstructed with $R=0.4$ at midrapidity $|y_{\mathrm{jet}}|<2.1$.
The Soft Drop parameters are $z_{\mathrm{cut}}=0.2$ and $\beta = 0$.
The solid and dashed lines with statistical error bars show the results from MATTER+LBT of JETSCAPE with and without coherence effects, respectively.
For comparison, the experimental data
from the ATLAS Collaboration~\cite{ATLAS:2022vii} are shown by squares with statistical errors (bars) and systematic uncertainties (bands).
The yellow-shaded regions are the bin areas including the regime where the perturbation approach does not apply (see text for details).
}
\label{fig:atlas_rg_afo_rg_pbpb}
\end{figure*}
Given that the calculations with coherence (solid red lines in Figs.~\ref{fig:atlas_rg_afo_pt_pbpb} and \ref{fig:atlas_rg_afo_rg_pbpb}) have a larger $\alpha^{\mathrm{fix}}_{s}$ than the calculations without coherence, a larger number of soft, near-collinear partons are produced in the later low-virtuality stage, which leads to an enhancement of the $R_{\mathrm{AA}}$ at non-perturbatively low $r_g$, indicated by the yellow band in Fig.~\ref{fig:atlas_rg_afo_rg_pbpb}. As a result, in this region, the solid red line ($R_{\mathrm{AA}}$ with coherence) will always exceed the dashed green line ($R_{\mathrm{AA}}$ calculated without coherence).
This excess at very low $r_g$, which emanates from the lack of non-perturbative modification of the jets in the simulation, also strongly affects the $R_{\mathrm{AA}}$ as a function of $p_{T}^{\mathrm{jet}}$ for $0<r_g<0.022$, which is the top right plot in Fig.~\ref{fig:atlas_rg_afo_pt_pbpb}.
As $p_{T}^{\mathrm{jet}}$ increases, the deviation of the curves from the data increases as more soft partons pile up at low momentum around the jet. This deviation will be addressed when a non-perturbative modification for soft partons radiated from the jet is included in the simulations.
At very large $r_g$, with $r_g > 0.2$, where the transverse scale of the split exceeds $\mu_\perp \gtrapprox 158\,{\rm GeV} \times 0.2 \approx 32$~GeV, the prong structure can be completely dominated by the virtuality acquired by the parent parton at its production in the initial hard scattering.
This is because, in this region, the initial virtuality is quite large, and furthermore, the formation time for the splitting is very short: $\tau_{\mathrm{form}}\lessapprox \frac{2\cdot (158~\mathrm{GeV})}{(32~\mathrm{GeV})^2} \approx 0.3~\mathrm{GeV}^{-1}\approx 0.06~\mathrm{fm}$.
Thus, even without the interaction reduction due to coherence, scattering from the medium has little effect on the hard splitting.
As a result, the $R_{\mathrm{AA}}$ as a function of $p_{T}^{\mathrm{jet}}$ for the case of $0.2642 < r_g < 0.4$ shows no difference between the cases with and without coherence, as shown in the bottom panel of Fig.~\ref{fig:atlas_rg_afo_pt_pbpb}. This is also the case for $r_g \gtrapprox 0.2$ in all the plots of Fig.~\ref{fig:atlas_rg_afo_rg_pbpb}.
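The formation-time estimate quoted above is simple arithmetic and can be checked numerically; the conversion from natural units uses $\hbar c \approx 0.1973$~GeV\,fm.

```python
# Numerical check of the formation-time estimate quoted in the text:
# tau_form ~ 2 E / mu_perp^2, with mu_perp = pT * r_g.
pt_jet = 158.0            # GeV, lower edge of the ATLAS pT interval
rg = 0.2                  # splitting radius at the boundary discussed above
mu_perp = pt_jet * rg     # ~32 GeV, transverse scale of the split
tau_form = 2.0 * pt_jet / mu_perp**2   # formation time in GeV^-1
hbarc = 0.1973            # GeV*fm, conversion constant
print(tau_form, tau_form * hbarc)  # roughly 0.3 GeV^-1 and 0.06 fm
```

The result, about $0.3~\mathrm{GeV}^{-1}$ or $0.06$~fm, confirms that such splits form essentially at the hard-scattering point, well before any medium interaction.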
We finally address the region with $0.022 < r_g < 0.26$, where perturbative QCD should be applicable. Calculations without coherence effects include a $\hat{q}\cdot f(Q^2) = \hat{q}$ that has a large value (growing with the logarithm of the energy) even in the high-virtuality MATTER stage, given by Eq.~\eqref{eq:HTL-qhat-formula-C-2}.
One notes in Fig.~\ref{fig:atlas_rg_afo_rg_pbpb}, for the case of the dashed green line (without coherence effects), that multiple scattering broadens the prong structure. This creates a depletion at lower $r_g$ and an enhancement around $0.02 \lessapprox r_g \lessapprox 0.06$, which eventually begins to disappear at large $r_g \gtrapprox 0.1$. The broadening can be roughly estimated using the simple formula
\begin{align}
k_\perp^2 \approxeq z(1-z) \sqrt{ 2 E \hat{q} } \approx \sqrt{ E \hat{q} /8 } .
\end{align}
This yields the simple expression for the peak angle of the bump of the dashed green line as,
\begin{align}
\theta_{\mathrm{max}} \approxeq \frac{k_\perp}{E} \approx \frac{ (\hat{q}/8)^{1/4} }{E^{3/4}}.
\end{align}
Using the above equation, one finds that if the energy of the jet were doubled, the angle of the bump in the dashed green line in Fig.~\ref{fig:atlas_rg_afo_rg_pbpb} would move down in $r_g$ by a factor of $2^{3/4} \approx 1.7$. One notes that this is indeed the case in the $2^{\rm nd}$ and $4^{\rm th}$ panels of Fig.~\ref{fig:atlas_rg_afo_rg_pbpb}: the jet energy doubles between these panels, and the position of the bump in the green curve shifts down in $r_g$ by about a factor of $1.5$-$2$.
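The $E^{-3/4}$ scaling of the peak angle can be verified directly; the value of $\hat{q}$ below is a representative assumption and cancels in the ratio.

```python
# Check of the E^{-3/4} scaling of the bump angle theta_max derived above:
# doubling the jet energy should move the peak down in angle by 2^{3/4}.
qhat = 2.0  # GeV^3, representative value (assumed; cancels in the ratio)

def theta_max(energy):
    """Peak angle estimate theta_max ~ (qhat/8)^{1/4} / E^{3/4}."""
    return (qhat / 8.0) ** 0.25 / energy ** 0.75

ratio = theta_max(100.0) / theta_max(200.0)
print(ratio)  # 2**0.75, approximately 1.68
```

Any value of $\hat{q}$ gives the same ratio, so the shift of the bump with jet energy is a parameter-free prediction of the broadening picture.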
This different behavior, depending on the presence or absence of coherence effects, is also evident when the results are shown as a function of $p_{T}^{\mathrm{jet}}$ for intermediate ranges of $r_g$, as in Fig.~\ref{fig:atlas_rg_afo_pt_pbpb}.
The bump structure of the jet $R_{\mathrm{AA}}$ as a function of $r_g$, which our results without coherence exhibit, can also be seen in the predictions from the JetMed model by Caucal {\it et al.}~\cite{Caucal:2019uvr,Caucal:2020uic,Caucal:2018dla} and in a semi-analytical calculation with $p_{T}$-broadening effects by Ringer {\it et al.}~\cite{Ringer:2019rfk} for the ATLAS measurements presented in Ref.~\cite{ATLAS:2022vii}.
However, the data from ATLAS exhibit an almost monotonically decreasing trend with no such clear bump structure for all $p_{T}^{\mathrm{jet}}$ intervals, which rather agrees with our MATTER+LBT results with coherence effects.
This reveals that the medium effect is strongly suppressed at high virtuality, where hard partonic splitting passing the Soft Drop condition is likely to occur.
Figure~\ref{fig:rhic_rg} presents our prediction for the modification of $r_{g}$ distribution for charged jets in 0-10\% Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}=200$~GeV from MATTER+LBT with coherence effects.
Similar to the LHC case, one finds enhancement at small $r_g$ and slight suppression at large $r_g$, which is more pronounced for jets with larger transverse momentum.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.75\textwidth]{TG_SD_RHIC_logo.pdf}
\caption{(Color online)
Ratios of $r_{g}$ distributions for charged jets with $R=0.2$ and $|\eta_{\mathrm{ch,jet}}|<0.7$ (left), and $R=0.4$, $|\eta_{\mathrm{ch,jet}}|<0.5$ (right) between 0-10\% Au+Au and $p$+$p$ collisions at $\sqrt{s_{\mathrm{NN}}}=200$~GeV, from MATTER+LBT simulations within JETSCAPE, including coherence effects.
The Soft Drop parameters are $z_{\mathrm{cut}}=0.2$ and $\beta = 0$.
The solid and dashed lines with statistical error bars show the results for $10<p^{\mathrm{ch,jet}}_{T}<30$~GeV and $30<p^{\mathrm{ch,jet}}_{T}<50$~GeV, respectively.
}
\label{fig:rhic_rg}
\end{figure*}
\subsection{Jet fragmentation function}
\begin{figure*}[htb!]
\centering
\includegraphics[width=0.98\textwidth]{FF_ATLAS_Z_PP_logo.pdf}
\includegraphics[width=0.98\textwidth]{FF_ATLAS_PT_PP_logo.pdf}
\caption{(Color online) Jet fragmentation functions for jets in $p$+$p$ collisions at $\sqrt{s}=5.02$~TeV and the ratios as a function of $z$ (top) and $p^{\mathrm{trk}}_{T}$ (bottom) for different $p_{T}^{\mathrm{jet}}$ ranges.
Jets are fully reconstructed, including both charged and neutral particles, using anti-$k_{t}$ with $R=0.4$ at midrapidity $\left|y^{\mathrm{jet}}\right|<0.3$.
The solid lines and circles with statistical error bars show the results from JETSCAPE and the experimental data from the ATLAS Collaboration~\cite{ATLAS:2018bvp}, respectively.
The bands indicate the systematic uncertainties of the experimental data. }
\label{fig:atlas_ff_pp}
\end{figure*}
We now turn to the last jet substructure observable: the jet fragmentation function.
Jet fragmentation functions are measured as a function of
the track-particle transverse momentum $p^{\mathrm{trk}}_{T}$
or longitudinal momentum fraction relative to the jet,
\begin{align}
z&=\frac{p^{\mathrm{trk}}_{T}\cos(\Delta r)}{p_{T}^{\mathrm{jet}}},
\end{align}
where $\Delta r = \sqrt{(\eta_{\mathrm{trk}}-\eta_{\mathrm{jet}})^2+(\phi_{\mathrm{trk}}-\phi_{\mathrm{jet}})^2}$.
The fragmentation functions are defined as
\begin{align}
D(z)&=\frac{1}{N_{\mathrm{jet}}}\frac{dN_\mathrm{trk}}{dz},\\
D(p^{\mathrm{trk}}_{T})&=\frac{1}{N_{\mathrm{jet}}}\frac{dN_\mathrm{trk}}{dp^{\mathrm{trk}}_{T}},
\end{align}
where $N_{\mathrm{jet}}$ is the number of triggered jets
and $N_{\mathrm{trk}}$ is the number of charged track particles detected inside the jet cones, $\Delta r < R$.
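The definitions above translate directly into a per-track computation. The following is a hedged sketch with toy kinematics (the dictionary layout and numbers are hypothetical, not JETSCAPE output).

```python
import math

# Hypothetical sketch: compute the longitudinal momentum fraction z for a
# track inside a jet cone, following the definitions above.
def frag_z(track, jet, R=0.4):
    """Return z = pT_trk * cos(dr) / pT_jet if the track is inside the
    cone (dr < R), else None."""
    dphi = abs(track["phi"] - jet["phi"])
    dphi = min(dphi, 2 * math.pi - dphi)
    dr = math.hypot(track["eta"] - jet["eta"], dphi)
    if dr >= R:
        return None  # track outside the jet cone, not counted in N_trk
    return track["pt"] * math.cos(dr) / jet["pt"]

jet = {"pt": 100.0, "eta": 0.0, "phi": 0.0}
trk = {"pt": 20.0, "eta": 0.1, "phi": 0.1}
print(frag_z(trk, jet))  # close to 0.2; cos(dr) is slightly below 1
```

Histogramming these $z$ values over all tracks and dividing by $N_{\mathrm{jet}}$ yields $D(z)$; the same loop with $p^{\mathrm{trk}}_{T}$ in place of $z$ yields $D(p^{\mathrm{trk}}_{T})$.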
Our JETSCAPE PP19 results for the fragmentation functions are compared with the experimental data by ATLAS in Fig.~\ref{fig:atlas_ff_pp}.
For all available $p_{T}^{\mathrm{jet}}$ ranges, the discrepancies from the data are within $20\%$.
In Fig.~\ref{fig:atlas_ff_pbpb}, we present the modification of the jet fragmentation functions for full jets in $0$-$10\%$ Pb+Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$~TeV.
Results from the MATTER+LBT simulations, both with and without coherence effects, are compared with the experimental data from ATLAS.
All the simulation results and the data show qualitatively the same trends.
Track particles at intermediate $z$ are suppressed by interactions with the medium, feeding the enhancement at small $z$, while the large-$z$ part is enhanced because the hard core of the jet is less affected.
\begin{figure*}[htb!]
\centering
\includegraphics[width=0.98\textwidth]{FF_ATLAS_Z_PbPb_logo.pdf}
\includegraphics[width=0.98\textwidth]{FF_ATLAS_PT_PbPb_logo.pdf}
\caption{(Color online)
Ratios of jet fragmentation functions for jets
between $0$-$10\%$ Pb+Pb and $p$+$p$ collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$~TeV
as a function of $z$ (top) and $p^{\mathrm{trk}}_{T}$ (bottom) for different $p_{T}^{\mathrm{jet}}$ ranges.
Jets are fully reconstructed, including both charged and neutral particles by anti-$k_{t}$ with $R=0.4$ at midrapidity $\left|y^{\mathrm{jet}}\right|<0.3$.
The solid and dashed lines with statistical error bars show the results from MATTER+LBT of JETSCAPE with and without coherence effects, respectively.
For comparison, the experimental data from the ATLAS Collaboration~\cite{ATLAS:2018bvp} are shown by squares with statistical errors (bars) and systematic uncertainties (bands).
}
\label{fig:atlas_ff_pbpb}
\end{figure*}
In jet fragmentation functions, coherence effects are quantitatively visible as more prominent enhancements in the large-$z$ region dominated by hadrons from leading partons of jets.
Since the leading parton has the largest virtuality at the early stage in the jet shower evolution, the interaction reduction due to coherence affects this parton the most.
As a result, the modification of large-$z$ jet hadrons is further lessened, and the enhancement becomes more substantial than the case without coherence effects.
This is consistent with the weak energy loss of inclusive charged particles at high $p_{T}$ explained by coherence effects presented in Refs.~\cite{JETSCAPE:2022hcb,JETSCAPE:2022jer}.
In conjunction with the behavior in the high-$z$ region, a slight difference between the two settings can also be seen in the low-$z$ region.
Both results, with and without coherence effects, show a sizable enhancement at low $z$, mainly due to the medium response via recoils, but still underestimate the data.
One possible cause of this is the visible discrepancy in the suppression at mid-$z$.
Furthermore, for the very soft components of jets contributing to the low-$z$ region, the recoil prescription may not provide an entirely reasonable description once their energies become comparable to the typical scale of the medium constituents.
More comprehensive momentum structures of jet constituents, including such soft regions where hydrodynamic medium response needs to be considered, will be explored in a future effort.
With the current uncertainties, it is not yet possible to establish the presence of coherence effects from comparison with the experimental data on modified jet fragmentation functions alone. However, when taken in conjunction with the results on the $r_g$ distribution, a stronger case can be made for the existence of coherence effects at high virtuality.
Our results also indicate that medium effects at different scales can be discerned by future high-precision measurements.
In Fig.~\ref{fig:rhic_ff}, we present our results of the modification of jet fragmentation functions for charged jets in 0-10\% Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}=200$~GeV from MATTER+LBT with coherence effects.
Compared to the results for the top LHC energy, the modifications are quite small.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.75\textwidth]{FF_RHIC_PbPb_logo.pdf}
\caption{(Color online)
Ratios of jet fragmentation functions for charged jets with $R=0.4$ and $\left|\eta_{\mathrm{ch, jet}}\right|<1.0$ between $0$-$10\%$ Au+Au and $p$+$p$ collisions at $\sqrt{s_{\mathrm{NN}}}=200$~GeV
as a function of $z$ (left) and $p^{\mathrm{trk}}_{T}$ (right)
from MATTER+LBT of JETSCAPE with coherence effects.
The solid and dashed lines with statistical error bars show the results for $10<p^{\mathrm{ch,jet}}_{T}<30$~GeV and $30<p^{\mathrm{ch,jet}}_{T}<50$~GeV, respectively.
}
\label{fig:rhic_ff}
\end{figure*}
\section{Summary and Outlook}
\label{Section:Summary}
This paper explored the medium modification of jet substructure in high-energy heavy-ion collisions, employing a multi-stage jet evolution model, MATTER+LBT, with the configuration and parameters established within the JETSCAPE framework by comparison with leading hadron and jet data. All parameters were taken from our previous efforts~\cite{JETSCAPE:2022jer} and were not re-tuned for this study. In fact, no new simulations were run for this paper. The presented results were calculated from the simulations carried out for Ref.~\cite{JETSCAPE:2022jer}.
To investigate the contribution of coherence effects based on the ability of the medium to resolve the partons radiated from splits at high energy and virtuality, we performed numerical simulations for two cases, with and without coherence effects.
These coherence effects are implemented as the $Q^2$-dependent modulation factor in the medium-modified splitting function and give a drastic reduction of the interaction with the medium with increasing parton virtuality.
The distribution of the jet splitting momentum fraction ($z_{g}$) shows almost no visible modification due to medium effects for any kinematic configuration, in both cases with and without coherence effects. This very small sensitivity to medium effects is consistent with the experimental data taken by ALICE at the LHC. Our predictions for future RHIC measurements also show no significant modification.
Then, we presented the observables related to the jet splitting radius ($r_{g}$).
In comparison with the ALICE data, both results, with and without coherence effects, satisfactorily capture the monotonically decreasing behavior with increasing radius and give good agreement. Here, no conclusions about coherence effects could be drawn from this comparison. We reiterate that our simulations reduce to and reproduce the $z_g$ and $r_g$ distributions in the absence of the medium, in comparison with data from $p$+$p$ collisions.
In comparison with data from ATLAS~\cite{ATLAS:2022vii}, we demonstrated that coherence effects manifest, even at the level of qualitative behavior, in the $r_{g}$-dependent $R_{\mathrm{AA}}$ with finer binning. In both the $R_{\mathrm{AA}}$ as a function of $p_{T}^{\mathrm{jet}}$ for different bins of the angle $r_g$ and the $R_{\mathrm{AA}}$ as a function of $r_g$ in different $p_{T}^{\mathrm{jet}}$ bins, there is a clear difference between simulations with and without coherence effects. The experimental data clearly prefer simulations with coherence effects. This indicates that the scattering with the medium constituents at high virtuality is reduced due to the finer scale of the medium probed by the jet parton.
Finally, we found that coherence effects may also be visible as a more prominent enhancement at large $z$ in the modification pattern of the jet fragmentation functions.
The energy loss of hard leading partons, which form the jet core components with large transverse momentum, is highly suppressed by coherence effects due to their large virtualities. The data have a slight preference for simulations with coherence if one restricts attention to particles with $z\gtrapprox 0.1$. For both the case with and without coherence, the simulations produce fewer particles at very small $z$ ($z \lessapprox 0.02$), with the case without coherence performing marginally better.
This paper constitutes the third installment of jet and hadron-based observables from the MATTER+LBT simulations in the JETSCAPE framework~\cite{JETSCAPE:2022jer,JETSCAPE:2022hcb}. In all three of these papers, including the current effort, we have demonstrated wide-ranging agreement for the hard sector of jets between simulations, typically those with coherence, and experimental data. The only remaining issues within the hard sector of the jet are related to coincidence measurements. These will be presented in a future effort.
In terms of physics included within these simulations, the one remaining component is the very soft sector of jets. In the current effort, this was pointed out in the discussion of the low-$r_g$ portion of the $r_g$-dependent $R_{\mathrm{AA}}$, and of the low-$z$ and low-$p_{T}$ sector of the jet fragmentation function.
This requires incorporating an energy deposition scheme in which partons with energy comparable to the ambient temperature are converted into an energy-momentum source term and then included back in the hydrodynamic calculation.
As may be obvious, these simulations require close to a single hydro run per hard event and, as such, are very computationally demanding. Various schemes to approximately incorporate soft physics without the need for full hydrodynamic simulation are currently underway. The analysis of certain jet-based observables predominantly sensitive to the soft sector of jets will be carried out after these efforts are complete.
\acknowledgments
\label{Ack}
This work was supported in part by the National Science Foundation (NSF) within the framework of the JETSCAPE collaboration, under grant number OAC-2004571 (CSSI:X-SCAPE). It was also supported under ACI-1550172 (Y.C. and G.R.), ACI-1550221 (R.J.F., F.G., and M.K.), ACI-1550223 (U.H., L.D., and D.L.), ACI-1550225 (S.A.B., T.D., W.F., R.W.), ACI-1550228 (J.M., B.J., P.J., X.-N.W.), and ACI-1550300 (S.C., A.K., J.L., A.M., H.M., C.N., A.S., J.P., L.S., C.Si., I.S., R.A.S. and G.V.); by PHY-1516590 and PHY-1812431 (R.J.F., M.K. and A.S.); it was supported in part by NSF CSSI grant number \rm{OAC-2004601} (BAND; D.L. and U.H.); it was supported in part by the US Department of Energy, Office of Science, Office of Nuclear Physics under grant numbers \rm{DE-AC02-05CH11231} (X.-N.W.), \rm{DE-FG02-00ER41132} (D.O), \rm{DE-AC52-07NA27344} (A.A., R.A.S.), \rm{DE-SC0013460} (S.C., A.K., A.M., C.S., I.S. and C.Si.), \rm{DE-SC0021969} (C.S. and W.Z.), \rm{DE-SC0004286} (L.D., U.H. and D.L.), \rm{DE-SC0012704} (B.S.), \rm{DE-FG02-92ER40713} (J.P.) and \rm{DE-FG02-05ER41367} (T.D., W.F., J.-F.P., D.S. and S.A.B.). The work was also supported in part by the National Science Foundation of China (NSFC) under grant numbers 11935007, 11861131009 and 11890714 (Y.H. and X.-N.W.), under grant numbers 12175122 and 2021-867 (S.C.), by the Natural Sciences and Engineering Research Council of Canada (C.G., M.H., S.J., and G.V.), by the Office of the Vice President for Research (OVPR) at Wayne State University (Y.T.),
by JSPS KAKENHI Grant No.~22K14041 (Y.T.), by the S\~{a}o Paulo Research Foundation (FAPESP) under projects 2016/24029-6, 2017/05685-2 and 2018/24720-6 (A. L. and M.L.), and by the University of California, Berkeley - Central China Normal University Collaboration Grant (W.K.). U.H. would like to acknowledge support by the Alexander von Humboldt Foundation through a Humboldt Research Award. C.S. acknowledges a DOE Office of Science Early Career Award. Computations were carried out on the Wayne State Grid funded by the Wayne State OVPR. The bulk medium simulations were done using resources provided by the Open Science Grid (OSG) \cite{Pordes:2007zzb, Sfiligoi:2009cct}, which is supported by the National Science Foundation award \#2030508. Data storage was provided in part by the OSIRIS project supported by the National Science Foundation under grant number OAC-1541335.
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.75\textwidth]{RAA_RHIC_logo.pdf}
\caption{(Color online)
Nuclear modification factor $R_{\mathrm{AA}}$ for inclusive full jets with $|\eta_{\mathrm{jet}}|<1$ (left), and charged jets with $|\eta_{\mathrm{ch,jet}}|<1-R$ and leading charged particle $p^{\mathrm{ch,lead}}_{T}>5$ GeV (right), in $0-10\%$ Au+Au collisions at $\sqrt{s_{\mathrm{NN}}}=200$~GeV from the MATTER+LBT setup of JETSCAPE with coherence effects.
The solid, dashed, and dash-dotted lines with statistical error bars show the results for $R=0.2$, $R=0.4$, and $R=0.6$, respectively.
}
\label{fig:rhic_raa}
\end{figure*}
\hypertarget{Appen}{}
\section{Introduction}
As is well known, under reasonable conditions the mixed partial
derivatives of a real function coincide. This result has a long
history and several distinguished scholars provided proofs including
Euler and Clairaut. However, according to Lindel\"of none of those
proofs was free of errors or tacit assumptions, so that historians
give credit for the first correct proof to H. A. Schwarz, see
\cite{higgins40} for a nice historical account.
Actually, the first correct proof of the equality of mixed partial
derivatives was obtained by Cauchy who improved and amended a
previous proof by Lagrange. However, they assumed the existence and
continuity of the derivatives $\partial_1^2 f$, $\partial_2^2 f$. Schwarz
removed this assumption and showed also that the continuity of
$\partial_1\partial_2 f$ could be obtained from the other hypotheses. Let
$O=(a,b)\times (c,d)\subset \mathbb{R}^2$. He proved
\cite{schwarz73}:
\begin{itemize}
\item[1.] Let $f \in C^1(O,\mathbb{R})$ and suppose that $\partial_2 \partial_1 f$
exists and belongs to $C(O,\mathbb{R})$. Then $\partial_1 \partial_2 f$ exists
and $\partial_1 \partial_2 f=\partial_2 \partial_1 f$.
\end{itemize}
It is natural to ask whether the assumptions can be weakened.
Stronger versions can be found in the first published studies of
this problem. For instance, Dini in his ``Lezioni di Analisi
Infinitesimale'' \cite[p.\ 164]{dini07} does not assume $f\in
C^1(O,\mathbb{R})$, but demands just the existence of the partial
derivatives and the continuity of $\partial_2 f$ in $y$.\footnote{In
Dini's book ${\partial^2 f}/{\partial x\partial y}$ means $\partial_2\partial_1 f$.
We stress that in the
terminology of this article a necessary condition for a limit, such
as a partial derivative, to exist will be its finiteness. That is,
we do not tacitly use the extended real line, as some other authors
do. This convention allows us to write just ``exists'' in place of
``exists and is finite''.}
The strongest result in this direction seems to have been obtained
by Peano who removed the assumption on the continuity of $\partial_2 f$
from Dini's version
\cite{peano90}. Peano's version can be found in Rudin
\cite{rudin76}.
\begin{itemize}
\item[2.] Let $f\colon O\to \mathbb{R}$. Suppose that $\partial_1 f$, $\partial_2 f$ and $\partial_2\partial_1 f$
exist on $O$ and that the latter is continuous at $(x_0,y_0)$. Then
$\partial_1\partial_2 f(x_0,y_0)=\partial_2\partial_1 f(x_0,y_0)$.
\end{itemize}
The continuity of $f$ is not assumed, and there are indeed
discontinuous functions which admit partial derivatives of any order
everywhere \cite{kimura60}.
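A classical example, essentially due to Peano, shows that the continuity
assumption on $\partial_2\partial_1 f$ cannot simply be dropped: let
\[
f(x,y)=xy\,\frac{x^2-y^2}{x^2+y^2} \quad \text{for } (x,y)\ne(0,0), \qquad f(0,0)=0.
\]
A direct computation gives $\partial_1 f(0,y)=-y$ for every $y$ and
$\partial_2 f(x,0)=x$ for every $x$, so that
$\partial_2\partial_1 f(0,0)=-1$ while $\partial_1\partial_2 f(0,0)=1$.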
Many authors tried to weaken
the conditions of Schwarz's theorem in other directions. Young
proved the following result (see Apostol \cite[Theor.\
12.12]{apostol74}):
\begin{itemize}
\item[3.] Let $f\colon O\to \mathbb{R}$. If both
partial derivatives $\partial_1 f$ and $\partial_2 f$ exist in a neighborhood of
$(x_0,y_0)\in O$ and if both are differentiable at $(x_0,y_0)$, then
$\partial_2 \partial_1 f(x_0,y_0)=\partial_1\partial_2 f(x_0,y_0)$.
\end{itemize}
We observe that these assumptions imply that $\partial_1 f$ and $\partial_2 f$,
being continuous at $(x_0,y_0)$, are bounded in a neighborhood of
this point; thus $f$ is Lipschitz, and hence continuous, in such a
neighborhood. As with the Lagrange--Cauchy version, Young's result
assumes the existence of both $\partial_1^2 f(x_0,y_0)$ and $\partial_2^2
f(x_0,y_0)$.
We are now going to prove a result which improves Peano's. It is
based on the concept of strong differentiation, also introduced by
Peano himself \cite{peano92,dolecki12}. It seems that he did not
realize the usefulness of strong differentiation for the problem of
the equality of mixed derivatives, possibly because he investigated
the latter problem before introducing this derivative.
\begin{remark}
Recently the notion of strong differentiation has received renewed
attention since it has been proved that the exponential map of
Lipschitz connections or sprays over $C^{2,1}$ manifolds is strongly
differentiable at the origin \cite{minguzzi13d}. This fact implies
that the exponential map is a Lipeomorphism near the origin. Thus
this notion proves important for doing differential geometry under
weak differentiability conditions.
\end{remark}
\begin{definition}
A function $f\colon B \to \mathbb{R}^c$, $B\subset \mathbb{R}^a
\times \mathbb{R}^b$, $(x,z) \mapsto f(x,z)$, is said to be {\em
partially strongly differentiable} with respect to $x$ at
$(x_0,z_0)\in \bar{B}$, with differential $\partial_1 f(x_0,z_0)$ if for
every $\epsilon>0$, there is a $\delta>0$ such that for every
$x_1,x_2, z$ such that $\Vert x_1-x_0\Vert <\delta$, $\Vert
x_2-x_0\Vert <\delta$, $\Vert z-z_0\Vert <\delta$, $(x_1,z)\in B$,
$(x_2,z)\in B$,
\[
\Vert f(x_2,z)-f(x_1,z)-\partial_1 f(x_0,z_0) (x_2-x_1)\Vert \le \epsilon
\Vert x_2-x_1\Vert.
\]
If $f$ does not depend on $z$ then $\partial_1 f$ is called the {\em strong
differential} and is denoted $d f$.
\end{definition}
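As an illustration of the difference with ordinary differentiation,
consider the familiar function $f(x)=x^2\sin(1/x)$, $f(0)=0$. It is
differentiable everywhere with $f'(0)=0$, yet choosing
$x_1=(2n\pi-\pi/2)^{-1}$ and $x_2=(2n\pi+\pi/2)^{-1}$ gives
\[
\frac{f(x_2)-f(x_1)}{x_2-x_1} \to -\frac{2}{\pi} \ne 0 \quad \text{as } n\to\infty,
\]
so $f$ is not strongly differentiable at $0$, although both $x_1$ and
$x_2$ converge to $0$.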
Clearly, if a function is strongly differentiable then it is
differentiable, and the strong differential coincides with the
differential. Some interesting properties are
\cite{peano92,esser64,nijenhuis74}:
\begin{itemize}
\item[(i)] If $f$ is strongly differentiable at $p$ then it satisfies a
Lipschitz condition in a neighborhood of $p$.
\item[(ii)] If $f$ is differentiable in a neighborhood of $p$ and the
differential is continuous at $p$ then it is strongly differentiable
at $p$. Conversely, if $f$ is strongly differentiable at $p$ and the
differential exists in a neighborhood of $p$ then the differential
is continuous at $p.$ A similar version for partial strong
differentiation holds (this point is an easy consequence of the mean
value theorem).
\item[(iii)] If $f$ is strongly differentiable over a subset $A$ of its domain then
the strong differential is continuous over $A$ with respect to the
induced topology.
\end{itemize}
In particular, a function is strongly differentiable in an open set
$O$ if and only if it is continuously differentiable on $O$.
The concept of strong differentiation has some advantages over that
of ordinary (Fr\'echet) differentiation. In particular, it allows us
to obtain simpler and stronger results through shorter proofs. It
better serves intuition and, at the elementary level, could possibly
replace the usual differentiation in introductory textbooks on
analysis. Indeed, it extends the range of applicability of some key
results in analysis by removing some continuity assumptions on
derivatives.
For instance:
\begin{itemize}
\item[(iv)] A function which is partially strongly differentiable with
respect to all its variables at a point $p$ is also totally strongly
differentiable at that point $p$ \cite{nijenhuis74}.
\item[(v)] If a function $f\colon \mathbb{R}\to \mathbb{R}$ has positive strong
derivative at a point then it is increasing in a neighborhood of
that point. More generally, if a function $f:\mathbb{R}^n\to
\mathbb{R}^n$ has invertible strong differential at a point $p$ then
it is injective in a neighborhood of $p$ and the inverse is strongly
differentiable at $f(p)$ with $d f^{-1} (f(p))=(df(p))^{-1}$
(Leach's inverse function theorem \cite{leach61}).
\end{itemize}
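The first statement in (v) fails for the ordinary derivative: the
standard example $f(x)=x/2+x^2\sin(1/x)$, $f(0)=0$, has
$f'(0)=1/2>0$, yet
\[
f'\!\left(\frac{1}{2n\pi}\right)=\frac12+\frac{1}{n\pi}\sin(2n\pi)-\cos(2n\pi)=-\frac12<0
\]
for every $n$, so $f$ is not increasing in any neighborhood of $0$.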
Let us observe that the usual assumptions that make the
corresponding results hold for ordinary differentiation imply that
the differential is continuous in a neighborhood of $p$. As observed
above these assumptions serve essentially to assume strong
differentiability in a neighborhood without naming it. The point of
using strong differentiability is that strong differentiability at a
point suffices.
Peano's theorem on the equality of mixed partial derivatives at
$(x_0,y_0)$ demands the existence of $\partial_2\partial_1 f$ in a neighborhood
of $(x_0,y_0)$ and its continuity there. By (ii) above, $\partial_1 f$ is
partially strongly differentiable with respect to $y$ at
$(x_0,y_0)$, thus one can ask whether the previous conditions can be
replaced by partial strong differentiability. The answer is
affirmative. The author reobtained the next theorem unaware of a previous result by Mikusi\'nski \cite{mikusinski72,mikusinski73,mikusinski78}, subsequently generalized to Banach spaces by Sk\'ornik \cite{skornik83}. Mikusi\'nski calls Peano's strong derivative the ``full derivative'' and does not give references, so that it was quite hard to spot his important work \cite{mikusinski78}.
He reobtains results first due to Peano as well as more advanced results such as Leach's inverse function theorem, but he also obtains new results on the role of strong differentiation in integration theory. In \cite[Chap.\ 12]{mikusinski78} he provides the best and most complete introduction to strong derivatives to date. The fact that he published those results in Polish \cite{mikusinski73} and in some sections of a book devoted to quite different problems did not help to spread knowledge of his important contributions. For completeness we include the next proof, as it differs from Mikusi\'nski's and has weaker assumptions.
\begin{theorem}
Let $f\colon O\to \mathbb{R}$. Suppose that the partial derivative
$\partial_1 f$ exists on $O$ and that it is partially strongly
differentiable with respect to $y$ at $(x_0,y_0)$. Then, denoting
by $A\subset O$ the subset where $\partial_2 f$ exists, and provided
$(x_0,y_0)\in \bar{A}$, the function $\partial_2 f$ (that is, $\partial_2 f\vert_A$)
is partially strongly differentiable with respect to $x$ at $(x_0,y_0)$ and
$\partial_1\partial_2 f(x_0,y_0)=\partial_2\partial_1 f(x_0,y_0)$.
\end{theorem}
We stress that while the assumptions are weaker than Peano's, the
conclusion is stronger.
For instance, the previous theorem implies that
if $ \partial_1\partial_2 f(\cdot,y_0)$ exists in a neighborhood of $x_0$ then
it is continuous at $x_0$.
Esser and Shisha \cite{esser64,nijenhuis74} construct a simple
function $h(y)$, defined on an open neighborhood of $y=0$, which is
not everywhere differentiable on any neighborhood of $0$ but which
is strongly differentiable at $0$. Then $f(x,y)=x h(y)$ satisfies the
assumptions of our theorem but not those of Peano's.
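A construction in the same spirit (though not Esser and Shisha's
original one) is the following: let $H$ be the Heaviside step
function, let $\varphi(t)=\sum_{n\ge 1} 2^{-n} H(t-1/n)$, and set
$h(y)=\int_0^y \varphi(t)\,{\rm d} t$. Since
$0\le \varphi(t)\le 2^{\,1-\lceil 1/t\rceil}$ for $t>0$ and
$\varphi(t)=0$ for $t\le 0$, $\varphi$ is continuous at $0$ with
$\varphi(0)=0$; hence, given $\epsilon>0$, for $y_1,y_2$ sufficiently
close to $0$,
\[
\vert h(y_2)-h(y_1)\vert=\Big\vert \int_{y_1}^{y_2} \varphi(t)\,{\rm d} t\Big\vert \le \epsilon\,\vert y_2-y_1\vert,
\]
so $h$ is strongly differentiable at $0$ with strong derivative $0$.
On the other hand, the left and right derivatives of $h$ at $y=1/n$
differ by the jump $2^{-n}$ of $\varphi$, so $h$ fails to be
differentiable at points accumulating at $0$.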
\begin{proof}
Let $\epsilon >0$. Since $\partial_1 f$ is partially strongly
differentiable with respect to $y$ at $(x_0,y_0)$ there is
$\delta(\epsilon)>0$ such that for every $\tilde x \in (a,b)$,
$\tilde y_1, \tilde y_2\in (c,d)$, $\vert \tilde x-x_0\vert
<\delta$, $\vert \tilde y_1-y_0\vert <\delta$, $\vert \tilde
y_2-y_0\vert <\delta$, we have
\begin{equation} \label{ied}
\vert \partial_1 f(\tilde x,\tilde y_2)-\partial_1 f(\tilde x,\tilde
y_1)-\partial_2\partial_1 f(x_0,y_0) (\tilde y_2-\tilde y_1)\vert \le \epsilon
\vert \tilde y_2-\tilde y_1\vert .
\end{equation}
Given $\epsilon>0$ let $\delta(\epsilon)>0$ be as above. Let
$x_1,x_2\in (a,b)$ be such that $\vert x_1-x_0\vert <\delta$, $\vert
x_2-x_0\vert <\delta$, and let $y\in (c,d)$ be such that $\vert
y-y_0\vert <\delta$. Furthermore, let them be such that $(x_1,y) \in
A$, $(x_2,y)\in A$.
Let $y_1,y_2\in (c,d)$, $y_1\ne y_2$, be arbitrary and such that
$\vert y_1-y_0\vert <\delta$, $\vert y_2-y_0\vert <\delta$. Let
$u(t):=f(t,y_2)-f(t,y_1)$, then by the existence of $\partial_1 f$ and by
the mean value theorem there is $x\in (a,b)$, $\vert x-x_0\vert
<\delta$, such that
\[
\partial_1 u(x) (x_2-x_1)=u(x_2)-u(x_1).
\]
Equation (\ref{ied}) holds for these values of $x,y_1,y_2$, thus
\begin{align*}
\vert f(x_2,y_2)-f(x_2,y_1)-f(x_1,y_2)+f(x_1,y_1)-&\,\partial_2\partial_1
f(x_0,y_0) (y_2-y_1) (x_2-x_1)\vert\\&\le \epsilon \vert
x_2-x_1\vert \, \vert y_2-y_1\vert.
\end{align*}
Dividing by $\vert y_2-y_1\vert$, setting $y_1=y$ and taking the
limit $y_2\to y$ we obtain
\[
\vert \partial_2 f(x_2,y)-\partial_2 f(x_1, y) -\partial_2\partial_1 f(x_0,y_0)
(x_2-x_1)\vert \le \epsilon \vert x_2-x_1\vert,
\]
which means that $\partial_2 f$ has partial strong differential with
respect to $x$ at $(x_0,y_0)$ given by $\partial_2\partial_1 f(x_0,y_0)$, that
is $\partial_1\partial_2 f(x_0,y_0)=\partial_2\partial_1 f(x_0,y_0)$.
\end{proof}
The concept of strong differentiation leads us to a satisfactory
result, which we can summarize as follows:
\begin{itemize}
\item[(vi)] If the strong derivative $\partial_2\partial_1 f(x_0,y_0)$ exists
and it makes sense to consider the strong derivative $\partial_1\partial_2
f(x_0,y_0)$ (that is, $\partial_2 f$ exists in a set which accumulates at
$(x_0,y_0)$) then the latter exists and they coincide.
\end{itemize}
\section{Equality almost everywhere}
We might also ask to what extent the equality of mixed partial
derivatives holds for functions which admit those second derivatives
almost everywhere. Some important results have been obtained for
convex functions. A well known result by Alexandrov establishes that
convex functions admit a generalized Peano derivative of order 2
in the sense that for almost every $x$
\[
f(x+h)=f(x)+L(h)+ A(h,h)+o_x(\vert h\vert^2),
\]
where $L$ is a linear map and $A$ is a quadratic form. Less clear is
whether $A$ can be obtained from the differentiation of the
generalized differential of $f$ and whether such double
differentiation gives a symmetric Hessian. The affirmative answer to
this question has been established by Rockafellar
\cite{rockafellar99}.
The Russian mathematician G.\ P.\ Tolstov clarified several
questions related to the equality of mixed derivatives in two papers
published in 1949.
Unfortunately, only one of those articles was translated into
English \cite{tolstov49}, so that the interesting results contained
in the other paper \cite{tolstov49b} have been largely overlooked by
the mathematical community. Most space in those papers is devoted to
the construction of counterexamples. In fact, he proved
\cite{tolstov49b}:
\begin{itemize}
\item[4.] There exists a function $f\in C^1(O,\mathbb{R})$, the mixed second derivatives of which
exist at every point of $O$ but such that $\partial_2\partial_1 f\ne \partial_1\partial_2 f$
on a set $P\subset O$ of positive measure.
\item[5.] There exists a function $f\in C^1(O,\mathbb{R})$, the mixed second derivatives of which
exist almost everywhere in $O$ and such that $\partial_2\partial_1 f\ne \partial_1\partial_2
f$ almost everywhere in $O$.
\end{itemize}
In the positive direction he improved Young's theorem as follows
\cite{tolstov49}:
\begin{itemize}
\item[6.] If the function $f$ has all second derivatives everywhere
in $O$, then the equality of mixed derivatives holds in $O$.
\end{itemize}
This theorem, with {\em existence} replaced by {\em existence almost
everywhere} in both the hypothesis and the thesis, had already been
proved by Currier \cite{currier33}.
These types of results still have an undesirable feature, for they
place conditions on the existence of the double derivatives $\partial^2_1
f$ and $\partial^2_2 f$. Actually, Tolstov obtained some results which do
not place conditions on the homogeneous second derivatives. In the
remainder of this work we shall review and develop them. In
particular, we shall stress the importance for applications of the
Lipschitz conditions on the first derivatives.
We start with an important Lemma from Tolstov's paper. Since there
are no published English translations, we provide the proof.
\begin{lemma}[Tolstov \cite{tolstov49b}] \label{lem}
Let $O=(a,b)\times (c,d) \subset \mathbb{R}^2$ and let
\[
f(x,y)=\int_a^x {\rm d} u\int_c^y h(u,v) {\rm d} v,
\]
where $h \in L^1(\bar{O},\mathbb{R})$. Then there is a measurable
set $e_1\subset (a,b)$ with $\vert e_1\vert=b-a$ such that for every
$x\in e_1$ and for every $y\in (c,d)$
\begin{equation} \label{jud}
\partial_1 f(x,y)=\int_c^y h(x,v) {\rm d} v.
\end{equation}
\end{lemma}
Clearly, by Fubini's theorem the integrals in the definition of
$f(x,y)$ can be exchanged and hence a similar statement holds for
the derivative with respect to $y$. Fubini's theorem will play a
very important role in the proof of this lemma and in the proofs of
the next theorems. The reader is referred to Aksoy and Martelli
\cite{aksoy02} for a discussion of the relationship between
Fubini's and Schwarz's theorems.
\begin{proof}
Differentiating $f(x,y)$ with respect to
$x$ we obtain for $x \in X_y\subset [a,b]$ with $\vert
X_y\vert=b-a$,
(Fundamental Theorem of Calculus, e.g.\ \cite[Theor.\ 8.17]{rudin70})
\begin{equation} \label{bos}
\partial_1 f(x,y)=\int_c^y h(x,v)\, {\rm d} v .
\end{equation}
Let $\Lambda=\cup_y ( X_y\times \{y\} )$, so that $\vert
\Lambda\vert=(b-a)(d-c)$. Equation (\ref{bos}) holds for $(x,y)\in
\Lambda$. Let $Y_x\subset (c,d)$ be the coordinate slices defined
by $\{x\}\times Y_x=(\{x\}\times (c,d))\cap \Lambda$, or
equivalently $Y_x=\pi_2(\pi_1^{-1}(x)\cap \Lambda)$. Fubini's
theorem applied to the characteristic function of $\Lambda$ gives
that there is some $e_1 \subset (a,b)$, $\vert e_1\vert=b-a$, such
that for every $x\in e_1$, $\vert Y_x\vert= d-c$. Let $x\in e_1$,
for $y\in Y_x$ Eq.\ (\ref{bos}) is true. We wish to show that it
holds for any $y\in(c,d)$. Let $y\in (c,d)$ and let $h_n$ be any
sequence converging to zero. Let
\[
\varphi^\pm_n(y):=\frac{1}{h_n} \int_x^{x+h_n}
{\rm d} u\int_c^y h^\pm(u,v) {\rm d} v.
\]
where $h^+$ and $h^-$ are the positive and negative parts of $h$,
respectively. Since the functions $\varphi^+_n(y)$ are monotone and
continuous and converge, as $n \to \infty$, on a dense subset to the
continuous function $\int_c^y h^+(x,v)\, {\rm d} v$, they converge to it
everywhere on $(c,d)$,
and analogously for $\varphi^-_n(y)$. By the arbitrariness of
$h_n$, Eq.\ (\ref{bos}) is true for every $y\in (c,d)$ provided
$x\in e_1$.
\end{proof}
\begin{remark}
Notice that the Fundamental Theorem of Calculus (e.g.\ \cite[Theor.\
8.17]{rudin70}) states that Eq.\ (\ref{jud}) is true for $x\in
e_1(y)$, where $\vert e_1(y) \vert=b-a$, but $e_1(y)$ might depend
on $y$. The previous lemma states that $e_1$ does not depend on $y$.
\end{remark}
A corollary is:
\begin{theorem}[Tolstov \cite{tolstov49b}] \label{the}
Let $h$, $f$ and $O$ be as in Lemma \ref{lem} above. There are
measurable sets $e_1$ and $e_2$ with $\vert e_1\vert=b-a$, $\vert
e_2\vert=d-c$, such that
\begin{itemize}
\item[(a1)] Everywhere on $\{x\}\times (c,d)$ with $x\in e_1$, we have
\begin{equation} \label{ni1}
\partial_1 f(x,y)=\int_c^y h(x,v) {\rm d} v.
\end{equation}
\item[(a2)] Everywhere on $(a,b)\times \{y\}$ with $y\in e_2$, we have
\begin{equation} \label{ni2}
\partial_2 f(x,y)=\int_a^x h(u,y) {\rm d} u.
\end{equation}
\item[(b)] There exists a measurable set $E\subset e_1\times e_2$ with $\vert
E\vert=(b-a) (d-c)$, such that for every $(x,y)\in E$ the mixed
derivatives exist, and
\begin{equation} \label{jis}
\partial_2\partial_1 f(x,y)=h(x,y)=\partial_1\partial_2 f(x,y).
\end{equation}
Moreover, for every $x\in e_1$, $\vert \pi_1^{-1}(x)\cap E\vert=d-c$, and for every $y\in e_2$, $\vert \pi_2^{-1}(y)\cap E\vert =b-a$.
\end{itemize}
\end{theorem}
With respect to Tolstov's paper we have included the last statement of point (b).
Although this inclusion lengthens the proof, we give this version in order to be as complete as possible. A similar statement could be included at the end of the next theorems.
\begin{proof}
Let $e_1$ and $e_2$ be as in Lemma \ref{lem}, then (a1) and (a2) are
a rephrasing of that lemma. Differentiating Eq.\ (\ref{ni1}) with
$x\in e_1$ with respect to $y$ we obtain that there is $e^2(x)\subset
(c,d)$, $\vert e^2(x)\vert=d-c$ such that for $y\in e^2(x)$ the derivative
$\partial_2 \partial_1 f(x,y)$ exists and
\begin{equation} \label{bjy}
\partial_2 \partial_1 f(x,y)=h(x,y).
\end{equation}
By taking the intersection of $e^2(x)$ with $e_2$, if necessary, we can assume that $e^2(x)\subset e_2$.
Let $E^1=\cup_{x\in e_1} \{x\}\times e^2(x)$, so that $\vert E^1\vert=(b-a)(d-c)$ and on $E^1$ Eq.\ (\ref{bjy}) holds true. Observe that $\pi_1(E^1)\subset e_1$ and $\pi_2(E^1)\subset e_2$. By Fubini's theorem there is $c_2 \subset e_2$, $\vert c_2\vert = d-c$, such that for every $y\in c_2$, $d_1(y):=\pi_1(\pi_2^{-1}(y)\cap E^1)\subset e_1$, is such that $\vert d_1(y)\vert=b-a$.
By taking the intersection of $e^2(x)$ with $c_2$, if necessary, we can assume that $e^2(x)\subset c_2$. This redefinition does not change the properties of $E^1$, which gets replaced as follows $E^1 \to E^1\cap (e_1\times c_2)$, but now $\pi_2(E^1)\subset c_2$, and for every $y\in c_2$, $d_1(y)=\pi_1(\pi_2^{-1}(y)\cap E^1)\subset e_1$, is such that $\vert d_1(y)\vert=b-a$.
Analogously, starting from Eq.\ (\ref{ni2}) and working with the roles of $x$ and $y$ exchanged we obtain that there is $c_1\subset e_1$, $\vert c_1\vert=b-a$, such that for every $y\in e_2$ there is $e^1(y)\subset c_1$, $\vert e^1(y)\vert = b-a$, such that on $E^2=\cup_{y\in e_2} e^1(y)\times\{y\}$ the derivative $\partial_1 \partial_2
f(x,y)$ exists and
\begin{equation} \label{bji}
\partial_1 \partial_2 f(x,y)=h(x,y).
\end{equation}
Moreover, $\pi_1(E^2)\subset c_1$ and for every $x\in c_1$, $d_2(x):=\pi_2(\pi_1^{-1}(x)\cap E^2)\subset e_2$, is such that $\vert d_2(x)\vert=d-c$.
Let us define $E=E^1\cap E^2$, then $E\subset c_1\times c_2$, and for every $x\in c_1$,
\[
\pi_2(\pi_1^{-1}(x)\cap E)= \pi_2(\pi_1^{-1}(x)\cap E^1)\cap \pi_2(\pi_1^{-1}(x)\cap E^2)= e^2(x)\cap d_2(x),
\]
where both sets on the right-hand side have full measure thus $\vert \pi_2(\pi_1^{-1}(x)\cap E)\vert=d-c$ (and analogously for the analogous statement with $x$ and $y$ exchanged). Finally, (\ref{bjy})-(\ref{bji}) are true on $E$, which proves (b) keeping (a1)-(a2) once we redefine $c_1\to e_1$, $c_2\to e_2$.
\end{proof}
We can also obtain a related theorem which adds information on the
differentiability properties of $f$:
\begin{theorem} \label{jui}
Let $f\colon [a,b]\times [c,d] \to \mathbb{R}$, be such that
$f(x,\cdot):[c,d]\to \mathbb{R}$ and $f(\cdot,y): [a,b]\to
\mathbb{R}$ are absolutely continuous for every $x\in [a,b]$ and
$y\in [c,d]$, respectively. The following properties are equivalent:
\begin{itemize}
\item[(i)] There is $e_1\subset (a,b)$, $\vert e_1\vert=b-a$, such that for $x\in e_1$, $\partial_1
f(x,\cdot)$ exists for every $y$, moreover it is absolutely
continuous over $[c,d]$, and $\partial_2 \partial_1 f\in L^1([a,b]\times
[c,d])$.
\item[(ii)] There is $e_2\subset (c,d)$, $\vert e_2\vert=d-c$, such that for $y\in e_2$, $\partial_2 f(\cdot, y)$ exists for every $x$, moreover it is absolutely continuous over
$[a,b]$, and $\partial_1 \partial_2 f\in L^1([a,b]\times [c,d])$.
\end{itemize}
Suppose they hold true. Then there is a subset $E\subset
e_1\times e_2$, $\vert E\vert=(b-a)(d-c)$, such that on $E$
the function $f$ is differentiable, $\partial_2 \partial_1 f(x,y)$, $\partial_1 \partial_2
f(x,y)$ exist, and
\[\partial_2 \partial_1 f=\partial_1 \partial_2 f.\]
\end{theorem}
\begin{proof}
Assume (i). For $x\in e_1$ the function $\partial_1 f(x,\cdot)$ is absolutely
continuous thus for every $y$
\begin{equation} \label{kid}
\partial_1 f(x,y)-\partial_1 f(x,c)=\int_c^y \partial_2 \partial_1 f(x,v) {\rm d} v.
\end{equation}
Since $f(\cdot,y)$ is absolutely continuous we obtain upon
integration
\[
f(x,y)-f(a,y)-f(x,c)+f(a,c)=\int_a^x {\rm d} u \int_c^y \partial_2 \partial_1 f(u,v)
{\rm d} v.
\]
By Theorem \ref{the} applied to the right-hand side there is a
subset $\tilde{e}_1\subset (a,b)$, $\vert \tilde{e}_1\vert=b-a$,
such that Eq.\ (\ref{kid}) holds. This is already known to be true
with $\tilde{e}_1=e_1$. The same theorem establishes the existence
of $e_2\subset(c,d)$, $\vert e_2\vert=d-c$, such that for $y\in e_2$
and for every $x\in (a,b)$
\[
\partial_2 f(x,y)-\partial_2 f(a,y)=\int_a^x \partial_2 \partial_1 f(u,y) {\rm d} u.
\]
This last equation shows that for $y\in e_2$ the function $\partial_2
f(\cdot,y)$ is absolutely continuous, thus for every $y\in e_2$, $\vert e_2\vert=d-c$, there is $e_1(y)\subset (a,b)$, $\vert e_1(y)\vert=b-a$, such that for $x\in e_1(y)$, and hence for almost every $(x,y)\in (a,b)\times (c,d)$ we have $\partial_1\partial_2 f(x,y)=\partial_2\partial_1 f(x,y)$ which implies that $\partial_1\partial_2 f \in L^1([a,b]\times [c,d])$, that is, (ii) is true. The
proof that (ii) implies (i) is analogous.
It remains only to prove the differentiability of $f$ on $E$, the
remaining part of the last statement being an immediate consequence
of Theorem \ref{the}.
Let us prove the differentiability of $f$ at $(x_0,y_0)\in E$. The
partial derivative $\partial_2 f(x, y_0)$ exists for every $x$ and is
absolutely continuous in $x$. Analogously, $\partial_1 f(x_0, y)$ exists
for every $y$ and is absolutely continuous in $y$. We have
\begin{align*}
f(x_0+\Delta x,y_0+\Delta y)-f(x_0,y_0)&=[f(x_0+\Delta x,y_0+\Delta y)-f(x_0+\Delta x,y_0)]\\
& \quad +[f(x_0+\Delta x,y_0)-f(x_0,y_0) ],\\
& = \partial_2 f(x_0+\Delta x,y_0) \Delta y+o_2(\Delta y)\\&\quad +\partial_1 f(x_0,y_0) \Delta x+o_1(\Delta x)\\
& = [\partial_2 f(x_0,y_0) +\partial_1\partial_2 f(x_0,y_0) \Delta x+o_3(\Delta x)]\Delta y\\
& \quad +o_2(\Delta y)+\partial_1 f(x_0,y_0) \Delta x+o_1(\Delta x)\\
&=\partial_2 f(x_0,y_0) \Delta y+\partial_1 f(x_0,y_0) \Delta x +R(\Delta
x,\Delta y) ,
\end{align*}
where $R(\Delta x,\Delta y)/(\Delta x^2+\Delta y^2)^{1/2} \to 0$ as
the denominator goes to zero.
\end{proof}
\begin{remark}
It is well known that in the theory of distributions the equality
of mixed derivatives holds at any order of differentiation
\cite{vladimirov02}. In order to convert this fact into a claim for
ordinary differentiation it is necessary that the second derivatives
$\partial_2 \partial_1 f$ and $\partial_1 \partial_2 f$ be {\em regular} distributions,
namely representable as integrals of the test function $\varphi$
against $L^1(\bar{O},\mathbb{R})$ functions. Tolstov's Lemma allows us
to remove this double condition on the second mixed derivatives, for
it is sufficient to place that condition on just $\partial_2\partial_1 f$.
\end{remark}
\section{Lipschitz conditions on the partial derivatives}
Let us recall that a function $g\colon U \to \mathbb{R}^k$ defined
on an open set $U\subset \mathbb{R}^n$ is Lipschitz if for every
$p,q\in U$,
\[
\Vert g(p)- g(q)\Vert< K \Vert p-q\Vert
\]
for some $K>0$. It is locally Lipschitz if this inequality holds
over every compact subset of $U$, with $K$ dependent on the compact
subset.
A function $f\colon U\to \mathbb{R}$, $U\subset \mathbb{R}^2$,
$(x,y) \mapsto f(x,y)$, is differentiable with Lipschitz
differential, or $C^{1,1}$ for short, if $df\colon U\to
\mathbb{R}^2$ is Lipschitz. Clearly, if $f\in C^{1,1}$ with
Lipschitz constant $K$ then the partial derivative $\partial_1 f(x,\cdot)$
regarded as a function of $y$ is $K$-Lipschitz. In particular, the
Lipschitz constant does not change if we change $x$. We say that
$\partial_1 f(x,y)$ is Lipschitz in $y$ uniformly in $x$. Analogously,
$\partial_2 f(x,y)$ is Lipschitz in $x$ uniformly in $y$, and two other
similar combinations hold.
For functions admitting Lipschitz partial derivatives the $L^1$
condition on the mixed derivative which we met in Theorem \ref{jui}
is satisfied.
\begin{theorem} \label{jun}
Let $f\colon [a,b]\times [c,d] \to \mathbb{R}$, be such that
$f(x,\cdot):[c,d]\to \mathbb{R}$ and $f(\cdot,y): [a,b]\to
\mathbb{R}$ are absolutely continuous for every $x\in [a,b]$ and
$y\in [c,d]$, respectively. The following properties are equivalent:
\begin{itemize}
\item[(i)] There is $e_1\subset (a,b)$, $\vert e_1\vert=b-a$, such that for $x\in e_1$, $\partial_1
f(x,\cdot)$ exists for every $y$, and moreover it is Lipschitz over
$[c,d]$ uniformly for $x\in e_1$.
\item[(ii)] There is $e_2\subset (c,d)$, $\vert e_2\vert=d-c$, such that for $y\in e_2$, $\partial_2 f(\cdot, y)$ exists for every
$x$, and moreover it is Lipschitz over $[a,b]$ uniformly for $y\in
e_2$.
\end{itemize}
Suppose they hold true. Then there is a subset $E\subset
e_1\times e_2$, $\vert E\vert=(b-a)(d-c)$, such that on $E$
the function $f$ is differentiable, $\partial_2 \partial_1 f(x,y)$, $\partial_1 \partial_2
f(x,y)$ exist and are bounded and
\[\partial_2 \partial_1 f=\partial_1 \partial_2 f.\]
\end{theorem}
\begin{proof}
Suppose (i) is true and let $x\in e_1$ so that $\partial_1 f(x,\cdot)$
exists and is $K$-Lipschitz over $[c,d]$. Then $\vert \partial_2 \partial_1 f
(x,\cdot)\vert\le K$ a.e.\ in $(c,d)$ and by the assumption of Lipschitz
uniformity this bound holds for every $x\in e_1$, in particular $\vert \partial_2 \partial_1 f
\vert\le K$ holds almost everywhere on $O$. Thus $\partial_2 \partial_1
f\in L^{\infty}(\bar{O},\mathbb{R})\subset
L^{1}(\bar{O},\mathbb{R})$ and condition (i) of Theorem \ref{jui} is satisfied. In particular, the last statement of that theorem implies that ``$\vert \partial_2 \partial_1 f
\vert\le K$, $f$ is differentiable and $\partial_2 \partial_1 f$, $\partial_1 \partial_2 f$ exist and coincide'' almost everywhere
on a subset $W\subset O$, $\vert W\vert=(b-a)(d-c)$.
By Theorem \ref{jui} there is also a
set $e_2$ such that if $y\in e_2$ then $\partial_2 f(\cdot,y)$ exists for
every $x$, and moreover it is absolutely continuous thus
\[
\partial_2 f(x,y)-\partial_2 f(a,y)=\int_a^x \partial_1 \partial_2 f(u,y) {\rm d} u.
\]
However, by Fubini's theorem there is $b_2\subset(c,d)$, $\vert b_2\vert=d-c$, such that for every $y\in b_2$, $\vert\pi_2^{-1}(y)\cap W\vert=b-a$. Thus for almost every $y$, namely for $y\in e_2\cap b_2$, we have that
$\partial_2 f(\cdot,y)$ exists for
every $x$ and it is absolutely continuous, and for almost every $x$ we have ``$\partial_1\partial_2 f=\partial_2\partial_1f$ and $\vert \partial_2 \partial_1 f
\vert\le K$'', thus for $y\in e_2\cap b_2$, $\partial_2 f(\cdot, y)$ is $K$-Lipschitz over $(a,b)$ where $K$ does
not depend on $y$. Thus (ii) is proved once we rename $e_2\cap b_2$ as $e_2$.
The last statement follows from the previous paragraph or from the last one in Theorem
\ref{jui}.
\end{proof}
For $C^1$ functions we have:
\begin{theorem} \label{thr}
Let $\Omega$ be an open subset of $\mathbb{R}^2$ and let $f \in
C^1(\Omega,\mathbb{R})$ then the following conditions are
equivalent:
\begin{itemize}
\item[(i)] For every $x$, the partial derivative $\partial_1 f(x,\cdot)$ is locally Lipschitz,
locally uniformly with respect to $x$.
\item[(ii)] For every $y$, the partial derivative $\partial_2 f(\cdot,y)$ is locally Lipschitz,
locally uniformly with respect to $y$.
\end{itemize}
If they hold true (for instance, if $f \in
C^{1,1}_{loc}(\Omega,\mathbb{R})$), then on a set $E\subset \Omega$, $\vert \Omega \backslash E\vert=0$,
$\partial_2 \partial_1 f$ and $\partial_1
\partial_2 f$ exist, $f$ is differentiable and $\partial_2 \partial_1 f=\partial_1 \partial_2 f$. In particular, $\partial_2 \partial_1 f$ and $\partial_1
\partial_2 f$ belong to
$L^\infty_{loc}(\Omega,\mathbb{R})$.
\end{theorem}
\begin{proof}
Let $p\in \Omega$ and let us consider an open neighborhood
$O=(a,b)\times (c,d)$ of $p$ such that $\bar{O}\subset \Omega$. Let us
assume (i). By Theorem \ref{jun} we need only to show that $\partial_2
f(\cdot,y)$ is $K$-Lipschitz in $(a,b)$ for every chosen value of
$y\in (c,d)$ provided it is so for almost every $y\in (c,d)$. Let
$y\in (c,d)$ and let $\epsilon>0$. The function $\partial_2 f$ is
continuous, thus uniformly continuous over the compact set
$[a,b]\times [c,d]$. We can find a $\delta>0$ such that whenever
$\vert y_2-y_1\vert<\delta$, $y_1,y_2\in [c,d]$, we have $\vert \partial_2
f(x,y_2)-\partial_2 f(x,y_1)\vert<\epsilon$ for every $x\in [a,b]$. We can
find some $\bar{y}\in (y-\delta,y+\delta)\cap [c,d]$ such that $\partial_2
f(\cdot, \bar{y})$ is $K$-Lipschitz. Thus
\begin{align*}
\vert \partial_2 f(x_2,y)-\partial_2 f(x_1,y)\vert &\le \vert \partial_2 f(x_2,\bar
y)-\partial_2 f(x_1,\bar y) \vert+\vert \partial_2 f(x_2,\bar y)-\partial_2 f(x_2,
y)\vert
\\& \ +\vert \partial_2 f(x_1,\bar y)-\partial_2 f(x_1, y)\vert\le K \Vert
x_2-x_1\Vert+2 \epsilon.
\end{align*}
From the arbitrariness of $\epsilon$, $x_1$ and $x_2$ we obtain that
$\partial_2 f(\cdot,y)$ is $K$-Lipschitz for every chosen value of $y$.
The remaining claims follow trivially from Theorem \ref{jun}.
\end{proof}
We stress that if $f\in C^1$ then the fact that $\partial_1 f(x,y)$ is
Lipschitz in $y$ uniformly in $x$, and that $\partial_2 f(x,y)$ is
Lipschitz in $x$ uniformly in $y$, does not guarantee that $f\in
C^{1,1}$; it is sufficient to consider the function $f(x,y)=\vert
x\vert^{3/2}$. As a consequence, the assumptions of this theorem are
weaker than $f\in C^{1,1}_{loc}(\Omega,\mathbb{R})$.
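Indeed, for this example a direct computation gives
\[
\partial_1 f(x,y)=\tfrac{3}{2}\,{\rm sgn}(x)\, \vert x\vert^{1/2}, \qquad \partial_2 f(x,y)=0,
\]
so both conditions of the theorem hold trivially (each partial derivative is even independent of the other variable), while $\partial_1 f(\cdot,y)$ fails to be Lipschitz near $x=0$, hence $f\notin C^{1,1}$.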
For the $C^{1,1}_{loc}(\Omega,\mathbb{R})$ case the equality of
mixed derivatives can also be obtained as a consequence of Young's
(point 3 above) and Rademacher's theorems. We recall that the latter
states that every Lipschitz function is almost everywhere
differentiable \cite{evans92}. Indeed:
\begin{theorem} \label{nde}
Let $f\colon \Omega \to \mathbb{R}$, $f \in C^{1,1}_{loc}$, then $f$
is twice differentiable almost everywhere and in such
differentiability set $\partial_2 \partial_1 f=\partial_1\partial_2 f$.
\end{theorem}
\begin{proof}
The differential $d f\colon \Omega\to \mathbb{R}^2$ is Lipschitz
thus differentiable almost everywhere (Rademacher's theorem). If $p$
belongs to the differentiability set then $\partial_1 f$ and $\partial_2 f$,
being components of the differential, are there differentiable. Thus
by Young's theorem (point 3 above) we have $\partial_2 \partial_1 f=\partial_1\partial_2 f$ at $p$.
\end{proof}
\section{Some applications}
In this section we explore some applications that motivated our
study. They are in the area of differential geometry but it is
likely that many other applications can be found.
\subsection{Usefulness of Lipschitz one-forms}
A rather natural application of these results is in the study of
Lipschitz 1-forms, namely 1-forms with Lipschitz coefficients, over
differentiable manifolds (at least $C^{1,1}$). Indeed, some related
results have been already developed following paths independent of
the above considerations.
If $f\in
C^{1,1}_{loc}$, then by Theorem \ref{nde} ${\rm d}^2 f=0$ almost
everywhere with respect to the Lebesgue 2-dimensional measure on any
2-dimensional $C^{1,1}$ embedded manifold. Thus the exterior
differential satisfies ${\rm d}^2=0$ in a well defined sense whenever
0-forms and 1-forms
are taken with the correct degree of differentiability. In
particular, if $\omega$ is a Lipschitz 1-form then Stokes' theorem
\[
\int_S {\rm d} \omega=\int_{\partial S} \omega
\]
still holds true \cite{simic96}.
One also expects that Lipschitz distributions of hyperplanes should
be integrable according to the usual rule for $C^1$ distributions.
Namely, let $\omega$ be a Lipschitz 1-form, then the distribution
$\textrm{Ker}\,\omega$ should be integrable if and only if
$\omega\wedge {\rm d} \omega=0$. This result has indeed been proved
\cite{simic96,rampazzo07}.
The nice behavior of locally Lipschitz 1-forms suggests studying
\mbox{(pseudo-)}Riemannian $C^{2,1}$ manifolds endowed with Lipschitz
connections $\nabla$ and $C^{1,1}$ metrics. Indeed, in Cartan's
approach the connection is regarded as a Lie algebra-valued 1-form
in the bundle of reference frames. In such a framework the Riemann
tensor would be a locally bounded element of $L_{loc}^\infty(M)$ and
hence would be defined only almost everywhere with respect to the
Lebesgue measure. In particular, it could be discontinuous
though locally bounded. This feature would be quite appreciated in
the theory of Einstein's general relativity. There the Ricci tensor
is proportional to the stress-energy tensor and so it is necessarily
discontinuous for the typical mass distribution of a planet; the
reader may consider the discontinuity in density which takes place
at the planet's boundary.
\subsection{An application to mathematical relativity}
As mentioned, the assumptions of Theorem \ref{thr} are weaker than
the condition $f\in C^{1,1}_{loc}$. We wish to describe shortly an
example of application where those weaker conditions turn out to be
important.
In Einstein's General Relativity the spacetime continuum
is represented with a Lorentzian manifold \cite{hawking73}, namely a
differentiable manifold endowed with a metric of signature
$(-,+,+,+)$. Observers or massive particles are represented by
$C^1$ curves $x(s)$ which are timelike, namely such that $
g({x}',{x}')<0$. Unfortunately, for various mathematical arguments
it is necessary to consider limits of such curves, and those limits
are rarely $C^1$ but are necessarily Lipschitz. Lipschitz
mathematical objects arise quite naturally in General Relativity,
ultimately because the light cones on spacetime (at $p\in M$ the
light cone is given by the subset of $T_pM$ where $g$ vanishes)
place a bound on the local speed of massive objects.
A typical problem which is met in the discussion of the clock effect
(or twin paradox) deals with curve variations $x(t,s)$ of timelike
geodesics $x(\cdot, s)$ parametrized by $s$, where the transverse
curves $x(t, \cdot)$ are just Lipschitz. In this situation the
properties of the exponential map allow one to prove that the
tangent $\partial_t x(t,s)$ is Lipschitz in $s$ uniformly in $t$, exactly
the assumptions of Theorem \ref{thr} (the function need not be
differentiable in $s$). It turns out that in order to prove formulas
such as the first variation formula for the energy functional of
differential geometry,
\[
E[x]=\frac{1}{2} \int_0^1 g(\dot x, \dot x) {\rm d} t,
\]
one needs to switch $\partial_s \partial_t x$ for $\partial_t \partial_s x$, an operation
which is indeed allowed thanks to Theorem \ref{thr}. Thus, this
theorem can be used to operate with Lipschitz curves in Lorentzian
(or Riemannian) geometry much in the same way as it is usually done
with $C^1$ curves \cite{minguzzi13d}.
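Schematically, differentiating under the integral sign gives
\[
\frac{{\rm d}}{{\rm d} s}\, E[x(\cdot,s)]=\int_0^1 g(\nabla_s \partial_t x, \partial_t x)\, {\rm d} t=\int_0^1 g(\nabla_t \partial_s x, \partial_t x)\, {\rm d} t,
\]
where the second equality rests on the symmetry $\nabla_s \partial_t x=\nabla_t \partial_s x$, which in coordinates reduces, thanks to the symmetry of the Christoffel symbols, precisely to the identity $\partial_s \partial_t x^\mu=\partial_t \partial_s x^\mu$ provided by Theorem \ref{thr}.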
\section{Conclusions}
We have reviewed the notion of strong differentiation and Mikusi\'nski's result on the equality of mixed partial derivatives.
The assumptions do not demand the existence and
continuity of any second derivative in a neighborhood of the point.
Rather, the theorem assumes the weaker notion of strong differentiability of
one first derivative at the point. This possibility was suggested by
previous applications of the concept of strong differentiation where
it proved to be particularly advantageous, e.g.\ the inverse
function theorem.
We have then considered results which prove the existence and
equality of mixed partial derivatives almost everywhere. We have
presented and elaborated previous results by Tolstov stressing the
importance of the Lipschitz condition on first partial derivatives
for applications. The advantage of this approach over alternative distributional approaches becomes clear
whenever one cannot conclude that both
mixed second partial derivatives are summable. We have ended this
work giving a specific example of application where this classical
approach is more effective and justified.
\medskip
\noindent {\bf Acknowledgment}. I thank an anonymous referee for some useful criticisms. Work partially supported by GNFM of INDAM.
\section{Supervised speaker verification}
For the supervised verification track (VoxSRC-20 Track 1 \& 2) we train 6 systems based on our ECAPA-TDNN architecture~\cite{ecapa_tdnn, voxsrc20_submission} and 4 ResNet34~\cite{magneto} variants. The ECAPA-TDNN architecture is depicted in Figure~\ref{fig:ecapa_tdnn}. We scale up the network compared to~\cite{ecapa_tdnn} by using 2048 feature channels and add a fourth SE-Res2Block with a dilation factor of 5 for optimized verification performance for all models. To further diversify the models, we make small architectural changes across all five other ECAPA-TDNN models:
\begin{enumerate}
\item Replacement of the ReLU-activation function in the middle layer of the attention module with the Tanh activation function.
\item Addition of a Bi-directional Long Short-Term memory (BLSTM) layer in the first layer of the model~\cite{blstm_microsoft}.
\item Increase of the embedding dimension from 192 to 256.
\item 60-dimensional log Mel filter bank energies as input features.
\item Incorporation of two Sub-Centers per class in the AAM-softmax layer~\cite{subcenter} (SC-AAM), along with the integration of the dilation factor variability across the groups in the Res2Net grouped convolutions instead of across the SE-Res2Blocks. We refer to this as Dynamic Dilation (DD). Except for the first directly connected group, the 7 remaining groups in the central Res2Net~\cite{res2net} convolutions have dilation factors of 2, 3, 4, 5, 6, 2, and 3 respectively. All SE-Res2Blocks use the same dilation configuration.
\end{enumerate}
In addition, we create four models based on the ResNet34 architecture introduced in~\cite{magneto}. We enhance all our ResNet34 models by adding Squeeze-Excitation (SE) blocks~\cite{se_block} after each ResBlock component. Three additional variants are trained:
\begin{enumerate}
\item Incorporation of a Channel-dependent Attentive Statistics (CAS) pooling layer~\cite{ecapa_tdnn}.
\item Usage of two sub-centers per class in the AAM-softmax layer (SC-AAM).
\item Integration of both CAS and two sub-center AAM.
\end{enumerate}
\begin{figure}[h]
\centering
\includegraphics[scale=0.2]{images/full_ecapa_new.png}
\setlength{\belowcaptionskip}{-10pt}
\caption{ECAPA-TDNN system architecture. $T$ indicates the number of input frames, $C$ the number of intermediate feature channels and $S$ the number of classification speakers. We denote \textit{k} and \textit{d} in the SE-Res2Block for the kernel size and dilation factor, respectively. See~\cite{ecapa_tdnn} for more details.}
\label{fig:ecapa_tdnn}
\end{figure}
\subsection{Training protocol}
In the closed track, we train all models on the development set of the VoxCeleb2 dataset~\cite{vox2}. For the open track we add the development part of the VoxCeleb1 \cite{vox1} dataset, the \textit{train-other-500} subset of the LibriSpeech dataset~\cite{libri} and a part of the DeepMine corpus \cite{deepmine} containing utterances from 588 Farsi speakers.
We also create 6 additional augmented copies using the MUSAN~\cite{musan} corpus (babble, noise), the RIR~\cite{rirs} (reverb) dataset and the SoX (tempo up, tempo down) and FFmpeg (compression) libraries.
All systems are trained on random crops of 2~s to prevent overfitting. The input features are 80-dimensional MFCCs with a window size of 25~ms and a frame shift of 10~ms for the ECAPA-TDNN systems. For the ResNet based systems we use 80 log Mel filter bank energies as input features. To further improve robustness of the models, we apply SpecAugment \cite{specaugment} to the log mel-spectrograms which randomly masks 0 to 5 frames in the time-domain and 0 to 8 frequency bands. Subsequently, the input features of the cropped segment are cepstral mean normalized.
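For illustration, the masking step can be sketched as follows; the function name and the list-of-rows spectrogram representation are our own, and real implementations operate on tensors:

```python
import random

def spec_augment(spec, max_t=5, max_f=8, mask_value=0.0):
    """Mask up to `max_t` consecutive time frames and up to `max_f`
    consecutive frequency bands of a (freq x time) log mel-spectrogram,
    given as a list of rows (one row per frequency band)."""
    n_freq, n_time = len(spec), len(spec[0])
    out = [row[:] for row in spec]  # work on a copy

    # time masking: zero out a contiguous block of t frames
    t = random.randint(0, max_t)
    if t > 0:
        t0 = random.randint(0, n_time - t)
        for row in out:
            for j in range(t0, t0 + t):
                row[j] = mask_value

    # frequency masking: zero out a contiguous block of f bands
    f = random.randint(0, max_f)
    if f > 0:
        f0 = random.randint(0, n_freq - f)
        for i in range(f0, f0 + f):
            out[i] = [mask_value] * n_time
    return out
```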
The initial margin penalty of the AAM-softmax layer is set to 0.2. We also apply a weight decay of 2e-5 on the weights in the networks, except for the AAM-softmax layer, which uses a slightly higher value of 2e-4. The systems are trained using the Adam optimizer \cite{adam} with a Cyclical Learning Rate (CLR) using the \textit{triangular2} policy as described in~\cite{clr}. The maximum and minimum learning rates are set at 1e-3 and 1e-8 respectively. We use a cycle length of 130k iterations with a batch size of 128. All ECAPA-TDNN models are trained for three full cycles, the ResNet34 models use one training cycle.
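The \textit{triangular2} schedule above can be sketched as follows (a hedged illustration following~\cite{clr}; the function name is ours and one cycle is taken as twice the step size):

```python
import math

def clr_triangular2(it, step_size, lr_min=1e-8, lr_max=1e-3):
    """Cyclical learning rate, `triangular2` policy: the rate ramps
    linearly between lr_min and lr_max, and the peak amplitude is
    halved after every full cycle (one cycle = 2 * step_size)."""
    cycle = math.floor(1 + it / (2 * step_size))
    x = abs(it / step_size - 2 * cycle + 1)
    return lr_min + (lr_max - lr_min) * max(0.0, 1 - x) / (2 ** (cycle - 1))
```

Under this convention a cycle length of 130k iterations corresponds to a step size of 65k: the rate starts at the minimum, peaks at the maximum mid-cycle, and the second cycle peaks at roughly half the maximum.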
\subsection{Large margin fine-tuning}
After the initial training stage we apply our proposed large margin fine-tuning strategy~\cite{voxsrc20_submission} to all models. During this fine-tuning stage, the margin of the AAM-softmax layer is increased to 0.5. SpecAugment is disabled and the length of the random crop is increased to 6~s. The CLR cycle length is decreased to 60k, with the maximum learning rate lowered to \mbox{1e-5}. No layers in the network are frozen. Finally, the sampling strategy is changed to HPM as described in~\cite{sdsvc_paper} with parameter values $S = 16$, $I = 8$ and $U = 1$. An ablation study can be found in~\cite{voxsrc20_submission}.
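The effect of the increased margin on the target-class logit can be illustrated with a scalar sketch of the AAM-softmax logit modification; the function name is ours and the scale value is an assumption, as it is not stated above:

```python
import math

def aam_logit(cos_theta, margin=0.5, scale=30.0, target=False):
    """Additive Angular Margin on a single logit: for the target class
    the angle between embedding and class prototype is penalized by
    `margin` before scaling, enforcing larger inter-class separation."""
    if not target:
        return scale * cos_theta
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))
    return scale * math.cos(theta + margin)
```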
We create scores for all fine-tuned systems using adaptive s-normalization~\cite{s_norm_2} with an imposter cohort size of 100. The imposter cohort consists of the average of the length-normalized utterance-based embeddings of each training speaker.
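A minimal pure-Python sketch of adaptive s-normalization with a top-$n$ imposter cohort might look as follows; function names are ours, and a small illustrative cohort stands in for the speaker means:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embeddings."""
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def top_stats(emb, cohort, n):
    """Mean/std of the top-n imposter cohort scores for one embedding."""
    scores = sorted((cosine(emb, c) for c in cohort), reverse=True)[:n]
    mu = sum(scores) / len(scores)
    sd = math.sqrt(sum((s - mu) ** 2 for s in scores) / len(scores))
    return mu, sd

def adaptive_snorm(e, t, cohort, n=100):
    """Adaptive s-norm: normalize the raw cosine score by the statistics
    of the top-n imposter scores of both the enrollment and test side."""
    raw = cosine(e, t)
    mu_e, sd_e = top_stats(e, cohort, n)
    mu_t, sd_t = top_stats(t, cohort, n)
    return 0.5 * ((raw - mu_e) / sd_e + (raw - mu_t) / sd_t)
```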
\subsection{Quality-aware score calibration}
To train our calibration system we create a set of trials from the VoxCeleb2 training dataset. We want our calibration system to be robust against varying utterance durations in the trials. Subsequently, we define three types of trials: \textit{short-short}, \textit{short-long} and \textit{long-long}, with \textit{short} indicating an utterance between 2 and 6 seconds and \textit{long} ranging from 6~s to the maximum length utterance in the VoxCeleb2 dataset. We include 10k trials of each type in our calibration trial set, resulting in a total of 30k trials. The number of target and non-target trials is balanced.
We calibrate each system individually on the calibration set using logistic regression, after which we calculate the mean score across all models. Subsequently, we use our proposed quality-aware calibration~\cite{voxsrc20_submission} on the mean score to make the decision thresholds of the evaluation metrics robust across varying utterance conditions. We incorporate two symmetric Quality Measure Functions (QMFs) based on the minimum and maximum quality value in the quality-aware calibration stage. Our first quality metric is the number of speech frames detected by the Voice Activity Detector (VAD) module of the SPRAAK system~\cite{spraak}. As our second metric, we use the mean top-100 imposter score based on the inner product of the considered embedding and the s-normalization imposter cohort.
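Conceptually, the calibration stage fits a logistic regression on the raw score together with the QMFs. A self-contained sketch, using our own simplified SGD fit rather than the actual toolchain, could read:

```python
import math

def train_calibration(feats, labels, lr=0.1, epochs=500):
    """Fit calibration weights by logistic regression (plain SGD).
    Each trial is a feature vector [raw_score, qmf_1, qmf_2, ...];
    labels are 1 for target trials and 0 for non-target trials."""
    dim = len(feats[0])
    w = [0.0] * (dim + 1)  # bias followed by one weight per feature
    for _ in range(epochs):
        for x, y in zip(feats, labels):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss w.r.t. z
            w[0] -= lr * g
            for i, xi in enumerate(x):
                w[i + 1] -= lr * g * xi

    def calibrate(x):
        """Return the calibrated score (a log-likelihood-ratio-like value)."""
        return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return calibrate
```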
\begin{table}[h]
\caption{Impact of distance metric and cohort size on imposter-based QMF on the VoxSRC-20 validation set.}
\vspace{-5pt}
\label{tab:ablation_qmf}
\centering
\scalebox{0.95}{
\begin{tabular}{ccccc}
\toprule
& \textbf{Method} & \multicolumn{1}{c}{\textbf{Cohort}} & \multicolumn{1}{c}{\textbf{EER(\%)}} & \multicolumn{1}{c}{\textbf{MinDCF\textsubscript{0.01}}} \\
\midrule
& baseline & - & 2.89 & 0.2274 \\
\midrule
A & inner product & all & 2.95 & 0.2326 \\
B & inner product & top-100 & \textbf{2.81} & \textbf{0.2257} \\
C & cosine & all & 2.89 & 0.2274 \\
D & cosine & top-100 & 2.89 & 0.2280 \\
\bottomrule
\end{tabular}}
\vspace{-15pt}
\end{table}
A detailed description of the quality-aware score calibration can be found in~\cite{voxsrc20_submission}. As an additional analysis to~\cite{voxsrc20_submission}, we compare the usage of the inner product and cosine distance during the calculation of the imposter mean against different cohort sizes in Table~\ref{tab:ablation_qmf} on our baseline ECAPA-TDNN system. Experiments \textit{A} and \textit{C} show that using the cosine distance when the imposter cohort contains all 5994 training speakers appears to be more stable than using the inner product, which even degrades performance. However, a properly chosen cohort size appears to be beneficial for the imposter mean based on the inner product, as indicated in experiment \textit{B}, for which we note a relative improvement of 3.1\% EER and 0.7\% MinDCF over the baseline system.
\subsection{Fusion system performance}
Table~\ref{tab:ablation_fusion} provides a performance overview on the VoxSRC-20 validation set of all fine-tuned systems in our final system fusion submission for the closed track. Notably, by incorporating Channel-dependent Attentive Statistics (CAS) pooling in the SE-ResNet34 system, the architecture now provides performance similar to the baseline ECAPA-TDNN system. Using a Tanh activation in the CAS layer does not seem to have a big impact. Adding a BLSTM layer does not improve performance over the baseline ECAPA-TDNN, possibly due to the SE-blocks already inserting global context information in the frame-level layers of the model. The experiment with Dynamic Dilation (DD) indicates there are multiple ways to gradually scale up the temporal context of the ECAPA-TDNN system. Interestingly, using too large an embedding dimension can degrade the performance of the ECAPA-TDNN significantly. We also notice a slight performance degradation when only using 60 log Mel filter bank energies. Using two sub-centers in the AAM-softmax (SC-AAM) layer results in additional performance gains on the ECAPA-TDNN and ResNet34 based systems. The VoxCeleb2 dataset is less curated than the VoxCeleb1 dataset and this more robust training loss might compensate for labeling errors, but it may also compensate for too strong augmentations. Incorporating SC-AAM and CAS in an SE-ResNet34 based system leads to our best single system performance.
\begin{table}[h]
\centering
\caption{Evaluation of all fine-tuned systems in the final fusion submission of the closed track on VoxSRC-20 validation set.}
\vspace{-5pt}
\label{tab:ablation_fusion}
\resizebox{0.91\textwidth}{!}{\begin{minipage}{\textwidth}
\begin{tabular}{lccc}
\toprule
\textbf{Architecture} & \textbf{Variant} & \multicolumn{1}{c}{\textbf{EER(\%)}} & \multicolumn{1}{c}{\textbf{MinDCF\textsubscript{0.01}}} \\
\midrule
ECAPA-TDNN & Baseline & 2.89 & 0.2274 \\
ECAPA-TDNN & Tanh in CAS & 2.86 & 0.2274 \\
ECAPA-TDNN & BLSTM & 2.88 & 0.2360 \\
ECAPA-TDNN & 256-dim emb. & 3.15 & 0.2578 \\
ECAPA-TDNN & FBANK60 & 2.92 & 0.2389 \\
ECAPA-TDNN & SC-AAM \& DD & 2.83 & 0.2298 \\
ResNet34 & SE-blocks & 3.03 & 0.2605 \\
SE-ResNet34 & CAS & 2.89 & 0.2306 \\
SE-ResNet34 & SC-AAM & 2.98 & 0.2437 \\
SE-ResNet34 & SC-AAM \& CAS & \textbf{2.70} & \textbf{0.2215} \\
\midrule
Fusion & & 2.41 & 0.1901 \\
Fusion + QMFs & & \textbf{2.16} & \textbf{0.1795} \\
\bottomrule
\end{tabular}
\end{minipage}}
\end{table}
Evaluation of our final system fusion in the closed track with large margin fine-tuning and quality-aware score calibration is given in Table~\ref{tab:ablation_voxsrc}. Large margin fine-tuning of all models results in a relative improvement of 3\% in EER and 8\% in MinDCF. Using quality-aware score calibration of the fused score with the speech duration and imposter mean QMFs resulted in an additional 8\% EER and 6\% MinDCF relative improvement.
Due to time constraints, we only trained one extra baseline ECAPA-TDNN model for the open speaker verification track. We replace the under-performing ECAPA-TDNN with 256-dimensional embeddings with this new model. Our final open-track fusion submission improves upon the closed submission by a relative 4\% in EER and 2\% in MinDCF.
\begin{table}[h]
\caption{Evaluation of the proposed fine-tuning and quality-aware calibration (QMFs) on the VoxSRC-20 test set.}
\vspace{-5pt}
\label{tab:ablation_voxsrc}
\centering
\resizebox{0.98\textwidth}{!}{\begin{minipage}{\textwidth}
\begin{tabular}{lccc}
\toprule
\textbf{Systems} & \textbf{Track} & \multicolumn{1}{c}{\textbf{EER(\%)}} & \multicolumn{1}{c}{\textbf{MinDCF\textsubscript{0.05}}} \\
\midrule
Fusion & closed & 4.20 & 0.2052 \\
Fusion + FT & closed & 4.06 & 0.1890 \\
Fusion + FT + QMFs & closed & \textbf{3.73} & \textbf{0.1772} \\
Fusion + FT + QMFs & open & \textbf{3.58} & \textbf{0.1737} \\
\bottomrule
\end{tabular}
\end{minipage}}
\vspace{-10pt}
\end{table}
\section{Unsupervised speaker verification}
\subsection{Training without speaker labels}
Unsupervised speaker verification tackles the problem without the use of any manually created speaker identity labels (VoxSRC-20 Track 3). We define three main stages in our audio-only solution:
\begin{enumerate}
\item Utterance-based contrastive learning
\item Iterative clustering for pseudo-label generation
\item Supervised training on pseudo-labels
\end{enumerate}
\subsubsection{Contrastive learning}
The contrastive learning stage~\cite{simclr,aat,semisupervised_contrastive_learning} generates positive and negative comparison pairs between all utterances in the unlabeled dataset. Through strong augmentation, two different versions of the same utterance can be made. It is clear that the speaker identity related to the processed utterance does not change, and the two generated segments constitute a positive comparison pair. Speaker embeddings generated by the speaker verification model should be consistent within a positive pair, indicating that the speaker embedding extractor is invariant to the applied augmentations. As there are no speaker labels available, it is assumed that each training utterance corresponds to a different speaker. A negative comparison pair is thus constructed by simply comparing across utterances. Embeddings within a negative pair should be different, showing that the network can differentiate between speakers.
We rely on Momentum Contrast (MoCo)~\cite{moco} to expand the number of possible negative comparison pairs beyond the data in the mini-batch per gradient update. This MoCo technique creates a second momentum encoder, which is a copy of the original speaker embedding extractor. The momentum encoder is only used for forward passes. Instead of the back-propagation algorithm to update its model parameters, a momentum update based on the values of the original network is used during each iteration:
\begin{equation}
\theta_m \leftarrow m \theta_m + (1-m ) \theta_e
\end{equation}
with $\theta_m$ a model parameter of the momentum encoder and $\theta_e$ the corresponding model parameter in the speaker embedding extractor. The momentum $m$ controls how consistent the output will remain across multiple mini-batch update iterations.
After each parameter update, we store the momentum encoder output embeddings of the latest mini-batch in a large queue. When adding this new data to the queue, we remove the oldest momentum encoder embeddings. This way we store a large number of recent and consistent embeddings that can be used for the negative pair comparisons.
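The momentum update and the embedding queue can be illustrated as follows; this is a list-based sketch, whereas real implementations update parameter tensors in place:

```python
from collections import deque

def momentum_update(theta_m, theta_e, m=0.999):
    """theta_m <- m * theta_m + (1 - m) * theta_e, applied per parameter:
    the momentum encoder slowly tracks the trained embedding extractor."""
    return [m * pm + (1 - m) * pe for pm, pe in zip(theta_m, theta_e)]

class EmbeddingQueue:
    """FIFO store of recent momentum-encoder embeddings used as negatives;
    the oldest entries are evicted once `maxlen` is reached."""
    def __init__(self, maxlen=65536):
        self.q = deque(maxlen=maxlen)

    def enqueue(self, batch_embeddings):
        self.q.extend(batch_embeddings)

    def negatives(self):
        return list(self.q)
```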
The loss function of the original speaker embedding extractor is given by:
\begin{equation}
\label{aam_softmax}
L = -\frac{1}{n} \sum^{n}_{i=1} \log \frac{e^{s(\textbf{x}_i \cdot \textbf{x}^{m}_{i+})}}{e^{s(\textbf{x}_i \cdot \textbf{x}^{m}_{i+})} + \sum_{j=1}^N e^{s(\textbf{x}_i \cdot \textbf{x}^{m}_{j-})}}
\end{equation}
with $n$ indicating the batch size and $\textbf{x}$ a length-normalized speaker embedding generated by the original speaker embedding extractor. $\textbf{x}^m$ refers to the normalized embeddings of the momentum encoder. Note that the embeddings $\textbf{x}_{i}$ and $\textbf{x}^{m}_{i+}$ are each generated with a different augmentation applied to the original utterance.
The MoCo queue for negative pair comparisons contains $N$ stored embeddings $\textbf{x}^{m}_{j-}$. A large value for $N$ can be set, as the momentum encoder is not updated through back-propagation. A scaling factor $s$ is applied to increase the range of the output log-likelihoods. After the parameter update, the oldest embeddings in the MoCo queue are replaced by all $\textbf{x}^{m}_{i+}$ in the mini-batch.
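A direct, illustrative implementation of this loss on plain Python lists (assuming length-normalized inputs; function names are ours) might be:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def contrastive_loss(x, x_pos, queue, s=10.0):
    """Mini-batch loss: x[i] and x_pos[i] are length-normalized embeddings
    of two augmentations of utterance i (extractor / momentum encoder);
    `queue` holds stored momentum-encoder embeddings used as negatives."""
    total = 0.0
    for xi, xp in zip(x, x_pos):
        pos = math.exp(s * dot(xi, xp))
        neg = sum(math.exp(s * dot(xi, xn)) for xn in queue)
        total += -math.log(pos / (pos + neg))
    return total / len(x)
```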
\subsubsection{Iterative clustering}
The assumption that each utterance corresponds with a different speaker is most likely incorrect for most training datasets. Once a speaker embedding extractor model is trained through contrastive learning, we can use this model to characterize the speaker in each training utterance by extracting an embedding. These embeddings can be grouped by means of a clustering algorithm.
One of the most successful algorithms in speaker diarization is Agglomerative Hierarchical Clustering (AHC). However, its memory complexity does not allow us to efficiently apply AHC to the complete VoxCeleb2 dataset with current hardware and software. Therefore, we rely on an extremely efficient mini-batch k-means~\cite{minibatch_kmeans, scikit-learn} clustering process to reduce the embeddings to a more manageable number of k-means cluster centers. As the contrastive learning optimized the cosine similarity between embeddings, we length-normalize the embeddings before clustering. After this initial clustering, the k-means cluster centers are grouped through AHC with Ward linkage~\cite{ward_ahc} and the cosine similarity metric.
Each utterance belonging to the same cluster of k-means centers is assigned the same speaker identity pseudo-label. We interpret these pseudo-labels as the ground truth and we train a speaker embedding extractor using the supervised AAM-softmax loss. The number of AHC output clusters can be determined by analyzing the performance of this speaker verification model on the VoxSRC validation data.
This process can be re-iterated. Given the training embeddings generated by the model trained on the pseudo-labels we can re-execute the clustering and continue training the speaker embedding extractor with the new pseudo labels. This can be repeated until performance convergence on the validation data.
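The first clustering stage can be sketched as follows; this is a minimal pure-Python version of mini-batch k-means in the spirit of~\cite{minibatch_kmeans}, while the subsequent AHC step over the resulting centers is left to a dedicated library:

```python
import random

def mini_batch_kmeans(points, k, batch_size, iters, seed=0):
    """Mini-batch k-means: each iteration assigns a random mini-batch
    to the nearest center and moves centers with a per-center learning
    rate 1/count, keeping memory usage low compared to AHC."""
    rng = random.Random(seed)
    centers = [p[:] for p in rng.sample(points, k)]
    counts = [1] * k
    for _ in range(iters):
        batch = rng.sample(points, min(batch_size, len(points)))
        for p in batch:
            # nearest center by squared Euclidean distance
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(centers[c], p)))
            counts[j] += 1
            eta = 1.0 / counts[j]  # per-center decaying learning rate
            centers[j] = [(1 - eta) * a + eta * b
                          for a, b in zip(centers[j], p)]
    return centers
```

The resulting centers (50K in our setting) are then grouped by AHC with Ward linkage, which is feasible at that reduced scale.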
\subsubsection{Robust training on pseudo-labels}
The final iteration of the clustering process generates reliable pseudo labels that can be used to train a larger speaker verification model in a supervised way. However, these pseudo labels still contain label noise and we rely on the AAM subcenter loss~\cite{subcenter} for potential robustness against these label errors. To further optimize performance, the trial scores of this large model can be fused with the scores of the final model in the preceding iterative clustering process.
\subsection{Implementation}
\subsubsection{Contrastive learning}
An ECAPA-TDNN~\cite{ecapa_tdnn} with $C=1024$ is trained on VoxCeleb2 in an unsupervised way with contrastive learning. We only use two SE-Res2Blocks to reduce the GPU memory requirement.
The CLR settings are the same as in the initial phase of the supervised training, and we train the model for 1 cycle. The size of the MoCo queue is set to 65K and the momentum to 0.999. The scale $s$ in the contrastive loss is set to 10. We apply shuffle batchnorm~\cite{moco} on the momentum encoder to avoid non-generalizing information leakage through the batchnorm statistics. This problem is caused by the fact that the positive pair utterances of the embedding extractor and the momentum encoder within a mini-batch correspond to an identical group of source utterances. All other settings except for data augmentation are similar to those of the supervised training.
As the augmentation plays a crucial role in contrastive learning we implement an online augmentation protocol. We reduce overlap between the 3.5~s random crops of the same utterance processed by the embedding extractor and the momentum encoder by sampling 5 crops and picking the pair with the least overlap. Minimizing the overlap in a more precise way causes sub-optimal performance by frequently selecting the start and the end of the utterance during cropping. We use 37K noise segments from the balanced YouTube-8M~\cite{youtube_8m} train set for additive noise augmentation with an SNR between 5 and 15 dB. Optional reverb~\cite{rirs} is applied with a chance of 75\%. Contrary to our expectations, we noticed slight performance gains by applying reverb after the additive noise augmentation. No SpecAugment is applied.
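The crop-pair selection can be illustrated as follows; the function name and the continuous-time representation are ours:

```python
import random

def least_overlap_pair(utt_len, crop_len=3.5, n_candidates=5, rng=random):
    """Sample `n_candidates` crop start times and return the pair with the
    least temporal overlap; sampling a few candidates instead of exactly
    minimizing the overlap avoids always picking the utterance edges."""
    starts = [rng.uniform(0, utt_len - crop_len) for _ in range(n_candidates)]
    best, best_overlap = None, float("inf")
    for i in range(len(starts)):
        for j in range(i + 1, len(starts)):
            overlap = max(0.0, crop_len - abs(starts[i] - starts[j]))
            if overlap < best_overlap:
                best_overlap, best = overlap, (starts[i], starts[j])
    return best
```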
\subsubsection{Iterative clustering}
To guarantee high-quality embeddings, we extract training embeddings from the clean full-length VoxCeleb2 utterances with the model trained through contrastive learning. The normalized embeddings are grouped into 50K clusters by Euclidean mini-batch k-means clustering with a batch size of 10K. Random initialization of the k-means algorithm is used.
The assignment of 50K cluster centers makes AHC computationally feasible in the next stage. The AHC clustering is realized using the fastcluster package~\cite{fastcluster}.
We assume that it is realistic to determine the optimal number of AHC clusters in VoxCeleb2 in steps of 2.5K clusters. Evaluation on the VoxSRC-20 validation set of the ECAPA-TDNN trained with the AAM-softmax loss on pseudo-labels showed that 7.5K clusters delivered optimal results. This analysis is shown in Figure~\ref{fig:ahc_performance}.
The data preprocessing in this stage is identical to the supervised setting, and a standard ECAPA-TDNN with three SE-Res2Blocks is used. To speed up the training process we reduce the CLR cycle length to 60K. The new training embeddings are re-clustered after one CLR training cycle. The maximum CLR learning rate does not decay per cycle as the system should be able to adapt to the improved pseudo-labels. The assigned cluster labels are permuted after each iteration. To assure that we can continue training with the same model, we replace the AAM prototypes by the mean vector of all normalized embeddings assigned to the corresponding cluster. Performance converged after 7 iterations.
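The prototype replacement step can be sketched as follows; this is a list-based illustration, whereas the real prototypes live in the AAM weight matrix:

```python
import math

def reinit_prototypes(embeddings, labels, n_clusters):
    """Replace each AAM class prototype by the mean of the length-normalized
    embeddings assigned to that cluster, so training can resume after the
    pseudo-labels have been permuted by re-clustering."""
    dim = len(embeddings[0])
    sums = [[0.0] * dim for _ in range(n_clusters)]
    counts = [0] * n_clusters
    for emb, y in zip(embeddings, labels):
        norm = math.sqrt(sum(v * v for v in emb))
        for d in range(dim):
            sums[y][d] += emb[d] / norm
        counts[y] += 1
    return [[s / max(c, 1) for s in row] for row, c in zip(sums, counts)]
```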
\subsubsection{Robust training on pseudo-labels}
The pseudo-labels of the final clustering iteration are used to train a large ECAPA-TDNN with $C=2048$, dynamic dilation within 4 SE-Res2Blocks and subcenter AAM with 2 centers per cluster. The CLR schedule is restored to its original setting. After 3 cycles the model is fine-tuned with the large margin setting. We do not apply any score normalization or calibration. These post-processing steps could be enabled by generating target and non-target trials from the pseudo-labels. The output scores of this system can be fused with those of the final model of the iterative clustering process by taking the average of the scores. This is possible without any calibration as the systems have been trained on the same input data with a similar AAM loss function.
\subsection{Results}
Performance of the unsupervised speaker verification approach is shown in Figure~\ref{fig:ahc_performance}. Contrastive learning achieves an EER of 18.0\% on the VoxSRC-20 validation set and an EER of 7.3\% on the original VoxCeleb1 test set. Iterative grouping through clustering improves the results if the number of clusters is chosen appropriately. Setting the number of clusters too low will result in performance divergence as shown in the case of 5K clusters. Constructing 7.5K clusters during the iterative process is more optimal than generating 10K clusters. In the case of 7.5K clusters the EER gradually improves to 6.5\%, which is a relative improvement of 64\% compared to the initial model. This final model achieves an EER of 2.1\% on the original VoxCeleb1 test set. Training and large margin fine-tuning of a larger ECAPA-TDNN system trained with 2-subcenter AAM on the final pseudo-labels produces an EER of 6\% on the VoxSRC-20 validation set.
\begin{figure}[h]
\centering
\includegraphics[scale=0.275]{images/iterative_ahc_performance.pdf}
\setlength{\belowcaptionskip}{-10pt}
\caption{Performance of iterative clustering on the VoxSRC-20 validation set.}
\label{fig:ahc_performance}
\end{figure}
The converged iterative clustering algorithm with 7.5K clusters initialized by contrastive learning achieves an EER of 7.7\% on the VoxSRC-20 test set. Score averaging with the larger fine-tuned ECAPA-TDNN system results in a final submission score of 7.2\% EER.
\bibliographystyle{IEEEtran}
\section{Introduction}
Knowledge Graphs (KGs) encode real-world facts as structured data and have drawn significant attention from academia and industry \cite{zhang2022deepke}.
KG representation learning aims to project relations and entities into a continuous vector space, which promotes knowledge reasoning and can readily be applied to downstream tasks such as question answering \cite{kgt5} and recommender systems \cite{zhang2021alicg}.
Previous embedding-based knowledge graph representation methods, such as TransE \cite{DBLP:conf/nips/BordesUGWY13}, embed relational knowledge into a vector space and then optimize the embeddings with a pre-defined scoring function over those vectors.
A few remarkable open-sourced and long-term maintained KG representation toolkits have been developed, such as OpenKE \cite{DBLP:conf/emnlp/HanCLLLSL18}, LibKGE \cite{DBLP:conf/emnlp/BroscheitRKBG20}, PyKEEN \cite{ali2021pykeen}, CogKGE \cite{DBLP:conf/acl/JinMYHSWXCZ22}.
Nevertheless, these embedding-based methods are limited in expressiveness, since they rely on shallow network architectures and use no side information.
\input{images/framework}
By comparison, text-based methods \cite{kgbert} incorporate available texts for knowledge
representation learning.
With the development of prompt learning, many text-based models \cite{genkgc,kgt5} have been proposed; they obtain promising performance with pre-trained language models (PLMs) and have the advantage of a fixed memory footprint, even for large-scale real-world KGs.
However, there is currently no comprehensive open-source framework designed specifically for KG representation with prompt learning, which makes it challenging to try out new methods and to compare previous approaches rigorously.
In this paper, we share with the community an open-sourced prompt learning framework for KG representation learning and application called \textbf{PromptKG} (MIT License), which supports various cutting-edge text-based KG representation models.
Besides, we equip PromptKG with a simple-yet-effective prompt learning method that utilizes contextual \textsc{[MASK]} embeddings as knowledge representations (through KG pre-training) for downstream tasks (task-specific fine-tuning with KG representations, as shown in Figure \ref{fig:ins}).
\textbf{PromptKG} supports diverse tasks including KG completion, question answering, recommendation, and knowledge probing (LAMA).
Empirically, we demonstrate that \textbf{PromptKG} can yield better or comparable performance on seven datasets.
We also provide tutorial notebooks for beginners.
We will maintain the framework to support new tasks, address new requests, and fix bugs.
\input{images/ins}
\section{Text-based KG Representation}
In this section, we detail two major types of text-based KG representation (discrimination-based and generation-based) and the proposed prompt learning method for KG representation learning, which are all integrated into \textbf{PromptKG}.
\input{charts/table_model}
\paragraph{Notation}
Given a triple $(h, r, t)$, we define their natural language descriptions as $X^{h}=\{x^h_{1}, x^h_2, ..., x^h_{|h|}\}$ , $X^r = \{x^{r}_1, x^r_2,...,x^r_{|r|} \}$ and $X^t = \{x^{t}_1, x^t_2,...,x^t_{|t|} \}$.
We denote the embedding of token $x$ as $e$.
\paragraph{Discrimination-based methods}
There are two kinds of discrimination-based models:
one kind (e.g., KG-BERT \cite{kgbert}, PKGC \cite{DBLP:conf/acl/LvL00LLLZ22}) utilizes a single encoder to encode KG triples together with their text descriptions;
the other \cite{STAR,simkgc} leverages a siamese encoder (a two-tower model) with PLMs to encode entities and relations separately.
For the first kind, the score of each triple is expressed as:
\begin{equation}
Score(h,r,t)= \text{TransformerEnc}(X^h,X^r,X^t),
\end{equation}
where TransformerEnc is the BERT model followed by a binary classifier.
However, these models have to iterate over all entities and compute a score for each to decide the correct one, which is computation-intensive, as shown in Table \ref{tb:time}.
In contrast, two-tower models like StAR \cite{STAR} and SimKGC \cite{simkgc} usually encode $\langle h,r\rangle$ and $t$ to obtain the embeddings.
Then, they use a score function to predict the correct tail entity from the candidates, denoted by:
\begin{equation}
Score(\langle h,r\rangle, t) = \cos(e_{\langle h,r\rangle}, e_{t}).
\end{equation}
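The two-tower retrieval step above can be sketched in a few lines (an illustrative NumPy sketch; the function and variable names are ours, not from the released implementations):

```python
import numpy as np

def rank_tail_entities(hr_embedding, entity_embeddings):
    """Score a (head, relation) query against all candidate tail
    entities by cosine similarity, as in the two-tower setup."""
    q = hr_embedding / np.linalg.norm(hr_embedding)
    E = entity_embeddings / np.linalg.norm(entity_embeddings,
                                           axis=1, keepdims=True)
    scores = E @ q                       # cosine similarity with every entity
    return np.argsort(-scores), scores   # candidate indices, best first
```

Because the entity tower is independent of the query, all entity embeddings can be pre-computed once, which is what makes this family of models much faster than single-encoder scoring.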
\paragraph{Generation-based methods}
Generation-based models formulate KG completion or other KG-intensive tasks as sequence-to-sequence generation.
Given a triple with the tail entity missing $(h, r, ?)$, models are fed with $\langle X^h, X^r\rangle$ and then output $X^t$.
In the training procedure, generative models maximize the conditional probability:
\begin{equation}
Score(h, r, t)= \prod_{i=1}^{|t|}p(x^t_i|x^t_1, x^t_2, ..., x^t_{i-1};\langle X^h, X^r \rangle).
\end{equation}
To guarantee that the decoded token sequence corresponds to a valid entity in the KG, GenKGC \cite{genkgc} proposes an entity-aware hierarchical decoder to constrain $X^t$.
In addition, inspired by prompt-learning, GenKGC takes triples with the same relation as \textit{demonstrations} to implicitly encode structured knowledge.
Besides, KGT5 \cite{kgt5} proposes to pre-train generation-based PLMs from scratch with text descriptions for KG representation.
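As an illustration of the chain-rule scoring above, and of constrained decoding in the spirit of GenKGC's entity-aware decoder, consider the following sketch (ours, not the released implementation):

```python
import math

def score_triple(token_log_probs):
    """Score(h,r,t) = prod_i p(x_i | x_<i; X^h, X^r),
    given the per-token log-probabilities from the decoder."""
    return math.exp(sum(token_log_probs))

def allowed_next_tokens(prefix, entity_token_ids):
    """Restrict decoding to tokens that can extend `prefix` into a
    valid entity name (a naive prefix-trie lookup over the KG's
    entity token sequences)."""
    n = len(prefix)
    return {seq[n] for seq in entity_token_ids
            if len(seq) > n and list(seq[:n]) == list(prefix)}
```

A real decoder would use an actual trie rather than a linear scan, but the constraint is the same: at each step only continuations of some valid entity name are permitted.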
\paragraph{A prompt learning method for KG representation learning}
We introduce the technical details of the proposed prompt learning method for KG representation learning, which shares the same architecture as normal discrimination PLMs.
Note that there are two modules in the normal PLMs: a \textit{word embedding layer} to embed the token ids into semantic space and an \textit{encoder} to generate context-aware token embedding.
Here, we take the masked language model and \textbf{treat entities and relations as special tokens} in the \textit{word embedding layer}.
As shown in Figure \ref{fig:ins}, the model predicts the correct tail entity with the sequence of head entity and relation token and their descriptions.
For the entity/relation embeddings, we freeze the \textit{encoder}, tuning only the \textit{entity embedding layer}, to optimize the loss function:
\begin{equation}
\resizebox{.37\textwidth}{!}{$\mathcal{L}=-\frac{1}{\left| \mathcal{E} \right|} \sum\limits_{e_j \in \mathcal{E}} \mathbb{I}(e_j=e_i) \log p\left(\texttt{[MASK]}=e_j \mid X^i ; \Theta \right)$},
\label{loss}
\end{equation}
where $\Theta$ represents the parameters of the model, and $X^i$ and $e_i$ are the description and the embedding of entity $i$, respectively.
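Since the encoder is frozen, each training example reduces to a softmax cross-entropy over entity tokens at the \texttt{[MASK]} position; a minimal NumPy sketch of this loss (our notation, not the released code):

```python
import numpy as np

def entity_mask_loss(mask_hidden, entity_embeddings, target_idx):
    """Cross-entropy of predicting entity `target_idx` from the
    contextual [MASK] representation, i.e. one term of the loss
    in the text."""
    logits = entity_embeddings @ mask_hidden   # score every entity
    logits = logits - logits.max()             # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[target_idx]
```

Only the rows of `entity_embeddings` receive gradients during training; the encoder that produces `mask_hidden` stays fixed.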
\section{Toolkit Design}
We introduce the design principle of \textbf{PromptKG} as:
1) \emph{Unified KG Encoder}: \textbf{PromptKG} utilizes a unified encoder to pack graph structure and text semantics;
2) \emph{Model Hub}: \textbf{PromptKG} is integrated with many cutting-edge text-based KG representation models;
3) \emph{Flexible Downstream Tasks}: \textbf{PromptKG} disentangles KG representation learning from downstream tasks.
\subsection{Unified KG Encoder}
As shown in Figure \ref{fig:framework}, a unified KG encoder represents graph structure and text semantics, supporting different types of text-based KG representation methods.
For the discrimination-based method, the input is built on the plain text description as:
\begin{align*}
\begin{split}
X_{\text{hr pair}} &= \texttt{[CLS]}\ X^h \texttt{[SEP]} \ X^r \ \texttt{[SEP]} \\
X_{\text{tail}} &= \texttt{[CLS]} \ X^t \ \texttt{[SEP]}.
\end{split}
\end{align*}
For the generation-based model, we leverage the tokens in $X^h$ and $X^r$ to optimize the model with label $X^t$.
When predicting the head entity, we add a special token \texttt{[reverse]} in the input sequence for reverse reasoning.
Referring to the proposed prompt learning method, we represent entities and relations in KG with special tokens and obtain the input as:
\begin{equation}
X = \texttt{[CLS]} X^h \texttt{[Entity h]} \ \texttt{[SEP]} \ X^r \ \texttt{[SEP]} \ \texttt{[MASK]} \ \texttt{[SEP]},
\end{equation}
where \texttt{[Entity h]} represents the special token to the head entity.
To encode the graph structure, we sample 1-hop neighbor entities and concatenate their tokens as input for implicit structure information.
With such a unified KG encoder, \textbf{PromptKG} can encode both heterogeneous graph structure and text-rich semantic information.
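The prompt-learning input sequence above can be assembled with a simple template function (an illustrative sketch; a real tokenizer registers and handles the special tokens itself):

```python
def build_prompt_input(head_desc, rel_desc, head_token="[Entity h]"):
    """Assemble the prompt-learning input of the Unified KG Encoder:
    [CLS] X^h [Entity h] [SEP] X^r [SEP] [MASK] [SEP]."""
    return (f"[CLS] {head_desc} {head_token} [SEP] "
            f"{rel_desc} [SEP] [MASK] [SEP]")
```

The model then predicts the special token of the tail entity at the \texttt{[MASK]} position.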
\subsection{Model Hub}
As shown in Figure \ref{fig:framework}, \textbf{PromptKG} consists of a Model Hub which supports many representative text-based KG representation models.
For example, KG-BERT \cite{kgbert} uses BERT to score the triple with their descriptions.
In contrast to its high time complexity, StAR \cite{STAR} and SimKGC \cite{simkgc} both introduce tower-based methods that pre-compute entity embeddings and retrieve the top-$k$ entities efficiently.
Further, GenKGC \cite{genkgc} and KGT5 \cite{kgt5} treat knowledge graph completion as sequence-to-sequence generation.
Besides, $k$NN-KGE \cite{knnkge} is a KG representation model which linearly interpolates its entity distribution by \emph{k}-nearest neighbors.
Note that all the model implementations in Model Hub are modularized; thus, flexible to debug and add new models.
\subsection{Applying to Downstream Tasks}
We take the proposed prompt learning method for KG representation learning as an example\footnote{Some models still under fast development in Model Hub cannot yet be directly applied to downstream tasks.} and introduce the technical details of applying it to downstream tasks, as shown in Figure \ref{fig:ins}.
For knowledge graph completion, we feed the model with the textual information $\langle X^h, X^r\rangle$ of the head entity and the relation, then obtain the target tail entity via mask token prediction.
For question answering, we feed the model with the question written in natural language concatenated with a \texttt{[MASK]} token to obtain the special token of the target answer (entity).
For recommendation, we take the user's interaction history as sequential input \cite{DBLP:conf/cikm/SunLWPLOJ19} with entity embeddings and then leverage the mask token prediction to obtain recommended items.
For the knowledge probing task, we adopt entity embedding as additional knowledge to help the model better reason through the sentence and predict the token in the masked position following PELT \cite{PELT}.
\input{charts/question_answering}
\section{Experiments}
To evaluate PromptKG, we conduct experiments on four tasks as shown in Table \ref{tab:sdm}.
1) KG completion (link prediction) with textual information is a direct downstream task of KG representation;
2) question answering is an intuitive knowledge-intensive task;
3) recommendation involves items aligned to entities in real-world KGs and thus can benefit from KG representation;
4) knowledge probing (LAMA) analyzes the factual and commonsense knowledge contained in language models using cloze-style questions.
All datasets and detailed hyper-parameters are available on GitHub for reproducibility.
\subsection{Knowledge Graph Completion}
For the KG completion task, we conduct link prediction experiments on two datasets, WN18RR \cite{wn18rr} and FB15k-237 \cite{fb15k}, and evaluate the models in \textbf{PromptKG} with the Hits@1 and MRR metrics.
From Table \ref{tab:sdm}, we observe that discrimination-based method SimKGC (previous state-of-the-art) achieves higher performance than other baselines.
Generation-based models like KGT5 \cite{kgt5} and GenKGC \cite{genkgc} also yield comparable results and show potential abilities in KG representation.
$k$NN-KGE \cite{knnkge} obtains promising performance (the best Hits@1 score) by retrieving nearest neighbors based on distances in the entity embedding space from the knowledge store and by a two-step training strategy.
\subsection{Question Answering}
KG is known to be helpful for the task of question answering.
We apply \textbf{PromptKG} to question answering and conduct experiments on the MetaQA dataset.
Due to computational resource limits, we only evaluate the 1-hop inference performance.
From Table \ref{tab:sdm}, KGT5 in \textbf{PromptKG} yields the best performance.
\subsection{Recommendation}
For the recommendation task, we conduct experiments on the well-established ML-20m dataset\footnote{\url{https://grouplens.org/datasets/movielens/20m/}}.
The linkage between ML-20m and Freebase offered by KB4Rec \cite{Zhao-DI-2019} is used to obtain textual descriptions of the movies in ML-20m.
With movie embeddings pre-trained on these descriptions, we conduct experiments on sequential recommendation task following the settings of BERT4Rec \cite{DBLP:conf/cikm/SunLWPLOJ19}.
We observe that \textbf{PromptKG} is effective for recommendation compared with BERT4Rec.
\subsection{Knowledge Probing}
Knowledge probing \cite{lama} examines the ability of LMs (BERT, RoBERTa, etc.) to recall facts from their parameters.
We conduct experiments on LAMA using pre-trained BERT (\textit{bert-base-uncased}) and RoBERTa (\textit{roberta-base}) models.
To prove that entity embedding enhanced by KGs helps LMs grab more factual knowledge from PLMs, we train a pluggable entity embedding module following PELT \cite{PELT}.
As shown in Table \ref{tab:sdm}, performance improves when we use the entity embedding module.
Since there are no annotated subject entities in SQuAD and no subject-entity URIs in ConceptNet for entity alignment, we only apply the entity embedding module to the remaining data in LAMA.
\textbf{PromptKG} will support a unified API accessible for enhanced entity embedding in the future.
\section{Conclusion and Future Work}
We propose \textbf{PromptKG}, a prompt learning framework for knowledge graph representation learning and application.
\textbf{PromptKG} establishes a unified toolkit with well-defined modules and easy-to-use interfaces to support research on using PLMs on KGs.
For both researchers and developers, PromptKG provides effective and efficient training code and supports downstream tasks.
In the future, we will continue to integrate more models and tasks (e.g., dialogue) into PromptKG to facilitate the research progress of the KG.
\section{Introduction}
Gamma-Ray Bursts (GRBs) are bright flashes of high-energy photons,
usually lasting several seconds. They are by far the most
luminous objects in the universe; they emit enormous amounts of
energy (up to $10^{53}$ ergs) and can thus be detected up to very
high redshifts ($z>5$).
GRB050904 was detected by the Burst Alert Telescope (BAT) onboard
Swift on 2005 September 4 at 01:51:44 UT (Cummings et al. 2005). It
was a long ($\leq 500$ s duration in BAT), multi-peaked, bright
burst; the 15--150 keV fluence was $(5.4 \pm 0.2) \times 10^{-6}$
erg cm$^{-2}$, and the spectrum can be described by a power law with a
photon index $\sim -1.34$. Its redshift has been measured by several
groups (Haislip et al. 2005; Antonelli et al. 2005; Price et al.
2005); $z=6.29$ makes it by far the most distant GRB
discovered to date.
Bo\"{e}r et al. (2005) reported that they detected a very bright
optical flare during the prompt high energy emission phase, and at
the same time there is an X-ray flare. It is widely believed that
the reverse shock synchrotron radiation usually peaks in the
optical/IR band, and this emission has been successful in
interpreting the early optical emission from GRB990123 (Akerlof et
al. 1999; Sari \& Piran 1999; Wang et al. 2000; Fan et al. 2002;
Zhang et al. 2003; Nakar \& Piran 2005), GRB021211 (Fox et al.
2003; Li et al. 2003; Wei 2003; Kumar \& Panaitescu 2003),
GRB041219a (Blake et al. 2005; Fan et al. 2005b) and GRB 050525a
(Blustin et al. 2005; Shao \& Dai 2005). However, in this reverse
shock model the emission is expected to make a negligible
contribution in the X-ray band (however, see Fan \& Wei 2005).
A strong optical flare accompanying an X-ray flare may also be
accounted for by the ``late internal shock model'' (Fan \& Wei
2005). Originally, that model was proposed to interpret the
X-ray flare detected in GRB 011121 (Piro et al. 2005) and many XRT
X-ray flares (Burrows et al. 2005; Zhang et al. 2005; Nousek et
al. 2005).
The optical afterglow light curve of GRB050904 cannot be described
by a simple power law. Between $\sim 3$ hours and 0.5 day after
the burst, the fading of the afterglow can be described by a power
law with index $-1.36$; after this time the light curve
flattened to a temporal index of $-0.82$ (Haislip et al. 2005).
Tagliaferri et al. (2005) found a break in the light curve at
time $t_b\simeq2.6$ day, which may be a jet effect. In this
Letter, we try to explain the optical flare with two models, i.e.
the reverse shock emission and the late internal shock model, and
then fit the afterglow light curve including energy
injection and jet effects.
\section{Explanation of the optical flare}
\subsection{The late internal shock model}
In the standard shock scenario, the prompt gamma-ray emission is
produced by the internal shock, and the burst duration is determined
by the active timescale of the central engine. However, some authors
suggest that the activity of the central engine may be much longer
than the GRB duration, which can give rise to some signatures in
multi-wavelength afterglows (Dai \& Lu 1998; Zhang \& M\'esz\'aros
2001; Granot et al. 2003; Ioka et al. 2005). Furthermore, it has
been proposed that the Fe lines observed in some GRB X-ray afterglows
are produced by late-time energy injection (Rees \& M\'esz\'aros
2000; Gao \& Wei 2005).
Fan \& Wei (2005) first proposed the late internal shock model to
account for the bright X-ray flares detected in many GRBs. Here we
show that, with proper parameters, the late internal shock model
can produce not only an X-ray flare but also an optical flare.
Following Fan \& Wei (2005), the typical synchrotron radiation
frequency can be estimated by
\begin{eqnarray}
\nu_{m} & \approx & 8.5\times 10^{15} (\frac{\epsilon_e}{0.4})^2
\epsilon_{B,-2}^{1/2} (\Gamma_{sh}-1)^{5/2}\Gamma_{sh}^{1/2}
L_{m,52}^{1/2}\nonumber\\
&& \Gamma_2^{-2}\delta t_{1}^{-1} ~{\rm Hz},
\end{eqnarray}
where $L_m$ is the outflow luminosity, $\Gamma_{sh}$ is the Lorentz
factor of the internal shock, $\Gamma$ is the Lorentz factor of the
emitting shell, $\delta t$ is the observed typical variability
timescale, $\epsilon_B$ and $\epsilon_e$ are the energy fractions
occupied by the magnetic field and electrons, respectively. Here the
convention $Q_x=Q/10^x$ has been adopted in cgs units throughout the
text.
The cooling Lorentz factor is $\gamma_{e,c}\simeq 7.7\times 10^{8}
(1+z)/[(1+Y)\Gamma B^2 \delta t]$, where
$Y=[-1+\sqrt{1+4x\epsilon_e/\epsilon_B}]/2$ is the Compton
parameter, $x\simeq \min\{1, (\nu_m/\nu_c)^{(p-2)/2}\}$ (Sari \&
Esin 2001). Then the cooling frequency is
\begin{eqnarray}
\nu_{c} &\approx & 1.6\times 10^{11}(\frac{1+z}{7.29})^{-2}
\epsilon_{B,-2}^{-3/2}[\Gamma_{sh}(\Gamma_{sh}-1)]^{-3/2}
L_{m,52}^{-3/2}\nonumber\\
&& \Gamma_2^8 \delta t_{1} (1+Y)^{-2} ~{\rm Hz}.
\end{eqnarray}
The synchrotron-self-absorption frequency is about (Li \& Song
2004; Fan \& Wei 2005)
\begin{eqnarray}
\nu_{a} &\approx & 2.9\times 10^{14} (\frac{1+z}{7.29})^{-2/7}
\epsilon_{B,-2}^{1/14} [\Gamma_{sh}(\Gamma_{sh}-1)]^{1/14}
L_{m,52}^{1/14}\nonumber\\
&& L_{syn,50}^{2/7} \Gamma_2^{-8/7}\delta t_{1}^{-5/7} ~{\rm Hz}
\end{eqnarray}
where $L_{syn}$ is the synchrotron radiation luminosity. The
maximum flux of synchrotron radiation is $F_{max}\approx
3\sqrt{3}\Phi_p (1+z)N_e m_e c^2 \sigma_T \Gamma B/(32\pi^2 q_e
D_L^2)$, where $q_e$ is the electron charge, $N_e=L_m \delta
t/[(1+z)\Gamma m_p c^2]$ is the total number of emitting electrons,
and $\Phi_p$ is a function of $p$; for $p=2.5$, $\Phi_p \simeq 0.6$
(Wijers \& Galama 1999). $D_L$ is the luminosity distance; we
adopt $(\Omega_M,\Omega_\Lambda,h)=(0.3,0.7,0.71)$. Then for the
case $\nu_{c}< \nu_{a}< \nu_{obs} < \nu_{m}$, the observed flux at
frequency $\nu_{obs}$ should be
\begin{eqnarray}
F_{\nu} &\approx & 100(\frac{\nu_{obs}}{3\times 10^{14}{\rm
Hz}})^{-1/2} L_{m,52}^{3/4}\Gamma_2 \epsilon_{B,-2}^{-1/4}
[\Gamma_{sh}(\Gamma_{sh}-1)]^{-1/4}\nonumber\\
&& D_{L, 29.3}^{-2} \delta t_{1}^{1/2}(1+Y)^{-1} ~{\rm mJy}
\end{eqnarray}
Now we turn to the observations. Bo\"{e}r et al. (2005) reported
the detection of a bright optical flare at frequency $\nu_{obs}
=3\times 10^{14}$ Hz with a peak flux of 48 mJy. Meanwhile, the
Swift XRT data show that there is also a peak in the X-ray light
curve at nearly the same time as the optical flare, which
suggests that the optical flare and the X-ray peak may have the
same origin. The slope of the X-ray spectrum is about $-1/2$, and
the flux at $1~{\rm keV}$ is about 0.08 mJy.
In our late internal shock model, if we take the following values:
$\epsilon_e=0.4$, $\epsilon_B=0.02$, $L_m=10^{52}$
erg s$^{-1}$, $\Gamma=200$, $\Gamma_{sh}=1.6$, $\delta t =20$ s,
then we find $\nu_{m}\sim 6.3\times 10^{14}$ Hz, $\nu_{a}\sim
8.4\times 10^{13}$ Hz, and $\nu_{c}\sim 1.2\times 10^{12}$ Hz, so
the emission is in the fast cooling phase. Between $\nu_{a}$ and
$\nu_{m}$ the spectrum takes the form $F_{\nu}\propto \nu^{-1/2}$,
and at the observed frequency ($3\times 10^{14}$ Hz) the flux is 49 mJy,
which is quite consistent with the observation. In addition, with
these values of $\epsilon_e$ and $\epsilon_B$, the Compton parameter
is $Y\simeq 4$, so the synchrotron photons will be Compton
scattered to high energies; the energy spectrum between $10^{16}$ Hz
and $10^{19}$ Hz is also $F_{\nu}\propto \nu^{-1/2}$, and we
estimate the flux at $1~{\rm keV}$ to be about 0.06 mJy, which is
also in good agreement with the observation.
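As a quick numerical cross-check of these estimates, plugging the quoted parameter values into Eqs. (1), (2) and (4), with $\epsilon_B$ entered in units of $10^{-2}$ as in Eq. (1) and $D_{L,29.3}\approx 1$ at $z=6.29$ for the adopted cosmology, reproduces the numbers above to within rounding:

```python
import math

# Fiducial late-internal-shock parameters quoted in the text
eps_e, eps_B = 0.4, 0.02
L_m52 = 1.0            # L_m = 1e52 erg/s
Gamma2 = 2.0           # Gamma = 200
Gamma_sh = 1.6
dt1 = 2.0              # delta t = 20 s
z = 6.29

eps_B2 = eps_B / 1e-2  # epsilon_B in units of 10^-2

# Compton parameter with x ~ 1 (fast cooling)
Y = (-1 + math.sqrt(1 + 4 * eps_e / eps_B)) / 2

# Eq. (1): typical synchrotron frequency [Hz]
nu_m = (8.5e15 * (eps_e / 0.4) ** 2 * eps_B2 ** 0.5
        * (Gamma_sh - 1) ** 2.5 * Gamma_sh ** 0.5
        * L_m52 ** 0.5 * Gamma2 ** -2 / dt1)

# Eq. (2): cooling frequency [Hz]
nu_c = (1.6e11 * ((1 + z) / 7.29) ** -2 * eps_B2 ** -1.5
        * (Gamma_sh * (Gamma_sh - 1)) ** -1.5 * L_m52 ** -1.5
        * Gamma2 ** 8 * dt1 * (1 + Y) ** -2)

# Eq. (4): flux at nu_obs = 3e14 Hz (nu_c < nu_a < nu_obs < nu_m) [mJy]
F_nu = (100 * L_m52 ** 0.75 * Gamma2 * eps_B2 ** -0.25
        * (Gamma_sh * (Gamma_sh - 1)) ** -0.25
        * dt1 ** 0.5 / (1 + Y))

print(f"nu_m ~ {nu_m:.1e} Hz, nu_c ~ {nu_c:.1e} Hz, "
      f"Y ~ {Y:.1f}, F_nu ~ {F_nu:.0f} mJy")
```

The computed flux of $\sim 48$ mJy matches the observed peak directly, and the frequencies land in the ordering $\nu_c < \nu_a < \nu_{obs} < \nu_m$ assumed by Eq. (4).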
\subsection{The reverse shock model}
\label{sec:OFRS} After the internal shock phase, as the fireball
is decelerated by the circumburst medium, usually a pair of shocks
develop (M\'esz\'aros \& Rees 1997; Sari \& Piran 1999; Kobayashi
2000). The early optical afterglow lightcurve is usually composed
of the contributions from both the forward (FS) and the reverse
shocks (RS). With that model, the very early optical/IR flash
following GRB 990123, GRB 021211, GRB 041219a and GRB 050525a
could be well modelled by assuming the physical parameters are
quite different for the FS and RS (Fan et al. 2002; Zhang et al.
2003; Kumar \& Panaitescu 2003; McMahon et al. 2004; Fan et al.
2005b; Blustin et al. 2005). For example, Fan et al. (2002)
performed a detailed fit to the optical flash of GRB 990123 data
and obtained $\epsilon_{e} ^{\rm r}=4.7\epsilon_{e}^{\rm f}=0.6$
and $\epsilon_{ B}^{\rm r}=400\epsilon_{B} ^{\rm f}=0.4$, where
the superscripts ``r'' and ``f'' represent RS and FS,
respectively.
B\"oer et al. (2005) found that both the optical flare and the
gamma-ray burst of GRB 050904 were as energetic as those of GRB
990123 (in the rest frame of the GRBs). If other parameters
(including the initial Lorentz factor of the ejecta and the number
density of the interstellar medium $n$) are similar for these two
events, then the resulting shock parameters should be similar, too.
So it is very likely that in the current case the shock parameters
of the FS and RS are also different.
Recently, Yan et al. (2005) developed a code to calculate GRB
afterglow light curves, including both the FS and the RS emission
components. In the current calculation, two novel effects
have been taken into account. One is that in previous works the
Lorentz factor of the outflow as well as the comoving particle
number density are assumed to be constant. This may not be the case:
in the standard fireball model the gamma-ray burst comes from
internal shocks, and the detected gamma-ray light curve is so
variable that the underlying outflow may be variable, too (in both the
Lorentz factor and the particle number density). In order to model
the optical plateau (Bo\"er et al. 2005) and partly for convenience,
in this work we assume the outflow can be approximated as two parts.
Their bulk Lorentz factors, isotropic energies and widths are
($\eta_{(1)}$, $E_{iso(1)}$, $\Delta_{(1)}$) and ($\eta_{(2)}$,
$E_{iso(2)}$, $\Delta_{(2)}$), respectively. The other is a more
reliable calculation of the arrival time of the RS emission. We
take the emission time of the first $\gamma$-ray photon as our zero
point of time. On the line of sight ($\theta=0$), a gamma-ray photon
$\gamma_P$ arriving at time $t$ implies that the distance of the
corresponding electron (i.e., point P, at which the bulk Lorentz
factor is $\eta$) from the initial outflow front is $\sim ct/(1+z)$.
The radial distance of the FS front from the central engine is $R_P$
when the RS crosses point P. At that time, the width between photon
$\gamma_P$ and point P is $\approx (1-\beta_\eta)R_P$, where
$\beta_\eta=\sqrt{1-1/\eta^2}$. Therefore, the arrival time of the
RS emission from point P is $t+(1+z)(1-\beta_\eta)R_P/c$. It
is straightforward to extend this calculation to the cases of
$\theta \neq 0$. It is found that the $I$-band flare of GRB 050904
can be well reproduced with the following parameters (see the insert
of Fig. \ref{fig:Fits}): $\eta_{(1),2}=380$, $E_{iso(1),54}=0.4$,
$\Delta_{(1),12}=1.3$, $\eta_{(2),2}=800$, $E_{iso(2),54}=0.3$,
$\Delta_{(2),12}=0.7$, $n=3~{\rm cm^{-3}}$, $\epsilon_e^{\rm r}=0.6$,
and $\epsilon_B^{\rm r}=0.4$. It is surprising to see that the
resulting reverse shock parameters are nearly the same as those of
GRB 990123 (Fan et al. 2002).
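The arrival-time correction described above is easy to evaluate numerically (an illustrative sketch; the input numbers below are representative, not values from the fit):

```python
import math

C_LIGHT = 2.9979e10  # speed of light [cm/s]

def rs_arrival_time(t_gamma, R_P, eta, z):
    """Observer arrival time of the RS emission from a point P whose
    associated gamma-ray photon arrived at t_gamma (on-axis case):
    t + (1+z)(1 - beta_eta) R_P / c."""
    beta = math.sqrt(1.0 - 1.0 / eta ** 2)
    return t_gamma + (1 + z) * (1 - beta) * R_P / C_LIGHT
```

For Lorentz factors of order the two shells of the fit ($\eta\sim 380$ and $\sim 800$) and $R_P\sim 10^{17}$ cm, the geometric delay is tens of seconds in the observer frame, and it shrinks as $\eta$ grows.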
Can the X-ray flare be from the RS, too? The answer is negative.
Firstly, as shown by Fan \& Wei (2005), the decline of the X-ray
emission of the RS cannot be steeper than $t^{-(2+p/2)}$, which
is inconsistent with the observation. Secondly, the reverse
shock region is now significantly magnetized, so the RS emission in
the X-ray band should also be dominated by synchrotron radiation;
the X-ray emission should thus be an extension of the optical
emission. However, the observation shows that the optical-to-X-ray
emission cannot be described by a single synchrotron
spectrum (Bo\"er et al. 2005). Therefore, in the RS model the X-ray
flare accompanying the optical flare should be attributed to the
activity of the central engine.
\section{Fits to the late $J-$band afterglow}
\label{sec:LateAfterglow} The multi-wavelength afterglow light
curves (especially the $J-$band one) of GRB050904 have been
detected (Haislip et al. 2005; Tagliaferri et al. 2005 and the
references therein). Between $\sim 3$ hours and 0.5 day after the
burst, the fading of the afterglow can be described by a power law
with index -1.36. After that time the light curve flattened to a
temporal index of -0.82. A break appears at time $t_b\simeq2.6$
day, which suggests that the outflow may be a jet. In this Letter
we pay more attention to the optical flattening. We note that at
the observer time $t\sim 10^4-10^5$ s, there are strong X-ray
flares (Cusumano et al. 2005; Price et al. 2005; Watson et al.
2005). Fan \& Wei (2005) suggested that when the moderately
relativistic outflow powering the X-ray flare caught up with the
initial GRB ejecta, a flattening would occur in the
long-wavelength afterglow light curve. In the calculation, we
assume that between $t\sim 4\times 10^4$ s and $t\sim 1.5$
day, a significant amount of energy has been injected into the
decelerating GRB ejecta. Similar to Zhang et al. (2005), the
energy injection rate has been taken as $dE_{inj}/[dt/(1+z)]=
Ac^2(t/t_0)^{-0.5}$, where $A$ is a constant. We take $A=0$ when
there is no energy injection. With the energy injection, the
equation (8) of Huang et al. (2000) should be replaced by
\begin{equation}
d \gamma={(1-\gamma^2)dm+{A(t/t_0)^{-0.5}[dt/(1+z)]}\over
M_{ej}+\epsilon m+2(1-\epsilon)\gamma m},
\label{Eq:Dyn}
\end{equation}
where $\gamma$ is the bulk Lorentz factor of the GRB ejecta,
$M_{ej}$ is the rest mass of the initial GRB ejecta, $m$ is the
mass of the medium swept up by the GRB ejecta (which is governed by
$dm=4\pi R^2 n m_p dR$, where $m_p$ is the rest mass of the proton
and $dR=\gamma(\gamma+\sqrt{\gamma^2-1})c dt/(1+z)$), and $\epsilon=x
\epsilon_e$ is the radiation efficiency. With the dynamical
evolution of the ejecta, it is straightforward to calculate its FS
emission (e.g., Huang et al. 2000; Yan et al. 2005).
The fits to the $J-$band data (taken from Haislip et al. [2005] and
Tagliaferri et al. [2005]) are presented in Fig. \ref{fig:Fits}. It
is found that the data can be well modeled with the following
parameters: $E_{\rm iso,54}=0.7$, $n=3~{\rm cm^{-3}}$,
$\epsilon_e^{\rm f}=0.15$, $\epsilon_B^{\rm f}=0.001$, $A=7\times
10^{49}~{\rm ergs~s^{-1}}$, $t_0=4\times 10^4$ s, and the jet angle
$\theta_j=0.054$. Note that the value of $\theta_j$ is obtained from
fitting the afterglow light curve, not from the simple analytic
relation. Compared with the reverse shock parameters derived in
\S{\ref{sec:OFRS}}, the shock parameters of the FS and the RS are
quite different, as that found in GRB 990123 (Fan et al. 2002; see
also Zhang et al. 2003). The isotropic energy of the $\gamma-$rays
is $\sim 5\times 10^{53}$ ergs and the derived $\theta_j=0.054$, so
the geometry corrected energy should be $\sim 7\times 10^{50}$ ergs,
which is typical for the GRBs detected by BeppoSAX, HETE-2 and
Swift. In our treatment, the flattening is caused by the late time
energy injection. The total isotropic energy injected into the GRB
ejecta is $\sim 6\times 10^{53}$ ergs.
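The geometry-corrected energy quoted above follows directly from the beaming correction $E_{jet}=E_{iso}(1-\cos\theta_j)$, which is easy to verify:

```python
import math

E_iso = 5e53       # isotropic-equivalent gamma-ray energy [erg]
theta_j = 0.054    # jet half-opening angle from the light-curve fit [rad]

# Beaming correction: only a fraction (1 - cos(theta_j)) ~ theta_j^2 / 2
# of the sphere is covered by the jet.
E_jet = E_iso * (1 - math.cos(theta_j))
print(f"E_jet ~ {E_jet:.1e} erg")
```

For small angles $1-\cos\theta_j\approx\theta_j^2/2\approx 1.5\times10^{-3}$, which brings the $\sim 5\times 10^{53}$ erg isotropic energy down to the quoted $\sim 7\times 10^{50}$ erg.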
\begin{figure}
\epsscale{1.0} \plotone{f1.eps} \caption{Modeling the $I$-band
flare (the insert) and the $J$-band afterglow of GRB 050904. In
the insert, the $I-$band flare data (the filled rectangles) are
taken from Bo\"er et al. (2005), the dashed line is the
theoretical light curve of the reverse shock emission. The
$J-$band afterglow data (the filled circles) are taken from
Haislip et al. (2005) and Tagliaferri et al. (2005). The solid
line is the theoretical light curve of the $J-$band afterglow.}
\label{fig:Fits}
\end{figure}
\section{Discussion and conclusion}
A bright optical flare has been detected in GRB050904; it is as
bright as the optical flash of GRB990123 (in the rest frame of the
bursts) and seems to be accompanied by an X-ray flare (Bo\"er et al.
2005). Here we have explored two possible models to account for this
observation. One is the ``late internal shock model'', in which the
optical flare is produced by the synchrotron radiation of the
optical flare is produced by the synchrotron radiation of the
electrons accelerated by the late internal shock, and the X-ray
flare is produced by the synchrotron-self-Compton process.
\footnote{However, in some cases the synchrotron emission may peak
in keV energy band, then the IC component would peak at GeV energy
band (unless the outflow is highly magnetized, as suggested by Fan,
Zhang \& Proga (2005a)), which may be detectable for the upcoming
GLAST. This possibility will be discussed in great detail
elsewhere.}
The other is the external forward-reverse shock model, in which the
optical flare is from the reverse shock emission and the X-ray
flare is attributed to the central engine activity. We show that
with proper parameters, a bright optical flare can appear in both
models.
In the forward-reverse shock scenario and with late time energy
injection, we have modeled the optical flare as well as the late
$J$-band afterglow numerically. The resulting shock parameters of
the forward/reverse shocks are $\epsilon_e^{\rm
r}=4\epsilon_e^{\rm f}=0.6$ and $\epsilon_B^{\rm
r}=400\epsilon_B^{\rm f}=0.4$, respectively. They are quite
similar to those found in GRB 990123 (Panaitescu \& Kumar 2001;
Fan et al. 2002), which is a natural result in view of the
similarity between these two GRBs and their optical flares (in the
rest frame of bursts).
As for the reverse shock emission, previous works usually assumed
uniform physical parameters, which greatly simplifies the
calculation. In reality, however, the observed gamma-ray light curve
is highly variable, so the underlying outflow is very likely variable
as well. We note that if the parameters are uniform, the flux rises
too quickly before the peak time to account for the observed plateau
(Kobayashi 2000; Bo\"{e}r et al. 2005). Here, for simplicity, we
divide the outflow into two parts. In a realistic case we expect the
outflow to be non-uniform, with the parameters distributed
continuously within the shell, but such a calculation is complicated.
Although both the ``late internal shock model'' and the reverse shock
emission can account for the observed optical flash and the X-ray
flare, we favor the ``late internal shock model'', since in this
model the optical flash and the X-ray flare have the same origin,
which provides a natural explanation of their temporal coincidence.
The late internal shock model requires that the central engine
restart after the prompt $\gamma-$ray burst phase. Recently two
models have been proposed for the production of late energy injection
(King et al. 2005; Perna et al. 2005; MacFadyen et al. 2005 proposed
another model to account for the X-ray flare in short GRBs). In the
reverse shock model, by contrast, the temporal coincidence of the
optical flash and the X-ray flare can only be regarded as fortuitous.
In addition, we note that in the
late internal shock model, the typical synchrotron radiation
frequency strongly depends on the parameters, such as $\Gamma$,
$\Gamma_{sh}$, $L_m$, $\delta t$, etc., and different burst sources
naturally have different parameters. We therefore expect that the
late internal shock model can produce not only optical or X-ray
flares but also flares at other wavelengths, such as the ultraviolet
or infrared. Meanwhile we
predict that the synchrotron-self-Compton process may produce
emission at high energy band ($\sim$ GeV).
Despite its high redshift, the optical afterglow of GRB 050904 is not
peculiar with respect to other GRBs. Recently Zhang et al. (2005)
and Nousek et al. (2005) analyzed the X-ray afterglows of many GRBs
and found that several features (X-ray flares, a flattening of the
light curve, a late-time break) occur in a good fraction of them.
These features are consistent with the afterglow of GRB 050904. In
view of these similarities, we suggest that the progenitor of
GRB 050904 may not be very different from those of other GRBs.
\acknowledgments We thank the referee for her/his helpful comments.
This work is supported by the National Natural Science Foundation
(grants 10225314 and 10233010) of China, and the National 973
Project on Fundamental Researches of China (NKBRSF G19990754).
\section{Introduction}
The information loss paradox of black holes \cite{Hawking:1974sw} is an unresolved problem in modern theoretical physics. This paradox implies a contradiction between general relativity (GR) and local quantum field theory (QFT) \cite{Yeom:2009zp,Almheiri:2012rt}. There have been attempts to solve the paradox within GR and QFT. For example, the `soft hair' proposed by Hawking, Perry and Strominger \cite{Hawking:2016msc} invokes the BMS symmetry within GR, but it was soon argued that the soft hair cannot carry information \cite{Bousso:2017dny,Giddings:2019hjc}. The `firewall' conjecture \cite{Almheiri:2012rt}, on the other hand, attempts to solve the paradox by violating the equivalence principle of GR near the black hole horizon, but was argued to be problematic \cite{Chen:2015gux,Hwang:2012nn}. In order to render the black hole evaporation process unitary, it may be necessary to invoke some unknown new mechanism or a hidden sector \cite{Chen:2021fve,Hotta:2017yzk,Ong:2016iwi} that lies outside proper GR and QFT. An implicit assumption is that such new elements must be deduced from \textit{quantum gravity}.
It is important to stress that \textit{entanglement entropy} is the physical quantity that measures the information flow from a black hole to its radiation \cite{Page:1993wv}. In this bipartite system of a black hole and its Hawking radiation, the radiation entropy will increase as the evaporation proceeds. On the other hand, the black hole entropy, known as the Bekenstein entropy, which is supposed to describe the black hole's microscopic degrees of freedom, will decrease as the black hole mass decreases. Then at some point during the evaporation process, these two entropies must coincide with each other. Page asserted that the entanglement entropy of the system is approximately the minimum of the two \cite{Page:1993df}. This curve of the evolution of the entanglement entropy, known as the Page curve, with the turning point, known as the Page time, occurring when the black hole area has shrunk to roughly half its initial value, plays an essential role in the investigation of the information loss problem.
In spite of Page's demonstration in a quantum mechanical system, attempts to derive the Page curve within GR have failed \cite{Hwang:2017yxp}, which is often regarded as a deficiency of the semi-classical perturbative calculations in GR. In particular, the decrease of the entanglement entropy after the Page time would require non-perturbative effects in quantum gravity beyond our current understanding of GR. However, a full-blown quantum gravity does not yet exist. One possible way to circumvent a quantum gravity calculation would be via a new classical saddle point, e.g., a soliton, deduced from a valid approximation of quantum gravity, where semi-classical tunneling around such a new saddle point, e.g., via instantons, might be able to capture some essence of non-perturbative quantum effects evaluated around the original saddle point. In this light, the Page curve would result from a transition between the saddles.
Based on this philosophy, there are two stages in black hole evaporation. First, before a modified Page time, GR and QFT work well and any hidden contributions are negligible. After the modified Page time, however, the hidden contributions are no longer negligible; in fact, they must be dominant at late times. Otherwise, the contributions from proper GR and QFT may erase the information.
Inspired by the above thinking, we have recently investigated \cite{Chen:2021} the information loss paradox via the Euclidean path integral (EPI) approach \cite{Hartle:1983ai}. The EPI formalism is widely regarded as one of the most promising candidates of quantum gravity that can describe non-perturbative processes \cite{Gibbons:1994cg}.
Though not the final theory, EPI manages to capture the essence of a full-blown quantum gravity theory by dealing with the \textit{entire} wave function, which includes not only perturbative but also non-perturbative gravity effects \cite{Chen:2018aij} via the Wheeler-DeWitt equation \cite{DeWitt:1967yk}. We demonstrated in our short essay \cite{Chen:2021} that the essence of the Page argument remains valid, albeit with a modified Page curve in which the Page time shifts significantly toward the late stage of black hole evaporation. One implication of this modified Page curve is that the entropy bound may be violated. In this paper, we provide a more detailed and self-contained development of our arguments and calculations to support our claims.
This paper is organized as follows. In Sec.~\ref{sec:ent}, we discuss the basics of the entanglement entropy and the physically essential conditions that explain the unitary Page curve. In Sec.~\ref{sec:euc}, we discuss the EPI formalism and show that these essential conditions are indeed realized by EPI. In Sec.~\ref{sec:mod}, we obtain the modified form of the Page curve; we also argue that this formula remains consistent within the validity regime of the EPI. Finally, in Sec.~\ref{sec:con}, we summarize our results and comment on possible future applications.
\section{\label{sec:ent}Entanglement entropy for unitary evaporation}
\subsection{Entanglement entropy and the Page curve}
To quantitatively describe the flow of information, the notion of the \textit{entanglement entropy} is found very useful \cite{Page:1993wv}.
Let us consider a system composed of two subsystems $A$ and $B$, and a pure state given by $| \Psi \rangle$.
The density matrix of the system is $\rho \equiv | \Psi \rangle \langle \Psi |$. The reduced density matrix for the subsystem $A$ is given by
tracing out the subsystem $B$, i.e., $\rho_{A} \equiv \mathrm{tr}_{B} \rho$.
Likewise, the reduced density matrix for the subsystem $B$ is given by
$\rho_{B} \equiv \mathrm{tr}_{A} \rho$.
Then the von Neumann entropy of the subsystem $A$ is $S_{B}(A) \equiv - \mathrm{tr}_{A} \rho_{A} \ln \rho_{A}$.
This is known as the entanglement entropy of the subsystem $A$, which equals that of its complement, $S_{A}(B)$, if the total state is pure.
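As a concrete illustration (ours, not from the text), the following Python sketch computes the reduced density matrices and von Neumann entropies for a two-qubit Bell state, verifying that $S_{B}(A)=S_{A}(B)=\ln 2$ for a maximally entangled pure state and that the entropy vanishes for a product state. The helper-function names are hypothetical, and only real amplitudes are handled for brevity.

```python
import math

def reduced_density_matrix(psi, trace_out):
    """Reduced density matrix of a two-qubit pure state psi = [c00, c01, c10, c11]
    with real amplitudes. trace_out='B' keeps the first qubit (subsystem A),
    trace_out='A' keeps the second qubit (subsystem B)."""
    rho = [[0.0, 0.0], [0.0, 0.0]]
    for a in range(2):
        for ap in range(2):
            for b in range(2):
                if trace_out == 'B':   # rho_A[a,a'] = sum_b psi[a,b] psi[a',b]
                    rho[a][ap] += psi[2 * a + b] * psi[2 * ap + b]
                else:                  # rho_B[a,a'] = sum_b psi[b,a] psi[b,a']
                    rho[a][ap] += psi[2 * b + a] * psi[2 * b + ap]
    return rho

def von_neumann_entropy(rho):
    """S = -tr(rho ln rho) for a real symmetric 2x2 density matrix."""
    p, q, s = rho[0][0], rho[0][1], rho[1][1]
    d = math.sqrt(((p - s) / 2) ** 2 + q ** 2)   # eigenvalue splitting
    lams = [(p + s) / 2 + d, (p + s) / 2 - d]
    return -sum(l * math.log(l) for l in lams if l > 1e-12)

# Bell state |Psi> = (|00> + |11>)/sqrt(2): maximally entangled
bell = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]
S_A = von_neumann_entropy(reduced_density_matrix(bell, 'B'))
S_B = von_neumann_entropy(reduced_density_matrix(bell, 'A'))
# For the pure global state, S(A) = S(B) = ln 2
```

For a product state such as $|00\rangle$ the same routine returns zero entropy, as it should.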
Let us consider quantum states of a black hole. We divide the system into a black hole, denoted by $A$, and the Hawking radiation in the exterior, denoted by $B$.
We assume that initially all degrees of freedom were in $A$, and, as time goes on, they are monotonically transmitted from $A$ to $B$ by the Hawking radiation.
According to the analysis by Page \cite{Page:1993df}, by assuming a typical pure state with a fixed number of total degrees of freedom in the beginning,
the entanglement entropy is almost the same as the Boltzmann entropy of the radiated particles.
However, when the entropy of the black hole has decreased to approximately half its original value, the entanglement entropy of the radiation begins to decrease (Fig.~\ref{fig:Page}).
This turning time is called the \textit{Page time}. If one further assumes that the Boltzmann entropy of the black hole is the same as the Bekenstein-Hawking entropy, one can compute the value of the Page time, which is approximately $\sim M^{3}$ (in Planck units),
where $M$ is the black hole mass; it is evident that even at this time, the black hole remains semi-classical.
\begin{figure}
\begin{center}
\includegraphics[scale=1]{PageCurve}
\caption{\label{fig:Page}The Page curve, where $S_{A}$ and $S_{B}$ are Boltzmann entropy of $A$ and $B$, respectively, and $S_{B}(A)$ denotes the entanglement entropy.}
\end{center}
\end{figure}
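The Page curve above can be sketched numerically. The Python toy model below is our own illustration, not a calculation from the text: it assumes $S_{\mathrm{BH}} = 4\pi M^{2}$, a mass-loss law $dM/dt = -c/M^{2}$, Page's approximation $S_{\mathrm{ent}} \simeq \min(S_{\mathrm{rad}}, S_{\mathrm{BH}})$, and the crude bookkeeping $S_{\mathrm{rad}} = 4\pi(M_{0}^{2} - M^{2})$; all function names are ours.

```python
import math

def page_time(M0, c=1.0):
    """Turning point S_rad = S_BH, i.e. when the horizon area halves (M = M0/sqrt(2))."""
    return (M0 ** 3 - (M0 / math.sqrt(2)) ** 3) / (3 * c)

def page_curve(M0, c=1.0, steps=1000):
    """Toy Page curve S_ent ~ min(S_rad, S_BH) over the evaporation history.
    Assumes S_BH = 4*pi*M^2, dM/dt = -c/M^2, and S_rad = 4*pi*(M0^2 - M^2)."""
    t_evap = M0 ** 3 / (3 * c)          # total evaporation time ~ M0^3
    curve = []
    for i in range(steps + 1):
        t = t_evap * i / steps
        M = max(M0 ** 3 - 3 * c * t, 0.0) ** (1 / 3)
        S_bh = 4 * math.pi * M ** 2
        S_rad = 4 * math.pi * (M0 ** 2 - M ** 2)
        curve.append((t, min(S_bh, S_rad)))
    return curve, t_evap

# For M0 = 30 (Planck units) the Page time is O(M0^3), and the black hole
# is still semi-classical there: M(t_Page) = M0/sqrt(2) >> 1.
curve, t_evap = page_curve(30.0)
t_page = page_time(30.0)
```

The curve rises, peaks at the Page time, and falls back to zero, reproducing the qualitative shape of Fig.~\ref{fig:Page}.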
\subsection{Wave function of the universe and superposition of states}
According to canonical quantum gravity \cite{DeWitt:1967yk}, the entire information of the universe is encoded in the wave function of
the universe $\Psi$, which is a functional of the three-geometry $h_{\mu\nu}$ and a matter field configuration, say $\phi$,
on top of $h_{\mu\nu}$.
This wave function should satisfy the quantum Hamiltonian constraint equation, the so-called Wheeler-DeWitt (WDW) equation.
Since this is the fundamental equation of quantum gravity, unitarity must be manifest.
One can assume that the in-state of the wave function of the universe is given by $| \mathrm{in} \rangle \equiv | h_{\mu\nu}^{(\mathrm{in})}, \phi^{(\mathrm{in})} \rangle$, where we assume that this in-state is a fixed classical configuration. This configuration will evolve to an out-state, say $| \mathrm{out} \rangle \equiv | h_{\mu\nu}^{(\mathrm{out})}, \phi^{(\mathrm{out})} \rangle$. The WDW equation will determine the wave function of the universe for a given initial condition.
In order to understand the black hole evaporation process, we need to choose a proper out-state.
It should be such that the observer at future infinity will see a semi-classical spacetime.
However, the final out-state is not necessarily a unique classical spacetime; rather,
it can be a \textit{superposition of states corresponding to each classical spacetime} \cite{Hartle:2015bna}. This follows from the observation that
the Hawking radiation can be interpreted as a result of quantum tunneling \cite{Chen:2018aij}.
Thus
\begin{eqnarray}
\left|\mathrm{out} \right\rangle = \sum_{i,\alpha} c_{\alpha,i} \left|\alpha;i \right\rangle,
\end{eqnarray}
where $|\alpha;i \rangle$ is a quantum state associated with a semi-classical state labeled by $i$, while
$\alpha$ represents microscopic quantum degrees of freedom in the semi-classical state,
and $c_{\alpha,i} = \langle \alpha;i|\mathrm{in}\rangle$ (Fig.~\ref{fig:histories}).
\begin{figure}
\begin{center}
\includegraphics[scale=1]{histories}
\caption{\label{fig:histories}The path integral from the in-state $| \mathrm{in} \rangle$ to the out-state $| \mathrm{out} \rangle$, where the out-state is a superposition of classical boundaries $\{ | i \rangle \}$.}
\end{center}
\end{figure}
\subsection{Essential conditions toward the unitary Page curve}
As mentioned above, a natural consequence of the picture based on canonical quantum gravity is
that the out-state is a superposition of semi-classical states.
Let us assume that one can categorize the semi-classical states into two distinct classes.
The first class is those with \textit{information-losing histories}, where the black hole keeps existing
and loses information by the Hawking radiation, and therefore the entanglement entropy monotonically
increases up to the end point.
The second class is those with \textit{information-preserving histories},
which appear as a result of quantum tunneling, where there is no black hole, hence no singularity nor event horizon.
Hence the entanglement entropy is zero for this class of histories.
When the information-losing histories dominate, the (semi-classical) observer outside the black hole cannot access the degrees of freedom inside the black hole, which leads to the increase of the entanglement entropy. After the Page time, if the out-state is dominated by information-preserving histories, the observer can access all the degrees of freedom that would not have been measured in the information-losing histories. Hence, the entanglement entropy will vanish in the end.
To summarize, in order to obtain a unitary Page curve, what one needs to justify
is the following two essential conditions (Fig.~\ref{fig:conceptual4-1}):
\begin{itemize}
\item[--] 1. \textit{Multi-history condition}: existence of multiple information-preserving and non-preserving histories;
\item[--] 2. \textit{Late-time dominance condition}: dominance of the information-preserving history at late-time.
\end{itemize}
For simplicity, let us assume that there are only two histories, one information-losing
and the other information-preserving.
Then one can approximately evaluate the entanglement entropy as
$S_{\mathrm{ent}} \simeq p_{1} S_{1} + p_{2} S_{2}$, where $1$ denotes the information-losing history,
$2$ denotes the information-preserving history,
$p_{i}$ and $S_{i}$ ($i=1,\,2$) are the probability and the entanglement entropy for each history,
respectively.
In the beginning, the history $1$ dominates, and hence explains the increasing phase of
the entanglement entropy.
However, at late times, the history $2$ dominates, and since $S_{2}=0$,
the total entanglement entropy eventually decreases to zero.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{conceptual4-1}
\caption{\label{fig:conceptual4-1}Conceptual figure for the interpretation. $h_{1}$ is the information-losing history, while $h_{2}^{(1,2)}$ are information-preserving histories. At any time, histories can branch; the tunneling probability must dominate at late times.}
\end{center}
\end{figure}
Therefore, if these two conditions are satisfied, then the unitary Page curve would be explained, because the entanglement entropy eventually becomes zero again at the end. We will demonstrate that EPI can indeed deliver such a conclusion.
\subsection{Entanglement entropy with multiple histories}
A generic quantum state with two classical histories can be described as follows: $| \psi \rangle = c_{1} | \psi_{1} \rangle + c_{2} | \psi_{2} \rangle$, where $1$ and $2$ denote the two different histories and $c_{1,2}$ are complex coefficients. The total density matrix $\rho$ is
\begin{eqnarray}
\rho = \begin{pmatrix} |c_{1}|^{2} & c_{1}^{*} c_{2} \\ c_{1} c_{2}^{*} & |c_{2}|^{2} \end{pmatrix}.
\end{eqnarray}
We assume that the two histories are semi-classical and that the off-diagonal terms become subdominant; this is in accordance with the decoherence condition. With this assumption, one can write
\begin{eqnarray}
\rho \simeq \begin{pmatrix} p_{1} \rho_1 & 0 \\ 0 & p_{2}\rho_2 \end{pmatrix} \left( I + e^{-S}\delta \right),
\end{eqnarray}
where $p_{1} = |c_{1}|^{2}$, $p_{2} = |c_{2}|^{2}$, $I$ is the identity matrix, and $\delta$ represents the off-diagonal components, which are suppressed by approximately $e^{-S}$, the transition amplitude between the two histories. Although we have already assumed decoherence between the two histories, we will see that this still approximately captures the physical essentials of the unitary evolution of the entire wave function.
Now let us assume that one can split the total system into two subsystems $A$ and $B$. Physically, $A$ corresponds to the subsystem outside the horizon, while $B$ is that inside the horizon. (If there is no horizon, then $B$ is empty.) By tracing out the subsystem $B$, the reduced density matrix $\rho_A \equiv \mathrm{tr}_{B} \rho$ can be written as
\begin{eqnarray}
\rho_A \simeq \begin{pmatrix} p_{1} \rho_{1,A} & 0 \\ 0 & p_{2}\rho_{2,A} \end{pmatrix},
\end{eqnarray}
where we can neglect the off-diagonal term $\delta$ due to the decoherence. This gives the entanglement entropy $S_{\mathrm{ent}}(A)$:
\begin{eqnarray}
S_{\mathrm{ent}}\big(A \big) &=& -\text{tr}\big( \rho_A \log \rho_A\big) \\
&=& p_{1} S_{1}\big(A\big) + p_{2} S_{2} \big(A\big) - p_{1} \log p_{1} - p_{2} \log p_{2},
\end{eqnarray}
where $S_{\nu}(A)\equiv -\text{tr} (\rho_{\nu,A}\log \rho_{\nu,A})$ is the entanglement entropy evaluated in each history ($\nu=1,2$). Note that the last two terms are negligible if the number of degrees of freedom of the system is much greater than $2$. Therefore, we obtain the following approximate form of the entanglement entropy
\begin{eqnarray}
S_{\mathrm{ent}} \simeq p_{1} S_{1} + p_{2} S_{2},
\label{enentform}
\end{eqnarray}
as advertised in the previous subsection.
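The suppression of the mixing terms can be checked explicitly. In the Python sketch below (our own illustration) history $1$ is taken to be maximally mixed over $d$ states, so $S_{1}=\ln d$, and history $2$ is pure, $S_{2}=0$; the exact entropy of the block-diagonal $\rho_A$ then reduces to $p_{1}S_{1}+p_{2}S_{2}$ up to the $O(1)$ terms $-p_{1}\ln p_{1}-p_{2}\ln p_{2}$.

```python
import math

def block_entropy(p1, p2, d):
    """Entanglement entropy of rho_A = diag(p1*rho1, p2*rho2), where rho1 is
    maximally mixed over d states (S1 = ln d) and rho2 is pure (S2 = 0)."""
    eigs = [p1 / d] * d + [p2]      # eigenvalues of the block-diagonal matrix
    return -sum(l * math.log(l) for l in eigs if l > 0)

p1, p2, d = 0.7, 0.3, 10 ** 6
S_exact = block_entropy(p1, p2, d)
S_formula = p1 * math.log(d)        # p1*S1 + p2*S2 with S1 = ln d, S2 = 0
# The mixing terms -p1 ln p1 - p2 ln p2 are O(1), negligible against S1 ~ 14
```

The relative difference between `S_exact` and `S_formula` shrinks as $d$ grows, which is the regime relevant for a macroscopic black hole.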
\section{\label{sec:euc} Semi-classical Euclidean path integrals}
To confirm these essential conditions, one needs to compute the transition element
$\langle i | \mathrm{in} \rangle$, where $|i \rangle$ is a state representing a classical spacetime.
Here we have omitted the label $\alpha$ for the microscopic degrees of freedom
in each classical spacetime for notational simplicity.
The probability of each history is given by $p_{i} = |\langle i | \mathrm{in} \rangle|^{2}$.
If we recover the label $\alpha$, then we have $p_i=\sum_\alpha|\langle\alpha; i | \mathrm{in} \rangle|^{2}$.
Although we do not yet have a final formulation for quantum gravity, at the semi-classical level
the Euclidean path integral can provide a good approximation that captures the essence of a bona fide full-blown quantum gravity theory \cite{Hartle:1983ai}:
\begin{eqnarray}
\langle i | \mathrm{in} \rangle = \left\langle \{h_{\mu\nu}^{(i)}, \phi^{(i)}\} | \{h_{\mu\nu}^{(\mathrm{in})}, \phi^{(\mathrm{in})}\} \right\rangle = \int \mathcal{D} g \mathcal{D}\phi \; e^{-S_{\mathrm{E}}[g_{\mu\nu}, \phi]},
\end{eqnarray}
where $S_{\mathrm{E}}$ is the Euclidean action and the integral is over all configurations of four-geometries $g_{\mu\nu}$ and matter fields $\phi$ that match those on the two different spacelike surfaces
$|\{h_{\mu\nu}^{(\mathrm{in})}, \phi^{(\mathrm{in})}\} \rangle$ and $| \{h_{\mu\nu}^{(i)}, \phi^{(i)}\} \rangle$.
This path integral can be well approximated by summing over on-shell solutions,
i.e., either Lorentzian classical solutions or Euclidean instantons.
\subsection{Hawking radiation as instantons}
We consider the following model:
\begin{eqnarray}
S_{\mathrm{E}} = - \int \sqrt{+g} d^{4}x \left( \frac{\mathcal{R}}{16\pi} - \frac{1}{2} \left(\partial \phi\right)^{2} \right) - \int_{\partial \mathcal{M}} \frac{\mathcal{K} - \mathcal{K}_{o}}{8\pi} \sqrt{+h} d^{3}x,
\end{eqnarray}
where $\mathcal{R}$ is the Ricci scalar, $\phi$ is a massless scalar field, and $\mathcal{K}$ and $\mathcal{K}_{o}$ are the Gibbons-Hawking boundary terms at infinity of the solution and of the Minkowski background, respectively.
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{conc3-2}
\caption{\label{fig:conc3}Typical instantons for Hawking radiation \cite{Chen:2018aij}.}
\end{center}
\end{figure}
Now we consider the path-integral from an in-state that includes a black hole to a possible out-state. If we approximate the in-state as a pure Schwarzschild geometry, then the most typical instanton that connects the past infinity and the future infinity will be the Euclidean Schwarzschild geometry. Fig.~\ref{fig:conc3} shows this geometry conceptually. For a given constant $t$, the hypersurface is analytically continued to the Euclidean manifold. The past (lower) part has no scalar field and this mimics a vacuum in-state, while the future (upper) part has a non-trivial scalar field configuration. Also, note that a non-trivial scalar field configuration must satisfy the equation of motion on top of the Euclidean-Lorentzian manifold configuration.
Let us examine the scalar field perturbations near the horizon in more detail. As we magnify the region around the Einstein-Rosen bridge, we can approximate a solution as a superposition of in-going and out-going modes \cite{Chen:2018aij}. In order to satisfy the classicality (i.e., reality) condition at future infinity, we must impose the condition that there are only real-valued out-going modes in region (III). This implies that the solution must be complex-valued in regions (I) and (II). In addition, since the out-going scalar field carries energy $\delta M$, the green colored region surrounding the event horizon has mass $M' = M - \delta M$. By choosing the Euclidean time period $\tau_{\mathrm{T}} = 8\pi M$ to cancel the boundary term at infinity, the horizon becomes a cusp singularity. Nevertheless, it can be shown that the cusp does not cause a problem, as it can be appropriately regularized, with the result independent of the regularization \cite{Gregory:2013hja}.
In \cite{Chen:2018aij}, it was argued that free scalar field perturbations on the Euclidean-Lorentzian manifold can be identified as Hawking radiation.
This is justified from computing the probability of such configurations.
For each on-shell history, the tunneling rate is given by $\Gamma \propto e^{-B}$, where
\begin{eqnarray}
B = S_{\mathrm{E}}\left(\mathrm{instanton}\right) - S_{\mathrm{E}}\left(\mathrm{background}\right).
\end{eqnarray}
After regularizing the cusp \cite{Gregory:2013hja}, we obtain
\begin{eqnarray}
B = 4\pi \left( M^{2} -M'^{2} \right),
\label{hawkingrad}
\end{eqnarray}
where $M$ and $M'$ are the mass of the initial black hole and that of the final black hole, respectively.\footnote{We note that $S_{\mathrm{E}}\left(\mathrm{instanton}\right)$ is defined as that for the solution with half of the period in \cite{Chen:2018aij} so that it represents the amplitude of the tunneling wave function. Here, in accordance with the standard convention, we define the Euclidean action to be the one with the whole period.}
For a large black hole with $M\gg 1$ and $\omega \ll M$, we have
\begin{eqnarray}
B = 8\pi M \omega\,.
\end{eqnarray}
This is perfectly consistent with Hawking radiation, where the Hawking temperature is $T_{\mathrm{H}}=(8\pi M)^{-1}$. In the other extreme, if we choose $\delta M = M$, the black hole should make a transition to a Minkowski background. In general, such a process toward a trivial geometry exists as long as one assumes a massless scalar field. The only price to pay is that the probability is exponentially suppressed for $\delta M=M\gg 1$.
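A quick numerical check of Eq.~(\ref{hawkingrad}) and its thermal limit can be done in Python (our own illustration, in Planck units; the function names are ours):

```python
import math

def B_exact(M, omega):
    """Instanton exponent B = 4*pi*(M^2 - M'^2) with M' = M - omega."""
    return 4 * math.pi * (M ** 2 - (M - omega) ** 2)

def B_thermal(M, omega):
    """Leading Boltzmann exponent omega/T_H with T_H = 1/(8*pi*M)."""
    return 8 * math.pi * M * omega

# Large black hole, soft quantum: the two exponents agree to O(omega/M)
M, omega = 1.0e4, 1.0e-2
rel_err = abs(B_exact(M, omega) - B_thermal(M, omega)) / B_exact(M, omega)
# analytically rel_err = omega / (2*M - omega), tiny for omega << M

# Extreme case omega = M: transition to flat space, suppressed by exp(-4*pi*M^2)
B_full = B_exact(M, M)
```

The exact relation $B = 8\pi M\omega - 4\pi\omega^{2}$ shows that the thermal formula is the first term of the expansion, while $\delta M = M$ reproduces the exponent $4\pi M^{2}$ of the transition to a trivial geometry.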
\subsection{Thin-shell toy model for tunneling to trivial geometry}
The existence of the instanton that describes a transition process from a black hole to a flat space is evident from the discussion of the previous subsection. However, in order to obtain the solution faithfully, we need to solve the equations for time-dependent instantons with metric back-reactions fully taken into account. This means we need to solve the highly nonlinear, coupled partial differential equations, which is beyond the scope of the present paper. Nevertheless, we expect that the result is not qualitatively different from the one extrapolated from the case of $\delta M\ll M$.
To support our expectation, as a simple toy model for the tunneling of a black hole to a flat space, we consider a thin-shell model \cite{Israel:1966rt}. Namely, we mimic the Hawking radiation by a thin-shell nucleated in the black hole geometry. After nucleation, we assume that it carries all the energy as it moves away to future null infinity. Hence the spacetime inside the shell will be flat while outside the shell will be the Schwarzschild geometry with the mass equal to that of the black hole prior to the tunneling \cite{Chen:2018aij}.
To be specific, we consider a spherically symmetric spacetime with the metric,
\begin{eqnarray}
ds_{\pm}^{2}= - f_{\pm}(R) dT^{2} + f_{\pm}^{-1}(R) dR^{2} + R^{2} d\Omega^{2}\,,
\end{eqnarray}
with a thin-shell located at $R=r$, whose intrinsic metric is given by
\begin{eqnarray}
ds^{2} = - dt^{2} + r^{2}(t) d\Omega^{2},
\end{eqnarray}
where the region outside (inside) the shell $R>r$ ($R<r$) is denoted by $+$ ($-$).
We impose the metric ansatz for outside and inside the shell, $f_{\pm}(R) = 1 - 2M_{\pm}/R$, where $M_{+}$ and $M_{-}$ are the mass parameters of each region. We assume $M_{+} > M_{-} = 0$, hence there is no black hole inside the shell.
The equation of motion of the thin-shell is determined by the junction equation \cite{Israel:1966rt}:
\begin{eqnarray}\label{eq:junc}
\epsilon_{-} \sqrt{\dot{r}^{2}+f_{-}(r)} - \epsilon_{+} \sqrt{\dot{r}^{2}+f_{+}(r)} = 4\pi r \sigma(r)\,,
\end{eqnarray}
where we impose $\epsilon_{\pm} = + 1$ so that the outward normal vector of the shell has the proper expansion (i.e., the extrinsic curvature).
The above equation can be re-expressed as
\begin{eqnarray}
\dot{r}^{2} + V_{\mathrm{eff}} (r) = 0,
\end{eqnarray}
where
\begin{eqnarray}
V_{\mathrm{eff}} (r) = f_{+} - \frac{\left( f_{-} - f_{+} - 16 \pi^{2} \sigma^{2} r^{2} \right)^{2}}{64 \pi^{2} \sigma^{2} r^{2}}.
\end{eqnarray}
Here, $\sigma(r)$ is the tension of the shell, which satisfies the energy conservation equation,
\begin{eqnarray}
\frac{d\sigma}{dr} = -\frac{2\sigma\left(1 + w\right)}{r}\,,
\end{eqnarray}
where $w$ is the equation of state of the shell, and we require $w \geq -1$ to satisfy the null energy condition. In general, the tension may be assumed to have the form \cite{Chen:2015lbp},
\begin{eqnarray}
\sigma(r) = \sum_{i=1}^{n} \frac{\sigma_{i}}{r^{2(1+w_{i})}}\,,
\end{eqnarray}
where $n$ is an arbitrary positive integer, and $\sigma_{i}$ and $w_{i}$ are constants.
Now we look for a static solution. In the Euclidean time, a static instanton may be regarded as a thermal instanton. The condition for the existence of such a solution is the existence of a radius $r_{0}$ such that $V_{\mathrm{eff}}(r_{0}) = V'_{\mathrm{eff}}(r_{0}) = 0$. This is satisfied, for example, if one chooses $w_{1} = 0.5$, $w_{2} = 1$, and appropriately tunes $\sigma_{1}$ and $\sigma_2$, as depicted in Fig.~\ref{fig:Veff}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.9]{Veff}
\caption{\label{fig:Veff}
The effective potential $V_{\mathrm{eff}}(r)$ that allows a static shell solution.
The parameters are $w_1=0.5$, $\sigma_{1} \simeq 340$ and $w_2=1$, $\sigma_{2} = 10$.
The black hole mass is set to $M_{+} = 10$.
}
\end{center}
\end{figure}
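The shape of this potential is easy to reproduce. The Python sketch below (our own illustration, using the quoted parameters $M_{+}=10$, $w_{1}=0.5$, $\sigma_{1}\simeq 340$, $w_{2}=1$, $\sigma_{2}=10$) evaluates $V_{\mathrm{eff}}(r)$ and confirms the qualitative structure: classically allowed regions ($V_{\mathrm{eff}}<0$) near the black hole and at large radius, separated by a barrier. The exact degeneracy $V_{\mathrm{eff}}(r_{0})=V'_{\mathrm{eff}}(r_{0})=0$ requires fine-tuning $\sigma_{1}$, which we do not attempt here.

```python
import math

M_PLUS = 10.0                 # black hole mass outside the shell (M_+ = 10)
SIGMA1, W1 = 340.0, 0.5       # approximately tuned values quoted in the text
SIGMA2, W2 = 10.0, 1.0

def sigma(r):
    """Two-component tension sigma(r) = sum_i sigma_i / r^(2(1+w_i))."""
    return SIGMA1 / r ** (2 * (1 + W1)) + SIGMA2 / r ** (2 * (1 + W2))

def V_eff(r):
    """Effective potential for the shell radius: rdot^2 + V_eff(r) = 0.
    Motion is classically allowed where V_eff <= 0."""
    f_plus = 1.0 - 2.0 * M_PLUS / r
    f_minus = 1.0                          # flat interior, M_- = 0
    s2r2 = (sigma(r) * r) ** 2             # sigma^2 r^2
    return f_plus - (f_minus - f_plus - 16 * math.pi ** 2 * s2r2) ** 2 \
                    / (64 * math.pi ** 2 * s2r2)

# Sign pattern: allowed near the horizon and at large radius, with a
# barrier (V_eff > 0) in between where a tuned static shell can sit.
samples = {r: V_eff(r) for r in (45.0, 100.0, 600.0)}
```

Scanning `V_eff` over $r$ reproduces the barrier shape of Fig.~\ref{fig:Veff}; tuning $\sigma_{1}$ slightly moves the barrier until it just touches zero, which is the static instanton condition.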
Using the static shell instanton, we may now compute the tunneling probability of the black hole geometry to the one with no black hole as shown in Fig.~\ref{fig:conceptual2-3}, where an evaporating black hole geometry (left panel) tunnels to the geometry of an expanding shell with no black hole (right panel) at the hypersurface denoted by $t$. In the language of the previous section, the left panel corresponds to the information-losing history, $h_1$, and the right
to the information-preserving history, $h_2$ (see Fig. \ref{fig:conceptual4-1}).
Under normal circumstances the information-preserving geometry is exponentially suppressed.
However, we will show that it becomes dominant at late times.
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{conceptual2-3}
\caption{\label{fig:conceptual2-3}Left: the causal structure of the usual semi-classical black hole, where the green curve is the trajectory of the collapsing matter, the red curve is the apparent horizon, and the blue line is the event horizon. Right: the causal structure after a quantum tunneling at the time slice $t$. After the tunneling, matter or information (red curve) is emitted and the black hole structure disappears.}
\end{center}
\end{figure}
\subsection{Tunneling probability}
The Euclidean action is given by
\begin{eqnarray}
S_{\mathrm{E}} = - \int \frac{\mathcal{R}}{16\pi} \sqrt{+g} d^{4}x + \sigma \int_{\Sigma} \sqrt{+h} d^{3}x - \int_{\partial \mathcal{M}} \frac{\mathcal{K} - \mathcal{K}_{o}}{8\pi} \sqrt{+h} d^{3}x\,,\label{eq:ac}
\end{eqnarray}
where $\mathcal{R}$ is the Ricci scalar, $\Sigma$ is the hypersurface of the thin-shell, and $\mathcal{K}$ and $\mathcal{K}_{o}$ are the Gibbons-Hawking boundary terms at infinity for the solution and the Minkowski background, respectively \cite{Garriga:2004nm}.
Now we evaluate $S_E$ for the static thin-shell instanton with the Euclidean time period $\Delta T_E=8\pi M_{+}$.
By using the on-shell condition, we can simplify it as
\begin{eqnarray}
S_{\mathrm{E}}(\mbox{thin-shell})
=\left( \frac{\sigma}{2} + \lambda \right) \int_{\Sigma} \sqrt{+h} d^{3}x + \left( \mathrm{boundary\; term} \right),
\end{eqnarray}
where $\lambda$ is the pressure of the shell. From the energy conservation relation, we have
\begin{eqnarray}\label{eq:onshellaction}
\lambda(r_{0}) = - \left( \sigma(r_{0}) + \frac{r_{0}}{2} \sigma'(r_{0}) \right)\,.
\end{eqnarray}
In addition, assuming $f_{-}(r_{0}) = 1$ and $f_{+}(r_{0}) = 1 - 2M_{+}/r_{0}$, we can derive an important relation by taking the $r$-derivative of (\ref{eq:junc}),
\begin{eqnarray}
M_{+} = - \sqrt{f_{+}} 4\pi r_{0}^{2} \left( \sigma(r_{0}) + r_{0} \sigma'(r_{0}) \right).
\end{eqnarray}
Noting that Eq.~(\ref{eq:onshellaction}) implies $-(\sigma/2+\lambda)=(\sigma+r\sigma')/2$ for $r=r_0$, and using the above relation, we find
\begin{eqnarray}
S_{\mathrm{E}}(\mbox{thin-shell}) =4\pi M_+^2 + \left( \mathrm{boundary\; term} \right)\,.
\end{eqnarray}
Since the action $S_E(\mbox{background})$ of the Euclidean Schwarzschild metric is purely given by its boundary term at infinity, which coincides with the boundary term of $S_E(\mbox{thin-shell})$, we obtain
\begin{eqnarray}
B=S_{\mathrm{E}}(\mbox{thin-shell})-S_{\mathrm{E}}(\mbox{background})=4\pi M_+^2\,.
\end{eqnarray}
We note that this agrees with the extrapolation of Eq.~(\ref{hawkingrad}) to the case of $M'=M-\delta M=0$.
Here, we mention a point that has an important implication for our later discussion. In general, there is no freedom in the choice of the Euclidean time period of the background (the Schwarzschild metric in our case); it is fixed. On the other hand, the Euclidean time period of the solution is not unique, but can be any multiple of the fundamental period determined by the background geometry. Therefore, we should take into account all possible multiple-period instanton configurations. Then for an instanton with $n$ periods, we obtain
\begin{eqnarray}
B_n &=& n \Bigl[ (\mbox{bulk action})
+ (\mbox{boundary term}) \Bigr] - (\mbox{boundary term})
\nonumber\\
&=& (2n-1)S\,,
\end{eqnarray}
where $S = 4\pi M_{+}^{2}$ and $n \geq 1$. The total tunneling probability is therefore
\begin{eqnarray}
\Gamma=\sum_{n=1}^{\infty} e^{-S(2n-1)} = \frac{1}{e^{S} - e^{-S}}\,.
\label{Gamma}
\end{eqnarray}
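As a quick numerical sanity check (illustrative only, not part of the derivation), the geometric series in Eq.~(\ref{Gamma}) can be summed directly and compared with the closed form:

```python
import math

def gamma_partial(S, n_max):
    """Partial sum of the multi-instanton series sum_{n>=1} exp(-S*(2n-1))."""
    return sum(math.exp(-S * (2 * n - 1)) for n in range(1, n_max + 1))

def gamma_closed(S):
    """Closed form 1/(e^S - e^{-S}) obtained by summing the geometric series."""
    return 1.0 / (math.exp(S) - math.exp(-S))

for S in (0.5, 1.0, 4.0):
    assert abs(gamma_partial(S, 200) - gamma_closed(S)) < 1e-12
```

For $S\gg1$ the $n=1$ term dominates and $\Gamma\simeq e^{-S}$, as stated in the text.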
Evidently, $n = 1$ is the dominant contribution and one recovers the result $\Gamma=e^{-S}$ for $S\gg1$. However, the subdominant contributions may become important for small $S$. This has an important consequence when we consider the Page curve.
\begin{figure}
\begin{center}
\includegraphics[scale=1]{prob}
\caption{\label{fig:prob}Probabilities of the semi-classical history ($p_{1}$, blue curve) and of histories with thermal-shell emission ($p_{2}$, black curve) as functions of the entropy $S$. For large $S$, $p_{1}$ dominates; however, the two probabilities cross (a ``golden cross''), and eventually $p_{2}$ dominates.}
\end{center}
\end{figure}
For simplicity, we consider only two classes of histories, where one is the semi-classical black hole which gradually loses its mass due to the Hawking radiation (hence a class composed of a single history), and the other is a family of spacetimes with the thin-shell emitted to infinity as a result of the tunneling. It may be noted that, since the tunneling can happen at any time, histories will continue to branch out. Eventually, infinitely many histories of the second class appear in the out-state (Fig.~\ref{fig:conceptual4-1}) \cite{Hartle:2015bna,Chen:2015lbp}.
Let us denote the probability of the semi-classical black hole history by $p_{1}$ and
that of the information-preserving histories by $p_{2}$. From Eq.~(\ref{Gamma}), they are estimated as
\begin{eqnarray}
p_{1} = \frac{e^{S} - e^{-S}}{1 + e^{S} - e^{-S}}\,,\qquad
p_{2} = \frac{1}{1 + e^{S} - e^{-S}}\,.
\end{eqnarray}
Interestingly, $p_2$ becomes greater than $p_1$ at $S \simeq 1$
(Fig.~\ref{fig:prob}).
We note that this is true even if one ignores the effect of the multiple Euclidean time-period instantons, though the value of $S$ at which $p_2$ exceeds $p_1$ would be somewhat different.
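The crossing point can be made precise: $p_1=p_2$ exactly when $e^{S}-e^{-S}=1$, i.e., $e^{S}=(1+\sqrt{5})/2$, so the curves cross at $S=\log[(1+\sqrt{5})/2]\simeq0.48$, an $\mathcal{O}(1)$ value. A short numerical illustration (for demonstration purposes only):

```python
import math

def p1(S):
    """Probability of the semi-classical history."""
    x = math.exp(S) - math.exp(-S)  # = 2 sinh(S)
    return x / (1.0 + x)

def p2(S):
    """Total probability of the thin-shell (information-preserving) histories."""
    x = math.exp(S) - math.exp(-S)
    return 1.0 / (1.0 + x)

# p1 = p2 when e^S - e^{-S} = 1, i.e. when e^S equals the golden ratio.
S_cross = math.log((1.0 + math.sqrt(5.0)) / 2.0)
assert abs(p1(S_cross) - p2(S_cross)) < 1e-12
assert p1(5.0) > p2(5.0)  # large S: the semi-classical history dominates
assert p2(0.1) > p1(0.1)  # small S: the tunneling histories dominate
```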
\section{\label{sec:mod}Modified Page curve}
\begin{figure}
\begin{center}
\includegraphics[scale=1]{ent4}
\caption{\label{fig:ent}Entanglement entropy, $S_{\mathrm{ent}}/S_{0}$, vs. entropy of the emitted radiation, $S_{\mathrm{rad}}/S_{0}\equiv 1 - S/S_{0}$. As a demonstration, we display curves with different initial black hole entropy (therefore different initial mass), $S_{0} = 10$ (black), $30$ (blue), and $50$ (red), to illustrate the tendency. The purple dashed curve is the conventional Page curve. Note that our modified Page curve deviates from the conventional one. The thin red dashed curve marks the location of the modified Page time, i.e., $dS_{\mathrm{ent}}/dS_{\mathrm{rad}} = 0$. The more massive the initial black hole, the more skewed the modified Page curve, with the modified Page time shifted more significantly towards late times.}
\end{center}
\end{figure}
\subsection{Page curve}
Now we are ready to evaluate the Page curve. Using the formula for the expectation value of the entanglement entropy (\ref{enentform}), and noting that the entanglement entropy vanishes for the information-preserving histories, we obtain
\begin{eqnarray}
S_{\mathrm{ent}} &=& \left( S_{0} - S \right) \times p_{1} + 0 \times p_{2}
\nonumber\\
&=& \left( S_{0} - S \right) \left(\frac{e^{S} - e^{-S}}{1 + e^{S} - e^{-S}}\right),
\end{eqnarray}
where $S_{0}$ is the initial entropy of the black hole; $S_0=4\pi M_0^2$ where $M_0$ is the initial mass. Here we have assumed that the entanglement entropy for the semi-classical black hole monotonically increases as its mass decreases due to the Hawking radiation. Namely,
the entropy of the emitted radiation $S_{\rm rad}=S_0-S$ exactly accounts for the entanglement entropy. The entanglement entropy as a function of $S_{\rm rad}$ is depicted in Fig.~\ref{fig:ent}. It is clear that the entanglement entropy first increases, reaches a maximum, and eventually vanishes as the black hole mass decreases to zero. This result is consistent with the notion of unitary evolution.
We have thus successfully recovered the most important property of the Page curve, i.e., the preservation of unitarity, albeit at the price that the curve is significantly modified.
In particular, an important and interesting observation is that the turning point, i.e., the Page time, lies far beyond the moment when the Bekenstein-Hawking entropy has decreased to half its initial value. In our picture, it occurs when
$S\sim \log S_{0}$, instead of at $S\sim S_0/2$ as for the conventional Page time.
This implies that the equivalence of the Bekenstein-Hawking areal entropy and the Boltzmann entropy is violated. However, one should realize that this equivalence is not a consequence of any fundamental principle. In fact, an intriguing counterexample has recently been pointed out
in \cite{Buoninfante:2021ijy}, which is perfectly consistent with both local field theory and
semi-classical general relativity.
\subsection{Validity of the semi-classical approximation}
Throughout this paper, we have assumed the validity of the semi-classical approximation for quantum gravity. It is therefore important to check if it remains valid in the regime of our interest.
The crucial question is whether the modified Page curve we obtained lies within the regime of validity of our semi-classical approximation. This can be checked by inspecting the maximum of the curve. By taking the derivative of $S_{\rm ent}$ with respect to $S$, we obtain the modified Page time at which the entanglement entropy is maximized, i.e., the time at which $dS_{\mathrm{ent}}/dS = 0$ is satisfied. This gives an implicit equation
for the value of $S$ at the maximum as a function of $S_0$,
\begin{eqnarray}
S_{0} = S_m + \left(1 + 2 \sinh S_m \right) \tanh S_m,
\end{eqnarray}
where $S_m$ is the areal entropy at the Page time (Fig.~\ref{fig:S0}). For $S_0\gg1$,
one approximately obtains $S_m \simeq \log S_{0}$.
This result implies that as long as the initial black hole entropy is sufficiently large ($S_{0} \gg 1$), the areal entropy at the Page time can be sufficiently greater than the Planck scale ($\log S_{0} \gg 1$). Therefore, the turning point of the Page curve occurs while the
semi-classical approximation is still perfectly valid, provided that the initial black hole is macroscopic. (For example, even for a black hole of a fairly small mass $M_0\sim10^5$g, $S_0\sim 10^{20}$ and hence $S_m\sim 46$.)
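The implicit relation can be inverted numerically; the sketch below (illustrative only, using simple bisection) confirms both $S_m\simeq\log S_0$ for large $S_0$ and the quoted example $S_0\sim10^{20}\Rightarrow S_m\sim46$:

```python
import math

def S0_of_Sm(Sm):
    """Right-hand side of the implicit relation S0 = Sm + (1 + 2 sinh Sm) tanh Sm."""
    return Sm + (1.0 + 2.0 * math.sinh(Sm)) * math.tanh(Sm)

def Sm_of_S0(S0, lo=0.0, hi=60.0, iters=200):
    """Invert the relation by bisection; S0_of_Sm is monotonically increasing."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if S0_of_Sm(mid) < S0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Sm = Sm_of_S0(1e20)
assert abs(Sm - math.log(1e20)) < 0.1  # S_m ~ log(S_0) ~ 46 for S_0 = 10^20
```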
\begin{figure}
\begin{center}
\includegraphics[scale=1]{S0}
\caption{\label{fig:S0}The maximum entanglement entropy of the modified Page curve, $S_m$, vs. the initial areal entropy $S_{0}$.}
\end{center}
\end{figure}
\section{\label{sec:con}Conclusion}
\subsection{Recent developments based on string theory}
It is interesting to compare our approach with the recent developments based on string theory, where the Page curve was semi-classically reproduced by evaluating quantum extremal surfaces (QES). It was found~\cite{Almheiri:2019psf,Almheiri:2019hni}, in two-dimensional gravity, that a QES saddle without an \textit{island} is dominant before the Page time, which induces the increase of the entanglement entropy. After the Page time, the QES is dominated by the other saddle, which has an island, and this saddle results in the decrease of the entanglement entropy. It was also argued in~\cite{Almheiri:2020cfm} that such a role played by the island can reproduce the Page curve for higher-dimensional black holes as well.
The string-based islands approach and ours share two essential features. First, there are two or more contributions to the evolution of the entanglement entropy, one dominant at early times and the other at late times. Second, the final entanglement entropy is dominated by the late-time contribution. That is, the multi-history condition and the late-time dominance condition in our Euclidean path integral approach are analogous in spirit to the QES computations for black holes.
However, the physical interpretation of the islands conjecture is still not very clear. One reason is that the entanglement entropy computed via QES is based on the density matrix rather than on the quantum state, the latter being the standard quantum field theory approach and the one we have followed. It is therefore natural to ask: what is the implication of the islands or the replica wormholes \cite{Almheiri:2019qdq,Penington:2019kki} for the more orthodox, state-level path integral in Lorentzian signature, and vice versa?
Although our interpretation of the thin-shell tunneling shares common features with the islands conjecture and the replica wormholes, there exists a clear difference. In spite of significant progress on the islands conjecture, the classical geometry of the new QES saddle is still not well understood. For example, the island is an extremum of the generalized entropy rather than of the original path integral itself. Moreover, the replica wormhole is based on the Euclidean path integral, whereas our approach deals with the branching of semi-classical histories along the Lorentzian evolution. In this sense, our interpretation of thin-shell tunneling may be more closely connected with the recent studies of real-time gravitational replica wormholes \cite{Goto:2020wnk,Colin-Ellerin:2020mva,Colin-Ellerin:2021jev} and of the generalization of the islands conjecture and the replica wormholes with baby universes~\cite{Marolf:2020xie,Giddings:2020yes,Marolf:2020rpm}. In \cite{Chen:2018aij}, it was argued that Hawking radiation can be viewed as an instanton. Likewise, the instanton interpretation of the Hawking radiation after the Page time might provide an alternative means to understand the black hole information paradox. More detailed comparisons of our approach with the replica wormholes approach, either Euclidean or Lorentzian, are left for future investigations.
\subsection{Future prospects}
We have argued that canonical quantum gravity with the Euclidean path integral can provide a consistent picture to resolve the information loss paradox. By computing the wave function of the universe with the Euclidean path integral, we justified the two essential conditions, namely the \textit{multi-history condition} and the \textit{late-time dominance condition}, and eventually obtained a modified Page curve that preserves unitarity but with the Page time shifted significantly towards late times.
Note that the entanglement entropy of a black hole can never exceed its Boltzmann entropy. Therefore, if one insists on the equivalence between the Bekenstein-Hawking areal entropy and the Boltzmann entropy, then the entanglement entropy cannot exceed the Bekenstein-Hawking areal entropy. On the contrary, one salient outcome of our computation is that there exists a moment at which the entanglement entropy is greater than the Bekenstein-Hawking areal entropy. This necessarily implies that the number of states inside the horizon must have accumulated during the black hole evaporation, although such an accumulation is strictly bounded. We emphasize that this violation of the equivalence is not in contradiction with basic principles of physics \cite{Buoninfante:2021ijy}.
In our picture, the turning point of the Page curve, though shifted significantly towards the end-life of the black hole evaporation, is still in the semi-classical regime of quantum gravity as we have shown. Hence there might be a way to experimentally investigate our notion. If our model can be examined not only by theoretical means, but also by experimental methods \cite{Chen:2015bcg}, then the synergy between theory and experiment may hopefully lead us to the ultimate understanding of the information loss paradox.
\newpage
\section*{Acknowledgment} PC is supported by Taiwan's Ministry of Science and Technology (MOST) and the Leung Center for Cosmology and Particle Astrophysics (LeCosPA), National Taiwan University. MS is supported in part by JSPS KAKENHI grant Nos. 19H01895, 20H04727, and 20H05853. DY is supported by the National Research Foundation of Korea (Grant no.: 2021R1C1C1008622, 2021R1A4A5031460). JY is supported by the National Research Foundation of Korea (Grant no.: 2019R1F1A1045971, 2022R1A2C1003182). JY is also supported by an appointment to the JRG Program at APCTP through the Science and Technology Promotion Fund, the Lottery Fund of the Korean government, and by Gyeongsangbuk-do and Pohang-si.
\section{Introduction}
\label{sec:intro}
Hopf algebras naturally appear in combinatorics in the following way: one constructs a Hopf algebra whose generators are canonically parametrized by certain combinatorial objects of interest, for instance, graphs, posets, or symmetric functions. Then the Hopf algebra structures encode basic operations of combinatorial objects, such as direct sums, restrictions, or contractions. One of the main motivations for studying these types of Hopf algebras is to obtain combinatorial results by appealing to purely algebraic properties of Hopf algebras (see \cite{gr14}).
When basic operations of combinatorial objects provide only bialgebra structures, to obtain Hopf algebra structures, one may appeal to Takeuchi's celebrated result \cite{t71}, which states that every graded, connected bialgebra
is a Hopf algebra with an explicit antipode formula. This formula, however, usually contains a large number of cancellations, so it is not optimal for producing combinatorial identities among the elements of the Hopf algebra.
Recently, a great deal of research has been devoted to developing cancellation-free antipode formulas for combinatorial Hopf algebras.
For example, cancellation-free antipode formulas have been obtained for the incidence Hopf algebra on graphs by Humpert and Martin \cite{hm12}, for $K$-theoretic analogs of various symmetric function Hopf algebras (introduced by Lam and Pylyavskyy \cite{lp07}) by Patrias \cite{p16}, for the Hopf algebra of simplicial complexes by Benedetti, Hallam, and Machacek \cite{bhm16}, and for various Hopf algebras embeddable into Hopf monoids by Benedetti and Bergeron \cite{bb16}.
In a seminal paper by Benedetti and Sagan, cancellation-free antipode formulas were given for nine combinatorial Hopf algebras \cite{bs17}. What is particularly novel about their approach is that all of their cancellation-free formulas are obtained via the same general technique---for each of their combinatorial Hopf algebras, they introduce a sign-reversing involution on the set indexing the terms in Takeuchi's formula, and they classify the fixed points of this involution. Summing over these fixed points yields a cancellation-free formula (see Section~\ref{sec:examplesignreverse} for an elementary example of a sign-reversing involution). Examples of combinatorial Hopf algebras that yield to their technique are the shuffle Hopf algebra, the incidence Hopf algebra for graphs, and one of the Lam--Pylyavskyy symmetric function Hopf algebras (thereby recovering the formula of Patrias). The number of recent results producing cancellation-free antipode formulas via sign-reversing involutions suggests that potentially there is a general way to define the involution for a Hopf algebra whose generators are obtained from combinatorial objects. The results presented in this paper hope to further the evidence that such an involution can be constructed.
The matroid-minor Hopf algebra was introduced by Schmitt in 1994 \cite{s94}, and a cancellation-free formula for the antipode of uniform matroids was given by the first and last authors in 2016 \cite{bm16} by using a sign-reversing involution. The current paper finishes what was started in \cite{bm16} and gives a cancellation-free antipode formula for all matroids (as well as for matroids over hyperfields, in Corollary~\ref{coro:hyperfield theorem}).
\begin{theorem*}[Theorem~\ref{thm:cancellation-free antipode}]
Let $M$ be a matroid with ground set $E$. Choose a total ordering on $E$, call it $<$. Then
\[
S(M)=\sum_{ \text{fixed points $\pi$ of } \iota_<} \ \mathrm{sgn}(\pi) \ \bigoplus_{i=1}^{j} U_{\delta_i^M-\sigma_i^M}^{| \delta_i^M-\sigma_i^M |} \oplus (M |_{\delta_i^M}) / (\delta_i^M - \sigma_i^M).
\]
\end{theorem*}
In the formula above, $\iota_<$ is a certain sign-reversing involution, defined from $<$, on the set of ordered set partitions of the ground set $E$ of the matroid $M$ (see Section~\ref{sec:takeuichi}), the matroid $U^{|P|}_P$ is the uniform matroid of rank $|P|$ on ground set $P$, the matroid $(M|_P)/Q$ is the matroid formed by first restricting M to $P$ then contracting $Q$ from $M|_P$, and the various $\delta$ and $\sigma$ are certain subsets of $E$ which carry information about circuits in the matroid $M$ (see Section~\ref{sec:deltasets}).
Between the time of \cite{bm16} and the current paper, a cancellation-free antipode formula for the matroid-minor Hopf algebra was obtained by Aguiar and Ardila \cite{aa17} as an application of a far-reaching approach which involves embedding the matroid-minor Hopf algebra into the Hopf algebra of generalized permutohedra. Even though a cancellation-free formula already exists, we hope to convince the reader that further study of sign-reversing involutions in the theory of combinatorial Hopf algebras is warranted:
\begin{itemize}
\item Our formula answers part of a question by Benedetti--Sagan \cite[Section 11, Question 1]{bs17} by adding another involved example to the growing list of graded connected Hopf algebras for which cancellation-free antipodes can be obtained via sign-reversing involutions. This suggests that there might be a general procedure one could follow in the case of any combinatorial Hopf algebra.
\item Our antipode formula is given purely combinatorially. While
\cite{aa17} gives a much more elegant description of the antipode
via an embedding into the Hopf algebra of generalized permutohedra,
our antipode computations work on a ``calculus of partitions'' and
can be automated using standard computer software.
\item In \cite{baker2017matroids}, Baker and Bowler introduced the notion of \emph{matroids over hyperfields}, which unifies various generalizations of matroids including oriented matroids and valuated matroids. Consequently, in \cite{ejs17}, Hopf algebras for matroids over hyperfields are defined. Our method for the antipode formula is robust enough to obtain a cancellation-free antipode formula for the hyperfield case without much effort.
\end{itemize}
\subsection*{Acknowledgements}
The authors would like to thank Federico Ardila, Carolina Benedetti, Vic Reiner, and Bruce Sagan for useful conversations while this work was in progress. The authors also thank Binghamton University where the authors met and initiated the project.
\section{Preliminaries}
\label{sec:prelim}
\subsection{Matroids}
In this paper, we work with the ``circuit'' formulation of a matroid
(as opposed to \cite{bm16} where the first and last authors used
independent sets).
\begin{definition}
Let $E$ be a finite set, and let $\mathcal{C}$ be a collection of subsets of
$E$ satisfying:
\begin{itemize}
\item[(a)] $\emptyset \not\in \mathcal{C}$,
\item[(b)] if $C_1,C_2 \in \mathcal{C}$ with $C_1 \subseteq C_2$, then
$C_1 = C_2$, and
\item[(c)] (Circuit elimination) if $C_1,C_2 \in \mathcal{C}$ with $C_1 \not= C_2$ and
$e \in C_1 \cap C_2$, then there exists $C_3 \in \mathcal{C}$ with
$C_3 \subseteq (C_1 \cup C_2) - e$.
\end{itemize}
The pair $M = (E,\mathcal{C})$ is a {\em matroid with ground set $E$ and set
of circuits $\mathcal{C}$}. A subset $S\subseteq E$ is \emph{dependent in $M$}
when $S$ contains some circuit of $M$.
\end{definition}
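These axioms are finite and checkable by machine. The following Python sketch (illustrative only; it plays no role in the proofs) tests axioms (a)--(c) for a given collection of subsets:

```python
from itertools import combinations

def is_circuit_set(C):
    """Check the circuit axioms (a)-(c) for a collection C of finite sets."""
    C = [frozenset(c) for c in C]
    # (a) the empty set is not a circuit
    if frozenset() in C:
        return False
    # (b) no circuit properly contains another (C is an antichain)
    for C1, C2 in combinations(C, 2):
        if C1 <= C2 or C2 <= C1:
            return False
    # (c) circuit elimination
    for C1, C2 in combinations(C, 2):
        for e in C1 & C2:
            if not any(C3 <= (C1 | C2) - {e} for C3 in C):
                return False
    return True

# The 2-element subsets of {1,2,3} are the circuits of the uniform matroid U^1_3:
assert is_circuit_set([{1, 2}, {1, 3}, {2, 3}])
# A chain of subsets violates axiom (b):
assert not is_circuit_set([{1}, {1, 2}])
```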
Uniform matroids form an important class of matroids: we write $U^r_n$
for the {\em uniform matroid of rank $r$ on an $n$-element set}---this
is the matroid whose ground set $E$ is an $n$-element set and whose set of
circuits $\mathcal{C}$ consists of all subsets of $E$ with exactly $r+1$ elements.
Throughout, it will be important for us to keep track of various
uniform matroids with different ground sets; when we want to specify
the ground set of a uniform matroid, we write $U_E^r$ for the uniform
matroid $U_{|E|}^r$ with specified ground set $E$.
When defining the operations inside the matroid-minor Hopf algebra we
will need the matroid operations of restriction, contraction, and
direct sum. To this end, let $M_1 = (E_1, \mathcal{C}_1)$ and
$M_2 = (E_2, \mathcal{C}_2)$ be two matroids on disjoint ground sets, and let
$S$ be a subset of $E_1$. The {\em restriction of $M_1$ to $S$} is
the matroid $M_1|_S = (S, \mathcal{D}_1)$ where
$\mathcal{D}_1 = \{C \subseteq S \mid C \in \mathcal{C}_1\}$. The {\em contraction of
$S$ from $M_1$} is the matroid $M_1/S = (E_1-S, \mathcal{D})$ where $\mathcal{D}$ is
the set of minimal non-empty elements of $\{C - S \mid C \in \mathcal{C}_1\}$.
The {\em direct sum of matroids $M_1$ and $M_2$} is the matroid
$M_1 \oplus M_2 = (E_1 \cup E_2, \mathcal{C}_1 \cup \mathcal{C}_2)$. Note that
$M_1 \oplus M_2 = M_2 \oplus M_1$.
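At the level of circuit sets, these three operations are straightforward to implement; the sketch below (for illustration, with frozensets standing in for ground-set elements) follows the definitions verbatim:

```python
def restrict(circuits, S):
    """Circuits of M|_S: the circuits of M that are contained in S."""
    S = frozenset(S)
    return {C for C in circuits if C <= S}

def contract(circuits, S):
    """Circuits of M/S: minimal non-empty members of {C - S : C a circuit of M}."""
    S = frozenset(S)
    diffs = {C - S for C in circuits} - {frozenset()}
    return {D for D in diffs if not any(D2 < D for D2 in diffs)}

def direct_sum(circuits1, circuits2):
    """Circuits of the direct sum M1 + M2 (disjoint ground sets assumed)."""
    return set(circuits1) | set(circuits2)

# U^1_3 on {1,2,3} has circuits all 2-element subsets:
U13 = {frozenset(s) for s in [{1, 2}, {1, 3}, {2, 3}]}
assert restrict(U13, {1, 2}) == {frozenset({1, 2})}
# Contracting the element 1 turns 2 and 3 into loops:
assert contract(U13, {1}) == {frozenset({2}), frozenset({3})}
```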
\subsection{Matroid-minor Hopf algebras}
\label{sec:mmh}
For the remainder of the paper, fix a field $\Bbbk$. A
$\Bbbk$-bialgebra is simultaneously a $\Bbbk$-algebra and a
$\Bbbk$-coalgebra together with some compatibility of the
multiplication morphism $\mu\colon \mathcal{H} \otimes \mathcal{H} \rightarrow \mathcal{H}$ and unit
morphism $\eta\colon \Bbbk \rightarrow \mathcal{H}$, with the comultiplication morphism
$\Delta\colon \mathcal{H} \rightarrow \mathcal{H} \otimes \mathcal{H}$ and counit morphism
$\epsilon\colon \mathcal{H} \rightarrow \Bbbk$. A $\Bbbk$-bialgebra $\mathcal{H}$ is {\em
graded} if $\mathcal{H} = \bigoplus_{\ell \in \mathbb{Z}_{\ge 0}} \mathcal{H}_\ell$, and
the maps $\mu, \eta, \Delta,$ and $\epsilon$ are graded $\Bbbk$-linear
maps. Additionally, $\mathcal{H}$ is called {\em connected} if
$\mathcal{H}_0 = \Bbbk$.
A Hopf algebra $\mathcal{H}$ over $\Bbbk$ is a $\Bbbk$-bialgebra together with
one additional $\Bbbk$-linear map---the antipode map
$S\colon \mathcal{H} \rightarrow \mathcal{H}$ also satisfies some compatibility with the
$\Bbbk$-linear maps $\mu, \eta, \Delta,$ and $\epsilon$. The
celebrated 1971 result by Takeuchi asserts that any graded, connected
$\Bbbk$-bialgebra is a Hopf algebra with an explicit antipode. We
refer the reader to \cite{sweedler1969hopf} for an introduction to
Hopf algebras.
\begin{theorem}[\cite{t71}]\label{thm:t}
A graded, connected $\Bbbk$-bialgebra $\mathcal{H}$ is a Hopf algebra, and it has a unique antipode $S$ given by
\begin{equation}\label{eq:tak}
S = \sum_{i \in \mathbb{Z}_{\ge 0}} (-1)^i \mu^{i-1} \circ \mathrm{pr}^{\otimes i} \circ \Delta^{i-1},
\end{equation}
where $\mu^{-1} := \eta$, $\Delta^{-1} := \epsilon$, and $\mathrm{pr}\colon \mathcal{H} \rightarrow \mathcal{H}$ is the projection map defined by linearly extending the map
\[
\mathrm{pr}|_{\mathcal{H}_{\ell}} = \begin{cases} 0 & \text{if }\ell = 0,\\ \mathrm{id} & \text{if } \ell \ge 1.\end{cases}
\]
\end{theorem}
Throughout this paper, we will focus on a particular Hopf algebra,
called the matroid-minor Hopf algebra, which we will now define. Let
$\mathcal{M}$ be any class of matroids which is closed under taking direct
sums and minors, and let $\widetilde{\mathcal{M}}$ be the set
of all isomorphism classes of matroids in $\mathcal{M}$. Then
$\Bbbk \widetilde{\mathcal{M}}$ is a Hopf algebra \cite{s94} with
$\Bbbk$-linear maps
\[\begin{array}{rcl}
\Bbbk \widetilde{\mathcal{M}} \otimes \Bbbk \widetilde{\mathcal{M}} & \stackrel{\mu}{\longrightarrow} & \Bbbk \widetilde{\mathcal{M}} \\
(M,N) & \longmapsto & M \oplus N, \\
& & \\
\Bbbk \widetilde{\mathcal{M}} & \stackrel{\Delta}{\longrightarrow} & \Bbbk \widetilde{\mathcal{M}} \otimes \Bbbk \widetilde{\mathcal{M}} \\
M & \longmapsto & \displaystyle \sum_{A \subseteq E} M|_A \otimes M/A, \\
& & \\
\Bbbk & \stackrel{\eta}{\longrightarrow} & \Bbbk \widetilde{\mathcal{M}} \\
1_{\Bbbk} & \longmapsto & U^0_0, \\
& & \\
\Bbbk \widetilde{\mathcal{M}} & \stackrel{\epsilon}{\longrightarrow} & \Bbbk \\
M & \longmapsto & \left\{\begin{array}{ll}1_{\Bbbk} & \mathrm{if}\ E = \emptyset,\\ 0 & \mathrm{else}.\end{array}\right.
\end{array}\]
The matroid-minor Hopf algebra $\Bbbk\widetilde{\mathcal{M}}$ is indeed a graded, connected $\Bbbk$-bialgebra, so its antipode is given by Theorem~\ref{thm:t}.
In Section~\ref{sec:rewrite}, we will rewrite Takeuchi's antipode formula \eqref{eq:tak} more explicitly for the matroid-minor Hopf algebra $\Bbbk\widetilde{\mathcal{M}}$.
\subsection{Sign-reversing involutions}\label{sec:signreversinginvs}
Sign-reversing involutions are ubiquitous in combinatorics and are a well-known way of removing cancellations from a formula. We give preliminaries on sign-reversing involutions, and we give an elementary example in Section~\ref{sec:examplesignreverse}.
\begin{definition}
Let $A$ be a finite set equipped with a sign function; that is, a function $\mathrm{sgn}\colon A \rightarrow \{\pm1\}$. A {\em sign-reversing involution} $\iota\colon A \rightarrow A$ is an involution such that for every $a \in A$
\[
\mathrm{sgn}(\iota(a)) = -\mathrm{sgn}(a).
\]
\end{definition}
Now fix a sign-reversing involution $\iota$ of a finite set $A$. Let $F \subset A$ be the set of fixed points of $\iota$; that is,
\[
F := \{a \in A \mid \iota(a) = a\}.
\]
A formula of the form
\[
\sum_{a \in A} \mathrm{sgn}(a)
\]
may have many cancellations, but because $\iota$ is sign-reversing, it follows that
\[
\sum_{a \in A} \mathrm{sgn}(a) = \sum_{a \in F} \mathrm{sgn}(a),
\]
and the right-hand side is cancellation free.
\subsection{An example of a sign-reversing involution}\label{sec:examplesignreverse}
We would like to present a small example to illustrate how one might
use sign-reversing involutions to prove combinatorial identities. We
will use a sign-reversing involution to compute the following
summation:
\[
\sum_{k \geq 0} (-1)^k {\binom{n}{k}} .
\]
For this example we think of $\binom{n}{k}$ as enumerating the ways of partitioning $[n]$ into two sets $A$ and $B$ with $|A|=k$ and $|B|=n-k$. This summation then assigns a $1$ or $-1$ to each 2-part partition $(A,B)$, where the sign is given by the parity of the cardinality of $A$. The sum can then be rewritten as
\[
\sum_{(A,B)} (-1)^{|A|} .
\]
We define an involution $\iota$ on the set of 2-part partitions as
follows:
\[
\iota (A,B) =
\begin{cases}
(A - \{n\},B \cup \{n\}) & \text{ if } n\in A, \\
(A \cup \{n\},B -\{n\}) & \text{ if } n\in B.
\end{cases}
\]
This is a sign-reversing involution on the index set of the summation. It has no fixed points, and therefore
\[
\sum_{k \geq 0} (-1)^k \binom{n}{k} = \sum_{(A,B)} (-1)^{|A|} = \sum_{(A,B) \in F} (-1)^{|A|} = 0.
\]
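The involution and the cancellation it encodes can be verified directly; the following sketch (illustrative only) implements $\iota$ and checks the vanishing of the signed sum for small $n$:

```python
from itertools import combinations

def signed_sum(n):
    """Sum of (-1)^{|A|} over all ordered 2-part partitions (A, B) of [n]."""
    ground = range(1, n + 1)
    return sum((-1) ** k * sum(1 for _ in combinations(ground, k))
               for k in range(n + 1))

def iota(A, B, n):
    """The sign-reversing involution: move the element n between A and B."""
    A, B = set(A), set(B)
    if n in A:
        return A - {n}, B | {n}
    return A | {n}, B - {n}

assert all(signed_sum(n) == 0 for n in range(1, 9))
# iota is an involution with no fixed points, and it flips the parity of |A|:
A, B = {1, 3}, {2, 4}
assert iota(*iota(A, B, 4), 4) == (A, B)
assert (len(iota(A, B, 4)[0]) - len(A)) % 2 == 1
```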
\section{The sign-free Takeuchi term}
\label{sec:takeuichi}
We will see in Section~\ref{sec:rewrite} that Takeuchi's antipode formula \eqref{eq:tak} can be reinterpreted, for the matroid-minor Hopf algebra $\Bbbk\widetilde{\mathcal{M}}$, as a sum over ordered set partitions of the ground set $E$. Thus, our technique for proving Theorem~\ref{thm:cancellation-free antipode} involves introducing a sign-reversing involution (see Section~\ref{sec:inv}) on the set of ordered set partitions of the ground set $E$ of a matroid $M$.
\subsection{Takeuchi's result for the matroid-minor Hopf algebra}\label{sec:rewrite}
\begin{definition}
An {\em ordered set partition} $\pi =(\pi_1, \pi_2, \ldots, \pi_k)$ of a finite set $E$ is a tuple of nonempty subsets of $E$ satisfying
\begin{itemize}
\item[(a)] $\pi_i \cap \pi_j = \emptyset$ for all $1 \le i\neq j \le k$, and
\item[(b)] $\cup_{i=1}^k \pi_i = E$.
\end{itemize}
We will write $\pi \vDash E$ to denote that $\pi$ is an ordered set partition of $E$.
\end{definition}
Below we rewrite the antipode formula in \eqref{eq:tak} for the matroid-minor Hopf algebra $\Bbbk\widetilde{\mathcal{M}}$ defined in Section~\ref{sec:mmh}.
\begin{proposition}\cite[Proposition 3.1]{bm16}\label{prop:matant}
Let $M \in \mathcal{M}$ be a matroid with ground set $E$. For the matroid-minor Hopf algebra $\Bbbk\widetilde{\mathcal{M}}$, Takeuchi's formula \eqref{eq:tak} is equivalent to
\[
S(M) = \sum_{k \ge 0} (-1)^k \sum_{(\pi_1,\ldots,\pi_k)\vDash E} M|_{\pi_1} \oplus (M/\pi_1)|_{\pi_2} \oplus \cdots \oplus (M/\bigcup_{i=1}^{k-2}\pi_i)|_{\pi_{k-1}} \oplus M/\bigcup_{i=1}^{k-1} \pi_i.
\]
\end{proposition}
\begin{remark}
In Proposition \ref{prop:matant}, all matroids that appear in the formula are representatives chosen from their isomorphism classes.
\end{remark}
Given an ordered set partition $\pi = (\pi_1,\ldots,\pi_k) \vDash E$, we write $T(\pi)$ for the corresponding {\em signless Takeuchi term}
\[
T(\pi) := M|_{\pi_1} \oplus (M/\pi_1)|_{\pi_2} \oplus \cdots \oplus (M/\bigcup_{i=1}^{k-2}\pi_i)|_{\pi_{k-1}} \oplus M/\bigcup_{i=1}^{k-1} \pi_i.
\]
In the next section, we will provide a characterization of the signless term $T(\pi)$. To do so, we first develop the language of $\delta$-sets.
\subsection{Defining $\delta$-sets}\label{sec:deltasets}
Let $M$ be a matroid with ground set $E$, and let $\pi = (\pi_1, \pi_2, \ldots, \pi_k) \vDash E$. We define the following for the pair $(M,\pi)$:
\begin{itemize}
\item The {\em $\ell$-th part of $(M,\pi)$} is defined as
\begin{equation} \label{equation: l-part}
\ell(M,\pi) := \min\left\{t\ |\ \bigcup_{1 \le i \le t} \pi_i \text{ is a dependent set}\right\}.
\end{equation}
\item The {\em $\delta$-function of $(M,\pi)$} is defined as
\[
\delta(M,\pi) := \bigcup_{1 \le i \leq \ell(M,\pi)} \pi_i.
\]
\item The {\em $\sigma$-function of $(M,\pi)$} is defined as
\[
\sigma(M,\pi) := \pi_{\ell(M,\pi)}.
\]
\end{itemize}
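These three quantities are computed by a single scan through the parts of $\pi$; the sketch below (illustrative only, with circuits encoded as frozensets) makes the definitions concrete:

```python
def is_dependent(circuits, S):
    """S is dependent iff it contains some circuit."""
    S = frozenset(S)
    return any(C <= S for C in circuits)

def ell(circuits, pi):
    """l(M, pi): the least t with pi_1 u ... u pi_t dependent (None if no such t)."""
    union = frozenset()
    for t, part in enumerate(pi, start=1):
        union |= frozenset(part)
        if is_dependent(circuits, union):
            return t
    return None

def delta_sigma(circuits, pi):
    """The delta- and sigma-functions of (M, pi); assumes some prefix is dependent."""
    t = ell(circuits, pi)
    delta = frozenset().union(*(frozenset(p) for p in pi[:t]))
    sigma = frozenset(pi[t - 1])
    return delta, sigma

# U^1_3 with pi = ({1},{2},{3}): {1} is independent but {1,2} is a circuit,
# so l = 2, delta = {1,2}, and sigma = {2}.
U13 = {frozenset(s) for s in [{1, 2}, {1, 3}, {2, 3}]}
assert ell(U13, [{1}, {2}, {3}]) == 2
assert delta_sigma(U13, [{1}, {2}, {3}]) == (frozenset({1, 2}), frozenset({2}))
```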
Now we are ready to define the $\delta$-sets and $\sigma$-sets for
$(M,\pi)$; intuitively, $\delta$-sets keep track of where new circuits
are introduced in the union of the initial parts of the partition
$\pi$. Note that when $\pi=(\pi_1,\pi_2,\ldots,\pi_k)$ is an ordered
partition of set $E$, we write $\pi/\bigcup_{i=1}^m\pi_i$ for the
ordered partition $(\pi_{m+1},\ldots,\pi_k)$ of
$E-\left(\bigcup_{i=1}^m\pi_i\right)$. Formally:
\begin{definition}
We define the $\delta$-sets iteratively as follows:
\begin{align*}
\delta_1^M&:=\delta(M,\pi),\\
\delta_{j+1}^M
&:=\delta(M / U_j,\pi/U_j)
\text{ when }\delta_j(M,\pi)\neq\delta(M / U_j,\pi/U_j),
\end{align*}
where $U_j=\bigcup_{i=1}^j \delta_i^M$ for all appropriate $j$. We
call $(\delta_1^M, \delta_2^M, \dots , \delta_j^M)$ the {\em
$\delta$-sets of $(M,\pi)$}.
\end{definition}
We also define the $\sigma$-sets for $(M,\pi)$, which are determined by the $\delta$-sets of $(M,\pi)$.
\begin{definition}
We define the $\sigma$-sets iteratively as follows:
\begin{align*}
\sigma_1^M&:=\sigma(M,\pi),\\
\sigma_{j+1}^M
&:=\sigma(M / U_j,\pi/U_j)
\text{ when }\sigma_j(M,\pi)\neq\sigma(M / U_j,\pi/U_j),
\end{align*}
where $U_j=\bigcup_{i=1}^j \delta_i^M$ for all appropriate $j$. We
call $(\sigma_1^M, \sigma_2^M, \dots , \sigma_j^M)$ the {\em
$\sigma$-sets of $(M,\pi)$}.
\end{definition}
\begin{proposition}
\label{prop:signless tak}
The signless Takeuchi term $T(\pi)$ is completely determined by the $\delta$-sets of $(M,\pi)$. Specifically,
\[
T(\pi) = \bigoplus_{i=1}^{j} U_{\delta_i^M-\sigma_i^M}^{| \delta_i^M-\sigma_i^M |} \oplus (M |_{\delta_i^M}) / (\delta_i^M - \sigma_i^M).
\]
\end{proposition}
\begin{proof}
Computing $T(\pi)$ is independent of the order in which we apply the restrictions and contractions dictated by the partition $\pi$. We will let the $\delta$-sets prescribe to us the order. The first restriction/contraction we perform is
\[
M|_{\delta_1^M} \oplus M/{\delta_1^M}.
\]
Consider the restriction/contraction applied to $M$ dictated by the part of $\pi$ immediately following $\sigma_1^M$. After applying this operation, we obtain
\begin{align*}
&(M|_{\delta_1^M})|_{\delta_1^M - \sigma_1^M} \oplus (M|_{\delta_1^M})/(\delta_1^M - \sigma_1^M) \\
&= M|_{\delta_1^M - \sigma_1^M} \oplus (M|_{\delta_1^M})/(\delta_1^M - \sigma_1^M).
\end{align*}
The first summand contains no dependent sets, by the definition of $\delta_1^M$. Therefore $M|_{\delta_1^M - \sigma_1^M} = U_{\delta_1^M - \sigma_1^M} ^{|\delta_1^M - \sigma_1^M|}$. Furthermore, any additional restrictions/contractions dictated by $\pi$ which affect this summand will also result in the matroid $U_{\delta_1^M - \sigma_1^M} ^{|\delta_1^M - \sigma_1^M|}$. Therefore, after applying all of the prescribed restriction/contraction operations up to $\delta_1^M$, we have the term
\[
U_{\delta_1^M - \sigma_1^M} ^{|\delta_1^M - \sigma_1^M|} \oplus (M|_{\delta_1^M})/(\delta_1^M - \sigma_1^M) \oplus M/\delta_1^M.
\]
Now we iterate the process on $M/\delta_1^M$ as if it were its own matroid that did not arise from any operations. The result is the sum
\[
T(\pi) = \bigoplus_{i=1}^{j} U_{\delta_i^M-\sigma_i^M}^{| \delta_i^M-\sigma_i^M |} \oplus (M |_{\delta_i^M}) / (\delta_i^M - \sigma_i^M).
\]
\end{proof}
\section{The involution $\iota_<$}
\label{sec:inv}
In this section, we construct a sign-reversing involution $\iota_<$
with the property $T(\iota_< (\pi))=T(\pi)$. Let $M$ be a matroid
with ground set $E$, and fix a total ordering $<$ of $E$. We will
define $\iota_<$ as a ``split-merge process'' on the ordered set
partitions of $E$. For clarity, we first give a heuristic definition in the next paragraph, and then give the formal definition immediately afterward.
For simplicity, we write $\iota$ for $\iota_<$.\footnote{In general, the involution $\iota_<$ is dependent on the choice of total ordering $<$, and in fact there is a distinct involution for each choice of $<$. Regardless of which total ordering we use for defining $\iota_<$, the cancellation-free antipode formula obtained in Theorem~\ref{thm:cancellation-free antipode} is the same.} The way $\iota$ is
applied is that it starts with the first part of the partition and
attempts to split that part. If it can, it splits the part into two
new parts and then $\iota$ stops. If it cannot, it attempts to
merge the part with its successor, combining them into one part, and
then $\iota$ stops. If the part can be neither split nor merged, the
process moves on and attempts to split or merge the next part. This
continues until $\iota$ either acts on some part or passes through the
entire partition without acting.
Now we will give the formal definition of splitting and merging.
\begin{description}
\item[Splitting] If the part $\pi_i$ has more than one element, then $\iota$ will try to split this part.
\begin{itemize}
\item Let $x$ be the largest element in $\pi_i$ with respect to $<$. If $\pi_i$ splits, replace the part $\pi_i$ with the two parts
\[
\{x\} \ \ \ \ \ \ \ \text{and}\ \ \ \ \ \ \ \pi_i - \{x\}.
\]
Hence the new partition after splitting is \[\iota(\pi) =( \pi_1,\pi_2, \dots , \{x\} , \pi_i -\{x\} , \pi_{i+1}, \dots, \pi_k).\]
\item The map $\iota$ will \textbf{only} split if $\iota(\pi)$ has the same $\delta$-sets and $\sigma$-sets as $\pi$.
\end{itemize}
\vspace{.3cm}
\item[Merging] If the part $\pi_i$ is a single element, then $\iota$ will try to merge it with the part immediately following it. Let $\pi_i=\{x\}$.
\begin{itemize}
\item Merging two parts results in a new ordered set partition that replaces $\pi_i$ and $\pi_{i+1}$ with the single part $\pi_i \cup \pi_{i+1}$. Hence the new partition after merging is \[\iota(\pi)= (\pi_1,\pi_2,\dots, \pi_{i-1},\pi_i\cup \pi_{i+1},\pi_{i+2},\dots, \pi_k).\]
\item The map $\iota$ will \textbf{only} apply a merge if $x > y $ for all $y \in \pi_{i+1}$, and
\item the $\delta$-sets and $\sigma$-sets of $\iota(\pi)$ are the same as the $\delta$-sets and $\sigma$-sets of $\pi$.
\end{itemize}
\end{description}
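The split-merge rules can likewise be coded directly. The sketch below (again an illustration with helper names of our choosing, under the same assumptions about $\delta$- and $\sigma$-sets for uniform matroids, repeated here so the block is self-contained) implements one application of $\iota$ and checks the involution property on all $13$ ordered set partitions of a three-element set for $U_3^2$.

```python
from itertools import combinations

def uniform_dep(r):
    # dependence test in the contraction U_n^r / U, via rk(X) = min(|X|, r)
    return lambda S, U: min(len(S) + len(U), r) - min(len(U), r) < len(S)

def delta_sigma(parts, dep):
    # condensed delta/sigma-set computation, repeated so this sketch
    # stands on its own
    deltas, sigmas, U = [], [], set()
    parts = [set(p) for p in parts]
    while parts:
        t, union = None, set()
        for i, p in enumerate(parts):
            union |= p
            if dep(union, U):
                t = i
                break
        if t is None:
            break
        deltas.append(set().union(*parts[:t + 1]))
        sigmas.append(parts[t])
        U |= deltas[-1]
        parts = parts[t + 1:]
    return deltas, sigmas

def iota(parts, dep):
    """One application of the split-merge map: scan the parts from left
    to right; split off the largest element of a multi-element part, or
    merge a singleton {x} with its successor when x exceeds it, but only
    when the move preserves the delta- and sigma-sets."""
    base = delta_sigma(parts, dep)
    parts = [set(p) for p in parts]
    for i, p in enumerate(parts):
        if len(p) > 1:                                   # try to split
            x = max(p)
            cand = parts[:i] + [{x}, p - {x}] + parts[i + 1:]
            if delta_sigma(cand, dep) == base:
                return cand
        elif i + 1 < len(parts) and all(max(p) > y for y in parts[i + 1]):
            cand = parts[:i] + [p | parts[i + 1]] + parts[i + 2:]  # try to merge
            if delta_sigma(cand, dep) == base:
                return cand
    return parts                                         # pi is a fixed point

def ordered_set_partitions(elems):
    elems = set(elems)
    if not elems:
        yield []
        return
    items = sorted(elems)
    for k in range(1, len(items) + 1):
        for comb in combinations(items, k):
            part = set(comb)
            for rest in ordered_set_partitions(elems - part):
                yield [part] + rest

# iota is an involution on the 13 ordered set partitions of {1,2,3} for U_3^2:
dep = uniform_dep(2)
assert all(iota(iota(pi, dep), dep) == pi
           for pi in ordered_set_partitions({1, 2, 3}))
```

For instance, $(2|1|3)$ merges to $(12|3)$, which in turn splits back to $(2|1|3)$, while $(1|2|3)$ is a fixed point.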
We will now show that $\iota$ is a sign-reversing
involution. In order to prove this, we will use the following
observation multiple times.
\begin{remark}
\label{rem:delta}
The most important aspect of $\iota$ is that it preserves the $\delta$-sets and $\sigma$-sets of the pair $(M,\pi)$. This property of $\iota$ will play a key role in the ``pairing off'' of ordered set partitions needed to produce a cancellation-free antipode formula.
\end{remark}
This remark along with Proposition \ref{prop:signless tak} gives the following result.
\begin{theorem}
\label{thm:involution}
Let $M$ be a matroid with ground set $E$, and let $\pi \vDash E$. Then $T(\pi)=T(\iota(\pi))$.
\end{theorem}
The goal was to construct a sign-reversing \emph{involution}. We need to show that $\iota$ in fact accomplishes this.
\begin{theorem}
The map $\iota$ is an involution on the set of ordered set partitions of $E$.
\end{theorem}
\begin{proof}
Let $\pi \vDash E$. There are two cases to check: if $\iota(\pi)$ results in a split, and if $\iota(\pi)$ results in a merge. If $\pi$ is neither split nor merged by $\iota$ then it is a fixed point of $\iota$, and hence we have $\iota^2(\pi)=\pi$.
\vspace{.3cm}
\underline{Case One}: $\iota$ splits $\pi$.
\vspace{.3cm}
We have that $\iota^2(\pi)= \iota(( \pi_1,\pi_2, \dots , \{x\} , \pi_i -\{x\} , \pi_{i+1}, \dots, \pi_k))$. Now let us consider what $\iota$ will do for this new partition. It will not attempt to split or merge $\pi_1,\pi_2, \dots, \pi_{i-1}$ because they were not altered during $\iota(\pi)$ and $\iota$ attempts to split or merge starting from left to right. The first part of $( \pi_1,\pi_2, \dots , \{x\} , \pi_i -\{x\} , \pi_{i+1}, \dots, \pi_k)$ at which $\iota$ will attempt to act is the part $\{x\}$. Since $x$ was split from $\pi_i$, we know that $x>y$ for all $y\in\pi_{i}-\{x\}$. This is exactly the property that is required for $\iota$ to merge $\{x\}$ with the part immediately following it. Hence we get
\[\iota^2(\pi)= \iota(( \pi_1,\pi_2, \dots , \{x\} , \pi_i -\{x\} , \pi_{i+1}, \dots, \pi_k))=\pi.\]
\vspace{.3cm}
\underline{Case Two}: $\iota$ merges $\pi$.
\vspace{.3cm}
We have that $\iota^2(\pi)= \iota((\pi_1,\pi_2,\dots, \pi_{i-1},\pi_i\cup \pi_{i+1},\pi_{i+2},\dots, \pi_k))$. By Remark \ref{rem:delta}, we know that both splitting and merging will never affect the $\delta$-sets and $\sigma$-sets. We can also see that $\iota$ will not apply to the parts $\pi_1, \pi_2, \dots$, and $\pi_{i-1}$, or they would have been altered during the initial application of $\iota$. Therefore the first part at which $\iota$ will attempt to act is $\pi_i \cup \pi_{i+1}$, and this part will split. Since $\iota(\pi)$ resulted in a merge, we know that $\pi_i=\{x\}$ and $x>y$ for all $y\in \pi_{i+1}$. Therefore
\begin{align*}
\iota^2(\pi)&= \iota((\pi_1,\pi_2,\dots, \pi_{i-1},\pi_i\cup \pi_{i+1},\pi_{i+2},\dots, \pi_k))\\
&=(\pi_1,\pi_2,\dots, \pi_{i-1},\pi_i, \pi_{i+1},\pi_{i+2},\dots, \pi_k)\\
&=\pi.
\end{align*}
In both cases we have that $\iota^2(\pi)=\pi$, as desired.
\end{proof}
Until this point, we considered only the {\em signless Takeuchi term} $T(\pi)$. Now we take into account signs.
\begin{definition}
Let $\pi=(\pi_1,\pi_2,\dots, \pi_k) \vDash E$, where $E$ is the ground set of a matroid $M$. Define \[ \mathrm{sgn}(\pi) := (-1)^k.\]
\end{definition}
This definition exactly produces the sign associated to the signless term $T(\pi)$ in the antipode formula of Proposition~\ref{prop:matant}.
\begin{theorem}
\label{thm:signreversing}
The involution $\iota$ is sign-reversing in the sense that if $\pi$ is not a fixed point of $\iota$, then $\mathrm{sgn}(\pi)=-\mathrm{sgn}(\iota(\pi))$.
\end{theorem}
\begin{proof}
Let $\pi$ be a non-fixed point of $\iota$. Then $\pi$ is either split or merged by $\iota$, so $\iota(\pi)$ has exactly one more or one fewer part than $\pi$. Since $\mathrm{sgn}$ depends only on the parity of the number of parts, the sign is reversed.
\end{proof}
\subsection{Characterizing the fixed points of $\iota$}
Now we will characterize the fixed points of $\iota$.
\begin{theorem}
Let $M$ be a matroid with ground set $E$. The fixed points of $\iota$ are the ordered set partitions $\pi$ of $E$ which satisfy the following:
\begin{itemize}
\item each part $\pi_i$ which is not the occurrence of a $\sigma$-set consists of a single element, and
\item between consecutive occurrences of $\sigma$-sets, these single-element parts appear in ascending order.
\end{itemize}
\end{theorem}
\begin{proof}
First we will show that each partition satisfying these two conditions will be a fixed point. Note that splitting will never occur at a $\sigma$-set because this would alter the overall $\sigma$-sets of the partition. Additionally, the first condition ensures that the partition will not be split by $\iota$ at any other parts, as splitting does not apply to single-element parts.
Now we consider merging. We will never merge a singleton part with a $\sigma$-set as this would change the overall $\sigma$-sets of the partition. Therefore we only have to be concerned with the case where a non-$\sigma$ part merges with the non-$\sigma$ part immediately after it. This does not occur because of the second criterion.
To finish the characterization, we must explain why every fixed point satisfies our criteria. Let $\pi$ be a fixed point of $\iota$. Since $\iota$ would split any non-$\sigma$ part with cardinality greater than one, each part which is not the occurrence of a $\sigma$-set must consist of a single element. Additionally, since $\iota$ never applies a merge, each of these parts must \emph{fail} the merge test, meaning they are in ascending order.
\end{proof}
\begin{theorem}
\label{thm:cancellation-free antipode}
Let $M$ be a matroid with ground set $E$. Choose a total ordering on $E$, call it $<$. Then
\[
S(M)=\sum_{ \text{fixed points $\pi$ of } \iota_<} \ \mathrm{sgn}(\pi) \ \bigoplus_{i=1}^{j} U_{\delta_i^M-\sigma_i^M}^{| \delta_i^M-\sigma_i^M |} \oplus (M |_{\delta_i^M}) / (\delta_i^M - \sigma_i^M).
\]
\end{theorem}
\begin{proof}
This follows directly from Theorem \ref{thm:signreversing}, Theorem \ref{thm:involution}, and Proposition \ref{prop:signless tak}, and by using the principles outlined in Section~\ref{sec:signreversinginvs} about utilizing \emph{sign-reversing involutions} to produce cancellations in sums. To show this is cancellation-free, let $T(\pi)=T(\pi ')$ for two fixed points $\pi$ and $\pi '$. This means that they must have the same $\sigma$-sets, though they could occur in different orders. In other words if $(\sigma_1^M, \sigma_2^M, \dots, \sigma_k^M)$ are the $\sigma$-sets for $(M,\pi)$, then the $\sigma$-sets for $(M,\pi')$ must be a permutation of that $k$-tuple. But then notice that in both tuples there would be exactly the same number of parts where a $\sigma$-set occurs, with exactly the same elements. Since all the elements not occurring in $\sigma$-sets occur as singleton parts, we have that $\mathrm{sgn}(\pi)=\mathrm{sgn}(\pi ')$. Therefore this formula is indeed cancellation-free.
\end{proof}
\section{Applications}
In this section, we give some applications of Theorem~\ref{thm:cancellation-free antipode}. In particular, in Section~\ref{sec:uniform}
we recover a formula of \cite{bm16} by interpreting Theorem~\ref{thm:cancellation-free antipode} for the class of uniform matroids. Moreover, we refine this result by producing a formula that is also grouping-free. Section~\ref{sec:hyperfield} shows that the techniques of this paper also yield an analogous cancellation-free formula for all matroids over hyperfields.
\subsection{Uniform matroids}
\label{sec:uniform}
A cancellation-free formula for the antipode of uniform matroids via
the involution method was first described in \cite{bm16}, and Theorem
\ref{thm:cancellation-free antipode} is a natural generalization of
the main result of that paper.
\begin{corollary}
The antipode of a uniform matroid $U_{n}^r$ is given by
\[
S(U_n^r)=\sum_{I,J}(-1)^{n-|J|+1}U_I^{|I|}\oplus U_J^{r-|I|},
\]
where $I,J$ range over all pairs of subsets of the ground set $E$
such that
\begin{itemize}
\item $I$ and $J$ are disjoint,
\item $|I|<r$,
\item $|I|+|J|\geq r$, and
\item if $|I|+|J|=r$, then $J=\{x\}$ is a singleton and $\max(I)<x$.
\end{itemize}
\end{corollary}
\begin{proof}
Consider the antipode formula in Theorem \ref{thm:cancellation-free antipode}. Since every circuit in $U_{n}^r$ has the same cardinality, we can see that there is exactly one nontrivial $\delta$-set, namely $\delta_1^M$. Therefore,
\[S(U_{n}^r) = \sum_{ \text{fixed points $\pi$ of } \iota_<} \ \mathrm{sgn}(\pi) \ U_{\delta_1^M-\sigma_1^M}^{| \delta_1^M-\sigma_1^M |} \oplus (M |_{\delta_1^M}) / (\delta_1^M - \sigma_1^M).
\]
Direct computation of $\mathrm{sgn}(\pi)$ and the second term in the internal summand produces the desired formula. The indexing set in the sum can be found by carefully characterizing the fixed points of $\iota$. Since every circuit of $U_{n}^r$ has the same cardinality, this can be done by considering the elements of $E$ which lie in parts of $\pi$ prior to $\sigma_1^M$ and the elements which lie in $\sigma_1^M$. We let $I$ be the set of elements that occur prior to $\sigma_1^M$ and $J=\sigma_1^M$. The result is the indexing set above.
\end{proof}
Basic counting of the fixed points described above yields the
following.
\begin{corollary}\label{cor:group-free-uniform}
Let $U_n^r$ be the uniform matroid of rank $r$ on $n$ elements. We
have the following cancellation-free, grouping-free formula for the
antipode:
\[
S(U_n^r)
=\sum_{i=0}^{r-2}
\sum_{j=r-i+1}^{n-i}
(-1)^{n-j+1}\binom{n}{i}\binom{n-i}{j}(U_i^i\oplus U_j^{r-i})
+(-1)^n\sum_{x=r}^n\binom{x-1}{r-1}U_r^r.
\]
\end{corollary}
\subsection{Matroids over hyperfields}
\label{sec:hyperfield}
In this section, we show that the method of split-merge can also be
employed to obtain a cancellation-free antipode formula for Hopf
algebras defined in \cite{ejs17} in the case of matroids over
hyperfields. To this end, we slightly change the definitions in the
previous sections. We refer the reader to \cite{ejs17} for details on
matroids over hyperfields and Hopf algebras constructed from them.
Let $E$ be a finite set, $H$ a hyperfield, and $M$ a (weak or strong) matroid over $H$ on $E$. Then, as in the matroid case, the antipode $S(M)$ of $M$ can be indexed by ordered partitions of $E$ as follows:
\[
S(M) = \sum_{k \ge 0} (-1)^k \sum_{(\pi_1,\ldots,\pi_k)\vDash E} M|_{\pi_1} \oplus (M/\pi_1)|_{\pi_2} \oplus \cdots \oplus (M/\bigcup_{i=1}^{k-2}\pi_i)|_{\pi_{k-1}} \oplus M/\bigcup_{i=1}^{k-1} \pi_i.
\]
The $\ell$-th part of an ordered partition $\pi=(\pi_1,\ldots,\pi_k)$ is defined as follows (cf.~\eqref{equation: l-part}):
\[
\ell(M,\pi) := \min\left\{t\ |\ \bigcup_{1 \le i \le t} \pi_i \text{ contains the support of a circuit of $M$}\right\}.
\]
With this, $\delta_i^M$ and $\sigma_i^M$ are defined in the same way as for ordinary matroids. We thus obtain the following.
\begin{corollary}\label{proposition: hyperfield signless}
The signless Takeuchi term $T(\pi)$ is completely determined by the $\delta$-sets of $(M,\pi)$. Specifically,
\[
T(\pi) = \bigoplus_{i=1}^{j} M |_{\delta_i^M - \sigma_i^M} \oplus (M |_{\delta_i^M}) / (\delta_i^M - \sigma_i^M).
\]
\end{corollary}
The proof is essentially the same as in Proposition \ref{prop:signless
tak}. In fact, one can use \cite[Corollary 3.10]{ejs17} to rearrange
a partition so that it only depends on $\delta$-sets. What remains is
identical to the proof of Proposition \ref{prop:signless tak}.
Let $M$ be a matroid over a hyperfield $H$. Let $E$ be the ground set of $M$. We fix a total order $<$ on $E$ and define the involution $\iota_<$ as in Section~\ref{sec:inv}. We note that $\iota_<$ depends only on the order $<$, the $\delta$-sets $\delta_i^M$, and the $\sigma$-sets $\sigma_i^M$. Hence the same description as in Section~\ref{sec:inv} can be used in the hyperfield case. In particular, since $T(\pi)$ in Corollary~\ref{proposition: hyperfield signless} depends only on the $\delta$- and $\sigma$-sets, one has $T(\pi)=T(\iota_<(\pi))$.
\begin{corollary}\label{coro:hyperfield theorem}
Let $M$ be a matroid over a hyperfield $H$ with ground set $E$. Choose a total ordering on $E$, call it $<$. Then
\[
S(M)=\sum_{ \text{fixed points $\pi$ of } \iota_<} \ \mathrm{sgn}(\pi) \
\bigoplus_{i=1}^{j} M |_{\delta_i^M - \sigma_i^M} \oplus (M
|_{\delta_i^M}) / (\delta_i^M - \sigma_i^M).
\]
\end{corollary}
Finally, we note that the characterization of fixed points of $\iota$ only depends on $\delta$- and $\sigma$-sets. Hence one has the same characterization for the case of hyperfields.
\section{Small example of computation}
We compute the antipode using the method described above for the
matroid $M$ on ground set $E = \{1,2,3,4\}$ given by circuits $\{123,124,34\}$. Note
that $M=M[\Gamma]$ is the cycle matroid of the following graph:
\[
\begin{tikzpicture}[scale=.5]
\tikzstyle{every node}=[circle,fill=black,inner sep=1pt]
\draw
(90:2) node (a){}
(210:2) node (b){}
(330:2) node (c){}
;
\draw
(a) edge node[fill=none,label={150:$1$}] {} (b)
edge node[fill=none,label={30:$2$}] {} (c)
(b) edge[out=-30,in=210] node[fill=none,label={-90:$4$}] {} (c)
(b) edge[out=30,in=150] node[fill=none,label={90:$3$}] {} (c)
;
\draw (180:3) node[fill=none,rectangle] {$\Gamma=$};
\end{tikzpicture}
\]
Recall that the signless Takeuchi term associated to an ordered set partition
$\pi=(\pi_1,\pi_2,\ldots,\pi_r)$ of $E$ is given by:
\[
T(\pi)=M|_{\pi_1}
\oplus (M/\pi_1)|_{\pi_2}
\oplus (M/(\pi_1\cup\pi_2))|_{\pi_3}
\oplus \cdots
\oplus (M/(\pi_1\cup\pi_2\cup\cdots\cup\pi_{r-1}))|_{\pi_r}.
\]
When one na\"ively computes the Takeuchi formula on this matroid $M$,
one must compute the Takeuchi terms of the $75$ ordered partitions of
$\{1,2,3,4\}$, group these by equality (i.e.\ remembering the labels
associated to terms), and finally apply cancellations. After doing so
for $M$, we have the antipode $S(M)$ given below:
\begin{align*}
S(M)
& =-T(1234)
\\
& +T(24|13)+T(23|14)+T(12|34)+T(14|23)+T(13|24)
\\
& -T(4|123)-T(3|124)-T(2|13|4)-T(1|23|4)-T(1|24|3)-T(2|14|3)
\\
& -T(2|34|1)-T(1|34|2)
\\
& +T(123|4)+T(124|3)
\\
& +T(34|12)
\\
& +T(2|134)+T(1|234).
\end{align*}
This yields the following grouping-free formula after identifying
isomorphism classes (the chosen labels are lexicographically minimal
in their isomorphism classes):
\[
S(M)
=
-T(1234)
+5\cdot T(12|34)
-8\cdot T(1|23|4)
+2\cdot T(123|4)
+T(34|12)
+2\cdot T(1|234).
\]
We now apply the involution method to compute $S(M)$. First we sort
the ordered partitions of $\{1,2,3,4\}$ by their $\delta$-sets and
indicate which ``merge'' below:
\begin{align*}
\delta^M_1 &= 34:
&&
\begin{matrix}
4|3|2|1 & \xrightarrow{\mathrm{merge}} & 34|2|1 \\
4|3|1|2 & \xrightarrow{\mathrm{merge}} & 34|1|2 \\
4|3|12 & \xrightarrow{\mathrm{merge}} & 34|12
\end{matrix}
\\\hline
\delta^M_1 &= 123:
&&
\begin{matrix}
3|2|1|4 & \xrightarrow{\mathrm{merge}} & 23|1|4 \\
2|1|3|4 & \xrightarrow{\mathrm{merge}} & 12|3|4 \\
3|1|2|4 & \xrightarrow{\mathrm{merge}} & 13|2|4
\end{matrix}
\\\hline
\delta^M_1 &= 124:
&&
\begin{matrix}
4|2|1|3 & \xrightarrow{\mathrm{merge}} & 24|1|3 \\
2|1|4|3 & \xrightarrow{\mathrm{merge}} & 12|4|3 \\
4|1|2|3 & \xrightarrow{\mathrm{merge}} & 14|2|3
\end{matrix}
\\\hline
\delta^M_1 & =134:
&&
\begin{matrix}
3|1|4|2 & \xrightarrow{\mathrm{merge}} & 13|4|2 \\
4|1|3|2 & \xrightarrow{\mathrm{merge}} & 14|3|2
\end{matrix}
\\\hline
\delta^M_1 & =234:
&&
\begin{matrix}
3|2|4|1 & \xrightarrow{\mathrm{merge}} & 23|4|1 \\
4|2|3|1 & \xrightarrow{\mathrm{merge}} & 24|3|1
\end{matrix}
\\\hline
\delta^M_1 & =1234:
&&
\begin{matrix}
4|2|13 & \xrightarrow{\mathrm{merge}} & 24|13 \\
4|1|23 & \xrightarrow{\mathrm{merge}} & 14|23 \\
3|2|14 & \xrightarrow{\mathrm{merge}} & 23|14 \\
3|1|24 & \xrightarrow{\mathrm{merge}} & 13|24 \\
2|1|34 & \xrightarrow{\mathrm{merge}} & 12|34
\end{matrix}
\end{align*}
Notice that the split-merge operation (indicated above only as merges)
has associated $18$ pairs of these ordered partitions with one
another. This accounts for $36$ of the ordered partitions. Thus the
fixed points of $\iota_1$ are not yet in bijection with the
cancellation-free terms of our formula. We proceed with the second
iteration of our rule, noting that $\delta_2^M=1234$ for all ordered
partitions $\pi$ of $1234$. Our first example below demonstrates the
importance of sorting into $\delta$-piles:
\begin{align*}
\delta^M_1 &= 34:
&&
\begin{matrix}
3|4|2|1 & \xrightarrow{\mathrm{merge}} & 3|4|12
\end{matrix}
\\\hline
\delta^M_1 &= 123:
&&
\begin{matrix}
2|3|1|4 & \xrightarrow{\mathrm{merge}} & 2|13|4 \\
1|3|2|4 & \xrightarrow{\mathrm{merge}} & 1|23|4 \\
3|12|4 & \xrightarrow{\mathrm{merge}} & 123|4
\end{matrix}
\\\hline
\delta^M_1 &= 124:
&&
\begin{matrix}
2|4|1|3 & \xrightarrow{\mathrm{merge}} & 2|14|3 \\
1|4|2|3 & \xrightarrow{\mathrm{merge}} & 1|24|3
\end{matrix}
\\\hline
\delta^M_1 &= 134:
&&
\begin{matrix}
1|4|3|2 & \xrightarrow{\mathrm{merge}} & 1|34|2 \\
4|13|2 & \xrightarrow{\mathrm{merge}} & 134|2
\end{matrix}
\\\hline
\delta^M_1 &= 234:
&&
\begin{matrix}
2|4|3|1 & \xrightarrow{\mathrm{merge}} & 2|34|1 \\
4|23|1 & \xrightarrow{\mathrm{merge}} & 234|1
\end{matrix}
\end{align*}
Note that the involution $\iota_2$ cannot be extended (all
$\delta^M_2=1234$). Moreover, the fixed points of $\iota_2$ yield the
cancellation-free antipode formula described above.
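As a quick consistency check (our own aside, not part of the paper), the bookkeeping above can be verified mechanically: of the $75$ ordered set partitions of $\{1,2,3,4\}$, the first round of merges pairs off $36$ and the second pairs off $20$, leaving $19$ fixed points, which matches the number of signed terms $1+5+8+2+1+2$ in the grouping-free formula.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def fubini(n):
    """Number of ordered set partitions of an n-element set."""
    if n == 0:
        return 1
    return sum(comb(n, k) * fubini(n - k) for k in range(1, n + 1))

assert fubini(4) == 75            # the 75 Takeuchi terms for |E| = 4

merged_first = 2 * 18             # 18 merge pairs in the first round
merged_second = 2 * 10            # 10 merge pairs in the second round
fixed_points = fubini(4) - merged_first - merged_second

# 19 fixed points, matching the signed terms 1+5+8+2+1+2 of the
# grouping-free formula for S(M)
assert fixed_points == 19 == 1 + 5 + 8 + 2 + 1 + 2
```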
\begin{remark}
Applying the above argument to the oriented matroid with
signed circuits
\[
\mathcal{C}=\{\pm(+,+,+,0),\pm(+,+,0,-),\pm(0,0,+,+)\}
\]
yields much the same computation. In particular, one obtains the
same cancellation-free formula for $S(M)$; it is the grouping-free
formula that changes. A difference here is that the terms
$T(123|4)$ and $T(124|3)$ no longer represent the same isomorphism
class of oriented matroids. Thus the matroid-minor Hopf algebra
of oriented matroids has a refined antipode.
\end{remark}
\subsection*{Abstract}
\end{center}
Using the effective potential, we study the one-loop renormalization of
a massive self-interacting scalar field at finite
temperature in flat manifolds with one or more compactified
spatial dimensions.
We prove that, owing to the compactification and finite temperature,
the renormalized physical parameters of the
theory (mass and coupling constant) acquire thermal and topological
contributions. In the case of one compactified spatial dimension at finite
temperature,
we find that the corrections to
the mass are positive, but those to the coupling
constant are negative. We discuss the possibility of triviality, i.e. that
the renormalized coupling constant goes to zero at some temperature or at some
radius of the compactified spatial dimension.
\end{titlepage}
\newpage
\baselineskip 18pt
\section{Introduction}
It is well known that one loop quantum corrections may alter the physical
parameters of an interacting quantum field theory. In general this alteration
is not of a form which can be absorbed by a simple redefinition of the
parameters, in the way that one can remove the ultraviolet divergences. A
simple example of this occurs in finite temperature field theory, where the
renormalized mass can become temperature dependent\cite{DJ}. Similarly, in flat
spacetime with compactification in one spatial direction, the mass can
depend upon the periodicity length in the compact direction\cite{FY,
Toms1,mass_gen}. This phenomenon is of particular interest in theories
with broken symmetry, as it allows both thermal and topological effects
to play a role in the breaking and restoration of symmetry\cite{sym}.
The aim of this paper is to discuss quantum field theory at finite
temperature in a spacetime where at least
one of the spatial dimensions is compactified. The particular model which
we adopt is a scalar field with quartic self-coupling. In particular, we
wish to investigate the dependence of the renormalized mass and coupling
constant upon the temperature and the size of the compactified dimension.
We will calculate the effective potential, which may be expressed in terms
of Epstein zeta functions. The ultraviolet divergences may be removed by
analytic regularization and renormalization.
The outline of this paper is the following. In Section II,
periodic boundary conditions are imposed upon
the fields (after a Wick rotation), and the temperature dependent one-loop
effective potential is calculated.
The theory is regularized using an analytic
continuation of the inhomogeneous Epstein zeta function. The renormalization
of $\lambda\varphi^{4}$ theory in this multiply
connected spacetime can be done
by introducing counterterms, and we show that the mass and coupling constant
counterterms are temperature and size independent.
In Section III, we assume finite temperature and only one compactified
spatial dimension. We explicitly calculate the corrections to both the mass
and the coupling constant in this case. We find that the corrections
to the mass are positive but those to the coupling constant are negative.
The results are discussed in Section IV. In particular, we discuss the
possibility of arranging for the renormalized coupling constant to vanish
(``triviality'') at some particular temperature or spatial size.
In this paper we use units in which $\hbar=c=k_{B}=1$.
\section{The effective potential of a scalar field at finite
temperature}
In this section we study a real massive scalar field at finite temperature,
where we assume that
the topology of the spacelike sections is that of a three-torus.
This kind of topology
allows two different types of scalar fields: one which is periodic in the
identified spatial coordinates, called an untwisted field, and one
which is antiperiodic in the identified spatial coordinates, called a twisted
scalar field\cite{Isham}.
To study twisted scalar fields, we cannot assume that the normalized vacuum
expectation value of the field is constant and the
effective potential cannot be used. For the sake of simplicity, in
this paper we will study only the untwisted scalar field.
The Lagrange density of the field is
\begin{equation}
{\cal L}= \frac{1}{2}\partial_{\mu}\varphi_{u}\partial^{\mu}\varphi_{u}
-\frac{1}{2}m^{2}_{0}~
\varphi_{u}^{2}-\frac{\lambda_{0}}{4!}\varphi^{4}_{u}\,
\end{equation}
where $\varphi_{u}(x)$ is the unrenormalized field,
and $m_{0}$ and $\lambda_{0}$
are the bare mass and coupling constant, respectively. We may rewrite the
Lagrange density in the usual form where the counterterms will appear
explicitly. Defining
\begin{equation}
\varphi_{u}(x)= (1+\delta Z)^{\frac{1}{2}}\varphi(x)
\end{equation}
\begin{equation}
m^{2}_{0}=(m^{2}+\delta m^{2}) (1+\delta Z )^{-1}
\end{equation}
\begin{equation}
\lambda_{0}= (\lambda+\delta\lambda)(1+\delta Z)^{-2}
\end{equation}
and substituting Eqs. (2), (3) and (4) into Eq. (1), we have
\begin{equation}
{\cal L}=\frac{1}{2}\partial_{\mu}\varphi\partial^{\mu}\varphi
-\frac{1}{2}m^{2}\varphi^{2}-
\frac{\lambda}{4!}\varphi^{4}+\frac{1}{2}\delta Z~\partial_{\mu}\varphi
\partial^{\mu}\varphi-
\frac{1}{2}\delta m^{2}\varphi^{2}-\frac{1}{4!}\delta\lambda~\varphi^{4}\,
\end{equation}
where $\delta Z$, $\delta m^{2}$, and $\delta\lambda$ are the wave function,
mass and coupling constant counterterms of the model. Throughout this paper
we will assume that $m^{2}>0$. In the one-loop
approximation, the effective potential at zero temperature in
uncompactified spacetime is given by \cite{CW,Jackiw}
\begin{eqnarray}
V(\varphi_{0})&=& \frac{1}{2}m^{2}\varphi^{2}_{0}+\frac{\lambda}{4!}
\varphi^{4}_{0}-\frac{1}{2}\delta m^{2}\varphi^{2}_{0}-
\frac{1}{4!}\delta\lambda~\varphi^{4}_{0}\ \nonumber \\&+&i\int d^{4}q
\frac{1}{(2\pi)^{4}}
\sum_{s=1}^{\infty}\frac{1}{2s}\biggl(\frac{1}{2}\lambda
\varphi^{2}_{0}\biggr)^{s}\frac{1}{(q^{2}-m^{2}+i\epsilon )^s}\, .
\label{eq:V}
\end{eqnarray}
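As an aside, the redefinitions (2)--(4) can be checked numerically term by term: substituting them into Eq. (1) must reproduce Eq. (5) identically. The following sketch (our illustration only, with arbitrary parameter values and the gradient $\partial_{\mu}\varphi$ represented by a single number) confirms this.

```python
import random

random.seed(0)
# arbitrary illustrative values for the renormalized parameters and counterterms
m2, lam, dZ, dm2, dlam = (random.uniform(0.1, 1.0) for _ in range(5))
phi = random.uniform(0.1, 1.0)    # the renormalized field ...
dphi = random.uniform(0.1, 1.0)   # ... and a stand-in for its gradient

# bare quantities, Eqs. (2)-(4)
phi_u = (1 + dZ) ** 0.5 * phi
dphi_u = (1 + dZ) ** 0.5 * dphi
m2_0 = (m2 + dm2) / (1 + dZ)
lam_0 = (lam + dlam) / (1 + dZ) ** 2

# Eq. (1) in terms of bare quantities ...
L_bare = 0.5 * dphi_u ** 2 - 0.5 * m2_0 * phi_u ** 2 - lam_0 / 24 * phi_u ** 4
# ... equals Eq. (5) in terms of renormalized quantities plus counterterms
L_ren = (0.5 * dphi ** 2 - 0.5 * m2 * phi ** 2 - lam / 24 * phi ** 4
         + 0.5 * dZ * dphi ** 2 - 0.5 * dm2 * phi ** 2 - dlam / 24 * phi ** 4)
assert abs(L_bare - L_ren) < 1e-12
```

The powers of $(1+\delta Z)$ introduced by the field redefinition cancel those in the bare parameters, so each term matches exactly.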
There is no difficulty in extending the above results to finite temperature
states. In this case, functional integrals will run over the fields that
satisfy periodic boundary conditions in Euclidean time. The effective action
can be defined as in the zero temperature case by a functional Legendre
transformation, and regularization and renormalization procedures follow
the same steps as in the zero temperature case. Similarly, compactification
is imposed by requiring that the field be periodic in the spatial directions.
It has been shown that for models where the spacelike sections are noncompact,
all the divergences present in the Feynman loops are temperature
independent\cite{KM,MOU}. Similarly, the renormalization of the zero
temperature
theory with at least one compactified spatial dimension has been investigated
by Toms\cite{Toms2} and by Birrell and Ford\cite{BF}.
These authors found that through the two-loop level, all of the counterterms
are independent of the spatial size. A more general discussion has been
given by Banach\cite{Banach}, who shows that topological identifications
will not introduce new counterterms.
Thus the divergences of the theory are independent of both
temperature and spatial size. If this were not the case, there would be
a danger that the renormalizability of the theory would be upset by
changing either the temperature or the spatial topology.
Let us assume that we have a massive scalar field at finite temperature
$ \beta^{-1}$, and that the spacelike section is compactified
with the topology of a three-torus of sides
$L_{1},L_{2}$ and $L_{3}$. Define
\begin{equation}
c^{2}=\frac{m^{2}}{4\pi^{2}\mu^{2}}
\end{equation}
\begin{equation}
(\beta\mu)^{2}=a^{-1}_{4}
\end{equation}
\begin{equation}
(L_{i}\mu)^{2}=a^{-1}_{i} ~~~ i=1,2,3 \quad ,
\end{equation}
where $\mu$ is a mass parameter introduced to keep the Epstein zeta function
a dimensionless quantity. The Euclidean effective potential becomes
\begin{eqnarray}
V_{E}(\beta,L_{1},L_{2},L_{3},\varphi_{0})&=&\frac{1}{2}m^{2}\varphi^{2}_{0}
+\frac{\lambda}{4!}
\varphi^{4}_{0}-\frac{1}{2}\delta m^{2}\varphi^{2}_{0}-\frac{1}{4!}
\delta\lambda\varphi^{4}_{0} \nonumber \\&+&\frac{1}{\beta L_{1}L_{2}L_{3}}
\sum^{\infty}_{s=1}\frac{(-1)^
{s+1}}{2s}\biggl(\frac{\lambda}{8\pi^{2}}\biggr)^{s}
\biggl(\frac{\varphi_{0}}{\mu}\biggr)^{2s}A^{c^{2}}_{4}
(s,a_{1},a_{2},a_{3},a_{4}) \, , \label{eq:VE}
\end{eqnarray}
where
\begin{equation}
A^{c^{2}}_{N}(s,a_{1},a_{2},..,a_{N})=\sum^{\infty}_{n_{1},n_{2}..n_{N}=-\infty}
(a_{1}n_{1}^{2}+a_{2}n^{2}_{2}+...+a_{N}n^{2}_{N}+c^{2})^{-s}
\end{equation}
is the inhomogeneous Epstein zeta function\cite{Epstein, Erdelyi}.
Note that in going from Eq.~(\ref{eq:V}) to Eq.~(\ref{eq:VE}), we have
first performed a Wick rotation so that the momenta are euclidean,
and then have replaced the momentum integrals by discrete sums.
In the case $c^{2}=0$,
Eq. (11) defines a Madelung sum in the theory of classical lattices. If we
impose the condition that the renormalized mass is zero, there is a
problem in defining the renormalized coupling constant. The way to
circumvented this difficulty is to impose the renormalizations conditions
not at $\varphi_{0}=0$ but at another point. For a careful
discussion, see the paper by Coleman and Weinberg\cite{CW}.
In this paper we assume $m^{2}>0$, and the above problem does not
appear.
In the limit $ L_{i}\rightarrow\infty $,
the expression given by Eq. (10) differs from the usual finite temperature
effective potential by terms that
are independent of $ \varphi_{0}$ \cite{Linde}. Because only derivatives
with respect to $\varphi_{0}$ correspond to
physically meaningful quantities, this does not pose any problems.
Let us define the modified inhomogeneous Epstein zeta function as
\begin{equation}
E^{c^{2}}_{N}(s,a_{1},a_{2},..a_{N})=\sum^{\infty}_{n_{1},n_{2},..n_{N}=1}
(a_{1}n^{2}_{1}+..+a_{N}n^{2}_{N}+c^{2})^{-s}
\end{equation}
A simple calculation gives
\begin{eqnarray}
A^{c^{2}}_{4}(s,a_{1},a_{2},a_{3},a_{4})&=&
16E^{c^{2}}_{4}(s,a_{1},a_{2},a_{3},a_{4})+8 E^{c^{2}}_{3}
(s,a_{1},a_{2},a_{4})+8 E^{c^{2}}_{3}(s,a_{1},a_{3},a_{4}) \nonumber \\&+&8
E^{c^{2}}_{3}(s,a_{2},a_{3},a_{4})+
8 E^{c^{2}}_{3}(s,a_{1},a_{2},a_{3})+
4 E^{c^{2}}_{2}(s,a_{1},a_{2}) \nonumber \\&+&4
E^{c^{2}}_{2}(s,a_{1},a_{3})+4 E^{c^{2}}_{2}(s,a_{1},a_{4})+
4 E^{c^{2}}_{2}(s,a_{2},a_{3}) \nonumber \\&+&4
E^{c^{2}}_{2}(s,a_{2},a_{4})+ 4 E^{c^{2}}_{2}(s,a_{3},a_{4})
+ 2 E^{c^{2}}_{1}(s,a_{1}) \nonumber \\&+&2
E^{c^{2}}_{1}(s,a_{2})
+ 2 E^{c^{2}}_{1}(s,a_{3})+ 2 E^{c^{2}}_{1}(s,a_{4})+
c^{-2s}.
\end{eqnarray}
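This decomposition simply groups the terms of $A^{c^{2}}_{N}$ according to which of the $n_{i}$ vanish, each nonzero $n_{i}$ contributing a factor of $2$ from the two signs. As a numerical cross-check, the $N=2$ analogue, $A^{c^{2}}_{2}=4E^{c^{2}}_{2}+2E^{c^{2}}_{1}(s,a_{1})+2E^{c^{2}}_{1}(s,a_{2})+c^{-2s}$, can be verified with truncated sums (a sketch; the cutoff and parameter values are arbitrary illustrative choices, and $s=3$ is taken so that both sums converge, $Re(s)>N/2$):

```python
import itertools

# Truncated sums: E_N runs over positive integers only, A_N over all integers.
# For matched cutoffs the decomposition holds exactly, term by term.

def E(s, a, c2, nmax=40):
    return sum((sum(ai * n * n for ai, n in zip(a, ns)) + c2) ** (-s)
               for ns in itertools.product(range(1, nmax + 1), repeat=len(a)))

def A(s, a, c2, nmax=40):
    return sum((sum(ai * n * n for ai, n in zip(a, ns)) + c2) ** (-s)
               for ns in itertools.product(range(-nmax, nmax + 1), repeat=len(a)))

a1, a2, c2, s = 1.3, 0.7, 0.5, 3.0
lhs = A(s, (a1, a2), c2)
rhs = (4 * E(s, (a1, a2), c2) + 2 * E(s, (a1,), c2)
       + 2 * E(s, (a2,), c2) + c2 ** (-s))
```

The same counting of sign choices and vanishing indices reproduces the coefficients $16, 8, 4, 2, 1$ in the $N=4$ expansion above.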
Defining
the new coupling constant and a dimensionless vacuum expectation value of the
field by
\begin{equation}
g=\frac{\lambda}{8\pi^{2}}
\end{equation}
\begin{equation}
\frac{\varphi_{0}}{\mu}=\phi \, ,
\end{equation}
the finite temperature one-loop effective potential is given by
\begin{eqnarray}
V_{E}(\beta,L_{1},L_{2},L_{3},\phi)&=& \mu^{4}\biggl(2\pi^{2}
c^{2}\phi^{2}
+\frac{1}{3}\pi^{2} g \phi^{4}
-\frac{1}{2\mu^{2}}{\delta m^{2}}\phi^{2}-\frac{1}{4!}
\delta\lambda\phi^{4}\biggr) \nonumber \\&+&
\frac{1}{\beta L_{1}L_{2}L_{3}}\sum^{\infty}_{s=1}
\frac{(-1)^{s+1}}{2s}
g^{s}\phi^{2s}A^{c^{2}}_{4}(s,a_{1},a_{2},a_{3},a_{4}) \, .
\end{eqnarray}
It is possible to regularize the one-loop effective potential by introducing
a cutoff in the Euclidean region, but we prefer to use
the method
of analytic extension.
Let us regard each term of the series in $s$ in
the one-loop effective potential $ V_{E}(\beta,L_{1},L_{2},L_{3}
,\varphi_{0})$ as an analytic function of $s$,
initially defined only in an open connected set.
To render the discussion more general, let us discuss the
process of the analytic continuation of the modified inhomogeneous Epstein
zeta function given by Eq. (12). For $ Re(s) > \frac{N}{2}$, the series $E^{c^{2}}
_{N}(s,a_{1},a_{2},..a_{N}) $ converges and represents an analytic function
of $ s$, so $ Re(s) > \frac{N}{2} $ is the largest possible domain of
convergence of the series. This means that in Eq. (10) only the terms
$ s=1 $ and $ s=2$ are divergent. The $s=1$ term arises from the self-energy
diagram (the one-loop process with two external lines), and the $s=2$
term arises from the one-loop correction to the scattering amplitude
(the one-loop diagram with four external lines).
After regularization, we may think of the first two terms in the sum in
Eq.~(\ref{eq:VE}) as being evaluated not at $s=1$ and $s=2$, but rather
at $s=1+\alpha$ and $s=2+\alpha$, respectively, where $\alpha$ is a
complex parameter which vanishes in the limit in which the regularization
is removed.
Using a Mellin transform, it is possible to continue analytically $ E^{c^{2}}
_{N}(s,a_{1},..,a_{N})$ from $ Re(s) >
\frac{N}{2} $ to $ Re(s) \leq \frac{N}{2}$,
although isolated singularities will appear in the closed region
$ Re(s) \leq \frac{N}
{2}$ at the points
\begin{equation}
s=\frac{N}{2},\, \frac{N-1}{2},\, \cdots \, \frac{1}{2},\,
-\frac{2l+1}{2},~~l \in N.
\end{equation}
At these points, the analytic extension of $ E^{c^{2}}_{N}(s,a_{1}..,a_{N}) $
has first order poles, with residues $ Res\biggl[E^{c^{2}}_{N}(s,a_{1},..,a_
{N}),s_{i}\biggr]$. The exact
expression of the residue at the points in which we are interested is
\begin{equation}
Res \biggl[E^{c^{2}}_{N}(s,a_{1},...,a_{N}),\frac{j}{2}\biggr]=
\frac{(-1)^{N-j}\pi^{\frac{j}{2}}}
{2^{N}\Gamma(\frac{j}{2})}\sum^{\frac{N-j}{2}}
_{k=0}\frac{(-1)^{k}}{k!}c^{2k}\pi^{k} A(2k+j)\,
\end{equation}
where
\begin{equation}
A(k)=\sum_{\{i_{1},..,i_{k}\}}\sqrt{a_{i_{1}}...a_{i_{k}}}\
,
\end{equation}
and $\sum_{\{i_{1},..,i_{k}\}} $ denotes the sum over all possible choices of
the $ i_{1},..,i_{k}$ among $1,\ldots,N$ (for $k=0$ the sum is set equal to one)
\cite{Kirstein}. An appropriate choice of $\delta m^{2}$ and $\delta\lambda$
will remove the poles at $s=1$ and $s=2$, respectively.
The idea of analytically continuing
expressions and subtracting the poles was
exploited by various authors\cite{BGD,Speer}.
In the method used by Bollini, Giambiagi and
Domingues\cite{BGD}, a complex parameter
was introduced as an exponent of the denominator of
the loop expressions, and
the integrals are well-defined analytic functions of this parameter for
$ s > s_{0} $, for some $ s_{0} $. Performing an analytic extension of
this expression to $ s < s_{0} $, poles appear in the analytic
extension, and the final expression becomes finite after the subtraction
of the poles. It is clear that our regularization is exactly
a discrete version of the Bollini et al.\ analytic regularization.
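The pole-subtraction scheme can be illustrated in miniature with the Euler gamma function, whose analytic continuation has a first-order pole at $s=0$ with unit residue; subtracting the pole leaves the finite part $-\gamma$ (a toy sketch, independent of the specific zeta functions above):

```python
from math import gamma

EULER_GAMMA = 0.5772156649015329

# Gamma(s) = 1/s - EULER_GAMMA + O(s) near s = 0: the analytic continuation
# has a simple pole; subtracting it leaves a finite "renormalized" value,
# just as the delta m^2 and delta lambda counterterms subtract the poles
# of the Epstein zeta function at s = 1 and s = 2.
s = 1e-6
finite_part = gamma(s) - 1.0 / s
```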
In our problem, the renormalization conditions are given by
\begin{equation}
\frac{\partial^{2}}{\partial\phi^{2}}V_{E}(\beta,L_{1},L_{2},L_{3},\phi)
|_{\phi=0}=
4\pi^{2}\mu^{4}c^{2} = \mu^2 m^2
\end{equation} and
\begin{equation}
\frac{\partial^{4}}{\partial\phi^{4}}V_{E}(\beta,L_{1},L_{2},L_{3},\phi)
|_{\phi=0}=8\pi^{2}\mu^{4}g = \mu^4 \lambda \, .
\end{equation}
Substituting Eq. (16) into Eqs. (20) and (21), and choosing the counterterm
$\delta m^{2}$ to cancel the pole contribution at $s=1$ of the analytic
extension of the inhomogeneous Epstein zeta function and $\delta\lambda$
to cancel the pole contribution at $s=2$, we have
\begin{equation}
\delta m^{2}=\frac{g}{\beta L_{1}L_{2}L_{3}\mu^{2}}
\frac{1}{s-1}
Res\biggl[A^{c^{2}}_{4}(s,a_{1},a_{2},a_{3},a_{4}),s=1\biggr]
\end{equation}
and
\begin{equation}
\delta\lambda=-24\frac{g^{2}}{\beta L_{1}L_{2}L_{3}\mu^{4}}
\frac{1}
{s-2} Res\biggl[A^{c^{2}}_{4}(s,
a_{1},a_{2},a_{3},a_{4}),s=2\biggr].
\end{equation}
By substituting Eqs. (18) and (19) into Eqs. (22) and (23), it
is straightforward to
show that both $\delta\lambda$ and $\delta m^{2}$ are temperature and size
independent. This shows that in the one-loop approximation,
the counterterms of the model are independent
of the parameters which are associated with the nontrivial topology. Hence
if the model is renormalizable at zero temperature with certain
counterterms, it is also renormalizable at finite temperature with exactly
the same counterterms.
\section{Topological and Thermal Corrections to the Mass and Coupling
Constant}
In this section we will investigate the
thermal and topological corrections to the renormalized mass and
coupling constant in the case where there is compactification in only
one spatial direction. Set $L=L_3$ and take the limit in which
$L_1 \rightarrow \infty$ and $L_2 \rightarrow \infty$.
The finite temperature one-loop
effective potential in this case is given by:
\begin{equation}
V_{E}(\beta,L,\phi)=\mu^{4}\biggl(2\pi^{2}c^{2}\phi^{2}+
\frac{1}{3}\pi^{2}g
\phi^{4}-\frac{1}{2\mu^{2}}\delta m^{2}\phi^{2}-
\frac{1}{4!}\delta\lambda\phi^{4}\biggr)+
G_{E}(\beta,L,\phi)
\end{equation}
where
\begin{equation}
G_{E}(\beta,L,\phi)=\mu^{4}\sqrt{a_{3}a_{4}}\sum^{\infty}_{s=1}
\frac{(-1)^{s+1}}{2s}g^{s}\phi^{2s}\int d^{2}k A^{M^{2}}_{2}(s,a_{3},a_{4}),
\end{equation}
In the above equation
$$
M^{2}=(k^{1})^{2}+(k^{2})^{2}+c^{2},
$$ where
$$ k^{1}=\frac{q^{1}}{2\pi\mu}$$ and
$$ k^{2}=\frac{q^{2}}{2\pi\mu}$$
are dimensionless quantities.
Using the identity
\begin{equation}
\int \frac{d^{d}l}{(l^{2}+a^{2})^{b}}=
\frac{\pi^{d/2}}{\Gamma(b)}\Gamma(b-\frac{d}{2})a^{d-2b},
\end{equation}
we can perform the integrations over the continuous momenta and write
\begin{equation}
G_{E}(a_{3},a_{4},\phi)=
\mu^{4}\sqrt{a_{3}a_{4}}\pi\sum^{\infty}_{s=0}\frac{(-1)^{s+2}}{2s+2}
g^{s+1}\phi^{2s+2}
\frac{\Gamma(s)}{\Gamma(s+1)}A^{c^{2}}_{2}(s,a_{3},a_{4})\, .
\label{eq:GE}
\end{equation}
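The master integral used in this step can be checked numerically for $d=2$ (a sketch with an elementary composite Simpson rule; the cutoff values are arbitrary choices, and convergence requires $b>d/2$):

```python
from math import gamma, pi

def simpson(f, a, b, n=20000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3.0

def lhs(a, b, cutoff=200.0):
    # d = 2: the angular integration gives d^2 l -> 2 pi l dl; the tail
    # beyond `cutoff` is negligible for b >= 2
    return simpson(lambda l: 2.0 * pi * l / (l * l + a * a) ** b, 0.0, cutoff)

def rhs(a, b, d=2):
    # Gamma-function form of the identity
    return pi ** (d / 2) / gamma(b) * gamma(b - d / 2) * a ** (d - 2 * b)
```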
Note that the poles which were at $s=1$ and $s=2$ in Eq.~(\ref{eq:VE})
are now located at $s=0$ and $s=1$, respectively, in Eq.~(\ref{eq:GE}).
As before, we regularize Eq.~(\ref{eq:GE}) by analytically continuing
the summand around the points $s=0$ and $s=1$.
The formal correction to the squared mass is given by
\begin{equation}
\Delta' m^2 = \left(\frac{d^2 G_E}{d \varphi_0^2} \right)_{s=0}
= \pi g \mu^2\, \sqrt{a_3 a_4}\; \lim_{s\rightarrow 0}\; \bigl[ \Gamma(s)
A_2^{c^2}(s,a_3,a_4) \bigr] \,. \label{eq:formasscorr}
\end{equation}
The $s\rightarrow 0$ limit of the Epstein zeta function is evaluated in the
Appendix. The result, Eq.~(\ref{eq:A2rep2a}), may be used to separate
$\Delta' m^2$ into its infinite and finite parts:
\begin{equation}
\Delta' m^2 = \delta m^2 + \Delta m^2 \,,
\end{equation}
where $\delta m^2$ is the counterterm to be absorbed by mass renormalization,
and $\Delta m^2$ is the finite correction to the mass. The latter is defined
so as to vanish in the limit of zero temperature in noncompactified space.
Explicitly, we have
\begin{equation}
\delta m^2 = g \mu^2 \biggl(-\frac{\pi c^2}{s} + 2\pi \ln c \biggr)\,,
\end{equation}
and
\begin{equation}
\Delta m^2 = \frac{\lambda}{4\pi^2} \Biggl[ m^2 \int_1^\infty
{{(t^2-1)^{{1\over 2}} dt} \over {{\rm e}^{m\beta t} -1}} \,
- \frac{\pi}{\beta L} \sum_{n =-\infty}^{\infty}
\ln \Bigl(1 - e^{-2\pi L \beta^{-1} \sqrt{n^2 +m^2
\beta^2/4\pi^2}}\Bigr)\Biggr]
\, . \label{eq:masscorr}
\end{equation}
There is an equivalent expression obtained by interchange of $\beta$ and $L$.
Note that $\Delta m^2 \geq 0$ for all choices of the parameters.
The first term in Eq.~(\ref{eq:masscorr}) is the purely thermal
correction to the mass\cite{DJ}. In the limit that
$L \rightarrow \infty$, the second
term vanishes, and this correction is all that survives. The second term
becomes the purely topological correction\cite{Toms1}
in the limit of zero temperature:
\begin{equation}
\Delta m^2 \sim -\frac{\lambda}{2\pi L}\, \int_0^\infty dx \,
\ln \Bigl(1 - e^{-2\pi L \sqrt{x^2 +m^2/4\pi^2}}\Bigr) =
\frac{\lambda}{4\pi^2} m^2 \int_1^\infty
{{(t^2-1)^{{1\over 2}} dt} \over {{\rm e}^{mL t} -1}} \, .
\end{equation}
(An integration by parts was performed to obtain the last form.)
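The equivalence of the two forms can be confirmed numerically after dividing out the common factor $\lambda$ (a sketch; the values of $m$ and $L$ are arbitrary illustrative choices in units of $\mu$, and the infinite ranges are truncated where the integrands are negligible):

```python
from math import exp, log, pi, sqrt

def simpson(f, a, b, n=8000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3.0

m, L = 1.0, 2.0  # illustrative values

# -(1/2 pi L) int_0^inf dx ln(1 - e^{-2 pi L sqrt(x^2 + m^2/4 pi^2)});
# the integrand decays like e^{-4 pi x}, so [0, 10] suffices here
left = simpson(lambda x: -log(1.0 - exp(-2.0 * pi * L
               * sqrt(x * x + m * m / (4.0 * pi * pi)))) / (2.0 * pi * L),
               0.0, 10.0)

# (m^2/4 pi^2) int_1^inf dt sqrt(t^2 - 1)/(e^{m L t} - 1), truncated at t = 40
right = simpson(lambda t: m * m * sqrt(t * t - 1.0)
                / (4.0 * pi * pi * (exp(m * L * t) - 1.0)), 1.0, 40.0)
```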
The formal correction to the coupling constant $\lambda$ due to finite
temperature and/or spatial compactification is given by
\begin{equation}
\Delta' \lambda = \left(\frac{d^4 G_E}{d \varphi_0^4} \right)_{s=1}
= -\frac{3\lambda^2}{32\pi^3}\, \sqrt{a_3 a_4} \,
\lim_{s\rightarrow 1} A_2^{c^2}(s,a_3,a_4) \,.
\end{equation}
This quantity is, of course, ill-defined because of a pole term at $s=1$
which needs to be isolated and removed. In the Appendix, it is shown that
\begin{equation}
A_2^{c^2}(s,a_3,a_4) \sim
{\pi \over {\sqrt{a_3 a_4}}} \left[ {1 \over {s-1}} - \ln c^2 + \cdots \right]
+ F_1(a_3,a_4) \, , \qquad s \rightarrow 1 \,.
\end{equation}
The pole term is absorbed by the $\delta \lambda$ counterterm when we let
\begin{equation}
\delta \lambda = -\frac{3\lambda^2}{32\pi^2}
\left( {1 \over {s-1}} - \ln c^2 \right) \,.
\end{equation}
Then the finite correction to the coupling constant is given by
\begin{equation}
\Delta \lambda = -\frac{3\lambda^2}{32\pi^3}\, \sqrt{a_3 a_4}\,
F_1(a_3,a_4) \,. \label{eq:delcc1}
\end{equation}
Here
\begin{equation}
F_1(a_3,a_4) =\frac{1}{{\sqrt{a_3 a_4}}}\left[f(a_3) + f(a_4) +
R(a_3,a_4) \right] \,, \label{eq:F1}
\end{equation}
where
\begin{equation}
f(a_3) = 4\pi \int_1^\infty {{(t^2-1)^{-\frac{1}{2}} dt}
\over {{\rm e}^{2\pi c t/ \sqrt{a_3}} -1}} \,,
\end{equation}
and $R(a_3,a_4)$ is given by
\begin{eqnarray}
R(a_3,a_4) &=& 2\pi \sqrt{a_3} \sum^\infty_{n=-\infty}
\frac{1}{\sqrt{a_3 n^2+ c^2}\left(e^{2\pi\sqrt{(a_3 n^2+ c^2)/a_4}}-1\right)}
-f(a_4) \nonumber \\
&=& 2\pi \sqrt{a_4} \sum^\infty_{n=-\infty}
\frac{1}{\sqrt{a_4 n^2+ c^2}\left(e^{2\pi\sqrt{(a_4 n^2+ c^2)/a_3}}-1\right)}
-f(a_3) \,. \label{eq:Rrep}
\end{eqnarray}
Note that the thermal and topological corrections to the coupling constant
are always negative:
\begin{equation}
\Delta \lambda < 0 \,,
\end{equation}
which follows, for example, from Eqs.~(\ref{eq:F1}-\ref{eq:Rrep}),
where it is apparent that
${F_1}(a_3,a_4) > 0$. The three terms on the right-hand side of
Eq.~(\ref{eq:delcc1}) can each be given a physical interpretation. The
$f(a_3)$ term is the purely topological term. It is the correction to the
coupling constant at zero temperature in a space with one compact dimension.
Similarly, the $f(a_4)$ term is the purely thermal term, which is the
correction to the coupling constant at finite temperature in uncompactified
space. The $R$ term represents a coupling between thermal and topological
effects which is present only at finite temperature in compactified space.
It is of interest to examine the small mass limit of this correction
to the coupling constant. In the limit that $m \rightarrow 0$, the dominant
contribution to $\Delta \lambda$ comes from the $n=0$ term in $R(a_3,a_4)$,
and we obtain
\begin{equation}
\Delta \lambda \sim -\frac{3\lambda^2}{8m^2 \beta L}, \qquad m \rightarrow 0.
\label{eq:smallm}
\end{equation}
This result seems to indicate that the one-loop correction to the coupling
constant can be arbitrarily negative for small masses. However, one must
be careful about the limits of validity of the one-loop approximation.
This issue will be discussed in more detail in the next section.
Note that the coupling
constant correction described by Eq.~(\ref{eq:smallm}) is nontrivial
only at finite temperature in compactified spacetime, i.e. when both
$L$ and $\beta$ are finite.
Let us now consider the purely thermal correction in more detail. Let
$L \rightarrow \infty$, so that $a_3 \rightarrow 0$. Then
\begin{equation}
\Delta \lambda = -\frac{3\lambda^2}{32\pi^3} f(a_4)
= -\frac{3\lambda^2}{8\pi^2}\,
\int_1^\infty {{(t^2-1)^{-\frac{1}{2}} dt}
\over {{\rm e}^{\beta m t} -1}} \,.
\label{eq:delcc2}
\end{equation}
In the low-temperature limit ($\beta \rightarrow \infty$), we have
\begin{equation}
\Delta \lambda \approx -\frac{3\lambda^2}{8\pi^2}\,
\int_1^\infty (t^2-1)^{-\frac{1}{2}} {\rm e}^{- \beta m t} dt
= -\frac{3\lambda^2}{8\pi^2} K_0(\beta m) \approx
-\frac{3\lambda^2}{8\pi^2} \sqrt{\frac{\pi}{2\beta m}}\, {\rm e}^{-\beta m} \,.
\label{eq:delcc3}
\end{equation}
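Both the Bessel-function representation and its large-argument asymptotic can be checked numerically (a sketch; the substitution $t=\cosh u$ removes the endpoint singularity, and $x=\beta m=5$ is an arbitrary choice deep in the low-temperature regime):

```python
from math import cosh, exp, pi, sqrt

def simpson(f, a, b, n=6000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3.0

x = 5.0  # x = beta*m

# int_1^inf (t^2-1)^{-1/2} e^{-x t} dt = int_0^inf e^{-x cosh u} du = K_0(x);
# for x = 5 the integrand is negligible beyond u = 6
bessel = simpson(lambda u: exp(-x * cosh(u)), 0.0, 6.0)
asym = sqrt(pi / (2.0 * x)) * exp(-x)  # leading large-x asymptotic
```

At $x=5$ the leading asymptotic already agrees with $K_{0}(x)$ to within a few percent.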
Similarly, in the high temperature limit ($\beta \rightarrow 0$), we
may use
\begin{equation}
\frac{1}{{\rm e}^{\beta m t} -1} \approx \frac{1}{\beta m t}
= \frac{T}{m t}
\end{equation}
to write
\begin{equation}
\Delta \lambda \sim -\frac{3\lambda^2 T}{16\pi m}\,,
\qquad T \rightarrow \infty \,. \label{eq:highT}
\end{equation}
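This high-temperature scaling rests on $\beta m \int_1^\infty (t^2-1)^{-1/2}\,dt/({\rm e}^{\beta m t}-1) \rightarrow \int_1^\infty dt/(t\sqrt{t^2-1}) = \pi/2$ as $\beta m \rightarrow 0$, which can be checked numerically (a sketch; $x=\beta m=0.01$ is an arbitrary small value, and the overflow guard is a purely numerical device):

```python
from math import cosh, exp, pi

def simpson(f, a, b, n=20000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3.0

def I(x):
    # int_1^inf (t^2-1)^{-1/2} dt/(e^{x t}-1) via t = cosh(u), truncated
    # where the exponential has killed the integrand
    def integrand(u):
        z = x * cosh(u)
        return 1.0 / (exp(z) - 1.0) if z < 700.0 else 0.0
    return simpson(integrand, 0.0, 20.0)

x = 0.01  # beta*m << 1, i.e. T >> m
scaled = x * I(x)  # approaches pi/2 from below as x -> 0
```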
Again, this correction would seem to be large in the case that $T \gg m$.
In this section we have found that in a space with one compact spatial
dimension and/or at finite temperature, the one-loop correction to the
squared mass is always positive, whereas that to the coupling constant
is always negative. In the limits that either the temperature vanishes,
or that the size of the compact dimension becomes large, we recover
the results of previous authors for the mass correction, $\Delta m^2$.
To our knowledge, the results presented here for the coupling constant
correction, $\Delta \lambda$, have not been given before. The only
reference of which we are aware which discusses either thermal or
topological corrections to coupling constants is Higuchi and
Parker\cite{HP}. Of course, all of the results of this section also
apply to the case of a spacetime with periodicity in two spatial directions,
but at zero temperature. One simply replaces $L$ and $\beta$ in the above
formulas by $L_1$ and $L_2$, the two periodicity lengths.
\section{Discussion}
In this paper, we have calculated the mass correction, $\Delta m^2$,
and the coupling constant correction, $\Delta \lambda$, due to both finite
temperature and compactification in one spatial direction. We found that
$\Delta m^2 \ge 0$, whereas $\Delta \lambda \le 0$. One of the primary
reasons for interest in $\Delta m^2 $ is its role in symmetry restoration.
It had been noted by previous authors that $\Delta m^2 \ge 0$ when one
has either finite temperature in uncompactified space, or compactification
at zero temperature. Thus in both cases, the effect of the radiative
correction is to restore broken symmetries. Our results show that this effect
also holds when one has a finite temperature state in compactified space.
The extension of these results to models in spacetime dimensions other
than four and to other model field theories is undertaken in a separate
paper\cite{MS}.
An interesting feature of the negative coupling constant correction
is that it tends to make the theory less strongly coupled. One is then
tempted to raise the question of whether it would even be possible to
cause the net coupling constant to vanish, i.e., to achieve triviality
at some particular temperature or compactification length. We have defined
$\lambda$ to denote the renormalized coupling constant at zero temperature
in uncompactified space. Thus the effective coupling constant when either
$L$ or $\beta$ is finite is
\begin{equation}
\lambda' = \lambda + \Delta \lambda \, .
\end{equation}
The one-loop correction, $\Delta \lambda$, is of order $\lambda^2$, so it
is not clear that one can make it equal to $\lambda$ in magnitude before
the one-loop approximation fails. The crucial issue here is just what are
the limits of validity of this approximation. If it is simply that one
needs $\lambda \ll 1$, then this does not prevent us from arranging a
situation where $\lambda' =0$. This is apparent from Eqs.~(\ref{eq:smallm})
or (\ref{eq:highT}). However, it is not clear that the true expansion
parameter is $\lambda$ itself. It could be $\lambda$ multiplied
by a dimensionless function of $m$, $L$, and $T$. If, for example, the
limit of validity of Eq.~(\ref{eq:highT}) is when $\lambda T/m \ll 1$,
then it is not possible to use this relation at the point where $\lambda'$
would vanish. To settle this question, it would be necessary to have a
reliable estimate of the magnitude of the higher order corrections.
\section{Acknowledgement}
We would like to thank Prof. A. Vilenkin for several helpful
discussions. N.F. Svaiter would like to acknowledge the
hospitality of the Institute of Cosmology, Tufts University, where part of this
work was carried out. This work was supported by Conselho Nacional de
Desenvolvimento Cientifico e Tecnologico do Brazil (CNPq) and by National
Science Foundation Grant PHY-9208805.
\section{Introduction}
Recent studies of slow light and its properties are motivated by the
enhancement of light-matter interactions at smaller group
velocities, which leads to the enhancement of optical
gain~\cite{Dowling:1994-1896:JAP} and of the electro-optic
effect~\cite{Roussey:2006-241110:APL}, a growth of the spontaneous
emission rate~\cite{Suzuki:1995-570:JOSB}, and a more efficient
nonlinear optical response
\cite{Scalora:1994-1368:PRL,Martorell:1997-702:APL,Soljacic:2002-2052:JOSB,Chen:2004-3353:OE}.
The concept of slow light is also useful for realizing all-optical
routers and optical buffers for pulse storage and synchronization.
Different concepts and schemes for realizing the slow-light
propagation in various media and structures have been suggested so
far. Although the most dramatic reduction of the group velocity of
light has been achieved in atomic media, based on
electromagnetically-induced transparency, such media are not
suitable for high-bit-rate optical systems due to their high
dispersion~\cite{Khurgin:2005-1062:JOSB}. In contrast, alternative
realizations of the slow-light propagation in high-index-contrast
coupled resonator structures can be very useful for creating optical
buffers operating at hundreds of
gigabits/s~\cite{Khurgin:2005-1062:JOSB}. Recently, ultra-compact
coupled-resonator optical buffers on a silicon chip have been
experimentally fabricated with a large fractional group delay
exceeding 10 bits, achieved for bit rates as high as 20
gigabits/s~\cite{Xia:2007-65:NATPHOT}.
\begin{figure}[t]
\centering\includegraphics[width=11cm]{Mingaleev1new.eps}
\caption{Frequencies of localized cavity modes created by changing
the radius $r_{\rm def}$ of (a) a single defect rod, and (b) two
neighboring defect rods in the photonic crystal created by a
triangular lattice of rods with $\varepsilon=12$ and radius
$r=0.25a$ in air, $a$ is the lattice spacing. (c) Dispersion of the
W1 photonic-crystal waveguide created by removing a row of rods in
the same photonic crystal. Results are calculated using eleven
maximally localized Wannier functions \cite{Busch:2003-R1233:JPCM}
(blue lines) in an excellent agreement with the supercell
plane-waves method \cite{mpb} (red circles).} \label{fig:fig1}
\end{figure}
Compact coupled-resonator optical systems can be realized on the
basis of photonic-crystal waveguides, for which the slow-light
propagation with the smallest achieved group velocity reaching
$c/1000$ has been experimentally demonstrated
\cite{Notomi:2001-253902:PRL,Jacobsen:2005-7861:OE,Vlasov:2005-65:NAT,Gersen:2005-073903:PRL}.
Because of this success, interest in slow-light applications
based on photonic-crystal waveguides is growing rapidly, attracting
attention to the problems of designing waveguide
bends~\cite{Assefa:2006-745:OL}, couplers~\cite{Vlasov:2006-50:OL},
and other types of functional optical devices which would
efficiently operate in the slow-light regime.
Dynamic control of the slow-light propagation can be realized by
direct tuning of the group velocity, either
thermo-electrically~\cite{Vlasov:2005-65:NAT} or
all-optically~\cite{Mok:2006-775:NATPHYS}. The scales of the
corresponding optical devices are, however, sub-optimal.
Potentially, they can be reduced dramatically in the
waveguide-cavity structures with tunable high-quality
resonances~\cite{Fan:2002-908:APL}. Using a nonlinear cavity, active
switching and other functionalities of such devices can be realized
by shifting the resonance frequency all-optically, e.g. by changing
the power of the incoming light in order to achieve the bistable
transmission. Several successful experimental realizations of
low-threshold light switching in such structures have been recently
reported~\cite{Almeida:2004-1081:NAT,Barclay:2005-801:OE,Notomi:2005-2678:OE,
Tanabe:2005-151112:APL,Priem:2005-9623:OE,Uesugi:2006-377:OE,Yang:2007-0703132:arXiv}.
However, none of those demonstrated devices could operate in the
slow-light regime.
Making waveguide-cavity structures applicable and useful
for the dynamic control of the slow-light propagation is a
nontrivial task, due to the strong extrinsic scattering losses caused by
most types of side-coupled cavities (including those introduced by
fabrication imperfections) at operating frequencies close to the
edges of the propagation
bands~\cite{Hughes:2005-033903:PRL,Mingaleev:2006-046603:PRE}.
In this paper, we demonstrate how to employ the recently suggested
geometry-based enhancement of the resonance quality
factor~\cite{Mingaleev:2006-046603:PRE} to design efficient
waveguide-cavity structures for ultralow-threshold bistability and
all-optical switching in the slow-light regime.
The outline of the paper is as follows. Section 2 describes the
model we adopt, using the single-defect geometry shown in
Fig.~\ref{fig:fig2} as a concrete example, and demonstrates why such
type of geometry exhibits the resonance quality factor $Q \sim v_g$
which scales linearly with the group velocity of light, $v_{g}$, and
thus vanishes at \emph{both} propagation band edges. Section 3
describes the contrasting mechanism of slow light scattering which
leads to $Q \sim 1/v_{g}$ growing indefinitely at one of the
propagation band edges. Correspondingly, the power threshold $P_{\rm
th}$ required for all-optical light switching decreases as $P_{\rm
th} \sim Q^{-2} \sim v_{g}^2$ in this case. We illustrate this
mechanism with the example of the double-defect geometry shown in
Fig.~\ref{fig:fig3}. Section 4 concludes the paper.
\section{Model and the basic parameters}
\begin{figure}[t]
\centering\includegraphics[width=12cm]{Mingaleev2new.eps}
\caption{Single-defect waveguide-cavity structure with the radius of
the defect rod $r_{\rm{def}}$: (a) Electric field at the resonance
reflection for $r_{\rm{def}}=0.102a$; (b) Transmission spectra for
different values of $r_{\rm{def}}$: $0.1a$ (black), $0.101a$ (blue),
$0.102a$ (red), $0.1025a$ (green). For convenience, in addition to
the light frequency on the bottom axis, we indicate on the top axis
the complementary group velocity, $v_{g}(\omega)$, of the
waveguide's guided mode.} \label{fig:fig2}
\end{figure}
To illustrate our basic idea, we consider the simplest case of a
two-dimensional photonic crystal (PhC) created by a triangular
lattice of dielectric rods in air. The rods are made of either Si or
GaAs ($\varepsilon=12$) with the radius $r=0.25a$, $a$ is the
lattice spacing. Such photonic crystal has two large band gaps for
the E-polarized light (electric field parallel to the rods), and we
use the first gap between the frequencies $\omega a/2 \pi c =
0.2440$ and $0.3705$. By reducing the radius of {\em a single rod},
we create a monopole-like localized defect mode in this bandgap [see
Fig.~\ref{fig:fig1}(a)]. Reducing the radius of {\em two neighboring
rods} allows to create two localized modes, one with odd and another
with even field symmetry [see Fig.~\ref{fig:fig1}(b)]. Removing a
row of rods creates the so-called W1-waveguide which guides light
with the frequencies $\omega(k)$ determined by the guided mode wave
vector $k$ as shown in Fig.~\ref{fig:fig1}(c). The group velocity
$v_{g} = d\omega/dk$ of the guided mode vanishes at the edge $k=0$
(with $\omega a/2 \pi c = 0.3168$) of the propagation band. At small
wave vectors $(ka/2\pi) < 0.1$, it can be approximated as: $v_{g}/c
\approx 1.8155 (ka/2\pi) - 0.94776 (ka/2\pi)^2$. All numerical
results presented in this paper are obtained by employing the
Wannier functions approach~\cite{Busch:2003-R1233:JPCM} using eleven
maximally localized Wannier functions; they have also been checked
to be in an excellent agreement with the results based on the
plane-waves calculations~\cite{mpb}.
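For quick estimates, the quadratic fit quoted above can be wrapped in a small helper (a sketch; it is valid only for $(ka/2\pi)<0.1$, per the text):

```python
def vg_over_c(k):
    # k is the guided-mode wave vector in units of 2*pi/a; quadratic fit
    # quoted in the text, valid for k < 0.1
    return 1.8155 * k - 0.94776 * k * k

# The group velocity vanishes linearly at the band edge k = 0 and grows
# monotonically across the fitted range:
values = [vg_over_c(k / 100.0) for k in range(0, 11)]
```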
First, we emphasize that in the general case light transmission at the
band edges of PhC waveguide structures is vanishing, due to
vanishing group velocity $v_{g}$. This effect is responsible, in
particular, for strong (scaling as $1/v_{g}$) extrinsic scattering
loss of slow light in PhC waveguides due to random fabrication
imperfections such as surface roughness and
disorder~\cite{Hughes:2005-033903:PRL}.
To illustrate, in Fig.~\ref{fig:fig2} we present the slow-light
transmission spectra for the waveguide-cavity structure based on the
W1-waveguide coupled to a cavity created by a single defect rod with
the radius $r_{\rm{def}}$. In what follows we refer to this
structure as a single-defect geometry. Changing $r_{\rm{def}}$, we
shift the resonance frequency $\omega_{\rm{res}}$ of this structure
from the middle of the propagation band, at $r_{\rm{def}}=0$, to the
edge ($k=0$) at $r_{\rm{def}} \approx 0.103a$. As was mentioned
above, for such a generic structure the light transmission vanishes
at the propagation band edges (at $\omega a/2 \pi c = 0.3168$ in our
case). This effect can be understood by analyzing an effective
discrete model of the waveguide-cavity structures
\cite{Mingaleev:2006-046603:PRE,Miroshnichenko:2005-036626:PRE}. In
such a discrete model the transmission coefficient can be calculated
as $T(\omega)=\sigma^2(\omega)/[\sigma^2(\omega)+1]$, where the
function $\sigma(\omega)$ is determined by the structure geometry.
For a high-quality resonance with $\omega_{\rm{res}}$ lying deeply
inside the propagation band $\sigma(\omega) \simeq
\sigma_{\rm{Lorenz}}(\omega) \equiv 2Q(1-\omega/\omega_{\rm{res}})$
is determined by the resonance quality factor $Q$. However,
$\sigma(\omega)$ changes substantially near the band edges $k=0$ and
$k=\pi/s$, where $s$ is the waveguide period ($s=a$ for the W1
waveguide). For most of the structures, $\sigma(\omega)$ can be
approximated as $\sigma(\omega) \simeq \sin (ks)
\sigma_{\rm{Lorenz}}(\omega) \sim v_{g}
\sigma_{\rm{Lorenz}}(\omega)$ and, therefore, the transmission
coefficient {\em vanishes at the band edges}, where $v_{g} \to 0$.
This effect can be understood as an effective reduction of the
quality factor $Q \sim v_{g}$ in the slow-light regime, clearly seen
in Fig.~\ref{fig:fig2}. Moreover, when the resonance frequency
approaches the band edge, the maximally achievable (in the
slow-light region) transmission vanishes too (see
Fig.~\ref{fig:fig2}). Therefore, this structure cannot be employed
for slow-light switching, because optical bistability in such
structures becomes possible
only~\cite{Mingaleev:2006-046603:PRE,Yanik:2003-2739:APL} when the linear
transmission exceeds 75\%.
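Within this discrete model, the band-edge suppression can be illustrated in a few lines (a sketch: the Lorentzian factor $\sigma_{\rm Lorenz}$ and the wave number are treated here as independent knobs, although in the real structure they are tied together by the waveguide dispersion):

```python
import math

def transmission(k, s, sigma_lorenz):
    # T = sigma^2/(sigma^2 + 1) with the generic band-edge prefactor
    # sigma = sin(k s) * sigma_Lorenz, sigma_Lorenz = 2 Q (1 - omega/omega_res)
    sigma = math.sin(k * s) * sigma_lorenz
    return sigma ** 2 / (sigma ** 2 + 1.0)

# Off resonance (|sigma_Lorenz| large) the structure is transparent in the
# middle of the band but becomes opaque at the band edges k -> 0 and k -> pi/s:
mid_band = transmission(math.pi / 2.0, 1.0, 20.0)   # ~ 0.9975
band_edge = transmission(1e-4, 1.0, 20.0)           # ~ 4e-6
```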
\begin{figure}[t]
\centering\includegraphics[width=12cm]{Mingaleev3new.eps} \caption{
Double-defect waveguide-cavity structure with the cavity created by
two defect rods with the radius $r_{\rm{def}}$: (a) Electric field
at the resonance reflection for $r_{\rm{def}}=0.121a$; (b)
Transmission spectra for different values of $r_{\rm{def}}$:
$0.119a$ (black), $0.120a$ (blue), $0.121a$ (red), $0.1213a$
(green).} \label{fig:fig3}
\end{figure}
\section{Quality factor enhancement and slow-light bistability}
We found that the effect of vanishing light transmission at the band
edges of PhC waveguides is very common, and it appears in a variety
of commonly used resonant structures. However, by exploiting the discrete
nature of PhCs, it is possible to design waveguide-cavity
structures for which $\sigma(\omega) \simeq \tan (ks/2)
\sigma_{\rm{Lorenz}}(\omega)$ grows in inverse proportion to
the vanishing group velocity, $\sigma(\omega) \sim (1/v_{g})
\sigma_{\rm{Lorenz}}(\omega)$, at the band edge
$k=\pi/s$~\cite{Mingaleev:2006-046603:PRE}. At this band edge $T\to
1$ and the effective quality factor of the resonance should grow as
$Q \sim 1/v_{g}$ when the resonant frequency approaches the band
edge. This increase in $Q$ leads to significant lowering of the
bistability threshold for all-optical light switching in the
slow-light regime. Such a structure can be designed by placing a
side-coupled cavity between two nearest defects of a PhC waveguide
assuming that all the defect modes and the cavity mode have the same
symmetry. In Ref.~\cite{Mingaleev:2006-046603:PRE}, we illustrated
these results for the so-called coupled-resonator waveguides made by
removing every second rod; this example, however, is somewhat
artificial and has limited applicability.
Our extensive analysis shows that the geometry engineering can be
employed effectively to achieve the slow-light switching in
different types of PhC structures, including the important case of
the W1 waveguide and the propagation band edge at $k=0$. In a
general case, this approach is based on placing a side-coupled
cavity with an appropriate symmetry of the cavity mode into special
locations along the PhC waveguide. These locations and the mode
symmetry should be chosen in such a way that the overlap between the
cavity mode and guided mode at the band edge vanishes and,
consequently, scattering of light by the resonator at this band edge
vanishes too. As we already indicated, in the case when the
waveguide's defect modes have the same symmetry as the cavity mode,
such a vanishing of the modes' overlap is only possible at the band
edge $k=\pi/s$, leading to $\sigma(\omega) \simeq \tan (ks/2)
\sigma_{\rm{Lorenz}}(\omega)$. However, as can be shown by directly
extending the results of the discrete model~
\cite{Miroshnichenko:2005-036626:PRE,Mingaleev:2006-046603:PRE}, in
the case of the opposite (even-odd) symmetry of the waveguide defect
modes and the side-coupled cavity mode, such a vanishing of the
modes' overlap becomes possible at the band edge $k=0$, leading to
$\sigma(\omega) \simeq \cot (ks/2) \sigma_{\rm{Lorenz}}(\omega)$.
Therefore, in this case both $\sigma(\omega)$ and the resonance
quality factor grow in inverse proportion to the vanishing group
velocity at the band edge $k=0$: $\sigma(\omega) \sim (1/v_{g})
\sigma_{\rm{Lorenz}}(\omega)$; and, therefore, we obtain $Q \sim
1/v_{g}$.
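The contrast with the generic case can be sketched in the same discrete-model language (again a toy evaluation, with the Lorentzian factor and the wave number treated as independent for illustration):

```python
import math

def transmission_odd(k, s, sigma_lorenz):
    # Opposite (even-odd) mode symmetry: sigma = cot(k s / 2) * sigma_Lorenz,
    # which diverges as 2/(k s) at the band edge k -> 0
    sigma = (math.cos(k * s / 2.0) / math.sin(k * s / 2.0)) * sigma_lorenz
    return sigma ** 2 / (sigma ** 2 + 1.0)

# Even off resonance, transmission becomes perfect as k -> 0, while the
# effective quality factor grows like cot(k s/2) ~ 1/v_g near that edge:
near_edge = transmission_odd(1e-4, 1.0, 0.2)         # ~ 1
opposite_edge = transmission_odd(math.pi, 1.0, 0.2)  # ~ 0
```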
\begin{figure}[t]
\centering\includegraphics[width=11cm]{Mingaleev4.eps} \caption{ (a)
Quality factor $Q$ vs. group velocity $v_{g}$ at resonance for the
structure shown in Fig.~\ref{fig:fig3}; (b) Nonlinear bistable
transmission in the same structure at the frequencies with 80\% of
linear light transmission vs. the incoming light power for different
values of $r_{\rm{def}}$: $0.119a$ (black), $0.120a$ (blue),
$0.121a$ (red), $0.1214a$ (green); (c) Switch-off bistability
threshold $P_{\rm{th}}$ vs. the group velocity $v_{g}$ at resonance
for the same structure.} \label{fig:fig4}
\end{figure}
This is exactly the case of the waveguide-cavity structure presented
in Fig.~\ref{fig:fig3}, which utilizes the odd-symmetry mode of a
double-defect cavity. Indeed, the waveguide is created from
even-symmetry defect modes, and thus to achieve a high-$Q$ resonance in the
slow-light regime at the band edge $k=0$ we should employ a
side-coupled cavity mode with odd symmetry. The simplest cavity
of this type is created by reducing the radius $r_{\rm{def}}$ of
{\em two neighboring rods}, as shown in Fig.~\ref{fig:fig1}(b). The
range of the resonance frequencies of such a structure occupies
almost the whole propagation band from its upper boundary at
$r_{\rm{def}}=0$ to its edge $k=0$ at $r_{\rm{def}} \approx
0.1215a$. In Fig.~\ref{fig:fig3} we present the linear transmission
spectra for several values of the radius $r_{\rm{def}}$. As can be seen,
in all cases the light transmission remains perfect at the band
edge $k=0$ due to the decoupling of the guided and cavity
modes~\cite{Mingaleev:2006-046603:PRE,Miroshnichenko:2005-036626:PRE}.
The resonance quality factor $Q$ grows when the resonance frequency
approaches the band edge. The numerically obtained dependence $Q(v_{g})
\sim 1/v_{g}$ is shown in Fig.~\ref{fig:fig4}(a), and it is in
excellent agreement with the theoretical predictions. Since the
bistability threshold power of the incoming light in
waveguide-cavity structures scales as $P_{\rm th} \sim 1/Q^2$
\cite{Mingaleev:2006-046603:PRE}, we should observe a rapid vanishing
of the threshold, $P_{\rm th} \sim v_{g}^2$, when the resonance frequency
approaches the band edge. Indeed, the direct numerical calculations summarized in
Figs.~\ref{fig:fig4}(b,c) confirm this prediction. These results were
calculated with the straightforward nonlinear extension of the
Wannier function approach \cite{Busch:2003-R1233:JPCM} using 11
maximally localized Wannier functions.
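As a simple illustration of how the exponent in $Q \sim 1/v_{g}$ can be extracted from numerical data, the sketch below performs a log-log fit on synthetic $(v_{g}, Q)$ pairs generated from the theoretical law itself; the values are illustrative assumptions and are not the Wannier-function simulation results of this paper.

```python
import numpy as np

# Synthetic illustrative data obeying Q = C / v_g (NOT the actual
# simulation results shown in Fig. 4a).
v_g = np.array([0.01, 0.02, 0.05, 0.1, 0.2])  # group velocity (arbitrary units)
Q = 5.0e3 / v_g                               # quality factor

# Fit log Q = log C - p * log v_g; the slope of the fit gives the exponent p.
slope, log_C = np.polyfit(np.log(v_g), np.log(Q), 1)
print(round(-slope, 3))  # expected exponent: 1.0
```

A fit of this kind applied to the actual $(v_{g}, Q)$ data would quantify how closely the numerics follow the predicted $Q \sim 1/v_{g}$ scaling.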
\section{Conclusions}
We have analyzed the resonant transmission and bistability of slow
light in a photonic-crystal waveguide coupled to a nonlinear cavity.
We have shown how to achieve the perfect transmission near the edges
of the propagation band by adjusting either the cavity location
relative to the waveguide or the mode symmetry. We emphasize that
such properly designed photonic structures are suitable for the
observation of high-$Q$ resonant (with the quality factor $Q\sim
1/v_{\rm g}$) and ultralow-threshold bistable (with threshold power
$P_{\rm th} \sim v_{\rm g}^2$) transmission of slow light with the
small group velocity $v_{\rm g}$ at the resonance frequency. Thus,
engineering the geometry and mode symmetry of the photonic-crystal
structures is a useful tool for developing novel concepts of
all-optical switching devices operating in the slow-light regime.
\vspace{5mm}
{\bf Acknowledgments.} The work was supported by the Australian
Research Council and the National Academy of Sciences of Ukraine
through the Fundamental Research Program.
\end{document}
\section{INTRODUCTION}
Characteristics of flare-productive sunspot groups have been
studied by many authors
(Zirin \& Tanaka 1973; Hagyard et al. 1984; Kurokawa 1987;
Zirin \& Liggett 1987; Tanaka 1991; and Kurokawa 1991, 1996).
In these studies, the authors pointed out that
a sheared magnetic field configuration and
a $\delta$-type sunspot-group configuration are
essential characteristics for strong flare activity.
From statistical studies, however, Shi \& Wang (1994) found
that 23\% of $\delta$-type sunspot groups produced X-class
flares, and Chen et al. (1994) found that even a strong
magnetic shear with a shear angle of more than 80 degrees is not a
sufficient condition for producing flares.
On the other hand, Tanaka (1991) suggested that an
emergence of a twisted magnetic flux rope produces strong flares.
Kurokawa (1987, 1991, 1996) also concluded that strong flare
activities occur whenever a magnetic shear is rapidly
developed by a successive emergence of twisted magnetic loops.
Zirin \& Liggett (1987) showed that big flares occurred in
$\delta$-groups which emerged as continents.
These studies suggest that the flare productivity of a sunspot
group depends on the formation process of the $\delta$-type
configuration or the magnetic shear.
These processes can be examined by measuring sunspot
proper motions.
A large active region observed in March 1989 (NOAA 5395)
was a highly flare-productive sunspot group with
$\delta$-type configuration.
The region produced about two hundred flares including ten
X-class flares.
All X-class flares were observed in three particular regions.
Wang et al. (1991) studied this region using
magnetograms, Dopplergrams and monochromatic images
mainly obtained at Big Bear Solar Observatory (BBSO), and
Zhang (1995) also studied it using magnetograms obtained
at Huairou Solar Observing Station (HSO).
We measured proper motions of sunspots in NOAA 5395 by using
H$\alpha$ images obtained with the 60-cm Domeless Solar
Telescope (DST) at Hida Observatory, Kyoto University
in order to study the formation process of $\delta$-type
configuration and magnetic shear in the region.
In this study we demonstrate that twisted magnetic flux tubes
successively emerged at the preceding edge of the great sunspot
group and that they played an essential role in the production
of strong flare activities of NOAA 5395.
We describe the procedure of data analysis in Sec. 2 and show
the obtained results of sunspot proper motions in Sec. 3.
In Sec. 4 we discuss the cause of peculiar sunspot motions
found at the leading edge of the active region and propose a
morphological model of an emerging twisted magnetic flux
bundle which is essential to the strong flare activity.
Section 5 is devoted to our conclusions.
\section{DATA ANALYSIS}
\subsection{Observational Data}
The active region NOAA 5395 rotated onto the visible hemisphere
on March 6 and rotated off behind the western limb on March 19.
During this period, except on March 7, 13 and 17, we continuously
observed the evolution of the region with DST.
Many H$\alpha$ monochromatic images were sequentially obtained
with the Zeiss Lyot filter of 0.25 \AA $\;$ passband and a Nikon
motor-drive film camera of DST at 10 wavelengths ({\sl i.e.}
H$\alpha \pm 0.0$ \AA, $\pm 0.3$ \AA, $\pm 0.5$ \AA,
$\pm 0.8$ \AA, $\pm 1.2$ \AA $\;$ and $-5.0$ \AA).
In this study we mainly used H$\alpha -5.0$ \AA $\;$ images
to measure sunspot proper motions.
We digitized the film densities of the H$\alpha -5.0$ \AA $\;$
images by a film scanner and analyzed them by using IDL
(Interactive Data Language) software on a personal computer and
UNIX.
Times and numbers of measured images are listed in Table 1.
One example of the H$\alpha -5.0$ \AA $\;$ images is shown
in Fig. 1.
We carefully identified individual sunspot umbrae of the group
in successive images and numbered them as shown in Fig. 1b,
where the signs F and f stand for the {\sl following} magnetic
polarity, P and p for the {\sl preceding} polarity,
and the sign ``:'' indicates an uncertain identification.
We used the magnetograph data supplied by Okayama Astrophysical
Observatory (OAO), BBSO and HSO for the determination
of magnetic polarities of the sunspots.
\subsection{Calculation of Latitude and Carrington Longitude of
Sunspot Umbrae}
The field of view of the H$\alpha$ filtergraph of DST is a circle
of about 390 $''$ diameter.
The heliocentric coordinate of the center of the circle is recorded
for each observation.
Referring to the center of the circle, we determined the heliocentric
coordinate of each individual sunspot umbra in the IDL pictures.
The umbra position was defined as the center of gravity position
of the umbra, which is determined by drawing the density contour
of each umbra in an IDL picture.
By means of coordinate transformation we finally obtained
heliographic latitude and Carrington longitude of each umbra.
During this observation the pointing accuracy of DST was
about 10 $''$, and this limits the accuracy of absolute positions
of umbrae.
Then we defined the center of gravity of a triangle made by
three umbrae F1, F3 and P1 as a reference point and
determined the relative positions of all sunspot umbrae
referred to this point.
The mean heliographic latitude of the reference point averaged
over all measured images is $33.^\circ 6$ N and its standard
deviation is 1.0 degree.
By excluding the images whose deviation exceeds $1 \sigma$,
we get $33.^\circ5$ N as the average latitude of the reference
point.
The longitudinal positions are significantly
influenced by the differential rotation.
Tang (1981) gave the differential rotation rate for
sunspot groups at high ($28^\circ \sim 40^\circ$) latitudes
by the formula
\begin{equation}
\Omega = 14.37 -2.60 \sin^2 \theta \quad{\rm deg/day},
\end{equation}
where $\Omega$ is the sidereal rotation rate.
We would like to know the synodic rotation rate.
According to Zirin (1989), the synodic rotation rate ($\omega$)
and the sidereal rate ($\Omega$) differ by the
Earth's orbital velocity of 0.9865 deg/day;
\begin{equation}
\omega = \Omega - 0.9865 \quad {\rm deg/day}.
\end{equation}
We thus find that the synodic rotation rate at the latitude of
$33.^\circ 5$ N is 12.59 deg/day.
On the other hand, the heliographic longitude of the disk
center ($L_0$) decreased by the rate of 13.18 deg/day.
Considering the difference between the above two values
(i.e. 12.59 deg/day and 13.18 deg/day),
the Carrington longitude of the reference point should
change by the rate of 0.59 deg/day due to the
differential rotation.
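The numerical values quoted above follow directly from Eqs. (1) and (2); the short check below reproduces them for the reference-point latitude of $33.^\circ 5$ N and the observed $L_0$ rate of 13.18 deg/day.

```python
import math

theta = math.radians(33.5)                       # latitude of the reference point
omega_sid = 14.37 - 2.60 * math.sin(theta)**2    # sidereal rate, Eq. (1), Tang (1981)
omega_syn = omega_sid - 0.9865                   # synodic rate, Eq. (2), Zirin (1989)
drift = 13.18 - omega_syn                        # drift relative to the L_0 rate

print(round(omega_syn, 2), round(drift, 2))      # 12.59 0.59
```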
We first determined the heliographic latitude and
Carrington longitude of the reference point for the image
of 00:00 UT of March 11, which are $33.^\circ 5$ N (latitude)
and $254.^\circ 5$ L (longitude) respectively.
Then for all other images we calculated the Carrington longitude
of the reference point by applying the correction of
0.59 deg/day.
For each sunspot umbra, we adopted a different correction rate
for the differential rotation corresponding to its latitude.
After these corrections for the differential rotation, we
calculated velocities of proper motions of sunspot umbrae.
Hereafter, the term ``umbra velocity'' in this paper
means the velocity of proper motion.
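As an illustration of this velocity calculation, the sketch below converts a heliographic displacement into a proper-motion speed in km/s; the solar radius value and the example displacement are illustrative assumptions, not measurements from this study.

```python
import math

R_SUN_KM = 6.96e5  # solar radius in km (illustrative constant)

def proper_motion_speed(dlat_deg, dlon_deg, lat_deg, dt_s):
    """Speed (km/s) from a heliographic displacement over time dt_s,
    in the small-displacement approximation on the solar surface."""
    dy = math.radians(dlat_deg) * R_SUN_KM
    dx = math.radians(dlon_deg) * R_SUN_KM * math.cos(math.radians(lat_deg))
    return math.hypot(dx, dy) / dt_s

# Example: 0.5 deg of longitude per day at latitude 33.5 deg
v = proper_motion_speed(0.0, 0.5, 33.5, 86400.0)
print(round(v, 3))  # 0.059 km/s
```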
\section{RESULTS}
We noticed that the motion of F1 is quite different from those of
other big umbrae (F3, F4 and P1);
the umbra F1 moved southward or southwestward, while the others
moved eastward or northwestward.
The umbrae F3, F4 and P1 moved eastward at velocities of
$0.03 \; \sim \; 0.08$ km/s.
The umbra F1 moved southwestward at a velocity of
about 0.05 km/s and suddenly accelerated
after 12 March, reaching 0.11 km/s on March 14.
These velocities are approximately the same as those of
previous works, in which sunspot proper motions were
measured in various active regions
(van Driel-Gesztelyi \& Petrovay 1990,
van Driel-Gesztelyi {\sl et al.} 1993,
van Driel-Gesztelyi {\sl et al.} 1994,
Schmieder {\sl et al.} 1994, and
Herdiwijaya, Makita, \& Anwar 1997).
According to Herdiwijaya et al. (1997),
the longitudinal velocities of sunspots vary with
their Z\"{u}rich classes;
the average velocity of GHJ classes (i.e. decaying phase)
is smaller than that of AB classes (i.e. emerging phase).
The {\sl following} polarity sunspots of Z\"{u}rich AB classes
show the average longitudinal
velocity of 88 m/s.
On the other hand, the average longitudinal velocities
of EF classes (i.e. well-developed phase) and GHJ classes are
18 m/s and 21 m/s respectively.
The average longitudinal velocity of the {\sl following} polarity spot F1
was 57 m/s for the period from 9 to 15 March.
This means that the magnetic flux tube forming sunspot F1 was
still emerging from the subphotospheric layer
during our observation period.
We also noticed conspicuous motions of small sunspot umbrae around
the leading sunspot F1.
They successively appeared at the south and southeast edge of
F1 and moved eastward.
We examined their motions in detail.
By calculating the displacements of these small umbrae
between successive frames of images in each day,
we determined their velocities of proper motions.
We also calculated their velocities relative to F1
(relative velocities, hereafter).
The results obtained only for images of good seeing are
presented in Fig. 2, where
arrows indicate the relative velocities of the small umbrae.
On March 9 (Fig. 2a), small umbrae near F1
(p1 and p2) moved southward and
others (p3, p4 and p5) moved eastward.
The relative velocities of these umbrae were all 0.15 km/s.
On March 10 (Fig. 2b), small umbrae (p:6, f:1) near F1
moved southward at relative velocities of
$0.15 \; {\rm and} \; 0.30$ km/s, respectively.
A $p$-polarity umbra (p7) moved southeastward
at a relative velocity of $0.10$ km/s.
A $p$-polarity umbra (p9), which moved toward P1,
had a relative velocity of 0.20 km/s.
A $p$-polarity umbra (p8) moved eastward
at a relative velocity of 0.10 km/s.
Small umbrae at the north side of F1 moved eastward along
the channel between F1 and F2.
On March 11 (Fig. 2c), small umbrae at the east and
the southeast sides of F1 moved eastward or northeastward
at relative velocities of $0.30 \; \sim \; 0.40$ km/s.
Small umbrae at the north side and at the west side of F1
($f$-polarities) also moved eastward or northeastward.
On March 12 (Fig. 2d), small umbrae at the east and
southeast side of F1 also moved eastward at much higher
relative velocities of $0.30 \; \sim \; 0.60$ km/s.
Small umbrae ($f$-polarities) at the north side
and at the northwest side of F1
also moved east or northeast along the north edge of F1.
We also calculated the relative velocities of
proper motions of the small umbrae by using all available images
for each day.
The average velocities obtained in each day are given in Fig. 3.
It is found that the two results of Fig. 2 and Fig. 3 agree with
each other.
From these results we summarize main characteristics of
proper motions of small umbrae as follows.
At the south and southeast edges of F1, small umbrae successively
emerged and moved toward P1 through the east side of F1.
Appearing near F1, they first moved southward,
turned to the east and finally moved northeastward to approach P1.
It means they moved along curved trajectory like
an outflowing vortex.
Such emergences and motions of small sunspots
continued with their speed accelerated
from 9 through 12 of March.
Spot P1 became bigger in its umbra area
from 10 through 12, March.
Wang et al. (1991) also found that $p$-polarity umbrae
at the east side of F1 merged into P1.
We found that spot P2 newly formed on 14 March at
the region where small emerging sunspots or emerging magnetic
flux successively converged from 12 through 14 March (see Fig. 1b).
Some small umbrae emerging at the southwest edge of F1
(i.e. f6, f10, f11, f13, f14, and f15)
first moved northward at the west side of F1 and then moved
eastward or northeastward along the north edge of F1
from March 10 through 12.
They converged into a few larger umbrae at the northeast
side of F1 on March 12.
By March 14 the northeast part of F1 had been separated from
its leading part and moved northeastward to form the F5 umbra.
\section{DISCUSSION}
\subsection{A Schematic Model of Emerging Twisted Flux Tubes}
The most remarkable result found in the previous section is the
vortex-like motions of small sunspot umbrae around the leading
sunspot F1.
Pairs of small umbrae of different magnetic polarities
successively emerged
at the leading edge of F1 (i.e. at the south
or southwest edges of F1) from 9 through 12, March.
These umbrae moved clockwise toward the southwest of P1,
where growth of P1 was observed from 10 through 12, March
and P2 was formed on March 14.
At the east side of F1, small umbrae, most
of which had $p$-polarity, moved clockwise and formed P2.
At the west and north sides of F1, small umbrae, all of $f$-polarity,
moved anticlockwise and formed F5 by merging with some parts
of decaying F1.
These motions are schematically summarized in Fig. 4a.
Notice vortex-like lines and structures of small
sunspot umbrae and penumbrae surrounding F1 spot in Fig. 1a.
Wang et al. (1991) also found the same
vortex-like motions of magnetic features in this region
using magnetograph data.
For the explanation of these peculiar vortex-like motions,
we propose a schematic model of emerging twisted flux tubes
given in Fig. 4b.
The model is characterized by the successive emergence of
twisted flux tubes coiling around the main trunk of
flux tube F1.
The bundle of flux tubes is twisted as a whole, and each individual
flux tube is also twisted.
As Parker (1955) pointed out, magnetic buoyancy
raises the system of the twisted magnetic-flux bundle.
The successive emergence of the inclined and twisted flux bundle
can explain the vortex-like motions of small sunspots
both at the southeast and the northwest sides of F1
as shown in Fig. 4a.
In Fig. 4b, the planes $P_{t_1}$ and $P_{t_2}$ show the
positions of the photosphere relative to the
emerging flux tubes on different days.
The cross sections of flux tubes on each photospheric plane
correspond to sunspots.
As the coiling loops successively emerge, the small
$p$-polarity and $f$-polarity sunspots can be seen to move
clockwise and anticlockwise
at the southeast and northwest sides of F1, respectively.
Notice that the small $p$-polarity sunspots seen on the plane $P_{t_1}$
all converge into the sunspot P2 on $P_{t_2}$,
and that the $f$-polarity ones converge into the sunspot F5.
These features are fully consistent with the observations
summarized at the beginning of this section.
In addition, it is interesting to note another evolutionary
feature of F1.
As described in Section 3, the southwestward motion
of the umbra F1 suddenly accelerated after 12 March.
We observed the drastic change of the shape of F1
at the same time (see Fig. 1b);
F1 suddenly elongated and started to decay after 12 March.
We also observed the rapid growth of P2 and F5 umbrae
between 12 and 14 March.
This time coincidence between the decay of F1 and the
growth of P2 and F5 strongly suggests a close causal relation
between them:
the umbra F1 was stable until 12 March because it was bound
by the coiling flux tubes.
Once the coiling flux tubes had emerged above the
photosphere after 12 March, however, the trunk of
flux tube F1 suddenly expanded, or elongated, and
started to decay.
Furthermore, it is worthwhile here to mention the
differential emergence of magnetic loops.
Suppose the gas pressure and magnetic field strength
in the tube are $p_{\rm i}$ and $B_{\rm i}$ and the surrounding
external pressure at the same height is $p_{\rm e}$.
Then the lateral total pressure balance implies
\begin{equation}
p_{\rm e} = p_{\rm i} + \frac{B_{\rm i}^2}{8 \pi}.
\end{equation}
If the temperature ($T$) is uniform and the corresponding
densities are $\rho_{\rm i}, \rho_{\rm e}$, Equation (3) becomes
\begin{equation}
\frac{k_{\rm B} T \rho_{\rm e}}{m} = \frac{k_{\rm B} T \rho_{\rm i}}{m} +
\frac{B_{\rm i}^2}{8 \pi},
\end{equation}
where we have used the perfect gas law
$p=k_{\rm B} \rho T/m$ ($k_{\rm B}$ is the Boltzmann constant and $m$ is
the mean particle mass).
The plasma in the tube feels a resultant buoyancy force
of $(\rho_{\rm e} - \rho_{\rm i})g$ per unit volume, which tends
to make the tube rise.
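A small step left implicit above: solving Eq. (4) for the density deficit makes the dependence of the buoyancy on the field strength explicit,
\begin{equation}
\rho_{\rm e} - \rho_{\rm i} = \frac{m B_{\rm i}^2}{8 \pi k_{\rm B} T},
\qquad
(\rho_{\rm e} - \rho_{\rm i})\, g = \frac{m g B_{\rm i}^2}{8 \pi k_{\rm B} T},
\end{equation}
so, at a common temperature, a tube with a stronger internal field $B_{\rm i}$ experiences a larger buoyancy force and rises faster.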
The trunk and coiling loops in Fig. 4b are considered to have different
magnetic field strengths ($B_{\rm i}$) and different densities
($\rho_{\rm i}$).
Different densities give different buoyancy forces, so the system
rises differentially.
Difference of the shapes of flux tubes (i.e. different
curvatures of flux tubes) may also make the system rise
differentially.
Despite this consideration, a schematic drawing like
Fig. 4b may give the impression that the whole system
emerges at a single speed, because we do not depict how
differentially the loops rise.
We need more observational data with better temporal and spatial
resolution before we can discuss more realistic
structures of successively emerging flux loops.
This is an important subject to be addressed in future work.
Figure 4b, however, clearly shows the essential characteristic of our model,
i.e. the emergence of a twisted magnetic flux bundle.
How differentially the flux loops rise does not influence our conclusion.
\subsection{Flare Activity in NOAA 5395}
According to Solar Geophysical Data (SGD),
about two hundred flares occurred in this active region.
Some examples of flares observed with DST at Hida
Observatory are shown in Fig. 5.
The positions of the flares given in SGD were transformed
to our coordinate system (heliographic latitude and
Carrington longitude), and
those of the flares stronger than C-class in
soft X-ray importance were plotted in Fig. 6, where
the positions of the flares are given for each day
(from 15:00 UT of the previous day
to 15:00 UT of the day).
The relative velocities of the small sunspot umbrae around F1
are also shown in Fig. 6.
Notice that almost all flares occurred around F1 from
9 through 12 March.
On 15 March, by which time the twisted flux bundle had already
emerged completely, no strong flare occurred around
F1 (Fig. 6).
On the contrary, some strong flares occurred along the
sheared neutral line formed between P2 and F5 on 14 and
15 March.
Especially noticeable in these two figures (Figs. 5 and 6)
is that, from 9 through 12 March,
many M-class and C-class flares occurred at the leading edge
and at the east of F1, where the vortex-like motions of the small
umbrae were found.
The north side of F1 was also flare-productive, especially on 12 March.
These results strongly suggest that
the vortex motions of small spots around F1 or the
successive emergence of twisted magnetic flux tubes
should be sources of the strong flare activities in this region.
\subsection{Flare Energy Build-up Process}
The vortex-like motions of small satellite spots around F1
were explained by the emergence of the twisted magnetic flux
bundle (Sec. 4.1),
and strong flare activity preferentially occurred around this
emerging flux region (Sec. 4.2).
This evidence strongly suggests the existence of a causal
relation between the emergence of twisted magnetic flux tubes
and the strong flare activity.
With the successive emergence of twisted flux tubes, very complex
magnetic field structures, which consist of
many current loops, were formed around F1 and at the
south of P1.
Notice the twisted dark loops at the east side of F1
in the H$\alpha$ images of Fig. 7.
They have different orientations from each other,
and they are considered to reconnect with each other
and sometimes to produce flares, as shown in Fig. 5.
In the convection zone, magnetic flux tubes are
intensified by dynamo action and
twisted by convection in the rotating fluid.
When the twisted flux tubes emerge into the photosphere,
the twists should loosen, as shown in Fig. 4, with
the decrease of the gas pressure.
Since the twisted magnetic tubes store a large amount of energy,
this energy could be released in the violent flare activity
in the F1 region of NOAA 5395.
The idea that twisted magnetic flux tubes store the energy
for flares was first proposed by Piddington (1974), and
several observational studies developing this idea have been
published by Tanaka (1991), Kurokawa (1987, 1991, 1996),
Wang (1994) and Wang et al. (1996).
Our study presents further clear evidence of the emergence
of a strongly twisted and flare-productive magnetic flux bundle.
Especially important findings of this study are the outflowing
vortex-like motions of small sunspots and the intense flare
activity around the leading sunspot F1.
Such conspicuous motions of sunspots and the associated
intense flare activity have hardly been reported before,
though Kurokawa et al. (1987) noticed a conspicuous
growth of the leading
part of a $\delta$-type sunspot just before a great flare
of April 25, 1984 in NOAA 4474.
Leka et al. (1994, 1996) also reported some vortex-like
motions of small sunspots in an emerging flux region, but
that region was not so active in flare production.
More examinations of the relation between the vortex motions of
sunspot structures and flare activity are necessary in future
observations and measurements of existing data.
\section{CONCLUSION}
Sunspot proper motions and flares of a super active region
NOAA 5395, which is one of the biggest and the most flare-active
regions in the 22nd sunspot cycle, were analyzed in detail.
We measured the sunspot proper motions
by using the H$\alpha$ images obtained with DST and
found some peculiar vortex-like motions
of small satellite spots successively emerging from the
leading sunspot F1.
For the explanation of these motions of small sunspots,
we proposed a schematic model of the successive emergence of
twisted and winding magnetic flux loops coiling around
a trunk of magnetic flux tube (Fig 4).
We also found that most of the flares preferentially occurred in this
emerging flux region around F1.
These results are consistent with our previous conclusion
that the flare productive magnetic shear is produced by the
emergence of twisted magnetic flux tubes (Kurokawa 1987, 1991,
1996).
This study strongly suggests that the magnetic energy
of a flare-productive sunspot group is stored
in a twisted flux bundle which is originally
formed in the convection zone.
The energy is released as flares in the course of
the emergence of the twisted flux bundle above the photosphere.
Studies of the typical flux tube geometry which has enough energy
for big flares should be useful for forecasting flare occurrence.
It is still unclear, however, which type of emergence of
twisted flux tube always produces strong flare activity.
We need more observational studies of the process of
magnetic shear developments in more active regions by
using proper motions of sunspots and evolutional changes
of H$\alpha$ fine structures and photospheric vector
magnetic field.
\acknowledgments
We would like to thank the anonymous referee for the useful comments
which improved our paper.
We also thank Drs. T. Sakurai, F. Tang and
H. Zhang for providing magnetograms.
We are grateful to Drs. R. Kitai and Y. Funakoshi for
their help with the observations at Hida Observatory.
Dr. H. Hirashita contributed helpful discussions and advice.
One of the authors (TTT) acknowledges support from a Research
Fellowship of the Japan Society for the Promotion of Science for
Young Scientists.
\section{Introduction and Theoretical Techniques}
Crystals are regular periodic physical systems with the atomic
constituents organized in periodic arrays, where the periodicity is a characteristic
of the crystal in its equilibrium configuration and a manifestation of long-range
positional order. However, the effect of thermal fluctuations must be taken into
consideration at finite temperatures where thermally excited lattice vibrations
may degrade or destroy crystalline order. An early theoretical treatment developed by
Lindemann~\cite{uno} examined the effect of lattice vibrations in a framework which
neglected the correlations of atomic motions, but which nonetheless provides a reasonable
description (on an order of magnitude basis) for the melting of three dimensional
solids.
A salient component of the Lindemann analysis is the Lindemann criterion
where the melting of a solid is considered to have taken place when mean square deviations
from equilibrium exceed a tenth of a lattice constant. X-ray diffraction data, which may provide a
direct measure of long-range order in a crystal lattice (and hence a means to determine
temperatures where crystalline order is lost) finds reasonable agreement~\cite{unopuntocinco,unopuntosiete} with the Lindemann
criterion. The accord is manifest in the finding
that Bragg peaks corresponding to broken translational symmetry vanish
when mean square fluctuations from equilibrium (also determined from an analysis of
X-ray diffraction data) are in the vicinity of a tenth of a lattice constant, as
specified in the Lindemann result.
A factor of significance for the effectiveness of the Lindemann criterion is the tendency for
atoms in three dimensional crystals to have a large number of neighbors [e.g. a dozen
nearest neighbors in the case of face-centered cubic (fcc) lattices]. Hence, mean
field treatments in the spirit of Weiss molecular mean field theory are more likely
to provide a reasonable theoretical description
since statistical fluctuations tend to be suppressed somewhat by averaging when a
relatively large number of
neighbors are present.
On the other hand, it should
be understood that apart from the number of nearest neighbors,
dimensionality is a very important parameter which may affect the thermodynamic
characteristics and integrity of a crystal lattice to a large degree.
Nano-scale engineering often takes place in systems of low dimensionality such
as carbon nanotubes, where the length may exceed the width by several orders of
magnitude; nanotubes tend to be regarded as one dimensional systems.
Graphene sheets, covalently bonded single layer honeycomb lattices of carbon atoms,
may be considered genuine monolayers, and hence possess strongly two
dimensional character. The thermodynamic stability of a system is strongly
dependent on its dimensionality with statistical
fluctuations becoming more important for two dimensional systems, and very
important for essentially 1D structures such as nanotubes.
An important theoretical result known as the Mermin-Wagner theorem~\cite{dos,tres}
predicts that as the bulk limit is approached, thermal fluctuations destroy
long-range crystalline order in the context of 1D lattices. However, this result does not preclude the
stabilization of positional order for $T > 0$ if the interaction between
atomic members is long-ranged (e.g. decaying as a power law in the separation between
positions in the crystal lattice). In this work, it is our program to examine
conditions which preserve long range order in low dimensional systems at finite
temperatures.
In one dimension, the deleterious effect of thermal fluctuations is
felt most severely, and ultimately only short-range order exists if the
interatomic interaction is finite in range. In three
dimensions, long-range positional order is intact for finite temperatures
below the melting temperature $T_{m}$. Two dimensional solids are often regarded as
an intermediate case where thermal fluctuations are strong enough to destroy
long-range order as the size of the system is increased, but only in a very
gradual manner. Although crystalline order in 2D systems does not survive
in the thermodynamic limit if the interaction is confined to nearest neighbors or is
otherwise finite in range, a long-range coupling with power law decay may
stabilize long-range positional order. In fact, even for one dimensional
solids, we find a critical decay exponent $\alpha_{c}^{\mathrm{1D}} = 1.615 \pm 0.005$ below which crystalline
order remains stable for $T > 0$, whereas long-range order is only gradually lost
in the bulk for power law decays where $\alpha > \alpha_{c}^{\mathrm{1D}}$.
Similarly, the corresponding exponent in 2D is $\alpha_{c}^{\mathrm{2D}} = 3.15 \pm 0.025$,
where long range crystalline order is preserved for $\alpha < \alpha_{c}^{\mathrm{2D}}$, whereas
thermal fluctuations destroy positional order if $\alpha > \alpha_{c}^{\mathrm{2D}}$. Within the bounds of
numerical error, we obtain the
same value for the threshold exponent $\alpha_{c}^{\mathrm{2D}}$ for distinct
lattice geometries including square lattices, triangular lattices, and honeycomb lattices.
We examine various types of coupling schemes, including very short-ranged interactions
where atoms interact with only a few nearest neighbors and
perhaps also next-nearest neighbors. We also consider extended schemes where there is a finite
coupling to all neighbors, but where the interaction is still short-ranged,
with a rapid decay profile, such as that of an inter-atomic potential with an
exponential dependence $V \propto e^{-\gamma r}$ where $\gamma^{-1}$ is the finite
length scale corresponding to the coupling scheme.
Finally, we also consider a long-ranged algebraically decaying coupling of the
form $V(r) = r^{-\alpha}$ where $\alpha$ may assume different values, though for
the energy per atom to be finite in the bulk limit, the exponent must exceed
threshold values which depend on dimensionality of the lattice.
For one dimensional systems, one must have $\alpha > \alpha_{L}^{1 \mathrm{D}} = 1$, and
for two dimensional crystals $\alpha > \alpha_{L}^{2 \mathrm{D}} = 2$.
We report on a calculation of the atomic root mean square deviation $\delta_{\mathrm{RMS}}$ about
positions of equilibrium in 1D and 2D crystals. In Section I, we discuss the theoretical methods
used to calculate the partition function by decoupling the vibrational modes, and hence to obtain
salient thermodynamic quantities, such as $\delta_{\mathrm{RMS}}$, used to gauge the integrity of
long-range crystalline order. In Section II, we examine one dimensional systems, finding
positional order to be destroyed except in a long-range coupling scheme (i.e. a power law dependence where the
decay exponent $\alpha$ must lie between $\alpha_{L}^{1 \mathrm{D}} = 1$ and an upper bound exponent
$\alpha_{c}^{1 \mathrm{D}} = 1.615 \pm 0.005$). In Section III, we perform a similar analysis for two dimensional
square lattices, where we generalize to an extended scheme, and find a gradual destruction
of crystalline order with increasing system size
$L$ for short-ranged couplings. However, we find that a power law decay profile where
$\alpha < \alpha_{c}^{\mathrm{2D}} = 3.15 \pm 0.025$ is sufficient to maintain
positional order at finite temperatures. In addition to the square lattice, in Section IV we also examine triangular and
honeycomb lattices, finding the ability of a long-range interaction between atoms to
preserve crystalline order is not affected by the specific type of lattice geometry under
consideration, and the threshold exponents in all three cases are identical within the bounds of error
in the calculations. Finally, in Section V, we consider motion transverse to the plane of the crystal
lattice for locally stiff dual layer systems; even if the interaction between particles is taken
to be long-ranged, we find that perpendicular motions rapidly compromise long-range order as the
size of the system is increased.
A salient component of our treatment is the explicit accounting for atomic motions in
discrete systems. The harmonic approximation, which neglects anharmonic terms in the potential
set up by geometric effects, has been tested directly in the context of Monte Carlo
simulations and found to be accurate to within one part in $10^{3}$ for the systems
considered~\cite{cuatro}.
Since our interest is in equilibrium thermodynamic characteristics of the system, we begin with the
lattice potential
\begin{align}
V = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{n_{i}} V(r_{ij}),
\end{align}
where $n_{i}$ is the number of neighbors corresponding to each atom in the system indexed with
the label $i$, and $N$ is the total number of particles contained in the crystal lattice.
The factor of $1/2$ in the lattice potential expression is present to compensate for double
counting of the energy associated with individual ``bonds'' between atomic pairs $i$ and $j$.
For small deviations from equilibrium positions, a ``harmonic approximation'' is possible, and one
finds instead
\begin{align}
V = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{n_{i}} \frac{K_{ij}}{2} (l_{ij} - l_{ij}^{0})^{2},
\end{align}
the leading nonconstant term of a Taylor expansion of $V(r_{ij})$ about $l_{ij} = l_{ij}^{0}$,
and $K_{ij}$ is the second derivative of the potential $V(r_{ij})$ at $r_{ij} = l_{ij}^{0}$.
In the results we report on here, we restrict attention to temperatures below those which would cause
melting in the bulk (determined by the Lindemann criterion), where the harmonic approximation would tend to fare well.
A primary issue of interest is whether there is any finite temperature
range where long-range positional order is intact; accordingly, our calculations are restricted to
thermal regimes well below those that would disrupt the bonding topology, create dislocations,
or begin to rupture bonds between neighbors.
For covalent solids such as two dimensional sheets of graphene and carbon nanotubes where energies stored
in covalent bonds are far in excess of $k_{\mathrm{B}} T$ at 300K, even room temperature may be considered
a ``low'' temperature in the sense of being considerably below temperature scales where thermal
fluctuations would perturb the
local bonding scheme in a significant way.
In one dimension, the bonds are collinear, and the potential will remain quadratic as the energies of all bonds
between atoms are summed. However, in two dimensional geometries, restoring forces to oppose displacements from
equilibrium will be exerted in different directions along distinct bond axes between an interacting
pair of atoms. As a consequence, it will be
necessary to make an additional harmonic approximation in order to obtain a quadratic Hamiltonian and
subsequently exploit translational invariance for the regular lattices we examine.
In general, a bond length $l_{ij}$ between atoms $i$ and $j$ will appear as
\begin{align}
l_{ij} = \sqrt{ \begin{array}{c} (\Delta_{ij}^{0x} + \delta_{i}^{x} - \delta_{j}^{x} )^{2} +
( \Delta_{ij}^{0y} + \delta_{i}^{y} - \delta_{j}^{y})^{2} \\ +
(\Delta_{ij}^{0z} + \delta_{i}^{z} - \delta_{j}^{z})^{2} \end{array}}
\end{align}
where $\Delta_{ij}^{0x} \equiv (x_{i}^{0} - x_{j}^{0})$, $\Delta_{ij}^{0y} \equiv (y_{i}^{0} - y_{j}^{0})$,
and $\Delta_{ij}^{0z} \equiv (z_{i}^{0} - z_{j}^{0})$. Thus, the potential energy stored in the bonds
depends only on the difference of coordinates such as, e.g., $\Delta_{ij}^{0x}$ for the equilibrium $x$
coordinate differences and $(\delta_{i}^{x} - \delta_{j}^{x})$ for differences constructed from
the corresponding shifts from equilibrium. If the
latter are sufficiently small in relation to the former, it is appropriate to expand about
$\Delta_{ij}^{0x}$, $\Delta_{ij}^{0y}$, $\Delta_{ij}^{0z}$, and to quadratic order one will have
$(l_{ij} - l_{ij}^{0})^{2} \approx \left [ \hat{\Delta}_{ij} \cdot (\vec{\delta}_{i} - \vec{\delta}_{j} ) \right] ^{2}$ where
$\hat{\Delta}_{ij}$ is a unit vector formed by subtracting the position vectors $\mathbf{x}^{0}_{i}$ and
$\mathbf{x}^{0}_{j}$
corresponding to the atom $i$ and its neighbor $j$, such that
$\hat{\Delta}_{ij} = (\mathbf{x}_{i}^{0} - \mathbf{x}_{j}^{0})/ \| \mathbf{x}_{i}^{0} - \mathbf{x}_{j}^{0} \|$.
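As a quick numerical sanity check of this projection (a minimal sketch with arbitrarily chosen equilibrium positions and small displacements, not tied to any particular lattice), one may compare $(l_{ij} - l_{ij}^{0})^{2}$ directly against $\left[ \hat{\Delta}_{ij} \cdot (\vec{\delta}_{i} - \vec{\delta}_{j}) \right]^{2}$:

```python
import numpy as np

# hypothetical equilibrium positions of a bonded pair and small displacements
x_i, x_j = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
d_i, d_j = np.array([1e-4, 2e-4, -1e-4]), np.array([-2e-4, 1e-4, 3e-4])

l0 = np.linalg.norm(x_i - x_j)                  # equilibrium bond length
l = np.linalg.norm((x_i + d_i) - (x_j + d_j))   # displaced bond length
hat = (x_i - x_j) / l0                          # unit vector along the bond

exact = (l - l0) ** 2
approx = (hat @ (d_i - d_j)) ** 2               # harmonic (projected) form
```

The relative discrepancy between the two expressions is of order $\lvert \vec{\delta} \rvert / l_{ij}^{0}$, consistent with the quadratic truncation.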
Hence, the atomic potential may be written to quadratic order in $\delta_{i}$ and $\delta_{j}$, and one has in
particular
\begin{align}
V_{\mathrm{Har}} = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{n_{i}} \frac{K_{ij}}{2} \left[ \hat{\Delta}_{ij}
\cdot (\vec{\delta}_{i} - \vec{\delta}_{j} ) \right]^{2}
\end{align}
It will be necessary to solve an eigenvalue problem to decouple the vibrational
modes. However, with Fourier analysis, the problem may be reduced to the task of diagonalizing a
$2 \times 2$ matrix, a $4 \times 4$ matrix in the case of
a lattice with a honeycomb geometry, or
at most a $6 \times 6$ matrix for the case of the locally stiff dual-layer system,
even in cases where the coupling scheme is extended to encompass many neighbors for each atomic
member. We will consider systems in one dimension where we show that long-range crystalline order may be stabilized
in the case of slowly decaying power law potentials, but not for localized exponentially decaying coupling schemes.
We also examine two dimensional lattice geometries, and find similar phenomena; again, a long-range power law decay is needed
to preserve crystalline order at finite temperatures.
Finally, we are careful to restrict motion to collinear displacements in the context of 1D systems
and intraplanar motion for the two dimensional crystals. The lattices are very easily disturbed by
transverse displacements, and we find that relaxing the collinear and coplanar restrictions in
1D and 2D yields mean square fluctuations which diverge rapidly with increasing system size
$N$. As we find with explicit calculation in Section V, this rapid (i.e. linear in $N$) growth occurs even if
the interaction between atoms is a long-ranged power law decay in locally stiff dual layer crystal geometries.
We use the results for the eigenvalues for the vibrational modes to calculate thermodynamic properties related to
crystalline order such as the thermally averaged mean square fluctuations about equilibrium per site, $\delta_{\mathrm{RMS}}$.
As noted elsewhere~\cite{cuatro} and summarized here, the mean square displacements about equilibrium may be calculated in terms of
the eigenvalues for the vibrational states.
In terms of the vibrational modes, the lattice energy may be written as
\begin{align}
{\mathcal H}^{\mathrm{Har}} = \frac{1}{2} \sum_{\alpha = 1}^{3M} \lambda_{\alpha} c_{\alpha}^{2}
\end{align}
with $M$ the total number of particles contained in the lattice.
The connection between the vibrational states and the mean square fluctuations is
\begin{align}
\langle \delta_{\mathrm{RMS}} \rangle^{2} = \frac{1}{M} \sum_{i = 1}^{M} \langle (\delta_{i}^{x})^{2} + (\delta_{i}^{y})^{2} + (\delta_{i}^{z})^{2} \rangle
\end{align}
With the eigenvectors indexed with the label $\alpha$ and using, e.g., $\delta_{i}^{x} = \sum_{\alpha = 1}^{3M} c_{\alpha} v_{\alpha}^{ix}$,
to express the displacements in terms of the eigenvectors, one finds for a specific system configuration
\begin{align}
\delta^{2} = \frac{1}{M} \sum_{i=1}^{M} \sum_{\alpha=1}^{3M} \sum_{\alpha^{'} = 1}^{3M} \left[ c_{\alpha} c_{\alpha^{'}} \left( v_{\alpha}^{ix}
v_{\alpha^{'}}^{ix} + v_{\alpha}^{iy} v_{\alpha^{'}}^{iy} + v_{\alpha}^{iz} v_{\alpha^{'}}^{iz} \right ) \right ]
\end{align}
In the thermal average, the factor $c_{\alpha} c_{\alpha^{'}}$ will be as often positive as negative, and hence only in the case $\alpha = \alpha^{'}$
will there be a net contribution to the thermal average $\langle \delta_{\mathrm{RMS}} \rangle^{2}$. If the vectors are taken to be
normalized, one finds
\begin{align}
\langle \delta_{\mathrm{RMS}} \rangle ^{2} = \frac{1}{M} \sum_{\alpha = 1}^{3M} \langle c_{\alpha}^{2} \rangle
\end{align}
With the lattice energy expressed in this way, the partition function becomes a product of Gaussian integrals,
\begin{align}
Z = \prod_{\alpha = 1}^{3M} \int_{-\infty}^{\infty} e^{- \beta \lambda_{\alpha} c_{\alpha}^{2}/2} d c_{\alpha} =
\prod_{\alpha = 1}^{3M} \left( \frac{2 \pi \tau}{\lambda_{\alpha}} \right)^{1/2},
\end{align}
where $\tau \equiv k_{\mathrm{B}} T$. Finally, evaluating the Gaussian thermal averages
$\langle c_{\alpha}^{2} \rangle = \tau / \lambda_{\alpha}$ leads to
\begin{align}
\langle \delta_{\mathrm{RMS}} \rangle^{2} = \frac{\tau}{M} \sum_{\alpha = 1}^{3M} \lambda_{\alpha}^{-1}
\end{align}
Hence, for the mean square deviation, we have
\begin{align}
\langle \delta_{\mathrm{RMS}} \rangle = \tau^{1/2} \sqrt{ \frac{1}{M} \sum_{\alpha = 1}^{3M} \lambda_{\alpha}^{-1}} \equiv \tau^{1/2} \delta_{\mathrm{RMS}}^{n}
\end{align}
The term in the radical is not temperature dependent, but is instead determined by characteristics of the
lattice geometry and the bonding scheme between atomic members. In this work, we calculate $\delta_{\mathrm{RMS}}^{n}$,
the RMS deviation normalized with respect to temperature.
Zero eigenvalues are artifacts of the periodic boundary conditions, correspond to global translations of the
lattice, and are excluded from the sum.
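The Gaussian mode average underlying these relations, $\langle c_{\alpha}^{2} \rangle = \tau / \lambda_{\alpha}$, can be illustrated by sampling a single mode from its Boltzmann weight $e^{-\lambda c^{2}/2\tau}$ (a minimal sketch with arbitrary illustrative values of $\lambda$ and $\tau$):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, tau = 2.5, 0.7                      # illustrative eigenvalue and temperature
# the Boltzmann weight exp(-lam c^2 / (2 tau)) is a Gaussian of variance tau/lam
c = rng.normal(0.0, np.sqrt(tau / lam), size=10**6)
mean_sq = np.mean(c ** 2)                # should approach tau / lam
```

Summing such averages over the $3M$ modes, with the $1/M$ per-site normalization, reproduces the expression for $\langle \delta_{\mathrm{RMS}} \rangle^{2}$ above.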
\section{Systems in One Dimension}
In the 1D systems we consider, only longitudinal displacements are examined; similarly, for the two dimensional
geometries, lattice vibrations are confined to the two dimensional plane, with no transverse
motions permitted. Periodic boundary conditions are implemented in both the one and two dimensional
cases.
An important characteristic of systems in one dimension is the
fact that all bonds are collinear, and hence there is no purely geometric source of anharmonic effects.
The lattice potential energy will have the form
\begin{align}
V = \sum_{l=1}^{N} \sum_{m = 1}^{n} K_{m} (\delta_{l} - \delta_{l+m})^{2}
\label{eq:Eq5}
\end{align}
Since only longitudinal motions are considered, the label ``$x$'' that would normally appear as a
subscript on the ``$\delta$'' symbols is suppressed. The sum recorded in Eq.~\ref{eq:Eq5} is configured to avoid the
redundant summation over bonds, and the counting ``1/2'' factor is not required.
To maximize the number of neighbors coupled to any particular atom while avoiding multiple couplings to the
same atom via the periodicity condition, we set $n = (N-1)/2$ and we always consider an odd
number of atomic members.
It is convenient to operate in terms of Fourier components, where we have $\delta_{l} = \sum_{k} e^{ikl} \delta_{k}$; on
substitution, the expression for the lattice energy has the form
\begin{align}
E = 2 \sum_{k_{x}} \left[ \sum_{m = 1}^{n} K_{m} (1 - \cos mk_{x} ) \right] |\delta_{k_{x}}|^{2},
\end{align}
which has been diagonalized with the use of Fourier components $\delta_{k}$. The eigenvalues are given by
$\lambda_{k_{x}} = 2 {\displaystyle \sum_{m=1}^{n}} K_{m} (1 - \cos m k_{x} )$, and the normalized thermally induced shift $\delta_{\mathrm{RMS}}^{n}$ has
the form
\begin{align}
\delta_{\mathrm{RMS}}^{n} = \left( \frac{1}{N} \sum_{k_{x}} \lambda_{k_{x}}^{-1} \right)^{1/2}
\end{align}
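The evaluation of $\delta_{\mathrm{RMS}}^{n}$ from the mode eigenvalues can be sketched numerically for the simplest instance, the nearest-neighbor chain with $\lambda_{k} = 2K_{0}(1 - \cos k)$ (unit coupling, periodic boundary conditions, the $k = 0$ translation mode excluded, and the per-site $1/N$ normalization included):

```python
import numpy as np

def delta_rms_n(N, K0=1.0):
    """Normalized RMS displacement for a 1D nearest-neighbor chain (PBC)."""
    j = np.arange(1, N)                    # j = 0 (global translation) excluded
    k = 2.0 * np.pi * j / N
    lam = 2.0 * K0 * (1.0 - np.cos(k))     # harmonic-chain eigenvalues
    return np.sqrt(np.sum(1.0 / lam) / N)  # per-site normalization
```

For this case the sum is known in closed form, $\sum_{j=1}^{N-1} [1 - \cos(2\pi j/N)]^{-1} = (N^{2}-1)/6$, so that $(\delta_{\mathrm{RMS}}^{n})^{2} = (N^{2}-1)/(12N)$ for $K_{0} = 1$, growing linearly with $N$.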
We first examine a localized potential which in the 1D context
would certainly be expected to yield a divergent mean square fluctuation $\delta_{\mathrm{RMS}}^{n}$
with increasing system size $N$. As a companion result to gain complementary insight, we also
calculate the density of states
corresponding to the system. One merit of obtaining the vibrational density of states is the fact that it may be
computed in the thermodynamic limit without encountering divergences
with the aid of Monte Carlo sampling. On the other hand, the divergence or convergence of
the mean square deviations will be signaled by specific signatures in the low eigenvalue regime of the density of
states without the need for an extrapolation to the thermodynamic limit.
In the case where interactions are confined to nearest neighbors, the
eigenvalues $\lambda_{k_{x}}$ have the simple form $2K_{0} (1 - \cos k_{x})$. The corresponding
thermally averaged $\delta_{\mathrm{RMS}}^{n}$ values in the case of finite systems
are shown in the graph in Fig.~\ref{fig:Fig1}, and there is a steady rise
in the RMS fluctuations with increasing $N$.
The growth of the mean square deviations from equilibrium is sub-linear, but it may be shown that the
increase continues indefinitely (i.e. diverges in the thermodynamic limit) by graphing instead
$(\delta_{\mathrm{RMS}}^{n})^{2}$, as in the inset of Fig.~\ref{fig:Fig1}.
The dependence on $N$ quickly settles into an asymptotically linear increase with
$N$, and to a good approximation $\delta_{\mathrm{RMS}}^{n} \propto N^{1/2}$ for moderate to large systems.
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig1.eps}
\caption{\label{fig:Fig1} Mean square deviation curves are graphed
versus system size $N$ in inset (a). The vertical axis of the graph in inset (b) is $(\delta_{\mathrm{RMS}})^{2}$ to
emphasize the linear behavior of the square of the mean square fluctuations for
a broader range of system sizes.}
\end{figure}
We next extend the coupling scheme to many neighbors where the coupling decays at an exponential rate,
as might be found at least on a qualitative level for a covalently bonded system where the
rapidly decaying overlap of the orbitals of atomic neighbors (and hence the
magnitude of the exchange coupling) has an asymptotically exponential
decay as the separation between the pair of atoms becomes sufficiently large. The lattice energy will have the form
\begin{align}
V_{\mathrm{exp}}^{\gamma} = \sum_{l = 1}^{N} \sum_{m = 1}^{n} K_{0} e^{-\gamma m} ( \delta_{l} - \delta_{l+m})^{2}
\end{align}
Hence in terms of Fourier components, the total lattice potential becomes
\begin{align}
V_{\mathrm{exp}}^{\gamma} = \sum_{k} \left( 2 K_{0} \sum_{m=1}^{n} e^{-\gamma m} [1 - \cos mk ] \right) \lvert \delta_{k} \rvert^{2}
\end{align}
Again, operating in terms of Fourier components decouples the modes, and the appropriate
eigenvalues $\lambda_{k}$ are given by
\begin{align}
\lambda_{k} = 2 \sum_{m=1}^{n} e^{-\gamma m} [1 - \cos mk ]
\end{align}
where the prefactor $K_{0}$ has been suppressed.
In addition, the label ``$x$'' on the wave vector has also been suppressed
for the sake of convenience.
Although the coupling scheme is extended to many neighbors, the potential is in
an important sense still a local interaction due to its rapid decay, where
the appropriate length scale is the inverse decay rate $\gamma^{-1}$.
By appealing to the formula for a geometric sum, $r + r^{2} + \ldots + r^{n} = (r-r^{n+1})/(1-r)$ (where $r$ is
taken to be complex and $\lvert r \rvert < 1$) and using the
fact that $\cos m k = \tfrac{1}{2} (e^{imk} + e^{-imk})$, one may obtain an
explicit expression for $\lambda_{k}$ which does not require the intermediate summation.
Applying the geometric series formula for a finite series yields
\begin{align}
&\lambda_{k} = 2 e^{-\gamma} \left( \frac{1 - e^{-\gamma n}}{1 - e^{-\gamma}} \right ) -& \\ \nonumber
&e^{-\gamma} \left( e^{ik}\left[ \frac{1 - e^{n(-\gamma + ik)}}{1 - e^{-\gamma + ik}} \right] +
e^{-ik}\left[ \frac{1 - e^{n(-\gamma - ik)}}{1 - e^{-\gamma - ik}} \right] \right)&
\end{align}
Combining the last two fractional terms gives
\begin{align}
\lambda_{k} = \frac{2 \left( 1 - e^{-\gamma n} \right)}{e^{\gamma} - 1} - \frac{\cos k + e^{-\gamma (n+1)} \cos nk
- e^{-\gamma} - e^{-\gamma n} \cos (n+1)k}{\cosh \gamma - \cos k},
\end{align}
a tidier and computationally convenient expression
to use in calculating the RMS displacements $\delta_{\mathrm{RMS}}^{n}$.
As $n$ becomes large, terms proportional to $e^{-\gamma n}$ quickly become suppressed by the
rapid exponential decay. Hence, for $n \gg \gamma^{-1}$, one will obtain
\begin{align}
\lambda_{k} = \frac{2}{e^{\gamma} - 1} + \frac{e^{-\gamma} - \cos(k)}{\cosh (\gamma) - \cos (k)}
\label{eq:Eq10}
\end{align}
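As a consistency check on this limiting form, the closed-form expression can be compared against the direct truncated sum for $n \gg \gamma^{-1}$ (a sketch with illustrative values of $\gamma$ and $k$; the prefactor $K_{0}$ is suppressed, as in the text):

```python
import numpy as np

def lam_direct(k, gamma, n):
    # truncated sum over neighbor shells, K0 suppressed
    m = np.arange(1, n + 1)
    return 2.0 * np.sum(np.exp(-gamma * m) * (1.0 - np.cos(m * k)))

def lam_closed(k, gamma):
    # bulk (n -> infinity) closed form
    return 2.0 / (np.exp(gamma) - 1.0) \
        + (np.exp(-gamma) - np.cos(k)) / (np.cosh(gamma) - np.cos(k))
```

For $n \gg \gamma^{-1}$ the two expressions agree to machine precision, since the neglected terms are of order $e^{-\gamma n}$.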
The results are shown in Fig.~\ref{fig:Fig2} for the scaling of $\delta_{\mathrm{RMS}}^{n}$ with respect to the size $N$
of the system. To keep the results for different values of $\gamma$
on the same footing, we use the prefactor $K_{0}$ as a normalization of the
coupling with $K_{0} \equiv e^{\gamma}$. A similar procedure is also used in calculations involving exponentially
decaying extended couplings in 2D. In this manner, the convergence to the results for the case where only
nearest neighbors are involved in the coupling scheme is easier to see.
In the main part of the graph, the
square of the RMS deviation is graphed with respect to system size $N$ for a broad range of system sizes.
The curves corresponding to the different decay constants are asymptotically linear in the system size
although the slopes decrease with decreasing $\gamma$ as the coupling becomes longer in range.
The inset of the graph shows a closer view of $\delta_{\mathrm{RMS}}^{n}$. Each of the curves rises steadily for
sufficiently large $N$, notwithstanding non-monotonicities for small to moderate $N$ in the case
$\gamma = 0.25$ where the decay of the interaction is relatively slow.
In the latter case, there is competition between thermal fluctuations and an increase in lattice
rigidity which occurs as the linear crystal grows, providing atoms with more neighbors. Eventually, however,
$N$ exceeds the length scale $\gamma^{-1}$ of the coupling between atoms, and the balance shifts in
favor of thermal fluctuations. The latter increase in importance with increasing $N$ and thus eventually destroy long-range
crystalline order.
We also examine the eigenvalue density of states for different decay rates $\gamma$.
With the range of the potential being set by $\gamma^{-1}$, larger values of $\gamma$ would correspond to a more
rapid decay and a shorter range of the interaction between neighbors. In calculating the density of states, we use a Monte Carlo sampling process where
the values of $k$ are not quantized, permitting one to genuinely achieve the bulk limit for the purpose of
obtaining the density of states. To obtain a smooth curve a large number of
eigenvalues (i.e. $2.5 \times 10^{8}$ for the
histograms corresponding to the exponentially decaying coupling scheme) are sampled.
The formula given in the continuum limit in Eq.~\ref{eq:Eq10} is the appropriate expression to use for
$\lambda_{k}$ in the Monte Carlo sampling process.
The normalized density of states for a range of decay constants $\gamma$ appears in Fig.~\ref{fig:Fig3}.
Panel (a) is a standard plot with the density of states on the vertical axis, while
to facilitate the viewing of the DOS curves, the logarithm of the DOS is shown in panel (b).
Even for relatively long-range cases such as $\gamma = 0.25$, the density of states retains the ``U''-shaped
profile of the nearest neighbor case. The latter corresponds effectively to $\gamma = \infty$, and is
shown in red in the graphs. For convenience in comparing results, we again choose $K_{0} = e^{\gamma}$ in calculating the
DOS curves. The convergence to the $\gamma = \infty$ case with increasing $\gamma$ is
apparent for the case $\gamma = 2.0$, where close agreement with the DOS calculated for the nearest neighbor case
is evident in panel (b) of Fig.~\ref{fig:Fig3}. In the latter, the ordinate is chosen to be $\log_{10} (\textrm{DOS})$ to help show the
structure of the density of states curves more clearly.
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig2.eps}
\caption{\label{fig:Fig2} (Color Online) Mean square deviation curves are shown for a range of
$\gamma$ values. The vertical axis of the main graph is $(\delta_{\mathrm{RMS}})^{2}$ to
emphasize the linear behavior of the square of the mean square fluctuations, while the inset is a
graph of $\delta_{\mathrm{RMS}}^{n}$ with respect to $N$ for a smaller range of system sizes.
}
\end{figure}
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig3.eps}
\caption{\label{fig:Fig3} (Color Online) Eigenvalue Density of States (DOS) for $\gamma$ values
ranging from $\gamma = 0.25$ to $\gamma = 2.0$ to, effectively, $\gamma = \infty$, with the
DOS on the ordinate in panel (a).
Panel (b) shows the density of states as well, but the
ordinate is $\log_{10} (\textrm{DOS})$. $2.5 \times 10^{8}$ eigenvalues were sampled in
constructing the histograms.}
\end{figure}
Finally, we examine a genuinely long-range inter-atomic potential with a power law decay profile.
The lattice potential energy will have the form
\begin{align}
E = \sum_{l=1}^{N} \sum_{m=1}^{n} K_{0} m^{-\alpha} (\delta_{l+m} - \delta_{l})^{2}
\end{align}
where $\alpha$ is the decay exponent of the power law interaction ($\alpha > 1$), and
again $n = (N-1)/2$.
In terms of Fourier components, one will have
\begin{align}
E = \sum_{k} \sum_{m=1}^{n} 2 K_{0} m^{-\alpha} (1 - \cos mk) \lvert \delta_{k} \rvert^{2}
\label{eq:Eq15}
\end{align}
Hence, the modes are now decoupled with eigenvalues given by $\lambda_{k} = 2 {\displaystyle \sum_{m=1}^{n}}
m^{-\alpha} (1 - \cos mk)$ (the prefactor $K_{0}$ again suppressed); we evaluate this expression directly in order to obtain $\delta_{\mathrm{RMS}}^{n}$
and the DOS profile appropriate to a particular exponent $\alpha$ in the bulk limit.
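This direct evaluation can be sketched numerically (unit prefactor $K_{0} = 1$, per-site normalization, zero mode excluded, and $n = (N-1)/2$ as in the text):

```python
import numpy as np

def delta_rms_n_power(N, alpha):
    """Normalized RMS displacement for the 1D power-law chain, n = (N-1)/2."""
    n = (N - 1) // 2
    m = np.arange(1, n + 1)
    k = 2.0 * np.pi * np.arange(1, N) / N            # zero mode excluded
    # lambda_k = 2 sum_m m^{-alpha} (1 - cos(m k))
    lam = 2.0 * ((1.0 - np.cos(np.outer(k, m))) @ m ** (-float(alpha)))
    return np.sqrt(np.sum(1.0 / lam) / N)
```

Note that a larger $\alpha$ weakens every coupling beyond the nearest neighbor, softening the lattice; at fixed $N$, the fluctuations therefore grow with $\alpha$.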
Again, we calculate $\delta_{\mathrm{RMS}}^{n}$ and generate plots with respect to system
size. To test for divergence or convergence in the bulk limit, it is useful also to prepare log-log
plots (we use base ten logarithms in all cases), and the results appear in the inset of Figure~\ref{fig:Fig4}.
We examine systems ranging in size from $N = 3$ to on the order of a few hundred thousand atomic members.
A crucial question is whether there is a threshold value $\alpha_{c}$ above $\alpha = 1$ (where the
lattice energy may diverge with increasing system size) below which long range crystalline
order is stable with respect to thermal fluctuations in one dimensional lattices.
To identify $\alpha_{c}^{\mathrm{1D}}$, we calculate the normalized mean square fluctuations $\delta_{\mathrm{RMS}}^{n}$ with respect to
system size $N$, producing log-log graphs. The highest value of $\alpha$ where the mean square
deviations converge is identified as $\alpha_{c}^{\mathrm{1D}}$, the upper limit for the decay exponent in the extended
power law decay scheme where long-range crystalline order is still supported at finite temperatures.
The mean square
deviations, useful thermodynamic quantities with which to diagnose the presence or absence of
long-range crystalline order, are shown in Fig.~\ref{fig:Fig4} and Fig.~\ref{fig:Fig5}
(with the abscissa shown as a base ten logarithm over five decades of system
sizes $N$). In Fig.~\ref{fig:Fig4}, $\delta_{\mathrm{RMS}}^{n}$ curves are shown
for a relatively wide range of $\alpha$ values. Over the broad range of systems on
the horizontal axis, five orders of magnitude, the mean square displacements rise
monotonically for $\alpha = 2.0$ and $\alpha = 1.75$, while $\delta_{\mathrm{RMS}}^{n}$
decreases steadily for $\alpha = 1.5$ and $\alpha = 1.25$. The curves suggest a
decay exponent $\alpha_{c}$ in the vicinity of $\alpha = 1.6$ as a boundary between
crystals where long-range order is unstable at finite temperatures, and one dimensional
solids where crystalline order is retained for $T > 0$.
The inset is the corresponding log-log graph of the mean square fluctuations
plotted for the same $\alpha$ values as in the main graph, which is a semi-logarithmic plot.
In Fig.~\ref{fig:Fig5}, RMS deviation curves are shown for a tighter span of power law
decay exponents
(ranging from $\alpha = 1.60$ to $\alpha = 1.65$) to identify with greater accuracy the
numerical value of $\alpha_{c}$. To facilitate the determination of the
exponent separating crystals with long-range order and those disrupted by thermal
fluctuations, we place dark circles over the maxima of the
$\delta_{\mathrm{RMS}}^{n}$ curves. For $\alpha > 1.625$, the maxima are located at the
edge of the plot, consistent with a steady increase (and likely
divergence in the bulk limit) of the RMS curves.
On the other hand, for $\alpha < 1.6125$, the thermally averaged RMS deviations
are non-monotonic, reaching a
maximum for finite values of $N$ and then declining, presumably toward a stable
bulk value.
We identify the boundary as $\alpha_{c} = 1.615 \pm 0.005$.
It should be emphasized that while long-range order is not supported for decay exponents
in excess of $\alpha_{c}$, the divergence of $\delta_{\mathrm{RMS}}^{n}$ with increasing
system size is nonetheless quite slow, sublinear in $\log_{10}(N)$, whereas a strictly logarithmically
diverging mean square deviation would instead rise linearly in $\log_{10}(N)$.
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig4.eps}
\caption{\label{fig:Fig4} (Color Online) $\delta_{\mathrm{RMS}}^{n}$ is plotted on a
semi-logarithmic (all logs are base ten)
scale in the main graph for selected values of $\alpha$ for
one dimensional systems. The inset is
a log-log graph of the mean square deviations for the same values of $\alpha$ as in the main
graph.}
\end{figure}
\begin{figure}
\includegraphics[width=.4\textwidth]{dallsfig5.eps}
\caption{\label{fig:Fig5} (Color Online) $\delta_{\mathrm{RMS}}^{n}$ is plotted on a
semi-logarithmic scale for a relatively tight range of $\alpha$ values. The large dark
circles indicate maxima in the mean square deviation curves; it is concluded that
$\alpha_{c} = 1.615 \pm 0.005$.}
\end{figure}
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig6.eps}
\caption{\label{fig:Fig6} (Color Online) Eigenvalue Density of States (DOS) for the one
dimensional lattice with a power law decay where the decay exponent $\alpha$ varies from
$\alpha = 1.5$ in panel (a) to $\alpha = 2.5$ in panel (d).}
\end{figure}
To obtain information complementary to the RMS fluctuations, we again calculate the eigenvalue
density of states.
We also use Monte Carlo sampling where wave numbers are chosen at random,
with uniform probability, to calculate the vibrational density of states with the results shown in Fig.~\ref{fig:Fig6}.
The double sum in Eq.~\ref{eq:Eq15} requires careful consideration, in that one must be certain that enough terms
have been included in the inner sum that a convergent result is obtained. To be certain convergence has
been achieved,
we prepare eigenvalue histograms for successive doublings of the number of terms contained in the inner sum indexed by
$m$. The number of terms which must be included in order to attain suitable convergence
increases with decreasing $\alpha$ for crystal lattices where the coupling is more slowly
decaying. In general, however, the oscillatory cosine term in Eq.~\ref{eq:Eq15} does act to somewhat hasten
convergence and hence limit the number of terms which need to be summed.
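The doubling check can be sketched as follows (illustrative values of $k$ and $\alpha$; the truncation stands in for the inner sum limit):

```python
import numpy as np

def lam_power(k, alpha, n_terms):
    # truncated inner sum for a single wave number
    m = np.arange(1, n_terms + 1)
    return 2.0 * np.sum(m ** (-float(alpha)) * (1.0 - np.cos(m * k)))

# successive doublings of the truncation; the estimates settle down
# as the m^{-alpha} tail (and the oscillatory cosine part) die off
estimates = [lam_power(0.7, 1.5, 2 ** p) for p in range(10, 18)]
diffs = [abs(b - a) for a, b in zip(estimates, estimates[1:])]
```

Successive differences shrink with the truncation, and shrink more slowly for smaller $\alpha$, consistent with the need for more terms in the slowly decaying cases.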
\section{Two Dimensional Crystals}
For the case of a two dimensional system, the analysis is in many respects parallel to that applied
for the one dimensional lattices. However, the additional dimension makes available richer choices
for the lattice geometry. We examine various coupling schemes for three types of lattices: the
square lattice, the triangular lattice, and the honeycomb lattice. In Fig.~\ref{fig:Fig7}, panel (a) represents
the square lattice, the triangular lattice is depicted in panel (b), and the honeycomb lattice
appears in panel (c). A peculiarity of the honeycomb lattice is the presence of inequivalent sites,
and this characteristic is highlighted in panel (c) of Fig.~\ref{fig:Fig7} where different colors are used in labeling
the sites. Although the geometries we examine have different characteristics, the
essential qualitative behavior and the most salient physics are found to be common to all three.
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig7.eps}
\caption{\label{fig:Fig7} (Color Online) Three distinct lattice geometries are shown.
Panel (a) is a portion of a square lattice, panel (b) represents the triangular lattice, and
panel (c) shows the honeycomb lattice with inequivalent sites represented with distinct colors.}
\end{figure}
We first consider the square lattice, and
we initially take into account only interactions between nearest neighbors where at the present level of
approximation the lattice lacks rigidity. The lattice energy is
\begin{align}
E = \frac{K}{2} \sum_{i,j=0}^{N-1} [ (\delta_{i+1j}^{x} - \delta_{ij}^{x})^{2} +
(\delta_{ij+1}^{y} - \delta_{ij}^{y})^{2}]
\end{align}
We express the displacements in terms of Fourier components with, e.g., $\delta_{ij}^{x}
= {\displaystyle \sum_{\mathbf{k}}} \delta_{\mathbf{k}}^{x} e^{I(k_{x} i + k_{y} j)}$, with $I$ being the
imaginary unit. In terms of $\delta_{\mathbf{k}}^{x}$ and $\delta_{\mathbf{k}}^{y}$, the
energy has the form
\begin{align}
E = \frac{K}{2} \sum_{\mathbf{k}} [ (1 - \cos k_{x} ) \lvert \delta_{\mathbf{k}}^{x} \rvert^{2} +
(1 - \cos k_{y} ) \lvert \delta_{\mathbf{k}}^{y} \rvert^{2} ];
\label{eq:Eqfloppy}
\end{align}
in this manner the degrees of freedom are decoupled. Inspection of Eq.~\ref{eq:Eqfloppy}
reveals that the eigenvalues are $2N$ fold degenerate and identical to the eigenvalues obtained
for the case of the one dimensional crystal where only interactions between nearest neighbors
were considered.
Since the eigenvalues are the same as those in the 1D case with interactions only between
nearest neighbors, crystalline order is readily disrupted by thermal fluctuations.
Hence, $(\delta_{\mathrm{RMS}}^{n})^{2}$ will scale with $N$ just as was the case for the 1D counterpart.
If one takes into account coupling to next-nearest neighbors as well, then the lattice energy in
real space is
\begin{align}
E = \tfrac{K}{2} \! \sum_{i,j=0}^{N-1} \left( \! \! \begin{array}{c} \left[ \hat{x} \cdot
(\vec{\delta}_{i+1j} - \vec{\delta}_{ij}) \right]^{2} + \left [ \hat{y} \cdot
( \vec{\delta}_{ij+1} - \vec{\delta}_{ij} ) \right]^{2} \\
+ \left[ \tfrac{1}{\sqrt{2}} ( \hat{x} + \hat{y} ) \cdot \left( \vec{\delta}_{i+1j+1}
- \vec{\delta}_{ij} \right) \right ]^{2} \\ + \left[ \frac{1}{\sqrt{2}} ( \hat{x} - \hat{y})
\cdot \left( \vec{\delta}_{i+1j-1} - \vec{\delta}_{ij} \right) \right]^{2} \end{array} \! \!\right)
\end{align}
Operating in terms of Fourier components, one diagonalizes the matrix
\begin{align}
\left[ \begin{array}{cc} \left( \begin{array}{c} 2 - \cos k_{x} \\ - \cos k_{x} \cos k_{y} \end{array}
\right) & \sin k_{x} \sin k_{y} \\
\sin k_{x} \sin k_{y} & \left( \begin{array}{c} 2 - \cos k_{y} \\ - \cos k_{x} \cos k_{y} \end{array} \right) \end{array} \right]
\end{align}
The eigenvalues are given by
\begin{align}
&\lambda_{\pm} = \tfrac{1}{2} \left[ 4 - \cos k_{x} - \cos k_{y} - 2 \cos k_{x} \cos k_{y} \right. & \\ \nonumber
&\left. \pm \sqrt{(\cos k_{x} - \cos k_{y} )^{2} + 4 \sin^{2} k_{x} \sin^{2} k_{y}} \right]&
\end{align}
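As a numerical sanity check (a Python sketch, with $K = 1$ assumed), the closed-form eigenvalues $\lambda_{\pm} = \tfrac{1}{2}[4 - \cos k_{x} - \cos k_{y} - 2\cos k_{x}\cos k_{y} \pm \sqrt{(\cos k_{x} - \cos k_{y})^{2} + 4\sin^{2}k_{x}\sin^{2}k_{y}}]$ can be compared against a direct diagonalization of the symmetric $2 \times 2$ matrix above:

```python
import math

def nnn_matrix(kx, ky):
    """Entries (a, b, d) of the symmetric 2x2 dynamical matrix for the
    square lattice with nearest and next-nearest neighbor couplings."""
    a = 2.0 - math.cos(kx) - math.cos(kx) * math.cos(ky)
    d = 2.0 - math.cos(ky) - math.cos(kx) * math.cos(ky)
    b = math.sin(kx) * math.sin(ky)
    return a, b, d

def eig_sym_2x2(a, b, d):
    """Eigenvalues of [[a, b], [b, d]] via the quadratic formula."""
    mean = 0.5 * (a + d)
    root = math.sqrt((0.5 * (a - d)) ** 2 + b * b)
    return mean - root, mean + root

def nnn_eigenvalues(kx, ky):
    """Closed-form (lambda_minus, lambda_plus) quoted in the text."""
    s = 4.0 - math.cos(kx) - math.cos(ky) - 2.0 * math.cos(kx) * math.cos(ky)
    root = math.sqrt((math.cos(kx) - math.cos(ky)) ** 2
                     + 4.0 * (math.sin(kx) * math.sin(ky)) ** 2)
    return 0.5 * (s - root), 0.5 * (s + root)
```

The two routes agree because $\tfrac{1}{2}(a+d)$ reproduces the bracketed combination and $[\tfrac{1}{2}(a-d)]^{2} + b^{2}$ is one quarter of the argument of the square root.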
The results for the mean
square deviations $\delta_{\mathrm{RMS}}^{n}$ appear in Fig.~\ref{fig:Fig8}.
One may also calculate the vibrational DOS, and the results appear in inset (a) of
Fig.~\ref{fig:Fig8}. The introduction of next-nearest neighbor interactions is very effective in
reducing the deleterious effect of thermal fluctuations on long-range
crystalline order, though there is still a weak divergence in $\delta_{\mathrm{RMS}}^{n}$ in the
bulk limit. The square $( \delta_{\mathrm{RMS}}^{n})^{2}$ quickly assumes an asymptotically
linear form with respect to $\log_{10}N$.
The much slower increase of the RMS deviations with $N$ is
reflected in the DOS profile, where instead of
exhibiting a sharp cusp in the low eigenvalue regime, the
DOS curve terminates smoothly. However, the fact that the DOS tends to a finite value as the eigenvalue vanishes is still
enough to cause a divergence in the mean square displacements from equilibrium.
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig8.eps}
\caption{\label{fig:Fig8} (Color Online) The mean square displacements
for the square lattice with nearest and next-nearest neighbor couplings. The main graph shows $(\delta_{\mathrm{RMS}}^{n})^{2}$,
with the corresponding vibrational density of states graphed in inset (a) and the raw RMS displacements $\delta_{\mathrm{RMS}}^{n}$ displayed
in inset (b).}
\end{figure}
We next examine a general case where there are interactions with many neighbors.
In real space, the energy stored in the lattice has the form
\begin{align}
E = \frac{1}{2} \! \sum_{i,j=0}^{N-1} \sum_{l,m=-n}^{n} \! \! \! \! \tfrac{1}{2} K(r_{lm}) \! \left( \! \hat{\Delta}_{lm} \! \cdot \!
[ \vec{\delta}_{i+lj+m} - \vec{\delta}_{ij} ] \! \right )^{2}
\end{align}
where the inner ``$\tfrac{1}{2}$'' factor compensates for multiple counting of bond energies and the choice
$n = (N-1)/2$ allows each atomic member to interact with all of the atoms contained in the
crystal while avoiding multiple interactions with the same particle.
With $\vec{\Delta}_{lm} = [l,m]$, the appropriate unit vector directed between the particles
labeled ``$i+l,j+m$'' and ``$ij$'' is $\vec{\hat{\Delta}}_{lm} = [l,m]/\sqrt{l^{2} + m^{2}}$.
Again, we may decouple the vibrational modes by expressing the coordinate shifts in
terms of Fourier components. The lattice potential energy may then be written as
\begin{align}
E = \frac{1}{2} \sum_{i,j=0}^{N-1} \sum_{l,m=-n}^{n} \tfrac{K(r_{lm})}{2r_{lm}^{2}} \left[ \begin{array}{c} l (\delta_{i+lj+m}^{x} -
\delta_{ij}^{x} ) + \\ m ( \delta_{i+lj+m}^{y} - \delta_{ij}^{y} ) \end{array}
\right]^{2}
\end{align}
with the ``1/2'' factor again compensating for redundant bond counting.
The separation $r_{lm}$ is defined
as $r_{lm} \equiv \sqrt{l^{2} + m^{2}}$, with the full vector given by $\mathbf{r}_{lm} =
l \hat{x} + m \hat{y}$.
In terms of Fourier components, one will have
\begin{align}
&E = \frac{1}{2} \sum_{k_{x},k_{y}} \sum_{l,m=-n}^{n} \frac{K(r_{lm})}{r_{lm}^{2}} [1 - \cos (\mathbf{k} \cdot \mathbf{r}_{lm})]& \\ \nonumber
&\times \left( \begin{array}{c}
l^{2} \lvert \delta_{\mathbf{k}}^{x} \rvert^{2} + m^{2} \lvert \delta_{\mathbf{k}}^{y} \rvert^{2} \\
+ ml [\delta_{\mathbf{k}}^{x} \delta_{\mathbf{k}}^{y*} + \delta_{\mathbf{k}}^{x*} \delta_{\mathbf{k}}^{y}]
\end{array} \right )&
\end{align}
Hence, in order to decouple the vibrational modes, one must diagonalize the
matrix
\begin{align}
\tfrac{1}{2} \sum_{l,m=-n}^{n} K(r_{lm}) r_{lm}^{-2} [1 - \cos (\mathbf{k} \cdot \mathbf{r}_{lm})]
\left[ \begin{array}{cc} l^{2} & ml \\ ml & m^{2} \end{array} \right ],
\end{align}
which may also be written as
\begin{align}
\tfrac{1}{2} \! \! \! \! \! \sum_{l,m=-n}^{n} \! \! \! \! K(r_{lm}) [1 - \cos (\mathbf{k} \! \cdot \! \mathbf{r}_{lm})] \! \! \left [ \! \!
\begin{array}{cc} \hat{\Delta}_{lm}^{x} \hat{\Delta}_{lm}^{x} & \hat{\Delta}_{lm}^{x} \hat{\Delta}_{lm}^{y} \\
\hat{\Delta}_{lm}^{y} \hat{\Delta}_{lm}^{x} & \hat{\Delta}_{lm}^{y} \hat{\Delta}_{lm}^{y}
\end{array} \! \right ] \! \! ,
\end{align}
a representation which will prove more compact for more complicated systems such as the honeycomb lattice crystals with
more than one layer in the direction transverse to the crystal plane, examined in Section V.
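The construction of this matrix can be sketched directly (Python; the coupling $K$ is supplied as an arbitrary function and the truncation of the index range is an assumption of the sketch). Restricting $K$ to nearest neighbors recovers the decoupled nearest-neighbor diagonal entries $(1-\cos k_{x})$ and $(1-\cos k_{y})$:

```python
import math

def square_dyn_matrix(kx, ky, K, n):
    """Dynamical matrix (1/2) sum_{l,m=-n..n} K(r_lm) r_lm^{-2}
    [1 - cos(kx*l + ky*m)] [[l*l, l*m], [l*m, m*m]] for the square lattice."""
    mxx = mxy = myy = 0.0
    for l in range(-n, n + 1):
        for m in range(-n, n + 1):
            if l == 0 and m == 0:
                continue  # no self-interaction
            r2 = float(l * l + m * m)
            g = 0.5 * K(math.sqrt(r2)) * (1.0 - math.cos(kx * l + ky * m)) / r2
            mxx += g * l * l
            mxy += g * l * m
            myy += g * m * m
    return mxx, mxy, myy
```

With $K(r)$ nonzero only at $r = 1$, the off-diagonal entry vanishes identically because $lm = 0$ for every nearest-neighbor bond.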
We first turn to the case of an exponentially decaying coupling scheme, and we calculate
the $\delta_{\mathrm{RMS}}^{n}$ curves with respect to the size $N$ of the system.
The results for the thermally averaged mean square displacements
are shown in Fig.~\ref{fig:Fig9} for a range of different decay constants $\gamma$.
The computational burden of calculating the auxiliary sum grows with $N$, but one
helpful aspect of the exponential decay is that the sum may be safely
truncated once the distance between interacting atoms becomes several times
greater than the range of the short-ranged coupling (i.e. terms with
$l$ and $m$ such that $\sqrt{l^{2} + m^{2}} \gg \gamma^{-1}$ need not be included).
In particular, we obtain results which are very well converged if we discard terms beyond 20 decay lengths $\gamma^{-1}$.
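This truncation is easy to test directly. The Python sketch below (a hypothetical single-entry function; $K(r) = e^{-\gamma r}$ with unit prefactor assumed) compares one matrix entry computed with cutoffs of 20 and 40 decay lengths:

```python
import math

def exp_truncated_xx(kx, ky, gamma, cutoff):
    """xx entry of the square-lattice dynamical matrix for the coupling
    K(r) = exp(-gamma*r), keeping only terms with r_lm <= cutoff."""
    n = int(math.ceil(cutoff))
    total = 0.0
    for l in range(-n, n + 1):
        for m in range(-n, n + 1):
            if l == 0 and m == 0:
                continue
            r = math.sqrt(l * l + m * m)
            if r > cutoff:
                continue
            total += (0.5 * math.exp(-gamma * r)
                      * (1.0 - math.cos(kx * l + ky * m)) * l * l / (r * r))
    return total
```

For $\gamma = 1$ the two cutoffs agree to high precision, since the discarded terms carry weights of order $e^{-20}$.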
Ultimately, thermally induced deviations from equilibrium destroy long-range order, and
the RMS deviations diverge slowly [$(\delta_{\mathrm{RMS}}^{n})^{2}$ again scales linearly
with $\log_{10}(N)$], but the rate of divergence decreases with decreasing $\gamma$. In
particular, as the range $\gamma^{-1}$ of the inter-atomic coupling is increased,
the slope of the graph of $(\delta_{\mathrm{RMS}}^{n})^{2}$ with respect to
$\log_{10}(N)$ decreases, although the RMS deviations eventually still diverge in
the thermodynamic limit.
The DOS is also calculated, with results appearing in Fig.~\ref{fig:Fig10} for a range of $\gamma$ values.
We use Monte Carlo sampling to choose $k_{x}$ and $k_{y}$ from a continuum range, and thereby
operate in the thermodynamic limit for the purpose of calculating the DOS curves.
At least $10^{6}$ eigenvalues are sampled in generating the DOS curves.
The low eigenvalue region of the DOS graph is very similar to the corresponding regime of the
density of states where only interactions with nearest and next-nearest neighbors are
included.
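The Monte Carlo accumulation of the DOS can be sketched as follows (Python; the nearest-neighbor branch eigenvalues $1 - \cos k_{x,y}$ are used as the simplest illustration, and the function name and bin count are hypothetical):

```python
import math
import random

def mc_dos_histogram(n_samples, n_bins, seed=0):
    """Monte Carlo DOS estimate for the nearest-neighbor square lattice:
    draw (kx, ky) uniformly from the Brillouin zone and bin both branch
    eigenvalues 1 - cos kx and 1 - cos ky, which lie in [0, 2]."""
    rng = random.Random(seed)
    lam_max = 2.0
    hist = [0] * n_bins
    for _ in range(n_samples):
        kx = rng.uniform(-math.pi, math.pi)
        ky = rng.uniform(-math.pi, math.pi)
        for lam in (1.0 - math.cos(kx), 1.0 - math.cos(ky)):
            b = min(int(lam / lam_max * n_bins), n_bins - 1)
            hist[b] += 1
    return hist
```

Normalizing the histogram by the number of samples and the bin width gives the DOS profile; in the calculations reported here at least $10^{6}$ eigenvalues are accumulated in this way.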
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig9.eps}
\caption{\label{fig:Fig9} (Color Online) Mean square deviations for an
exponentially decaying interaction. The main graph shows $(\delta_{\mathrm{RMS}}^{n})^{2}$, while
the raw mean square deviations are plotted in the inset of the Figure.}
\end{figure}
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig10.eps}
\caption{\label{fig:Fig10} (Color Online) Vibrational density of states for an
exponentially decaying coupling. The DOS curves are plotted for assorted values of the
decay parameter $\gamma$.}
\end{figure}
Next, we examine the much slower power law decay $K_{ij} = K r_{ij}^{-\alpha}$ in
the inter-atomic separation $r_{ij}$, where the
exponent $\alpha$ controls the rate of the decay; the coupling is long-ranged in
the sense that there is no length scale to
set its range. Again, we first calculate the mean square displacements
with respect to the size of the system, and then we examine the density of states for
the eigenvalues. The $\delta_{\mathrm{RMS}}^{n}$ results are shown in Fig.~\ref{fig:Fig11}.
Computational subtleties similar to those encountered for the case of the one dimensional
solid with a long-ranged coupling scheme must be carefully navigated since the
higher dimensionality ($d = 2$) will cause the computational burden to grow even more rapidly (nominally as
$L^{4}$ if interactions with all neighbors are included) with system size. Again, we use Monte Carlo sampling to select wavevectors and accumulate
eigenvalues to build up the vibrational DOS. The auxiliary sum
over dummy indices $l$ and $m$ giving the eigenvalue
would in principle contain an infinite number of terms (in the bulk limit, each atom in the
crystal would have an infinite number of neighbors), but we truncate the sum at a
finite range. The presence of sinusoidal terms in the
sum, as in the corresponding 1D case, provides an oscillatory element and will hasten the convergence of the sum, thereby reducing the
computational burden. We check convergence with respect to the
truncation range by calculating the DOS with successive doublings of the truncation length $N_{\Delta}$
until the DOS profile ceases to change with additional doublings of
the system size. The results for the vibrational DOS appear in Fig.~\ref{fig:Fig12}.
One notes that the convergence with respect to the truncation radius is least
rapid for $\alpha = 2.5$. However, the graphs are relatively well converged for $\alpha = \alpha_{c}^{\mathrm{2D}}$ and
higher values of the decay exponent. When $\alpha$ is in the vicinity of $\alpha_{c}^{\mathrm{2D}}$, there is
very little support in the low eigenvalue regime. On the other hand, with increasing $\alpha$ the
interaction decays more rapidly, and ultimately the histogram amplitude in the zero eigenvalue limit rises to a finite value,
contributing to a divergence in the mean square fluctuations with increasing system size.
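The convergence check under successive doublings of the truncation length can be sketched as follows (Python; the $xx$ matrix entry for $K(r) = r^{-\alpha}$ is used as a representative quantity, and the particular wavevector and tolerance are assumptions of the sketch):

```python
import math

def power_law_xx(kx, ky, alpha, n_cut):
    """xx entry of the square-lattice dynamical matrix for K(r) = r^{-alpha},
    truncated at |l|, |m| <= n_cut."""
    total = 0.0
    for l in range(-n_cut, n_cut + 1):
        for m in range(-n_cut, n_cut + 1):
            if l == 0 and m == 0:
                continue
            r2 = float(l * l + m * m)
            # K(r)/r^2 = (l^2 + m^2)^(-(alpha + 2)/2)
            total += (0.5 * r2 ** (-0.5 * alpha - 1.0)
                      * (1.0 - math.cos(kx * l + ky * m)) * l * l)
    return total
```

For $\alpha = 3.5$, doubling the cutoff changes the entry only marginally, consistent with the stabilizing role of the oscillatory cosine terms noted above.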
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig11.eps}
\caption{\label{fig:Fig11} (Color Online) Mean square deviations for an interaction
decaying as a power law for decay exponents $\alpha$ near the threshold $\alpha_{c}^{\mathrm{2D}}$. The
inset shows a broader view, while the main graph is a closer view of the transition from
converging to diverging $\delta_{\mathrm{RMS}}^{n}$ curves.}
\end{figure}
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig12.eps}
\caption{\label{fig:Fig12} (Color Online) The vibrational density of states for the two
dimensional square lattice with a power law decay interaction. DOS curves for various values of the
decay parameter $\alpha$ are shown, with traces for different values of the truncation
length $N_{\Delta}$ on the same plot.}
\end{figure}
\section{Alternate Two Dimensional Geometries}
The treatment in the case of the triangular and honeycomb lattices is very similar to the
approach used in the case of the square lattice.
Interestingly,
within the harmonic approximation the triangular lattice is rigid with only a nearest
neighbor interaction, and the mere inclusion of nearest neighbors is enough to set up quasi-long-range
order where thermally induced fluctuations about equilibrium diverge very slowly [i.e.
$(\delta_{\mathrm{RMS}}^{n})^{2}$ increases as the logarithm of the system size, just as
for the short-ranged extended interactions on the square lattice].
For the triangular lattice, the lattice energy in real space has the form
\begin{align}
E = \tfrac{K}{2} \sum_{i,j=0}^{N-1} \left( \! \! \begin{array}{c} \left[ \hat{x} \cdot \left (
\vec{\delta}_{i+1j} - \vec{\delta}_{ij} \right ) \right]^{2} + \\
\left[ (\tfrac{1}{2} \hat{x} + \tfrac{\sqrt{3}}{2} \hat{y} ) \cdot \left(
\vec{\delta}_{ij+1} - \vec{\delta}_{ij} \right) \right]^{2} + \\ \left[ (\tfrac{1}{2} \hat{x} -
\tfrac{\sqrt{3}}{2} \hat{y} ) \cdot \left( \vec{\delta}_{i+1j-1} - \vec{\delta}_{ij} \right )\right]^{2} \end{array} \! \! \right)
\end{align}
and the eigenvalues for the decoupled vibrational modes are obtained by diagonalizing the $2 \times 2$ matrix
\begin{align}
\left[ \! \! \! \! \begin{array}{cc} \left( \! \! \begin{array}{c} 3 - 2 \cos k_{x} \! - \! \tfrac{1}{2} \cos k_{y} \\ -
\tfrac{1}{2} \cos [k_{y} \! - \! k_{x} ] \end{array} \! \! \! \right) & \tfrac{\sqrt{3}}{2} ( \cos [k_{y} \! - \! k_{x} ] - \cos k_{y} ) \\
\tfrac{\sqrt{3}}{2} (\cos [k_{y} \! - \! k_{x} ] - \cos k_{y}) & \left( \! \! \begin{array}{c} 3 - \tfrac{3}{2} \cos k_{y} \\
- \tfrac{3}{2} \cos [k_{y} \! - \! k_{x} ] \end{array} \! \! \right) \end{array} \! \! \! \right] \! \! ,
\end{align}
yielding
\begin{align}
&\lambda_{\mathbf{k}}^{\pm} = [3 - \cos k_{x} - \cos k_{y} - \cos (k_{y} \! - \! k_{x} ) ]& \\ \nonumber
&\pm \sqrt{\begin{array}{c} \cos^{2} k_{x} + \cos^{2} k_{y} + \cos^{2} (k_{y} \! - \! k_{x}) - \cos k_{x} \cos k_{y} \\
- \cos k_{y} \cos (k_{y} \! - \! k_{x} ) - \cos k_{x} \cos (k_{y} \! - \! k_{x}) \end{array}}&
\end{align}
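As for the square lattice, the closed form can be checked against a direct diagonalization of the $2 \times 2$ matrix (a Python sketch with $K = 1$; the symmetric off-diagonal entry $\tfrac{\sqrt{3}}{2}(\cos[k_{y} - k_{x}] - \cos k_{y})$ is assumed):

```python
import math

def tri_nn_matrix(kx, ky):
    """Entries (a, b, d) of the symmetric 2x2 triangular-lattice matrix
    with nearest-neighbor couplings (K = 1)."""
    cq = math.cos(ky - kx)
    a = 3.0 - 2.0 * math.cos(kx) - 0.5 * math.cos(ky) - 0.5 * cq
    d = 3.0 - 1.5 * math.cos(ky) - 1.5 * cq
    b = (math.sqrt(3.0) / 2.0) * (cq - math.cos(ky))
    return a, b, d

def tri_nn_eigenvalues(kx, ky):
    """Closed-form eigenvalues quoted in the text."""
    cx, cy, cq = math.cos(kx), math.cos(ky), math.cos(ky - kx)
    s = 3.0 - cx - cy - cq
    root = math.sqrt(cx * cx + cy * cy + cq * cq
                     - cx * cy - cy * cq - cx * cq)
    return s - root, s + root

def eig_sym_2x2(a, b, d):
    """Eigenvalues of [[a, b], [b, d]]."""
    mean = 0.5 * (a + d)
    root = math.sqrt((0.5 * (a - d)) ** 2 + b * b)
    return mean - root, mean + root
```

Both eigenvalues vanish at $\mathbf{k} = 0$, as they must for the two acoustic branches.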
The results for the mean square fluctuations as well as the vibrational DOS are given in Fig.~\ref{fig:Fig13}.
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig13.eps}
\caption{\label{fig:Fig13} (Color Online) The mean square deviation for the triangular lattice with nearest neighbor
couplings. The main graph is a semi-logarithmic plot of $\delta_{\mathrm{RMS}}$, while inset (a) shows the vibrational
density of states. A graph of the raw $\delta_{\mathrm{RMS}}^{n}$ appears in panel (b).}
\end{figure}
We generalize the nearest-neighbor case to an extended scheme where
each atomic member may interact with many neighbors.
In real space, the lattice potential energy may be written as
\begin{align}
E = \frac{1}{2} \sum_{i,j=0}^{N} \sum_{l,m=-n}^{n} \tfrac{1}{2}
K(r_{lm}) \left[ \vec{\hat{\Delta}}_{lm} \! \cdot \! (\vec{\delta}_{i+l,j+m} - \vec{\delta}_{ij}) \right]^{2},
\end{align}
where the components of the unit vector $\vec{\hat{\Delta}}_{lm}$ are $\hat{\Delta}_{lm}^{x} = (l+m/2)/r_{lm}$ and
$\hat{\Delta}_{lm}^{y} = \sqrt{3}m/(2r_{lm})$, with $r_{lm} = (l^{2} + m^{2} + lm)^{1/2}$ the distance separating interacting pairs
in the triangular lattice geometry.
After expressing the displacements in terms of Fourier components, one calculates the eigenvalues of the $2 \times 2$
matrix
\begin{align}
\frac{1}{2} \sum_{l,m=-n}^{n} g_{lm} \left[ \! \! \begin{array}{cc} \hat{\Delta}^{x}_{lm} \hat{\Delta}^{x}_{lm} &
\hat{\Delta}^{x}_{lm} \hat{\Delta}^{y}_{lm} \\
\hat{\Delta}^{y}_{lm} \hat{\Delta}^{x}_{lm} & \hat{\Delta}^{y}_{lm} \hat{\Delta}^{y}_{lm} \end{array} \! \! \right ],
\end{align}
where $g_{lm} = [1 - \cos(k_{x}l + k_{y}m)] K(r_{lm})$.
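A sketch of this construction in Python (the coupling $K$ and truncation range are supplied by the caller) together with two basic sanity checks: the matrix vanishes at $\mathbf{k} = 0$, since uniform translations cost no energy, and it is positive semidefinite, being a sum of rank-one projectors with nonnegative weights $g_{lm}$:

```python
import math

def tri_dyn_matrix(kx, ky, K, n):
    """(1/2) sum_{l,m} g_lm [Delta Delta^T] with
    g_lm = [1 - cos(kx*l + ky*m)] K(r_lm) and r_lm = sqrt(l^2 + m^2 + l*m)
    for the triangular lattice."""
    mxx = mxy = myy = 0.0
    for l in range(-n, n + 1):
        for m in range(-n, n + 1):
            if l == 0 and m == 0:
                continue  # g_00 = 0: no self-interaction
            r = math.sqrt(l * l + m * m + l * m)
            g = 0.5 * (1.0 - math.cos(kx * l + ky * m)) * K(r)
            dx = (l + 0.5 * m) / r
            dy = math.sqrt(3.0) * m / (2.0 * r)
            mxx += g * dx * dx
            mxy += g * dx * dy
            myy += g * dy * dy
    return mxx, mxy, myy
```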
As in the case of the square lattice, we consider for the triangular lattices an
exponentially decaying coupling scheme, and the results are shown in Fig.~\ref{fig:Fig14}
for over four decades of system sizes.
Again, the quantity $(\delta_{\mathrm{RMS}}^{n})^{2}$ increases linearly as $\beta \log_{10} N$ with the slope $\beta$
decreasing with decreasing decay rate $\gamma$ (and hence increasing range of the coupling).
We also prepare graphs of the vibrational density of states, shown in the four graphs in Fig.~\ref{fig:Fig15}
for a range of values of the decay constant $\gamma$.
The wide separation between the RMS curves corresponding to $\gamma = 2.0$ and $\gamma = 1.0$, $\gamma = 0.75$, and
$\gamma = 0.5$ is mirrored in the DOS curves where for the smaller decay rates the eigenvalue histogram curve
intersects the ordinate with very low amplitudes. On the other hand, for the more rapid decay where $\gamma = 2.0$,
the amplitude in the regime of low eigenvalues is much higher, and the DOS graph resembles that of the nearest neighbor case
to a much greater degree than DOS profiles corresponding to lower decay rates of the exponential coupling.
\begin{figure}
\includegraphics[width=.45\textwidth]{dallsfig14.eps}
\caption{\label{fig:Fig14} (Color Online) Normalized mean square curves and eigenvalue density of
states for the triangular lattice with an exponential coupling scheme. The main plot shows $(\delta_{\mathrm{RMS}}^{n})^{2}$ with
respect to the base ten logarithm of the system size $N$, while inset (a) is the corresponding
graph for $\delta_{\mathrm{RMS}}^{n}$. Inset (b) shows the density of states curve for the triangular lattice
for $2.5 \times 10^{8}$ eigenvalues sampled in the bulk limit via Monte Carlo.}
\end{figure}
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig15.eps}
\caption{\label{fig:Fig15} (Color Online) Density of states curves for $\gamma = 2.0$,
$\gamma = 1.0$, $\gamma = 0.75$, and $\gamma = 0.50$ in panels (a), (b), (c), and (d) respectively
for the triangular lattice with an exponentially decaying coupling scheme.}
\end{figure}
As for the square lattice geometry, we examine a long-ranged power law interaction in the context of
triangular lattices. The mean square deviations from equilibrium are graphed in Fig.~\ref{fig:Fig16} with the inset of the plot
showing a closer view of the $(\delta_{\textrm{RMS}}^{n})^{2}$ curves. A salient question is whether lattices with
geometries which differ from that of the square lattice will exhibit long-range crystalline order for the
same range of decay exponents $\alpha$ as in the context of the
square lattice. Within the bounds of error, we find the threshold $\alpha_{c}^{\mathrm{2D}} = 3.15 \pm 0.025$ calculated for the triangular lattice to be identical
to the threshold exponent for the square lattice.
We show the corresponding eigenvalue histograms for the power law decay
for the decay exponents $\alpha = 5.0$, $\alpha = 4.0$, $\alpha = 3.5$, and $\alpha = 3.125$ in panels
(a), (b), (c), and (d) respectively of Fig.~\ref{fig:Fig17}. While the eigenvalue histograms plotted in panels (a) and (b)
correspond to decay exponents significantly higher than $\alpha_{c}^{\mathrm{2D}}$, the DOS curve in panel (c)
is plotted for a decay exponent only slightly above the threshold value, and the histogram in panel (d) corresponds
to a value of $\alpha$ just below (though very nearly equal to) $\alpha_{c}^{\mathrm{2D}}$.
Whereas the DOS curves in panels (a) and (b) clearly tend to a finite value as the eigenvalue approaches zero, the
amplitude for the slower decay $\alpha = 3.125$ tends to zero in the limit that the
eigenvalue is very small; for the case $\alpha = 3.5$, the amplitude reaches a finite
but very small value in the zero eigenvalue limit.
A DOS amplitude tending to zero in the low eigenvalue limit, as seen for $\alpha = 3.125$, is
consistent with the preservation of long-range
crystalline order indicated by the convergence of the mean square deviations graphed in Fig.~\ref{fig:Fig16}.
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig16.eps}
\caption{\label{fig:Fig16} (Color Online) The square of the normalized RMS deviations for
various values of the decay exponent $\alpha$, with the inset showing a closer view of the
$(\delta_{\mathrm{RMS}}^{n})^{2}$ curves.}
\end{figure}
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig17.eps}
\caption{\label{fig:Fig17} (Color Online) Eigenvalue histogram curves plotted for $\alpha = 5.0$,
$\alpha = 4.0$, $\alpha = 3.5$, and $\alpha = 3.125$ respectively. The red trace corresponds to
a relatively short truncation radius where $N_{\Delta} = 25$, and the truncation is less drastic for
the blue DOS profiles where $N_{\Delta} = 50$.}
\end{figure}
We examine the honeycomb lattice, which differs from the square and triangular lattices in that it possesses
two inequivalent sites (labeled ``A'' and ``B'' for convenience).
We again appeal to translational invariance, operating in terms of Fourier components, to
decouple the vibrational modes for the honeycomb lattice.
The relationship of sites of type ``A'' and ``B'' to their nearest neighbors is
illustrated in Fig.~\ref{fig:Fig18}.
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig18.eps}
\caption{\label{fig:Fig18} (Color Online) Labeling and indexing scheme for the honeycomb lattice
for inequivalent sites labeled ``A'' and ``B'', and their immediate vicinity.}
\end{figure}
Following this labeling convention, the lattice energy in real space has the form
\begin{align}
E \! = \! \tfrac{K}{2} \! \sum_{i,j=0}^{N-1} \left \{ \! \! \begin{array}{c}
\left[ \left( \tfrac{\sqrt{3}}{2}\hat{x} + \tfrac{1}{2} \hat{y} \right) \! \cdot \!
\left( \vec{\delta}_{ij}^{\mathrm{A}} - \vec{\delta}_{ij+1}^{\mathrm{B}} \right) \right ]^{2} + \\
\left[ \left( -\tfrac{\sqrt{3}}{2} \hat{x} + \tfrac{1}{2} \hat{y} \right) \! \cdot \!
\left( \vec{\delta}_{ij}^{\mathrm{A}} - \vec{\delta}_{i-1j+1}^{\mathrm{B}} \right) \right]^{2}
\\ + \left[ -\hat{y} \! \cdot \! \left ( \vec{\delta}_{ij}^{\mathrm{A}} -
\vec{\delta}_{ij}^{\mathrm{B}} \right) \right]^{2}
\end{array} \! \! \! \right \}
\end{align}
where it is sufficient to sum over the three bonds surrounding the atoms labeled ``A''
with no factor of $\frac{1}{2}$ needed to compensate for double counting.
In Fourier space, the energy stored in the lattice has the form
\begin{align}
E = \tfrac{K}{2} \! \sum_{\mathbf{k}} \! \left \{ \! \! \! \begin{array}{c}
\tfrac{3}{2} \left( \lvert \delta_{\mathbf{k}}^{\mathrm{A} x} \rvert^{2} +
\lvert \delta_{\mathbf{k}}^{\mathrm{A} y}
\rvert^{2} + \lvert \delta_{\mathbf{k}}^{\mathrm{B} x} \rvert^{2} +
\lvert \delta_{\mathbf{k}}^{\mathrm{B} y} \rvert^{2} \right) \\
-\tfrac{1}{4} ( 1 + e^{-ik_{x}} ) e^{ik_{y}} \! \! \left (3 \delta_{\mathbf{k}}^{\mathrm{B} x}
\delta_{\mathbf{k}}^{\mathrm{A} x*} + \delta_{\mathbf{k}}^{\mathrm{B} y} \delta_{\mathbf{k}}^{\mathrm{A} y*}
\right) \\
-\tfrac{1}{4} (1 + e^{ik_{x}} ) e^{-ik_{y}} \! \! \left( 3 \delta_{\mathbf{k}}^{\mathrm{B} x*}
\delta_{\mathbf{k}}^{\mathrm{A} x} + \delta_{\mathbf{k}}^{\mathrm{B} y*} \delta_{\mathbf{k}}^{\mathrm{A} y}
\right) \\
+ \tfrac{\sqrt{3}}{4} (e^{-ik_{x}} - 1) e^{ik_{y}} \! \! \left( \delta_{\mathbf{k}}^{\mathrm{B} x}
\delta_{\mathbf{k}}^{\mathrm{A} y*} + \delta_{\mathbf{k}}^{\mathrm{B} y} \delta_{\mathbf{k}}^{\mathrm{A} x*}
\right) \\
+ \tfrac{\sqrt{3}}{4} (e^{ik_{x}} - 1) e^{-ik_{y}} \! \! \left( \delta_{\mathbf{k}}^{\mathrm{B} x*}
\delta_{\mathbf{k}}^{\mathrm{A} y} + \delta_{\mathbf{k}}^{\mathrm{B} y*}
\delta_{\mathbf{k}}^{\mathrm{A} x} \right) \\ - \delta_{\mathbf{k}}^{\mathrm{A}y} \delta_{\mathbf{k}}^{\mathrm{B}y*} -
\delta_{\mathbf{k}}^{\mathrm{A} y *} \delta_{\mathbf{k}}^{\mathrm{B} y}
\end{array} \! \! \! \! \! \right \}
\end{align}
In addition to Fourier decomposition, the diagonalization of a $4 \times 4$ matrix
will be necessary to completely decouple the vibrational modes appropriate to the honeycomb lattice
with the nearest neighbor coupling scheme; the matrix in question is
\begin{align}
\left[ \begin{array}{cccc} c_{\mathrm{A}x \mathrm{A}x} & c_{\mathrm{A}x \mathrm{A}y} &
c_{\mathrm{A} x \mathrm{B} x} & c_{\mathrm{A} x \mathrm{B} y} \\
c_{\mathrm{A} y \mathrm{A} x} & c_{\mathrm{A} y \mathrm{A} y} &
c_{\mathrm{A} y \mathrm{B} x} & c_{\mathrm{A} y \mathrm{B} y} \\
c_{\mathrm{B} x \mathrm{A} x} & c_{\mathrm{B} x \mathrm{A} y} &
c_{\mathrm{B} x \mathrm{B} x} & c_{\mathrm{B} x \mathrm{B} y} \\
c_{\mathrm{B} y \mathrm{A} x} & c_{\mathrm{B} y \mathrm{A} y} &
c_{\mathrm{B} y \mathrm{B} x} & c_{\mathrm{B} y \mathrm{B} y}
\end{array} \right] = \left[ \begin{array}{cc} \hat{A} & \hat{B} \\ \hat{B}^{\dagger} & \hat{A}
\end{array} \right]
\end{align}
where $\hat{A}$ and $\hat{B}$ are $2 \times 2$ sub-matrices, with $\hat{B}^{\dagger}$ the
Hermitian conjugate of $\hat{B}$. The sub-matrices $\hat{A}$ and $\hat{B}$ are given by
\begin{align}
\hat{A} = \left [ \begin{array}{cc} \tfrac{3}{2} & 0 \\ 0 & \tfrac{3}{2} \end{array} \right ]
\end{align}
and
\begin{align}
\hat{B} = e^{ik_{y}} \! \left[ \! \! \begin{array}{cc} -\tfrac{3}{4} (1 + e^{-ik_{x}} ) &
\tfrac{\sqrt{3}}{4} (e^{-ik_{x}} - 1) \\
\tfrac{\sqrt{3}}{4} (e^{-ik_{x}} - 1) & -e^{-ik_{y}} - \tfrac{1}{4}(1 + e^{-ik_{x}}) \end{array} \! \! \right]
\end{align}
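Since rigid translations of the whole crystal cost no energy, the $4 \times 4$ matrix must annihilate the uniform displacement vectors at $\mathbf{k} = 0$. The Python sketch below (hypothetical function names, $K = 1$ assumed) builds the blocks $\hat{A}$ and $\hat{B}$ and verifies this property:

```python
import cmath

def honeycomb_blocks(kx, ky):
    """Sub-matrices A and B of the 4x4 honeycomb dynamical matrix
    [[A, B], [B^dagger, A]] for the nearest-neighbor coupling (K = 1)."""
    ex = cmath.exp(-1j * kx)
    ey = cmath.exp(1j * ky)
    s34 = 3.0 ** 0.5 / 4.0
    A = [[1.5, 0.0], [0.0, 1.5]]
    B = [[-0.75 * (1.0 + ex) * ey, s34 * (ex - 1.0) * ey],
         [s34 * (ex - 1.0) * ey, -1.0 - 0.25 * (1.0 + ex) * ey]]
    return A, B

def apply_4x4(A, B, v):
    """Multiply [[A, B], [B^dagger, A]] into v = (uAx, uAy, uBx, uBy)."""
    uA = (v[0], v[1])
    uB = (v[2], v[3])
    top = [A[i][0] * uA[0] + A[i][1] * uA[1]
           + B[i][0] * uB[0] + B[i][1] * uB[1] for i in range(2)]
    bot = [B[0][i].conjugate() * uA[0] + B[1][i].conjugate() * uA[1]
           + A[i][0] * uB[0] + A[i][1] * uB[1] for i in range(2)]
    return top + bot
```

At $\mathbf{k} = 0$ one finds $\hat{B} = -\hat{A}$, so displacing the A and B sublattices together by any constant vector yields zero restoring force, as expected for the two acoustic modes.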
However, to obtain a crystal which is locally stiff,
one must examine an extended scheme
where each atomic member interacts with many neighbors. In real space, the lattice energy may
be expressed as
\begin{align}
&E \! \! = \! \! \sum_{i,j = 0}^{N}& \\ \nonumber
&\sum_{l,m=-n}^{n} \! \! \left( \! \! \! \begin{array}{c} \tfrac{1}{2} \! K(r_{lm}^{t}) \! \left[ \!
(\hat{\Delta}_{lm}^{(t)x} \hat{x} \! + \! \hat{\Delta}_{lm}^{(t)y} \hat{y} ) \! \cdot \! ( \vec{\delta}_{i+lj+m}^{A} \! - \!
\vec{\delta}_{ij}^{A} ) \! \right]^{2} + \\ \tfrac{1}{2} K(r_{lm}^{t}) \! \!
\left[ \! (\hat{\Delta}_{lm}^{(t)x} \hat{x} \! + \! \hat{\Delta}_{lm}^{(t)y} \hat{y} ) \! \cdot \! ( \vec{\delta}_{i+lj+m}^{B} \! - \!
\vec{\delta}_{ij}^{B} ) \! \right]^{2} + \\ \! K(r_{lm}^{\mathrm{ab}}) \! \! \left[ \! (\hat{\Delta}_{lm}^{(\mathrm{ab})x} \hat{x} \! + \!
\hat{\Delta}_{lm}^{(\mathrm{ab})y} \hat{y} ) \! \cdot \! ( \vec{\delta}_{i+lj+m}^{B} \! - \!
\vec{\delta}_{ij}^{A} ) \! \right]^{2} \end{array} \! \! \! \! \right) &
\end{align}
where the first and second terms in the sum take into account interactions between atoms labeled
``A'' and ``B'', respectively; the identical form of ``A-A'' and ``B-B'' interaction terms is due
to the fact that the ``A'' and ``B'' species both define triangular lattices, as illustrated in Fig.~\ref{fig:Fig19}.
We take the lattice constant to be unity, and the components of the unit vector $\vec{\hat{\Delta}}^{t}_{lm}$ appropriate to the
triangular sublattices are $\hat{\Delta}_{lm}^{x(t)} = \sqrt{3}(l + m/2)/r_{lm}^{t}$ and $\hat{\Delta}_{lm}^{y(t)} =(3m/2)/r_{lm}^{t}$, where
$r_{lm}^{t} = (3[l^{2} + m^{2} + lm])^{1/2}$.
On the other hand, the components of $\vec{\hat{\Delta}}^{\mathrm{ab}}_{lm}$ used in calculating interactions between
``A'' and ``B'' atoms are given by
$\hat{\Delta}_{lm}^{x(\mathrm{ab})} = \sqrt{3}(l + m/2)/r^{\mathrm{ab}}$ and
$\hat{\Delta}_{lm}^{y (\mathrm{ab})} = (\tfrac{3}{2} m - 1)/r^{\mathrm{ab}}_{lm}$, where
$r^{\mathrm{ab}}_{lm} = (3l^{2} + 3m^{2} +3lm -3m + 1)^{1/2}$.
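The quoted separation can be cross-checked against the Cartesian components: a short Python sketch (lattice constant set to unity, as in the text) verifies that $\sqrt{3(l + m/2)^{2} + (\tfrac{3}{2}m - 1)^{2}}$ reproduces $(3l^{2} + 3m^{2} + 3lm - 3m + 1)^{1/2}$.

```python
import math

def r_ab(l, m):
    """A-B separation from the closed-form expression in the text."""
    return math.sqrt(3 * l * l + 3 * m * m + 3 * l * m - 3 * m + 1)

def r_ab_cartesian(l, m):
    """Same separation recomputed from the Cartesian displacement
    (sqrt(3)*(l + m/2), 3*m/2 - 1)."""
    return math.hypot(math.sqrt(3.0) * (l + 0.5 * m), 1.5 * m - 1.0)
```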
The exploitation of translational invariance by expressing the displacements in terms of Fourier
components reduces the decoupling of the vibrational modes to the diagonalization of the $4 \times 4$ matrix $\left[ \begin{array}{cc}
\hat{A} & \hat{B} \\ \hat{B}^{\dagger} & \hat{A} \end{array} \right]$,
where the sub-matrices are given by
\begin{align}
&\hat{A} = \sum_{l,m=-n}^{n} \biggl\{ g_{lm}
\left[ \! \! \begin{array}{cc} \hat{\Delta}^{x (t)}_{lm}
\hat{\Delta}^{x (t)}_{lm} & \hat{\Delta}^{x (t)}_{lm} \hat{\Delta}^{y (t)}_{lm} \\
\hat{\Delta}^{x (t)}_{lm} \hat{\Delta}^{y (t)}_{lm} & \hat{\Delta}^{y (t)}_{lm} \hat{\Delta}^{y (t)}_{lm} \end{array} \! \! \right ]&
\\ \nonumber
&+ K(r_{lm}^{\mathrm{ab}}) \left[ \begin{array}{cc} \hat{\Delta}_{lm}^{x(\mathrm{ab})} \hat{\Delta}_{lm}^{x(\mathrm{ab})}
& \hat{\Delta}_{lm}^{x(\mathrm{ab})} \hat{\Delta}_{lm}^{y(\mathrm{ab})} \\
\hat{\Delta}_{lm}^{y(\mathrm{ab})} \hat{\Delta}_{lm}^{x(\mathrm{ab})}
& \hat{\Delta}_{lm}^{y(\mathrm{ab})} \hat{\Delta}_{lm}^{y(\mathrm{ab})} \end{array}\right] \biggr\} &
\end{align}
for $\hat{A}$ and
\begin{align}
\hat{B} \! = \! \! \! \! \sum_{l,m=-n}^{n} \! \! e^{I(k_{x}l + k_{y}m)} \! K(r_{lm}^{\mathrm{ab}}) \! \!
\left[ \! \! \! \begin{array}{cc} \hat{\Delta}_{lm}^{x(\mathrm{ab})} \hat{\Delta}_{lm}^{x(\mathrm{ab})}
& \hat{\Delta}_{lm}^{x(\mathrm{ab})} \hat{\Delta}_{lm}^{y(\mathrm{ab})} \\
\hat{\Delta}_{lm}^{y(\mathrm{ab})} \hat{\Delta}_{lm}^{x(\mathrm{ab})}
& \hat{\Delta}_{lm}^{y(\mathrm{ab})} \hat{\Delta}_{lm}^{y(\mathrm{ab})}\end{array} \! \! \! \right]
\end{align}
where $g_{lm} \equiv (1 - \cos[k_{x} l + k_{y}m]) K(r_{lm}^{t})$, and
$g_{00} = 0$ to exclude self-interactions in the ``A-A'' and the ``B-B'' coupled pairs.
\begin{figure}
\includegraphics[width=.35\textwidth]{dallsfig19.eps}
\caption{\label{fig:Fig19} (Color Online) Honeycomb lattice geometry showing the
interpenetrating triangular lattices defined by the inequivalent sites labeled
``A'' and ``B''.}
\end{figure}
As we did for the square and triangular lattices, we examine a short-ranged exponential
interaction between atoms in the lattice with a length scale given by $\gamma^{-1}$.
As for the preceding two lattice geometries, we also consider a
long-ranged power law decay with $K(r) = K_{0} r^{-\alpha}$. The
results for the mean square deviations for the exponential decay scheme are shown in Fig.~\ref{fig:Fig20} for a range of $\gamma$ values.
As in the cases of the square and triangular lattices in the extended schemes, increasing the range $\gamma^{-1}$
of the coupling slows (but does not halt) the rate of divergence of the mean square fluctuations. The large separation
between the RMS curves corresponding to $\gamma = 2.0$ and the slower decays $\gamma = 1.0$, $\gamma = 0.75$, and $\gamma = 0.50$ is
consistent with changes in the density of states curves where the histogram amplitude in the low eigenvalue regime is
sharply diminished as $\gamma$ decreases from $\gamma = 2.0$ to $\gamma = 1.0$.
In the case of a power law decay, results for the mean square deviations are shown in the semi-logarithmic graphs in Fig.~\ref{fig:Fig22}
where the system sizes considered span two decades,
with a closer view for a more restricted range of the decay exponent $\alpha$ in the main graph; the inset is a graph of RMS curves
for a broader set of $\alpha$ values. As in the cases of the square and triangular lattices, the threshold exponent
$\alpha_{c}^{\mathrm{2D}}$ is determined by examining whether the RMS curves converge or diverge for very large system sizes.
In agreement with the square and triangular geometries, we find $\alpha_{c}^{\mathrm{2D}} = 3.15 \pm 0.025$ for the
critical decay exponent.
The eigenvalue histogram curves shown in Fig.~\ref{fig:Fig23} are consistent with the behavior of the mean square deviation
curves given in Fig.~\ref{fig:Fig22}. For the more rapid decays $\alpha = 4.0$ and $\alpha = 4.5$, the amplitude of the
density of states remains finite in the zero eigenvalue limit, which eventually contributes to a divergence in $\delta_{\mathrm{RMS}}^{n}$.
The divergence in the mean square fluctuations is much slower for $\alpha = 3.5$, a characteristic which is echoed in the
eigenvalue histogram in panel (c) of Fig.~\ref{fig:Fig23}, where in the zero eigenvalue limit the histogram amplitude is finite
but very small. Finally, for $\alpha = 3.125$, just below $\alpha_{c}^{2D}$, the density of states curve tends to zero in the low
eigenvalue limit.
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig20.eps}
\caption{\label{fig:Fig20} (Color Online) Mean square deviations for the honeycomb lattice for an
extended coupling scheme where the interaction decays exponentially. The main graph is a semi-logarithmic plot of
$(\delta_{\mathrm{RMS}}^{n})^{2}$ for various values of the decay constant $\gamma$, while the inset is the corresponding
plot of $\delta_{\mathrm{RMS}}^{n}$.}
\end{figure}
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig21.eps}
\caption{\label{fig:Fig21} (Color Online) Density of states curves in the case of the
honeycomb lattice for $\gamma = 2.0$,
$\gamma = 1.0$, $\gamma = 0.75$, and $\gamma = 0.50$ in panels (a), (b), (c), and (d) respectively.}
\end{figure}
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig22.eps}
\caption{\label{fig:Fig22} (Color Online) Mean square deviations for long-range interactions with a power law decay
exponent $\alpha$ for the honeycomb lattice geometry. The inset shows a relatively broad range of $\alpha$ values, whereas the
main graph is a closer view.}
\end{figure}
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig23.eps}
\caption{\label{fig:Fig23} (Color Online) Density of states curves in the case of the
honeycomb lattice with a power law coupling scheme for $\alpha = 4.5$,
$\alpha = 4.0$, $\alpha = 3.5$, and $\alpha = 3.125$ in panels (a), (b), (c), and (d) respectively.}
\end{figure}
\section{Transverse Displacements in an extended scheme}
In each of the preceding cases discussed in this work, thermally induced deviations of the
lattice sites have been confined either to motion within the lattice plane in the context of two
dimensional systems, or to collinear movements for the lattice in 1D. However, we also consider
displacements perpendicular to the lattice plane for two dimensional systems. To mimic the local stiffness
of covalent two dimensional lattices realized in nature, where the finite
thickness provides rigidity with respect to transverse displacements tending to push atoms above
or below the plane of the crystal, we examine a dual layer geometry where an extended scheme
provides a local stiffness.
In fact, to provide as much stability as possible, we consider a long-range coupling scheme in which
the interaction between atoms within a layer, as well as between layers, decreases as a power law
(with the decay exponent designated $\alpha$) in the
separation between atomic species.
The potential energy stored in the lattice is similar in abstract form to the corresponding expression for the
honeycomb lattices, and is given by
\begin{align}
E \! = \! \! \sum_{i,j = 0}^{N} \sum_{l,m=-n}^{n} \! \! \left( \! \! \! \begin{array}{c} \tfrac{1}{2} K(r_{lm}^{s})\! \left[ \!
\vec{\hat{\Delta}}_{lm}^{s} \! \cdot \! (\vec{\delta}_{i+lj+m}^{A} \! - \!
\vec{\delta}_{ij}^{A} ) \! \right]^{2} \\ + \tfrac{1}{2} K(r_{lm}^{s}) \!
\left[ \! \vec{\hat{\Delta}}_{lm}^{s} \! \cdot \! (\vec{\delta}_{i+lj+m}^{B} \! - \!
\vec{\delta}_{ij}^{B} ) \! \right]^{2} \\ + \! \! K(r_{lm}^{\mathrm{ab}}) \left[ \! \vec{\hat{\Delta}}_{lm}^{\mathrm{ab}}
\! \cdot \! (\vec{\delta}_{i+lj+m}^{B} \! - \!
\vec{\delta}_{ij}^{A} ) \! \right]^{2} \end{array} \! \! \! \right)
\end{align}
in a real space representation, where the unit vectors corresponding to the intraplanar couplings $\vec{\hat{\Delta}}_{lm}^{s}$ have
$x$ and $y$ components given by
\begin{align}
\hat{\Delta}_{lm}^{x(s)} = \frac{l}{r_{lm}^{s}};~~\hat{\Delta}_{lm}^{y(s)} = \frac{m}{r_{lm}^{s}},
\end{align}
where the separation between interacting sites within a plane is $r_{lm}^{s} = \sqrt{l^{2} + m^{2}}$.
The components of the unit vector $\vec{\hat{\Delta}}_{lm}^{\mathrm{ab}}$
are identical to those of the planar case with the exception of a nonzero $z$ component $\hat{\Delta}_{lm}^{z(\mathrm{ab})} =
1/r_{lm}^{\mathrm{ab}}$ where the distance between sites in the two distinct lattice planes is $r_{lm}^{\mathrm{ab}} = \sqrt{m^{2} + l^{2} + 1}$.
Operating in terms of Fourier components reduces the
decoupling of the vibrational modes to the diagonalization of a $6 \times 6$ matrix of the block form
\begin{align}
\left[ \begin{array}{cc} \hat{A} & \hat{B} \\ \hat{B}^{\dagger} & \hat{A} \end{array} \right],
\end{align}
where $\hat{A}$ and $\hat{B}$ are $3 \times 3$ sub-matrices, with
$\hat{B}^{\dagger}$ the Hermitian conjugate of $\hat{B}$. The sub-matrices have the form
\begin{align}
\hat{A} = \! \! \! \sum_{l,m=-n}^{n} \! \! \left[ \! \! \! \begin{array}{ccc} (d_{lm}^{xx(\mathrm{ab})} + d_{lm}^{xx(s)}) & (d_{lm}^{xy(\mathrm{ab})} +
d_{lm}^{xy(s)}) & d_{lm}^{xz(\mathrm{ab})} \\
(d_{lm}^{yx(\mathrm{ab})} + d_{lm}^{yx(s)}) & (d_{lm}^{yy(\mathrm{ab})} + d_{lm}^{yy(s)}) & d_{lm}^{yz(\mathrm{ab})} \\
d_{lm}^{zx(\mathrm{ab})} & d_{lm}^{zy(\mathrm{ab})} & d_{lm}^{zz(\mathrm{ab})}
\end{array} \! \! \! \right ]
\end{align}
for $\hat{A}$, where for the sake of brevity we have used the notation, e.g. $d_{lm}^{xy(\mathrm{ab})} \equiv K(r_{lm}^{\mathrm{ab}})
\hat{\Delta}_{lm}^{x(\mathrm{ab})}
\hat{\Delta}_{lm}^{y(\mathrm{ab})}$ and $d_{lm}^{xy(s)} \equiv K(r_{lm}^{s}) (1 - \cos [k_{x} l + k_{y} m])
\hat{\Delta}_{lm}^{x(s)} \hat{\Delta}_{lm}^{y(s)}$.
The complex sub-matrix $\hat{B}$ is given by
\begin{align}
\hat{B} = \! \! \sum_{l,m=-n}^{n} e^{i(k_{x}l + k_{y} m)}
\left[ \! \! \begin{array}{ccc} d_{lm}^{xx(\mathrm{ab})} & d_{lm}^{xy(\mathrm{ab})} & d_{lm}^{xz(\mathrm{ab})} \\
d_{lm}^{yx(\mathrm{ab})} & d_{lm}^{yy(\mathrm{ab})} & d_{lm}^{yz(\mathrm{ab})} \\
d_{lm}^{zx(\mathrm{ab})} & d_{lm}^{zy(\mathrm{ab})} & d_{lm}^{zz(\mathrm{ab})}
\end{array} \! \! \right ]
\end{align}
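As a schematic numerical illustration, the sub-matrices $\hat{A}$ and $\hat{B}$ and the resulting $6 \times 6$ matrix can be assembled directly from the definitions above. The sketch below assumes a power-law spring constant $K(r) = r^{-\alpha}$, a finite interaction cutoff $n$, the block arrangement with $\hat{A}$ on the diagonal and $\hat{B}$, $\hat{B}^{\dagger}$ off the diagonal, and drops the sublattice normalization factors; it is not the production code used for the figures.

```python
import numpy as np

def dynamical_matrix(kx, ky, alpha=4.0, n=5):
    """Assemble the 6x6 dynamical matrix of the dual-layer lattice from
    the 3x3 sub-matrices A and B, for an assumed power-law spring
    constant K(r) = r**(-alpha) and interaction cutoff n."""
    A = np.zeros((3, 3))
    B = np.zeros((3, 3), dtype=complex)
    for l in range(-n, n + 1):
        for m in range(-n, n + 1):
            # interlayer (ab) bonds: r_ab = sqrt(l^2 + m^2 + 1) >= 1
            r_ab = np.sqrt(l ** 2 + m ** 2 + 1.0)
            u_ab = np.array([l, m, 1.0]) / r_ab             # unit vector Delta^(ab)
            d_ab = r_ab ** (-alpha) * np.outer(u_ab, u_ab)  # K(r) * outer product
            A += d_ab
            B += np.exp(1j * (kx * l + ky * m)) * d_ab
            # intraplanar (s) bonds: skip the self-term l = m = 0
            if l == 0 and m == 0:
                continue
            r_s = np.sqrt(l ** 2 + m ** 2)
            u_s = np.array([l, m, 0.0]) / r_s               # in-plane unit vector Delta^(s)
            A += (r_s ** (-alpha) * (1.0 - np.cos(kx * l + ky * m))
                  * np.outer(u_s, u_s))
    # 6x6 block matrix [[A, B], [B^dagger, A]]
    return np.block([[A, B], [B.conj().T, A]])

M0 = dynamical_matrix(0.0, 0.0)
evals = np.linalg.eigvalsh(M0)  # Hermitian, so the eigenvalues are real
```

At $k = 0$ the intraplanar terms vanish and the spectrum contains three zero (acoustic) modes, corresponding to uniform translation of both layers, alongside three positive optical branches.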
Results for the mean square deviation $\delta_{\mathrm{RMS}}^{n}$ are shown in Fig.~\ref{fig:Fig24} for a variety of
decay exponents, ranging from a weak decay ($\alpha = -2.5$) to a considerably more rapid decay with
the separation distance ($\alpha = -6.0$). A salient feature in each of the RMS curves
shown in the graph is an asymptotic linear growth
with the size $N$ of the system, though the slope of the linear divergence in system size decreases
with decreasing $\alpha$. Broadly speaking, there are two regimes for each value of $\alpha$ in the
variation of the mean square deviation with system size. For relatively small system sizes,
$\delta_{\mathrm{RMS}}$ changes quite slowly with increasing $N$. However, ultimately the RMS
fluctuations begin to grow more rapidly, eventually diverging linearly with $N$. The size of the plateau where the mean square
fluctuations grow slowly is slightly broader for small values of the decay exponent $\alpha$, and
somewhat abbreviated for the more rapidly decaying coupling where $\alpha = -6$. The latter is
closer to what one would find in covalently bonded (but non-polar) systems such as graphene where
Van der Waals interactions decreasing with the sixth power of the inter-atomic separation
constitute the
main source of long-range attraction between particles. Hence, London interactions would not
prevent atomic displacements transverse to the lattice plane from destroying
crystalline order.
\begin{figure}
\includegraphics[width=.49\textwidth]{dallsfig24.eps}
\caption{\label{fig:Fig24} (Color Online) Mean square deviations for long-range interactions with a power law decay
exponent $\alpha$.}
\end{figure}
\section{Conclusions}
We have examined the effect of thermally induced lattice vibrations on long range order in
one and two dimensional crystals.
In the case of crystals in one dimension, long-range positional order is (as expected) disrupted by
thermal fluctuations, with $\delta_{\mathrm{RMS}}^{n}$ scaling as $N^{1/2}$. For inherently
long-ranged interactions scaling as power laws in the distance between interacting atoms,
the divergence is much slower for $\alpha > \alpha_{c}^{\mathrm{1D}} = 1.615$, while crystalline order is
intact even at finite temperatures if $\alpha < \alpha_{c}^{\mathrm{1D}}$.
For two dimensional crystals, we find the same essential phenomena
with respect to thermodynamic stability of the crystal for square, triangular, and
honeycomb lattices. For the latter two, thermal fluctuations destroy long-range crystalline
order at finite temperatures, but the divergence in $\delta_{\mathrm{RMS}}^{n}$ occurs very
slowly with increasing system size. On the other hand, RMS deviations decay rapidly for
simple square lattices where only nearest neighbor couplings are active. However, an extended
coupling scheme to both nearest and next-nearest neighbors considerably mitigates the effect
of thermal fluctuations on crystalline order, and the resulting slow divergence is
quantitatively similar to that seen in the triangular and honeycomb lattices where the coupling
is confined entirely to nearest-neighbors.
When we extend the coupling to many neighbors, but still implement a short-ranged coupling
(e.g. an exponential decay with the range set by the inverse decay rate $\gamma^{-1}$),
we find qualitatively the same results as for the square lattice with couplings both to
nearest and next-nearest neighbors, as well as for the triangular and hexagonal lattices in
which atoms only interact with nearest neighbors, with
$(\delta_{\mathrm{RMS}}^{n})^{2}$ scaling linearly with $\log_{10}(N)$. However, the
slope of the linear dependence
becomes smaller as $\gamma$ is decreased, as a longer-ranged coupling is more effective at
suppressing the effects of thermal fluctuations.
As in the case of systems in 1D, a longer range coupling in the form of a power law
can maintain long-range crystalline order for $T > 0$ if the decay exponent does not
exceed $\alpha_{c}^{\mathrm{2D}} = 3.15$.
Allowing thermally induced fluctuations perpendicular to the lattice causes a rapid (i.e., linear) divergence
of the RMS deviations with the system size, even if the lattice geometry is dual-layered
in an extended scheme to provide local stiffness.
The growth of $\delta_{\mathrm{RMS}}^{n}$ is asymptotically linear even if the coupling between sites is
long-ranged, decaying as a power law. Hence, long-ranged London interactions would not be enough by themselves
to preserve positional order in a covalently bonded locally rigid two dimensional lattice.
\begin{acknowledgments}
Useful conversations with Yogesh Joglekar are gratefully acknowledged.
\end{acknowledgments}
\subsubsection*{\bibname}}
\input notation.tex
\newcommand\blfootnote[1]{%
\begingroup
\renewcommand\thefootnote{}\,\footnote{#1}%
\addtocounter{footnote}{-1}%
\endgroup
}
\raggedbottom
\pagestyle{myheadings}
\begin{document}
\thispagestyle{empty}
\title{Extending the Patra--Sen Approach to Estimating the Background Component in a Two-Component Mixture Model}
\author{
Ery Arias-Castro\footnote{Department of Mathematics, University of California, San Diego, USA \newline \indent \quad \url{https://math.ucsd.edu/\~eariasca}}
\and He Jiang\footnote{Department of Mathematics, University of California, San Diego, USA \newline \indent \quad \url{https://math.ucsd.edu/people/graduate-students/}}
}
\date{}
\maketitle
\begin{abstract}
\cite{patra} consider a two-component mixture model, where one component plays the role of background while the other plays the role of signal, and propose to estimate the background component by simply `maximizing' its weight. While in their work the background component is a completely known distribution, we extend their approach here to three emblematic settings: when the background distribution is symmetric; when it is monotonic; and when it is log-concave. In each setting, we derive estimators for the background component, establish consistency, and provide a confidence band. While the estimation of a background component is straightforward when it is taken to be symmetric or monotonic, when it is log-concave its estimation requires the computation of a largest concave minorant, which we implement using sequential quadratic programming.
Compared to existing methods, our method has the advantage of requiring much less prior knowledge on the background component, and is thus less prone to model misspecification. We illustrate this methodology on a number of synthetic and real datasets.
\end{abstract}
\section{Introduction} \label{sec:intro}
\subsection{Two component mixture models}
Among mixture models, two-component models play a special role. In robust statistics, they are used to model contamination, with the main component representing the inlier distribution, while the remaining component representing the outlier distribution \citep{hettmansperger2010robust, huber2009robust, tukey1960survey, huber1964robust}. In that kind of setting, the contamination is a nuisance and the goal is to study how it impacts certain methods for estimation or testing, and also to design alternative methods that behave comparatively better in the presence of contamination.
In multiple testing, the background distribution plays the role of the distribution assumed (in a simplified framework) to be common to all test statistics under their respective null hypotheses, while the remaining component plays the role of the distribution assumed of the test statistics under their respective alternative hypotheses \citep{efron2001empirical, genovese2002operating}.
In an ideal situation where the $p$-values can be computed exactly and are uniformly distributed on $[0,1]$ under their respective null hypotheses, the background distribution is the uniform distribution on $[0,1]$. Compared to the contamination perspective, here the situation is in a sense reverse, as we are keenly interested in the component other than the background component.
We adopt this multiple testing perspective in the present work.
\subsection{The Patra--Sen approach}
Working within the multiple testing framework, \cite{patra} posed the problem of estimating the background component as follows. They operated under the assumption that the background distribution is completely known --- a natural choice in many practical situations, see for example the first two situations in \secref{symmetric_read_data}.
Given a density $f$ representing the density of all the test statistics combined, and letting $g_0$ denote a completely known density, define
\begin{equation}
\label{theta_0_definition}
\theta_0 := \sup \{t: f \ge t g_0\}.
\end{equation}
Note that $\theta_0 \in [0,1]$.
Under some mild assumptions on $f$, the supremum is attained, so that $f$ can be expressed as the following two-component mixture:
\begin{equation}
f = \theta_0 g_0 + (1-\theta_0) u,
\end{equation}
for some density $u$.
\cite{patra} aim at estimating $\theta_0$ defined in \eqref{theta_0_definition} based on a sample from the density $f$, and implement a slightly modified plug-in approach.
Even in this relatively simple setting where the background density --- the completely known density $g_0$ above --- is given, information on $\theta_0$ can help improve inference in a multiple testing situation as shown early on by \cite{storey2002direct}, and even earlier by \cite{benjamini2000adaptive}.
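For intuition, the supremum defining $\theta_0$ can be evaluated numerically as the (essential) infimum of the ratio $f/g_0$ over a grid. The sketch below is illustrative only: the mixture weights, component means, and grid are our own hypothetical choices, not data from the paper.

```python
import numpy as np

def npdf(x, mu=0.0, sd=1.0):
    """Normal density (a numpy-only stand-in for scipy.stats.norm.pdf)."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Hypothetical mixture f = 0.9 N(0,1) + 0.1 N(3,1), with known null g0 = N(0,1)
x = np.linspace(-10.0, 10.0, 20001)
f = 0.9 * npdf(x) + 0.1 * npdf(x, mu=3.0)
g0 = npdf(x)

# theta_0 = sup{t : f >= t * g0} is the essential infimum of f / g0;
# here the ratio 0.9 + 0.1 * exp(3x - 4.5) tends to 0.9 as x -> -infinity
theta0 = np.min(f / g0)
```

In this toy example $\theta_0$ coincides with the true mixing weight $0.9$, though in general $\theta_0$ can exceed the nominal weight when part of the signal component is indistinguishable from the background.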
\subsection{Our contribution}
We find the Patra--Sen approach elegant, and in the present work extend it to settings where the background distribution (also referred to as the null distribution) --- not just the background proportion --- is unknown. For an approach that has the potential to be broadly applicable, we consider three emblematic settings where the background distribution is in turn assumed to be symmetric (\secref{symmetric}), monotone (\secref{monotone}), or log-concave (\secref{log-concave}).
Each time, we describe the estimator for the background component (proportion and density) that the Patra-Sen approach leads to, and study its consistency and numerical implementation. We also provide a confidence interval for the background proportion and a simultaneous confidence band for the background density. In addition, in the log-concave setting, we provide a way of computing the largest concave minorant. We address the situation where the background is specified incorrectly, and mention other extensions, including combinations of these settings and in multivariate settings, in \secref{discussion}.
\subsection{More related work in multiple testing}
The work of \cite{patra} adds to a larger effort to estimate the proportion and/or the density of the null component in a multiple testing scenario. This effort dates back, at least, to early work on false discovery control \citep{benjamini1995controlling} where (over-)estimating the proportion of null hypotheses is crucial to controlling the FDR and related quantities \citep{storey2002direct, benjamini2000adaptive, genovese2004stochastic}.
Directly focusing on the estimation of the null proportion, \cite{langaas2005estimating} consider a setting where the $p$-values are uniform in $[0,1]$ under their null hypotheses and have a monotone decreasing density under their alternative hypotheses, while \cite{meinshausen2006estimating} do not assume anything of the alternative distribution and propose an estimator which is similar in spirit to that of \cite{patra}.
\cite{jin2007estimating} and \cite{jin2008proportion} consider a Gaussian mixture model and approach the problem via the characteristic function --- a common approach in deconvolution problems.
Gaussian mixtures are also considered in \citep{efron2007size, efron2012large, cai2010optimal}, where the Gaussian component corresponding to the null has unknown parameters that need to be estimated.
Some references to estimating the null component that have been or could be applied in the context of multiple testing are given in \cite[Ch 5]{efron2012large}.
Otherwise, we are also aware of the very recent work of \cite{roquain2020}, who, in addition to studying the `cost' of having to estimate the parameters of the null distribution when it is assumed Gaussian, also consider the situation where the null distribution belongs to a given location family, and further propose to estimate the null distribution under an upper bound constraint on the proportion of non-nulls in the mixture model.
\begin{rem}
Much more broadly, all this connects with the vast literature on Gaussian mixture models \citep{cohen1967estimation, lindsay1993multivariate} and on mixture models in general \citep{mclachlan2004finite, mclachlan1988mixture, mclachlan2019finite, lindsay1995mixture}, including two-component models \citep{shen2018mm, bordes2006semiparametric, ma2015flexible, gadat2020parameter}.
\end{rem}
\section{Symmetric background component}
\label{sec:symmetric}
We start with what is perhaps the most natural nonparametric class of null distributions: the class of symmetric distributions about the origin. Unlike \cite{roquain2020}, who assume that the null distribution is symmetric around an unknown location that needs to be estimated but is otherwise known, i.e., its `shape' is known, we assume that the shape is unknown. We do assume that the center of symmetry is known, but this is for simplicity, as an extension to an unknown center of symmetry is straightforward (see our numerical experiments in \secref{symmetric_experiments}).
Mixtures of symmetric distributions are considered in \citep{hunter2007inference}, but otherwise, we are not aware of works estimating the null distribution under an assumption of symmetry in the context of multiple testing.
For works in multiple testing that assume that the null distribution is symmetric but unknown, but where the goal is either testing the global null hypothesis or controlling the false discovery rate, see \citep{arias2017distribution, arias2017distribution_fdr}.
Following the footsteps of \cite{patra}, we make sense of the problem by defining for a density $f$ the following:
\begin{equation}
\label{symmetric_pi0_definition}
\pi_0 := \sup \big\{\pi: \exists g \in \mathcal{S} \text{ s.t. } f - \pi g \geq 0 \text{ a.e.}\big\},
\end{equation}
where $\mathcal{S}$ is the class of even densities (i.e., representing a distribution that is symmetric about the origin).
Note that $\pi_0 \in [0,1]$ is well-defined for any density $f$, with $\pi_0 = 1$ if and only if $f$ itself is symmetric.
\begin{thm}
We have
\begin{align}
\label{symmetric_pi0_value}
\pi_0 = \int_{-\infty}^\infty h_0(x) d x,
&& h_0(x) := \min \{ f(x), f(-x) \}.
\end{align}
Moreover, if $\pi_0 > 0$ the supremum in \eqref{symmetric_pi0_definition} is attained by the following density and no other\,\footnote{~As usual, densities are understood up to sets of zero Lebesgue measure.} :
\begin{equation}
\label{symmetric_g0}
g_0(x) := \frac{h_0(x)}{\pi_0}.
\end{equation}
\end{thm}
\begin{proof}
The parameter $\pi_0$ can be equivalently defined as
\begin{align}
\pi_0 = \sup \big\{\textstyle\int h: \text{$h$ is even and $0 \le h \le f$ a.e.}\big\}.
\end{align}
Note that $h_0$, as defined in the statement, satisfies the above conditions, implying that $\pi_0 \ge \int h_0$.
Take $h$ satisfying these same conditions, namely, $h(x) = h(-x)$ and $0 \le h(x) \le f(x)$ for almost all $x$. Then, for almost any $x$, $h(x) \le f(x)$ and $h(-x) \le f(-x)$, implying that $h(x) \le f(x) \wedge f(-x) = h_0(x)$.
(Here and elsewhere, $a \wedge b$ is another way of denoting $\min(a, b)$.)
Hence, $\int h \le \int h_0$ with equality if and only if $h = h_0$ a.e., in particular implying that $\pi_0 \le \int h_0$. We have thus established that $\pi_0 = \int h_0$, and also that $\int h = \pi_0$ if and only if $h = h_0$ a.e. This not only proves \eqref{symmetric_pi0_value}, but also \eqref{symmetric_g0}, essentially by definition.
\end{proof}
We have thus established that, in the setting of this section, the background component as defined above is given by
\begin{equation}
h_0(x) = \pi_0 g_0(x) = \min \{ f(x), f(-x) \},
\end{equation}
and $f$ can be expressed as a mixture of the background density and another, unspecified, density $u$, as follows:
\begin{equation}
f = \pi_0 g_0 + (1-\pi_0) u.
\end{equation}
The procedure is summarized in \tabref{symmetric_algorithm}.
An illustration of this decomposition is shown in \figref{illustration_symmetric}. By construction, the density $u$ is such that it has no symmetric background component in that, for almost every $x$, $u(x) = 0$ or $u(-x) = 0$.
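As a quick numerical check of this decomposition, $h_0$, $\pi_0$, and $g_0$ can be evaluated on a grid for a hypothetical mixture with a symmetric background and an asymmetric component (the mixture and grid below are illustrative choices of ours):

```python
import numpy as np

def npdf(x, mu=0.0, sd=1.0):
    """Normal density (numpy-only stand-in for scipy.stats.norm.pdf)."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# Hypothetical mixture: symmetric background 0.8 N(0,1) plus asymmetric 0.2 N(3,1)
x = np.linspace(-12.0, 12.0, 24001)  # symmetric grid, so f(-x) is f reversed
dx = x[1] - x[0]
f = 0.8 * npdf(x) + 0.2 * npdf(x, mu=3.0)

h0 = np.minimum(f, f[::-1])   # h0(x) = min{f(x), f(-x)}
pi0 = h0.sum() * dx           # background proportion (Riemann sum)
g0 = h0 / pi0                 # background density
```

Here $\pi_0$ comes out just above $0.8$ (it equals $0.8 + 0.4\,\Phi(-3)$ analytically), since the reflected left tail of the asymmetric component contributes a small additional symmetric part.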
\begin{table}[htpb]
\centering
\caption{Symmetric background computation.}
\label{tab:symmetric_algorithm}
\bigskip
\setlength{\tabcolsep}{0.22in}
\begin{tabular}{ p{0.9\textwidth} }
\toprule
{\textbf{inputs}: density ${f}$, given center of symmetry $c_0$ or candidate center points $\{c_1, c_2, \dots, c_k\}$} \\ \midrule
\textbf{if} center of symmetry is not provided \textbf{then} \newline
\hspace*{3mm} \textbf{for} $i=1,\dots,k$ \textbf{do} \newline
\hspace*{3mm} \hspace*{3mm} $h_i(x) = \min \{ {f}(x), {f}(2c_i - x) \}$ \newline
\hspace*{3mm} \hspace*{3mm} $\pi_i = \int_{-\infty}^{\infty} h_i(x) dx$ \newline
\hspace*{3mm} $\beta = \argmax_i \pi_i$ \newline
\hspace*{3mm} $c_0 = c_{\beta}$ \newline
$h_0(x) = \min \{ {f}(x), {f}(2c_0 - x) \}$ \newline
$\pi_0 = \int_{-\infty}^{\infty} h_0(x) dx$ \newline
$g_0(x) = h_0(x) / \pi_0$ \\
\midrule
\textbf{return} $c_0, \pi_0, g_0, h_0$
\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[htpb]
\centering
\subfigure[Decomposition of $f$ with center of symmetry specified as $0$ (dotted).]{\label{fig:a}\includegraphics[scale=0.35]{illustrationplot_symmetric_giv_ctr.png}}\qquad
\centering
\subfigure[Decomposition of $f$ with center of symmetry, $0.04$ (dotted), found by maximization. ]{\label{fig:b}\includegraphics[scale=0.35]{illustration_symmetric_no_ctr.png}}
\caption{The density $f$ of the Gaussian mixture $0.85 \text{ } \mathcal{N}(0, 1) + 0.15 \text{ } \mathcal{N} (0, 1)$, in black, and its decomposition into $\pi_0 g_0$, in orange, and $(1 - \pi_0) u$, in blue. We specify the center of symmetry as $0$ on the left, and we do not specify the center of symmetry on the right. Notice $\pi_0 = 0.850$ on the left and $\pi_0 = 0.860$ on the right. }
\label{fig:illustration_symmetric}
\end{figure}
\subsection{Estimation and consistency}
When all we have to work with is a sample --- $x_1, x_2, \dots, x_n \in \mathbb{R}$ --- we adopt a straightforward plug-in approach: We estimate the density $f$, obtaining $\hat f$, and apply the procedure of \tabref{symmetric_algorithm}, meaning, we compute $\hat h_0(x) := \min \{ \hat f(x), \hat f(-x) \}$. If we want estimates for the background density and proportion, we simply return $\hat\pi_0 := \int \hat h_0$ and $\hat g_0 := \hat h_0/\hat\pi_0$. (By convention, we set $\hat g_0$ to the standard normal distribution if $\hat\pi_0 = 0$.)
We say that $\hat f = \hat f_n$ is locally uniformly consistent for $f$ if $\operatorname{\mathbb{E}}[\esssup_{x \in I} |\hat f_n(x) - f(x)|] \to 0$ as $n\to\infty$ for any bounded interval $I$.
(Here and elsewhere, $\esssup_{x \in I} f(x)$ denotes the essential supremum of $f$ over the set $I$.)
We note that this consistency condition is satisfied, for example, when $f$ is continuous and $\hat f$ is the kernel density estimator with the Gaussian kernel and bandwidth chosen by cross-validation \citep{chow1983consistent}.
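The plug-in step can be sketched end to end. The sketch below uses a Gaussian KDE with Silverman's rule-of-thumb bandwidth as a simple stand-in for the cross-validated bandwidth used in our experiments; the sample is drawn from a hypothetical mixture of our choosing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample from a hypothetical mixture 0.8 N(0,1) + 0.2 N(3,1)
n = 2000
is_null = rng.random(n) < 0.8
sample = np.where(is_null, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.0, n))

# Gaussian kernel density estimate on a symmetric grid
bw = 1.06 * sample.std() * n ** (-1.0 / 5.0)   # Silverman's rule of thumb
x = np.linspace(-12.0, 12.0, 2401)
dx = x[1] - x[0]
fhat = np.exp(-0.5 * ((x[:, None] - sample[None, :]) / bw) ** 2).sum(axis=1)
fhat /= n * bw * np.sqrt(2.0 * np.pi)

# Plug-in estimates of the symmetric background component
h0hat = np.minimum(fhat, fhat[::-1])  # min{fhat(x), fhat(-x)}
pi0hat = h0hat.sum() * dx             # estimate of pi_0
```

The resulting $\hat\pi_0$ falls near the true background proportion $0.8$, up to smoothing bias and sampling noise.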
\begin{thm}
\label{thm:symmetric_consistency}
Suppose that $\hat f$ is a true density and locally uniformly consistent for $f$. Then $\hat h_0$ is locally uniformly consistent for $h_0$ and $\hat \pi_0$ is consistent, and if $\pi_0 > 0$, then $\hat g_0$ is locally uniformly consistent for $g_0$.
\end{thm}
\begin{proof}
All the limits that follow are as the sample size diverges to infinity.
We rely on the elementary fact that, for $a_1, a_2, b_1, b_2 \in \mathbb{R}$,
\begin{equation}
\big|\min\{a_1, b_1\} - \min\{a_2, b_2\}\big|
\le \max\{|a_1 - a_2|, |b_1 - b_2|\},
\end{equation}
to get that, for all $x$,
\begin{equation}
|\hat h_0(x) - h_0(x)|
\le \max\{|\hat f(x) - f(x)|, |\hat f(-x) - f(-x)|\},
\end{equation}
implying that $\hat h_0$ is locally uniformly consistent for $h_0$.
To be sure, take a bounded interval $I$, which we assume to be symmetric without loss of generality. Then
\begin{align*}
\esssup_{x \in I} |\hat h_0(x) - h_0(x)|
&\le \esssup_{x \in I} |\hat f(x) - f(x)| \vee \esssup_{x \in I} |\hat f(-x) - f(-x)| \\
&= \esssup_{x \in I} |\hat f(x) - f(x)|,
\end{align*}
and we then use the fact that $\operatorname{\mathbb{E}}[\esssup_I |\hat f - f|] \to 0$.
(Here and elsewhere, $a \vee b$ is another way of denoting $\max(a, b)$.)
To conclude, it suffices to show that $\hat\pi_0$ is consistent for $\pi_0$.
Fix $\varepsilon > 0$ arbitrarily small. There is a bounded interval $I$ such that $\int_I f \ge 1 - \varepsilon$.
Then, by the fact that $0\le h_0 \le f$ a.e.,
\[
\int_I h_0 \le \pi_0 = \int h_0 \le \int_I h_0 + \int_{I^\mathsf{c}} f \le \int_I h_0 + \varepsilon.
\]
($I^\mathsf{c}$ denotes the complement of $I$, meaning, $I^\mathsf{c} = \mathbb{R} \setminus I$.)
Similarly, by the fact that $0\le \hat h_0 \le \hat f$ a.e.,
\[
\int_I \hat h_0 \le \hat\pi_0 = \int \hat h_0 \le \int_I \hat h_0 + \int_{I^\mathsf{c}} \hat f.
\]
From this we gather that
\begin{equation*}
\int_I \hat h_0 - \int_I h_0 - \varepsilon
\le
\hat\pi_0 - \pi_0
\le \int_I \hat h_0 - \int_I h_0 + \int_{I^\mathsf{c}} \hat f.
\end{equation*}
Thus consistency of $\hat\pi_0$ follows if we establish that $\limsup \int_{I^\mathsf{c}} \hat f \le \varepsilon$ and that $\int_I \hat h_0 - \int_I h_0 \to 0$.
The former comes from the fact that
\begin{align*}
\int_{I^\mathsf{c}} \hat f
&= \int_{I^\mathsf{c}} \hat f - \int_{I^\mathsf{c}} f + \int_{I^\mathsf{c}} f \\
&\le \int_{I} (\hat f - f) + \varepsilon \\
&\le |I| \esssup_I |\hat f - f| + \varepsilon \to \varepsilon,
\end{align*}
using the fact that $f$ and $\hat f$ are densities and that $\int_{I^\mathsf{c}} f \le \varepsilon$.
($|I|$ denotes the Lebesgue measure of $I$, meaning its length when $I$ is an interval.)
For the latter, we have
\begin{align*}
\left|\int_I \hat h_0 - \int_I h_0\right|
&\le \int_I |\hat h_0 - h_0| \\
&\le |I| \esssup_I |\hat h_0 - h_0| \to 0,
\end{align*}
having already established that $\hat h_0$ is locally uniformly consistent for $h_0$.
\end{proof}
\paragraph{Confidence interval and confidence band}
Beyond mere pointwise consistency, suppose that we have available a confidence band for $f$, which can be derived under some conditions on $f$ from a kernel density estimator --- see \citep{chen2017tutorial} or \citep[Ch 6.4]{gine2021mathematical}.
\begin{thm}
Suppose that for some $\alpha \in (0,1)$, we have at our disposal $\hat f_l$ and $\hat f_u$ such that
\begin{equation}
\label{conf1}
\P \big(\hat f_l(x) \leq f(x) \leq \hat f_u(x), \text{ for almost all $x$}\big) \geq 1 - \alpha.
\end{equation}
Then, with probability at least $1-\alpha$,
\begin{align}
\label{conf2}
\hat\pi_l \le \pi_0 \le \hat\pi_u,
&& \hat g_l \le g_0 \le \hat g_u \text{ a.e.},
\end{align}
where
\begin{align*}
\hat\pi_l := \int \hat h_l,
&& \hat\pi_u := \int \hat h_u,
&& \hat h_l(x) := \min \{ \hat{f}_l(x), \hat{f}_l(-x)\},
&& \hat h_u(x) := \min \{ \hat{f}_u(x), \hat{f}_u(-x)\}.
\end{align*}
\end{thm}
\begin{proof}
Let $\Omega$ be the event that $\hat f_l(x) \leq f(x) \leq \hat f_u(x)$ for almost all $x$, and note that $\P(\Omega) \ge 1-\alpha$ by assumption. Assuming that $\Omega$ holds, we have, for almost all $x$,
\begin{align*}
\hat{f}_l (x) \leq f(x) \leq \hat{f}_u (x),
&& \hat{f}_l (-x) \leq f(-x) \leq \hat{f}_u (-x),
\end{align*}
and taking the minimum in the corresponding places yields
\begin{equation}
\label{symmetric_minimum_bound}
\hat h_l(x) \leq h_0(x) \leq \hat h_u(x).
\end{equation}
Everything else follows immediately from this.
\end{proof}
In words, we apply the procedure of \tabref{symmetric_algorithm} to the lower and upper bounds, $\hat f_l$ and $\hat f_u$. If the center of symmetry is not provided, then we determine it based on $\hat f$, and then use that same center for $\hat f_l$ and $\hat f_u$.
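Concretely, given any band $(\hat f_l, \hat f_u)$, the interval for $\pi_0$ is obtained by running the same computation on both envelopes. In the sketch below, a fixed-width band around a known $f$ serves as a stand-in for the bootstrap band used in practice:

```python
import numpy as np

def npdf(x, mu=0.0, sd=1.0):
    """Normal density (numpy-only stand-in for scipy.stats.norm.pdf)."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

x = np.linspace(-12.0, 12.0, 24001)
dx = x[1] - x[0]
f = 0.8 * npdf(x) + 0.2 * npdf(x, mu=3.0)

# Stand-in band around f (in practice, fl and fu would come from a
# bootstrap confidence band for the kernel density estimator)
eps = 0.005
fl = np.maximum(f - eps, 0.0)
fu = f + eps

hl = np.minimum(fl, fl[::-1])   # lower envelope of min{f(x), f(-x)}
hu = np.minimum(fu, fu[::-1])   # upper envelope
pi_l = hl.sum() * dx            # lower confidence bound for pi_0
pi_u = hu.sum() * dx            # upper confidence bound for pi_0
```

By construction $\hat\pi_l \le \pi_0 \le \hat\pi_u$ whenever the band covers $f$; the width of the resulting interval reflects both the band width and the length of the grid.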
\subsection{Numerical experiments}
\label{sec:symmetric_experiments}
In this subsection we provide simulated examples where the background distribution is taken to be symmetric. We acquire $\hat{f}$ by kernel density estimation using the Gaussian kernel with bandwidth selected by cross-validation \citep{rudemo1982empirical, stone1984asymptotically, arlot2010survey, silverman1986density, sheather1991reliable}, where the cross-validated bandwidth selection is implemented in the \textsf{kedd} package \citep{guidoum2015kernel}. The consistency of this density estimator was proven in \citep{chow1983consistent}. Although there are many methods that provide confidence bands for kernel density estimators, for example \citep{bickel1973some, gine2010confidence}, for simplicity and intuitiveness the simultaneous confidence band (in the form of $\hat{f}_l$ and $\hat{f}_u$) used in our experiments is obtained by bootstrapping a debiased estimator of the density, as proposed in \citep{cheng2019nonparametric}. For a comprehensive review of kernel density estimation and confidence bands, we point the reader to the recent survey paper \citep{chen2017tutorial} and textbook \citep[Ch 6.4]{gine2021mathematical}.
In the experiments below, we carry out our method and report the estimated proportion $\hat{\pi}_0$, as well as its $95\%$ confidence interval $(\hat{\pi}_0^{\rm{L}}, \hat{\pi}_0^{\rm{U}})$.
We consider both the situation where the center of symmetry is given, and the situation where it is not.
In the latter situation, we also report the background component's estimated center, which is selected among several candidate centers and chosen as the one giving the largest symmetric background proportion.
We are not aware of other methods for estimating the quantity $\pi_0$, but we provide a comparison with several well-known methods that estimate similar quantities.
To begin with, we consider the method of \cite{patra}, which estimates the quantity $\theta_0$ as defined in $\eqref{theta_0_definition}$.
We let $\hat{\theta}_0^{\rm{PSC}}$, $\hat{\theta}_0^{\rm{PSH}}$, $\hat{\theta}_0^{\rm{PSB}}$ denote the constant, heuristic, and $95\%$ upper bound estimator on $\theta_0$, respectively.
We then consider estimators of the quantity $\theta$, the actual proportion of the given background component $f_b$ in the mixture density
\begin{equation}
\label{theta_definition}
f = \theta f_b + (1-\theta) u,
\end{equation}
where $u$ denotes the unknown component.
Note that $\theta_0$ and $\pi_0$ may be different from $\theta$.
This mixture model may not be identifiable in general, and we discuss this issue below.
We also consider the estimator of \citep{efron2007size}, denoted $\hat{\theta}^{\rm{E}}$, and implemented in the package \textsf{locfdr}\,\footnote{~\url{https://cran.r-project.org/web/packages/locfdr/index.html}}. This method requires the unknown component to be located away from $0$, and to have heavier tails than the background component.
In addition, when the $p$-values are known, \cite{meinshausen2006estimating} provide a $95\%$ upper bound on the proportion of the null component, and we include that estimator also, denoted $\hat{\theta}^{\rm{MR}}$, and implemented in package \textsf{howmany}\,\footnote{~\url{https://cran.r-project.org/web/packages/howmany/howmany.pdf}}. Finally, when the distribution is assumed to be a Gaussian mixture, \cite{cai2010optimal} provide an estimator, denoted $\hat{\theta}^{\rm{CJ}}$, when the unknown component is assumed to have larger standard deviation than the background component. $\hat{\theta}^{\rm{CJ}}$ requires the specification of a parameter $\gamma$, and following the advice given by the authors, we select $\gamma = 0.2$.
Importantly, unlike our method, these other methods assume knowledge of the background distribution. (Note that the methods of \cite{efron2007size} and \cite{cai2010optimal} do not necessitate full knowledge of the background distribution, but we provide them with that knowledge in all the simulated datasets.)
We summarize the methods used in our experiments in \tabref{other_methods_summary}.
For experiments in situations where the background component is misspecified, we refer the reader to \secref{incorrect_background_subsection}.
\begin{table}[htpb]
\centering\small
\caption{Summary of the methods considered in our experiments.}
\label{tab:other_methods_summary}
\bigskip
\setlength{\tabcolsep}{0.12in}
\begin{tabular}{p{0.14\textwidth}
p{0.2\textwidth}
p{0.25\textwidth}
p{0.26\textwidth}
}
\toprule
{\bf Estimator} & {\bf Reference} & {\bf Description } & {\bf Background information needed} \\
\midrule
$\hat{\pi}_0, \hat{\pi}_0^{\rm{L}}, \hat{\pi}_0^{\rm{U}}$ & Current paper & Estimator of $\pi_0$ with $95\%$ lower and upper confidence bounds. & Requires the background distribution to be symmetric (the requirement becomes monotonicity in \secref{monotone} and log-concavity in \secref{log-concave}). \\
\midrule
$\hat{\theta}_0^{\rm{PSC}}, \hat{\theta}_0^{\rm{PSH}}, \hat{\theta}_0^{\rm{PSB}}$ & \citep{patra} & Constant, heuristic, and $95\%$ upper bound estimates of $\theta_0$. & Requires complete knowledge of the background distribution.\\
\midrule
$\hat{\theta}^{\rm{E}}$ & \citep{efron2007size} & Estimator of $\theta$. & Either requires full knowledge of the background distribution or can estimate the background distribution when it has a shape similar to a Gaussian distribution centered around $0$. \\
\midrule
$\hat{\theta}^{\rm{MR}}$ & \citep{meinshausen2006estimating} & $95\%$ upper bound on $\theta$. & Requires complete knowledge of the background distribution. \\
\midrule
$\hat{\theta}^{\rm{CJ}}$ & \citep{cai2010optimal} & Estimator of $\theta$. & Either requires full knowledge of the background distribution or can estimate the background distribution when it is Gaussian. \\
\bottomrule
\end{tabular}
\end{table}
We consider four different situations as listed in \tabref{symmetric_simulation_situations}. Each situation's corresponding $\theta$, $\theta_0$, and $\pi_0$, defined as in \eqref{symmetric_pi0_definition}, are also presented, where $\theta_0$ and $\pi_0$ are obtained numerically based on knowledge of $f$. For each model, we generate a sample of size $n = 1000$ and compute all the estimators described above. We repeat this process $1000$ times.
We transform the data accordingly when applying the comparison methods.
The results of our experiments are reported, in terms of mean values as well as standard deviations, in \tabref{symmetric_simulation_numbers}. It can be seen that in most situations, our estimator achieves comparable if not better performance when estimating $\pi_0$ as compared to the other methods for the parameter they are meant to estimate ($\theta$ or $\theta_0$). We also note that our method is significantly influenced by the estimation of $\hat{f}$; in situations where $\hat{f}$ deviates substantially from $f$, our estimator will therefore likely incur higher error. In addition, it is clear from the experiments that specifying the center of symmetry is unnecessary.
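For concreteness, the data-generating step for, e.g., model \textsf{S1} can be sketched as follows. This is a minimal Python/NumPy sketch; the estimators themselves are not reproduced here, and the function name is our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(n, w=0.85, mu=3.0):
    """Draw n points from w * N(0,1) + (1-w) * N(mu,1), as in model S1."""
    from_null = rng.random(n) < w              # latent component labels
    return np.where(from_null,
                    rng.normal(0.0, 1.0, n),   # background component
                    rng.normal(mu, 1.0, n))    # unknown component

x = sample_mixture(1000)                       # one simulated dataset
```

Each repetition of the experiment draws a fresh sample of size $n = 1000$ in this way before computing the estimators.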
\begin{table}[htpb]
\centering\small
\caption{Simulated situations for the estimation of a symmetric background component, together with the corresponding values of $\theta$, $\theta_0$, $\pi_0$ (unspecified center), and $\pi_{00}$ (given center), obtained numerically (and rounded at 3 decimals).}
\label{tab:symmetric_simulation_situations}
\bigskip
\setlength{\tabcolsep}{0.03in}
\begin{tabular}{p{0.1\textwidth}
p{0.525\textwidth} p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth}
}
\toprule
{\bf Model} & {\bf Distribution} & {\bf $\theta $} & {\bf $\theta_0$} & {\bf $\pi_0$} & {\bf $\pi_{00}$}\\
\midrule
\textsf{S1} & $0.85 \text{ }\mathcal{N} (0, 1) + 0.15 \text{ }\mathcal{N}(3, 1) $ & 0.850 & 0.850 & 0.860 & 0.850 \\
\midrule
\textsf{S2} & $0.95 \text{ }\mathcal{N} (0, 1) + 0.05 \text{ }\mathcal{N}(3, 1) $ & 0.950 & 0.950 & 0.950 & 0.950 \\
\midrule
\textsf{S3} & $0.85 \text{ }\mathcal{N} (0, 1) + 0.1 \text{ }\mathcal{N}(2.5, 0.75) + 0.05 \text{ }\mathcal{N}(-2.5, 0.75) $ & 0.850 & 0.851 & 0.954 & 0.950 \\
\midrule
\textsf{S4} & $0.85 \text{ }\mathcal{N} (0, 1) + 0.1 \text{ }\mathcal{N}(2.5, 0.75) + 0.05 \text{ }\mathcal{N}(5, 0.75) $ & 0.850 & 0.850 & 0.858 & 0.850\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htpb]
\centering\small
\caption{A comparison of various methods for estimating a background component in the situations of \tabref{symmetric_simulation_situations}. For our method, the first and second rows in each situation are for when the center is unspecified, while the third and fourth rows are for when the center is specified to be the origin. Thus $\hat{\pi}_0$ on the first row of each situation is compared with $\pi_0$, while $\hat{\pi}_0$ on the third row is compared with $\pi_{00}$. Otherwise, the $\hat{\theta}_0^{\rm X}$ are compared with $\theta_0$, while the $\hat{\theta}^{\rm X}$ are compared with $\theta$.}
\label{tab:symmetric_simulation_numbers}
\bigskip
\setlength{\tabcolsep}{0.02in}
\begin{tabular}{p{0.08\textwidth}
p{0.075\textwidth}
p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth}
p{0.08\textwidth}
p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth}}
\toprule
{\bf Model} & {Center} & {\bf $\hat{\pi}_0$} & {\bf $\hat{\pi}_0^{\rm{L}}$} & {\bf $\hat{\pi}_0^{\rm{U}}$} & {\bf Null} & {\bf $\hat{\theta}_0^{\rm{PSC}}$} & {\bf $\hat{\theta}_0^{\rm{PSH}}$} & {\bf $\hat{\theta}_0^{\rm{PSB}}$} & {\bf $\hat{\theta}^{\rm{E}}$} & {\bf $\hat{\theta}^{\rm{MR}}$} & {\bf $\hat{\theta}^{\rm{CJ}}$}\\
\midrule
\textsf{S1} & 0.068 & 0.857 & 0.571 & 1 & $\mathcal{N} (0,1)$ & 0.848 & 0.855 & 0.893 & 0.861 & 0.890 & 0.893 \\
\text{ } & (0.052) & (0.021) & (0.059) & (0) & & (0.024) & (0.023) & (0.018) & (0.023) & (0.014) & (0.085)\\
& 0 & 0.835 & 0.561 & 1 & & & & & \\ \text{ } & \textsf{given} & (0.022) & (0.058) & (0) & & & &\\
\midrule
\textsf{S2} & 0.024 & 0.936 & 0.631 & 1 & $\mathcal{N}(0,1)$ & 0.938 & 0.954 & 0.985 & 0.955 & 0.972 & 0.963 \\
\text{ } & (0.044) & (0.017) & (0.066) & (0) & & (0.023) & (0.022) & (0.015) & (0.026) & (0.008) & (0.082) \\
& 0 & 0.925 & 0.623 & 1 & & & & & \\
\text{ } & \textsf{given} & (0.019) & (0.067) & (0) & & & & & \\
\midrule
\textsf{S3} & 0.046 & 0.945 & 0.597 & 1 & $\mathcal{N} (0,1)$ & 0.864 & 0.942 & 0.937 & 0.856 & 0.896 & 0.707\\
\text{ } & (0.050) & (0.019) & (0.063) & (0) & & (0.023) & (0.034) & (0.017) & (0.024) & (0.015) & (0.085)\\
& 0 & 0.930 & 0.587 & 1 & & & & & \\ \text{ } & \textsf{given} & (0.020) & (0.064) & (0) & & & &\\
\midrule
\textsf{S4} & 0.070 & 0.856 & 0.574 & 1 & $\mathcal{N}(0,1)$ & 0.846 & 0.854 & 0.891 & 0.849 & 0.889 & 0.713 \\
\text{ } & (0.056) & (0.021) & (0.061) & (0) & & (0.023) & (0.024) & (0.018) & (0.024) & (0.014) & (0.088)\\
& 0 & 0.833 & 0.563 & 1 & & & & & \\
\text{ } & \textsf{given} & (0.022) & (0.060) & (0) & & & & & \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Real data analysis}
\label{sec:symmetric_read_data}
In this subsection we examine six real datasets where the null component could be reasonably assumed to be symmetric.
We begin with two datasets where we have sufficient information on the background component. The first one is the Prostate dataset \citep{singh2002gene}, which contains gene expression levels for $n=6033$ genes on $102$ men, of which $50$ are control subjects and $52$ are prostate cancer patients. The main objective is to discover the genes that have a different expression level in the control and prostate cancer patient groups. For each gene, we conduct a two-sided two-sample $t$ test on the control subjects and prostate cancer patients, and then transform these $t$ statistics into $z$ values, using
\begin{equation}
\label{symmetric_t_z_transformation}
z_i = \Phi^{-1} (F_{100}(t_i)), \text{ } i = 1,2,\dots,6033,
\end{equation}
where $\Phi$ denotes the cdf of the standard normal distribution, and $F_{100}$ denotes the cdf of the $t$ distribution with 100 degrees of freedom. We work with these $n=6033$ $z$ values. Following \cite{efron2007size}, the background component here can reasonably be assumed to be $\mathcal{N} (0,1)$. The results of the different proportion estimators compared in \secref{symmetric_experiments} are shown in the first row of \tabref{symmetric_realdata}.
The fitted largest symmetric component as well as confidence bands are plotted in \figref{symmetric_realdata_picture_a}.
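As a side note, the transformation \eqref{symmetric_t_z_transformation} is a one-line computation; here is a sketch using \textsf{scipy} (the function name is our own):

```python
import numpy as np
from scipy.stats import norm, t

def t_to_z(t_stats, df):
    """z_i = Phi^{-1}(F_df(t_i)): map t statistics to z values."""
    return norm.ppf(t.cdf(t_stats, df))

# t = 0 maps to z = 0, and the map is antisymmetric in t
z = t_to_z(np.array([0.0, 2.0, -2.0]), df=100)
```

Under the null, the resulting $z$ values are (approximately) standard normal, which is what makes the $\mathcal{N}(0,1)$ background plausible.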
Next we consider the Carina dataset \citep{walker2007velocity}, which contains the radial velocities of $n=1266$ stars in Carina, a dwarf spheroidal galaxy, mixed with those of Milky Way stars in the field of view. As \cite{patra} state, the background distribution of the radial velocity, \textsf{bgstars}, can be acquired from \cite{robin2003synthetic}. The various estimators are computed and shown in the second row of \tabref{symmetric_realdata}. The fitted largest symmetric component as well as confidence bands are plotted in \figref{symmetric_realdata_picture_b}.
\begin{table}[htpb]
\centering\small
\caption{Two real datasets where the background component can be reasonably guessed or derived. We compare the same methods for extracting a background symmetric component as in \secref{symmetric_experiments}. (Note that we work with $z$ values here instead of $p$-values, so our results for the Prostate dataset are slightly different from those reported by \cite{patra}.)}
\label{tab:symmetric_realdata}
\bigskip
\setlength{\tabcolsep}{0.035in}
\begin{tabular}{p{0.1\textwidth}
p{0.07\textwidth}
p{0.07\textwidth} p{0.07\textwidth} p{0.05\textwidth}
p{0.08\textwidth}
p{0.07\textwidth} p{0.07\textwidth} p{0.07\textwidth} p{0.07\textwidth}
p{0.07\textwidth} p{0.07\textwidth}}
\toprule
{\bf Model} & {Center} & {\bf $\hat{\pi}_0$} & {\bf $\hat{\pi}_0^{\rm{L}}$} & {\bf $\hat{\pi}_0^{\rm{U}}$} & {\bf Null} & {\bf $\hat{\theta}_0^{\rm{PSC}}$} & {\bf $\hat{\theta}_0^{\rm{PSH}}$} & {\bf $\hat{\theta}_0^{\rm{PSB}}$} & {\bf $\hat{\theta}^{\rm{E}}$} & {\bf $\hat{\theta}^{\rm{MR}}$} & {\bf $\hat{\theta}^{\rm{CJ}}$} \\
\midrule
Prostate & 0 & 0.977 & 0.789 & 1 & $\mathcal{N} (0,1)$ & 0.931 & 0.941 & 0.975 & 0.931 & 0.956 & 0.867\\
\midrule
Carina & 59 & 0.540 & 0.071 & 1 & \textsf{bgstars} & 0.636 & 0.645 & 0.677 & 0.951 & 0.664 & 0.206 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[htpb]
\label{fig:symmetric_realdata_known}
\centering
\subfigure[Prostate dataset]{\label{fig:symmetric_realdata_picture_a}\includegraphics[scale=0.355]{real_data_prostate_symmetric.png}}\qquad
\centering
\subfigure[Carina dataset]{\label{fig:symmetric_realdata_picture_b}\includegraphics[scale=0.355]{real_data_carina_symmetric.png}}
\caption{Estimated symmetric component on the Prostate ($z$ values) and Carina (radial velocity) datasets: the black curve represents the fitted density; the center orange curve represents the computed $\hat{h}_0$; the top and bottom orange curves represent the $95\%$ simultaneous confidence bands for $h_0$; the estimated center of the symmetric component is indicated by a dotted vertical line.}
\label{fig:prostate_carina}
\end{figure}
Aside from these two real datasets where we know the background distribution, we consider four other real datasets --- three microarray datasets and one police dataset --- where we do not know the null distribution \citep{efron2012large}. Here, out of the methods considered above, only our method and those of \cite{efron2007size} and \cite{cai2010optimal} are applicable. For the first comparison method we use the MLE estimate as presented in \citep[Sec 4]{efron2007size}, which is usually very close to the result of Central Matching as used in \citep{efron2012large}. For the second comparison method we use the estimator in \citep[Sec 3.2]{cai2010optimal}, although we use $\gamma = 0.1$, as the recommended value $\gamma = 0.2$ leads to significant underestimation of the background proportion. Note that both of these methods are still meant to estimate $\theta$.
The HIV dataset \citep{van2003cellular} consists of a study of $4$ HIV subjects and $4$ control subjects. The measurements of $n = 7680$ gene expression levels were acquired using cDNA microarrays on each subject. We compute $t$ statistics for the two-sided $t$ test and then transform them into $z$ values using \eqref{symmetric_t_z_transformation}, with $6$ degrees of freedom here. We would like to know what proportion of these genes do not show a significant difference in expression levels between HIV and control subjects. The results are summarized in the first row of \tabref{symmetric_unknown_realdata}.
The fitted largest symmetric component as well as confidence bands are shown in \figref{symmetric_realdata_picture_c}.
The Leukemia dataset comes from \citep{golub1999molecular}. There are $72$ patients in this study, of which $45$ have ALL (Acute Lymphoblastic Leukemia) and $27$ have AML (Acute Myeloid Leukemia), with AML being considered more severe. High density oligonucleotide microarrays gave expression levels on $n = 7128$ genes. Following \citep[Ch 6.1]{efron2012large},
the raw expression levels on each microarray, $x_{i,j}$ for gene $i$ on array $j$, were transformed to a normal score
\begin{equation}
y_{i,j} = \Phi^{-1} \bigg( \big(\textsf{rank}(x_{i,j}) - 0.5\big) / n\bigg),
\end{equation}
where $\textsf{rank}(x_{i,j})$ denotes the rank of $x_{i,j}$ among $n$ raw values of array $j$.
$t$ tests were then conducted on ALL and AML patients, and the $t$ statistics were transformed to $z$ values according to \eqref{symmetric_t_z_transformation}, now with $70$ degrees of freedom. As before, we would like to know the proportion of genes that do not show a significant difference in expression levels between ALL and AML patients. The results are summarized in the second row of \tabref{symmetric_unknown_realdata}.
The fitted largest symmetric component as well as confidence bands are shown in \figref{symmetric_realdata_picture_d}.
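The rank-based normal score transformation above is straightforward to implement; here is a sketch using \textsf{scipy} (the function name is our own), applied column by column, one column per microarray, as in the text:

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_scores(x):
    """Column-wise normal scores Phi^{-1}((rank - 0.5)/n), with ranks
    taken among the n values of each column (one column per array)."""
    n = x.shape[0]
    ranks = np.apply_along_axis(rankdata, 0, x)  # rank within each array
    return norm.ppf((ranks - 0.5) / n)

# toy example: 5 "genes" measured on 2 "arrays"
x = np.array([[1.0, 10.0], [3.0, -2.0], [2.0, 0.5], [5.0, 7.0], [4.0, 3.0]])
y = normal_scores(x)
```

By construction, every column of the transformed matrix has the same empirical distribution, namely the normal scores $\Phi^{-1}((i - 0.5)/n)$, $i = 1, \dots, n$.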
The Parkinson dataset comes from \citep{lesnick2007genomic}. In this dataset, substantia nigra tissue --- a brain structure located in the mesencephalon that plays an important role in reward, addiction, and movement --- from the postmortem brains of normal subjects and Parkinson's disease patients was used for RNA extraction and hybridization, done on Affymetrix microarrays. There are $n = 54\,277$ nucleotide sequences whose expression levels were measured on $16$ Parkinson's disease patients and $9$ control patients. We wish to find out the proportion of sequences that do not show a significant difference between Parkinson and control patients. The results are summarized in the third row of \tabref{symmetric_unknown_realdata}.
The fitted largest symmetric component as well as confidence bands are shown in \figref{symmetric_realdata_picture_e}.
The Police dataset is analyzed in \citep{ridgeway2009doubly}. In 2006, based on $500\,000$ pedestrian stops in New York City, each of the city's $n = 2749$ police officers who were regularly involved in pedestrian stops was assigned a $z$ score on the basis of their stop data, in consideration of possible racial bias. For details on computing this $z$ score, we refer the reader to \citep{ridgeway2009doubly, efron2012large}. Large positive $z$ values are considered possible evidence of racial bias. We would like to know the percentage of these police officers who do not exhibit a racial bias in pedestrian traffic stops. The estimated proportions are reported on the last row of \tabref{symmetric_unknown_realdata}.
The symmetric component as well as confidence bands are presented in \figref{symmetric_realdata_picture_f}.
\begin{table}[htpb]
\centering\small
\caption{Real datasets where the background distribution is unknown and needs to be estimated. We compare the methods for extracting a background symmetric component among those in \secref{symmetric_experiments} that apply.}
\label{tab:symmetric_unknown_realdata}
\bigskip
\setlength{\tabcolsep}{0.03in}
\begin{tabular}{p{0.15\textwidth}
p{0.1\textwidth}
p{0.1\textwidth} p{0.1\textwidth}
p{0.1\textwidth} p{0.1\textwidth} p{0.1\textwidth}
}
\toprule
{\bf Model} & {Center} & {\bf $\hat{\pi}_0$} & {\bf $\hat{\pi}_0^{\rm{L}}$} & {\bf $\hat{\pi}_0^{\rm{U}}$} & {\bf $\hat{\theta}^{\rm{E}}$} & {\bf $\hat{\theta}^{\rm{CJ}}$} \\
\midrule
HIV & $-0.62$ & 0.950 & 0.775 & 1 & 0.940 & 0.926 \\
\midrule
Leukemia & 0.16 & 0.918 & 0.639 & 1 & 0.911 & 0.820 \\
\midrule
Parkinson & $-0.18$ & 0.985 & 0.924 & 1 & 0.998 & 0.993 \\
\midrule
Police & 0.10 & 0.982 & 0.767 & 1 & 0.985 & 0.978 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[htpb]
\label{fig:symmetric_realdata_unknown}
\centering
\subfigure[HIV dataset]{\label{fig:symmetric_realdata_picture_c}\includegraphics[scale=0.35]{real_data_hiv_symmetric.png}} \qquad
\centering
\subfigure[Leukemia dataset]{\label{fig:symmetric_realdata_picture_d}\includegraphics[scale=0.35]{real_data_leukemia_symmetric.png}} \\
\centering
\subfigure[Parkinson dataset]{\label{fig:symmetric_realdata_picture_e}\includegraphics[scale=0.35]{real_data_parkinson_symmetric.png}} \qquad
\centering
\subfigure[Police dataset]{\label{fig:symmetric_realdata_picture_f}\includegraphics[scale=0.35]{real_data_police_symmetric.png}}
\caption{Estimated symmetric component on HIV ($z$ values), Leukemia ($z$ values), Parkinson ($z$ values), and Police ($z$ scores) datasets: the black curve represents the fitted density; the center orange curve represents the computed $\hat{h}_0$; the top and bottom orange curves represent the $95\%$ simultaneous confidence bands for $h_0$; the estimated center of the symmetric component is indicated by a dotted vertical line.}
\label{fig:real_data_symmetric_unknown}
\end{figure}
\section{Monotone background component}
\label{sec:monotone}
In this section, we turn our attention to extracting from a density its monotone background component, following the Patra--Sen approach. For this to make sense, we restrict attention to densities supported on $\mathbb{R}_+ = [0, \infty)$; indeed, all the densities we consider in this section will be supported on $\mathbb{R}_+$.
For such a density $f$, we thus define
\begin{equation}
\label{nonincreasing_pi0_definition}
\pi_0 := \sup \big\{\pi: \exists g \in \mathcal{M} \text{ s.t. } f - \pi g \geq 0 \text{ a.e.}\big\},
\end{equation}
where $\mathcal{M}$ is the class of monotone (necessarily non-increasing) densities on $\mathbb{R}_+$.
Note that $\pi_0 \in [0,1]$ is well-defined for any density $f$, with $\pi_0 = 1$ if and only if $f$ itself is monotone.
Recall that the essential infimum of a measurable set $A$, denoted $\essinf A$, is defined as the supremum over $t \in \mathbb{R}$ such that $A \cap (-\infty, t)$ has Lebesgue measure zero.
Everywhere in this section, we will assume that $f$ is c\`adl\`ag, meaning that, at any point, it is continuous from the right and admits a limit from the left.
\begin{thm}
\label{thm:nonincreasing}
Assuming $f$ is c\`adl\`ag, we have
\begin{align}
\label{nonincreasing_pi0}
\pi_0 = \int_{0}^\infty h_0(x) d x,
&& h_0(x) := \essinf\{f(y) : y \leq x\}.
\end{align}
Moreover, if $\pi_0 > 0$, the supremum in \eqref{nonincreasing_pi0_definition} is attained by the following density and no other:
\begin{equation}
\label{nonincreasing_g0}
g_0(x) := \frac{h_0(x)}{\pi_0}.
\end{equation}
\end{thm}
Note that $\pi_0 = 0$ (i.e., $f$ has no monotone background component) if and only if $\essinf\{f(y) : y \le x\} = 0$ for some $x >0$, or equivalently, if $\{x \in [0,t] : f(x) \le \varepsilon\}$ has positive measure for all $t > 0$ and all $\varepsilon > 0$. If $f$ is c\`adl\`ag, this condition reduces to $f(0) = 0$. Also, if $f$ is c\`adl\`ag, $h_0(x) = \min\{f(y) : y < x\}$.
\begin{proof}
Note that $\pi_0$ can be equivalently defined as
\begin{align}
\pi_0 = \sup \big\{\textstyle{\int h}: \text{$h$ is monotone and $0 \le h \le f$ a.e.}\big\}.
\end{align}
Note that $h_0$, as defined in the statement, satisfies the above conditions, implying that $\pi_0 \ge \int h_0$.
Take $h$ satisfying these same conditions, namely, $h$ is monotone and $0 \le h(x) \le f(x)$ for almost all $x$, say, for $x \in \mathbb{R}_+ \setminus A$ where $A$ has Lebesgue measure zero. Take such an $x$. Then for any $y \le x$ we have $h(x) \le h(y)$, and $h(y) \le f(y)$ if in addition $y \notin A$. Hence,
\[
h(x) \le \inf\{f(y) : y < x, y \notin A\} \le \essinf\{f(y) : y < x\} = h_0(x),
\]
where the second inequality comes from the fact that $A$ has zero Lebesgue measure and the definition of essential infimum.
Hence, $\int h \le \int h_0$ with equality if and only if $h = h_0$ a.e., in particular implying that $\pi_0 \le \int h_0$. We have thus established that $\pi_0 = \int h_0$, and also that $\int h = \pi_0$ if and only if $h = h_0$ a.e. This not only proves \eqref{nonincreasing_pi0}, but also \eqref{nonincreasing_g0}.
\end{proof}
We have thus established that, in the setting of this section where $f$ is assumed to be c\`adl\`ag, the background component as defined above is given by
\begin{equation}
h_0(x) = \pi_0 g_0(x) = \essinf\{f(y) : y < x\},
\end{equation}
and $f$ can be expressed as a mixture of the background density and another, unspecified, density $u$, as follows:
\begin{equation}
f = \pi_0 g_0 + (1-\pi_0) u.
\end{equation}
The procedure is summarized in \tabref{monotone_algorithm}.
An illustration of this decomposition is shown in \figref{illustration_nonics}.
(In this section, $\mathcal{E}(\sigma)$ denotes the exponential distribution with scale $\sigma$ and $\mathcal{G}(\kappa, \sigma)$ denotes the Gamma distribution with shape $\kappa$ and scale $\sigma$. Recall that $\mathcal{E}(\sigma) \equiv \mathcal{G}(1, \sigma)$.)
By construction, the density $u$ has no monotone background component, in that $\essinf\{u(y) : y < x\} = 0$ for any $x > 0$.
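On a discretization grid, the computation summarized above reduces to a running minimum followed by numerical integration. Here is a minimal sketch of this grid version (the grid, the trapezoid rule, and the function name are our own choices, not the authors' implementation):

```python
import numpy as np

def monotone_background(f_vals, x_grid):
    """Grid analogue of the monotone background computation: h_0 is the
    running minimum of f, pi_0 its mass (trapezoid rule), g_0 the
    normalized density."""
    h0 = np.minimum.accumulate(f_vals)
    dx = np.diff(x_grid)
    pi0 = float(np.sum((h0[1:] + h0[:-1]) / 2.0 * dx))
    g0 = h0 / pi0 if pi0 > 0 else None
    return pi0, h0, g0

# sanity check: f itself monotone (Exp(1) density) should give pi_0 close to 1
x = np.linspace(0.0, 25.0, 25001)
pi0, h0, g0 = monotone_background(np.exp(-x), x)
```

When $f$ is not monotone, the running minimum flattens wherever $f$ increases, which is exactly the flat part visible in the orange curve of \figref{illustration_nonics}.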
\begin{table}[htpb]
\centering
\caption{Monotone background computation.}
\label{tab:monotone_algorithm}
\bigskip
\setlength{\tabcolsep}{0.22in}
\begin{tabular}{ p{0.9\textwidth} }
\toprule
{\textbf{inputs}: density ${f}$ defined on $[0, \infty)$} \\ \midrule
$h_0(x) = \essinf\{f(y) : y \leq x\}$ \newline
$\pi_0 = \int_0^\infty h_0(x) dx$ \newline
$g_0(x) = h_0(x) / \pi_0$ \\
\midrule
\textbf{return} $\pi_0, g_0, h_0$
\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[htpb]
\centering
\includegraphics[width=0.3\textwidth]{illustrationplot_nonics.png}
\caption{The density $f$ of the Gamma mixture $0.85 \, \mathcal{E} (1) + 0.15 \, \mathcal{G} (100, 1/20)$, in black, and its decomposition into $\pi_0 g_0$, in orange, and $(1 - \pi_0) u$, in blue. Notice that the orange curve has a flat part around the middle and is slightly different from $0.85 \, \mathcal{E}(1)$.}
\label{fig:illustration_nonics}
\end{figure}
\subsection{Estimation and consistency}
In practice, when all we have is a sample of observations, $x_1, \dots, x_n \in \mathbb{R}_+$, we first estimate the density, resulting in $\hat f$, and then compute the quantities defined in \eqref{nonincreasing_pi0} and \eqref{nonincreasing_g0} with $\hat f$ in place of $f$. Thus our estimates are
\begin{align}
\hat\pi_0 := \int_{0}^\infty \hat h_0(x) d x,
&& \hat h_0(x) := \essinf\{\hat f(y) : y < x\},
&& \hat g_0(x) := \frac{\hat h_0(x)}{\hat\pi_0}.
\end{align}
\begin{thm}
Assume $f$ is c\`adl\`ag, and suppose that $\hat f$ is itself a c\`adl\`ag density that is locally uniformly consistent for $f$. Then $\hat h_0$ is locally uniformly consistent for $h_0$ and $\hat \pi_0$ is consistent for $\pi_0$; moreover, if $\pi_0 > 0$, then $\hat g_0$ is locally uniformly consistent for $g_0$.
\end{thm}
\begin{proof}
From the definitions,
\begin{align*}
h_0(x) = \essinf\{f(y) : y < x\},
&& \hat h_0(x) := \essinf\{\hat f(y) : y < x\},
\end{align*}
so that
\begin{align*}
|h_0(x) - \hat h_0(x)|
\le \esssup\{|f(y) - \hat f(y)| : y < x\},
\end{align*}
further implying that
\begin{align*}
\esssup_{x < a} |h_0(x) - \hat h_0(x)|
\le \esssup_{y < a} |f(y) - \hat f(y)|, \quad \text{for any $a > 0$},
\end{align*}
from which we get that $\hat h_0$ is locally uniformly consistent for $h_0$ whenever $\hat f$ is locally uniformly consistent for $f$.
For the remainder of the proof, we can follow in the footsteps of the proof of \thmref{symmetric_consistency} based on the fact that, for any $a > 0$,
\[
\left|\int_{[0,a]} h_0 - \int_{[0,a]} \hat h_0\right|
\le \int_{[0,a]} |h_0 - \hat h_0|
\le a \esssup_{[0,a]} |h_0 - \hat h_0|
\to 0,
\]
where the limit is in expectation as the sample size increases.
\end{proof}
\paragraph{Confidence interval and confidence band}
To go beyond point estimators, we suppose that we have available a confidence band for $f$ and deduce from that a confidence interval for $\pi_0$ and a confidence band for $g_0$.
\begin{thm}
Suppose that we have a confidence band for $f$ as in \eqref{conf1}. Then \eqref{conf2} holds with probability at least $1-\alpha$, where
\begin{align*}
\hat\pi_l := \int \hat h_l,
&& \hat\pi_u := \int \hat h_u,
&& \hat h_l(x) := \essinf \{ \hat{f}_l(y) : y < x\},
&& \hat h_u(x) := \essinf \{ \hat{f}_u(y) : y < x\}.
\end{align*}
\end{thm}
The proof is straightforward and thus omitted.
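In the same grid-based spirit as before, the interval endpoints $\hat\pi_l$ and $\hat\pi_u$ reduce to running minima of the band envelopes followed by integration. Here is a minimal sketch, under the assumption that the band is available on a grid (the function name and the toy band are our own):

```python
import numpy as np

def monotone_pi0_bounds(f_l, f_u, x_grid):
    """Map a grid confidence band [f_l, f_u] for f into the confidence
    interval [pi_l, pi_u] for pi_0 via running minima (trapezoid rule)."""
    h_l = np.minimum.accumulate(np.clip(f_l, 0.0, None))  # lower envelope
    h_u = np.minimum.accumulate(f_u)                      # upper envelope
    dx = np.diff(x_grid)

    def trap(h):
        return float(np.sum((h[1:] + h[:-1]) / 2.0 * dx))

    return trap(h_l), trap(h_u)

# toy band around an Exp(1) density
x = np.linspace(0.0, 25.0, 25001)
f = np.exp(-x)
pi_l, pi_u = monotone_pi0_bounds(0.9 * f, 1.1 * f, x)
```

The lower envelope is clipped at zero since a confidence band for a density may dip below zero, while $h_l$ must remain a valid sub-density.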
\begin{rem}
So far, we have assumed that the monotone density is supported on $[0, \infty)$, but in principle we can also consider the starting point as unspecified. If this is the case, similar to what we did for the case of a symmetric component in \tabref{symmetric_algorithm}, we can again consider several candidate locations defining the monotone component's support, and select the one yielding the largest monotone component weight.
\end{rem}
\subsection{Numerical experiments}
We are here dealing with densities supported on $[0, \infty)$, and what happens near the origin is crucial, as is transparent from the definition of $\hat h_0$. It is thus important in practice to choose an estimator for $f$ that behaves well in the vicinity of the origin. As it is well known that kernel density estimators have a substantial bias near the origin, we opted for a different estimator. Many density estimation methods have been proposed to deal with boundary effects, including smoothing splines \citep{gu1993smoothingtheory, gu1993smoothingalgorithm}, local density estimation approaches \citep{fan1993local, loader1996local, hjort1996locally, park2002local}, and local polynomial approximation methods \citep{cattaneo2020simple, cattaneo2019lpdensity}. For simplicity and intuitiveness, we use kernel density estimation with a reflection about the boundary point \citep{schuster1985incorporating, cline1991kernel, karunamuni2005generalized}. We compute $\hat{f}$ following \cite{schuster1985incorporating} and, as we did before, we acquire a $95\%$ confidence band $[\hat{f}_l, \hat{f}_u]$ from \cite{cheng2019nonparametric}. We also note that $\hat{f}$ is consistent for $f$, as shown by \cite{schuster1985incorporating}.
We consider two different situations as listed in \tabref{monotone_simulation_situations}. Each situation's corresponding $\theta$, $\theta_0$, and $\pi_0$, defined as in \eqref{nonincreasing_pi0_definition}, are also presented. We again generate a sample of size $n = 1000$ from each model, and repeat each setting $1000$ times. The mean values as well as standard deviations of our method and related methods are reported in \tabref{nonincreasing_simulation_numbers}.
In situation \textsf{M1}, our estimator achieves a smaller estimation error for $\pi_0$ than all other methods for their corresponding target, either $\theta$ or $\theta_0$, even with much less information on the background component. In situation \textsf{M2}, our method has a slightly higher error than $\hat{\theta}_0^{\rm{PSH}}$ and $\hat{\theta}^{\rm{E}}$, both of which have complete information on the background component, but a lower error than $\hat{\theta}_0^{\rm{PSC}}$ and $\hat{\theta}^{\rm{CJ}}$.
\begin{table}[htpb]
\centering\small
\caption{Simulated situations for the estimation of a monotone background component, together with the corresponding values of $\theta$, $\theta_0$, $\pi_0$, obtained numerically (and rounded at 3 decimals).}
\label{tab:monotone_simulation_situations}
\bigskip
\setlength{\tabcolsep}{0.05in}
\begin{tabular}{p{0.1\textwidth}
p{0.3\textwidth} p{0.1\textwidth} p{0.1\textwidth} p{0.1\textwidth}
}
\toprule
{\bf Model} & {\bf Distribution} & {\bf $\theta $} & {\bf $\theta_0 $}& {\bf $\pi_0$}\\
\midrule
\textsf{M1} & $0.85 \text{ }\mathcal{E} (1) + 0.15 \text{ } \mathcal{G} (50, 1/10) $ & 0.850 & 0.850 & 0.922 \\
\midrule
\textsf{M2} & $0.95 \text{ }\mathcal{E} (1) + 0.05 \text{ } \mathcal{G} (50, 1/10) $ & 0.950 & 0.950 & 0.993 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htpb]
\centering\small
\caption{A comparison of various methods for estimating a monotone background component in the situations of \tabref{monotone_simulation_situations}. As always, $\hat\pi_0$ is compared with $\pi_0$, the $\hat{\theta}_0^{\rm X}$ are compared with $\theta_0$, while the $\hat{\theta}^{\rm X}$ are compared with $\theta$.}
\label{tab:nonincreasing_simulation_numbers}
\bigskip
\setlength{\tabcolsep}{0.025in}
\begin{tabular}{p{0.1\textwidth}
p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth}
p{0.1\textwidth}
p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth}
p{0.075\textwidth}
p{0.075\textwidth}
}
\toprule
{\bf Model} & {\bf $\hat{\pi}_0$} & {\bf $\hat{\pi}_0^{\rm{L}}$} & {\bf $\hat{\pi}_0^{\rm{U}}$} & {\bf Null} & {\bf $\hat{\theta}_0^{\rm{PSC}}$} & {\bf $\hat{\theta}_0^{\rm{PSH}}$} & {\bf $\hat{\theta}_0^{\rm{PSB}}$} & {\bf $\hat{\theta}^{\rm{E}}$}
& {\bf $\hat{\theta}^{\rm{MR}}$}
& {\bf $\hat{\theta}^{\rm{CJ}}$}\\ \midrule
\textsf{M1} & 0.920 & 0.460 & 1 & $\mathcal{E}(1)$ & 0.843 & 0.854 & 0.889 & 0.841 & 0.866 & 0.533 \\
\text{ } & (0.020) & (0.053) & (0) & & (0.022) & (0.022) & (0.018) & (0.028) & (0.013) & (0.085)\\
\midrule
\textsf{M2} & 0.984 & 0.519 & 1 & $\mathcal{E}(1)$ & 0.936 & 0.953 & 0.984 & 0.954 & 0.964 & 0.842\\
\text{ } & (0.012) & (0.061) & (0) & & (0.022) & (0.020) & (0.014) & (0.028) & (0.009) & (0.083)\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Real data analysis}
\label{sec:monotone_real_data}
In this subsection we consider a real dataset where the background component could be assumed to be monotone nonincreasing. We look at the Coronavirus dataset \citep{coronavirus}, acquired from the \textit{COVID-19 Data Repository by the Center for Systems Science and Engineering (CSSE)} at Johns Hopkins University\footnote{~\url{https://github.com/CSSEGISandData/COVID-19}}. At the time of writing, new coronavirus cases in the USA are consistently decreasing, and this trend can be seen to begin on Jan 8, 2021, as shown by the New York Times interactive data\footnote{~\url{https://www.nytimes.com/interactive/2021/us/covid-cases.html}}. For each person infected on or after Jan 8, 2021, we count the number of days between Jan 8, 2021 and the day they were infected. We are interested in quantifying how monotone this downward trend in coronavirus infections is.
As we do not know the actual background distribution here, and Gaussian distributions are of no particular relevance, none of the other comparison methods in \secref{symmetric_experiments} are applicable; we therefore only provide our method's estimates in \tabref{monotone_coronavirus_table} and \figref{monotone_coronavirus_picture}. Numerically, it can be seen that the background monotone component accounts for around $96.7\%$ of the new cases arising on or after Jan 8, 2021.
\begin{table}[htpb]
\centering\small
\caption{Coronavirus dataset, where it is of interest to gauge how monotone the trend is starting on Jan 8, 2021.}
\label{tab:monotone_coronavirus_table}
\bigskip
\setlength{\tabcolsep}{0.1in}
\begin{tabular}{p{0.125\textwidth}
p{0.1\textwidth} p{0.1\textwidth} p{0.1\textwidth}
}
\toprule
{\bf Model} & {\bf $\hat{\pi}_0$} & {\bf $\hat{\pi}_0^{\rm{L}}$} & {\bf $\hat{\pi}_0^{\rm{U}}$} \\ \midrule
\textsf{Coronavirus} & 0.967 & 0.955 & 0.968 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[htpb]
\centering
\includegraphics[width=0.3\textwidth]{real_data_coronavirus_monotone.png}
\caption{Estimated background monotone component on the Coronavirus dataset: the black curve represents the fitted density; the center orange curve represents the computed $\hat{h}_0$; and the top and bottom orange curves represent the 95\% simultaneous confidence bands for $h_0$. (Due to the large amount of data here the confidence bands are very narrow.)}
\label{fig:monotone_coronavirus_picture}
\end{figure}
\section{Log-concave background component}
\label{sec:log-concave}
Our last emblematic setting is that of extracting a log-concave background component from a density. Log-concave densities have been widely studied in the literature \citep{walther2009inference, samworth2018recent}, and encompass a wide variety of distributions, including all Gaussian, Laplace, exponential, and uniform densities.
They have been extensively used in mixture models \citep{chang2007clustering, hu2016maximum, pu2020algorithm}.
Following \cite{patra}, for a density $f$ we define
\begin{equation}
\label{logconcave_pi0_definition}
\pi_0 := \sup \big\{\pi: \exists g \in \mathcal{C} \text{ s.t. } f - \pi g \geq 0 \text{ a.e.}\big\},
\end{equation}
where $\mathcal{C}$ is the class of log-concave densities.
Note that $\pi_0 \in [0,1]$, with $\pi_0 = 1$ if and only if $f$ itself is log-concave.
\begin{thm}
$\pi_0$ is the value of the following optimization problem:
\begin{equation}
\begin{aligned}
\label{logconcave_original_maximization}
\text{maximize}& \quad \int_{-\infty}^{\infty} h(x) dx \\
\text{over}& \quad \big\{ h: \mathbb{R} \xrightarrow{} \mathbb{R}_+,\ \text{log-concave},\ h \leq f\big\}.
\end{aligned}
\end{equation}
Indeed, this problem admits a solution, although it may not be unique.
\end{thm}
\begin{proof}
From the definition in \eqref{logconcave_pi0_definition}, it is clear that
\begin{equation}
\pi_0 = \sup \big\{\textstyle{\int h}: \text{$h$ is log-concave and $0 \le h \le f$ a.e.}\big\}.
\end{equation}
Thus we only need to show that the problem \eqref{logconcave_original_maximization} admits a solution --- and then show that it may not be unique to complete the proof.
Here the arguments are a bit more involved.
Let $(h_k)$ be a maximizing sequence for the problem \eqref{logconcave_original_maximization}, meaning that $h_k : \mathbb{R} \to \mathbb{R}_+$ is log-concave and satisfies $h_k \le f$, and that $q_k := \int h_k$ converges, as $k \to \infty$, to the value of the optimization problem \eqref{logconcave_original_maximization}, denoted $q_*$ henceforth. Note that $0 \le q_k \le 1$ since $0 \le h_k \le f$ and $\int f = 1$, implying that $0 \le q_* \le 1$. We only need to consider the case where $q_* > 0$, for when $q_* = 0$ the constant function $h \equiv 0$ is a solution. Without loss of generality, we assume that $q_k > 0$ for all $k$.
Each $h_k$ is log-concave, and from this we know the following: its support is an interval, which we denote $[a_k, b_k]$ with $-\infty \le a_k < b_k \le \infty$; $h_k$ is continuous and strictly positive on $(a_k, b_k)$; and $x \mapsto (\log h_k(y) - \log h_k(x))/(y-x)$ is non-increasing in $x \le y$ and $y \mapsto (\log h_k(y) - \log h_k(x))/(y-x)$ is non-increasing in $y \ge x$.
Extracting a subsequence if needed, we assume without loss of generality that $a_k \to a$ and $b_k \to b$ as $k \to \infty$, for some $-\infty \le a \le b \le \infty$.
Define $F(t) = \int_{-\infty}^t f(x) dx$ and $H_k(t) = \int_{-\infty}^t h_k(x) dx$.
We have that $H_k$ is non-decreasing and, if $s \le t$, that $H_k(t) - H_k(s) \le F(t) - F(s)$, because $h_k \le f$.
In particular, by Helly's selection theorem (equivalent to Prokhorov's theorem when dealing with distribution functions), we can extract a subsequence that converges pointwise. Without loss of generality, we assume that $(H_k)$ itself converges, and let $H$ denote its limit.
Note that $H$ is constant outside of $[a,b]$.
In fact, $H(x) = 0$ when $x < a$, because $x < a_k$ eventually, forcing $H_k(x) = 0$. For $x, x' > b$, we have that $x, x' > b_k$ for large enough $k$, implying that $H_k(x) = H_k(x')$, yielding $H(x) = H(x')$ by taking the limit $k \to \infty$.
Note also that, for all $s \le t$,
\[
H(t) - H(s)
= \lim_{k \to \infty} (H_k(t) - H_k(s))
\le F(t) - F(s).
\]
This implies that $H$ is absolutely continuous with derivative, denoted $h$, satisfying $h \le f$ a.e..
We claim that $h$ is a solution to \eqref{logconcave_original_maximization}.
We already know that $0 \le h \le f$ a.e. In addition, we also have $\int_{-\infty}^\infty h \ge q_*$.
To see this, fix $\varepsilon > 0$, and let $t$ be large enough that $\int_t^\infty f \le \varepsilon$.
Because $h_k \le f$, we have $H_k(t) = q_k - \int_t^\infty h_k \ge q_k - \varepsilon$, implying that $H(t) \ge q_* -\varepsilon$ by taking the limit as $k \to\infty$. Hence, $\int_{-\infty}^\infty h = H(\infty) \ge H(t) \ge q_* -\varepsilon$, and $\varepsilon > 0$ being otherwise arbitrary, we deduce that $\int_{-\infty}^\infty h \ge q_*$.
It thus remains to show that $h$ is log-concave.
We establish this claim by proving that, extracting a subsequence if needed, $h_k$ converges to $h$ a.e., and it is enough to do so on compact subintervals of $(a,b)$. Thus let $x_0$ and $\Delta > 0$ be such that $[x_0-4\Delta, x_0+4\Delta] \subset (a, b)$. Note that $H$ is strictly increasing on $(a, b)$, because each $H_k$ is strictly increasing on $(a_k, b_k)$ due to $h_k$ being log-concave.
Take any $x_0-\Delta \le x < y \le x_0+\Delta$ and let $\delta_k = (\log h_k(y) - \log h_k(x))/(y-x)$.
We assume, for example, that $\delta_k \ge 0$, and bound it from above.
Let $z_k \in [x_0-4\Delta, x_0-3\Delta]$ be such that $H_k(x_0-3\Delta) - H_k(x_0-4\Delta) = h_k(z_k) \Delta$, which exists by the mean-value theorem. Note that $h_k(z_k) \Delta \to \Delta_1 := H(x_0-3\Delta) - H(x_0-4\Delta)$, so that $h_k(z_k) \ge \Delta_2 := \Delta_1/(2\Delta) > 0$, eventually.
Now, for any $z$ in $[x_0-2\Delta, x_0-\Delta]$, due to $z_k < z < x < y$ and $\log h_k$ being concave,
\[
\frac{\log h_k(z) - \log h_k(z_k)}{z-z_k} \ge \frac{\log h_k(y) - \log h_k(x)}{y-x} = \delta_k,
\]
which implies
\[
h_k(z)
\ge h_k(z_k) \exp(\delta_k (z-z_k))
\ge \Delta_2 \exp(\delta_k \Delta).
\]
This being true for all such $z$, we have
\[
1
\ge \int_{-\infty}^\infty f(z) {\rm d} z
\ge \int_{-\infty}^\infty h_k(z) {\rm d} z
\ge \int_{x_0-2\Delta}^{x_0-\Delta} h_k(z) {\rm d} z
\ge \Delta \cdot \Delta_2 \exp(\delta_k \Delta),
\]
allowing us to derive $\delta_k \le M_1 := \Delta^{-1} \log(2/\Delta_1)$.
We can deal with the case where $\delta_k < 0$ by symmetry, obtaining that, for all $k$ sufficiently large and all $x_0-\Delta \le x < y \le x_0+\Delta$,
\[
\left|\frac{\log h_k(y) - \log h_k(x)}{y-x}\right| \le M_1.
\]
Let $u_k = h_k(x_0)$. Because $h_k$ is unimodal, either $h_k(x) \le u_k$ for all $x \le x_0$ or $h_k(x) \le u_k$ for all $x \ge x_0$. Extracting a subsequence if needed, and by symmetry, assume that the former is true for all $k$ large enough. Then
\begin{align*}
\Delta_1
&= H(x_0-3\Delta) - H(x_0-4\Delta) \\
&= \lim_{k \to \infty} (H_k(x_0-3\Delta) - H_k(x_0-4\Delta)) \\
&= \lim_{k \to \infty} \int_{x_0-4\Delta}^{x_0-3\Delta} h_k(x) {\rm d} x \\
&\le \liminf_{k \to \infty} \Delta u_k,
\end{align*}
so that $u_k \ge \Delta_2 > 0$, eventually.
Assuming so, we have $h_k(x_0) = u_k \ge \Delta_2$.
We also have $h_k(x_0) \le f(x_0)$, and together, $|\log h_k(x_0)| \le M_2 := |\log \Delta_2| \vee |\log f(x_0)|$.
With the triangle inequality, we thus have, for all $x \in [x_0-\Delta, x_0+\Delta]$,
\[
|\log h_k(x)| \le M_1 |x-x_0| + |\log h_k(x_0)| \le M_1 \Delta + M_2 =: M_3.
\]
The family of functions $(\log h_k)$ (starting at $k$ large enough) is thus uniformly bounded and equicontinuous on $[x_0-\Delta, x_0+\Delta]$, so that by the Arzel\`a--Ascoli theorem, we have that $(\log h_k)$ is precompact for the uniform convergence on that interval. Therefore, the same is true for $(h_k)$. Let $h_\infty$ be the uniform limit of a subsequence, and note that $h_\infty$ is continuous.
$h_\infty$ must also be a weak limit, since uniform convergence on a compact interval implies weak convergence on that interval.
Therefore $h_\infty = h$ a.e.~on that interval, since $(h_k)$ converges weakly to $h$.
Hence, all the uniform limits of $(h_k)$ must coincide with $h$ a.e., and since any such limit must be continuous, it means that they are the same. We conclude that, on the interval under consideration, $h$ is equal a.e.~to a continuous function which is the (only) uniform limit of $(h_k)$, and in particular, $(h_k)$ converges pointwise a.e.~to $h$ on that interval.
Thus, we have proved that the optimization problem \eqref{logconcave_original_maximization} has at least one solution. We now show that there may be multiple solutions. This is the case, for instance, when $f = \frac1m f_1 + \cdots + \frac1m f_m$ where each $f_j$ is a log-concave density and these densities have support sets that are pairwise disjoint. In that case, any of the weighted components, meaning any $\frac1m f_j$, is a solution to \eqref{logconcave_original_maximization}, and these are the only solutions. This comes from the fact that the support of a log-concave distribution is necessarily an interval.
\end{proof}
We have thus established that a density has at least one log-concave background component, and possibly multiple ones, corresponding to the solutions to \eqref{logconcave_original_maximization}. If $h$ is one such solution, then $\pi_0 = \int h$ and we may define the corresponding density as $g = h/\pi_0$ if $\pi_0 > 0$.
Then $f$ can be expressed as a mixture of the background density $g$ and another, unspecified, density $u$, as follows
\begin{equation}
f = \pi_0 g + (1-\pi_0) u.
\end{equation}
(We do not use the notation $h_0$ and $g_0$ here, since these may not be uniquely defined.)
The procedure is summarized in \tabref{logconcave_algorithm}.
An illustration of this decomposition is shown in \figref{illustration_logconcave}. Note that the density $u$ may have a non-trivial log-concave background component. This is the case, for example, if $f$ is the (nontrivial) mixture of two log-concave densities with disjoint support sets, in which case $u$ is one of these log-concave densities.
\begin{table}[htpb]
\centering
\caption{Log-concave background computation. (The input $d$ is used to initialize the function $v$ --- discretized as $\mathbf{v}$ below --- that is bounded from above by $\log f$. In our experiments, we chose $d = 0.02$.) }
\label{tab:logconcave_algorithm}
\bigskip
\setlength{\tabcolsep}{0.22in}
\begin{tabular}{ p{0.9\textwidth} }
\toprule
{\textbf{inputs}: equally spaced gridpoints $\mathbf{t} = \{t_1,t_2,\dots, t_k\}$, density $f$, a boolean $R$ indicating whether to use the Riemann integral approximation, initialization amount $d$} \\ \midrule
initialize $\mathbf{w} = (0, 0, \dots, 0)$ with length $k$ \newline
\textbf{for} $i=1,\dots,k$ \textbf{do} \newline
\hspace*{3mm} $\mathbf{w}[i] = \log(f(t_i))$ \newline
compute $\mathbf{A}$ and $\mathbf{b}$ as in \eqref{pointwise_approximation_a_b} \newline
initialize $\mathbf{v} = \mathbf{w} - (d, d, \dots, d)$ \newline
\textbf{if} $R$ = \textsf{true} \textbf{then} \newline
\hspace*{3mm} do optimization \eqref{riemann_linear_approximation} using SQP, and record the optimizer $\mathbf{v}$ and the maximum as $\pi_0$ \newline
\textbf{else} \newline
\hspace*{3mm} do optimization \eqref{actual_linear_approximation} using SQP, and record the optimizer $\mathbf{v}$ and the maximum as $\pi_0$ \\
\midrule
\textbf{return} $\mathbf{v}, \pi_0$
\\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[htpb]
\centering
\includegraphics[width=0.3\textwidth]{illustrationplot_logconcave_not_apx.png}
\caption{The density $f$ of the Gaussian mixture $0.85 \text{ } \mathcal{N}(0, 1) + 0.15 \text{ } \mathcal{N} (3, 1)$, in black, and its decomposition into $\pi_0 g$, in orange, and $(1 - \pi_0) u$, in blue. The orange curve is obtained by maximizing \eqref{not_apx_area}, but maximizing the Riemann integral approximation \eqref{riemann_linear_approximation} gives almost the same result (with the maximum difference between the two curves being $2 \times 10^{-14}$). Notice that $\pi_0 = 0.931$, as $\pi_0 g$ also takes a non-negligible weight from the smaller component $0.15 \text{ } \mathcal{N} (3, 1)$. }
\label{fig:illustration_logconcave}
\end{figure}
\subsection{Estimation and consistency}
In practice, based on a sample, we estimate $f$, resulting in $\hat f$, and simply obtain estimates by plug-in as we did before, this time via
\begin{equation}
\begin{aligned}
\label{logconcave_fitted_maximization}
\text{maximize}& \quad \int_{-\infty}^{\infty} h(x) dx \\
\text{over}& \quad \big\{ h: \mathbb{R} \xrightarrow{} \mathbb{R}_+,\ \text{log-concave},\ h \leq \hat f\big\}.
\end{aligned}
\end{equation}
At least formally, let $\hat h$ be a solution to \eqref{logconcave_fitted_maximization}.
We then define
\begin{align}
\hat{\pi}_0 = \int_{-\infty}^{\infty} \hat h(x) dx,
&&\hat{g}(x) = \frac{\hat h(x)}{\hat\pi_0}.
\end{align}
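For illustration only, the plug-in step can be carried out with a Gaussian kernel density estimate evaluated on the optimization grid. (This sketch is ours, written in Python; our actual experiments use cross-validated kernel density estimation in R. The mixture below matches the one in our illustrations, while the grid endpoints and variable names are arbitrary choices.)

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n = 1000
# draw a sample from the mixture 0.85 N(0,1) + 0.15 N(3,1) used in our illustrations
comp = rng.random(n) < 0.85
x = np.where(comp, rng.normal(0.0, 1.0, n), rng.normal(3.0, 1.0, n))

fhat = gaussian_kde(x)            # bandwidth by Scott's rule here, not cross-validation
t = np.linspace(-5.0, 7.0, 121)   # grid for the discretized problem
vals = fhat(t)
u = np.log(vals)                  # upper bounds u_j = log fhat(t_j)
# trapezoidal mass of fhat on the grid, close to 1 since the tails are negligible
mass = (t[1] - t[0]) * (vals.sum() - 0.5 * vals[0] - 0.5 * vals[-1])
```

The vector $u$ of log-density values is exactly what the discretized optimization problem described below consumes.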
Here, to avoid technicalities, we work under the assumption that the density estimator $\hat f = \hat f_n$ satisfies
\begin{equation}
\label{logconcave_consistency_condition}
\operatorname{\mathbb{E}}\left[\esssup_{|x| \le a} \frac{\max\{\hat f_n(x), f(x)\}}{\min\{\hat f_n(x), f(x)\}}\right] \to 1, \quad n \to \infty, \quad \forall a > 0.
\end{equation}
This condition is better suited to the case where the density $f$ approaches 0 only at infinity.
Also for the sake of simplicity, we only establish consistency for $\hat\pi_0$.
\begin{thm}
When \eqref{logconcave_consistency_condition} holds, $\hat\pi_0$ is consistent.
\end{thm}
\begin{proof}
Fix $\eta > 0$ and $a > 0$, and consider the event, denoted $\Omega$, that
\begin{equation}
\esssup_{|x| \le a} \frac{\max\{\hat f(x), f(x)\}}{\min\{\hat f(x), f(x)\}} \le 1+\eta.
\end{equation}
Because of \eqref{logconcave_consistency_condition}, $\Omega$ happens with probability tending to~1 as the sample size increases.
Let $\varepsilon = 1 - \int_{[-a,a]} f$, which is small when $a$ is large.
Assume that $\Omega$ holds. Let $h$ be a solution to \eqref{logconcave_original_maximization}, so that $\pi_0 = \int h$. Then define $\tilde h(x) = (1-\eta) h(x) \IND{|x| \le a}$ and note that
\[
\int \tilde h = (1-\eta) \int_{[-a,a]} h \ge (1-\eta)(\pi_0 - \varepsilon),
\]
so that $\int \tilde h$ is close to $\pi_0$ when $\eta$ is small and $a$ is large.
We also note that $\tilde h$ is log-concave and satisfies $0 \le \tilde h \le (1-\eta) f$ a.e. Under $\Omega$, we have $f \le (1+\eta) \hat f$, and so it is also the case that $\tilde h \le (1-\eta)(1+\eta)\hat f = (1-\eta^2)\hat f \le \hat f$. Then $\int \tilde h \le \hat \pi_0$, by definition of $\hat \pi_0$. Gathering everything, we obtain that $(1-\eta)(\pi_0 - \varepsilon) \le \hat \pi_0$. By letting $\eta \to 0$ and $a \to \infty$ so that $\varepsilon \to 0$, we have established that $\liminf\hat\pi_0 \ge \pi_0$ in probability. (The $\liminf$ is understood as the sample size increases.)
The reverse relation, meaning $\limsup \hat\pi_0 \le \pi_0$, can be derived similarly starting with a solution $\hat h$ to \eqref{logconcave_fitted_maximization}.
\end{proof}
\paragraph{Confidence interval}
Once again, if we have available a confidence band for $f$, we can deduce from that a confidence interval for $\pi_0$.
\begin{thm}
Suppose that we have a confidence band for $f$ as in \eqref{conf1}. Then
\begin{align}
\hat\pi_l \le \pi_0 \le \hat\pi_u
\end{align}
holds with probability at least $1-\alpha$, where $\hat\pi_l$ and $\hat\pi_u$ are the values of the optimization problem \eqref{logconcave_original_maximization} with $f$ replaced by $\hat f_l$ and $\hat f_u$, respectively.
\end{thm}
The proof is straightforward and thus omitted.
\subsection{Numerical method}
Unlike the previous sections, here the computation of our estimator(s) is non-trivial: indeed, after computing $\hat f$ with an off-the-shelf procedure, we need to solve the optimization problem \eqref{logconcave_fitted_maximization}. Thus, in this section, we discuss how to solve this optimization problem.
Although least concave majorants (or equivalently greatest convex minorants) have been considered extensively in the literature, for example in \citep{francuu2017new, jongbloed1998iterative}, the problem \eqref{logconcave_original_maximization} calls for a type of greatest concave minorant, and we were not able to find references in the literature that directly tackle this problem. We do want to mention \citep{gorokhovik2018minimal}, where a similar concept is discussed, but the definition is different from ours and no numerical procedure to solve the problem is provided. For lack of structure to exploit, we propose a direct discretization followed by an application of sequential quadratic programming (SQP), first proposed by \cite{wilson1963simplicial}. For more details on SQP, we point the reader to \citep{gill2012sequential} or \citep[Ch 18]{nocedal2006sequential}.
Going back to \eqref{logconcave_original_maximization}, where here $f$ plays the role of a generic density on the real line, the main idea is to restrict $v := \log h$ to be a continuous, concave, piecewise linear function. Once discretized, the integral admits a simple closed-form expression and the concavity constraint is transformed into a set of linear inequality constraints.
To set up the discretization, for $k \ge 1$ integer, let $t_{-k, k} < t_{-k+1, k} < \cdots < t_{k-1, k} < t_{k,k}$ be such that $t_{j,k} = - t_{-j,k}$ (symmetry) and $t_{j+1, k} - t_{j,k} = \delta_k$ (equispaced) for all $j$, with $\delta_k \to 0$ (dense) and $t_{k,k} \to \infty$ as $k\to\infty$ (spanning the real line). Suppose that $v$ is concave with $v \le \log f$, and to that function associate the triangular sequence $v_{j,k} := v(t_{j,k})$. Then, for each $k$,
\begin{align*}
-v_{j+1,k} + 2 v_{j,k} - v_{j-1,k}
&= - v(t_{j+1,k}) + 2 v(t_{j,k}) - v(t_{j-1,k}) \\
&= 2 \big(v(t_{j,k}) - \tfrac12 v(t_{j-1,k}) - \tfrac12 v(t_{j+1,k})\big)
\ge 0, \quad \text{for all } j,
\end{align*}
by concavity of $v$, since $t_{j,k} = \tfrac12 (t_{j-1,k} + t_{j+1,k})$.
In addition, $v_{j,k} = v(t_{j,k}) \le \log f(t_{j,k}) =: u_{j,k}$ (which are given).
Instead of working directly with a generic concave function $v$, we work with those that are piecewise linear, as they are uniquely determined by their values at the grid points once we further restrict them to equal $-\infty$ on $(-\infty, t_{-k,k}) \cup (t_{k,k},+\infty)$. Effectively, at stage $k$, we replace in \eqref{logconcave_original_maximization} the class $\mathcal{C}$ with the class $\mathcal{C}_k$ of functions $h$ such that $v = \log h$ is concave, linear on each interval $[t_{j,k}, t_{j+1,k}]$, equal to $-\infty$ for $x < t_{-k,k}$ or $x > t_{k,k}$, and satisfies $v(t_{j,k}) \le \log f(t_{j,k})$ for all $j$.
This leads us to the following optimization problem, which instead of being over a function space is over a Euclidean space:
\begin{equation}
\label{actual_linear_approximation}
\begin{split}
\text{maximize} \quad & \Lambda(\mathbf{v}) \\
\text{over} \quad & \mathbf{v} = [v_{-k,k}, \dots, v_{k,k}]^\top \quad \text{such that} \quad
\mathbf{A} \mathbf{v} \geq \mathbf{b},
\end{split}
\end{equation}
where
\begin{equation}
\label{not_apx_area}
\Lambda(v_{-k}, \dots, v_k) := \delta_k \sum_{j=-k}^{k-1} \lambda(v_j, v_{j+1}),
\end{equation}
\begin{equation}
\lambda(x, y) := \mathds{1} \{x \neq y\} \frac{\exp(x) - \exp(y)}{x-y} + \mathds{1} \{x=y\} \exp(x),
\end{equation}
and where
\begin{equation}
\label{pointwise_approximation_a_b}
\mathbf{A} = \begin{bmatrix} -1 & 2 & -1& 0 & \dots & 0 & 0 & 0\\ 0 & -1 & 2 & -1 & \dots &0 & 0 & 0\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots& \vdots& \vdots \\ 0 & 0 & 0 & 0 &\dots & -1 &2 & -1 \\ -1 & 0 & 0 & 0 &\dots & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 &\dots & 0 & 0 & 0 \\ \vdots&\vdots&\vdots&\vdots&\vdots& \vdots&\vdots& \vdots \\ 0 & 0 & 0 & 0 &\dots & 0 & 0 & -1
\end{bmatrix},
\qquad \mathbf{b} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ -u_{-k,k} \\ -u_{-k+1,k} \\ \vdots \\ -u_{k,k} \end{bmatrix}\, .
\end{equation}
As we are not aware of any method that could solve \eqref{actual_linear_approximation} exactly, we will use sequential quadratic programming (SQP). In our implementation we use the R package {\sf nloptr}\,\footnote{~\url{https://cran.r-project.org/web/packages/nloptr/index.html}} based on an original implementation of \cite{kraft1988software}.
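For readers who prefer a self-contained illustration, the following Python sketch assembles the discretized problem \eqref{actual_linear_approximation} and solves it with SciPy's SLSQP routine (an SQP variant). This is not our R/\textsf{nloptr} implementation; the test density (a standard Gaussian, for which $\pi_0 = 1$), the grid settings, and all variable names are arbitrary choices made for the example.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def lam(x, y):
    # exact integral over one grid cell of exp(linear interpolation):
    # (e^x - e^y)/(x - y), with limiting value e^x when x == y
    d = x - y
    near = np.abs(d) < 1e-12
    safe = np.where(near, 1.0, d)
    return np.where(near, np.exp(x), (np.exp(x) - np.exp(y)) / safe)

def Lambda(v, delta):
    # objective: total area under exp(piecewise-linear v)
    return delta * np.sum(lam(v[:-1], v[1:]))

# equally spaced grid and upper bounds u_j = log f(t_j); here f is the
# standard Gaussian density, which is itself log-concave, so pi_0 = 1
t = np.linspace(-4.0, 4.0, 41)
delta = t[1] - t[0]
u = norm.logpdf(t)
m = len(t)

# concavity rows: -v_{j-1} + 2 v_j - v_{j+1} >= 0;
# bound rows: -v_j >= -u_j, i.e. v_j <= u_j
conc = np.zeros((m - 2, m))
for j in range(m - 2):
    conc[j, j:j + 3] = [-1.0, 2.0, -1.0]
A = np.vstack([conc, -np.eye(m)])
b = np.concatenate([np.zeros(m - 2), -u])

v0 = u - 0.02  # feasible start: log f shifted down by d = 0.02, as in our algorithm
res = minimize(lambda v: -Lambda(v, delta), v0, method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda v: A @ v - b}],
               options={"maxiter": 500})
pi0_hat = -res.fun  # should be close to 1, up to discretization and truncation error
```

Since the log-density of the Gaussian is concave, the discretized optimum essentially recovers $\log f$ itself, and the resulting value is close to (but slightly below) 1 because of the grid truncation at $\pm 4$ and the piecewise-linear approximation.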
\begin{thm}
Assume that $f$ is Lipschitz and locally bounded away from~0.
Then the value of the discretized optimization problem \eqref{actual_linear_approximation} converges, as $k \to \infty$, to the value of the original optimization problem \eqref{logconcave_original_maximization}.
\end{thm}
The assumptions made on $f$ are really for convenience --- to expedite the proof of the result while still including interesting situations --- and we do expect that the result holds more broadly. We also note that, in our workflow, the problem \eqref{actual_linear_approximation} is solved for $\hat f$, and that $\hat f$ satisfies these conditions (remember that the sample size is held fixed here) if it is a kernel density estimator based on a smooth kernel function supported on the entire real line like the Gaussian kernel.
\begin{proof}
Let $h$ be a solution to \eqref{logconcave_original_maximization} so that $\int h = \pi_0$, where $\pi_0$ denotes the value of \eqref{logconcave_original_maximization}.
Define $v = \log h$ and let $v_{j,k} = v(t_{j,k})$. As we explained above, this makes $\mathbf{v}_k := (v_{-k,k}, \dots, v_{k,k})$ feasible for \eqref{actual_linear_approximation}. Therefore $\Lambda(\mathbf{v}_k) \le \pi_{0,k}$, where $\pi_{0,k}$ denotes the value of \eqref{actual_linear_approximation}.
On the other hand, let $\tilde v_k$ denote the piecewise linear approximation to $v$ on the grid, meaning that $\tilde v_k(x) = -\infty$ if $x < t_{-k,k}$ or $x > t_{k,k}$, $\tilde v_k(t_{j,k}) = v(t_{j,k})$, and $\tilde v_k$ is linear on $[t_{j,k}, t_{j+1,k}]$ for all $j$; set $\tilde h_k := \exp(\tilde v_k)$. We then have
\begin{equation}
\Lambda(\mathbf{v}_k)
= \int \tilde h_k
\xrightarrow{k \to \infty} \int h,
\end{equation}
where the convergence is justified, for example, by the fact that $\tilde h_k \to h$ pointwise and $0 \le \tilde h_k \le h$ (because, by concavity of $v$, $v \ge \tilde v_k$), so that the dominated convergence theorem applies.
From this we get that
\begin{equation}
\liminf_{k\to\infty} \pi_{0,k} \ge \pi_0.
\end{equation}
In the other direction, let $\mathbf{v}_k$ be a solution of \eqref{actual_linear_approximation}, so that $\Lambda(\mathbf{v}_k) = \pi_{0,k}$. Let $\tilde v_k$ denote the linear interpolation of $\mathbf{v}_k$ on the grid $(t_{-k,k}, \dots, t_{k,k})$, defined exactly as done above, and let $\tilde h_k = \exp(\tilde v_k)$. We have that $\tilde h_k \le f$ at the grid points, but not necessarily elsewhere. To work in that direction, fix $a > 0$, taken arbitrarily large in what follows. Because of our assumptions on $f$, we have that $u := \log f$ is Lipschitz on $[-a,a]$, say with Lipschitz constant $L$.
In particular, with $\tilde u_k$ denoting the linear approximation of $u$ based on the grid, we have
\begin{equation}
|u(x) - \tilde u_k(x)|
\le L \delta_k =: \eta_k.
\end{equation}
Define $\bar v_k(x) = \tilde v_k(x) -\eta_k$ if $x \in [-a,a]$ and $\bar v_k(x) = -\infty$ otherwise. Note that $\bar v_k$ is also concave and piecewise linear. Moreover, because $\tilde h_k \le f$ at the grid points, we have $\tilde v_k \le \tilde u_k$ there, and hence everywhere (both functions being linear between consecutive grid points), so that
\begin{equation}
\bar v_k(x)
= \tilde v_k(x) -\eta_k
\le \tilde u_k(x) -\eta_k
\le u(x), \quad \forall x \in [-a,a].
\end{equation}
In particular, $\bar h_k := \exp(\bar v_k)$ is feasible for \eqref{logconcave_original_maximization}, implying that $\int \bar h_k \le \pi_0$.
On the other hand, we have
\begin{equation}
\int \bar h_k
= \int \exp(\bar v_k)
= \exp(-\eta_k) \int_{[-a,a]} \exp(\tilde v_k)
= \exp(-\eta_k) \int_{[-a,a]} \tilde h_k,
\end{equation}
and so, because $\tilde h_k \le \tilde f_k := \exp(\tilde u_k)$,
\begin{align*}
\pi_{0,k}
= \Lambda(\mathbf{v}_k)
= \int \tilde h_k
&= \int_{[-a,a]} \tilde h_k + \int_{[-a,a]^\mathsf{c}} \tilde h_k \\
&= \exp(\eta_k) \int \bar h_k + \int_{[-a,a]^\mathsf{c}} \tilde h_k \\
&\le \exp(\eta_k) \pi_0 + \int_{[-a,a]^\mathsf{c}} \tilde f_k.
\end{align*}
Given $\varepsilon > 0$ arbitrarily small, choose $a$ large enough that $\int_{[-a,a]^\mathsf{c}} f \le \varepsilon$. Since
\begin{equation}
\int_{[-a,a]^\mathsf{c}} \tilde f_k \xrightarrow{k \to\infty} \int_{[-a,a]^\mathsf{c}} f,
\end{equation}
for $k$ large enough we have $\int_{[-a,a]^\mathsf{c}} \tilde f_k \le 2\varepsilon$, implying that $\pi_{0,k} \le \exp(\eta_k) \pi_0 + 2\varepsilon$. Using the fact that $\exp(\eta_k) \to 1$ since $\eta_k = L \delta_k \to 0$, and that $\varepsilon > 0$ is arbitrary, we get that
\begin{equation}
\limsup_{k\to\infty} \pi_{0,k}
\le \pi_0,
\end{equation}
concluding the proof.
\end{proof}
We mention that besides the discretization \eqref{actual_linear_approximation}, we also considered a more straightforward discretization of the integral, effectively replacing $\Lambda$ in \eqref{actual_linear_approximation} with $\Lambda_0$ defined as
\begin{equation}
\label{riemann_linear_approximation}
\Lambda_0(v_{-k}, \dots, v_k) := \delta_k \left(\tfrac{1}{2} \exp(v_{-k}) + \tfrac{1}{2}\exp(v_k) + \sum_{j=-k+1}^{k-1} \exp(v_j)\right).
\end{equation}
The outputs returned by these two discretizations, \eqref{actual_linear_approximation} and \eqref{riemann_linear_approximation}, were very similar in our numerical experiments.
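As a quick sanity check of this closeness (our own illustration, not taken from our experimental code), one can evaluate both objectives on the log-density of a standard Gaussian and verify that they agree to a few decimal places on a moderately fine grid:

```python
import numpy as np

def lam(x, y):
    # exact cell integral of exp(linear interpolation); see the definition of lambda
    d = x - y
    near = np.abs(d) < 1e-12
    safe = np.where(near, 1.0, d)
    return np.where(near, np.exp(x), (np.exp(x) - np.exp(y)) / safe)

t = np.linspace(-4.0, 4.0, 81)
delta = t[1] - t[0]
v = -0.5 * t**2 - 0.5 * np.log(2.0 * np.pi)  # log-density of N(0,1) on the grid

# exact-cell objective Lambda and trapezoidal (Riemann-type) objective Lambda_0
Lam = delta * np.sum(lam(v[:-1], v[1:]))
Lam0 = delta * (0.5 * np.exp(v[0]) + 0.5 * np.exp(v[-1]) + np.sum(np.exp(v[1:-1])))
```

By convexity of the exponential, the trapezoidal value dominates the exact cell integrals, and the gap is of order $\delta_k^2$, which is why the two discretizations behave so similarly in practice.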
\subsection{Numerical experiments}
\label{sec:logconcave_simulation_subsection}
In this subsection we consider experiments where the background component could be assumed to be log-concave. The density fitting process and confidence band acquisition are exactly the same as in \secref{symmetric_experiments}. We first consider four different situations as listed in \tabref{logcon_simulation_situations}. Although the mixture distributions are identical to those in \tabref{symmetric_simulation_situations} for the symmetric case, we need to point out that $\pi_0$ is different here, as $\pi_0$ here corresponds to the largest possible log-concave component as defined in \eqref{logconcave_pi0_definition}. We again generate a sample of size $n = 1000$ from each model, and repeat each setting $1000$ times.
We note that the output of our algorithm depends heavily on the estimated density $\hat{f}$. When the bandwidth selected by cross-validation yields a density estimate that oscillates with high frequency, the estimated largest log-concave component is very likely to be smaller than the correct value. For an illustrative situation, see \figref{illustration_logconcave_vascillation}. In the event that such a situation happens, we recommend that the user look at a plot of $\hat{f}$ before applying our procedure, with the possibility of selecting a larger bandwidth. This is what we did, for example, for the Carina and Leukemia datasets in \figref{logconcave_bandwidth_adjustment}. In the simulations, to avoid the effect of this issue, we report the median value instead of the mean. These values are reported in \tabref{logcon_simulation_numbers}.
As can be seen, even with only the assumption of log-concavity, our method is accurate in estimating $\pi_0$, with estimation errors ranging from $0.001$ to $0.007$. Our method frequently achieves smaller error in estimating $\pi_0$ than comparison methods in estimating $\theta_0$ and $\theta$, and does so with less information on the background component. For situations where the background component is specified incorrectly, we invite the reader to \secref{incorrect_background_subsection}.
\begin{figure}[htpb]
\centering
\includegraphics[scale=0.35]{illustration_vascillation_logconcave.png}
\caption{An illustrative situation that occurred in the course of our simulations, where a high frequency of oscillation of $\hat{f}$ results in a significantly lower estimate of $\pi_0$: a Gaussian mixture density $0.85 \text{ } \mathcal{N}(0, 1) + 0.15 \text{ } \mathcal{N} (3, 1)$, in black, and corresponding largest log-concave component
$h$, in red, with the fitted density $\hat{f}$, in orange, and fitted largest log-concave component $\hat{h}$, in blue. Here, as before, $\hat{f}$ is obtained by kernel density estimation with bandwidth chosen by cross-validation. The result based on this estimate is $\hat{\pi}_0 = 0.807$, while the actual value is $\pi_0 = 0.931$.}
\label{fig:illustration_logconcave_vascillation}
\end{figure}
\begin{rem}
We note that here the SQP algorithm as implemented in the R package \textsf{nloptr} sometimes returns $\hat{h} = 0$ as the largest log-concave component, due to some parts of the estimated density $\hat{f}$ being $0$. This situation happens occasionally when computing the lower confidence bound, and it is of course incorrect in situations like those in \tabref{logcon_simulation_situations}. When it occurs, we rerun the algorithm on the largest interval of non-zero $\hat{f}$ values and report the largest log-concave component obtained on that interval.
\end{rem}
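The restart step described in the remark can be implemented with a small helper that locates the longest contiguous stretch of grid points on which $\hat{f}$ is strictly positive. The routine below is an illustrative sketch of ours (the remark does not specify this subroutine); the optimization is then rerun on the grid points between the returned indices.

```python
import numpy as np

def largest_positive_run(fhat):
    """Return (start, stop), half-open indices of the longest contiguous
    run of strictly positive values in the array fhat."""
    best = (0, 0)
    start = None
    for i, val in enumerate(fhat):
        if val > 0:
            if start is None:
                start = i
            if i + 1 - start > best[1] - best[0]:
                best = (start, i + 1)
        else:
            start = None
    return best

# example: the longest positive stretch occupies indices 4, 5, 6
fhat = np.array([0.0, 0.2, 0.1, 0.0, 0.3, 0.4, 0.1, 0.0])
lo, hi = largest_positive_run(fhat)
```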
\begin{table}[htpb]
\centering\small
\caption{Log-concave simulation situations of Gaussian mixtures, as well as values of $\theta$, $\theta_0$, and $\pi_0$, obtained through numerical optimization.}
\label{tab:logcon_simulation_situations}
\bigskip
\setlength{\tabcolsep}{0.03in}
\begin{tabular}{p{0.1\textwidth}
p{0.55\textwidth} p{0.1\textwidth} p{0.1\textwidth} p{0.1\textwidth}
}
\toprule
{\bf Model} & {\bf Distribution} & {\bf $\theta $} & {\bf $\theta_0 $} & {\bf $\pi_0$}\\
\midrule
\textsf{L1} & $0.85 \text{ }\mathcal{N} (0, 1) + 0.15 \text{ }\mathcal{N}(3, 1) $ & 0.850 & 0.850 & 0.931 \\
\midrule
\textsf{L2} & $0.95 \text{ }\mathcal{N} (0, 1) + 0.05 \text{ }\mathcal{N}(3, 1) $ & 0.950 & 0.950 & 0.981 \\
\midrule
\textsf{L3} & $0.85 \text{ }\mathcal{N} (0, 1) + 0.1 \text{ }\mathcal{N}(2.5, 0.75) + 0.05 \text{ }\mathcal{N}(-2.5, 0.75) $ & 0.850 & 0.851 & 0.975\\
\midrule
\textsf{L4} & $0.85 \text{ }\mathcal{N} (0, 1) + 0.1 \text{ }\mathcal{N}(2.5, 0.75) + 0.05 \text{ }\mathcal{N}(5, 0.75) $ & 0.850 & 0.850 & 0.946\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htpb]
\centering\small
\caption{A comparison of various methods for estimating a log-concave background component in the situations of \tabref{logcon_simulation_situations}. Here we report the median values instead of the mean values (and also report the standard deviations, as before). As always, $\hat{\pi}_0$ should be compared with $\pi_0$, $\hat{\theta}_0^X$ should be compared with $\theta_0$, and $\hat{\theta}^X$ should be compared with $\theta$.}
\label{tab:logcon_simulation_numbers}
\bigskip
\setlength{\tabcolsep}{0.025in}
\begin{tabular}{p{0.1\textwidth}
p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth}
p{0.1\textwidth}
p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth}
p{0.075\textwidth} p{0.075\textwidth}
}
\toprule
{\bf Model} & {\bf $\hat{\pi}_0$} & {\bf $\hat{\pi}_0^{\rm{L}}$} & {\bf $\hat{\pi}_0^{\rm{U}}$} & {\bf Null} & {\bf $\hat{\theta}_0^{\rm{PSC}}$} & {\bf $\hat{\theta}_0^{\rm{PSH}}$} & {\bf $\hat{\theta}_0^{\rm{PSB}}$} & {\bf $\hat{\theta}^{\rm{E}}$} & {\bf $\hat{\theta}^{\rm{MR}}$} & {\bf $\hat{\theta}^{\rm{CJ}}$}
\\
\midrule
\textsf{L1} & 0.932 & 0.596 & 1 & $\mathcal{N} (0,1)$ & 0.850 & 0.858 & 0.894 & 0.861 & 0.890 & 0.888 \\
& (0.034) & (0.074) & (0) & & (0.024) & (0.023) & (0.018) & (0.023) & (0.014) & (0.085) \\
\midrule
\textsf{L2} & 0.974 & 0.662 & 1 & $\mathcal{N}(0,1)$ & 0.942 & 0.958 & 0.988 & 0.953 & 0.972 & 0.964 \\
\text{ } & (0.028) & (0.080) & (0) & & (0.023) & (0.022) & (0.015) & (0.026) & (0.008) & (0.082) \\
\midrule
\textsf{L3} & 0.969 & 0.611 & 1 & $\mathcal{N} (0,1)$ & 0.864 & 0.948 & 0.936 & 0.856 & 0.896 & 0.705
\\
& (0.031) & (0.077) & (0) & & (0.023) & (0.034) & (0.017) & (0.024) & (0.015) & (0.085)
\\
\midrule
\textsf{L4} & 0.942 & 0.632 & 1 & $\mathcal{N}(0,1)$ & 0.848 & 0.857 & 0.893 & 0.849 & 0.890 & 0.712
\\
\text{ } & (0.033) & (0.074) & (0) & & (0.023) & (0.024) & (0.018) & (0.024) & (0.014) & (0.088) \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Real data analysis}
In this subsection we examine real datasets of two-component mixtures in which the background component can plausibly be assumed to be log-concave. We first consider the six real datasets presented in \secref{symmetric_read_data}, but this time search for a log-concave background component. When the background is known, the numerical results for the Prostate and Carina datasets can be found in \tabref{logconcave_realdata_known_background}, \figref{logconcave_prostate_picture} and \figref{logconcave_carina_picture}. When the background is unknown, the numerical results for the HIV, Leukemia, Parkinson and Police datasets can be found in \tabref{logconcave_realdata_unknown_background}, \figref{logconcave_hiv_picture}, \figref{logconcave_leukemia_picture}, \figref{logconcave_parkinson_picture} and \figref{logconcave_police_picture}.
In addition to the above six datasets, we also include the Old Faithful Geyser dataset \citep{azzalini1990look}. This dataset consists of $272$ waiting times, in minutes, between eruptions of the Old Faithful Geyser in Yellowstone National Park, Wyoming, USA. We attempt to find the largest log-concave component of the waiting times. The results are summarized in the first row of \tabref{logconcave_realdata_unknown_background}, \figref{logconcave_geyser_with_band} and \figref{logconcave_geyser_without_band}. We note from this example that the curve $\hat{h}_u$, acquired from $\hat{f}_u$ and leading to the computation of $\hat{\pi}_0^{\rm U}$, is not an upper confidence bound for $h_0$, as shown in \figref{logconcave_geyser_with_band}.
\begin{rem}
We observe from these real datasets that, compared with the symmetric background assumption of \secref{symmetric_read_data}, the largest log-concave background component usually has a higher weight than the largest symmetric background component.
\end{rem}
\begin{table}[htpb]
\centering\small
\caption{Real datasets where the background log-concave component is known. (Note that we work with $z$ values here instead of $p$-values, so our result on the Prostate dataset is slightly different from that in \cite{patra}.)}
\label{tab:logconcave_realdata_known_background}
\bigskip
\setlength{\tabcolsep}{0.03in}
\begin{tabular}{
p{0.1\textwidth}
p{0.07\textwidth} p{0.07\textwidth} p{0.05\textwidth}
p{0.08\textwidth}
p{0.07\textwidth} p{0.07\textwidth} p{0.07\textwidth} p{0.07\textwidth}
p{0.07\textwidth} p{0.07\textwidth}
}
\toprule
{\bf Model} & {\bf $\hat{\pi}_0$} & {\bf $\hat{\pi}_0^{\rm{L}}$} & {\bf $\hat{\pi}_0^{\rm{U}}$} & {\bf Null} & {\bf $\hat{\theta}_0^{\rm{PSC}}$} & {\bf $\hat{\theta}_0^{\rm{PSH}}$} & {\bf $\hat{\theta}_0^{\rm{PSB}}$} & {\bf $\hat{\theta}^{\rm{E}}$} & {\bf $\hat{\theta}^{\rm{MR}}$} & {\bf $\hat{\theta}^{\rm{CJ}}$} \\
\midrule
Prostate & 0.994 & 0.809 & 1 & $\mathcal{N} (0,1)$ & 0.931 & 0.941 & 0.975 & 0.931 & 0.956 & 0.867 \\
\midrule
Carina & 0.600 & 0.242 & 1 & \textsf{bgstars} & 0.636 & 0.645 & 0.677 & 0.951 & 0.664 & 0.206 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htpb]
\centering\small
\caption{Real datasets where the background log-concave distribution is unknown. (Note that Efron's method will not run on the Geyser dataset.)}
\label{tab:logconcave_realdata_unknown_background}
\bigskip
\setlength{\tabcolsep}{0.05in}
\begin{tabular}{
p{0.15\textwidth}
p{0.1\textwidth} p{0.1\textwidth} p{0.1\textwidth}
p{0.1\textwidth} p{0.1\textwidth}
}
\toprule
{\bf Model} & {\bf $\hat{\pi}_0$} & {\bf $\hat{\pi}_0^{\rm{L}}$} & {\bf $\hat{\pi}_0^{\rm{U}}$} & {\bf $\hat{\theta}^{\rm{E}}$ } & {\bf $\hat{\theta}^{\rm{CJ}}$ }\\
\midrule
Geyser & 0.693 & 0.287 & 1 & NA & 1 \\
\midrule
HIV & 0.984 & 0.804 & 1 & 0.940 & 0.926 \\
\midrule
Leukemia & 0.981 & 0.695 & 1 & 0.911 & 0.820 \\
\midrule
Parkinson & 1 & 0.605 & 1 & 0.998 & 0.993 \\
\midrule
Police & 0.997 & 0.765 & 1 & 0.985 & 0.978 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[htpb]
\centering
\subfigure[Prostate dataset]{\label{fig:logconcave_prostate_picture}\includegraphics[scale=0.35]{real_data_prostate_logconcave.png}} \qquad
\centering
\subfigure[Carina dataset]{\label{fig:logconcave_carina_picture}\includegraphics[scale=0.35]{real_data_carina_logconcave.png}}
\caption{Estimated background log-concave component on the Prostate ($z$ values) and Carina (radial velocity) datasets: the black curve represents the fitted density; the blue curve represents the computed $\hat{h}$, one of the largest log-concave components. Due to the unusually high frequency of oscillation of $\hat{f}$ in the Carina dataset, we consider increasing the bandwidth from the one acquired by cross-validation; the corresponding result is shown in \figref{logconcave_carina_adj_bandwidth}.}
\end{figure}
\begin{figure}[htpb]
\centering
\subfigure[HIV dataset]{\label{fig:logconcave_hiv_picture}\includegraphics[scale=0.35]{real_data_hiv_logconcave.png}} \qquad
\centering
\subfigure[Leukemia dataset]{\label{fig:logconcave_leukemia_picture}\includegraphics[scale=0.35]{real_data_leukemia_logconcave.png}} \\
\centering
\subfigure[Parkinson dataset]{\label{fig:logconcave_parkinson_picture}\includegraphics[scale=0.35]{real_data_parkinson_logconcave.png}} \qquad
\centering
\subfigure[Police dataset]{\label{fig:logconcave_police_picture}\includegraphics[scale=0.35]{real_data_police_logconcave.png}}
\caption{Estimated background log-concave component on the HIV ($z$ values), Leukemia ($z$ values), Parkinson ($z$ values), and Police ($z$ scores) datasets: the black curve represents the fitted density; the blue curve represents the computed $\hat{h}$, one of the largest log-concave components. Due to the high frequency of oscillation of $\hat{f}$ in the Leukemia dataset, we consider increasing the bandwidth from the one acquired by cross-validation; the corresponding result is shown in \figref{logconcave_leukemia_adj_bandwidth}. }
\label{fig:logconcave_other_real_data}
\end{figure}
\begin{figure}[htpb]
\centering
\subfigure[Carina dataset]
{
\label{fig:logconcave_carina_adj_bandwidth}
\includegraphics[scale=0.35]{real_data_carina_logconcave_longer_bandwidth.png}} \qquad
\centering
\subfigure[Leukemia dataset]{
\label{fig:logconcave_leukemia_adj_bandwidth}
\includegraphics[scale=0.35]{real_data_leukemia_logconcave_larger_bandwidth.png}}
\caption{Carina (radial velocity) and Leukemia ($z$ values) datasets with the kernel density bandwidth chosen `by hand' instead of by cross-validation: the black curve represents the fitted density with increased bandwidth; the blue curve represents the computed $\hat{h}$. For the Carina dataset, the bandwidth was increased from $3.085$ to $6$, and $\hat{\pi}_0$ changed from $0.550$ to $0.600$. For the Leukemia dataset, the bandwidth was increased from $0.124$ to $0.25$, and $\hat{\pi}_0$ changed from $0.939$ to $0.981$.}
\label{fig:logconcave_bandwidth_adjustment}
\end{figure}
\begin{figure}[htpb]
\centering
\subfigure[Geyser dataset with $\hat{h}_l$ and $\hat{h}_u$]{\label{fig:logconcave_geyser_with_band}\includegraphics[scale=0.35]{real_data_geyser_logconcave_withband.png}} \qquad
\centering
\subfigure[Geyser dataset with only $\hat{h}$ ]{\label{fig:logconcave_geyser_without_band}\includegraphics[scale=0.35]{real_data_geyser_logconcave_noband.png}}
\caption{Estimated background log-concave component on the Geyser (duration) dataset: the black curve represents the fitted density; the blue curve represents the computed $\hat{h}$; the gray curves (left) represent $\hat{f}_l$ and $\hat{f}_u$; the light blue curves (left) represent $\hat{h}_l$ and $\hat{h}_u$. Note from the left plot that $\hat{h}_u$ is only used to compute $\hat{\pi}_0^{\rm U}$, and is not an upper bound for $h$. }
\end{figure}
\section{Conclusion and discussion}
\label{sec:discussion}
In this paper, we extend the approach of \cite{patra} to settings where the background component of interest is assumed to belong to one of three emblematic classes: symmetric, monotone, and log-concave densities. In each setting, we derive estimators of both the proportion and the density of the background component, establish their consistency, and provide confidence intervals/bands. However, the important situation of an incorrectly specified background distribution, as well as other extensions, remain unaddressed, and we therefore discuss them in this section.
\subsection{Incorrect background specification}
\label{sec:incorrect_background_subsection}
As mentioned in \secref{symmetric_experiments} and \secref{logconcave_simulation_subsection}, our method requires much less information than comparable methods, and therefore is much less prone to misspecification of the background component. In this subsection, we give an experiment illustrating this point. We consider the mixture model:
$
0.85 \text{ }\mathcal{T}_6 + 0.15 \text{ }\mathcal{N}(3, 1).
$
Instead of the correct null distribution $\mathcal{T}_6$, we take the null to be $\mathcal{N} (0,1)$. This could happen in multiple testing settings, for example in the HIV dataset of \secref{symmetric_read_data}. We consider both a symmetric and a log-concave background for this model in \tabref{incorrect_background_situations}, and report the fitted values of our method and of the comparison methods in \tabref{incorrect_background_results_table}. As can be seen, when estimating $\pi_0$, our estimator achieves an error of $0.002$ under the symmetric background assumption and $0.004$ under the log-concave background assumption, less than any other method in the comparison that estimates $\theta_0$ or $\theta$. The heuristic estimator of \cite{patra} has slightly higher error than our method, while the constant estimator of \cite{patra} and the estimators of \cite{efron2007size} and \cite{cai2010optimal} have large errors. The upper confidence bound of \cite{meinshausen2006estimating} also becomes incorrect.
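The value of $\pi_0$ for the symmetric background in model \textsf{S5} can be recomputed directly as a sanity check: it is the integral of $h_0(u) = \min\{f(c+u), f(c-u)\}$, maximized over the center $c$. The following sketch (ours, not the authors' code; the grid ranges are arbitrary choices) performs this computation with hand-coded densities:

```python
# Sketch (ours): recompute pi_0 for model S5, f = 0.85 t_6 + 0.15 N(3, 1),
# under the symmetric background assumption:
# pi_0(c) = integral of min(f(c + u), f(c - u)) du, maximized over c.
import math
import numpy as np

NU = 6
C_T = math.gamma((NU + 1) / 2) / (math.sqrt(NU * math.pi) * math.gamma(NU / 2))

def f(x):
    t6 = C_T * (1.0 + x * x / NU) ** (-(NU + 1) / 2)               # t_6 density
    nrm = np.exp(-0.5 * (x - 3.0) ** 2) / math.sqrt(2 * math.pi)   # N(3, 1) density
    return 0.85 * t6 + 0.15 * nrm

u = np.linspace(-50.0, 50.0, 60001)   # wide window: t_6 has polynomial tails
du = u[1] - u[0]

def pi0_at_center(c):
    return np.minimum(f(c + u), f(c - u)).sum() * du

centers = np.linspace(-0.1, 0.4, 51)
vals = [pi0_at_center(c) for c in centers]
pi0 = max(vals)
c_star = centers[int(np.argmax(vals))]
print(round(pi0, 3), round(c_star, 3))
```

The maximum should land close to the value $0.859$ reported in \tabref{incorrect_background_situations}, attained at a small positive center.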
\begin{table}[htpb]
\centering\small
\caption{Simulation situations when background specification is incorrect, as well as values of $\theta$, $\theta_0$, $\pi_0$, obtained through numerical optimization.}
\label{tab:incorrect_background_situations}
\bigskip
\setlength{\tabcolsep}{0.05in}
\begin{tabular}{p{0.1\textwidth} p{0.2\textwidth}
p{0.3\textwidth} p{0.1\textwidth} p{0.1\textwidth} p{0.1\textwidth}
}
\toprule
{\bf Model} & {\bf Background} & {\bf Distribution} & {\bf $\theta $} & {\bf $\theta_0$} & {\bf $\pi_0$} \\
\midrule
\textsf{S5} & Symmetric & $0.85 \text{ }\mathcal{T}_6 + 0.15 \text{ }\mathcal{N}(3, 1) $ & 0.850 & 0.850 & 0.859 \\
\midrule
\textsf{L5} & Log-concave & $0.85 \text{ }\mathcal{T}_6 + 0.15 \text{ }\mathcal{N}(3, 1) $ & 0.850 & 0.850 & 0.925 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[htpb]
\centering\small
\caption{Results for the situations of \tabref{incorrect_background_situations}. We provide mean values in \textsf{S5}, and median values in \textsf{L5}.}
\label{tab:incorrect_background_results_table}
\bigskip
\setlength{\tabcolsep}{0.02in}
\begin{tabular}{p{0.08\textwidth}
p{0.075\textwidth}
p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth}
p{0.08\textwidth}
p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth} p{0.075\textwidth}}
\toprule
{\bf Model} & {\bf Center} & {\bf $\hat{\pi}_0$} & {\bf $\hat{\pi}_0^{\rm{L}}$} & {\bf $\hat{\pi}_0^{\rm{U}}$} & {\bf Null} & {\bf $\hat{\theta}_0^{\rm{PSC}}$} & {\bf $\hat{\theta}_0^{\rm{PSH}}$} & {\bf $\hat{\theta}_0^{\rm{PSB}}$} & {\bf $\hat{\theta}^{\rm{E}}$} & {\bf $\hat{\theta}^{\rm{MR}}$} & {\bf $\hat{\theta}^{\rm{CJ}}$}\\
\midrule
\textsf{S5} & 0.073 & 0.857 & 0.541 & 1 & $\mathcal{N} (0,1)$ & 0.815 & 0.855 & 0.877 & 0.803 & 0.843 & 0.816 \\
\text{ } & (0.057) & (0.021) & (0.057) & (0) & & (0.021) & (0.022) & (0.016) & (0.026) & (0.015) & (0.086)\\
\midrule
\textsf{L5} & & 0.921 & 0.574 & 1 & $\mathcal{N} (0,1)$ & 0.817 & 0.856 & 0.877 & 0.803 & 0.843 & 0.814 \\
\text{ } & & (0.036) & (0.069) & (0) & & (0.021) & (0.022) & (0.016) & (0.026) & (0.015) & (0.086)\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Combinations}
Although not discussed in the main part of the paper, some combinations of the shape constraints considered earlier are possible. For example, one could consider extracting a maximal background that is symmetric {\em and} log-concave; or one could consider extracting a maximal background that is monotone {\em and} log-concave. As it turns out, these two combinations are intimately related. Mixtures of symmetric log-concave distributions are considered, for example, in \cite{pu2020algorithm}.
\subsection{Generalization to higher dimensions}
All our examples were on the real line, corresponding to real-valued observations, simply because the work was in large part motivated by multiple testing, in which the sample consists of test statistics. But the approach is more general. Indeed, consider a measurable space, and let $\mathcal{D}$ be a class of probability distributions on that space. Given a probability distribution $\mu$, we can quantify how much there is of $\mathcal{D}$ in $\mu$ by defining
\begin{equation}
\pi_0 := \sup \big\{\pi: \exists \nu \in \mathcal{D} \text{ s.t. } \mu \ge \pi \nu\big\}.
\end{equation}
For concreteness, we give a simple example in an arbitrary dimension $d$ by generalizing the setting of a symmetric background component covered in \secref{symmetric}.
Although various generalizations are possible, we consider the class --- also denoted $\mathcal{S}$ as in \eqref{symmetric_pi0_definition} --- of spherically symmetric (i.e., radial) densities with respect to the Lebesgue measure on $\mathbb{R}^d$.
It is easy to see that the background component proportion is given by
\begin{align}
\pi_0 = \int_{\mathbb{R}^d} h_0(x) d x,
&& h_0(x) := \min \{ f(y) : \|y\| = \|x\|\},
\end{align}
and, if $\pi_0 > 0$, the background component density is given by $g_0 := h_0/\pi_0$.
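As an illustration (ours, not from the paper) of this formula in dimension $d = 2$, take $f = (1-\epsilon)\,\mathcal{N}(0, I_2) + \epsilon\,\mathcal{N}(\mu, I_2)$. The minimum of $f$ over the circle of radius $r$ is attained at the point diametrically opposite $\mu$, which gives a closed form against which a generic grid computation of $\int h_0$ can be checked:

```python
# Illustration (ours): pi_0 for a spherically symmetric background in R^2,
# with f = (1 - eps) N(0, I) + eps N(mu, I), mu = (3, 0). On each circle
# |y| = r the minimum of f sits diametrically opposite mu, which yields a
# closed form to check the generic grid computation against.
import math
import numpy as np

eps, m = 0.1, 3.0

def f_polar(r, theta):
    d2 = r * r + m * m - 2.0 * r * m * np.cos(theta)   # |y - mu|^2 on the circle
    return ((1 - eps) * np.exp(-r * r / 2) + eps * np.exp(-d2 / 2)) / (2 * math.pi)

r = np.linspace(0.0, 12.0, 2401)
theta = np.linspace(0.0, 2 * math.pi, 721)
R, T = np.meshgrid(r, theta, indexing="ij")
h0 = f_polar(R, T).min(axis=1)                          # h_0 depends only on |x|
pi0_grid = (2 * math.pi * r * h0).sum() * (r[1] - r[0])

# closed form: the circle minimum puts the shifted mass at distance r + m
pi0_exact = (1 - eps) + eps * (
    math.exp(-m * m / 2) - m * math.sqrt(math.pi / 2) * math.erfc(m / math.sqrt(2))
)
print(pi0_grid, pi0_exact)
```

Here $\pi_0$ is only slightly larger than $1-\epsilon$, since almost none of the shifted mass survives the minimum over spheres.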
\subsection*{Acknowledgments}
We are grateful to Philip Gill for discussions regarding the discretization of the optimization problem \eqref{logconcave_original_maximization}.
\bibliographystyle{chicago}
\section{Introduction}
Let $\set{B(t) : t \geq 0}$ be a standard Brownian motion. It is well known that there exist polynomials $H_n(x,t)$ (the Hermite polynomials) such that for each $n$, $H_n(B(t), t)$ is a martingale with respect to the filtration induced by $\set{B(t)}$. These polynomials have numerous other familiar properties. The list of properties relevant to this article includes:
\begin{itemize}
\item
Orthogonality: $\mf{E}[H_n(B(t), t) H_k(B(t), t)] = 0$ for $n \neq k$;
\item
Three-term recursion: $x H_n(x,t) = H_{n+1}(x,t) + n t H_{n-1}(x,t)$;
\item
Expansion: $H_n(x,t) = \sum_{j=0}^{[n/2]} (-1)^j \frac{n!}{(n - 2j)! 2^j j!} t^j x^{n - 2 j}$, where the coefficients are related to the number of incomplete matchings;
\item
Product formulas: $H_n(x,t) H_k(x,t) = \sum_{j=0}^{n \wedge k} \binom{n}{j} \binom{k}{j} j! t^j H_{n + k - 2 j}(x,t)$, where the coefficients are related to the number of inhomogeneous incomplete matchings;
\item
Solution of the heat equation: $\partial_t H_n(x,t) + \frac{1}{2} \partial_x^2 H_n(x,t) = 0$ with the initial condition $H_n(x,0) = x^n$;
\item
Stochastic integral representation: $H_n(B(t), t) = \int_{[0,t)^n} \,dB(t_1) \ldots \,dB(t_n)$.
\end{itemize}
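The expansion, the three-term recursion, and the product formula listed above can be checked against one another numerically; the following pure-Python sketch (ours) builds $H_n(x,t)$ from the expansion and verifies the other two identities at a sample point:

```python
# Sketch (ours): build H_n(x, t) from the expansion and check the
# three-term recursion and the product formula at a sample point.
import math

def H(n, x, t):
    # expansion: sum_j (-1)^j n!/((n-2j)! 2^j j!) t^j x^(n-2j)
    return sum(
        (-1) ** j * math.factorial(n)
        / (math.factorial(n - 2 * j) * 2 ** j * math.factorial(j))
        * t ** j * x ** (n - 2 * j)
        for j in range(n // 2 + 1)
    )

x, t = 0.7, 1.3
for n in range(1, 7):
    # three-term recursion: x H_n = H_{n+1} + n t H_{n-1}
    assert abs(x * H(n, x, t) - H(n + 1, x, t) - n * t * H(n - 1, x, t)) < 1e-9

for n in range(5):
    for k in range(5):
        # product formula: H_n H_k = sum_j C(n,j) C(k,j) j! t^j H_{n+k-2j}
        rhs = sum(
            math.comb(n, j) * math.comb(k, j) * math.factorial(j)
            * t ** j * H(n + k - 2 * j, x, t)
            for j in range(min(n, k) + 1)
        )
        assert abs(H(n, x, t) * H(k, x, t) - rhs) < 1e-8
print("recursion and product formula hold at (x, t) =", (x, t))
```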
Moreover, some of these results, notably the product formulas, extend to more general stochastic integrals $\int_{[0,t)^n} f(t_1, \ldots, t_n) \,dB(t_1) \ldots \,dB(t_n)$ for $f \in L^2$.
Now let $\set{X(t) : t \geq 0}$ be the $N \times N$ Hermitian Brownian motion: Hermitian random matrices with jointly complex Gaussian entries and the covariance function $\Exp{X(s)_{ij} X(t)_{k \ell}} = \frac{1}{N} \delta_{i=\ell} \delta_{j=k} (s \wedge t)$. Then as is also well-known by now, there is no polynomial $P(x,t)$ of degree greater than $2$ such that $P(X(t), t)$ is a martingale. For example, a simple calculation shows that for $s < t$,
\[
\Exp{ X(t)^3 \mid X(u), u \leq s} = X(s)^3 + (t-s) \left(2 X(s) + \frac{1}{N} \Tr[X(s)] I_N \right).
\]
It is therefore natural to replace the algebra of polynomials by a larger algebra of trace polynomials. Here a trace monomial is, roughly speaking, a product of a regular monomial and traces of regular monomials. See Section~\ref{Subsec:Trace-background} for details. An incomplete but representative list of related work involving trace polynomials includes the study of random matrices \cite{Cebron-Free-convolution}, \cite{Driver-Hall-Kemp,Kemp-large-N-GL,Kemp-Heat-kernel}, noncommutative functions \cite{Klep-Spenko-Free-function-theory,Klep-Pascoe-Volcic}, and operator algebras \cite{Dabrowski-Guionnet-Shl-Convex,Jekel-Elementary,Jekel-Li-Shl-Wasserstein}. Other related work includes \cite{Graczyk-Vostrikova,Levy-Schur-Weyl,Huber-trace}.
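As a concrete illustration (ours, ahead of the precise definitions in Section~\ref{Subsec:Trace-background}), a trace monomial such as $p(X) = X^2 \cdot \frac{1}{N}\Tr(X) \cdot \frac{1}{N}\Tr(X^2)$ can be evaluated with a few lines of numpy; note that such expressions are equivariant under unitary conjugation:

```python
# Illustration (ours): evaluating a trace monomial on a Hermitian matrix,
# p(X) = X^2 * (Tr(X)/N) * (Tr(X^2)/N), and checking the equivariance
# p(U X U*) = U p(X) U* under unitary conjugation.
import numpy as np

rng = np.random.default_rng(0)
N = 4
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
X = (A + A.conj().T) / 2                  # a Hermitian matrix

def tr(M):
    return np.trace(M).real / N           # normalized trace

def p(M):
    return (M @ M) * tr(M) * tr(M @ M)    # a trace monomial

# a Haar-ish unitary from the QR decomposition of a complex Gaussian matrix
U, _ = np.linalg.qr(rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
assert np.allclose(p(U @ X @ U.conj().T), U @ p(X) @ U.conj().T)
```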
In this article, we study the $\ast$-algebra of trace polynomials, the state on it induced by the Hermitian Brownian motion, the corresponding basis of Hermite trace polynomials, and the larger algebra obtained as its completion. However, most of the results in the article are stated in terms of permutations (rather than trace monomials), a general parameter $q$ (rather than $\frac{1}{N}$), and a general Hilbert space $\mc{H}$ (rather than $L^2(\mf{R}_+, dx)$). Thus, denote by $S_0(n)$ the symmetric group on the set $\set{0, 1, \ldots, n}$. For each $n$, $\alpha \in S_0(n)$ and $h_1, \ldots, h_n \in \mc{H}_{\mf{R}}$, we will consider symbols $\T{\alpha \otimes_s (h_1 \otimes \ldots \otimes h_n)}$ subject to the symmetry relation
\[
\T{\alpha \otimes_s (h_1 \otimes \ldots \otimes h_n)} = \T{(\sigma \alpha \sigma^{-1}) \otimes_s U_\sigma (h_1 \otimes \ldots \otimes h_n)}
\]
for any $\sigma \in S(n)$, where
\[
U_\sigma (h_1 \otimes \ldots \otimes h_n) = h_{\sigma^{-1}(1)} \otimes \ldots \otimes h_{\sigma^{-1}(n)}.
\]
Extending linearly in both arguments, we obtain
\[
\mc{TP}(\mc{H}_{\mf{R}}) = \set{\T{\eta \otimes_s F} : n \geq 0, \eta \in \mf{C}[S_0(n)], F \in \mc{H}_{\mf{R}}^{\otimes n}},
\]
where for now we are considering the algebraic tensor product of Hilbert spaces. We now define a star-algebra structure on $\mc{TP}(\mc{H}_{\mf{R}})$ by
\[
\T{\alpha \otimes_s F} \T{\beta \otimes_s G} = \T{(\alpha \cup \beta) \otimes_s (F \otimes G)}
\]
and
\[
\T{\eta \otimes_s F}^\ast
= \T{\eta^\ast \otimes_s F}.
\]
Here for $\alpha \in S_0(n)$, $\beta \in S_0(k)$, the permutation $\alpha \cup \beta \in S_0(n+k)$ is obtained by shifting the cycles of $\beta$ by $n$, and merging the cycles of $\alpha$ and $\beta$ containing $0$. See Notation~\ref{Notation:Union}.
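A minimal sketch of this union operation (our reading of the construction; the composition order is a convention of ours and may differ from Notation~\ref{Notation:Union}): represent permutations as dictionaries, shift $\beta$, and compose. Since the shifted permutations move disjoint points except possibly $0$, composition merges the two cycles through $0$:

```python
# Sketch (our reading; composition order is a convention): the union of
# alpha in S_0(n) and beta in S_0(k), giving an element of S_0(n + k).
def shift(beta, n):
    # move the non-zero points of beta from {1,...,k} to {n+1,...,n+k}
    s = lambda i: i + n if i > 0 else 0
    return {s(i): s(beta[i]) for i in beta}

def union(alpha, n, beta, k):
    b = shift(beta, n)
    a_ = lambda i: alpha.get(i, i)      # extend both by the identity
    b_ = lambda i: b.get(i, i)
    return {i: a_(b_(i)) for i in range(n + k + 1)}

# (0 1) with (0 1): the two 0-cycles merge into the 3-cycle (0 2 1)
g = union({0: 1, 1: 0}, 1, {0: 1, 1: 0}, 1)
assert g == {0: 2, 1: 0, 2: 1}
```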
Next, let $q$ be a non-zero parameter. For a transposition $\tau$, we define the contraction $C_\tau$ as follows. For $\alpha \in S_0(n)$, $C_\tau(\alpha)$ is obtained by: multiplying $\tau \alpha$; erasing the support of $\tau$ and shifting to obtain a permutation in $S_0(n-2)$; and multiplying by a weight depending on $q$ and the number of cycles in $\tau \alpha$. See Definition~\ref{Defn:Contraction-gr}. Then as usual, we define the Laplacian as the sum over transpositions $\mc{L} = \sum_\tau C_\tau$, and the Hermite trace polynomial $\I{\eta \otimes_s F} = \T{e^{-\mc{L}}(\eta \otimes_s F)}$. Here the contraction on the tensor part of the argument is the usual tensor contraction. Hermite trace polynomials satisfy several properties which parallel those of ordinary Hermite polynomials.
We may now define a linear functional $\phi$ on $\mc{TP}(\mc{H}_{\mf{R}})$ by requiring it to be unital and zero on each $\I{\eta \otimes_s F}$. If we use $\set{\T{\eta \otimes_s F}}$ as a spanning set for $\mc{TP}(\mc{H}_{\mf{R}})$, multiplication does not depend on $q$, while the state $\phi$ does. On the other hand, if we use $\set{\I{\eta \otimes_s F}}$ as a spanning set, the state does not depend on $q$, but multiplication does. The functional $\phi$ is positive semi-definite if and only if $q$ is zero or of the form $\pm \frac{1}{N}$ for $N \in \mf{N}$. For such $q$, we can define the GNS Hilbert space $\mc{F}_q(\mc{H})$ as the completion of the quotient $\mc{TP}_q(\mc{H}_{\mf{R}})$ of $\mc{TP}(\mc{H}_{\mf{R}})$ with respect to the seminorm induced by $\phi$. The inner product takes the form
\[
\ip{(\eta \otimes_s F)}{(\zeta \otimes_s G)}_q = \delta_{n=k} \sum_{\sigma \in S(n)} \chi_q[\eta \sigma \zeta^\ast \sigma^{-1}] \ip{F}{U_\sigma G}_{\mc{H}^{\otimes n}}.
\]
Here $\chi_q$ is the normalized character $\chi_q(\alpha) = q^{\abs{\alpha}}$, where $\abs{\alpha}$ is the standard length function on the symmetric group.
Using the seminorm induced by $\phi$ allows us to extend the star-algebra structure to
\[
\overline{\mc{TP}}(\mc{H}_{\mf{R}}) = \set{\T{\eta \otimes_s F} : n \geq 0, \eta \in \mf{C}[S_0(n)], F \in \overline{\mc{H}_{\mf{R}}^{\otimes n}}},
\]
where $F$ may now lie in the Hilbert space tensor product. In the case $\mc{H} = L^2(\mf{R}_+, dx)$, $\I{\eta \otimes_s F}$ may then be interpreted as a stochastic integral of $F$. We thus obtain contraction, product, and conditional expectation formulas for such integrals. Note that $\T{\eta \otimes_s F}$ does not in general allow for such an extended argument.
Since $\mc{TP}(\mc{H}_{\mf{R}})$ is naturally graded, we may interpret $\mc{F}_q(\mc{H})$ as a Fock space. There are several natural choices of creation (and annihilation) operators which we describe. The corresponding field operators live in smaller subalgebras of $\mc{TP}(\mc{H}_{\mf{R}})$. There is an interesting connection with a different Fock space construction from \cite{Bozejko-Guta} which we describe in some detail. Incidentally, another article exploring the connection between characters of the symmetric groups and GUE matrices is \cite{Kostler-Nica-CLT-S-infty}. We have not elucidated the connection between their work and ours.
Finally, we show that, as expected, for $q = \pm \frac{1}{N}$ the structures above are isomorphic to the algebra of trace polynomials, and to equivariant square-integrable matrices, in a Hermitian Brownian motion. Various corollaries follow. In particular, we obtain several versions of the chaos decomposition for this process. For $q=0$, with a different scaling, one obtains objects related to free probability.
To the best of our knowledge, the Hermite trace polynomials $\I{\eta \otimes_s F}$ are new even in the univariate case $\mc{H} = \mf{C}$. However if we further restrict to $\eta \in \mf{C}[S(n)]$ (rather than $S_0(n)$), the corresponding objects have appeared in the literature. Indeed, denoting $\chi^\lambda$ the character of the irreducible representation indexed by the partition $\lambda$, $\I{\chi^\lambda}$ is closely related to the corresponding Hermite polynomial of matrix argument. In particular, these elements are all orthogonal with respect to $\phi$. Moreover, one can form a more general set of characters $\chi^{\lambda, \lambda'}$ indexed by pairs of partitions which differ by one box, such that $\set{\I{\chi^{\lambda, \lambda'}}}$ form an orthogonal basis for $\mc{F}_{1/N}(\mf{C})$. This collection of trace polynomials is clearly deserving of additional study.
The paper is organized as follows. After the introduction and a background section, in Section~\ref{Sec:Group} we discuss the kernel of the character $\chi_q$, define the multiplication and contractions on the tensor algebra of symmetric groups, and study their properties. In Section~\ref{Sec:Fock}, we define the Fock space $\mc{F}_q(\mc{H})$, describe the kernel of the inner product, and list three chaos decompositions for this space for different choices of $\mc{H}$. In Section~\ref{Sec:Algebra}, we upgrade the algebra structure from $\mc{TP}(\mc{H}_{\mf{R}})$ to $\overline{\mc{TP}}(\mc{H}_{\mf{R}})$, define the Hermite trace polynomials $\I{\eta \otimes_s F}$ and the functional $\phi$, and prove conditional expectation and product formulas. We also study creation and annihilation decompositions on the Fock space, and three subalgebras which arise: the Gaussian subalgebra, the commutative subalgebra corresponding to pure trace polynomials, and the subalgebra corresponding to pure polynomials, which is not closed under conditional expectations. We finish the section by describing the relation to a construction by Bo\.{z}ejko and Gu\c{t}\u{a}. Finally, in Section~\ref{Sec:GUE}, we give some background on trace polynomials and the Hermitian Brownian motion, prove the isomorphism with the random matrix picture for $q = \frac{1}{N}$, and list several corollaries. We show that in the $q=0$ case, there is an isomorphism involving the free Fock space, and the case $q = - \frac{1}{N}$ is isomorphic to that for $q = \frac{1}{N}$. We also describe the relation with Hermite polynomials of matrix argument.
\textbf{Acknowledgements.} The first author has discussed various aspects of this project with a number of people. He is especially grateful to Todd Kemp, Andu Nica, JM Landsberg, Marek Bo{\.z}ejko, and Jurij Vol\v{c}i\v{c}.
\section{Preliminaries}
\label{Sec:Prelim}
\subsection{Permutations and partitions}
Denote $[n] = \set{1, \ldots, n}$ and $[0, n] = \set{0, 1, \ldots, n}$.
A permutation in $S(n)$ induces a (cycle) set partition in $\Part(n)$. Conversely, a partition $\pi \in \Part(n)$ will be identified with the permutation whose cycles are the blocks of $\pi$, with the elements of each cycle listed in increasing order. In particular, a partition whose blocks are pairs and singletons will be identified with the corresponding involutive permutation. For such a partition, we denote by $\Sing{\pi}$ its single-element blocks, and $\Pair{\pi}$ its two-element blocks.
Similarly, a set partition $\pi$ induces a (number) partition whose parts are the sizes of blocks of $\pi$. Conversely, a partition $\lambda \in \Par(n; k)$ with parts $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_k$ induces a set partition whose blocks are intervals of size $\lambda_1, \ldots, \lambda_k$.
Denote by $S_0(n)$ the permutations of $[0, n]$. $S(n)$ is a subgroup of $S_0(n)$, which acts on it by conjugation. The equivalence classes under this action are subsets of the standard conjugacy classes where the number of elements of the cycle containing $0$ is preserved. So they are in a natural bijection with number partitions of $n+1$ with a marked element. Equivalently, they are indexed by pairs of partitions $(\lambda, \lambda')$, where $\lambda \in \Par(n)$, $\lambda' \in \Par(n + 1)$, and the corresponding diagrams differ by one box.
For $m > n$, we will identify the element $\alpha \in S_0(n)$ with the corresponding element of $S_0(m)$ under the natural inclusion $[0,n] \subset [0,m]$.
For $\alpha \in S_0(n)$, denote $\cyc_0(\alpha) = \cyc(\alpha) - 1$, where $\cyc(\alpha)$ is the number of cycles of $\alpha$. In other words, $\cyc_0(\alpha)$ is the number of cycles of $\alpha$ not containing $0$. Denote
\[
\abs{\alpha} = (n+1) - \cyc(\alpha) = n - \cyc_0(\alpha),
\]
which is the usual length function on the symmetric group on $[0, n]$.
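A short sketch (ours) of these quantities, storing a permutation as the dictionary $\{i \mapsto \alpha(i)\}$ and checking $\abs{\alpha}$ on small examples:

```python
# Sketch (ours): counting cycles and the length |alpha| = (n+1) - cyc(alpha)
# on S_0(n), with permutations stored as dictionaries {i: alpha(i)}.
def cyc(perm):
    seen, count = set(), 0
    for i in perm:
        if i not in seen:
            count += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return count

def length(perm):
    return len(perm) - cyc(perm)       # len(perm) = n + 1

identity = {i: i for i in range(4)}            # in S_0(3)
transposition = {0: 1, 1: 0, 2: 2, 3: 3}       # (0 1)
three_cycle = {0: 1, 1: 2, 2: 0, 3: 3}         # (0 1 2)
assert length(identity) == 0                   # |e| = 0
assert length(transposition) == 1              # a transposition has length 1
assert length(three_cycle) == 2                # a 3-cycle has length 2
```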
To make the paper more readable, we will write the elements of the group algebra $\mf{C}[S_0(n)]$ as linear combinations $\sum_{\alpha \in S_0(n)} c_\alpha \alpha$ rather than the more standard $\sum_{\alpha \in S_0(n)} c_\alpha \delta_{\alpha}$.
A \emph{partial permutation} $\alpha \in \mc{PS}_0(n)$ is a bijection from a subset of $[0, n]$ onto a subset of $[0, n]$; proper subsets and the empty subset are allowed. Orbits of a partial permutation fall into two types. Cyclic orbits are cycles in the usual permutation sense. Linear orbits have the initial and the final element. Note that a linear orbit has at least two elements. It is convenient to abuse the terminology and consider elements of $[0, n]$ which do not belong to any orbit of $\alpha$ as single-element linear orbits of $\alpha$. Denote $\mc{PS}_0(n, N)$ the set of partial permutations of $[0, n]$ with $N$ linear orbits.
\subsection{Structure theory of the symmetric group}
\label{Sec:Centralizer}
For a partition $\lambda \in \Par(n+1)$, denote by $\chi^\lambda$ the character of the irreducible representation of $S_0(n)$ corresponding to $\lambda$. We will identify $\chi^\lambda$ with the element
\[
\sum_{\sigma \in S_0(n)} \chi^\lambda(\sigma) \sigma \in \mf{C}[S_0(n)]
\]
(recall that the characters of the symmetric group are real-valued). These elements span the center $Z(\mf{C}[S_0(n)])$, and are orthogonal, in the sense that for $\lambda \neq \mu$,
\[
\left( \sum_{\sigma \in S_0(n)} \chi^\lambda(\sigma) \sigma \right) \left( \sum_{\tau \in S_0(n)} \chi^\mu(\tau) \tau \right)
= \sum_{\rho \in S_0(n)} (\chi^\lambda \ast \chi^\mu)(\rho) \rho = 0.
\]
In particular, for any character $\chi$, $\chi^\lambda$ and $\chi^\mu$ are orthogonal with respect to the inner product induced by $\chi$.
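This orthogonality can be checked by brute force in the smallest nontrivial case $S_0(2) \cong S_3$; the sketch below (ours) hard-codes the character table of $S_3$ and verifies both the inner-product orthogonality and the vanishing of the convolution $\chi^\lambda \ast \chi^\mu$ for $\lambda \neq \mu$:

```python
# Sketch (ours): orthogonality of the irreducible characters of S_0(2) = S_3,
# including the convolution identity (chi^lam * chi^mu)(rho) = 0, lam != mu.
from itertools import permutations

elems = list(permutations(range(3)))           # S_3 in one-line notation

def compose(a, b):                              # (a b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(3))

def inverse(a):
    inv = [0] * 3
    for i, j in enumerate(a):
        inv[j] = i
    return tuple(inv)

def cyc(a):
    seen, c = set(), 0
    for i in range(3):
        if i not in seen:
            c += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = a[j]
    return c

# character table keyed by cycle count: 3 cycles -> identity class,
# 2 cycles -> transpositions, 1 cycle -> 3-cycles
tables = {
    "trivial": {3: 1, 2: 1, 1: 1},
    "sign": {3: 1, 2: -1, 1: 1},
    "standard": {3: 2, 2: 0, 1: -1},
}

def chi(name, a):
    return tables[name][cyc(a)]

for lam in tables:
    for mu in tables:
        ip = sum(chi(lam, a) * chi(mu, a) for a in elems) / len(elems)
        assert ip == (1 if lam == mu else 0)
        if lam != mu:
            for rho in elems:
                conv = sum(chi(lam, s) * chi(mu, compose(inverse(s), rho))
                           for s in elems)
                assert conv == 0
```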
The centralizer of $\mf{C}[S(n)]$ in $\mf{C}[S_0(n)]$ is
\[
Z(\mf{C}[S_0(n)] : \mf{C}[S(n)]) = \set{\eta \in \mf{C}[S_0(n)] : {\sigma^{-1}} \eta \sigma = \eta \text{ for all } \sigma \in S(n)}.
\]
The following are well-known \cite{Okounkov-Vershik}, \cite{Gill-Rep}.
\begin{itemize}
\item
$Z(\mf{C}[S_0(n)] : \mf{C}[S(n)])$ is generated (as an algebra) by $Z(\mf{C}[S(n)])$ and the Jucys–Murphy element ${(01)} + \ldots + {(0n)}$.
\item
For $\lambda \in \Par(n)$, write $\lambda' = \lambda + \square$ if the Young diagram for $\lambda'$ is obtained by adding one box to the Young diagram for $\lambda$. Denote $\chi^{\lambda' : \lambda}$ the character of the compression of the $\lambda'$-irreducible representation of $S_0(n)$ to the (unique) component giving a $\lambda$-irreducible representation of $S(n)$. Then $\set{\chi^{\lambda' : \lambda} : \lambda \in \Par(n), \lambda' = \lambda + \square}$ are orthogonal and span $Z(\mf{C}[S_0(n)] : \mf{C}[S(n)])$.
\end{itemize}
Let $W$ be the isomorphism
\[
W : \sum_{\lambda \in \Par(n+1)} M_{d_\lambda}(\mf{C}) \rightarrow \mf{C}[S_0(n)],
\]
where $d_\lambda$ is the dimension of the irreducible representation of $S_0(n)$ corresponding to $\lambda$. Then $W(M_{d_\lambda}(\mf{C}))$ is the ideal generated by (any one of) the Young symmetrizer(s) $c_\lambda$, and is spanned by these symmetrizers (for different choices of the tableau corresponding to $\lambda$). In particular, $W(I_{M_{d_\lambda}(\mf{C})}) = \frac{d_\lambda}{(n+1)!} \chi^\lambda$; these elements are exactly the minimal central projections.
Denote
\[
\mf{C}[S_0(n)]_{\leq N}
= W \left(\sum_{\lambda \in \Par(n+1; \leq N)} M_{d_\lambda}(\mf{C}) \right)
\]
and
\[
\mf{C}[S_0(n)]_{> N}
= W \left(\sum_{\lambda \in \Par(n+1; \geq N+1)} M_{d_\lambda}(\mf{C}) \right).
\]
Let $\chi$ be a character of $S_0(n)$. Then $\chi \circ W$ is a trace on $\sum_{\lambda \in \Par(n+1)} M_{d_\lambda}(\mf{C})$, and so has the form
\[
\chi \circ W = \sum_{\lambda \in \Par(n+1)} n_{\lambda} \Tr_{M_{d_\lambda}(\mf{C})}.
\]
Here
\[
n_\lambda = \frac{\sum_\alpha \chi[\alpha] \chi^\lambda[\alpha]}{\sum_\alpha \chi^\lambda[\alpha] \chi^\lambda[\alpha]}
= \frac{1}{(n+1)!} \sum_{\alpha \in S_0(n)} \chi[\alpha] \chi^\lambda[\alpha].
\]
Denote $E_{ij}^\lambda$ the matrix units in $M_{d_\lambda}(\mf{C})$. Then for any $\chi$,
\[
\set{W(E_{ij}^\lambda): \lambda \in \Par(n+1), 1 \leq i, j \leq d_\lambda}
\]
span $\mf{C}[S_0(n)]$, and are orthogonal with respect to the (typically degenerate) inner product induced by $\chi$,
\[
\chi[W(E_{ij}^\lambda) W(E_{ij}^\mu)^\ast]
= \delta_{\lambda=\mu} \chi[W(E_{ii}^\lambda)]
= \delta_{\lambda=\mu} n_\lambda.
\]
See for example \cite{Structure-symmetric} for a detailed exposition and explicit expressions for $W(E_{ij}^\lambda)$.
\subsection{Algebraic conditional expectation}
\begin{Prop}
\label{Prop:Algebraic-CE}
Let $\mc{A}$ be a unital star-algebra, $\mc{B}$ a unital star-subalgebra, and $\phi$ a faithful, tracial state on $\mc{A}$, where positivity means that $\phi[a^\ast a] \geq 0$ for any $a \in \mc{A}$. Denote by $L^2(\mc{A}, \phi)$ and $L^2(\mc{B}, \phi)$ the corresponding GNS Hilbert spaces, with the common state vector $\Omega$. Suppose $F : \mc{A} \rightarrow \mc{B}$ is a function such that the map $a \Omega \mapsto F(a) \Omega$ extends to the orthogonal projection $P : L^2(\mc{A}, \phi) \rightarrow L^2(\mc{B}, \phi)$. Then
\begin{itemize}
\item
$\phi[F(a)] = \phi[a]$.
\item
$F$ is a $\mc{B}$-bimodule map.
\item
For any $a \in \mc{A}$, the operator on $L^2(\mc{B}, \phi)$ induced by $F(a)$ is $P a P$. In particular, the operator induced by $F(a^\ast a)$ is positive.
\end{itemize}
We call such a map $F$ an algebraic conditional expectation.
\end{Prop}
If $\mc{A}$ is a $C^\ast$-algebra, it follows that $F$ is a genuine $\phi$-preserving conditional expectation.
\begin{proof}
By assumption, for any $a \in \mc{A}$ and $b \in \mc{B}$,
\[
\phi[b^\ast a] = \phi[b^\ast F(a)],
\]
and $F(a)$ is uniquely determined by this condition. By taking $b = 1$, we get the first property. Next, for $b' \in \mc{B}$, using the fact that $\phi$ is tracial,
\[
\phi[b^\ast F(a b')] = \phi[b^\ast a b'] = \phi[b' b^\ast a] = \phi[b' b^\ast F(a)] = \phi[b^\ast F(a) b'],
\]
so $F$ is a right $\mc{B}$-module map. The proof for the left action is similar, and does not require the tracial property. Finally,
\[
\ip{b \Omega}{F(a) b' \Omega}_\phi = \ip{b \Omega}{a b' \Omega}_\phi = \ip{b \Omega}{P a P b' \Omega}_\phi
\]
and
\[
\ip{b \Omega}{F(a^\ast a) b \Omega}_\phi = \ip{a b \Omega}{a b \Omega}_\phi \geq 0. \qedhere
\]
\end{proof}
\section{The tensor algebra of symmetric groups}
\label{Sec:Group}
\subsection{A function on the symmetric group}
On the symmetric group $S_0(n)$, consider the function $\chi^{n+1}_q: \alpha \mapsto q^{\abs{\alpha}}$, and extend it linearly to the group algebra $\mf{C}[S_0(n)]$. As is well known (see, for example, \cite{Gnedin-Gorin-Kerov-Block-characters,Kostler-Nica-CLT-S-infty}), this function is positive semi-definite for
\[
q \in \mc{Z}_{n+1} = \left[ - \frac{1}{n}, \frac{1}{n} \right] \cup \set{\pm \frac{1}{N} : 1 \leq N \leq n-1}
\]
and is not positive semi-definite for other values of $q$. It follows that these functions are positive semi-definite for all $n$ if and only if
\[
q \in \mc{Z} = \cap_n \mc{Z}_n = \set{0} \cup \set{\pm \frac{1}{N} : N \in \mf{N}}.
\]
We will typically omit the superscript on $\chi^{n+1}_q$. For $q \in \mc{Z}$, the positivity of $\chi_q$ follows from the fact that $\chi_{1/N}$ is the normalized character of the standard representation
\[
\pi_{n, q} : \mf{C}[S_0(n)] \rightarrow \End \left((\mf{C}^N)^{\otimes (n+1)} \right),
\]
while $\chi_0$ is the normalized character of the regular representation of $S_0(n)$ (and also the standard trace on $\mf{C}[S_0(n)]$). The case of negative $q$ follows by twisting with the sign character: $\chi_{-1/N}[\alpha] = (-1)^{\abs{\alpha}} \chi_{1/N}[\alpha]$.
It is well-known \cite{Gnedin-Gorin-Kerov-Block-characters} (see also \cite{Biane-Approx-factorization-characters,Kerov-book}) that
\[
\chi_{1/N}^{n+1} = \frac{1}{N^{n+1}} \sum_{\lambda \in \Par(n+1)} \abs{SS_N(\lambda)} \chi^\lambda,
\]
where $SS_N(\lambda)$ is the set of semistandard Young tableaux of shape $\lambda$ with entries belonging to the set $\set{1, \ldots, N}$. In particular, the coefficient of $\chi^\lambda$ is zero if $\lambda$ has more than $N$ parts, and non-zero if it has at most $N$ parts. Also
\[
\chi_0^{n+1} = \frac{1}{(n+1)!} \sum_{\lambda \in \Par(n+1)} d_\lambda \chi^\lambda.
\]
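For example, for $n + 1 = 2$ and $N = 2$, there are three semistandard tableaux of shape $(2)$ (with entries $11$, $12$, $22$) and one of shape $(1,1)$, so
\[
\chi_{1/2}^{2} = \frac{1}{4} \left( 3 \chi^{(2)} + \chi^{(1,1)} \right),
\]
which indeed takes the value $\frac{3+1}{4} = 1$ at the identity and $\frac{3-1}{4} = \frac{1}{2} = q$ at the transposition $(01)$.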
We now discuss the kernel of the normalized character $\chi_q$. See Section~4 of \cite{Procesi} for a related discussion.
\begin{Prop}
\label{Prop:Kernel-rep}
Denote
\[
\mc{N}_{gr, q, n} = \set{\eta \in \mf{C}[S_0(n)] : \chi_q[\eta \eta^\ast] = 0} = \set{\eta \in \mf{C}[S_0(n)] : \pi_{n, q}(\eta) = 0}
\]
and
\[
\mc{N}_{gr, q} = \bigoplus_{n=0}^\infty \mc{N}_{gr, q, n} \subset \bigoplus_{n=0}^\infty \mf{C}[S_0(n)].
\]
\begin{enumerate}
\item
$\mc{N}_{gr, q, n} = \set{0}$ for $q = 0$ or for $q = \frac{1}{N}$, $n+1 \leq N$.
\item
The following are equivalent descriptions of $\mc{N}_{gr, 1/N, n}$ for $n+1 > N$.
\begin{itemize}
\item
$\mc{N}_{gr, 1/N, n}$ is the ideal of the group algebra of $S_0(n)$ spanned by the Young symmetrizers corresponding to diagrams with at least $N + 1$ rows.
\item
$\mc{N}_{gr, 1/N, n} = \mf{C}[S_0(n)]_{> N}$, so that on $\mf{C}[S_0(n)]_{\leq N}$, $\chi_q$ is faithful.
\item
More explicitly, $\mc{N}_{gr, 1/N, n}$ is the ideal generated by
\[
\sigma_n^{(1/N)} = \sum_{\sigma \in S_0(N-1)} (-1)^{\abs{\sigma}} \sigma
\]
under the natural embedding of $S_0(N-1)$ into $S_0(n)$. Similarly, $\mc{N}_{gr, -1/N, n}$ is the ideal generated by
\[
\sigma_n^{(-1/N)} = \sum_{\sigma \in S_0(N-1)} \sigma.
\]
\item
Let $\pi \in \mc{PS}_0(n)$ be a partial permutation of $[0, n]$ with $N$ linear orbits. Denote by $a_0, \ldots, a_{N-1}$ and $b_0, \ldots, b_{N-1}$ the initial, respectively final, elements of these orbits (recall that if an element does not belong to any actual orbit of $\pi$, we consider it as a single-element linear orbit, in which case $a_i = b_i$). Here $a_0, b_0$ belong to the orbit containing $0$. A permutation $\sigma \in S_0(N-1)$ naturally acts on the linear orbits of $\pi$ by concatenating them. Denote the result of this action by $\sigma \circ \pi \in S_0(n)$. Thus, $(\sigma \circ \pi)(b_i) = a_{\sigma(i)}$, and $(\sigma \circ \pi)(x) = \pi(x)$ otherwise. Then
\[
\mc{N}_{gr, 1/N, n} = \Span{\sum_{\sigma \in S_0(N-1)} (-1)^{\abs{\sigma}} \sigma \circ \pi : \pi \in \mc{PS}_0(n, N)}
\]
and
\[
\mc{N}_{gr, -1/N, n} = \Span{\sum_{\sigma \in S_0(N-1)} \sigma \circ \pi : \pi \in \mc{PS}_0(n, N)}.
\]
\end{itemize}
\end{enumerate}
\end{Prop}
\begin{proof}
(a) and the equivalence between the first three entries in (b) are well-known. For the final entry, denote by $\bar{\pi}$ the permutation obtained by ``closing'' the orbits of $\pi$; that is, $\bar{\pi}(b_i) = a_i$ and $\bar{\pi}(x) = \pi(x)$ otherwise. Also let $\alpha \in S_0(n)$ be defined by $\alpha(a_i) = i$ and arbitrarily otherwise; thus, $\alpha$ maps $\set{a_0, \ldots, a_{N-1}}$ bijectively onto $[0, N-1]$. Then a calculation shows that $\alpha^{-1} \sigma \alpha \bar{\pi} = \sigma \circ \pi$. Therefore
\[
\sum_{\sigma \in S_0(N-1)} (-1)^{\abs{\sigma}} \sigma \circ \pi = \sum_{\sigma \in S_0(N-1)} (-1)^{\abs{\sigma}} \alpha^{-1} \sigma \alpha \bar{\pi} \in \mc{N}_{gr, q}.
\]
The argument for the opposite inclusion is very close to the proof of Theorem~4.5 in \cite{Procesi}, in a somewhat different language. Start with $\alpha^{-1} \sigma_n^{(1/N)} \beta \alpha$ in the ideal. Possibly by replacing $\beta$ by its multiple by appropriate transpositions, we may assume that $0, 1, \ldots, N-1$ lie in different cycles of $\beta$. Denote $a_i = \alpha^{-1}(i)$ for $0 \leq i \leq N-1$. Then $a_0, \ldots, a_{N-1}$ lie in different cycles of $\alpha^{-1} \beta \alpha$. So $\alpha^{-1} \beta \alpha = \bar{\pi}$, where linear orbits of $\pi$ are cycles of $\alpha^{-1} \beta \alpha$ with initial elements $a_0, a_1, \ldots, a_{N-1}$, and cyclic orbits of $\pi$ are the remaining cycles of $\alpha^{-1} \beta \alpha$. It follows that
\[
(\alpha^{-1} \sigma_n^{(1/N)} \alpha) (\alpha^{-1} \beta \alpha) = \sum_{\sigma \in S_0(N-1)} (-1)^{\abs{\sigma}} \sigma \circ \pi.
\]
The argument for $q = -1/N$ is similar.
\end{proof}
\subsection{Algebra structure}
\label{Subsec:Algebra}
\begin{Notation}
\label{Notation:Union}
Let $\alpha \in S_0(n)$, $\beta \in S_0(k)$. Define $\sigma_{n, k} \in S(n+k)$ by
\[
\sigma_{n, k}(i) =
\begin{cases}
i + k, & 1 \leq i \leq n, \\
i - n, & n+1 \leq i \leq n+k.
\end{cases}
\]
Thus in word notation, $\sigma_{n,k} = k+1, k+2, \ldots, k + n, 1, 2, \ldots, k$. Note for future reference that $\sigma_{k, n} = \sigma_{n, k}^{-1}$.
Define $\alpha \cup \beta \in S_0(n + k)$ by
\begin{equation}
\label{Eq:Defn:cup}
\alpha \cup \beta = \sigma_{n, k}^{-1} \beta \sigma_{n, k} \alpha.
\end{equation}
That is, $\alpha \cup \beta$ is obtained by: combining the cycles of $\alpha$ and $\beta$ containing $0$ into
\[
(0, \alpha(0), \ldots, \alpha^{-1}(0), \beta(0) + n, \ldots, \beta^{-1}(0) + n),
\]
keeping the remaining cycles of $\alpha$, and letting the remaining cycles of $\beta$ act on the shifted set $\set{n+1, \ldots, n + k}$.
\end{Notation}
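For example, let $\alpha = (01)(2) \in S_0(2)$ and $\beta = (01) \in S_0(1)$. The cycles of $\alpha$ and $\beta$ containing $0$ combine into $(0, \alpha(0), \beta(0) + 2) = (013)$, and the fixed point $2$ of $\alpha$ is kept, so
\[
\alpha \cup \beta = (013)(2) \in S_0(3),
\]
which one can also verify directly from \eqref{Eq:Defn:cup} with $\sigma_{2,1} = 2, 3, 1$ in word notation.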
We will now define a version of tensor multiplication on $\bigoplus_{n=0}^\infty \mf{C}[S_0(n)]$ and its quotient by $\mc{N}_{gr, q}$. To distinguish this algebra structure from the usual multiplication on the group algebra, we will denote $\alpha$ by $\T{\alpha}$. We will use this identification to talk about $\chi_q$, $\mc{N}_{gr, q}$, etc., as applied to $\T{\eta}$.
\begin{Defn}
\label{Defn:T-multiplication}
Define the multiplication on $\bigoplus_{n=0}^\infty \mf{C}[S_0(n)]$ by the linear extension of
\[
\T{\alpha} \T{\beta} = \T{{\alpha \cup \beta}}.
\]
We use the ordinary adjoint on the group algebra, defined by the anti-linear extension of the relation $\T{\alpha}^\ast = \T{{\alpha^{-1}}}$.
\end{Defn}
\begin{Remark}
The subalgebra $\bigoplus_{n=0}^\infty \mf{C}[S(n)]$ is called the generic tensor algebra in \cite{Raicu-Generic-tensor}.
\end{Remark}
\begin{Prop}
\label{Prop:Multiplication-group}
\
\begin{enumerate}
\item
The multiplication in Definition~\ref{Defn:T-multiplication} is associative.
\item
${(0)}$ is the identity.
\item
$\mc{N}_{gr, q}$ is an ideal for this multiplication. Consequently the multiplication factors through to the quotient
\[
\mc{TP}_q = \bigoplus_{n=0}^\infty \mf{C}[S_0(n)] / \mc{N}_{gr, q}.
\]
For $q = \frac{1}{N}$,
\[
\mc{TP}_q = \bigoplus_{n=0}^{N-1} \mf{C}[S_0(n)] \oplus \bigoplus_{n=N}^\infty \mf{C}[S_0(n)]_{\leq N}.
\]
\end{enumerate}
\end{Prop}
\begin{proof}
(a) and (b) are immediate. (c) follows from \eqref{Eq:Defn:cup} since $\mc{N}_{gr, q}$ is an ideal for the usual group algebra multiplication.
\end{proof}
\begin{Notation}
For $\alpha \in S_0(n)$ and $S \subset [n]$, the restriction $\alpha|_{S^c}$ of a permutation is the permutation on $[0, n] \setminus S$ defined by $\alpha|_{S^c}(x) = \alpha^m(x)$, where $m = \min \set{k > 0 \ |\ \alpha^k(x) \in S^c}$.
\end{Notation}
\begin{Notation}
For $A, B \subseteq \mf{Z}$, $\abs{A} = \abs{B}$, denote $P^A_B$ the unique order-preserving bijection from $A$ to $B$, as well as (by abuse of notation) the corresponding bijection between the collections of permutations $S(A)$ and $S(B)$.
\end{Notation}
\begin{Example}
For $\alpha = (13524)$ and $S = \set{2,5}$, $\alpha|_{S^c} = (134)$ and $P^{[5] \setminus S}_{[3]} \alpha|_{S^c} = (1 2 3)$.
\end{Example}
For $\pi \in \mc{P}_{1,2}(n)$, denote $\supp{\pi} = [n] \setminus \Sing{\pi}$.
\begin{Defn}
\label{Defn:Contraction-gr}
Let $q \neq 0$. For a transposition $\tau = (ij) \in S(n)$ and $\alpha \in S_0(n)$, define the $\tau$-contraction by the linear extension of
\[
C_\tau (\alpha)
= q^{\cyc_0((\tau \alpha)|_{\supp{\tau}^c}) - \cyc_0(\tau \alpha) + 1} {P^{[0, n] \setminus \set{i, j}}_{[0, n-2]} (\tau \alpha)|_{\supp{\tau}^c}}.
\]
More generally, for $\pi \in S(n)$ with the cycle structure $\pi \in \Part_{1,2}(n)$ containing $\ell$ pairs, define the contraction
\[
C_\pi (\alpha)
= q^{\cyc_0((\pi \alpha)|_{\supp{\pi}^c}) - \cyc_0(\pi \alpha) + \ell}
{P^{[0, n] \setminus \supp{\pi}}_{[0, n-2 \ell]} (\pi \alpha)|_{\supp{\pi}^c}}.
\]
Extend $C_\pi$ linearly to $\mf{C}[S_0(n)]$.
\end{Defn}
\begin{Remark}
It is easy to check that for a transposition $\tau = (ij)$,
\[
q^{\cyc_0((\tau \alpha)|_{\supp{\tau}^c}) - \cyc_0(\tau \alpha) + 1} =
\begin{cases}
q^{-1}, & (ij) \text{ is a cycle in } \alpha, \\
1, & (i), (j) \text{ are cycles in } \alpha, \\
1, & i, j \text{ are consecutive elements} \\
& \text{in the same cycle of $\alpha$ of length at least } 3, \\
q, & \text{otherwise.}
\end{cases}
\]
In particular, $C_\tau$ is defined for $q=0$ unless $(ij)$ is a cycle in $\alpha$. See Section~\ref{Subsec:q=0}.
\end{Remark}
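To illustrate, take $\alpha = (0123) \in S_0(3)$ and $\tau = (13)$, which falls into the last case. Then $\tau \alpha = (03)(12)$, and $(\tau \alpha)|_{\supp{\tau}^c}$ is the identity on $\set{0, 2}$, so the exponent is $2 - 2 + 1 = 1$ and
\[
C_{(13)} \bigl( (0123) \bigr) = q \, (0)(1),
\]
with $(0)(1)$ the identity of $S_0(1)$.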
\begin{Lemma}
\label{Lemma:Contraction-kernel-gr}
Let $q \in \mc{Z} \setminus \set{0}$. Each $C_\pi$ preserves $\mc{N}_{gr, q}$. Therefore it factors through to the quotient $\mc{TP}_q$.
\end{Lemma}
\begin{proof}
We consider the case $q = \frac{1}{N}$; for $q = - \frac{1}{N}$, the argument is similar. We will use the final description in Proposition~\ref{Prop:Kernel-rep}. Let $\eta \in \mc{N}_{gr, q}$. Since $\mc{N}_{gr, q}$ is an ideal, $\tau \eta \in \mc{N}_{gr, q}$. So it suffices to show that if $\pi \in \mc{PS}_0(n)$ and $\eta = \sum_{\sigma \in S(N)} (-1)^{\abs{\sigma}} \sigma \circ \pi$, then for any $S \subset [0, n]$,
\[
\sum_{\sigma \in S(N)} q^{\cyc_0((\sigma \circ \pi)|_{S^c}) - \cyc_0(\sigma \circ \pi)} (-1)^{\abs{\sigma}} (\sigma \circ \pi)|_{S^c} \in \mc{N}_{gr, q}.
\]
Moreover, it suffices to take $S = \set{s}$. We consider two cases.
Suppose $s = a_i = b_i$. Then $(\sigma \circ \pi)|_{\set{s}^c} = (\sigma|_{\set{i}^c}) \circ (\pi|_{\set{s}^c})$ and $\cyc_0((\sigma \circ \pi)|_{\set{s}^c}) = \cyc_0(\sigma|_{\set{i}^c})$. Each $\sigma' \in S(N-1)$ appears as $\sigma|_{\set{i}^c}$ $N$ times, once when $(i)$ is a cycle in $\sigma$ (so that $\sigma'$ has one less cycle than $\sigma$), and $N-1$ times corresponding to $\sigma(i) = j$ for each $j \in [N] \setminus \set{i}$ (so that $\sigma'$ has the same number of cycles as $\sigma$). Thus
\[
\begin{split}
\sum_{\sigma \in S(N)} q^{\cyc_0((\sigma \circ \pi)|_{S^c}) - \cyc_0(\sigma \circ \pi)} (-1)^{\abs{\sigma}} (\sigma \circ \pi)|_{S^c}
& = \sum_{\sigma' \in S(N-1)} (N-1 - q^{-1}) (-1)^{\abs{\sigma'}} \sigma' \circ (\pi|_{\set{s}^c}) \\
& = - \sum_{\sigma' \in S(N-1)} (-1)^{\abs{\sigma'}} \sigma' \circ (\pi|_{\set{s}^c})
\in \mc{N}_{gr, q}.
\end{split}
\]
If $\set{s}$ is not a single-element linear orbit of $\pi$, then $(\sigma \circ \pi)|_{\set{s}^c} = \sigma \circ (\pi|_{\set{s}^c})$, which has the same number of cycles as $\sigma \circ \pi$, so the sum is again of the same form and lies in $\mc{N}_{gr, q}$.
\end{proof}
\begin{Notation}
\label{Notation:Laplacian}
Denote
\[
\mc{L}_n = \sum_{\tau \text{ a transposition in } S(n)} C_\tau
\]
and $\mc{L}$ the direct sum of these operators. Note that for $\pi \in \Part_{1,2}(n)$, $C_\pi$ is a product of several transposition-type contractions, and
\[
\mc{L}^\ell = \ell! \sum_{\substack{\pi \in \Part_{1,2}(n) \\ \abs{\pi} = n - \ell}} C_\pi.
\]
Denote also
\[
\Part_{1,2}(n, k) = \set{\pi \in \Part_{1,2}(n+k) : \text{ if } (ij) \in \pi, i < j, \text{ then } i \leq n < j}
\]
the inhomogeneous partitions, and
\[
\mc{L}_{n, k} = \sum_{\substack{(ij) \in S(n+k) \\ i \leq n < j}} C_{(ij)}.
\]
Finally, for $\ell \leq n \wedge k$, denote
\[
\mc{L}_{n, k}^{(\ell)} = \ell! \sum_{\substack{\pi \in \Part_{1,2}(n,k) \\ \abs{\pi} = n + k - \ell}} C_\pi.
\]
\end{Notation}
\begin{Defn}
\label{Defn:I-gr}
For $\eta \in \bigoplus_{n=0}^\infty \mf{C}[S_0(n)]$, define
\[
\I{\eta}
= \T{e^{- \mc{L}} \eta}
= \sum_{\ell=0}^\infty (-1)^\ell \frac{1}{\ell!} \T{\mc{L}^\ell (\eta)}
= \sum_{\pi \in \Part_{1,2}(n)} (-1)^{n - \abs{\pi}} \T{C_\pi (\eta)}.
\]
For $q \in \mc{Z} \setminus \set{0}$, $\I{\eta}$ is also well-defined for $\eta \in \mc{TP}_q$.
\end{Defn}
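For example, for the identity $(0)(1)(2)$ of $S_0(2)$, the only nontrivial contraction is $C_{(12)}((0)(1)(2)) = (0)$, with coefficient $1$, so
\[
\I{(0)(1)(2)} = \T{(0)(1)(2)} - \T{(0)},
\]
in analogy with the Hermite polynomial $H_2(x) = x^2 - 1$.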
\begin{Prop}
\label{Prop:T-I-gr}
\
\begin{enumerate}
\item
\[
\T{\eta}
= \I{e^{\mc{L}} \eta}
= \sum_{\ell=0}^\infty \frac{1}{\ell!} \I{\mc{L}^\ell (\eta)}
= \sum_{\pi \in \Part_{1,2}(n)} \I{C_\pi (\eta)}.
\]
\item
For $\alpha \in S_0(n)$, $\beta \in S_0(k)$,
\[
\begin{split}
\I{\alpha} \I{\beta}
& = \I{{\alpha \cup \beta}} + \sum_{\ell=1}^{\min(n, k)} \frac{1}{\ell!} \I{\mc{L}_{n,k}^{(\ell)} (\alpha \cup \beta)} \\
& = \sum_{\pi \in \Part_{1,2}(n, k)} \I{C_\pi({\alpha \cup \beta})}.
\end{split}
\]
\end{enumerate}
\end{Prop}
\begin{proof}
(a) follows by composing (terminating) power series in $\mc{L}$. For (b), the argument is very similar to the standard one for Hermite polynomials \cite{dSCViennot}. We expand
\[
\begin{split}
\I{\alpha} \I{\beta}
& = \sum_{{\sigma_1} \in \Part_{1,2}(n)} (-1)^{n - \abs{{\sigma_1}}} \T{C_{\sigma_1} (\alpha)} \sum_{{\sigma_2} \in \Part_{1,2}(k)} (-1)^{k - \abs{{\sigma_2}}} \T{C_{\sigma_2} (\beta)} \\
& = \sum_{{\sigma_1} \in \Part_{1,2}(n)} \sum_{{\sigma_2} \in \Part_{1,2}(k)} (-1)^{n + k - \abs{{\sigma_1}} - \abs{{\sigma_2}}} \T{C_{\sigma_1} (\alpha) \cup C_{\sigma_2} (\beta)} \\
& = \sum_{{\sigma_1} \in \Part_{1,2}(n)} \sum_{{\sigma_2} \in \Part_{1,2}(k)} (-1)^{n + k - \abs{{\sigma_1}} - \abs{{\sigma_2}}} \sum_{\tau \in \Part_{1,2}([\abs{\Sing{{\sigma_1}}} + \abs{\Sing{{\sigma_2}}}])} \I{C_\tau (C_{\sigma_1} (\alpha) \cup C_{\sigma_2} (\beta))} \\
& = \sum_{\pi \in \Part_{1,2}(n+k)} \I{C_\pi({\alpha \cup \beta})} \sum_{\substack{S_1 \subset \Pair{\pi|_{[n]}} \\ S_2 \subset \Pair{\pi|_{[n+1,n+k]}}}} (-1)^{\abs{S_1} + \abs{S_2}}.
\end{split}
\]
The final sum is zero unless both $\Pair{\pi|_{[n]}} = \emptyset = \Pair{\pi|_{[n+1,n+k]}}$, in which case $\pi \in \Part_{1,2}(n,k)$.
\end{proof}
\section{Fock space}
\label{Sec:Fock}
\begin{Construction}
Let $\mc{H}_{\mf{R}}$ be a real Hilbert space, and $\mc{H}$ its complexification. We will denote by $\mc{H}^{\otimes n}$ the \emph{algebraic} tensor product, which is spanned by simple tensors, and by $\overline{\mc{H}^{\otimes n}}$ the Hilbert space tensor product. Form the algebraic Fock space
\[
\mf{C} {(0)} \oplus \bigoplus_{n=1}^\infty \left(\mf{C}[S_0(n)] \otimes \overline{\mc{H}^{\otimes n}} \right).
\]
On each component of this space, we have a natural action of $S(n)$: for $\sigma \in S(n)$
\[
\alpha \otimes F \mapsto {\sigma \alpha \sigma^{-1}} \otimes U_\sigma F,
\]
where
\[
U_\sigma (h_1 \otimes \ldots \otimes h_n) = h_{\sigma^{-1}(1)} \otimes \ldots \otimes h_{\sigma^{-1}(n)}
\]
extends to $\overline{\mc{H}^{\otimes n}}$ as an isometry. We denote by
\[
\overline{\mc{TP}}(\mc{H}) = \mf{C} {(0)} \oplus \bigoplus_{n=1}^\infty \left(\mf{C}[S_0(n)] \otimes_s \overline{\mc{H}^{\otimes n}} \right)
\]
the vector space quotient under this action. $\overline{\mc{TP}}(\mc{H})$ may be identified with the fixed point subspace of this action, which is the image of the direct sum of the projections
\begin{equation}
\label{Eq:Symm-proj}
P_n : \alpha \otimes F \mapsto \frac{1}{n!} \sum_{\sigma \in S(n)} {\sigma \alpha \sigma^{-1}} \otimes U_\sigma F.
\end{equation}
Thus
\begin{equation}
\label{Eq:Quotient}
\mf{C}[S_0(n)] \otimes_s \overline{\mc{H}^{\otimes n}} = \set{\sum_{\alpha \in S_0(n)} \alpha \otimes F_\alpha \in \mf{C}[S_0(n)] \otimes \overline{\mc{H}^{\otimes n}} \ :\ \frac{1}{n!} \sum_{\sigma \in S(n)} U_\sigma F_{\sigma^{-1} \alpha \sigma} = F_\alpha}.
\end{equation}
To simplify notation, we will denote
\[
\alpha \otimes_s F = P_n (\alpha \otimes F) = \frac{1}{n!} \sum_{\sigma \in S(n)} {\sigma \alpha \sigma^{-1}} \otimes U_\sigma F.
\]
On $\mf{C} {(0)} \oplus \bigoplus_{n=1}^\infty \left(\mf{C}[S_0(n)] \otimes \overline{\mc{H}^{\otimes n}} \right)$, we have the canonical inner product
\begin{equation}
\label{Eq:Zero-inner-product}
\ip{\alpha \otimes F}{\beta \otimes G}_0
= \delta_{n=k} \chi_0[\alpha \beta^{-1}] \ip{F}{G}_{\overline{\mc{H}^{\otimes n}}}
= \delta_{n=k} \delta_{\alpha = \beta} \ip{F}{G}_{\overline{\mc{H}^{\otimes n}}},
\end{equation}
where $\chi_0$ is the canonical trace on $\mf{C}[S_0(n)]$. On this space, define the operator
\begin{equation}
\label{Eq:K}
\mc{K}_q \left( \alpha \otimes F \right) = \sum_{\beta \in S_0(n)} \chi_q[\alpha \beta^{-1}] \beta \otimes F.
\end{equation}
Note that $\mc{K}_0$ is the identity operator. It is easy to check that for $q \in \mc{Z}$, $\mc{K}_q$ is a positive semi-definite operator. Moreover,
\[
\begin{split}
\mc{K}_q P_n (\alpha \otimes F)
& = \frac{1}{n!} \sum_{\beta \in S_0(n)} \sum_{\sigma \in S(n)} \chi_q[\sigma \alpha \sigma^{-1} \beta^{-1}] \beta \otimes U_\sigma F \\
& = \frac{1}{n!} \sum_{\beta \in S_0(n)} \sum_{\sigma \in S(n)} \chi_q[\alpha \beta^{-1}] {\sigma \beta \sigma^{-1}} \otimes U_\sigma F \\
& = P_n \mc{K}_q (\alpha \otimes F),
\end{split}
\]
so $\mc{K}_q$ restricts to the subspace $\overline{\mc{TP}}(\mc{H})$. The resulting inner product on $\overline{\mc{TP}}(\mc{H})$ is
\begin{equation}
\label{Eq:IP-centralizer}
\ip{\sum_{\alpha \in S_0(n)} \alpha \otimes_s F_\alpha}{\sum_{\beta \in S_0(n)} \beta \otimes_s G_\beta}_q
= n! \sum_{\alpha, \beta \in S_0(n)} \chi_q(\alpha \beta^{-1}) \ip{F_{\alpha}}{G_{\beta}}.
\end{equation}
On the unsymmetrized Fock space, it is more natural to use the inner product coming from $\mc{K}_q P$, that is,
\begin{equation}
\label{Eq:Unsymmetrized-IP}
\ip{\alpha \otimes F}{\beta \otimes G}_q = \delta_{n=k} \sum_{\sigma \in S(n)} \chi_q(\alpha \sigma \beta^{-1} \sigma^{-1}) \ip{F}{U_\sigma G}_{\mc{H}^{\otimes n}}.
\end{equation}
For this inner product, $P_n$ is an isometric projection.
The inner product \eqref{Eq:IP-centralizer} is positive semi-definite on $\overline{\mc{TP}}(\mc{H})$ for $q \in \mc{Z}$. It is not in general positive definite. However, for $q=0$,
\begin{equation}
\label{Eq:q=0}
\ip{\sum_{\alpha \in S_0(n)} \alpha \otimes_s F_\alpha}{\sum_{\beta \in S_0(n)} \beta \otimes_s G_\beta}_0
= n! \sum_{\alpha \in S_0(n)} \ip{F_{\alpha}}{G_{\alpha}},
\end{equation}
and so the $0$-inner product on $\overline{\mc{TP}}(\mc{H})$ is positive definite.
\end{Construction}
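For example, for $n = 1$ the group $S(1)$ is trivial, and the inner product \eqref{Eq:Unsymmetrized-IP} reduces to
\[
\ip{\alpha \otimes h}{\beta \otimes g}_q = \chi_q(\alpha \beta^{-1}) \ip{h}{g},
\]
so that $\ip{(01) \otimes h}{(0)(1) \otimes g}_q = q \ip{h}{g}$: for $q \neq 0$, elementary tensors with different group components are no longer orthogonal.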
\begin{Remark}
In \cite{Guta-Maassen-BM}, Gu\c{t}\u{a} and Maassen considered a general Fock space construction. Let $\mc{H}$ and $\set{V_n : n \in \mf{N}}$ be Hilbert spaces, such that $S(n)$ acts unitarily on $V_n$. Then one can define $V_n \otimes_s \mc{H}^{\otimes n}$ as the fixed point subspace of the action
\[
v \otimes F \mapsto (\sigma \cdot v) \otimes U_\sigma F,
\]
and a symmetrized Fock space as the orthogonal sum of these subspaces. In our case, $V_n = \mf{C}[S_0(n)]$, with the inner product induced by $\chi_q$, on which $S(n)$ acts by conjugation (which preserves $\chi_q$). One can then define the creation operator based on a sequence of maps $j_n : V_n \rightarrow V_{n+1}$ which commute with the action of $S(n)$; the annihilation operator as its adjoint; and study the algebra generated by field operators. In our setting, there are several natural equivariant choices of the map $j_n$. One possibility is the standard embedding $\mf{C}[S_0(n)] \hookrightarrow \mf{C}[S_0(n+1)]$. Another possibility is the linear extension of the map $\alpha \mapsto (0 \ n+1) \alpha$. See Sections~\ref{Subsec:Gaussian} and \ref{Subsec:Polynomial-aubalgebra}. The algebra considered throughout most of the article is much larger than the subalgebras generated by these field operators; in fact, the vacuum vector is cyclic for it.
\end{Remark}
\begin{Prop}
\label{Prop:kernel-vs}
For $q \in \mc{Z}$, the kernel of the inner product is
\[
\mc{N}_{vs, q}
= \set{\xi \in \overline{\mc{TP}}(\mc{H}) : \ip{\xi}{\xi}_q = 0}
= \Span{\eta \otimes_s F : \eta \in \mc{N}_{gr, q}, F \in \mc{F}_{f}(\mc{H})},
\]
where $\mc{F}_{f}(\mc{H})$ is the full Fock space of $\mc{H}$.
\end{Prop}
\begin{proof}
The kernel of $\mc{K}_q$ as an operator on $\mf{C}[S_0(n)] \otimes \overline{\mc{H}^{\otimes n}}$ with the (tensor) inner product \eqref{Eq:Zero-inner-product} is clearly $\mc{N}_{gr, q, n} \otimes \overline{\mc{H}^{\otimes n}}$. The kernel of the inner product on $\overline{\mc{TP}}(\mc{H})$ is the intersection of the kernel of $\mc{K}_q$ and $\overline{\mc{TP}}(\mc{H})$.
\end{proof}
\begin{Defn}
\label{Defn:Fock-space}
For $q \in \mc{Z}$, denote
\[
\overline{\mc{TP}}_q(\mc{H}) = \overline{\mc{TP}}(\mc{H}) /\mc{N}_{vs, q}.
\]
In particular,
\[
\overline{\mc{TP}}_{1/N}(\mc{H})
= \bigoplus_{n=0}^{N-1} \left( \mf{C}[S_0(n)] \otimes_s \overline{\mc{H}^{\otimes n}} \right) \oplus \bigoplus_{n=N}^\infty \left( \mf{C}[S_0(n)]_{\leq N} \otimes_s \overline{\mc{H}^{\otimes n}} \right).
\]
Denote by $\mc{F}_q(\mc{H})$ the completion of $\overline{\mc{TP}}_q(\mc{H})$ with respect to the inner product $\ip{\cdot}{\cdot}_q$.
Note that $\mc{TP}_q(\mf{C})$ is not $\mc{TP}_q$ from Proposition~\ref{Prop:Multiplication-group}, but its symmetrized version,
\[
\mc{TP}_q(\mf{C}) = \bigoplus_{n=0}^\infty Z(\mf{C}[S_0(n)] : \mf{C}[S(n)])/\mc{N}_{gr, q}.
\]
\end{Defn}
\begin{Lemma}
\label{Lemma:L2-approximation}
\
\begin{enumerate}
\item
If $F_i, F \in \overline{\mc{H}^{\otimes n}}$ and $\norm{F_i - F} \rightarrow 0$, then $\norm{(\alpha \otimes F_i) - (\alpha \otimes F)}_q^2 \rightarrow 0$.
\item
If $\mc{H}$ is infinite-dimensional, the linear span of the elements of the form $(\alpha \otimes (h_1 \otimes \ldots \otimes h_n))$ for mutually orthogonal $h_1, \ldots, h_n$ is dense in $\mc{F}_q(\mc{H})$.
\end{enumerate}
\end{Lemma}
\begin{proof}
For part (a), note that
\[
\norm{(\alpha \otimes F)}_q^2
= \sum_{\sigma \in S(n)} \chi_q(\alpha \sigma \alpha^{-1} \sigma^{-1}) \ip{F}{U_\sigma F}
\leq n! \norm{F}^2.
\]
Part (b) follows from the fact that for infinite-dimensional $\mc{H}$, the linear span of the elements of the form $h_1 \otimes \ldots \otimes h_n$ for mutually orthogonal $h_1, \ldots, h_n$ is dense in $\overline{\mc{H}^{\otimes n}}$.
\end{proof}
\begin{Remark}
Recall that we denote by ${\mc{H}^{\otimes n}}$ the algebraic tensor product. We could then equally well consider the Fock space
\[
\mf{C} {(0)} \oplus \bigoplus_{n=1}^\infty \left(\mf{C}[S_0(n)] \otimes {\mc{H}^{\otimes n}} \right),
\]
its symmetrized subspace
\[
{\mc{TP}}(\mc{H}) = \mf{C} {(0)} \oplus \bigoplus_{n=1}^\infty \left(\mf{C}[S_0(n)] \otimes_s {\mc{H}^{\otimes n}} \right),
\]
and its quotient ${\mc{TP}}_q(\mc{H})$ by the kernel of $\mc{K}_q$ for $q \in \mc{Z}$. By the preceding lemma, $\mc{TP}(\mc{H})$ is dense in $\overline{\mc{TP}}(\mc{H})$ (with respect to the seminorm), and $\mc{TP}_q(\mc{H})$ is dense in $\overline{\mc{TP}}_q(\mc{H})$. In particular, $\mc{F}_q(\mc{H})$ is the completion of either set.
\end{Remark}
\begin{Thm}[Chaos decomposition I]
Let $\set{\xi_i : i \in \Xi}$ be an orthonormal basis for $\mc{H}$, where $\Xi = [d]$ or $\Xi = \mf{N}$. Denote
\[
\Delta(\Xi^n) = \set{\mb{u} \in \Xi^n : u(1) \leq u(2) \leq \ldots \leq u(n)},
\]
and for $\mb{u} \in \Delta(\Xi^n)$, denote by $\ker(\mb{u})$ the interval partition $\pi = (I_1, \ldots, I_k) \in \Int(n)$ such that $u(i) = u(j) \Leftrightarrow i \stackrel{\pi}{\sim} j$. Finally, denote the centralizer
\[
\begin{split}
Z(\mf{C}[S_0(n)] : \pi)
& = Z(\mf{C}[S_0(n)] : S(I_1) \times \ldots \times S(I_k)).
\end{split}
\]
\begin{enumerate}
\item
We have a decomposition
\begin{equation}
\label{Eq:Chaos-I}
\overline{\mc{TP}}(\mc{H})
= \bigoplus_{n = 0}^\infty \bigoplus_{\mb{u} \in \Delta(\Xi^n)} Z(\mf{C}[S_0(n)] : \ker(\mb{u})) \otimes_s (\xi_{u(1)} \otimes \ldots \otimes \xi_{u(n)})
\end{equation}
which is orthogonal with respect to any $q$-inner product.
\item
Any $A \in \mc{F}_{1/N}(\mc{H})$ has a unique decomposition
\[
A = \sum_{n=0}^\infty \sum_{\mb{u} \in \Delta(\Xi^n)} \eta_{\mb{u}} \otimes_s \xi_{\mb{u}},
\]
where $\eta_{\mb{u}} \in Z(\mf{C}[S_0(n)] : \ker(\mb{u})) \cap \mf{C}[S_0(n)]_{\leq N}$, and
\[
\ip{A}{A}_q = \sum_{n=0}^\infty \sum_{\mb{u} \in \Delta(\Xi^n)} \ker(\mb{u})! \chi_q[\eta_{\mb{u}}^\ast \eta_{\mb{u}}] < \infty,
\]
where as usual $\pi! = \prod_{V \in \pi} \abs{V}!$.
\item
For each $n$, for sufficiently large $N$, $\mf{C}[S_0(n)]_{\leq N} \otimes_s \overline{\mc{H}^{\otimes n}}$ is complete with respect to the $\frac{1}{N}$-norm.
\end{enumerate}
\end{Thm}
\begin{proof}
The span of the vectors of the form $\eta \otimes (\xi_{u(1)} \otimes \ldots \otimes \xi_{u(n)})$ is dense in the left-hand side of equation~\eqref{Eq:Chaos-I}. Using invariance~\eqref{Eq:Quotient}, we first see that we may take $u(1) \leq u(2) \leq \ldots \leq u(n)$. Choosing $\pi = \ker(\mb{u})$, we further see that $\xi_{\mb{u}}$ is invariant under the action of $S(I_1) \times \ldots \times S(I_k)$. So we may take $\eta$ to be invariant under the corresponding action.
For orthogonality, we observe that if $\mb{u}, \mb{v} \in \Delta(\Xi^n)$,
\[
\begin{split}
\ip{\eta \otimes \xi_{\mb{u}}}{\zeta \otimes \xi_{\mb{v}}}_q
& = \sum_{\sigma \in S(n)} \chi_q[\eta \sigma \zeta^\ast {\sigma^{-1}}] \ip{\xi_{u(1)} \otimes \ldots \otimes \xi_{u(n)}}{\xi_{v(\sigma^{-1}(1))} \otimes \ldots \otimes \xi_{v(\sigma^{-1}(n))}} \\
& = \delta_{\mb{u} = \mb{v}} \sum_{\sigma \in S(I_1) \times \ldots \times S(I_k)} \chi_q[\eta \sigma \zeta^\ast {\sigma^{-1}}] \\
& = \delta_{\mb{u} = \mb{v}} \ker(\mb{u})! \chi_q[\eta \zeta^\ast].
\end{split}
\]
Part (b) follows from the fact that each subspace $Z(\mf{C}[S_0(n)] : \ker(\mb{u})) \otimes (\xi_{u(1)} \otimes \ldots \otimes \xi_{u(n)})$ is finite dimensional, and thus closed.
For (c), note that
\[
\begin{split}
\norm{\sum_{\alpha \in S_0(n)} \alpha \otimes_s F_\alpha}_q^2
& = \sum_{\alpha, \beta \in S_0(n)} \chi_q[\alpha \beta^{-1}] \ip{F_\alpha}{F_\beta} \\
& \geq \sum_{\alpha \in S_0(n)} \norm{F_\alpha}^2 - \abs{q} \sum_{\alpha \neq \beta \in S_0(n)} \norm{F_\alpha} \norm{F_\beta} \\
& \geq (1 - \abs{q} (n+1)!) \sum_{\alpha \in S_0(n)} \norm{F_\alpha}^2 + \frac{1}{2} \abs{q} \sum_{\alpha, \beta \in S_0(n)} \left( \norm{F_\alpha} - \norm{F_\beta} \right)^2,
\end{split}
\]
and so for $\abs{q} < \frac{1}{(n+1)!}$, the norm $\norm{\sum_{\alpha \in S_0(n)} \alpha \otimes_s F_\alpha}_q$ is equivalent to $\sqrt{\sum_{\alpha \in S_0(n)} \norm{F_\alpha}^2}$.
\end{proof}
In two special cases, we have alternative chaos decompositions. The first one follows from the comments in Section~\ref{Sec:Centralizer}.
\begin{Prop}[Chaos decomposition II]
\label{Prop:Chaos-II}
Let $\mc{H} = \mf{C}$, so that $\mc{TP}(\mf{C}) = \bigoplus_{n=0}^\infty Z(\mf{C}[S_0(n)] : \mf{C}[S(n)])$. Then $\set{\chi^{\lambda' : \lambda} : \lambda \in \Par(n), \lambda' = \lambda + \square}$ are orthogonal and span $\mc{TP}(\mf{C})$. Moreover,
\[
\set{\chi^{\lambda' : \lambda} : \lambda' = \lambda + \square, \lambda' \in \Par(n+1; \leq N)}
\]
is an orthogonal basis for $\mc{F}_{1/N}(\mf{C})$.
\end{Prop}
\begin{Notation}
In the case $\mc{H}_{\mf{R}} = L^2(\mf{R}_+, dx)$, denote
\[
\Delta(\mf{R}^n_+) = \set{(t_1, \ldots, t_n) \in \mf{R}^n : t_1 \leq t_2 \leq \ldots \leq t_n}.
\]
\end{Notation}
\begin{Prop}[Chaos decomposition III]
\label{Chaos:III}
Let $\mc{H}_{\mf{R}} = L^2(\mf{R}_+, dx)$.
\begin{enumerate}
\item
We have a decomposition
\[
\overline{\mc{TP}}(\mc{H}) = \bigoplus_{n=0}^\infty \bigoplus_{\lambda \in \Par(n+1)} \bigoplus_{i, j = 1}^{d_\lambda} W(E_{ij}^\lambda) \otimes_s L^2(\Delta(\mf{R}_+^n), dx^{\otimes n})
\]
which is orthogonal with respect to any $q$-inner product. Here we use the notation from Section~\ref{Sec:Centralizer}.
\item
Any $A \in \mc{F}_{1/N}(\mc{H})$ has a unique decomposition
\[
A = \sum_{n=0}^\infty \sum_{\lambda \in \Par(n+1; \leq N)} \sum_{i, j = 1}^{d_\lambda} W(E_{ij}^\lambda) \otimes_s F_{i j}^\lambda,
\]
where $F_{ij}^\lambda \in L^2(\Delta(\mf{R}_+^n), dx^{\otimes n})$ and
\[
\sum_{n=0}^\infty \sum_{\lambda \in \Par(n+1; \leq N)} n_\lambda \sum_{i, j = 1}^{d_\lambda} \norm{F_{i j}^\lambda}^2 < \infty
\]
for $n_\lambda = \frac{\abs{SS_N(\lambda)}}{N^{n+1}}$. For $A \in \mc{F}_0(\mc{H})$ the same decomposition holds with no restrictions on $\lambda$ and $n_\lambda = \frac{d_\lambda}{(n+1)!}$. However, in that case we also have the simpler isometry~\eqref{Eq:q=0}.
\end{enumerate}
\end{Prop}
\begin{proof}
Every element in the $n$'th component of $\overline{\mc{TP}}(\mc{H}_{\mf{R}})$ is equivalent to a unique element of the form
\[
\sum_{\lambda \in \Par(n+1)} \sum_{i, j = 1}^{d_\lambda} W(E_{ij}^\lambda) \otimes_s F_{i j}^\lambda
\]
for some $F_{i j}^\lambda \in L^2(\Delta(\mf{R}_+^n), dx^{\otimes n})$. For $F, G \in L^2(\Delta(\mf{R}_+^n), dx^{\otimes n})$,
\[
\begin{split}
\ip{W(E_{ij}^\lambda) \otimes_s F}{W(E_{k \ell}^\mu) \otimes_s G}_q
& = \sum_{\sigma \in S(n)} \chi_q[W(E_{ij}^\lambda) \sigma W(E_{k \ell}^\mu)^\ast \sigma^{-1}] \ip{F}{U_\sigma G} \\
& = \chi_q[W(E_{ij}^\lambda) W(E_{k \ell}^\mu)^\ast] \ip{F}{G} \\
& = \delta_{i = k} \delta_{j = \ell} \delta_{\lambda = \mu} n_\lambda \ip{F}{G}. \qedhere
\end{split}
\]
\end{proof}
\section{The operator algebra}
\label{Sec:Algebra}
We now define a star-algebra structure on $\mc{TP}(\mc{H}_{\mf{R}})$, and eventually on $\overline{\mc{TP}}(\mc{H}_{\mf{R}})$. To distinguish the elements of the algebra from the corresponding elements of the inner product space, we will denote $\alpha \otimes_s F$ by $\I{\alpha \otimes_s F}$. Note that this identification differs from the one in Section~\ref{Subsec:Algebra}.
\begin{Defn}
For a transposition $\tau = (ij) \in S(n)$, define the $\tau$-contraction on $\mc{H}_{\mf{R}}^{\otimes n}$ by the linear extension of
\[
C_\tau(h_1 \otimes \ldots \otimes h_n) = \ip{h_i}{h_j} h_1 \otimes \ldots \otimes \hat{h}_i \otimes \ldots \otimes \hat{h}_j \otimes \ldots \otimes h_n.
\]
More generally, for $\pi \in S(n)$ with the cycle structure $\pi \in \Part_{1,2}(n)$,
\[
\pi = \set{(v_1, w_1), \ldots, (v_\ell, w_\ell), (u_1) \ldots (u_{n - 2 \ell})},
\]
with $u_1 < u_2 < \ldots < u_{n - 2 \ell}$, define the contraction
\[
C_\pi(h_1 \otimes \ldots \otimes h_n) = \prod_{i=1}^\ell \ip{h_{v(i)}}{h_{w(i)}} h_{u_1} \otimes \ldots \otimes h_{u_{n - 2 \ell}}
\]
For $q \neq 0$, denote $C_\tau(\alpha \otimes F) = C_\tau(\alpha) \otimes C_\tau(F)$ following Definition~\ref{Defn:Contraction-gr}.
\end{Defn}
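For instance, for $n = 3$ and $\tau = (13)$, the definition gives
\[
C_{(13)}(h_1 \otimes h_2 \otimes h_3) = \ip{h_1}{h_3} h_2,
\]
which is also $C_\pi$ for the partition $\pi = \set{(1,3), (2)} \in \Part_{1,2}(3)$.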
\begin{Remark}
The (tensor) contraction is not a bounded operator and so does not extend to the Hilbert space tensor product $\overline{\mc{H}^{\otimes n}}$. But if $\pi \in \Part(n,k)$, it is easy to check that
\[
C_\pi : \overline{\mc{H}^{\otimes n}} \times \overline{\mc{H}^{\otimes k}} \rightarrow \overline{\mc{H}^{\otimes n+k-2}}
\]
is a contraction.
\begin{comment}
\[
\norm{C_\pi(F \otimes G)} \leq \norm{F} \ \norm{G}.
\]
Indeed,
\[
\norm{\sum_{i, j, \mb{u}, \mb{v}} f_{i, \mb{u}}^{(j)} g_{i, \mb{v}}^{(j)} \xi_{\mb{u}} \otimes \xi_{\mb{v}}}^2
= \sum_{\mb{u}, \mb{v}} \left( \sum_{i,j} f_{i, \mb{u}}^{(j)} g_{i, \mb{v}}^{(j)} \right)^2
\leq \sum_{\mb{u}, \mb{v}} \sum_i f_{i, \mb{u}}^2 \sum_j g_{j, \mb{v}}^2.
\]
\end{comment}
\end{Remark}
\begin{Lemma}
\label{Lemma:Conjugation}
For $\sigma \in S(n)$ and $\pi \in \Part_{1,2}(n)$ with $\ell$ pairs,
\[
C_\pi(\sigma \alpha \sigma^{-1} \otimes U_\sigma F) = \tilde{\sigma} C_{\tilde{\pi}}(\alpha) \tilde{\sigma}^{-1} \otimes U_{\tilde{\sigma}} C_{\tilde{\pi}} (F),
\]
where $\tilde{\pi} = \sigma^{-1} \pi \sigma$ and
\[
\tilde{\sigma} = P^{[0, n] \setminus \supp{\pi}}_{[0, n-2 \ell]} \sigma P_{[0, n] \setminus \supp{\tilde{\pi}}}^{[0, n-2 \ell]} \in S(n - 2 \ell).
\]
\end{Lemma}
\begin{proof}
The relation
\[
C_\pi(U_\sigma F) = U_{\tilde{\sigma}} C_{\sigma^{-1} \pi \sigma} F
\]
is not hard to check. For the second relation, we compute
\[
\begin{split}
C_\pi(\sigma \alpha \sigma^{-1})
& = q^{\cyc_0((\pi \sigma \alpha \sigma^{-1})|_{\supp{\pi}^c}) - \cyc_0(\pi \sigma \alpha \sigma^{-1}) + \ell}
P^{[0, n] \setminus \supp{\pi}}_{[0, n-2 \ell]} (\pi \sigma \alpha \sigma^{-1})|_{\supp{\pi}^c} \\
& = q^{\cyc_0((\sigma \tilde{\pi} \alpha \sigma^{-1})|_{\supp{\pi}^c}) - \cyc_0(\sigma \tilde{\pi} \alpha \sigma^{-1}) + \ell}
P^{[0, n] \setminus \supp{\pi}}_{[0, n-2 \ell]} (\sigma \tilde{\pi} \alpha \sigma^{-1})|_{\supp{\pi}^c} \\
& = q^{\cyc_0((\tilde{\pi} \alpha)|_{\supp{\tilde{\pi}}^c}) - \cyc_0(\tilde{\pi} \alpha) + \ell}
\tilde{\sigma} P^{[0, n] \setminus \supp{\tilde{\pi}}}_{[0, n-2 \ell]} (\tilde{\pi} \alpha)|_{\supp{\tilde{\pi}}^c} \tilde{\sigma}^{-1} \\
& = \tilde{\sigma} C_{\tilde{\pi}}(\alpha) \tilde{\sigma}^{-1}.
\end{split}
\]
\end{proof}
\begin{Lemma}
We keep the notation $\mc{L}_n$, $\mc{L}$, $\mc{L}_{n, k}$, $\mc{L}_{n, k}^{(\ell)}$ as in Notation~\ref{Notation:Laplacian}. Then $\mc{L}$ descends to a map on $\mc{TP}(\mc{H}_{\mf{R}})$, and $\mc{L}_{n, k}^{(\ell)}$ to a map
\[
\left( \mf{C}[S_0(n)] \otimes_s \mc{H}_{\mf{R}}^{\otimes n} \right) \times \left( \mf{C}[S_0(k)] \otimes_s \mc{H}_{\mf{R}}^{\otimes k} \right) \rightarrow \mf{C}[S_0(n+k- 2\ell)] \otimes_s \mc{H}_{\mf{R}}^{\otimes n+k- 2\ell}.
\]
\end{Lemma}
\begin{proof}
$\mc{L}$ is invariant under the action of $S(n)$, and $\mc{L}_{n,k}$ under the action of $S(n) \times S(k)$.
\end{proof}
\begin{Defn}
\label{Defn:T-I}
For $\eta \otimes F \in \mf{C}[S_0(n)] \otimes \mc{H}_{\mf{R}}^{\otimes n}$, define
\[
\T{\eta \otimes F} = \I{e^{\mc{L}} (\eta \otimes F)} = \sum_{k=0}^\infty \frac{1}{k!} \I{\mc{L}^k (\eta \otimes F)} = \sum_{\pi \in \Part_{1,2}(n)} \I{C_\pi (\eta \otimes F)}.
\]
Then $\mathrm{T}$ is also well-defined on $\mc{TP}(\mc{H}_{\mf{R}})$. Note that we cannot in general extend it to $\overline{\mc{TP}}(\mc{H}_{\mf{R}})$.
\end{Defn}
The following result follows immediately from Proposition~\ref{Prop:T-I-gr}.
\begin{Prop}
\label{Prop:T-I}
For $\alpha \in S_0(n)$ and $F \in \mc{H}_{\mf{R}}^{\otimes n}$,
\[
\begin{split}
\I{\alpha \otimes F} = \T{e^{- \mc{L}} (\alpha \otimes F)} & = \sum_{k=0}^\infty (-1)^k \frac{1}{k!} \T{\mc{L}^k (\alpha \otimes F)} \\
& = \sum_{\pi \in \Part_{1,2}(n)} (-1)^{n - \abs{\pi}} \T{C_\pi (\alpha \otimes F)}.
\end{split}
\]
\end{Prop}
\begin{Defn}
\label{Defn:Product-tensors}
Define the star-algebra structure on $\mc{TP}(\mc{H}_{\mf{R}})$ by
\[
\T{\alpha \otimes_s F} \T{\beta \otimes_s G} = \T{(\alpha \cup \beta) \otimes_s (F \otimes G)}
\]
and
\[
\T{\alpha \otimes_s (h_1 \otimes \ldots \otimes h_n)}^\ast
= \T{{\alpha^{-1}} \otimes_s ({h}_1 \otimes \ldots \otimes {h}_n)}.
\]
\end{Defn}
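For instance, in the lowest degree this product rule gives
\[
\T{{(01)} \otimes h_1} \ \T{{(01)} \otimes h_2} = \T{{(0 1 2)} \otimes (h_1 \otimes h_2)},
\]
which iterates to the formula for products of the elements $X(h) = \T{{(01)} \otimes h}$ in Section~\ref{Subsec:Polynomial-aubalgebra}.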
\begin{Prop}
The multiplication on $\mc{TP}(\mc{H}_{\mf{R}})$ is well defined, and
\[
(\T{\alpha \otimes_s F} \T{\beta \otimes_s G})^\ast = \T{\beta \otimes_s G}^\ast \T{\alpha \otimes_s F}^\ast.
\]
\end{Prop}
\begin{proof}
For $\sigma \in S(n)$, $\tau \in S(k)$, taking $\rho = \sigma \sigma_{n,k}^{-1} \tau \sigma_{n,k}$,
\[
\begin{split}
& \T{{\sigma \alpha \sigma^{-1}} \otimes U_\sigma F} \T{{\tau \beta \tau^{-1}} \otimes U_\tau G} \\
&\quad = \T{{\sigma_{n,k}^{-1} \tau \beta \tau^{-1} \sigma_{n,k} \sigma \alpha \sigma^{-1}} \otimes (U_\sigma F \otimes U_\tau G)} \\
&\quad = \T{{(\sigma_{n,k}^{-1} \tau \sigma_{n,k}) (\sigma_{n,k}^{-1} \beta \sigma_{n,k}) (\sigma_{n,k}^{-1} \tau^{-1} \sigma_{n,k}) \sigma \alpha \sigma^{-1}} \otimes (U_\sigma F \otimes U_\tau G)} \\
&\quad = \T{{ \sigma (\sigma_{n,k}^{-1} \tau \sigma_{n,k}) (\sigma_{n,k}^{-1} \beta \sigma_{n,k}) \alpha (\sigma_{n,k}^{-1} \tau^{-1} \sigma_{n,k}) \sigma^{-1}} \otimes (U_\sigma F \otimes U_\tau G)} \\
&\quad = \T{{\rho(\alpha \cup \beta) \rho^{-1}} \otimes U_\rho(F \otimes G)}.
\end{split}
\]
Similarly,
\[
\begin{split}
& \T{{\beta^{-1}} \otimes_s ({g}_1 \otimes \ldots \otimes {g}_k)} \T{{\alpha^{-1}} \otimes_s ({f}_1 \otimes \ldots \otimes {f}_n)} \\
&\quad = \T{{\sigma_{n, k} \alpha^{-1} \sigma_{n, k}^{-1} \beta^{-1}} \otimes_s ({g}_1 \otimes \ldots \otimes {g}_k \otimes {f}_1 \otimes \ldots \otimes {f}_n)} \\
&\quad = \T{{\sigma_{n, k} (\alpha \cup \beta)^{-1} \sigma_{n, k}^{-1}} \otimes_s ({g}_1 \otimes \ldots \otimes {g}_k \otimes {f}_1 \otimes \ldots \otimes {f}_n)} \\
&\quad = \T{{(\alpha \cup \beta)^{-1}} \otimes_s ({f}_1 \otimes \ldots \otimes {f}_n \otimes {g}_1 \otimes \ldots \otimes {g}_k)}. \qedhere
\end{split}
\]
\end{proof}
\begin{Prop}
\label{Prop:Contraction-kernel-vs}
For $q \in \mc{Z} \setminus \set{0}$, $\mc{N}_{vs, q}$ is an ideal for the multiplication in Definition~\ref{Defn:Product-tensors}, and is preserved by each $C_\pi$. Consequently, $\mc{L}$ descends to a map on $\mc{TP}_q(\mc{H}_{\mf{R}})$, and $\mc{L}_{n,k}^{(\ell)}$ to a map on the appropriate quotient. Also, $\T{\eta}$ is well-defined for $\eta \in \mc{TP}_q(\mc{H}_{\mf{R}})$.
\end{Prop}
\begin{proof}
Apply Lemma~\ref{Lemma:Contraction-kernel-gr} and Propositions~\ref{Prop:Multiplication-group} and \ref{Prop:kernel-vs}.
\end{proof}
The following result also follows from Proposition~\ref{Prop:T-I-gr}.
\begin{Prop}
\label{Prop:Product-I}
Let $\alpha \in S_0(n)$, $\beta \in S_0(k)$, $F \in \mc{H}_{\mf{R}}^{\otimes n}$, $G \in \mc{H}_{\mf{R}}^{\otimes k}$. Then
\[
\begin{split}
\I{\alpha \otimes F} \I{\beta \otimes G}
& = \I{{\alpha \cup \beta} \otimes (F \otimes G)} + \sum_{\ell =1}^{\min(n, k)} \frac{1}{\ell!} \I{\mc{L}_{n,k}^{(\ell)} ((\alpha \cup \beta) \otimes (F \otimes G))} \\
& = \sum_{\pi \in \Part_{1,2}(n, k)} \I{C_\pi({\alpha \cup \beta} \otimes (F \otimes G))}.
\end{split}
\]
\end{Prop}
\begin{Prop}
\label{Prop:DE}
\
\begin{enumerate}
\item
By abuse of notation, define $\mc{L} \ \T{\eta \otimes F} = \T{\mc{L}(\eta \otimes F)}$. Then also $\mc{L} \ \I{\eta \otimes F} = \I{\mc{L}(\eta \otimes F)}$.
\item
Define the Euler operator on $\mc{TP}(\mc{H}_{\mf{R}})$ by
\[
E \T{\eta \otimes F} = n \T{\eta \otimes F}
\]
for $\eta \in \mf{C}[S_0(n)]$. Then for such $\eta$,
\[
(E - 2 \mc{L}) \I{\eta \otimes F} = n \I{\eta \otimes F},
\]
and it is the unique eigenfunction of this operator with eigenvalue $n$ and leading term $\T{\eta \otimes F}$. In particular, it follows that $E$ maps $\mc{TP}_q(\mc{H}_{\mf{R}})$ to itself.
\end{enumerate}
\end{Prop}
\begin{proof}
Part (a) follows from the expansion in Proposition~\ref{Prop:T-I}. For part (b), we first note that
\[
\begin{split}
& E \I{\eta \otimes F} - 2 \I{\mc{L}(\eta \otimes F)} \\
&\qquad = E \sum_{k=0}^n \frac{(-1)^k}{k!} \T{\mc{L}^k(\eta \otimes F)} - 2 \sum_{k=0}^n \frac{(-1)^k}{k!} \T{\mc{L}^{k+1}(\eta \otimes F)} \\
&\qquad = \sum_{k=0}^n \frac{(-1)^k (n - 2k)}{k!} \T{\mc{L}^k(\eta \otimes F)} + 2 \sum_{k=1}^n \frac{(-1)^k k}{k!} \T{\mc{L}^{k}(\eta \otimes F)} \\
&\qquad = \sum_{k=0}^n \frac{(-1)^k n}{k!} \T{\mc{L}^k(\eta \otimes F)} \\
&\qquad = n \I{\eta \otimes F}.
\end{split}
\]
In particular, anything of lower degree is in the sum of eigenspaces with eigenvalues $0, 1, \ldots, n-1$. It follows that specifying the leading term of an eigenfunction with a given eigenvalue determines it.
\end{proof}
\begin{Notation}
If $C_\pi(\eta \otimes F)$ is a scalar multiple of ${(0)}$ (which is the identity for the algebra), we will identify it with a scalar.
\end{Notation}
\begin{Thm}
\label{Thm:State}
Define a unital linear functional on $\bigoplus_{n=0}^\infty \mf{C}[S_0(n)] \otimes \mc{H}_{\mf{R}}^{\otimes n}$ by
\[
\state{\I{\alpha \otimes F}} = 0 \text{ for } \alpha \in S_0(n), n \geq 1, \quad \state{\I{{(0)}}} = 1.
\]
Then
\begin{enumerate}
\item
\[
\state{\I{\beta \otimes G}^\ast \I{\alpha \otimes F}} = \ip{(\alpha \otimes F)}{(\beta \otimes G)}_q.
\]
In particular, $\ast$ is the adjoint with respect to this inner product, and $\varphi$ is well-defined on the quotient $\mc{TP}_q(\mc{H}_{\mf{R}})$.
\item
$\varphi$ is tracial.
\item
$\varphi$ is positive for $q \in \mc{Z}$.
\item
\[
\set{A : \state{A^\ast A} = 0} = \Span{\I{\zeta} : \zeta \in \mc{N}_{vs, q}},
\]
and $\varphi$ is faithful on $\mc{TP}_q(\mc{H}_{\mf{R}})$.
\item
For $\alpha \in S_0(2n)$,
\begin{equation}
\label{Eq:GUE-moment}
\state{\T{\alpha \otimes F}}
= \sum_{\pi \in \Part_2(2n)} q^{n - \cyc_0(\pi \alpha)} C_\pi(F)
= \sum_{\pi \in \Part_2(2n)} q^{\abs{\pi \alpha}} C_\pi(F).
\end{equation}
and it is zero if $\alpha \in S_0(2n+1)$.
\end{enumerate}
\end{Thm}
\begin{proof}
For $\alpha \in S_0(n)$, $\beta \in S_0(k)$, $\state{\I{\alpha \otimes F} \I{\beta \otimes G}} = 0$ if $n \neq k$. If $n=k$, using Proposition~\ref{Prop:Product-I}, the definition of $\varphi$, and Definition~\ref{Defn:Contraction-gr},
\[
\begin{split}
\state{\I{{\beta} \otimes G}^\ast \I{\alpha \otimes F}}
& = \sum_{\pi \in \Part_2(n, n)} \I{C_\pi((\beta^{-1} \cup \alpha) \otimes ({G} \otimes F))} \\
& = \sum_{\pi \in \Part_2(n, n)} q^{n - \cyc_0(\pi (\beta^{-1} \cup \alpha))} C_\pi({G} \otimes F) .
\end{split}
\]
The map $\sigma \mapsto \sigma^{-1} \sigma_{n,n} \sigma$ maps $S(n)$ bijectively onto $\Part_2(n, n)$. Clearly
\[
C_{\sigma^{-1} \sigma_{n,n} \sigma}({G} \otimes F) = \ip{G}{U_{\sigma^{-1}} F} = \ip{F}{U_\sigma G}.
\]
Moreover each cycle of
\[
\sigma^{-1} \sigma_{n,n} \sigma (\beta^{-1} \cup \alpha)
= \sigma^{-1} \sigma_{n,n} \sigma \sigma_{n, n} \alpha \sigma_{n,n} \beta^{-1}
\]
intersects $[0, n]$. A computation shows that its restriction to $[0,n]$ is $\sigma^{-1} \alpha \sigma \beta^{-1}$. Therefore
\[
q^{n - \cyc_0(\sigma^{-1} \sigma_{n,n} \sigma (\beta^{-1} \cup \alpha))}
= q^{n - \cyc_0(\sigma^{-1} \alpha \sigma \beta^{-1})}
= \chi_q(\sigma^{-1} \alpha \sigma \beta^{-1}).
\]
It follows that
\[
\state{\I{{\beta} \otimes G}^\ast \I{\alpha \otimes F}}
= \sum_{\sigma \in S(n)} \chi_q(\sigma^{-1} \alpha \sigma \beta^{-1}) \ip{F}{U_\sigma G}_{\mc{H}_{\mf{R}}^{\otimes n}} = \ip{(\alpha \otimes F)}{(\beta \otimes G)}_q.
\]
Since $\chi_q[\beta^{-1} \alpha] = \chi_q[\alpha^{-1} \beta]$, (b) follows from (a), as do (c) and (d). For (e), using the expansion in Definition~\ref{Defn:T-I},
\[
\state{\T{\alpha \otimes F}}
= \sum_{\pi \in \Part_{1,2}(2n)} \state{\I{C_\pi (\alpha \otimes F)}}
= \sum_{\pi \in \Part_2(2n)} q^{n - \cyc_0(\pi \alpha)} C_\pi(F). \qedhere
\]
\end{proof}
\begin{Prop}
\label{Prop:Extended-algebra}
Let $\alpha \in S_0(n)$, $\beta \in S_0(k)$, $F \in \mc{H}_{\mf{R}}^{\otimes n}$, $G \in \mc{H}_{\mf{R}}^{\otimes k}$. Then
\[
\norm{\I{\alpha \otimes F} \I{\beta \otimes G}}_\varphi \leq (n + k)! (2 n)^k \norm{F} \ \norm{G}.
\]
Consequently, the star-algebra structure extends to $\overline{\mc{TP}}(\mc{H}_{\mf{R}})$ and $\overline{\mc{TP}}_q(\mc{H}_{\mf{R}})$.
\end{Prop}
\begin{proof}
Combining Proposition~\ref{Prop:Product-I} with the estimate in Lemma~\ref{Lemma:L2-approximation},
\[
\norm{\I{\alpha \otimes F} \I{\beta \otimes G}}_\varphi \leq (n + k)! \abs{\Part_{1,2}(n, k)} \norm{F} \ \norm{G}.
\]
$\abs{\Part_{1,2}(n, k)}$ is sequence A086885 in the OEIS, and admits the easy estimate
\[
\abs{\Part_{1,2}(n, k)} = \sum_{\ell=0}^{\min(n,k)} \ell! \binom{n}{\ell} \binom{k}{\ell}
= \sum_{\ell=0}^{\min(n,k)} \frac{n!}{(n-\ell)!} \binom{k}{\ell}
\leq n^k 2^k. \qedhere
\]
\end{proof}
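As a quick check of this count, for $n = k = 2$ the formula gives
\[
\abs{\Part_{1,2}(2, 2)} = 1 + 1! \binom{2}{1} \binom{2}{1} + 2! \binom{2}{2} \binom{2}{2} = 1 + 4 + 2 = 7,
\]
matching a direct enumeration: the empty matching, the four matchings using a single cross-pair, and the two perfect matchings between the two groups.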
The proof of the following proposition is very similar to Proposition~\ref{Prop:Product-I}, and is omitted.
\begin{Prop}
\label{Prop:Linearization}
For $\alpha_i \in S_0(n_i)$ and $F_i \in \mc{H}_{\mf{R}}^{\otimes n_i}$,
\[
\state{\I{{\alpha_1} \otimes_s F_1} \ldots \I{{\alpha_k} \otimes_s F_k}}
= \sum_{\pi \in \Part_2(n_1, \ldots, n_k)} \I{C_\pi ((\alpha_1 \cup \ldots \cup \alpha_k) \otimes_s (F_1 \otimes \ldots \otimes F_k))}.
\]
Here $\Part_2(n_1, \ldots, n_k)$ are the inhomogeneous pair partitions \cite{dSCViennot}.
\end{Prop}
\begin{Remark}
\label{Remark:GNS}
We have the (right) GNS representation of $\mc{TP}(\mc{H}_\mf{R})$ on $L^2(\mc{TP}(\mc{H}_\mf{R}), \varphi) \simeq \mc{F}_q(\mc{H})$, for which the state vector ${(0)}$ is cyclic. For the corresponding representation of $\mc{TP}_q(\mc{H}_\mf{R})$, it is cyclic and separating. It follows from Proposition~\ref{Prop:Extended-algebra} that this representation extends to $\overline{\mc{TP}}_q(\mc{H}_\mf{R})$. Similarly, we have the left representation, which commutes with the right one, and for which ${(0)}$ is also cyclic.
\end{Remark}
\begin{Prop}
\label{Prop:CE}
Let $\mc{H}'_{\mf{R}} \subset \mc{H}_{\mf{R}}$ be a closed subspace, and $P_{\mc{H}'} : \mc{H} \rightarrow \mc{H}'$ the orthogonal projection fixing $\mc{H}'_{\mf{R}}$.
\begin{enumerate}
\item
The map defined by $\mc{F}(P_{\mc{H}'}) (\alpha \otimes F) = (\alpha \otimes (P_{\mc{H}'}^{\otimes n} F))$ extends to the orthogonal projection $\mc{F}(P_{\mc{H}'}) : \mc{F}_q(\mc{H}) \rightarrow \mc{F}_q(\mc{H}')$.
\item
The map $\Gamma(P_{\mc{H}'}) : \overline{\mc{TP}}_q(\mc{H}_{\mf{R}}) \rightarrow \overline{\mc{TP}}_q(\mc{H}'_{\mf{R}})$ obtained by the linear extension of
\[
\Gamma(P_{\mc{H}'})(\I{\alpha \otimes F}) = \I{\alpha \otimes (P_{\mc{H}'}^{\otimes n} F)}
\]
is an algebraic conditional expectation, which we will denote by $\state{\cdot \ |\ \mc{H}'}$. In the GNS representation on $\mc{F}_q(\mc{H}')$, it is implemented by
\[
\Gamma(P_{\mc{H}'})(\I{\alpha \otimes F}) = \mc{F}(P_{\mc{H}'}) \I{\alpha \otimes F} \mc{F}(P_{\mc{H}'}).
\]
\end{enumerate}
\end{Prop}
\begin{proof}
We first verify that for $F \in \mc{H}^{\otimes n}$ and $G \in (\mc{H}')^{\otimes k}$,
\[
\begin{split}
\ip{\alpha \otimes F}{\beta \otimes G}_q
& = \delta_{n=k} \sum_{\sigma \in S(n)} \chi_q(\alpha \sigma \beta^{-1} \sigma^{-1}) \ip{F}{U_\sigma G} \\
& = \delta_{n=k} \sum_{\sigma \in S(n)} \chi_q(\alpha \sigma \beta^{-1} \sigma^{-1}) \ip{P_{\mc{H}'}^{\otimes n} F}{U_\sigma G} \\
& = \ip{\alpha \otimes (P_{\mc{H}'}^{\otimes n} F)}{\beta \otimes G}_q,
\end{split}
\]
which implies part (a). Part (b) follows from Proposition~\ref{Prop:Algebraic-CE}.
\end{proof}
\begin{Prop}
\label{Prop:Single}
In the single-variable case, for $\alpha \in S_0(n)$, we have
\[
\state{\T{\alpha \otimes h^{\otimes n}} \ |\ \mc{H}'}
= \sum_{\pi \in \Part_{1,2}(n)} \norm{P_{(\mc{H}')^\perp} h}^{2 \abs{\Pair{\pi}}} \T{C_\pi(\alpha) \otimes (P_{\mc{H}'} h)^{\otimes \abs{\Sing{\pi}}}}.
\]
\end{Prop}
\begin{proof}
We compute
\[
\begin{split}
& \state{\T{\alpha \otimes h^{\otimes n}} \ |\ \mc{H}'} \\
&\quad = \sum_{\pi \in \Part_{1,2}(n)} \norm{h}^{n - \abs{\Sing{\pi}}} \I{C_\pi(\alpha) \otimes (P_{\mc{H}'} h)^{\otimes \abs{\Sing{\pi}}}} \\
&\quad = \sum_{\pi \in \Part_{1,2}(n)} \norm{h}^{2 \abs{\Pair{\pi}}} \\
&\quad\qquad \sum_{\sigma \in \Part_{1,2}(\Sing{\pi})} (-1)^{\abs{\Pair{\sigma}}} \norm{P_{\mc{H}'} h}^{2 \abs{\Pair{\sigma}}}
\T{C_\sigma C_\pi(\alpha) \otimes (P_{\mc{H}'} h)^{\otimes \abs{\Sing{\sigma}}}} \\
&\quad = \sum_{\rho \in \Part_{1,2}(n)} \sum_{S \subset \Pair{\rho}} (-1)^{\abs{S}} \norm{h}^{2 \abs{\Pair{\rho}} - 2 \abs{S}} \norm{P_{\mc{H}'} h}^{2 \abs{S}} \T{C_\rho(\alpha) \otimes (P_{\mc{H}'} h)^{\otimes \abs{\Sing{\rho}}}} \\
&\quad = \sum_{\rho \in \Part_{1,2}(n)}\left(\norm{h}^2 - \norm{P_{\mc{H}'} h}^2 \right)^{\abs{\Pair{\rho}}} \T{C_\rho(\alpha) \otimes (P_{\mc{H}'} h)^{\otimes \abs{\Sing{\rho}}}} \\
&\quad = \sum_{\rho \in \Part_{1,2}(n)} \norm{P_{(\mc{H}')^\perp} h}^{2 \abs{\Pair{\rho}}} \T{C_\rho(\alpha) \otimes (P_{\mc{H}'} h)^{\otimes \abs{\Sing{\rho}}}}. \qedhere
\end{split}
\]
\end{proof}
\subsection{Three subalgebras}
For $h \in \mc{H}_{\mf{R}}$, denote by $\ell^+(h)$ the standard right creation operator on the full Fock space $\bigoplus_{n=0}^\infty \overline{\mc{H}^{\otimes n}}$, and by $\ell^-_k(h)$ the annihilation operators
\[
\ell^-_k(h) F = C_{(k \ n+1)} (F \otimes h).
\]
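For instance, for $n = 3$ and $k = 2$,
\[
\ell^-_2(h) (h_1 \otimes h_2 \otimes h_3) = C_{(2 \ 4)}(h_1 \otimes h_2 \otimes h_3 \otimes h) = \ip{h_2}{h} \ h_1 \otimes h_3.
\]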
\subsubsection{Gaussian subalgebra}
\label{Subsec:Gaussian}
Denote
\[
\mc{G}(\mc{H}_{\mf{R}}) = \Alg{\T{{(0)(1)} \otimes h} : h \in \mc{H}_{\mf{R}}} = \Span{\T{{(0)(1)\ldots (n)} \otimes F} : F \in \mc{H}_{\mf{R}}^{\otimes n}, n \in \mf{N}}.
\]
\begin{Thm}
\label{Thm:Gaussian}
In the right GNS representation from Remark~\ref{Remark:GNS}, we may decompose
\[
\I{{(0)(1)} \otimes h} = \T{{(0)(1)} \otimes h} = a^+_{(0)(1)}(h) + a^-_{(0)(1)}(h),
\]
where
\[
a^+_{(0)(1)}(h) (\alpha \otimes F)
= {\alpha} \otimes \ell^+(h) F
\]
and
\[
\begin{split}
a^-_{(0)(1)}(h) (\alpha \otimes F)
&= \sum_{k : (k) \in \alpha} {P^{[0, n] \setminus \set{k}}_{[0, n-1]} \alpha|_{\set{k}^c}} \otimes \ell_k^-(h) F \\
&\qquad + q \sum_{k : (k) \not \in \alpha} {P^{[0, n] \setminus \set{k}}_{[0, n-1]} \alpha|_{\set{k}^c}} \otimes \ell_k^-(h) F
\end{split}
\]
These operators are adjoints of each other.
The distribution of $\I{{(0)(1)} \otimes h}$ is Gaussian with mean $0$ and variance $\norm{h}^2$.
\end{Thm}
\begin{proof}
According to Proposition~\ref{Prop:Product-I},
\[
\begin{split}
& \I{\alpha \otimes F} \ \I{{(0)(1)} \otimes h} \\
&\quad = \I{{\alpha} \otimes \ell^+(h) F} \\
&\quad \qquad + \sum_{k=1}^n q^{\cyc_0(\alpha|_{\set{k}^c}) - \cyc_0(\alpha) + 1} \I{{P^{[0, n] \setminus \set{k}}_{[0, n-1]} \alpha|_{\set{k}^c}} \otimes \ell^-_k(h) F} \\
&\quad = \I{{\alpha} \otimes \ell^+(h) F} \\
&\quad \qquad + \sum_{k : (k) \in \alpha} \I{{P^{[0, n] \setminus \set{k}}_{[0, n-1]} \alpha|_{\set{k}^c}} \otimes \ell^-_k(h) F}
+ q \sum_{k : (k) \not \in \alpha} \I{{P^{[0, n] \setminus \set{k}}_{[0, n-1]} \alpha|_{\set{k}^c}} \otimes \ell^-_k(h) F}
\end{split}
\]
Also
\[
\state{\I{{(0)(1)} \otimes h}^n}
= \state{\T{{(0)(1) \ldots (n)} \otimes h^{\otimes n}}}
=
\begin{cases}
\abs{\Part_2(n)} \ \norm{h}^{n}, & n \text{ even}, \\
0, & n \text{ odd}.
\end{cases}
\]
Thus the distribution of $\I{{(0)(1)} \otimes h}$ is Gaussian. Finally, since $\T{{(0)(1)} \otimes h}$ is symmetric, while $a^+_{(0)(1)}(h)$ maps the $n$'th graded component into the $(n+1)$'st and $a^-_{(0)(1)}(h)$ maps it into the $(n-1)$'st, these operators have to be each other's adjoints.
\end{proof}
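Concretely, since $\abs{\Part_2(2m)} = (2m-1)!!$, the moments above are
\[
\state{\I{{(0)(1)} \otimes h}^{2m}} = (2m-1)!! \ \norm{h}^{2m},
\]
the moments of a centered Gaussian with variance $\norm{h}^2$.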
\begin{Remark}
The subspace generated by this algebra's action on ${(0)}$ is
\[
\Span{{(0)(1) \ldots (n)} \otimes_s F : F \in \mc{H}_{\mf{R}}^{\otimes n}, n \in \mf{N}}.
\]
Each element of this subspace has a unique permutation component, which can be dropped. Moreover the action of $a^{\pm}_{(0)(1)}(h)$ on this subspace is independent of $q$, and the induced inner product is the usual symmetric inner product. Therefore this space is isomorphic to the symmetric Fock space $\mc{F}_s(\mc{H}_{\mf{R}})$, with the usual Bosonic creation and annihilation operators.
\end{Remark}
\begin{Remark}
For $q=1$, the inner product on the Fock space is
\[
\ip{(\alpha \otimes F)}{(\beta \otimes G)}_1 = \delta_{n=k} \sum_{\sigma \in S(n)} \ip{F}{U_\sigma G}_{\mc{H}_{\mf{R}}^{\otimes n}}.
\]
So the non-degenerate quotient of the space is isomorphic to the symmetric Fock space $\mc{F}_s(\mc{H}_{\mf{R}})$, and $\mc{TP}_1(\mc{H}_{\mf{R}}) = \mc{G}(\mc{H}_{\mf{R}})$.
\end{Remark}
\subsubsection{Pure trace polynomial subalgebra}
\begin{Thm}
\[
\Span{\I{\alpha \otimes_s F} : \alpha(0) = 0}
\]
is the center of $\overline{\mc{TP}}(\mc{H}_{\mf{R}})$.
\end{Thm}
\begin{proof}
Note that
\[
\sigma_{n, k} (\alpha \cup \beta) \sigma_{n, k}^{-1} = \beta \sigma_{n, k} \alpha \sigma_{n, k}^{-1}.
\]
If $\alpha \in S(n)$, $\beta \in S_0(k)$, then $\beta$ and $\sigma_{n, k} \alpha \sigma_{n, k}^{-1}$ commute, and the expression above is $\beta \cup \alpha$. Since also $U_{\sigma_{n, k}}(F \otimes G) = G \otimes F$,
\[
(\alpha \cup \beta) \otimes_s (F \otimes G) = (\beta \cup \alpha) \otimes_s (G \otimes F).
\]
So using Lemma~\ref{Lemma:Conjugation}, for such $\alpha$,
\[
\begin{split}
\I{\alpha \otimes_s F} \I{\beta \otimes_s G}
& = \sum_{\pi \in \mc{P}_{1,2}(n, k)} \I{C_\pi(\alpha \cup \beta) \otimes_s C_\pi(F \otimes G)} \\
& = \sum_{\pi \in \mc{P}_{1,2}(n, k)} \I{C_\pi(\sigma_{n,k} (\beta \cup \alpha) \sigma_{n,k}^{-1} \otimes_s U_{\sigma_{n,k}} (G \otimes F))} \\
& = \sum_{\pi \in \mc{P}_{1,2}(n, k)} \I{C_{ \sigma_{n,k}^{-1} \pi \sigma_{n,k}} ((\beta \cup \alpha) \otimes_s (G \otimes F))} \\
& = \I{\beta \otimes_s G} \I{\alpha \otimes_s F}.
\end{split}
\]
For the converse, suppose that for some $A \in \overline{\mc{TP}}(\mc{H}_{\mf{R}})$, $A \ \I{(01) \otimes h} = \I{(01) \otimes h} A$ for all $h \in \mc{H}$. Then $A$ has the form
\[
A = \sum_{k=0}^{n} \sum_{\alpha \in S_0(k)} \I{\alpha \otimes_s F_\alpha}.
\]
It suffices to show that for each $\alpha \in S_0(n)$, $\alpha(0) \neq 0$ implies that $F_\alpha = 0$ (since such a term can be subtracted from $A$ with the difference still in the center). Comparing only the terms in the $(n + 1)$'st component, it follows that
\[
\sum_{\alpha \in S_0(n)} \I{(\alpha \cup (01)) \otimes_s (F_\alpha \otimes h)} = \sum_{\beta \in S_0(n)} \I{((01) \cup \beta) \otimes_s (h \otimes F_\beta)}.
\]
Recall that $U_{\sigma_{1,n}}(h \otimes F_\beta) = F_\beta \otimes h$ and
\[
\sigma_{1,n} ((01) \cup \beta) \sigma_{1,n}^{-1} = \beta \sigma_{1, n} (01) \sigma_{1,n}^{-1} = \beta (0 \ n+1).
\]
Therefore
\[
\sum_{\alpha \in S_0(n)} \I{((0 \ n+1) \alpha) \otimes_s (F_\alpha \otimes h)}
= \sum_{\beta \in S_0(n)} \I{(\beta (0 \ n+1)) \otimes_s (F_\beta \otimes h)}.
\]
Suppose $F_\alpha \neq 0$, and for some $\sigma \in S(n+1)$,
\[
(0 \ n+1) \alpha = \sigma \beta (0 \ n+1) \sigma^{-1} \text{ and } F_\alpha \otimes h = U_\sigma (F_\beta \otimes h).
\]
Then from the second relation, in fact $\sigma \in S(n)$. Applying the first relation to $0$, it follows that $\alpha(0) = 0$.
\end{proof}
\subsubsection{Polynomial subalgebra}
\label{Subsec:Polynomial-aubalgebra}
\begin{Remark}
Denote $X(h) = \T{{(01)} \otimes h}$. Then
\[
X(h_1) \ldots X(h_n) = \T{{(0 1 \ldots n)} \otimes (h_1 \otimes \ldots \otimes h_n)}.
\]
Since any two long cycles are conjugate,
\[
\mc{P}(\mc{H}_{\mf{R}}) = \Alg{\T{{(01)} \otimes h}} = \Span{\T{\alpha \otimes_s F} : \cyc_0(\alpha) = 0}
\]
and is a unital star-subalgebra of $\mc{TP}(\mc{H}_{\mf{R}})$. It is not closed under contractions or conditional expectations. In particular, the corresponding $\I{{(0 1 \ldots n)} \otimes (h_1 \otimes \ldots \otimes h_n)}$ operators do not in general belong to this subalgebra.
Clearly $\mc{P}(\mc{H}_{\mf{R}})$ and the center $\Span{\T{\alpha \otimes_s F} : \alpha(0) = 0}$ together generate $\mc{TP}(\mc{H}_{\mf{R}})$.
\end{Remark}
\begin{Thm}
Suppose $\mc{H}_{\mf{R}}$ is infinite-dimensional. Then $\mc{TP}(\mc{H}_{\mf{R}})$ is generated (as an algebra) by the conditional expectations
\[
\set{\state{A \ |\ \mc{H}'} : A \in \mc{P}(\mc{H}_{\mf{R}}), \mc{H}'_{\mf{R}} \subset \mc{H}_{\mf{R}}}.
\]
\end{Thm}
\begin{proof}
We will use induction on $n$. For $n=0$, $\alpha = (0)$. Suppose that for each $\beta \in S_0(k)$, $k < n$, $\T{\beta \otimes_s F}$ is in the algebra generated by the conditional expectations of elements of $\mc{P}(\mc{H}_{\mf{R}})$. It suffices to show that $\T{{(0)(1 \ldots n)} \otimes (h_1 \otimes \ldots \otimes h_n)}$ is in it. Let $h \perp \mc{H}'_{\mf{R}} = \Span{h_1, \ldots, h_n}$ be a non-zero vector. Then
\[
\begin{split}
& \state{X(h) \I{{(0 1 \ldots n)} \otimes (h_1 \otimes \ldots \otimes h_n)} X(h) \ |\ \mc{H}'} \\
&\quad = \state{C_{(1 \ n+2)} \I{{(0 1 \ldots n+2)} \otimes (h \otimes h_1 \otimes \ldots \otimes h_n \otimes h)} \ |\ \mc{H}'} \\
&\quad = \norm{h}^2 \I{{(0)(1 \ldots n)} \otimes (h_1 \otimes \ldots \otimes h_n)}.
\end{split}
\]
Therefore using Definition~\ref{Defn:T-I},
\[
\begin{split}
& \state{X(h) \T{{(0 1 \ldots n)} \otimes (h_1 \otimes \ldots \otimes h_n)} X(h) \ |\ \mc{H}'} \\
&\quad = \state{X(h) (\I{{(0 1 \ldots n)} \otimes (h_1 \otimes \ldots \otimes h_n)} + \text{ lower order terms}) X(h) \ |\ \mc{H}'} \\
&\quad = \norm{h}^2 \I{{(0)(1 \ldots n)} \otimes (h_1 \otimes \ldots \otimes h_n)} + \text{ lower order terms}.
\end{split}
\]
The result follows.
\end{proof}
\begin{Cor}
The $\varphi$-preserving conditional expectation from $\overline{\mc{TP}}(\mc{H}_{\mf{R}})$ onto its center is the map
\[
\I{\alpha \otimes_s F} \mapsto \I{{\alpha|_{\set{0}^c}} \otimes_s F}.
\]
On $\mc{TP}(\mc{H}_{\mf{R}})$ it is implemented by
\[
\state{X(h) \I{ \alpha \otimes_s (h_1 \otimes \ldots \otimes h_n)} X(h) \ |\ \mc{H}'},
\]
where $\mc{H}'_{\mf{R}} = \Span{h_1, \ldots, h_n}$ and $h \perp \mc{H}'_{\mf{R}}$ is a unit vector.
\end{Cor}
A similar representation holds for general contractions.
\begin{Lemma}
Let $\pi = \set{(v_1, w_1), \ldots, (v_\ell, w_\ell), (u_1), \ldots, (u_{n - 2 \ell})} \in \Part_{1,2}(n)$, and $\mc{H}'_{\mf{R}} \subset \mc{H}_{\mf{R}}$ a closed subspace. Let $h_1, \ldots, h_n \in \mc{H}_{\mf{R}}$ be vectors such that
\begin{itemize}
\item
$h_{v_i} = h_{w_i}$, $1 \leq i \leq \ell$.
\item
The vectors $\set{h_{v_1}, \ldots, h_{v_\ell}}$ are an orthonormal subset of $(\mc{H}'_{\mf{R}})^\perp$.
\item
$\set{h_{u_1}, \ldots, h_{u_{n - 2 \ell}}} \subset \mc{H}'_{\mf{R}}$.
\end{itemize}
Then
\[
\state{\T{\alpha \otimes (h_1 \otimes \ldots \otimes h_n)} \ |\ \mc{H}'} = \T{C_\pi(\alpha \otimes (h_1 \otimes \ldots \otimes h_n))}.
\]
\end{Lemma}
\begin{proof}
Using the assumptions on the vectors,
\[
\begin{split}
& \state{\T{\alpha \otimes (h_1 \otimes \ldots \otimes h_n)} \ |\ \mc{H}'} \\
&\quad = \state{\sum_{\sigma \in \Part_{1,2}(n)} \I{C_\sigma(\alpha \otimes (h_1 \otimes \ldots \otimes h_n))} \ |\ \mc{H}'} \\
&\quad = \sum_{\substack{\sigma \in \Part_{1,2}(n) \\ \Pair{\pi} \subset \Pair{\sigma}}} \I{C_\sigma(\alpha \otimes (h_1 \otimes \ldots \otimes h_n))} \\
&\quad = \sum_{\rho \in \Part_{1,2}([\abs{\Sing{\pi}}])} \I{C_\rho C_\pi(\alpha \otimes (h_1 \otimes \ldots \otimes h_n))} \\
&\quad = \T{C_\pi(\alpha \otimes (h_1 \otimes \ldots \otimes h_n))}. \qedhere
\end{split}
\]
\end{proof}
\begin{Prop}
In its representation on $\mc{F}_q(\mc{H})$, for $h \in \mc{H}_{\mf{R}}$, $X(h)$ is essentially self-adjoint.
\end{Prop}
\begin{proof}
Clearly $X({h})$ is symmetric. So it suffices to show that for each $\alpha \in S_0(k)$, $\T{\alpha \otimes F} {(0)}$ is an analytic vector for it. Indeed,
\[
\begin{split}
& \frac{1}{n} \norm{X(h)^n \T{\alpha \otimes F} {(0)}}^{1/n} \\
&\quad = \frac{1}{n} \state{\T{{\alpha^{-1} \cup (0 1 \ldots 2n) \cup \alpha} \otimes (\bar{F} \otimes h^{\otimes 2 n} \otimes F)}}^{1/2n} \\
&\quad = \frac{1}{n} \left( \sum_{\pi \in \Part_2(2n + 2 k)} q^{(n + k) - \cyc_0(\pi (\alpha^{-1} \cup (0 1 \ldots 2n) \cup \alpha))} C_\pi(\bar{F} \otimes h^{\otimes 2 n} \otimes F) \right)^{1/2n} \\
&\quad \leq \frac{1}{n} \left( \frac{(2 n + 2k)!}{2^{n+k} (n+k)!} \norm{F}^2 \norm{h}^{2n} \right)^{1/2n} \\
&\quad \sim \frac{1}{n} 2^{1/2} (n + k)^{1/2} e^{-1/2} \norm{h} \rightarrow 0. \qedhere
\end{split}
\]
\end{proof}
\begin{Thm}
\label{Thm:GUE}
In the right GNS representation, we may decompose $X(h) = a^+_{(01)}(h) + a^-_{(01)}(h)$, where
\[
a^+_{(01)}(h) (\alpha \otimes F)
= {(0 \ n+1) \alpha} \otimes \ell^+(h) F
\]
and
\[
\begin{split}
a^-_{(01)}(h) (\alpha \otimes F)
&= q \sum_{k \neq \alpha^{-1}(0)} P^{[0, n] \setminus \set{k}}_{[0, n-1]} ((0k)\alpha)|_{\set{k}^c} \otimes \ell^-_k(h) F \\
&\qquad+ P^{[0, n] \setminus \set{\alpha^{-1}(0)}}_{[0, n-1]} \alpha|_{\set{\alpha^{-1}(0)}^c} \otimes \ell^-_{\alpha^{-1}(0)}(h) F,
\end{split}
\]
where if $\alpha(0) = 0$, the final term is absent.
The distribution of $\I{{(01)} \otimes h}$ is the unnormalized average empirical distribution of a GUE matrix with mean $0$ and variance $\norm{h}^2$.
\end{Thm}
\begin{proof}
\[
\begin{split}
& \I{\alpha \otimes F} \ \I{(01) \otimes h} \\
&\quad = \I{(0 \ n+1) \alpha \otimes \ell^+(h) F} \\
&\quad \quad + \sum_{k=1}^n q^{\cyc_0(((0k)\alpha)|_{\set{k}^c}) - \cyc_0(\alpha) + 1} \I{P^{[0, n] \setminus \set{k}}_{[0, n-1]} ((0k)\alpha)|_{\set{k}^c} \otimes \ell^-_k(h) F} \\
&\quad = \I{(0 \ n+1) \alpha \otimes \ell^+(h) F} \\
&\quad \quad + q \sum_{k \neq \alpha^{-1}(0)} \I{P^{[0, n] \setminus \set{k}}_{[0, n-1]} ((0k)\alpha)|_{\set{k}^c} \otimes \ell^-_k(h) F} \\
&\quad \quad + \I{P^{[0, n] \setminus \set{\alpha^{-1}(0)}}_{[0, n-1]} \alpha|_{\set{\alpha^{-1}(0)}^c} \otimes \ell^-_{\alpha^{-1}(0)}(h) F}.
\end{split}
\]
Also,
\[
\state{\I{{(01)} \otimes h}^n}
= \state{\T{{(0 1 \ldots n)} \otimes h^{\otimes n}}}
= \begin{cases}
\sum_{\pi \in \Part_2(n)} q^{(n/2) - \cyc_0((0 \ldots n) \pi)} \norm{h}^{n}, & n \text{ even}, \\
0, & n \text{ odd},
\end{cases}
\]
which should be compared with Theorem 22.12 in \cite{Nica-Speicher-book}.
\end{proof}
\subsection{The relation to a construction by Bo\.{z}ejko and Gu\c{t}\u{a}}
We contrast the algebra $\mc{P}(\mc{H}_{\mf{R}})$ with a construction from \cite{Bozejko-Guta}. In section 5 of that paper, Bo\.{z}ejko and Gu\c{t}\u{a} considered the Fock space with the inner product
\[
\ip{f_1 \otimes \ldots \otimes f_n}{g_1 \otimes \ldots \otimes g_k}_q = \delta_{n=k} \sum_{\sigma \in S(n)} \chi_q[\sigma] \prod_{i=1}^n \ip{f_i}{g_{\sigma(i)}}
\]
for $q = \pm \frac{1}{N}$. On this space, they defined the creation operator $a^+(h)$ in the usual way, and the annihilation operator as its adjoint, which comes out to be
\[
a^-(h)(h_1 \otimes \ldots \otimes h_n) = \ip{h_1}{h} (h_2 \otimes \ldots \otimes h_n) + q \sum_{k=2}^n \ip{h_k}{h} (h_2 \otimes \ldots \otimes h_{k-1} \otimes h_1 \otimes h_{k+1} \otimes \ldots \otimes h_n)
\]
(compare with Theorem~\ref{Thm:GUE}). Then (Lemma~5.1) the operators $\omega(h) = a^+(h) + a^-(h)$ satisfy (with our notation)
\[
\ip{\Omega}{\omega(h_1) \ldots \omega(h_{2n}) \Omega} = \sum_{\pi \in \Part_2(2n)} q^{n - c(\pi)} C_\pi(h_1 \otimes \ldots \otimes h_{2n})
\]
and the corresponding expression is zero for an odd number of factors (compare with equation~\eqref{Eq:GUE-moment}). Here $c(\pi)$ is again the number of cycles of a permutation corresponding to a partition $\pi$, but this correspondence is more subtle. For a pair partition $\pi$, there is a unique non-crossing pair partition $\tilde{\pi}$ with the same openers and closers as $\pi$. If $i \stackrel{\pi}{\sim} j$ and $i \stackrel{\tilde{\pi}}{\sim} k$, then for the corresponding permutation $\sigma$, $\sigma(i) = j$ if $i < j$, and $\sigma(i) = k$ if $k < i$ (it is easy to check that exactly one of these alternatives holds). In other words, $\sigma$ is a permutation with upper partition $\pi$ and lower partition $\tilde{\pi}$ in the sense of Corteel \cite{Corteel-Crossings-permutations}. Then $c(\pi)$ is the number of cycles of $\sigma$.
Moreover (Lemma~5.1), the creation and annihilation operators satisfy a commutation relation
\[
a^-(f) a^+(g) = \ip{f}{g} + q \ d\Gamma(|g \rangle \langle f |),
\]
where $d\Gamma(A)$ is the standard second quantization operator. We prove an analog of this relation in our context below. Note however that in other respects, our construction behaves quite differently. For example, there is no simple commutation relation between $a^+_{(01)}$ and the operator $d\Gamma(A)$ defined below. Also, for $q = - \frac{1}{N}$, the distribution of $\omega(h)$ has finite support, in contrast to Theorem~\ref{Thm:State} and Proposition~\ref{Prop:Negative}.
\begin{Remark}
Combining our construction with \cite{Bozejko-Guta}, one could consider a Fock space with the inner product
\[
\ip{f_1 \otimes \ldots \otimes f_n}{g_1 \otimes \ldots \otimes g_k}_q = \delta_{n=k} \sum_{\sigma \in S(n)} \chi_q[\sigma \alpha \sigma^{-1} \alpha^{-1}] \prod_{i=1}^n \ip{f_i}{g_{\sigma(i)}}
\]
for a fixed permutation $\alpha$, for example for $\alpha = (0 1 \ldots n)$. It is easy to see that for $q \in \mc{Z}$, this inner product is positive semi-definite for any $\alpha$. For particular choices of $\alpha$, it is positive semi-definite for a wider range of $q$.
\end{Remark}
\begin{Defn}
Let $A$ be a (bounded for simplicity) linear operator on $\mc{H}$. Define its differential second quantization
\[
d\Gamma(A)(\alpha \otimes (h_1 \otimes \ldots \otimes h_n))
= \sum_{i=1}^n {(0 i) \alpha} \otimes (h_1 \otimes \ldots \otimes A h_i \otimes \ldots \otimes h_n).
\]
\end{Defn}
Note that
\[
d\Gamma(I)(\alpha \otimes F)
= \sum_{i=1}^n {(0 i)} \alpha \otimes F,
\]
where $\sum_{i=1}^n {(0 i)}$ is the Jucys-Murphy element.
\begin{Lemma}
\
\begin{enumerate}
\item
For $P_n$ the symmetrizing projection from equation~\eqref{Eq:Symm-proj}, $d\Gamma(A) \ P_n = P_n \ d\Gamma(A)$. Therefore $d\Gamma(A)$ restricts to an operator on $\mc{TP}(\mc{H})$ and $\mc{TP}_q(\mc{H})$.
\item
$(d\Gamma(A))^\ast = d\Gamma(A^\ast)$.
\end{enumerate}
\end{Lemma}
\begin{proof}
For part (a), we note that
\[
\begin{split}
& d\Gamma(A) \ P_n (\alpha \otimes (h_1 \otimes \ldots \otimes h_n)) \\
&\quad = \sum_{\sigma \in S(n)} \sum_{i=1}^n {(0i) \sigma \alpha \sigma^{-1}} \otimes (h_{\sigma^{-1}(1)} \otimes \ldots \otimes A h_{\sigma^{-1}(i)} \otimes \ldots \otimes h_{\sigma^{-1}(n)}) \\
&\quad = \sum_{\sigma \in S(n)} \sum_{i=1}^n {\sigma (0 \sigma^{-1}(i)) \alpha \sigma^{-1}} \otimes (h_{\sigma^{-1}(1)} \otimes \ldots \otimes A h_{\sigma^{-1}(i)} \otimes \ldots \otimes h_{\sigma^{-1}(n)}) \\
&\quad = P_n \ d\Gamma(A) (\alpha \otimes (h_1 \otimes \ldots \otimes h_n)).
\end{split}
\]
The restriction to $\mc{TP}_q(\mc{H})$ follows since $\mc{N}_{gr, q}$ is an ideal. Similarly, for part (b),
\[
\begin{split}
& \ip{ d\Gamma(A) (\alpha \otimes (f_1 \otimes \ldots \otimes f_n))}{\beta \otimes (g_1 \otimes \ldots \otimes g_n)}_q \\
&\quad = \sum_{\sigma \in S(n)} \sum_{i=1}^n \chi_q((0 i) \alpha \sigma \beta^{-1} \sigma^{-1}) \ip{A f_{i}}{g_{\sigma^{-1}(i)}} \prod_{j \neq i} \ip{f_j}{g_{\sigma^{-1}(j)}} \\
&\quad = \sum_{\sigma \in S(n)} \sum_{i=1}^n \chi_q(\alpha \sigma ((0 \sigma^{-1}(i)) \beta)^{-1} \sigma^{-1}) \ip{f_i}{A^\ast g_{\sigma^{-1}(i)}} \prod_{j \neq i} \ip{f_j}{g_{\sigma^{-1}(j)}} \\
&\quad = \ip{\alpha \otimes (f_1 \otimes \ldots \otimes f_n)}{ d\Gamma(A^\ast) (\beta \otimes (g_1 \otimes \ldots \otimes g_n))}_q. \qedhere
\end{split}
\]
\end{proof}
\begin{Prop}
Splitting $X(h)$ into the creation operator $a^+(h)$ and annihilation operator $a^-(h)$ as in Theorem~\ref{Thm:GUE}, we have
\[
a^-(f) a^+(g) = \ip{f}{g} + q \ d\Gamma(|g \rangle \langle f |).
\]
\end{Prop}
\begin{proof}
With the notation from Theorem~\ref{Thm:GUE},
\[
\begin{split}
& a^-(f) a^+(g) (\alpha \otimes_s (h_1 \otimes \ldots \otimes h_n)) \\
&\quad = a^-(f) ({(0 \ n+1) \alpha} \otimes_s h_1 \otimes \ldots \otimes h_n \otimes g) \\
&\quad = q \sum_{k = 1}^n \ip{h_k}{f} {P^{[0, n+1] \setminus \set{k}}_{[0, n]} ((0k)(0 \ n+1) \alpha)|_{\set{k}^c}} \otimes_s (h_1 \otimes \ldots \otimes \hat{h}_k \otimes \ldots \otimes h_n \otimes g) \\
&\quad \qquad + \ip{f}{g} {\alpha} \otimes_s (h_1 \otimes \ldots \otimes h_n).
\end{split}
\]
Denoting $\sigma_k = (k \ k+1 \ldots n-1 \ n)$, we have
\[
U_{\sigma_k} (h_1 \otimes \ldots \otimes \hat{h}_k \otimes \ldots \otimes h_n \otimes g) = h_1 \otimes \ldots \otimes h_{k-1} \otimes g \otimes h_{k+1} \otimes \ldots \otimes h_n.
\]
On the other hand, the bijection
\[
\tau_k = P^{[0, n+1] \setminus \set{k}}_{[0, n]} : i \mapsto
\begin{cases}
i, & 0 \leq i \leq k - 1, \\
i-1, & k+1 \leq i \leq n+1
\end{cases}
\]
satisfies $\sigma_k \tau_k(i) = i$ for $i \neq n+1$ and $\sigma_k \tau_k(n+1) = k$. Therefore
\[
\sigma_k \tau_k ((0k)(0 \ n+1) \alpha)|_{\set{k}^c} \tau_k^{-1} \sigma_k^{-1}
= (0 k) \alpha. \qedhere
\]
\end{proof}
\begin{Remark}
\[
\exp(d\Gamma(I)) = \sum_{k=0}^\infty \frac{1}{k!} (d\Gamma(I))^k
= \sum_{\beta \in S_0(n)} \beta \sum_{k=0}^\infty \frac{1}{k!} \abs{\set{\mb{i} \in [n]^k : \beta = (0 i(k)) \ldots (0 i(1))}}.
\]
Here the coefficient of $\beta$ is the generating function of the number of primitive factorizations of $\beta$. See \cite{Matsumoto-Novak-primitive}.
\end{Remark}
\section{Trace polynomials in GUE matrices}
\label{Sec:GUE}
\subsection{Background}
\label{Subsec:Trace-background}
Let $\set{x_i : i \in S}$ be a collection of non-commuting variables. Informally, a trace polynomial in these variables is a polynomial in these variables and in the values of a tracial functional $\tr$ applied to monomials in them. See \cite{Cebron-Free-convolution} for a formal definition using a universal property.
For the purposes of this paper, we will use a more constructive definition. Let $\alpha \in S_0(n)$. Denote
\[
\tr_\alpha[x_1, \ldots, x_n]
= \prod_{i \text{ in the cycle starting with $0$}} x_i \prod_{\text{other cycles}} \tr\left[ \prod_{i \text{ in the cycle}} x_i \right].
\]
For example, for $\alpha = (024)(137)(56)$,
\[
\tr_\alpha[x_1, \ldots, x_7]
= x_2 x_4 \tr[x_1 x_3 x_7] \tr[x_5 x_6].
\]
Compare with Notation~22.29 in \cite{Nica-Speicher-book}. Then $\tr_\alpha[x_1, \ldots, x_n]$ is a trace monomial, and a trace polynomial is a linear combination of trace monomials.
Next, let $\mc{A}$ be a unital algebra, $\mc{C}$ its center, and $F: \mc{A} \rightarrow \mc{C}$ a unital $\mc{C}$-bimodule linear map. For any $\alpha \in S_0(n)$, we can similarly form $F_\alpha(a_1, \ldots, a_n) \in \mc{A}$, and consider it as the application of the trace monomial $\tr_\alpha[x_1, \ldots, x_n]$ to the elements $a_1, \ldots, a_n$. See \cite{Cebron-Free-convolution}.
We extend the notation to $F_\eta$ for $\eta \in \mf{C}[S_0(n)]$ by linearity.
\subsubsection{Invariant theory of $N \times N$ matrices}
Let $\set{x_{ij}^{(k)} : k \in S, 1 \leq i, j \leq N}$ be formal commuting variables subject to the relation $x_{ji}^{(k)} = (x_{ij}^{(k)})^\ast$. For each $k$, form a matrix $X^{(k)} = (x_{ij}^{(k)})_{i, j = 1}^N$. Let $\mc{A}_{N, S}$ be the collection of all matrices with polynomial entries
\[
\mc{A}_{N, S} = M_N(\mf{C}) \otimes \mf{C}[x_{ij}^{(k)} : k \in S, 1 \leq i, j \leq N].
\]
Let $Y \in \mc{A}_{N, S}$, $Y = P(X^{(k)} : k \in S)$, where each entry $P_{ab}$ is a polynomial in the entries of its argument. We say that $Y$ is equivariant if for any $U \in U_N(\mf{C})$,
\[
P(U X^{(k)} U^\ast : k \in S) = U Y U^\ast.
\]
Denote
\[
\mc{A}_{N, S}^{\text{equiv}}
= \set{\text{equivariant } Y \in \mc{A}_{N, S}}.
\]
Then
\begin{equation}
\label{Eq:Equivariant}
\mc{A}_{N, S}^{\text{equiv}} = \Span{\Tr_\alpha(X^{(k(1))}, \ldots, X^{(k(n))}) : \alpha \in S_0(n), n \geq 0, k(1), \ldots, k(n) \in S},
\end{equation}
where $\Tr$ is the (un-normalized) trace on $M_N(\mf{C})$. Indeed, since for the purposes of this expansion, $x_{ij}^{(k)}$ and $(x_{ij}^{(k)})^\ast$ can be considered as independent variables, this follows directly from the first Procesi-Razmyslov theorem \cite{Procesi}. The result is usually formulated using $GL(N)$-invariance. However, the argument ultimately reduces to Schur-Weyl duality, for which (in the case of inner product spaces) unitary invariance is sufficient.
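For illustration, $Y = \Tr[X^{(1)}] \, X^{(2)}$ is equivariant, and in the notation of \eqref{Eq:Equivariant}, $Y = \Tr_\alpha(X^{(1)}, X^{(2)})$ for $\alpha = (0 2)(1) \in S_0(2)$: the cycle containing $0$ produces the matrix factor $X^{(2)}$, and the cycle $(1)$ produces the scalar factor $\Tr[X^{(1)}]$.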
\subsubsection{Hermitian Brownian motion}
Let $\set{b_{ij}(h) : h \in \mc{H}_{\mf{R}}}$ be $N^2$ standard independent Gaussian processes indexed by the same real Hilbert space, represented on the same probability space. Define the $N \times N$ Hermitian Gaussian process $\set{X(h) : h \in \mc{H}_{\mf{R}}}$ by
\[
X(h)_{ij} =
\begin{cases}
\frac{1}{\sqrt{2N}} (b_{ij}(h) + \sqrt{-1} b_{ji}(h)), & i < j, \\
\frac{1}{\sqrt{N}} b_{ij}(h), & i = j, \\
\frac{1}{\sqrt{2N}} (b_{ij}(h) - \sqrt{-1} b_{ji}(h)), & i > j. \\
\end{cases}
\]
Equivalently, each $X(h)$ is a Hermitian random matrix, whose entries are centered Gaussian variables with the joint covariance
\[
\Exp{X(f)_{ij} X(g)_{k \ell}} = \frac{1}{N} \delta_{i=\ell} \delta_{j=k} \ip{f}{g},
\]
so that
\[
(I \otimes \mf{E})[X(f) X(g)] = \ip{f}{g} I_N.
\]
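As a quick check of the covariance formula, for $i < j$ (using that the $b_{ij}$ are independent with $\Exp{b_{ij}(f) b_{ij}(g)} = \ip{f}{g}$),
\[
\Exp{X(f)_{ij} X(g)_{ji}} = \frac{1}{2N} \Exp{(b_{ij}(f) + \sqrt{-1} b_{ji}(f))(b_{ij}(g) - \sqrt{-1} b_{ji}(g))} = \frac{1}{N} \ip{f}{g},
\]
while the cross terms cancel in $\Exp{X(f)_{ij} X(g)_{ij}} = \frac{1}{2N} (\ip{f}{g} - \ip{f}{g}) = 0$, consistent with the factor $\delta_{i=\ell} \delta_{j=k}$.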
Note that $(I \otimes \mf{E})[\Tr_\alpha(X(h_1), \ldots, X(h_n))]$ is always a scalar. Indeed, since the distribution of this random matrix is unitarily invariant, so is the distribution of its entry-wise expectation, which then has to be a multiple of identity. By a slight abuse of notation, we will denote this scalar-valued functional by $\mf{E}$ again.
\begin{Prop}
\label{Prop:GUE-moments}
Let $D^{(0)}, D^{(1)}, \ldots, D^{(n)}$ be non-random $N \times N$ matrices. For even $n$,
\begin{multline*}
D^{(0)} (I \otimes \mf{E})\left[\Tr_\alpha[X(h_1) D^{(1)}, X(h_2) D^{(2)}, \ldots, X(h_n) D^{(n)}]\right] \\
= \frac{1}{N^{n/2}} \sum_{\pi \in \Part_2(n)} C_\pi(h_1 \otimes \ldots \otimes h_n) D^{(0)} \Tr_{\pi \alpha}[D^{(1)}, D^{(2)}, \ldots, D^{(n)}].
\end{multline*}
In particular,
\[
\Exp{\Tr_\alpha[X(h_1), X(h_2), \ldots, X(h_n)]}
= \sum_{\pi \in \Part_2(n)} \frac{1}{N^{n/2 - \cyc_0(\pi \alpha)}} C_\pi(h_1 \otimes \ldots \otimes h_n).
\]
\end{Prop}
\begin{proof}
\[
\begin{split}
& \left(D^{(0)} (I \otimes \mf{E})\left[\Tr_\alpha[X(h_1) D^{(1)}, X(h_2) D^{(2)}, \ldots, X(h_n) D^{(n)}]\right]\right)_{\ell r} \\
&\quad = \sum_{\substack{\mb{u}, \mb{v} \in [N]^{[0,n]} \\ u(0) = r, v(0) = \ell}} \prod_{i=0}^n D_{v(i), u(\alpha(i))}^{(i)} \Exp{\prod_{i=1}^n X_{u(i), v(i)}(h_i)} \\
&\quad = \sum_{\pi \in \Part_2(n)} \sum_{\substack{\mb{u}, \mb{v} \in [N]^{[0,n]} \\ u(i) = v(\pi(i)) \\ u(0) = r, v(0) = \ell}} \prod_{i=0}^n D_{v(i), u(\alpha(i))}^{(i)} \prod_{(i, j) \in \pi} \Exp{X_{v(\pi(i)), v(i)}(h_i) X_{v(i), v(\pi(i))}(h_{\pi(i)})} \\
&\quad = \sum_{\pi \in \Part_2(n)} \sum_{\substack{\mb{u}, \mb{v} \in [N]^{[0,n]} \\ u(i) = v(\pi(i)) \\ u(0) = r, v(0) = \ell}} \prod_{i=0}^n D_{v(i), u(\alpha(i))}^{(i)} \frac{1}{N^{n/2}} C_\pi(h_1 \otimes \ldots \otimes h_n) \\
&\quad = \sum_{\pi \in \Part_2(n)} \sum_{\substack{\mb{v} \in [N]^{[0,n]} \\ u(0) = r, v(0) = \ell}} \prod_{i=0}^n D_{v(i), v(\pi \alpha(i))}^{(i)} \frac{1}{N^{n/2}} C_\pi(h_1 \otimes \ldots \otimes h_n) \\
&\quad = \sum_{\pi \in \Part_2(n)} \left( D^{(0)} \Tr_{\pi \alpha}[D^{(1)}, D^{(2)}, \ldots, D^{(n)}]\right)_{\ell r} \frac{1}{N^{n/2}} C_\pi(h_1 \otimes \ldots \otimes h_n). \qedhere
\end{split}
\]
\end{proof}
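For a quick sanity check of the second formula, take $n = 2$ and $\alpha = (0)(1 2)$, so that $\Tr_\alpha[X(h_1), X(h_2)] = \Tr[X(h_1) X(h_2)]$. The only pair partition of $[2]$ is $\pi = (1 2)$, with $\pi \alpha = (0)(1)(2)$ and $\cyc_0(\pi \alpha) = 2$, so the proposition gives
\[
\Exp{\Tr[X(h_1) X(h_2)]} = \frac{1}{N^{1 - 2}} \ip{h_1}{h_2} = N \ip{h_1}{h_2},
\]
in agreement with the direct computation $\sum_{i, j = 1}^N \Exp{X(h_1)_{ij} X(h_2)_{ji}} = N^2 \cdot \frac{1}{N} \ip{h_1}{h_2}$.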
\begin{Notation}
Denote
\[
\begin{split}
\mc{A}_{N}(\mc{H}) & = M_N(\mf{C}) \otimes \mf{C}[b_{ij}(h) : h \in \mc{H}_{\mf{R}}, 1 \leq i, j \leq N] \\
& = \set{P(b_{ij}(h): h \in \mc{H}_{\mf{R}}, 1 \leq i, j \leq N) : P \in \mc{A}_{N, \mc{H}_{\mf{R}}}}
\end{split}
\]
and
\[
\begin{split}
\mc{A}_{N}^{\text{equiv}}(\mc{H})
& = \set{P(b_{ij}(h): h \in \mc{H}_{\mf{R}}, 1 \leq i, j \leq N) : P \in \mc{A}_{N, \mc{H}_{\mf{R}}}^{\text{equiv}}} \\
& = \Span{\Tr_\alpha(X(h_1), \ldots, X(h_n)) : \alpha \in S_0(n), n \geq 0, h_1, \ldots, h_n \in \mc{H}_{\mf{R}}}.
\end{split}
\]
Denote by $\mc{A}_{N}^{2, \text{equiv}}(\mc{H})$ the $L^2$ completion of $\mc{A}_{N}^{\text{equiv}}(\mc{H})$ with respect to the expectation functional $\mf{E}$.
\end{Notation}
\subsection{The isomorphisms}
\begin{Thm}
\label{Thm:E-map}
Let $q = 1/N$. Define the evaluation map $\mc{E}$ from $\bigoplus_{n=0}^\infty \mf{C}[S_0(n)] \otimes \mc{H}_{\mf{R}}^{\otimes n}$ to the algebra $\mc{A}_N^{\text{equiv}}(\mc{H})$ by the linear extension of
\[
\mc{E}[\T{\alpha \otimes (h_1 \otimes \ldots \otimes h_n)}] = \Tr_\alpha(X(h_1), \ldots, X(h_n)).
\]
\begin{enumerate}
\item
This map factors through to $\mc{TP}(\mc{H}_{\mf{R}})$.
\item
$\mc{E}$ is a star-homomorphism of algebras.
\item
$\mf{E} \circ \mc{E} = \phi$. It follows that $\mc{E}$ extends to a homomorphism from $\overline{\mc{TP}}(\mc{H}_{\mf{R}})$, and to an isometry from $\mc{F}_{1/N}(\mc{H})$ to $\mc{A}_N^{2, \text{equiv}}(\mc{H})$.
\item
For $A \in \overline{\mc{TP}}_{1/N}(\mc{H}_{\mf{R}})$,
\[
\Exp{\mc{E}[A] \ |\ X(h) : h \in \mc{H}'_{\mf{R}}}
= \mc{E}[\state{A \ |\ \mc{H}'}],
\]
so in particular,
\[
\Exp{\Tr_\alpha(X(h_1), \ldots, X(h_n)) \ |\ X(h) : h \in \mc{H}_{\mf{R}}'}
= \mc{E}[\state{\T{\alpha \otimes (h_1 \otimes \ldots \otimes h_n)} \ |\ \mc{H}'}].
\]
\end{enumerate}
\end{Thm}
\begin{proof}
(a) follows from
\[
\Tr_{\sigma \alpha \sigma^{-1}}(X(h_{\sigma^{-1}(1)}), \ldots, X(h_{\sigma^{-1}(n)})) = \Tr_\alpha(X(h_1), \ldots, X(h_n)),
\]
and (b) from
\[
\Tr_\alpha(X(h_1), \ldots, X(h_n)) \Tr_\beta(X(h_{n+1}), \ldots, X(h_{n+k})) = \Tr_{\alpha \cup \beta}(X(h_1), \ldots, X(h_{n+k}))
\]
and
\[
\Tr_\alpha(X(h_1), \ldots, X(h_n))^\ast = \Tr_{\alpha^{-1}} (X({h}_1), \ldots, X({h}_n)).
\]
(c) follows by comparing Theorem~\ref{Thm:State}(e) and Proposition~\ref{Prop:GUE-moments}.
For part (d), for $h_1, \ldots, h_k \in \mc{H}'_{\mf{R}}$, using earlier parts and properties of conditional expectations,
\[
\begin{split}
& \Exp{\Exp{\mc{E}[A] \ |\ X(h) : h \in \mc{H}'_{\mf{R}}} \Tr_\beta(X(h_1), \ldots, X(h_k))} \\
&\quad = \Exp{\mc{E}[A] \Tr_\beta(X(h_1), \ldots, X(h_k))} \\
&\quad = \Exp{\mc{E}[A] \mc{E}[\T{\beta \otimes (h_1 \otimes \ldots \otimes h_k)}]} \\
&\quad = \state{A \T{\beta \otimes (h_1 \otimes \ldots \otimes h_k)}} \\
&\quad = \state{\state{A \ |\ \mc{H}'} \T{\beta \otimes (h_1 \otimes \ldots \otimes h_k)}} \\
&\quad = \Exp{\mc{E}[\state{A \ |\ \mc{H}'}] \Tr_\beta(X(h_1), \ldots, X(h_k))}.
\end{split}
\]
Since $\mf{E}$ is faithful on $\mc{A}_N^{\text{equiv}}(\mc{H})$, the result follows.
\end{proof}
Numerous corollaries follow by combining Theorem~\ref{Thm:E-map} with results from earlier in the article. We only list a few of them explicitly. Others include analogs of Propositions~\ref{Prop:Chaos-II} (chaos decomposition in the univariate case), \ref{Prop:T-I} (expansion of the Hermite polynomial and stochastic integral), \ref{Prop:Product-I} (product formula for Hermite polynomials and stochastic integrals), and~\ref{Prop:Linearization} (linearization coefficients).
\begin{Cor}
Define the $\alpha$-Hermite polynomial
\[
H_\alpha(X(h_1), \ldots, X(h_n)) = \mc{E}[\I{\alpha \otimes (h_1 \otimes \ldots \otimes h_n)}].
\]
Let $\mc{H} = L^2(\mf{R}_+, dx)$. Then for each $\alpha$, $H_\alpha(X(\chf{[0,t]}))$ is a martingale.
\end{Cor}
For Hermite polynomials of matrix argument, the martingale property was proved in \cite{Lawi} by generating function methods. See Section~\ref{Subsec:Hermite} for the connection.
\begin{Cor}
(Compare with Proposition~\ref{Prop:Single})
\[
\Exp{\Tr_\alpha(X(h)) \ |\ X(h) : h \in \mc{H}'_{\mf{R}}}
= \sum_{\rho \in \Part_{1,2}(n)} \norm{P_{(\mc{H}')^\perp} h}^{2 \abs{\Pair{\rho}}} \Tr_{C_\rho(\alpha)}(X(P_{\mc{H}'} h)).
\]
In particular, for $\mc{H} = L^2(\mf{R}_+, dx)$ and $\mc{H}_s = L^2([0,s], dx)$, for $s \leq t$,
\[
\Exp{\Tr_\alpha(X(\chf{[0,t]})) \ |\ X(h) : h \in \mc{H}_s}
= \sum_{\rho \in \Part_{1,2}(n)} (t-s)^{\abs{\Pair{\rho}}} \Tr_{C_\rho(\alpha)}(X(\chf{[0,s]})).
\]
It follows that $\mc{L}$ is the generator of the process $\set{X(t) = X(\chf{[0,t]}) : t \geq 0}$, in the sense that for a formal univariate trace monomial $\tr_\alpha(x)$,
\[
\left.\frac{d}{dt}\right|_{t=s} \Exp{\tr_\alpha(X(t)) \ |\ X(h) : h \in \mc{H}_s}
= \tr_{\mc{L} \alpha}(X(s)).
\]
\end{Cor}
\begin{Remark}
In the case $\mc{H}_{\mf{R}} = L^2(\mf{R}_+, dx)$ and $F \in L^2(\mf{R}_+^n, dx^{\otimes n})$, we may identify $\mc{E}[\I{\eta \otimes_s F}]$ with a stochastic integral
\[
\int F(t_1, \ldots, t_n) \,\Tr_\eta[dX(t_1), \ldots, dX(t_n)].
\]
Indeed, consider first $F = \chf{J_1} \otimes \ldots \otimes \chf{J_n}$, where all $J_j$ are disjoint. Then from Proposition~\ref{Prop:T-I},
\[
\I{\eta \otimes (\chf{J_1} \otimes \ldots \otimes \chf{J_n})} = \T{\eta \otimes (\chf{J_1} \otimes \ldots \otimes \chf{J_n})}
\]
and so
\[
\mc{E}[\I{\eta \otimes (\chf{J_1} \otimes \ldots \otimes \chf{J_n})}]
= \Tr_\eta[X(\chf{J_1}), \ldots, X(\chf{J_n})]
\]
which we define to be
\[
\int \chf{J_1}(t_1) \ldots \chf{J_n}(t_n) \,\Tr_\eta[dX(t_1), \ldots, dX(t_n)].
\]
A general $F \in L^2(\mf{R}_+^n, dx^{\otimes n})$ can be approximated by linear combinations of such functions in the $L^2$ norm, and by Lemma~\ref{Lemma:L2-approximation} we also get the approximation of $\I{\eta \otimes_s F}$.
\end{Remark}
\begin{Cor}[Chaos decomposition IV; compare with Proposition~\ref{Chaos:III}]
Each element $A \in \mc{A}_N^{2, \text{equiv}}(L^2(\mf{R}_+, dx))$ has a unique decomposition
\[
A = \sum_{n=0}^\infty \sum_{\lambda \in \Par(n+1; \leq N)} \sum_{i, j = 1}^{d_\lambda} \int F_{ij}^{\lambda}(t_1, \ldots, t_n) \,\Tr_{W(E_{ij}^\lambda)}[dX(t_1), \ldots, dX(t_n)],
\]
where $F_{ij}^\lambda \in L^2(\Delta(\mf{R}_+^n), dx^{\otimes n})$ and
\[
\sum_{n=0}^\infty \sum_{\lambda \in \Par(n+1; \leq N)} n_\lambda \sum_{i, j = 1}^{d_\lambda} \norm{F_{i j}^\lambda}^2 < \infty
\]
for $n_\lambda = \frac{\abs{SS_N(\lambda)}}{N^{n+1}}$.
\end{Cor}
\subsubsection{Hermite polynomials of matrix argument}
\label{Subsec:Hermite}
\begin{Remark}
If $\alpha \in S(n)$ rather than $S_0(n)$, in the case of a single variable, $\Tr_\alpha[X]$ depends only on the conjugacy class of $\alpha$, in other words, on the integer partition $\lambda \in \Par(n)$. Moreover
\[
\Tr_\lambda[X] = p_\lambda(x_1, \ldots, x_N),
\]
where $\set{x_1, \ldots, x_N}$ are (random) eigenvalues of $X$ and $p_\lambda$ is the power sum symmetric polynomial. For $X = X(h)$, we also get non-homogeneous symmetric polynomials
\[
h_\lambda(x_1, \ldots, x_N) = \mc{E}[\I{\lambda \otimes h^{\otimes n}}].
\]
With respect to the inner product induced from $\mc{F}_{1/N}(\mf{C})$, these polynomials are orthogonal for different $n$ but not necessarily for different $\lambda \in \Par(n)$. We now recall a different and more familiar basis of polynomials which are fully orthogonal with respect to this inner product.
\end{Remark}
\begin{Defn}
Fix $N \in \mf{N}$, and denote
\[
D^\ast = \sum_{i=1}^N \frac{\partial^2}{\partial x_i^2} + \sum_{i \neq j} \frac{1}{x_i - x_j} \left( \frac{\partial}{\partial x_i} - \frac{\partial}{\partial x_j} \right)
\]
and
\[
E^\ast = \sum_{i=1}^N x_i \frac{\partial}{\partial x_i}.
\]
For $\lambda \in \Par(n)$, the Hermite polynomial of matrix argument (for $\beta = 2$) is the symmetric polynomial in $\set{x_1, \ldots, x_N}$ with leading term $\frac{\abs{\lambda}!}{c_\lambda} s_\lambda$ which is an eigenfunction of the operator $D^\ast - E^\ast$ with eigenvalue $-n$ (note a misprint in \cite{Dumitriu-MOPS}). Here (see Corollary~7.1.7.4 in \cite{Stanley-volume-2})
\[
s_\lambda = \frac{1}{n!}\sum_{\nu \in \Par(n)} \frac{n!}{z_\nu} \chi^\lambda(\nu) p_\nu = \frac{1}{n!} \sum_{\alpha \in S(n)} \chi^\lambda(\alpha) p_\alpha
\]
is the Schur polynomial and $c_\lambda = \prod_{(i,j) \in \lambda} (\lambda_i + \lambda_j' - i - j + 1)$ is the product of the hook lengths. See \cite{Baker-Forrester-CS-model,Dumitriu-MOPS,Forrester-book} for more details.
\end{Defn}
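For example, for $\lambda = (2,1)$, the hook lengths are $3$, $1$, $1$, so $c_\lambda = 3$ and the leading term of the corresponding Hermite polynomial is $\frac{3!}{3} s_{(2,1)} = 2 s_{(2,1)}$.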
\begin{Prop}
\label{Prop:DE}
For $\eta \in \mf{C}[S_0(n)]$,
\[
D^\ast \mc{E}[\T{\eta \otimes h^{\otimes n}}]
= 2 N \mc{E}[\T{\mc{L}(\eta) \otimes h^{\otimes (n-2)}}]
\]
and
\[
E^\ast \mc{E}[\T{\eta \otimes h^{\otimes n}}]
= \mc{E}[ E\T{\eta \otimes h^{\otimes n}}].
\]
\end{Prop}
\begin{proof}
For a partition $\mu = 1^{k_1} 2^{k_2} \ldots m^{k_m}$, a (long) computation (see \cite{Ans-Gai-Han-He-Mehl} for a similar computation in the unitary case) shows that
\begin{multline*}
D^\ast(p_\mu)
= p_\mu \Biggl[ 2 k_2 \frac{N^2}{p_2} + 2 N \sum_{l=3}^{m} k_l l \frac{p_{l-2}}{p_l}
+ \sum_{l=2}^m k_l(k_l-1)l^2 \frac{p_{2l-2}}{p_l^{2}} + k_1 (k_1 - 1) \frac{N}{p_1^2} \\
+ \sum_{2\leq l_1 < l_2\leq m} 2 k_{l_1}k_{l_2}l_1l_2 \frac{p_{l_1+l_2-2}} {p_{l_1}p_{l_2}}
+ 2 \sum_{l = 2}^m k_1 k_l l \frac{p_{l-1}}{p_1 p_l}
+ \sum_{l=3}^{m} k_l l \sum_{a=1}^{l-3} \frac{p_ap_{l-2-a}}{p_l}\Biggr]
\end{multline*}
On the other hand, for a permutation $\alpha$ of cycle type $\mu = 1^{k_1} 2^{k_2} \ldots m^{k_m}$, a contraction by a transposition
\begin{itemize}
\item
Eliminates a cycle of length $2$: in $k_2$ cases, with weight $q^{-1}$.
\item
Turns a cycle of length $l > 2$ into a cycle of length $l-2$: in $k_l l$ cases, with weight $1$.
\item
Splits a cycle of length $l$ into those of different lengths $a < l-2-a$: in $k_l l$ cases, with weight $q$.
\item
Splits a cycle of length $l$ into those of equal lengths $(l-2)/2$: in $k_l l/2$ cases, with weight $q$.
\item
Eliminates two cycles of length $1$: in $k_1 (k_1 - 1)/2$ cases, with weight $1$.
\item
Eliminates a cycle of length $1$ and turns a cycle of length $l$ into a cycle of length $l-1$: in $k_1 k_l l$ cases, with weight $q$.
\item
Glues together two cycles of different lengths $2 \leq l_1 < l_2$: in $k_{l_1} k_{l_2} l_1 l_2$ cases, with weight $q$.
\item
Glues together two cycles of equal length $l \geq 2$: in $k_l (k_l - 1) l^2/2$ cases, with weight $q$.
\end{itemize}
Comparing these two results, we see that $D^\ast p_\mu$ corresponds to $2 N \mc{E}[\T{\mc{L} \mu}]$, giving the first relation. The second relation follows from $E^\ast p_\mu = n p_\mu$.
\end{proof}
\begin{Cor}
Let $\norm{h}^2 = \frac{1}{N}$. Then $\mc{E}[\I{\chi^\lambda \otimes h^{\otimes n}}]$ is a multiple of the Hermite polynomial of matrix argument.
\end{Cor}
\begin{proof}
Since their leading terms differ by a factor of $\frac{\abs{\lambda}!}{c_\lambda n!}$, it suffices to verify that $\mc{E}[\I{\chi^\lambda \otimes h^{\otimes n}}]$ is an eigenfunction of $D^\ast - E^\ast$ with eigenvalue $-n$. Indeed, for $\norm{h}^2 = \frac{1}{N}$,
\[
(D^\ast - E^\ast) \mc{E}[\T{\eta \otimes h^{\otimes n}}]
= \mc{E}[(2 \mc{L} - E) \T{\eta \otimes h^{\otimes n}}].
\]
So using Proposition~\ref{Prop:DE},
\[
(D^\ast - E^\ast) \mc{E}[\I{\eta \otimes h^{\otimes n}}]
= \mc{E}[(2 \mc{L} - E) \I{\eta \otimes h^{\otimes n}}]
= - n \mc{E}[\I{\eta \otimes h^{\otimes n}}]. \qedhere
\]
\end{proof}
\subsubsection{$q=0$}
\label{Subsec:q=0}
The scaling used throughout most of the article (corresponding to the un-normalized trace) gives well-defined inner products and Fock space structure for $q=0$, see equation~\eqref{Eq:q=0}. However, the contractions, and so the operators $\T{\alpha \otimes F}$, may not be defined. In this section we consider a different scaling, corresponding to the normalized trace. Under this normalization, we get well-defined contractions, but a more degenerate Fock space structure.
\begin{Defn}
We will now denote $\alpha \otimes_s F$ by $\tilde{I}(\alpha \otimes_s F)$. Define the normalized contractions
\[
\begin{split}
\tilde{C}_\pi (\alpha)
& = q^{\cyc_0(\alpha) - \cyc_0((\pi \alpha)|_{\supp{\pi}^c})} C_\pi(\alpha) \\
& = q^{\cyc_0(\alpha) - \cyc_0(\pi \alpha) + \ell}
{P^{[0, n] \setminus \supp{\pi}}_{[0, n-2 \ell]} (\pi \alpha)|_{\supp{\pi}^c}},
\end{split}
\]
and the operators
\begin{equation}
\label{Eq:Normalized-relation}
\tilde{T}(\alpha \otimes_s F)
= \sum_{\pi \in \Part_{1,2}(n)} \tilde{I}(\tilde{C}_\pi (\alpha) \otimes_s C_\pi(F)),
\end{equation}
so that
\[
\tilde{I}(\alpha \otimes_s F)
= \sum_{\pi \in \Part_{1,2}(n)} (-1)^{n - \abs{\pi}} \tilde{T}(\tilde{C}_\pi (\alpha) \otimes_s C_\pi(F))
\]
and
\[
\tilde{T}(\alpha \otimes_s F) \tilde{T}(\beta \otimes_s G) = \tilde{T}((\alpha \cup \beta) \otimes_s (F \otimes G)).
\]
Finally, define
\[
\state{\tilde{I}(\alpha \otimes_s F)} = 0 \quad \text{for } n \geq 1, \qquad \state{\tilde{I}({(0)})} = 1.
\]
\end{Defn}
\begin{Remark}
If we assume that $\tilde{T}(\alpha \otimes F) = q^{\cyc_0(\alpha)} \T{\alpha \otimes F}$, then for $q=0$, such a multiple is zero unless $\alpha$ is a single cycle (containing $0$). As seen below, this need not be the case in general.
\end{Remark}
\begin{Remark}
\label{Remark:Standard-form}
Let $\lambda$ be an interval partition of $[0,n]$. As discussed in Section~\ref{Sec:Centralizer}, each permutation $\alpha \in S_0(n)$ is conjugate, under the action of $S(n)$, to a permutation with cycle structure $\lambda$ in which the elements in each cycle, as well as the cycles, appear in increasing order. Such a permutation is not unique. In the results below, we will only consider permutations of this type; the results are easily extended to general $\alpha \in S_0(n)$, but the notation gets heavier.
\end{Remark}
\begin{Remark}
Let $\mc{F}_f(\mc{H}) = \mf{C} \Omega \oplus \bigoplus_{n=1}^\infty \overline{\mc{H}^{\otimes n}}$ be the full Fock space of $\mc{H}$. For $f \in \mc{H}_{\mf{R}}$, denote by $S(f)$ the semicircular element corresponding to $f$ in its standard representation on $\mc{F}_f(\mc{H})$. Denote by $\Gamma_f(\mc{H}_{\mf{R}})$ the algebra generated by $\set{S(h) : h \in \mc{H}_{\mf{R}}}$, and by $\Phi$ the vacuum state on it. Denote by $U(h_1 \otimes \ldots \otimes h_n)$ the multivariate Chebyshev polynomials, that is, the elements of $\Gamma_f(\mc{H}_{\mf{R}})$ satisfying $U(h_1 \otimes \ldots \otimes h_n) \Omega = h_1 \otimes \ldots \otimes h_n$. They are also determined by the recursion
\[
S(f) U(h_1 \otimes \ldots \otimes h_n) = U(f \otimes h_1 \otimes \ldots \otimes h_n) + \ip{f}{h_1} U(h_2 \otimes \ldots \otimes h_n).
\]
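For example, $U(h) = S(h)$, and applying the recursion with $n = 1$ gives $U(f \otimes h) = S(f) S(h) - \ip{f}{h}$.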
More generally, for $F \in \overline{\mc{H}_{\mf{R}}^{\otimes n}}$, $U(F)$ is the unique element of the weak closure of $\Gamma_f(\mc{H}_{\mf{R}})$ satisfying $U(F) \Omega = F$.
\end{Remark}
\begin{Prop}
Let $q=0$, and $\lambda, \mu \in S_0(n)$ as in Remark~\ref{Remark:Standard-form}.
\begin{enumerate}
\item
$\tilde{C}_\pi ({\lambda}) = 0$ unless $\pi$ is non-crossing and $\pi \leq {\lambda}$ as a cycle partition, in which case
\[
\tilde{C}_\pi ({\lambda})
= {P^{[0, n] \setminus \supp{\pi}}_{[0, n-2 \ell]} (\pi {\lambda})|_{\supp{\pi}^c}}.
\]
\item
Define the map $\mc{E}_0 : \mc{TP}(\mc{H}_{\mf{R}}) \rightarrow \Gamma_f(\mc{H}_{\mf{R}})$ by
\[
\mc{E}_0[\tilde{T}(\alpha \otimes (h_1 \otimes \ldots \otimes h_n))] = \Phi_\alpha[S(h_1), \ldots, S(h_n)].
\]
Then $\mc{E}_0$ extends to a homomorphism from $\overline{\mc{TP}}(\mc{H}_{\mf{R}})$, and to an isometric isomorphism from $\mc{F}_0(\mc{H})$ to $L^2(\Gamma_f(\mc{H}_{\mf{R}}), \Phi) \simeq \mc{F}_f(\mc{H})$.
\item
If $\lambda$ is not a single cycle (containing $0$), then $\tilde{I}({\lambda} \otimes F) = 0$. Otherwise,
\[
\mc{E}_0[\tilde{I}({(01 \ldots n)} \otimes (h_1 \otimes \ldots \otimes h_n))] = U(h_1 \otimes \ldots \otimes h_n)
\]
and more generally
\[
\mc{E}_0[\tilde{I}({(01 \ldots n)} \otimes F)] = U(F).
\]
\end{enumerate}
\end{Prop}
\begin{proof}
Note first that
\[
\cyc_0(\alpha) - \cyc_0(\pi \alpha) + \ell = - \abs{\alpha} + \abs{\pi^{-1} \alpha} + \abs{\pi} \geq 0.
\]
Moreover, for $\alpha = \lambda$ as in Remark~\ref{Remark:Standard-form}, this is equal to $0$ if and only if $\pi \leq \lambda$, and on each block of $\lambda$, $\pi$ is non-crossing. Therefore
\[
\tilde{T}({\lambda} \otimes F)
= \sum_{\substack{\pi \in \mc{NC}_{1,2}(n) \\ \pi \leq {\lambda}}} \tilde{I}(\tilde{C}_\pi ({\lambda}) \otimes C_\pi(F)),
\]
and so for $F = h_1 \otimes \ldots \otimes h_n$,
\[
\state{\tilde{T}({\lambda} \otimes F)}
= \sum_{\substack{\pi \in \mc{NC}_{2}(n) \\ \pi \leq {\lambda}}} C_\pi(F)
= \Phi[\Phi_\lambda[S(h_1), \ldots, S(h_n)]].
\]
The homomorphism property follows as before, so $\mc{E}_0$ is an isometric isomorphism. In particular, if $(0)$ is a cycle in $\lambda$, then
\begin{equation}
\label{Eq:T-scalar}
\tilde{T}({\lambda} \otimes F) = \state{\tilde{T}({\lambda} \otimes F)} = \sum_{\substack{\pi \in \mc{NC}_{2}(n) \\ \pi \leq {\lambda}}} C_\pi(F)
\end{equation}
is a scalar. Next,
\[
\tilde{I}(\lambda \otimes F)
= \sum_{\substack{\pi \in \mc{NC}_{1,2}(n) \\ \pi \leq \lambda}} (-1)^{n - \abs{\pi}} \tilde{T}(\tilde{C}_\pi ({\lambda}) \otimes C_\pi(F)).
\]
Suppose $\lambda$ contains a cycle $V$ not containing $0$. Then the sum above is
\[
\begin{split}
\tilde{I}(\lambda \otimes F)
& = \sum_{\substack{\pi \in \mc{NC}_{1,2}([n] \setminus V) \\ \pi \leq \lambda}} (-1)^{n - \abs{V} - \abs{\pi}} \tilde{T}(\tilde{C}_{\pi}({\lambda|_{V^c}}) \otimes C_{\pi}(\bigotimes _{i \in [n] \setminus V} h_i)) \\
&\qquad \cdot \sum_{\tau \in \mc{NC}_{1,2}(V)} (-1)^{\abs{V} - \abs{\tau}} \tilde{T}(\tilde{C}_{\tau}({V}) \otimes C_{\tau}( \bigotimes_{i \in V} h_i)).
\end{split}
\]
Since $V$ does not contain $0$, denoting $F_V = \bigotimes_{i \in V} h_i$ and using \eqref{Eq:T-scalar}, the second of these sums is
\[
\sum_{\tau \in \mc{NC}_{1,2}(V)} (-1)^{\abs{V} - \abs{\tau}} \sum_{\sigma \in \mc{NC}_{2}(\Sing{\tau})} C_\sigma C_\tau F_V
= \sum_{\rho \in \mc{NC}_2(V)} \sum_{S \subseteq \Pair{\rho}} (-1)^{\abs{V} - \abs{S}} C_\rho F_V = 0.
\]
On the other hand, $\mc{E}_0[\tilde{I}({(01 \ldots n)} \otimes (h_1 \otimes \ldots \otimes h_n))]$ is a polynomial in $\set{S(h_1), \ldots, S(h_n)}$, with leading term $S(h_1) \ldots S(h_n)$, and
\[
\begin{split}
& \Phi[\mc{E}_0[\tilde{I}({(01 \ldots k)} \otimes (g_1 \otimes \ldots \otimes g_k))]^\ast \mc{E}_0[\tilde{I}({(01 \ldots n)} \otimes (h_1 \otimes \ldots \otimes h_n))]] \\
&\quad = \phi[\tilde{I}({(01 \ldots k)} \otimes (g_1 \otimes \ldots \otimes g_k))^\ast \tilde{I}({(01 \ldots n)} \otimes (h_1 \otimes \ldots \otimes h_n))] \\
&\quad = \delta_{n=k} \prod_{i=1}^n \ip{h_i}{g_i} \\
&\quad = \Phi[U(g_1 \otimes \ldots \otimes g_k)^\ast U(h_1 \otimes \ldots \otimes h_n)]
\end{split}
\]
since there is only one non-crossing pair partition in $\mc{P}(n,n)$. Since $\mc{E}_0$ is an isometry, the final statement follows.
\end{proof}
Finally we consider the case of $q = - \frac{1}{N}$. We will use the notation $C_\pi^{(q)}(\alpha)$, $\mathrm{T}^{(q)}[\alpha \otimes F]$, $\phi_q$, and the multiplication $\cdot_q$ to indicate the dependence on $q$.
\begin{Prop}
\label{Prop:Negative}
Define the map $R : \mc{TP}(\mc{H}_{\mf{R}}) \rightarrow \mc{TP}(\mc{H}_{\mf{R}})$ by
\[
R(\alpha \otimes F) = (-1)^{\abs{\alpha}} \alpha \otimes F,
\]
so that $R(\I{\alpha \otimes F}) = (-1)^{\abs{\alpha}} \I{\alpha \otimes F}$. Then
\begin{itemize}
\item
$\mathrm{T}^{(-q)}(R(\alpha \otimes F)) = R(\mathrm{T}^{(q)}(\alpha \otimes F))$
\item
$R$ is a star-isomorphism from $(\mc{TP}(\mc{H}_{\mf{R}}), \cdot_q)$ to $(\mc{TP}(\mc{H}_{\mf{R}}), \cdot_{-q})$
\item
For $\alpha \in S_0(2n)$, $R$ satisfies
\[
\phi_{-q}[R(\mathrm{T}^{(q)} (\alpha \otimes F))] = (-1)^n \phi_{q}[\mathrm{T}^{(q)}(\alpha \otimes F)].
\]
\item
For $q = 1/N$, $R$ is an isometry from $\mc{F}_{1/N}(\mc{H}_{\mf{R}})$ onto $\mc{F}_{-1/N}(\mc{H}_{\mf{R}})$.
\end{itemize}
\end{Prop}
\begin{proof}
It suffices to consider the case $\mc{H} = \mf{C}$. $R$ is clearly a bijection. We first compute
\[
\begin{split}
\mathrm{T}^{(-q)}(\alpha)
& = \sum_{\pi \in \Part_{1,2}(n)} \I{C_\pi^{(-q)}(\alpha)} \\
& = \sum_{\pi \in \Part_{1,2}(n)} (-1)^{\cyc_0((\pi \alpha)|_{\supp{\pi}^c}) - \cyc_0(\pi \alpha) + \ell} \I{C_\pi^{(q)}(\alpha)} \\
& = \sum_{\pi \in \Part_{1,2}(n)} (-1)^{- 2 \ell - \abs{(\pi \alpha)|_{\supp{\pi}^c}} + \abs{\pi \alpha} + \ell} \I{C_\pi^{(q)}(\alpha)} \\
& = (-1)^{\abs{\alpha}} \sum_{\pi \in \Part_{1,2}(n)} (-1)^{- \ell + \abs{\pi}} (-1)^{\abs{(\pi \alpha)|_{\supp{\pi}^c}}} \I{C_\pi^{(q)}(\alpha)} \\
& = (-1)^{\abs{\alpha}} \sum_{\pi \in \Part_{1,2}(n)} (-1)^{\abs{(\pi \alpha)|_{\supp{\pi}^c}}} \I{C_\pi^{(q)}(\alpha)} \\
& = (-1)^{\abs{\alpha}} \sum_{\pi \in \Part_{1,2}(n)} \I{R(C_\pi^{(q)}(\alpha))} \\
& = (-1)^{\abs{\alpha}} R(\mathrm{T}^{(q)}(\alpha)).
\end{split}
\]
Since
\[
\begin{split}
R(\mathrm{T}^{(q)}(\alpha)) \cdot_{-q} R(\mathrm{T}^{(q)}(\beta))
& = (-1)^{\abs{\alpha} + \abs{\beta}} \mathrm{T}^{(-q)}(\alpha) \cdot_{-q} \mathrm{T}^{(-q)}(\beta) \\
& = (-1)^{\abs{\alpha \cup \beta}} \mathrm{T}^{(-q)}(\alpha \cup \beta)
= R(\mathrm{T}^{(q)}(\alpha) \cdot_q \mathrm{T}^{(q)}(\beta)),
\end{split}
\]
$R$ is a homomorphism. It clearly commutes with the adjoint operation. Since the action of the state $\phi$ on $\I{\alpha \otimes F}$ does not depend on $q$, and $\ip{A}{B}_q = \phi_q[B^\ast A]$, the isometry property follows from the homomorphism property. Finally, for $\alpha \in S_0(2n)$,
\[
\begin{split}
\phi_{q}\left[(-1)^{\abs{\alpha}} \mathrm{T}^{(q)}(\alpha \otimes F)\right]
& = \sum_{\pi \in \Part_2(2n)} (-1)^{\abs{\alpha}} q^{\abs{\pi \alpha}} C_\pi(F) \\
& = \sum_{\pi \in \Part_2(2n)} (-1)^{\abs{\pi}} (-q)^{\abs{\pi \alpha}} C_\pi(F) \\
& = (-1)^n \sum_{\pi \in \Part_2(2n)} (-q)^{\abs{\pi \alpha}} C_\pi(F) \\
& = (-1)^n \phi_{-q}\left[\mathrm{T}^{(-q)}(\alpha \otimes F)\right]. \qedhere
\end{split}
\]
\end{proof}
\newcommand{\etalchar}[1]{$^{#1}$}
\def\cprime{$'$}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
  \href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\section{Introduction}
String theory, as well as other higher-dimensional theories, suggests
the existence of a landscape of vacua with diverse properties. The
vacua are characterized by different compactifications of extra
dimensions, by branes wrapped around extra dimensions in different
ways, by different values of fluxes, etc. The number of
possibilities is combinatorially large and can be as high as $10^{1000}$
\cite{Susskind}. In the cosmological context, high-energy vacua drive
exponential inflationary expansion of the universe. Transitions
between different vacua occur through tunneling \cite{CdL} and quantum
diffusion \cite{AV83,Linde86}, with regions of different vacua
nucleating and expanding in the never-ending process of eternal
inflation. As a result, the entire landscape of vacua is explored.
This ``multiverse'' picture has been a subject of much recent
research.
Most of this recent work has focused on the sector of the landscape
with 3 large spatial dimensions and the rest of the dimensions
compactified. If all vacua had this property, we would be able to
describe the entire multiverse by a 4-dimensional effective
theory. However, the landscape generally includes states with
different numbers of compactified dimensions. Even a simple toy model
based on a 6-dimensional Einstein-Maxwell theory exhibits vacua
with 0, 2, and 4 compact dimensions. We therefore
expect the multiverse to include regions of different (effective)
dimensionality \cite{Linde88}.
Topology-changing transitions between vacua with different numbers of
compact dimensions have been discussed in
Refs.~\cite{Linde88,Decompactification,BPSPV09,Carroll09} in the
context of Einstein-Maxwell theory compactified on a sphere. The
radius of the compact dimensions in this theory is stabilized by a
magnetic flux through the sphere, but the resulting vacua are only
metastable and can decay by quantum tunneling through a
barrier.\footnote{The model of Ref.~\cite{Linde88} had an extra
ingredient -- the inflaton field -- and a different mechanism of
decompactification. The inflaton undergoes quantum diffusion in the
course of eternal inflation, and the compact dimensions are
destabilized whenever the inflaton potential gets below certain
critical values.} The tunneling results in the formation of an
expanding bubble, inside of which the compact sphere is destabilized
and its radius grows with time. Transitions of this sort are called
{\it decompactification} transitions. The {\it compactification}
tunneling transitions go in the opposite direction: the number of
compact dimensions increases from parent to daughter vacuum. Such
transitions have been discussed in the interesting recent paper by
Carroll, Johnson and Randall \cite{Carroll09}, who also obtained an
estimate of the transition rate. Our main goal in the present paper
is to elucidate the nature of the tunneling instantons and the
spacetime geometry resulting from topology-changing transitions.
In the next Section we begin with a brief review of flux vacua in the
$6d$ Einstein-Maxwell theory. Compactification tunneling transitions
from $6d$ to effectively $2d$ and $4d$ spacetimes are discussed in
Sections III and IV, respectively, and decompactification transitions
are discussed in Section V. Our conclusions are summarized and
discussed in Section VI.
\section{The landscape of $6d$ Einstein-Maxwell theory} \label{$6d$ landscape}
We shall consider a 6-dimensional Einstein-Maxwell theory,
\begin{equation}
S=\int{d^6 {x} \sqrt{-\tilde g} \left({1\over 2} {\tilde R}^{(6)}
- {1\over 4} F_{MN} F^{MN} - {\Lambda_6}\right)},
\label{EM-6D-action}
\end{equation}
where $M,N = 0...5$ label the six-dimensional coordinates, $F_{MN}$ is
the Maxwell field strength, ${\Lambda_6}$ is the six-dimensional
cosmological constant, ${\tilde R}^{(6)}$ is the 6-dimensional
curvature scalar, and we use units in which the $6d$ Planck mass is $M_6 = 1$.
We shall assume that $\Lambda_6>0$. This model has a
long pedigree \cite{Freund-Rubin,EM6D}; more recently it has been
discussed as a toy model for string theory compactification
\cite{flux-compactifications}.
Flux vacua are described by solutions of this model with the spacetime
metric given by a $(6-q)$-dimensional maximally symmetric space of
constant curvature and a static $q$-dimensional sphere of
fixed radius, namely a metric of the form,
\begin{equation}
ds^2= {\tilde g}_{MN} dx^M dx^N = {\tilde g}_{\mu \nu} d x^{\mu}
d x^{\nu} + R^2 d\Omega_q^2~.
\label{6D-metric}
\end{equation}
Here, ${\tilde g}_{\mu\nu}$ is the metric of the $(6-q)$-dimensional
de Sitter, Minkowski, or anti-de Sitter space, and $d\Omega_q^2$ is
the metric on a unit $q$-sphere. We shall use the convention that the
Greek indices label the large dimensions and take values
$\mu,\nu=0,1,...,5-q$. The compact dimensions will be labeled by
indices from the beginning of the Latin alphabet, $a,b=6-q,...,5.$
The simplest solution of the model, corresponding to $q=0$, is the
$6d$ de Sitter space with $F_{MN}=0$.
For $q=2$, the only ansatz for the Maxwell field that is
consistent with the symmetries of the metric is a monopole-like
configuration on the extra-dimensional 2-sphere \cite{EM6D},
\begin{equation}
F_{ab}= {Q_m\over{4\pi}} \sqrt{g_2} \epsilon_{ab},
\label{Fab}
\end{equation}
where $Q_m$ is the corresponding magnetic charge and $g_2$ is the
determinant of the metric on a unit 2-sphere. All other
components of $F_{MN}$ are equal to zero. We shall assume that our
model includes electrically charged particles with elementary charge $e$
(not shown in the action (\ref{EM-6D-action})). Then the magnetic
charge is subject to the usual charge quantization condition,
\begin{equation}
Q_m={2\pi n\over{e}} ,
\label{Q}
\end{equation}
where $n$ is an integer. This sector
of the model can be described in terms of an effective $4d$ field
theory. Representing the $6d$ metric as
\begin{equation}
ds^2= e^{- \psi(x)/M_4} g_{\mu \nu}
dx^{\mu} dx^{\nu} + e^{\psi(x)/M_4} R^{2}~d\Omega_2^2
\label{psimetric}
\end{equation}
and integrating over the internal manifold, we obtain
\begin{equation}
S= \int{d^4 x \sqrt{-g}\left({1\over 2} M_4^2 R^{(4)} - {1\over 2}
\partial_{\mu} \psi \partial^{\mu} \psi - V(\psi)\right)}.
\label{4Deffectiveaction}
\end{equation}
Here, the potential for the size of the internal dimension
is
\begin{equation}
V(\psi)= 4\pi \left({{n^2}\over{8 e^2 R^2}}
e^{-3\psi/M_4} - e^{-2\psi/M_4} + {R^2 \Lambda_6} e^{-\psi/M_4}
\right)
\label{V}
\end{equation}
and
\begin{equation}
M_4^2 = 4 \pi R^2 ,
\label{M4}
\end{equation}
is the $4d$ Planck mass.
For any particular value of $n$, we can set
the minimum of the potential to be at $\psi =0$, by setting
\begin{equation}
R^2 = {1\over{\Lambda_6}} \left(1 - \sqrt{1 - {{3 n^2
\Lambda_6}\over {8 e^2}}}\right).
\label{R2}
\end{equation}
The $4d$ cosmological constant is the value of the potential at
this minimum and is given by
\begin{equation}
\Lambda_4 =
V(\psi=0,n)={{4\pi}\over {3}} \left(1- 2 \sqrt{1 - {{3 n^2
\Lambda_6}\over {8 e^2}}}\right)~.
\label{Lambda4}
\end{equation}
Positive-energy vacua are obtained for
\begin{equation}
n>n_0 \equiv \left({{2}\over{\Lambda_6}}\right)^{1/2} e .
\end{equation}
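As a cross-check of Eqs.~(\ref{R2}) and (\ref{Lambda4}), one can verify numerically that $\psi=0$ is a stationary point of the potential (\ref{V}) and that its value there reproduces $\Lambda_4$. The sketch below does this in Python; the parameter values are those of Fig.~\ref{potential}, and the helper name \texttt{vacuum} is our own.

```python
import math

# Parameters as in Fig. 1 of the text (an illustrative choice): e^2 = 2,
# Lambda_6 = 1e-4, in units M_6 = 1.
e2, L6 = 2.0, 1.0e-4

def vacuum(n):
    """R^2 from Eq. (R2), M_4 from Eq. (M4), and the potential V of Eq. (V)."""
    s = math.sqrt(1.0 - 3.0 * n**2 * L6 / (8.0 * e2))
    R2 = (1.0 - s) / L6
    M4 = math.sqrt(4.0 * math.pi * R2)
    def V(psi):
        return 4.0 * math.pi * (n**2 / (8.0 * e2 * R2) * math.exp(-3.0 * psi / M4)
                                - math.exp(-2.0 * psi / M4)
                                + R2 * L6 * math.exp(-psi / M4))
    return R2, M4, V

R2, M4, V = vacuum(220)          # n = 220 > n_0, so this vacuum is de Sitter

# psi = 0 should be a stationary point of V (finite-difference derivative ~ 0) ...
h = 1.0e-6 * M4
dV = (V(h) - V(-h)) / (2.0 * h)

# ... and V(0) should equal Lambda_4 of Eq. (Lambda4)
s = math.sqrt(1.0 - 3.0 * 220**2 * L6 / (8.0 * e2))
Lambda4 = 4.0 * math.pi / 3.0 * (1.0 - 2.0 * s)
print(dV, V(0.0), Lambda4)
```

Running the same check for $n=180<n_0$ gives $\Lambda_4<0$, i.e.\ an $AdS_4\times S_2$ vacuum, as expected.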
The potential also has a maximum at some $\psi > 0$ and approaches
zero at $\psi\to\infty$, as illustrated in Fig.~\ref{potential}. It is clear from the figure that a positive-energy vacuum can decay by tunneling through a barrier, leading to decompactification of extra dimensions\footnote{Perturbative stability in similar models has been discussed in \cite{Bousso:2002fi,Krishnan:2005su}.}.
\begin{figure}
\centering\leavevmode
\epsfysize=8cm \epsfbox{compacFIG-1.eps}
\put(-10,55){\Large {\bf {$\psi$}}}
\put(-340,240){\Large {\bf {$V(\psi)$}}}
\caption {Plot of the $4d$ effective potential, in units of $M_4^4$,
as a function of the field $\psi$. We show the potential for 3
different values of the flux quantum $n = 180, 200, 220$. The rest of
the parameters of the model are fixed to $e^2 = 2$ and
$\Lambda_6 = 10^{-4}$, so $n_0 =200$.}
\label{potential}
\end{figure}
No vacuum solutions exist for $n > n_{max}=2n_0/\sqrt{3}$.
The corresponding value of $Q_m$ is
\begin{equation}
Q_{max}^{(m)}=4\pi \left({{2}\over {3\Lambda_6}}\right)^{1/2}.
\label{Qmax(m)}
\end{equation}
A large landscape is possible only if $n_0\gg 1$, or equivalently
$\Lambda_6\ll e^2$.
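For the parameters used in Fig.~\ref{potential}, the bounds $n_0$ and $n_{max}$ are easily evaluated; the short sketch below (the numerical values are purely illustrative) also confirms that the magnetic charge $Q_m=2\pi n/e$ evaluated at $n_{max}$ reproduces Eq.~(\ref{Qmax(m)}).

```python
import math

# Parameters of Fig. 1 (illustrative): e^2 = 2, Lambda_6 = 1e-4.
e = math.sqrt(2.0)
Lambda6 = 1.0e-4

n0 = math.sqrt(2.0 / Lambda6) * e        # threshold for positive-energy vacua
n_max = 2.0 * n0 / math.sqrt(3.0)        # no q = 2 vacua beyond this flux
Q_max = 4.0 * math.pi * math.sqrt(2.0 / (3.0 * Lambda6))  # Eq. (Qmax(m))

# The magnetic charge Q_m = 2*pi*n/e at n = n_max should equal Q_max.
print(n0, n_max, 2.0 * math.pi * n_max / e, Q_max)
```

With these values $n_0=200$, as quoted in the caption of Fig.~\ref{potential}, and the $q=2$ sector contains roughly $n_{max}-n_0\approx 31$ de Sitter vacua.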
Finally, in the $q=4$ sector the Maxwell field takes the form
\begin{equation}
F_{\mu\nu} = {Q_e \over{A_4 R^4}}\sqrt{-g_2}\epsilon_{\mu\nu},
\end{equation}
where $R$ is the radius of the 4-sphere compactification
manifold, $A_4=8\pi^2/3$ is the 4-volume of a unit 4-sphere,
and $g_2 = \det {\tilde g}_{\mu \nu}$ is the
determinant of the metric in the 2 large dimensions.
The charge ${Q}_e$ is quantized in terms of the
elementary charge $e$ by the relation
\begin{equation}
Q_e = n e .
\label{Qe}
\end{equation}
One can also represent this sector as a flux compactification model
in the dual formulation in terms of a 4-form flux. This is discussed
in Appendix \ref{appelecsec}.
We can now look for solutions of our model in which the
2 large dimensions form a spacetime of
constant scalar curvature, $R^{(2)}= 2H^2$. Einstein's equations
then reduce to the following relations:
\begin{equation}
H^2 = {\Lambda_6 \over {2}}\left(1- {3{Q_e^2}\over{2
A_4^2\Lambda_6 R^8 }}\right) ,
\label{H2}
\end{equation}
\begin{equation}
{3 \over {R^2}} = {\Lambda_6
\over {2}}\left(1+{{Q_e^2}\over{2 A_4^2\Lambda_6 R^8}} \right) .
\label{R^2}
\end{equation}
Solutions of these equations are discussed in Appendix
\ref{quarticequation}. For $n>n_{max}=(9A_4/e)(3/2\Lambda_6)^{3/2}$,
there are no solutions. For $n<n_{max}$, there are two solutions,
one with a positive and the other with a negative value of $H^2$.
(This situation is similar to that in the $q=2$ sector, where
we also have two solutions, one
corresponding to a minimum and the other to a maximum of the
potential. In both cases, the higher-energy solutions are unstable;
see the discussion in Sec.~III). As $n$ is increased, the two solutions
approach one another, until they both reach $H=0$ at $n=n_{max}$.
The corresponding maximum value of $Q_e$ is given by
\begin{equation}
Q_{max}^{(e)}=24\pi^2 \left({{3}\over {2\Lambda_6}}\right)^{3/2}~.
\label{Qmax(e)}
\end{equation}
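Equations (\ref{H2}) and (\ref{R^2}) are easily solved numerically. The sketch below (the flux $n=10^8$ is an arbitrary choice below $n_{max}\sim 3\times 10^8$ for the parameters of Fig.~\ref{potential}) finds the two radii by bisection and confirms that the smaller-$R$ branch has $H^2<0$ while the larger one has $H^2>0$.

```python
import math

# Illustrative parameters: e^2 = 2, Lambda_6 = 1e-4 (as in the figures);
# the flux n = 1e8 is an arbitrary value below n_max.
e = math.sqrt(2.0)
L6 = 1.0e-4
A4 = 8.0 * math.pi**2 / 3.0
n = 1.0e8
Qe = n * e   # Eq. (Qe)

# Eq. (R^2), rearranged as F(R) = 0:
def F(R):
    return L6 * R**2 / 6.0 + Qe**2 / (12.0 * A4**2 * R**6) - 1.0

def bisect(g, a, b, steps=200):
    # simple bisection; g must change sign on [a, b]
    for _ in range(steps):
        m = 0.5 * (a + b)
        if g(a) * g(m) > 0.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

R_small = bisect(F, 1.0, 150.0)                   # F > 0 at small R, F(150) < 0
R_large = bisect(F, 150.0, math.sqrt(6.0 / L6))   # F > 0 at the upper end

def H2(R):
    # Eq. (H2)
    return 0.5 * L6 * (1.0 - 3.0 * Qe**2 / (2.0 * A4**2 * L6 * R**8))

print(R_small, H2(R_small))   # AdS_2 x S_4 branch: H^2 < 0
print(R_large, H2(R_large))   # dS_2 x S_4 branch:  H^2 > 0
```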
Once again, we can obtain an effective $2d$ theory by integrating
out the extra dimensions. The resulting action is that of a $2d$
dilatonic gravity. The action
cannot be reduced to the form (\ref{4Deffectiveaction}), with Einstein
gravity plus a minimally coupled scalar, essentially because the
Einstein Lagrangian is a pure divergence in $2d$. Since the ``dilaton''
field $\psi$ has a non-minimal coupling to gravity, the radius of the
compact dimensions cannot be found by simply finding the extrema of
the dilaton potential, but one can nevertheless identify the
solutions presented above as solutions of the scalar tensor theory in
$1+1$ dimensions.
To summarize, we have found that the landscape of vacua in our theory
includes a $dS_6$ vacuum, a number of $dS_4$ and $AdS_4$ vacua with
extra dimensions compactified on $S_2$, and a number of $AdS_2$ vacua
with extra dimensions compactified on $S_4$. We expect that quantum
transitions should occur from $dS_6$ and $dS_4\times S_2$ vacua to all other vacua, resulting
in an eternally inflating multiverse. The AdS vacua are terminal, in
the sense that all AdS regions collapse to a big crunch singularity.
Transitions between $dS_4 \times S_2$ vacua with different flux
quantum numbers $n$ were discussed in Ref.~\cite{BPSPV09}, where it
was shown that such transitions proceed through nucleation of
magnetically charged branes. The theory does have black brane
solutions with the necessary properties \cite{Gibbons95,Gregory}. Our
focus in the present paper is on the topology-changing transitions.
As we shall see, such transitions also involve nucleation of charged
black branes or black holes.
\section{From $dS_6$ to $(A)dS_2\times S_4$}\label{blackhole}
\subsection{The instanton}
As discussed in Ref.~\cite{Carroll09}, transitions from
$dS_6$ to $AdS_2 \times S_4$ can be
mediated by nucleation of black holes in $dS_6$. The corresponding
instanton is the Euclideanized $6d$ Reissner-Nordstrom-de Sitter (RNdS)
solution \cite{Tangherlini},
\begin{equation}
ds^2 = f(r)d\tau^2 + f(r)^{-1}dr^2 + r^2 d\Omega_4^2,
\label{BHinstanton}
\end{equation}
where
\begin{equation}
f(r)= 1 - {2{\tilde M}\over{r^3}}+{{\tilde Q}^2\over{r^6}} -H_6^2 r^2 .
\end{equation}
Here,
\begin{equation}
H_6 = \sqrt{{{\Lambda_6}\over {10}}}
\end{equation}
is the expansion rate of $dS_6$, ${\tilde M}$ is the mass parameter,
and ${\tilde Q}$ is related to the black hole charge $Q_e$ as
\begin{equation}
Q_e=\sqrt{12}A_4{\tilde Q}.
\label{qtilde}
\end{equation}
The electric field is given by
\begin{equation}
F_{\tau r}={iQ_e\over{A_4 r^4}} ,
\end{equation}
and $Q_e$ is quantized in units of the elementary
charge, as in Eq.~(\ref{Qe}).\footnote{This is based on the definition
of the charge as the integral of the flux through the $S_4$ sphere
around the pointlike charged object\begin{equation}
Q = \int{F_{\tau r}~ r^4~\sin^3{\theta}\sin^2{\phi}\sin{\psi}~ d\theta
d\phi d\psi d\chi}
\end{equation}
}
Zeros of the function $f(r)$ correspond to horizons in the Lorentzian
solution. There are generally three horizons: the inner ($r_-$) and
outer ($r_+$) black hole horizons and the cosmological horizon $r_c$.
The range of the variable $r$ in (\ref{BHinstanton}) is
\begin{equation}
r_+ \leq r \leq r_c,
\label{range}
\end{equation}
so that the metric is positive-definite and non-singular, with
\begin{equation}
f(r_+)=f(r_c)=0 .
\label{f=0}
\end{equation}
The Euclidean time $\tau$ is a cyclic variable with a period chosen to
eliminate conical singularities at $r=r_+,r_c$.
Nucleation of charged black holes in dS space has been extensively
studied, both in $4d$ \cite{Moss,Romans,Mann} and in higher dimensions
\cite{Dias}. For relatively small values of ${\tilde Q}$,
the relevant instantons have the topology of $S_2 \times S_4$,
and the avoidance of a conical singularity in the geometry imposes the condition
\begin{equation}
|f'(r_+)|=|f'(r_c)|.
\label{lukewarm}
\end{equation}
The period of $\tau$ is then
\begin{equation}
\Delta\tau = {4\pi\over{|f'(r_c)|}}.
\end{equation}
These are the so-called ``lukewarm instantons".
The condition (\ref{lukewarm}) implies that the black hole and
cosmological horizons have the same temperature and imposes a relation
between the parameters $\tilde M$, $\tilde Q$, and $H_6$. In our
case, $H_6$ and $e$ should be regarded as fixed by the model, while
$\tilde M$ is an adjustable parameter. The horizon radii $r_+$
and $r_c$ are plotted in Fig.~\ref{horizonradii} as functions of the
charge ${\tilde Q}$. (More precisely, we plot $r_+ H_6$
and $r_c H_6$ vs. ${\tilde Q}H^3_6$.)
As ${\tilde Q}$ is increased at a fixed $H_6$, the black hole horizon
radius $r_+$ grows until it coincides, at
${\tilde Q} \approx 0.125 H_6^{-3} \equiv {\tilde Q}_c$, with the
cosmological horizon $r_c$.
\begin{figure}
\centering\leavevmode
\epsfysize=6cm \epsfbox{compacFIG-2.eps}
\caption {Plot of $r_+ H_6$ (bottom) and $r_c H_6$ (top) vs. ${\tilde
Q}H^3_6$. }
\label{horizonradii}
\end{figure}
In the limit when the black hole is much
smaller than the cosmological horizon,
\begin{equation}
r_+ \ll r_c ,
\label{smallBH}
\end{equation}
it follows from Eqs.~(\ref{f=0}), (\ref{lukewarm}) that, up to the
leading corrections in ${\tilde q}\equiv {\tilde Q}H_6^3$, we have
\begin{equation}
{\tilde M}\approx {\tilde Q}\left(1-{1\over{2}}{\tilde q}^{2/3}\right),
\label{MQ}
\end{equation}
\begin{equation}
r_+\approx {\tilde Q}^{1/3}\left(1+{1\over{3}}{\tilde q}\right)^{1/3},
\label{r+Q}
\end{equation}
\begin{equation}
r_c\approx H_6^{-1} (1-{\tilde q}),
\label{rcQ}
\end{equation}
and
\begin{equation}
\Delta\tau\approx 2\pi H_6^{-1}(1+4{\tilde q}).
\label{tauQ}
\end{equation}
Thus, the instanton in this limit
describes the nucleation of nearly extremal black holes in $dS_6$. These
black holes can be thought of as 0-branes in our theory.
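The small-${\tilde q}$ relations (\ref{MQ})--(\ref{rcQ}) can be checked against an exact lukewarm solution constructed numerically. In the sketch below (units with $H_6=1$; the charge ${\tilde Q}=0.01$ and the bracketing intervals are illustrative choices) we parametrize the solution by $r_+$, fix ${\tilde M}$ by requiring $f(r_+)=0$, and tune $r_+$ until the condition (\ref{lukewarm}) holds.

```python
import math

Qt = 0.01   # tilde-Q in units H_6 = 1 (a small, illustrative charge)

def f(r, Mt):
    return 1.0 - 2.0 * Mt / r**3 + Qt**2 / r**6 - r**2

def fp(r, Mt):
    return 6.0 * Mt / r**4 - 6.0 * Qt**2 / r**7 - 2.0 * r

def bisect(g, a, b, steps=100):
    # simple bisection; g must change sign on [a, b]
    for _ in range(steps):
        m = 0.5 * (a + b)
        if g(a) * g(m) > 0.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

def Mt_of_rp(rp):
    # mass parameter chosen so that f(r+) = 0
    return 0.5 * rp**3 * (1.0 + Qt**2 / rp**6 - rp**2)

def mismatch(rp):
    # lukewarm condition, Eq. (lukewarm): |f'(r+)| = |f'(rc)|
    Mt = Mt_of_rp(rp)
    rc = bisect(lambda r: f(r, Mt), 0.5, 1.0)
    return fp(rp, Mt) - abs(fp(rc, Mt))

rp = bisect(mismatch, 0.216, 0.26)   # bracket chosen for Qt = 0.01
Mt = Mt_of_rp(rp)
rc = bisect(lambda r: f(r, Mt), 0.5, 1.0)
print(rp, rc, Mt)
```

The output agrees with ${\tilde M}\approx{\tilde Q}(1-{\tilde q}^{2/3}/2)$, $r_+\approx{\tilde Q}^{1/3}$ and $r_c\approx 1-{\tilde q}$ to the expected accuracy, i.e.\ up to higher-order corrections in ${\tilde q}$.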
The mass $M$ of the black holes can be
determined from the behavior of the metric in the range $r_+\ll r \ll r_c$.
This gives \cite{MyersPerry86}
\begin{equation}
M={32\pi^2\over{3}}{\tilde M} .\label{ADMmass}
\end{equation}
Geometrically, we can picture the RNdS instanton in the following way.
Consider the surface $r=r_*$ with $r_+\ll r_*\ll r_c$. This is a
$5d$ surface with topology $S_4 \times S_1$. It encircles the black
hole horizon $r=r_+$ with a sphere of radius $r_* \gg r_+$, so that
$f(r_*)\approx 1$. Outside of this surface, the metric is that of a
6-sphere, which is the Euclideanized $dS_6$. The $S_1$ comes from the
periodic time direction. The length of this circle is given by the
period $\Delta\tau$ and is approximately the length of a big circle on
$S_6$. Thus, the instanton can be pictured as a 0-brane whose
worldline runs along a big circle on $S_6$, as illustrated in Fig.~\ref{Alexfig1}.
\begin{figure}
\centering\leavevmode
\epsfysize=8cm
\epsfig{file=compacFIG-3.ps,width=8cm,angle=-90}
\caption {Topology-change instanton. The charged black hole worldline
runs along a big circle on a 6-sphere. The spherical geometry is
distorted in the vicinity of the worldline.}
\label{Alexfig1}
\end{figure}
Instantons of this form are known to describe the production of
particle-antiparticle pairs in de Sitter space \cite{Basu}.
The corresponding instanton action can be estimated as
\begin{equation}
S_{inst}\approx -{16\pi^3 \over{3H_6^4}} + {2\pi M\over{H_6}} ,
\label{BHaction}
\end{equation}
where the first term is the $6d$ Euclidean de Sitter action,
$S_{dS_6}$, and $M$ is the particle mass. The second term on the
right-hand side of (\ref{BHaction}) is the contribution of the
particle worldline action,
\begin{equation}
M\int d\tau~.
\end{equation}
One might also expect to see $O(M)$ corrections to the de Sitter
action of a comparable magnitude, caused by distortions of the de
Sitter geometry due to the presence of the mass. However, since the
action is minimized on the de Sitter solution, the linear in $M$
correction to the action should vanish in the limit of small $M$.
In our case, the instanton describes nucleation of a pair of
oppositely charged black holes with an initial separation of
$2H_6^{-1}$. The black holes are then driven apart by the de Sitter
expansion. We expect that for ${\tilde q} \ll 1$ the action can
be approximated by Eq.~(\ref{BHaction}). Disregarding the
pre-exponential factor, the nucleation rate is then given by
\begin{equation}
\Gamma \sim \exp (S_{dS_6}-S_{inst}) \sim \exp(-2\pi M/H_6) \sim \exp
\left({4\pi\over{\sqrt{3}}}{Q_e\over{H_6}}\right) ,
\end{equation}
where in the last step we have used the relations (\ref{ADMmass}),
$\tilde M \approx \tilde Q$, and (\ref{qtilde}). This agrees with
the intuitive expectation that $\Gamma \sim \exp(-M/T_{dS})$,
where $T_{dS}=H_6/2\pi$ is the Gibbons-Hawking temperature of
$dS_6$.\footnote{The nucleation rate can be modified if the
action includes a topological term, $S_{top}=-\alpha\chi$,
where $\alpha=const$ and $\chi$ is the Euler character \cite{Parikh}.
In our case, $\chi(S_6)=2$ and $\chi(S_4\times S_2)=4$, so the
additional factor in $\Gamma$ is $e^{2\alpha}$. The same factor would
also appear in the nucleation rate of black branes discussed in Section IV.A.}
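The last step amounts to the identity $2\pi M/H_6=(4\pi/\sqrt{3})\,Q_e/H_6$, which follows from Eqs.~(\ref{qtilde}) and (\ref{ADMmass}) in the near-extremal limit ${\tilde M}\approx{\tilde Q}$; a one-line numerical check (the value ${\tilde Q}=0.01$ is arbitrary):

```python
import math

H6 = 1.0                            # units with H_6 = 1
A4 = 8.0 * math.pi**2 / 3.0         # 4-volume of the unit 4-sphere
Qt = 0.01                           # arbitrary small tilde-Q

Qe = math.sqrt(12.0) * A4 * Qt      # Eq. (qtilde)
M = 32.0 * math.pi**2 / 3.0 * Qt    # Eq. (ADMmass) with Mt ~ Qt

lhs = 2.0 * math.pi * M / H6
rhs = 4.0 * math.pi / math.sqrt(3.0) * Qe / H6
print(lhs, rhs)   # the two exponents coincide
```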
An alternative tunneling channel is given by the so-called charged
Nariai instanton, which describes nucleation of maximally large black
holes having $r_+ = r_c$ \cite{Bousso1,Bousso2}. This instanton has a
simple geometry in the form of a product of two round spheres,
$S_2\times S_4$, and is a Euclidean continuation of the $dS_2 \times
S_4$ vacuum solution that we discussed in Section II.
The action for lukewarm and charged Nariai instantons in an arbitrary
number of dimensions has been calculated in \cite{Dias}. In the
6-dimensional case we get\footnote{Note that this expression has been
corrected in the latest electronic version of the paper in
\cite{Dias}. We would like to thank Oscar Dias for clarifying this
point to us.}
\begin{equation}
S_{lukewarm}=16\pi^2\Delta\tau \left[-\frac{1}{6}H_6^2 (r_c^5 -
r_+^5) + \frac{1}{2}{\tilde Q}^2(r_+^{-3} - r_c^{-3})\right]
\label{Sluke}
\end{equation}
and
\begin{equation}
S_{Nariai}= -\frac{32\pi^3}{3}R^4,
\label{SNariai}
\end{equation}
where $R$ is the radius of the compact 4-sphere, which can be
determined from Eq.~(\ref{R^2}). The actions (\ref{Sluke}), (\ref{SNariai}),
in units of the de Sitter action $S_{dS_6}$, are plotted in Fig.~\ref{actions} as
functions of ${\tilde q}={\tilde Q} H_6^3$. The lukewarm instanton exists only for
${\tilde Q}<{\tilde Q}_c$, and in all this range its action is smaller\footnote{Note that $S_{lukewarm}$ and $S_{Nariai}$ are negative (in addition to $S_{dS_6} < 0$), and that is why in Fig. \ref{actions} the graph for the lukewarm action is above that for the Nariai action.} (and thus the
nucleation rate is higher) than that for the charged Nariai instanton.
As ${\tilde Q}$ approaches ${\tilde Q}_c$ from below, the two
instanton actions approach one another and coincide at ${\tilde
Q}={\tilde Q}_c$. The Nariai instanton is the only
relevant tunneling channel in the range ${\tilde Q}_c < {\tilde Q} <
{\tilde Q}_{max}$,\footnote{There is yet another kind of instanton,
the so-called ``cold" instanton, describing nucleation of extremal
Reissner-Nordstrom black holes. Such instantons do not seem to
describe compactification transitions, so we do not consider them
here.} where ${\tilde Q}_{max}= Q_{max}^{(e)}/\sqrt{12}A_4$ and
$Q_{max}^{(e)}$, given by Eq.~(\ref{Qmax(e)}), is the value
of $Q_e$ above which no instanton solutions exist.
\begin{figure}
\centering\leavevmode
\epsfysize=6cm \epsfbox{compacFIG-4.eps}
\caption {Plot of $S/S_{dS}$ vs. ${\tilde Q}H^3_6$ for the lukewarm
(red/solid line), Nariai (green/dashed-dotted line) and approximate
lukewarm (blue/dashed line) actions.}
\label{actions}
\end{figure}
Also shown in Fig.~\ref{actions} is the approximate action (\ref{BHaction}). As
expected, it is in good agreement with the exact action at small
values of ${\tilde Q}$. We have also verified this directly, by expanding the
lukewarm instanton action (\ref{Sluke}) at small ${\tilde Q}$. This gives
\begin{equation}
S_{lukewarm}=-{16\pi^3\over{3H_6^4}}(1-4{\tilde q}+{\tilde q}^{4/3} +...),
\end{equation}
where the neglected terms are $O({\tilde q}^{5/3})$. The first two
terms in this expansion coincide with our approximate formula (\ref{BHaction}).
\subsection{Spacetime structure}
To analyze the spacetime structure resulting from the tunneling, we
use a Lorentzian continuation of the metric (\ref{BHinstanton}),
$\tau\to it$. Introducing a new radial variable $\xi$ as
\begin{equation}
f(r)^{-1/2}dr = d\xi,
\end{equation}
we have
\begin{equation}
ds^2 = -h(\xi)dt^2 + d\xi^2 + r(\xi)^2 d\Omega_4^2,
\label{BHmetric}
\end{equation}
where $h(\xi)\equiv f(r(\xi))$. We can choose the origin of $\xi$ so
that $\xi=0$ at $r=r_+$. Then, in the vicinity of $r= r_+$ we have
\begin{equation}
h(\xi)\approx \xi^2.
\end{equation}
We can now continue the metric across the black hole horizon by
replacing $\xi\to it$, $t\to \xi$,\footnote{Here we follow the
discussion in \cite{Carroll09}.}
\begin{equation}
ds^2 = -dt^2 + t^2 d\xi^2 + r_+^2 d\Omega_4^2.
\label{nearhorizon}
\end{equation}
This describes an expanding $2d$ FRW universe with 4 extra dimensions
compactified on a sphere. The big bang at $t=0$ is non-singular and
corresponds to the horizon at $r=r_+$.
The form (\ref{nearhorizon}) applies only in the vicinity of $t=0$.
More generally, the metric behind the horizon can be expressed as
\begin{equation}
ds^2 = -dt^2 + a^2(t) d\xi^2 + r^2(t) d\Omega_4^2,
\label{behindhorizon}
\end{equation}
with
\begin{equation}
F_{\mu\nu}= Q_e {{a(t)}\over {A_4 r(t)^4}} \epsilon_{\mu\nu} ~~~~ (\mu,\nu = 0,1).
\end{equation}
In the limit (\ref{smallBH}), and at $t\gg r_+$, we expect the metric
to approach that of $AdS_2\times S_4$. The
extra dimensions in this metric are stabilized by the electric field;
that is why the black holes mediating the topology change need to
be charged.
The Penrose diagram for a RNdS black hole is shown in Fig.~\ref{Alexfig2}. Region
I in the diagram is the region $r_+ < r < r_c$ covered by the static
coordinates (\ref{BHmetric}). Region II is the exterior
asymptotically de Sitter space, and $i_+$ is its spacelike future
infinity. Region III is the part of the spacetime behind the black
hole horizon, which is described by the metric (\ref{behindhorizon}).
It corresponds approximately to $AdS_2 \times S_4$, with the horizon
at $r_+$ playing the role of the big bang and that at $r_-$ the role
of the big crunch of the AdS space. In a pure AdS space, both
horizons are non-singular, so the black hole is a traversable
wormhole, and it is possible for a timelike curve to go from region I
across region III into a region similar to I and into another $dS_6$
space, or to a timelike singularity in region IV. However, if
perturbations are included, the horizon at $r=r_-$ develops a true
curvature singularity, and the metric cannot be extended beyond region
III. A spacelike slice through the topology-changing region is
illustrated in a lower-dimensional analogue in
Fig. \ref{funnel-picture}.
\begin{figure}
\centering\leavevmode
\epsfig{file=compacFIG-5.ps,width=8cm,angle=-90}
\caption {Penrose diagram for a charged black hole mediating a
topology-changing transition $dS_6\to AdS_2\times S_4$ .
Perturbations result in a singularity at $r=r_-$, so the striped
regions are not accessible.}
\label{Alexfig2}
\end{figure}
\begin{figure}
\centering\leavevmode
\epsfig{file=compacFIG-5-2.ps,width=8cm,angle=-90}
\caption {A spacelike slice through a lower-dimensional analogue of
the black hole topology-changing region. In this funnel-like geometry,
the flat region at the top has two large dimensions, while the tube
of the funnel has one large and one compact dimension.}
\label{funnel-picture}
\end{figure}
Thus, we conclude that, from the viewpoint of a $6d$ observer, the
topology-changing transitions $dS_6\to AdS_2\times S_4$ look like
nucleations of pairs of electrically charged black holes. Each black
hole contains an infinite $AdS_2\times S_4$ universe behind its
horizon.
The Lorentzian continuation of the charged Nariai instanton
is a $dS_2\times S_4$ solution, corresponding to one of the positive-energy
$(H^2 >0)$ $q=4$ vacua that we discussed in Section II. These
solutions are known to be unstable: small perturbations cause them
to disintegrate into $AdS_2\times S_4$ and $dS_6$ regions \cite{Bousso2}.
\section{From $dS_6$ to $(A)dS_4 \times S_2$}
\subsection{The instanton}
We shall now argue that topology change from $dS_6$ to $dS_4\times
S_2$ and to $AdS_4 \times S_2$ can occur through the nucleation of
spherical black 2-branes. Since the radius of $S_2$ is stabilized by
magnetic flux, the branes have to be magnetically charged.
It is well known that Einstein-Maxwell theory with $\Lambda_6 = 0$ has
magnetically charged, asymptotically flat black brane solutions
\cite{Gibbons95, Gregory}. In particular, the extreme 2-brane in $6d$ is
described by the metric,
\begin{equation}
ds^2 = \left( 1-{r_0\over{r}}\right)^{2/3} (-dt^2 + dx^2 + dy^2) +
\left(1-{r_0\over{r}} \right)^{-2} dr^2 + r^2 d\Omega_2^2,
\label{flatbrane}
\end{equation}
with
\begin{equation}
r_0 = {\sqrt{3} Q_m \over{8\pi}}.
\label{r0}
\end{equation}
The surface $r=r_0$ is a non-singular event horizon. The parameter
$Q_m$ in (\ref{r0}) is the quantized magnetic charge (\ref{Q}). The mass
per unit area is equal to the brane tension and is given by \cite{Lu,BPSPV09}
\begin{equation}
T_2 = {2Q_m\over{\sqrt{3}}}.
\label{T2}
\end{equation}
Note that the metric (\ref{flatbrane}) is invariant with respect to Lorentz
boosts in the $x$ and $y$ directions, as expected. Using the results
of Refs.~\cite{Gregory} and \cite{Gibbons95}, it can be shown that
(\ref{flatbrane}) is the only solution which is both non-singular and
boost-invariant.
Generalizations of black brane solutions to an asymptotically de Sitter
background are not known analytically, but can be found numerically,
using the ansatz
\begin{equation}
ds^2 = B^2(\xi)[-dt^2 + \exp(2 t)(dx^2 + dy^2)] + d\xi^2 + r^2(\xi) d\Omega_2^2 ,
\label{flatdSbrane}
\end{equation}
where the coordinates $x,y,t$ can be thought of as brane worldsheet
coordinates.
Alternatively, one can use the closed de Sitter form of the worldsheet metric,
\begin{equation}
ds^2 = B^2(\xi)[-dt^2 + \cosh^2 t d{\Omega'}_2^2] + d\xi^2 + r^2(\xi)
d\Omega_2^2 ,
\label{closedbrane}
\end{equation}
with the same functions $B(\xi)$ and $r(\xi)$. For a pure $dS_6$
space, these functions are given by
\begin{equation}
B(\xi) =H_6^{-1} \cos (H_6 \xi) , ~~~~ r(\xi) = H_6^{-1}\sin (H_6 \xi) ,
\label{S6}
\end{equation}
with $\xi= \pi/(2H_6)$ corresponding to the de Sitter horizon.
(This metric covers only part of de Sitter space.)
In the presence of a brane, the Einstein equations for $B(\xi)$ and
$r(\xi)$ with the ansatz (\ref{Fab}) for the Maxwell field have the
form
\begin{eqnarray}
\label{inflating-brane-eqs}
{1 \over {B(\xi)^2}} + {1\over {r(\xi)^2}} - {{B'(\xi)^2}\over {B(\xi)^2}}
-{{4 B'(\xi) r'(\xi)}\over {B(\xi) r(\xi)}} - {{r'(\xi)^2}\over
{r(\xi)^2}} - {{2 B''(\xi)}\over {B(\xi)}} - {{2 r''(\xi)}\over
{r(\xi)}} &=& \Lambda_6 + {{Q_m^2} \over {32\pi^2 r(\xi)^4}}~~,
\nonumber \\ \nonumber \\
{{3}\over {B(\xi)^2}} + {1\over {r(\xi)^2}} - {{3
B'(\xi)^2} \over {B(\xi)^2}} - {{6 B'(\xi) r'(\xi)}\over {B(\xi)
r(\xi)}} - {{r'(\xi)^2}\over {r(\xi)^2}}&=& \Lambda_6
+ {{Q_m^2} \over {32\pi^2 r(\xi)^4}}~~~~~~ \\ \nonumber \\
{{3}\over {B(\xi)^2}} - {{3 B'(\xi)^2} \over {B(\xi)^2}} - {{3 B'(\xi)
r'(\xi)}\over {B(\xi) r(\xi)}} - {{3 B''(\xi)} \over {B(\xi)}}
- {{r''(\xi)}\over
{r(\xi)}}&=& \Lambda_6 - {{Q_m^2} \over {32\pi^2
r(\xi)^4}}~. \nonumber
\end{eqnarray}
The first of these equations follows from the other two, so there are
two independent equations for the two functions, $B(\xi)$ and $r(\xi)$.
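As a consistency check of Eqs.~(\ref{inflating-brane-eqs}), note that for $Q_m=0$ the pure $dS_6$ functions (\ref{S6}) must satisfy all three equations with $\Lambda_6=10H_6^2$. A short numerical sketch (the evaluation point $\xi=0.7$ and the units $H_6=1$ are arbitrary choices):

```python
import math

H = 1.0                 # H_6 in arbitrary units
Lam = 10.0 * H**2       # Lambda_6 = 10 H_6^2
xi = 0.7                # arbitrary point away from the endpoints

# Pure dS_6 background, Eq. (S6), and its derivatives
B, Bp, Bpp = math.cos(H * xi) / H, -math.sin(H * xi), -H * math.cos(H * xi)
r, rp, rpp = math.sin(H * xi) / H, math.cos(H * xi), -H * math.sin(H * xi)

# Left-hand sides of the three equations with Q_m = 0; the second-derivative
# terms enter linearly.
eq1 = (1.0 / B**2 + 1.0 / r**2 - Bp**2 / B**2 - 4.0 * Bp * rp / (B * r)
       - rp**2 / r**2 - 2.0 * Bpp / B - 2.0 * rpp / r)
eq2 = (3.0 / B**2 + 1.0 / r**2 - 3.0 * Bp**2 / B**2
       - 6.0 * Bp * rp / (B * r) - rp**2 / r**2)
eq3 = (3.0 / B**2 - 3.0 * Bp**2 / B**2 - 3.0 * Bp * rp / (B * r)
       - 3.0 * Bpp / B - rpp / r)

print(eq1, eq2, eq3)   # each equals Lambda_6 = 10
```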
Before discussing the solutions of these equations, let us consider a
Euclidean continuation of the metric (\ref{closedbrane}). This can be
found by setting $t = i\theta - i\pi/2$,
\begin{equation}
ds^2 = B^2(\xi) d{\Omega}_3^2 + d\xi^2 + r^2(\xi) d\Omega_2^2 .
\label{euclideandbrane}
\end{equation}
We are interested in compact, non-singular instanton solutions, so the
range of $\xi$ has to be finite, $0<\xi<\xi_m$, with the endpoints
corresponding to zeros of $B(\xi)$,
\begin{equation}
B(0)=B(\xi_m)=0.
\label{bc1}
\end{equation}
These endpoints are non-singular, provided that
\begin{equation}
B'(0)=B'(\xi_m)=1
\label{bc2}
\end{equation}
and
\begin{equation}
r'(0)=r'(\xi_m)=0.
\label{bc3}
\end{equation}
The boundary conditions (\ref{bc1})-(\ref{bc3}) select a unique
solution of Eqs.~(\ref{inflating-brane-eqs}). For specified values
of $H_6$ and $Q_m$, the solution can be found numerically. For small
values of $Q_m$,
\begin{equation}
Q_m\ll H_6^{-1},
\label{smallg}
\end{equation}
the brane horizon radius is much smaller than that of the cosmological
horizon at $\xi_m\approx \pi/(2H_6)$, and we expect the solution to be
well approximated by the spherical metric (\ref{S6}), except in the
vicinity of $\xi=0$. We also expect that in this limit the brane
horizon radius is $r(0)\approx r_0$ with $r_0$ from (\ref{r0}) and the
brane tension is approximately given by (\ref{T2}).
\begin{figure}
\centering\leavevmode
\epsfysize=6cm \epsfbox{compacFIG-6.eps}
\put(0,-10){\Large {\bf {$\xi$}}}
\put(-300,170){\Large {\bf {$B(\xi)$}}}
\caption[Fig 6] {Numerical solution for the function $B(\xi)$ for the
magnetically charged inflating brane (solid line). We show for comparison the pure de
Sitter solution Eq. (\ref{S6}) shifted to match the numerical
solution at the cosmological horizon endpoint, namely $\xi_m$ (dashed
line). We use the parameters specified in Fig.~1, and the brane
solution corresponds to the case $n=10$.}
\label{Bnumerical-1}
\end{figure}
\begin{figure}
\centering\leavevmode
\epsfysize=6cm \epsfbox{compacFIG-7.eps}
\put(0,-10){\Large {\bf {$\xi$}}}
\put(-300,180){\Large {\bf {$r(\xi)$}}}
\caption[Fig 7] {Numerical solution for the function $r(\xi)$ (solid
line) for the same parameters as in Figure \ref{Bnumerical-1}. The dashed line is the
solution for $r(\xi)$ for the pure $dS_6$ case, Eq. (\ref{S6}). Note that we have shifted the pure de Sitter solution to make
the cosmological horizon coincide with $\xi_m$.}
\label{rnumerical-1}
\end{figure}
\begin{figure}
\centering\leavevmode
\epsfysize=6cm \epsfbox{compacFIG-8.eps}
\put(0,-10){\Large {\bf {$\xi$}}}
\put(-300,180){\Large {\bf {$r(\xi)$}}}
\caption[Fig 8] {Magnified plot of the numerical solution in
Fig. \ref{rnumerical-1} for $r(\xi)$
near the brane horizon (solid line). We also show for comparison the
analytic estimate of the brane horizon for this particular brane
solution (dashed line) given by Eq.~(\ref{r0}).}
\label{rnumerical-closeup}
\end{figure}
In Figs.~\ref{Bnumerical-1} and \ref{rnumerical-1} we find the
numerical solution for
the case $e^2 = 2$, $\Lambda_6 = 10^{-4}$ and $n=10$. This value of
$n$ corresponds to a small magnetic charge, so one can see from the figures
that the functions deviate only slightly from their pure de Sitter
counterparts. In Fig.~\ref{rnumerical-closeup} we show a close-up of the
region near the black brane horizon and compare the asymptotic value
of $r$ with the analytic estimate in the small charge regime, namely,
$r_0$ from Eq. (\ref{r0}).
Analysis similar to that in Sec.~III.A indicates that the black brane
instanton can be pictured as a 2-brane, whose worldsheet has the form
of a 3-sphere and is wrapped around a ``big circle'' of $S_6$, as
illustrated in Fig.~\ref{Alexfig1}. As discussed in \cite{Basu}, such instantons
describe spontaneous nucleation of horizon-radius spherical branes in
de Sitter space. The instanton action in the limit (\ref{smallg}) can
be estimated as
\begin{equation}
S_{inst}\approx S_{dS_6} + A_3 H_6^{-3} T_2 ,
\label{Sapprox}
\end{equation}
where $A_3 = 2\pi^2$ is the volume of a unit 3-sphere and $T_2$
is the tension of the brane given by (\ref{T2}). As for the black
hole instanton, the presence of the brane does not induce any
corrections linear in $T_2$ to $S_{dS_6}$, because $dS_6$ is a minimum of the action.
The brane nucleation rate is given by
\begin{equation}
\Gamma \sim \exp (-2\pi^2 H_6^{-3} T_2).
\end{equation}
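This rate follows from the standard semiclassical relation between the instanton action and the background action (the same convention as in Eq.~(\ref{GammaDC}) below), combined with the estimate (\ref{Sapprox}):
\[
\Gamma \;\sim\; \exp\left(S_{dS_6}-S_{inst}\right) \;\approx\; \exp\left(-A_3 H_6^{-3} T_2\right) \;=\; \exp\left(-2\pi^2 H_6^{-3} T_2\right).
\]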
We note that it follows from Eq.~(\ref{Lambda4}) that for small values
of $Q_m$ satisfying (\ref{smallg}) the $4d$ cosmological constant is
necessarily negative. For $Q_m\sim H_6^{-1}$, the brane and
cosmological horizons are comparable to one another, and the metric
significantly differs from (\ref{S6}) all the way to $r=r_c$. In this
regime, the instanton action needs to be calculated numerically.
An example in this regime is shown in Figs. \ref{Bnumerical-2} and \ref{rnumerical-2}.
\begin{figure}
\centering\leavevmode
\epsfysize=6cm \epsfbox{compacFIG-9.eps}
\put(0,-10){\Large {\bf {$\xi$}}}
\put(-300,170){\Large {\bf {$B(\xi)$}}}
\caption[Fig 9] {Numerical solution for the function $B(\xi)$ for the
magnetically charged inflating brane solution with $n=200$.}
\label{Bnumerical-2}
\end{figure}
\begin{figure}
\centering\leavevmode
\epsfysize=6cm \epsfbox{compacFIG-10.eps}
\put(0,-10){\Large {\bf {$\xi$}}}
\put(-300,180){\Large {\bf {$r(\xi)$}}}
\caption[Fig 10] {Numerical solution for the function $r(\xi)$ for the
case $n=200$.}
\label{rnumerical-2}
\end{figure}
The tunneling process can also be described using the $4d$ effective
theory. The instanton is then the usual Coleman-De Luccia (CdL) instanton. This
description was used by Carroll, Johnson and Randall (CJR)
\cite{Carroll09} and is equivalent to ours, but the geometry of the
instanton and the resulting spacetime structure are not evident in this approach.
CJR performed a numerical calculation of the instanton action as a
function of the magnetic charge $Q_m$; the result is shown in their
Fig.~17. We have verified that their calculation agrees with our
approximate formula (\ref{Sapprox}). Since the graph obtained by CJR
is nearly a straight line, our formula gives a good approximation in
the entire range of $Q_m$.
CJR have also considered a Nariai-type instanton and compared its
action to that of the CdL-type instanton. In the
present case, the Nariai-type instanton is the Euclideanized
unstable $dS_4\times S_2$ vacuum
solution, corresponding to the maximum of $V(\psi)$. Its action is
greater than that for the black brane instanton discussed above for
all values of $Q_m$, up to some critical value $Q_c^{(m)}$ at which
the two actions coincide\footnote{One can see how the inflating
black brane solutions approach the Nariai limiting case by looking
at the numerical solutions presented in Figs. \ref{Bnumerical-2} and
\ref{rnumerical-2}. This is an intermediate solution where the
function $B(\xi)$ approaches a pure sine function and
$r(\xi)$ has a small variation from $r_+$ to $r_c$ as one would expect
for a near Nariai solution.}.
The black brane instanton does not exist
for $Q_m>Q_c^{(m)}$, and no instanton solutions exist for
$Q_m>Q_{max}^{(m)}$ with $Q_{max}^{(m)}$ given by Eq.~(\ref{Qmax(m)}). The
value of $Q_c^{(m)}$ depends non-trivially on $\Lambda_6$.
\subsection{Spacetime structure}
Let us now analyze the spacetime structure resulting from brane
nucleation. The region between the brane horizon, $\xi=0$, and the
cosmological horizon, $\xi=\xi_m$, is described by the metric
(\ref{closedbrane}). Surfaces of constant $\xi$ in this metric have
the geometry of $dS_3\times S_2$, indicating that the brane worldsheet
is effectively an expanding $3d$ de Sitter space. In the vicinity of
the brane horizon at $\xi=0$, we have $B(\xi)\approx \xi$ and
$r(\xi)\approx r_+$, where $r_+ =r(0)$ is the brane horizon radius,
\begin{equation}
ds^2 = \xi^2 (-dt^2 + \cosh^2t d{\Omega'}_2^2) + d\xi^2 + r_+^2 d\Omega_2^2 .
\label{nearhorizonbrane}
\end{equation}
Analytic continuation across the horizon is obtained by replacing
$\xi\to it$, $t\to \chi + i\pi/2$. This gives
\begin{equation}
ds^2 = -dt^2 + t^2 d{\cal H}_3^2 + r_+^2 d\Omega_2^2 ,
\label{nearhorizon2}
\end{equation}
where
\begin{equation}
d{\cal H}_3^2 = d\chi^2 + \sinh^2\chi d\Omega^2
\end{equation}
is a $3d$ hyperbolic metric of unit curvature radius. The metric
(\ref{nearhorizon2}) describes an expanding, open $4d$ FRW universe
with 2 extra dimensions compactified on a sphere.
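One can verify this continuation explicitly: denoting the Lorentzian time of (\ref{nearhorizonbrane}) by $t_L$, the replacements $\xi\to it$, $t_L\to\chi+i\pi/2$ give
\[
\xi^2\to -t^2,\qquad d\xi^2\to -dt^2,\qquad -dt_L^2\to -d\chi^2,\qquad \cosh^2\left(\chi+{i\pi\over 2}\right)=-\sinh^2\chi ,
\]
so that $\xi^2\left(-dt_L^2+\cosh^2 t_L\, d{\Omega'}_2^2\right)\to t^2\left(d\chi^2+\sinh^2\chi\, d\Omega^2\right)=t^2\, d{\cal H}_3^2$, while $d\xi^2\to-dt^2$, reproducing (\ref{nearhorizon2}).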
\begin{figure}
\centering\leavevmode
\epsfig{file=compacFIG-11.eps,width=5cm,angle=-90}
\caption {Scalar field potential of the effective $4d$ theory.
In the newly formed compactified region, $\psi$ starts at $\psi_+$
and rolls to $\psi=0$.}
\label{Alexfig3}
\end{figure}
The metric (\ref{nearhorizon2}) applies only in the vicinity of $t=0$,
where the $4d$ universe is curvature-dominated. More generally, the
metric behind the black brane horizon has the form
\begin{equation}
ds^2 = -dt^2 + a^2(t) d{\cal H}_3^2 + r^2(t) d\Omega_2^2.
\label{dS4xS2}
\end{equation}
The evolution of $a(t)$ and $r(t)$ can be found by numerically solving
the Lorentzian version of the $6d$ Einstein equations.
Alternatively, we can use the effective $4d$ theory
(\ref{4Deffectiveaction}). The $4d$ metric is given by
\begin{equation}
ds_4^2 = -d{\tilde t}^2 + {\tilde a}^2({\tilde t}) d{\cal H}_3^2
\end{equation}
and the scalar field $\psi$ is a function of ${\tilde t}$ only. The
$4d$ and $6d$ descriptions are connected by the relations
\begin{equation}
dt = e^{-\psi({\tilde t})/2M_4}d{\tilde t},
\end{equation}
\begin{equation}
a(t) = e^{-\psi({\tilde t})/2M_4} {\tilde a}({\tilde t}),
\end{equation}
\begin{equation}
r(t) = R e^{\psi({\tilde t})/2M_4} .
\end{equation}
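Substituting these relations into (\ref{dS4xS2}) is a direct computation, using $dt^2=e^{-\psi/M_4}d{\tilde t}^2$, $a^2=e^{-\psi/M_4}{\tilde a}^2$ and $r^2=R^2e^{\psi/M_4}$; it shows that the two frames are related by the usual Weyl rescaling of dimensional reduction,
\[
ds^2 = e^{-\psi({\tilde t})/M_4}\left(-d{\tilde t}^2+{\tilde a}^2({\tilde t})\, d{\cal H}_3^2\right) + R^2 e^{\psi({\tilde t})/M_4}\, d\Omega_2^2
\;=\; e^{-\psi/M_4}\, ds_4^2 + r^2\, d\Omega_2^2 .
\]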
The $4d$ evolution equations have the form
\begin{equation}
{\ddot \psi}+3{{\dot {\tilde a}}\over{\tilde a}} {\dot\psi} + V'(\psi) = 0 ,
\label{e1}
\end{equation}
\begin{equation}
\left({{\dot {\tilde a}}\over{\tilde a}}\right)^2 -{1\over{\tilde
a}^2} = {1\over{3M_4^2}} \left({{\dot\psi}^2\over{2}} + V(\psi)\right),
\label{e2}
\end{equation}
where dots stand for derivatives with respect to ${\tilde t}$ and
$V(\psi)$ is given by (\ref{V}).
From a $4d$ point of view, our tunneling process is the usual
CdL tunneling through the barrier in the potential
$V(\psi)$. The field values $\psi_c$ and $\psi_+$, corresponding
respectively to $r_c$ and $r_+$, are located on opposite sides of
the barrier (see Fig.~\ref{Alexfig3}). The evolution starts at ${\tilde t}=0$ with
$\psi=\psi_+$ and ${\tilde a}=0$, and the field $\psi$ starts rolling
towards the potential minimum at $\psi = 0$. The long-term evolution
depends on whether this minimum has positive or negative energy
density. For $\Lambda_4 <0$, we have an $AdS_4\times S_2$ vacuum, and
the evolution ends in a singular big crunch, while for $\Lambda_4 >0$,
the evolution is non-singular and the metric asymptotically approaches
a $4d$ de Sitter space with $\psi\to 0$. The corresponding spacetime
diagrams are shown in Fig.~\ref{Alexfig4}.
\begin{figure}
\centering\leavevmode
\epsfig{file=compacFIG-12.eps,width=5cm,angle=-90}
\caption {Spacetime diagrams for transitions from $dS_6$ to
$dS_4\times S_2$ (a) and $AdS_4\times S_2$ (b).}
\label{Alexfig4}
\end{figure}
Nariai-type instantons are similar to Hawking-Moss instantons
\cite{HawkingMoss}. As discussed in \cite{Carroll09}, they describe
nucleation of regions of size comparable to the horizon, filled with
the unstable vacuum at the maximum of $V(\psi)$. Small perturbations
will cause $\psi$ to roll away from the maximum. As a result, the
newly formed region will either disintegrate into $dS_6$ and $(A)dS_4
\times S_2$ domains, or, if the potential $V(\psi)$ is sufficiently
flat near the maximum, it will become a site of eternal inflation, with
$dS_6$ and $(A)dS_4 \times S_2$ sub-regions constantly being formed.
\section{From $dS_4\times S_2$ to $dS_6$}
The decompactification transition from $dS_4\times S_2$ to $dS_6$ has
been discussed in Refs.~\cite{BPSPV09,Carroll09}. In terms of the effective
$4D$ theory, this is the usual false vacuum decay. A bubble with
$\psi\sim \psi_c$ nucleates in the inflating background of $\psi=0$
vacuum. As the bubble expands, the field rolls off to infinity in the
bubble interior. The tunneling is described by the same CdL instanton
as for the inverse transition, $dS_6\to dS_4\times S_2$, and the tunneling
rate is
\begin{equation}
\Gamma \sim \exp(S_0-S_{inst}) ,
\label{GammaDC}
\end{equation}
where
\begin{equation}
S_0 = -24\pi^2 M_4^4/\Lambda_4 ,
\end{equation}
with $M_4$ and $\Lambda_4$ from Eqs.~(\ref{M4}) and (\ref{Lambda4}),
respectively. The rate (\ref{GammaDC}) has been estimated in
\cite{BPSPV09} and has been calculated numerically for different
values of parameters in Ref.~\cite{Carroll09}.
The $4d$ evolution after tunneling is described by the same
equations (\ref{e1}), (\ref{e2}) as for the inverse transition, except
now the evolution starts with $\psi({\tilde t}=0)=\psi_c$ and the
field $\psi$ rolls towards $\psi=\infty$, so the extra dimensions
become large. The role of the bubble wall is played by the expanding
black 2-brane. A spacelike slice through the decompactification bubble
spacetime is illustrated in Fig.~\ref{Alexfig5}.
The 6-dimensional metric in the bubble interior has the form
(\ref{dS4xS2}). At late times, $t\to\infty$, the metric approaches
that of $dS_6$ \cite{Decompactification}, with $a(t)\approx H_6^{-1}
\sinh(H_6 t)$ and $r(t)\approx H_6^{-1}\cosh (H_6 t)$. This is a
somewhat unfamiliar, ``mixed'' slicing of de Sitter space, with open
slices in 3 out of 5 spatial dimensions and closed spherical slices
in the remaining two directions. The compact dimensions always remain
compact, but asymptotically become very large, and the metric becomes
locally isotropic in the limit.
\begin{figure}
\centering\leavevmode
\epsfig{file=compacFIG-13.eps,width=16cm}
\caption {A spatial slice through the decompactification bubble
spacetime. The junctions marked as ``2-branes'' in this
lower-dimensional analogue become a $2$-sphere in $5d$.}
\label{Alexfig5}
\end{figure}
\section{Conclusions}
The Einstein-Maxwell theory in 6 dimensions admits a rich landscape of
vacua and provides a useful toy model for investigating the dynamics
of the multiverse. The landscape of this theory includes a $dS_6$
vacuum, a number of $dS_4$ and $AdS_4$ vacua with extra dimensions
compactified on $S_2$, and a number of $AdS_2$ vacua with extra
dimensions compactified on $S_4$. There are also some perturbatively
unstable $dS_4\times S_2$ and $dS_2\times S_4$ vacua. In this paper we studied quantum
tunneling transitions between vacua of different effective
dimensionality. We identified the appropriate instantons and
described the spacetime structure resulting from the tunneling. Our
results reinforce and extend the earlier analyses in
Refs.~\cite{Carroll09} and \cite{BPSPV09}.
We found that compactification transitions from $dS_6$ to $AdS_2
\times S_4$ occur through nucleation of pairs of electrically charged
black holes. The charge of these black holes is quantized in units of
the elementary charge of the theory, and their mass is determined by
the condition that the temperature at the black hole horizon $(r=r_+)$
is the same as that at the cosmological horizon $(r=r_c)$ in the
corresponding Reissner--Nordstr\"{o}m--de Sitter solution. These black
holes can be thought of as 0-branes of the theory. In the limit when
$r_+ \ll r_c$ the black holes become nearly extremal. The instanton
in this limit can be pictured as $S_6$ (Euclideanized de Sitter) with
a 0-brane worldline running along a big circle.
Transitions from $dS_6$ to $dS_4 \times S_2$ and $AdS_4 \times S_2$
proceed through nucleation of spherical, magnetically charged black
2-branes. Once again, in the limit when the black brane horizon
radius $r_+$ is small compared to $r_c$, the corresponding instanton
can be pictured as a brane whose Euclidean worldsheet (which has the form
of a 3-sphere) is wrapped around a ``big circle'' of $S_6$. The process
of black brane formation through quantum tunneling is very similar to
nucleation of spherical domain walls during inflation, which was
analyzed in Ref.~\cite{Basu}.
The initial radius of a nucleating brane is set by the de Sitter
horizon, $H_6^{-1}$. Once the brane is formed, this radius is
stretched by the exponential expansion of the universe, while the
transverse dimension of the brane (which can be identified with its
horizon radius $r_+$) remains fixed. Behind the horizon, in the black
brane interior, the spacetime is effectively 4-dimensional, with the
extra two dimensions compactified on $S_2$. Observers in this newly
formed compactified region see only a homogeneous FRW universe,
approaching either dS or AdS space.
Transitions from $dS_6$ to unstable $dS_2\times S_4$ (or $dS_4\times
S_2$) vacua are mediated by Nariai-type instantons, which are similar
to Hawking-Moss instantons. Small perturbations cause fragmentation
of these vacua into regions of $AdS_2\times S_4$ (or $AdS_4\times S_2$) and $dS_6$.
We also discussed decompactification transitions $dS_4 \times S_2 \to
dS_6$, in which the effective dimension of the daughter vacuum is
higher than that of the parent vacuum. These transitions are
described by the same instanton as the inverse process, $dS_6 \to dS_4
\times S_2$, but the resulting spacetime structure is rather
different. Observers in the parent vacuum see
nucleation and subsequent expansion of spherical bubbles, as in
ordinary CdL vacuum decay. The role of the bubble wall is played by a
spherical magnetically charged black brane (the same kind of brane as
in the inverse transition). The bubble interior is initially
anisotropic, but as the compact dimensions expand, it approaches
local isotropy, with the metric approaching the $6d$ de Sitter space.
However, the anisotropy may still be observable in the CMB
if inflation inside the bubble terminates after a
relatively small number of $e$-foldings. These issues will be
discussed separately in \cite{BlancoPillado:2010uw}.
Apart from the topology-changing transitions that we discussed here,
our model admits tunneling transitions between $(A)dS_4 \times S_2$
vacua having the same topology, but characterized by different values of the
magnetic flux. Such tunnelings, which are analogous to the Schwinger
process, and the corresponding instantons, have been discussed in
\cite{BPSPV09}. The role of the bubble walls in the resulting bubbles is played
by magnetically charged branes, which can be either solitonic branes
(if the model is augmented by adding the appropriate Higgs fields)
or black branes of the type described here. In the latter case, the region inside
the black brane horizon is a new $(A)dS_4 \times S_2$ universe. Thus,
the tunneling results in the formation of two compactified regions:
one in the interior of the expanding bubble, where the flux has been changed with
respect to the parent vacuum, and the other in the interior of the black brane itself.
Topology-changing transitions of the kind we discussed in this paper
will inevitably occur in the course of eternal inflation. As a
result, the multiverse will be populated by vacua of all possible
dimensionalities. A spacelike slice through such a multiverse might
look like Linde's famous ``cactus'' picture \cite{Linde-website};
its version adapted for our present model is shown in
Fig.~\ref{cactus}. We shall discuss the structure of this
multi-dimensional multiverse and its implications for the
measure problem in a separate paper.
Finally, as Fig.~\ref{cactus} suggests, we expect that there should be
other kinds of instantons that interpolate directly between the
$dS_4 \times S_2$ and $AdS_2 \times S_4$ sectors in our model. These
would be somewhat more complicated solutions that will not only change
the topology of spacetime, but would also act as sources
and sinks for the different fluxes involved.
\begin{figure}
\centering\leavevmode
\epsfig{file=compacFIG-16-rot.ps,width=10cm,angle=-90}
\caption {A spacelike slice through a multi-dimensional multiverse.
The dark regions represent black holes and black branes. This is an
adaptation of Andrei Linde's picture in \cite{Linde-website}.}
\label{cactus}
\end{figure}
\section{Acknowledgments}
We are grateful to Raphael Bousso, Oscar Dias, Jaume Garriga, Andrei Linde, Maulik Parikh, Oriol
Pujolas, Mike Salem and Ben Shlaer for very helpful
discussions. J.J.B.-P. would like to thank the Theory division at CERN for
their support and hospitality while part of this work was being completed. This work
was supported in part by the National Science Foundation Grants
06533561 (J.J.B.-P.) and 0353314 (A.V.).
\section*{Introduction}
The orthogonal groups of finite dimensional quadratic spaces over fields appear
in many branches of mathematics. They form an infinite family of algebraic
groups, indexed by the dimension of the underlying space. Moreover, if the
field is algebraically closed (or more generally quadratically closed) then
there is only one quadratic space of any given dimension, up to isomorphism.
However, over a general field $\mathbb{F}$ (we consider only characteristic
different from 2 in this manuscript) there may be many different quadratic spaces
of the same dimension, hence many orthogonal groups of the same dimension. One
way to view these subtleties is as different $\mathbb{F}$-rational structures on
the orthogonal group over the algebraic closure of $\mathbb{F}$.
In lower dimensions the orthogonal groups become isomorphic (up to finite
index) with members of other families of algebraic groups. These isomorphisms,
especially over $\mathbb{R}$, are very useful in several mathematical fields:
E.g., the relations between $SO^{+}(2,1)$ and $SL_{2}(\mathbb{R})$, between
$SO^{+}(2,2)$ and the product of two copies of $SL_{2}(\mathbb{R})$, and between
$SO^{+}(2,3)$ and $Sp_{4}(\mathbb{R})$ have far-reaching applications in the
theory of modular and automorphic forms---see \cite{[B]}, for example. Now,
these isomorphisms (or isogenies) are relatively simple to describe over a
quadratically closed field in dimensions up to 6. Indeed, in the case of algebraically
closed fields and dimensions 4 and 6 such a description appears already in
\cite{[vdW]}, but for more general fields that reference simply refers to the
resulting groups as the $\mathbb{F}$-rational structures mentioned above.
Section III of \cite{[D]} also presents some results for the general orthogonal
groups in these dimensions, but again using this descent method. Moreover, some
aspects of the theory become simpler for the general orthogonal group. Hence
in some cases our method is a true refinement of that of \cite{[D]}. In
addition, there is an isogeny in dimension 8 over $\mathbb{R}$, namely signature
$(2,6)$, which is related to a symplectic quaternionic group. This relation
appears in detail in \cite{[SH]}. The phenomenon of triality (see \cite{[KMRT]}
for more details) in the isotropic case may also be viewed as a type of an
exceptional isogeny.
The special orthogonal group of a quadratic space $V$ over $\mathbb{F}$ comes
with a natural central extension, with kernel $\mathbb{F}^{\times}$, called the
Gspin group or the even Clifford group. It has a general definition in terms of
Clifford algebras, namely the set of elements of the even Clifford algebra
of $V$ conjugation by which preserves the embedding of $V$ into the full
Clifford algebra. The spin group is a subgroup of the Gspin group, which maps
onto the spinor norm kernel of $SO(V)$ with kernel $\pm1$. It may also be
described in similar terms using the Clifford algebra. It is these groups, the
spin and Gspin groups, which mainly appear in the exceptional isogenies.
However, the condition of preservation of $V$ under conjugation is not so easy
to verify without delving deep into the multiplicative structure of the Clifford
algebra. This makes the actual structure of the groups thus obtained less
visible.
We use a different, more elementary method in order to determine the spin and
Gspin groups of spaces of dimensions up to 8, thus providing the groups which
are isogenous to the special orthogonal groups in these dimensions. The idea is
simple: We first observe that these groups are invariant under rescaling of the
space, allowing us to choose our space with some extra, useful properties. We
then show how a group (which ends up being the Gspin group) acts naturally on
this (rescaled) space, with kernel $\mathbb{F}^{\times}$. The surjectivity is a
consequence of the Cartan--Dieudonn\'{e} Theorem, since we show how all the
reflections can be realized in an appropriate semi-direct product which maps to
the full orthogonal group, showing that we do get the full Gspin group.
Moreover, in all these cases the spinor norm, which takes values in
$\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$, factors through a map to
$\mathbb{F}^{\times}$, which takes $r$ in the kernel $\mathbb{F}^{\times}$ to
$r^{2}$. This allows us to determine the spin group as the subgroup of the Gspin
group which maps to 1 under this map. In some isotropic cases there are
particular choices or conjugations which one may apply so that the
realizations of the spin and Gspin groups become well-known classical groups.
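Schematically (in standard notation, summarizing the description in the preceding paragraphs rather than introducing anything new), the groups fit into the exact sequence
\[
1 \longrightarrow \mathbb{F}^{\times} \longrightarrow \mathrm{Gspin}(V) \longrightarrow SO(V) \longrightarrow 1 ,
\]
and the spin group is the kernel of the multiplicative map $\mathrm{Gspin}(V)\to\mathbb{F}^{\times}$ lifting the spinor norm, whose restriction to the central $\mathbb{F}^{\times}$ is $r \mapsto r^{2}$.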
In many cases, especially the isotropic ones, these groups have equivalent
representations, which appear in dimensions 3 and 4 in many places in the
literature. We give a general, simple construction for these equivalent
representations. Dimension 6 is strongly related to the second exterior power of
a 4-dimensional space. We provide the resulting equivalent representations in
this case too.
The possible complexity of quadratic spaces increases with the size of the
group $\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$. It is thus expedient to
see what the results are for the cases where this group is small, in particular
when it is 1 or 2. In the latter case one must distinguish between two cases,
the Euclidean and the quadratically finite ones, according to whether the field
admits a non-split quaternion algebra or not. We give full details for these
cases.
This manuscript is divided into 12 sections. In Section \ref{Alg} we present the
required notation for central simple algebras over fields, and Section
\ref{Grp} contains the definitions for the groups which we shall encounter.
Section \ref{Dim123} is concerned with dimensions up to 3, and Section
\ref{Dim4} deals with the 4-dimensional case. In Section \ref{Dim6d1} we
examine the case of 6 dimensions and trivial discriminant, and Section
\ref{Dim5} considers 5-dimensional quadratic spaces. Section \ref{Dim6gen} we go
on to general 6-dimensional quadratic spaces, and Section \ref{Wedge} presents
the equivalent representations which arise from the exterior square of
4-dimensional spaces. Section \ref{Dim8id1} takes care of isotropic
8-dimensional quadratic spaces with trivial discriminant, while in Section
\ref{Dim7rd} spaces of dimension 7 which represent their discriminant are
examined. Section \ref{Dim8igen} considers general isotropic spaces of
dimension 8, and Section \ref{ManySq} presents the results for fields in which
the squares have index 1 or 2 in the group of non-zero elements in detail.
\section{Finite Dimensional Algebras \label{Alg}}
Let $\mathbb{F}$ be a field of characteristic different from 2. An
\emph{algebra} over $\mathbb{F}$ is a ring $R$ with identity together with an
embedding of $\mathbb{F}$ into the center of $R$ (taking $1\in\mathbb{F}$ to $1
\in R$). We shall only consider finite-dimensional algebras here, so that we
shall write \emph{algebra} for \emph{finite-dimensional algebra} throughout.
Wedderburn's Theorem states that any semisimple $\mathbb{F}$-algebra is the product of simple
rings $R$, and each such ring admits two maps, the \emph{reduced norm} and the
\emph{reduced trace}, to its center, which is a finite field extension
$\mathbb{K}$ of $\mathbb{F}$. The latter map, denoted $Tr^{R}_{\mathbb{K}}$, is
a homomorphism of additive groups. The former, which we denote
$N^{R}_{\mathbb{K}}$, is multiplicative, yielding a group homomorphism from the
group $R^{\times}$ of invertible elements of $R$ into $\mathbb{K}^{\times}$. The
field extension $\mathbb{K}/\mathbb{F}$ comes with its norm and trace maps
$N^{\mathbb{K}}_{\mathbb{F}}$ and $Tr^{\mathbb{K}}_{\mathbb{F}}$, and the total
norm and trace maps $N^{R}_{\mathbb{F}}$ and $Tr^{R}_{\mathbb{F}}$ are the
compositions $N^{\mathbb{K}}_{\mathbb{F}} \circ N^{R}_{\mathbb{K}}$ and
$Tr^{\mathbb{K}}_{\mathbb{F}} \circ Tr^{R}_{\mathbb{K}}$ respectively. The norm
and trace from the total ring $\prod_{i}R_{i}$ into $\mathbb{F}$ and the
product of the $N^{R_{i}}_{\mathbb{F}}$ and the sum of the
$Tr^{R_{i}}_{\mathbb{F}}$ respectively.
All the tensor products (of vector spaces or algebras) will be over
$\mathbb{F}$, hence the index will be omitted. In case one of the multipliers in
a tensor product is a commutative $\mathbb{F}$-algebra, we shall shorten the notation using
subscripts, such as $R_{\mathbb{E}}$ for $R\otimes\mathbb{E}$.
Many of our algebras will come with an involution, sending $x \in R$ to
$\overline{x}$, which is $\mathbb{F}$-linear and inverts the order of products.
Then $R$ decomposes, as a vector space, as the direct sum of the space $R^{+}$
of \emph{symmetric} elements which are invariant under the involution, and
the space $R^{-}$ of the \emph{anti-symmetric} ones, which are inverted by it
(see Section 2 of \cite{[KMRT]}, and recall that $ch\mathbb{F}\neq2$). In fact,
we shall encounter two cases. In the first case $R$ will be simple with center
$\mathbb{F}$ (hence of dimension $n^{2}$ over $\mathbb{F}$ for some number $n$
called the \emph{degree} of $R$), in which case the involution is said to be
\emph{of the first kind}. Then the involution can be either \emph{orthogonal},
where $R^{+}$ has dimension $\frac{n(n+1)}{2}$ and $R^{-}$ is of dimension
$\frac{n(n-1)}{2}$, or \emph{symplectic}, where the dimensions are interchanged.
The second case is where the center of $R$ is an (\'{e}tale) quadratic
$\mathbb{F}$-algebra $\mathbb{E}$, which is either a field extension (which is
separable by the assumption on $ch\mathbb{F}$) with Galois automorphism $\rho$
(denoted $z \mapsto z^{\rho}$ for $z\in\mathbb{E}$), or
$\mathbb{E}=\mathbb{F}\times\mathbb{F}$, with $\mathbb{F}$ embedded diagonally
and $\rho$ interchanging the two coordinates. Note that for $z$ in a quadratic
$\mathbb{F}$-algebra $\mathbb{E}$ we have
$z^{\rho}=Tr^{\mathbb{E}}_{\mathbb{F}}(z)-z$. We shall encounter only such
algebras which come from central simple algebras over $\mathbb{F}$, i.e., those
$R$ over $\mathbb{E}$ such that there exists a central simple algebra $S$ over
$\mathbb{F}$, with involution $x\mapsto\overline{x}$, such that $R \cong
S_{\mathbb{E}}$. Then we write $y^{\rho}$ for the image of $y \in R \cong
S_{\mathbb{E}}$ under $Id_{S}\otimes\rho$, and the involution in question, which
is \emph{of the second type} or \emph{unitary}, is
$y\mapsto\overline{y}^{\rho}$. In this case $R$ has dimension $2n^{2}$ over
$\mathbb{F}$, and the spaces $R^{+}$ and $R^{-}$ are both of dimension $n^{2}$
over $\mathbb{F}$ (and are \emph{not} vector spaces over $\mathbb{E}$). In case
$\mathbb{E}=\mathbb{F}\times\mathbb{F}$ we have $R=S \times S$, and
$\overline{(s,t)}^{\rho}=(\overline{t},\overline{s})$.
For a finite-dimensional $\mathbb{F}$-algebra $R$, let $M_{n}(R)$ be the ring of
$n \times n$ matrices over $R$. In case $R$ is commutative, the reduced norm and
trace from $M_{n}(R)$ into $R$ are just the matrix determinant and matrix trace
respectively. If furthermore $R=\mathbb{E}$ is a quadratic $\mathbb{F}$-algebra,
we shorten $(M^{t})^{\rho}=(M^{\rho})^{t}$ to just $M^{t\rho}$, where $M^{t}$
denotes the matrix which is the transpose of $M$. Similarly, for
$z\in\mathbb{E}^{\times}$ we write $z^{-\rho}$ for
$(z^{-1})^{\rho}=(z^{\rho})^{-1}$.
\smallskip
A \emph{quaternion algebra} $B$ over $\mathbb{F}$ is a central simple
$\mathbb{F}$-algebra of degree 2. It comes with a natural (symplectic)
involution, called the \emph{main involution}, which we denote by
$\iota:x\mapsto\overline{x}=Tr^{B}_{\mathbb{F}}(x)-x$ (or sometimes $\iota_{B}$
where the quaternion algebra will not be clear from the context). This is the
only symplectic involution on $B$---see Proposition 2.21 of \cite{[KMRT]}.
Since $ch\mathbb{F}\neq2$, every quaternion algebra is generated by two
anti-commuting (traceless) elements with squares in $\mathbb{F}$. The algebra in
which the squares of these elements are $\alpha$ and $\beta$ respectively will
be denoted $\big(\frac{\alpha,\beta}{\mathbb{F}}\big)$. We may multiply each
generator by an element of $\mathbb{F}^{\times}$, so that multiplying $\alpha$
or $\beta$ by squares yields an isomorphic quaternion algebra. If $\mathbb{E}$
is a quadratic $\mathbb{F}$-algebra and $B$ is a quaternion algebra then the
norms $N^{\mathbb{E}}_{\mathbb{F}}$ and $N^{B}_{\mathbb{F}}$ are quadratic
functions, and we have
\begin{lem}
The equalities
$N^{\mathbb{E}}_{\mathbb{F}}(z+w)=N^{\mathbb{E}}_{\mathbb{F}}(z)+N^{\mathbb{E}}
_{\mathbb{F}}(w)+Tr^{\mathbb{E}}_{\mathbb{F}}(zw^{\rho})$ and
$N^{B}_{\mathbb{F}}(x+y)=N^{B}_{\mathbb{F}}(x)+N^{B}_{\mathbb{F}}(y)+Tr^{B}_{
\mathbb{F}}(x\overline{y})$ hold for every $z$ and $w$ in $\mathbb{E}$ and $x$
and $y$ in $B$. \label{BEpol}
\end{lem}
\begin{proof}
The equalities $N^{\mathbb{E}}_{\mathbb{F}}(t)=tt^{\rho}$ and
$Tr^{\mathbb{E}}_{\mathbb{F}}(t)=t+t^{\rho}$ hold for every $t\in\mathbb{E}$,
and we also have $N^{B}_{\mathbb{F}}(s)=s\overline{s}$ and
$Tr^{B}_{\mathbb{F}}(s)=s+\overline{s}$ for every $s \in B$. The lemma follows
directly from these equalities.
\end{proof}
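For instance, if $\mathbb{E}=\mathbb{F}(\sqrt{d})$ and we write $z=a+b\sqrt{d}$
and $w=c+e\sqrt{d}$ with $a$, $b$, $c$, and $e$ in $\mathbb{F}$, then
$N^{\mathbb{E}}_{\mathbb{F}}(z)=a^{2}-db^{2}$ and
$Tr^{\mathbb{E}}_{\mathbb{F}}(zw^{\rho})=2(ac-dbe)$, so that the first equality
of Lemma \ref{BEpol} becomes the usual polarization identity for the binary
quadratic form $a^{2}-db^{2}$.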
For such $\mathbb{E}$ and $B$ we denote $\mathbb{E}_{0}$ and $B_{0}$ the spaces
of traceless elements in $\mathbb{E}$ and $B$. These are the spaces
$\mathbb{E}^{-}$ and $B^{-}$ with respect to the involutions $\rho$ and $\iota$
respectively, and they have respective dimensions 1 and 3 (as $\rho$ is unitary
and $\iota$ is symplectic).
A quaternion algebra $B$ over $\mathbb{F}$ either \emph{splits}, i.e., is
isomorphic to $M_{2}(\mathbb{F})$, or is a division algebra. In the former case
$Tr^{B}_{\mathbb{F}}$ is the matrix trace, $N^{B}_{\mathbb{F}}$ is the
determinant, and $\iota_{B}$ is the adjoint involution $\binom{a\ \ b}{c\ \
d}\mapsto\binom{\ \ d\ \ -b}{-c\ \ \ \ a}$. A \emph{splitting field} of $B$ is
an extension $\mathbb{K}$ of $\mathbb{F}$ such that the quaternion algebra
$B_{\mathbb{K}}$ over $\mathbb{K}$ splits. A quadratic extension $\mathbb{K}$ of
$\mathbb{F}$, with Galois automorphism $\sigma$, is a splitting field of $B$ if
and only if it can be embedded into $B$. By choosing an embedding, the subset of
$B$ which anti-commutes with $\mathbb{K}$ (namely $x \in B$ such that
$xz=z^{\sigma}x$ for all $z\in\mathbb{K}$) forms a 1-dimensional subspace over
$\mathbb{K}$, which is contained in $B_{0}$. The choice of the square $\delta$
of an invertible element there (which is a representative of a class in
$\mathbb{F}^{\times}/N^{\mathbb{K}}_{\mathbb{F}}(\mathbb{K}^{\times})$) yields
an embedding of $B$ into $M_{2}(\mathbb{K})$ in which $z+wj \in B$ (with
$z$ and $w$ from $\mathbb{K}$) is taken to $\binom{\ z\ \ \ \delta
w}{w^{\sigma}\ \ z^{\sigma}}$. We denote the image of this algebra as
$(\mathbb{K},\sigma,\delta)$. For this algebra we shall use
\begin{lem}
The action of $\sigma=Id_{B}\otimes\sigma$ on
$B_{\mathbb{K}}=M_{2}(\mathbb{K})$ for $B=(\mathbb{K},\sigma,\delta)$ is
defined by $\binom{a\ \ b}{c\ \ d}\mapsto\binom{0\ \ \delta}{1\ \
0}\binom{a^{\sigma}\ \ b^{\sigma}}{c^{\sigma}\ \ d^{\sigma}}\binom{0\ \
\delta}{1\ \ 0}/\delta$. \label{KsplitB}
\end{lem}
\begin{proof}
As $B_{\mathbb{K}}=M_{2}(\mathbb{K})$, it suffices to show that
$(\mathbb{K},\sigma,\delta)$ is the set of matrices which are stable under the
operation which is asserted to be $Id_{B}\otimes\sigma$. As this operation
takes $M=\binom{a\ \ b}{c\ \ d}$ to $\binom{\ d^{\sigma}\ \ \ \ \delta
c^{\sigma}}{b^{\sigma}/\delta\ \ \ a^{\sigma}\ }$, we find that $M$ is
invariant under this operation if and only if $d=a^{\sigma}$ and $b=\delta
c^{\sigma}$. As these conditions indeed characterize
$(\mathbb{K},\sigma,\delta)$, this proves the lemma.
\end{proof}
For the split algebra $B=M_{2}(\mathbb{F})$ we have the following
\begin{lem}
For any $g \in M_{2}(\mathbb{F})$, conjugation by $\binom{0\ \ -1}{1\ \ \ \ 0}$
takes $\overline{g}$ and $g^{t}$ to one another. \label{Sadjt}
\end{lem}
\begin{proof}
When $g=\binom{a\ \ b}{c\ \ d}$ we have $g^{t}=\binom{a\ \ c}{b\ \ d}$ and
$\overline{g}=\binom{\ \ d\ \ -b}{-c\ \ \ \ a}$. The result now follows from a
simple calculation.
\end{proof}
Lemma \ref{Sadjt} will allow us to obtain equivalent models for our spin and
Gspin groups, which are also used in the literature. Note that it relates a
symplectic involution on $M_{2}(\mathbb{F})$ with an orthogonal one.
Another type of algebra which we shall encounter is that of \emph{bi-quaternion
algebras}, which are simple algebras $A$ of degree 4 that may be presented as
$A \cong B \otimes C$ where $B$ and $C$ are quaternion algebras. Given such a
presentation, there is an involution
$\iota_{B}\otimes\iota_{C}:x\mapsto\overline{x}$ on $A$, which is orthogonal.
However, $A$ may be presented as the tensor product of two quaternion algebras
in many ways, each giving a different orthogonal involution. Moreover, not all
the orthogonal involutions on $A$ may be obtained in this way, and $A$ also
admits symplectic involutions---see Proposition 2.7 of \cite{[KMRT]}. However, we
shall use the notation $\overline{x}$ for a bi-quaternion algebra only when the
presentation as $B \otimes C$ is clear from the context.
\smallskip
We shall be needing subgroups of the groups of the form $R^{\times}$ arising
from simple $\mathbb{F}$-algebras $R$, which are defined by the norm. If
$\mathbb{K}$ is the center of $R$ and $H\subseteq\mathbb{K}^{\times}$, we shall
denote $R^{H}$ the subgroup of $R^{\times}$ consisting of those
elements $x$ such that $N^{R}_{\mathbb{K}}(x) \in H$. We shall
extend this notation to algebras of the form $R=S \times S$ for a central
simple $\mathbb{F}$-algebra $S$, with subgroups of
$\mathbb{F}^{\times}\times\mathbb{F}^{\times}$. If $\mathbb{E}$ is any
commutative $\mathbb{F}$-algebra then $\mathbb{E}^{H}$ is defined similarly.
Note that for an algebra of the form $S_{\mathbb{E}}$ with $S$ central simple
over $\mathbb{F}$, we shall use the norm to the center $\mathbb{E}$ and not to
$\mathbb{F}$ for defining $S_{\mathbb{E}}^{H}$.
The group of invertible matrices in $M_{n}(R)$ will be denoted $GL_{n}(R)$ also
when $R$ is not commutative. If $R$ comes with an involution
$x\mapsto\overline{x}$, then $M\mapsto\overline{M}^{t}$ is an involution on
$M_{n}(R)$ of the same type as the involution on $R$. Note that if $R$ is not
commutative then $M\mapsto\overline{M}$ and $M \mapsto M^{t}$ do not behave
well with respect to products. If $R$ is simple then $M_{n}(R)$ is also simple,
with the same center $\mathbb{K}$, and the group $M_{n}(R)^{H}$ for a subgroup
$H\subseteq\mathbb{K}^{\times}$ will be denoted $GL_{n}^{H}(R)$. In case
$H=\{1\}$ we shall use just the superscript 1 (with no brackets), and where
$R=\mathbb{E}$ is a field extension of $\mathbb{F}$, we shall write
$SL_{n}(\mathbb{E})$ for $GL_{n}^{1}(\mathbb{E})$.
\section{Orthogonal and Other Groups \label{Grp}}
We shall be considering quadratic spaces over $\mathbb{F}$. As
$ch\mathbb{F}\neq2$, this is equivalent to spaces endowed with a symmetric
bilinear form, so that we use the bilinear and quadratic forms interchangeably.
All the spaces we consider will be of (positive) finite dimension and
non-degenerate, and these assumptions will be made even when not stated
explicitly. Many of our vector spaces will be subsets of $\mathbb{F}$-algebras,
so that we write the pairing (or product) of two vectors $v$ and $w$ of a
quadratic space $V$ as $\langle v,w \rangle$. Moreover, the number $\langle v,v
\rangle$ will be written $|v|^{2}$ (in order to distinguish it from $v^{2}$ in
the algebra involved), and will be called the \emph{vector norm} (and
not just norm) of $v$. In case confusion may arise as to which vector space is
considered, we may write also $\langle v,w \rangle_{V}$ and $|v|_{V}^{2}$. We
have
\begin{lem}
The equality $|v+w|^{2}=|v|^{2}+|w|^{2}+2\langle v,w \rangle$ holds for any $v$
and $w$ in $V$. \label{Vpol}
\end{lem}
\begin{proof}
The lemma follows directly from the definition of the vector norm and the
symmetry of the bilinear form.
\end{proof}
While no ``absolute value'' $|v|$ exists in general, we shall write the $m$th
power of $|v|^{2}$ as $|v|^{2m}$. Two vectors $v$ and $w$ are said to be
\emph{orthogonal} or \emph{perpendicular} if $\langle v,w \rangle=0$, and
$v^{\perp}$ denotes the (1-codimensional) subspace of elements of $V$ which are
perpendicular to $v$. A vector $0 \neq v \in V$ is called \emph{isotropic} if
$|v|^{2}=0$, and \emph{anisotropic} otherwise. A quadratic space $V$ is called
\emph{isotropic} if it contains (non-zero) isotropic vectors, and
\emph{anisotropic} otherwise. An \emph{orthogonal basis} is a basis consisting
of vectors which are all orthogonal to one another (hence they must all be
anisotropic), and every quadratic space admits such a basis since
$ch\mathbb{F}\neq2$. A \emph{rescaling} of a quadratic space $V$ is the same
quadratic space but with all the pairings and vector norms multiplied by a
global scalar from $\mathbb{F}^{\times}$. The \emph{determinant} of a quadratic
space $V$ is defined as the determinant of a Gram matrix representing the
bilinear form of $V$ in some basis (which reduces to the product of the vector
norms of an orthogonal basis) in
$\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$. However, it turns out more
useful to consider the \emph{discriminant} of $V$, which is the determinant
multiplied by $(-1)^{n(n-1)/2}$ where $n$ is the dimension of $V$. By some abuse
of notation, we shall sometimes treat the discriminant as an actual
representative from $\mathbb{F}^{\times}$ mapping to the appropriate class in
$\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$, but this will always be
independent of the representative chosen. Note that the discriminant is
invariant under rescaling if the dimension is even, but it is multiplied by the
rescaling factor when the dimension is odd.
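For instance, a 2-dimensional space with an orthogonal basis whose vectors have
vector norms $a$ and $b$ has determinant $ab$ and discriminant $-ab$. Rescaling
by $t\in\mathbb{F}^{\times}$ multiplies the determinant by $t^{2}$, so that the
class of the discriminant $-ab$ in
$\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$ is indeed unchanged, while in
dimension 1 the discriminant equals the determinant and is multiplied by the
rescaling factor $t$.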
\smallskip
Given a quadratic space $V$, we define the \emph{orthogonal group} $O(V)$ to be
the group of linear transformations of $V$ which preserve the bilinear form.
Given an anisotropic vector $v \in V$, the map which inverts $v$ and leaves the
space $v^{\perp}$ invariant belongs to $O(V)$. We call this map the
\emph{reflection inverting $v$}. The only property of $O(V)$ which we shall use
here is the \emph{Cartan--Dieudonn\'{e} Theorem}, namely
\begin{prop}
The group $O(V)$ is generated by reflections. \label{CDT}
\end{prop}
\begin{proof}
For a simple proof, see, e.g., Corollary 4.3 of \cite{[MH]}.
\end{proof}
The determinant is a surjective homomorphism $O(V)\to\{\pm1\}$ (as reflections
have determinant $-1$) and the kernel, the \emph{special orthogonal group}, is
denoted $SO(V)$. It consists of those transformations which can be written as
the product of an \emph{even} number of reflections. There is a homomorphism
$O(V)\to\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$, called the \emph{spinor
norm}, which takes the reflection inverting an anisotropic vector $v$ to the
image of $|v|^{2}$ in $\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$. As with
the discriminant, we may sometimes say that an element of $O(V)$ has spinor norm
$t\in\mathbb{F}^{\times}$, meaning that its spinor norm is
$t(\mathbb{F}^{\times})^{2}\in\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$.
Note that rescaling the bilinear form leaves the spinor norms of elements of
$SO(V)$ invariant, but multiplies those of elements in
$O(V) \setminus SO(V)$ by the rescaling factor. Hence for $SO(V)$ the spinor
norm is well-defined also when we consider $V$ up to rescaling. The subgroup of
$SO(V)$ consisting of elements having spinor norm 1 (i.e., a square) is denoted
$SO^{1}(V)$. Note that the global inversion $-Id_{V}$ always has spinor norm
which equals the determinant of the space, in correspondence with $-Id_{V}$
being in $SO(V)$ (hence having a spinor norm which is invariant under
rescalings) if and only if $V$ has even dimension.
We shall define the \emph{spin group} of $V$ to be a double cover of
$SO^{1}(V)$. The \emph{Gspin group}, or the \emph{even Clifford group}, of $V$
is defined as a group mapping onto $SO(V)$ with kernel
$\mathbb{F}^{\times}$. We wish to construct these groups, in low dimensions,
without needing to investigate the Clifford algebra of $V$. This becomes much
simpler after some normalization by rescaling. Therefore we do not consider
groups like the pin group, $O^{1}(V)$, and the full Clifford group, which map
onto subgroups of $O(V)$ which are not contained in $SO(V)$, as they are not
invariant under rescaling.
\smallskip
Let $\mathbb{E}$ be a quadratic extension of $\mathbb{F}$, with Galois
automorphism $\rho$. A \emph{unitary space} over $\mathbb{E}$ (with respect to
$\rho$) is a (again finite-dimensional and non-trivial) vector space
over $\mathbb{E}$ with a (non-degenerate) Hermitian sesqui-linear form, where
the conjugation is defined using $\rho$. Unitary spaces may be defined in terms
of Hermitian Gram matrices, which may always be reduced (by the choice of an
appropriate basis) to regular diagonal matrices over the fixed field
$\mathbb{F}$ of $\rho$. A unitary space also has a determinant (and a
discriminant), which are similarly defined and lie in
$\mathbb{F}^{\times}/N^{\mathbb{E}}_{\mathbb{F}}(\mathbb{E}^{\times})$. We
shall allow ourselves the same abuse of notation for these unitary determinants
and discriminants. We shall present unitary spaces only through representing
Gram matrices (which are Hermitian and regular), and the \emph{unitary group} of
such a matrix $M$, denoted $U_{\mathbb{E},\rho}(M)$, is the group of linear
transformations of a unitary space which preserve the sesqui-linear form
defined by $M$, namely those $g \in GL_{n}(\mathbb{E})$ such that
$gMg^{t\rho}=M$. Elements of $U_{\mathbb{E},\rho}(M)$ have determinants in
$\mathbb{E}^{1}$, and we define $SU_{\mathbb{E},\rho}(M)$ to be the subgroup of
unitary matrices whose determinant is 1. Matrices which multiply the
sesqui-linear form (hence $M$) by a scalar from $\mathbb{E}^{\times}$, which
must then be in $\mathbb{F}^{\times}$, form the \emph{general unitary group}
$GU_{\mathbb{E},\rho}(M)$. Now, if $g \in GU_{\mathbb{E},\rho}(M)$ multiplies
the sesqui-linear form (hence $M$) by a scalar $s=s(g)\in\mathbb{F}^{\times}$,
and the space has dimension $n$, then we have $N^{\mathbb{E}}_{\mathbb{F}}(\det
g)=s(g)^{n}$. In case $n$ is even, we define the group
$GSU_{\mathbb{E},\rho}(M)$ consisting of elements $g$ of the latter group such
that $\det g=s(g)^{n/2}$ (note that this is not equivalent to the condition that
$\det g\in\mathbb{F}^{\times}$---see Lemma \ref{ind2int} and Corollary
\ref{iso6gen} below for an example with $n=4$). It follows that
$SU_{\mathbb{E},\rho}(M)=U_{\mathbb{E},\rho}(M) \cap GSU_{\mathbb{E},\rho}(M)$.
All these groups are invariant under rescaling of $M$ by an element of
$\mathbb{F}^{\times}$. In small dimension we may relate the (general) unitary
groups to other groups, as is seen in the following
\begin{lem}
If $M$ is 1-dimensional then $GU_{\mathbb{E},\rho}(M)$,
$U_{\mathbb{E},\rho}(M)$, and $SU_{\mathbb{E},\rho}(M)$ are
$\mathbb{E}^{\times}$, $\mathbb{E}^{1}$, and $\{1\}$ respectively. In the
2-dimensional case, $GSU_{\mathbb{E},\rho}(M)$ is conjugate to $B^{\times}$ for
some quaternion algebra $B$ over $\mathbb{F}$ which is split over $\mathbb{E}$,
and $SU_{\mathbb{E},\rho}(M)$ is conjugate to $B^{1}$. \label{uniEB}
\end{lem}
\begin{proof}
In case $M$ is just a scalar (which may be taken to be 1), the unitary
relations on $z \in GL_{1}(\mathbb{E})=\mathbb{E}^{\times}$ are just
$zz^{\rho}\in\mathbb{F}^{\times}$ (which poses no further restriction on $z$),
$zz^{\rho}=1$, and $z=1$ respectively. In the 2-dimensional case we may take
$M$ to be diagonal (this change might impose some conjugacy relation), and
after rescaling we may assume that $M=\binom{-\varepsilon\ \ 0}{\ \ 0\ \ 1}$
where $\varepsilon$ represents the discriminant of the unitary space. Now,
multiplying the defining relation of $g \in GSU_{\mathbb{E},\rho}(M)$ by
$\binom{0\ \ -1}{1\ \ \ \ 0}$ from the right and using Lemma \ref{Sadjt}
transforms this relation to $g\binom{0\ \ \varepsilon}{1\ \
0}\overline{g}^{\rho}=\det g\binom{0\ \ \varepsilon}{1\ \ 0}$. As $\det
g=g\overline{g}$, the latter relation shows that $\overline{g}$ is invariant
under the relation from Lemma \ref{KsplitB}, implying that $\overline{g}$ lies
in $B=(\mathbb{E},\rho,\varepsilon)$. As the latter algebra is closed under the
adjoint involution (which restricts to its main involution), we find that $g
\in B^{\times}$ as well. As $SU_{\mathbb{E},\rho}(M)$ is the subgroup of
determinant 1 elements in $GSU_{\mathbb{E},\rho}(M)$, it is taken to $B^{1}$ in
this map. This proves the lemma.
\end{proof}
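For instance, taking $\mathbb{F}=\mathbb{R}$, $\mathbb{E}=\mathbb{C}$, and
$M=\binom{1\ \ 0}{0\ \ 1}$ (so that $\varepsilon=-1$ in the proof above), the
algebra $B=(\mathbb{C},\rho,-1)$ is the algebra $\mathbb{H}$ of Hamilton
quaternions, and Lemma \ref{uniEB} recovers the classical identification of
$SU(2)=SU_{\mathbb{C},\rho}(M)$ with the group $\mathbb{H}^{1}$ of quaternions
of norm 1.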
For a subgroup $H$ of $\mathbb{F}^{\times}$, we write
$GU_{\mathbb{E},\rho}^{H}(M)$, as well as $GSU_{\mathbb{E},\rho}^{H}(M)$ if $n$
is even, for the subgroup of the appropriate groups consisting of matrices $g$
whose multiplier $s(g)$ lies in $H$. Thus
$GSU_{\mathbb{E},\rho}^{H}(M)=GSU_{\mathbb{E},\rho}(M) \cap
GU_{\mathbb{E},\rho}^{H}(M)$. The same argument as in Lemma \ref{uniEB} shows
that if $M$ is 1-dimensional then $GU_{\mathbb{E},\rho}^{H}(M)$ is
$\mathbb{E}^{H}$, while for 2-dimensional $M$ the group
$GSU_{\mathbb{E},\rho}^{H}(M)$ is conjugate to $B^{H}$ for some quaternion
algebra $B$ over $\mathbb{F}$.
\smallskip
The (classical) \emph{symplectic group} $Sp_{2n}(\mathbb{F})$ is the group
consisting of those matrices $g \in GL_{2n}(\mathbb{F})$ such that
$g\binom{0\ \ -I}{I\ \ \ \ 0}g^{t}=\binom{0\ \ -I}{I\ \ \ \ 0}$. More generally,
a symplectic group is the group of linear transformations of a vector space
(which must be of even dimension) which preserve a non-degenerate anti-symmetric
bilinear form on it, but every such group is isomorphic (or conjugate) to the
classical one. The \emph{general symplectic group} $GSp_{2n}(\mathbb{F})$
consists of those matrices whose action multiplies $\binom{0\ \ -I}{I\ \ \ \ 0}$
by a scalar. If we consider a quadratic extension $\mathbb{E}$ of $\mathbb{F}$
with Galois automorphism $\rho$, then preserving an anti-Hermitian matrix via
$g:M \mapsto gMg^{t\rho}$ is the same as preserving the Hermitian matrix which
is obtained from $M$ through multiplication by a scalar from $\mathbb{E}_{0}$,
so that no new groups are obtained in this way. On the other hand, using a
central simple algebra $R$ with an involution $x\mapsto\overline{x}$ of the
first kind, one defines further types of symplectic groups. We have the
operation of $GL_{n}(R)$ on $M_{n}(R)$ via $g:M \mapsto gM\overline{g}^{t}$, and
any matrix (Hermitian, anti-Hermitian, or neither) may be used to define such a
group. Note that $g:M \mapsto gMg^{t}$ and $g:M \mapsto gM\overline{g}$ may not
be used here, as they do not define actions of $GL_{n}(R)$ on $M_{n}(R)$. Now,
given any $M \in GL_{n}(R)$, we let $Sp_{R}(M)$ be the group of elements $g \in GL_{n}(R)$
which preserve $M$, and $GSp_{R}(M)$ consists of those matrices whose action
multiplies $M$ by a scalar from $\mathbb{F}^{\times}$. Note that rescaling $M$
by a factor from $\mathbb{F}^{\times}$ still does not change these groups. If
$B$ is a quaternion algebra (with its main involution) and $M$ is Hermitian, we
may choose a basis for our space such that $M$ is diagonal, with entries from
$\mathbb{F}$. In this case we just have
\begin{lem}
If $M$ has dimension 1 then $GSp_{B}(M)=B^{\times}$ and $Sp_{B}(M)=B^{1}$.
\label{Sp1B}
\end{lem}
\begin{proof}
Indeed, $M$ is just a scalar from $\mathbb{F}^{\times}$, which may be taken to
be 1. Hence the $GSp$ relation just states that
$x\overline{x}\in\mathbb{F}^{\times}$ (which poses no restriction on the element
$x \in GL_{1}(B)=B^{\times}$), and the $Sp$ condition means $x\overline{x}=1$.
This proves the lemma.
\end{proof}
In resemblance with the classical case, we shall use $Sp_{2n}(R)$ and
$GSp_{2n}(R)$ for the case where $M$ is the anti-Hermitian matrix $\binom{0\ \
-I}{I\ \ \ \ 0}$. As usual, for a subgroup $H\subseteq\mathbb{F}^{\times}$ we
define $GSp_{R}^{H}(M)$ and $GSp_{2n}^{H}(R)$ to be the subgroups of $GSp_{R}(M)$
and $GSp_{2n}(R)$ in which the multiplier comes from $H$. The proof of Lemma
\ref{Sp1B} shows that if $B$ is a quaternion algebra with its main involution
and $M$ is 1-dimensional and Hermitian then the first group is just $B^{H}$.
\section{Dimension $\leq3$ \label{Dim123}}
In dimension 1 we have only one quadratic space (up to isomorphism), namely
$\mathbb{F}$ itself, and the bilinear form is determined by the norm of 1 (which
is most naturally normalized to be 1). Proposition \ref{CDT} shows that
$O(\mathbb{F})$ is generated by the only reflection $-Id_{\mathbb{F}}$, so that
it equals $\{\pm1\}$ and $SO(\mathbb{F})=\{1\}$. The spinor norm is just 1 on
$SO(\mathbb{F})$ (i.e., $SO^{1}(\mathbb{F})=SO(\mathbb{F})$). The Gspin group is
thus $\mathbb{F}^{\times}$, and the spin group is $\{\pm1\}$, both mapping to
the trivial group $SO(\mathbb{F})=\{1\}$.
\medskip
For dimension 2 we define $\mathbb{E}$ to be $\mathbb{F}(\sqrt{d})$, where $d$
is the discriminant of the space. Our space is described by the following
\begin{lem}
Any 2-dimensional quadratic space of discriminant $d$ is isometric to
$\mathbb{E}$, with a rescaling of the quadratic form
$N^{\mathbb{E}}_{\mathbb{F}}$. Rescaled appropriately, we get $2\langle z,w
\rangle=Tr^{\mathbb{E}}_{\mathbb{F}}(zw^{\rho})$ for $z$ and $w$ in
$\mathbb{E}$. \label{sp2}
\end{lem}
\begin{proof}
The second equality for $\mathbb{E}$ with
$|z|^{2}=N^{\mathbb{E}}_{\mathbb{F}}(z)$ follows from Lemmas \ref{BEpol} and
\ref{Vpol}. Hence $1\in\mathbb{E}$ satisfies $|1|^{2}=1$, an element from
$\mathbb{E}_{0}$ has vector norm $-d$ (up to $(\mathbb{F}^{\times})^{2}$), and
they are orthogonal. Now, rescaling our original space such that some
anisotropic vector has vector norm 1, we find the orthogonal complement must be
spanned by a vector whose vector norm is the determinant $-d$, just like in
$\mathbb{E}$. This proves the lemma.
\end{proof}
Multiplication from $\mathbb{E}^{1}$ preserves the vector norms, which defines a
map $\mathbb{E}^{1} \to O(\mathbb{E})$, which is clearly injective. However, in
order to define the spin and Gspin groups and be in the same spirit as the
constructions for higher dimensions, we shall use
\begin{lem}
The action $g:z \mapsto gzg^{-\rho}$ defines a map $\mathbb{E}^{\times} \to
O(\mathbb{E})$, with kernel $\mathbb{F}^{\times}$. The semi-direct product of
$Gal^{\mathbb{E}}_{\mathbb{F}}=\{Id_{\mathbb{E}},\rho\}$ with
$\mathbb{E}^{\times}$ also maps to $O(\mathbb{E})$. \label{ac2}
\end{lem}
\begin{proof}
As $N^{\mathbb{E}}_{\mathbb{F}}(g^{\rho})=N^{\mathbb{E}}_{\mathbb{F}}(g)$ and
the norm is multiplicative, we have the equalities $|gzg^{-\rho}|^{2}=|z|^{2}$
as well as $|z^{\rho}|^{2}=|z|^{2}$. Hence both $\mathbb{E}^{\times}$ and $\rho$
map to $O(\mathbb{E})$. The kernel of the map from $\mathbb{E}^{\times}$
consists of those elements of $\mathbb{E}^{\times}$ such that $g=g^{\rho}$,
which is $\mathbb{F}^{\times}$. The equality
$(gzg^{-\rho})^{\rho}=g^{\rho}z^{\rho}g^{-1}$ shows that the map from the
semi-direct product is also a homomorphism. This proves the lemma.
\end{proof}
In fact, the map from $\mathbb{E}^{\times}$ defined in Lemma \ref{ac2} and the
map defined above it have the same image, by Hilbert's Theorem 90. The next step
is
\begin{lem}
Fix some $0 \neq h\in\mathbb{E}_{0}$. Then for every $g\in\mathbb{E}^{\times}$
the map taking $z\in\mathbb{E}$ to $(gh)z^{\rho}(gh)^{-\rho}$ is the reflection
inverting $g$. \label{ref2}
\end{lem}
\begin{proof}
As $h^{\rho}=-h$, this map takes $z$ to $-gz^{\rho}g^{-\rho}$. It is clear that
$g$ is inverted (as for $z=g$ the factor $z^{\rho}=g^{\rho}$ cancels with $g^{-\rho}$). Now,
elements which are perpendicular to $g$ are those from $g\mathbb{E}_{0}$ (so
that multiplying by $g^{\rho}$ yields an element of
$N^{\mathbb{E}}_{\mathbb{F}}(g)\mathbb{E}_{0}=\mathbb{E}_{0}$, on which
$Tr^{\mathbb{E}}_{\mathbb{F}}$ vanishes). They are all multiples of $gh$. As
$(gh)^{\rho}=-hg^{\rho}$, a similar calculation shows that $gh$ is invariant
under this operation. This proves the lemma.
\end{proof}
Using all this, we can now establish
\begin{thm}
A special orthogonal group of a 1-dimensional space is trivial. For a
2-dimensional space of discriminant $d$, the Gspin group is
$\mathbb{E}^{\times}$, and the spin and special orthogonal groups are isomorphic
to $\mathbb{E}^{1}$. \label{dim12}
\end{thm}
\begin{proof}
The 1-dimensional part was already proven. Lemma \ref{ref2} and Proposition
\ref{CDT} show that the map from the semi-direct product defined in Lemma \ref{ac2}
surjects onto $O(\mathbb{E})$. As $\rho$ represents an element of $O(\mathbb{E})
\setminus SO(\mathbb{E})$ (it inverts $\mathbb{E}_{0}$ and leaves $\mathbb{F}$
invariant), Lemma \ref{ref2} shows that $\mathbb{E}^{\times}$ maps to
$SO(\mathbb{E})$. As $\mathbb{E}^{\times}$ has the same index 2 in the
semi-direct product as $SO(\mathbb{E})$ has in $O(\mathbb{E})$, this map is also
surjective, with kernel $\mathbb{F}^{\times}$. Hence $\mathbb{E}^{\times}$ is
$Gspin(\mathbb{E})$, and $SO(\mathbb{E})$ is isomorphic to $\mathbb{E}^{1}$.
Now, the fact that $\rho$ has spinor norm $-d$ (as this is the vector norm of
non-zero elements of $\mathbb{E}_{0}$, up to $(\mathbb{F}^{\times})^{2}$),
implies that the image of $g\in\mathbb{E}^{\times}$ in $SO(\mathbb{E})$ has
spinor norm $N^{\mathbb{E}}_{\mathbb{F}}(g)$: Indeed, Lemma \ref{ref2} shows
that its composition with $\rho$ inverts an element of norm
$-dN^{\mathbb{E}}_{\mathbb{F}}(g)$, and the spinor norm is a group homomorphism.
Thus $SO^{1}(\mathbb{E})$ is the image of elements
$g\in\mathbb{E}^{(\mathbb{F}^{\times})^{2}}$, and as we may divide by elements
of the kernel $\mathbb{F}^{\times}$ of $\mathbb{E}^{\times} \to SO(\mathbb{E})$,
it suffices to consider $g\in\mathbb{E}^{1}$. Hence the map $\mathbb{E}^{1} \to
SO^{1}(\mathbb{E})$, which is just $g \mapsto g^{2}$ since $g^{-\rho}=g$ for
$g\in\mathbb{E}^{1}$, is surjective, and the kernel is just
$\mathbb{E}^{1}\cap\mathbb{F}^{\times}=\{\pm1\}$. It follows that
$spin(\mathbb{E})=\mathbb{E}^{1}$ as well, and $SO^{1}(\mathbb{E})$ is the group
$(\mathbb{E}^{1})^{2}$ of squares of elements from $\mathbb{E}^{1}$. This proves
the theorem.
\end{proof}
Another way to write the groups $Gspin(\mathbb{E})$ and $spin(\mathbb{E})$ is
as $GU_{\mathbb{E},\rho}(1)$ and $U_{\mathbb{E},\rho}(1)$ respectively, by Lemma
\ref{uniEB}. We remark that when $SO(\mathbb{E})$ is given in terms of
multiplication from $\mathbb{E}^{1}$, the spinor norm of $u\in\mathbb{E}^{1}$
can be evaluated as $N^{\mathbb{E}}_{\mathbb{F}}(1+u)$ for $u\neq-1$ and as $-d$
for $u=-1$ (note that the latter element represents $-Id_{\mathbb{E}}$). To see this, write
$u=\frac{g}{g^{\rho}}$ for $g\in\mathbb{E}^{\times}$, so that $1+u$ equals
$\frac{Tr^{\mathbb{E}}_{\mathbb{F}}(g)}{N^{\mathbb{E}}_{\mathbb{F}}(g)}g$, and
$N^{\mathbb{E}}_{\mathbb{F}}(1+u) \in
N^{\mathbb{E}}_{\mathbb{F}}(g)(\mathbb{F}^{\times})^{2}$ since
$g\not\in\mathbb{E}_{0}$ for $u\neq-1$. Comparing these models we find that an
element $u\in\mathbb{E}^{1}$ (other than $-1$) lies in $(\mathbb{E}^{1})^{2}$ if
and only if $N^{\mathbb{E}}_{\mathbb{F}}(1+u)\in(\mathbb{F}^{\times})^{2}$, a fact
which may also be verified directly. The remaining element $-1$ lies in
$SO^{1}(\mathbb{E})$ if and only if its spinor norm is a square, i.e., if
$\mathbb{E}=\mathbb{F}(\sqrt{-1})$. It is easy to verify that
$-1\in(\mathbb{E}^{1})^{2}$ precisely when this is indeed the case.
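For instance, when $\mathbb{F}=\mathbb{R}$ and $\mathbb{E}=\mathbb{C}$ (with
$d=-1$), every $u\in\mathbb{E}^{1}$ with $u\neq-1$ satisfies
$N^{\mathbb{E}}_{\mathbb{F}}(1+u)=|1+u|^{2}>0$, which is a square in
$\mathbb{R}^{\times}$, and $-1=(\pm i)^{2}$ as well. This is in accordance with
the fact that every element of the circle group $\mathbb{E}^{1}$ is the square
of another element of $\mathbb{E}^{1}$, i.e., that
$SO^{1}(\mathbb{E})=SO(\mathbb{E})$ in this case.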
As a special case of Theorem \ref{dim12} we obtain
\begin{cor}
A 2-dimensional quadratic space is isotropic if and only if it has a trivial
discriminant. In this case the Gspin group is isomorphic to
$\mathbb{F}^{\times}\times\mathbb{F}^{\times}$, while the spin and special
orthogonal groups are isomorphic to $\mathbb{F}^{\times}$. \label{iso2}
\end{cor}
\begin{proof}
By Lemma \ref{sp2}, an isotropic 2-dimensional quadratic space comes, up to
rescaling, from a quadratic algebra which contains non-zero elements of norm 0. But such
an algebra cannot be a field, which is equivalent to $d$ being a square. In this
case $\mathbb{E}=\mathbb{F}\times\mathbb{F}$, so that
$Gspin(\mathbb{F}\times\mathbb{F})=\mathbb{E}^{\times}=\mathbb{F}^{\times}
\times\mathbb{F}^{\times}$. The group $\mathbb{E}^{1}$ (which is the spin group)
consists of the pairs $\big(r,\frac{1}{r}\big)$ with $r\in\mathbb{F}^{\times}$,
so it is isomorphic to $\mathbb{F}^{\times}$. As $\mathbb{F}^{\times}$ is
embedded in $\mathbb{E}^{\times}$ diagonally, the quotient
$SO(\mathbb{F}\times\mathbb{F})$ is the isomorphic image of the subgroup
$\{(r,1)|r\in\mathbb{F}^{\times}\}$, which is also a copy of
$\mathbb{F}^{\times}$. This proves the corollary.
\end{proof}
We remark that in the case presented in Corollary \ref{iso2}, the element
$\big(r,\frac{1}{r}\big)$ of $\mathbb{E}^{1}$ (considered as
$SO(\mathbb{F}\times\mathbb{F})$ now) is $\frac{g}{g^{\rho}}$ for $g=(r,1)$, so
that its spinor norm is just $r$. This value coincides with the norm of
$\big(1+r,1+\frac{1}{r}\big)$ for $r\neq-1$ and with $-d=-1$ for $r=-1$. Hence
the spinor norm on $SO(\mathbb{F}\times\mathbb{F})\cong\mathbb{F}^{\times}$ is
the reduction modulo $(\mathbb{F}^{\times})^{2}$. The group
$SO^{1}(\mathbb{F}\times\mathbb{F})$ is
just $(\mathbb{F}^{\times})^{2}$, given as the quotient of
$Spin(\mathbb{F}\times\mathbb{F})=\mathbb{E}^{1}\cong\mathbb{F}^{\times}$ modulo
$\{\pm1\}$.
The space appearing in Corollary \ref{iso2} is called a \emph{hyperbolic plane}.
It may also be generated by two isotropic vectors with non-zero pairing, so that
it is isometric to all its rescalings. In fact, every isotropic quadratic space
contains a hyperbolic plane, and the complement is uniquely determined up to
isomorphism by the Witt Cancellation Theorem.
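For instance, in the space spanned by two orthogonal vectors $e$ and $f$ with
$|e|^{2}=1$ and $|f|^{2}=-1$, the vectors $e+f$ and $e-f$ are isotropic by Lemma
\ref{Vpol}, and $\langle e+f,e-f \rangle=|e|^{2}-|f|^{2}=2\neq0$, exhibiting the
second presentation of the hyperbolic plane.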
\medskip
In dimension 3 we can assume, by rescaling, that our quadratic form has
determinant in $(\mathbb{F}^{\times})^{2}$ (hence discriminant $-1$). Such a
vector space (a 3-dimensional quadratic space with discriminant $-1$) will be
called a \emph{traceless quaternionic space} over $\mathbb{F}$, for a reason to
be explained by the following
\begin{lem}
If $B$ is a quaternion algebra over $\mathbb{F}$, then the space $B_{0}$ with
the vector norm $|x|^{2}=N^{B}_{\mathbb{F}}(x)$ is a traceless quaternionic
space. Every traceless quaternionic space is isometric to a space which is
obtained in this way from some quaternion algebra $B$. The pairing on such a
space is given by $2\langle x,y \rangle=Tr^{B}_{\mathbb{F}}(x\overline{y})$ for
$x$ and $y$ in $B_{0}$. \label{sp3}
\end{lem}
\begin{proof}
The latter formula for the pairing on $B_{0}$ is a consequence of Lemmas
\ref{BEpol} and \ref{Vpol} (in fact, the same formula holds for $x$ and $y$ in
$B$ with this quadratic form). Now, two elements of $B_{0}$ are orthogonal if
and only if they anti-commute. Writing $B$ as
$\big(\frac{\alpha,\beta}{\mathbb{F}}\big)$, we get two such elements having
norms $-\alpha$ and $-\beta$, and as their product is orthogonal to both of them
and squares to $-\alpha\beta$ (hence has norm $+\alpha\beta$), we get the
required determinant in $(\mathbb{F}^{\times})^{2}$. Conversely, if two
orthogonal elements of a traceless quaternionic space have norms $-\alpha$ and
$-\beta$ respectively, then the determinant condition shows that a generator for
their orthogonal complement can be normalized to have norm $+\alpha\beta$, and
this space is isometric to $B_{0}$ for
$\big(\frac{\alpha,\beta}{\mathbb{F}}\big)$. This proves the lemma.
\end{proof}
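For instance, when $B=M_{2}(\mathbb{F})\cong\big(\frac{1,1}{\mathbb{F}}\big)$,
the space $B_{0}$ consists of the traceless matrices $\binom{a\ \ \ \ b}{c\ \
-a}$, with vector norm $\det=-a^{2}-bc$, and the matrices $\binom{1\ \ \ \
0}{0\ \ -1}$, $\binom{0\ \ 1}{1\ \ 0}$, and $\binom{0\ \ -1}{1\ \ \ \ 0}$ form
an orthogonal basis with vector norms $-1$, $-1$, and $1$ respectively, whose
product indeed lies in $(\mathbb{F}^{\times})^{2}$.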
Regarding the group acting here we have
\begin{lem}
The group $B^{\times}$ is mapped into $O(B_{0})$ via $g:u \mapsto
gu\overline{g}/N^{B}_{\mathbb{F}}(g)$, with kernel $\mathbb{F}^{\times}$.
Letting $-1$ operate as $-Id_{B_{0}}$ yields a map from the direct product
$B^{\times}\times\{\pm1\}$ into $O(B_{0})$. \label{ac3}
\end{lem}
\begin{proof}
Since $N^{B}_{\mathbb{F}}(g)=\overline{g}g$, $g$ maps $u$ to $gug^{-1}$, and the
multiplicativity of the norm shows that $|gug^{-1}|^{2}=|u|^{2}$. An element
lies in the kernel if and only if it is central (since it automatically commutes
with the complement $\mathbb{F}$ of $B_{0}$ in $B$), so that the kernel is indeed
$\mathbb{F}^{\times}$. The centrality of $-Id_{B_{0}}$ in $O(B_{0})$ yields the
last assertion. This proves the lemma.
\end{proof}
We remark that the operation of $-1$ coincides with the operation of the main
involution of $B$. The analysis of the orthogonal group begins with the
following
\begin{lem}
If $g \in B_{0}$ has non-zero vector norm, then the orthogonal transformation
taking $u \in B_{0}$ to $-\frac{gu\overline{g}}{N^{B}_{\mathbb{F}}(g)}$ is the
reflection inverting $g$. \label{ref3}
\end{lem}
\begin{proof}
The proof of Lemma \ref{sp3} shows that conjugation by $g$ inverts $g^{\perp}$,
and it clearly leaves $g$ invariant. Composing with the central map
$-Id_{B_{0}}$, we establish the lemma.
\end{proof}
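To illustrate Lemma \ref{ref3}, take $g=i$ in
$B=\big(\frac{\alpha,\beta}{\mathbb{F}}\big)$, so that
$N^{B}_{\mathbb{F}}(i)=-\alpha$. The transformation in question is
\[u\mapsto-\frac{iu\overline{i}}{-\alpha}=-\frac{iui}{\alpha},\] which sends
$i$ to $-i$ while fixing $j$ and $ij$ (as $iji=-\alpha j$ and
$i(ij)i=-\alpha\cdot ij$), i.e., it is precisely the reflection inverting $i$.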
We can now prove
\begin{thm}
The Gspin group of a 3-dimensional $\mathbb{F}$-space is the group $B^{\times}$
for some quaternion algebra $B$ over $\mathbb{F}$, which is generated by
invertible elements of $B_{0}$. The spin is the subgroup $B^{1}$ arising from
this quaternion algebra $B$. \label{dim3}
\end{thm}
\begin{proof}
Proposition \ref{CDT} and Lemma \ref{ref3} (in which the operation is the
action, from Lemma \ref{ac3}, of $g$ on $-u$) imply that the map
$B^{\times}\times\{\pm1\} \to O(B_{0})$ is surjective. As reflections and
$-Id_{B_{0}}$ have determinant $-1$, the image of $B^{\times}$ lies in
$SO(B_{0})$, and index considerations show that the map $B^{\times} \to
SO(B_{0})$ is surjective. As the kernel is $\mathbb{F}^{\times}$, we find that
$Gspin(B_{0})=B^{\times}$. Taking out the action of the central element
$-Id_{B_{0}}$, Lemma \ref{ref3} shows that $B_{0} \cap B^{\times}$ indeed
generates $B^{\times}$ (a fact which in this case is easily verified directly),
since the full kernel $\mathbb{F}^{\times}$ is clearly generated by this set:
$t=(tg)g^{-1}$ for $t\in\mathbb{F}^{\times}$. As for spinor norms, we first
observe that under our normalization $-Id$ has spinor norm 1. Hence Lemma
\ref{ref3} and the fact that $|g|^{2}=N^{B}_{\mathbb{F}}(g)$ imply that the
spinor norm of any $g \in B_{0} \cap B^{\times}$ is $N^{B}_{\mathbb{F}}(g)$.
Since such elements were seen to generate $B^{\times}$, the spinor norm of any
$g \in B^{\times}$ is $N^{B}_{\mathbb{F}}(g)$. The group $SO^{1}(B_{0})$ is thus
the image of elements having reduced norms in $(\mathbb{F}^{\times})^{2}$, and
by appropriate scalar multiplication we may restrict to elements from $B^{1}$.
As the only scalars in $B^{1}$ are $\pm1$, we find that $B^{1}$ is indeed
$spin(B_{0})$. This proves the theorem.
\end{proof}
Lemmas \ref{uniEB} and \ref{Sp1B} show that the Gspin group from Theorem
\ref{dim3} can also be described as $GSp_{B}(1)$ and as
$GSU_{\mathbb{K},\sigma}\binom{-\varepsilon\ \ 0}{\ \ 0\ \ 1}$, in case
$\mathbb{K}=\mathbb{F}(\eta)$ is a quadratic extension of $\mathbb{F}$ (with
Galois automorphism $\sigma$) which splits $B$, and $\varepsilon\in\mathbb{F}$
is such that $B\cong\big(\frac{\eta,\varepsilon}{\mathbb{F}}\big)$. They also
imply that the spin group in question is isomorphic to $Sp_{B}(1)$, as well as
to $SU_{\mathbb{K},\sigma}\binom{-\varepsilon\ \ 0}{\ \ 0\ \ 1}$ for such
$\mathbb{K}$, $\sigma$, and $\varepsilon$.
The isotropic case in dimension 3 is given in
\begin{cor}
A quadratic space of dimension 3 is isotropic if and only if it is related to the
split quaternion algebra $B=M_{2}(\mathbb{F})$. The Gspin group
$Gspin\big(M_{2}(\mathbb{F})_{0}\big)$ is then $GL_{2}(\mathbb{F})$, and
$spin\big(M_{2}(\mathbb{F})_{0}\big)=SL_{2}(\mathbb{F})$. We also have
$SO\big(M_{2}(\mathbb{F})_{0}\big)=PGL_{2}(\mathbb{F})$ and
$SO^{1}\big(M_{2}(\mathbb{F})_{0}\big)=PSL_{2}(\mathbb{F})$. \label{iso3}
\end{cor}
\begin{proof}
If $B_{0}$ is isotropic then $B$ cannot be a division algebra, and the space
$M_{2}(\mathbb{F})_{0}$ does split. The Gspin and spin groups are determined by
Theorem \ref{dim3} with $B=M_{2}(\mathbb{F})$, and dividing the former by
$\mathbb{F}^{\times}$ and the latter by $\{\pm1\}$ yields the asserted
projective groups. This proves the corollary.
\end{proof}
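Concretely, the elements of $M_{2}(\mathbb{F})_{0}$ are the matrices
$\binom{a\ \ \ \ b}{c\ -a}$ with vector norm
\[\det\binom{a\ \ \ \ b}{c\ -a}=-a^{2}-bc,\] on which $g \in
GL_{2}(\mathbb{F})$ operates by conjugation $x \mapsto gxg^{-1}$ (since
$\overline{g}=(\det g)g^{-1}$), and the isotropic vectors are precisely the
non-zero nilpotent matrices, such as $\binom{0\ \ 1}{0\ \ 0}$.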
Lemma \ref{uniEB} allows us to write the Gspin and spin groups from Corollary
\ref{iso3} also as $GSU_{\mathbb{K},\sigma}\binom{-1\ \ 0}{\ \ 0\ \ 1}$ and
$SU_{\mathbb{K},\sigma}\binom{-1\ \ 0}{\ \ 0\ \ 1}$ respectively, for any
quadratic extension $\mathbb{K}$ of $\mathbb{F}$ (with Galois automorphism
$\sigma$), since every such $\mathbb{K}$ splits $M_{2}(\mathbb{F})$. In this
case matrix transposition is also an element of
$O\big(M_{2}(\mathbb{F})_{0}\big) \setminus SO\big(M_{2}(\mathbb{F})_{0}\big)$,
and this element arises as the composition of $-Id$ and conjugation by
$\binom{0\ \ -1}{1\ \ \ \ 0}$ (see Lemma \ref{Sadjt}).
In this split case there is an additional assertion, which is given by
\begin{cor}
The groups $GL_{2}(\mathbb{F})$, $SL_{2}(\mathbb{F})$, $PGL_{2}(\mathbb{F})$,
and $PSL_{2}(\mathbb{F})$ are the Gspin, spin, special orthogonal, and spinor
norm kernel groups of the space $M_{2}^{sym}(\mathbb{F})$ of symmetric
$2\times2$ matrices over $\mathbb{F}$, on which they all operate via
$g:X\mapsto\frac{gXg^{t}}{\det g}$. \label{alt3}
\end{cor}
\begin{proof}
As right multiplication by $\binom{\ \ 0\ \ 1}{-1\ \ 0}$ takes
$M_{2}(\mathbb{F})_{0}$ to $M_{2}^{sym}(\mathbb{F})$ and preserves determinants,
the corollary follows from Corollary \ref{iso3} and Lemma \ref{Sadjt}.
\end{proof}
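Explicitly, the isometry used in the proof reads
\[\binom{a\ \ \ \ b}{c\ -a}\binom{\ \ 0\ \ 1}{-1\ \ 0}=\binom{-b\ \ a}{\ \ a\ \
c},\] and the right hand side is symmetric with the same determinant
$-a^{2}-bc$.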
Note that the operation from Corollary \ref{alt3} replaces the symplectic main
involution on $B=M_{2}(\mathbb{F})$ by an orthogonal one, and the usual space
$B^{-}$ by an $M_{2}(\mathbb{F})^{+}$ space. The possible generators of
$O\big(M_{2}^{sym}(\mathbb{F})\big)/SO\big(M_{2}^{sym}(\mathbb{F})\big)$ are
again $-Id$ (which is now $X\mapsto-X^{t}$) and the adjoint involution.
\section{Dimension 4 \label{Dim4}}
Given a quadratic space of dimension 4 and discriminant $d$ over $\mathbb{F}$,
we define $\mathbb{E}$ to be $\mathbb{F}(\sqrt{d})$, with automorphism $\rho$.
Given a quaternion algebra $B$ over $\mathbb{F}$, the tensor product
$B_{\mathbb{E}}$ comes endowed with the (unitary) involution
$\iota\otimes\rho:x\mapsto\overline{x}^{\rho}$. Our space is given by
\begin{lem}
The space $B_{\mathbb{E}}^{-}$ of this involution becomes, when endowed with
the quadratic form $|x|^{2}=N^{B_{\mathbb{E}}}_{\mathbb{E}}(x)$, a quadratic
space over $\mathbb{F}$ with discriminant $d$, in which $2\langle x,y
\rangle=Tr^{B_{\mathbb{E}}}_{\mathbb{E}}(x\overline{y})$ holds for every $x$
and $y$. Every such space is obtained, up to rescaling, in this way. \label{sp4}
\end{lem}
\begin{proof}
$B_{\mathbb{E}}^{-}$ is contained in the quadratic space $B_{\mathbb{E}}$ over
$\mathbb{E}$ with the same vector norm, and the combination of Lemmas
\ref{BEpol} and \ref{Vpol} shows that the formula for the pairing holds (in
$\mathbb{E}$) for any two elements of the larger space. Now,
$B_{\mathbb{E}}^{-}$ is $\mathbb{E}_{0} \oplus B_{0}$ inside $B_{\mathbb{E}}$,
and the direct sum is thus orthogonal inside there. Since
$N^{B_{\mathbb{E}}}_{\mathbb{E}}$ coincides with $N^{B}_{\mathbb{F}}$ on $B_{0}$
and is the square map on $\mathbb{E}_{0}$ (both $\mathbb{F}$-valued), we find
that $|x|^{2}\in\mathbb{F}$ for every $x \in B_{\mathbb{E}}^{-}$. Moreover,
$B_{0}$ has determinant 1 and $\mathbb{E}_{0}$ is spanned by an element $h$ with
$|h|^{2}=h^{2}=d$, so that the determinant and discriminant of such a space are
$d$. Conversely, given a quadratic space of discriminant $d$, we may rescale it
such that an anisotropic element $v$ of our choice has vector norm $d$. The
subspace $v^{\perp}$ is a traceless quaternionic space, so that by Lemma
\ref{sp3} it can be presented as $B_{0}$ for a quaternion algebra $B$ over
$\mathbb{F}$. Hence we found a presentation of our space as $\mathbb{E}_{0}
\oplus B_{0}$, namely $B_{\mathbb{E}}^{-}$, with
$|x|^{2}=N^{B_{\mathbb{E}}}_{\mathbb{E}}(x)$. This proves the lemma.
\end{proof}
Note that the proof of Lemma \ref{sp4} involved a choice of a vector, and
choosing another vector (with the appropriate rescaling) may lead to other
quaternion algebras which are not isomorphic to $B$. However, $B_{\mathbb{E}}$
remains the same algebra, but with a different unitary involution. The
correspondence between quaternion algebras over $\mathbb{F}$ which are
contained in $B_{\mathbb{E}}$ and generate it over $\mathbb{E}$ and unitary
involutions on $B_{\mathbb{E}}$ is given in Proposition 2.22 of \cite{[KMRT]}.
However, we shall consider $B$, $B_{\mathbb{E}}$, and the involution as fixed.
As $\iota$ is canonical, this gives an interpretation of $\rho$ on
$B_{\mathbb{E}}$ as well. We also remark that the complementary space
$B_{\mathbb{E}}^{+}$ is obtained from $B_{\mathbb{E}}^{-}$ via multiplication by
an element of $\mathbb{E}_{0}$, so that it is isometric to a rescaling of
$B_{\mathbb{E}}^{-}$ as a quadratic space over $\mathbb{F}$.
$B_{\mathbb{E}}$ operates on itself by $g:x \mapsto gx\overline{g}^{\rho}$, and
this action preserves the eigenspaces $B_{\mathbb{E}}^{\pm}$. Moreover, the two
spaces are invariant under $\iota$. Using this, we now prove
\begin{lem}
The group $B_{\mathbb{E}}^{\mathbb{F}^{\times}}$ maps into
$O(B_{\mathbb{E}}^{-})$ when an element $g$ operates as
$x\mapsto\frac{gx\overline{g}^{\rho}}{N^{B_{\mathbb{E}}}_{\mathbb{E}}(g)}$. The
kernel of this map is $\mathbb{F}^{\times}$. Let $\tilde{\iota}$ be the
non-trivial element of a cyclic group of order 2, which operates on
$B_{\mathbb{E}}^{\mathbb{F}^{\times}}$ as $\rho$. Sending $\tilde{\iota}$ to
operate as $\iota$ defines a group homomorphism from the semi-direct product
of $\{1,\tilde{\iota}\}$ and $B_{\mathbb{E}}^{\mathbb{F}^{\times}}$ to
$O(B_{\mathbb{E}}^{-})$. \label{ac4}
\end{lem}
\begin{proof}
For $g \in B_{\mathbb{E}}^{\mathbb{F}^{\times}}$ we may replace
$\frac{\overline{g}^{\rho}}{N^{B_{\mathbb{E}}}_{\mathbb{E}}(g)}$ by just
$g^{-\rho}$, and the fact that $N^{B_{\mathbb{E}}}_{\mathbb{E}}$ is
multiplicative implies that the equality
$\Big|\frac{gx\overline{g}^{\rho}}{N^{B_{\mathbb{E}}}_{\mathbb{E}}(g)}\Big|^{2}
=|x|^{2}$ holds for every $x \in B_{\mathbb{E}}^{-}$. An element $g$ is in the
kernel of this action if and only if it is central and satisfies
$gg^{\rho}=N^{B_{\mathbb{E}}}_{\mathbb{E}}(g)=g^{2}$, which is equivalent to $g$
being in $\mathbb{F}^{\times}$. We also know that $\iota$ preserves
$B_{\mathbb{E}}^{-}$ and $|\overline{x}|^{2}=|x|^{2}$ for elements of that
space. The equality
$\overline{gx\overline{g}^{\rho}}=g^{\rho}\overline{x}\overline{g}$ shows that
the map to $O(B_{\mathbb{E}}^{-})$ respects the product rule of the semi-direct
product, which completes the proof of the lemma.
\end{proof}
We remark that other elements of $B_{\mathbb{E}}^{\times}$ do not increase the
image of the map $B_{\mathbb{E}}^{\mathbb{F}^{\times}} \to
O(B_{\mathbb{E}}^{-})$ from Lemma \ref{ac4}. Indeed, if $g \in
B_{\mathbb{E}}^{\times}$ and $t\in\mathbb{E}^{\times}$ are such that
$x\mapsto\frac{gx\overline{g}^{\rho}}{t}$ preserves $B_{\mathbb{E}}^{-}$ and is
orthogonal, then $t\in\mathbb{F}^{\times}$, and the number
$\frac{N^{B_{\mathbb{E}}}_{\mathbb{E}}(g)}{t}$ lies in $\mathbb{E}^{1}$, hence
equals $\frac{s^{\rho}}{s}$ for some $s\in\mathbb{E}^{\times}$ by Hilbert's
Theorem 90. But then $sg \in B_{\mathbb{E}}^{\mathbb{F}^{\times}}$ and operates
like $x\mapsto\frac{gx\overline{g}^{\rho}}{t}$. This assertion will also follow
from Theorem \ref{dim4} below. As for the kernel, note that a non-zero element
$r\in\mathbb{E}_{0}$ is also central and lies in
$B_{\mathbb{E}}^{\mathbb{F}^{\times}}$, but as $rr^{\rho}=-r^{2}$ and
$N^{B_{\mathbb{E}}}_{\mathbb{E}}(r)=r^{2}$, such elements operate as $-Id$
rather than trivially.
The properties of the larger homomorphism from Lemma \ref{ac4} will follow from
\begin{lem}
For an element $g \in B_{\mathbb{E}}^{-} \cap B_{\mathbb{E}}^{\times}$, the
reflection inverting $g$ lies in the image of the map from Lemma \ref{ac4},
being
$x\mapsto\frac{g\overline{x}\cdot\overline{g}^{\rho}}{N^{B_{\mathbb{E}}}_{
\mathbb{E}}(g)}$.
\label{ref4}
\end{lem}
\begin{proof}
The proof of Lemma \ref{sp4} shows that every such $g$ is in
$B_{\mathbb{E}}^{\mathbb{F}^{\times}}$, so that the latter transformation comes
from the semi-direct product appearing in Lemma \ref{ac4}. Now,
$\overline{g}^{\rho}=-g$ for $g \in B_{\mathbb{E}}^{-}$, and
$N^{B_{\mathbb{E}}}_{\mathbb{E}}(g)=\overline{g}g$. Thus, for $x=g$ the result
of the action is $-g$, while if $x \in g^{\perp}$ the equality from Lemma
\ref{ac4} allows us to replace $-g\overline{x}$ by $+x\overline{g}$, so that the
total expression is just $x$. This proves the lemma.
\end{proof}
The groups obtained in dimension 4 are now given in the following
\begin{thm}
The quadratic space $B_{\mathbb{E}}^{-}$ has Gspin group
$B_{\mathbb{E}}^{\mathbb{F}^{\times}}$, and it is generated by
$B_{\mathbb{E}}^{-} \cap B_{\mathbb{E}}^{\times}$. The spin group is
$B_{\mathbb{E}}^{1}$. \label{dim4}
\end{thm}
\begin{proof}
The surjectivity of the map from the semi-direct product from Lemma \ref{ac4}
onto $O(B_{\mathbb{E}}^{-})$ follows from Lemma \ref{ref4} and Proposition
\ref{CDT}. The fact that $\iota$ has determinant $-1$ in
$O(B_{\mathbb{E}}^{-})$ and index considerations show that
$B_{\mathbb{E}}^{\mathbb{F}^{\times}}$ maps onto $SO(B_{\mathbb{E}}^{-})$. The
kernel of the latter map being $\mathbb{F}^{\times}$ by Lemma \ref{ac4}, we find
that $Gspin(B_{\mathbb{E}}^{-})=B_{\mathbb{E}}^{\mathbb{F}^{\times}}$. The
semi-direct product structure shows, with Lemma \ref{ref4}, that
$B_{\mathbb{E}}^{-} \cap B_{\mathbb{E}}^{\times}$ generates
$B_{\mathbb{E}}^{\mathbb{F}^{\times}}$ (since again the kernel
$\mathbb{F}^{\times}$ is not a problem), a fact which also in this case may
still be verified directly. Now, $\iota$ reflects the traceless quaternionic
space $B_{0}$, of determinant 1, so that its spinor norm is 1. Lemma \ref{ref4}
thus implies that the spinor norm of $g \in B_{\mathbb{E}}^{-} \cap
B_{\mathbb{E}}^{\times}$ is $|g|^{2}=N^{B_{\mathbb{E}}}_{\mathbb{E}}(g)$. As
such elements were seen to generate $B_{\mathbb{E}}^{\mathbb{F}^{\times}}$, the
multiplicativity of the norm implies that the spinor norm of any $g \in
B_{\mathbb{E}}^{\mathbb{F}^{\times}}$ is $N^{B_{\mathbb{E}}}_{\mathbb{E}}(g)$.
Elements whose (spinor, hence algebra) norms lie in $(\mathbb{F}^{\times})^{2}$
can be divided by scalars from the kernel $\mathbb{F}^{\times}$, and land in
$B_{\mathbb{E}}^{1}$. Thus $B_{\mathbb{E}}^{1}$ maps surjectively onto
$SO^{1}(B_{\mathbb{E}}^{-})$, and the kernel consists of those scalars whose
norm (hence square) is 1. As these are just $\pm1$, $B_{\mathbb{E}}^{1}$ is the
spin group, which completes the proof of the theorem.
\end{proof}
In addition to $\iota$, $\rho$ also represents an element of
$O(B_{\mathbb{E}}^{-})$. Its spinor norm is $d$, being the composition of
$\iota$ and $-Id$ as well as being the reflection in a generator of
$\mathbb{E}_{0}$. However, $\iota$ is a more canonical representative of
$O(B_{\mathbb{E}}^{-})/SO(B_{\mathbb{E}}^{-})$, being independent of the
$\mathbb{F}$-structure on $B_{\mathbb{E}}$. Note that Theorems \ref{dim3} and
\ref{dim4} imply that if $d$ is not a square then any spin group arising from a
4-dimensional quadratic space over $\mathbb{F}$ with discriminant $d$ is
isomorphic to the spin group of a suitable 3-dimensional quadratic space over
$\mathbb{E}=\mathbb{F}(\sqrt{d})$. The converse does not hold though, as we need
the quaternion algebra over $\mathbb{E}$ to come from one over $\mathbb{F}$.
Recall now that every quaternion algebra $B$ over $\mathbb{F}$ becomes, with
$|x|^{2}=N^{B}_{\mathbb{F}}(x)$, a quadratic space of discriminant 1. Lemma
\ref{sp4} and Theorem \ref{dim4} yield the following
\begin{cor}
Any 4-dimensional space of discriminant 1 is isometric to some quaternion
algebra $B$ over $\mathbb{F}$ with its reduced norm (perhaps rescaled). The
Gspin group $Gspin(B)$ consists of those pairs in $B^{\times} \times B^{\times}$
having the same reduced norm, operating via left multiplication and inverted
right multiplication, and $spin(B)=B^{1} \times B^{1}$ with the same operation.
\label{dim4d1}
\end{cor}
\begin{proof}
The condition $d\in(\mathbb{F}^{\times})^{2}$ implies that
$\mathbb{E}=\mathbb{F}\times\mathbb{F}$. Hence $B_{\mathbb{E}}=B \times B$, and
$B_{\mathbb{E}}^{-}$ is the subspace $\{(x,-\overline{x})|x \in B\}$ of $B
\times B$. As
$N^{B_{\mathbb{E}}}_{\mathbb{E}}(x,-\overline{x})=N^{B}_{\mathbb{F}}(x)$, the
first assertion follows from Lemma \ref{sp4}. Now,
$B_{\mathbb{E}}^{\mathbb{F}^{\times}}$ consists, by definition, of those pairs
$(g,h) \in B^{\times} \times B^{\times}$ such that
$N^{B}_{\mathbb{F}}(g)=N^{B}_{\mathbb{F}}(h)$, sending $(x,-\overline{x})$ to
$(gx\overline{h},-h\overline{x}\cdot\overline{g})$ (the second entry being also
$-\overline{gx\overline{h}}$), divided by the common norm of $g$ and $h$. In
terms of the operation with $(g,h)^{-\rho}$ from the proof of Lemma \ref{ac4},
the action on $B$ is given by $(g,h):x \mapsto gxh^{-1}$. Restricting to
elements of norm 1, the spin group is seen to be $B^{1} \times B^{1}$. This
proves the corollary.
\end{proof}
The spin group from Corollary \ref{dim4d1} is the product of two copies of a
spin group of a 3-dimensional space over $\mathbb{F}$, which complements the
relation to spaces over the quadratic extension $\mathbb{E}$ in the other
discriminants. Lemmas \ref{uniEB} and \ref{Sp1B} allow us to write such spin
groups as $Sp_{B}(1) \times Sp_{B}(1)$, as well as
$SU_{\mathbb{K},\sigma}\binom{-\varepsilon\ \ 0}{\ \ 0\ \ 1} \times
SU_{\mathbb{K},\sigma}\binom{-\varepsilon\ \ 0}{\ \ 0\ \ 1}$ in case $B$ is
isomorphic to $\big(\frac{\eta,\varepsilon}{\mathbb{F}}\big)$ and
$\mathbb{K}=\mathbb{F}(\eta)$ with Galois automorphism $\sigma$.
We note that the $\mathbb{F}$-structure on $B_{\mathbb{E}}$, i.e., the
quaternion algebra over $\mathbb{F}$ which yields $B_{\mathbb{E}}$ after
tensoring with $\mathbb{E}$, is equivalent to the choice of the automorphism on
$B_{\mathbb{E}}$ which we denoted here also by $\rho$, or equivalently, by the
involution $x\mapsto\overline{x}^{\rho}$ on $B_{\mathbb{E}}$ (see Proposition
2.22 of \cite{[KMRT]}). Proposition 2.18 shows that all these involutions are
related, and are in one-to-one correspondence with the space
$(B_{\mathbb{E}}^{+} \cap B_{\mathbb{E}}^{\times})/\mathbb{F}^{\times}$, since
every such involution is $x \mapsto b\overline{x}^{\rho}b^{-1}$ for invertible
$b \in B_{\mathbb{E}}^{+}$ which is uniquely determined up to multiplication
from $\mathbb{F}^{\times}$. However, we shall not need these results in what
follows.
As for isotropy, here we have
\begin{cor}
The space $B_{\mathbb{E}}^{-}$ is isotropic if and only if $\mathbb{E}$ splits
$B$. We may then take $B=M_{2}(\mathbb{F})$. The Gspin group
$Gspin\big(M_{2}(\mathbb{E})^{-}\big)$ is then
$GL_{2}^{\mathbb{F}^{\times}}(\mathbb{E})$, the spin group
$spin\big(M_{2}(\mathbb{E})^{-}\big)$ is $SL_{2}(\mathbb{E})$, and
$SO^{1}\big(M_{2}(\mathbb{E})^{-}\big)$ is $PSL_{2}(\mathbb{E})$.
\label{iso4}
\end{cor}
\begin{proof}
If $B_{\mathbb{E}}^{-}$ is isotropic then $B_{\mathbb{E}}$ cannot be a division
algebra. Conversely, if $\mathbb{E}$ splits $B$ then there is an embedding
$i:\mathbb{E} \to B$, and if $r\in\mathbb{E}_{0}$ then $r+i(r)$ belongs to
$B_{\mathbb{E}}^{-}$ and is a zero-divisor (hence isotropic). In the isotropic
case we may split off a hyperbolic plane and rescale so that a vector $v$ which
is perpendicular to this hyperbolic plane has vector norm $d$. Then $v^{\perp}$
is isotropic, and
the corresponding quaternion algebra $B$ splits by Corollary \ref{iso3}. The
Gspin and spin groups are given in Theorem \ref{dim4} (written in terms of a
split algebra), and the assertion about $SO^{1}\big(M_{2}(\mathbb{E})^{-}\big)$,
which is $spin\big(M_{2}(\mathbb{E})^{-}\big)/\{\pm1\}$, is immediate. This
proves the corollary.
\end{proof}
Also here we can consider matrix conjugation as an element of
$O\big(M_{2}(\mathbb{E})^{-}\big)$ of determinant 1, and it arises as the
composition of the main involution (adjoint) and conjugation by $\binom{\ \ 0\ \
1}{-1\ \ 0}$ by Lemma \ref{Sadjt}. In this case we can get an equivalent
quadratic space, as is given in the following
\begin{cor}
We can consider the groups $GL_{2}^{\mathbb{F}^{\times}}(\mathbb{E})$ and
$SL_{2}(\mathbb{E})$ as the Gspin and spin groups of the space
$M_{2}^{Her}(\mathbb{E},\rho)$ of $2\times2$ matrices over $\mathbb{E}$ which
are Hermitian with respect to $\rho$, with the vector norm being the determinant.
The operation is via $g:X\mapsto\frac{gXg^{t\rho}}{\det g}$. In the case of
trivial discriminant, we may consider the subgroup of
$GL_{2}(\mathbb{F}) \times GL_{2}(\mathbb{F})$ consisting of pairs of matrices
of the same determinant and $SL_{2}(\mathbb{F}) \times SL_{2}(\mathbb{F})$ as
the Gspin and the spin groups of $M_{2}(\mathbb{F})$ with the determinant form,
via the action $(g,h):M \mapsto gMh^{t}$ divided by the common determinant of
$g$ and $h$ (which is trivial on the latter group). \label{alt4}
\end{cor}
\begin{proof}
The first assertion follows directly from Corollary \ref{iso4} and Lemma
\ref{Sadjt}, since right multiplication by $\binom{\ \ 0\ \ 1}{-1\ \ 0}$ takes
$M_{2}(\mathbb{E})^{-}$ to $M_{2}^{Her}(\mathbb{E},\rho)$. The second assertion is
obtained from the same considerations together with Corollary \ref{dim4d1}.
\end{proof}
The quotient group
$O\big(M_{2}^{Her}(\mathbb{E},\rho)\big)/SO\big(M_{2}^{Her}(\mathbb{E},
\rho)\big)$, as well as the group
$O\big(M_{2}(\mathbb{F})\big)/SO\big(M_{2}(\mathbb{F})\big)$, is again
generated by the adjoint involution or by transposition, but here $\rho$ coincides with the latter
transformation.
\section{Dimension 6, Discriminant 1 \label{Dim6d1}}
Our presentation of 6-dimensional spaces of discriminant 1 is based on
presentations of bi-quaternion algebras over $\mathbb{F}$ as tensor products of
two quaternion algebras, as in
\begin{lem}
For two quaternion algebras over $\mathbb{F}$, $B$ and $C$ say, the subspace
$(B_{0}\otimes1)\oplus(1 \otimes C_{0})$ of the bi-quaternion algebra $A=B
\otimes C$ is a quadratic space of dimension 6 and discriminant 1 if we define
$|x\otimes1+1 \otimes y|^{2}=N^{B}_{\mathbb{F}}(x)-N^{C}_{\mathbb{F}}(y)$.
Every 6-dimensional quadratic space of discriminant 1 may be obtained, up to
isometries and rescalings, in this way. \label{sp6d1}
\end{lem}
\begin{proof}
Lemma \ref{sp3} shows that $(B_{0}\otimes1)\oplus(1 \otimes C_{0})$ is the
direct sum of two 3-dimensional spaces of determinants 1 and $-1$, so that the
total determinant is $-1$ and the discriminant is 1. Conversely, given a
quadratic space $V$ of dimension 6 and discriminant 1, choose a 3-dimensional
non-degenerate subspace of $V$, and rescale $V$ such that the chosen space has
determinant 1. The discriminant 1 condition implies that the orthogonal
complement is also a traceless quaternionic space but rescaled by $-1$, so that
the assertion follows from Lemma \ref{sp3}. This proves the lemma.
\end{proof}
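For example, if $B=\big(\frac{\alpha,\beta}{\mathbb{F}}\big)$ and
$C=\big(\frac{\gamma,\delta}{\mathbb{F}}\big)$ then the space of Lemma
\ref{sp6d1} is the orthogonal sum of six one-dimensional subspaces, of norms
\[-\alpha,\quad-\beta,\quad\alpha\beta,\quad\gamma,\quad\delta,
\quad-\gamma\delta,\] so that its determinant is $-(\alpha\beta\gamma\delta)^{2}$
and its discriminant is indeed 1.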
The space from Lemma \ref{sp6d1} is called the \emph{Albert form} of the two
quaternion algebras $B$ and $C$. The bi-quaternion algebra $A=B \otimes C$
comes with the involution $\iota_{B}\otimes\iota_{C}$, which depends on the
presentation of $A$ as $B \otimes C$. When we decompose $A$ as $A^{+} \oplus
A^{-}$ according to this involution, then $A^{+}$ is the 10-dimensional space
$B_{0} \otimes C_{0}\oplus\mathbb{F}(1\otimes1)$, and $A^{-}$ is the space
$(B_{0}\otimes1)\oplus(1 \otimes C_{0})$ from Lemma \ref{sp6d1}. In particular,
$\iota_{B}\otimes\iota_{C}$ is orthogonal---compare Proposition 2.23(1) of
\cite{[KMRT]}. Note that the product structure on our original space depends on
the choice of the 3-dimensional space which we normalize to be
$B_{0}\otimes1$: Observe that $B_{0}\otimes1$ and $1 \otimes C_{0}$ are
precisely those elements of $A^{-}$ whose algebra square lies in $\mathbb{F}$.
Several remarks are in order here. First, there are many involutions, orthogonal
and symplectic, on $A$, and to each of them one may associate a quadratic
6-dimensional space of discriminant 1 (an Albert form). Two Albert forms are
isometric up to rescaling if and only if they come from isomorphic
bi-quaternion algebras, by a result of \cite{[J]} (Lemma \ref{sp6d1} already
shows that this construction gives all the Albert forms, up to rescaling).
However, not all the involutions on $A$, and not even all the orthogonal
involutions on $A$, come from a presentation of $A$ as $B \otimes C$, though
the mere existence of an involution of the first kind on a degree 4 central
simple algebra $A$ implies that $A$ is the tensor product of two quaternion
algebras over $\mathbb{F}$ (a theorem of Albert---see Theorem 16.1 of
\cite{[KMRT]}). However, we stick to one fixed involution, which does arise
in this way, and our results are independent of all the results mentioned in
this paragraph. We also remark that Section 16 of \cite{[KMRT]} presents
results which are very similar to ours, but some of the calculations do not
appear there (in particular, the main calculation required for Proposition
\ref{NAFg} below), and we concentrate on a case where many technical aspects
become simpler.
Our analysis is based on the following
\begin{lem}
If $\theta:A^{-} \to A^{-}$ takes $u=x\otimes1+1 \otimes y$ to
$\tilde{u}=-x\otimes1+1 \otimes y$ then $u\tilde{u}=\tilde{u}u=|u|^{2}$ in $A$,
and $2\langle v,w \rangle=v\tilde{w}+w\tilde{v}=\tilde{v}w+\tilde{w}v$.
\label{vnorm6}
\end{lem}
\begin{proof}
The first assertion follows from a simple and direct calculation. The second
equality then follows from Lemma \ref{Vpol}. This proves the lemma.
\end{proof}
Note that $\theta$ might be considered as the restriction of the map $\iota_{B}
\otimes Id_{C}$ on $A$ to $A^{-}$, but as the latter map behaves badly with
respect to products (it neither preserves them nor inverts them), we consider
only the restriction $\theta$ to $A^{-}$. We also remark that interchanging the
roles of $B$ and $C$ just means replacing $\theta$ by $-\theta$ and inverting
the sign of the Albert form.
The reduced norm $N^{A}_{\mathbb{F}}$ is a degree 4 form on $A$ which takes any
tensor product $b \otimes c$ to
$N^{B}_{\mathbb{F}}(b)^{2}N^{C}_{\mathbb{F}}(c)^{2}$. In certain calculations
we shall need to evaluate it, which is done, under some assumptions, in the
following
\begin{prop}
If $A=M_{2}(B)$ for some quaternion algebra $B$ then the equality
\[N^{A}_{\mathbb{F}}\binom{a\ \ b}{c\ \
d}=N^{B}_{\mathbb{F}}(a)N^{B}_{\mathbb{F}}(d)+N^{B}_{\mathbb{F}}(b)N^{B}_{
\mathbb{F}}(c)-Tr^{B}_{\mathbb{F}}(\overline{a}b\overline{d}c)\] holds for
every $\binom{a\ \ b}{c\ \ d} \in A$, i.e., every $a$, $b$, $c$, and $d$ from
$B$. \label{NAexp}
\end{prop}
\begin{proof}
First, the assertion holds in case one of $a$, $b$, $c$, or $d$ is 0. This
follows from evaluation of $4\times4$ determinants in case $B=M_{2}(\mathbb{F})$
and $A=M_{4}(\mathbb{F})$, hence holds in general since $B$ may be considered as
a subalgebra of matrices over a splitting field and the assertions are invariant
under scalar extensions. Now, if one entry is invertible then we can determine
the reduced norm by right multiplication with a matrix of the sort $\binom{1\ \
x}{0\ \ 1}$ (of reduced norm 1): For example, if $a$ is invertible then we take
$x=-a^{-1}b$ and find that $N^{A}_{\mathbb{F}}\binom{a\ \ b}{c\ \ d}$
equals $N^{B}_{\mathbb{F}}(a)N^{B}_{\mathbb{F}}(d-ca^{-1}b)$ by
multiplicativity, and then using Lemma \ref{BEpol} we get the asserted value.
Similar considerations cover the cases where $b$, $c$, or $d$ are invertible,
which completes the proof in the case where $B$ is a division algebra. In case
$B=M_{2}(\mathbb{F})$ and all of $a$, $b$, $c$, and $d$ are non-zero and not
invertible, we may conjugate everything in $B$ (an operation leaving our
expression invariant) such that $a$ becomes the matrix $\binom{t\ \ 0}{0\ \ 0}$
for some $t\in\mathbb{F}^{\times}$. Observe that left multiplication by
$\binom{1\ \ x}{0\ \ 1}$ takes the upper left entry of our matrix to $a+xc$.
Recall that $\det c=0$ but $c\neq0$. Hence if $c$ has right column 0 then we may
choose $x$ such that $a+xc=0$, while if the right column of $c$ is not 0 we may
choose $x$ such that $xc$ has upper row 0 and lower row non-zero,
so that $a+xc$ becomes invertible. As our expression is invariant under left
multiplication by $\binom{1\ \ x}{0\ \ 1}$, this completes the proof of the
proposition.
\end{proof}
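As a quick sanity check, when $a$, $b$, $c$, and $d$ all lie in $\mathbb{F}$
the formula from Proposition \ref{NAexp} reduces to
\[a^{2}d^{2}+b^{2}c^{2}-2abdc=(ad-bc)^{2},\] in accordance with the fact that
the reduced norm of an element of $M_{2}(\mathbb{F})\otimes1 \subseteq
M_{2}(B)$ is the square of its determinant.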
We can use Proposition \ref{NAexp} to get an explicit expression for
$N^{A}_{\mathbb{F}}$ in general. Writing an element of $A$ as $\alpha+\delta
i+\beta j+\gamma ij$ where $i$ and $j$ generate $C$, anti-commute, and square to
$\eta$ and $\varepsilon$ respectively, we may embed $C$ into
$M_{2}(\mathbb{K})$, where $\mathbb{K}=\mathbb{F}(\sqrt{\eta})$ with Galois
automorphism $\sigma$, as the cyclic algebra $(\mathbb{K},\sigma,\varepsilon)$,
and then our element becomes an element of
$A_{\mathbb{K}}=M_{2}(B_{\mathbb{K}})$, to which we may apply Proposition
\ref{NAexp}. The result is
\[N^{A}_{\mathbb{F}}(\alpha+\delta i+\beta j+\gamma
ij)=N^{B}_{\mathbb{F}}(\alpha)^{2}+\eta^{2}N^{B}_{\mathbb{F}}(\delta)^{2}
+\varepsilon^{2}N^{B}_{\mathbb{F}}(\beta)^{2}+\eta^{2}\varepsilon^{2}N^{B}_{
\mathbb{F}}(\gamma)^{2}+\] \[-\eta
Tr^{B}_{\mathbb{F}}\big((\overline{\alpha}\delta)^{2}\big)-\varepsilon
Tr^{B}_{\mathbb{F}}\big((\overline{\alpha}\beta)^{2}\big)+\eta\varepsilon
Tr^{B}_{\mathbb{F}}\big((\overline{\alpha}\gamma)^{2}\big)+\eta\varepsilon
Tr^{B}_{\mathbb{F}}\big((\overline{\delta}\beta)^{2}\big)+\]
\[-\eta^{2}\varepsilon
Tr^{B}_{\mathbb{F}}\big((\overline{\delta}\gamma)^{2}\big)-\eta\varepsilon^{2}
Tr^{B}_{\mathbb{F}}\big((\overline{\beta}\gamma)^{2}\big)-2\eta\varepsilon
Tr^{B}_{\mathbb{F}}\big(\overline{\alpha}\beta\overline{\delta}\gamma\big)
+2\eta\varepsilon
Tr^{B}_{\mathbb{F}}\big(\overline{\alpha}\gamma\overline{\delta}\beta\big),\] and it
is seen to reduce to $N^{B}_{\mathbb{F}}(b)^{2}N^{C}_{\mathbb{F}}(c)^{2}$ when
our element is a single tensor $b \otimes c$ (by choosing $i$ to be the
traceless part of $c$ if it is non-zero). More importantly, we have
\begin{cor}
The equality $N^{A}_{\mathbb{F}}(u)=|u|^{4}$ holds for every $u \in A^{-}$.
\label{NAvn2}
\end{cor}
\begin{proof}
One way to see this is by writing $u=x\otimes1+1 \otimes y$ and choosing $i=y$
in the basis for $C$ in the latter formula (if $y$ does not vanish).
Alternatively, we consider $A=M_{2}(B)$ (with $C=M_{2}(\mathbb{F})$) first,
where elements of $A^{-}=M_{2}(B)^{-}$ take the form $u=\binom{\lambda\ \ -r}{s\
\ -\overline{\lambda}}$, in which $\lambda \in B$ and $r$ and $s$ are from
$\mathbb{F}$. For such an element we have
$|u|^{2}=N^{B}_{\mathbb{F}}(\lambda)-rs$ (this expression resembles the
Moore determinant of Hermitian matrices---see Corollary \ref{iso6d1} below), and
the expression from Proposition \ref{NAexp} indeed yields the square of the
latter expression. For the general case we embed $C$ in $M_{2}(\mathbb{K})$ and
$A$ in $M_{2}(B_{\mathbb{K}})$ as above and use extension of scalars. This
proves the corollary.
\end{proof}
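In the notation of that proof, the equality can also be checked directly:
applying Proposition \ref{NAexp} with $a=\lambda$, $b=-r$, $c=s$, and
$d=-\overline{\lambda}$ gives
\[N^{A}_{\mathbb{F}}(u)=N^{B}_{\mathbb{F}}(\lambda)^{2}+r^{2}s^{2}
-rsTr^{B}_{\mathbb{F}}(\overline{\lambda}\lambda)
=\big(N^{B}_{\mathbb{F}}(\lambda)-rs\big)^{2}=|u|^{4}.\]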
Corollary \ref{NAvn2} emphasizes the fact that an element $u \in A^{-}$ is
invertible if and only if $|u|^{2}\neq0$. We also note that for every $a \in A$
we have $N^{A}_{\mathbb{F}}(a)=N^{A}_{\mathbb{F}}(\overline{a})$, either by a
direct evaluation using our formulae (and the fact that the same assertion holds
for the reduced norms of the quaternion algebras $B$ and $C$) or by Corollary
2.2 of \cite{[KMRT]}.
\smallskip
The bi-quaternion algebra $A$ operates on itself via $g:M \mapsto
gM\overline{g}$, and this action preserves the subspaces $A^{\pm}$. The
properties for this action of $A$ on the 6-dimensional space $A^{-}$ underlying
the Albert form which will be useful for our purposes are given in the
following
\begin{prop}
The action of $g \in A$ multiplies the norm $|u|^{2}$ of the element $u \in
A^{-}$ by $N^{A}_{\mathbb{F}}(g)$. The only invertible elements whose action on
$A^{-}$ is a global scalar multiplication are scalars from
$\mathbb{F}^{\times}$. \label{NAFg}
\end{prop}
\begin{proof}
We first consider the case where $C=M_{2}(\mathbb{F})$ and $A=M_{2}(B)$. The
element $u$ takes the form $\binom{\lambda\ \ -r}{s\ \ -\overline{\lambda}}$,
with $|u|^{2}=N^{B}_{\mathbb{F}}(\lambda)-rs$, as in the proof of Corollary
\ref{NAvn2}. Now, for $g=\binom{a\ \ b}{c\ \ d} \in M_{2}(B)$ we have
$\overline{g}=\binom{\ \ \overline{d}\ \ -\overline{b}}{-\overline{c}\ \ \ \
\overline{a}}$, and then $gu\overline{g} \in M_{2}(B)^{-}$ involves the
quaternion
$a\lambda\overline{d}+sb\overline{d}+ra\overline{c}+b\overline{\lambda}\overline
{c}$ and the numbers
$rN^{B}_{\mathbb{F}}(a)+Tr^{B}_{\mathbb{F}}(a\lambda\overline{b})+sN^{B}_{
\mathbb{F}}(b)$ and
$sN^{B}_{\mathbb{F}}(d)+Tr^{B}_{\mathbb{F}}(c\lambda\overline{d})+rN^{B}_{
\mathbb{F}}(c)$ in place of $r$ and $s$, respectively. Evaluating the norm
of the element involving these parameters yields the desired expression
\[\big[N^{B}_{\mathbb{F}}(a)N^{B}_{\mathbb{F}}(d)+N^{B}_{\mathbb{F}}(b)N^{B}_{
\mathbb{F}}(c)-Tr^{B}_{\mathbb{F}}(\overline{a}b\overline{d}c)\big]
\big(N^{B}_{\mathbb{F}}(\lambda)-rs\big).\] In addition, if $g$ is such that
these parameters are multiples of the original ones by a global scalar, then
$b\overline{d}=a\overline{c}=0$ and
$N^{B}_{\mathbb{F}}(b)=N^{B}_{\mathbb{F}}(c)=0$, and the fact that the trace
form on $B$ is non-degenerate implies also $\overline{b}a=\overline{d}c=0$.
Since invertible elements of $A$ have non-zero norm, we find that
$N^{B}_{\mathbb{F}}(a)$ and $N^{B}_{\mathbb{F}}(d)$ cannot vanish, hence $a$
and $d$ are invertible and thus $b=c=0$. The global scalar property now implies
$N^{B}_{\mathbb{F}}(a)=N^{B}_{\mathbb{F}}(d)$, examining the effect on the
elements in which $\lambda$ is $d$ or $\overline{a}$ implies $a=d$, and the
scalar multiplication property shows that this element $a=d$ is central in $A$.
Hence $g\in\mathbb{F}^{\times}$ as desired, and the converse direction is
trivial.
In the case where $A$ is a division algebra, we can extend scalars to a
splitting field $\mathbb{K}$ of $C$, apply the arguments over $\mathbb{K}$, and
then return to our space over $\mathbb{F}$. This works since the multiplier lies
in $\mathbb{F}$ (it is the reduced norm of an element of a bi-quaternion algebra
over $\mathbb{F}$), and since the only elements of $A$ which become scalars in
$A_{\mathbb{K}}$ come from $\mathbb{F}$. This proves the proposition.
\end{proof}
Before we define the group which will become the Gspin group in this case, we
need another
\begin{lem}
The equality
$\widetilde{gu\overline{g}}=N^{A}_{\mathbb{F}}(g)\overline{g}^{-1}\tilde{u}g^{-1
}$ holds for any $g \in A^{\times}$ and $u \in A^{-}$. \label{AxA-rel}
\end{lem}
\begin{proof}
From Lemma \ref{vnorm6} and Proposition \ref{NAFg} we get
\[gu\overline{g}\cdot\widetilde{gu\overline{g}}=|gu\overline{g}|^{2}
=N^{A}_{\mathbb{F}}(g)|u|^{2}=N^{A}_{\mathbb{F}}(g)g \cdot u\tilde{u} \cdot
g^{-1}=N^{A}_{\mathbb{F}}(g)gu\overline{g}\cdot\overline{g}^{-1}\tilde{u}g^{-1},
\] since $|u|^{2}=u\tilde{u}$ is a scalar (hence central in $A$). The assertion
now follows for every $u \in A^{-} \cap A^{\times}$, and extends to all of
$A^{-}$ by linearity and the fact that $A^{-}$ admits a basis of (orthogonal)
vectors which lie in $A^{\times}$. This proves the lemma.
\end{proof}
We consider the subgroup $A^{(\mathbb{F}^{\times})^{2}}$ of $A^{\times}$. This
group contains every single tensor $b \otimes c$ with $b \in B^{\times}$ and $c
\in C^{\times}$ (and in particular any scalar from $\mathbb{F}^{\times}$), and
Corollary \ref{NAvn2} shows that $A^{-} \cap A^{\times} \subseteq
A^{(\mathbb{F}^{\times})^{2}}$. Let $\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$
denote the ``metaplectic-like'' double cover of $A^{(\mathbb{F}^{\times})^{2}}$
consisting of pairs $(g,t) \in
A^{(\mathbb{F}^{\times})^{2}}\times\mathbb{F}^{\times}$ such that
$N^{A}_{\mathbb{F}}(g)=t^{2}$ (with coordinate-wise product). This group appears
in Section 17 of \cite{[KMRT]} in relation with the orthogonal group $O(V)$,
and we establish this connection using very simple means.
\begin{lem}
The operation $(g,t):u\mapsto\frac{gu\overline{g}}{t}$ defines a map
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}} \to O(A^{-})$, with kernel consisting
of the elements $(r,r^{2})$ for $r\in\mathbb{F}^{\times}$. The automorphism
$g\mapsto\overline{g}^{-1}$ of $A^{\times}$ preserves
$A^{(\mathbb{F}^{\times})^{2}}$, and
$(g,t)\mapsto(t\overline{g}^{-1},t)$ is an automorphism of
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ of order 2. If $\tilde{\theta}$
generates a cyclic group of order 2 acting on
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ by this automorphism, then sending
$\tilde{\theta}$ to $\theta$ yields a map from the associated semi-direct
product to $O(A^{-})$.
\label{ac6d1}
\end{lem}
\begin{proof}
The fact that the image of $\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ lies in
$O(A^{-})$ follows from Proposition \ref{NAFg} and the definition of
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$. As for the kernel, Proposition
\ref{NAFg} implies that elements of the kernel must lie over
$\mathbb{F}^{\times}$, and the fact that they take the form $(r,r^{2})$ (and
that these elements indeed lie in $\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$)
is immediate. The fact that $N^{A}_{\mathbb{F}}(\overline{g}^{\ -1})$ is the
reciprocal of $N^{A}_{\mathbb{F}}(g)$ shows that the automorphism preserves
$A^{(\mathbb{F}^{\times})^{2}}$, and the fact that
$N^{A}_{\mathbb{F}}(t)=t^{4}$ and $t\overline{t}^{\ -1}=1$ for
$t\in\mathbb{F}^{\times}$ proves the assertions about the automorphism of
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$. As $\theta \in O(A^{-})$ (clear),
the result about the semi-direct product follows from Lemma \ref{AxA-rel} and
the fact that $t^{2}=N^{A}_{\mathbb{F}}(g)$ for
$(g,t)\in\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$. This proves the lemma.
\end{proof}
The multiplication by $t$ in the automorphism of
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ is not necessary. However, we
include it so that certain maps below (see Lemmas \ref{Qtheta} and
\ref{Qhatpsi}) take a neater form.
As in the previous cases, we now prove
\begin{lem}
The reflection in the vector $g \in A^{-} \cap A^{\times}$ lies in the image of
the semi-direct product from Lemma \ref{ac6d1}, and takes the form
$u\mapsto\frac{g\tilde{u}\overline{g}}{|g|^{2}}$. \label{ref6d1}
\end{lem}
\begin{proof}
Corollary \ref{NAvn2} shows that
$(g,|g|^{2})\in\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$, so that the asserted
operation lies in the image of the map from Lemma \ref{ac6d1}. Now,
$\overline{g}=-g$ since $g \in A^{-}$, and Lemma \ref{vnorm6} shows that the
denominator is $g\tilde{g}$. In the same manner as in the previous cases,
replacing $-g\tilde{u}$ by $u\tilde{g}$ for $u \in g^{\perp}$ and just
substituting $g$ for $u=g$ shows that this expression yields $u$ in the case
$u \in g^{\perp}$ and $-g=-u$ for $u=g$. This proves the lemma.
\end{proof}
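For the reader's convenience, the two cases in this proof amount to the
following short computation (a sketch, using $\overline{g}=-g$, the fact that
$|g|^{2}=g\tilde{g}=\tilde{g}g$ is a scalar by Lemma \ref{vnorm6}, and the
relation $g\tilde{u}=-u\tilde{g}$ for $u \in g^{\perp}$ invoked above):
\[\frac{g\tilde{u}\overline{g}}{|g|^{2}}=\frac{-g\tilde{u}g}{\tilde{g}g}=
\frac{u\tilde{g}g}{\tilde{g}g}=u\quad\mathrm{for}\ u \in g^{\perp},\qquad
\frac{g\tilde{g}\overline{g}}{|g|^{2}}=\frac{-(g\tilde{g})g}{g\tilde{g}}=-g
\quad\mathrm{for}\ u=g.\]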
We remark that the operation from Lemma \ref{ref6d1} is indeed invariant under
interchanging the roles of $B$ and $C$, since then both $\theta$ and $|g|^{2}$
are being inverted.
We now come to prove
\begin{thm}
The Gspin group of $A^{-}$ is $\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$, and
it is generated by the elements $(g,|g|^{2})$ with $g \in A^{-} \cap
A^{\times}$. The spin group is a subgroup mapping bijectively onto $A^{1}$.
\label{dim6d1}
\end{thm}
\begin{proof}
As in the previous cases, the fact that $\theta$ has determinant $-1$ (it
inverts a 3-dimensional subspace), Lemma \ref{ref6d1}, and Proposition \ref{CDT}
show that the map from the semi-direct product in Lemma \ref{ac6d1} to
$O(A^{-})$ is surjective, and its restriction to
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ maps surjectively onto $SO(A^{-})$,
with kernel (isomorphic to) $\mathbb{F}^{\times}$. Hence
$Gspin(A^{-})=\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$, and from the structure
of the semi-direct product we deduce the generation of the latter group by
elements arising from $A^{-} \cap A^{\times}$. Note that here the latter
assertion is not so easy to establish in a direct manner. As $\theta$ has spinor
norm 1 (the 3-dimensional space which it inverts has determinant 1), Lemma
\ref{ref6d1} implies that the spinor norm of the element $(g,|g|^{2})$ is just
$|g|^{2}$. As these elements generate
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$, multiplicativity shows that the
spinor norm of every element
$(g,t)\in\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ is just $t$ (in particular,
the element $(1,-1)$, operating as $-Id_{A^{-}}$, has the required spinor norm
$-1$, which equals the determinant of $A^{-}$). Thus, the elements lying over
$SO^{1}(A^{-})$ come with second coordinate from $(\mathbb{F}^{\times})^{2}$,
and as $(r,r^{2})$ operates trivially for every $r\in\mathbb{F}^{\times}$, we
may divide by it and consider just elements of
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ of the form $(g,1)$. As this group
maps isomorphically onto $A^{1}$ in the natural projection
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}} \to A^{(\mathbb{F}^{\times})^{2}}$,
and its intersection with the image of the kernel $\mathbb{F}^{\times}$ is just
$\pm1$, we find that the spin group $spin(A^{-})$ is just (this isomorphic image
of) $A^{1}$. This proves the theorem.
\end{proof}
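In summary (this merely repackages the arguments of the proof), the action from
Lemma \ref{ac6d1} yields the exact sequence
\[1\longrightarrow\{(r,r^{2})\,|\,r\in\mathbb{F}^{\times}\}\longrightarrow
\widetilde{A}^{(\mathbb{F}^{\times})^{2}}\longrightarrow
SO(A^{-})\longrightarrow1,\] in which the spinor norm of the image of $(g,t)$
is the class of $t$ in $\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$, so that
\[spin(A^{-})=\{(g,1)\,|\,g \in A^{1}\}\cong A^{1}.\]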
Observe that while the space depends on the choice of the involution on $A$
(i.e., the decomposition as a tensor product of quaternion algebras), the groups
$SO(A^{-})$ and $SO^{1}(A^{-})$ indeed do not.
For the isotropic case, we have
\begin{cor}
If $A^{-}$ is isotropic then there is a quaternion algebra $B$ such that the
Gspin group is the double cover
$\widetilde{GL}_{2}^{(\mathbb{F}^{\times})^{2}}(B)$ of the group
$GL_{2}^{(\mathbb{F}^{\times})^{2}}(B)$ consisting of those matrices
in which the expression from Proposition \ref{NAFg} is a square, where the
double cover involves choosing a square root for it. The spin group is just the
subgroup $GL_{2}^{1}(B)$ in which this expression equals 1. If $A^{-}$ splits
another hyperbolic plane then it is the sum of three such planes, the Gspin
group is $\widetilde{GL}_{4}^{(\mathbb{F}^{\times})^{2}}(\mathbb{F})$, and the
spin group is $SL_{4}(\mathbb{F})$. \label{iso6d1}
\end{cor}
\begin{proof}
The fact that $A$ cannot be a division algebra follows immediately from the fact
that some elements of $A^{-}$ are zero-divisors when $A^{-}$ is isotropic.
Moreover, in this case $A^{-}$ splits a hyperbolic plane, so that by choosing
the subspace which we normalize as the $B_{0}$ part to lie in the orthogonal
complement of this hyperbolic plane we may assume (see Corollary \ref{iso3})
that $C=M_{2}(\mathbb{F})$. Hence $A=M_{2}(B)$, the space $A^{-}$ consists of
the matrices $\binom{\lambda\ \ -r}{s\ \ -\overline{\lambda}}$ from the proof of
Corollary \ref{NAvn2}, and
$A^{(\mathbb{F}^{\times})^{2}}=GL_{2}^{(\mathbb{F}^{\times})^{2}}(B)$ is defined
according to the expression from Proposition \ref{NAFg}. Thus, Theorem
\ref{dim6d1} shows that $Gspin\big(M_{2}(B)^{-}\big)$ is the double cover
$\widetilde{GL}_{2}^{(\mathbb{F}^{\times})^{2}}(B)$, and
$spin\big(M_{2}(B)^{-}\big)$ is $A^{1}=GL_{2}^{1}(B)$ according to the same
expression. Observe that the complement of a hyperbolic plane (the $r$ and $s$
coordinates) is the space $B$ from Corollary \ref{dim4d1}. Hence $A^{-}$ splits
another hyperbolic plane if and only if $B$ also splits (see Corollary
\ref{iso4}), and as in this case the complement of these two planes has
discriminant 1, it is again a hyperbolic plane by Corollary \ref{iso2}. Hence
$A=M_{4}(\mathbb{F})$, the reduced norm is the determinant, and the Gspin and
spin groups from Theorem \ref{dim6d1} become indeed
$\widetilde{GL}_{4}^{(\mathbb{F}^{\times})^{2}}(\mathbb{F})$ and
$SL_{4}(\mathbb{F})$ respectively. This proves the corollary.
\end{proof}
Theorem 16.5 of \cite{[KMRT]} relates the index of $A$ (4 in case of a division
algebra, 1 in the split case, 2 in the middle) to the number of hyperbolic
planes one can split from the Albert form of $A$ (0, 3, and 1 respectively).
The hardest direction is to prove that if the Albert form is anisotropic then
$A$ is a division algebra, yielding the converse direction of Corollary
\ref{iso6d1}. On the other hand, all the remaining assertions are either trivial
or follow from the proof of Corollary \ref{iso6d1}.
We also have
\begin{cor}
Given a quaternion algebra $B$, the groups
$\widetilde{GL}_{2}^{(\mathbb{F}^{\times})^{2}}(B)$ and $GL_{2}^{1}(B)$ are also
the Gspin and spin groups of the space $M_{2}^{Her}(B)$ of Hermitian $2\times2$
matrices over $B$, which is quadratic with the vector norm being (minus) the
Moore determinant, through the action via
$g:X\mapsto\frac{gX\iota_{B}(g)^{t}}{t}$. In addition, the groups
$\widetilde{GL}_{4}^{(\mathbb{F}^{\times})^{2}}(\mathbb{F})$ and
$SL_{4}(\mathbb{F})$ are the Gspin and spin groups of $M_{4}^{as}(\mathbb{F})$,
the space of anti-symmetric matrices over $\mathbb{F}$ (with minus the Pfaffian
as the quadratic form) on which they operate as $g:T\mapsto\frac{gTg^{t}}{t}$,
as well as on another space, with the operation $\big(g=\binom{a\ \ b}{c\ \
d},t\big):S\mapsto\binom{a\ \ b}{c\ \ d}S\binom{\ \ d^{t}\ \ -b^{t}}{-c^{t}\ \ \
\ a^{t}}/t$. \label{alt6d1}
\end{cor}
\begin{proof}
The first assertion follows, as in Corollaries \ref{alt3} and \ref{alt4}, from
Corollary \ref{iso6d1}, Lemma \ref{Sadjt}, and the fact that $M_{2}^{Her}(B)$ is
the image of $M_{2}(B)^{-}$ under right multiplication by $\binom{\ \ 0\ \
1}{-1\ \ 0}$. Indeed, the expression $N^{B}_{\mathbb{F}}(\lambda)-rs$ becomes
minus the Moore determinant of $\binom{r\ \ \lambda}{\overline{\lambda}\ \ s}$.
For the second assertion, we may apply Lemma \ref{Sadjt} again, but now for the
elements of $B=M_{2}(\mathbb{F})$ which are entries of matrices in $A=M_{2}(B)$.
The representation just described becomes $M_{4}^{as}(\mathbb{F})$ under this
operation: The diagonal blocks $rI$ and $sI$ are taken to multiples of the
anti-symmetric matrix $\binom{\ \ 0\ \ 1}{-1\ \ 0}$, the off-diagonal blocks are
taken care of by Lemma \ref{Sadjt} and the fact that $\binom{\ \ 0\ \ 1}{-1\ \
0}$ is anti-symmetric, and the Moore determinant becomes the Pfaffian. The
fourth representation is obtained from the one considered in Theorem
\ref{dim6d1} by this right multiplication. This proves the corollary.
\end{proof}
The representation on $M_{4}^{as}(\mathbb{F})$ uses an orthogonal involution,
while the two others use symplectic ones (compare with Proposition 2.23(1) of
\cite{[KMRT]} again). The generator $\theta$ of $O(A^{-})/SO(A^{-})$ corresponds
to the adjoint involution of $2\times2$ matrices in Theorem \ref{dim6d1} in case
$A=M_{2}(B)$, and to $X\mapsto-X^{t}$ on the space $M_{2}^{Her}(B)$ from
Corollary \ref{alt6d1}. On the other hand, conjugating by $\binom{\ \ 0\ \
1}{-1\ \ 0}$ sends the latter involution to $X\mapsto-adjX$ on $M_{2}^{Her}(B)$,
and the product of $X$ and its image under this involution yields $|X|^{2}$
again. The map $\theta$ yields involutions on $M_{4}^{as}(\mathbb{F})$ and on the other
space as well, and after appropriate conjugations (by $\binom{\ \ 0\ \ 1}{-1\ \
0}$ on the entries as $2\times2$ matrices, or on both), we see that these spaces
also admit involutions with the property that the product of a vector with its
image yields the vector norm. The latter involution on $M_{4}^{as}(\mathbb{F})$
will be denoted $T\mapsto\hat{T}$.
\section{Dimension 5 \label{Dim5}}
The spaces we get in dimension 5 are those given in
\begin{lem}
Every 5-dimensional space is isometric, up to a scalar multiple, to the
orthogonal complement of an anisotropic vector $Q$ in the space $A^{-}$ arising
from a bi-quaternion algebra $A$ presented as $B \otimes C$. \label{sp5}
\end{lem}
\begin{proof}
First we remark that for anisotropic $Q \in A^{-}$, the subspace $Q^{\perp}$ of
$A^{-}$ is 5-dimensional and non-degenerate. Conversely, a 5-dimensional
quadratic space can be extended, uniquely up to the choice of a
generator, to a 6-dimensional space of discriminant 1: If the space has some
discriminant (or determinant) $d$, this is done by adding an element $Q$ with
$|Q|^{2}=-d$. The lemma now follows directly from Lemma \ref{sp6d1}.
\end{proof}
In fact, by choosing the 3-dimensional subspace from the proof of Lemma
\ref{sp6d1} to be contained in our original space $Q^{\perp}$ we make sure that
$Q \in 1 \otimes C_{0}$ (see Proposition \ref{NQF2NA} below for a more precise
statement), but we shall not need this fact. In any case, we shall write our
vector space as $Q^{\perp} \subseteq A^{-}$. The next step is
\begin{lem}
The groups $SO(Q^{\perp} \subseteq A^{-})$, $SO^{1}(Q^{\perp} \subseteq A^{-})$,
$Gspin(Q^{\perp} \subseteq A^{-})$, and $spin(Q^{\perp} \subseteq A^{-})$ are
the subgroups of $SO(A^{-})$, $SO^{1}(A^{-})$, $Gspin(A^{-})$, and $spin(A^{-})$
respectively, consisting of those elements whose action stabilizes $Q$.
\label{ac5}
\end{lem}
\begin{proof}
The Witt Cancellation Theorem implies that any element of $O(Q^{\perp} \subseteq
A^{-})$ comes from an element of $O(A^{-})$ under which $Q^{\perp}$ is
invariant, and $\mathbb{F}Q$ must therefore also be invariant under such
extensions. As $O(\mathbb{F}Q)=\{\pm1\}$, and $-1$ has determinant $-1$ there,
the assertion about $SO(Q^{\perp} \subseteq A^{-})$ follows. Considering
extensions of $O(Q^{\perp} \subseteq A^{-})$ to $O(A^{-})$ by taking only $+1$
on $\mathbb{F}Q$, Proposition \ref{CDT} shows that any element there has the
same spinor norm in both $O(Q^{\perp} \subseteq A^{-})$ and $O(A^{-})$ (since
this is true for reflections in vectors from $Q^{\perp}$). The assertion for
$SO^{1}(Q^{\perp} \subseteq A^{-})$ follows, and for the remaining two groups we
just use the maps into $O(A^{-})$. This proves the lemma.
\end{proof}
For a simpler description, consider the group $A^{\times}_{\mathbb{F}Q}$ of
those $g \in A^{\times}$ such that $gQ\overline{g}\in\mathbb{F}Q$ (i.e., those
which preserve the 1-dimensional space $\mathbb{F}Q$), which comes with a group
homomorphism $t:A^{\times}_{\mathbb{F}Q}\to\mathbb{F}^{\times}$ defined
by $gQ\overline{g}=t(g)Q$. We remark that $A^{\times}_{\mathbb{F}Q}$ can be seen
as the group of similitudes of $A$ with respect to the involution $x \mapsto
Q\overline{x}Q^{-1}$ (which is symplectic since $Q \in A^{-}$). We now have
\begin{lem}
The group $A^{\times}_{\mathbb{F}Q}$ is a subgroup of
$A^{(\mathbb{F}^{\times})^{2}}$. The double cover
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ splits over
$A^{\times}_{\mathbb{F}Q}$, and the image of the splitting map contains the group
$\{(r,r^{2})|r\in\mathbb{F}^{\times}\}$. \label{AFQAF2}
\end{lem}
\begin{proof}
The equality $gQ\overline{g}=t(g)Q$ holding for $g \in A^{\times}_{\mathbb{F}Q}$
implies, by Proposition \ref{NAFg} and the fact that $|Q|^{2}\neq0$, the
equality $N^{A}_{\mathbb{F}}(g)=t(g)^{2}$. This establishes the first assertion,
and also introduces the splitting homomorphism $g\mapsto\big(g,t(g)\big)$
from $A^{\times}_{\mathbb{F}Q}$ into $\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$.
The fact that $rQ\overline{r}=r^{2}Q$ for $r\in\mathbb{F}^{\times}$ completes
the proof of the lemma.
\end{proof}
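Explicitly, the splitting whose existence the lemma asserts is (as in the
proof) the homomorphism
\[A^{\times}_{\mathbb{F}Q}\ni g\mapsto\big(g,t(g)\big)\in
\widetilde{A}^{(\mathbb{F}^{\times})^{2}},\qquad\mathrm{where}\quad
gQ\overline{g}=t(g)Q\quad\mathrm{and}\quad N^{A}_{\mathbb{F}}(g)=t(g)^{2}.\]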
We can now prove
\begin{thm}
We have $Gspin(Q^{\perp} \subseteq A^{-})=A^{\times}_{\mathbb{F}Q}$, and
$spin(Q^{\perp} \subseteq A^{-})$ is the group $A^{1}_{Q}$ of elements $g
\in A^{1}$ such that $gQ\overline{g}=Q$. \label{dim5}
\end{thm}
\begin{proof}
For the Gspin group, Lemma \ref{ac5} shows that it suffices to prove that the
image of $A^{\times}_{\mathbb{F}Q}$ under the splitting map from Lemma
\ref{AFQAF2} is the stabilizer of $Q$ under the action of
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$. But it is clear from the definition
that the image of this splitting map stabilizes $Q$, and that elements of this
stabilizer must come from $A^{\times}_{\mathbb{F}Q}$. As replacing $t$ by $-t$
yields an element taking $Q$ to $-Q$, the assertion for the Gspin group follows.
The statement for the spin group follows directly from Lemma \ref{ac5}, since no
scalar appears in the action of elements of the form $(g,1)$ with $g \in A^{1}$.
Note that for $g \in A^{1}_{Q}$ we have $t(g)=1$, so that the corresponding
element is indeed of this form. This proves the theorem.
\end{proof}
Note that our construction is based on the choice of $Q \in A^{-}$. However, the
only parameter required for determining the isomorphism classes of
$A^{\times}_{\mathbb{F}Q}$ and $A^{1}_{Q}$ is given in the following
\begin{prop}
If an element $R \in A^{-}$ satisfies
$|R|^{2}\in|Q|^{2}(\mathbb{F}^{\times})^{2}N^{A}_{\mathbb{F}}(A^{\times})$ then
$A^{\times}_{\mathbb{F}Q} \cong A^{\times}_{\mathbb{F}R}$ and $A^{1}_{Q} \cong
A^{1}_{R}$. \label{NQF2NA}
\end{prop}
\begin{proof}
First note that $A^{\times}_{\mathbb{F}rQ}=A^{\times}_{\mathbb{F}Q}$ and
$A^{1}_{rQ}=A^{1}_{Q}$ for every $r\in\mathbb{F}^{\times}$, so as long as only
(non-zero) vector norms are involved, we may consider them modulo
$(\mathbb{F}^{\times})^{2}$. Now, for any $a \in A^{\times}$, conjugation by $a$
takes $A^{\times}_{\mathbb{F}Q}$ to $A^{\times}_{\mathbb{F}aQ\overline{a}}$ and
$A^{1}_{Q}$ to $A^{1}_{aQ\overline{a}}$. Applying this to $(a,t)$ with $a \in
A^{(\mathbb{F}^{\times})^{2}}$ and using the transitivity of the action of
$SO(A^{-})$ on elements of the same vector norm (Witt Cancellation again)
establishes the assertion in case $|R|^{2}=|Q|^{2}$ (hence also if
$|R|^{2}\in|Q|^{2}(\mathbb{F}^{\times})^{2}$), and then doing so for general $a
\in A^{\times}$ allows us to divide also by $N^{A}_{\mathbb{F}}(A^{\times})$.
This proves the proposition.
\end{proof}
When we consider the case where $A^{-}$ is isotropic, the relation from
Proposition \ref{NQF2NA} becomes simpler by the following
\begin{lem}
If $A=M_{2}(B)$ then $N^{A}_{\mathbb{F}}(A)=N^{B}_{\mathbb{F}}(B)$ and
$N^{A}_{\mathbb{F}}(A^{\times})=N^{B}_{\mathbb{F}}(B^{\times})$. \label{NM2B}
\end{lem}
\begin{proof}
For any $b \in B$ we get $N^{B}_{\mathbb{F}}(b)$ as the reduced norm of the
element $\binom{b\ \ 0}{0\ \ 1}$ of $A$. For the other direction, the proof of
Proposition \ref{NAFg} shows that if a matrix in $M_{2}(B)$ has an invertible
entry then its reduced norm is the product of two norms from $B$ (e.g.,
$N^{B}_{\mathbb{F}}(a)N^{B}_{\mathbb{F}}(d-ca^{-1}b)$ if $a \in B^{\times}$),
which completes the proof for division algebras $B$ since $N^{B}_{\mathbb{F}}$
is multiplicative. As for $B=M_{2}(\mathbb{F})$ and $A=M_{4}(\mathbb{F})$, every
element of $\mathbb{F}$ is both a $2\times2$ determinant and a $4\times4$
determinant, so the proof of the lemma is complete.
\end{proof}
Rather than isotropy, the relevant condition in this case is the one appearing in
\begin{cor}
If the 5-dimensional space of discriminant $d$ represents $-d$ then its Gspin
and spin groups are isomorphic to $GSp_{B}\binom{-\delta\ \ 0}{\ \ 0\ \ 1}$ and
$Sp_{B}\binom{-\delta\ \ 0}{\ \ 0\ \ 1}$ respectively, for some quaternion
algebra $B$ and some $\delta\in\mathbb{F}^{\times}$ which may be taken from a
set of representatives for $\mathbb{F}^{\times}/N^{B}_{\mathbb{F}}(B^{\times})$. In case
this space splits two hyperbolic planes, these groups are isomorphic to the
classical groups $GSp_{4}(\mathbb{F})$ and $Sp_{4}(\mathbb{F})$ respectively.
\label{iso5}
\end{cor}
\begin{proof}
In this case (which covers the case of an isotropic space since a hyperbolic
plane represents every element of $\mathbb{F}$) the space $A^{-}$ from the
proof of Lemma \ref{sp5} will be isotropic. Hence we can assume that
$A=M_{2}(B)$ (with the associated involution) by Corollary \ref{iso6d1}, and the
Gspin and spin groups from Theorem \ref{dim5} are $GL_{2}(B)_{\mathbb{F}Q}$ and
$GL_{2}^{1}(B)_{Q}$ respectively. Proposition \ref{NQF2NA} shows that the only
parameter required for determining the isomorphism classes of these groups is
$|Q|^{2}$ up to $(\mathbb{F}^{\times})^{2}N^{A}_{\mathbb{F}}(A^{\times})$, and
by Lemma \ref{NM2B} and the fact that $(\mathbb{F}^{\times})^{2} \subseteq
N^{B}_{\mathbb{F}}(B^{\times})$ for quaternion algebras (as
$N^{B}_{\mathbb{F}}(r)=r^{2}$), we may take the vector norm $\delta$ from the
required set of representatives and any $Q$ with $|Q|^{2}=\delta$ will do. We
choose $Q$ to be the element $\binom{0\ \ \delta}{1\ \ 0}$ of
$M_{2}(\mathbb{F})_{0}=1 \otimes C_{0} \subseteq M_{2}(B)^{-}$, with
$|Q|^{2}=-\det Q=\delta$. Using Corollary \ref{alt6d1}, the fact that
multiplying $Q$ by $\binom{\ \ 0\ \ 1}{-1\ \ 0}$ from the right gives this
diagonal matrix shows that our groups are indeed $GSp_{B}\binom{-\delta\ \ 0}{\
\ 0\ \ 1}$ and $Sp_{B}\binom{-\delta\ \ 0}{\ \ 0\ \ 1}$.
In case our space contains two hyperbolic planes, Corollary \ref{iso6d1} shows
that $B$ also splits. As $N^{B}_{\mathbb{F}}(B^{\times})=\mathbb{F}^{\times}$
in this case, we have, up to isomorphism, the same group for every choice of $Q$
(Proposition \ref{NQF2NA}). Instead of taking $Q$ as above, we recall from the proof
of Corollary \ref{NAvn2} that the space $A^{-}$ (now for split $B$ and $C$)
consists of matrices of the form $\binom{\ Y\ \ \ -rI\ \ }{sI\ \ -adjY}$ with
$r$ and $s$ from $\mathbb{F}$ and $Y \in M_{2}(\mathbb{F})$, and we take $Q$ to
be the element with $r=s=0$ and $Y=\binom{\ \ 0\ \ 1}{-1\ \ 0}$. Going over to
the representation from Corollary \ref{iso6d1} in which
$\widetilde{GL}_{4}^{(\mathbb{F}^{\times})^{2}}(\mathbb{F})$ operates by
$g:T\mapsto\frac{gTg^{t}}{t}$, our groups are easily seen to be the classical
$GSp_{4}(\mathbb{F})$ and $Sp_{4}(\mathbb{F})$. This proves the corollary.
\end{proof}
As for equivalent representations, we get
\begin{cor}
The groups $GSp_{B}\binom{-\delta\ \ 0}{\ \ 0\ \ 1}$ and $Sp_{B}\binom{-\delta\
\ 0}{\ \ 0\ \ 1}$ are the Gspin and spin groups of the orthogonal complement of
$\binom{-\delta\ \ 0}{\ \ 0\ \ 1}$ inside $M_{2}^{Her}(B)$ from Corollary
\ref{alt6d1}. The classical groups $GSp_{4}(\mathbb{F})$ and
$Sp_{4}(\mathbb{F})$ can be seen as the Gspin and spin groups of either the
complement of $\binom{0\ \ -I}{I\ \ \ \ 0}$ in $M_{4}^{as}(\mathbb{F})$ or of
the orthogonal complement of the adjoint representation on the Lie algebra
$\mathfrak{gsp}_{4}(\mathbb{F})$ of $GSp_{4}(\mathbb{F})$ inside
$M_{4}(\mathbb{F})=\mathfrak{gl}_{4}(\mathbb{F})$.
\label{alt5}
\end{cor}
\begin{proof}
This follows directly by restricting the representations from Corollary
\ref{alt6d1} to our groups. Note that the operation $g=\binom{a\ \ b}{c\ \
d}:S\mapsto\binom{a\ \ b}{c\ \ d}S\binom{\ \ d^{t}\ \ -b^{t}}{-c^{t}\ \ \ \
a^{t}}$ is conjugation for $Sp_{4}(\mathbb{F})$ (and conjugation tensored with
the determinant for $GSp_{4}(\mathbb{F})$), leaving the Lie algebra
$\mathfrak{sp}_{4}(\mathbb{F})$ invariant. The additional invariant vector is
$I$, adding which to $\mathfrak{sp}_{4}(\mathbb{F})$ yields
$\mathfrak{gsp}_{4}(\mathbb{F})$. This proves the corollary.
\end{proof}
We remark that the simplicity of $Sp_{4}(\mathbb{F})$ as an algebraic group
implies that the action on $\mathfrak{sp}_{4}(\mathbb{F})$ is an irreducible
representation of $Sp_{4}(\mathbb{F})$. Hence Corollary \ref{alt5} yields a
complete reduction of the representation $\mathfrak{gl}_{4}(\mathbb{F})$ of
these groups as the direct sum of $\mathfrak{sp}_{4}(\mathbb{F})$,
$\mathbb{F}I$ (as the determinant), and the 5-dimensional representation of
$Sp_{4}(\mathbb{F})$ as an $SO$ or $SO^{1}$ group. The adjoint representation
appears for the group $Sp_{B}\binom{-\delta\ \ 0}{\ \ 0\ \ 1}$ in Corollary
\ref{alt5} only if $\delta=-1$.
\section{Dimension 6, General Discriminant \label{Dim6gen}}
If our 6-dimensional quadratic space over $\mathbb{F}$ now has some discriminant
$d$, take $\mathbb{E}=\mathbb{F}(\sqrt{d})$ with Galois automorphism $\rho$ as
before. Our object of interest will be bi-quaternion algebras over $\mathbb{E}$,
which take the form $A_{\mathbb{E}}$ for some bi-quaternion algebra $A$ over
$\mathbb{F}$. Given a presentation of $A$ as $B \otimes C$ as in Lemma
\ref{sp6d1}, both the orthogonal involution $\iota_{B}\otimes\iota_{C} \otimes
Id_{\mathbb{E}}:x\mapsto\overline{x}$ and the unitary involution
$\iota_{B}\otimes\iota_{C}\otimes\rho:x\mapsto\overline{x}^{\rho}$ are defined
on $A_{\mathbb{E}}$. The space $A_{\mathbb{E}}^{-}$ from Lemma \ref{sp6d1} is
defined as a quadratic space over $\mathbb{E}$, and we shall consider the vector
space over $\mathbb{F}$ which is defined in the following
\begin{lem}
Take some anisotropic $Q \in A^{-}$. The set of elements $u \in
A_{\mathbb{E}}^{-}$ satisfying the equality
$u^{\rho}=-\frac{Q\tilde{u}Q}{|Q|^{2}}$ forms a 6-dimensional quadratic space of
discriminant $d$ over $\mathbb{F}$. Every quadratic space of dimension 6 and
discriminant $d$ over $\mathbb{F}$ is isomorphic, up to rescaling, to a space
which is obtained in this way. \label{sp6gen}
\end{lem}
\begin{proof}
The proof of Lemma \ref{ref6d1} shows that the expression, which we require to
equal $u^{\rho}$, is just $u$ if $u \in Q^{\perp} \subseteq
A_{\mathbb{E}}^{-}$, and is $-cQ$ in case $u=cQ$. As $Q^{\rho}=Q$, it follows
that the set of elements $u$ satisfying this property is precisely $(Q^{\perp}
\subseteq A^{-})\oplus\mathbb{E}_{0}Q$, which is the quadratic space over
$\mathbb{F}$ which is obtained from $A^{-}$ by rescaling the vector norm of $Q$
by $d$. It is thus 6-dimensional and of the required discriminant. Conversely,
given a space of dimension 6 and discriminant $d$, choose an arbitrary
anisotropic vector, and divide its vector norm by $d$. The resulting space has discriminant
1, so that by Lemma \ref{sp6d1} it takes the form $A^{-}$ for some bi-quaternion
algebra $A=B \otimes C$ over $\mathbb{F}$. But then we may extend scalars to
$\mathbb{E}$, and multiplying $|Q|^{2}$ back by $d$ (to get its original value)
means precisely replacing the $\mathbb{F}$-subspace $A^{-}$ of
$A_{\mathbb{E}}^{-}$ by the space $(Q^{\perp} \subseteq
A^{-})\oplus\mathbb{E}_{0}Q$ considered above. This completes the proof of the
lemma.
\end{proof}
We denote the space from Lemma \ref{sp6gen} by $(A_{\mathbb{E}}^{-})_{\rho,Q}$,
and observe that extending its scalars to $\mathbb{E}$ also gives
$A_{\mathbb{E}}^{-}$. The formulae from Lemma \ref{vnorm6} hold for $u \in
(A_{\mathbb{E}}^{-})_{\rho,Q}$, since such elements lie in $A_{\mathbb{E}}^{-}$
and the expressions are $\mathbb{F}$-valued on $(A_{\mathbb{E}}^{-})_{\rho,Q}$.
Note that $\rho$ acts on $(A_{\mathbb{E}}^{-})_{\rho,Q}$ like the reflection in
a generator of $\mathbb{E}_{0}Q$.
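Concretely, the proof of Lemma \ref{sp6gen} shows that this space decomposes as
\[(A_{\mathbb{E}}^{-})_{\rho,Q}=(Q^{\perp} \subseteq
A^{-})\oplus\mathbb{E}_{0}Q,\qquad\mathrm{with}\quad
|cQ|^{2}=c^{2}|Q|^{2}\in d|Q|^{2}(\mathbb{F}^{\times})^{2}\quad\mathrm{for}\
0 \neq c\in\mathbb{E}_{0},\] so that it is indeed obtained from $A^{-}$ by
rescaling the vector norm on the line spanned by $Q$ by $d$ (up to squares).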
\smallskip
The group $A_{\mathbb{E}}^{\times}$ operates on $A_{\mathbb{E}}$ either through the map from
Lemma \ref{ac6d1} (preserving $A_{\mathbb{E}}^{-}$), or via $g:M \mapsto
gM\overline{g}^{\rho}$. We define $A_{\mathbb{E},\rho,\mathbb{F}Q}^{\times}$ to
be the subgroup of $A_{\mathbb{E}}^{\times}$ which stabilizes the 1-dimensional
vector space $\mathbb{F}Q$ under the latter action (i.e., multiplies $Q$ by a
scalar), and let
$t:A_{\mathbb{E},\rho,\mathbb{F}Q}^{\times}\to\mathbb{F}^{\times}$ be the group
homomorphism defined via $gQ\overline{g}^{\rho}=t(g)Q$. Note that as
$\overline{Q}^{\rho}=-Q$ and $gQ\overline{g}^{\rho}$ lies in the same eigenspace
of $\iota_{B}\otimes\iota_{C}\otimes\rho$, allowing $t(g)$ to be in
$\mathbb{E}^{\times}$ does not produce a larger group. This group can be seen as
the group of similitudes of the unitary involution $x \mapsto
Q\overline{x}^{\rho}Q^{-1}$. It is stable under $\rho$ (just apply $\rho$ to
the defining equation), and its automorphism $\rho$ commutes with the map $t$
into $\mathbb{F}^{\times}$. The group which we shall consider here is the one
appearing in the following
\begin{lem}
The group $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ consisting of those elements
$g \in A_{\mathbb{E},\rho,\mathbb{F}Q}^{\times}$ such that
$N^{A_{\mathbb{E}}}_{\mathbb{E}}(g)=t(g)^{2}$ has index either 1 or 2 in
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{\times} \cap
A_{\mathbb{E}}^{\mathbb{F}^{\times}}$. The former group is stable under $\rho$,
and it coincides with $A_{\mathbb{E},\rho,\mathbb{F}Q}^{\times} \cap
A_{\mathbb{E}}^{(\mathbb{F}^{\times})^{2}}$ unless
$-1\in(\mathbb{F}^{\times})^{2}$ and the above index is 2. \label{ind2int}
\end{lem}
\begin{proof}
Given $g \in A_{\mathbb{E},\rho,\mathbb{F}Q}^{\times}$, the norm
$N^{A_{\mathbb{E}}}_{\mathbb{E}}(g)$ lies in $t(g)^{2}\mathbb{E}^{1}$. For
$g \in A_{\mathbb{E}}^{\mathbb{F}^{\times}}$ the additional multiplier lies in
$\mathbb{E}^{1} \cap \mathbb{F}=\{\pm1\}$, so that the group
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ equals
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{\times} \cap
A_{\mathbb{E}}^{\mathbb{F}^{\times}}$, unless
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{\times}$ contains elements $g$ with
$N^{A_{\mathbb{E}}}_{\mathbb{E}}(g)=-t(g)^{2}$, a case in which
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ has index 2 in this intersection. The
stability under $\rho$ follows from the fact that for $g \in
A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ we have $t(g)=t(g^{\rho})$ and
$N^{A}_{\mathbb{F}}(g)=t(g)^{2}$ lies in $\mathbb{F}$. The assertion about the
intersection with $A_{\mathbb{E}}^{(\mathbb{F}^{\times})^{2}}$ follows
immediately from the considerations concerning
$A_{\mathbb{E}}^{\mathbb{F}^{\times}}$. This proves the lemma.
\end{proof}
Lemma \ref{ind2int} is related to some delicate facts about the definition of
$GSU$ groups---see also Corollary \ref{iso6gen} below. Note that as
$N^{A_{\mathbb{E}}}_{\mathbb{E}}$ is of degree 4, Hilbert's Theorem 90 cannot
help us in getting a more accurate result, like the one following Lemma
\ref{ac4}. In any case, we have $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}
\subseteq A_{\mathbb{E}}^{(\mathbb{F}^{\times})^{2}} \subseteq
A_{\mathbb{E}}^{(\mathbb{E}^{\times})^{2}}$ (by Lemma \ref{ind2int}), and the
double cover $\widetilde{A}_{\mathbb{E}}^{(\mathbb{E}^{\times})^{2}}$ splits
over $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$---indeed, the map
$g\mapsto\big(g,t(g)\big)$ is, by definition, a splitting map.
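To see that this map is indeed a group homomorphism, note that the multiplier
map $t$ is multiplicative: for $g$ and $h$ in
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{\times}$ we have
\[(gh)Q\overline{gh}^{\rho}=g\big(hQ\overline{h}^{\rho}\big)\overline{g}^{\rho}
=t(h)\,gQ\overline{g}^{\rho}=t(g)t(h)Q,\] so that $t(gh)=t(g)t(h)$, and the
condition $N^{A_{\mathbb{E}}}_{\mathbb{E}}(gh)=t(gh)^{2}$ follows from the
multiplicativity of the reduced norm.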
Note that unless $Q^{2}\in\mathbb{F}$ (i.e., $\tilde{Q}\in\mathbb{F}Q$, which
means that $Q$ lies in either $B_{\mathbb{E}}\otimes1$ or $1 \otimes
C_{\mathbb{E}}$), the map $\theta$ from Lemma \ref{vnorm6} does not preserve
$(A_{\mathbb{E}}^{-})_{\rho,Q}$ (see Lemma \ref{QthetaQrel} below). Moreover,
the automorphism from Lemma
\ref{ac6d1} does not preserve $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$, since
inverting $Q$ may change the 1-dimensional space it generates. However, in order
to carry out the usual considerations also in this case, we have the following
\begin{lem}
The element $(Q,|Q|^{2})\tilde{\theta}$ squares to $(-|Q|^{2},|Q|^{4})$ in the
semi-direct product of $\{1,\tilde{\theta}\}$ and
$\widetilde{A}_{\mathbb{E}}^{^{(\mathbb{E}^{\times})^{2}}}$. It operates on
$(A_{\mathbb{E}}^{-})_{\rho,Q}$ as $\rho$ (hence preserves this space), and it
takes $(g,t) \in \widetilde{A}_{\mathbb{E}}^{^{(\mathbb{E}^{\times})^{2}}}$
to $(tQ\overline{g}^{-1}Q^{-1},t)$ by conjugation. This is an order 2
automorphism of $\widetilde{A}_{\mathbb{E}}^{^{(\mathbb{E}^{\times})^{2}}}$,
which preserves the image of $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ under the
splitting map, and coincides with the action of $\rho$ on the latter group.
\label{Qtheta}
\end{lem}
\begin{proof}
The product rule of the semi-direct product from Lemma \ref{ac6d1} implies that
the square of the element in question is
$(Q\cdot|Q|^{2}\overline{Q}^{-1},|Q|^{2}\cdot|Q|^{2})$, which equals
$(-|Q|^{2},|Q|^{4})$ since $Q \in A_{\mathbb{E}}^{-}$. This element sends $u \in
A_{\mathbb{E}}^{-}$ to $-\frac{Q\tilde{u}Q}{|Q|^{2}}$ (use the fact that $Q \in
A_{\mathbb{E}}^{-}$ again), which for $u \in (A_{\mathbb{E}}^{-})_{\rho,Q}$
coincides with $u^{\rho}\in(A_{\mathbb{E}}^{-})_{\rho,Q}$ according to Lemma
\ref{sp6gen}. The formula for the conjugation follows directly from Lemma
\ref{ac6d1}, and the fact that it is an automorphism of order 2 either follows
from the centrality of $(-1,1)$ or can be easily verified directly using the
fact that $Q \in A_{\mathbb{E}}^{-}$ once more. Now, multiplying the equation
stating that an element $g$ lies in $A_{\mathbb{E},\rho,\mathbb{F}Q}^{\times}$
by $(gQ)^{-1}$ from the left yields the equality
$\overline{g}^{\rho}=t(g)Q^{-1}g^{-1}Q$, and after applying
$\iota_{B}\otimes\iota_{C}$ we get the equality
$g^{\rho}=t(g)Q\overline{g}^{\ -1}Q^{-1}$ (since $Q \in A_{\mathbb{E}}^{-}$).
This shows that conjugation by $(Q,|Q|^{2})\tilde{\theta}$ operates on
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ as $\rho$, and as the latter group was
seen to be preserved by $\rho$ in Lemma \ref{ind2int}, this completes the proof
of the lemma.
\end{proof}
We can now proceed with
\begin{lem}
The image of the embedding of $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ into
$\widetilde{A}_{\mathbb{E}}^{^{(\mathbb{E}^{\times})^{2}}}$ by
$g\mapsto\big(g,t(g)\big)$ preserves the $\mathbb{F}$-subspace
$(A_{\mathbb{E}}^{-})_{\rho,Q}$ in the action of
$\widetilde{A}_{\mathbb{E}}^{^{(\mathbb{E}^{\times})^{2}}}$ on
$A_{\mathbb{E}}^{-}$. Adding the element $(Q,|Q|^{2})\tilde{\theta}$ from Lemma
\ref{Qtheta} gives a group, which contains
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ as a subgroup of index 2, and the
larger group also maps to $O\big((A_{\mathbb{E}}^{-})_{\rho,Q}\big)$.
\label{ac6gen}
\end{lem}
\begin{proof}
An element $u \in (A_{\mathbb{E}}^{-})_{\rho,Q}$ satisfies the condition of
Lemma \ref{sp6gen}, and for $g \in A_{\mathbb{E},\rho,\mathbb{F}Q}^{\times}$ we
have the formulae for $g^{\rho}$ and $\overline{g}^{\rho}$ from the proof of
Lemma \ref{Qtheta}. We evaluate \[(gu\overline{g})^{\rho}=t(g)Q\overline{g}^{\
-1}Q^{-1}\cdot\frac{-Q\tilde{u}Q}{|Q|^{2}}t(g)Q^{-1}g^{-1}Q=-\frac{QN^{A}_{
\mathbb{F}}(g)\overline{g}^{\ -1}\tilde{u}g^{-1}Q}{|Q|^{2}}\] (recall the
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ condition), and this equals
$\frac{Q\widetilde{gu\overline{g}}Q}{|Q|^{2}}$ by Lemma \ref{AxA-rel} (over
$\mathbb{E}$). This proves the first assertion. The remaining assertions follow
directly from Lemma \ref{Qtheta} and Lemma \ref{ac6d1} over $\mathbb{E}$, as the
quadratic structure on $(A_{\mathbb{E}}^{-})_{\rho,Q}$ is defined as a subset of
$A_{\mathbb{E}}^{-}$. This proves the lemma.
\end{proof}
The issue with the reflections is a bit tricky here:
\begin{lem}
Let $h\in\mathbb{E}_{0}$ be some non-zero element (so that $d=h^{2}$). For any
$g\in(A_{\mathbb{E}}^{-})_{\rho,Q} \cap A_{\mathbb{E}}^{\times}$, the element
$ghQ^{-1}$ of $A_{\mathbb{E}}^{\times}$ lies in
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$. The combination of this element with
the element from Lemma \ref{Qtheta} operates on $(A_{\mathbb{E}}^{-})_{\rho,Q}$
as the reflection in $g$. \label{ref6gen}
\end{lem}
\begin{proof}
The fact that $Q \in A_{\mathbb{E}}^{-}$ satisfies $Q^{\rho}=Q$,
$g\in(A_{\mathbb{E}}^{-})_{\rho,Q}$, and $h^{\rho}=-h$ allows us to write
\[(ghQ^{-1})Q\overline{(ghQ^{-1})}^{\rho}=-h^{2}gQ^{-1}g^{\rho}=d\frac{g\tilde{g
}Q}{|Q|^{2}}=\frac{d|g|^{2}}{|Q|^{2}}Q.\] As
$N^{A}_{\mathbb{F}}(ghQ^{-1})=\frac{h^{4}N^{A}_{\mathbb{F}}(g)}{N^{A}_{\mathbb{F
}}(Q)}$ equals $\big(\frac{d|g|^{2}}{|Q|^{2}}\big)^{2}$ by Corollary
\ref{NAvn2}, $ghQ^{-1}$ indeed lies in $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$
with $t(ghQ^{-1})=\frac{d|g|^{2}}{|Q|^{2}}$. We may decompose the resulting
element of $\widetilde{A}_{\mathbb{E}}^{^{(\mathbb{E}^{\times})^{2}}}$ as the
product of $(g,|g|^{2})$, $(h,d)=(h,h^{2})$ (which acts trivially on
$A_{\mathbb{E}}^{-}$), and $\big(Q^{-1},\frac{1}{|Q|^{2}}\big)$. In the product
with the element from Lemma \ref{Qtheta} the two terms involving $Q$ cancel,
and the total action (on $A_{\mathbb{E}}^{-}$) is as
$u\mapsto\frac{g\tilde{u}\overline{g}}{|g|^{2}}$, which was seen in Lemma
\ref{ref6d1} to be the reflection in $g$. As $(A_{\mathbb{E}}^{-})_{\rho,Q}$
inherits its quadratic structure from $A_{\mathbb{E}}^{-}$ and $g$ lies in the
smaller space, this total operation is the reflection in $g$ also on
$(A_{\mathbb{E}}^{-})_{\rho,Q}$. This proves the lemma.
\end{proof}
We are now in a position to prove
\begin{thm}
The Gspin group $Gspin\big((A_{\mathbb{E}}^{-})_{\rho,Q}\big)$ is
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$. The spin group
$spin\big((A_{\mathbb{E}}^{-})_{\rho,Q}\big)$ is the subgroup
$A_{\mathbb{E},\rho,Q}^{1}$ consisting of those elements $g \in
A_{\mathbb{E}}^{1}$ satisfying $gQ\overline{g}^{\rho}=Q$. \label{dim6gen}
\end{thm}
\begin{proof}
Theorem \ref{dim6d1} shows that the semi-direct product of
$\{1,\tilde{\theta}\}$ with
$\widetilde{A}_{\mathbb{E}}^{(\mathbb{E}^{\times})^{2}}$, which is also
generated by $(Q,|Q|^{2})\tilde{\theta}$ and the latter group, maps surjectively
onto $O(A_{\mathbb{E}}^{-})$, with kernel $\mathbb{E}^{\times}$, and such that
$\widetilde{A}_{\mathbb{E}}^{(\mathbb{E}^{\times})^{2}}$ is the inverse image of
$O(A_{\mathbb{E}}^{-})$. Lemma \ref{ac6gen} implies that the subgroup generated
by $(Q,|Q|^{2})\tilde{\theta}$ and the image of
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ under $g\mapsto\big(g,t(g)\big)$ lies
in the inverse image of the subgroup $O\big((A_{\mathbb{E}}^{-})_{\rho,Q}\big)$
of $O(A_{\mathbb{E}}^{-})$ (respecting the $\mathbb{F}$-structure), and Theorem
\ref{dim6d1} and the fact that the elements of
$O\big((A_{\mathbb{E}}^{-})_{\rho,Q}\big)$ have the same determinant in
$O\big((A_{\mathbb{E}}^{-})_{\rho,Q}\big)$ and in $O(A_{\mathbb{E}}^{-})$ (this
is just extension of scalars) show that the inverse image of
$SO\big((A_{\mathbb{E}}^{-})_{\rho,Q}\big)$ in the smaller group is
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$. Lemma \ref{ref6gen} and Proposition
\ref{CDT} imply that the map from the group generated by
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ and $(Q,|Q|^{2})\tilde{\theta}$ to
$O\big((A_{\mathbb{E}}^{-})_{\rho,Q}\big)$ is surjective. Hence the map
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}} \to
SO\big((A_{\mathbb{E}}^{-})_{\rho,Q}\big)$ also surjects, and its kernel,
which consists of those scalars $r\in\mathbb{E}^{\times}$ such that
$rQ\overline{r}^{\rho}=r^{2}Q$ (i.e., $r^{\rho}=r$, since $\overline{r}=r$ for
scalars), is precisely $\mathbb{F}^{\times}$. Note that
elements $r\in\mathbb{E}_{0}$ also lie in
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$, but as they satisfy $t(r)=-r^{2}$
they are not in the kernel (they operate as $-Id_{(A_{\mathbb{E}}^{-})_{\rho,Q}}$).
It follows that $Gspin\big((A_{\mathbb{E}}^{-})_{\rho,Q}\big)$ is indeed
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$. Now, the proof of Theorem
\ref{dim6d1} shows that the spinor norm map factors through the projection
$(g,t) \mapsto t$ before going over to any quotient
($\mathbb{E}^{\times}/(\mathbb{E}^{\times})^{2}$, or anything else). Thus
elements of $SO^{1}\big((A_{\mathbb{E}}^{-})_{\rho,Q}\big)$ are obtained from
$g \in A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ with
$t(g)\in(\mathbb{F}^{\times})^{2}$, and the usual interplay with scalars
reduces to those $g$ with $t(g)=1$. But such $g$ preserve $Q$ in the twisted
operation, and must lie in $A_{\mathbb{E}}^{1}$ by the
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ condition, which proves the second
assertion since the scalars which square to 1 in $\mathbb{F}$ (which again form
the kernel) are just $\pm1$. This completes the proof of the theorem.
\end{proof}
The case $d=-1$ of Theorem \ref{dim6gen} gives back Theorem \ref{dim6d1} in the
following way. We have $\mathbb{E}=\mathbb{F}\times\mathbb{F}$, so that
$A_{\mathbb{E}}=A \times A$, and $Q$ lies in the original space (no
re-normalization). As
$A_{\mathbb{E}}^{(\mathbb{E}^{\times})^{2}}=A^{(\mathbb{F}^{\times})^{2}} \times
A^{(\mathbb{F}^{\times})^{2}}$, taking $g \in A^{(\mathbb{F}^{\times})^{2}}$ to
be the first coordinate and choosing $t$ such that
$t^{2}=N^{A}_{\mathbb{F}}(g)$, we find that the condition
$(g,h)Q\overline{(g,h)}^{\rho}=tQ$ is equivalent to each one of the conditions
$gQ\overline{h}=tQ$ and $hQ\overline{g}=tQ$, hence to $h$ being
$tQ\overline{g}^{\ -1}Q^{-1}$ (indeed, with the same reduced norm as $g$). Hence
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}} \cong
\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$, through the map $(g,t) \mapsto
(g,tQ\overline{g}^{\ -1}Q^{-1})$ (in the opposite direction), so that the second
coordinate associated with $(g,t)$ depends on $Q$, but the isomorphism class of
the group does not (see also Proposition \ref{NQF2NAE} below for the general
case). A similar assertion holds for $A_{\mathbb{E},\rho,Q}^{1}$.
\smallskip
For the independence property, we observe that apart from the choice of the
element $Q$, there is also the choice of the $\mathbb{F}$-structure on
$A_{\mathbb{E}}$, i.e., of the interpretation of $x^{\rho}$ for $x \in
A_{\mathbb{E}}$. We thus replace this notation by defining $\sigma$ to be a ring
automorphism of order 2 whose restriction to $\mathbb{E}$ is $\rho$, and which
satisfies $(\overline{x})^{\sigma}=\overline{x^{\sigma}}$ for every $x \in A$.
By replacing $\sigma$ with another such automorphism, say $\tau$, we get a
presentation of $A_{\mathbb{F}}$ as the tensor product of another bi-quaternion
algebra over $\mathbb{F}$ with $\mathbb{E}$. Any element $y \in
A_{\mathbb{E}}^{\times}$ such that $\overline{y}y\in\mathbb{E}^{\times}$
produces such an automorphism by defining $x^{\tau}=yx^{\sigma}y^{-1}$, where
the similitude condition is equivalent to the condition
$(\overline{x})^{\tau}=\overline{x^{\tau}}$ for every $x \in A$: Compare
$(\overline{x})^{\tau}=y\overline{x}^{\sigma}y^{-1}$ with
$\overline{x^{\tau}}=\overline{y}^{-1}\overline{x}^{\sigma}\overline{y}$ and use
the centrality of $A_{\mathbb{E}}$. The relations we have are contained in the
following
\begin{prop}
$(i)$ Let $Q$ and $R$ be $\sigma$-invariant elements of $A_{\mathbb{E}}^{-}$
(i.e., elements of $A^{-}$) such that
$|R|^{2}\in|Q|^{2}(\mathbb{F}^{\times})^{2}N^{A}_{\mathbb{F}}(A^{\times})$. Then
the groups $A_{\mathbb{E},\sigma,\mathbb{F}Q}^{t^{2}}$ and
$A_{\mathbb{E},\sigma,\mathbb{F}R}^{t^{2}}$ as well as
$A_{\mathbb{E},\sigma,Q}^{1}$ and $A_{\mathbb{E},\sigma,R}^{1}$ are isomorphic.
$(ii)$ Assume that $e \in A_{\mathbb{E}}^{\mathbb{F}^{\times}}$ is such that
$ee^{-\sigma}$ is invariant under $x\mapsto\overline{x}^{\sigma}$, and $S \in
A_{\mathbb{E}}^{-}$ is invariant under $\tau:x \mapsto
ee^{-\sigma}x^{\sigma}e^{\sigma}e^{-1}$ and satisfies
$|S|^{2}\in|Q|^{2}N^{A_{\mathbb{E}}}_{\mathbb{E}}(e)(\mathbb{F}^{\times})^{2}N^{
A}_{\mathbb{F}}(A^{\times})$. In this case the groups
$A_{\mathbb{E},\sigma,\mathbb{F}Q}^{t^{2}}$ and
$A_{\mathbb{E},\tau,\mathbb{F}S}^{t^{2}}$ and the groups
$A_{\mathbb{E},\sigma,Q}^{1}$ and $A_{\mathbb{E},\tau,S}^{1}$ are isomorphic.
$(iii)$ Let $b \in A_{\mathbb{E}}^{\times}$ be such that
$\overline{b}^{\sigma}=b$ and $b\overline{b}\in\mathbb{E}^{\times}$, and define
$\eta:x \mapsto bx^{\sigma}b^{-1}$. If the element $T=Qb^{-1}$ lies in
$A_{\mathbb{E}}^{-}$ then the groups $A_{\mathbb{E},\sigma,\mathbb{F}Q}^{t^{2}}$
and $A_{\mathbb{E},\eta,\mathbb{F}T}^{t^{2}}$ coincide, and the same assertion
holds for $A_{\mathbb{E},\sigma,Q}^{1}$ and $A_{\mathbb{E},\eta,T}^{1}$.
\label{NQF2NAE}
\end{prop}
\begin{proof}
Considering $A^{-}$ as a space of discriminant 1 again, we find (as in the
proof of Proposition \ref{NQF2NA}) that we can write
$R=rcQ\overline{c}=rcQ\overline{c}^{\sigma}$ with $r\in\mathbb{F}^{\times}$ and
$c \in A^{\times}$ (hence $c=c^{\sigma}$). It follows that conjugation by $c$
takes the group $A_{\mathbb{E},\sigma,\mathbb{F}Q}^{t^{2}}$ to
$A_{\mathbb{E},\sigma,\mathbb{F}R}^{t^{2}}$ (as conjugation preserves reduced
norms, the $N^{A_{\mathbb{E}}}_{\mathbb{E}}(g)=t(g)^{2}$ condition is also preserved), in a manner
which commutes with the multiplier maps. This proves part $(i)$, since the
groups $A_{\mathbb{E},\sigma,Q}^{1}$ and $A_{\mathbb{E},\sigma,R}^{1}$ are
defined by the condition $t=1$ on the larger groups
$A_{\mathbb{E},\sigma,\mathbb{F}Q}^{t^{2}}$ and
$A_{\mathbb{E},\sigma,\mathbb{F}R}^{t^{2}}$. For part $(ii)$, first note that
the $\sigma$-image of $ee^{-\sigma}$ is its inverse, so that the condition on
that element implies that it is a similitude and
$(\overline{x})^{\tau}=\overline{x^{\tau}}$ holds for every $x \in
A_{\mathbb{E}}$. Consider the element $eQ\overline{e}$ of $A_{\mathbb{E}}^{-}$,
whose vector norm is
$N^{A_{\mathbb{E}}}_{\mathbb{E}}(e)|Q|^{2}\in\mathbb{F}^{\times}$ by Proposition
\ref{NAFg} and the assumption on $e$. The relation between $\tau$ and $\sigma$
and the assumption on $Q$ and $e$ show that this element is $\tau$-invariant,
and as the properties of $e$ and $\tau$ imply that for $g \in
A_{\mathbb{E}}^{\times}$ we have
\[\overline{ege^{-1}}^{\tau}=\overline{e}^{-1}\overline{e}^{\sigma}
\cdot\overline{ege^{-1}}^{\sigma}\cdot\overline{e}^{-\sigma}\overline{e}
=\overline{e}^{-1}\overline{g}^{\sigma}\overline{e},\] it follows that
conjugation by $e$ sends $A_{\mathbb{E},\sigma,\mathbb{F}Q}^{t^{2}}$ to
$A_{\mathbb{E},\tau,\mathbb{F}eQ\overline{e}}^{t^{2}}$ as well as
$A_{\mathbb{E},\sigma,Q}^{1}$ to $A_{\mathbb{E},\tau,eQ\overline{e}}^{1}$. As
$S$ is related to $eQ\overline{e}$ in the same manner as $Q$ and $R$, part
$(ii)$ now follows from part $(i)$, using conjugation by an appropriate
$\tau$-invariant element $d \in A_{\mathbb{E}}^{\times}$. Now, the fact that
$Q \in A^{-}$ and the element $T$ from part $(iii)$ lies in $A_{\mathbb{E}}^{-}$
imply
$T^{\eta}=-\overline{T}^{\eta}=-b\overline{Qb^{-1}}^{\sigma}b^{-1}=+b\overline{b
}^{-\sigma}Qb^{-1}$, and this gives $Qb^{-1}=T$ again by our assumption on $b$.
Furthermore, for $g \in A_{\mathbb{E}}$ we have
$gT\overline{g}^{\eta}=gQ\overline{g}^{\sigma}b^{-1}$ by the definitions of
$T$ and $\eta$, so that the $A_{\mathbb{E},\sigma,\mathbb{F}Q}^{t^{2}}$ and
$A_{\mathbb{E},\eta,\mathbb{F}T}^{t^{2}}$ conditions coincide, with the same
multiplier $t$. This proves part $(iii)$ (for both the Gspin and spin groups),
and completes the proof of the proposition.
\end{proof}
Note that part $(i)$ of Proposition \ref{NQF2NAE} is a special case of part
$(ii)$ there, but it is required for the proof of the latter part. We also
remark that for $\tau$ and $\sigma$ which are connected as in part $(ii)$ of
Proposition \ref{NQF2NAE}, the subring which $\tau$ stabilizes is isomorphic to
$A$. Indeed, if $x=x^{\tau}=ee^{-\sigma}x^{\sigma}e^{\sigma}e^{-1}$ then
$e^{-1}xe$ is $\sigma$-invariant, so that conjugation by $e$ maps $A$ onto this
subring (in fact, the Skolem--Noether Theorem implies that every automorphism on
$A_{\mathbb{E}}$ which stabilizes a subring which is isomorphic to $A$ must
arise in this way). This is related to the fact that the vector norm relation is
based on $N^{A}_{\mathbb{F}}(A^{\times})$ also when $\tau$ is involved. On the
other hand, $\eta$ from part $(iii)$ of that proposition may arise from a
non-isomorphic bi-quaternion algebra over $\mathbb{F}$. E.g., assuming that $Q
\in C_{0}$, any element $b\in\mathbb{F}\oplus\mathbb{E}_{0}C_{0}$ such that
$Tr^{C_{\mathbb{E}}}_{\mathbb{E}}(Qb^{-1})=0$ satisfies all the assumptions
(including $Qb^{-1} \in A_{\mathbb{E}}^{-}$), and we have seen in the paragraph
preceding Corollary \ref{iso4} that up to letting $Q \in C_{0}$ vary, this
covers all the bi-quaternion algebras which arise as the tensor product of $B$
with some quaternion algebra over $\mathbb{F}$ which becomes $C_{\mathbb{E}}$
over $\mathbb{E}$. We may also apply this construction with $b \in B_{0} \otimes
Q$, yielding an operation on both $B$ and $C$. We remark that by Proposition 2.18
of \cite{[KMRT]}, any two involutions $x\mapsto\overline{x}^{\sigma}$ and
$x\mapsto\overline{x}^{\tau}$ must be related through conjugation by some
element $b$ satisfying $\overline{b}^{\sigma}=b$, where the similitude condition
is required in order to preserve commutativity with $\iota_{B}\otimes\iota_{C} \otimes
Id_{\mathbb{E}}$. It seems likely that the similitude condition implies the
existence of $Q \in A^{-} \cap A^{\times}$ such that $Qb^{-1} \in
A_{\mathbb{E}}^{-}$, but we have not checked this in detail. Were this the case,
Proposition \ref{NQF2NAE} would relate all the involutions which commute with
$\iota_{B}\otimes\iota_{C} \otimes Id_{\mathbb{E}}$ to one another, yielding an
$\mathbb{F}$-structure invariance result.
\smallskip
Fixing $Q$ and $\sigma$ back again (and writing $\rho$ for the automorphism of
$A_{\mathbb{E}}$ as before), the assertion involving isotropy in this case is
\begin{cor}
If the space becomes isotropic when one extends scalars to $\mathbb{E}$ then
the Gspin group is a ``quaternionic $GSU$ group'', consisting of the
matrices $g \in M_{2}(B_{\mathbb{E}})$, for some quaternion algebra
$B_{\mathbb{E}}$ over $\mathbb{E}$ which comes from a quaternion algebra $B$
over $\mathbb{F}$, which multiply a diagonal matrix of the sort $\binom{-\delta\
\ 0}{\ \ 0\ \ 1} \in M_{2}(\mathbb{F})$ with $\delta$ determined up to
multiplication by elements of $N^{B}_{\mathbb{F}}(B^{\times})$, through the action $g:X
\mapsto gX\overline{g}^{t}$. The $GSU$ condition means that the reduced norm of
these matrices equals the square of the multiplier of this element. The spin
group is the associated ``quaternionic special unitary group'', of matrices
which preserve this element and have reduced norm 1. In case the space splits
two hyperbolic planes over $\mathbb{E}$, the Gspin group becomes the
$GSU_{\mathbb{E},\rho}$ group of a unitary space of dimension 4 and determinant
(or discriminant) 1 over $\mathbb{E}$ (with $\rho$), and the spin group is the
associated special unitary group. \label{iso6gen}
\end{cor}
\begin{proof}
In the first case $A_{\mathbb{E}}$ is isomorphic to $M_{2}(B_{\mathbb{E}})$, where
$B_{\mathbb{E}}$ comes from a quaternion algebra $B$ over $\mathbb{F}$. We may normalize (using parts
$(ii)$ and $(iii)$ of Proposition \ref{NQF2NAE} if necessary) the involutions
such that $A=M_{2}(B)$, i.e., with $C=M_{2}(\mathbb{F})$, and as every element
of $\mathbb{F}^{\times}$ is a norm from $C_{0}$, part $(i)$ of Proposition
\ref{NQF2NAE} allows us to restrict attention to spaces
$\big(M_{2}(B_{\mathbb{E}})^{-}\big)_{\rho,Q}$ with $Q \in
M_{2}(\mathbb{F})_{0}$. Moreover, as in the proof of Corollary \ref{iso5}, we
may take $Q$ of the form $\binom{0\ \ \delta}{1\ \ 0}$, and part $(i)$ of
Proposition \ref{NQF2NAE} and Lemma \ref{NM2B} allow us to take $\delta$ from a
set of representatives for $\mathbb{F}^{\times}/N^{B}_{\mathbb{F}}(B^{\times})$.
By Lemma \ref{Sadjt}, the condition $g \in
A_{\mathbb{E},\sigma,\mathbb{F}Q}^{t^{2}}$ (or $g \in
GL_{2}^{t^{2}}(B_{\mathbb{E}})_{\sigma,\mathbb{F}Q}$) becomes $g\binom{-\delta\
\ 0}{\ \ 0\ \ 1}\iota_{B}(g)^{t\rho}=t\binom{-\delta\ \ 0}{\ \ 0\ \ 1}$ for some
$t=t(g)\in\mathbb{F}^{\times}$ and
$N^{M_{2}(B_{\mathbb{E}})}_{\mathbb{E}}(g)=t(g)^{2}$, while elements of
$A_{\mathbb{E},\sigma,Q}^{1}=GL_{2}^{1}(B_{\mathbb{E}})_{\sigma,Q}$ have reduced
norm 1 and their action leaves the element $\binom{-\delta\ \ 0}{\ \ 0\ \ 1}$
invariant. This proves the first assertion.
If after tensoring $(A_{\mathbb{E}}^{-})_{\rho,Q}$ with $\mathbb{E}$ we get a
2-dimensional isotropic subspace (and then we even get a 3-dimensional such
space) then $B_{\mathbb{E}}$ also splits, hence $B$ may be presented as
$\big(\frac{-d,\varepsilon}{\mathbb{F}}\big)$ for some
$\varepsilon\in\mathbb{F}^{\times}$ (which is unique up to
$(\mathbb{F}^{\times})^{2}$ and multiplication by $+d$). The model we thus take
for $B$ is the algebra $(\mathbb{E},\rho,\varepsilon)$, and by Lemma
\ref{KsplitB}, $M \mapsto M^{\rho}$ is $\binom{a\ \ b}{c\ \ d}\mapsto\binom{0\ \
\varepsilon}{1\ \ 0}\binom{a^{\rho}\ \ b^{\rho}}{c^{\rho}\ \ d^{\rho}}\binom{0\
\ \varepsilon}{1\ \ 0}/\varepsilon$ on $B_{\mathbb{E}}=M_{2}(\mathbb{E})$. Thus
the operation $g\mapsto\overline{g}^{\rho}$ or $g\mapsto\iota_{B}(g)^{t\rho}$
involves the map $M\mapsto\binom{0\ \
\varepsilon}{1\ \ 0}M^{\rho}\binom{0\ \ \varepsilon}{1\ \
0}/\varepsilon$ on the entries (in addition to adjoint or transposition of
$2\times2$ matrices over $B_{\mathbb{E}}$). It follows that each entry from
$B_{\mathbb{E}}$ in $\binom{0\ \ \delta}{1\ \ 0}$ or $\binom{-\delta\ \ 0}{\ \
0\ \ 1}$ appearing in the definition of the groups (which we write as $\binom{0\
\ \delta I}{I\ \ \ 0\ }$ or $\binom{-\delta I\ \ 0}{\ \ 0\ \ \ I}$ since
$B_{\mathbb{E}}$ is the matrix algebra $M_{2}(\mathbb{E})$) is replaced by the
corresponding multiple of $\binom{0\ \ \varepsilon}{1\ \ 0}$ when we apply
$\rho$ on the $\mathbb{E}$-entries of matrices in
$M_{2}(B_{\mathbb{E}})=M_{4}(\mathbb{E})$. Applying Lemma \ref{Sadjt} to the
$2\times2$-entries changes each such $\binom{0\ \ \varepsilon}{1\ \ 0}$ to
$\binom{-\varepsilon\ \ 0}{\ \ 0\ \ 1}$. Therefore, the
$A_{\mathbb{E},\sigma,\mathbb{F}Q}^{t^{2}}=GL_{2}^{t^{2}}(B_{\mathbb{E}})_{
\sigma,\mathbb{F}Q}$ condition just means being in the
$GSU_{\mathbb{E},\rho}$ group of a 4-dimensional space over $\mathbb{E}$ with a
sesqui-linear form having an orthogonal basis with norms $\delta\varepsilon$,
$-\varepsilon$, $-\delta$, and 1 (hence the determinant is 1 modulo
$N^{\mathbb{E}}_{\mathbb{F}}(\mathbb{E}^{\times})$), while the group
$A_{\mathbb{E},\sigma,Q}^{1}=GL_{2}^{1}(B_{\mathbb{E}})_{\sigma,Q}$ is the
corresponding $SU_{\mathbb{E},\rho}$ group. This proves the corollary.
\end{proof}
We emphasize that only unitary spaces with determinant (or discriminant) 1
appear in this context.
In this isotropic case we have the following equivalent representations:
\begin{cor}
The groups $GL_{2}^{t^{2}}(B_{\mathbb{E}})_{\sigma,\mathbb{F}Q}$ and
$GL_{2}^{1}(B_{\mathbb{E}})_{\sigma,Q}$ with $Q=\binom{0\ \ \delta}{1\ \ 0}$ are
the Gspin and spin groups of the space $\mathbb{E} \oplus B$, embedded as
$(y,\lambda)\mapsto\binom{\lambda\ \ \ -\delta y}{y^{\rho}\ \
-\overline{\lambda}\ }$ or as $(y,\lambda)\mapsto\binom{\delta y\ \ \lambda\
}{\overline{\lambda}\ \ \ y^{\rho}}$, with the vector norm being
$N^{B}_{\mathbb{F}}(\lambda)-\delta N^{\mathbb{E}}_{\mathbb{F}}(y)$ and the
actions being $g:M \mapsto gM\overline{g}$ and $g:X \mapsto gX\iota_{B}(g)^{t}$
respectively. The $GSU_{\mathbb{E},\rho}$ and $SU_{\mathbb{E},\rho}$ groups of a
4-dimensional unitary space of discriminant 1 are the Gspin and spin groups of
the direct sum of 3 copies of $\mathbb{E}$, with the vector norm of $(z,w,y)$
being $N^{\mathbb{E}}_{\mathbb{F}}(z)-\varepsilon
N^{\mathbb{E}}_{\mathbb{F}}(w)-\delta N^{\mathbb{E}}_{\mathbb{F}}(y)$. It may be
embedded in $M_{4}(\mathbb{E})$ by replacing $\lambda$ by $\binom{\ z\ \
\varepsilon w}{w^{\rho}\ \ z^{\rho}}$ with $z$ and $w$ from $\mathbb{E}$ in the
two representations from above. In addition, these groups act via the operation
$g:T \mapsto gTg^{t}$ on the subspace of $M_{4}^{as}(\mathbb{E})$ consisting of
those matrices $\binom{a\ \ b}{c\ \ d}$ in which the anti-symmetric matrices $a$
and $d=\binom{0\ \ -y}{y\ \ \ \ 0}$ satisfy $a=\delta d^{\rho}$ while
$b=-c^{t}$ takes the form $\binom{\varepsilon w\ \ -z\ }{z^{\rho}\ \
-w^{\rho}}$, as well as through $\binom{a\ \ b}{c\ \ d}:S\mapsto\binom{a\ \
b}{c\ \ d}S\binom{\ \ d^{t}\ \ -b^{t}}{-c^{t}\ \ \ \ a^{t}}$ on another
embedding of the three copies of $\mathbb{E}$ into the complement of
$\mathfrak{sp}_{4}(\mathbb{E})$ in $M_{4}(\mathbb{E})$. \label{alt6gen}
\end{cor}
\begin{proof}
Recall that $(A_{\mathbb{E}}^{-})_{\rho,Q}$ is $(Q^{\perp} \subseteq
A^{-})\oplus\mathbb{E}_{0}Q$, and the off-diagonal matrices which lie in
$Q^{\perp}$ are spanned by $\binom{0\ \ -\delta}{1\ \ \ \ 0}$. Adding
$\mathbb{E}_{0}Q$ yields the first representation, on which the action is via
$g:M\mapsto\frac{gM\overline{g}}{t(g)}$, and the second one, on which the groups
operate via $g:X\mapsto\frac{gX\iota_{B}(g)^{t}}{t(g)}$, arises from Lemma
\ref{Sadjt} as in the proofs of Corollaries \ref{alt6d1} and \ref{alt5} (with
the norm being some generalization of the Moore determinant). The two
representations of the $GSU_{\mathbb{E},\rho}$ and $SU_{\mathbb{E},\rho}$ groups
are also obtained by applying Lemma \ref{Sadjt} to the entries as $2\times2$
matrices, as the proofs of Corollaries \ref{alt6d1} and \ref{alt5} do, while
recalling that $B$ is embedded into $M_{2}(\mathbb{E})$ as
$(\mathbb{E},\rho,\varepsilon)$. This proves the corollary.
\end{proof}
We remark that for the case where $A=M_{4}(\mathbb{F})$ we have used a different
choice of $Q$ in Corollaries \ref{iso5} and \ref{alt5} in order to obtain the
classical symplectic group. Choosing this $Q$ for Corollaries \ref{iso6gen} and
\ref{alt6gen} yields the $GSU_{\mathbb{E},\rho}$ and $SU_{\mathbb{E},\rho}$
conditions for an anti-diagonal symmetric matrix (which is explicitly $\binom{0\
\ E}{E\ \ 0}$ with $E=\binom{0\ \ 1}{1\ \ 0}$). On the other hand, here the
splitting of $A_{\mathbb{E}}$ might come from $A=M_{2}(B)$ with $B$ which is
non-split over $\mathbb{F}$, so that we have many groups in this case. In any
case, we restrict attention to the classical unitary groups (of diagonal
matrices), as any unitary group is conjugate to a classical one.
\section{Relations with the Exterior Square \label{Wedge}}
For the groups arising from bi-quaternion algebras, hence related to $4\times4$
matrices, there are representations which are equivalent to those presented
here. Given a field $\mathbb{M}$, the group $GL_{4}(\mathbb{M})$ operates on
the 6-dimensional exterior square $\bigwedge^{2}\mathbb{M}^{4}$ of the natural
representation space $\mathbb{M}^{4}$, and if we denote the canonical
basis for $\mathbb{M}^{4}$ by $e_{i}$, $1 \leq i\leq4$,
then the 6 elements $e_{i} \wedge e_{j}$ with $1 \leq i<j\leq4$ form a basis for
$\bigwedge^{2}\mathbb{M}^{4}$. The map taking $u$ and $v$ from
$\bigwedge^{2}\mathbb{M}^{4}$ to the multiple of $e_{1} \wedge e_{2} \wedge
e_{3} \wedge e_{4}$ which equals $u \wedge v \in \bigwedge^{4}\mathbb{M}^{4}$ is
bilinear and symmetric (hence we denote this value by $\langle u,v \rangle$),
and in fact independent, up to rescaling, of the choice of basis. The connection
which we have arises from the following
\begin{lem}
For any $n$, the natural representation of $GL_{n}(\mathbb{M})$ on
$\bigwedge^{2}\mathbb{M}^{n}$ is equivalent to its representation on the space
$M_{n}^{as}(\mathbb{M})$ of anti-symmetric matrices via $g:T \mapsto gTg^{t}$.
For $n=4$ the equivalence preserves the bilinear forms, where on
$\bigwedge^{2}\mathbb{M}^{4}$ we take the bilinear form defined above and on
$M_{n}^{as}(\mathbb{M})$ we use the Pfaffian. \label{wedge2equiv}
\end{lem}
\begin{proof}
Using the standard basis for $\mathbb{M}^{n}$, consider the map which takes, for
all $1 \leq i<j \leq n$, the basis element $e_{i} \wedge e_{j}$ of
$\bigwedge^{2}\mathbb{M}^{n}$ to the anti-symmetric matrix $E_{ij}-E_{ji}$. The
fact that for $g=(a_{kl})$ we have
$\big(g(E_{ij}-E_{ji})g^{t}\big)_{kl}=a_{ki}a_{lj}-a_{kj}a_{li}$ shows that
this is indeed an equivalence of representations. For $n=4$ we find that $e_{1}
\wedge e_{2}$ and $e_{3} \wedge e_{4}$ as well as $e_{1} \wedge e_{4}$ and
$e_{2} \wedge e_{3}$ span hyperbolic planes, while $e_{1} \wedge e_{3}$ and
$e_{2} \wedge e_{4}$ span another hyperbolic plane but with the sign inverted.
As this is in correspondence with the values which the bilinear form arising from the
Pfaffian takes on the images of these vectors in $M_{n}^{as}(\mathbb{M})$, this
completes the proof of the lemma.
\end{proof}
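In the case $n=4$, the preserved form can be written down explicitly: if
$u=\sum_{i<j}u_{ij}\,e_{i} \wedge e_{j}$ corresponds to the anti-symmetric
matrix $T$ with entries $T_{ij}=u_{ij}$ for $i<j$, then
\[u \wedge u=2(u_{12}u_{34}-u_{13}u_{24}+u_{14}u_{23})e_{1} \wedge e_{2} \wedge
e_{3} \wedge e_{4},\qquad\mathrm{Pf}(T)=u_{12}u_{34}-u_{13}u_{24}+u_{14}u_{23},\]
so that $\langle u,u \rangle=2\mathrm{Pf}(T)$, which is the usual relation
between a quadratic form and its associated bilinear form.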
Given $g \in GL_{4}(\mathbb{M})$ and $u$ and $v$ from
$\bigwedge^{2}\mathbb{M}^{4}$, we have $\langle gu,gv \rangle=\det g\langle u,v
\rangle$. This is so, since $gu \wedge gv$ is $\langle u,v \rangle$ times the
image of the generator $e_{1} \wedge e_{2} \wedge e_{3} \wedge e_{4}$ of
$\bigwedge^{4}\mathbb{M}^{4}$ via the $\bigwedge^{4}$-action of $g$, and the
latter action is multiplication by the determinant by definition. Lemma
\ref{wedge2equiv} and the isomorphism of representations appearing in the proof
of Corollary \ref{alt6d1} thus provide an alternative proof for Proposition
\ref{NAFg}, via appropriate restriction of scalars.
Assume now that $ch\mathbb{M}\neq2$. Then sums and differences of complementary
pairs form an orthogonal basis for $\bigwedge^{2}\mathbb{M}^{4}$. All the representations in
this Section will be variants of this one, using this basis. The first one is
\begin{prop}
$SL_{4}(\mathbb{F})$ is the spin group of the quadratic space
$\bigwedge^{2}\mathbb{M}^{4}$, and the Gspin group is the double cover
$\widetilde{GL}_{4}^{(\mathbb{F}^{\times})^{2}}(\mathbb{F})$, operating with
division by the chosen square root. The classical $Sp_{4}(\mathbb{F})$ and
$GSp_{4}(\mathbb{F})$ are the spin and Gspin groups of the subspace of
$\bigwedge^{2}\mathbb{M}^{4}$ which is spanned by $e_{1} \wedge e_{2}$, $e_{1}
\wedge e_{4}$, $e_{2} \wedge e_{3}$, $e_{3} \wedge e_{4}$, and $e_{1} \wedge
e_{3}-e_{2} \wedge e_{4}$.
\label{wedge2splitF}
\end{prop}
\begin{proof}
The first two assertions follow directly from Corollary \ref{alt6d1} by taking
$\mathbb{M}=\mathbb{F}$ in Lemma \ref{wedge2equiv}. For the last two assertions
we apply Corollary \ref{alt5}, observing that our element $Q$ is taken through
all these maps to $e_{1} \wedge e_{3}+e_{2} \wedge e_{4}$, the orthogonal
complement of which is spanned by the asserted vectors. This proves the
proposition.
\end{proof}
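The asserted orthogonality can also be verified directly: the wedge product of
$e_{1} \wedge e_{3}+e_{2} \wedge e_{4}$ with each of $e_{1} \wedge e_{2}$,
$e_{1} \wedge e_{4}$, $e_{2} \wedge e_{3}$, and $e_{3} \wedge e_{4}$ vanishes
because some basis vector repeats, while
\[(e_{1} \wedge e_{3}-e_{2} \wedge e_{4}) \wedge (e_{1} \wedge e_{3}+e_{2}
\wedge e_{4})=e_{1} \wedge e_{3} \wedge e_{2} \wedge e_{4}-e_{2} \wedge e_{4}
\wedge e_{1} \wedge e_{3}=0,\] since both summands equal $-e_{1} \wedge e_{2}
\wedge e_{3} \wedge e_{4}$.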
When we consider spaces of general discriminant $d$, let $\mathbb{E}$ be
$\mathbb{F}(\sqrt{d})$ with Galois automorphism $\rho$. Considering the
bi-quaternion algebra $A=M_{2}(B)$ with
$B=\big(\frac{d,\varepsilon}{\mathbb{F}}\big)$ (which splits over
$\mathbb{E}$), and choosing some $\delta\in\mathbb{F}^{\times}$, we find that
\begin{prop}
The $SU_{\mathbb{E},\rho}$ group of a space with an orthogonal basis having
norms $\delta\varepsilon$, $-\varepsilon$, $-\delta$, and 1 is the spin group of
the $\mathbb{F}$-subspace of $\bigwedge^{2}\mathbb{E}^{4}$ which is the direct
sum of $\mathbb{F}(e_{1} \wedge e_{4}-e_{2} \wedge e_{3})$,
$\mathbb{E}_{0}(e_{1} \wedge e_{4}+e_{2} \wedge e_{3})$, $\mathbb{F}(e_{2}
\wedge e_{4}-\varepsilon e_{1} \wedge e_{3})$, $\mathbb{E}_{0}(e_{2} \wedge
e_{4}+\varepsilon e_{1} \wedge e_{3})$, $\mathbb{F}(e_{3} \wedge e_{4}+\delta
e_{1} \wedge e_{2})$, and $\mathbb{E}_{0}(e_{3} \wedge e_{4}-\delta e_{1} \wedge
e_{2})$. The Gspin group is the associated $GSU_{\mathbb{E},\rho}$ group.
\label{wedge2splitE}
\end{prop}
\begin{proof}
Multiplying each entry of the elements of the second representation appearing
in Corollary \ref{iso6gen} by $\binom{\ \ 0\ \ 1}{-1\ \ 0}$ from the right
takes this incarnation of $\mathbb{E} \oplus B$, with $\lambda \in
B=\big(\frac{d,\varepsilon}{\mathbb{F}}\big)$ viewed inside
$M_{2}(\mathbb{E})$, to the asserted direct sum (the first two summands
generate the image of the diagonal matrices in $B$, the second two yield those
of the off-diagonal matrices, and the last two give the image of $\mathbb{E}$).
The assertion now follows from Corollary \ref{alt6gen} and Lemma
\ref{wedge2equiv} for $\mathbb{M}=\mathbb{E}$. This proves the proposition.
\end{proof}
Note that unlike in Proposition \ref{wedge2splitF}, here the image of $Q$
becomes the element $e_{3} \wedge e_{4}-\delta e_{1} \wedge e_{2}$, appearing
in the sixth direct summand in Proposition \ref{wedge2splitE}.
\smallskip
We now consider the case where $A^{-}$ is isotropic, but does not necessarily
split more than one hyperbolic plane. Then $A$ is isomorphic to $M_{2}(B)$ for
some quaternion algebra $B$, and by taking $\mathbb{K}$ to be a quadratic
extension of $\mathbb{F}$, whose Galois automorphism we denote $\eta$, which
splits $B$, we may write $B\cong(\mathbb{K},\eta,\varepsilon)$ for some
$\varepsilon\in\mathbb{F}^{\times}$. In this case we have
\begin{prop}
The groups $\widetilde{GL}_{2}^{(\mathbb{F}^{\times})^{2}}(B)$ and
$GL_{2}^{1}(B)$ are the Gspin and spin groups of the $\mathbb{F}$-subspace of
$\bigwedge^{2}\mathbb{K}^{4}$ which is obtained as the direct sum of the four
spaces $\mathbb{F}(e_{1} \wedge e_{4}-e_{2} \wedge e_{3})$,
$\mathbb{K}_{0}(e_{1} \wedge e_{4}+e_{2} \wedge e_{3})$, $\mathbb{F}(e_{2}
\wedge e_{4}-\varepsilon e_{1} \wedge e_{3})$, $\mathbb{K}_{0}(e_{2} \wedge
e_{4}+\varepsilon e_{1} \wedge e_{3})$, and the hyperbolic plane
$\mathbb{F}e_{1} \wedge e_{2}\oplus\mathbb{F}e_{3} \wedge e_{4}$. The
quaternionic symplectic groups $Sp_{B}\binom{-\delta\ \ 0}{\ \ 0\ \ 1}$ and
$GSp_{B}\binom{-\delta\ \ 0}{\ \ 0\ \ 1}$ are the spin and Gspin groups of the
subspace which is the direct sum of the first four 1-dimensional spaces with
$\mathbb{F}(e_{3} \wedge e_{4}+\delta e_{1} \wedge e_{2})$. If $d$ is a
non-trivial discriminant and $\mathbb{E}=\mathbb{F}(\sqrt{d})$ has Galois
automorphism $\rho$ over $\mathbb{F}$, then
$GL_{2}^{t^{2}}(B_{\mathbb{E}})_{\rho,\mathbb{F}Q}$ and
$GL_{2}^{1}(B_{\mathbb{E}})_{\rho,Q}$ are the Gspin and spin groups of the
direct sum of the latter 5-dimensional space and $\mathbb{E}_{0}(e_{3} \wedge
e_{4}-\delta e_{1} \wedge e_{2})$. \label{wedge2iso}
\end{prop}
\begin{proof}
The first space is the image of $M_{2}(B)^{-}$ under the isomorphisms from
Corollary \ref{alt6d1} and Lemma \ref{wedge2equiv} with
$\mathbb{M}=\mathbb{K}$. For the second and third spaces we note that our
vector $Q$ is the same matrix as in Proposition \ref{wedge2splitE}, hence has
the same image. The proof of the proposition is then completed as in
Propositions \ref{wedge2splitF} and \ref{wedge2splitE}, where for the last
assertion we need to apply Lemma \ref{wedge2equiv} with
$\mathbb{M}=\mathbb{K}\mathbb{E}$.
\end{proof}
We remark that if $B$ is not split but $\mathbb{E}$ splits $B$ then taking
$\mathbb{K}=\mathbb{E}$ in Proposition \ref{wedge2iso} yields Proposition
\ref{wedge2splitE} again.
\smallskip
In the general case, where $A$ might be a division algebra, we write $A=B
\otimes C$ and take $\mathbb{L}$ to be a quadratic extension of $\mathbb{F}$
(with Galois automorphism $\omega$) which splits $C$. As we have seen that our
choice
of $Q$ may be taken from $C_{0}$, we assume that
$C\cong(\mathbb{L},\omega,\delta)$ with the same $\delta$. Hence in general we
have
\begin{prop}
A space for which the groups $\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ and
$A^{1}$ appear as the Gspin and spin groups is the direct sum of the
$\mathbb{F}$-spaces $\mathbb{L}_{0}(e_{1} \wedge e_{4}-e_{2} \wedge e_{3})$,
$\mathbb{K}_{0}(e_{1} \wedge e_{4}+e_{2} \wedge e_{3})$, $\mathbb{F}(e_{2}
\wedge e_{4}-\varepsilon e_{1} \wedge e_{3})$, $\mathbb{K}_{0}(e_{2} \wedge
e_{4}+\varepsilon e_{1} \wedge e_{3})$, $\mathbb{L}_{0}(e_{3} \wedge
e_{4}+\delta e_{1} \wedge e_{2})$, and $\mathbb{F}(e_{3} \wedge e_{4}-\delta
e_{1} \wedge e_{2})$ inside $\bigwedge^{2}(\mathbb{K}\mathbb{L})^{4}$. The
stabilizers $A^{\times}_{\mathbb{F}Q}$ and $A^{\times}_{Q}$ are the Gspin and
spin groups of the direct sum of the first 5 spaces, if $Q \in A^{-}$ is chosen
to be $\binom{0\ \ \delta}{1\ \ 0} \in C_{0}$. With this choice of $Q$, a space
of discriminant $d$ for which $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ is the
Gspin group and $A_{\mathbb{E},\rho,Q}^{1}$ is the spin group, where
$\mathbb{E}=\mathbb{F}(\sqrt{d})$ and $\rho$ is its Galois automorphism, is
obtained by adding $\mathbb{E}_{0}(e_{3} \wedge e_{4}-\delta e_{1} \wedge
e_{2})$ to the latter 5-dimensional space. \label{wedge2gen}
\end{prop}
\begin{proof}
The embedding of $C$ inside $M_{2}(\mathbb{L})$ embeds $A$ into
$M_{2}(B_{\mathbb{L}})$, hence $A^{-}$ into $M_{2}(B_{\mathbb{L}})^{-}$. As the
two scalar entries represent scalar entries of $C \subseteq M_{2}(\mathbb{L})$,
they are related to one another via $\omega$ and multiplication by $\delta$,
yielding the two latter spaces under the maps from Corollary \ref{alt6d1} and
Lemma \ref{wedge2equiv} with $\mathbb{M}=\mathbb{K}\mathbb{L}$. As the
remaining entries come from $\lambda \in B_{\mathbb{L}}$ which in fact lies in
$B_{0}\oplus\mathbb{L}_{0}$, the off-diagonal entries of $\lambda \in
M_{2}(\mathbb{K}\mathbb{L})$ are related through $\eta$ and multiplication by
$\varepsilon$, yielding the middle two spaces via these identifications. The
diagonal entries must therefore be negated by $\eta\omega$, hence come from
$\mathbb{K}_{0}\oplus\mathbb{L}_{0}$, which gives the first two subspaces. This
proves the assertions about $\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ and
$A^{1}$, and the ones about the symplectic groups follow since we use the same
element $Q=\binom{0\ \ \delta}{1\ \ 0}$ as in Propositions \ref{wedge2splitE}
and \ref{wedge2iso} (note that this element belongs to $C$ as a subalgebra of
$M_{2}(\mathbb{L})$ in our normalizations). For the remaining assertions we use
again the same element $Q$, and we argue as in Propositions \ref{wedge2splitE}
and \ref{wedge2iso}, where $\mathbb{M}$ is now taken to be
$\mathbb{K}\mathbb{L}\mathbb{E}$ in Lemma \ref{wedge2equiv}. This proves the
proposition.
\end{proof}
We remark again that our choice of $Q$ in Proposition \ref{wedge2gen}, which
looks rather special, is entirely general when we normalize $B$ and $C$
appropriately. Note that taking $\mathbb{K}=\mathbb{L}$ in Proposition
\ref{wedge2gen} in case $A$ is not a division algebra, or
$\mathbb{L}=\mathbb{E}$ in case $A$ is a division algebra but $A_{\mathbb{E}}$
is not, does not give us Proposition \ref{wedge2iso} again. This is so, since
the split algebra $C$ is normalized as $M_{2}(\mathbb{F})$ in Proposition
\ref{wedge2iso}, but as some subalgebra $(\mathbb{L},\omega,\delta)$ of
$M_{2}(\mathbb{L})$, with the $\mathbb{F}$-structure from Lemma \ref{KsplitB},
in Proposition \ref{wedge2gen}.
\section{Dimension 8, Isotropic, Discriminant 1 \label{Dim8id1}}
The spaces we consider here are described in the following
\begin{lem}
The direct sum of a hyperbolic plane with an Albert form arising from a
presentation of a bi-quaternion algebra $A$ as the tensor product $B \otimes C$
of two quaternion algebras over $\mathbb{F}$ is isotropic of dimension 8 and
discriminant 1. Any isotropic 8-dimensional space of discriminant 1 is obtained,
up to rescaling and isometries, in this way. \label{sp8id1}
\end{lem}
\begin{proof}
Recall that a space is isotropic if and only if it has a hyperbolic plane as a
direct summand, and that a hyperbolic plane is isometric to its rescalings. The
lemma now follows directly from Lemma \ref{sp6d1}.
\end{proof}
We thus fix a bi-quaternion algebra $A=B \otimes C$ with the (orthogonal)
involution $\iota_{B}\otimes\iota_{C}:x\mapsto\overline{x}$ as above, and the
space from Lemma \ref{sp8id1} may be denoted $A^{-} \oplus H$ (where $H$ stands
for a hyperbolic plane). It will be useful to embed this space into $M_{2}(A)$
by taking the sum of $u \in A^{-}$, $-p$ times one generator of the hyperbolic
plane, and $q$ times the second generator, to the matrix $U=\binom{u\ \ -p}{q\ \
-\tilde{u}}$. We shall henceforth identify $A^{-} \oplus H$ with the space of
those matrices. The norm $|U|^{2}=u\tilde{u}-pq$ (see Lemma \ref{vnorm6})
resembles the ``Moore-like determinant'' of $M_{2}(B)^{-}$ from the proof of
Corollary \ref{NAvn2}, but now with a subset of a bi-quaternion algebra rather
than a usual quaternion algebra.
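In this matrix picture the two summands are transparent: $A^{-}$ consists of
the matrices with $p=q=0$, while the elements of $H$ are the matrices
\[U=\binom{0\ \ -p}{q\ \ \ \ 0\ }, \qquad |U|^{2}=-pq,\]
so the two generators of $H$ map to isotropic vectors spanning a hyperbolic
plane, as they should.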
Let $\widehat{GSp}_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$ be the group of (invertible)
matrices $\binom{a\ \ b}{c\ \ d} \in M_{2}(A)$ which preserve the space
$\mathbb{F}\binom{1\ \ \ \ 0}{0\ \ -1}$ under the operation $\binom{a\ \ b}{c\ \
d}:M\mapsto\binom{a\ \ b}{c\ \ d}M\binom{\ \ \overline{d}\ \
-\overline{b}}{-\overline{c}\ \ \ \ \overline{a}}$ on $M_{2}(A)$. An element
$\binom{a\ \ b}{c\ \ d} \in M_{2}(A)$ lies in $\widehat{GSp}_{A}\binom{1\ \ \ \
0}{0\ \ -1}$ if and only if $a\overline{b}$ and $c\overline{d}$ lie in $A^{-}$
and there exists an element $m\in\mathbb{F}^{\times}$ such that
$a\overline{d}+b\overline{c}=m$ (and equivalently
$d\overline{a}+c\overline{b}=m$). The fact that our matrix $\binom{1\ \ \ \
0}{0\ \ -1}$ equals its inverse implies that if $\binom{a\ \ b}{c\ \
d}\in\widehat{GSp}_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$ then so is $\binom{\ \
\overline{d}\ \ -\overline{b}}{-\overline{c}\ \ \ \ \overline{a}}$ (with the
same multiplier $m$), so that $\overline{c}a$ and $\overline{d}b$ belong to
$A^{-}$ and $\overline{a}d+\overline{c}b=m$ as well as
$\overline{d}a+\overline{b}c=m$. We call these relations \emph{the $GSp$
relations}, and the map taking an element of $\widehat{GSp}_{A}\binom{1\ \ \ \
0}{0\ \ -1}$ to the scalar $m$ by which its action multiplies $\binom{1\ \ \ \
0}{0\ \ -1}$ is a group homomorphism $\widehat{GSp}_{A}\binom{1\ \ \ \
0}{0\ \ -1}\to\mathbb{F}^{\times}$. Now, any element $\binom{a\ \ b}{c\ \ d}$ of
$\widehat{GSp}_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$, with multiplier $m$, satisfies
$N^{M_{2}(A)}_{\mathbb{F}}\binom{a\ \ b}{c\ \ d}^{2}=m^{8}$ (for the reduced
norm of the degree 8 algebra $M_{2}(A)$ over $\mathbb{F}$). The function
$\frac{N^{M_{2}(A)}_{\mathbb{F}}}{m^{4}}$ is thus a group homomorphism
$\widehat{GSp}_{A}\binom{1\ \ \ \ 0}{0\ \ -1}\to\{\pm1\}$, and we define
$GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$ to be its kernel. We shall see in Lemma
\ref{genie} below that unless $A=M_{4}(\mathbb{F})$, the latter homomorphism is
trivial and $\widehat{GSp}_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$ and $GSp_{A}\binom{1\
\ \ \ 0}{0\ \ -1}$ coincide, so that the reduced norm condition becomes
redundant. The kernel of the restriction of $m$ to $GSp_{A}\binom{1\ \
\ \ 0}{0\ \ -1}$ is just the symplectic group $Sp_{A}\binom{1\ \ \ \ 0}{0\ \
-1}$, consisting of those matrices which preserve the element $\binom{1\ \ \ \
0}{0\ \ -1}$ itself under this operation and have reduced norm 1.
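As a quick check of these relations, consider a diagonal matrix $\binom{a\ \ \
\ 0\ \ \ }{0\ \ m\overline{a}^{-1}}$ with $a \in A^{\times}$ and
$m\in\mathbb{F}^{\times}$. Here $a\overline{b}$ and $c\overline{d}$ vanish,
and since $x\mapsto\overline{x}$ is an involution with
$\overline{x^{-1}}=\overline{x}^{\,-1}$,
\[a\overline{d}+b\overline{c}=a\,\overline{m\overline{a}^{\,-1}}=ma(\overline{
\overline{a}})^{-1}=maa^{-1}=m,\]
so such a matrix lies in $\widehat{GSp}_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$ with
multiplier $m$.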
We begin our analysis of this group with the following
\begin{lem}
Let $\binom{a\ \ b}{c\ \ d}$ be an element of $\widehat{GSp}_{A}\binom{1\ \ \ \
0}{0\ \ -1}$, with multiplier $m$. If $a \in A^{\times}$ then $\binom{a\ \ b}{c\
\ d}$ is the product $\binom{1\ \ 0}{\beta\ \ 1}\binom{a\ \ \ \ 0\ \ \ }{0\ \
m\overline{a}^{-1}}\binom{1\ \ \alpha}{0\ \ 1}$ with $\alpha$ and $\beta$ from
$A^{-}$. Moreover, these matrices lie in $GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$.
\label{invent}
\end{lem}
\begin{proof}
$b\overline{a}$ and $\overline{a}c$ lie in $A^{-}$ by the $GSp$ relations, and
since $a \in A^{\times}$ we find that
$\alpha=a^{-1}b=a^{-1}(b\overline{a})\overline{a}^{\ -1}$ and
$\beta=ca^{-1}=\overline{a}^{\ -1}(\overline{a}c)a^{-1}$ are also in $A^{-}$.
Hence $b=a\alpha$ and $c=\beta a$, and as $d\overline{a}+c\overline{b}=m$ by the
$GSp$ relations, it follows that $d=m\overline{a}^{-1}+\beta a\alpha$. But this
is easily seen to be the value of the asserted product. We claim that the
three factors lie in $GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$, which will prove the
second assertion. In order to evaluate the reduced norms we may extend scalars
to a splitting field of $A$, and then we evaluate $8\times8$ determinants. Now,
the unipotent elements become unipotent $8\times8$ matrices, hence they have
determinant 1, in correspondence with their multipliers being 1. The diagonal
matrix is a block matrix, hence has reduced norm $N^{A}_{\mathbb{F}}(a) \cdot
N^{A}_{\mathbb{F}}(m\overline{a}^{-1})=m^{4}$, which proves the claim as the
multiplier of this element is $m$. This completes the proof of the
lemma.
\end{proof}
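For later use we record the matrix form of this product:
\[\binom{1\ \ 0}{\beta\ \ 1}\binom{a\ \ \ \ 0\ \ \ }{0\ \
m\overline{a}^{-1}}\binom{1\ \ \alpha}{0\ \ 1}=\binom{\ a\ \ \ \ \ \ \ \ \
a\alpha\ \ \ \ \ \ \ }{\beta a\ \ \beta a\alpha+m\overline{a}^{-1}},\]
in agreement with the values $b=a\alpha$, $c=\beta a$, and
$d=m\overline{a}^{-1}+\beta a\alpha$ obtained in the proof.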
The following technical result will be very useful in what follows.
\begin{lem}
Given any element $\binom{a\ \ b}{c\ \ d} \in GSp_{A}\binom{1\ \ \ \ 0}{0\ \
-1}$, there exists $v \in A^{-}$ such that the result of multiplying it from the
left by a matrix of the form $\binom{1\ \ v}{0\ \ 1}$ has an invertible
upper left entry. In fact, this is the case for ``almost any'' $v \in A^{-}$. In
addition, $GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$ has index 2 in
$\widehat{GSp}_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$ if and only if $A$ is split.
\label{genie}
\end{lem}
\begin{proof}
The upper left entry of the product in question is $a+vc$. By fixing an
arbitrary non-zero $w \in A^{-}$, consider the expression
$N^{A}_{\mathbb{F}}(a+swc)$ as a function of $s\in\mathbb{F}$. It is a
polynomial of degree not exceeding 4 in $s$. Now, if $a \in A^{\times}$ then
this polynomial does not vanish at $s=0$. Hence for all but at most 4 (non-zero)
values of $s$ (the roots of this polynomial), $v=sw$ has the desired property
(in fact, this polynomial has at most two roots, as it decomposes as the
product of the global constant $N^{A}_{\mathbb{F}}(a)$ with the polynomial
$N^{A}_{\mathbb{F}}(1+swca^{-1})$ and the latter expression is a square by the
proof of Lemma \ref{invent} and Lemma \ref{normsq} below). In case $a=0$ we know
that $c$ must be invertible (for the $GSp$ relation
$a\overline{d}+b\overline{c}=mI$ to be possible), and then with any anisotropic
$v$ the element $a+vc=vc$ is invertible. This covers the case where $A$ is a
division algebra (since then either $a=0$ or $a \in A^{\times}$), so assume that
$A=M_{2}(B)$ for $B$ a quaternion algebra over $\mathbb{F}$. We only have to
consider the case where $a$ is a non-zero singular $2\times2$ matrix over $B$.
Hence there exist $\sigma$ and $\tau$ in $B$, not both zero, such that
$a\binom{\sigma}{\tau}=0$ as a 2-vector over $B$. As $a\neq0$, this allows us to
construct some $y \in GL_{2}(B)$ such that $ay$ has right column 0. It is then
easy to find $x \in GL_{2}(B)$ such that $xay=\binom{1\ \ 0}{0\ \ 0}$, and by
multiplying $\binom{a\ \ b}{c\ \ d}$ from the left by $\binom{x\ \ \ 0\ \ }{0\ \
\overline{x}^{-1}}$ and from the right by $\binom{y\ \ \ 0\ \ }{0\ \
\overline{y}^{-1}}$ we may consider only elements with $a=\binom{1\ \ 0}{0\ \
0}$. Indeed, the right multiplication by $\binom{y\ \ \ 0\ \ }{0\ \
\overline{y}^{-1}}$ does not affect our assertions, and conjugating $\binom{1\ \
v}{0\ \ 1}$ by $\binom{x\ \ \ 0\ \ }{0\ \ \overline{x}^{-1}}$ yields just
$\binom{1\ \ xv\overline{x}}{0\ \ \ 1\ }$, with $xv\overline{x} \in A^{-}$.
Write $c$ as $\binom{\lambda\ \ \mu}{\kappa\ \ \nu}$, with entries from $B$. The
$GSp$ condition $\overline{c}a \in A^{-}$ implies $\nu=0$ and
$\kappa\in\mathbb{F}$, and as $\overline{d}a+\overline{b}c=mI$ is invertible, we
find that $\mu \in B^{\times}$. Choose $w$ of the form $\binom{\sigma\ \ -r}{1\
\ -\overline{\sigma}}$, and consider the polynomial in $s$ defined above. As
$a+swc=\binom{1+s\sigma\lambda+sr\kappa\ \ s\sigma\mu}{\
s\lambda-s\kappa\overline{\sigma}\ \ \ \ \ s\mu}$, one may use Lemma \ref{Vpol}
to evaluate the coefficients of the powers of $s$ in the resulting expression
from the formula from Proposition \ref{NAexp}, which gives us
$s^{2}N^{B}_{\mathbb{F}}(\mu)+2s^{3}\kappa
N^{B}_{\mathbb{F}}(\mu)|w|^{2}+s^{4}N^{A}_{\mathbb{F}}(w)N^{A}_{\mathbb{F}}
(c)$. Evaluating $N^{A}_{\mathbb{F}}(c)$ as $\kappa^{2}N^{B}_{\mathbb{F}}(\mu)$
(by Proposition \ref{NAexp}) and using Corollary \ref{NAvn2}, this polynomial is
(up to the global scalar $N^{B}_{\mathbb{F}}(\mu)\in\mathbb{F}^{\times}$) just
$s^{2}(1+\kappa|w|^{2}s)^{2}$. Hence our upper entry is invertible for all
non-zero $s$ if $w$ is isotropic or $c$ is not invertible (i.e., $\kappa=0$),
and we also have to omit the value $s=-\frac{1}{\kappa|w|^{2}}$ otherwise. As
these are at most two values of $s$ and $ch\mathbb{F}\neq2$, there is at least
one multiple of $w$ which yields an invertible $a+swc$. Note that we have
only assumed that our matrix lies in $\widehat{GSp}_{A}\binom{1\ \ \ \ 0}{0\ \
-1}$, so that the last assertion of Lemma \ref{invent} implies that
$\widehat{GSp}_{A}\binom{1\ \ \ \ 0}{0\ \ -1}=GSp_{A}\binom{1\ \ \ \ 0}{0\ \
-1}$ whenever $A$ is not split.
Consider now the case $B=M_{2}(\mathbb{F})$ and $A=M_{4}(\mathbb{F})$. By an
argument similar to the one given above, we may restrict attention to the case
where $a$ is $\binom{I_{k}\ \ 0}{0\ \ \ 0}$ for some $1 \leq k\leq3$. If $k=2$
then $c=\binom{\alpha\ \ \beta}{\gamma\ \ \delta}$ once again (with entries from
$M_{2}(\mathbb{F})$), the $GSp$ conditions imply $\delta=0$,
$\gamma\in\mathbb{F}I$, and $\beta \in GL_{2}(\mathbb{F})$, and the argument
from the case where $B$ is a division algebra works equally well. In
particular, all these
elements (as well as the elements containing invertible entries) lie in
$GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$. We now claim that the cases $k=3$ and
$k=1$ can only occur for elements of $\widehat{GSp}_{A}\binom{1\ \ \ \ 0}{0\ \
-1}$ which are not in $GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$. First we demonstrate
the existence of elements of $\widehat{GSp}_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$
which are not in $GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$: One example is the matrix
$\binom{e\ \ f}{g\ \ h}$ in which $e=\binom{I_{3}\ \ 0}{0\ \ \ 0}$, $h=\binom{0\
\ \ 0\ }{0\ \ I_{3}}$, $f$ has lower left entry 1 while its other 15 entries
vanish, and $g$ has upper right entry 1 (and the rest 0), with multiplier 1 and
determinant $-1$. Consider now the case $k=3$, and write $c=\binom{\alpha\ \
\beta}{\gamma\ \ \delta}$ with entries from $M_{2}(\mathbb{F})$. The
condition that $\overline{a}c=\binom{0\ \ 0\ }{0\ \ I_{3}}\binom{\alpha\ \
\beta}{\gamma\ \ \delta}$ lies in $A^{-}$ implies, in particular, that in the
rightmost column of $c$ the only entry which may be non-zero is the upper right
one, which we denote by $t$. Now $t$ cannot vanish, for otherwise the matrix
$\overline{d}a+\overline{b}c=mI$ would have to be singular, a contradiction.
But now left multiplication by our representative of
$\widehat{GSp}_{A}\binom{1\ \ \ \ 0}{0\ \ -1}/GSp_{A}\binom{1\ \ \ \ 0}{0\ \
-1}$ would yield a matrix in $M_{2}(A)$ whose upper left entry is
$\binom{I_{3}\ \ 0}{*\ \ \ t}$ (where $(*\ \ t)$ is the upper row of $c$). As
this matrix is invertible, the resulting element lies in $GSp_{A}\binom{1\ \ \ \
0}{0\ \ -1}$, hence the original one $\binom{a\ \ b}{c\ \ d}$ does not. In the
case $k=1$, if none of the entries are invertible then $c$ must have rank 3
(again, for $\overline{d}a+\overline{b}c=mI$ to be non-singular), and again we
are in the case $k=3$. This completes the proof of the lemma.
\end{proof}
\begin{cor}
Any element of $GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$, with multiplier $m$, can be
written as $\binom{1\ \ v}{0\ \ 1}\binom{1\ \ 0}{\beta\ \ 1}\binom{a\ \ \ \ 0\ \
\ \ }{0\ \ m\overline{a}^{-1}}\binom{1\ \ \alpha}{0\ \ 1}$ for some $a \in
A^{\times}$, and $v$, $\alpha$, and $\beta$ from $A^{-}$. \label{genform}
\end{cor}
\begin{proof}
This follows directly from Lemmas \ref{invent} and \ref{genie}.
\end{proof}
Many arguments below shall make use of the following
\begin{lem}
For $\eta$ and $\omega$ in $A^{-}$, define
$D(\eta,\omega)=1+2\langle\eta,\tilde{\omega}\rangle+|\omega|^{2}|\eta|^{2}$.
Then the element $1+\eta\omega$ of $A^{-}$ has reduced norm
$D(\eta,\omega)^{2}$, and its product with $1+\tilde{\omega}\tilde{\eta}$ (from
either side) yields the scalar $D(\eta,\omega)$. \label{normsq}
\end{lem}
\begin{proof}
The fact that the products $(1+\eta\omega)(1+\tilde{\omega}\tilde{\eta})$ and
$(1+\tilde{\omega}\tilde{\eta})(1+\eta\omega)$ both yield $D(\eta,\omega)$
easily follow from Lemma \ref{vnorm6}. For the reduced norm, multiply our
element by $\tilde{\omega}$ from the right. The result is
$\tilde{\omega}+|\omega|^{2}\eta$ (Lemma \ref{vnorm6} again) and lies in
$A^{-}$, so that its reduced norm is $|\tilde{\omega}+|\omega|^{2}\eta|^{2}$ by
Corollary \ref{NAvn2}. Moreover, Lemma \ref{Vpol} evaluates this vector norm as
$|\omega|^{2}D(\eta,\omega)$, so that this proves the assertion for anisotropic
$\omega$. Assume now $|\omega|^{2}=0$. Take some $\xi \in A^{-}$ with
$|\xi|^{2}\neq0$, and consider the two expressions
$N^{A}_{\mathbb{F}}\big(1+\eta(\omega+s\xi)\big)$ and
$D(\eta,(\omega+s\xi))^{2}$ for $s\in\mathbb{F}$. Both are polynomials of degree
at most 4 in $s$, which were seen to coincide wherever
$|\omega+s\xi|^{2}\neq0$. The
latter assumption occurs for any $s$ other than $s=0$ or
$s=-\frac{2\langle\omega,\xi\rangle}{|\xi|^{2}}$ (Lemma \ref{Vpol} again and the
isotropy of $\omega$), so that we omit at most two values. By extending scalars
if necessary, we may assume that $\mathbb{F}$ has more than 6 elements. Then we
have two polynomials of degree at most 4 which coincide on more than 4 elements
of $\mathbb{F}$, hence they must be the same polynomial. Substituting $s=0$ in
the two equal polynomials verifies the assertion for isotropic $\omega$ as
well. This
completes the proof of the lemma.
\end{proof}
The freedom of choice we have in Lemma \ref{genie} shows that there are many
different choices of parameters to get the same element of $GSp_{A}\binom{1\ \ \
\ 0}{0\ \ -1}$ in the form of Corollary \ref{genform}. Hence some compatibility
assertions will be required wherever we use the form from that Corollary. These
will be based on the following
\begin{lem}
Assume that the expression using $a$, $v$, $\alpha$ and $\beta$ in Corollary
\ref{genform} yields the same element (of multiplier $m$) as the one arising
from $c$, $w$, $\gamma$, and $\delta$ respectively. Then we have the equalities
$(i)$ $c=\big(1-(w-v)\beta\big)a$, $(ii)$
$\delta=\frac{\beta-|\beta|^{2}(\tilde{w}-\tilde{v})}{D(\beta,w-v)}$, and
$(iii)$
$\gamma=\alpha-ma^{-1}\frac{w-v-|w-v|^{2}\tilde{\beta}}{D(\beta,w-v)}\overline{a
}^{\ -1}$. \label{comp}
\end{lem}
\begin{proof}
The product from Corollary \ref{genform} equals $\binom{(1+v\beta)a\ \
(1+v\beta)a\alpha+mv\overline{a}^{-1}}{\ \beta a\ \ \ \ \ \ \ \ \beta
a\alpha+m\overline{a}^{-1}\ }$ (matrix multiplication). When we compare this
matrix with the one arising from $c$, $w$, $\gamma$, and $\delta$, we first
find that $\beta a=\delta c$ and $(1+v\beta)a=(1+w\delta)c$. Multiplying the
first equality by $w$ from the left and subtracting the result from the second
equality establishes $(i)$. We write $\delta=\beta ac^{-1}$, substitute $c$ from
part $(i)$, use Lemma \ref{normsq} for the inverse of $1-(w-v)\beta$, and apply
Lemma \ref{vnorm6}, which proves part $(ii)$. We can now write $a\alpha$ as the
upper right entry of our common matrix minus $v$ times the lower right entry and
the same for $c\gamma$ (but with $w$). Comparing yields
$c\gamma=a\alpha-(w-v)(\beta a\alpha+m\overline{a}^{-1})$, which equals
$c\alpha-m(w-v)\overline{a}^{-1}$. Multiplying by $c^{-1}$ from the left and
using part $(i)$ and Lemmas \ref{normsq} and \ref{vnorm6} again yields part
$(iii)$. This proves the lemma.
\end{proof}
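For instance, the algebra establishing part $(i)$ is just
\[c=(1+w\delta)c-w\delta c=(1+v\beta)a-w\beta a=\big(1-(w-v)\beta\big)a,\]
using the two equalities $\beta a=\delta c$ and $(1+v\beta)a=(1+w\delta)c$ from
the proof.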
In addition, the description of the product in the parameters from Corollary
\ref{genform} is given in the following
\begin{lem}
Given two elements $g$ and $h$ of $GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$, with
multipliers $m$ and $n$ respectively, assume that $v \in A^{-}$ is such that
left multiplication of both $g$ and $gh$ by $\binom{1\ \ -v}{0\ \ \ \ 1}$ yields
matrices with invertible upper left entry. If $a$, $v$, $\alpha$ and
$\beta$ are the parameters thus obtained for $g$ as in Corollary \ref{genform}
and $e$, $z$, $\kappa$, and $\nu$ are parameters for $h$, then parameters for
$gh$ may be taken as $x$, $v$, $\xi$, and $\zeta$ with
$x=a\big(1+(\alpha+z)\nu\big)e$,
$\xi=\kappa+ne^{-1}\frac{\alpha+z+|\alpha+z|^{2}\tilde{\nu}}{D(\alpha+z,\nu)}
\overline{e}^{\ -1}$, and
$\zeta=\beta+m\overline{a}^{\
-1}\frac{\nu+|\nu|^{2}(\tilde{\alpha}+\tilde{z})}{D(\alpha+z,\nu)}a^{-1}$.
\label{GSpprod}
\end{lem}
\begin{proof}
Comparing the expressions for the product shows that the expression $\binom{a\ \
\ \ 0\ \ \ }{0\ \ m\overline{a}^{-1}}\binom{1\ \ \alpha+z}{0\ \ \ \ 1\ \
}\binom{1\ \ 0}{\nu\ \ 1}\binom{e\ \ \ \ 0\ \ \ }{0\ \ n\overline{e}^{-1}}$
equals $\binom{\ \ 1\ \ \ \ 0}{\zeta-\beta\ \ 1}\binom{x\ \ \ \ \ 0\ \ \
}{0\ \ mn\overline{x}^{-1}}\binom{1\ \ \xi-\kappa}{0\ \ \ \ 1\ \ }$. When we
consider the upper left entries of both sides we obtain the asserted value for
$x$. The value of $\xi$ (resp. $\zeta$) is obtained by comparing the upper
right (resp. lower left) entries, using the value of $x$, Lemma \ref{normsq},
and Lemma \ref{vnorm6}. This proves the lemma.
\end{proof}
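The value of $x$, for example, can be read off from
\[\binom{1\ \ \alpha+z}{0\ \ \ \ 1\ \ }\binom{1\ \
0}{\nu\ \ 1}=\binom{1+(\alpha+z)\nu\ \ \alpha+z}{\ \ \ \ \ \ \nu\ \ \ \ \ \ \
\ \ \ 1\ \ },\]
since multiplying this matrix by $\binom{a\ \ \ \ 0\ \ \ }{0\ \
m\overline{a}^{-1}}$ from the left and by $\binom{e\ \ \ \ 0\ \ \ }{0\ \
n\overline{e}^{-1}}$ from the right puts $a\big(1+(\alpha+z)\nu\big)e$ in the
upper left entry.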
Now, the group which will end up being the Gspin group is, in general, not the
full group $GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$, but a double cover of a certain
subgroup. This subgroup is defined using the following
\begin{prop}
The map $\varphi$ which takes an element of $GSp_{A}\binom{1\ \ \ \ 0}{0\ \
-1}$, decomposes it as in Corollary \ref{genform}, and sends it to the image of
$N^{A}_{\mathbb{F}}(a)$ in $\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$ is a
well-defined group homomorphism $\varphi:GSp_{A}\binom{1\ \ \ \ 0}{0\ \
-1}\to\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$. \label{GSphom}
\end{prop}
\begin{proof}
Given two decompositions of the same element of $GSp_{A}\binom{1\ \ \ \ 0}{0\ \
-1}$, part $(i)$ of Lemma \ref{comp} shows that the reduced norms used to
define $\varphi$ in these decompositions differ by the reduced norm of an
element of the form considered in Lemma \ref{normsq}. By that lemma, the
result in $\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$ is the same for both
decompositions, so $\varphi$ is well-defined. Now, given two elements $g$ and
$h$ of $GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$, the freedom in Lemma \ref{genie}
allows us to find $v \in A^{-}$ for which left multiplication by $\binom{1\ \
-v}{0\ \ \ \ 1}$ renders the upper left entries of both $g$ and $gh$ invertible.
We then invoke Lemma \ref{GSpprod}, and applying Lemma \ref{normsq} to the
reduced norm of the factor $1+(\alpha+z)\nu$ between $a$ and $e$ in $x$ there
shows that $\varphi$ is also multiplicative. This proves the proposition.
\end{proof}
Note that the choice of the upper left entry in Proposition \ref{GSphom} is
arbitrary, but does not affect the value of $\varphi$ in the sense that a
similar definition in terms of another entry would yield the same result.
Indeed, $\varphi$ is a group homomorphism (Proposition \ref{GSphom}) which
attains 1 on the unipotent matrices, and $GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$
contains the element $\binom{0\ \ 1}{1\ \ 0}$, which has multiplier 1. As the
latter element may be obtained from the parameters $v=-u$, $a=u$, and
$\alpha=\beta=\frac{\tilde{u}}{|u|^{2}}$ in Corollary \ref{genform} for some
anisotropic $u$ (by Lemma \ref{vnorm6}), Corollary \ref{NAvn2} shows that
$\varphi\binom{0\ \ 1}{1\ \ 0}$ is also trivial. It thus suffices to compare the
reduced norms of the entries $b$, $c$, and $d$ in the matrix $\binom{a\ \ b}{c\
\ d}$ when given in the form of Lemma \ref{invent}, and see that if one of them
is invertible then it has the same reduced norm as $a$ up to
$(\mathbb{F}^{\times})^{2}$. For $b$ and $c$ the assertion follows directly from
Corollary \ref{NAvn2}, while $\frac{\overline{a}d}{m}$ (or
$\frac{d\overline{a}}{m}$) is an element of the form given in Lemma
\ref{normsq}. Hence $\varphi$ is more intrinsic than it seems at first sight. In
particular, in case an element of $GSp_{A}\binom{1\ \ \ \ 0}{0\
\ -1}$ has any invertible entry, we may use the reduced norm of that entry in
order to evaluate the $\varphi$-image of that element.
\smallskip
Denote the kernel of the map $\varphi$ from Proposition \ref{GSphom} by
$GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$. We now construct a
certain group automorphism of a double cover of the subgroup
$GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$, which is again based on
some choice of square root. For the definition we shall use
\begin{lem}
Let $a$, $v$, $\alpha$, $\beta$, $c$, $w$, $\gamma$, and $\delta$ be as in
Lemma \ref{comp}, and assume that the (common) element lies in
$GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$. Let
$t\in\mathbb{F}^{\times}$ be such that $N^{A}_{\mathbb{F}}(a)=t^{2}$, and denote
$tD(\beta,w-v)$ by $s$. The following expressions remain invariant when we
replace $a$ by $c$, $v$ by $w$, $\alpha$ by $\gamma$, $\beta$ by $\delta$, and
$t$ by $s$: $(i)$ $t\tilde{\beta}\overline{a}^{\ -1}$. $(ii)$
$t(1+\tilde{v}\tilde{\beta})\overline{a}^{\ -1}$. $(iii)$
$t\tilde{\beta}\overline{a}^{\ -1}\tilde{\alpha}+\frac{m}{t}a$. $(iv)$
$t(1+\tilde{v}\tilde{\beta})\overline{a}^{\
-1}\tilde{\alpha}+\frac{m}{t}\tilde{v}a$. \label{psiwd}
\end{lem}
\begin{proof}
Lemma \ref{normsq} and part $(i)$ of Lemma \ref{comp} yield $\overline{c}^{\
-1}=\frac{1-(\tilde{w}-\tilde{v})\tilde{\beta}}{D(\beta,w-v)}\overline{a}^{\
-1}$. Part $(i)$ then follows from the definition of $s$, Lemma \ref{normsq},
and part $(ii)$ of Lemma \ref{comp}. Part $(ii)$ is now obtained from part
$(i)$, the latter equation, the definition of $s$, and simple algebra. Now, part
$(iii)$ of Lemma \ref{comp}, Lemma \ref{AxA-rel}, and the assumption on $t$
imply $s\overline{c}^{\
-1}\tilde{\gamma}=t\big(1-(\tilde{w}-\tilde{v})\tilde{\beta}\big)\overline{a}^{\
-1}\tilde{\alpha}-\frac{m}{t}(\tilde{w}-\tilde{v})a$. Part $(iii)$ is
established using part $(ii)$ of Lemma \ref{comp}, Lemma \ref{normsq}, and the
definition of $s$. Part $(iv)$ now follows from the latter equality and part
$(iii)$. This completes the proof of the lemma.
\end{proof}
By definition, an element of $GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$ belongs to
$GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ if and only if when
decomposed as in Corollary \ref{genform}, the diagonal matrix has entries from
$A^{(\mathbb{F}^{\times})^{2}}$. Using the double cover
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ from Lemma \ref{ac6d1} we now have
\begin{thm}
The group $GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ admits a
well-defined double cover $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \
0}{0\ \ -1}$, in which the parameter $a \in A^{(\mathbb{F}^{\times})^{2}}$ from
Corollary \ref{genform} is replaced by an element
$(a,t)\in\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ lying over it. Define a map
$\psi$ from $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$
to itself by replacing the parameter $(a,t)$ by $\big(t\overline{a}^{\
-1},t\big)$ and sending the other parameters from Corollary \ref{genform} to
their $\theta$-images. Then $\psi$ is a well-defined group automorphism of order
2 of $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$, which
commutes with the multiplier map to $\mathbb{F}^{\times}$. \label{GSppsi}
\end{thm}
\begin{proof}
Assume, as in Lemma \ref{comp}, that two sets of parameters in Corollary
\ref{genform}, say $a$, $v$, $\alpha$ and $\beta$ versus $c$, $w$, $\gamma$, and
$\delta$, describe the same element of $GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \
0}{0\ \ -1}$. Lemma \ref{normsq} and part $(i)$ of Lemma \ref{comp} show that
the same element of the double cover
$\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ is obtained
from $(a,t)$, $v$, $\alpha$ and $\beta$ and from $\big(c,D(v-w,\beta)t\big)$,
$w$, $\gamma$, and $\delta$. Hence this double cover is well-defined. Consider
now our element of $GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ written
in terms of the parameters $(a,t)$, $v$, $\alpha$ and $\beta$. Its matrix form
appears at the beginning of the proof of Lemma \ref{comp}, and one sees that its
$\psi$-image (using these parameters) has precisely the four entries which
appear in the various parts of Lemma \ref{psiwd}. But this lemma precisely shows
that taking the set of parameters $\big(c,D(v-w,\beta)t\big)$, $w$, $\gamma$,
and $\delta$ instead yields the same entries. This shows that $\psi$ is
well-defined, and its image is clearly in
$\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ again and has
the same multiplier. It is a map of order 2 since so are $\theta$ and the map
$(a,t)\mapsto\big(t\overline{a}^{\ -1},t\big)$ on
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$. Finally, let $(e,r)$, $z$,
$\kappa$, and $\nu$ be parameters of another element of
$\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ and assume
that $v$ represents a parameter also for the product of these two elements.
Then Lemma \ref{GSpprod} provides expressions for the parameters $x$, $v$,
$\xi$, and $\zeta$ of the product in $GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \
0}{0\ \ -1}$, and Lemma \ref{normsq} shows that we may replace $x$ by
$\big(x,tD(a+z,\nu)r\big)$ for the parameter of the product in the double cover
$\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$. Showing that
$\psi$ is multiplicative amounts to verifying that sending $(a,t)$ to
$\big(t\overline{a}^{\ -1},t\big)$, $(e,r)$ to
$\big(r\overline{e}^{\ -1},r\big)$, and the $A^{-}$-parameters $v$, $\alpha$,
$\beta$, $z$, $\kappa$, and $\nu$ to their $\theta$-images results in the same
effect on $\big(x,tD(a+z,\nu)r\big)$ and on $\xi$ and $\zeta$ (the parameter
$v$ of the product already appears). Now, for $x$ this follows directly from
Lemma \ref{normsq} and the multiplicativity of $y\mapsto\overline{y}^{\ -1}$ on
$A^{\times}$, and for $\xi$ and $\zeta$ this follows from Lemma \ref{AxA-rel},
the assumptions on $r$ and $t$, the preservation of multipliers, and the fact
that $\theta$ preserves the bilinear form on $A^{-}$ (this is relevant also for
the action on the denominators $D(\eta,\omega)$). This completes the proof of
the theorem.
\end{proof}
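In particular, on the diagonal generators from Corollary \ref{genform} (with
multiplier $m$, and with $t$ the chosen square root of
$N^{A}_{\mathbb{F}}(a)$) the automorphism $\psi$ takes the explicit form
\[\psi:\binom{a\ \ \ \ 0\ \ \ \ }{0\ \ m\overline{a}^{\
-1}}\longmapsto\binom{t\overline{a}^{\ -1}\ \ \ \ 0\ }{\ \ 0\ \ \ \
\frac{m}{t}a},\]
since $m\overline{a'}^{\ -1}=\frac{m}{t}a$ for $a'=t\overline{a}^{\ -1}$, while
on the unipotent generators $\psi$ simply applies $\theta$ to the
$A^{-}$-entry.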
As with $\varphi$, the automorphism $\psi$ from Theorem \ref{GSppsi} may be
defined using the other entries, hence is a more intrinsic automorphism of
$\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ than one might
think at first. To see this, observe that the element $\binom{0\ \ 1}{1\ \ 0}$,
of multiplier 1, equals its $\psi$-image with the appropriate choice of square
root: Indeed, it may be obtained from a set of parameters arising from an
anisotropic vector $u$, and the result is independent of $u$. By taking the
square root $-|u|^{2}$ for the reduced norm of $a=u$ (Corollary \ref{NAvn2}
again), we find that $\psi$ just replaces every instance of $u$ by $\tilde{u}$,
and the resulting matrix must therefore be $\binom{0\ \ 1}{1\ \ 0}$ again. Hence
we may use the same argument as for $\varphi$ in order to obtain that $\psi$ may
be evaluated, for example, by applying $(g,t)\mapsto\big(t\overline{g}^{\
-1},t\big)$ to any invertible entry of the matrix in question. However, we shall
stick to our form of Corollary \ref{genform} in what follows.
\smallskip
The relation between the group $GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \
-1}$ and the space $A^{-} \oplus H$ from Lemma \ref{sp8id1} (embedded in
$M_{2}(A)$ as described above) begins to reveal itself in the following
\begin{lem}
Any anisotropic vector $U \in A^{-} \oplus H$ lies also in
$GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$, with multiplier
$|U|^{2}$. The involution $\hat{\psi}=\theta \oplus Id_{H}$ of $A^{-} \oplus H$
coincides, on anisotropic elements, with the map $\psi$ from Theorem
\ref{GSppsi}, after an appropriate lift of these elements into the double cover
$\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$. The products
$U\hat{\psi}(U)$ and $\hat{\psi}(U)U$ both equal the scalar $|U|^{2}$, and for
the pairing of two vectors $U$ and $V$ in $A^{-} \oplus H$ we have $2\langle U,V
\rangle=U\hat{\psi}(V)+V\hat{\psi}(U)=\hat{\psi}(U)V+\hat{\psi}(V)U$.
\label{vnorm8}
\end{lem}
\begin{proof}
For the first assertion we evaluate $\binom{u\ \ -p}{q\ \ -\tilde{u}}\binom{1\ \
\ \ 0}{0\ \ -1}\binom{\ \ \tilde{u}\ \ \ \ p}{-q\ \ -u}$ (recall that $u$ and
$\tilde{u}$ are in $A^{-}$), and the result is indeed $|U|^{2}\binom{1\ \ \ \
0}{0\ \ -1}$ for $U=\binom{u\ \ -p}{q\ \ -\tilde{u}}$. In case $p\neq0$, we
multiply $U$ by $\binom{1\ \ \ \ 0}{0\ \ -1}$ (the simplest element with
multiplier $-1$, which clearly equals its $\psi$-image) and by $\binom{0\ \ 1}{1\ \
0}$ from the right. The resulting element has multiplier $pq-|u|^{2}$, and it
may be obtained by taking the parameters $a=p$ (with the square root $p^{2}$ of
$N^{A}_{\mathbb{F}}(p)=p^{4}$), $v=0$, $\alpha=\frac{u}{p}$, and
$\beta=\frac{\tilde{u}}{p}$. As $p^{2}\overline{p}^{\ -1}=p$ once again, $\psi$
just applies $\theta$ to the coordinates $u$ and $\tilde{u}$, and multiplying by
$\binom{\ \ 0\ \ 1}{-1\ \ 0}$ (which was seen to equal its $\psi$-image) from
the right again proves the assertion for the case $p\neq0$. Otherwise
$|u|^{2}\neq0$, and we choose the parameters $v=\alpha=0$, $a=u$ (for the
reduced norm of which Corollary \ref{NAvn2} allows us to take $-|u|^{2}$ as a
square root), and $\beta=\frac{q\tilde{u}}{|u|^{2}}$. The resulting $\psi$-image
is once again obtained by just applying $\theta$ to $u$ and to $\tilde{u}$,
completing the verification of this assertion. Now, the products
$U\hat{\psi}(U)=\binom{u\ \ -p}{q\ \ -\tilde{u}}\binom{\tilde{u}\ \ -p}{q\ \
-u}$ and $\hat{\psi}(U)U=\binom{\tilde{u}\ \ -p}{q\ \ -u}\binom{u\ \ -p}{q\ \
-\tilde{u}}$ are easily evaluated, by Lemma \ref{vnorm6}, as $|U|^{2}I$, and the
last assertion now follows from Lemma \ref{Vpol}. This proves the lemma.
\end{proof}
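Explicitly, the computation establishing the first assertion reads
\[\binom{u\ \ -p}{q\ \ -\tilde{u}}\binom{1\ \ \ \ 0}{0\ \ -1}\binom{\ \
\tilde{u}\ \ \ \ p}{-q\ \ -u}=\binom{u\tilde{u}-pq\ \ \ \ \ \ 0\ \ \ \ \ }{\ \
\ 0\ \ \ \ \ \ pq-\tilde{u}u}=\big(|u|^{2}-pq\big)\binom{1\ \ \ \ 0}{0\ \
-1},\]
where the off-diagonal entries $up-pu$ and $q\tilde{u}-\tilde{u}q$ vanish since
$p$ and $q$ are scalars from $\mathbb{F}$, and
$u\tilde{u}=\tilde{u}u=|u|^{2}$ by Lemma \ref{vnorm6}, so that indeed
$|U|^{2}=|u|^{2}-pq$.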
More importantly, we also have
\begin{prop}
If $U \in A^{-} \oplus H$ and
$\big(g,\psi(g)\big)\in\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \
\ 0}{0\ \ -1}$ then the matrix $gU\psi(g)^{-1}$ also lies in $A^{-} \oplus H$,
and it has the same vector norm as $U$. \label{GSppresHA-}
\end{prop}
\begin{proof}
It suffices to prove the assertion for a generating subset of
$\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$. By Corollary
\ref{genform}, the set of diagonal and unipotent matrices is a generating set.
$\psi$ operates as $\theta$ on the $A^{-}$-coordinates of the unipotent
generators (by choosing 1 to be the square root of $N^{A}_{\mathbb{F}}(1)$),
while on the diagonal ones it operates as $(a,t)\mapsto\big(t\overline{a}^{\
-1},t\big)$. The composition of $\psi$ with inversion operates as $-\theta$ on
the unipotent generators and as
$(a,t)\mapsto\big(\frac{\overline{a}}{t},\frac{1}{t}\big)$ on the diagonal
ones. The action of $\binom{1\ \ v}{0\ \ 1}$ thus leaves $q$ invariant, takes
$u$ to $u+qv$ and $\tilde{u}$ to $\tilde{u}+q\tilde{v}$, and $p$ to
$p+u\tilde{v}+v\tilde{u}+qv\tilde{v}$. $\binom{1\ \ 0}{w\ \ 1}$ sends $q$ to
$q+wu+\tilde{u}\tilde{w}+pw\tilde{w}$, $u$ to $u+p\tilde{w}$, $\tilde{u}$ to
$\tilde{u}+pw$, and leaves $p$ invariant. Finally, applying $\binom{a\ \ \ \ 0\
\ \ }{0\ \ m\overline{a}^{-1}}$ multiplies $q$ by $\frac{m}{t}$ and $p$ by
$\frac{t}{m}$, and maps $u$ to $\frac{au\overline{a}}{t}$ and $\tilde{u}$ to
$t\overline{a}^{\ -1}\tilde{u}a^{-1}$. The image of $u$ is the image of
$\tilde{u}$ under $\theta$ in all these cases (this is clear in the first two
operations and uses Lemma \ref{AxA-rel} and the fact that
$t^{2}=N^{A}_{\mathbb{F}}(a)$ for the latter case). In addition, the expressions
we add to $p$ in the first case and $q$ in the second case are $2\langle u,v
\rangle+q|v|^{2}$ and $2\langle u,\tilde{w} \rangle+p|w|^{2}$ respectively, by
Lemma \ref{vnorm6}; multiplying these by $q$ (resp.\ $p$) yields
$|u+qv|^{2}-|u|^{2}$ (resp. $|u+p\tilde{w}|^{2}-|u|^{2}$) by Lemma \ref{Vpol}.
The fact that Lemma \ref{ac6d1} implies that
$\big|\frac{au\overline{a}}{t}\big|^{2}=|u|^{2}$ for the latter generators now
completes the verification of both assertions for all the necessary cases.
This proves the proposition.
\end{proof}
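For instance, the action of the first unipotent generator is the evaluation
\[\binom{1\ \ v}{0\ \ 1}\binom{u\ \ -p}{q\ \ -\tilde{u}}\binom{1\ \
-\tilde{v}}{0\ \ \ \ 1}=\binom{u+qv\ \ \
-(p+u\tilde{v}+v\tilde{u}+qv\tilde{v})}{\ \ \ q\ \ \ \ \ \ \ \ \
-(\tilde{u}+q\tilde{v})},\]
using the centrality of the scalar $q$, in accordance with the formulae from
the proof.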
We shall also make use of the following
\begin{lem}
Let $U \in A^{-} \oplus H$ and
$\big(g,\psi(g)\big)\in\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \
\ 0}{0\ \ -1}$ be given. Then the equality
$\hat{\psi}\big(gU\psi(g)^{-1}\big)=\psi(g)\hat{\psi}(U)g^{-1}$ holds.
\label{hpsiGSprel}
\end{lem}
\begin{proof}
First, Proposition \ref{GSppresHA-} shows that $gU\psi(g)^{-1} \in A^{-} \oplus
H$ and it has vector norm $|U|^{2}$. In particular, its $\hat{\psi}$-image
is defined. Now, using Lemma \ref{vnorm8} we write
\[gU\psi(g)^{-1}\hat{\psi}\big(gU\psi(g)^{-1}\big)=|gU\psi(g)^{-1}|^{2}=|U|^{2}
=g|U|^{2}g^{-1}\] (since $|U|^{2}$ is a scalar), and applying Lemma \ref{vnorm8}
again the latter term can be presented as
$gU\hat{\psi}(U)g^{-1}=gU\psi(g)^{-1}\cdot\psi(g)\hat{\psi}(U)g^{-1}$. If $U$
is anisotropic (hence invertible) then so is $gU\hat{\psi}(U)g^{-1}$, and the
assertion follows for such $U$. For the rest we observe that both sides are
linear in $U$ and $A^{-} \oplus H$ is spanned by anisotropic vectors. This
completes the proof of the lemma.
\end{proof}
We can now describe the group action in this case in more detail:
\begin{lem}
The group $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ maps
to $O(A^{-} \oplus H)$ with kernel $\mathbb{F}^{\times}$, where the square root
of the reduced norm of the scalar $r$ in $rI$ is chosen to be $r^{2}$.
Let $\tilde{\psi}$ be an element generating a group of order 2. If
$\tilde{\psi}$ operates on $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \
0}{0\ \ -1}$ as the automorphism $\psi$ then sending it to the involution
$\hat{\psi}$ on $A^{-} \oplus H$ yields a map from the semi-direct product of
$\{1,\tilde{\psi}\}$ with $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \
0}{0\ \ -1}$ to $O(A^{-} \oplus H)$. \label{ac8id1}
\end{lem}
\begin{proof}
Proposition \ref{GSppresHA-} defines a map
$\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1} \to O(A^{-}
\oplus H)$. Given $r\in\mathbb{F}^{\times}$, the scalar matrix $rI$ lies in
$GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ (with multiplier $r^{2}$),
and one easily verifies that it equals its $\psi$-image if the square root of
$N^{A}_{\mathbb{F}}(r)=r^{4}$ is taken to be $r^{2}$. This defines an embedding
of $\mathbb{F}^{\times}$ into $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \
\ 0}{0\ \ -1}$, with image in the kernel of the action on $A^{-} \oplus H$ by
the centrality of such $rI$. In order to show that these are the only elements
operating trivially, let $\binom{a\ \ b}{c\ \ d}$ be an element of
$GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$,
with multiplier $m$, and let $\binom{e\ \ f}{g\ \ h}$ describe the inverse of
its $\psi$-image (with multiplier $\frac{1}{m}$). The action sends the elements
$\binom{0\ \ 0}{1\ \ 0}$ and $\binom{0\ \ 1}{0\ \ 0}$ of $A^{-} \oplus H$ to
$\binom{be\ \ bf}{de\ \ df}$ and $\binom{ag\ \ ah}{cg\ \ ch}$ respectively. If
the action is trivial, we must have $be=bf=0$ and $cg=ch=0$. But then we get,
from the $GSp$ relations for $\binom{e\ \ f}{g\ \ h}$, the equalities
$b=mb(e\overline{h}+f\overline{g})=0$ and
$c=mc(h\overline{e}+g\overline{f})=0$, so that $a \in
A^{(\mathbb{F}^{\times})^{2}}$ with $N^{A}_{\mathbb{F}}(a)=t^{2}$,
$d=m\overline{a}^{\ -1}$, $f=g=0$, $e=\frac{\overline{a}}{t}$, and
$h=\frac{ta^{-1}}{m}$. But the action on $A^{-} \subseteq A^{-} \oplus H$ was
seen in Proposition \ref{GSppresHA-} to be via the map from Lemma \ref{ac6d1},
which shows that the only elements
$(a,t)\in\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ which act trivially are of
the form $(r,r^{2})$ with $r\in\mathbb{F}^{\times}$. In order for the action of
this element to be trivial also on $H \subseteq A^{-} \oplus H$, the formula
from Proposition \ref{GSppresHA-} implies that $m$ must be $r^{2}$ as well, so
that our element is indeed $rI$ with $\psi$-image also $rI$ (note that using the
other sign for the $\psi$-image yields elements operating as $-Id_{A^{-} \oplus
H}$). The fact that $\hat{\psi}$ clearly lies in $O(A^{-} \oplus H)$ and the
scalar $\frac{1}{m(g)}$ operates trivially now implies, together with Lemma
\ref{hpsiGSprel}, that the map to $O(A^{-} \oplus H)$ is well-defined on the
semi-direct product in question. This proves the lemma.
\end{proof}
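Explicitly, the two matrix products evaluated in the kernel computation above
are
\[\binom{a\ \ b}{c\ \ d}\binom{0\ \ 0}{1\ \ 0}\binom{e\ \ f}{g\ \ h}=\binom{be\
\ bf}{de\ \ df}\qquad\mathrm{and}\qquad\binom{a\ \ b}{c\ \ d}\binom{0\ \ 1}{0\
\ 0}\binom{e\ \ f}{g\ \ h}=\binom{ag\ \ ah}{cg\ \ ch}.\]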
Once again, we shall need an assertion about reflections:
\begin{lem}
An anisotropic vector $g \in A^{-} \oplus H$ may be lifted to
$\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ such that its
$\psi$-image is $-\hat{\psi}(g)$. The map taking $U \in A^{-} \oplus H$ to the
image of $\hat{\psi}(U)$ under the action of this lift of $g$ is the reflection
in $g$. \label{ref8id1}
\end{lem}
\begin{proof}
By Lemma \ref{vnorm8}, such $g$ lies in $GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \
0}{0\ \ -1}$, and $\psi(g)$ can be taken from $\{\pm\hat{\psi}(g)\}$. Hence such
a lift to $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$
exists, and operates orthogonally on $A^{-} \oplus H$ by Proposition
\ref{GSppresHA-} (or Lemma \ref{ac8id1}). In the evaluation of the result on
$U=g$ the two factors involving $\hat{\psi}(g)$ cancel to give just $-g$. On the
other hand, if $U \in g^{\perp}$ then Lemma \ref{vnorm8} allows us to replace
$g\hat{\psi}(U)$ by $-U\hat{\psi}(g)$, and a similar argument shows that the
final result is just $U$. This proves the lemma.
\end{proof}
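In formulas, since $\psi(g)=-\hat{\psi}(g)$ and
$\hat{\psi}(g)^{-1}=\frac{g}{|g|^{2}}$ by Lemma \ref{vnorm8}, the map in
question is
\[U\longmapsto
g\hat{\psi}(U)\psi(g)^{-1}=-\frac{g\hat{\psi}(U)g}{|g|^{2}}=U-\frac{2\langle
U,g \rangle}{|g|^{2}}g,\]
where the last equality follows by substituting $g\hat{\psi}(U)=2\langle U,g
\rangle-U\hat{\psi}(g)$ from Lemma \ref{vnorm8} and using
$\hat{\psi}(g)g=|g|^{2}$.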
We can finally prove
\begin{thm}
The group $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ is
the Gspin group $Gspin(A^{-} \oplus H)$. It is generated by lifts of anisotropic
elements of $A^{-} \oplus H$ whose $\psi$-images coincide with their
$-\hat{\psi}$-images, so that $GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \
-1}$ is generated by $(A^{-} \oplus H) \cap GL_{2}(A)$. The spin group
$spin(A^{-} \oplus H)$ is the double cover
$\widetilde{Sp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ of
$Sp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ defined by those pairs in
$\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ having
multiplier 1. \label{dim8id1}
\end{thm}
\begin{proof}
Lemma \ref{ref8id1} and Proposition \ref{CDT} show, as in all the previous
cases, that the map from the semi-direct product of Lemma \ref{ac8id1} to
$O(A^{-} \oplus H)$ is surjective. Moreover, $\hat{\psi}$ has determinant $-1$
as an element of $O(A^{-} \oplus H)$ (it inverts a 3-dimensional subspace and
leaves the elements of its orthogonal complement invariant), so that
$\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ maps to
$SO(A^{-} \oplus H)$ (Lemma \ref{ref8id1} again), with kernel
$\mathbb{F}^{\times}$, and the map is again surjective (by index
considerations). This shows that $Gspin(A^{-} \oplus
H)=\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$, and the
structure of the semi-direct product shows (using Lemma \ref{ref8id1} and
Proposition \ref{CDT} again) that this group is generated by those elements of
$\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ lying over
$(A^{-} \oplus H) \cap GL_{2}(A)$ whose images under $\psi$ and $-\hat{\psi}$
coincide. The generation of $GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \
-1}$ follows, as the projection from the double cover is surjective. As the
element $\hat{\psi}$ of $O(A^{-} \oplus H)$ inverts a subspace of determinant 1,
it has spinor norm 1. Hence Lemma \ref{ref8id1} implies that this lift of
invertible $g \in A^{-} \oplus H$ has spinor norm $|g|^{2}$, which coincides
with the multiplier of this element. As these were seen to be a generating set
for $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$, the
spinor norm of any element of the latter group is its multiplier (modulo
squares). The fact that this map factors through the projection to
$GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ is related to the space
$A^{-} \oplus H$ having discriminant 1, so that multiplying by $-Id_{A^{-}
\oplus H}$ does not affect the spinor norms. Therefore $SO^{1}(A^{-} \oplus H)$
consists of the images of those elements whose norm is a square, and by
multiplying by suitable elements from the kernel $\mathbb{F}^{\times}$ we
may restrict to elements of multiplier 1. These are the elements of the double
cover $\widetilde{Sp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ of the
symplectic group $Sp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ which is
defined by the multiplier 1 condition. As the only scalars with multiplier 1
are $\pm1$, this is the kernel of the (surjective) map from
$\widetilde{Sp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ onto
$SO^{1}(A^{-} \oplus H)$, whence the former group is indeed $spin(A^{-} \oplus
H)$. This proves the theorem.
\end{proof}
\smallskip
Our space $A^{-} \oplus H$ is already assumed to be isotropic. However, we may
consider what happens when it splits more than one hyperbolic plane.
\begin{cor}
In case $A^{-} \oplus H$ splits more than one hyperbolic plane, there is a
quaternion algebra $B$ over $\mathbb{F}$ such that the Gspin and spin groups are
isomorphic to double covers of the subgroups of $GSp_{4}(B)$ and $Sp_{4}(B)$
whose presentation as in Corollary \ref{genform} (but with $M_{2}(B)^{-}$
replaced with $M_{2}^{Her}(B)$ and the lower right entry of the diagonal
generators being $m\iota_{B}(a)^{-t}$ rather than $m\overline{a}^{\ -1}$) uses
parameters from $GL_{2}^{(\mathbb{F}^{\times})^{2}}(B) \subseteq GL_{2}(B)$. If
it splits more than two hyperbolic planes, then our space is the direct sum of 4
hyperbolic planes, and our description of the spin group presents it as a double
cover of the group $SO^{1}\binom{0\ \ I}{I\ \ 0}$ of the direct sum of 4
hyperbolic planes in two ways, which are inequivalent to one another and to the
natural presentation as a double cover of such a group. The Gspin group is a
double cover of a subgroup of the general special orthogonal group of the direct
sum of 4 hyperbolic planes, again in two inequivalent ways. \label{iso8id1}
\end{cor}
\begin{proof}
Conjugation by an arbitrary element $\binom{e\ \ f}{g\ \ h} \in GL_{2}(A)$ takes
the group $GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$ to the $GSp$ group of the matrix
$\binom{e\ \ f}{g\ \ h}\binom{1\ \ \ \ 0}{0\ \ -1}\binom{\ \ \overline{h}\ \
-\overline{f}}{-\overline{g}\ \ \ \ \overline{e}}$, with the same multipliers.
$\varphi$ may be defined through the reverse conjugation, and conjugating any
$\psi$-image by the same matrix yields an order 2 automorphism of a double cover
of the kernel of this $\varphi$ (we may also transfer the action on $A^{-}
\oplus H$ by conjugating its image as well). When we do this with $\binom{e\ \
f}{g\ \ h}=\binom{1\ \ 0}{0\ \ R}$ for $R \in A^{-} \cap A^{\times}$ (and
multiplying by the global scalar $-1$) we get $GSp_{A}\binom{R\ \ 0}{0\ \ R}$.
In this case Corollary \ref{NAvn2} shows that the $GSp_{A}^{\mathbb{F}^{2}}$ (or
$\ker\varphi$) condition keeps its shape, since this conjugation just multiplies
the entries by $R$ or $R^{-1}$. When the $A^{-}$ part of $A^{-} \oplus H$ is
also isotropic, Corollary \ref{iso6d1} shows that we can take $A=M_{2}(B)$, and
we choose the element $R=\binom{0\ \ -1}{1\ \ \ \ 0}$. An application of Lemma
\ref{Sadjt} once on matrices in $M_{2}(A)$ and another time on the entries from
$A=M_{2}(B)$ shows that $GSp_{M_{2}(B)}\binom{R\ \ 0}{0\ \ R}$ is exactly the
group of matrices $g \in M_{4}(B)$ such that $g\binom{0\ \ -I}{I\ \ \ \
0}\iota_{B}(g)^{t}$ is some multiple of $\binom{0\ \ -I}{I\ \ \ \ 0}$. The group
$GSp_{M_{2}(B)}\binom{R\ \ 0}{0\ \ R}$ is thus $GSp_{4}(B)$, and the same for
the symplectic groups: $Sp_{M_{2}(B)}\binom{R\ \ 0}{0\ \ R}=Sp_{4}(B)$. Applying
this to the generators appearing in Corollary \ref{genform} indeed yields the
generators appearing in the parentheses, and as the upper left entry is not
affected, we get the asserted description of the Gspin and spin groups.
We can also conjugate with $\binom{1\ \ 0}{0\ \ S}$ with $S \in A^{+}$, yielding
$GSp_{A}\binom{S\ \ \ \ 0}{0\ \ -S}$. If $A^{-}$ splits more than one hyperbolic
plane then $A^{-} \oplus H$ is the sum of 4 hyperbolic planes (see Corollary
\ref{iso6d1} again), $B=M_{2}(\mathbb{F})$, and $A=M_{4}(\mathbb{F})$. We choose
$S$ to be the tensor product of $\binom{0\ \ -1}{1\ \ \ \ 0}$ with itself, i.e.,
the matrix $\binom{\alpha\ \ \beta}{\gamma\ \ \delta} \in M_{4}(\mathbb{F})$ in
which $\alpha=\delta=0$ and $\gamma=-\beta=\binom{0\ \ -1}{1\ \ \ \ 0} \in
M_{2}(\mathbb{F})$. By the form of the conjugating matrix, the definition of
$\ker\varphi$ remains the same also in this case. After applying Lemma
\ref{Sadjt} twice as in the previous case, plus another time on the entries of
$B=M_{2}(\mathbb{F})$, the group $\widehat{GSp}_{A}\binom{S\ \ \ \ 0}{0\ \ -S}$
is seen to be the group of matrices $g \in M_{8}(\mathbb{F})$ such that
$g\binom{0\ \ I}{I\ \ 0}g^{t}$ is a multiple of $\binom{0\ \ I}{I\ \ 0}$ (i.e.,
the \emph{general orthogonal group} of that matrix). $GSp_{A}\binom{S\ \ \ \
0}{0\ \ -S}$ is then the \emph{general special orthogonal group}, in which the
determinant is the 4th power of the multiplier, and $Sp_{A}\binom{S\ \ \ \ 0}{0\
\ -S}$ is just $SO\binom{0\ \ I}{I\ \ 0}$.
We claim that the map $\varphi$ on $Sp_{M_{4}(\mathbb{F})}\binom{1\ \ \ \ 0}{0\
\ -1}$ corresponds to the spinor norm in this presentation as a special
orthogonal group. It suffices to verify this again on a set of generators, and
we use those from the proof of Proposition \ref{GSppresHA-} once more. Recall
that we must take the conjugates of our elements by $\binom{1\ \ 0}{0\ \ S}$,
and that $S \in M_{4}(\mathbb{F})^{+}$ and satisfies $S^{2}=I$. This conjugation
replaces $\binom{1\ \ v}{0\ \ 1}$ and $\binom{1\ \ 0}{w\ \ 1}$ by $\binom{1\ \
vS}{0\ \ \ 1\ }$ and $\binom{\ 1\ \ \ 0}{Sw\ \ 1}$, with $vS$ being in
$M_{4}^{as}(\mathbb{F})$ by two applications of Lemma \ref{Sadjt} (as in
Corollary \ref{alt6d1}), and $Sw=Sw\overline{S} \cdot S$ lies there as well. For
$\binom{a\ \ \ 0\ \ }{0\ \ \overline{a}^{-1}}$ (we restrict to elements of
multiplier 1, since we do not consider ``spinor norms for general orthogonal
groups'' here), the choice of $S$ and Lemma \ref{Sadjt} imply that after
conjugation, $\overline{a}^{\ -1}$ is replaced by $a^{-t}$. Now, the unipotent
generators (which lie in $\ker\varphi$) are squares in $Sp_{A}\binom{1\ \ \ \
0}{0\ \ -1}$ (a square root is obtained by dividing the entry $vS$ or $Sw$ from
$M_{4}^{as}(\mathbb{F})$ by 2). Hence they have trivial spinor norms since the
range $\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$ of the spinor norm has
exponent 2. For $\binom{a\ \ \ 0\ \ }{0\ \ a^{-t}}$ with $a \in
GL_{4}(\mathbb{F})$, we recall that the latter group is generated by elementary
matrices, which operate only on two of these hyperbolic planes. It thus suffices
to consider the operation of $\binom{g\ \ \ 0\ \ }{0\ \ g^{-t}}$ with $g \in
GL_{2}(\mathbb{F})$ on the direct sum of two hyperbolic planes. But Corollaries
\ref{dim4d1} and \ref{iso4} show that the latter space is isometric to
$M_{2}(\mathbb{F})$ with the determinant as the vector norm, the Gspin group
being the ``equal determinant subgroup'' of the product of two copies of
$GL_{2}(\mathbb{F})$. By considering the first hyperbolic plane as generated by
$\binom{1\ \ 0}{0\ \ 0}$ and $\binom{0\ \ 0}{0\ \ 1}$ and the second one by
$\binom{0\ \ 0}{1\ \ 0}$ and $\binom{0\ \ -1}{0\ \ \ \ 0}$, the action of
$\binom{g\ \ \ 0\ \ }{0\ \ g^{-t}}$ becomes the action of the pair consisting of
$g$ and $\binom{1\ \ \ \ 0\ \ }{0\ \ \det g}$, and the spinor norm is indeed
$\det g$. Thus $Sp_{M_{4}(\mathbb{F})}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \
-1}$ is isomorphic to $SO^{1}\binom{0\ \ I}{I\ \ 0}$, and our description of the
spin group as $\widetilde{Sp}_{M_{4}(\mathbb{F})}^{\mathbb{F}^{2}}\binom{1\ \ \
\ 0}{0\ \ -1}$ is once again presented as $spin\binom{0\ \ I}{I\ \ 0}$. We
thus have three representations of this group as a spin group of the direct sum
of 4 hyperbolic planes: The original one, the projection onto
$Sp_{M_{4}(\mathbb{F})}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$, and the
composition of the latter projection with $\psi$. These representations are not
equivalent, since their kernels, all of order 2, are different: The non-trivial
element there is $-I$ with $\psi$-image $-I$ in the original representation, $I$
with $\psi$-image $-I$ in the projection, and $-I$ with $\psi$-image $I$ in the
composition. On the other hand,
$GSp_{M_{4}(\mathbb{F})}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ is a
subgroup of the general special orthogonal group of $\binom{0\ \ I}{I\ \ 0}$
which is defined by some condition which restricts to the triviality of the
spinor norm on $SO\binom{0\ \ I}{I\ \ 0}$, the Gspin group in question is a
double cover of this subgroup, and $\psi$ presents it as a double cover of this
subgroup in an inequivalent way (again, the projections have different kernels).
This completes the proof of the corollary.
\end{proof}
Note that the groups from Corollary \ref{iso8id1} are not
$\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$, but
conjugates of the latter group inside $GL_{2}(A)$, with given conjugators. This
yields a definition for $\psi$ on these groups. However, we shall conjugate this
$\psi$ by $\binom{R\ \ 0}{0\ \ R}$ or by $\binom{S\ \ 0}{0\ \ S}$. The formula
for the resulting map looks just like that of $\psi$, but in which
$\overline{a}$ is replaced by $\iota_{B}(a)^{t}$ or just $a^{t}$, while
$\theta:v\mapsto\tilde{v}$ on $A^{-}$ becomes $X\mapsto-adjX$ on
$M_{2}^{Her}(B)$ or $T\mapsto\hat{T}$ on $M_{4}^{as}(\mathbb{F})$ (both having
the property that multiplication of the vector and its image under the
involution yields the vector norm). Hence when the groups from Corollary
\ref{iso8id1} are considered, this is the choice of $\psi$ with which they
come.
The fact that in the hyperbolic case we get 3 inequivalent 8-dimensional
representations of the spin group, in all of which the image is the $SO^{1}$
group of the direct sum of 4 hyperbolic planes, is an incarnation of
\emph{triality} for this case. Triality exists for more general settings, namely
some non-isotropic spaces of dimension 8 and discriminant 1 (see Section 35 of
\cite{[KMRT]} for more details), but our methods here restrict to the isotropic
case.
We remark that allowing non-trivial spinor norms in the second case in Corollary
\ref{iso8id1} is in some sense dual to allowing multipliers. We have seen that
the Gspin group was mapping to the general special orthogonal group in this
case. On the other hand, we may allow the map $\psi$ from Theorem \ref{GSppsi}
to have a free choice of a scalar (not necessarily squaring to the reduced norm
of an entry), which would extend the definition of $\psi$ to (an
$\mathbb{F}^{\times}$-cover of) all of $GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$, not
only to elements with trivial $\varphi$-image. The group constructed in Theorem
\ref{GSppsi} would then be an $\mathbb{F}^{\times}$-cover of
$GSp_{A}\binom{1\ \ \ \ 0}{0\ \ -1}$, and the action from Proposition
\ref{GSppresHA-} and Lemma \ref{ac8id1} may multiply the bilinear form on $A^{-}
\oplus H$ by a scalar. In the split $A$ case this was seen, when we considered
the generators $\binom{a\ \ \ 0\ \ }{0\ \ \overline{a}^{-1}}$ with multiplier 1,
to produce elements whose image in the projection to $SO(M_{4}(\mathbb{F})^{-}
\oplus H)$, as well as in the composition of this projection with $\psi$, may
have arbitrary spinor norms (but the spinor norm does have to be the same for
these two maps).
In any case, we may have many alternative descriptions of this picture:
\begin{cor}
Let $\Xi$ be an element of $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \
0}{0\ \ -1}$, such that $\Xi\psi(\Xi)$ is a scalar $r\in\mathbb{F}^{\times}$.
Define the map $\psi_{\Xi}:\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \
0}{0\ \ -1}\to\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$
by conjugating $\psi$ by $\Xi$, i.e., $\psi_{\Xi}(g)=\Xi\psi(g)\Xi^{-1}$. Let
$\hat{\psi}_{\Xi}$ be the composition of $\hat{\psi}$ with the operation of
$\Xi$ on $A^{-} \oplus H$, and embed the latter space into $M_{2}(A)$ by
multiplying the image from above by $\Xi^{-1}$ from the right. Then all the
assertions from Lemma \ref{vnorm8} to Theorem \ref{dim8id1} and Corollary
\ref{iso8id1} hold by replacing every $U$ by $U\Xi^{-1}$,
$\psi$ by $\psi_{\Xi}$, and $\hat{\psi}$ by $\hat{\psi}_{\Xi}$, up to rescaling
the bilinear forms by the scalar $r$. \label{alt8id1}
\end{cor}
\begin{proof}
The assumption that $\Xi\psi(\Xi)$ is central implies that $\psi_{\Xi}$ again
has order 2 as an automorphism of $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\
\ \ \ 0}{0\ \ -1}$. As an element $V$ of the latter space is $U\Xi^{-1}$ with $U
\in A^{-} \oplus H$ as above, its $\psi_{\Xi}$-image equals
$\Xi\psi(U)\psi(\Xi)^{-1}\Xi^{-1}$. This coincides with our definition of
$\hat{\psi}_{\Xi}$ on $U$ and the modified embedding, so that $\hat{\psi}_{\Xi}$
preserves this embedding of $A^{-} \oplus H$ and is the restriction of a branch
of $\psi_{\Xi}$. In addition, our assumption on $\Xi$ implies that the latter
expression is just $\frac{\Xi\hat{\psi}(U)}{r}$. The original Lemma \ref{vnorm8}
now yields its modified version, with the appropriate rescaling. Furthermore,
multiplying our $V$ by $\psi_{\Xi}(g)^{-1}=\Xi\psi(g)^{-1}\Xi^{-1}$ from the
right gives $U\psi(g)^{-1}\Xi^{-1}$. After left multiplication by $g$, the
original Proposition \ref{GSppresHA-} implies the modified one. All the rest now
follows from these assertions in the same way. This proves the corollary.
\end{proof}
For example, if we take $\Xi=\binom{0\ \ -1}{1\ \ \ \ 0}$ (with $r=1$) in
Corollary \ref{alt8id1} then $A^{-} \oplus H$ becomes the space of matrices of
the form $\binom{p\ \ u}{\tilde{u}\ \ q}$, with the usual $p$, $q$, and $u$ and
with minus the ``bi-quaternionic Moore determinant'' as the vector norm. The map
$\hat{\psi}_{\Xi}$ interchanges $p$ and $q$ with minus one another and leaves
$A^{-}$ pointwise fixed. In general, all the (equivalent) representations we get
in Corollary \ref{alt8id1} are still based on the map $\psi$, which is more
complicated, hence we shall not present any of them explicitly.
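For this particular $\Xi$ the modified embedding is given explicitly by
\[U\Xi^{-1}=\binom{u\ \ -p}{q\ \ -\tilde{u}}\binom{\ \ 0\ \ 1}{-1\ \
0}=\binom{p\ \ u}{\tilde{u}\ \ q},\]
in accordance with the description given above.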
\section{Dimension 7, Representing the Discriminant \label{Dim7rd}}
The spaces we consider here are given in
\begin{lem}
The orthogonal complement of a vector in $A^{-} \oplus H$ of some vector norm
$-\delta\neq0$ has discriminant $\delta$ and it contains a vector of norm
$\delta$. Any vector space of dimension 7 containing a vector whose vector norm
equals the discriminant of the space can be obtained in this manner, up to
rescaling. \label{sp7rd}
\end{lem}
We remark that if some vector $Q$ has vector norm equal to the discriminant of
the space, then this property continues to hold after rescalings.
\begin{proof}
The discriminant and determinant of $A^{-} \oplus H$ are both 1. As a vector $Q$ of
vector norm $-\delta$ spans a space of determinant $-\delta$, the complement in
$A^{-} \oplus H$ has the same determinant $-\delta$, and its discriminant is
$\delta$ since $(-1)^{7(7-1)/2}=-1$. As a hyperbolic plane contains vectors of
any given vector norm, the Witt Cancellation Theorem allows us to find some
element of $O(A^{-} \oplus H)$ taking $Q$ to some element of $H \subseteq A^{-}
\oplus H$. The orthogonal complement in $H$ is generated by a vector of vector
norm $\delta$, and its inverse image under the orthogonal map we applied has the
same vector norm. Conversely, adding some vector $Q$ which is perpendicular to
the total space and such that $|Q|^{2}$ is the determinant of the space yields a
space of discriminant 1. The sum of $Q$ with a vector whose vector norm is the
discriminant of the space is then isotropic. Lemma \ref{sp8id1} now completes
the proof of the lemma.
\end{proof}
In view of Lemma \ref{sp7rd}, we write our space as
$A^{-}\oplus\langle\delta\rangle$, using a generator of the orthogonal
complement in $H$ whose norm is $\delta$. Moreover, we embed the space $A^{-}
\oplus H$ into $M_{2}(A)$ as seen after Lemma \ref{sp8id1}, and we choose $Q$ to
be the matrix $\binom{0\ \ -\delta}{1\ \ \ \ 0}$ (of vector norm $-\delta$). We
now have
\begin{lem}
Given such $A$ with $\iota_{B}\otimes\iota_{C}$ and $\delta$, the groups
$Gspin(A^{-}\oplus\langle\delta\rangle)$ and
$spin(A^{-}\oplus\langle\delta\rangle)$ are, up to isomorphism, the stabilizers
of the matrix $\binom{0\ \ \delta}{1\ \ 0}$ in the action given in Proposition
\ref{GSppresHA-} inside the groups
$\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ 0}{0\ \ -1}$ and
$\widetilde{Sp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ 0}{0\ \ -1}$ respectively. The
double cover $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ 0}{0\ \ -1}$
splits over $Gspin(A^{-}\oplus\langle\delta\rangle)$. These groups operate by
conjugation on $A^{-}\oplus\langle\delta\rangle$ if we identify the latter space
as the space of matrices of the form $\binom{p\ \ \ \delta u}{\tilde{u}\ \ -p}$
with $u \in A^{-}$ and $p\in\mathbb{F}$, with the vector norm being the
``bi-quaternionic $A^{-}$-Moore determinant'' divided by $-\delta$.
\label{ac7rd}
\end{lem}
\begin{proof}
The first assertion is proved by the same argument used for Lemma \ref{ac5}.
Recall now that the action from Proposition \ref{GSppresHA-} is based on the map
$\psi$, so that an element of $GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ 0}{0\ \ -1}$
stabilizes $\binom{0\ \ -\delta}{1\ \ \ \ 0}$ if and only if the
$\psi$-image of one of its lifts equals its conjugate by $\binom{0\ \
-\delta}{1\ \ \ \ 0}$. This yields the splitting of the double cover over
$Gspin(A^{-}\oplus\langle\delta\rangle)$, since every element there comes with a
natural choice of $\psi$-image. Take now $\Xi=\binom{0\ \ -\delta}{1\ \ \ \ 0}$
in Corollary \ref{alt8id1}, which equals its $\psi$-image and squares to
$-\delta$. The space thus obtained is the one written explicitly here, and the
remaining assertions follow since $\psi_{\Xi}(g)=g$ for $g \in
Gspin(A^{-}\oplus\langle\delta\rangle)$ by definition. This proves the lemma.
\end{proof}
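For instance, the fact used in this proof that $\Xi=\binom{0\ \ -\delta}{1\ \ \ \ 0}$
squares to $-\delta$ is an immediate matrix computation:
\[\Xi^{2}=\begin{pmatrix}0 & -\delta \\ 1 & 0\end{pmatrix}
\begin{pmatrix}0 & -\delta \\ 1 & 0\end{pmatrix}=
\begin{pmatrix}-\delta & 0 \\ 0 & -\delta\end{pmatrix}=-\delta\cdot Id.\]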
In order to give a more detailed description of the groups from Lemma
\ref{ac7rd}, we begin by proving
\begin{lem}
If an element of $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ 0}{0\ \ -1}$
lying over $\binom{e\ \ f}{g\ \ h} \in GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \
0}{0\ \ -1}$ stabilizes $\binom{0\ \ -\delta}{1\ \ \ \ 0}$ then either $e$ or
$g$ are invertible. \label{stabinv}
\end{lem}
\begin{proof}
First observe that if $(a,t)\in\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ then
the action of the element $\binom{a\ \ \ \ 0\ \ \ }{0\ \
t\overline{a}^{-1}}$ of $\widetilde{GSp}_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ 0}{0\
\ -1}$, with $\psi$-image $\binom{t\overline{a}^{-1}\ \ 0\ \ }{\ \ 0\ \ ma/t}$,
stabilizes this matrix. The proof of Lemma \ref{genie} thus shows that it
suffices to consider elements in which $e=\binom{1\ \ 0}{0\ \ 0} \in A=M_{2}(B)$
for some quaternion algebra $B$ over $\mathbb{F}$ (for $A$ division the lemma is
immediate). We have seen in the proof of Lemma \ref{genie} that $g$ takes the
form $\binom{\lambda\ \ \mu}{r\ \ 0}$ with $\lambda$ and $\mu$ from $B$ and
$r\in\mathbb{F}$, and a similar argument shows that $f=\binom{\sigma\ \ s}{\tau\
\ 0}$ where $\sigma$ and $\tau$ are in $B$ and $s\in\mathbb{F}$. Moreover,
$\mu\overline{\tau}$ equals the multiplier $m$ (hence both $\mu$ and $\tau$ are
invertible), and $h$ has lower right entry $m+rs$. As parameters for $\binom{e\
\ f}{g\ \ h}$ in Corollary \ref{genform} we may take $v=\binom{\ \ 0\ \
0}{-1\ \ 0}$, $a=\binom{1\ \ 0}{\lambda\ \ \mu}$, $\alpha$ with upper right
entry $s\in\mathbb{F}$, and $\beta=\binom{0\ \ 1}{r\ \ 0}$. The
$GSp_{A}^{\mathbb{F}^{2}}$ condition means that
$N^{B}_{\mathbb{F}}(a)=N^{B}_{\mathbb{F}}(\mu)$ is a square, say $t^{2}$, hence
$t\overline{a}^{\ -1}=\binom{\mu/t\ \ \ \ 0}{\overline{\lambda}\mu/t\ \ t}$, and
the action of $\theta$ leaves $v$, $\beta$, and the $s$-entry of $\alpha$
invariant. The lowest row of $\binom{1\ \ v}{0\ \ 1}\binom{1\ \ 0}{\beta\ \ 1}$
is $(r\ \ 0\ \ 0\ \ 1)$, while in $t\overline{a}^{\ -1}\binom{1\ \
\tilde{\alpha}}{0\ \ 1}$ the top right and bottom right entries are
$\frac{s\mu}{t}$ and $\frac{m\mu}{t}$ respectively. Hence the bottom right
entry of $\psi\binom{e\ \ f}{g\ \ h}$ is $\frac{(m+rs)\mu}{t}$. As
$\psi\binom{e\ \ f}{g\ \ h}$ has multiplier $m$, its inverse has
$\frac{(m+rs)\mu}{mt}$ as its top left entry. Now, the second row of
$\binom{e\ \ f}{g\ \ h}\binom{0\ \ -\delta}{1\ \ \ \ 0}$ is $(\tau\ \ 0\ \ 0\ \
0)$, so that the second row of the action of $\binom{e\ \ f}{g\ \ h}$ on
$\binom{0\ \ -\delta}{1\ \ \ \ 0}$ starts with $\frac{(m+rs)\tau\mu}{mt}$. But
we have assumed that $\binom{e\ \ f}{g\ \ h}$ preserves $\binom{0\ \ -\delta}{1\
\ \ \ 0}$, so that the latter expression must vanish. As $\tau$ and $\mu$ are in
$B^{\times}$ and $m\neq0$, it follows that $r\neq0$, whence $g=\binom{\lambda\ \
\mu}{r\ \ 0}$ is invertible as desired. This completes the proof of the lemma.
\end{proof}
The determination of the group from Lemma \ref{ac7rd} may now be carried out
using the explicit formulae for $\psi$. The result is
\begin{prop}
Any element of $Gspin(A^{-}\oplus\langle\delta\rangle)$ may be presented either
as $\binom{\ a\ \ -t\delta\tilde{\beta}\overline{a}^{-1}}{\beta a\ \ \ \
t\overline{a}^{-1}\ \ }$ with
$(a,t)\in\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ and $\beta \in A^{-}$, or as
$\binom{wc\ \ -s\delta\overline{c}^{-1}}{\ c\ \ \ \
s\tilde{w}\overline{c}^{-1}}$ where $(c,s)$ is in
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ and $w$ comes from $A^{-}$.
\label{stabdelta}
\end{prop}
\begin{proof}
Lemma \ref{stabinv} implies that for every element $\binom{a\ \ b}{c\ \ d} \in
GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ 0}{0\ \ -1}$ which stabilizes $\binom{0\ \
-\delta}{1\ \ \ \ 0}$, either $\binom{a\ \ b}{c\ \ d}$ or $\binom{0\ \
-\delta}{1\ \ \ \ 0}\binom{a\ \ b}{c\ \ d}$ has an invertible upper right entry.
In the first case $a$ lies under some element
$(a,t)\in\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$, and Lemma \ref{invent}
shows that $b=\beta a$ for some $\beta \in A^{-}$. Moreover, the formula from
Theorem \ref{GSppsi} shows that $\psi\binom{a\ \ b}{c\ \ d}$ has left column
$\binom{t\overline{a}^{-1}}{t\tilde{\beta}\overline{a}^{-1}}$, and if
$\psi\binom{a\ \ b}{c\ \ d}$ coincides with the image of $\binom{a\ \ b}{c\ \
d}$ under conjugation by $\binom{0\ \ -\delta}{1\ \ \ \ 0}$, then $\binom{a\ \
b}{c\ \ d}$ must have the asserted right column. If $a$ is not invertible, then
we may multiply $\binom{a\ \ b}{c\ \ d}$ by $\binom{0\ \ -\delta}{1\ \ \ \ 0}$
from the left, obtain an element of the form just described, and dividing by
$\binom{0\ \ -\delta}{1\ \ \ \ 0}$ back again shows that our element must be of
the second suggested form. This proves the proposition.
\end{proof}
Note that an element of the second form may be uniquely presented as the product
of an anisotropic element of $A^{-}\oplus\langle\delta\rangle=\binom{0\ \
-\delta}{1\ \ \ \ 0}^{\perp} \subseteq A^{-} \oplus H$ in which the lower left
entry is 1 and a diagonal matrix stabilizing $\binom{0\ \ \delta}{1\ \ 0}$ (the
latter matrices form, as Proposition \ref{stabdelta} shows, a group which is
isomorphic to $\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$): Indeed, such an
element is just the product $\binom{w\ \ \ \ \delta}{1\ \ -\tilde{w}}\binom{c\ \
\ \ \ \ 0\ \ \ }{0\ \ -s\overline{c}^{-1}}$.
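Explicitly, this factorization is the matrix identity
\[\begin{pmatrix}w & \delta \\ 1 & -\tilde{w}\end{pmatrix}
\begin{pmatrix}c & 0 \\ 0 & -s\overline{c}^{-1}\end{pmatrix}=
\begin{pmatrix}wc & -s\delta\overline{c}^{-1} \\ c & s\tilde{w}\overline{c}^{-1}\end{pmatrix},\]
whose right hand side is the second form from Proposition \ref{stabdelta}.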
In total, we have
\begin{thm}
The Gspin group of $A^{-}\oplus\langle\delta\rangle$ consists of elements
$\binom{a\ \ b}{c\ \ d}$ of $GSp_{A}^{\mathbb{F}^{2}}\binom{1\ \ \ 0}{0\ \ -1}$
in which $a\overline{d}$ and $b\overline{c}$ are scalars from $\mathbb{F}$,
which square to $N^{A}_{\mathbb{F}}(a)$ (or equivalently
$N^{A}_{\mathbb{F}}(d)$) and $\delta^{2}N^{A}_{\mathbb{F}}(c)$ (which equals
also $\frac{N^{A}_{\mathbb{F}}(b)}{\delta^{2}}$) respectively. It is
characterized by either the two elements $bd^{-1}$ and $-\delta ca^{-1}$ or the
two elements $ac^{-1}$ and $-\delta db^{-1}$ being well-defined elements of
$A^{-}$ which are $\theta$-images of one another. It is generated by anisotropic
vectors of the form $\binom{v\ \ \ \ \delta}{1\ \ -\tilde{v}}$ or $\binom{v\ \ \
\ 0}{0\ \ -\tilde{v}}$. The spin group consists of those elements in which the
two scalars $a\overline{d}$ and $b\overline{c}$ sum to 1. \label{dim7rd}
\end{thm}
\begin{proof}
Any element may be presented in one of the two forms given in Proposition
\ref{stabdelta}. In the first case we have $a\overline{d}=t$ and
$b\overline{c}=t\delta|\beta|^{2}$, while in the second one these numbers are
$-s|w|^{2}$ and $-\delta s$ respectively. The relations with the reduced norms
of $a$, $b$, $c$, and $d$ are easily verified using Corollary \ref{NAvn2} and
the definition of $\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$, and the relation
between $bd^{-1}$ and $-\delta ca^{-1}$ in the first case and $ac^{-1}$ and
$-\delta db^{-1}$ in the second case are also immediate. Conversely, if
$\binom{a\ \ b}{c\ \ d}$ is a matrix in which $a\overline{d}$ and
$b\overline{c}$ are scalars, not both zero, then either $d=t\overline{a}^{\ -1}$
or $b=-s\delta\overline{c}^{\ -1}$ for some scalars $t$ and $s$. If
$t^{2}=N^{A}_{\mathbb{F}}(a)$ then $N^{A}_{\mathbb{F}}(d)$ takes the same value,
while if $s^{2}=N^{A}_{\mathbb{F}}(c)$ then $N^{A}_{\mathbb{F}}(b)$ is obtained
by multiplication by $\delta^{4}$. If we write $c=\beta a$ in the first case and
$a=wc$ in the second case, then the respective values
$b=-t\delta\tilde{\beta}\overline{a}^{\ -1}$ and $d=s\tilde{w}\overline{c}^{\
-1}$ immediately follow. Now, elements with invertible lower left entry were
seen to take the form $\binom{w\ \ \ \ \delta}{1\ \ -\tilde{w}}\binom{c\ \ \ \ \
0\ \ \ }{0\ \ -s\overline{c}^{-1}}$. The generation of these elements by the
asserted set now follows from Theorem \ref{dim6d1}, as the map
$(a,t)\mapsto\binom{a\ \ \ \ 0\ \ }{0\ \ t\overline{a}^{-1}}$ is a group
injection which sends the generators $(v,|v|^{2})$ for $v \in A^{-} \cap
A^{\times}$ to $\binom{v\ \ \ \ 0}{0\ \ -\tilde{v}}$. The other elements are
obtained by multiplying the appropriate elements by the generator $\binom{0\ \
\delta}{1\ \ 0}$, and the assertion about generation follows. For the spin
group, observe that the proof of Lemma \ref{ac5} shows that the spinor norm of
an element of $SO(A^{-}\oplus\langle\delta\rangle)$ is the same when considered
there or in $SO(A^{-} \oplus H)$ (by leaving $\binom{0\ \ -\delta}{1\ \ \ \ 0}$
invariant), and the proof of Theorem \ref{dim8id1} implies that in the latter
group the spinor norm of (the image of) $\binom{a\ \ b}{c\ \ d}$ is just the
multiplier $a\overline{d}+b\overline{c}$. As by the usual scalar multiplication
we may normalize this multiplier to 1 whenever it is a square, the assertion
about the spin group is also established. This proves the theorem.
\end{proof}
After fixing $A$, we have the following assertion about the dependence of the
Gspin and spin groups on $\delta$:
\begin{prop}
If $\varepsilon\in\delta(\mathbb{F}^{\times})^{2}N^{A}_{\mathbb{F}}(A^{\times})$
then the spin and Gspin groups of $A^{-}\oplus\langle\varepsilon\rangle$ are
isomorphic to those of $A^{-}\oplus\langle\delta\rangle$. \label{deltadep}
\end{prop}
\begin{proof}
Consider first multiplication from $(\mathbb{F}^{\times})^{2}$. Let
$r\in\mathbb{F}^{\times}$, and we examine the result of conjugation by
$\binom{1\ \ 0}{0\ \ 1/r}$. This operation multiplies the upper right entry by
$r$ and divides the lower left entry by $r$. Hence on elements of
$Gspin(A^{-}\oplus\langle\delta\rangle)$ of the first form of Proposition
\ref{stabdelta} this operation corresponds to leaving $a$ (and $t$) invariant,
dividing $\beta$ by $r$, and multiplying $\delta$ by $r^{2}$, while for elements
of the second form it means dividing $c$ by $r$ (hence dividing $s$ by $r^{2}$),
multiplying $w$ by $r$, and again multiplying $\delta$ by $r^{2}$. Hence this
conjugation takes $Gspin(A^{-}\oplus\langle\delta\rangle)$ into
$Gspin(A^{-}\oplus\langle r^{2}\delta \rangle)$. Conjugation by the inverse
element shows that the map between these two groups is bijective. As for
multiplication by norms from $A^{\times}$, we now consider the conjugation by
$\binom{e\ \ \ 0\ \ }{0\ \ \overline{e}^{-1}}$ for some $e \in A^{\times}$. For
elements having the first form in Proposition \ref{stabdelta}, this operation
sends $a$ to $eae^{-1}$ (hence $t$ remains invariant) and $\beta$ to
$\overline{e}^{\ -1}\beta e^{-1}$, and Lemma \ref{AxA-rel} shows that $\delta$
must be multiplied by $N^{A}_{\mathbb{F}}(e)$. As for the other elements, this
operation takes $c$ to $\overline{e}^{\ -1}ce^{-1}$ (and therefore $s$ is
divided by $N^{A}_{\mathbb{F}}(e)$) and $w$ to $ew\overline{e}$, so that again
$\delta$ has to be multiplied by $N^{A}_{\mathbb{F}}(e)$ (Lemma \ref{AxA-rel} is
just a consistency check in this case). This shows that
$Gspin(A^{-}\oplus\langle\delta\rangle)$ is isomorphic to a subgroup of
$Gspin\big(A^{-}\oplus\big\langle N^{A}_{\mathbb{F}}(e)\delta \big\rangle\big)$,
and an argument using the inverse conjugation shows the bijectivity. As
conjugation preserves multipliers and the spin groups are the subgroups of the
Gspin groups which are defined by the multiplier 1 condition, this completes the
proof of the proposition.
\end{proof}
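The effect of the first conjugation used in this proof is given by the elementary
identity
\[\begin{pmatrix}1 & 0 \\ 0 & 1/r\end{pmatrix}
\begin{pmatrix}a & b \\ c & d\end{pmatrix}
\begin{pmatrix}1 & 0 \\ 0 & r\end{pmatrix}=
\begin{pmatrix}a & rb \\ c/r & d\end{pmatrix},\]
which indeed multiplies the upper right entry by $r$ and divides the lower left
entry by $r$.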
By Proposition \ref{deltadep}, it suffices to take $\delta$ from a set of
representatives for
$\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}N^{A}_{\mathbb{F}}(A^{\times})$.
In addition, Lemma \ref{NM2B} once again shows that if $A=M_{2}(B)$ then this
group involves just classes modulo $N^{B}_{\mathbb{F}}(B^{\times})$.
The concept of isotropy in this case is the one considered in
\begin{cor}
Assume that $A^{-}\oplus\langle\delta\rangle$ contains an isotropic vector which
is orthogonal to a vector whose vector norm equals the discriminant. Then the
Gspin and spin groups consist of matrices $\binom{a\ \ b}{c\ \ d}$ from
$GSp_{4}(B)$ (or $Sp_{4}(B)$), for some quaternion algebra $B$ over
$\mathbb{F}$, in which $a\iota_{B}(d)^{t}$ and $b\iota_{B}(c)^{t}$ lie in
$\mathbb{F}$, and square to
$N^{M_{2}(B)}_{\mathbb{F}}(a)=N^{M_{2}(B)}_{\mathbb{F}}(d)$ and
$\delta^{2}N^{M_{2}(B)}_{\mathbb{F}}(c)=\frac{N^{M_{2}(B)}_{\mathbb{F}}(b)}{
\delta^{2}}$ respectively. In every such matrix, either $bd^{-1}$ and $-\delta
ca^{-1}$ or $ac^{-1}$ and $-\delta db^{-1}$ lie in $M_{2}^{Her}(B)$ and are
minus the adjoints of one another. These groups operate by conjugation on the
space of matrices $\binom{\ pI\ \ \ -\delta X}{adjX\ \ -pI\ } \in M_{4}(B)$,
where $X \in M_{2}^{Her}(B)$ and $p\in\mathbb{F}$, as the Gspin and spin groups
of this space, with the ``bi-quaternionic $A^{-}$-Moore determinant''
divided by $-\delta$ as the vector norm. The spin group may also be presented as
the spin group of the space of matrices of the sort $\binom{\delta X\ \ \ \ pI\
}{pI\ \ \ adjX}$, with the same ``bi-quaternionic Moore determinant'' as the
vector norm, via $g:N \mapsto gN\iota_{B}(g)^{t}$.
In case $A^{-}\oplus\langle\delta\rangle$ splits three hyperbolic planes, we
get groups of $8\times8$ matrices $\binom{a\ \ b}{c\ \ d}$ which multiply the
bilinear form defined by $\binom{0\ \ I}{I\ \ 0}$ by a scalar, such that
$ad^{t}$ and $bc^{t}$ are scalar $4\times4$ matrices, whose squares are $\det
a=\det d$ and $\delta^{2}\det c=\frac{\det b}{\delta^{2}}$ respectively.
Moreover, either the pair $bd^{-1}$ and $-\delta ca^{-1}$ or the pair $ac^{-1}$
and $-\delta db^{-1}$ are defined, they lie in $M_{4}^{as}(\mathbb{F})$, and
they are sent to one another by the involution $T\mapsto\hat{T}$ which was
described in the paragraph following Corollary \ref{alt6d1}. The space on which
these groups operate by conjugation consists of matrices of the form $\binom{pI\
\ -\delta T}{\hat{T}\ \ \ -pI}$ with $T \in M_{4}^{as}(\mathbb{F})$.
\label{iso7rd}
\end{cor}
\begin{proof}
We have seen in the proof of Corollary \ref{iso8id1} that $GSp_{4}(B)$ is
obtained from $GSp_{M_{2}(B)}\binom{1\ \ \ \ 0}{0\ \ -1}$ through conjugation by
$\binom{1\ \ 0}{0\ \ R}$ with $R=\binom{0\ \ -1}{1\ \ \ \ 0}$. As this operation
takes a matrix $\binom{a\ \ b}{c\ \ d}$ to $\binom{\ a\ \ \ bR^{-1}\ }{Rc\ \
RdR^{-1}}$, Lemma \ref{Sadjt} shows that the relations from Theorem \ref{dim7rd}
become the ones asserted here (the reduced norms are not affected, since
$N^{M_{2}(B)}_{\mathbb{F}}(R)=1$). The assertions involving $bd^{-1}$ and
$-\delta ca^{-1}$ or $ac^{-1}$ and $-\delta db^{-1}$ follow from those appearing
in Theorem \ref{dim7rd} through the fact that $\theta(X)=-adjX$ for $X \in
M_{2}(B)^{-}$, right multiplication by $R$ sends this space to $M_{2}^{Her}(B)$,
and $adjR=R^{-1}$ for our $R$. Recall now that the group
$Gspin(A^{-}\oplus\langle\delta\rangle) \subseteq GSp_{M_{2}(B)}\binom{1\ \ \ \
0}{0\ \ -1}$ operates on the space defined in Lemma \ref{ac7rd} by conjugation.
Conjugating the formula for this action by $\binom{1\ \ 0}{0\ \ R}$ yields the
action of our subgroup of $GSp_{4}(B)$ by conjugation on the asserted space with
the asserted quadratic form. The $Sp_{4}$ condition now shows that multiplying
the latter space from the right by $\binom{0\ \ -I}{I\ \ \ \ 0}$ yields a space
on which $spin\big(M_{2}^{Her}(B)\oplus\langle\delta\rangle\big) \subseteq
Sp_{4}(B)$ operates via $g:N \mapsto gN\iota_{B}(g)^{t}$, and this space is
easily seen to be the one from the last assertion.
In the case of splitting three hyperbolic planes, we apply the same argument
with the matrix $S$ from the proof of Corollary \ref{iso8id1}. Once again Lemma
\ref{Sadjt} yields the desired relations between the squares, and $\det S=1$.
We recall from the proof of Corollary \ref{alt6d1} that multiplication of
$M_{4}(\mathbb{F})^{-}$ by $S$ (from either side) yields anti-symmetric
matrices, and that the vector norm is taken to minus the pfaffian by this
operation. Conjugating the space from Lemma \ref{ac7rd} by $\binom{1\ \ 0}{0\ \
S}$ yields the first space with $T$ and $\hat{T}$. The fact that the spin group
is contained in $O\binom{0\ \ I}{I\ \ 0}$ allows us to multiply our
representation by $\binom{0\ \ I}{I\ \ 0}$ from the right, yielding the second
space with the action $g:L \mapsto gLg^{t}$ of the spin group. This completes
the proof of the corollary.
\end{proof}
Note that the description of the groups from Corollary \ref{iso7rd} is in
correspondence with the choice of $\psi$ on the groups from Corollary
\ref{iso8id1}. For the Gspin groups, the representations which extend those
defined by $g:N \mapsto gN\iota_{B}(g)^{t}$ and $g:L \mapsto gLg^{t}$ from
Corollary \ref{iso7rd} and preserve the bilinear form must include division by
the multiplier $m$. In addition, although in both cases we may obtain natural
8-dimensional representations of these groups by adding $\binom{0\ \ -I}{I\ \ \
\ 0}$ to the first representation and $\binom{0\ \ I}{I\ \ 0}$ to the second
one, this is not dual to preserving $\binom{0\ \ -\delta}{1\ \ \ \ 0}$ since the
full $GSp$ group (of $\binom{Q\ \ 0}{0\ \ Q}$ or $\binom{S\ \ \ \ 0}{0\ \ -S}$)
also preserves this matrix by definition. We also mention the fact that starting
with the representation appearing in Corollary \ref{alt8id1} yields precisely
the representations of $Gspin(A^{-}\oplus\langle\delta\rangle)$ and
$spin(A^{-}\oplus\langle\delta\rangle)$ already given in Lemma \ref{ac7rd} and
Corollary \ref{iso7rd}.
\section{Dimension 8, Isotropic, Any Discriminant \label{Dim8igen}}
Let $d$ be a discriminant, and let $\mathbb{E}=\mathbb{F}(\sqrt{d})$ be the
associated quadratic extension of $\mathbb{F}$, with Galois automorphism $\rho$.
We shall be interested in the spaces given in the following
\begin{lem}
Let $A=B \otimes C$ be a bi-quaternion algebra over $\mathbb{F}$ with the
involution corresponding to this presentation, and let $Q \in A^{-}$ be
anisotropic. The direct sum $(A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H$ of a
hyperbolic plane and the space from Lemma \ref{sp6gen} is 8-dimensional,
isotropic, and has discriminant $d$. Moreover, this yields all the isotropic
8-dimensional quadratic spaces of discriminant $d$ over $\mathbb{F}$.
\label{sp8igen}
\end{lem}
\begin{proof}
The space $(A_{\mathbb{E}}^{-})_{\rho,Q}$ from Lemma \ref{sp6gen} has dimension
6 and discriminant $d$, hence determinant $-d$. Adding the isotropic space $H$,
of determinant $-1$, yields a space with the desired properties. On the other
hand, if an 8-dimensional space is isotropic and has discriminant $d$, then it
splits a hyperbolic plane, and the complement has dimension 6 and discriminant
$d$. The lemma now follows from Lemma \ref{sp6gen} and the fact that hyperbolic
planes are isometric to their rescalings.
\end{proof}
Extending scalars in the space from Lemma \ref{sp8igen} to $\mathbb{E}$, we
obtain an isotropic 8-dimensional space of discriminant 1, which equals
$A_{\mathbb{E}}^{-} \oplus H_{\mathbb{E}}$ by Lemma \ref{sp8id1}, and we present
it as the subspace of $M_{2}(A_{\mathbb{E}})$ as before. Therefore our space
$(A_{\mathbb{E}}^{-})_{\rho,Q}$ is isomorphic to the space of matrices
$\binom{u\ \ -p}{q\ \ -\tilde{u}}$ in which $u\in(A_{\mathbb{E}}^{-})_{\rho,Q}$
and $p$ and $q$ are in $\mathbb{F}$, with the restriction of the quadratic form
from $A_{\mathbb{E}}^{-} \oplus H_{\mathbb{E}}$. Proposition \ref{GSppresHA-}
shows that the group $\widetilde{GSp}_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\
\ \ \ 0}{0\ \ -1}$ acts on $A_{\mathbb{E}}^{-} \oplus H_{\mathbb{E}}$, and we
are interested in the subgroup which preserves the subspace
$(A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H$. Observe that the $H$ part of
$(A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H$ is invariant under the automorphism
$\rho$ of $M_{2}(A)$, and the action of $\rho$ on the other part is given in
Lemma \ref{sp6gen}. In particular $\rho$ preserves the subspace
$(A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H$ of $A_{\mathbb{E}}^{-} \oplus
H_{\mathbb{E}}$, and operates as the reflection in $Q$ on this space.
We shall also need the non-invertible elements of $A$ having the
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$-property, namely those $a \in A$
which satisfy $aQ\overline{a}^{\rho}=0$ (the reduced norm condition immediately
follows, since $a \not\in A^{\times}$ and hence its reduced norm vanishes). We
denote the union of the set of those elements with
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ by
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2},0}$. It is no longer a group, but it is
closed under multiplication, and
$t:A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2},0}\to\mathbb{F}$ (including 0) is
multiplicative. Moreover, apart from $(A_{\mathbb{E}}^{-})_{\rho,Q}$, we shall
also need the space $(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$. The following lemma
gathers a few simple relations between these sets.
\begin{lem}
Let $v\in(A_{\mathbb{E}}^{-})_{\rho,Q}$ and
$w\in(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$ be given. Then we have $(i)$
$\tilde{v}\in(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$. $(ii)$ $Q^{-1}vQ^{-1}$ is
also in $(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$. $(iii)$
$vQ^{-1} \in A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2},0}$, with
$t(vQ^{-1})=-\frac{|v|^{2}}{|Q|^{2}}$. $(iv)$
$\tilde{w}\in(A_{\mathbb{E}}^{-})_{\rho,Q}$. $(v)$
$QwQ$ also lies in $(A_{\mathbb{E}}^{-})_{\rho,Q}$. $(vi)$ $Qw$ is an element
of $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2},0}$, whose multiplier $t(Qw)$ is
$-|Q|^{2}|w|^{2}$. \label{QthetaQrel}
\end{lem}
\begin{proof}
Lemma \ref{sp6gen} implies $v^{\rho}=-\frac{Q\tilde{v}Q}{|Q|^{2}}$, hence
$\tilde{v}=-|Q|^{2}Q^{-1}v^{\rho}Q^{-1}=-\frac{\tilde{Q}v^{\rho}\tilde{Q}}{
|\tilde{Q}|^{2}}$ since $Q^{-1}=\frac{\tilde{Q}}{|Q|^{2}}$ and
$|\tilde{Q}|^{2}=|Q|^{2}$. Applying $\rho$ to the latter equation and using the
fact that $\theta^{2}=Id_{A_{\mathbb{E}}^{-}}$ and that $Q$, hence also
$\tilde{Q}$, are $\rho$-invariant, yields the $\tilde{Q}$-based condition from
Lemma \ref{sp6gen} for $\tilde{v}$. This establishes part $(i)$. For part
$(ii)$, recall first that $\rho$ preserves the space
$(A_{\mathbb{E}}^{-})_{\rho,Q}$. It thus also preserves
$(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$. Write $v$ as $(v^{\rho})^{\rho}$, which
equals $-\frac{Q\tilde{v}^{\rho}Q}{|Q|^{2}}$ by Lemma \ref{sp6gen}. Hence
$Q^{-1}vQ^{-1}=-\frac{\tilde{v}^{\rho}}{|Q|^{2}}$, which implies part $(ii)$
since the latter element lies in $(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$ by part
$(i)$. For part $(iii)$ observe that as $Q^{-1}$ and $v$ lie in
$A_{\mathbb{E}}^{-}$ and $Q^{-\rho}=Q^{-1}$, the expression $vQ^{-1} \cdot
Q\cdot\overline{vQ^{-1}}^{\rho}$ equals $vQ^{-1}v^{\rho}$. Substituting the
expression for $v^{\rho}$ from Lemma \ref{sp6gen} again, the latter expression
becomes $-\frac{|v|^{2}}{|Q|^{2}}Q$, and part $(iii)$ follows since the reduced
norm condition is a consequence of Corollary \ref{NAvn2}. Parts $(iv)$ and
$(v)$ are proved either by applying the necessary changes in the proofs of
parts $(i)$ and $(ii)$ respectively, or since the maps given in parts $(i)$ and
$(ii)$ are injective maps between 6-dimensional vector spaces over $\mathbb{F}$
and the maps from parts $(iv)$ and $(v)$ are their inverses. For part $(vi)$ we
write $w$ as $Q^{-1}uQ^{-1}$ for some $u\in(A_{\mathbb{E}}^{-})_{\rho,Q}$ using
parts $(ii)$ and $(v)$, and then the assertion for $Qw=uQ^{-1}$ follows from
part $(iii)$ since $u=QwQ=-Qw\overline{Q}$ has vector norm $|Q|^{4}|w|^{2}$ by
Proposition \ref{NAFg} and Corollary \ref{NAvn2}. This completes the proof of
the lemma.
\end{proof}
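To illustrate part $(iii)$, the computation from the proof reads, using
$v^{\rho}=-\frac{Q\tilde{v}Q}{|Q|^{2}}$ from Lemma \ref{sp6gen} and
$v\tilde{v}=|v|^{2}$,
\[vQ^{-1}v^{\rho}=-\frac{vQ^{-1}\cdot Q\tilde{v}Q}{|Q|^{2}}=
-\frac{v\tilde{v}}{|Q|^{2}}\,Q=-\frac{|v|^{2}}{|Q|^{2}}\,Q,\]
in accordance with the asserted multiplier $t(vQ^{-1})$.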
We have seen that parts $(i)$ and $(iv)$ in Lemma \ref{QthetaQrel}, as well as
parts $(ii)$ and $(v)$ there, are inverses. Moreover, a claim similar to part
$(iii)$ (but without the multiplier) appears in the first assertion of Lemma
\ref{ref6gen}. We remark that the proofs of parts $(iii)$ and $(vi)$ of Lemma
\ref{QthetaQrel} show that if the vector $v$ or $w$ is anisotropic then the
converse implication also holds (cancel $vQ$ from the left in the proof of part
$(iii)$, and apply the same argument to extend it to part $(vi)$). On the other
hand, if $v$ (or $w$) is isotropic then the converse implications in parts
$(iii)$ and $(vi)$ may not hold. Indeed, by taking a
non-zero isotropic vector $v$ in $(A_{\mathbb{E}}^{-})_{\rho,Q}$ and
$z\in\mathbb{E}\setminus\mathbb{F}$, we see that $vQ^{-1}$ as well as $zvQ^{-1}$ lie in
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2},0}$, while $zv$ no longer lies in
$(A_{\mathbb{E}}^{-})_{\rho,Q}$ (and the same for
$w\in(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$). Note that this argument does
not affect our assertions in the anisotropic case, since in this case $zv$
would have vector norm $z^{2}|v|^{2}$ and the multiplier of $zvQ^{-1}$ (which
does belong to $A_{\mathbb{E},\rho,\mathbb{F}Q}^{\times}$) is
$-N^{\mathbb{E}}_{\mathbb{F}}(z)\frac{|v|^{2}}{|Q|^{2}}$, and these do not
coincide if $|v|^{2}\neq0$.
We shall also need the following complement of Lemma \ref{normsq} here:
\begin{lem}
For $\eta\in(A_{\mathbb{E}}^{-})_{\rho,Q}$ and
$\omega\in(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$, the element $1+\eta\omega$ of
$A$ lies in $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2},0}$, with multiplier
$D(\eta,\omega)$. \label{combAErhoFQm20}
\end{lem}
\begin{proof}
First note that the expression $D(\eta,\omega)$ from Lemma \ref{normsq} lies in
$\mathbb{F}$ for such $\eta$ and $\omega$, since $(A_{\mathbb{E}}^{-})_{\rho,Q}$
and $(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$ are quadratic spaces over
$\mathbb{F}$. Now, as in the proof of Lemma \ref{normsq}, we begin by assuming
that $\omega$ is anisotropic, and write $1+\eta\omega$ as
$\big(\tilde{\omega}+|\omega|^{2}\eta\big)\frac{\omega}{|\omega|^{2}}$. Then
$\frac{\omega}{|\omega|^{2}}\in(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$, and
$\tilde{\omega}+|\omega|^{2}\eta\in(A_{\mathbb{E}}^{-})_{\rho,Q}$ by
part $(iv)$ of Lemma \ref{QthetaQrel}. But now parts $(iii)$ and $(vi)$ of that
lemma show that $\big(\tilde{\omega}+|\omega|^{2}\eta\big)Q^{-1}$ and
$Q\frac{\omega}{|\omega|^{2}}$ are both in
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2},0}$, with multipliers
$-\frac{\big|\tilde{\omega}+|\omega|^{2}\eta\big|^{2}}{|Q|^{2}}$ and
$-\frac{|Q|^{2}}{|\omega|^{2}}$ respectively, so that the assertion
follows from the multiplicativity of $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2},0}$
and $t:A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2},0}\to\mathbb{F}$ and the fact that
the former vector norm was seen to be $|\omega|^{2}D(\eta,\omega)$ in the proof
of Lemma \ref{normsq}. For isotropic $\omega$, we use the polynomial method from
the proof of Lemma \ref{normsq} again, and consider
$\big(1+\eta(\omega+s\xi)\big)Q\overline{\big(1+\eta(\omega+s\xi)\big)}^{\rho}$
and $D(\eta,\omega+s\xi)Q$ as $A$-valued polynomials in $s$, for some fixed,
anisotropic $\xi\in(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$. This means that we
consider $A$ as a vector space over $\mathbb{F}$, we choose a basis for it which
includes $Q$, and consider the two sets of 16 polynomials in $s$ arising as the
coefficients using this basis (in the latter set, 15 polynomials will
identically vanish and one is the coefficient $D(\eta,\omega+s\xi)$). By what we
have proved, both sets of polynomials coincide for every $s$ except perhaps $s=0$
and $s=-\frac{2\langle\omega,\xi\rangle}{|\xi|^{2}}$, and the same argument as
in the proof of Lemma \ref{normsq} shows that they coincide for every $s$. The
reduced norm condition required for $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2},0}$
is satisfied by Lemma \ref{normsq} itself. Substituting $s=0$ verifies our
assertion also for isotropic $\omega$, which completes the proof of the lemma.
\end{proof}
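In the anisotropic case, the multiplier from Lemma \ref{combAErhoFQm20} is thus
obtained as the product
\[\bigg(-\frac{\big|\tilde{\omega}+|\omega|^{2}\eta\big|^{2}}{|Q|^{2}}\bigg)
\cdot\bigg(-\frac{|Q|^{2}}{|\omega|^{2}}\bigg)=
\frac{|\omega|^{2}D(\eta,\omega)}{|\omega|^{2}}=D(\eta,\omega),\]
using the equality
$\big|\tilde{\omega}+|\omega|^{2}\eta\big|^{2}=|\omega|^{2}D(\eta,\omega)$ from
the proof of Lemma \ref{normsq}.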
\smallskip
Denote by $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ the set of
those elements $\binom{a\ \ b}{c\ \ d}$ of $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \
0}{0\ \ -1}$, whose multipliers are in $\mathbb{F}^{\times}$, and which arise,
in terms of Corollary \ref{genform}, from parameters $v$ and $\alpha$ from
$(A_{\mathbb{E}}^{-})_{\rho,Q}$, $\beta$ which lies in
$(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$, and where $a$ is assumed to be in
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$. This definition is independent of the
choice of parameters, as one sees in the following
\begin{lem}
Let $c$, $w$, $\gamma$, and $\delta$ be a set of parameters for some element of
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ as in Corollary
\ref{genform}. If $w\in(A_{\mathbb{E}}^{-})_{\rho,Q}$ then
$c \in A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$,
$\gamma\in(A_{\mathbb{E}}^{-})_{\rho,Q}$, and
$\delta\in(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$. \label{GSpAEQindep}
\end{lem}
\begin{proof}
By definition, there is a set of parameters $a \in A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$,
$v\in(A_{\mathbb{E}}^{-})_{\rho,Q}$, $\alpha\in(A_{\mathbb{E}}^{-})_{\rho,Q}$,
and $\beta\in(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$ from Corollary
\ref{genform} for that element. Given $w$ as an alternative parameter, the other
parameters $c$, $\gamma$, and $\delta$ are determined by the formulae from Lemma
\ref{comp}. The assumptions on $a$, $w$, $v$, and $\beta$ and Lemma
\ref{combAErhoFQm20} show that the two multipliers in the expression for $c$ in
part $(i)$ of that lemma belong to $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$.
Part $(i)$ of Lemma \ref{QthetaQrel} shows that the expression for $\delta$ in
part $(ii)$ of Lemma \ref{comp} is in $(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$,
since the denominator $D(\beta,w-v)$ was seen to lie in $\mathbb{F}^{\times}$.
When we examine the expression for $\gamma$ in part $(iii)$ of Lemma \ref{comp},
we get the image of the action of $a^{-1}$ on some vector, which lies in
$(A_{\mathbb{E}}^{-})_{\rho,Q}$ by part $(iv)$ of Lemma \ref{QthetaQrel} and the
$\mathbb{F}$-rationality of $m$ and of the denominator $D(\beta,w-v)$ again.
Since $a$, hence also $a^{-1}$, comes from
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$, this expression also lies in
$(A_{\mathbb{E}}^{-})_{\rho,Q}$ by Lemma \ref{ac6gen}. This completes the proof
of the lemma.
\end{proof}
We can now prove
\begin{prop}
The set $GSp_{A_{\mathbb{E}}}\binom{1\ \
\ \ 0}{0\ \ -1}_{\rho,Q}$ is a subgroup of $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \
0}{0\ \ -1}$, which is stable under $\rho$. It is contained in
$GSp_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$ and comes
endowed with a splitting map into the double cover
$\widetilde{GSp}_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \
\ 0}{0\ \ -1}$. \label{GSpAErhoFQ}
\end{prop}
\begin{proof}
As in the proofs of Proposition \ref{GSphom} and Theorem \ref{GSppsi}, given
two elements of $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$, we
may assume that in the form of Corollary \ref{genform}, the left multiplier $g$
has parameters $a$, $v$, $\alpha$ and $\beta$, the right multiplier $h$ arises
from $e$, $z$, $\kappa$, and $\nu$, and $x$, the same $v$, $\xi$, and $\zeta$
are parameters for the product $gh$. We may further assume that $v$ and $z$ lie
in $(A_{\mathbb{E}}^{-})_{\rho,Q}$. Lemma \ref{GSpAEQindep} implies that
$\alpha$ and $\kappa$ also come from the same space, $\beta$ and $\nu$ are in
$(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$, and $a$ and $e$ are elements of
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$. We have to show that the remaining
parameters for $gh$ lie in the appropriate sets. For $x$ this follows from the
properties of $a$, $e$, $\alpha$, $z$, and $\nu$ by Lemma \ref{combAErhoFQm20}.
Invoking part $(iv)$ of Lemma \ref{QthetaQrel}, as well as Lemma \ref{ac6gen}
for the action of $e^{-1} \in A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$, verifies
the assertion for $\xi$, since the multiplier $n$ and the denominator
$D(\alpha+z,\nu)$ lie in $\mathbb{F}^{\times}$ by the proof of Lemma
\ref{combAErhoFQm20}. Now, $\zeta$ is the sum of $\beta$ and the image of the
action of $\overline{a}^{\ -1}$ on a vector, which is contained in
$(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$ by part $(i)$ of Lemma
\ref{QthetaQrel}. By parts $(i)$ and $(iv)$ of the latter lemma, it is
sufficient to show that
$\tilde{\zeta}-\tilde{\beta}\in(A_{\mathbb{E}}^{-})_{\rho,Q}$. But using Lemma
\ref{AxA-rel} and part $(i)$ of Lemma \ref{QthetaQrel} again, the latter vector
is obtained from an element of $(A_{\mathbb{E}}^{-})_{\rho,Q}$ by the action of
$a \in A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ (up to scalars from
$\mathbb{F}^{\times}$), which verifies the assertion for $\zeta$ as well. Hence
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ is a subgroup of
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}$. For the stability under
$\rho$, we may write any element of $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \
-1}_{\rho,Q}$ as in Corollary \ref{genform}, with parameters as in Lemma
\ref{GSpAEQindep}. The fact that $\rho$ preserves
$(A_{\mathbb{E}}^{-})_{\rho,Q}$, $(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$, and
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ implies the preservation of
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ as well. Now, the
value of the map $\varphi$ from Proposition \ref{GSphom} is based only on the
parameter from $A_{\mathbb{E}}^{\times}$ in Corollary \ref{genform}. Since for
matrices in $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ these
parameters come from $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$, and the latter
group is contained in $A_{\mathbb{E}}^{(\mathbb{E}^{\times})^{2}}$ by the
definition of the former group in Lemma \ref{ind2int}, our subgroup is contained
in $GSp_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$. In
addition, the double cover
$\widetilde{GSp}_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$
is defined by adding a choice of a square root for the reduced norm of the
parameter from $A_{\mathbb{E}}^{(\mathbb{E}^{\times})^{2}}$, so that we get a
parameter from $\widetilde{A}_{\mathbb{E}}^{(\mathbb{E}^{\times})^{2}}$ for
elements of this double cover. But
$\widetilde{A}_{\mathbb{E}}^{(\mathbb{E}^{\times})^{2}}$ was seen to split over
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ via $g\mapsto\big(g,t(g)\big)$.
Moreover, this splitting map is compatible with parameter changes inside
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$,
$(A_{\mathbb{E}}^{-})_{\rho,Q}$, and
$(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$ by part $(i)$ of Lemma \ref{comp} and
Lemma \ref{combAErhoFQm20}. This establishes the splitting of
$\widetilde{GSp}_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$
over $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ as well, and
completes the proof of the proposition.
\end{proof}
In case we wish to evaluate $\psi$-images of elements of
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ using other entries,
we cannot use the matrix $\binom{0\ \ 1}{1\ \ 0}$, as it does not belong to this
group. However, the element $\binom{0\ \ Q}{\tilde{Q}\ \ 0}$, of multiplier
$-|Q|^{2}$, does belong there: By choosing some non-zero $h\in\mathbb{E}_{0}$,
we recall that $hQ$ and $\frac{Q}{h|Q|^{2}}$ are in
$(A_{\mathbb{E}}^{-})_{\rho,Q}$,
$\frac{\tilde{Q}}{h|Q|^{2}}\in(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$, and $h|Q|^{2}$ is
in $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$, and our element may be obtained
from the parameters $v=-hQ$, $a=h|Q|^{2}$, $\alpha=\frac{Q}{h|Q|^{2}}$, and
$\beta=\frac{\tilde{Q}}{h|Q|^{2}}$. Recall that the multiplier $t(h|Q|^{2})$ of
$h|Q|^{2} \in A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ is $-h^{2}|Q|^{4}$, so
that as an element of $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$
we must have $\psi\binom{0\ \ Q}{\tilde{Q}\ \ 0}=\binom{\ \ 0\ \ -\tilde{Q}}{-Q\
\ \ \ 0}$. In any case, we may use this element in order to transfer the formula
for $\psi$ on $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ to be
based on the other matrix entries.
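In summary (merely collecting the formulas just given), for any non-zero
$h\in\mathbb{E}_{0}$ we have
\[
m\binom{0\ \ Q}{\tilde{Q}\ \ 0}=-|Q|^{2},\qquad
t\big(h|Q|^{2}\big)=-h^{2}|Q|^{4},\qquad
\psi\binom{0\ \ Q}{\tilde{Q}\ \ 0}=\binom{\ \ 0\ \ -\tilde{Q}}{-Q\ \ \ \ 0},
\]
with the parameters $v=-hQ$, $a=h|Q|^{2}$, $\alpha=\frac{Q}{h|Q|^{2}}$, and
$\beta=\frac{\tilde{Q}}{h|Q|^{2}}$.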
As in the situation we encountered in dimension 6 and general discriminant, we
remark that unless $Q^{2}$ is a scalar and $Q$ and $\tilde{Q}$ span the same
vector space, the space $(A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H$ is not
invariant under the linear automorphism $\hat{\psi}$ from Lemma \ref{vnorm8}. In
addition, the automorphism $\psi$ does not preserve the subgroup
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ (embedded via the
splitting map from Proposition \ref{GSpAErhoFQ}) in this case. However, we can
still pursue the usual route by using the following
\begin{lem}
Let $\hat{Q}$ denote the element of
$\widetilde{GSp}_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \
\ 0}{0\ \ -1}$ lying over $\binom{Q\ \ \ \ 0}{0\ \ -\tilde{Q}}$ in which the
square root $t$ of $N^{A_{\mathbb{E}}}_{\mathbb{E}}(Q)$ is chosen to be
$|Q|^{2}$. Then the element $\hat{Q}\tilde{\psi}$ of the semi-direct product of
$\{1,\tilde{\psi}\}$ and
$\widetilde{GSp}_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$
from Lemma \ref{ac8id1} squares to $-|Q|^{2}I$ (with $|Q|^{4}$ as the square root
of its reduced norm) in that semi-direct product. Its action on
$(A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H$ coincides with that of $\rho$ (which
preserves it), and conjugation by this element operates on
$\widetilde{GSp}_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$
via $g\mapsto\hat{Q}\psi(g)\hat{Q}^{-1}$. This automorphism of
$\widetilde{GSp}_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$
has order 2, and it preserves the subgroup $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \
0}{0\ \ -1}_{\rho,Q}$ embedded through the splitting map from Proposition
\ref{GSpAErhoFQ} as its operation on the latter group is the same as that of
$\rho$. \label{Qhatpsi}
\end{lem}
\begin{proof}
The multiplier of $\hat{Q}$ is $|Q|^{2}$, and as $Q \in A_{\mathbb{E}}^{-}$ and
$t=|Q|^{2}$ indeed satisfies $t^{2}=N^{A}_{\mathbb{F}}(Q)$ by Corollary
\ref{NAvn2}, the definition of the map $\psi$ in Theorem \ref{GSppsi} shows that
$\psi(\hat{Q})$ is $\binom{-\tilde{Q}\ \ 0}{\ \ 0\ \ Q}$ (with the same square
root $-|Q|^{2}$). The first assertion follows immediately from the fact that
$\hat{Q}\psi(\hat{Q})=-|Q|^{2}I$ and the product of the square roots is
$|Q|^{4}$ (the number $D(\alpha+z,w)$ appearing in the proof of Theorem
\ref{GSppsi} in the square root of the reduced norm of $x$ from Lemma
\ref{GSpprod} is just 1). Now, $\tilde{\psi}$ operates as $\hat{\psi}$ on
$A_{\mathbb{E}}^{-} \oplus H_{\mathbb{E}}$ (which is just $\theta$ on the
$A_{\mathbb{E}}^{-}$ part), and the operation of the diagonal element $\hat{Q}$
was evaluated in Proposition \ref{GSppresHA-}: The $H_{\mathbb{E}}$ part is
pointwise fixed since $t=m$ for our element, and the combined operation on
$A_{\mathbb{E}}^{-}$ is via $u\mapsto-\frac{Q\tilde{u}Q}{|Q|^{2}}$. But Lemma
\ref{sp6gen} shows that on $(A_{\mathbb{E}}^{-})_{\rho,Q}$ the latter map
coincides with $\rho$, and as $\rho$ leaves $H$ also pointwise fixed, this
establishes the second assertion. The formula for the conjugation on
$\widetilde{GSp}_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$
follows directly from the structure of the semi-direct product in Lemma
\ref{ac8id1}, and it is of order 2, either because $-|Q|^{2}I$ (with the square
root $|Q|^{4}$ of $N^{A_{\mathbb{E}}}_{\mathbb{E}}(-|Q|^{2})$) operates
trivially or by a direct evaluation. In order to examine the action of this
automorphism on $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ in
view of that of $\rho$, we recall the relation between $\hat{Q}$ and
$\psi(\hat{Q})$, so that we evaluate $-\frac{1}{|Q|^{2}}\hat{Q}g\psi(\hat{Q})$
for $g \in GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ and
compare it with $g^{\rho}$. It suffices to take $g$ from a set of generators of
the latter group, namely $\binom{1\ \ v}{0\ \ 1}$ with
$v\in(A_{\mathbb{E}}^{-})_{\rho,Q}$, $\binom{1\ \ 0}{w\ \ 1}$ with
$w\in(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$, and $\binom{a\ \ \ \ 0\ \ \ }{0\ \
m\overline{a}^{-1}}$ where $a$ lies in
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$. Now, $\psi$ replaces $v$ and $w$ by
their $\theta$-images and $a$ by $t(a)\overline{a}^{\ -1}$, and after
conjugating by $\hat{Q}$ and using Lemma \ref{vnorm6} we find that $v$ is sent
to $-\frac{Q\tilde{v}Q}{|Q|^{2}}$, $w$ is taken to
$-\frac{\tilde{Q}\tilde{w}\tilde{Q}}{|Q|^{2}}$, the diagonal entries of the
unipotent generators remain invariant, and $a$ is mapped to
$t(a)Q\overline{a}^{\ -1}Q^{-1}$ (the other entry $\overline{a}^{\ -1}$ becomes
$\frac{Q^{-1}aQ}{t(a)}$). But as we assume that
$v\in(A_{\mathbb{E}}^{-})_{\rho,Q}$,
$w\in(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$, and $a \in
A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$, the first two expressions are
$v^{\rho}$ and $w^{\rho}$ by Lemma \ref{sp6gen}, while the proof of Lemma
\ref{Qtheta} shows that the latter expression is just $a^{\rho}$ (and the one
in parentheses is $\overline{a}^{\ -\rho}$). Since the multiplier $m$ lies in
$\mathbb{F}$ and is thus $\rho$-invariant, this establishes the coincidence of
$\rho$ and conjugation by $\hat{Q}\tilde{\psi}$ on
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$, and as $\rho$ was
seen to preserve this group in Proposition \ref{GSpAErhoFQ}, conjugation by
$\hat{Q}\tilde{\psi}$ does the same. This completes the proof of the lemma.
\end{proof}
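For later reference, the computation in the above proof shows that the
conjugation $g\mapsto\hat{Q}\psi(g)\hat{Q}^{-1}$ acts on the parameters of the
generators via
\[
v\mapsto-\frac{Q\tilde{v}Q}{|Q|^{2}}=v^{\rho},\qquad
w\mapsto-\frac{\tilde{Q}\tilde{w}\tilde{Q}}{|Q|^{2}}=w^{\rho},\qquad
a\mapsto t(a)Q\overline{a}^{\ -1}Q^{-1}=a^{\rho},
\]
where the equalities hold for $v\in(A_{\mathbb{E}}^{-})_{\rho,Q}$,
$w\in(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$, and $a \in
A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$.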
Note that $\Xi=\hat{Q}$ satisfies the conditions of Corollary \ref{alt8id1},
and the operations of $\hat{Q}\tilde{\psi}$ appearing in Lemma \ref{Qhatpsi} are
just $\hat{\psi}_{\hat{Q}}$ and $\psi_{\hat{Q}}$ respectively. Now, Lemma
\ref{Qhatpsi} gives an intrinsic description of $GSp_{A_{\mathbb{E}}}\binom{1\ \
\ \ 0}{0\ \ -1}_{\rho,Q}$, and we can also establish some properties of its
entries and the elements from the $GSp$ relations:
\begin{cor}
The subgroup $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ is
characterized as the set of elements of
$\widetilde{GSp}_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$
on which the automorphism $\psi_{\hat{Q}}$ operates as $\rho$. \label{GSprhoQst}
\end{cor}
\begin{proof}
The fact that $\psi_{\hat{Q}}(g)=g^{\rho}$ for $g \in
GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ was seen in the proof
of Lemma \ref{Qhatpsi}. Conversely, let
$g\in\widetilde{GSp}_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \ \ 0}{0\ \
-1}$ be an element satisfying $\psi_{\hat{Q}}(g)=g^{\rho}$. Then the multipliers
of both sides coincide, and as $\psi$ commutes with $m$ and
$m(g^{\rho})=m(g)^{\rho}$, we find that $m(g)\in\mathbb{F}^{\times}$. Moreover,
Lemma \ref{genie} allows us to find a set of parameters for Corollary
\ref{genform} for $g$ in which $v\in(A_{\mathbb{E}}^{-})_{\rho,Q}$. As
$\binom{1\ \ v}{0\ \ 1}$ lies in $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \
-1}_{\rho,Q}$, it suffices to consider elements
$g\in\widetilde{GSp}_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \ \ 0}{0\ \
-1}$ having invertible upper left entry. But these elements have a unique
decomposition as in Lemma \ref{invent}. Comparing these decompositions for
$g^{\rho}$ and $\psi_{\hat{Q}}(g)$ and recalling that the unipotent matrices are
assumed to have unipotent $\psi$-images (and not with $-1$ on the diagonal)
reduces the verification to the multipliers appearing in Lemma \ref{invent},
lifted into $\widetilde{GSp}_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \ \
0}{0\ \ -1}$. But these verifications are carried out in the proof of Lemma
\ref{Qhatpsi}. This proves the corollary.
\end{proof}
One can also show that if $\binom{a\ \ b}{c\ \ d}$ is an element of
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ then $a$, $bQ^{-1}$,
$Qc$, and $QdQ^{-1}$ all lie in $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2},0}$, the
elements $a\overline{b}=-b\overline{a}$ and $\overline{b}d=-\overline{d}b$ of
$A_{\mathbb{E}}^{-}$ belong to $(A_{\mathbb{E}}^{-})_{\rho,Q}$, and
$\overline{a}c=-\overline{c}a$ and $\overline{c}d=-\overline{d}c$ come from
$(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$. In fact, conjugating by $\binom{1\ \
0}{0\ \ Q}$ as in Corollary \ref{iso8id1} yields a subgroup
$GSp_{A_{\mathbb{E}}}\binom{Q\ \ 0}{0\ \ Q}_{\rho,Q}$ of
$GSp_{A_{\mathbb{E}}}\binom{Q\ \ 0}{0\ \ Q}$ with a simpler description: All the
entries of elements $\binom{e\ \ f}{g\ \ h}$ of that group come from
$A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2},0}$, the elements
$eQ\overline{f}=fQ\overline{e}$ and $gQ\overline{h}=hQ\overline{g}$ of
$A_{\mathbb{E}}^{-}$ lie in $(A_{\mathbb{E}}^{-})_{\rho,Q}$, while
$\overline{e}Q^{-1}g=\overline{g}Q^{-1}e$ and
$\overline{f}Q^{-1}h=\overline{h}Q^{-1}f$ are in
$(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$ (use parts $(ii)$ and $(v)$ of Lemma
\ref{QthetaQrel} again). Moreover, when many of these terms are
invertible, some of the latter assertions follow from one another---see Lemma
\ref{QthetaQrel} and the remarks following it. However, once non-zero
entries which are not invertible are involved, there may be some matrices
satisfying these conditions which do not belong to
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$. Hence we content
ourselves with the description appearing in Corollary \ref{GSprhoQst}.
The reason for considering $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \
-1}_{\rho,Q}$ is given in the following
\begin{lem}
Consider the operation of $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \
-1}_{\rho,Q}$, viewed as a subgroup of
$\widetilde{GSp}_{A_{\mathbb{E}}}^{\mathbb{E}^{2}}\binom{1\ \ \ \ 0}{0\ \ -1}$
via the lift from Proposition \ref{GSpAErhoFQ}, on the $\mathbb{E}$-vector space
$A_{\mathbb{E}}^{-} \oplus H_{\mathbb{E}}$. This action preserves the
$\mathbb{F}$-subspace $(A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H$. The group
generated by $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ and the
element $\hat{Q}\tilde{\psi}$ from Lemma \ref{Qhatpsi} contains the latter
group with index 2, and maps into $O\big((A_{\mathbb{E}}^{-})_{\rho,Q} \oplus
H\big)$ as well. \label{ac8igen}
\end{lem}
\begin{proof}
As in Proposition \ref{GSppresHA-}, it suffices to prove the assertion for a
generating subset of the subgroup. Corollary \ref{genform} and the definition of
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ show that the set
consisting of unipotent matrices $\binom{1\ \ v}{0\ \ 1}$ with
$v\in(A_{\mathbb{E}}^{-})_{\rho,Q}$ and $\binom{1\ \ 0}{w\ \ 1}$ where
$w\in(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$ together with the subgroup of
diagonal matrices $\binom{a\ \ \ \ 0\ \ \ }{0\ \ m\overline{a}^{-1}}$ with $a
\in A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ and $m\in\mathbb{F}^{\times}$ is
such a generating set. The action of these generators on an arbitrary element
$\binom{u\ \ -p}{q\ \ -\tilde{u}} \in A_{\mathbb{E}}^{-} \oplus H_{\mathbb{E}}$
was seen in the proof of Proposition \ref{GSppresHA-} to be as follows: The $u$
coordinate becomes $u+qv$, $u+p\tilde{w}$, and $\frac{au\overline{a}}{t(a)}$
(recall the choice of the $\psi$-image), $p$ is sent to $p+2\langle u,v
\rangle+q|v|^{2}$, $p$, and $\frac{t(a)p}{m}$, and $q$ is mapped to $q$,
$q+2\langle u,\tilde{w} \rangle+p|w|^{2}$, and $\frac{mq}{t(a)}$, respectively.
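In compact form, these actions on the coordinate triple $(u,p,q)$ of
$\binom{u\ \ -p}{q\ \ -\tilde{u}}$ read
\[
\binom{1\ \ v}{0\ \ 1}:(u,p,q)\mapsto\big(u+qv,\ p+2\langle u,v\rangle+q|v|^{2},\ q\big),\qquad
\binom{1\ \ 0}{w\ \ 1}:(u,p,q)\mapsto\big(u+p\tilde{w},\ p,\ q+2\langle u,\tilde{w}\rangle+p|w|^{2}\big),
\]
\[
\binom{a\ \ \ \ 0\ \ \ }{0\ \ m\overline{a}^{-1}}:(u,p,q)\mapsto
\Big(\tfrac{au\overline{a}}{t(a)},\ \tfrac{t(a)p}{m},\ \tfrac{mq}{t(a)}\Big).
\]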
Our assumptions on $v$, $w$, $m$, and $a$ show, using part $(iv)$ of Lemma
\ref{QthetaQrel} for $w$ and Lemma \ref{ac6gen} for $a$, that if $p$ and $q$ are
from $\mathbb{F}$ and $u\in(A_{\mathbb{E}}^{-})_{\rho,Q}$ then the same
assertion holds for their images. The remaining assertions follow, as in the
proof of Lemma \ref{ac6gen}, from Lemma \ref{Qhatpsi}, Lemma \ref{ac8id1} over
$\mathbb{E}$, and the fact that $(A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H$
inherits its quadratic structure from $A_{\mathbb{E}}^{-} \oplus
H_{\mathbb{E}}$. This proves the lemma.
\end{proof}
The assertion about reflections in this case appears in the following
\begin{lem}
Let $g$ be an anisotropic element of $(A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H$.
Then the product $g\hat{Q}^{-1}$ belongs to $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \
0}{0\ \ -1}_{\rho,Q}$, and if we compose the action of the element
$\hat{Q}\tilde{\psi}$ from Lemma \ref{Qhatpsi} with that of the latter
composition we obtain the reflection in $g$ on $(A_{\mathbb{E}}^{-})_{\rho,Q}
\oplus H$. \label{ref8igen}
\end{lem}
\begin{proof}
First, the fact that $(A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H$ is a quadratic
space over $\mathbb{F}$ means that all the multipliers are from $\mathbb{F}$.
Consider the element $g\hat{Q}^{-1}$, for anisotropic $g=\binom{u\ \ -p}{q\ \
-\tilde{u}}\in(A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H$. If $p\neq0$, we multiply
it by $\binom{0\ \ Q}{\tilde{Q}\ \ 0}$ from the right. As the product of
$\hat{Q}^{-1}=\frac{\psi(\hat{Q})}{|Q|^{2}}$ and the latter element yields the
matrix $\binom{\ \ 0\ \ 1}{-1\ \ 0}$ from the proof of Lemma \ref{vnorm8}, we
may use the parameters given in that lemma for this case. As the scalar
$p\in\mathbb{F}^{\times}$ lies in $A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$, the
vector $\frac{u}{p}$ is in $(A_{\mathbb{E}}^{-})_{\rho,Q}$ by our assumption on
$u$, and $\frac{\tilde{u}}{p}\in(A_{\mathbb{E}}^{-})_{\rho,\tilde{Q}}$ by part
$(i)$ of Lemma \ref{QthetaQrel}, we indeed get $g\hat{Q}^{-1} \in
GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ if $p\neq0$. When
$p=0$, so that $u \in A_{\mathbb{E}}^{\times}$, we get a matrix of multiplier
$\frac{|u|^{2}}{|Q|^{2}}$ for which the parameters may be taken to be
$v=\alpha=0$, $a=uQ^{-1}$, and $\beta=\frac{q\tilde{u}}{|u|^{2}}$. Parts $(i)$
and $(iii)$ of Lemma \ref{QthetaQrel} show that $g\hat{Q}^{-1} \in
GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ in case $p=0$ as well.
We must, however, consider these elements in the double cover
$\widetilde{GSp}_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}$. Recall from the
proof of Lemma \ref{vnorm8} that the branch of $\psi$ which coincides with
$\hat{\psi}$ is determined for $p\neq0$ by the condition that after right
multiplication by $\binom{\ \ 0\ \ 1}{-1\ \ 0}$ (considered as its own
$\psi$-image) the square root of $N^{A}_{\mathbb{F}}(p)=p^{4}$ is $p^{2}$, while
for $p=0$ we take $-|u|^{2}$ for the square root of $N^{A}_{\mathbb{F}}(u)$. One
verifies that the chosen $\psi$-image in the definition of $\hat{Q}$ and
$\hat{Q}^{-1}$ in Lemma \ref{Qhatpsi} and the $\psi$-image of $\binom{0\ \
Q}{\tilde{Q}\ \ 0}$ as an element of $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\
\ -1}_{\rho,Q}$ combine to give $\binom{\ \ 0\ \ 1}{-1\ \ 0}$ with itself as its
$\psi$-image, and indeed $p^{2}$ is the multiplier of $p \in
A_{\mathbb{E},\rho,\mathbb{F}Q}^{t^{2}}$ as a parameter of $g\binom{\ \ 0\ \
1}{-1\ \ 0}$. On the other hand, part $(iii)$ of Lemma \ref{QthetaQrel} shows
that the multiplier of the parameter $uQ^{-1}$ is $-\frac{|u|^{2}}{|Q|^{2}}$,
and multiplying it by the chosen square root $|Q|^{2}$ of the diagonal element
$\hat{Q}$ yields the desired value $-|u|^{2}$. It follows that the composition
of $g\hat{Q}^{-1}$ and $\hat{Q}\tilde{\psi}$ is the combination $g\tilde{\psi}$,
with $g \in \widetilde{GSp}_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}$ whose
$\psi$-image coincides with $\hat{\psi}(g)$. The action of this element on
$A_{\mathbb{E}}^{-} \oplus H_{\mathbb{E}}$ was seen in Lemma \ref{ref8id1} (over
$\mathbb{E}$) to be the reflection in $g$, and this assertion descends to
$(A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H$ for anisotropic $g$ which is taken from
the latter space. This completes the proof of the lemma.
\end{proof}
Observe that Lemma \ref{ref8igen} in fact uses the alternative representation
appearing in Corollary \ref{alt8id1} with $\Xi=\hat{Q}$, with right
multiplication by $\hat{Q}^{-1}$ on the space and $\psi_{\hat{Q}}$ as the
automorphism of the group.
Now we are in a position to prove
\begin{thm}
We have $Gspin\big((A_{\mathbb{E}}^{-})_{\rho,Q} \oplus
H\big)=GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$, and the spin
group $spin\big((A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H\big)$ is the subgroup
$Sp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ consisting of those
elements of $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$
having multiplier 1 (namely the intersection of $GSp_{A_{\mathbb{E}}}\binom{1\ \
\ \ 0}{0\ \ -1}_{\rho,Q}$ with $\widetilde{Sp}_{A_{\mathbb{E}}}\binom{1\ \ \ \
0}{0\ \ -1}$). \label{dim8igen}
\end{thm}
\begin{proof}
We have a surjective map from the semi-direct product of $\{1,\tilde{\psi}\}$
with $\widetilde{GSp}_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}$ (which we may
consider as generated by $\hat{Q}\tilde{\psi}$ and the latter group) onto
$O(A_{\mathbb{E}}^{-} \oplus H_{\mathbb{E}})$, whose kernel is
$\mathbb{E}^{\times}$, and such that the inverse image of
$SO(A_{\mathbb{E}}^{-} \oplus H_{\mathbb{E}})$ is
$\widetilde{GSp}_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}$. By Lemma
\ref{ac8igen}, the subgroup which $\hat{Q}\tilde{\psi}$ generates with
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ is sent to the
subgroup $O\big((A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H\big)$ of
$O(A_{\mathbb{E}}^{-} \oplus H_{\mathbb{E}})$. Invoking Lemma \ref{ref8igen} and
Proposition \ref{CDT}, we obtain that this subgroup surjects onto
$O\big((A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H\big)$. By Theorem \ref{dim8id1}
and the fact that the determinant commutes with the injection of
$O\big((A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H\big)$ into $O(A_{\mathbb{E}}^{-}
\oplus H_{\mathbb{E}})$, we find that an element of the group generated by
$\hat{Q}\tilde{\psi}$ and $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \
-1}_{\rho,Q}$ is sent to $SO\big((A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H\big)$
if and only if it comes from $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \
-1}_{\rho,Q}$ alone. It follows that the map $GSp_{A_{\mathbb{E}}}\binom{1\ \ \
\ 0}{0\ \ -1}_{\rho,Q} \to O\big((A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H\big)$
is surjective. An element of $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \
-1}_{\rho,Q}$ acts trivially if and only if it is a scalar matrix $rI$ (with
$r\in\mathbb{E}^{\times}$) such that its $\rho$-image is the same as
$\hat{Q}\psi(rI)\hat{Q}^{-1}$, and moreover we require $\psi(rI)$ to be $+rI$ in
the double cover $\widetilde{GSp}_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}$.
This happens if and only if $r\in\mathbb{F}^{\times}$ (non-zero elements
$r\in\mathbb{E}_{0}$ satisfy the first condition but have the wrong sign of
$\psi(rI)$, hence they do not operate trivially but rather as
$-Id_{(A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H}$). This proves that
$Gspin\big((A_{\mathbb{E}}^{-})_{\rho,Q} \oplus
H\big)=GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$. The spinor
norm of an element of $\widetilde{GSp}_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \
-1}$ was seen in Theorem \ref{dim8id1} to be the image of its multiplier $m$, so
that the same assertion holds for the images of $GSp_{A_{\mathbb{E}}}\binom{1\ \
\ \ 0}{0\ \ -1}_{\rho,Q}$ in $SO\big((A_{\mathbb{E}}^{-})_{\rho,Q} \oplus
H\big)$. It follows that $SO^{1}\big((A_{\mathbb{E}}^{-})_{\rho,Q} \oplus
H\big)$ consists of images of those $g \in GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \
0}{0\ \ -1}_{\rho,Q}$ such that $m(g)\in(\mathbb{F}^{\times})^{2}$. Dividing by
scalars, we restrict attention to those $g$ with $m(g)=1$, i.e., to $g \in
Sp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$, and as the kernel of
the restriction to $Sp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\rho,Q}$ is
$\{\pm1\}$, the latter group is indeed
$spin\big((A_{\mathbb{E}}^{-})_{\rho,Q} \oplus H\big)$. This completes the proof
of the theorem.
\end{proof}
\smallskip
When we wish to consider the independence of our groups of the choices which we
made, we first observe that by the Witt Cancellation Theorem, the complement of
any hyperbolic plane inside an isotropic space is independent (up to
isomorphism) of the specific hyperbolic plane we took. Hence it remains to
see what happens when we change the choices for the complement
$(A_{\mathbb{E}}^{-})_{\rho,Q}$. For this we again denote by $\sigma$ the map
on $A_{\mathbb{E}}$ which was previously denoted $\rho$, and let $Q$ and
$\sigma$ vary. The resulting independence assertion appears in
\begin{prop}
Let $\sigma$, $\tau$, and $\eta$ be ring automorphisms of $A_{\mathbb{E}}$, all
of order 2, which restrict to $\rho$ on $\mathbb{E}$, and let $Q$, $R$, $S$ and
$T$ be elements of $A_{\mathbb{E}}^{-}$ which satisfy the conditions of parts
$(i)$, $(ii)$, and $(iii)$ of Proposition \ref{NQF2NAE}. Assume further that
the element $T=Qb^{-1}$ of $A_{\mathbb{E}}^{-}$ has vector norm
$\frac{|Q|^{2}}{b\overline{b}}$. Then the four groups
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\sigma,Q}$,
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\sigma,R}$,
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\tau,S}$, and
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\eta,T}$ are isomorphic in
such a way that the isomorphisms take the $Sp$ subgroups to one another.
\label{indepQ}
\end{prop}
\begin{proof}
The proof of Proposition \ref{NQF2NAE} shows that
$A_{\mathbb{E},\sigma,\mathbb{F}Q}^{t^{2}}$ coincides with
$A_{\mathbb{E},\eta,\mathbb{F}S}^{t^{2}}$, and conjugation by $e$ takes
this group to $A_{\mathbb{E},\tau,\mathbb{F}eQ\overline{e}}^{t^{2}}$. Moreover,
this proof yields the existence of two elements $c$ and $d$ of
$A_{\mathbb{E}}^{\times}$, with $c^{\sigma}=c$ and $d^{\tau}=d$, such that $c$
conjugates $A_{\mathbb{E},\sigma,\mathbb{F}Q}^{t^{2}}$ to
$A_{\mathbb{E},\sigma,\mathbb{F}R}^{t^{2}}$ and $d$ conjugates
$A_{\mathbb{E},\tau,\mathbb{F}eQ\overline{e}}^{t^{2}}$ to
$A_{\mathbb{E},\tau,\mathbb{F}S}^{t^{2}}$. We first claim that the action
of $c$ (resp. $e$) on $A_{\mathbb{E}}^{-}$ sends the space
$(A_{\mathbb{E}}^{-})_{\sigma,Q}$ to $(A_{\mathbb{E}}^{-})_{\sigma,R}$ (resp.
$(A_{\mathbb{E}}^{-})_{\tau,eQ\overline{e}}$). Indeed, for any $v \in
A_{\mathbb{E}}^{-}$ we have $(cv\overline{c})^{\sigma}=cv^{\sigma}\overline{c}$
and $(ev\overline{e})^{\tau}=ev^{\sigma}\overline{e}$ (this is clear for the
$\sigma$-invariant element $c$, and for $e$ we use the fact that
$x^{\tau}=ee^{-\sigma}x^{\sigma}e^{\sigma}e^{-1}$ and the invariance of
$ee^{-\sigma}$ under $x\mapsto\overline{x}^{\sigma}$). Hence if
$v\in(A_{\mathbb{E}}^{-})_{\sigma,Q}$ then we substitute the value of
$v^{\sigma}$ from Lemma \ref{sp6gen}, and then using Lemma \ref{AxA-rel} and
Proposition \ref{NAFg} we get the asserted result (recall that
$R=rcQ\overline{c}$). It follows that the operation of $d$ maps
$(A_{\mathbb{E}}^{-})_{\tau,eQ\overline{e}}$ to
$(A_{\mathbb{E}}^{-})_{\tau,S}$. Next, we show that
$(A_{\mathbb{E}}^{-})_{\eta,T}=(A_{\mathbb{E}}^{-})_{\sigma,Q}$ as well. To see
this, first observe that if $\overline{b}^{\sigma}=b$ and
$b\overline{b}\in\mathbb{E}^{\times}$ then applying $\sigma$ to the latter
scalar yields $\overline{b}b$. As $b$ is invertible (with inverse
$\frac{\overline{b}}{b\overline{b}}$), this shows that the latter scalar lies in
fact in $\mathbb{F}^{\times}$. The fact that both $Q$ and $T=Qb^{-1}$ are in
$A_{\mathbb{E}}^{-}$ allows us now to write $T$ also as
$\frac{bQ}{b\overline{b}}$. Hence if $v\in(A_{\mathbb{E}}^{-})_{\sigma,Q}$ then
using the definition of $\eta$ and Lemma \ref{sp6gen} we find that
$v^{\eta}=-b\overline{b}\frac{T\tilde{v}T}{|Q|^{2}}$, which equals
$-\frac{T\tilde{v}T}{|T|^{2}}$ by our assumption on $|T|^{2}$. This proves that
$v$ indeed lies in $(A_{\mathbb{E}}^{-})_{\eta,T}$. The equality of the two
spaces, as well as the fact that the entire spaces with $R$, $eQ\overline{e}$,
and $S$ are in the image of $(A_{\mathbb{E}}^{-})_{\sigma,Q}$, follows either
by inverting the above argument or by comparing dimensions (and using the
injectivity of all the operations considered here).
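Schematically, writing the action of an invertible element $x$ on
$A_{\mathbb{E}}^{-}$ as $v\mapsto xv\overline{x}$, the identifications
established in this paragraph read
\[
c:(A_{\mathbb{E}}^{-})_{\sigma,Q}\to(A_{\mathbb{E}}^{-})_{\sigma,R},\qquad
e:(A_{\mathbb{E}}^{-})_{\sigma,Q}\to(A_{\mathbb{E}}^{-})_{\tau,eQ\overline{e}},\qquad
d:(A_{\mathbb{E}}^{-})_{\tau,eQ\overline{e}}\to(A_{\mathbb{E}}^{-})_{\tau,S},
\]
together with the equality
$(A_{\mathbb{E}}^{-})_{\eta,T}=(A_{\mathbb{E}}^{-})_{\sigma,Q}$.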
Let now $g$ be an element of $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \
-1}_{\sigma,Q}$, and we present it by using the appropriate parameters in
Corollary \ref{genform}. The previous paragraph shows that by using the same
parameters we also get that $g \in GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \
-1}_{\eta,T}$. For the other groups, observe that when we conjugate the
generators $\binom{1\ \ v}{0\ \ 1}$, $\binom{1\ \ 0}{w\ \ 1}$, and $\binom{a\ \
\ \ 0\ \ \ }{0\ \ m\overline{a}^{-1}}$ of the group
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}$ by some diagonal element of
the form $\binom{x\ \ \ 0\ \ }{0\ \ \overline{x}^{-1}}$ (of multiplier 1), then
$v$, $w$, and $a$ are taken to $xv\overline{x}$, $\overline{x}^{\ -1}wx^{-1}$,
and $xax^{-1}$ respectively. Moreover, if $w=\tilde{u}$ for some $u \in
A_{\mathbb{E}}^{-}$ then its image is $\widetilde{xu\overline{x}}$ (up to the
scalar $N^{A}_{\mathbb{F}}(x)$) by Lemma \ref{AxA-rel}. For the generators of
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\sigma,Q}$, in which
$v\in(A_{\mathbb{E}}^{-})_{\sigma,Q}$,
$w\in(A_{\mathbb{E}}^{-})_{\sigma,\tilde{Q}}$, and $a \in
A_{\mathbb{E},\sigma,\mathbb{F}Q}^{t^{2}}$, we have seen that for $x=c$ (resp.
$x=e$) the images of $v$ and $a$ lie in $(A_{\mathbb{E}}^{-})_{\sigma,R}$ and
$A_{\mathbb{E},\sigma,\mathbb{F}Q}^{t^{2}}$ (resp.
$(A_{\mathbb{E}}^{-})_{\tau,eQ\overline{e}}$ and
$A_{\mathbb{E},\tau,\mathbb{F}eQ\overline{e}}^{t^{2}}$) respectively. Moreover,
$w=\tilde{u}$ for $u\in(A_{\mathbb{E}}^{-})_{\sigma,Q}$ by parts $(i)$ and
$(iv)$ of Lemma \ref{QthetaQrel}, and the fact that both $c$ and $e$ have
reduced norms in $\mathbb{F}$ shows that its image is the $\theta$-image of an
element which lies in $(A_{\mathbb{E}}^{-})_{\sigma,R}$ (resp.
$(A_{\mathbb{E}}^{-})_{\tau,eQ\overline{e}}$). Hence part $(i)$ of Lemma
\ref{QthetaQrel} proves the assertion for $w$ as well. This shows that
conjugation by $\binom{c\ \ \ 0\ \ }{0\ \ \overline{c}^{-1}}$ (resp. $\binom{e\
\ \ 0\ \ }{0\ \ \overline{e}^{-1}}$) sends $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \
0}{0\ \ -1}_{\sigma,Q}$ to $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \
-1}_{\sigma,R}$ (resp. $GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \
-1}_{\tau,eQ\overline{e}}$), and further conjugation of the latter group by
$\binom{d\ \ \ 0\ \ }{0\ \ \overline{d}^{-1}}$ takes it to
$GSp_{A_{\mathbb{E}}}\binom{1\ \ \ \ 0}{0\ \ -1}_{\tau,S}$. The fact that these
identifications and conjugations yield the full groups is established either
by inverting the above argument, or by observing that since the maps and
identifications from the previous paragraph are all surjective, we get all the
generators of the required groups in this process. Since our identifications
and isomorphisms commute with the multiplier maps to $\mathbb{F}^{\times}$, the
$Sp$ groups are sent to one another in this process. This completes the proof of
the proposition.
\end{proof}
Note that as both $\eta$ and $v\mapsto-\frac{T\tilde{v}T}{|T|^{2}}$ are
involutions which separate $A_{\mathbb{E}}^{-}$ into $\pm1$-eigenspaces, the proof
of Proposition \ref{indepQ} already shows that the vector norm of $T=Qb^{-1}$
must be $\pm\frac{|Q|^{2}}{b\overline{b}}$. It seems likely that only the $+$
sign is possible (making the additional assumption in Proposition \ref{indepQ}
redundant), but we have not checked this out in detail. The remarks about the
transitivity of the relations from Proposition \ref{NQF2NAE} on the possible
choices of ring automorphisms of $A_{\mathbb{E}}^{-}$ (commuting with
$\iota_{B}\otimes\iota_{C}$ and reducing to $\rho$ on $\mathbb{E}$ as usual)
extend to similar assertions for Proposition \ref{indepQ}.
Going back to our previous notation, with $\rho$ on $A_{\mathbb{E}}$ as well as
on $M_{2}(A_{\mathbb{E}})$, the spaces with more isotropic vectors which appear
in this case are considered in the following
\begin{cor}
Assume that after extending scalars to $\mathbb{E}$, the quadratic space
$\big((A_{\mathbb{E}}^{-})_{\rho,Q} \oplus
H\big)_{\mathbb{E}}=A_{\mathbb{E}}^{-} \oplus H_{\mathbb{E}}$ splits more than
one hyperbolic plane. Then there exists some quaternion algebra $B$ over
$\mathbb{F}$ and some number $\delta\in\mathbb{F}^{\times}$ representing a class
modulo $N^{B}_{\mathbb{F}}(B^{\times})$ such that the following assertions hold:
The quadratic space is $\mathbb{E} \oplus B \oplus H$ with the norms from
$\mathbb{E}$ multiplied by $-\delta$, and its Gspin and spin groups, denoted
$GSp_{4}(B_{\mathbb{E}})_{\rho,\delta}$ and
$Sp_{4}(B_{\mathbb{E}})_{\rho,\delta}$ respectively, consist of those elements
of $GSp_{4}(B_{\mathbb{E}})$ and $Sp_{4}(B_{\mathbb{E}})$ on which $\psi$ is
defined as in the remark following Corollary \ref{iso8id1}, and conjugating it
by the diagonal matrix with diagonal entries $-\delta$, 1, $-1$, and $\delta$
operates in the same way as $\rho$. When the latter $\mathbb{E}$-space splits
more than two hyperbolic planes, meaning that it is the direct sum of 4
hyperbolic planes, there exist representatives $\varepsilon$ of a class
modulo $N^{\mathbb{E}}_{\mathbb{F}}(\mathbb{E}^{\times})$ and $\delta$ of a
class with respect to $N^{B}_{\mathbb{F}}(B^{\times})$ for
$B=(\mathbb{E},\rho,\varepsilon)$, for which the following holds: The space is
the direct sum of a hyperbolic plane and three copies of $\mathbb{E}$, with the
norms in two of the copies multiplied by $-\varepsilon$ and $-\delta$, and the spin
group is the subgroup of a double cover of $SO^{1}\binom{0\ \ I}{I\ \ 0}$ over
$\mathbb{E}$ on which $\rho$ coincides with the conjugation of $\psi$ by the
diagonal $8\times8$ matrix whose diagonal entries are $\delta\varepsilon$,
$-\delta$, $-\varepsilon$, 1, $\varepsilon$, $-1$, $-\delta\varepsilon$, and
$\delta$. The Gspin group is the subgroup of the spinor norm related subgroup
of the general special orthogonal group of $\binom{0\ \ I}{I\ \ 0}$ which is
defined by the same relation between $\rho$ and $\psi$. \label{iso8igen}
\end{cor}
\begin{proof}
The existence of $B$ and $\delta$, as well as $\varepsilon$, is a consequence of
Corollary \ref{iso6gen}, from which we also adopt the choice of $Q$ to be
$\binom{0\ \ \delta}{1\ \ 0}$. The form of the spaces is then given in Corollary
\ref{alt6gen}. The description of the Gspin and spin groups now follows from
Theorem \ref{dim8igen}, Corollary \ref{GSprhoQst}, and Corollary \ref{iso8id1},
taking into consideration the conjugation by $\binom{1\ \ 0}{0\ \ R}$ or
$\binom{1\ \ 0}{0\ \ S}$ (which are $\rho$-invariant and operate on our
$\hat{Q}$ in the same way), the additional conjugation by $\binom{R\ \ 0}{0\ \
R}$ or $\binom{S\ \ 0}{0\ \ S}$ in the definition of $\psi$ on these groups, and
the action of $\rho$ on $(\mathbb{E},\rho,\varepsilon)$ for the latter case.
This proves the corollary.
\end{proof}
We remark that in the second case considered in Corollary \ref{iso8igen} the
ambient spin group of the direct sum of 4 hyperbolic planes over $\mathbb{E}$
comes, as seen in Corollary \ref{iso8id1}, with three inequivalent maps onto
the associated $SO^{1}$ group. The subgroup considered in Corollary
\ref{iso8igen} in this case is defined by the images in two of these
representations (those which are not the defining representation as a spin group)
being $\rho$-images of one another, after replacing one of them by an
equivalent one. The fact that the double cover splits over this group, but
still maps to the defining representation with an order 2 kernel, is related to
the fact that $-I$ equals its $\rho$-image. Indeed, of the order 2 elements in
the kernels of the three representations, only $-I$ with $\psi$-image $-I$
satisfies the $\psi_{\hat{Q}}=\rho$ condition. The associated Gspin group maps
onto the full special orthogonal group (with kernel $\mathbb{F}^{\times}$) in
the first representation, but remains injective in the other two
representations, allowing the bilinear form to be multiplied by scalars from
$\mathbb{F}^{\times}$ (the same scalar in both representations).
\smallskip
Every quadratic space of dimension 7 is contained in a space of the sort
considered in Lemma \ref{sp8igen}. Indeed, just choose a non-zero vector norm
attained on the space, and add a vector whose vector norm is the additive
inverse of the chosen one.
However, there is then a relation between the choice of the vector norm and the
discriminant of the resulting 8-dimensional space, meaning that the description
of the groups thus obtained depends on many choices. Hence we content ourselves
with 7-dimensional spaces in which we can make the resulting space have
discriminant 1, yielding a ``canonical'' complement, and do not pursue the rest
(until a good description of non-isotropic 8-dimensional spaces becomes
available).
\section{Fields with Many Squares \label{ManySq}}
Both the discriminant and the spinor norm take values in the group
$\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$. It is thus worthwhile to
consider explicitly the cases where this group is very small. A field of
characteristic different from 2 for which this group is trivial is called
\emph{quadratically closed}. This is the case, for example, when $\mathbb{F}$
is algebraically closed, e.g., $\mathbb{F}=\mathbb{C}$. Over such a field
$\mathbb{F}$ (with characteristic different from 2), every two non-degenerate
quadratic spaces of the same dimension $n$ are isomorphic, hence the special
orthogonal group can be denoted simply $SO(n,\mathbb{F})$. The group
$SO^{1}(n,\mathbb{F})$ always coincides with $SO(n,\mathbb{F})$, as the spinor
norm is trivial. Note that $\mathbb{F}$ admits neither non-trivial quadratic
field extensions nor non-split quaternion algebras, so that we may always take
$\mathbb{E}=\mathbb{F}\times\mathbb{F}$ and $B$ (or $C$) to be
$M_{2}(\mathbb{F})$. Gathering the results of Theorems \ref{dim12}, \ref{dim3},
\ref{dim4}, \ref{dim6d1}, \ref{dim5}, \ref{dim6gen}, \ref{dim8id1},
\ref{dim7rd}, and \ref{dim8igen} and their corollaries, we establish the
following assertions:
$SO(1,\mathbb{F})=\{1\}$, with $spin(1,\mathbb{F})=\{\pm1\}$ and
$Gspin(1,\mathbb{F})=\mathbb{F}^{\times}$.
$SO(2,\mathbb{F})\cong\mathbb{F}^{\times}$, with $spin(2,\mathbb{F})$ being also
$\mathbb{F}^{\times}$ and
$Gspin(2,\mathbb{F})=\mathbb{F}^{\times}\times\mathbb{F}^{\times}$.
$spin(3,\mathbb{F})=SL_{2}(\mathbb{F})$, $SO(3,\mathbb{F})=PSL_{2}(\mathbb{F})$,
and $Gspin(3,\mathbb{F})=GL_{2}(\mathbb{F})$.
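These dimension 3 identifications can be illustrated by a standard model (a
sketch, not necessarily the construction used in the text): take the quadratic
space to be the trace zero matrices in $M_{2}(\mathbb{F})$ with the determinant
as the quadratic form, on which $GL_{2}(\mathbb{F})$ acts by conjugation,
\[
X \in M_{2}(\mathbb{F}),\quad \operatorname{tr}X=0, \qquad g\cdot X=gXg^{-1},
\qquad \det(g\cdot X)=\det X.
\]
The kernel of this action consists of the scalar matrices, so the restriction
to $SL_{2}(\mathbb{F})$ has kernel $\{\pm I\}$ and image $PSL_{2}(\mathbb{F})$,
in accordance with the assertions above.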
$spin(4,\mathbb{F})=SL_{2}(\mathbb{F}) \times SL_{2}(\mathbb{F})$,
$SO(4,\mathbb{F})$ is the quotient by $\{\pm(I,I)\}$, and $Gspin(4,\mathbb{F})$
is the subgroup of $GL_{2}(\mathbb{F}) \times GL_{2}(\mathbb{F})$ determined by the
equal determinant condition.
$spin(5,\mathbb{F})=Sp_{4}(\mathbb{F})$, $SO(5,\mathbb{F})=PSp_{4}(\mathbb{F})$,
and $Gspin(5,\mathbb{F})=GSp_{4}(\mathbb{F})$.
$spin(6,\mathbb{F})=SL_{4}(\mathbb{F})$, $SO(6,\mathbb{F})$ is the quotient by
$\pm I$ (this is not $PSL_{4}(\mathbb{F})$, since in order to obtain the latter
group one must also divide by the two square roots of $-1$), and
$Gspin(6,\mathbb{F})=GL_{4}(\mathbb{F})$ since the $(\mathbb{F}^{\times})^{2}$
condition on the determinant is vacuous over a quadratically closed field.
$spin(7,\mathbb{F})$ is the subgroup of those elements $\binom{a\ \ b}{c\ \ d}$
of $SO\binom{0\ \ I}{I\ \ 0} \subseteq GL_{8}(\mathbb{F})$ in which $ad^{t}$ and
$bc^{t}$ are in $\mathbb{F}$ and square to the determinants of the corresponding
$4\times4$ blocks, and either $bd^{-1}$ and $ca^{-1}$ or $ac^{-1}$ and $db^{-1}$
are anti-symmetric and multiply to give minus their Pfaffian.
$Gspin(7,\mathbb{F})$ is the group defined by the same conditions, but in which
the bilinear form arising from $\binom{0\ \ I}{I\ \ 0}$ may be multiplied by a
scalar.
Finally, $spin(8,\mathbb{F})$ has 3 inequivalent representations in which it
maps onto the $SO\binom{0\ \ I}{I\ \ 0}$ with different order 2 kernels.
$Gspin(8,\mathbb{F})$ maps to $SO\binom{0\ \ I}{I\ \ 0}$ with kernel
$\mathbb{F}^{\times}$, but extending the two other representations of
$spin(8,\mathbb{F})$ to it results in transformations which may multiply the
bilinear form defined by $\binom{0\ \ I}{I\ \ 0}$ by non-trivial scalars.
\smallskip
We now present the case where $\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$
has order 2 (and $ch\mathbb{F}\neq2$). In this case there is a unique quadratic
field extension of $\mathbb{F}$, which we denote $\mathbb{E}$. Hence all the
(non-trivial) unitary groups defined over $\mathbb{F}$ are based on $\mathbb{E}$
with its Galois automorphism $\rho$ over $\mathbb{F}$. One family of fields
having this property is the family of finite fields of odd cardinality. As
another example, recall that a field $\mathbb{F}$ is \emph{Euclidean} if it is
ordered and every positive element is a square. For these fields we have
\begin{prop}
If $\mathbb{F}$ is Euclidean then $\mathbb{E}$ equals $\mathbb{F}(\sqrt{-1})$,
it is quadratically closed, and we have
$N^{\mathbb{E}}_{\mathbb{F}}(\mathbb{E}^{\times})=(\mathbb{F}^{\times})^{2}$.
Moreover, the quaternion algebra $\big(\frac{-1,-1}{\mathbb{F}}\big)$, which we
denote $\mathbb{H}$ from now on, is not split and satisfies
$N^{\mathbb{H}}_{\mathbb{F}}(\mathbb{H}^{\times})=(\mathbb{F}^{\times})^{2}$ as
well. \label{Euc}
\end{prop}
\begin{proof}
We know that $\mathbb{E}=\mathbb{F}(\sqrt{-1})$ since $-1$ cannot be a square
in an ordered field. Thus, for any $z\in\mathbb{E}$,
$N^{\mathbb{E}}_{\mathbb{F}}(z)$ may be presented as the sum of two squares in
$\mathbb{F}$, not both zero if $z\neq0$. The assertion
$N^{\mathbb{E}}_{\mathbb{F}}(\mathbb{E}^{\times})=(\mathbb{F}^{\times})^{2}$ now
follows from the properties of orderings. A similar argument shows that
$N^{\mathbb{H}}_{\mathbb{F}}(\alpha)$ is the sum of four squares, hence
non-zero for $\alpha\neq0$. Hence $\mathbb{H}$ is a division algebra and
$N^{\mathbb{H}}_{\mathbb{F}}(\mathbb{H}^{\times})=(\mathbb{F}^{\times})^{2}$.
It remains to show that $\mathbb{E}$ is quadratically closed. Any
$z\in\mathbb{E}^{\times}$ can be written as $r^{2}u$ with
$r\in\mathbb{F}^{\times}$ and $u\in\mathbb{E}^{1}$: As
$N^{\mathbb{E}}_{\mathbb{F}}(z)\in(\mathbb{F}^{\times})^{2}$, it is the square
of some $t\in\mathbb{F}^{\times}$, and by replacing $t$ by $-t$ if necessary,
we may assume $t>0$. But then $t=r^{2}$, and
$u=\frac{z}{r^{2}}\in\mathbb{E}^{1}$ as desired. But then Hilbert's Theorem 90
implies that $u=\frac{w^{\rho}}{w}$ for $w\in\mathbb{E}^{\times}$, and as this
element is the quotient of
$N^{\mathbb{E}}_{\mathbb{F}}(w)\in(\mathbb{F}^{\times})^{2}$ and
$w^{2}\in(\mathbb{E}^{\times})^{2}$, we find that $u$ (hence also $z$) is a
square in $\mathbb{E}$. Thus $\mathbb{E}$ is indeed quadratically closed, which
completes the proof of the proposition.
\end{proof}
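To make the last step of the proof concrete, take $\mathbb{F}=\mathbb{R}$ (so
$\mathbb{E}=\mathbb{C}$) and $z=-4$:
\[
N^{\mathbb{E}}_{\mathbb{F}}(z)=16=4^{2},\qquad t=4=2^{2},\qquad
u=\tfrac{z}{4}=-1=\tfrac{w^{\rho}}{w}\ \text{ for }w=i,
\]
and indeed $u=\frac{N^{\mathbb{E}}_{\mathbb{F}}(w)}{w^{2}}=\frac{1}{-1}$ is a
square in $\mathbb{E}$, with $z=(2i)^{2}$.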
Note that $\mathbb{H}$ and $M_{2}(\mathbb{F})$ are the only quaternion algebras
over $\mathbb{F}$: For any symbol $\big(\frac{\alpha,\beta}{\mathbb{F}}\big)$
we may get an isomorphic quaternion algebra with $\alpha$ and $\beta$ taken
from $\{\pm1\}$, yielding $\mathbb{H}$ if $\alpha=\beta=-1$ and a split algebra
otherwise. Thus, there are also two bi-quaternion algebras over $\mathbb{F}$,
namely $M_{2}(\mathbb{H})$ and $M_{4}(\mathbb{F})$. All these split over the
quadratically closed extension $\mathbb{E}$. We also remark that the
Artin--Schreier Theorem states that every field $\mathbb{F}$ such that the
algebraic closure of $\mathbb{F}$ has non-trivial finite degree over
$\mathbb{F}$ (like $\mathbb{R}$) must be Euclidean.
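The normalization of the entries of the symbol is simply its invariance under
multiplication by squares: as $\mathbb{F}$ is Euclidean, every element of
$\mathbb{F}^{\times}$ is $\pm$ a square, so
\[
\Big(\tfrac{\alpha,\beta}{\mathbb{F}}\Big)\cong
\Big(\tfrac{\alpha s^{2},\,\beta t^{2}}{\mathbb{F}}\Big)\quad\text{for }
s,t\in\mathbb{F}^{\times},\qquad\text{e.g.}\qquad
\Big(\tfrac{-2,-3}{\mathbb{R}}\Big)\cong\Big(\tfrac{-1,-1}{\mathbb{R}}\Big)=\mathbb{H},
\]
while any symbol with a positive entry splits.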
For a non-degenerate quadratic form over a Euclidean field one defines the
\emph{signature}: An orthogonal basis can be normalized to have norms in $\pm1$,
and then the signature is the number of $+1$s and the number of $-1$s. These are
two numbers $(p,q)$ which sum to the dimension of the space, and we have
\begin{prop}
The signature classifies quadratic spaces over $\mathbb{F}$ up to isometry.
\label{sign}
\end{prop}
\begin{proof}
We can distinguish the spaces of signature $(n,0)$ and $(0,n)$ (these spaces
are called \emph{definite}, the others \emph{indefinite}) from the others by the
fact that the first one has only positive vector norms, the second has only
negative vector norms, and all the rest are isotropic. This completes the
verification for dimensions 1 and 2. For a larger dimension, it suffices to
prove that isotropic spaces with different signatures are not isometric. But
each such space splits a hyperbolic plane, and the complements must have
different signatures since a hyperbolic plane has signature $(1,1)$. As these
complements are not isometric by the induction hypothesis, the original spaces
cannot be isometric by the Witt Cancellation Theorem. This proves the
proposition.
\end{proof}
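A small example illustrating the inductive step: over $\mathbb{F}=\mathbb{R}$,
the space of signature $(2,1)$ with the form $x^{2}+y^{2}-z^{2}$ contains the
isotropic vectors
\[
(1,0,1)\quad\text{and}\quad(1,0,-1),
\]
whose pairing is $2$, so they span a hyperbolic plane; the orthogonal
complement is spanned by $(0,1,0)$ and has signature $(1,0)$, as predicted by
subtracting $(1,1)$ from $(2,1)$.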
\smallskip
For a finite field $\mathbb{F}$ of odd cardinality,
$\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$ has order 2, but it is not
Euclidean as it cannot be ordered (only fields of characteristic 0 may admit
orderings). In fact, we have the following complement to Proposition \ref{Euc}:
\begin{prop}
Let $\mathbb{F}$ be a field such that
$\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$ has order 2. If
$\mathbb{F}$ is not Euclidean, then
$N^{\mathbb{E}}_{\mathbb{F}}(\mathbb{E}^{\times})=\mathbb{F}^{\times}$,
$\mathbb{E}$ is not quadratically closed, there are no non-split quaternion
algebras over $\mathbb{F}$, and every quadratic form over $\mathbb{F}$ is
determined by its dimension and discriminant.
\label{F/F2=2}
\end{prop}
\begin{proof}
The norm group $N^{\mathbb{E}}_{\mathbb{F}}(\mathbb{E}^{\times})$ is a subgroup
of $\mathbb{F}^{\times}$ which contains $(\mathbb{F}^{\times})^{2}$. Hence, as
the latter has index 2 in the former,
$N^{\mathbb{E}}_{\mathbb{F}}(\mathbb{E}^{\times})$ must coincide with one of
these two groups. Now, the only quaternion algebra which may not split is
$\mathbb{H}=\big(\frac{d,d}{\mathbb{F}}\big)$ where $d\in\mathbb{F}^{\times}$ is
not a square. This algebra does split if and only if $d \in
N^{\mathbb{E}}_{\mathbb{F}}(\mathbb{E}^{\times})$, which is equivalent to
$N^{\mathbb{E}}_{\mathbb{F}}(\mathbb{E}^{\times})$ being the full group
$\mathbb{F}^{\times}$ by what we have seen above. $\mathbb{E}$ cannot be
quadratically closed in this case, since the norm of a square in
$\mathbb{E}$ is a square in $\mathbb{F}$.
We claim that if $\mathbb{H}$ does not split then $\mathbb{F}$ must be
Euclidean. First observe that the product of the two generators of $\mathbb{H}$
squares to $-d^{2}$, so that if $\mathbb{H}$ does not split then
$-1\not\in(\mathbb{F}^{\times})^{2}$, and we may take $d=-1$. But then norms
from $\mathbb{E}$ are sums of two squares. It thus follows from the assumption
$N^{\mathbb{E}}_{\mathbb{F}}(\mathbb{E}^{\times})=(\mathbb{F}^{\times})^{2}$
that $(\mathbb{F}^{\times})^{2}$ is closed under addition, and as
$\mathbb{F}^{\times}=(\mathbb{F}^{\times})^{2}\times\{\pm1\}$ we find that
$(\mathbb{F}^{\times})^{2}$ defines an ordering on $\mathbb{F}$. This proves
that $\mathbb{F}$ is indeed Euclidean.
It remains to prove that in the non-Euclidean case, a quadratic form is
determined by its dimension and discriminant. In dimension 1 the discriminant
characterizes the quadratic space over any field. In dimension 2 it follows from
Lemma \ref{sp2} that any space contains vectors of any given non-zero vector
norm: For trivial discriminant this is clear (a hyperbolic plane), and for the
non-trivial discriminant this follows from our assumption on
$N^{\mathbb{E}}_{\mathbb{F}}(\mathbb{E}^{\times})$. The assertion for dimension
2 is now clear, by taking a vector of vector norm 1 and knowing what the
orthogonal complement must be. Every quadratic space of dimension 3 (hence also
larger) must therefore be isotropic: Fix an anisotropic vector $v$, the space
$v^{\perp}$ must contain some $u$ with $|u|^{2}=-|v|^{2}$, and then $u+v$ is
isotropic. The assertion now follows by induction: If two quadratic spaces have
the same dimension at least 3 and the same discriminant, then both are
isotropic, both split hyperbolic planes, and in both the complement has the same
dimension and the same discriminant. As the complements are isometric by the
induction hypothesis, the same assertion holds for the original ones. This
completes the proof of the proposition.
\end{proof}
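For a finite field the surjectivity of the norm in Proposition \ref{F/F2=2} is
completely explicit: for $\mathbb{F}=\mathbb{F}_{q}$ with odd $q$ we have
$\mathbb{E}=\mathbb{F}_{q^{2}}$ and
\[
N^{\mathbb{E}}_{\mathbb{F}}(w)=w\cdot w^{q}=w^{q+1},
\]
so the image of the norm map on the cyclic group $\mathbb{E}^{\times}$ of order
$q^{2}-1$ has order $\frac{q^{2}-1}{q+1}=q-1$, i.e., the norm is onto
$\mathbb{F}_{q}^{\times}$.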
In addition to finite fields, every \emph{quasi-finite} field (i.e., a perfect
field which admits a unique extension of every finite order) of characteristic
different from 2 satisfies the conditions of Proposition \ref{F/F2=2}, and is
not Euclidean. The same holds for (perfect) fields whose absolute Galois group
misses some factors $\mathbb{Z}_{p}$ for odd $p$ in the pro-finite completion
of $\mathbb{Z}$, such as direct limits of finite fields where the power of 2 in
the exponent is bounded (otherwise the result is quadratically closed). Hence we
call a non-Euclidean field of characteristic different from 2 such that
$\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$ has order 2 \emph{quadratically
finite}.
In Proposition \ref{Euc} we have seen that the quadratic extension of a
Euclidean field is quadratically closed. As a complementary claim, Proposition
\ref{F/F2=2} has the following
\begin{cor}
If $\mathbb{F}$ is quadratically finite then so is its (unique) quadratic
extension $\mathbb{E}$. \label{quadfin}
\end{cor}
\begin{proof}
First observe that if $z\in\mathbb{E}$ satisfies
$N^{\mathbb{E}}_{\mathbb{F}}(z)\in(\mathbb{F}^{\times})^{2}$ then an argument
similar to the last part of the proof of Proposition \ref{Euc} shows that $z$
can be presented as $\frac{tN^{\mathbb{E}}_{\mathbb{F}}(w)}{w^{2}}$ for some
$t\in\mathbb{F}^{\times}$ and $w\in\mathbb{F}^{\times}$. It follows that
$z\in(\mathbb{E}^{\times})^{2}$ since we clearly have
$\mathbb{F}^{\times}\subseteq(\mathbb{E}^{\times})^{2}$. Thus the index of
$(\mathbb{E}^{\times})^{2}$ in $\mathbb{E}^{\times}$ can be at most 2, but it
has to be precisely 2 since Proposition \ref{F/F2=2} shows that
$\mathbb{E}^{\times}/(\mathbb{E}^{\times})^{2}$ cannot be of order 1. Since
$-1\in\mathbb{F}^{\times}$ is a square in $\mathbb{E}$, the latter field cannot
be Euclidean, hence it is quadratically finite by Proposition \ref{F/F2=2}. This
proves the corollary.
\end{proof}
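In the finite field case Corollary \ref{quadfin} reflects the tower
$\mathbb{F}_{q}\subseteq\mathbb{F}_{q^{2}}\subseteq\mathbb{F}_{q^{4}}\subseteq\cdots$
(with odd $q$): each multiplicative group is cyclic of even order
$q^{2^{k}}-1$, so the squares always form a subgroup of index 2,
\[
\big[\mathbb{F}_{q^{2^{k}}}^{\times}:(\mathbb{F}_{q^{2^{k}}}^{\times})^{2}\big]=2,
\]
and the unique quadratic extension is the next field in the tower.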
For notational purposes, it will be convenient to identify the group
$\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$, in both the Euclidean and
quadratically finite cases, with $\{\pm1\}$ (in the finite case this is the
quadratic character, i.e., the Legendre symbol when $q$ is an odd
prime). As the spinor norm takes values in this group, we
write $SO^{+}$ for the $SO^{1}$ groups.
We know that $SO^{+}(V)=SO(V)$ whenever $V$ has dimension 1 (and this
group is trivial). But this is almost the only case where this may happen:
\begin{prop}
Let $V$ be a quadratic space of dimension $>1$ over a field $\mathbb{F}$ which
is either Euclidean or quadratically finite. Then $SO^{+}(V)$ has index 2 in
$SO(V)$ unless $\mathbb{F}$ is Euclidean and the space is definite, a case in
which the index is 1. \label{SO+ind2} \end{prop}
\begin{proof}
As $SO^{+}(V)$ is the kernel of a map $SO(V)\to\{\pm1\}$, it either
coincides with $SO(V)$ (if the map is trivial) or has index 2 in it. Therefore
it suffices to construct an element having non-trivial spinor norm, and prove
that there is no such element in the exceptional case. Now, if $\mathbb{F}$ is
quadratically finite then the proof of Proposition \ref{F/F2=2} shows that
$V$ contains vectors with vector norms in $(\mathbb{F}^{\times})^{2}$ as
well as anisotropic vectors whose vector norms do not lie in
$(\mathbb{F}^{\times})^{2}$. The same assertion clearly holds for indefinite
spaces in the Euclidean case. Hence the composition of reflections in one
vector of each vector norm yields the desired element of $SO(V)$. On the other
hand, if $\mathbb{F}$ is Euclidean and the space is definite then every
reflection is in a vector whose norm lies in the same square class. As
Proposition \ref{CDT} implies that the elements of $SO(V)$ are products of an
even number of reflections, the
triviality of the spinor norm in this case follows. This proves the proposition.
\end{proof}
Using Proposition \ref{sign}, all the special orthogonal groups over a
Euclidean field take the form $SO(p,q,\mathbb{F})$ for some natural numbers $p$
and $q$, where we have $SO(p,q,\mathbb{F})=SO(q,p,\mathbb{F})$ by a global sign
inversion on the space. By Proposition \ref{SO+ind2}, the subgroup
$SO^{+}(p,q,\mathbb{F})$ has index 2 in $SO(p,q,\mathbb{F})$ unless $pq=0$. On
the other hand, it follows from Proposition \ref{F/F2=2} that a special
orthogonal group over a quadratically finite field takes the form
$SO(n,\varepsilon,\mathbb{F})$, where $n$ is the dimension and
$\varepsilon\in\{\pm\}$ represents the discriminant. Moreover, as rescaling may
change the discriminant in odd dimensions, we write just $SO(n,\mathbb{F})$ for
odd $n$. Proposition \ref{SO+ind2} shows that $SO^{+}(n,\varepsilon,\mathbb{F})$
or $SO^{+}(n,\mathbb{F})$ always has index 2 if $n>1$ (and otherwise the
groups are trivial). The spin and Gspin groups are denoted with $SO$ or $SO^{+}$
replaced by $spin$ and $Gspin$ respectively. For the finite fields
$\mathbb{F}_{q}$, where we may replace $\mathbb{F}_{q}$ by simply $q$ in the
notation, this means that $spin(n,\varepsilon,q)$ and $SO(n,\varepsilon,q)$ have
the same cardinality (for $n>1$), while the cardinality of
$SO^{+}(n,\varepsilon,q)$ is half that number, and the cardinality of
$Gspin(n,\varepsilon,q)$ is obtained from it by multiplying by $q-1$.
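For instance, for $n=3$ these cardinality relations can be checked against the
identifications listed below and the standard orders of the linear groups over
$\mathbb{F}_{q}$:
\[
|spin(3,q)|=|SL_{2}(q)|=q(q^{2}-1)=|PGL_{2}(q)|=|SO(3,q)|,\qquad
|SO^{+}(3,q)|=\tfrac{q(q^{2}-1)}{2},
\]
and $|Gspin(3,q)|=|GL_{2}(q)|=(q-1)\,q(q^{2}-1)$.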
Recall that in some settings we write results in terms of unitary groups.
Arguments similar to those for the orthogonal groups over quadratically closed
and Euclidean fields show that unitary spaces over quadratically finite fields
are determined by their dimensions, while over a Euclidean field they have
signatures just like the quadratic ones. Hence we shall use notations like
$U_{\mathbb{E},\rho}(n)$ in the former case and $U_{\mathbb{E},\rho}(p,q)$ in
the latter case, as well as $U$ replaced by $SU$, $GU$, or $GSU$ when required.
The groups $Sp_{\mathbb{H}}(p,q)$ are defined in a similar manner in the
Euclidean case.
\smallskip
When we apply Theorems \ref{dim12}, \ref{dim3}, \ref{dim4}, \ref{dim6d1},
\ref{dim5}, \ref{dim6gen}, \ref{dim8id1}, \ref{dim7rd}, and \ref{dim8igen}, and
their corollaries, to the quadratically finite field case, the results we get
are as follows:
$SO(1,\mathbb{F})$ as well as $SO^{+}(1,\mathbb{F})$ are $\{1\}$, while
$spin(1,\mathbb{F})$ equals $\{\pm1\}$ and $Gspin(1,\mathbb{F})$ is
$\mathbb{F}^{\times}$.
$SO(2,+,\mathbb{F})$ and $spin(2,+,\mathbb{F})$ are both isomorphic to
$\mathbb{F}^{\times}$, $SO^{+}(2,+,\mathbb{F})$ is $(\mathbb{F}^{\times})^{2}$,
and $Gspin(2,+,\mathbb{F})=\mathbb{F}^{\times}\times\mathbb{F}^{\times}$. On
the other hand, both $SO(2,-,\mathbb{F})$ and $spin(2,-,\mathbb{F})$ are
isomorphic to $\mathbb{E}^{1}$ (or equivalently $U_{\mathbb{E},\rho}(1)$),
$SO^{+}(2,-,\mathbb{F})$ is $(\mathbb{E}^{1})^{2}$, and $Gspin(2,-,\mathbb{F})$
equals $\mathbb{E}^{\times}$ (or $GU_{\mathbb{E},\rho}(1)$).
$spin(3,\mathbb{F})=SL_{2}(\mathbb{F})=SU_{\mathbb{E},\rho}(2)$,
$Gspin(3,\mathbb{F})=GL_{2}(\mathbb{F})=GSU_{\mathbb{E},\rho}(2)$,
$SO^{+}(3,\mathbb{F})=PSL_{2}(\mathbb{F})$,
and $SO(3,\mathbb{F})=PGL_{2}(\mathbb{F})$.
$spin(4,+,\mathbb{F})$ is $SL_{2}(\mathbb{F}) \times SL_{2}(\mathbb{F})$,
$Gspin(4,+,\mathbb{F})$ is a subgroup of the product $GL_{2}(\mathbb{F}) \times
GL_{2}(\mathbb{F})$ determined by the equal determinant condition, and
$SO^{+}(4,+,\mathbb{F})$ and $SO(4,+,\mathbb{F})$ are the appropriate quotients.
For the non-trivial discriminant, we get $SL_{2}(\mathbb{E})$ for
$spin(4,-,\mathbb{F})$ and $PSL_{2}(\mathbb{E})$ for $SO^{+}(4,-,\mathbb{F})$,
while $SO(4,-,\mathbb{F})$ is obtained from the latter group as a direct
product with $\{\pm1\}$ and
$Gspin(4,-,\mathbb{F})=GL_{2}^{\mathbb{F}^{\times}}(\mathbb{E})$.
The group $spin(5,\mathbb{F})$ is $Sp_{4}(\mathbb{F})$,
$SO^{+}(5,\mathbb{F})$ is $PSp_{4}(\mathbb{F})$, $Gspin(5,\mathbb{F})$ equals
$GSp_{4}(\mathbb{F})$, and $SO(5,\mathbb{F})$ is $PGSp_{4}(\mathbb{F})$.
$spin(6,+,\mathbb{F})=SL_{4}(\mathbb{F})$, $SO^{+}(6,+,\mathbb{F})$ equals
$PSL_{4}(\mathbb{F})$ if $-1\not\in(\mathbb{F}^{\times})^{2}$ (but not
otherwise!), $SO(6,+,\mathbb{F})$ is the direct product with $\{\pm1\}$, and
$Gspin(6,+,\mathbb{F})$ equals $GL_{4}^{(\mathbb{F}^{\times})^{2}}(\mathbb{F})$.
With the other discriminant we get the groups $SU_{\mathbb{E},\rho}(4)$ for
$spin(6,-,\mathbb{F})$ and $GSU_{\mathbb{E},\rho}(4)$ for
$Gspin(6,-,\mathbb{F})$, with the groups $SO^{+}(6,-,\mathbb{F})$ and
$SO(6,-,\mathbb{F})$ being the appropriate quotients.
$spin(7,\mathbb{F})$ may again be described as the subgroup of $SO^{+}\binom{0\
\ I}{I\ \ 0} \subseteq GL_{8}(\mathbb{F})$ in which the elements, presented as
block matrices $\binom{a\ \ b}{c\ \ d}$, satisfy the conditions that $ad^{t}$
and $bc^{t}$ are scalars (whose squares are the determinants of the
corresponding blocks) and $bd^{-1}$ and $ca^{-1}$, or $ac^{-1}$ and $db^{-1}$,
are anti-symmetric and related to
one another via the Pfaffian being their product. For $Gspin(7,\mathbb{F})$ we
relax the $SO\binom{0\ \ I}{I\ \ 0}$ condition to allow scalar multiplication
of the underlying bilinear form.
The group $spin(8,+,\mathbb{F})$ admits 3 inequivalent representations as
double covers of $SO^{+}\binom{0\ \ I}{I\ \ 0}$ groups. These representations
are restrictions of representations of $Gspin(8,+,\mathbb{F})$, in one of which
the kernel becomes $\mathbb{F}^{\times}$ and in the other two the bilinear form
may be multiplied by scalars. $spin(8,-,\mathbb{F})$ is the subgroup of
$spin(8,+,\mathbb{E})$ (which has the structure we just considered by Corollary
\ref{quadfin}, whence also the notation) in which two of the representations to
$SO^{+}\binom{0\ \ I}{I\ \ 0}$ (over $\mathbb{E}$) become isomorphic, with
$\rho$ and conjugating by $\binom{I\ \ \ \ 0}{0\ \ -I}$ being the isomorphism.
$Gspin(8,-,\mathbb{F})$ is defined by the same condition on
$Gspin(8,+,\mathbb{E})$, with the two representations which become isomorphic
being those in which the bilinear form is multiplied by scalars (which are only
from $\mathbb{F}^{\times}$ in this subgroup).
\smallskip
When we consider the case of Euclidean fields, recall that
$(\mathbb{F}^{\times})^{2}$ is the set of positive elements. Hence we denote it
$\mathbb{F}^{\times}_{+}$, and furthermore replace any such superscript by
simply $+$. Recall that the determinant of a space of signature $(p,q)$ is
$(-1)^{q}$, but for the discriminant, which matters to us only for even $n$, we
must multiply by $(-1)^{n/2}$. Note that double covers which are based on
choosing a square root split here, since we have the canonical choice of the
positive square root. The results of Theorems \ref{dim12}, \ref{dim3},
\ref{dim4}, \ref{dim6d1}, \ref{dim5}, \ref{dim6gen}, \ref{dim8id1},
\ref{dim7rd}, and \ref{dim8igen} then take the form appearing below:
$SO(1,0,\mathbb{F})=SO^{+}(1,0,\mathbb{F})=\{1\}$,
$spin(1,0,\mathbb{F})=\{\pm1\}$, and
$Gspin(1,0,\mathbb{F})$ equals $\mathbb{F}^{\times}$.
$SO(2,0,\mathbb{F})=SO^{+}(2,0,\mathbb{F})$ as well as $spin(2,0,\mathbb{F})$
are $\mathbb{E}^{1}=U_{\mathbb{E},\rho}(1,0)$ and
$Gspin(2,0,\mathbb{F})=\mathbb{E}^{\times}=GU_{\mathbb{E},\rho}(1,0)$. On the
other hand, $SO(1,1,\mathbb{F})=\mathbb{F}^{\times}$,
$SO^{+}(1,1,\mathbb{F})=\mathbb{F}^{\times}_{+}$,
$spin(1,1,\mathbb{F})=\mathbb{F}^{\times}$ as well, and
$Gspin(1,1,\mathbb{F})=\mathbb{F}^{\times}\times\mathbb{F}^{\times}$.
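The discriminant convention recalled above is easily verified on these binary
spaces: writing the discriminant of signature $(p,q)$ as
$(-1)^{q}\cdot(-1)^{n/2}$, we get
\[
\operatorname{disc}(2,0)=(-1)^{0}(-1)^{1}=-1,\qquad
\operatorname{disc}(1,1)=(-1)^{1}(-1)^{1}=+1,
\]
so the definite plane, whose form $x^{2}+y^{2}$ is the norm form of
$\mathbb{E}=\mathbb{F}(\sqrt{-1})$, carries the non-trivial discriminant class,
while $(1,1)$ is the hyperbolic plane with trivial discriminant.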
$spin(3,0,\mathbb{F})$ is $\mathbb{H}^{1}$ or equivalently
$SU_{\mathbb{E},\rho}(2,0)$, $SO(3,0,\mathbb{F})=SO^{+}(3,0,\mathbb{F})$ is
obtained as the quotient by $\{\pm1\}$, and
$Gspin(3,0,\mathbb{F})=\mathbb{H}^{\times}=GSU_{\mathbb{E},\rho}(2,0)$. On the
other hand, $spin(1,2,\mathbb{F})$ is $SL_{2}(\mathbb{F})$ (or equivalently
$SU_{\mathbb{E},\rho}(1,1)$) hence $SO^{+}(1,2,\mathbb{F})=PSL_{2}(\mathbb{F})$,
$Gspin(1,2,\mathbb{F})$ equals $GL_{2}(\mathbb{F})$ (or
$GSU_{\mathbb{E},\rho}(1,1)$), and $SO(1,2,\mathbb{F})$ is
$PGL_{2}(\mathbb{F})$.
The group $spin(4,0,\mathbb{F})$ is $\mathbb{H}^{1}\times\mathbb{H}^{1}$, or
equivalently $SU_{\mathbb{E},\rho}(2,0) \times SU_{\mathbb{E},\rho}(2,0)$,
$Gspin(4,0,\mathbb{F})$ is the subgroup of
$\mathbb{H}^{\times}\times\mathbb{H}^{\times}$ consisting of pairs of
quaternions with the same norm, and $SO^{+}(4,0,\mathbb{F})=SO(4,0,\mathbb{F})$
is the corresponding quotient. Similarly, but with the split algebra,
$spin(2,2,\mathbb{F})$ is $SL_{2}(\mathbb{F}) \times SL_{2}(\mathbb{F})$ (or
equivalently $SU_{\mathbb{E},\rho}(1,1) \times SU_{\mathbb{E},\rho}(1,1)$),
$Gspin(2,2,\mathbb{F})$ is the ``same determinant subgroup'' of
$GL_{2}(\mathbb{F}) \times GL_{2}(\mathbb{F})$, and $SO^{+}(2,2,\mathbb{F})$
and $SO(2,2,\mathbb{F})$ are the appropriate quotients. On the other hand,
$spin(1,3,\mathbb{F})=SL_{2}(\mathbb{E})$, $SO^{+}(1,3,\mathbb{F})$ is
$PSL_{2}(\mathbb{E})$, $SO(1,3,\mathbb{F})$ is the direct product of the
latter group with $\{\pm1\}$, and $Gspin(1,3,\mathbb{F})$ equals
$GL_{2}^{\mathbb{F}^{\times}}(\mathbb{E})$.
We also have $spin(5,0,\mathbb{F})=Sp_{\mathbb{H}}(2,0)$ and
$Gspin(5,0,\mathbb{F})=GSp_{\mathbb{H}}(2,0)$, with
$SO^{+}(5,0,\mathbb{F})=SO(5,0,\mathbb{F})$ being the quotient
of the former modulo $\{\pm1\}$ or of the latter modulo
$\mathbb{F}^{\times}$. In a similar manner, $spin(4,1,\mathbb{F})$ is
$Sp_{\mathbb{H}}(1,1)$, $Gspin(4,1,\mathbb{F})$ is $GSp_{\mathbb{H}}(1,1)$, and
$SO^{+}(4,1,\mathbb{F})$ and $SO(4,1,\mathbb{F})$ are the usual quotients. In
addition, $Sp_{4}(\mathbb{F})$ is $spin(2,3,\mathbb{F})$ so that
$SO^{+}(2,3,\mathbb{F})$ is $PSp_{4}(\mathbb{F})$, and $GSpin(2,3,\mathbb{F})$
equals $GSp_{4}(\mathbb{F})$.
The group $spin(5,1,\mathbb{F})$ is $GL_{2}^{1}(\mathbb{H})$,
$GSpin(5,1,\mathbb{F})$ equals $GL_{2}(\mathbb{H})\times\{\pm1\}$ (the double
cover splits, and the superscript $+$ is unnecessary by Lemma \ref{NM2B} and the
fact that
$N^{\mathbb{H}}_{\mathbb{F}}(\mathbb{H}^{\times})=(\mathbb{F}^{\times})^{2}$),
and $SO^{+}(5,1,\mathbb{F})$ and $SO(5,1,\mathbb{F})$ are the quotients (the
latter being the direct product of the former with $\{\pm1\}$). Using the split
algebra, $Spin(3,3,\mathbb{F})$ is just $SL_{4}(\mathbb{F})$,
$SO^{+}(3,3,\mathbb{F})$ is $PSL_{4}(\mathbb{F})$ as
$-1\not\in(\mathbb{F}^{\times})^{2}$, $GSpin(3,3,\mathbb{F})$ is isomorphic to
$GL_{4}^{+}(\mathbb{F})\times\{\pm1\}$ (a split double cover), and
$SO(3,3,\mathbb{F})$ equals $PSL_{4}(\mathbb{F})\times\{\pm1\}$.
$spin(6,0,\mathbb{F})$ equals $SU_{\mathbb{E},\rho}(4,0)$,
$GSpin(6,0,\mathbb{F})$ is $GSU_{\mathbb{E},\rho}(4,0)$, and
$SO^{+}(6,0,\mathbb{F})=SO(6,0,\mathbb{F})$ is obtained as the appropriate
quotient in both cases. Finally, $spin(4,2,\mathbb{F})$ is isomorphic to
$SU_{\mathbb{E}}(2,2)$, $GSpin(4,2,\mathbb{F})$ is $GSU_{\mathbb{E}}(2,2)$, and
the usual quotients give $SO^{+}(4,2,\mathbb{F})$ and $SO(4,2,\mathbb{F})$.
The group $spin(4,3,\mathbb{F})$ is the subgroup of $SO^{+}\binom{0\ \ I}{I\ \
0}$ ($8\times8$ matrices) consisting of those block matrices $\binom{a\ \ b}{c\
\ d}$ in which $ad^{t}$ and $bc^{t}$ are in $\mathbb{F}$ and square to $\det
a=\det d$ and $\det b=\det c$ respectively, and where either $bd^{-1}$ and
$ca^{-1}$ or $ac^{-1}$ and $db^{-1}$ belong to $M_{4}^{as}(\mathbb{F})$ and
multiply to minus their common Pfaffian. $Gspin(4,3,\mathbb{F})$ is described by
the same condition on the group of matrices in $GL_{8}(\mathbb{F})$ whose action
multiplies $\binom{0\ \ I}{I\ \ 0}$ by a scalar (with some extra condition
extending the $SO^{1}$ condition). For $Gspin(5,2,\mathbb{F})$ we get the
subgroup of $GSp_{4}(\mathbb{H})$, elements $\binom{a\ \ b}{c\ \ d}$ of which
satisfy the conditions that $a\overline{d}^{t}$ and $b\overline{c}^{t}$ are
scalars squaring to $N^{M_{2}(B)}_{\mathbb{F}}(a)=N^{M_{2}(B)}_{\mathbb{F}}(d)$
and $N^{M_{2}(B)}_{\mathbb{F}}(b)=N^{M_{2}(B)}_{\mathbb{F}}(c)$ respectively,
and either the pair $bd^{-1}$ and $ca^{-1}$ or the pair $ac^{-1}$ and $db^{-1}$
is a pair of matrices in $M_{2}^{Her}(B)$ which are minus the adjoints of one
another. $spin(5,2,\mathbb{F})$ is the group of the matrices in
$Sp_{4}(\mathbb{H})$ having these properties. $Gspin(6,1,\mathbb{F})$ and
$spin(6,1,\mathbb{F})$ are similar subgroups of $GSp_{4}(\mathbb{H})$ and
$Sp_{4}(\mathbb{H})$, but in which the pairs of minus adjoint matrices are
$bd^{-1}$ and $-ca^{-1}$ or $ac^{-1}$ and $-db^{-1}$.
$spin(6,2,\mathbb{F})$ and $Gspin(6,2,\mathbb{F})$ are double covers of
$Sp_{4}(\mathbb{H})$ and $GSp_{4}(\mathbb{H})$ respectively in two inequivalent
ways. We have omitted the superscript $(\mathbb{F}^{\times})^{2}$ since the reduced norms
from $\mathbb{H}$, hence also from $M_{2}(\mathbb{H})$ by Lemma \ref{NM2B}, are
non-negative, whence the map $\varphi$ from Proposition \ref{GSphom} is trivial
in this case. $spin(4,4,\mathbb{F})$ maps in three inequivalent ways to
$SO^{+}\binom{0\ \ I}{I\ \ 0}$ with kernels of order 2, and
$Gspin(4,4,\mathbb{F})$ maps in one representation to $SO\binom{0\ \ I}{I\ \ 0}$
with kernel $\mathbb{F}^{\times}$ but multiplies the bilinear form by arbitrary
scalars in the other two representations (where its kernel remains $\{\pm1\}$).
The group $spin(7,1,\mathbb{F})$ is defined by the condition on
$spin(8,\mathbb{E})$ (recall that $\mathbb{E}$ is quadratically closed by
Proposition \ref{Euc}) which states that conjugating one representation to the
group $SO^{+}\binom{0\ \ I}{I\ \ 0}$ over $\mathbb{E}$ by $\binom{I\ \ \ \ 0}{0\
\ -I}$ yields the $\rho$-image of the other representation. For
$Gspin(7,1,\mathbb{F})$ we apply the same condition on $Gspin(8,\mathbb{E})$
using two representations in which the operation multiplies the bilinear form by
scalars (from $\mathbb{F}^{\times}$ here). The groups $spin(5,3,\mathbb{F})$ and
$Gspin(5,3,\mathbb{F})$ are obtained in the same manner but with each $4\times4$
identity matrix replaced by $\binom{I\ \ \ \ 0}{0\ \ -I}$ (involving $2\times2$
identity matrices).
We remark that the case of the spin group in signature $(6,2)$ over
$\mathbb{F}=\mathbb{R}$ was considered in \cite{[SH]}, using Clifford algebras,
Eichler transformations, and some real, complex, and quaternionic analytic
tools. The homomorphism denoted $\phi$ in Lemma 6.10 of that reference is just
$a \mapsto {}^{t}\overline{a}^{\,-1}$ on
$GL_{2}(\mathbb{H})=GL_{2}^{(\mathbb{F}^{\times})^{2}}(\mathbb{H})$, with the
square roots being positive. Proposition 6.11 there is the projectivization of
our Proposition \ref{GSppsi}. Now, the notion of positive definiteness extends
from $\mathbb{F}=\mathbb{R}$ to any Euclidean field. It thus seems reasonable
that the action of $Sp_{4}(\mathbb{H})$ on the subset of
$M_{2}^{Her}(\mathbb{H})\otimes\mathbb{E}$ in which the ``imaginary part''
(which is also well-defined) is positive definite, as well as the fact that the $\psi$-images
of the elements of $\widetilde{Sp}_{4}(\mathbb{H})$ lying over $g \in
Sp_{4}(\mathbb{H})$ are those which send $Z^{t}$ (for $Z$ in the latter space)
to $g(Z)^{t}$, also extend to this more general setting. However, as we have
seen, these aspects of the theory are not required for obtaining the general
result.
We do not get presentations of the definite spin groups $spin(7,0,\mathbb{F})$
and $spin(8,0,\mathbb{F})$ here, since a definite space of dimension 7 does not
represent its discriminant, and a definite space of dimension 8 is not
isotropic, and our methods for spaces of dimensions 7 and 8 require these
properties.
Unitary groups preserving sesqui-linear forms of dimension 3 do not arise in
the context of orthogonal groups since the dimension 8 of such special unitary
groups is not the dimension of any orthogonal group. Sesqui-linear forms of
signature $(3,1)$ also do not appear here because of the discriminant 1
condition. For $\mathbb{F}=\mathbb{R}$ we can also derive this fact from Lie
theory: The dimension of $SU_{\mathbb{C}}(3,1)$ is indeed 15, but its maximal
compact subgroup $S\big(U(3) \times U(1)\big)$ has dimension 9, which does not
equal the dimension of $SO(p) \times SO(q)$ for any pair $(p,q)$ with sum 6
(the required dimension is $\frac{p(p-1)}{2}+\frac{q(q-1)}{2}$, which attains
only the values 15, 10, 7, and 6).
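Writing out this enumeration explicitly for the pairs $(p,q)$ with $p+q=6$:

```latex
\[
\frac{p(p-1)}{2}+\frac{q(q-1)}{2}:\qquad
(6,0)\mapsto15,\quad(5,1)\mapsto10,\quad(4,2)\mapsto7,\quad(3,3)\mapsto6,
\]
```

none of which equals $\dim S\big(U(3)\times U(1)\big)=9+1-1=9$.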
\smallskip
The splitting of the double covers here comes from the splitting of the sequence
\[1\to\{\pm1\}\to\mathbb{F}^{\times}\to(\mathbb{F}^{\times})^{2}\to1.\] This
happens whenever the Abelian group $\mathbb{F}^{\times}$ contains $\{\pm1\}$ as
a direct summand (e.g., when $\mathbb{F}$ may be ordered, when
$(\mathbb{F}^{\times})^{2}$ is free, as in number fields of class number 1
with no complex roots of unity, etc.). Note that when the double cover
$\widetilde{A}^{(\mathbb{F}^{\times})^{2}}$ splits, the elements $(g,|g|^{2})$
with $g \in A^{-} \cap A^{\times}$ will never all lie in the splitting group, as
they generate the full double cover by Theorem \ref{dim6d1}.
On the other hand, for quadratically closed and Euclidean fields the sequence
\[1\to(\mathbb{F}^{\times})^{2}\to\mathbb{F}^{\times}\to\mathbb{F}^{\times}
/(\mathbb{F}^{\times})^{2}\to1\] splits. This fact is related to the full
special orthogonal group admitting a double cover inside the Gspin group: For
the quadratically closed case as well as the definite case this is just the
usual spin group. In the indefinite case the double cover is obtained by
imposing the condition that a certain reduced norm, determinant, or multiplier
takes only the values $\pm1$. In fact, one can show that the only additional
case where this sequence splits is for a quadratically finite field in which
$-1\not\in(\mathbb{F}^{\times})^{2}$ (for the finite field $\mathbb{F}_{q}$ this
happens if and only if $q\equiv3\ (\mathrm{mod}\ 4)$). The double covers in this
case are obtained in a way similar to the indefinite case.
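For the finite-field assertion, the splitting can be seen directly from the cyclicity of $\mathbb{F}_{q}^{\times}$ (a sketch, not taken from the text): if $q\equiv3\ (\mathrm{mod}\ 4)$ then $q-1\equiv2\ (\mathrm{mod}\ 4)$, so

```latex
\[
\mathbb{F}_{q}^{\times}\cong\mathbb{Z}/(q-1)\cong\{\pm1\}\times\mathbb{Z}/\tfrac{q-1}{2},
\qquad\text{with}\quad(\mathbb{F}_{q}^{\times})^{2}\cong\mathbb{Z}/\tfrac{q-1}{2}
\]
```

of odd order, so that $\{\pm1\}$ is a complement of the squares. For $q\equiv1\ (\mathrm{mod}\ 4)$ the unique element of order 2 already lies in $(\mathbb{F}_{q}^{\times})^{2}$, so no such complement exists.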
\smallskip
We remark that the cardinality of the group
$\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$ can be either infinite or any
finite power of 2. To see this, observe that if $\mathbb{K}=\mathbb{F}((X))$
then $\mathbb{K}^{\times}/(\mathbb{K}^{\times})^{2}$ is generated by the
injective image of $\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$ and the class
of $X$ (another element of order 2 in
$\mathbb{K}^{\times}/(\mathbb{K}^{\times})^{2}$ which is independent of
$\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$). In the case where the group has
order 4 we again find different types of fields. Indeed, if our field $\mathbb{K}$
takes the form $\mathbb{F}((X))$ with $\mathbb{F}$
quadratically finite, then $\mathbb{K}$ admits only one non-split quaternion
algebra (this is also the case for the $p$-adic numbers for an odd prime $p$ or
their finite extensions), while for Euclidean $\mathbb{F}$ there are 3
non-isomorphic quaternion algebras over $\mathbb{K}$ which are not split. The
description of the quadratic spaces over these fields will thus be different,
with the first case probably resembling the results appearing in Section 2 of
Chapter IV of \cite{[S]}. We leave these questions, as well as the question
whether every field $\mathbb{F}$ in which
$\mathbb{F}^{\times}/(\mathbb{F}^{\times})^{2}$ has order 4 resembles one of
these families, for future research.
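As a concrete instance of the order 4 case with Euclidean $\mathbb{F}$, one may take $\mathbb{F}=\mathbb{R}$: every element of $\mathbb{K}=\mathbb{R}((X))$ is $uX^{n}$ with $u$ a power series with nonzero constant term, and such $u$ is a square precisely when its constant term is positive, whence

```latex
\[
\mathbb{K}^{\times}/(\mathbb{K}^{\times})^{2}=\{1,\,-1,\,X,\,-X\},
\]
```

generated by the class of $-1$, coming from $\mathbb{R}^{\times}/(\mathbb{R}^{\times})^{2}$, and the class of $X$.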
\section*{Introduction}
The accurate generation and precise detection of broadband THz beams with well-defined polarization states is of great importance for the spectroscopy of anisotropic materials such as birefringent materials\cite{Nagashima2013,Lloyd-Hughes2014,Mosley2017}, multiferroics\cite{Mosley2017} and quantum Hall systems.\cite{Failla2016} Polarization control is essential for the development of THz ellipsometry and polarimetry systems\cite{Nagashima2013,Watanabe2018} and for the nascent field of THz communications.
This has driven increased interest in polarization-control schemes that can modify the ellipticity and polarization angle of THz radiation.
The direct generation of elliptical THz polarization states has been achieved by: photoconductive emitters under a magnetic field;\cite{Johnston02-127,Castro-Camus12-3620} superimposing multiple time-delayed THz pulses with different polarization states, as demonstrated for optical rectification,\cite{Amer2005} lateral photo-Dember\cite{Lee2012} and spintronic emitters;\cite{Chen2019} pulse-shaping of the pump pulse;\cite{Sato2013} laser-induced plasma filaments under an external electric field\cite{Lu2012} or two-color mixing of the pump.\cite{Zhang2018}
Alternatively, linearly-polarized THz can be converted to elliptical or circular polarization using broadband quarter waveplates based on birefringent materials\cite{Masson2006,Nagashima2013} or total internal reflection (TIR).\cite{Hirota2006,Kawada2014}
In all these implementations the THz pulse polarization had a fixed ellipticity that was not circular at all frequencies, or the ellipticity could only be modified by the mechanical rotation or translation of a component, which is a slow and cumbersome process.
Full electrical control of the ellipticity would provide a route to accurate and precise THz polarimetry, imaging and ellipsometry.
In that regard a four-electrode photoconductive antenna has been shown to generate broadband left- or right-handed THz radiation via TIR, but the ellipticity angle varied from $35^{\circ}$ (elliptical) to $45^{\circ}$ (circular) over the experimental bandwidth, attributed to the inhomogeneous bias field of the stripline antenna.\cite{Hirota2006}
In this Article we report that multi-pixel photoconductive arrays can accurately control the polarization state of THz radiation from linear to circular. We further introduce quantitative measures that allow the accuracy and precision of polarization control to be established.
Our experimental approach used an array of interdigitated photoconductive antennas integrated with an achromatic quarter waveplate, allowing rapid electrical control of the polarization state between pure linear and left- or right-handed circular polarization, in a way that corrected for the finite polarization performance of the optical setup.
Interdigitated photoconductive emitters\cite{Dreyhaupt2005,Beck10-2611,Singh2019} are based on interleaved electrodes on the surface of a semiconductor that form an anode/semiconductor/cathode/gap repeat unit.
Charge carriers are excited in the biased semiconductor by a fs optical pulse, creating a transient current that radiates a THz pulse, and THz pulses from each repeat unit interfere constructively in the far-field.
Interdigitated emitters benefit from lower drive voltages and superior radiation patterns compared with narrow-gap stripline or bowtie antennae, which exhibit rapidly diverging beams and require integrated Si correction lenses.
Further, their linear polarization purity is excellent, with an ellipticity ($< 1^{\circ}$) that is an order of magnitude better than that for bowtie and wide-gap THz emitters.\cite{Mosley2017}
Here, a multi-pixel emitter array was produced by UV photolithography and consisted of four separate pixels, each consisting of interdigitated metal contacts with a 150\,$\mu$m $\times$ 150\,$\mu$m area, on a semi-insulating GaAs substrate.
Adjacent pixels emitted orthogonally polarized THz polarization states, as defined by the direction of the applied electric field.
The THz radiation emitted by the four pixels overlaps in the far-field to produce a coherent beam of THz radiation, with a linear polarisation state and a polarisation angle controlled by the relative bias voltages applied to horizontally-emitting and vertically-emitting pixels.\cite{Mosley2019a,Maussang2019}
The emitter was photoexcited by 80\,fs pulses from a Ti:sapphire oscillator, with an average optical power of 350\,mW, and a beam larger than the entire 300\,$\mu$m $\times$ 300\,$\mu$m active area of the device.
\begin{figure}
\includegraphics[width=0.5\textwidth]{KnifeEdgeMeasurementsFigCorr.pdf}%
\caption{\label{Figure1} Measured beam profiles at 1\,THz for horizontally and vertically emitting pixels, scanning (a) horizontally, along $x$, and (b) vertically, along $y$. (c) and (d) Frequency dependence of beam width and beam center position for horizontally and vertically polarized emission, along horizontal and vertical directions respectively.}%
\end{figure}
The accurate control of the polarization state is reliant upon achieving good spatial overlap between THz beams produced from different pixels. To demonstrate that this is the case for multi-pixel emitters, we determined the frequency-dependent beam profile at the THz beam focus produced by a 3" focal length, 2" diameter off-axis parabolic mirror.
The THz time-domain waveform was measured at each position of a razor blade, which was stepped either horizontally or vertically through the focus. Both the horizontal, $E_x$, and vertical, $E_y$, components of the electric field of the THz pulse were detected using polarization-resolved electro-optic sampling\cite{VanderValk2005} in a 0.5\,mm-thick, [111]-oriented ZnTe crystal. The beam profiles, obtained by differentiating the spectral amplitude at a particular frequency versus position, are reported as points in Fig.\ \ref{Figure1}(a) and (b) for 1\,THz.
A Gaussian beam shape (solid lines) with centre positions $x_0$ and $y_0$ and standard deviation $\sigma_x$ and $\sigma_y$ was obtained at all frequencies, with frequency-dependent beam parameters reported in panels (c) and (d). The beam width, shown as $\pm2\sigma$, reduces at higher frequencies, as expected from Gaussian beam theory.
Importantly, $\sigma_x\simeq\sigma_y$ for all frequencies, while $x_0$ and $y_0$ vary by less than 150\,$\mu$m, lower than the beam diameter (D4$\sigma$, given by the vertical extent of the shaded region). These measurements demonstrate that multi-pixel emitters produce high quality Gaussian beams, free from interference effects, and validate the beam propagation assumptions of our ellipticity-control setup, discussed in the following.
To demonstrate the electrical control of the THz polarization state from linear to circular we used the experimental setup presented in Fig.\ \ref{Figure2}(a).
The 4-pixel photoconductive emitter was optically contacted to one face of a Fresnel prism made of high-resistivity float-zone silicon, with its axes $x'$ and $y'$ at an angle $\theta=45^{\circ}$ to the lab frame, defined by $x$ and $y$.
Square wave voltages with amplitudes $V_{H}=V_0 \cos\phi$ and $V_{V}=V_0\sin\phi$ were applied to pixels emitting $E_{x'}$ and $E_{y'}$, respectively, where $V_0=10$\,V and $\phi$ is the target emission angle in the emitter frame.
Thus, $\phi=0^{\circ}$ set an electric field bias across just two pixels (blue arrows) and created a THz electric field vector $\mathbf{E}_{\mathrm{em}}=(E_{x'}, E_{y'})=(E_0, 0)$ in the emitter frame, and equal s- and p- components incident onto the silicon/air interface.
At an angle of incidence close to 41.9$^{\circ}$, total internal reflection in the prism produced a phase advance $\delta=\delta_\mathrm{p}-\delta_\mathrm{s}=90^{\circ}$ for p-polarized THz (phase $\delta_p$) with respect to s-polarized THz (phase $\delta_s$), i.e.\ the prism acted as an achromatic quarter waveplate.\cite{Hirota2006}
We reduced the divergence of the THz beam within the emitter and prism by adopting a weakly-focusing IR excitation beam, which transferred the pulse wavefront of the IR beam onto the THz beam\cite{Beck10-2611} and prevented a large spread in the angle of incidence at the silicon/air interface, which would have altered $\delta$ away from the desired value.
After TIR, a circularly polarized pulse therefore resulted.
Alternatively, driving all pixels with the same bias (green and blue arrows, $\phi=45^{\circ}$) produced $(E_{x'}, E_{y'})=(E_0, E_0)$, and linearly p-polarised light before and after TIR.
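The quarter-wave condition at the silicon/air interface can be checked numerically. The sketch below uses the standard Fresnel-rhomb expression for the s-p phase difference acquired on total internal reflection from a medium of index $n$ against vacuum; the formula and the index value $n\approx3.42$ for high-resistivity silicon in the THz band are our assumptions, not taken from the text.

```python
import math

def tir_phase_deg(theta_deg, n):
    """Magnitude (degrees) of the s-p relative phase acquired on total
    internal reflection inside a medium of index n against vacuum,
    using the standard Fresnel-rhomb expression."""
    t = math.radians(theta_deg)
    s2 = math.sin(t) ** 2
    if s2 <= 1.0 / n ** 2:
        raise ValueError("below the critical angle, no total internal reflection")
    return 2 * math.degrees(
        math.atan(math.cos(t) * math.sqrt(s2 - 1.0 / n ** 2) / s2))

# High-resistivity float-zone silicon: n ~ 3.42 across the THz band (assumed).
delta = tir_phase_deg(41.9, 3.42)
```

At the quoted angle of incidence of 41.9$^{\circ}$ this evaluates to $\delta\approx90^{\circ}$, consistent with the prism acting as an achromatic quarter waveplate.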
\begin{figure}
\includegraphics[width=0.95\columnwidth]{figure2.pdf}%
\caption{\label{Figure2} (a) Method to generate THz pulses with controllable ellipticity, as described in the text, based on using a 4-pixel THz emitter to produce a THz electric field pulse $\mathbf{E}_{\mathrm{THz}}$ at a polarization angle $\phi$. Polarization-resolved THz time-domain waveforms measured for (b) circular right, (c) circular left and (d) linear horizontal THz radiation output, obtained using bias schemes (blue and green arrows) with different $\phi$.}%
\end{figure}
Polarization- and time-resolved waveforms of the THz electric field pulses, detected after focusing the emitted THz onto the electro-optic crystal, are reported in Fig.\ \ref{Figure2}(b)-(d) for the different bias schemes pictured. When the emission angle was close to $x'$ or $y'$ in the emitter frame (i.e.\ $\phi\simeq 0^{\circ}$ and $\phi\simeq 90^{\circ}$) the THz radiation emitted was close to circular, while linearly polarized THz resulted when $\phi\simeq 45^{\circ}$.
\begin{figure}
\includegraphics[width=0.5\textwidth]{EllipticityControlFigure3v2.pdf}%
\caption{\label{Figure3}(a) Dependence of the normalized Stokes parameter $Q(\omega)/I(\omega)$ on frequency $\omega/2\pi$ and emission angle.
(b) Normalized Stokes parameters of the detected THz pulse averaged over the experimental bandwidth, from experiment (points) and model (lines).}%
\end{figure}
The accurate control of the ellipticity from this emitter design was demonstrated by a set of experiments at different $\phi$, which are summarized in Fig.\ \ref{Figure3}.
Polarisation-resolved time-domain THz waveforms were recorded for each $\phi$.
The Stokes parameters, $\mathbf{S}(\omega,\phi)=(I,Q,U,V)$, provide a convenient figure of merit by which the accuracy of setting a desired polarization state can be assessed, where $I=|E_{x}|^{2}+|E_{y}|^{2}$, $Q=|E_{x}|^{2}-|E_{y}|^{2}$ is the difference in intensities polarised along $x$ and $y$, $U=|E_{a}|^{2}-|E_{b}|^{2}$ is the difference for light at $\pm45^{\circ}$ to the $x$ direction and $V=|E_{r}|^{2}-|E_{l}|^{2}$ is the difference for right- and left-hand circularly polarized light.
The frequency-dependent Stokes parameters were readily extracted from the Fourier transform of the time-domain traces to get $E_{x,y}(\omega)$, or $E_{a,b}(\omega)$ and $E_{r,l}(\omega)$ after converting into a $45^{\circ}$ or circular basis. The normalized Stokes parameter $Q/I$ is presented for the experimental frequency bandwidth in Fig.\ \ref{Figure3}(a), while panel (b) illustrates the frequency-averaged values of $Q/I$ and $V/I$. The electrical control of the ellipticity can be readily seen: upon changing the bias voltage applied to the horizontally and vertically emitting pixels, the polarization changes from circular ($V/I=\pm1$ and $Q/I=0$) to linear ($V/I=0$ and $Q/I=\pm1$), moving through elliptical polarisation states in-between (where $V/I$ and $Q/I$ are both nonzero). At emission angles of $8^{\circ}$ and $-89^{\circ}$ the value of $V/I\approx\pm1$ and the value of $Q/I$ drops to zero, demonstrating that the THz pulse after the prism was almost purely circularly polarised.
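Extracting the Stokes parameters from the measured complex amplitudes is a one-line computation per parameter. The sketch below follows the definitions above, with $E_{a,b}=(E_x\pm E_y)/\sqrt{2}$; the sign convention tying $V>0$ to one particular handedness is our assumption.

```python
def stokes(Ex, Ey):
    """Stokes parameters (I, Q, U, V) from complex field amplitudes Ex, Ey.
    U comes from the +/-45 degree basis E_{a,b} = (Ex +/- Ey)/sqrt(2);
    V from the circular basis (the handedness sign is convention-dependent)."""
    I = abs(Ex) ** 2 + abs(Ey) ** 2
    Q = abs(Ex) ** 2 - abs(Ey) ** 2
    U = 2 * (Ex * Ey.conjugate()).real
    V = 2 * (Ex.conjugate() * Ey).imag
    return I, Q, U, V
```

A circular state $E_y=\pm iE_x$ gives $Q=U=0$ and $V/I=\pm1$, while equal in-phase components give $U/I=1$, as expected.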
The small angular offset from where only the horizontally- or vertically-emitting contacts were biased, at $0^{\circ}$ and $\pm90^{\circ}$ respectively, can be attributed to a small angular misalignment of the emitter away from $45^{\circ}$ during mounting onto the prism, and small differences in emission strength or phase from different pixels. A distinct advantage of this method is that the user can electrically compensate for these issues, simply by varying the voltages $V_H$ and $V_V$ applied. The scheme can be summarized mathematically via
\begin{equation*}
\mathbf{E}_{\mathrm{out}}=\mathbf{P}\,\mathbf{R}\,\mathbf{E}_{\mathrm{em}} = \begin{pmatrix}
1 & 0 \\
0 & i
\end{pmatrix}
\begin{pmatrix}
\cos\theta & \sin\theta \\
-\sin\theta & \cos\theta
\end{pmatrix}
\begin{pmatrix}
E_0\cos\phi \\
\alpha E_0 \sin\phi e^{i\beta}
\end{pmatrix}
\end{equation*}
\noindent where $\mathbf{P}$ is the Jones matrix for the waveplate, $\mathbf{R}(\theta)$ is the rotation matrix, $\theta=\pi/4 + \epsilon$ to account for a small misalignment of the emitter by angle $\epsilon$ relative to $\theta=45^{\circ}$, and $\alpha$ and $\beta$ account for variations in the relative amplitude and phase of THz radiation emitted along $E_{x'}$ and $E_{y'}$, or resulting from optical components in the beam path.
This expression was used to calculate the $x$ and $y$ components of $\mathbf{E}_{\mathrm{out}}$, and thus the Stokes parameters using their definitions above. Good accord with experiment was found using $\alpha=1.25$, $\beta=30^{\circ}$ and $\epsilon=8^{\circ}$, as reported in Fig.\ \ref{Figure3}(b).
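The Jones-matrix model above is straightforward to evaluate numerically. The sketch below builds $\mathbf{E}_{\mathrm{out}}$ from the expression in the text; the helper function names and the default parameter values are ours, not from the paper.

```python
import cmath
import math

def e_out(phi_deg, alpha=1.0, beta_deg=0.0, eps_deg=0.0, E0=1.0):
    """Lab-frame (Ex, Ey) for emission angle phi, from E_out = P R E_em
    with P = diag(1, i) and R a rotation by theta = 45 deg + eps."""
    phi = math.radians(phi_deg)
    th = math.radians(45.0 + eps_deg)
    ex_p = E0 * math.cos(phi)                                   # emitter-frame E_{x'}
    ey_p = alpha * E0 * math.sin(phi) * cmath.exp(1j * math.radians(beta_deg))
    Ex = math.cos(th) * ex_p + math.sin(th) * ey_p              # rotate ...
    Ey = 1j * (-math.sin(th) * ex_p + math.cos(th) * ey_p)      # ... then retard
    return Ex, Ey

def v_over_i(Ex, Ey):
    """Normalized circular Stokes parameter V/I (sign is convention-dependent)."""
    return 2 * (Ex.conjugate() * Ey).imag / (abs(Ex) ** 2 + abs(Ey) ** 2)
```

With ideal parameters, $\phi=0^{\circ}$ gives $|V/I|=1$ (circular) and $\phi=45^{\circ}$ gives $V/I=0$ (linear); with $\alpha=1.25$ and $\epsilon=8^{\circ}$ the circular point shifts to $\phi\approx\epsilon/\alpha$, in line with the text.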
Further insight into how this experimental scheme achieves polarization control can be obtained by considering $\mathbf{E}_{\mathrm{out}}$ near $\phi=0$ and for small $\beta$ and $\epsilon$, which approximates to $\mathbf{E}_{\mathrm{out}}=(E_l,E_r)=E_0(-\epsilon + \alpha\phi,1)$ in the circular basis.
Hence by setting $\phi=\epsilon/\alpha\simeq 6^{\circ}$ one can produce right-handed circularly polarized light, as seen by $V/I=+1$ in Fig.\ \ref{Figure3}(b).
Similarly, at $\phi=\alpha\epsilon \pm\pi/2$, pure left-handed THz can be obtained, with $V/I=-1$.
Hence, to first order this experimental scheme controls the ellipticity by setting $\phi$ to compensate for the amplitude difference $\alpha$ between $E_{x'}$ and $E_{y'}$, and the angular alignment error $\epsilon$.
The phase difference $\beta$, which may be caused in part by the finite ellipticity of the multi-pixel design ($<15^{\circ}$),\cite{Mosley2019a} does not affect the ellipticity near $\phi=0$ or $\phi=\pm\pi/2$ to first order.
\begin{figure}
\includegraphics[width=0.5\textwidth]{EllipticityControlFigure4.pdf}%
\caption{\label{Figure4} (a) Frequency dependence of Stokes parameters of the right-hand circularly polarised ($\phi=-89^{\circ}$) and left-hand circularly polarised ($\phi=8^{\circ}$) THz pulses, respectively. (b) Frequency dependence of linear polarisation purity for $\phi = -31^{\circ}$ and $\phi = 47^{\circ}$. (c) Histogram of the deviation from the mean, $\psi-<\psi>$, over 1000 repeated measurements of the polarization angle $\psi$, each taking 10\,ms. A Gaussian fit (red line) yields standard deviation $\sigma=0.47^{\circ}$ and $<\psi>=12.8^{\circ}$.}%
\end{figure}
To establish the performance of this system, and to enable comparison with other methods in the literature, we explored figures of merit associated with the accuracy and precision of the polarisation state generated and detected with this approach. The Stokes parameters provide a convenient figure of merit for the accuracy of the polarisation state, and are presented for the optimum circular and linear polarisation states in Figs.\ \ref{Figure4}(a) and \ref{Figure4}(b), respectively. Here we considered the combined contribution of both $Q$ and $U$ to the linear polarisation purity, using $\sqrt{Q^{2}+U^{2}}/I$, since a linear polarisation state at an angle away from the $x$ or $y$ axes has components $\pm45^{\circ}$ that contribute to $U$. High polarisation purities were achieved for both circular and linear polarizations, with $|V/I|>0.98$ and $\sqrt{Q^{2}+U^{2}}/I>0.97$ over the bandwidth of the experiment (2.5\,THz).
\begin{table*}[!t]
\centering
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
\textbf{Ref.} & \textbf{Laser} & \textbf{Scheme} & \textbf{$\sigma_{\psi} (^{\circ})$} & \textbf{$\tau$ (ms)} & \textbf{$R$ (Hz)} & \textbf{$s_{\psi}$ ($^{\circ}$)} \\ \hline
[\onlinecite{Yasumatsu2012}] & Amp. & EOS with (110) GaP, continually rotating. & 0.56 & 21 & 47.6 & 0.081 \\ \hline
[\onlinecite{Nemoto2014}] & Amp. & EOS with (111) ZnTe, gate polarization modulated via PEM. & 0.0057 & 660 & 1.5 & 0.005 \\ \hline
[\onlinecite{Xu2020}] & Amp. & EOS with (110) ZnTe, continually rotating. & 1.3 & 62 & 16 & 0.325 \\ \hline
[\onlinecite{Peng2020}] & Osc. & PCD with cross-polarized InP nanowires. & 0.38 & 1000 & 1 & 0.38 \\ \hline
This work & Osc. & EOS with (111) ZnTe, gate polarization rotated via half-waveplate. & 0.47 & 10 & 100 & 0.047 \\ \hline
\end{tabular}
\caption{\label{TAB:comparison}Comparison of calculated $s_{\psi}$ for papers that report $\sigma_{\psi}$ and $\tau$ or $R$. `Amp.' stands for Ti:sapphire laser amplifier; `Osc.' for Ti:sapphire laser oscillator; `EOS' denotes electro-optic sampling; and `PCD' refers to photoconductive detection.}
\end{table*}
Finally, we introduce the standard error in a measurement time of $T=1$\,s as a figure of merit that allows us to quantify the precision achieved in this work.
The standard error of the mean is $s=\sigma/{\sqrt{N}}$, where $\sigma$ is the standard deviation of $N$ repeated measurements of an observable.
Since $N=RT$ for a measurement rate $R$, $s=\sigma/{\sqrt{RT}}=\sigma \sqrt{\tau/T}$ where $\tau=1/R$ is the time to make one measurement.
By calculating $s$ for $T=1$\,s a fair comparison can be made of the precision of different schemes that measure the polarization of THz radiation, which have widely different sampling times, $\tau$ (Table \ref{TAB:comparison}).
As demonstrated above, the output ellipticity in our scheme depends on the set value of $\phi$, hence we measured the precision in the polarization angle $\psi$ produced by the 4-pixel emitter for a fixed value of $\phi$.
Here we define $\psi$ as the measured angle of the THz pulse relative to the $x$ direction.
We determined $\sigma_{\psi} = 0.47^{\circ}$ from 1000 repeated measurements of $E_x$ and $E_y$ at a single point in the time-domain, as reported in Fig.\ \ref{Figure4}(c).
The sampling time, $\tau=10$\,ms, in this case was limited by the time constant of the lock-in amplifier used to average the electro-optic signal, giving $N=100$ in 1\,s.
Thus $s_{\psi}=\sigma_{\psi}/\sqrt{100}=0.047^{\circ}$, which is competitive with the current state of the art.
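This figure of merit is simple to evaluate; the sketch below reproduces the $s_{\psi}$ column of Table \ref{TAB:comparison} from the quoted $\sigma_{\psi}$ and $\tau$ values (the dictionary keys are just row labels).

```python
import math

def std_error_1s(sigma, tau):
    """Standard error after T = 1 s of averaging: s = sigma * sqrt(tau / T)."""
    return sigma * math.sqrt(tau / 1.0)

# Rows of Table 1: (sigma_psi in degrees, sampling time tau in seconds).
entries = {
    "Yasumatsu2012": (0.56, 0.021),
    "Nemoto2014": (0.0057, 0.660),
    "Xu2020": (1.3, 0.062),
    "Peng2020": (0.38, 1.0),
    "this work": (0.47, 0.010),
}
s_psi = {label: std_error_1s(*row) for label, row in entries.items()}
```

The PEM-based scheme attains the smallest $s_{\psi}$, while the present scheme reaches $0.047^{\circ}$ with a much shorter sampling time.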
In Table \ref{TAB:comparison} we report our calculated values of $s_{\psi}$ for various polarization-resolved detection schemes reported in the literature.
To further lower $s_{\psi}$ and enhance precision it is desirable to either lower $\sigma$ by increasing the THz signal-to-noise ratio, as shown for high-field THz pulses from LiNbO$_3$,\cite{Nemoto2014} or to use schemes that increase $R$.
In conclusion, full electrical control of the ellipticity of broadband THz pulses was obtained using a 4-pixel interdigitated photoconductive antenna with an integrated achromatic waveplate.
High polarisation purities over the entire experimental bandwidth (0.2-2.5\,THz) were obtained, for both linear ($>97\%$) and circular ($>98\%$) polarisation states. Here the bandwidth was limited by the fs laser's pulse duration and increasing absorption at high frequencies in the Si prism.
Our scheme has the distinct advantage of allowing the user to electrically compensate for small misalignments of optical components, or the effects of optical components on the amplitude and phase of the THz radiation, to produce more accurate and pure polarisation states.
The versatile electrical control of THz polarization states using multi-pixel photoconductive emitters will find widespread application in industrial and commercial THz time-domain spectrometers, the majority of which use laser oscillators.
Finally, we introduced the standard error for a 1\,s measurement time as a convenient metric for the precision, facilitating a fair comparison with different schemes reported in the literature.
This metric can be applied universally across any experimental observable and allows the experimenter to discern the fastest, most precise scheme.
\begin{acknowledgments}
CDWM would like to thank the EPSRC (UK) for a PhD studentship. The authors would like to thank Hugh Thomas and Lucas Bartol-Bibb for technical assistance.
\end{acknowledgments}
\section{Introduction}
The physical description of magnetic degrees of freedom in a broad class of compounds is usually based on the well known Heisenberg-Dirac-van Vleck Hamiltonian
\cite{Auerbach:1994,LM_2011,Balents_Nature,stat-trans,Bishop_2012,lamas-capponi,Fisher_2013}.
In this context, the use of ab-initio techniques of electronic structure to determine the effective values of the exchange couplings ($J_{i}$) is a very useful tool for theoretical physicists.
This theoretical description, despite its simplicity, allows us to understand the basic ingredients that give rise to a wide variety of magnetic phases
\cite{Cabra_honeycomb_prb,Messio,Honecker2000a,Mosadeq,Mulder,Trumper1,Lamas2015b,Brenig2001a,Chubukov1991,Zhang2014,Oshikawa_2013}.
The successful determination of the magnetic parameters lies in the appropriate balance between the selection of the model and the spin configuration used in the
ab-initio calculations.
However, super-exchange pathways via bridging ligands may cause interactions between sites separated by a long distance to be large, forcing us to take very large unit cells.
The computational cost increases greatly with the unit-cell size, and the use of a large number of spin configurations becomes restrictive.
In order to compute the Heisenberg exchange coupling constants and determine the minimal model, we should be able to detect the set of configurations that gives
us the best determination of our model.
In this paper we present a strategy to determine this set of optimal configurations to be used in the ab-initio estimation of the energy.
In this way we are able to obtain a well-conditioned system of linear equations to determine the parameters of the magnetic model.
We apply the algorithms to determine the magnetic couplings corresponding to the compound Bi$_3$Mn$_4$O$_{12}$(NO$_3$)\cite{matsuda2010disordered}. In this compound the Mn$^{4+}$ ions form a honeycomb lattice.
Two layers of such honeycomb planar configurations are separated by bismuth atoms, forming an almost isolated bilayer structure separated by a long distance from the next bilayer structure.
Furthermore, the magnetic susceptibility data\cite{smirnova2009synthesis} and neutron scattering\cite{matsuda2010disordered} suggest two-dimensional magnetism, so it seems reasonable to model the system with a bilayer structure of Mn$^{4+}$ ions.
There is some experimental evidence that the interlayer coupling and the first- and second-neighbor intralayer couplings are the most relevant interactions, and that there could be strong competition between them\cite{matsuda2010disordered}.
The outline of the paper is the following. In Section \ref{sec:method} we state the criteria used to determine a suitable strategy for obtaining a well-conditioned set of equations for the exchange couplings.
This system defines a family of representative models.
In Section \ref{sec:application-bimono} we apply the method to define the family of magnetic models corresponding to the material Bi$_3$Mn$_4$O$_{12}$(NO$_3$).
In Section \ref{sec:conclusions} we present the conclusions and perspectives. Finally, in Appendices \ref{sec:svd}, \ref{app:appendix-alg} and \ref{app:visualbond} we present
the singular value decomposition, the detailed algorithm to optimize the set of magnetic configurations, and the visual interface for the scripts that implement the method.
\section{Coupling constants and Relevant Configurations}
\label{sec:method}
In this section, we discuss a method based on ab-initio calculation of total energy differences to estimate the coupling constants in an effective magnetic model.
We start by considering a certain atomic lattice involving magnetic atoms. Magnetism in matter is an intrinsically quantum phenomenon, requiring for its description a full quantum
treatment of the electronic degrees of freedom. An exact approach to such a problem is computationally unfeasible due to the huge size of its associated Hilbert space.
The usual strategy to tackle this problem is based on the relative weakness of the magnetic contribution to the energy, compared to those coming from the spatial degrees of freedom.
This allows us to approximate the true ground state of the system as a linear combination of Slater-determinant-like states
$$|\beta\rangle=|\{\varphi_i\},\{s^\beta_i\}\rangle_{SL}=\frac{1}{\sqrt{N}}\left||\varphi_1, s^\beta_1\rangle\ldots |\varphi_N, s^\beta_N \rangle\right|$$
that minimizes $E_{\beta}=\langle \beta |{\bf H}_{\rm full}|\beta\rangle$ for each fixed spin configuration $\{s^\beta_i\}$. By construction, these states define an orthogonal basis of the ground multiplet of the system. These optimizations can be performed in a relatively efficient way by means of Hartree-Fock or Density Functional Theory (DFT)-like methods\footnote{In DFT, the test wave functions are not exactly Slater determinants, but the argument holds: ground states associated with different spin configurations are orthogonal to each other.}. This approximation is justified if, for each $\{s_i\}$, the corresponding spectrum is gapped. In such a case, the true Ground State (GS) (and its low-lying excited states) can be obtained by diagonalizing
$$
{\bf H}_0= \sum_{\beta\beta'}|\beta\rangle\langle \beta| {\bf H}_{\rm full}|\beta'\rangle \langle \beta'|
$$
DFT and Hartree-Fock formalisms provide a method to evaluate (individual) diagonal elements of ${\bf H}_0$ in an efficient way. Notice that at this point ${\bf H}_0$ is
still, in principle, a huge matrix, so that even evaluating every diagonal entry is an unaffordable task. To go further, we will suppose that ${\bf H}_0$
can be approximated by a simpler model, depending on a relatively small number of parameters. The family of Heisenberg models
$${\bf H}_{ eff}[\{J_{\alpha}\}]=\sum_{\alpha=1}^{M} \frac{J_\alpha}{2}\sum_{(i,j)\in B_\alpha} \vec{\bf S}_i\cdot\vec{\bf S}_j+ J_0{\bf 1}$$
supplies a very versatile class of models, with a rich phase space, that do not break $SU(2)$ symmetry. Here, $B_\alpha$ are sets of equivalent pairs of sites in the lattice and $J_\alpha$ the corresponding
coupling constants ($J_0$ is a global energy offset). The coupling constants $J_{\alpha}$ can then be chosen in such a way that ${\bf H}_{ eff}$ has diagonal entries close to the computed diagonal elements of ${\bf H}_0$. Notice that the condition on all the diagonal entries of both Hamiltonians defines an overdetermined set of linear equations for $J_{\alpha}$; in general it will not be possible to find $J_{\alpha}$ giving a perfect match. On the other hand, as energies in DFT can only be estimated up to a finite accuracy $\Delta \varepsilon$, it makes sense to ask whether the diagonal elements of both matrices differ by less than $\Delta \varepsilon$. We then define the set of $\Delta \varepsilon$-\emph{compatible parameters}
\begin{equation}
\label{eq:compatibility}
{\cal C}_{\Delta \varepsilon}:= \{ \{J_\alpha\} / |\langle \beta|{\bf H}_{ eff}[\{J_\alpha\}]|\beta\rangle-E_{\beta}|<\Delta \varepsilon, \;\;\forall |\{s_i\}\rangle\}\,.
\end{equation}
Any element in ${\cal C}_{\Delta \varepsilon}$ generates an ${\bf H}_{eff}$ with diagonal elements compatible with ${\bf H}_0$ up to the tolerance $\Delta \varepsilon$. If ${\cal C}_{\Delta \varepsilon}$ is small enough, we can expect that ${\bf H}_{eff}$ leads to the same physical predictions for any choice of $\{J_\alpha\}$ in ${\cal C}_{\Delta \varepsilon}$.
Once we have a \emph{representative} choice for $\{J_\alpha\}$, we can deal with ${\bf H}_{eff}$ by means of different analytical and numerical methods\cite{Auerbach:1994},
ranging from boson maps\cite{Sachdev1990,Trumper2,Coleman,Oshikawa_2013,Cabra_honeycomb_prb,Trumper1,Mulder,Zhang_PRB_2013,Brenig2016}, path integrals\cite{lamas-capponi,Lamas2015}, exact diagonalization\cite{Honecker2000a}, DMRG\cite{Honecker2000a,Elias2017}, etc.
If the system involves just a very small number of \emph{magnetic} atoms, the set ${\cal C}_{\Delta \varepsilon}$ can be characterized by the evaluation of the energies of all the possible spin configurations. Since ${\bf H}_{eff}$ is linear in the coupling constants $J_\alpha$, we can rewrite Eq. \ref{eq:compatibility} as
$$
{\cal C}_{\Delta \varepsilon}= \{ \vec{J} / \|A \cdot \vec{J} -\vec{E}_0\|_{\infty}<\Delta \varepsilon \}\,,
$$
where $\vec{J}$ is a vector with components $J_\alpha$, $\vec{E}_0$ is a vector with components $E_{\beta}$, and $A$ is a matrix with coefficients
$$[A]_{\beta \alpha}=\frac{1}{2}\sum_{(i,j)\in B_\alpha}\langle \{s_i\}_\beta|\vec{S}_i\cdot\vec{S}_j |\{s_i\}_\beta\rangle$$
and
$$
\|\vec{v}\|_{\infty}=\max_i |\vec{v}_i|
$$
is the \emph{maximum norm} or \emph{infinity norm}\cite{Bhatia97}. As a result, if ${\cal C}_{\Delta \varepsilon}$ is not empty, it is a convex polytope, i.e., a convex set arising from the intersection of many half-spaces (see Fig. \ref{fig:compat1}).
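As a purely illustrative sketch (not part of the published scripts), the matrix $A$ and the polytope membership test can be written in a few lines of Python. The bond lists, spin length and tolerance below are assumptions for a toy collinear model, where $\langle \vec{S}_i\cdot\vec{S}_j\rangle \simeq S^2 s_i s_j$ for Ising-like configurations $s_i=\pm 1$:

```python
# Sketch: build the matrix A for collinear spin configurations and test
# membership in the compatibility polytope. All inputs are illustrative.
import numpy as np

def build_A(configs, bond_classes, S=0.5):
    """configs: (n_conf, n_sites) array of +-1 spins; bond_classes: one
    list of (i, j) site pairs per coupling J_alpha."""
    n_conf = configs.shape[0]
    A = np.zeros((n_conf, len(bond_classes) + 1))
    for b, bonds in enumerate(bond_classes):
        for i, j in bonds:
            # <S_i . S_j> ~ S^2 s_i s_j for collinear configurations
            A[:, b] += 0.5 * S**2 * configs[:, i] * configs[:, j]
    A[:, -1] = 1.0          # column multiplying the energy offset J_0
    return A

def in_polytope(J, A, E0, tol):
    """J is compatible iff ||A J - E0||_inf < tol."""
    return np.max(np.abs(A @ J - E0)) < tol
```

Each row of $A$ corresponds to one spin configuration, and the membership test is exactly the infinity-norm condition above.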
\begin{figure}
\includegraphics[width=8cm]{figs/compat1}
\caption{Color online. Compatibility regions for different values of $\Delta \varepsilon$. For some finite value $\Delta \varepsilon_0>0$,
the compatibility region becomes empty. Dashed lines represent the boundaries of the half-spaces (half-planes) defining the largest polytope.}
\label{fig:compat1}
\end{figure}
As the number of magnetic atoms becomes larger (say, $>6$), the number of configurations grows exponentially, and therefore the evaluation of all the constraints defining the compatibility polytope ${\cal C}_{\Delta \varepsilon}$ is no longer feasible. However, we can \emph{bound}
${\cal C}_{\Delta \varepsilon}$ by looking at a smaller set of configurations: defining ${\cal S}_n$ as a subset of $n$ elements from the set of all possible configurations, we can define
$$
{\cal C}_{\Delta \varepsilon}({\cal S}_n):= \{ \vec{J} / \|A' \cdot \vec{J} -\vec{E}'_0\|_{\infty}<\Delta \varepsilon \}
$$
where $A'=A'({\cal S}_n)$ and $E_0'=E_0'({\cal S}_n)$ are built in such a way that only the rows corresponding to the spin configurations in ${\cal S}_n$ are preserved.
These sets satisfy ${\cal C}_{\Delta \varepsilon}({\cal S}_n)\subset {\cal C}_{\Delta \varepsilon}({\cal S}_{n'})$ if ${\cal S}_{n'}\subset {\cal S}_{n}$,
and ${\cal C}_{\Delta \varepsilon}\subset {\cal C}_{\Delta \varepsilon}({\cal S}_n)$, so that as we increase the number of evaluated configurations, the compatibility set does not increase its size.
Typically, assuming that the spectrum of ${\bf H}_{0}$ can be accurately represented by a spin Hamiltonian with a moderately small number of couplings, most of the configurations provide no information, or just redundant information. In this way, a very tight bound for ${\cal C}_{\Delta \varepsilon}$ can be obtained by considering just a small set of configurations, with a size of the order of the number of free parameters in the model.
For this, it is crucial to pick the set of configurations in an optimal way: otherwise, ${\cal C}_{\Delta \varepsilon}({\cal S}_n)$ can remain a much larger region than ${\cal C}_{\Delta \varepsilon}$, even for quite large $n$ (see Fig. \ref{fig:convergence}).
\begin{figure}
\includegraphics[width=8cm]{figs/convergence1}
\caption{Color online. Convergence of the compatibility region with the number of computed configurations. Top: compatible model, with (a) a good choice of configurations and (b) a bad one. Bottom: non-compatible model, with (c) a good choice of configurations, for which the incompatibility is verified, and (d) a bad choice, for which the incompatibility is not apparent.}
\label{fig:convergence}
\end{figure}
\subsection{Choosing an optimal set of configurations}
One problem with optimizing the size of ${\cal C}_{\Delta \varepsilon}({\cal S}_n)$ is that, in order to evaluate it, it is necessary to know the energy of each configuration, which in general requires a lot of computing time. One strategy to avoid this issue is to make use of the inequality
\begin{equation}
\frac{\|\vec{x}\|_\infty}{\sqrt{n}} \leq \frac{\|\vec{x}\|_2}{\sqrt{n}} \leq \|\vec{x}\|_\infty \leq \|\vec{x}\|_2
\label{eq:metricbound}
\end{equation}
where $\|\vec{x}\|_2=\sqrt{\sum_{i=1}^n \vec{x}_i^2}$ is the \emph{Euclidean norm} and $n$ is the number of components of the vector\cite{Bhatia97}. With this in mind, we define
\begin{equation}
\label{eq:compatibility2}
{\cal C}^{(2)}_{\Delta \varepsilon}({\cal S}_n):= \{ \vec{J} / \|A' \cdot \vec{J} -\vec{E}'_0\|_{2}<\Delta \varepsilon \}\,,
\end{equation}
where again, $A'=A'({\cal S}_n)$ and $E_0'=E_0'({\cal S}_n)$ are the restrictions of $A$ and $\vec{E}_0$ to the rows associated to ${\cal S}_n$. These sets define ellipsoids centered at the minimum of the quadratic form
\begin{equation}
\chi^2(\vec{J})= \|A'\cdot\vec{J}-\vec{E}_0'\|_2^2
\label{eq:chi}
\end{equation}
and with main axes defined by the Singular Value Decomposition (SVD) of $A'$:
$$
A'=U \Sigma V^t
$$
where $U^tU=V^tV ={\bf 1}_{M+1}$ and $\Sigma={\rm diag}(\sigma(A')) \in \mathbb{R}^{(M+1) \times (M+1)}$ is a diagonal matrix containing the singular values of $A'$. The size of ${\cal C}^{(2)}_{\Delta \varepsilon}({\cal S}_n)$ is then bounded by
$$
\|{\cal C}_{\Delta \varepsilon}^{(2)}({\cal S}_n)\| < \Delta \varepsilon \;cn(A'({\cal S}_n))
$$
where $cn$ is the \emph{condition number} of $A'$ (here, the inverse of its smallest singular value), i.e.,
$$
cn(A')=\max_{s \in \sigma(A')}\frac{1}{s}
$$
so that the size of the set depends not on $\vec{E}_0'$ but only on $A'$. This allows us to evaluate it \emph{before} running any expensive DFT/ab-initio simulation.
The usefulness of Def. (\ref{eq:compatibility2}) for bounding the size of (\ref{eq:compatibility}) comes from Eq. (\ref{eq:metricbound}), which leads to
$$
{\cal C}^{(2)}_{\Delta \varepsilon}({\cal S}_n) \subset {\cal C}_{\Delta \varepsilon}({\cal S}_n) \subset {\cal C}^{(2)}_{\Delta \varepsilon \sqrt{n}}({\cal S}_n)
$$
in a way that ${\cal C}_{\Delta \varepsilon}\neq \emptyset$ if ${\cal C}^{(2)}_{\Delta \varepsilon}({\cal S}_n)\neq \emptyset$, and
${\cal C}^{(2)}_{\Delta \varepsilon\sqrt{n}}({\cal S}_n)$ bounds ${\cal C}_{\Delta \varepsilon\sqrt{n}}({\cal S}_n)$ (see Fig. \ref{fig:convergence2}).
From the previous discussion, the strategy to obtain a good set of configurations to bound the compatibility zone is to look for ${\cal S}_n$
that minimizes the cost function
\begin{equation}
C({\cal S}_n):= \sqrt{n}\,cn(A'({\cal S}_n))
\label{eq:costfunction}
\end{equation}
Formally, the problem of finding the absolute minimum of $C({\cal S}_n)$ is hard, since it typically presents many relative minima with
approximately the same cost. However, what we actually need is just one of these relative minima, which can be efficiently found by the algorithm
presented in Appendix \ref{app:appendix-alg}.
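To fix ideas, a simple greedy single-swap search is sketched below. This is a hedged, simplified stand-in for the algorithm detailed in the appendix, not the actual implementation; it minimizes $C({\cal S}_n)=\sqrt{n}/\sigma_{\min}(A'({\cal S}_n))$ over $n$-row subsets of a precomputed matrix $A$:

```python
# Illustrative greedy minimization of the cost C(S_n) = sqrt(n)/sigma_min(A').
# Only the matrix A is needed, so this runs before any DFT simulation.
import numpy as np

def cost(A_sub):
    """sqrt(n) times the inverse of the smallest singular value."""
    s = np.linalg.svd(A_sub, compute_uv=False)
    return np.sqrt(A_sub.shape[0]) / s[-1] if s[-1] > 1e-12 else np.inf

def greedy_subset(A, n):
    """Pick n rows (configurations) of A by accepting single-row swaps
    that lower the cost, until no swap improves it (a relative minimum)."""
    rows = list(range(n))
    best = cost(A[rows])
    improved = True
    while improved:
        improved = False
        for k in range(n):
            for cand in range(A.shape[0]):
                if cand in rows:
                    continue
                trial = rows.copy()
                trial[k] = cand
                c = cost(A[trial])
                if c < best:
                    rows, best, improved = trial, c, True
    return rows, best
```

Since the cost strictly decreases at each accepted swap over a finite set, the loop terminates at a relative minimum, which is all the method requires.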
\begin{figure}
\includegraphics[width=8cm]{figs/convergence2}
\caption{Color online. Ellipsoid bound for ${\cal C}_{\Delta \varepsilon}({\cal S}_n)$. a) The inclusion relation among the three sets. b) If the tolerance is reduced, ${\cal C}^{(2)}_{\Delta \varepsilon}({\cal S}_n)$ becomes empty, but the system is still compatible; in this case, $J_\alpha^{LS}$ does not belong to the compatibility zone. c) If the tolerance is reduced further, the model becomes incompatible, but ${\cal C}_{\Delta \varepsilon \sqrt{n}}({\cal S}_n)$ is still non-empty. d) For fixed tolerance, as the number of configurations grows, ${\cal C}^{(2)}_{\Delta \varepsilon}({\cal S}_n)$ becomes empty,
while ${\cal C}^{(2)}_{\Delta \varepsilon \sqrt{n}}({\cal S}_n)$ grows.}
\label{fig:convergence2}
\end{figure}
\subsection{Estimation of $J_{\alpha}$ and its uncertainties}
Once an optimal set of configurations is determined, the corresponding magnetic energies can be estimated by means of DFT simulations. The next step is then to find a representative value $J^{(0)}_{\alpha}$ consistent with them. In the standard approach, $J^{(0)}_{\alpha}$ is estimated by
$$J_\alpha^{LS}={\rm argmin}_{J_\alpha} \chi^2(J_{\alpha})$$
the least-squares condition. This approach is valid if ${\cal C}^{(2)}_{\Delta \varepsilon}({\cal S}_n)\neq \emptyset$ for the considered tolerance, since in that case $J^{LS}_{\alpha}$ belongs to ${\cal C}_{\Delta \varepsilon}({\cal S}_n)$. This can always be achieved by choosing a large enough value for the tolerance. However, in that case, the uncertainty in the estimated coupling constants could be overestimated relative to the true accuracy of the ab-initio simulation. This is a problem because the accuracy of the simulations is usually close to the energy scale of the coupling constants being calculated. As a result, the estimated values of $J_{\alpha}$ are smaller in magnitude than the uncertainties, so that at the end of the day we are not able to state even the sign of the couplings.
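As an illustration (independent of the published scripts), the least-squares estimate can be obtained with a standard linear solver; the matrix and energies below would come from the configuration matrix $A'$ and the DFT energies $\vec{E}'_0$:

```python
# Sketch: least-squares estimate J^LS = argmin_J ||A J - E0||^2.
import numpy as np

def least_squares_couplings(A, E0):
    """Solve the overdetermined linear system in the least-squares sense;
    A has one row per configuration, one column per coupling (plus offset)."""
    J, *_ = np.linalg.lstsq(A, E0, rcond=None)
    return J
```

When the rows are chosen as in the previous subsection, the small condition number guarantees that this solve is numerically well behaved.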
To get a more realistic estimation, a Monte Carlo sampling of the region ${\cal C}_{\Delta \varepsilon}({\cal S}_n)$ can be performed in order to obtain the limit values of the compatible $J_{\alpha}$. An efficient strategy consists of exploring a set of random points with a Gaussian distribution around $J^{LS}_{\alpha}$, with correlation matrix $\lambda (A^t A)^{-1}$ (see Fig. \ref{fig:mcdeterm}).
\begin{figure}
\includegraphics[width=8cm]{figs/mcdet}
\caption{Color online. Monte Carlo estimation of the bounds for $J_{\alpha}$. Points are spread around the center of the ellipsoid with a Gaussian distribution having a correlation matrix proportional to $(A'^t A')^{-1}$. The compatibility region is sampled by keeping those points that belong to it.
The dashed box approximately bounds the compatibility region.}
\label{fig:mcdeterm}
\end{figure}
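A minimal sketch of this sampling scheme, under illustrative choices of the scale $\lambda$ and the tolerance (neither taken from the paper's implementation), could read:

```python
# Sketch of the Monte Carlo bound estimation: sample Gaussian proposals
# around the least-squares solution with covariance lam*(A^T A)^{-1} and
# keep the points inside the compatibility region. `lam` and `tol` are
# assumed, tunable parameters.
import numpy as np

def mc_bounds(A, E0, tol, n_samples=20000, lam=0.001, seed=0):
    rng = np.random.default_rng(seed)
    J_ls, *_ = np.linalg.lstsq(A, E0, rcond=None)   # center of the ellipsoid
    cov = lam * np.linalg.inv(A.T @ A)              # proposal covariance
    samples = rng.multivariate_normal(J_ls, cov, size=n_samples)
    # keep samples with ||A J - E0||_inf < tol (the compatibility polytope)
    keep = np.max(np.abs(samples @ A.T - E0), axis=1) < tol
    inside = samples[keep]
    if inside.size == 0:
        raise RuntimeError("no accepted samples: increase tol or decrease lam")
    return inside.min(axis=0), inside.max(axis=0)   # per-coupling bounds
```

The returned minima and maxima over the accepted points give the limit values of the compatible couplings, i.e., the dashed box of Fig. \ref{fig:mcdeterm}.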
\subsection{Improving a model}
If the accuracy of the DFT energy estimation is high enough, it could happen that the proposed model becomes incompatible. In that case, a more sophisticated model may be required, for instance by splitting into two sets of couplings that were initially assumed equal, or by adding interactions with more distant neighbors. For this new model, the optimal set of configurations to determine the new coupling constants can be different. However, it can be expected that there is a set of configurations, including those in the optimal set for the simpler model, that is also a relative minimum. This allows us to reuse the energies obtained for the simpler model, reducing the computational cost of estimating the couplings of the new model. A way to achieve this consists of first optimizing $C({\cal S}_{n}\cup {\cal S}'_{n'})$ with respect to ${\cal S}'_{n'}$ for fixed $n'$, where ${\cal S}_n$ is the optimal set for the simpler model.
Due to the factor $\sqrt{n}$ in $C({\cal S}_n)$, this result can be improved by reducing the size of ${\cal S}_n\cup {\cal S}'_{n'}$, which can be accomplished by dropping elements from the set \emph{one by one}.
\subsection{Visualbond Spectrojotometer}
The method proposed here is suitable for automation. To this end, an open-source project has been developed\cite{visualbond} to provide the community with a software tool that performs each step of the analysis. The tool consists of a set of Python libraries that help build magnetic models from structural data of the target magnetic compound (in CIF format); once the model is defined, they determine the optimal magnetic configurations to be evaluated and, after the ab-initio simulations are performed, estimate the corresponding coupling constants including their error bounds.
\section{Application: Magnetic model for Bi$_3$Mn$_4$O$_{12}$(NO$_3$)}
\label{sec:application-bimono}
The synthesis of the material bismuth oxynitrate, {Bi$_3$Mn$_4$O$_{12}$(NO$_3$)}, obtained by Smirnova
\textit{et al.}\cite{smirnova2009synthesis}, has given a great impulse to the study of two dimensional effective antiferromagnetic models on the bilayer honeycomb lattice.
Here, the Mn$^{4+}$ ions form a honeycomb lattice, grouped in pairs and separated by a large distance by bismuth atoms.
For this material, the bilayer honeycomb lattice provides an appropriate geometry on which to build an effective Hamiltonian capable of describing its magnetic properties.
The Heisenberg model on the bilayer honeycomb lattice has attracted plenty of theoretical studies in recent years\cite{Fisher_2013,Arlego201415,Cabra_honeycomb_prb,Cabra_honeycomb_2,fouet1975investigation,kandpal2011calculation,Richter2017,Mosadeq,Oitmaa_2012}.
The richness of this model makes it very interesting from the theoretical point of view.
The two-dimensional nature of the effective model is reinforced by magnetic susceptibility data.
Also, no evidence of long-range ordering has been observed down to 0.4 K\cite{smirnova2009synthesis,matsuda2010disordered,okubo2010high,azuma2011frustrated}.
On the other hand, recent experimental studies\cite{matsuda2010disordered} have suggested that the inter-layer coupling,
as well as the in-layer nearest-neighbor and, to a lesser extent, next-nearest-neighbor couplings, are dominant.
This characteristic of the model makes ab-initio calculation a good tool to understand the mechanisms involved, by determining an appropriate
effective model.
Recently, the magnetic model of Bi$_3$Mn$_4$O$_{12}$(NO$_3$)\ has been estimated\cite{Alaei.2017} using fifty-four different spin configurations, with the energies determined by ab-initio calculations with an error $\Delta E= 0.5$ meV.
As the DFT calculation of the energy is computationally demanding, it is convenient to reduce the number of spin configurations needed to obtain the magnetic couplings. Using the strategy presented in the previous sections we determine a set of
optimal configurations and calculate the magnetic couplings.
In Table \ref{tab:jotas} we present the couplings obtained in Ref.~\cite{Alaei.2017} using fifty-four energy calculations, together with the results using our optimal eleven configurations. All the couplings agree within the errors.
Using our strategy to select the optimal configurations before the ab-initio determination of the energies can save many hours of machine time and allows us to work with larger unit cells.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$J_{i}/|J_{1}|$ & $\mbox{11 configurations}\atop{(|J_1|=0.346{\rm meV})}$ & $\mbox{54 configurations}\atop{(|J_1|=0.349{\rm meV})}$ \\ \hline
$J_{1}/|J_{1}|$ & $-1.0 \pm 0.1$& $-1.0 \pm 0.06$\\ \hline
$J_{2}/|J_{1}|$ & $-0.12 \pm 0.06$& $-0.11 \pm 0.04$\\ \hline
$J_{3}/|J_{1}|$ & $-0.09 \pm 0.07$& $-0.09 \pm 0.05$\\ \hline
$J_{0}/|J_{1}|$ & $-0.3 \pm 0.21$& $-0.3 \pm 0.12$\\ \hline
$J_{1c}/|J_{1}|$ & $-0.1 \pm 0.09$& $-0.11 \pm 0.06$\\ \hline
$J_{2c}/|J_{1}|$ & $-0.05 \pm 0.07$& $-0.03 \pm 0.04$\\ \hline
$J_{3c}/|J_{1}|$ & $-0.07 \pm 0.08$& $-0.06 \pm 0.05$\\ \hline
\end{tabular}
\caption{\label{tab:jotas}
Coupling constants obtained from ab-initio calculations using our optimal eleven configurations, compared with those obtained from the fifty-four configurations of Ref.~\cite{Alaei.2017}.
}
\end{center}
\end{table}
The optimal subset of eleven configurations that we have found with the method developed in Sec \ref{sec:method} is
\begin{eqnarray}
|1\rangle &=& | \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\;\rangle \\ \nonumber
|2\rangle &=& | \uparrow\; \uparrow\; \downarrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \uparrow\; \downarrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \downarrow\;\rangle \\ \nonumber
|10\rangle &=& | \uparrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \downarrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\;\rangle \\ \nonumber
|17\rangle &=& | \downarrow\; \downarrow\; \downarrow\; \downarrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \downarrow\; \downarrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\;\rangle \\ \nonumber
|18\rangle &=& | \downarrow\; \downarrow\; \downarrow\; \downarrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \downarrow\; \downarrow\; \downarrow\; \downarrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\;\rangle \\ \nonumber
|24\rangle &=& | \uparrow\; \uparrow\; \downarrow\; \uparrow\; \uparrow\; \uparrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \uparrow\; \downarrow\; \uparrow\; \uparrow\; \uparrow\; \downarrow\;\rangle \\ \nonumber
|28\rangle &=& | \uparrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \downarrow\;\rangle \\ \nonumber
|34\rangle &=& | \downarrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \uparrow\; \downarrow\; \downarrow\; \downarrow\; \downarrow\; \downarrow\; \uparrow\; \uparrow\; \downarrow\; \uparrow\; \uparrow\;\rangle \\ \nonumber
|41\rangle &=& | \downarrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \downarrow\; \downarrow\; \downarrow\; \uparrow\; \downarrow\; \uparrow\; \downarrow\; \downarrow\; \downarrow\; \uparrow\; \uparrow\;\rangle \\ \nonumber
|45\rangle &=& | \downarrow\; \uparrow\; \uparrow\; \downarrow\; \downarrow\; \uparrow\; \downarrow\; \downarrow\; \downarrow\; \downarrow\; \downarrow\; \downarrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\;\rangle \\ \nonumber
|47\rangle &=& | \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \uparrow\; \downarrow\; \downarrow\; \downarrow\; \downarrow\; \downarrow\; \downarrow\; \downarrow\; \downarrow\;\rangle
\end{eqnarray}
where the configurations are labeled according to those presented in ref. \onlinecite{Alaei.2017} and sites are labeled as in Fig. \ref{fig:numeracion}.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.9\columnwidth]{figs/numeracion-espines2}
\caption{Labeling of the Mn atoms in the unit cell and the coupling constants considered in the model.}
\label{fig:numeracion}
\end{center}
\end{figure}
\section{Conclusions}
\label{sec:conclusions}
In this paper we have presented a method to find an optimal set of configurations with which to determine the couplings of a magnetic model by means of
ab-initio calculations. This strategy enhances the use of ab-initio calculations to establish the parameters of an effective magnetic model consistent with
the calculated energies. We applied the method to calculate the family of coupling constants consistent with the ab-initio energies. We showed that, by taking an optimal set of magnetic configurations, it is possible to reduce
the number of ab-initio calculations needed to determine the model with a given error, and we obtained results for the coupling constants in agreement with previous calculations.
Finally, we make available free software implementing the algorithms described in the present work.
\section*{Acknowledgments}
This research was partially supported by CONICET (Grant No. PIP 0691, PIP0747, PIP0720 and P-UE 22920170100066CO), UNLP (Grant No. 11/X678, 11/X680, 11/X708, 11/X788, 11/X792, 11/X806), ANPCyT (Grant No. PICT 2012-1724, 2013-2616, 2016-4083), UNNOBA (Grant No. SIB0176/2017), and Proyecto Acelerado de C\'alculo 2017 de la Red Nacional de Computaci\'on de Alto Desempe\~no (SNCAD-MINCyT) - HPC Cluster, Rosario - Argentina.
\section{Introduction}
Parallel software frameworks facilitate the separation of concerns between functionality and parallelism.
Cilk \cite{Blumofe:1995}, X10 \cite{Charles:2005} and the Fork/Join pool from the Java Development Kit (JDK) \cite{JavaSE:2015}, among others, provide an interface to explicitly parallelize applications by abstracting away direct interactions with the underlying architectures.
The challenge with explicit parallelism is to create an efficient application: a Pareto-optimal point between introduced overheads and achieved performance.
The need for increased productivity has led to a boom in the number and popularity of parallel frameworks.
These are deployed in various combinations creating rich multi-layered software stacks.
Consequently, the flow of information from the application down to the code generator passes through a number of intermediate steps in which the information is constantly abstracted and/or optimized.
Compiler optimization principles such as loop invariant code motion, inlining and scalar replacement \cite{Muchnick:1997}, were developed in order to enhance performance by achieving better machine code quality.
They rely on control and data dependencies contained within an intermediate representation.
Applications based on parallel frameworks are composed of smaller tasks with few explicit data dependencies.
However, many of these frameworks contain the semantic information required to infer these dependencies and perform optimizations that are not detected in the original form.
Dynamic compilers are not designed to exploit the inferred data dependencies inherent in parallel software frameworks.
Furthermore, since they operate on fine-grain abstractions (methods), they do not have a full picture of the application and thus cannot infer valuable knowledge for further optimizations.
This paper provides a case study where both the application and the parallel framework semantics are examined together in order to detect missing optimization opportunities.
After identifying such opportunities, we apply familiar compilation techniques on current parallel frameworks in order to bridge the semantic gap between application logic and the programmability of frameworks in a co-designed manner.
The semantics of the framework were used to design an optimizer that improved the performance of applications running on top of it with no user involvement while maintaining software engineering principles.
This paper makes the following contributions:
\begin{enumerate}
\item It introduces MR4J, a lightweight Java-based MapReduce framework for multicore architectures.
\item It demonstrates a co-designed optimizer where code transformations are automatically applied to enhance the performance of running applications.
\item It provides an in-depth comparative performance analysis of MR4J and other shared-memory MapReduce frameworks.
\end{enumerate}
\section{MapReduce: A Case Study}
MapReduce is a popular framework for regular data-parallel applications, originally designed to run on distributed systems.
Due to its efficiency and scalability, recent efforts have provided implementations for multicore architectures.
As many `cloud' applications do not necessarily require any more resources than a single node \cite{Appuswamy:2013}, multicore implementations offer a practical alternative.
In general, MapReduce requires the provision of two tasks, each with a well-defined objective; making it an ideal framework to demonstrate optimizations based on implicit semantic information of an abstraction.
This paper introduces MR4J, a lightweight implementation of MapReduce for multicore architectures, to demonstrate the applicability of the abstraction within the Java programming language.
MR4J was designed to supplement existing frameworks in order to take advantage of the rich Java Application Programming Interface (API), the portability of the Java Virtual Machine (JVM) across different architectures, and recent efforts in compiler abstractions \cite{Duboscq:2013, Wurthinger:2013}.
Managed runtime languages, in particular Java, employ automatic memory management techniques by means of garbage collectors (GC) \cite{Jones:2011} that relieve users of the difficult and error-prone task of manual memory management.
This, however, introduces a performance overhead characteristic of managed runtime languages, which is tackled by the optimizer introduced in this paper in the context of MR4J.
The performance of MR4J is evaluated against equivalent state-of-the-art C and C++ frameworks in order to quantify the performance/productivity trade-off.
The design and implementation of the optimization create new opportunities for defining a standard methodology for applying similar techniques to other parallel frameworks.
\subsection{What is MapReduce?}
MapReduce transforms input data into a collection of (key, value) pairs.
In order to achieve this, the input is split and individually passed as an argument to the map method.
The \textit{map} method emits intermediate (key, value) pairs that are collected by the framework and consequently grouped for the reduce phase.
The \textit{reduce} method combines all the intermediate values associated with each key into the (key, value) pairs returned as the result.
The input data, when split, is assumed to be independent, so the benefit of this approach is that the execution of each map and reduce method can be performed in parallel.
\begin{figure}
\centering
\includegraphics[trim = 0mm 50mm 0mm 0mm, clip, width=80mm]{mapreduce.pdf}
\caption{Illustration of MapReduce for a word count application, used as a running example.}
\label{fig:mapreduce}
\end{figure}
\begin{figure}
{\scriptsize
\begin{verbatim}
public class WordCount {
static final Pattern WORD = Pattern.compile("[A-Z][A-Z']*");
final Mapper<S, S, I> mapper = new Mapper<S, S, I>() {
public void map(S input, Emitter<S, I> emitter) {
Matcher words = WORD.matcher(input.toUpperCase());
while (words.find()) {
emitter.emit(words.group(), 1);
}
}
};
final Reducer<S, I> reducer = new Reducer<S, I>() {
public void reduce(S key,
List<I> values,
Emitter<S, I> emitter) {
int sum = 0;
for (I value : values) {
sum += value;
}
emitter.emit(key, sum);
}
};
public List<KeyValue<S, I>> run(List<S> input) {
MapReduce<S, S, I> mrj = new MapReduce<>(mapper, reducer);
return mrj.run(input);
}
} \end{verbatim} }
\caption{Implementation of the word count application using MR4J.}
\label{fig:implementation}
\end{figure}
Figure \ref{fig:mapreduce} illustrates the map and reduce methods for a word count application and the information flow between them.
Figure \ref{fig:implementation} contains a working implementation of this running example based on MR4J, introduced in this paper.
As depicted in Figure \ref{fig:mapreduce}, the map method receives a sentence as an argument and splits it into individual words (each with an initial count of one).
Each word is then emitted into the framework where the individual counts are collected for each unique word.
The word and its counts are consequently passed to the reduce method as its arguments.
The reduce method, in turn, accumulates the values to form the final count for the word which is emitted as the result.
\subsection{Related Frameworks}
The application of MapReduce as an abstraction spans web analytics \cite{Dean:2008}, machine learning \cite{Mahout:2013}, and databases \cite{MongoDB:2013}.
Due to its flexibility in targeting different hardware architectures, there is a wide range of implementations ranging from distributed to multicore deployments.
\subsubsection{Distributed Networks (Clusters/Clouds)}
Google coined the MapReduce name \cite{Dean:2008} and took the first steps to popularize the framework by providing an API to automatically split and distribute the input data and the execution across a cluster of processing nodes.
The API, written in C++, hides many aspects of the underlying parallelism (e.g. the scheduling, data distribution,
and fault tolerance) and relies on the Google File System (GFS) to distribute the data \cite{Ghemawat:2003}.
Dean \textit{et~al.} further refined the abstraction by adding a new method to \textit{combine} intermediate values on a processing node.
This partially reduces values associated with each key and minimizes data transfers before the reduce phase.
Hadoop \cite{Hadoop:2013}, an open source implementation of the MapReduce framework in Java, is implemented in a similar manner to the Google MapReduce framework, including the same refinements.
Hadoop uses Java interfaces to define the map and reduce methods using generic types, allowing flexible, yet typed, parameters for the (key, value) pairs.
It is also possible to configure the framework to utilize multicore systems.
\subsubsection{Multicore Architectures}
Phoenix 1.0 \cite{Ranger:2007}, Phoenix 2.0 \cite{Yoo:2009} (written in C) and Phoenix++ 1.0 \cite{Talbot:2011} (written in C++) utilize the principles of the MapReduce framework and substitute the communication strategies of clusters with shared-memory buffers.
This approach replaces worker nodes with threads to execute the tasks, minimizing the overheads of the framework.
The principal aim of the project is to provide ``an efficient implementation on shared-memory systems that demonstrates its feasibility'' \cite{Ranger:2007}.
The use of threads and shared-memory enables optimizations for data locality and, with some risk to correctness, shared mutable state.
The popularity of MapReduce has encouraged implementations for different architectures and because of the complexity of memory management, the API is restrictive and closer to the original concept proposed by Dean \textit{et~al.} \cite{Dean:2008}.
\begin{figure}
\centering
\includegraphics[trim = 0mm 75mm 0mm 0mm, clip, width=80mm]{mapcombine.pdf}
\caption{Illustration of the creation and management of (key, value) pairs in MapReduce for the word counting example with a combiner replacing the reduce method.}
\label{fig:mapcombine}
\end{figure}
\subsection{Performance vs. Programmability}
A common feature of all the existing MapReduce frameworks is the acknowledgment that, without modifications to its purest form, performance is limited.
For example, the combiner method exists to reduce the size of data in the intermediate (key, value) collection.
As the implementation and further manual tuning remain external to the framework, there is an assumption of familiarity with parallel programming, a discipline which is known to be challenging.
Implementation of combining functions in the Phoenix frameworks improves the performance but introduces a deterioration in programmability.
Phoenix adds a new function prototype that, when implemented and supplied as an argument to the framework, incrementally combines intermediate values in a small buffer to a single value in order to prevent the allocation of new memory for the collector.
Although it improves execution time, it often duplicates the code written by the user.
This issue is further compounded by the use of void pointers for `generic' data types in the C programming language.
Casting and dereferencing void pointers increases the risk of runtime errors that can be detected at compile time in other languages.
Phoenix++ addresses this by using template classes in its C++ framework implementation.
It takes a different approach by introducing modularity and the idea of containers and combiners, having the effect of embedding the user code at the heart of the framework.
However, there is an assumption that the user is aware of the available containers and the best selection is known before compilation.
An intimate understanding of the internal workings of the framework is required if a new container is needed for an application.
Moreover, some configurations require tuning at compile time, restricting the data size at runtime.
In both these frameworks the development of optimizations impacts the programmability of the framework.
The objective in implementing a framework in Java is to eliminate the need for the user to write code beyond the functionality of the application; addressing the programmability and assessing the performance.
\subsection{MapReduce for Java (MR4J)}
To evaluate the capabilities of MapReduce on the JVM, MR4J has been developed.
The design principles behind MR4J are:
1) To maximize the use of standard Java libraries and exclude the use of native code to maintain portability across hardware architectures and operating systems.
2) To create a minimal API and return to the simplicity of the original Google implementation of MapReduce in order to encourage the user to concentrate on algorithmic development rather than ad-hoc parallelization.
3) To keep the implementation simple and encapsulate the internal working of the framework exposing only the fundamental API elements.
4) To target productivity while assessing performance in a transparent (to the programmer) manner with the implemented integrated optimizer.
At the center of MR4J's design are two elements, the scheduler and the collector of intermediate (key, value) pairs.
The \texttt{ForkJoinPool} class introduced in JDK 1.7 provides a clean, off-the-shelf scheduler focusing on lightweight tasks executing on worker threads accessed from a work-stealing queue \cite{Lea:2000}.
This is comparable to the scheduling approach of Phoenix and removes the need to implement a new scheduler.
In the existing frameworks the collection of intermediate (key, value) pairs is local to each worker thread and not directly transferable to Java tasks.
Phoenix demonstrates the flexibility of using a hash table for that purpose and MR4J selected the same approach.
Once the map phase is complete the values are passed as an argument into the reduce method as a \texttt{List} interface for user manipulation.
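The collector strategy described above can be sketched in plain Java. The following is a minimal illustration, not MR4J's actual API: the class and method names are assumptions, and real map tasks would call \texttt{emit} concurrently from \texttt{ForkJoinPool} worker threads.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Illustrative sketch of a thread-safe intermediate (key, value) collector
// following the hash-table strategy described in the text.
class IntermediateCollector<K, V> {
    private final ConcurrentMap<K, List<V>> table = new ConcurrentHashMap<>();

    // Called from map tasks running on worker threads.
    void emit(K key, V value) {
        // A new key instantiates a new list; existing keys append to theirs.
        List<V> values = table.computeIfAbsent(key, k -> new ArrayList<>());
        synchronized (values) {   // the lists themselves are not thread-safe
            values.add(value);
        }
    }

    // After the map phase, each entry is handed to the reduce method
    // as a key plus a List of its intermediate values.
    ConcurrentMap<K, List<V>> entries() {
        return table;
    }
}
```

The \texttt{computeIfAbsent} call makes the new-key path atomic, so only the per-key list append needs additional synchronization.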
\section{Optimization}
\begin{figure*}
\centering
\includegraphics[trim = 0mm 7mm 0mm 5mm, clip, width=145mm]{transformation.pdf}
\caption{Transformation of the reduce method for a word count application using MR4J.}
\label{fig:transformation}
\end{figure*}
The concept of a \textit{combiner} method to improve the locality, while reducing data, was first introduced in the original Google MapReduce framework \cite{Dean:2008}.
Its purpose is to combine emitted values locally on a processing node in order to limit the data transferred before and during the reduce phase.
In the multicore implementations, with direct access to all (key, value) pairs, it is possible to eliminate the reduce phase altogether.
Figure \ref{fig:mapcombine} illustrates how the word counting example can achieve this with a simple accumulator (an initial value of zero is assumed).
In related frameworks this optimization is manual and it is under the responsibility of the user to implement it.
Various combine and reduction algorithms have similar characteristics to the one explored in this example and therefore they can benefit from the automatic optimization explored in this paper.
This improvement will:
1) limit the source code written;
2) reduce the possibility of errors; and
3) improve performance of benchmarks where a combine method is feasible but not implemented.
The dynamic compiler is not able to optimize this case due to the semantic distance between the map and reduce methods.
They run in two phases of operation and are both embedded in tasks running in distinct time frames.
Consequently, the dynamic compiler will never see the interaction between the generation and reduction of intermediate values.
The developed optimizer is aware of this fact and, by rewriting bytecode, enables the dynamic compiler to further improve the generated machine code, completely transparently to the user.
\subsection{MR4J Modifications}
Figures \ref{fig:mapreduce} and \ref{fig:mapcombine} illustrate the desired transformations in the context of MR4J in order to replace the reduce execution flow with combining.
The primary change is to provide an intermediate (key, value) pair collector that is aware of combining values; the intermediate value is held in a private encapsulating object (a \texttt{Holder}).
The same collector strategy is employed, the thread-safe hash table, with a different implementation of the emitter interface provided to the map method. Originally a new key would instantiate a new list to collect values.
In the optimized execution flow, a new key will instantiate a new holder and the value will be combined with the intermediate value held.
Before the results are returned to the user a finalization method will convert the intermediate value into the resulting value.
\subsubsection{Runtime Transformation}
The transformation of code during class loading is detailed in Figure \ref{fig:transformation}.
The reduce method is analyzed to create an intermediate representation that identifies three code fragments that will map onto the three methods required to implement the combiner in MR4J.
The purpose of each generated method is:
\texttt{Holder initialize();} provides an initial intermediate representation for values as a holder type.
In the case of all types it will provide a mutable boxing class.
\texttt{void combine(Holder, V);} contains the code from the reduce method that implements the combining.
The mutable value in the holder is modified to include the information required from the emitted value.
\texttt{V finalize(Holder);} converts the intermediate representation of the value into its final form.
Due to the implementation of Java generics, the \texttt{combine} and \texttt{finalize} methods also have a generated synthetic bridge method to act as an interface due to type erasure.
The methods ensure that type information is not erased from user code and the correct type is associated with objects on the stack during execution.
These have been omitted from Figure \ref{fig:transformation} for brevity.
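For the word count reducer of Figure \ref{fig:implementation}, the three generated methods would amount to the following hand-written equivalent. This is an illustrative sketch only: the \texttt{IntHolder} type and the class name are assumptions, not the bytecode the agent actually emits.

```java
// Hand-written equivalent of the three methods the optimizer generates
// from the word count reduce method; names are illustrative assumptions.
class IntHolder {
    int value;   // mutable boxed intermediate value (running sum)
}

class WordCountCombiner {
    // Holder initialize(): initial intermediate representation (sum = 0).
    IntHolder initialize() {
        return new IntHolder();
    }

    // void combine(Holder, V): the loop body of the reduce method,
    // folding each emitted value into the mutable holder.
    void combine(IntHolder holder, Integer value) {
        holder.value += value;
    }

    // V finalize(Holder): convert the intermediate value to its final form.
    Integer finalize(IntHolder holder) {
        return holder.value;
    }
}
```

With these methods in place, each emitted count is folded into the holder as it arrives, so no per-key value list is ever materialized.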
The transformation is applicable when two conditions are satisfied.
Firstly, the reducer iterates over all intermediate values.
Secondly, the reduce operation is dependent only on the current intermediate value and current value in the iteration.
There are two idiomatic reducers handled directly in code that either use the size or first element in the intermediate value list.
Other complexities in determining correctness are provided by the MapReduce semantics and, therefore, they need not be considered in the transformation.
For example, should a value contain shared mutable state accessed in an executed method, that state must already be thread-safe for the reduce method to provide a correct answer.
The implemented technique makes possible the analysis and implementation of verification code that provides hints at where violations of the safety of a MapReduce application lie.
The semantics of the framework add defined constraints that simplify checks that general purpose programming requires.
\subsection{Implementation}
A Java agent \cite{Binder:2007} was chosen as the most suitable technique to generate the new methods since it is simple to identify implementations of the reduce method.
The first step was to create an alternative execution flow in the MapReduce framework that uses the generated
methods that are hidden from the user, i.e. they contain no functionality and cannot be accessed or overridden outside of the declared package.
When the class loader loads the reduce class, it rewrites the access to these methods so they can be overridden at runtime.
The process of transforming the code follows the steps below:
1) Parse the reduce method to create an intermediate representation of the code in a program dependency graph.
2) Identify the conditions of the loop iterating over the values ensuring coverage of all values.
3) Test that the initialization block contains no external data dependencies, determine the holder type required, and copy adjusted bytecodes to the initialize method body.
4) Test the value iteration loop body for data dependencies (assuming that the operation is associative due to the semantics of the MapReduce framework).
Copy adjusted bytecode to the combine method body.
5) Identify the original bytecode relating to the finalization of the intermediate value, from the preparation of the stack for the emit method call.
Copy adjusted bytecode to the finalize method body.
6) Set the flag to return a constant of true rather than false to enable the optimized combining execution flow in the MR4J implementation.
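The agent's entry point and detection step can be sketched with the standard \texttt{java.lang.instrument} API. In this sketch the superclass check and the rewrite hook are placeholders for the actual analysis and bytecode generation (steps 1--6 above), which are not reproduced here; the class and helper names are assumptions.

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;

// Sketch of the Java agent: intercept class loading and hand Reducer
// subclasses to the (omitted) bytecode rewriter.
class CombinerAgent implements ClassFileTransformer {

    // Invoked by the JVM before main() when the agent is on the command line.
    public static void premain(String args, Instrumentation inst) {
        inst.addTransformer(new CombinerAgent());
    }

    @Override
    public byte[] transform(ClassLoader loader, String className,
                            Class<?> redefined, ProtectionDomain domain,
                            byte[] classfile) {
        if (isReducerSubclass(classfile)) {
            // Generate initialize/combine/finalize and flip the combiner flag.
            return rewriteReduceMethod(classfile);
        }
        return null;   // null leaves the class bytes unchanged
    }

    // Placeholder: a real agent would parse the class file (e.g. with a
    // bytecode library) and inspect its superclass entry.
    boolean isReducerSubclass(byte[] classfile) {
        return false;
    }

    // Placeholder for the six transformation steps above.
    byte[] rewriteReduceMethod(byte[] classfile) {
        return classfile;
    }
}
```

Returning \texttt{null} for non-matching classes keeps the per-class overhead of the agent to the detection check alone.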
\section{Performance Evaluation}
MR4J is evaluated in two stages.
The first stage explores:
a) the scalability of MR4J on two different hardware configurations, and
b) the comparative evaluation of MR4J against mature and hand optimized state-of-the-art implementations in C and C++; Phoenix and Phoenix++ respectively.
The second stage evaluates the performance benefits generated by the MR4J aware optimizer.
\subsection{Experimental Set-up}
\subsubsection{Hardware Platforms}
The experiments run on two different hardware platforms in order to explore the performance on a multicore \textit{workstation} and a larger NUMA multi-socket, multicore \textit{server}.
Table \ref{tab:configuration} presents the hardware and software configurations used during the evaluation.
\subsubsection{MapReduce Software Frameworks}
The evaluation compares MR4J against the hand-tuned Phoenix \cite{Yoo:2009} and Phoenix++ \cite{Talbot:2011} implementations.
These are both configured manually using hardware specific parameters; e.g. the size of L1 cache and the number of desired threads.
MR4J uses the same L1 cache size as its buffer size and the JVM is configured to use the default garbage collector (Parallel) with an initial and maximum heap size of 12GB.
Furthermore, the \texttt{-XX:+UseNUMA} flag is set for the server configuration.
Each benchmark is executed ten times (Java includes a five iteration warm-up) and the average execution time is used to report results.
\subsubsection{Benchmarks}
The benchmarks distributed and used by Phoenix and Phoenix++ have been ported and validated on MR4J for a fair comparison.
The benchmark suite consists of the following applications, as detailed by Yoo \textit{et~al.} \cite{Yoo:2009}: Histogram (HG), K-Means Clustering (KM), Linear Regression (LR), Matrix Multiply (MM), Principal Component Analysis (PC), String Match (SM), and Word Count (WC).
\noindent In order to ensure that the same algorithms are executed across all three frameworks, modifications have been made to the original benchmarks.
For Histogram, Phoenix++ iterates over individual pixels; however due to performance and memory constraints, Phoenix and MR4J iterate over chunks of data, emitting values after partial combination in the map method.
Histogram and Word Count omit the requirement to sort the keys as this is testing the efficiency of parallel sorting algorithms rather than the core of MapReduce.
\begin{table}
\centering
{\small
\begin{tabular}{ p{26mm} C{23mm} C{23mm} }
\hline
& \textbf{Workstation} & \textbf{Server} \\
\hline
Processor & Intel Core i7 & AMD Opteron \\
& 4770 3.4GHz & 6276 2.3GHz \\
Cores & 4 & 64 (4 x 16) \\
Hardware threads & 8 & 64 \\
L1 Cache & 32kB per core & 16kB per core \\
L2 Cache & 256kB per core & 2MB per 2 cores \\
L3 Cache & 8MB per 4 cores & 8MB per 8 cores \\
Main memory & 16GB & 252GB \\
\hline
OS & Windows 8.1 & Ubuntu 12.04 \\
C/C++ compiler & gcc 4.8.3 & gcc 4.6.4 \\
Java & \multicolumn{2}{c}{Java SE 1.8.0\_20} \\
JVM & \multicolumn{2}{c}{Java HotSpot 64-Bit Server} \\
& \multicolumn{2}{c}{(build 25.20-b23)} \\
\hline
\end{tabular} }
\caption{Hardware and software configurations.}
\label{tab:configuration}
\end{table}
\begin{table}
\centering
{\small
\begin{tabular}{ C{6mm} p{43mm} C{11mm} C{11mm} }
\hline
& \textbf{Dataset} & \textbf{Keys} & \textbf{Values} \\
\hline
HG & 1.4GB 24-bit bitmap image & Medium & Large \\
KM & 500,000 3-d points (100 clusters) & Small & Large \\
LR & 3.5GB file & Small & Large \\
MM & 3,000 x 3,000 integer matrices & Medium & Medium \\
PC & 3,000 x 3,000 integer matrix & Medium & Medium \\
SM & 500MB key file & Small & Small \\
WC & 500MB text document & Large & Large \\
\hline
\end{tabular} }
\caption{Benchmark Input Data.}
\label{tab:dataset}
\end{table}
The benchmarks demonstrate a variety of workloads, inputs, intermediate and output results.
All of these benchmarks, originally from the Phoenix paper \cite{Yoo:2009}, contain combiner methods.
These combiners are all generated by the optimizer described in this paper.
The challenge for all three frameworks was to generate a combiner for the K-Means Clustering benchmark as it requires state to obtain the average (e.g. the total number of points in a cluster).
In this case the combiner or the intermediate value contain the running sum of point coordinates.
The sum is normalized in the reducer for MR4J or in the main body of the application for Phoenix and Phoenix++.
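The stateful K-Means combiner described above can be sketched as a holder that carries both the running coordinate sums and the point count, normalizing only at finalization. The class and method names below are illustrative, not the frameworks' actual code.

```java
// Illustrative holder for the K-Means combiner: the intermediate value keeps
// the running sum of point coordinates plus the number of points folded in.
class ClusterHolder {
    final double[] sum;   // per-dimension running sums
    long count;           // number of points combined so far

    ClusterHolder(int dims) {
        this.sum = new double[dims];
    }

    // combine: fold one emitted point into the running state.
    void combine(double[] point) {
        for (int d = 0; d < sum.length; d++) {
            sum[d] += point[d];
        }
        count++;
    }

    // finalize: normalize the sums into the cluster centroid.
    double[] centroid() {
        double[] mean = new double[sum.length];
        for (int d = 0; d < sum.length; d++) {
            mean[d] = sum[d] / count;
        }
        return mean;
    }
}
```

Because addition is associative, partial holders produced by different threads could also be merged by summing their \texttt{sum} arrays and counts.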
Table \ref{tab:dataset} presents the input data sets with an approximate categorization of key and value counts.
\subsection{Performance Results}
\begin{figure}
\centering
\includegraphics[trim = 19mm 23mm 22mm 20mm, clip, width=80mm]{benchmarks.pdf}
\caption{MR4J scalability on the server configuration (one thread as the baseline).}
\label{fig:benchmarks}
\end{figure}
The scalability of MR4J can be seen in Figure \ref{fig:benchmarks} for the server configuration.
Having as a baseline the execution time on one core, the workstation shows a consistent scalability over all hardware threads, with an average of 2.85 on four cores and 3.73 on all eight hyperthreads.
Regarding the scalability of MR4J on the server configuration (Figure \ref{fig:benchmarks}), three groups of performance can be observed depending on their compute intensity and overhead of (key, value) pair generation summarized in Table \ref{tab:dataset}.
\begin{figure}
\centering
\includegraphics[trim = 19mm 23mm 22mm 20mm, clip, width=80mm]{speedup.pdf}
\caption{Relative speedup of Phoenix and MR4J against Phoenix++ on the server (higher is better).}
\label{fig:speedup}
\vspace{-3mm}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim = 19mm 23mm 22mm 20mm, clip, width=80mm]{optimization.pdf}
\caption{MR4J per-benchmark speedup with and without the optimizer relative to Phoenix++ on server.}
\label{fig:optimization}
\vspace{-3mm}
\end{figure}
Figure \ref{fig:speedup} contains the speedup of MR4J and Phoenix relative to Phoenix++ on the server configuration.
Furthermore, Figure \ref{fig:optimization} takes a more fine-grained approach and illustrates, per benchmark, the relative speedup of MR4J against the top-performing Phoenix++ with and without the implemented optimizer.
Regarding the workstation configuration, a consistent performance behavior can be observed between MR4J, Phoenix and Phoenix++.
The performance falls in-between the two hand-tuned frameworks with the median around 0.66 for MR4J and 0.39 for Phoenix for all hardware thread counts.
The server configuration reveals a different set of results illustrating the challenges of developing scalable software for multisocket NUMA architectures.
When using the same socket (1--16 threads) the performance of MR4J and Phoenix is comparable to Phoenix++, which consistently outperforms them (0.61 and 0.81 respectively).
Scalability was a primary objective in the development of Phoenix++ \cite{Talbot:2011} and the results are supported by this evaluation.
The NUMA aware setting in the JVM is able to maintain a consistent level of performance, unlike Phoenix which employs only its locality optimizations.
However, the speedups of MR4J and Phoenix are 0.76 and 0.20 compared to Phoenix++ when using all hardware threads.
\subsection{Optimization Performance}
Figure \ref{fig:optimization} illustrates the relative speedup of MR4J against Phoenix++ before and after the optimizer is enabled for each of the benchmarks.
The majority of the benchmarks on both configurations show a significant speedup, thus closing the gap between MR4J and Phoenix++.
String Match is an exception, exposing the overheads of instantiating and maintaining the intermediate value.
This is due to the nature of the benchmark which has few keys, few values and little computation that can be optimized.
The main overheads of the optimizer arise when detecting classes that extend the Reducer and then generating the combining code.
Since the optimizer instruments every Java class, the effect on the detection and transformation times are, on average per class, 81$\mu$s and 7.6ms respectively, which is negligible in comparison to the execution time of the benchmarks.
\section{Discussion}
The introduced MR4J is a lightweight MapReduce framework based on the standard JDK classes.
By using a simple API and by utilizing Java interfaces it is possible to improve the framework while maintaining the backwards compatibility ethos of Java.
The presented optimization illustrates how a single map method can be used in two alternative execution flows, one to reduce values and the other to combine them, thanks to the use of the \texttt{Emitter} interface.
On a multicore architecture, MR4J provides consistently better execution times than the hand-optimized C equivalent and, after optimization, is within reach of the equivalent in C++.
Phoenix and Phoenix++ offer powerful and scalable tools but with more complicated APIs that require manual configuration and tuning.
The benchmarks where MR4J is superior are those where data is organized in arrays.
The dynamic compiler is able to better optimize array accesses (through the automatic memory manager) than pointer arithmetic alone in a static compiler.
However, in benchmarks where heavy object creation is required, the ability of C and C++ to cast directly to data highlights the overhead of object allocation and management in Java.
K-Means Clustering with Points and Word Count with Strings are such examples.
\begin{figure}
\centering
\includegraphics[trim = 19mm 23mm 22mm 20mm, clip, width=80mm]{gc.pdf}
\caption{Word Count on MR4J: Heap usage and percentage of runtime spent in garbage collection.}
\label{fig:gc}
\vspace{-3mm}
\end{figure}
The optimization presented in this paper changes the execution flow within the framework. Borrowing the notion of manual combining from existing MapReduce frameworks, the implemented optimizer automates this process at runtime.
The optimizer uses the semantics of the framework and the structure of user code to eliminate the reduce phase and combine intermediate values as they are emitted from the map method.
This has the effect of improving the execution time for the majority of the tested benchmarks.
The cause of the observed speedup is the improved interaction between the optimized executed code, the dynamic compiler and the Garbage Collector (GC).
Figures \ref{fig:gc} and \ref{fig:gcoptimized} visualize the heap usage for the word count application without and with the optimizer respectively.
The execution time axes are the same for a direct comparison.
The heap usage is similar for both configurations, showing a noticeable and steady increase in the size of the heap used, since more references are stored for the intermediate values.
The stark difference is in the secondary axis, the time spent in the GC.
Without the optimization the inefficiency lies in the fact that Java must maintain (i.e. keep in the heap) all the object references for the intermediate values generated during the map phase.
This results in their premature promotion into the older generations before they die (and can be collected during minor collections).
This, consequently, results in major collections that severely impact performance.
The optimization, in turn, increases performance by:
1) reducing the number of objects allocated which avoids unnecessary object promotions leading to major GC cycles;
2) improving execution time by omitting completely the reduce phase;
3) enabling the dynamic compiler to introduce additional scalar replacements, and
4) reducing the utilized heap size and, thus, enabling larger data sets to be used; increasing the potential for utilizing smaller Big Data jobs (as mentioned in the Hadoop job analysis \cite{Appuswamy:2013}).
\begin{figure}
\centering
\includegraphics[trim = 19mm 23mm 22mm 20mm, clip, width=80mm]{gcoptimized.pdf}
\caption{Word Count on optimized MR4J: Heap usage and percentage of runtime spent in garbage collection.}
\label{fig:gcoptimized}
\end{figure}
\begin{figure}
\centering
\includegraphics[trim = 19mm 23mm 22mm 20mm, clip, width=80mm]{gcperformance.pdf}
\caption{Relative speedup of each benchmark with the optimizer enabled, averaged over all GC algorithm, heap size, and hardware thread configurations.}
\label{fig:gcperformance}
\vspace{-3mm}
\end{figure}
The JVM, as publicly distributed by Oracle, contains a variety of GC algorithms allowing different tuning parameters and configurations.
Figure \ref{fig:gcperformance} depicts the relative (to the baseline un-optimized version) speedup of each benchmark when all the combinations of GC algorithms, heap sizes, and number of hyper threads are averaged.
The figure also shows that the benchmarks with the greatest reliance on (key, value) pairs (HG and WC) are improved the most.
String Match has four keys with 910 values, whereas Histogram has 768 keys and $1.4\times 10^9$ values.
\section{Conclusions}
This paper introduces MR4J, a lightweight Java based MapReduce framework for shared-memory multicore architectures built on standard JDK classes.
MR4J focuses on ease-of-programmability via a simple API in contrast to equivalent frameworks where performance is extracted via complicated manual tuning required by the programmer.
The performance loss, due to its simplicity, is overcome by a novel optimizer built for the framework.
The optimizer exploits semantic information inherently contained within the parallel software framework transparently to the user.
The design of MR4J aims to supplement developers of large MapReduce algorithms, improve productivity, or simply execute smaller applications.
The performance of MR4J is comparable to the equivalent state-of-the-art Phoenix framework, written and hand-optimized in C.
Thanks to the expressiveness, type safety, and portability of Java, MR4J offers a more productive and portable framework with comparable performance.
The original implementation of MR4J was positioned, performance-wise, in between the two state-of-the-art MapReduce frameworks, Phoenix and Phoenix++.
The lack of a combiner phase was penalizing performance and therefore the optimizer was implemented to supplement the framework.
The presented co-designed optimizer automates the, previously hand-optimized, combining phase in order to improve performance.
Without any modifications to user code, the optimized MR4J improves its performance by up to 2.0x, bridging the gap from the manually-tuned Phoenix++ to just 17\%.
The work presented in this paper is a proof-of-concept that if semantic information can be passed from the application developer to the parallel framework and the compiler, significant performance improvements can be achieved.
Especially nowadays, with the advent of complex multi-layered Big Data frameworks that are deployed on top of diverse and often heterogeneous hardware resources, semantic-based optimizations will be even harder to achieve.
In the quest for achieving vertical co-designed optimizations we plan to exercise this and other developed optimizations directly into the underlying compiler.
To that end, we plan to augment the existing state-of-the-art Graal compiler \cite{Duboscq:2013} with semantically enriched hooks in order to transfer the necessary information from the application to the compiler.
The formalization of the information flow from the application level down to the compiler and runtime level is of paramount importance in order to bridge the semantic gaps both between different software frameworks and between software and hardware.
\section{Acknowledgments}
This research was conducted with support from the UK Engineering and Physical Sciences Research Council (EPSRC), on grants AnyScale Apps EP/L000725/1, DOME EP/ J016330/1 and PAMELA EP/K008730/1.
Mikel Luj{\'a}n is supported by a Royal Society University Research Fellowship.
\bibliographystyle{abbrv}
\IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{M}{edical} image segmentation aims to delineate the interested anatomical structures, such as tumors, organs, and tissues, in a semi-automatic or fully automatic way, which has many applications in clinical practice, such as radiomic analysis~\cite{gillies2016radiomics}, treatment planning~\cite{rietzel2005treatmentplanning}, and survival analysis~\cite{zhang2020COVIDCell}.
Currently, medical image segmentation is also an active research topic. Figure~\ref{intro-wordcloud} presents the word cloud of the paper titles in the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2020)\footnote{https://miccai2020.org/en/}, which is the largest international event in the medical image analysis community.
It can be found that the term `segmentation' has a very high frequency, and putting the top high-frequency words together forms a meaningful phrase: ``image segmentation using deep learning/network(s)''.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.3]{imgs/MICCAI-2020-WordCloud.png}
\caption{Word cloud of the paper titles in MICCAI 2020.}\label{intro-wordcloud}
\end{figure}
Since U-Net~\cite{ronneberger2015UNet2D}, the legendary medical image segmentation approach, appeared in 2015, numerous new segmentation methods have been proposed for various segmentation tasks~\cite{litjens2017MIA-DL-survey, Boykov2020DLSeg} in the past five years\footnote{U-Net~\cite{ronneberger2015UNet2D} has more than 20,000 citations as of December 2020.}.
With so many segmentation papers on hand, it becomes extremely hard to compare them and identify methodological progress, because the proposed methods are usually evaluated on different datasets with different dataset splits, metrics, and implementations.
Public segmentation challenges provide a standard platform for gaining insight into current cutting-edge approaches, where solutions are evaluated and compared against each other in a transparent and fair way.
In MICCAI 2020, there are ten international 3D medical image segmentation challenges\footnote{https://www.miccai2020.org/en/MICCAI-2020-CHALLENGES.html}. All these challenges follow the Biomedical Image Analysis ChallengeS (BIAS) Initiative~\cite{maier2020BIAS-MIA}. Specifically, the challenge designs are transparent and standardized, and the proposals (\url{http://miccai.org/events/challenges/}) have also passed peer review.
Table~\ref{tab:task-overview} provides an overview of the 10 segmentation challenges, which can be roughly divided into
\begin{itemize}
\item \textbf{five single-modality} image segmentation tasks, including three CT image segmentation tasks and two MR image segmentation tasks;
\item \textbf{five multi-modality} image segmentation tasks, including two bi-modality tasks, two triple-modality tasks, and one four-modality task.
\end{itemize}
\begin{table*}[!htbp]
\centering
\caption{Task overview of ten 3D medical image segmentation challenges. `Seg. Targets' denotes the segmentation targets in each task; `\# Class' and `\# Train/Val./Test' denote the number of classes and the numbers of cases in the training, validation, and testing sets, respectively. `-' denotes no validation cases. `+' denotes that additional segmentation metrics (besides DSC and HD) are used to evaluate the solutions of challenge participants.}
\label{tab:task-overview}
\resizebox{\textwidth}{!}{
\begin{tabular}{llccccl}
\hline
Name & Seg. Targets & \# Class & \# Train/Val./Test & Modality & Multi-Center & Metrics \\
\hline
CADA & Cerebral Aneurysm & 1 & 92/-/23 & CT & & IoU, HD, + \\
ASOCA & Coronary arteries & 1 & 40/-/20 & CT & & DSC, HD \\
VerSe & Vertebrae & 28 & 100/-/200 & CT & Y & DSC, HD \\
MMs & Heart\tablefootnote{Myocardium, left and right ventricle} & 3 & 150(+25)/-/200 & MR & Y & DSC, HD, + \\
EMIDEC & Myocardium, infarction, no-reflow & 3 & 100/-/50 & MR & & DSC, HD, + \\ \hline
ADAM & Intracranial aneurysm & 1 & 113/-/140 & TOF-MRA, structural MR & & DSC, HD, + \\
HECKTOR & Head/neck tumor & 1 & 203/-/46 & PET, CT & Y & DSC \\
MyoPS & Scar, edema & 2 & 25/-/20 & LGE, T2, bSSFP & & DSC \\
\multirow{2}{*}{ABCs} & Task 1: 5 brain structures & 5 & \multirow{2}{*}{45/15/15} & \multirow{2}{*}{CT, T1, T2} & \multirow{2}{*}{Y} & \multirow{2}{*}{DSC, SDSC} \\
& Task 2: 10 brain structures & 10 & & & & \\
BraTS & Brain tumor\tablefootnote{Whole tumor, enhancing tumor, and tumor core} & 3 & 369/125/166 & Flair, T1, T1ce, T2 & Y & DSC, HD \\
\hline
\end{tabular}}
\end{table*}
In this paper, we first provide a comprehensive review of the ten 3D medical image segmentation challenges and the associated top solutions.
We also identify the ``happy-families'' elements in the top solutions.
Finally, we highlight some open problems and potential future directions for medical image segmentation.
The main contributions of this paper are summarized as follows:
\begin{itemize}
\item We provide a comprehensive review of ten recent international 3D medical image segmentation challenges, including the task descriptions, the datasets, and more importantly, the top solutions of participant teams, which represent the cutting-edge segmentation methods at present.
\item We identify the widely used ``happy-families'' components in the top methods, which are useful for developing powerful segmentation approaches.
\item We summarize several unsolved problems and potential research directions, which could promote developments in the medical image segmentation field.
\end{itemize}
\section{Preliminaries: Widely Used Methods in Deep Learning-based Medical Image Segmentation}
\subsection{Network Architectures}
nnU-Net (``no new net'')~\cite{isensee2020nnunet} is a dynamic, fully automatic segmentation framework for medical images, which is based on the widely used U-Net architecture~\cite{ronneberger2015UNet2D, cciccek2016UNet3D}.
It can automatically configure the preprocessing, the network architecture, the training, the inference, and the post-processing for any new segmentation task.
Without manual intervention, nnU-Net surpasses most existing approaches, achieving the state of the art in 33 of 53 segmentation tasks and otherwise showing performance comparable to the top leaderboard entries.
Currently, nnU-Net is the most popular backbone for 3D medical image segmentation tasks because it is powerful, flexible, out-of-the-box, and open-source.
\subsection{Loss Functions}
The loss function is used to guide the network to learn meaningful predictions and dictates how the network trades off mistakes. Cross entropy loss and Dice loss~\cite{milletari2016vnet, diceV2} are the two most popular loss functions in segmentation tasks. Specifically, cross entropy aims to minimize the dissimilarity between two distributions, which is defined by
\begin{equation}
L_{CE} = -\frac{1}{N}\sum_{c=1}^{C}\sum_{i=1}^{N} g_{i}^{c} \log s_{i}^{c},
\end{equation}
where $g_i^c$ is the ground truth binary indicator of class label $c$ of voxel $i$, and $s_i^c$ is the corresponding predicted segmentation probability.
Dice loss can directly optimize the Dice Similarity Coefficient (DSC), which is the most commonly used segmentation evaluation metric. In general, there are two variants of Dice loss: one employs squared terms in the denominator \cite{milletari2016vnet}, which is defined by
\begin{equation}\label{eq:DiceV1}
L_{Dice-square} = 1- \frac{2\sum_{c=1}^{C}\sum_{i=1}^{N}g_{i}^{c}s_{i}^{c}}{\sum_{c=1}^{C}\sum_{i=1}^{N}(g_{i}^{c})^2 + \sum_{c=1}^{C}\sum_{i=1}^{N}(s_i^{c})^2}.
\end{equation}
The other does not use the squared terms in the denominator \cite{diceV2}, which is defined by
\begin{equation}\label{eq:Dice}
L_{Dice} = 1- \frac{2\sum_{c=1}^{C}\sum_{i=1}^{N}g_{i}^{c}s_{i}^{c}}{\sum_{c=1}^{C}\sum_{i=1}^{N}g_{i}^{c} + \sum_{c=1}^{C}\sum_{i=1}^{N}s_i^{c}}.
\end{equation}
The default loss function in nnU-Net is the unweighted sum $L_{CE} + L_{Dice}$.
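As a minimal numerical sketch (an illustration of the definitions above, not nnU-Net's actual implementation), the combined loss can be computed as:

```python
import numpy as np

def dice_ce_loss(probs, onehot_gt, eps=1e-8):
    """Unweighted sum of cross entropy and the non-squared Dice loss.

    probs:     (C, N) predicted probabilities s_i^c
    onehot_gt: (C, N) one-hot ground truth g_i^c
    """
    n = probs.shape[1]
    # L_CE = -(1/N) * sum_c sum_i g * log(s)
    ce = -np.sum(onehot_gt * np.log(probs + eps)) / n
    # L_Dice = 1 - 2 * sum(g * s) / (sum(g) + sum(s))
    dice = 1.0 - 2.0 * np.sum(onehot_gt * probs) / (
        np.sum(onehot_gt) + np.sum(probs) + eps)
    return ce + dice
```

A perfect prediction drives both terms to zero, while a uniform prediction is penalized by both terms.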
\subsection{Evaluation Metrics}
Dice Similarity Coefficient (DSC) and Hausdorff Distance (HD) are two widely used segmentation metrics, which can measure the region overlap ratio and boundary distance, respectively.
Let $G$ and $S$ be the ground truth and the segmentation result, respectively.
DSC is defined by
\begin{equation}
DSC = \frac{2|G\cap S|}{|G|+ |S|}.
\end{equation}
A similar metric, IoU (also known as the Jaccard index), is sometimes used as an alternative, which is defined by
\begin{equation}
IoU = \frac{|G\cap S|}{|G\cup S|}.
\end{equation}
Let $\partial G$ and $\partial S$ be the boundary points of the ground truth and the segmentation, respectively. The Hausdorff Distance is defined by
\begin{equation}
HD(\partial G, \partial S) = \max(hd(\partial G, \partial S), hd(\partial S, \partial G)),
\end{equation}
where
\begin{equation*}
hd(\partial G, \partial S) = \max\limits_{x\in \partial G} \min\limits_{y\in \partial S} ||x-y||_2,
\end{equation*}
and
\begin{equation*}
hd(\partial S, \partial G) = \max\limits_{x\in \partial S} \min\limits_{y\in \partial G} ||x-y||_2.
\end{equation*}
To eliminate the impact of outliers, the 95\% HD (HD95) is also widely used, which is based on the 95th percentile of the distances between the boundary points in $\partial G$ and $\partial S$.
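The three metrics can be sketched directly from the formulas above; the following is an illustrative NumPy implementation, not the official evaluation code of any challenge:

```python
import numpy as np

def dsc(g, s):
    """Dice Similarity Coefficient between two binary masks."""
    g, s = g.astype(bool), s.astype(bool)
    return 2.0 * (g & s).sum() / (g.sum() + s.sum())

def iou(g, s):
    """Intersection over Union (Jaccard) between two binary masks."""
    g, s = g.astype(bool), s.astype(bool)
    return (g & s).sum() / (g | s).sum()

def hausdorff(g_pts, s_pts, percentile=100):
    """(Percentile) Hausdorff distance between two (n, d) point sets.

    percentile=100 gives the exact HD; percentile=95 gives HD95.
    """
    # Pairwise Euclidean distances between boundary points.
    d = np.linalg.norm(g_pts[:, None, :] - s_pts[None, :, :], axis=-1)
    hd_gs = np.percentile(d.min(axis=1), percentile)  # hd(dG, dS)
    hd_sg = np.percentile(d.min(axis=0), percentile)  # hd(dS, dG)
    return max(hd_gs, hd_sg)
```

Note that the dense pairwise-distance matrix is only practical for small boundary sets; production implementations use distance transforms or KD-trees.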
\section{Single Modality Image Segmentation}
\begin{figure*}[htbp]
\centering
\includegraphics[scale=0.8]{imgs/001-CTSegTask.png}
\caption{Visualized examples in three CT segmentation tasks. The ground truth of the original image (a) in each task is shown in 2D projected onto the raw data (b) and in 3D together with a volume rendering of the raw data (c).
}\label{fig:CT-Seg}
\end{figure*}
\subsection{CADA: Cerebral Aneurysm Segmentation}
The task in CADA challenge (\url{https://cada-as.grand-challenge.org/Overview/}) is to segment the aneurysms from 3D CT images.
The organizers provide 92 cases for training and 23 cases for testing, where the cases contain cerebral aneurysms without vasospasm. The main difficulty in this challenge is the highly imbalanced labels. As shown in Figure~\ref{fig:CT-Seg} (the first row), the aneurysm is very small and most of the voxels in the CT images belong to the background.
Six metrics are used to quantitatively evaluate the segmentation results: Jaccard (IoU), Hausdorff distance (HD), mean distance (MD), the Pearson correlation coefficient between the predicted and reference volumes of all aneurysms (Volume Pearson R), the mean absolute difference between the predicted and reference volumes (Volume Bias), and the standard deviation of the difference between the predicted and reference volumes (Volume Std).
For the ranking, a min-max normalization is performed across all participants. In this way, each individual metric takes a value between 0 (worst case among all participants) and 1 (perfect fit between the reference and predicted segmentation). The ranking score is calculated as the average of the normalized metrics.
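The ranking scheme described above can be sketched as follows (an illustration of min-max normalized averaging, not the organizers' code):

```python
import numpy as np

def rank_scores(metric_table, higher_is_better):
    """Average min-max normalized metric scores per team.

    metric_table:     (teams, metrics) array of raw metric values
    higher_is_better: (metrics,) booleans; e.g. True for IoU, False for HD
    """
    m = np.asarray(metric_table, float)
    lo, hi = m.min(axis=0), m.max(axis=0)
    # Normalize each metric column to [0, 1]; guard constant columns.
    norm = (m - lo) / np.where(hi > lo, hi - lo, 1.0)
    # Flip metrics where smaller values are better, so 1 is always best.
    norm = np.where(higher_is_better, norm, 1.0 - norm)
    return norm.mean(axis=1)
```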
\begin{table}[!htbp]
\caption{Quantitative results of top-2 teams on CADA Challenge Leaderboard. The bold numbers denote the best results.}\label{tab:CADA-Results}
\centering
\begin{tabular}{lcc}
\hline
\multirow{2}{*}{Metrics} & Mediclouds & junma~\cite{Ma20-CADA-2nd} \\ \cline{2-3}
& Rank 1st & Rank 2nd \\ \hline
IoU & 0.758 & \textbf{0.759} \\
HD & \textbf{2.866} & 4.967 \\
MD & \textbf{1.618} & 3.535 \\
Vol. Pearson R & \textbf{0.998} & \textbf{0.998} \\
Vol.Bias & \textbf{72.24} & 75.84 \\
Vol.Std & \textbf{106.4} & 110.5 \\ \hline
\textbf{Final Score} & \textbf{0.833} & 0.832 \\ \hline
\end{tabular}
\end{table}
Table~\ref{tab:CADA-Results} shows the quantitative segmentation results of the top-2 teams on the challenge leaderboard\footnote{\url{https://cada-as.grand-challenge.org/FinalRanking/}}.
The team `junma' achieved the best IoU while the team `Mediclouds' achieved better performance in the remaining five metrics. However, the final score difference is marginal.
The method of the team `Mediclouds', unfortunately, is not available. Thus, we only present the solution of the team `junma'.
Specifically, Ma and Nie~\cite{Ma20-CADA-2nd} developed their method based on nnU-Net~\cite{isensee2020nnunet}, where the main modification was to use a large patch size ($192\times224\times192$) during training and inference. Five models were trained in five-fold cross-validation, each on an NVIDIA V100 32G GPU.
Each testing case was predicted by the ensemble of the five trained models.
\subsection{ASOCA: Automated Segmentation Of Coronary Arteries}
The task in the ASOCA challenge (\url{https://asoca.grand-challenge.org/Home/}) is to segment the coronary arteries from Cardiac Computed Tomography Angiography (CCTA) images. The organizers provide 40 cases for training and 20 cases for testing. The main difficulties in this challenge are class imbalance and appearance variations. On the one hand, the coronary arteries occupy only a small proportion of the whole CT image. On the other hand, the arteries from healthy and unhealthy cases have different appearances. Figure~\ref{fig:CT-Seg} (the second row) presents a visualized example.
DSC and HD95 are used to evaluate and rank the segmentation results.
\begin{table}[!htbp]
\caption{Quantitative results of top-2 teams on ASOCA Challenge Leaderboard. The bold numbers denote the best results.}\label{tab:ASOCA-Results}
\centering
\begin{tabular}{lccc}
\hline
Team Name & DSC & HD95 & Final Rank \\ \hline
RuochenGao & \textbf{0.867} & 4.165 & 1 \\
SenYang & 0.838 & \textbf{2.336} & 2 \\ \hline
\end{tabular}
\end{table}
Table~\ref{tab:ASOCA-Results} shows the quantitative segmentation results of the top-2 teams on the challenge leaderboard\footnote{\url{https://asoca.grand-challenge.org/MICCAI_Ranking/}} during MICCAI 2020.
The 1st-place team had better DSC while the 2nd-place team obtained better HD95, indicating that the top-2 teams achieved better region overlap and boundary distance, respectively.
The team `RuochenGao' used nnU-Net~\cite{isensee2020nnunet} as the backbone. The whole pipeline includes three independent networks for three tasks: epicardium segmentation, artery segmentation, and scale map regression~\cite{wang2020DTM-CVPR}. The final segmentation results were generated by the ensemble of the artery segmentation results and the scale map regression results, followed by removing the vessels outside the epicardium.
The team `SenYang' proposed an improved 2D U-Net with selective kernels (SK-UNet), where the regular convolution blocks in the encoder were replaced by SE-Res modules. Moreover, SK modules~\cite{li2019selectivekernel}, which contain convolution filters with different kernel sizes, were used in the decoder to leverage multi-scale information.
\subsection{VerSe: Large Scale Vertebrae Segmentation Challenge}
The segmentation task in the VerSe challenge (\url{https://verse2020.grand-challenge.org/}) is to segment the vertebrae from CT images.
The organizers provide 100 cases for training, 100 cases for public testing (the participants can access the testing cases), and 100 cases for hidden testing (this testing set is not publicly available, and participants are required to submit their solutions as Docker containers)~\cite{verseg2, verseg3}. The annotations cover 28 different vertebrae, but each case may only contain a subset of them.
There are several difficulties in this challenge: highly varying fields of view (FoV) across cases, large scan sizes, highly correlated shapes of adjacent vertebrae, scan noise, the presence of vertebral fractures, metal implants, and so on~\cite{verseg-benchmark}. Figure~\ref{fig:CT-Seg} (the third row) presents a visualized example.
DSC and HD are used to evaluate and rank the segmentation results.
Payer et al., the defending champions from VerSe 2019~\cite{verseg-benchmark}, won this year's challenge again with the SpatialConfiguration-Net~\cite{payer2019MIA} and U-Net~\cite{ronneberger2015UNet2D,cciccek2016UNet3D}.
Specifically, they proposed a coarse-to-fine approach with three stages:
\begin{itemize}
\item Stage 1: localizing the whole spine by a 3D U-Net-based heatmap regression network, which removes the background. The network input size ranged from $32\times32\times32$ to $128\times128\times128$.
\item Stage 2: localizing and identifying all vertebrae landmarks simultaneously via a 3D SpatialConfiguration-Net, which combines local appearance with the spatial configuration of landmarks. The network input size ranged from $64\times64\times64$ to $96\times96\times256$ during training and was up to $128\times128\times448$ during inference. To address missed vertebrae, an MRF-based graphical model was employed to refine the localization results.
\item Stage 3: segmenting each vertebra individually by a 3D U-Net. The input size was $128\times128\times96$.
\end{itemize}
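The control flow of this coarse-to-fine pipeline can be sketched with hypothetical stand-ins for the trained networks (\texttt{locate\_landmarks} and \texttt{segment\_one} below are placeholders, not the authors' models):

```python
import numpy as np

def crop_to_nonzero(vol, margin=1):
    """Stage-1 stand-in: crop a volume to its nonzero bounding box
    (the real pipeline localizes the spine with a heatmap-regression U-Net)."""
    idx = np.nonzero(vol)
    lo = [max(int(i.min()) - margin, 0) for i in idx]
    hi = [min(int(i.max()) + margin + 1, s) for i, s in zip(idx, vol.shape)]
    sl = tuple(slice(a, b) for a, b in zip(lo, hi))
    return vol[sl], sl

def coarse_to_fine(ct, locate_landmarks, segment_one):
    """Crop the spine ROI, find per-vertebra landmarks, then segment each
    vertebra individually and paste the labels back into the full volume."""
    roi, sl = crop_to_nonzero(ct)                  # stage 1: spine ROI
    out = np.zeros(ct.shape, dtype=np.int32)
    for label, center in locate_landmarks(roi):    # stage 2: landmarks
        mask = segment_one(roi, center)            # stage 3: one vertebra
        out[sl] = np.maximum(out[sl], label * mask)
    return out
```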
Table~\ref{tab:verse} presents the quantitative segmentation results on the public testing set.
The absent results will be added when the challenge summary paper is released.
\begin{table}[!htbp]
\caption{Quantitative vertebrae segmentation results of the winner solution in VerSe 2020. `-' denotes not available currently.}\label{tab:verse}
\centering
\begin{tabular}{lcc}
\hline
Testing set & DSC & HD95 \\ \hline
Public & 0.9354 & - \\
Hidden & - & - \\ \hline
\end{tabular}
\end{table}
\begin{figure*}[!htbp]
\centering
\includegraphics[scale=0.8]{imgs/001-MRSegTask.png}
\caption{Visualized examples in two MR segmentation tasks. The ground truth of the original image (a) in each task is shown in 2D projected onto the raw data (b) and in 3D rendering (c).
}\label{fig:MR-Seg}
\end{figure*}
\subsection{M\&Ms: Multi-Centre, Multi-Vendor \& Multi-Disease Cardiac Image Segmentation Challenge}
The task in the M\&Ms challenge (\url{https://www.ub.edu/mnms/}) is to segment the left and right ventricle (LV and RV, respectively) cavities and the left ventricle myocardium (MYO) from multi-center, multi-vendor, and multi-disease cardiac MR images.
The organizers provide 175 cases for training, 40 cases for validation, and 160 cases for testing, which are from four scanner vendors. Specifically, the 175 training cases consist of 75 labelled cases from vendor A, 75 labelled cases from vendor B, and 25 unlabelled cases from vendor C. The 40 validation cases consist of 10 cases from each of the four vendors. The 160 testing cases consist of 40 cases from each of the four vendors. The main difficulty in this challenge is the domain shift in the testing set, which requires that the solutions be generalizable across different clinical centers, scanner vendors, and patient conditions.
It should be noted that neither the validation cases nor the testing cases were publicly available to participants during the challenge. Participants were required to build a Singularity image and share it with the organizers.
Figure~\ref{fig:MR-Seg} (the first row) presents a visualized example.
Four metrics are used to evaluate and rank the segmentation results, including DSC, IoU, Average symmetric surface distance (ASSD), and HD.
\begin{table}[!htbp]
\caption{Quantitative segmentation results (in terms of DSC and HD) of top-3 teams on M\&Ms Challenge Leaderboard. `ED' and `ES' denote the end-diastolic and end-systolic phase cardiac MR images. The bold numbers are the best results and the italic numbers are not significantly different from the best results ($p>0.01$ in a T-test).}\label{tab:MMs}
\setlength\tabcolsep{3pt}
\centering
\begin{tabular}{lllccc}
\hline
\multicolumn{3}{l}{\multirow{2}{*}{Metrics}} & Peter M. Full~\cite{MMs-2020-1st} & Yao Zhang~\cite{MMs-2020-2nd} & Jun Ma~\cite{MMs-2020-3rd} \\ \cline{4-6}
\multicolumn{3}{l}{} & Rank 1st & Rank 2nd & Rank 3rd \\ \hline
\multirow{6}{*}{ED} & \multirow{2}{*}{LV} & DSC & \textbf{0.939} & \textit{0.938} & \textit{0.935} \\
& & HD & \textbf{9.10} & \textit{9.30} & \textit{9.50} \\
\cline{2-6}
& \multirow{2}{*}{MYO} & DSC & \textbf{0.839} & \textit{0.830} & \textit{0.825} \\
& & HD & \textbf{12.8} & \textit{12.9} & \textit{13.3} \\
\cline{2-6}
& \multirow{2}{*}{RV} & DSC & \textbf{0.910} & \textit{0.909} & \textit{0.906} \\
& & HD & \textbf{11.8} & \textit{12.3} & \textit{12.3} \\
\hline
\multirow{6}{*}{ES} & \multirow{2}{*}{LV} & DSC & \textbf{0.886} & \textit{0.880} & \textit{0.875} \\
& & HD & \textbf{9.10} & \textit{9.50} & \textit{10.5} \\
\cline{2-6}
& \multirow{2}{*}{MYO} & DSC & \textbf{0.867} & \textit{0.861} & \textit{0.856} \\
& & HD & \textbf{10.6} & \textit{10.8} & \textit{11.6} \\
\cline{2-6}
& \multirow{2}{*}{RV} & DSC & \textbf{0.860} & \textit{0.850} & \textit{0.844} \\
& & HD & \textbf{12.7} & \textit{13.0} & \textit{13.0} \\
\hline
\end{tabular}
\end{table}
The top-3 teams developed their methods based on nnU-Net~\cite{isensee2020nnunet}. Specifically, Full et al.~\cite{MMs-2020-1st}, the 1st-place team, handled the domain shift problem with an ensemble of five 2D and five 3D nnU-Net models that were trained with batch normalization and extensive data augmentation, such as random rotation, flipping, gamma correction, and multiplicative/additive brightness.
Zhang et al.~\cite{MMs-2020-2nd}, the 2nd-place team, used label propagation to leverage unlabelled cases and exploited the style transfer to reduce the variance among different centers and vendors. The final solution was one single model without using postprocessing and ensemble.
Ma~\cite{MMs-2020-3rd}, the 3rd-place team, addressed the domain shift problem by enlarging the training set with histogram matching, where new training cases were generated by transferring the intensity distributions of the 25 unlabelled cases to the existing labelled cases. The final solution was an ensemble of five 3D nnU-Net models.
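Histogram matching of the kind used for this augmentation can be sketched with a simple quantile-mapping implementation (an illustration, not the team's code):

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities so their distribution matches the reference's
    (quantile mapping, the core of histogram-matching augmentation)."""
    s_vals, s_idx, s_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    # Empirical CDFs of both images.
    s_q = np.cumsum(s_counts) / source.size
    r_q = np.cumsum(r_counts) / reference.size
    # Map each source quantile to the reference intensity at that quantile.
    mapped = np.interp(s_q, r_q, r_vals)
    return mapped[s_idx].reshape(source.shape)
```

Applying this to a labelled image with an unlabelled image as the reference yields a new training case whose intensity distribution mimics the unseen vendor, while the segmentation label is unchanged.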
Table~\ref{tab:MMs} presents the quantitative segmentation results of the top-3 teams. It can be found that the differences among them were marginal and not statistically significant, indicating that \textit{all (three) roads lead to Rome}.
\subsection{EMIDEC: Automatic Evaluation of Myocardial Infarction from Delayed-Enhancement Cardiac MRI}
The task in EMIDEC challenge (\url{http://emidec.com/}) is to segment the myocardium, the infarction, and the no-reflow areas from delayed-enhancement cardiac MR images. The organizers provide 100 cases for training and 50 cases for testing~\cite{lalande2020EMIDEC-Data}. The main difficulties in this challenge are the low contrast, varied short-axis orientations, heterogeneous appearances of myocardium pathology areas, and unbalanced distribution between normal and pathological cases.
Figure~\ref{fig:MR-Seg} (the second row) presents a visualized example.
The evaluation and ranking metrics include
\begin{itemize}
\item clinical metrics: the average errors for the volume of the myocardium (in mm3), the volume (in mm3) and the percentage of infarction and no-reflow area;
\item geometrical metrics: the average DSC for the different areas and Hausdorff distance (in 3D) for the myocardium.
\end{itemize}
Table~\ref{tab:EMIDEC} presents the quantitative segmentation results of the top-3 teams on the final leaderboard\footnote{\url{http://emidec.com/leaderboard}}.
Both Zhang and Ma, the top-2 teams, used a two-stage cascaded framework and developed their methods based on nnU-Net~\cite{isensee2020nnunet}. Specifically, Zhang~\cite{zhang20-EMIDEC-1st} first used a 2D nnU-Net, focusing on the intra-slice information, to obtain a preliminary segmentation, and then a 3D nnU-Net, focusing on the volumetric spatial information, was employed to refine the segmentation results. The 3D nnU-Net took the combination of the preliminary segmentation and original image as the input.
Finally, the scattered voxels in the segmentation results were removed in the postprocessing step.
Ma~\cite{Ma20-EMIDEC-2nd} used 2D nnU-Nets in both stages. First, a 2D U-Net was used to segment the whole heart, including the left ventricle and the myocardium. Then, the whole heart was cropped as a region of interest (ROI). Finally, a new 2D U-Net was trained to segment the infarction and no-reflow areas in the ROI. The final model was an ensemble of five 2D nnU-Net models in each stage.
Feng et al.\footnote{\url{http://emidec.com/downloads/papers/paper-24.pdf}} used a dilated 2D U-Net~\cite{zhou2020ACNN} with rotation-based augmentation, which aims to overcome the varied short-axis orientations.
\begin{table}[!htbp]
\caption{Quantitative results of top-3 teams on EMIDEC Challenge Leaderboard.}\label{tab:EMIDEC}
\setlength\tabcolsep{3pt}
\centering
\begin{tabular}{llccc}
\hline
\multirow{2}{*}{Targets} & \multirow{2}{*}{Metrics} & Zhang \cite{zhang20-EMIDEC-1st} & Ma~\cite{Ma20-EMIDEC-2nd} & Feng et al. \\ \cline{3-5}
& & Rank 1st & Rank 2nd & Rank 3rd \\ \hline
\multirow{3}{*}{Myocardium} & DSC & 0.8786 & 0.8628 & 0.8356 \\
& Vol. Diff. & 9258 & 10153 & 15187 \\
& HD & 13.01 & 14.31 & 33.77 \\ \hline
\multirow{3}{*}{Infarction} & DSC & 0.7124 & 0.6224 & 0.5468 \\
& Vol. Diff. & 3118 & 4874 & 3971 \\
& Vol. Diff. Ratio & 2.38\% & 3.50\% & 2.89\% \\ \hline
\multirow{3}{*}{No-reflow} & DSC & 0.7851 & 0.7776 & 0.7222 \\
& Vol. Diff. & 634.7 & 829.7 & 883.4 \\
& Vol. Diff. Ratio & 0.38\% & 0.49\% & 0.53\% \\ \hline
\end{tabular}
\end{table}
The methods of Zhang~\cite{zhang20-EMIDEC-1st} and Ma~\cite{Ma20-EMIDEC-2nd} obtained comparable results for the myocardium and no-reflow areas, but Zhang achieved significantly better results for the infarction, with a DSC 9\% and 17\% higher than the methods of Ma~\cite{Ma20-EMIDEC-2nd} and Feng et al., respectively.
The major methodological difference is that Zhang used a 3D network in the second stage while Ma and Feng et al. used 2D networks. Thus, one possible reason might be that the 3D network can exploit more image contextual information than the 2D networks, leading to better performance.
\section{Multi-modality 3D Image Segmentation}
\subsection{ADAM: Intracranial Aneurysm Detection and Segmentation Challenge}
The task in the ADAM challenge (\url{http://adam.isi.uu.nl/}) is to segment the aneurysms from TOF-MRA and structural MR images.
The organizers provide 113 cases for training and 141 cases for testing. In the 113 training cases, 93 cases contain at least one untreated, unruptured intracranial aneurysm and 20 cases do not have intracranial aneurysms. In the 141 testing cases, 117 cases contain at least one untreated, unruptured intracranial aneurysm and 26 cases do not have intracranial aneurysms. Each case has two folders:
\begin{itemize}
\item `orig' folder: contains the original TOF-MRA images and structural images (T1, T2, or FLAIR). The structural image was aligned to the TOF image by elastix\footnote{\url{https://elastix.lumc.nl/}}.
\item `pre' folder: all images were preprocessed with `n4biasfieldcorrection'\footnote{\url{http://stnava.github.io/ANTs/}} to correct bias field inhomogeneities.
\end{itemize}
The main difficulty in this challenge is the extreme class imbalance. Specifically, the median image size is $512\times512\times140$ voxels, while the median aneurysm size is 238 voxels, leading to an extremely imbalanced foreground-background ratio of about $6.5\times10^{-6}$.
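The quoted ratio follows directly from the two median sizes:

```python
# Foreground-background ratio in ADAM: median aneurysm size (voxels)
# divided by median image size (voxels).
image_voxels = 512 * 512 * 140      # median image size
aneurysm_voxels = 238               # median aneurysm size
ratio = aneurysm_voxels / image_voxels
print(f"{ratio:.1e}")               # prints 6.5e-06
```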
Figure~\ref{fig:ADAM} presents the visualized examples.
Participants are allowed to use any of the provided images to develop their methods. The testing set is hidden by the organizers and participants should submit their methods with Docker containers.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.45]{imgs/006-ADAM.png}
\caption{Visualized examples in ADAM Challenge. Ground truth (b) of the intracranial aneurysm is shown in 2D projected onto the TOF-MRA and the structural MR image (a) and in 3D together with a volume rendering of the raw data (c). The red arrows point to the intracranial aneurysm.
}\label{fig:ADAM}
\end{figure}
Table~\ref{tab:ADAM} presents the quantitative results of top-2 teams on ADAM Challenge Leaderboard\footnote{\url{http://adam.isi.uu.nl/results/results-miccai-2020/}} during MICCAI 2020.
Both teams developed their methods based on nnU-Net~\cite{isensee2020nnunet}.
Specifically, to alleviate the class imbalance, the team `junma' trained two groups of five-fold nnU-Net models with Dice + cross entropy loss and Dice + TopK loss, respectively~\cite{SegLossOdyssey}. Only the preprocessed TOF-MRA images were used during training.
The final model was the ensemble of five best models during cross-validation.
To speed up the inference, the default test-time augmentation (TTA) in nnU-Net was disabled during testing.
The team `jocker' modified the default nnU-Net by introducing residual blocks in the encoder and replacing the instance normalization with group normalization. The loss function was Dice + TopK loss. The final model was the ensemble of four models with different modalities and output classes.
\begin{table}[!htbp]
\caption{Quantitative results of top-2 teams on ADAM Challenge Leaderboard. The bold numbers are the best results.}\label{tab:ADAM}
\centering
\begin{tabular}{lcccc}
\hline
Team & DSC & HD95 & Volumetric Similarity & Rank \\ \hline
junma & \textbf{0.41} & 8.96 & \textbf{0.50} & 1 \\
joker & 0.40 & \textbf{8.67} & 0.48 & 2 \\ \hline
\end{tabular}
\end{table}
As shown in Table~\ref{tab:ADAM}, the team `junma' achieved the best DSC and Volumetric Similarity and the team `joker' achieved the best HD95. However, it should be noted that the differences between them are marginal.
\subsection{HECKTOR: 3D Head and Neck Tumor Segmentation in PET/CT}
The task in the HECKTOR challenge (\url{http://www.aicrowd.com/challenges/hecktor}) is to segment the head and neck tumor from PET and CT images.
The organizers provide 201 training cases from four medical centers in Montreal and 53 testing cases from another medical center in Lausanne~\cite{HECKTOR-MIDL2020,HECKTOR2021overview}. The tumor ground truth was delineated on PET and CT for radiotherapy treatment planning.
Moreover, the organizers also provided bounding boxes ($114\times114\times114$ $mm^3$) locating the oropharynx region.
The main difficulties are multi-modality fusion, class imbalance, and the unseen testing cases from a new medical center. Figure~\ref{fig:HECKTOR} presents visualized examples.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.6]{imgs/007-HECKTOR.png}
\caption{Visualized examples in HECKTOR Challenge. Ground truth (b) of the head and neck tumor is shown in 2D projected onto the CT and the PET image (a) and in 3D together with a volume rendering of the raw data (c).
}\label{fig:HECKTOR}
\end{figure}
Table~\ref{tab:HECKTOR} presents the quantitative segmentation results of the top-2 teams on HECKTOR Challenge Leaderboard\footnote{\url{https://www.aicrowd.com/challenges/miccai-2020-hecktor/leaderboards}} during MICCAI 2020.
Both teams developed their methods based on the two-channel 3D U-Net~\cite{cciccek2016UNet3D}. Specifically, the team `andrei.iantsen' replaced the batch normalization with squeeze-and-excitation normalization and introduced residual blocks in the encoder. The loss function was the unweighted sum of Dice loss and focal loss~\cite{focal2017}.
Four models with leave-one-center-out splits and four additional models with random data splits were trained for 800 epochs using the Adam optimizer~\cite{kingma2014adam} on two NVIDIA 1080Ti GPUs with a batch size of 2.
The final model was an ensemble of the eight models.
The team `junma'~\cite{HECKTOR-2020-2nd} firstly trained five 3D nnU-Net models~\cite{isensee2020nnunet} with Dice + TopK loss for five-fold cross-validation. Then, a segmentation quality score was defined by model ensembles, which can be used to select the cases with high uncertainties. Finally, the high uncertainty cases were refined by a hybrid active contour model with iterative convolution-thresholding methods~\cite{wang2017JCP,wang2019ICTM, ma2020ICTM-GAC}.
Both teams concatenated the PET and CT images as the network input, and model ensembles were used to predict the testing set. Dice loss was also incorporated in the loss functions of both teams.
\begin{table}[!htbp]
\caption{Quantitative results of top-2 teams on HECKTOR Challenge Leaderboard. The bold numbers are the best results.}\label{tab:HECKTOR}
\centering
\begin{tabular}{lcccc}
\hline
Team & DSC & Precision & Recall & Rank \\ \hline
andrei.iantsen & \textbf{0.759} & 0.833 & \textbf{0.740} & 1 \\
junma~\cite{HECKTOR-2020-2nd} & 0.752 & \textbf{0.838} & 0.717 & 2 \\ \hline
\end{tabular}
\end{table}
The final rank was based on the DSC scores on the testing set.
The 1st-place team `andrei.iantsen' obtained better DSC and Recall while the 2nd-place team `junma' obtained better Precision. However, the differences between the two teams are marginal, especially for the DSC and Precision.
Moreover, both teams achieved significantly higher Precision than Recall, indicating that most of the segmented voxels were real tumor voxels but many tumor voxels were missed by the model.
\subsection{MyoPS: Multi-sequence CMR based myocardial pathology segmentation challenge}
The task in the MyoPS challenge (\url{http://www.sdspeople.fudan.edu.cn/zhuangxiahai/0/myops20/}) is to segment the myocardial pathology (i.e., scar and edema) from multi-sequence cardiac MR images, including the late gadolinium enhancement (LGE) sequence, the T2-weighted sequence, and the balanced Steady-State Free Precession (bSSFP) cine sequence.
The organizers provide 25 cases for training and 20 cases for testing~\cite{MyoPS-MICCAI, MyoPS-TPAMI}.
The main difficulties are multi-modality fusion, class imbalance, and the low contrast and heterogeneous appearances of myocardium lesions.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.6]{imgs/008-MyoPS.png}
\caption{Visualized examples in MyoPS Challenge. Ground truth is shown in 2D projected onto the multi-sequence MR images (b) and in 3D rendering (c).
}\label{fig:MyoPS}
\end{figure}
The winner team, `Zhai \& Gu et al.'~\cite{Zhai2020MyoPS}, proposed a coarse-to-fine framework with weighted ensemble. In the coarse segmentation stage, the whole heart was segmented by a U-Net~\cite{ronneberger2015UNet2D} from three sequence MR images.
In the fine segmentation stage, the region of interest (ROI) was cropped according to coarse segmentation results and a nnU-Net~\cite{isensee2020nnunet} was trained to simultaneously segment the left ventricle, right ventricle, healthy myocardium, scar, and edema from the concatenation of three sequence MR images and coarse segmentation results.
Cross-validation results showed that the 2D U-Net achieved better performance for edema while the 2.5D U-Net achieved better performance for scar. To obtain better performance, a weighted method was used for the final ensemble. Specifically, the weights for the edema and scar prediction channels were 0.8 in the 2D and 2.5D U-Net, respectively, while the weights for the other prediction channels were 0.5.
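As an illustrative sketch (not the team's released code), the channel-weighted ensemble described above can be written as a complementary-weighted average of the two models' probability maps. The channel order and the exact weight vector below are assumptions made for the example:

```python
import numpy as np

def weighted_ensemble(pred_2d, pred_25d, weights_2d):
    """Channel-wise weighted average of two models' probability maps.

    pred_2d, pred_25d: arrays of shape (C, H, W); weights_2d gives the
    per-channel weight of the 2D model, the 2.5D model receiving the
    complementary weight (1 - w) on each channel."""
    w = np.asarray(weights_2d, dtype=float).reshape(-1, 1, 1)
    return w * pred_2d + (1.0 - w) * pred_25d

# Toy maps with an assumed channel order (background, scar, edema):
# the 2D model gets weight 0.8 on edema, the 2.5D model 0.8 on scar.
p2d = np.zeros((3, 2, 2))
p25d = np.ones((3, 2, 2))
fused = weighted_ensemble(p2d, p25d, weights_2d=[0.5, 0.2, 0.8])
```

A downstream argmax over the fused channels would then yield the final label map.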
\begin{table}[!htbp]
\caption{Quantitative results of the winner team `Zhai \& Gu et al.'~\cite{Zhai2020MyoPS} in MyoPS challenge.}\label{tab:MyoPS}
\centering
\begin{tabular}{lcc}
\hline
Target & DSC & Rank \\ \hline
Scar & 0.672 $\pm$ 0.244 & 1 \\
Scar + Edema & 0.731 $\pm$ 0.109 & 1 \\ \hline
\end{tabular}
\end{table}
Table~\ref{tab:MyoPS} presents the quantitative segmentation results of the winner team on the testing set\footnote{\url{http://www.sdspeople.fudan.edu.cn/zhuangxiahai/0/myops20/result.html}}. Zhai \& Gu et al. achieved an average DSC of 0.672 $\pm$ 0.244 and 0.731 $\pm$ 0.109 for scar and the combination of scar and edema, respectively. The performance was significantly better than the inter-observer variation of manual scar segmentation (DSC: 0.5243 $\pm$ 0.1578), demonstrating the effectiveness of the proposed method.
\subsection{ABCs: Anatomical Brain Barriers to Cancer Spread: Segmentation from CT and MR Images}
The ABCs challenge (\url{https://abcs.mgh.harvard.edu/}) includes two brain structure segmentation tasks: \begin{itemize}
\item Task 1: segmenting five brain structures, including falx cerebri, tentorium cerebelli, sagittal and transverse brain sinuses, cerebellum and ventricles, which can be used for automated definition of the clinical target volume (CTV) for radiotherapy treatment.
\item Task 2: segmenting ten structures, including the brainstem, left and right eyes, left and right optic nerves, the optic chiasm, left and right lacrimal glands, and left and right cochleas, which can be used in radiotherapy treatment plan optimization.
\end{itemize}
The organizers provide 45 cases for training, 15 cases for validation, and 15 cases for testing. Each case consists of one CT image acquired for treatment planning and two post-operative brain MRI images (i.e., contrast-enhanced T1-weighted and T2-weighted FLAIR). The CT and MR images were obtained from two different CT scanners and seven different MRI scanners, respectively. The multi-modality images were co-registered and re-sampled to the same resolution and size.
The main difficulties are the multi-modality fusion, imbalanced labels, and multi-vendor cases.
Figure~\ref{fig:ABCs} presents the visualized examples in two tasks.
Participants are required to submit their segmentation results within 48 hours of downloading the testing set.
DSC and Surface DSC\footnote{\url{https://github.com/deepmind/surface-distance}} at a tolerance of 2~mm are used to evaluate and rank the segmentation results.
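For reference, the DSC can be computed as below. This is a minimal sketch for binary masks; the Surface DSC is best computed with the library linked in the footnote, which handles surface extraction and voxel spacing:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# One overlapping voxel out of two foreground voxels per mask -> 0.5.
pred = np.array([1, 1, 0, 0])
gt = np.array([1, 0, 1, 0])
score = dice(pred, gt)
```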
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.75]{imgs/009-ABCs.png}
\caption{Visualized examples in ABCs Challenge. Ground truth (b) is shown in 2D projected onto the multi-modality images (a) and in 3D together with a volume rendering of the raw data (c).
}\label{fig:ABCs}
\end{figure}
\begin{table}[!htbp]
\caption{Quantitative results of top-2 teams on ABCs Challenge Leaderboard. The bold numbers are the best results.}\label{tab:ABCs}
\centering
\begin{tabular}{lccccc}
\hline
\multirow{2}{*}{Team} & \multicolumn{2}{c}{Task 1} & \multicolumn{2}{c}{Task 2} & \multirow{2}{*}{Rank} \\ \cline{2-5}
& DSC & SDSC & DSC & SDSC & \\ \hline
Jarvis~\cite{ABCs-2020-1st} & \textbf{0.888} & \textbf{0.980} & \textbf{0.783} & 0.936 & 1 \\
HILab & 0.883 & 0.978 & 0.781 & \textbf{0.941} & 2 \\ \hline
\end{tabular}
\end{table}
Table~\ref{tab:ABCs} presents the average DSC and SDSC of testing set segmentation results of the top-2 teams on the Challenge Leaderboard\footnote{\url{https://abcs.mgh.harvard.edu/index.php/leader-board}}.
Both teams developed their methods based on nnU-Net~\cite{isensee2020nnunet}. Specifically, the team `Jarvis'~\cite{ABCs-2020-1st} used the ResU-Net where residual blocks were introduced in the U-Net encoder. The training process had three main features:
\begin{itemize}
\item the training cases `007' and `054' in Task 2 had annotation issues. Thus, the default annotations were replaced with pseudo labels generated by cross-validation.
\item the flipping along x-axis was dropped from the default data augmentation setting in nnU-Net, because the segmentation targets in Task 2 are sensitive to left and right direction.
\item in addition to the default Dice + CE loss in nnU-Net, Tversky loss~\cite{salehi2017tversky, SegLossOdyssey} was also used to train the ResU-Net.
\end{itemize}
The final model was an ensemble of default nnU-Net, ResU-Net with Dice-CE loss, and ResU-Net with Tversky loss.
The team `HILab' used a coarse-to-fine framework with nnU-Net~\cite{isensee2020nnunet} for both tasks. Specifically,
\begin{itemize}
\item in Task 1, a model was first trained on the whole images to obtain coarse segmentations with little overfitting. Then, each organ was cropped from the original images and refined by an independent network. The refined organs were combined as the final segmentation results.
\item in Task 2, a coarse model was first trained to segment all organs. Then, each organ was also cropped from the original images and refined by an independent network. The training process differed from Task 1 in that a new data augmentation technique, flipping each organ to the other side, was introduced to enlarge the training set. The final segmentation results were also the combination of the refined organs.
\end{itemize}
Both teams fused the three modalities by concatenating them as the network input. Model ensemble was also used by both teams but the ensemble strategies were different. In particular, the team `Jarvis' used an ensemble of multiple multi-organ segmentation networks while the team `HILab' used an ensemble of one multi-organ and multiple single-organ segmentation networks.
\subsection{BraTS: Brain Tumor Segmentation}
The segmentation task in BraTS challenge (\url{https://www.med.upenn.edu/cbica/brats2020/}) is to segment the enhancing tumor (ET), the tumor core (TC, the necrotic and non-enhancing tumor core), and the whole tumor (WT) from pre-operative multi-modality MR images. As shown in Figure~\ref{fig:BraTS}, the whole tumor comprises the enhancing tumor (red), the edema (green), and the tumor core (blue).
The organizers provide 369 cases for training, 125 cases for validation, and 166 cases for testing. Each case consists of four modalities: the native (T1) MR image, the post-contrast T1-weighted (T1Gd) MR image, the T2-weighted (T2) MR image, and the T2 Fluid Attenuated Inversion Recovery (FLAIR) MR image, which were acquired with different clinical protocols and various scanners from 19 institutions~\cite{Brats15-TMI,bratscite3,bratscite4,bratscite5,bakas2018brats}.
The main difficulties are the multi-modality fusion, imbalanced labels, low-contrast and heterogeneous appearances of the brain lesion.
Participants are required to submit their segmentation results within 48 hours of downloading the testing set.
DSC and HD95 are used to evaluate and rank the segmentation results.
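For completeness, the 95th-percentile Hausdorff distance (HD95) can be sketched on point sets as below; real implementations operate on mask surfaces and account for voxel spacing, so this is a toy illustration only:

```python
import numpy as np

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between two point
    sets given as (N, d) coordinate arrays (toy sketch)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # each point in A to its nearest point in B
    b_to_a = d.min(axis=0)  # each point in B to its nearest point in A
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
shifted = pts + np.array([0.0, 1.0])  # every point moved by 1
```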
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.5]{imgs/010-BraTS.png}
\caption{Visualized examples in BraTS Challenge. Ground truth is shown in 2D projected onto the multi-sequence MR images and in 3D together with a volume rendering of the raw data.
}\label{fig:BraTS}
\end{figure}
The winner team `MIC\_DKFZ', led by Fabian et al.~\cite{isensee2020brats-1st}, extended nnU-Net~\cite{isensee2020nnunet} by incorporating BraTS-specific modifications regarding postprocessing, region-based training, more aggressive data augmentation, BraTS ranking-based model selection, as well as several minor modifications, which substantially improved the default nnU-Net segmentation performance.
Specifically, the following BraTS-specific modifications were integrated into nnU-Net’s configuration:
\begin{itemize}
\item \textbf{Region-based training:} replacing the softmax layer with a sigmoid layer and changing the optimization target to the three tumor sub-regions. The default cross entropy loss term was also replaced with a binary cross-entropy where each of the regions was optimized independently;
\item \textbf{Postprocessing:} removing the enhancing tumor entirely if its predicted volume was less than a given threshold. The threshold was optimized twice via training-set cross-validation, once by maximizing the mean Dice score and once by minimizing the BraTS-like ranking score;
\item \textbf{Increased batch size:} increasing the batch size from 2 to 5;
\item \textbf{Extensive data augmentation:} using more aggressive augmentations, such as increasing the probability of applying rotation and scaling, the scaling range, elastic deformation, and so on;
\item \textbf{Batch normalization:} replacing the default instance normalization with batch normalization;
\item \textbf{Batch dice:} computing the dice loss over all samples in the batch;
\item \textbf{BraTS Ranking-based model selection:} selecting the best model with BraTS-like `rank then aggregate' ranking scheme.
\end{itemize}
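The region-based training modification can be made concrete with a small sketch: the label map is converted into three nested region channels, each of which is then optimized independently with a sigmoid output and binary cross-entropy. The label values below follow the usual BraTS conventions (1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor), which are assumed here:

```python
import numpy as np

def labels_to_regions(label_map):
    """Map a BraTS label volume to the three nested region targets of
    region-based training: whole tumor (WT), tumor core (TC), and
    enhancing tumor (ET)."""
    wt = np.isin(label_map, [1, 2, 4])  # all tumor labels
    tc = np.isin(label_map, [1, 4])     # core = necrotic + enhancing
    et = label_map == 4                 # enhancing tumor only
    return np.stack([wt, tc, et]).astype(np.float32)

regions = labels_to_regions(np.array([0, 1, 2, 4]))
```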
The final model was an ensemble of 25 cross-validation models, including three groups of top-performing models.
Two tied teams ranked second. Specifically, the team `NPU\_PITT', led by Jia et al.~\cite{jia2020brats-2nd}, proposed a Hybrid High-resolution and Non-local Feature Network (H$^2$NF-Net). The four modalities were concatenated as a four-channel input and processed at five different scales in the network. The edema and enhancing tumor were segmented by the single HNF-Net, while the tumor core was segmented by the cascaded HNF-Net.
The team `Radicals', led by Wang et al.~\cite{wang2020brats-2nd}, proposed an end-to-end Modality-Pairing learning method with parallel branches and additional layer connections to explore the latent relationship among different modalities. Moreover, a consistency loss was introduced to minimize the prediction variance between branches. The final model was an ensemble of three Modality-Pairing models and three vanilla nnU-Net~\cite{isensee2020nnunet} models.
\begin{table}[!htbp]
\caption{Quantitative results of the top-3 teams on BraTS 2020 Challenge Leaderboard. The bold numbers are the best results.}\label{tab:BraTS}
\setlength\tabcolsep{3.5pt}
\centering
\begin{tabular}{llcc}
\hline
Team & Target & DSC & HD95 \\
\hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}MIC\_DKFZ\\Fabian et al.~\cite{isensee2020brats-1st}\\Rank 1st~\end{tabular}} & Enhancing Tumor & 0.820~$\pm$~0.197 & 17.8 $\pm$ 74.9 \\
& Whole Tumor & 0.890~$\pm$~0.132 & 8.50 $\pm$ 40.7 \\
& Tumor Core & 0.851~$\pm$ 0.240 & \textbf{13.3 $\pm$ 69.5} \\
\hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}NPU\_PITT\\Jia et al.~\cite{jia2020brats-2nd}\\Rank 2nd (tie)~\end{tabular}} & Enhancing Tumor & \textbf{0.828~$\pm$~0.177} & \textbf{13.0 $\pm$ 63.7} \\
& Whole Tumor & 0.888~$\pm$~0.119 & \textbf{4.53 $\pm$ 6.21} \\
& Tumor Core & \textbf{0.854~$\pm$ 0.231} & 16.9 $\pm$ 69.5 \\
\hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}{Radicals}\\Wang et al.~\cite{wang2020brats-2nd}\\Rank 2nd (tie) \end{tabular}} & Enhancing Tumor & 0.816 $\pm$ 0.197 & 17.8 $\pm$ 74.9 \\
& Whole Tumor & \textbf{0.891 $\pm$ 0.112} & 6.2 $\pm$ 29.0 \\
& Tumor Core & 0.842 $\pm$ 0.244 & 19.5 $\pm$ 74.8 \\
\hline
\end{tabular}
\end{table}
Table~\ref{tab:BraTS} presents the quantitative segmentation results of the top-3 teams on the testing set\footnote{\url{https://www.med.upenn.edu/cbica/brats2020/rankings.html}}. Overall, the performance differences are marginal. The team `MIC\_DKFZ' achieved the best HD95 for the tumor core and the team `Radicals' achieved the best DSC for the whole tumor. The team `NPU\_PITT' achieved the best performance in the remaining four metrics.
\section{Discussion}
\subsection{What are the ``happy-families'' elements in the top methods?}
As the Anna Karenina principle goes\footnote{\url{https://en.wikipedia.org/wiki/Anna_Karenina_principle}},
``All happy families are alike,'' and likewise there are some common components in the top methods.
\textbf{nnU-Net~\cite{isensee2020nnunet} backbone} All the top methods used U-Net-like architectures~\cite{ronneberger2015UNet2D,cciccek2016UNet3D} in the ten 3D segmentation challenges. Remarkably, nnU-Net was used by the top teams in nine out of ten challenges, because it is open-source, powerful, flexible, and works out of the box. Participants can easily integrate their new methods into nnU-Net.
\textbf{Dice-related loss functions}
The loss function is one of the most important elements in deep learning-based segmentation methods. nnU-Net uses Dice + cross entropy as the default loss function. For extremely imbalanced segmentation tasks, modifying the loss function can yield better performance.
For example, the winner of the HECKTOR challenge used Dice + Focal loss, and both the winner and the runner up of the ADAM challenge used Dice + TopK loss. For a more detailed analysis of segmentation loss functions, please refer to \cite{SegLossOdyssey}.
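To make the TopK variant concrete, the following is a hedged numpy sketch of a combined Dice + TopK loss for a binary target, where the cross entropy is averaged only over the $k$\% hardest voxels. It mirrors the idea rather than any team's exact implementation:

```python
import numpy as np

def dice_loss(probs, target, eps=1e-8):
    """Soft Dice loss for a binary probability map."""
    inter = (probs * target).sum()
    return 1.0 - 2.0 * inter / (probs.sum() + target.sum() + eps)

def topk_ce_loss(probs, target, k=10, eps=1e-8):
    """Binary cross entropy averaged over the k% hardest voxels only."""
    ce = -(target * np.log(probs + eps)
           + (1 - target) * np.log(1 - probs + eps)).ravel()
    n = max(1, int(ce.size * k / 100))
    return np.sort(ce)[-n:].mean()

def dice_topk_loss(probs, target, k=10):
    return dice_loss(probs, target) + topk_ce_loss(probs, target, k)

target = np.array([1.0, 0.0, 1.0, 0.0])
good = dice_topk_loss(target, target)          # near-perfect prediction
bad = dice_topk_loss(np.full(4, 0.5), target)  # uninformative prediction
```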
\textbf{Cascaded/coarse-to-fine framework} Cropping the region of interest (ROI) eliminates unrelated background tissues and reduces the computational burden. Thus, one can first train a model to obtain a coarse segmentation and then crop the ROI. After that, a new model is trained on the ROI image (concatenated with the coarse segmentation) to refine the segmentation results. This strategy is quite effective for myocardial pathology and small-organ segmentation tasks, and was used by the winners of the EMIDEC and MyoPS challenges and by the runner up in the ABCs challenge.
\textbf{Model ensembles} Ensembling is an effective way to combine the strengths of multiple single models. All the top teams used more than one model in their final solutions. The models were usually trained with different data splits, data augmentation techniques, networks, or loss functions, and then combined by averaging the predictions, majority vote, or cascaded frameworks.
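A minimal sketch of one such combination strategy, voxel-wise majority voting over hard label maps, could look as follows (illustrative only; probability averaging is the other common choice):

```python
import numpy as np

def majority_vote(segmentations):
    """Voxel-wise majority vote over label maps of identical shape;
    ties are resolved in favor of the smaller label index."""
    seg = np.stack(segmentations)
    n_classes = int(seg.max()) + 1
    # Count, per class, how many models voted for it at each voxel.
    votes = np.stack([(seg == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0)

preds = [np.array([0, 1, 1]), np.array([1, 1, 0]), np.array([1, 1, 1])]
fused = majority_vote(preds)
```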
\textbf{Concatenated input fusion in multi-modality segmentation tasks}
How to fuse multiple different images is a key question in multi-modality segmentation tasks.
Common deep-learning based image fusion methods include input-level fusion, feature-level fusion, and output-level fusion. In five multi-modality segmentation challenges,
four out of five winner teams used input-level fusion, which directly concatenates multiple images as network inputs. The winner team in the ADAM challenge used only one modality, but the runner up, which achieved similar performance, also used the concatenation strategy to fuse different modalities.
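Input-level fusion amounts to a single stacking operation: co-registered volumes of identical shape become the channels of one input tensor (the shapes below are illustrative):

```python
import numpy as np

# Four co-registered MR volumes of identical shape are stacked along a
# channel axis and fed to the network as one multi-channel input.
t1, t1gd, t2, flair = (np.zeros((32, 32, 32)) for _ in range(4))
fused_input = np.stack([t1, t1gd, t2, flair], axis=0)  # (4, D, H, W)
```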
\subsection{Problems and Opportunities}
Based on the summary of the ten segmentation challenges, it can be seen that deep learning has achieved unprecedented or even human-level performance on many medical image segmentation tasks, but several problems still remain.
In the following, we introduce some of these problems, as well as opportunities that can promote the further development of medical image segmentation methods.
\textbf{Standardized method reports}
Many challenge organizers required the participants to submit a short paper describing their methods. However, these papers are usually structured in their own way and some necessary details may be missing. The overall challenge quality has been greatly improved by the Biomedical Image Analysis ChallengeS (BIAS) initiative~\cite{maier2020BIAS-MIA}, where a checklist is used to standardize the review process and raise the interpretability and reproducibility of challenge results. Thus, there is also a high demand for quality control of challenge method reports.
The winner team in MICCAI Hackathon Challenge provided an initial attempt
(\url{https://github.com/JunMa11/MICCAI-Reproducibility-Checklist}) at dealing with the method reproducibility with a checklist, but more efforts are required to make this checklist more complete and acceptable by our community.
\textbf{Publicly available baseline models}
nnU-Net has proved to be a strong baseline.
When starting on a new 3D segmentation challenge, most participants will train nnU-Net baseline models, which usually costs 72--120 GPU hours (depending on the computational facilities).
This could be a great waste of energy and time, because participants are repeatedly doing the same thing.
There is a strong demand for publicly available (trained) baseline models when a new segmentation challenge is launched. In this way, participants can pay more attention to developing new methods without spending energy and time on training the baseline models.
\textbf{Fast and memory efficient models}
There is no doubt that accuracy (e.g., DSC, HD) is an important factor for segmentation methods. However, the running time and the GPU memory requirement of the segmentation methods are also critical when deploying the trained models in clinical practice.
Currently, most of the top methods use model ensembles, which could be time-consuming and require very high computing resources.
In order to promote the deep learning-based medical image segmentations to be clinically applicable, it is necessary to pay more attention to the models' running efficiency.
\textbf{Theoretical foundations of segmentation models}
Current theoretical studies of deep learning usually rely on strong assumptions~\cite{CNN-Theory-PNAS,E-CNN-Theory,CNN-Theory-Tao}, such as smoothness, infinite width, and so on. However, when it comes to real practice, many open problems remain unsolved. For example: What is the theoretical principle behind designing segmentation network architectures? Is there a generalization gap, and how should we estimate it? What does the loss function landscape look like? Does the training process converge to a good solution, and how fast? How much data do we need when starting on a new segmentation task?
\textbf{Diverse datasets and generalizable segmentation models}
Collecting diverse datasets is critical for developing generalizable segmentation models, because clinical practice requires that trained models can be applied to many (unseen) medical centers.
According to the challenge results (e.g., M\&Ms, BraTS, HECKTOR), we found that segmentation performance drops significantly when testing sets include unseen cases from new medical centers.
Thus, it is important to have diverse datasets to evaluate the models' generalization ability when organizing segmentation challenges.
Currently, building generalizable models that can be applied consistently across medical centres, diseases, and scanner vendors is still an unsolved and challenging problem.
\textbf{Extremely imbalanced target segmentation}
Imbalanced segmentation has been a long-standing problem in medical image segmentation, especially when the size of the target foreground region is several orders of magnitude smaller than the background.
Recent studies have made some progress~\cite{milletari2016vnet,ma-MIDL2020-SegWithDist,BDLoss-2020-MIA}; however, extremely imbalanced segmentation remains very difficult. For example, in the ADAM challenge, the median target size is 238 voxels, which occupies $6.5\times10^{-6}$ of the median image size.
The winner method achieved a DSC score of 0.41, which leaves large room for further improvement.
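The scale of this imbalance is easy to verify from the quoted figures: a 238-voxel median target occupying $6.5\times10^{-6}$ of the image implies a median image volume of roughly $3.7\times10^{7}$ voxels (on the order of, say, a $512 \times 512 \times 140$ scan; the scan dimensions are illustrative):

```python
# Sanity check of the imbalance figures quoted above.
median_target_voxels = 238
foreground_fraction = 6.5e-6
median_image_voxels = median_target_voxels / foreground_fraction
```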
\subsection{Limitations}
There are also some important topics that are not covered in this paper.
For example, a summary of 2D medical image segmentation challenges~\cite{MoNuSeg,ross2020ROBUST-MIS,AGE-Challenge} has not been included in this paper, because we only found three 2D international image segmentation challenges in 2020, including thyroid nodule segmentation in ultrasound images (\url{https://tn-scui2020.grand-challenge.org/Home/}), optic disc and cup segmentation in fundus images (\url{https://refuge.grand-challenge.org/Home2020/}), and cataract segmentation in surgical videos (\url{https://cataracts-semantic-segmentation2020.grand-challenge.org/Home/}).
Thus, the findings would be biased given the limited number of challenge samples, and we will provide a similar summary for 2D medical image segmentation methods when there are enough ($\geq10$) international challenges. Moreover, this paper only covers cutting-edge fully supervised segmentation methods, while semi-supervised learning~\cite{Not-so-supervised-MIA-19,van2020semi-survey, Luo2020smalldata-survey}, weakly supervised learning~\cite{tajbakhsh2020embracing-MIA,NoisyLabel-Review}, and continual learning~\cite{ContinualLearning-NN,soltoggio2018borntolearn,hoi2018onlineLearning-Survey}-based segmentation methods are not covered. This is because, currently, there are few benchmarks or challenges~\cite{ma2020COVID-Data,Ma-2020-abdomenCT-1K} for these topics in the medical image segmentation field.
\section{Conclusion}
Challenges provide an open and fair platform for various research groups to test and validate their segmentation methods on common datasets acquired from the clinical environment.
In this paper, we have summarized ten 3D medical image segmentation challenges and the corresponding top methods.
In addition, we identified the common ``happy-families'' elements in the top methods and pointed out potential future research directions in medical image segmentation.
Moreover, we also maintain a public GitHub repository (\url{https://github.com/JunMa11/SOTA-MedSeg}) to collect the cutting-edge segmentation methods based on various international segmentation challenges.
We expect that this review of the cutting-edge 3D image segmentation methods will be beneficial to both early-stage and senior researchers in related fields.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
The authors would like to thank all the organizers for creating the public datasets and holding the great challenges.
The authors also highly appreciate Ruochen Gao (the winner in ASOCA), Ran Gu (the winner in MyoPS), Wenhui Lei (the winner in MyoPS and the runner up to winner in ABCs), Munan Ning (the winner in ABCs), and Yixin Wang (the runner up to winner in BraTS), Yao Zhang (the runner up to winner in M\&Ms), Yichi Zhang (the winner in EMIDEC) for valuable discussions.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{jsonexample}
\caption{An example log line stored in the backend as a learner pauses a video. The log line is stored in JavaScript Object Notation (JSON).}
\label{json-example}
\end{figure*}
Massive open online courses (MOOCs) have provided a new way to reach enormous numbers of learners via an online platform. Many courses proceed very similarly to a campus course. However, they allow students from anywhere in the world to register for a course and earn a certificate upon successful completion. The specific layout of each MOOC varies, but currently most follow a similar format. Content is sectioned into modules, usually using weeks as intervals. Most MOOCs include online lectures (video segments), lecture questions, homework questions, labs, a forum, a Wiki, and exams (midterm and final). Students\footnote{In this paper we use learner and student to mean the same thing.} advance through the modules sequentially, access online resources, submit assignments, and participate in peer-to-peer interactions (forums). While similar to campus-based courses, there are significant differences in the way material is offered and the way learners interact with these courses:\footnote{A TEDx lecture by Anant Agarwal explains these and other key ideas that make MOOCs powerful.}
\vspace{-3mm}
\begin{description}
\item \textbf{Interactive learning}: MOOCs allow the insertion of interactive exercises in between lecture videos enabling student to apply the concepts they have learnt. They allow instructors to use new technology to engage learners in course material including simulations, peer grading, discussion forums, and games. They also allow instructors to integrate experiences from outside classroom into the curriculum.
\vspace{-3mm}
\item \textbf{Self paced and anytime learning}: They allow students to start their lectures anytime and engage with the course anytime as and when their schedule permits. Additionally, students can replay lecture videos, pause, play again and learn at a pace that is most beneficial for them.
\vspace{-3mm}
\item \textbf{Instantaneous feedback on assessments}: In MOOCs students can be allowed multiple attempts for a problem and can get instantaneous feedback on every attempt. This feedback can range from whether the answer was right or wrong to a more sophisticated diagnosis.
\end{description}
\vspace{-4mm}
\begin{figure}
\texttt{
\begin{tabular}{lll}
timestamp & url & event \\ \hline
$2013-11-10 08:46:21$& 191 & play\_video \\
$2013-11-10 08:46:49$ & 191 & pause\_video \\
$2013-11-10 08:47:24$& 191 & play\_video \\
$2013-11-10 08:51:25$ & 191 & pause\_video \\
$2013-11-10 08:51:48 $& 191 &play\_video \\
$2013-11-10 08:53:08 $&198 & seq\_goto \\
$2013-11-10 08:55:05$& 284 & pause\_video\\
$2013-11-10 08:56:05 $& 284 &play\_video\\
$2013-11-10 09:40:50 $&284 &pause\_video \\
$2013-11-10 09:41:13 $&284& play\_video \\
$2013-11-10 09:41:57$&284&play\_video \\
$2013-11-10 09:53:37$&284& pause\_video \\
$2013-11-10 10:15:53$& 284& problem\_check \\
$2013-11-10 10:20:27$ & 121 &problem\_check \\
$2013-11-10 10:22:27$& 123 & problem\_check \\
$2013-11-10 10:25:50$ &123 & problem\_graded \\
\end{tabular}
}
\caption{A snapshot of one learner's timeline spanning approximately 2 hours of activity recorded as click stream events. During this period the student \textit{plays} and \textit{pauses} a video and attempts the problems available on the \textit{urls} 121 and 123. \textit{urls} are encoded with numbers and we store all the meta information about the content of the \textit{url} in a different table.}
\label{studentevents}
\end{figure}
As students advance through the course, every mouse click they make on the course website is recorded, their submissions are collected, and their forum interactions are captured. An example of a clickstream event and how it is recorded is presented in Figure~\ref{json-example}. The recorded data presents an opportunity for researchers to analyze the data post-hoc and answer questions ranging from simple ones such as \textit{what was useful?} and \textit{what was not?} to more complex research questions such as \textit{What was the reason behind a student leaving the course?}, \textit{What were the most common misconceptions in the material?}, and \textit{How do students solve a problem?}.
As data scientists, to answer these questions one first attempts to quantitatively characterize learners' online behavior from \textit{web logs} and \textit{click stream} data. The raw data, recorded as shown in Figure~\ref{json-example}, after processing, curation, and storage in a database\footnote{These three steps are extremely complex and challenging but are not in the scope of this paper.}, enables extraction of \textit{per-learner} sequences of click stream events during a specified time period, as shown in Figure~\ref{studentevents}. These \textit{per-learner} event sequences provide only primitive signals that form the basis for inferences about a learner's knowledge acquisition, attitude, and attention. However, they have the potential to help us gauge a learner's intent, interest, and motivation in the absence of verbalized or visual feedback from the learner. To make such inferences we must form \textit{variables} capturing learner behavior; an endeavor we call \textit{feature engineering}. Among many different types of \textit{variables} and data representations, two kinds of variables are of interest:
\noindent \textbf{Variables that capture per learner behavior with respect to a \textit{resource}}: For example, in the above sequence of clickstream events two such variables are: \textit{total time spent on the video} and the \textit{number of pauses while watching the video}. When these two variables are evaluated for all the learners and analyzed they can uncover patterns; if too many learners \textit{pause} too many times, the video could be fast and/or confusing.
\noindent \textbf{Per-learner longitudinal variables}: A longitudinal study involves repeated observation of the same variables over time. A variable is usually an aggregate or a statistic of some observations defined for that time period. In the context of the MOOC, we can define the time interval to be a \textit{week}, a \textit{day} or a \textit{time corresponding to the module/unit} or divide the course into two periods - before and after midterm. An example is \textit{time each student spent on the course website during the week}. A more complex variable is \textit{on an average, the time before the deadline learner starts to work on an assignment}.
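Operationalizing the first kind of variable is a small event-folding computation. The sketch below (illustrative, not the platform's pipeline) extracts the \textit{number of pauses} and \textit{time spent on a video} from events shaped like those in Figure~\ref{studentevents}:

```python
from datetime import datetime

# Events as (timestamp, url, event) tuples, mirroring Figure 2.
events = [
    ("2013-11-10 08:46:21", 191, "play_video"),
    ("2013-11-10 08:46:49", 191, "pause_video"),
    ("2013-11-10 08:47:24", 191, "play_video"),
    ("2013-11-10 08:51:25", 191, "pause_video"),
]

def video_features(events, url):
    """Per-learner features for one video: number of pauses, and total
    seconds elapsed between each play and the following pause."""
    fmt = "%Y-%m-%d %H:%M:%S"
    pauses, watch_time, play_start = 0, 0.0, None
    for ts, u, ev in events:
        if u != url:
            continue
        t = datetime.strptime(ts, fmt)
        if ev == "play_video":
            play_start = t
        elif ev == "pause_video" and play_start is not None:
            pauses += 1
            watch_time += (t - play_start).total_seconds()
            play_start = None
    return pauses, watch_time

n_pauses, seconds = video_features(events, 191)
```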
\subsection{What is the challenge in feature engineering?}
Engineering features from this type of data, time series of click stream events that record human interaction with an online learning platform, presents a very unique set of challenges. As machine learning researchers (and data scientists) our first inclination was to seek automated ways to extract features. Perhaps the methods developed for problems in image understanding and in text and spoken language processing, such as \textit{deep learning}, which enable further automation of what has already been semi-automated, would transfer? With this type of data, however, we quickly realized that feature engineering needs to be primarily driven by humans because of the multiple roles they assume in this endeavor. Below we explicate through examples some of the roles humans play in engineering these features. They:
\vspace{-3mm}
\begin{description}
\item \textbf{Generate ideas based on their intuition}: Coming up with variables requires generation of ideas based on \textit{intuition} and an understanding of what could be relevant for a study. As humans who have been learners in some shape or form, we self-reflect to invent variables. For example, when considering prediction of stopout/dropout, we might each quite naturally suggest: ``If the student starts to work on homework problems very close to the deadline, he might be very likely to fall behind and eventually drop out.'' Subsequently, we might propose how to operationalize such a variable into a quantitative value by measuring ``the time difference between the deadline and the student's first attempt at the homework problem.'' While many other aspects of feature engineering can be automated, this one cannot.
\vspace{-3mm}
\item \textbf{Bring their knowledge about the context as instructors}: For MOOCs, designing variables requires understanding of the \textit{context} and \textit{content} of the course for which the variables are sought. In other words, instructors or experts in the course are able to propose which variables might be important to capture. For example, an instructor might be aware of an important concept whose understanding is critical for continued success in the course and may hypothesize that a variable capturing whether the learner understood the concept could help predict stopout/dropout.
\item \textbf{Use their highly specialized knowledge of learning science}: Additionally researchers from learning sciences are able to propose variables grounded in theory that they can link together \textit{via} a multivariate distribution to explain latent constructs such as \textit{motivation}, \textit{intention}, and \textit{self-efficacy}.
\item \textbf{Operationalize the ideas}: Due to the type of data and the nature of the variables, operationalizing these ideas into variables requires a number of steps, plus details, assumptions and heuristics that depend on the context. Take, for example, the variable proposed above. First, it requires us to assemble the deadlines for all the problems in different weeks. Then we have to define what constitutes the ``start'' time for a student working on the assignment. Since there is no mechanism by which students notify us when they started to work on the assignment, we have several options: the first time they looked at the problem, the time of their first attempt at the problem, or the time they attempted the problem but saved the answer instead of checking for correctness. One can argue that the first time a student looks at the assignment might not be the start time, as it might correspond to simple browsing behavior, so one resorts to the first attempt made by the learner on the problem.
\end{description}
\vspace{-3mm}
Given that humans are NOT replaceable in this endeavor, we shifted our focus towards a different goal: \textit{how to increase the number of people who can participate in this endeavor?} Towards that end, in this paper we initiate a new, broad and fundamental approach to human-driven feature engineering. In developing our new approach we focused on engineering features that could be predictors for \textit{who is likely to stopout?}
We started with a very natural approach, which was to think up feature ideas ourselves and transfer them into quantities in the data. We realized that this tack is vulnerable to missing some features, because others may interpret what is happening differently than we do. This led us to construct activities for collecting ideas for features from others, i.e., the ``crowd''. This allowed us to expand the diversity of our feature set and eliminate our blind spots. Subsequently, we evaluated the value of features in predicting stopout\xspace from a machine learning perspective and discerned an important characterization of features. Our study of feature engineering for the stopout prediction problem helped us lay the foundation for building next-generation scalable feature engineering platforms that not only radically increase the number of people who can participate in this endeavor, but also enable much smoother and more efficient participation.
\subsection{Our contributions}
Our contributions through this paper are:
\begin{itemize}
\vspace{-2mm}
\item We develop the first set of longitudinal \textit{per-learner} variables that are able to successfully predict stopout\xspace.
\vspace{-2mm}
\item For the first time, in this domain, we present the results of soliciting ideas for variables from the ``crowd", an endeavor we conclude is going to be necessary in this domain.
\vspace{-2mm}
\item We provide an in-depth account of the feature engineering process we have used to develop features that helped us predict stopout\xspace.\footnote{We use the term \textit{stopout} to refer to dropout.}
\vspace{-2mm}
\item We present a systematic way to evaluate the contribution of each feature towards the prediction problem. We apply this methodology to the features we assembled and demonstrate the importance of the contribution of the ``crowd".
\vspace{-2mm}
\item We present the insights feature engineering can yield about feature types, and how to make features themselves as carefully nuanced and developed as the predictive models of behavior.
\vspace{-2mm}
\item We use our account to reflect on the feature engineering that is going to be necessary if the community wants to fulfill the goal of understanding online learning behavior from MOOC data. We present the steps necessary to scale this process and increase the pool of people contributing to finding insights from the data.
\end{itemize}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.7\textwidth]{googleform}
\caption{Google form presented to the members of the class. The participants were asked to describe the \textit{feature} they propose, and why it would be useful in predicting stopout\xspace. A total of 30 \textit{features} were proposed, by 12 members in the class. 8 members proposed more than 1 \textit{feature}.}\label{fig:googleform}
\end{figure*}
\setlength{\arrayrulewidth}{0.5pt}
\setlength{\tabcolsep}{10pt}
\renewcommand{\arraystretch}{1.5}
\newlength{\thickline}
\setlength{\thickline}{1pt}
\makeatletter
\def\hlinex{%
\noalign{\ifnum0=`}\fi\hrule \@height \thickline \futurelet
\reserved@a\@xhline}
\makeatother
\newlength{\threecoltabwid}
\setlength{\threecoltabwid}{\textwidth - \tabcolsep * 2 * 3}
\begin{table*}[htp]
\centering
\medskip
\begin{tabular*}{\textwidth}{ >{\centering\arraybackslash}p{0.05\threecoltabwid} >{\raggedright\arraybackslash}p{0.35\threecoltabwid} >{\raggedright\arraybackslash}p{0.6\threecoltabwid}}
\hlinex
& \textbf{Describe feature} & \textbf{Why is this feature useful?} \\ \hlinex
& \textbf{pset grade over time}: Difference between grade on the current pset and average grade over previous psets. Significant decreases may be likely to be correlated with dropouts.& Anecdotally it appears that users who perform poorly on the current week (especially after not performing poorly in the preceding weeks) will subsequently give up. They may also, with low probability, post on the forum explaining their issue with the task at hand. \\ \hline
&\textbf{average pre deadline submission time}: average time between problem submission time and problem due date. & people who get things done early are probably not under time pressure that would make them drop out.
\\ \hline
&\textbf{proportion of time spent during weekends}: Fraction of observed resource time spent on each day of the week (7 variables for Mon--Sun that add up to 1). Just for previous week, and averaged over all weeks so far. & Heavy weekend users might be more likely to drop out, because they don't have spare weekday time to devote to the course.
\\ \hlinex
\end{tabular*}
\caption{Three examples of features proposed by the students and instructors in the MIT class.}\label{cf}
\end{table*}
\vspace{-3mm}
We proceed in the following manner. In Section~\ref{sect:course} we start by describing the specific MOOC data we are working with. In Section~\ref{sect:ideation} we present 3 approaches to initially proposing features. In Section~\ref{sect:cf} we elaborate upon one of these: crowd sourcing. Next in Section~\ref{sect:features} we move to describing the operationalization of the original feature ideas and proposals and list the features we eventually engineered. We discuss, in Section~\ref{sect:challenges} the types of features and the challenges of operationalization. In Section~\ref{sect:ml}, we describe how we evaluated the importance of these features with predictive modeling based upon machine learning. In Section~\ref{sect:related} we present other efforts of feature engineering in this domain and compare with ours. We conclude in Section~\ref{sect:conclusions} with a summary of our findings and our next steps.
\section{Data collected}\label{sect:course}
We engineered features related to stopout from data collected from a MOOC offered on the MITx\xspace platform\footnote{MITx\xspace became what is now known, circa 2013, as edX\xspace}. The course is 6.002x: Circuits and Electronics taught in Fall of 2012. 6.002x had 154,763 registrants. Of those, 69,221 students looked at the first problem set, and 26,349 earned at least one point. 9,318 students passed the midterm and 5,800 students got a passing score on the final exam. Finally, after completing 15 weeks of study, 7,157 registrants earned the first certificate awarded by MITx, showing they had successfully completed 6.002x.
edX provided the following raw data:
\vspace{-2mm}
\begin{itemize}
\item A dump of click-stream data from student-browser and edX-server tracking logs in JSON format. For instance, every page visited by every student was stored as a server-side JSON (JavaScript Object Notation) event.
\vspace{-2mm}
\item Forum posts, edits, comments and replies stored in a MongoDB collection. Note that passive forum activity, such as how many views a thread received was not stored here and had to be inferred from the click-stream data.
\vspace{-2mm}
\item Wiki revisions stored in a MongoDB collection. Again, passive views of the Wiki must be inferred from the click-stream data.
\vspace{-2mm}
\item A dump of the MySQL production database containing student state information. For example, the database contained a student's final answer to a problem, along with its correctness. Note that the history of a student's submissions must be inferred from the click-stream data.
\vspace{-2mm}
\item An XML file describing the course calendar which included information like the release of content and the assignment deadlines.
\end{itemize}
\vspace{-3mm}
This data included 17.8 million submission events, 132.3 million curated navigational events\footnote{We received more navigational events, but only 132.3 million were well formed enough to be reliably considered for this paper.} and 90,000 forum posts.
To analyze this data at scale, as well as write reusable feature engineering scripts, we first organized the data into a schema designed to capture pertinent information. The database schema, MOOCdb, is designed to capture MOOC data across platforms, thereby promoting collaboration among MOOC researchers. MOOCdb utilizes a series of scripts to pipe the raw data into a standardized schema. It identifies 4 basic student-platform interaction modalities: observing, submitting, collaborating and giving feedback. In observing mode, students (somewhat passively) browse the online platform, watch videos, read material such as e-books or examine forum posts. In submitting mode, students submit information to the platform, such as quiz responses, homework solutions, or other assessments. In collaborating mode, students post to other students or instructors on forums, add material to a wiki or chat on Google Hangouts or other social venues. In feedback mode, students respond to surveys. MOOCdb encompasses and organizes the detailed data collected during these modalities. It aims to be platform agnostic by providing a common terminology between platforms. More about MOOCdb can be found in the MOOCdb tech report \cite{tr}; the details of the schema itself are outside the scope of this paper.
\section{Our approaches for feature ideation}\label{sect:ideation}
With the database, we then proceeded to form ideas for the \textit{features}\xspace that we can repeatedly calculate on a \textit{per-student} basis. We proceeded in three different ways:
\vspace{-2mm}
\begin{itemize}
\vspace{-2mm}
\item \textbf{Approach 1}: We brainstormed feature ideas ourselves. Next, we operationalized our own ideas by writing feature extraction scripts. We call these features self-proposed, self-extracted\xspace.
\vspace{-2mm}
\item \textbf{Approach 2}: We asked others for ideas of what might be predictive of stopout\xspace. The people we asked included students, teachers and other researchers. We refer to this group collectively as `the crowd.' We identified ideas that we had not implemented yet, and constructed feature extraction scripts ourselves. We call these crowd-proposed, self-extracted\xspace. In the next section we provide more details for this approach.
\vspace{-2mm}
\item \textbf{Approach 3}: Finally, we asked `the crowd' to brainstorm predictive features, and to send us feature extraction scripts that we could run on MOOCdb. We provided people with a mock dataset with an identical data schema. Thus, instead of providing actual student data, we empowered the crowd to join in our data science efforts. We call the resulting features crowd-proposed, crowd-extracted\xspace.
\vspace{-2mm}
\end{itemize}
\vspace{-3mm}
Below we present the crowd sourcing experiment we performed in approach 2.
\section{Approach 2: Crowd sourcing}\label{sect:cf}
To generate ideas for \textit{features}\xspace, we sought opinions from a class at MIT. We presented the data model (what was being collected), explained what we meant by a \textit{feature}\xspace and asked members of the class (professors and students) to posit features for each student that could predict a student's stopout\xspace. We collected the input \textit{via} a Google form, presented in Figure~\ref{fig:googleform}. In this form we asked the users to describe the \textit{feature}\xspace and why they think the \textit{feature}\xspace will be useful in predicting stopout\xspace. We did not present our features to the class.
\setlength{\arrayrulewidth}{0.5pt}
\setlength{\tabcolsep}{10pt}
\renewcommand{\arraystretch}{1.5}
\begin{table*}[htp]
\centering
\begin{threeparttable}
\caption{List of self-proposed, self-extracted\xspace covariates\xspace}\label{table:self_proposed_self_extracted}
\medskip
\begin{tabular*}{\textwidth}{ >{\centering\arraybackslash}p{0.05\threecoltabwid} >{\raggedright\arraybackslash}p{0.25\threecoltabwid} >{\raggedright\arraybackslash}p{0.7\threecoltabwid}}
\hlinex
& \textbf{Name} & \textbf{Definition} \\ \hlinex
\x{1} & stopout & Whether the student has stopped out or not \\ \hline
*\x{2} & total duration& Total time spent on all resources \\ \hline
\x{3} & number forum posts & Number of forum posts\\ \hline
\x{4} & number wiki edits& Number of wiki edits\\ \hline
*\x{5} & average length forum post& Average length of forum posts\\ \hline
*\x{6} & number distinct problems submitted & Number of distinct problems attempted \\ \hline
*\x{7} & number submissions & Number of submissions \tnote{1}\\ \hline
\x{8} & number distinct problems correct & Number of distinct correct problems \\ \hline
\x{9} & average number submissions & Average number of submissions per problem (\x{7} / \x{6})\\ \hline
\x{10} & observed event duration per correct problem & Ratio of total time spent to number of distinct correct problems (\x{2} / \x{8}). This is the inverse of the percent of problems correct \\ \hline
\x{11} & submissions per correct problem & Ratio of number of problems attempted to number of distinct correct problems (\x{6} / \x{8}) \\ \hline
\x{12} & average time to solve problem & Average time between first and last problem submissions for each problem (average(max(submission.timestamp) - min(submission.timestamp) for each problem in a week) )\\ \hline
*\x{13} & observed event variance& Variance of a student's observed event timestamps \\ \hline
\x{14} & number collaborations& Total number of collaborations (\x{3} + \x{4}) \\ \hline
\x{15} & max observed event duration & Duration of longest observed event \\ \hline
*\x{16} & total lecture duration& Total time spent on lecture resources \\ \hline
*\x{17} & total book duration & Total time spent on book resources \\ \hline
*\x{18} & total wiki duration & Total time spent on wiki resources \\ \hlinex
\end{tabular*}
\medskip
\begin{tablenotes}
\footnotesize
\item[1] In our terminology, a submission corresponds to a problem attempt. In 6.002x, students could submit multiple times to a single problem. We therefore differentiate between problems and submissions.
\end{tablenotes}
\end{threeparttable}
\end{table*}
\noindent \textbf{Outcomes}: Of the 30 features the class proposed, 7 were in common with ours. Of the remaining 23 features, we extracted 10. These features are listed in Table~\ref{table:crowd_proposed_self_extracted}, numbered starting from 200.
The features proposed by the students and instructors in this class were \textit{intuitive}, based on \textit{experience} and on self-identification as once or currently being a student. Participants also gave detailed reasons as to why each feature would be useful. We present three examples in Table~\ref{cf}.
\section{Operationalizing features ideas/proposals}\label{sect:features}
After curating the data and carefully gathering the proposals for features, we started operationalizing the ideas hypothesized to be predictive of stopout\xspace. We split the course into 15 time slices/weeks. Thus, for each defined feature, we assigned each student a feature value each week. For example, each student has a value for the feature \textit{number of forum posts} for each of the 15 weeks. For each week, we also assign a value for stopout\xspace. The value is 0 if the student has already stopped out by not submitting any more assignments, or 1 if the student will submit assignments in the future.
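This labeling rule can be sketched as follows. This is an illustrative reconstruction, not our actual extraction script; the function name, its arguments and the 15-week default are our own assumptions:

```python
def weekly_stopout_labels(submission_weeks, n_weeks=15):
    """Return the per-week stopout label for one student.

    submission_weeks: week numbers (1-indexed) in which the student
    submitted at least one assignment. The label for week w is 1 if the
    student still submits in week w or later, and 0 once the student has
    stopped submitting for good.
    """
    last_active = max(submission_weeks) if submission_weeks else 0
    return [1 if week <= last_active else 0
            for week in range(1, n_weeks + 1)]
```

A student whose last submission falls in week 5 is thus labeled 1 for weeks 1--5 and 0 from week 6 onwards.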
\subsection{Self-proposed, self-extracted features}
Table \ref{table:self_proposed_self_extracted} summarizes the features we developed entirely ourselves. Each feature is calculated on a per-student, per-week basis. Features marked with * require additional explanation, because their operationalization was ambiguous and involved several decisions on our part.
\begin{itemize}
\item \x{2}, \x{16}, \x{17}, \x{18}: These features are based on observed event duration. The edX server logs did not explicitly provide durations, so we inferred them from the timestamps of the starts of observed events. We assume that a student observed an event until he observed a different event (a new timestamp). This is similar to the approach used for industry web-profile metrics. For example, if Student A had three observed events with timestamps T1, T2 and T3, the duration of the first event would be T2 - T1 and the duration of the second T3 - T2. Sometimes the spacing between observed events is very large, presumably because the user stopped interacting with the website. We handle this by capping durations at a MAX\_DURATION, which we set to 60 minutes because our data included durations of up to $\sim$60 minutes. Hence if $T3 - T2 > $ MAX\_DURATION, the second event's duration is set to MAX\_DURATION. The duration of the last event (here the third) is also set to MAX\_DURATION, since there is no T4.
\item \x5: A forum post's length is the number of characters in the forum post (i.e. the length of the string). We used MySQL's length function.
\item \x6, \x7: With problem submissions, week number is ambiguous. Students may submit a problem at any time (assuming the problem is released), regardless of when the problem is due. In other words, even if a problem corresponds to week number 3, a student could submit that problem in week 5. For these features, we counted a submission in week w1 if the submission's timestamp is in w1, regardless of whether or not the problem is part of w1's assigned content. We chose to do this because the feature is meant to capture a student's weekly activity.
\item \x{13}: With this feature, we tried to measure the consistency of a student's observed event patterns relative to the time of day (i.e., a student who always works on the course at 7:00 a.m. would have a small variance for that week). To capture event variance, for each observed event we counted the number of seconds after midnight of its timestamp. For each student and week, we created a distribution of these values and calculated its variance (each student--week pair has its own distribution). This variance becomes the feature. Note: students participate from around the world, but the timestamps are in UTC. However, because variance is invariant to a constant shift, the actual time zone is irrelevant.
\end{itemize}
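The duration inference behind \x{2} and the time-on-resource features can be sketched as below. This is our reading of the procedure rather than the script itself; the function name and the minute-based units are assumptions:

```python
MAX_DURATION = 60  # cap, in minutes, as described in the text

def observed_event_durations(start_times):
    """Infer durations from sorted observed-event start times (in minutes).

    Each event is assumed to last until the next event begins; gaps longer
    than MAX_DURATION, and the final event (which has no successor), are
    capped at MAX_DURATION.
    """
    durations = []
    for i, start in enumerate(start_times):
        if i + 1 < len(start_times):
            gap = start_times[i + 1] - start
            durations.append(min(gap, MAX_DURATION))
        else:
            durations.append(MAX_DURATION)  # no next event: cap
    return durations
```

For timestamps T1 = 0, T2 = 10 and T3 = 100 this yields durations 10, 60 and 60, matching the example above.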
\setlength{\arrayrulewidth}{0.5pt}
\setlength{\tabcolsep}{10pt}
\renewcommand{\arraystretch}{1.5}
\begin{table*}[htp]
\centering
\caption{List of crowd-proposed, self-extracted\xspace covariates\xspace}\label{table:crowd_proposed_self_extracted}
\medskip
\begin{tabular*}{\textwidth}{ >{\centering\arraybackslash}p{0.05\threecoltabwid} >{\raggedright\arraybackslash}p{0.35\threecoltabwid} >{\raggedright\arraybackslash}p{0.6\threecoltabwid}}
\hlinex
& \textbf{Name} & \textbf{Definition} \\ \hlinex
$x_{201}$ & number forum responses & Number of forum responses \\ \hline
*$x_{202}$ & average number of submissions percentile & A student's average number of submissions (feature 9) as compared with other students that week as a percentile \\ \hline
*$x_{203}$ & average number of submissions percent & A student's average number of submissions (feature 9) as a percent of the maximum average number of submissions that week \\ \hline
*$x_{204}$ & pset grade & Number of the week's homework problems answered correctly / number of that week's homework problems \\ \hline
$x_{205}$ & pset grade over time & Difference in grade between current pset grade and average of student's past pset grade \\ \hline
*$x_{206}$ & lab grade & Number of the week's lab problems answered correctly / number of that week's lab problems \\ \hline
$x_{207}$ & lab grade over time & Difference in grade between current lab grade and average of student's past lab grade \\ \hline
$x_{208}$ & number submissions correct & Number of correct submissions \\ \hline
$x_{209}$ & correct submissions percent & Percentage of the total submissions that were correct ($x_{208}$ / $x_{7}$) \\ \hline
*$x_{210}$ & average predeadline submission time & Average time between a problem submission and problem due date over each submission that week \\ \hlinex
\end{tabular*}
\end{table*}
\subsection{Crowd-proposed, self-extracted features} \label{section:crowdself}
Table \ref{table:crowd_proposed_self_extracted} summarizes the features the crowd hypothesized, but we extracted. Each feature is calculated on a per student, per week basis. A * indicates that a disambiguating explanation follows underneath.
\begin{itemize}
\item \x{202}, \x{203}: For each week, we create a distribution of all of the values for every student of feature \x9. Then, we compare a student's \x9 value to the distribution for that week. \x{202} is the percentile over that distribution, and \x{203} is the percent as compared to the max of the distribution.
\item \x{204}, \x{206}: As mentioned earlier, there is an ambiguity with regard to submissions: whether a submission corresponds to the week in which it was submitted or to the week of the problem's module. These features are meant to capture the grade on the module. Therefore, they are computed based on the week's homework assignment and lab assignment, rather than on the submission timestamp. The number of problems the student answered correctly out of the total number of homework or lab problems corresponding to that week constitutes features \x{204} and \x{206}.
\item \x{210}: For each submission during the week, the time difference between the submission timestamp and the due date of the problem is calculated. \x{210} is the average of all of these differences.
\end{itemize}
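As one concrete instance, \x{210} could be computed roughly as follows. The data layout here (a list of (problem, submission time) pairs plus a deadline lookup) is a hypothetical stand-in, not MOOCdb's actual schema:

```python
from datetime import datetime

def average_predeadline_time(submissions, deadlines):
    """Average time between each submission and its problem's due date.

    submissions: list of (problem_id, submitted_at) for one student-week.
    deadlines:   mapping problem_id -> due date.
    Returns the mean gap in hours, or None if there were no submissions.
    """
    gaps = [(deadlines[pid] - submitted_at).total_seconds() / 3600.0
            for pid, submitted_at in submissions]
    return sum(gaps) / len(gaps) if gaps else None
```

A student who submits 24 hours and 12 hours before the deadline would receive an \x{210} value of 18 hours for that week.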
\subsection{Crowd-proposed, crowd extracted features}
In an attempt to crowdsource feature extraction, we asked SQL-fluent MIT students and researchers to both hypothesize new features and submit scripts which would extract them. We are still in the process of collecting feature scripts from this effort at the time of writing.
\section{Types of features and challenges}\label{sect:challenges}
Our 28 features are sophisticated because of the variety of sources used in proposing them, such as the crowd-sourced brainstorming we leveraged to capture creative behavioral features. Many involve complexities beyond a simple count per week. Such complexities include:
\begin{description}
\item \textbf{Use of higher level statistics}: We use, for example, the variance of the times of day that a student accesses course material each week (\x{13}) and the percentile of a student's average number of submissions (\x{202}). \x{202} also gives the student's relative standing amongst his peers.
\item \textbf{Curation requiring human cross-referencing}: Some features required manual curation in order to arrive at a descriptive metric. For example, \x{204}, a student's \textit{pset} grade, necessitated manual curation of problem and assignment deadlines from the course content.
\item \textbf{Referencing multiple data sources and MOOCdb modes}: Some features required linking information from multiple sources (such as \x{204}, the pset grade). This included getting deadlines for the problems from the XML file, all submissions from the server logs, and the problem's correctness from the production MySQL dump.
\item \textbf{Computationally expensive processing}: Because the features are defined on a \textit{per-student} \textit{per-week} basis, we must extract events on a \textit{per-student} basis for every week and then extract information for each student for that week. With millions of events in the database and hundreds of thousands of students this processing is computationally expensive.
\item \textbf{Integration of human intuition and context}: Some features express subtle human intuition about motivational context. For example, \x{10} represents the amount of time a student spends on the course (\x{2}) per \textit{correct} problem (\x{8}). This feature captures the less tangible gratification a student experiences based on time spent.
\end{description}
These observations have prompted us to discern 3 feature categories; we explain them by way of examples:
\begin{description}
\item \textbf{Simple\xspace}: These features require a simple count or creation of an aggregate for every student on a per week basis. The count or aggregate is usually over an already existing field or a count of a certain type of events. Examples include: \textit{total time spent on the course}, \textit{number of problem attempts made during this week}, and \textit{amount of time spent on the videos (or a certain video)}.
\item \textbf{Complex\xspace}: These features require relational linking of data from two or more modes of student interaction. This implies more complex processing (and preprocessing to link the data). They may require curation and some additional manual processing. Examples of these features include: \textit{number of times the student goes to forums while attempting problems}, \textit{on average how close to the deadline the student starts attempting problems}, and \textit{observed event duration per correct problem}.
\item \textbf{Derived\xspace}: These features combine one or more simple or complex features to form a new feature. Usually a mathematical function, like a ratio, trend, or percentile, is used in the composition. An instructor or student familiar with the course brings domain expertise to propose such a feature; essentially, human intuition plays a key role. Examples of these features include: \textit{ratio of number of distinct problems correct to the total time spent on the course during that week}, \textit{the difference between the current week's pset grade and the average pset grade over past weeks}, and \textit{a learner's number of submission events as a percentile (compared against his peers)}.
\end{description}
\section{Evaluating our features}\label{sect:ml}
To evaluate our \textit{features}\xspace in terms of how well they collectively explain stopout, we use them in a supervised learning scenario we next describe.
\subsection{Supervised learning problem: stopout\xspace prediction}
Our goal is to use our features to predict stopout\xspace. We consider a student to have stopped out if s/he stops attempting the problems in the course. In this prediction problem, based on a student's behavior up until a time point (using the historical data up until that point, whose span is called the \textit{lag}\xspace), we predict whether or not the student will have stopped out by a certain future week, separated from the current time point by an interval called the \textit{lead}\xspace. Thus the \textit{lead}\xspace represents how many weeks in advance we attempt to predict stopout\xspace. For example, with a \textit{lead}\xspace of 5 and a \textit{lag}\xspace of 3, we take the first 3 weeks of data and predict 5 weeks ahead, that is, for the 8th week. Each training sample then consists of repeated measurements of a student's feature values for weeks 1, 2 and 3 as covariates\xspace, and the binary stopout\xspace value for week 8 becomes the label. Figure \ref{fig:lead_lag} shows a diagram of this scenario.
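A single training sample for a given lead and lag can be assembled as in the sketch below; the per-week dictionary layout is a hypothetical stand-in for our actual feature store:

```python
def make_sample(student_weeks, lead, lag):
    """Build one (covariates, label) pair for a lead/lag prediction problem.

    student_weeks: week number -> {"features": [...], "stopout": 0 or 1}.
    The covariates are the feature values of weeks 1..lag concatenated;
    the label is the stopout value of week lag + lead.
    """
    covariates = []
    for week in range(1, lag + 1):
        covariates.extend(student_weeks[week]["features"])
    label = student_weeks[lag + lead]["stopout"]
    return covariates, label
```

With lead = 5 and lag = 3 this concatenates the features of weeks 1--3 and labels the sample with week 8's stopout value, as in the example above.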
\begin{figure*}[!ht]
\caption{Diagram of the students' weeks data used in a lead 5, lag 3 prediction problem}\label{fig:lead_lag}
\centering
\includegraphics[width=0.8\textwidth]{lead_lag.png}
\end{figure*}
We are careful not to use a student's features from weeks after he has stopped out as input to our models. In other words, if a student stopped out in week 1, 2 or 3, we do not use that student as a data point. Including stopped-out student data makes the classification problem too easy, as the model will learn that a stopped-out student never returns (by our stopout\xspace definition).
Since there were 14 weeks in the course, varying \textit{lead}\xspace and \textit{lag}\xspace gives a total of 91 unique prediction problems. We consider the above example to be one such prediction problem.
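The count of 91 follows from requiring the target week, lag + lead, to fall within the 14 course weeks; this enumeration is our reading of the setup and can be checked directly:

```python
def prediction_problems(n_weeks=14):
    """All (lead, lag) pairs whose target week lag + lead is still
    within the course."""
    return [(lead, lag)
            for lag in range(1, n_weeks)
            for lead in range(1, n_weeks - lag + 1)]
```

Summing over lag = 1..13 gives (14 - 1) + (14 - 2) + ... + (14 - 13) = 91 problems.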
\noindent \textbf{Randomized Logistic Regression}
We use randomized logistic regression to assess the importance of features. Our model uses 27 features to model stopout\xspace. To best fit a training set, a logistic regression model optimizes a weight for each feature. To assess the importance of the features, randomized logistic regression repeatedly fits a regularized model on a perturbed (subsampled) data set, and works as follows:
\begin{description}
\item {Step 1:} Sample without replacement 75\% of the training data each time (the variables are normalized ahead of training).
\item {Step 2:} Train a logistic regression model on the sub-sampled data, with randomized regularization coefficient for each variable. The randomized coefficient $\beta_j$ is sampled from uniform distribution $[\lambda, \frac{\lambda}{\alpha}]$, where $\alpha \in (0,1]$ and $\lambda$ is the regularization coefficient usually used in standard regularized regression approaches. This randomization places different selection pressure for different variables.
\item {Step 3:} For every covariate\xspace $j$, evaluate $b_s^{j}=\mu(w_j, th)$, where $\mu$ is a unit step function, $w_j$ is the coefficient\xspace for covariate\xspace $j$, and $th$ is the threshold beyond which we deem the feature important, set at 0.25. This results in a binary vector that represents the selection of the covariates\xspace. The vector is ($lag \times |features|$) long, where a $1$ at location $j$ implies that feature $j \bmod 27$ was selected in this model.
\item {Step 4:} Repeat Steps 1, 2 and 3 a total of 200 times.
\item {Step 5:} Estimate the importance of covariate\xspace $j$ by calculating its selection probability $\frac{\sum_s b_s^{j}} {200}$.
\end{description}
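The five steps can be sketched with a plain NumPy implementation. This is a simplified stand-in, not the scikit-learn implementation we actually used: the regularized fit is a basic proximal-gradient L1 logistic regression, the per-variable randomized penalties are emulated by rescaling columns (a standard equivalence), and all names and optimizer settings here are our own choices:

```python
import numpy as np

def fit_l1_logistic(X, y, lam, n_iter=200, lr=0.1):
    """Minimal L1-regularized logistic regression via proximal gradient
    (ISTA); a stand-in for the regularized fits in Step 2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))           # predicted probabilities
        grad = X.T @ (p - y) / n                     # logistic-loss gradient
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

def stability_selection(X, y, lam=0.05, alpha=0.5, n_rounds=50,
                        threshold=0.25, seed=0):
    """Steps 1-5: repeatedly subsample 75% of the rows, fit with a
    randomized per-variable penalty in [lam, lam/alpha] (emulated by
    rescaling each column by a factor in [alpha, 1]), record which
    coefficients exceed the threshold, and average the indicators."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    counts = np.zeros(d)
    for _ in range(n_rounds):
        idx = rng.choice(n, size=int(0.75 * n), replace=False)   # Step 1
        scales = rng.uniform(alpha, 1.0, size=d)                 # Step 2
        w = fit_l1_logistic(X[idx] * scales, y[idx], lam)
        counts += np.abs(w * scales) > threshold                 # Step 3
    return counts / n_rounds                                     # Steps 4-5
```

On synthetic data where only one column drives the label, that column's selection probability dominates the noise columns', which is exactly the behavior the procedure exploits.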
\subsection{Experimental setup and results}
We divided our learners into four cohorts: passive collaborator\xspace, wiki contributor\xspace, forum contributor\xspace and fully collaborative\xspace. Learners who did not post in forums (but may have visited them) and did not edit wiki pages were categorized as passive collaborator\xspace. Learners who participated in forums but not the wiki were categorized as forum contributor\xspace, those who edited the wiki but did not participate in forums as wiki contributor\xspace, and learners who participated in both as fully collaborative\xspace. We ran randomized logistic regression for every \textit{lead}\xspace, \textit{lag}\xspace and cohort combination, that is, $91 \times 4$ randomized logistic regression experiments (an experiment is described above). Each experiment fits 200 logistic regression models, adding up to a total of approximately 72,000 logistic regression models. For each experiment, randomized logistic regression resulted in a vector of covariate\xspace selection probabilities, each ranging from 0 to 1.\footnote{We used the scikit-learn Randomized Logistic Regression implementation.}
Randomized logistic regression analysis gave us covariate\xspace selection probability vectors for all 91 experiments and all cohorts: for each experiment it yields a selection probability for every covariate\xspace, i.e., for every learner feature in every week. In order to gain a more quantitative grasp of which features matter for the different prediction problems, we aggregate these probabilities.
\begin{paragraph}
{Week invariant feature importance} To calculate the importance of a feature for each cohort we follow two steps:
\vspace{-2mm}
\begin{description}
\item (1) We first evaluate its importance in an experiment by summing its selection probability across the different weeks and dividing this sum by the \textit{lag}\xspace of that experiment. This heuristic gives the feature's importance in that particular experiment. We illustrate this procedure for feature 1's importance in an experiment with lag $=3$ in Figure~\ref{fig:wif}.
\vspace{-2mm}
\item (2) We then calculate the feature's importance in each of the 91 experiments and average these numbers to obtain the week-invariant feature importance weight.
\end{description}
\vspace{-2mm}
Figures \ref{fig:randomized_logistic_regression_no_collab} to \ref{fig:randomized_logistic_regression_wiki_only} summarize these normalized average feature importance weights for different cohorts.
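The two aggregation steps above can be expressed compactly. The sketch below assumes the 27 per-week features of our feature set; the toy selection-probability vectors and helper names are ours, for illustration only.

```python
import numpy as np

N_FEATURES = 27  # per-week features, as in the paper

def experiment_importance(selection_probs, lag):
    """Step (1): selection_probs has length lag * N_FEATURES; sum each
    feature's selection probability across the lag weeks, divide by lag."""
    per_week = selection_probs.reshape(lag, N_FEATURES)
    return per_week.sum(axis=0) / lag

def week_invariant_importance(experiments):
    """Step (2): average a feature's per-experiment importance over all
    experiments; each entry is a (selection_probs, lag) pair."""
    scores = [experiment_importance(p, lag) for p, lag in experiments]
    return np.mean(scores, axis=0)

# Toy data: two experiments with lags 3 and 2, feature 0 selected every week.
rng = np.random.default_rng(1)
exps = []
for lag in (3, 2):
    p = rng.uniform(0, 0.2, size=lag * N_FEATURES)
    p[0::N_FEATURES] = 1.0          # feature 0: probability 1 in each week
    exps.append((p, lag))
weights = week_invariant_importance(exps)
```

A feature selected with probability 1 in every week of every experiment thus receives week-invariant weight 1, regardless of the lags involved.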
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.55\textwidth]{wif}
\caption{Aggregating feature 1's weights to assemble its relative importance for a single experiment. In this example the lag is 3: three weeks of data are used to predict a stopout\xspace in a future week. Randomized logistic regression gives the (unnormalized) weights for all 27 features in all three weeks. To assemble the week-invariant relative weight for feature 1 we sum its weights and divide by the total weight. We note that this is a heuristic.}\label{fig:wif}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.62\textwidth]{randomized_no_collab.png}
\caption{Feature importances for the passive collaborator\xspace cohort. The top 5 features with the most predictive power across multiple stopout\xspace prediction problems are average pre deadline submission time\xspace, submissions per correct problem\xspace, average number of submissions in percent\xspace, correct submissions percent\xspace, and pset grade over time\xspace.}\label{fig:randomized_logistic_regression_no_collab}
\end{figure*}
The first thing that struck us as we looked at these plots was the difference in feature weights between the self-proposed and the crowd-proposed features. In all four cohorts, the majority of the weight lies in the crowd-proposed features of Section~\ref{section:crowdself} (\x{201} through \x{210})! Clearly, the crowd can be utilized to a great degree. As these features mostly represent complex and derived types, such as the percentiles (\x{202} and \x{203}), the plots suggest that such features have very high predictive power. Additionally, they mostly involve the submissions table in MOOCdb. This includes the lab grade (\x{206}), pset grade (\x{207}) and predeadline submission time (\x{210}).
In the passive collaborator\xspace cohort, the feature most indicative of stopout\xspace is the average predeadline submission time. The forum contributor\xspace cohort looks very similar, but uses a broader spectrum of features. In particular, we see that \x{5}, the average length of forum posts, is also highly predictive (of course, this could not have shown up in the passive collaborator\xspace cohort, as by definition those students do not participate in the forum). Interestingly, we see very low predictive power from the number of forum posts (\x{3}) and the number of forum replies (\x{201}), despite the fact that the length of forum posts is very important. This could imply that longer posts are indicative of more engagement in the course, or of a greater mastery of the material.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.62\textwidth]{randomized_forum_only.png}
\caption{Feature importances for the forum contributor\xspace cohort. The top 5 features with the most predictive power across multiple stopout\xspace prediction problems are lab grade over time\xspace, average pre deadline submission time\xspace, average length of forum post\xspace, lab grade\xspace, and average number of submissions in percent\xspace.}\label{fig:randomized_logistic_regression_forum_only}
\end{figure*}
In both of our smaller cohorts, fully collaborative\xspace and wiki contributor\xspace, the lab grade (\x{206}) and lab grade over time (\x{207}) are the most predictive features. Although both of these cohorts participated in the wiki, the number of wiki edits (\x{4}) actually carries insignificantly small predictive power in both cases. Both cohorts show similar distributions overall. As with the larger cohorts, features related to submissions hold the most predictive power.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.62\textwidth]{randomized_forum_and_wiki.png}
\caption{Feature importances for the fully collaborative\xspace cohort. The top 5 features with the most predictive power across multiple stopout\xspace prediction problems are lab grade over time\xspace, lab grade\xspace, pset grade\xspace, pset grade over time\xspace, and average pre deadline submission time\xspace.}\label{fig:randomized_logistic_regression_forum_and_wiki}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.62\textwidth]{randomized_wiki_only.png}
\caption{Feature importances for the wiki contributor\xspace cohort. The top 5 features with the most predictive power across multiple stopout\xspace prediction problems are lab grade over time\xspace, lab grade\xspace, average pre deadline submission time\xspace, pset grade\xspace, and average number of submissions in percent\xspace.}\label{fig:randomized_logistic_regression_wiki_only}
\end{figure*}
\end{paragraph}
\section{Related work}\label{sect:related}
Efforts have been made by others to construct features that describe learner behavior in MOOCs longitudinally. Herein we present a few examples. \cite{kizilcec2013deconstructing} assemble one feature per learner that takes four categorical values during each assessment period: on track (did the assessment on time), behind (turned in the assessment late), auditing (did not do the assessment) and out (did not participate in the course at all). Computationally, these can be captured by simply checking for submission activity in each assessment period. The authors note that they are easy to collect and are able to give powerful insights into learners' engagement.
\cite{halawadropout} extract four features called \textit{video-skip}, \textit{assn-skip}, \textit{lag} and \textit{assn-performance}. The first two features indicate whether a learner skipped videos or assignments in the previous week, respectively. The third feature, \textit{lag}, checks whether the learner is watching videos from previous weeks; for example, if the learner is watching videos from week 2 while the course is currently in week 3, this feature's value is 1. The fourth feature measures the learner's average quiz score.
\cite{balakrishnan2013predicting} constructed 5 basic features, two of which, stopout\xspace and the number of forum posts (\x{1} and \x{3}), we independently used. The other three features are the time spent on lecture videos, the number of threads viewed and the number of times the progress page was checked.
\cite{yang2013turn}, \cite{ramesh:aaai14} and \cite{ramesh2013modeling} extract a number of features from the forum activity of the learners. In \cite{yang2013turn} these are the length of posts, thread starter (whether the learner started threads or not) and content length (number of characters). Additionally, they construct a network of ``who talked to whom" on the forums for every week and extract features on a per-learner basis from this network. \cite{ramesh:aaai14} extract counts for \textit{postActivity}, \textit{viewActivity} and \textit{voteActivity} from the learner interactions on the forums. Additionally, they extract four binary values called \textit{posts}, \textit{votes}, \textit{upvote} and \textit{downvote}, which are given the value 1 if the learner has engaged in that activity. They additionally tag the posts using an automated tool called \textit{OpinionFinder}, which tags each post as subjective/objective and positive/negative. The resulting features are the number of subjective posts and the number of positive posts the learner made, out of the total number of posts.
One limitation of the last three efforts is that they focus primarily on forum activity. While we extract some of these features as well, we note that many learners do not participate in forums while still actively engaging in the course. For example, in the course we consider in this paper, only 7,860 students participate in the forums out of a total of 105,622. Hence, analysis via these features is limited to only a small proportion of the learners.
We note that, as per our categorization of features in Section~\ref{sect:challenges}, most of these features fall into the \textit{simple} category. Many of them access only one mode of student activity, for example submissions/assessments, and do not require combining additional information such as the correctness of problem submissions. Many of these features we independently extracted in self-proposed, self-extracted\xspace and evaluated their predictive power. For example, the amount of time spent on lecture videos did not appear to have significant predictive power when compared to other, more complex features.
Our extraction effort is the first instance, to our knowledge, wherein an extensive, sophisticated feature set has been constructed on MOOC behavioral data. We continue to add to this set and are accumulating a massive set of feature ideas and scripts that will seamlessly run on the data model and are available for sharing and re-use.
\section{Research findings and conclusions}\label{sect:conclusions}
\noindent \textbf{Finding 1: Features proposed by the crowd mattered for the stopout\xspace prediction problem}: For all four cohorts we found that features proposed by the crowd mattered significantly more than the features we self-proposed, self-extracted\xspace.
\noindent \textbf{Finding 2: Different features mattered for different cohorts}: We also found interesting differences between the features that mattered for different cohorts. For example, for the passive collaborator\xspace cohort, features that explain students' success in assignments mattered, along with the average pre deadline submission time. For cohorts consisting of students who interacted with other students, lab grade over time mattered consistently. For the cohort that participated only in forums, the length of forum posts is a good indicator of whether they are likely to stopout\xspace.
\noindent \textbf{Finding 3: Complex and derived features mattered more}: We also found that the more influential features were quite nuanced and complex. They incorporated data from multiple modes of learner activity (submissions, browsing and collaborations) and required carefully linking data fields. Relational features that compared a learner to others, and statistical summaries, were proposed by the crowd and mattered quite a bit.
\noindent \textbf{What is next?}
\noindent \textbf{Addressing the challenges of feature discovery}: Human intuition and insight defy complete automation and are an integral part of the process. We conclude that the best way to address this challenge is to involve as many people (experts, instructors, students, researchers) as possible. People can play multiple roles. They can propose ideas or concepts from which variables can be formed, help us extract variables given mock data, or validate ideas that we might ourselves have. To enable this, our next goal is to scale this process. We are currently designing a web-based platform for collaborative feature definition and discovery.
\noindent \textbf{Addressing the challenges of curation and processing}: To address these challenges, we propose:
\begin{itemize}
\item We propose the sharing and reuse of feature engineering scripts, such as those used in this paper. We have made sharing possible by standardizing the data schema, so that all our scripts can be used for multiple courses. We are currently testing our scripts across courses to assess their reuse value on different platforms.\footnote{To date, edX and Coursera.}
\item We anticipate, given inherent differences among courses, that some features will transfer directly, others will need adjustment, and still new ones may yet be engineered and their scripts shared. This is an area in which we are currently active. We see a promising direction in shared methods that operationalize \textit{per-learner} variables longitudinally across MOOCs.
\end{itemize}
\section{Introduction}
Let $G$ be a split (minimal) Kac-Moody group over $\mathbb{R}$ or $\mathbb{C}$ with maximal torus $T$, and let $\theta$ be a Cartan-Chevalley involution of $G$, twisted by complex conjugation, and satisfying that $\theta(T)=T$. Furthermore, let $K$ be the subgroup fixed by $\theta$, and $\tau:G\to G, g\mapsto g\theta(g)^{-1}$. Let $A:=\tau(T)$.
In this note, we show resp. revisit that $G$ admits a (refined) Iwasawa decomposition $G=UAK$. We also show that if $G$ is of non-spherical type, then it admits neither a polar decomposition $G=\tau(G)K$ nor a Cartan decomposition $G=KAK$. This has implications for the geometric structure of the Kac-Moody symmetric space $G/K \cong \tau(G)$ as defined and studied in \cite{FHHK}.
\medskip
\textbf{Acknowledgements.}
I would like to thank Walter Freyn, Tobias Hartnick and Ralf Köhl for many inspiring discussions on Kac-Moody symmetric spaces, motivating me to write this note.
\section{Basics}
Throughout this note, we assume that the reader is familiar with topics such as Kac-Moody groups, twin buildings, and so on. A brief summary of the required theory, close in notation to what we use here, can be found in \cite{FHHK}*{Section 3} (see also \cite{Gramlich/Horn/Muehlherr}). For a comprehensive reference, we refer to \cite{Abramenko/Brown:2008}.
\begin{nota}
Throughout this paper, let $G$ be a split (minimal) Kac-Moody group of rank $n$ over some field $\mathbb{F}$. We fix the following notation:
\begin{itemize}
\item $(W,S)$: the associated Coxeter system, with $W$ the Weyl group of $G$.
\item $\Phi$ is the associated root system, with $\Pi=\{\alpha_1,\dots,\alpha_n\}$ a system of fundamental roots,
and corresponding sets $\Phi_+$ resp. $\Phi_-$ of positive resp. negative roots.
\item $\{U_\alpha\}_{\alpha\in\Phi}$ is a root group datum for $G$ (cf. \cite{CR09}).
\item $T:=\cap_{\alpha\in\Phi} N_G(U_\alpha)$ is a maximal torus of $G$.
\item $U_\eps:=\gen{U_\alpha \mid \alpha \in \Phi_\eps}$ for $\eps\in\{+,-\}$.
\item $B_\eps:=TU_\eps$ for $\eps\in\{+,-\}$ is a Borel subgroup of $G$, with unipotent radical $U_\eps$.
\item $(B_+,B_-,N)$: the associated twin $BN$-pair.
\item $G_\alpha:=\gen{ U_\alpha, U_{-\alpha} }$ for $\alpha\in\Pi$ is a fundamental rank-1-subgroup; since $G$ is split, $G_\alpha$ is isomorphic to a central quotient of $\SL_2(\mathbb{F})$.
\item $\Delta:=((\Delta_+,\delta_+),(\Delta_-,\delta_-),\delta_*)$ is the twin building associated to $G$ (we identify $\Delta_\eps$, when viewed as a chamber system, with $G/B_\eps$).
\end{itemize}
\end{nota}
\begin{remark}
\begin{enumerate}
\item One has $U_+\cap U_- =\{1\}$, and $B_+\cap B_- = T$.
\item $G$ is generated by the root groups $U_\alpha$ and the torus $T$.
\end{enumerate}
\end{remark}
\begin{example} \label{example KM groups}
\begin{enumerate}
\item Let $n\geq 1$ and $G=\SL_{n+1}(\mathbb{F})$. This is a split Kac-Moody group of type $A_n$. Here $B_\eps$ are the subgroups of upper resp. lower triangular matrices; $T$ the subgroup of diagonal matrices; $U_\eps$ the subgroups of strictly upper resp. lower triangular matrices.
The Weyl group then is isomorphic to $S_n$, and of type $A_n$, as is therefore each half of the twin building.
More generally, any split reductive algebraic group is an example.
\item
However, we are mainly interested in the \Defn{non-spherical} case, that is, when $W$ is infinite.
As an example for this, consider $G=\SL_n(\mathbb{F}[t,t^{-1}])$, for some $n\geq 2$.
This is of type $\tilde{A}_{n-1}$ and rank $n$.
\end{enumerate}
\end{example}
\begin{defn}[See \cite{Caprace:2009}]
Let $g\in G$.
\begin{enumerate}
\item
We call $g$ \Defn{diagonalizable} if it is conjugate to an element of $T$. Equivalently, it stabilizes a pair of opposite chambers in the twin building $\Delta$, and hence stabilizes the twin apartment spanned by them.
\item We call $g$ \Defn{bounded} if it stabilizes spherical residues in each half of the twin building associated to $G$.
\end{enumerate}
\end{defn}
\begin{defn}
Let $\sigma\in\Aut(\mathbb{F})$ with $\sigma^2=\id$. A \Defn{$\sigma$-twisted Cartan-Chevalley-involution} of $G$ is an automorphism of $G$ which is $\Inn(G)$-conjugate to an involution $\theta\in\Aut(G)$ satisfying the following for all $\alpha\in\Phi$:
\begin{enumerate}
\item $U_\alpha^\theta=U_{-\alpha}$,
\item $\theta\circ\sigma$ induces a Cartan-Chevalley involution on $G_\alpha$.
\end{enumerate}
\end{defn}
\begin{remark}
Let $\theta$ be a $\sigma$-twisted Cartan-Chevalley-involution.
\begin{enumerate}
\item
Since conjugation by $G$ resp. $\Inn(G)$ changes nothing for the results of interest for us, we will from now on simply assume that $\theta$ has the properties (i) and (ii).
\item
By \cite{Medts/Gramlich/Horn:2009}*{Lemma 4.2} and the discussion preceding it,
restricting $\theta$ to $G_\alpha$ yields an automorphism induced by a map of the form
\[ \SL_2(\mathbb{F})\to\SL_2(\mathbb{F}),\ x\mapsto
\mtr{0}{1}{-\eps}{0} x^\sigma \mtr{0}{-\eps^{-1}}{1}{0}
= \mtr100{\eps} ((x^\sigma)^T)^{-1} \mtr100{\eps^{-1}}
\]
where $\eps\in\mathbb{F}$ satisfies $\eps^\sigma=\eps$.
In other words, $\theta$ locally splits into a field automorphism, a Cartan-Chevalley automorphism (also known as sign automorphism), and a diagonal automorphism.
\item
In \cite{Caprace:2009} (see also \cite{Caprace/Muehlherr:2005,Caprace/Muehlherr:2006}), the isomorphism problem of Kac-Moody groups is solved. It turns out that every automorphism of $G$ is the product of an inner automorphism, a field automorphism, a sign automorphism, a diagonal automorphism, and a graph automorphism.
The definition of Cartan-Chevalley-involution, plus our preceding assumption, implies that $\theta$ globally splits into the product of a field automorphism, a sign automorphism, and a diagonal automorphism.
\end{enumerate}
\end{remark}
\begin{nota} From now on, $\theta$ will be a $\sigma$-twisted Cartan-Chevalley-involution, and we fix the following notation:
\begin{itemize}
\item $K:=\{ g\in G \mid \theta(g)=g \}$, the \Defn{unitary form} of $G$.
\item $Q:=\{ g\in G \mid \theta(g)=g^{-1} \}$ is the set of \Defn{$\theta$-symmetric} elements in $G$.
\item $\tau:G\to Q: g\mapsto g\theta(g)^{-1}$ is the \Defn{twist map} associated to $\theta$.
\item $A:= \tau(T)$ is a \Defn{maximal flat}.
\item $M:= K\cap T$.
\end{itemize}
\end{nota}
\begin{remark}
\begin{enumerate}
\item $\theta$ induces the inversion map $g\mapsto g^{-1}$ on $Q$ and on $\tau(G) \subseteq Q$.
\item We have $\theta(B_+)=B_-$ and $\theta(B_-)=B_+$, hence $\theta(T)=T$.
\item $\theta$ induces an involutory bijection between $\Delta_+=G/B_+$ and $\Delta_-=G/B_-$ via $gB_+\mapsto\theta(gB_+)=\theta(g)B_-$. We will refer to this map also as $\theta$. Note that $\theta$ preserves the Weyl (co-)distances $\delta_+$, $\delta_-$ and $\delta_*$. See also \cite{Gramlich/Horn/Muehlherr}*{Proposition 3.1}.
\end{enumerate}
\end{remark}
\begin{lemma}
The restriction of $\theta$ to $T$ is the map $t\mapsto \sigma(t)^{-1}$. Hence $A=\{ tt^\sigma \mid t\in T\}$.
\end{lemma}
\begin{proof}
This follows from the fact that $\theta$ decomposes into the product of a diagonal automorphism, a field automorphism and a sign automorphism. The diagonal automorphism acts trivially on $T$, the field automorphism acts as $\sigma$, and the sign automorphism acts by inversion.
\end{proof}
\begin{example} \label{example CC invs}
We continue Example~\ref{example KM groups}.
\begin{enumerate}
\item Let $G=\SL_{n+1}(\mathbb{R})$, and consider the Cartan-Chevalley involution $\theta(g):=(g^T)^{-1}$ on $G$.
Then $K$ is the special orthogonal group $\SO_{n+1}(\mathbb{R})$, and $Q$ the set of symmetric matrices with determinant 1, and $\tau(g)=gg^T$. Thus, $\tau(G)$ consists of the symmetric positive definite matrices with determinant 1. Finally, $A$ consists of the positive diagonal matrices with determinant 1.
For $\mathbb{F}=\mathbb{C}$ and $\sigma$ complex conjugation, a $\sigma$-twisted Cartan-Chevalley involution on $G$ is given by $\theta(g):=((g^\sigma)^T)^{-1}$.
We then have $K=\SU_{n+1}(\mathbb{C})$, $Q$ is the set of Hermitian matrices with determinant 1, and $\tau(G)\subset Q$ is the subset of positive definite matrices.
$A$ is the set of positive diagonal matrices, and $M$ the set of diagonal matrices with all entries $\pm1$.
\item
Let $G=\SL_n(\mathbb{F}[t,t^{-1}])$ and $\sigma\in\Aut(\mathbb{F})$ with $\sigma^2=\id$.
Then a $\sigma$-twisted Cartan-Chevalley involution on $G$ is given by $\theta(x):=((x^{\sigma\rho})^{-1})^T$, where $\rho$ is the unique $\mathbb{F}$-linear ring automorphism of $\mathbb{F}[t,t^{-1}]$ interchanging $t$ and $t^{-1}$.
Here, $A$ is again the set of positive diagonal matrices, and $M$ the set of diagonal matrices with all entries $\pm1$.
\end{enumerate}
\end{example}
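Part (i) of the example can be checked numerically. The following sketch (assuming NumPy) verifies, for a random $g\in\SL_2(\mathbb{C})$ and $\theta(g)=((g^\sigma)^T)^{-1}$, that $\tau(g)=g\theta(g)^{-1}=gg^*$ is Hermitian positive definite with determinant 1.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random element of SL_2(C): normalize a random invertible matrix.
g = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
g = g / np.sqrt(np.linalg.det(g))            # now det(g) = 1

def theta(x):
    """The sigma-twisted Cartan-Chevalley involution: conjugate-transpose-inverse."""
    return np.linalg.inv(np.conj(x).T)

tau_g = g @ np.linalg.inv(theta(g))          # tau(g) = g theta(g)^{-1} = g g^*

eigvals = np.linalg.eigvalsh(tau_g)          # real spectrum: tau(g) is Hermitian
```

In particular, $\tau(g)$ lands in the set $Q$ of Hermitian determinant-1 matrices, and $\theta$ acts on it by inversion, as stated above.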
In this note, we study the (non-)existence of various decompositions of $G$:
\begin{defn} \label{def decomps}
$G$ admits, with respect to $\theta$, \ldots
\begin{itemize}
\item \ldots an \Defn{Iwasawa decomposition} if $G=BK$ holds.
\item \ldots a \Defn{refined Iwasawa decomposition} if $K\times A\times U\to G, (k,a,u)\mapsto kau$ is a bijection.
\item \ldots a \Defn{polar decomposition} if $G=\tau(G) K$ holds.
\item \ldots a \Defn{Cartan decomposition} if $G=KAK$ holds.
\item \ldots a \Defn{Kostant decomposition} if $G=KUK$ holds.
\end{itemize}
\end{defn}
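For orientation, in the spherical example $G=\SL_3(\mathbb{R})$ with $K=\SO_3(\mathbb{R})$, the refined Iwasawa decomposition $G=KAU_+$ is precisely the QR decomposition with positive diagonal. A minimal numerical sketch assuming NumPy (the sign fix-up only serves to make the diagonal positive and to land the first factor in $K$):

```python
import numpy as np

rng = np.random.default_rng(3)

# Random element of SL_3(R).
g = rng.normal(size=(3, 3))
g = g / np.cbrt(np.linalg.det(g))            # det(g) = 1

# g = q r with q orthogonal and r upper triangular (QR decomposition).
q, r = np.linalg.qr(g)
s = np.sign(np.diag(r))
k = q * s                                     # K-part: q @ diag(s), lies in SO(3)
r_pos = s[:, None] * r                        # diag(s) @ r: positive diagonal
a = np.diag(np.diag(r_pos))                   # A-part: positive diagonal matrix
u = r_pos / np.diag(r_pos)[:, None]           # U-part: unit upper triangular
```

The three factors $k$, $a$, $u$ are unique here; the refined decomposition below asserts exactly this kind of unique factorization for arbitrary, in particular non-spherical, $G$.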
\section{Iwasawa decomposition}
\begin{convention}
From now on for the rest of this paper, we will assume that either $\mathbb{F}=\mathbb{R}$ and $\sigma=\id$, or else $\mathbb{F}=\mathbb{C}$ and $\sigma$ is complex conjugation.
\end{convention}
The existence of a refined Iwasawa decomposition for complex Kac-Moody groups has been known for quite some time, see e.g. \cite{Kac/Peterson:1983}*{Corollary 4}. However, no proofs are given there. The real case is at the very least known as folklore, though I am not aware of a fully developed proof in the literature.
The existence of non-refined Iwasawa decompositions $G=BK$ of $G$ over arbitrary fields was studied extensively in \cite{Medts/Gramlich/Horn:2009}. This actually allows generalizing various results in later sections of this note beyond the real and complex case. Despite this, we mostly focus on the real and complex case, as this allows for a particularly simple exposition, and is the case we are currently most interested in for applications, see \cite{FHHK}.
Thus focusing again on the real and complex case, we can rephrase the existence of an Iwasawa decomposition $G=BK$ as saying that the map $B\times K\to G,\ (b,k)\mapsto bk$ is surjective. In general, this map is not injective. To rectify this, one may replace $B$ with a suitable subgroup, and study when $G$ admits a refined Iwasawa decomposition as defined above. The existence of such a refined Iwasawa decomposition in the real case is also shown in \cite{FHHK}*{Theorem 3.23}; virtually the same argument applies in the complex case. For the convenience of the reader, we give a full proof. Note that loc.cit.\ also describes and proves a \emph{topological} Iwasawa decomposition, but only in the real case; the complex case is currently open.
\begin{lemma}
$M\cap A=\{1\}$ and $T=MA$ hold.
This induces an isomorphism of topological groups $T\cong M\times A$.
\end{lemma}
\begin{proof}
We have $T\cong(\mathbb{F}^*)^m$ for some natural number $m$. As stated before, $\theta$ induces on $T$ the map $t\mapsto\sigma(t)^{-1}$, i.e., inversion composed with $\sigma$.
If $x\in M\cap A$, then on the one hand, $x\in A = \tau(T)$, so $x = t\overline{t} \in \mathbb{R}_{>0}^m$.
On the other hand, $x\in K$, hence $x=\theta(x)=\overline{x}^{-1}$, so $x\overline{x}=1$. Together this implies $x=1$.
The claim now follows from the polar decomposition $\mathbb{C}^* \cong \mathbb{R}_{>0} \times \{ \rho\in\mathbb{C} \mid |\rho|=1\} \cong \mathbb{R}_{>0} \times S^1$ in the complex case, and from $\mathbb{R}^* \cong \mathbb{R}_{>0} \times \{\pm 1\} \cong \mathbb{R}_{>0} \times S^0$ in the real case.
\end{proof}
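Concretely, the factorization $T\cong M\times A$ happens coordinatewise: each torus coordinate $z\in\mathbb{C}^*$ splits uniquely as $z=ma$ with $a=|z|\in\mathbb{R}_{>0}$ and $|m|=1$. A small numerical sketch assuming NumPy:

```python
import numpy as np

# A maximal-torus element of SL_2(C): t = diag(z, 1/z).
z = 1.5 * np.exp(0.7j)
t = np.diag([z, 1 / z])

d = np.diag(t)                    # torus coordinates (z, 1/z)
a = np.diag(np.abs(d))            # A-part: positive diagonal, a = diag(|z|, 1/|z|)
m = np.diag(d / np.abs(d))        # M-part: unit-modulus diagonal

def theta(x):
    """Conjugate-transpose-inverse, i.e. the involution restricted to matrices."""
    return np.linalg.inv(np.conj(x).T)

# a lies in A = tau(T): a = w * w^sigma for the positive diagonal w = sqrt(a).
w = np.sqrt(a)
```

Here $m$ is fixed by $\theta$ (so $m\in M=K\cap T$), $a$ has positive real entries, and $t=ma$.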
\begin{lemma}
$K\cap B_\eps=M$ holds for $\eps\in\{+,-\}$.
\end{lemma}
\begin{proof}
Clearly $M\subseteq K\cap B_\eps$. Let $k\in K\cap B_{\eps}$. Then $k=\theta(k)\in K\cap B_{-\eps}$,
hence $k\in K\cap B_+\cap B_- = K\cap T = M$.
\end{proof}
\begin{prop}[Refined Iwasawa decomposition]
For $\eps\in\{+,-\}$, the maps
\[ \mu_\eps: K\times A\times U_\eps \to G,\ (k,a,u) \to kau \]
are bijections.
\end{prop}
\begin{proof}
Surjectivity follows from $M\subseteq K$, the preceding lemmas and the (unrefined) Iwasawa decomposition:
\[ KAU_\eps = KMAU_\eps = KTU_\eps = KB_\eps = G. \]
Suppose now $kau = k'a'u'$, then $K\ni (k')^{-1}k = a'u'u^{-1} a^{-1} \in AU_\eps A = B_\eps$, whence $a'u'u^{-1} a^{-1} \in B_\eps\cap K = M$. Therefore $u'u^{-1} \in U_\eps \cap T = \{1\}$, so $u'=u$. This implies $a'a^{-1}\in M\cap A=\{1\}$, and so $a'=a$. We finally conclude from this $k=k'$, thus $\mu_\eps$ is indeed injective.
\end{proof}
\section{Symmetric elements of $G$}
The existence of the Iwasawa decompositions $G=KB_\eps$ for $\eps\in\{+,-\}$ implies that all Borel subgroups, i.e., the $G$-conjugates of $B_+$ and $B_-$, are in fact $K$-conjugate to $B_+$ or $B_-$. In particular, all Borel subgroups $B$ of $G$ are \Defn{$\theta$-split}, i.e., the intersection $B\cap\theta(B)$ is a maximal torus, conjugate to $T$.
This is one of the many ingredients of the following useful lemma.
\begin{remark}
In view of the example $G=\SL_{n+1}(\mathbb{R})$, $\theta(g):=(g^T)^{-1}$, we may think of the following lemma as a generalization of the observation that every real symmetric matrix can be diagonalized by an orthogonal matrix, resp. every Hermitian matrix can be diagonalized by a unitary matrix. Indeed, we use this fact explicitly in the proof.
\end{remark}
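A numerical sketch of that model case (assuming NumPy): for $\theta(g)=(g^T)^{-1}$, the condition $\theta(g)=g^{-1}$ forces $g=g^T$, and \texttt{eigh} produces an orthogonal $k$ with $k^{-1}gk$ diagonal. For simplicity we take a positive definite representative, i.e., an element of $\tau(G)$.

```python
import numpy as np

rng = np.random.default_rng(4)

# A theta-symmetric element of SL_3(R): symmetric positive definite, det 1.
x = rng.normal(size=(3, 3))
g = x @ x.T + np.eye(3)
g = g / np.cbrt(np.linalg.det(g))

eigvals, k = np.linalg.eigh(g)        # columns of k: orthonormal eigenvectors
if np.linalg.det(k) < 0:              # flip one column to land in K = SO(3)
    k[:, 0] *= -1

diagonalized = k.T @ g @ k            # k^{-1} g k lies in the diagonal torus:
                                      # g is diagonalizable by an element of K
```

The "chamber fixed by $g$" in the lemma corresponds here to the eigenbasis, i.e., to a maximal torus containing the conjugate $k^{-1}gk$.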
\begin{lemma} \label{lem:sym-ss}
If $g\in G$ is $\theta$-symmetric, i.e., if $\theta(g)=g^{-1}$,
the following are equivalent:
\begin{enumerate}
\item\label{enum:theta-apt} $g$ fixes a $\theta$-stable twin apartment chamberwise.
\item\label{enum:apt} $g$ fixes a twin apartment chamberwise (i.e., is diagonalizable).
\item\label{enum:cham} $g$ stabilizes a chamber.
\item\label{enum:orb-all} For all chambers $d$, the $\langle g\rangle$-orbit $\{ g^n.d \mid n\in \mathbb{Z}\}$ is bounded in the gallery metric.
\item\label{enum:orb}
For some chamber $d$, the $\langle g\rangle$-orbit $\{ g^n.d \mid n\in \mathbb{Z}\}$ is bounded in the gallery metric.
\item\label{enum:res} $g$ stabilizes a spherical residue in either half of the twin building.
\end{enumerate}
\end{lemma}
\begin{proof}
The implications $\ref{enum:theta-apt}\implies\ref{enum:apt}\implies\ref{enum:cham}$ and $\ref{enum:orb-all}\implies\ref{enum:orb}$ are elementary.
\begin{description}
\item[$\ref{enum:cham}\implies\ref{enum:theta-apt}$]
Let $c$ be a chamber stabilized by $g$. Since $\theta(c)$
is opposite $c$ and $\theta(c)= \theta(g.c) = g^{-1} . \theta(c)$,
we conclude that $g$ stabilizes the $\theta$-stable twin apartment
$\Sigma(c,\theta(c))$.
\item[$\ref{enum:cham}\implies\ref{enum:orb-all}$]
Suppose $c\in\Delta_+$ is a chamber stabilized by $g$. Then for $d\in\Delta_+$ and $n\in\mathbb{Z}$, we have $\delta_+(c,d)=\delta_+(g^n.c,g^n.d)=\delta_+(c,g^n.d)$. Let $\ell:W\to\mathbb{N}$ be the length map of $(W,S)$.
Then by the triangle inequality for the building $W$-metric, we have $\ell(\delta_+(d,g^n.d)) \leq \ell(\delta_+(d,c))+\ell(\delta_+(c,g^n.d)) = 2\ell(\delta_+(d,c))$.
\item[$\ref{enum:orb}\implies\ref{enum:res}$] This follows from the
Bruhat-Tits fixed point theorem applied to the CAT(0)-realization of the building; see e.g. \cite{Abramenko/Brown:2008}*{Corollary 12.67}.
\item[$\ref{enum:res}\implies\ref{enum:apt}$]
Let $R$ be a spherical residue stabilized by $g$.
Then we have $\theta(R)=\theta(g.R)=\theta(g).\theta(R) = g^{-1}.\theta(R)$,
and thus $\theta(R)$ is also stabilized by $g$.
In the terminology of \cite{Caprace:2009} this means that $g$ is bounded.
We now consider the automorphism of the spherical building $R$ induced by $g$.
The automorphism group of $R$ is a reductive algebraic group, and can be considered as a subgroup of $\GL_m(\mathbb{F})$ for some $m$. By \cite[Proposition~16.1.5]{HilgertNeeb12}, we can then model $\theta$ as transpose-inverse, composed with complex conjugation (if $\mathbb{F}=\mathbb{C}$).
Hence $\theta(g)=g^{-1}$ implies $g^T=\ol{g}$, i.e., $g$ is Hermitian and therefore diagonalizable. Thus it fixes a chamber in $R$, hence in $\Delta_+$.
\qedhere
\end{description}
\end{proof}
\section{The nucleus of $\theta$}
In the next section, we will show that Kac-Moody groups of non-spherical type admit no polar decomposition $G=\tau(G)K$. To facilitate this, we first collect some general observations about polar decompositions.
The following two elementary lemmas hold for any group $G$ and involution $\theta\in\Aut(G)$.
\begin{lemma} \label{tauX=tauY iff XK=YK}
Let $X,Y\subseteq G$. Then $\tau(X)=\tau(Y)$
if and only if $XK=YK$.
\end{lemma}
\begin{proof}
For $g,h\in G$, we have
\begin{align*}
\tau(g)=\tau(h)
&\iff g\theta(g)^{-1} = h\theta(h)^{-1} \\
&\iff h^{-1}g=\theta(h)^{-1}\theta(g)=\theta(h^{-1}g) \\
&\iff h^{-1}g \in K \\
&\iff gK = hK. \qedhere
\end{align*}
\end{proof}
\begin{lemma} \label{G=tauK iff tautau=tau}
$G=\tau(G)K$ holds if and only if $\tau(\tau(G))=\tau(G)$.
\end{lemma}
\begin{proof}
Follows from \cref{tauX=tauY iff XK=YK} for $X:=\tau(G)$, $Y:=G$ and using that $G=GK$.
\end{proof}
Recall that $Q:=\{g\in G \mid \theta(g)=g^{-1} \}$ and $\tau(G)\subseteq Q$.
Then for any $g\in Q$, we have $\tau(g)=g\theta(g^{-1})=g^2$. Thus, $\tau$
acts like the square map on $Q$ and also on $\tau(G)$.
Hence $\tau(G)=\tau(\tau(G))$ is equivalent to requiring that every element $g\in\tau(G)$
admits a ``square root'', i.e., that there is an element $h\in\tau(G)$ such that $\tau(h)=h^2=g$.
But this then also implies that every element has a fourth root, an eighth root and so on.
This motivates the following definition and the subsequent reformulation of the lemma.
\begin{defn}
The \Defn{nucleus} of $X\subseteq G$ is defined as
\[ \nucl(X) := \bigcap_{k=0}^\infty \left\{x^{(2^k)} \mid x\in X \right\}
= \{ x\in X \mid \forall k\in\mathbb{N}\, \exists y\in X : x=y^{2^k} \}
. \]
\end{defn}
\begin{lemma} \label{G=tau(G)K iff nucl(tau(G))=tau(G)}
$G=\tau(G)K$ holds if and only if $\nucl(\tau(G))=\tau(G)$.
\end{lemma}
\begin{proof}
By \cref{G=tauK iff tautau=tau}, $G=\tau(G)K$ is equivalent to $\tau^2(G)=\tau(G)$,
which in turn is equivalent to $\tau^{n+1}(G)=\tau(G)$ holding for all $n\in\mathbb{N}$.
Since $\tau^2(g)=\tau(g)\theta(\tau(g))^{-1}=\tau(g)^2$ for all $g\in G$, it follows that $\tau^{n+1}(g)=\tau(g)^{2^n}$.
Hence $\tau^2(G)=\tau(G)$ implies $\tau(G) = \tau^{n+1}(G) = \{ \tau(g)^{2^n} \mid g\in G \}$ for all $n$, and therefore $\tau(G)=\nucl(\tau(G))$ as claimed.
The converse implication follows then from $\tau^2(G)\subseteq \tau(G) = \nucl(\tau(G)) \subseteq \tau^2(G)$.
\end{proof}
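In the spherical example $G=\SL_3(\mathbb{R})$, the condition $\nucl(\tau(G))=\tau(G)$ does hold: $\tau(g)=gg^T$ is symmetric positive definite, and such a matrix has a unique symmetric positive definite square root, which again lies in $\tau(G)$. A sketch assuming NumPy (the helper \texttt{spd\_sqrt} is ours):

```python
import numpy as np

rng = np.random.default_rng(5)

g = rng.normal(size=(3, 3))
g = g / np.cbrt(np.linalg.det(g))     # g in SL_3(R)
q = g @ g.T                            # tau(g): symmetric positive definite

def spd_sqrt(q):
    """The unique symmetric positive definite h with h @ h == q."""
    vals, vecs = np.linalg.eigh(q)
    return (vecs * np.sqrt(vals)) @ vecs.T

h = spd_sqrt(q)                        # tau(h') = h' h'^T = h for h' = spd_sqrt(h),
h_root = spd_sqrt(h)                   # so h lies in tau(G) as well
```

Iterating \texttt{spd\_sqrt} produces $2^k$-th roots for every $k$, so here $\tau(g)\in\nucl(\tau(G))$; the theorem below identifies $\nucl(\tau(G))$ with the diagonalizable part of $\tau(G)$ in general.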
\begin{prop}
For $X\subseteq G$, the elements of $\nucl(X)$ are bounded.
\end{prop}
\begin{proof}
Clearly $X\subseteq G$ implies $\nucl(X)\subseteq \nucl(G)$, thus it suffices to study $\nucl(G)$.
$G$ acts by cellular isometries on Davis' CAT(0) realization $X_+$ of $\Delta_+$ (see \cite{Davis98}, also \cite{Caprace:2009}*{Section 2.1} and \cite{Abramenko/Brown:2008}*{Chapter 12}). For $g\in G$, denote by $|g|$ the minimal displacement of $g$.
By \cite{Bridson:1999}*{Theorem A}, $g$ is semisimple, i.e., its minimal displacement is attained on $X_+$. By \cite{Bridson/Haefliger:1999}*{Theorem II.6.8} this implies $|g^n|= n|g|$ for $n\in\mathbb{N}$.
For all $g\in \nucl(G)$ and all $n\in\mathbb{N}$ there is $g_n\in G$ with $g_n^{2^n}=g$.
Hence $|g| = |g_n^{2^n}|= 2^n \cdot |g_n|$ and thus $\lim_{n\to\infty} |g_n|=0$.
But by the Proposition in \cite{Bridson:1999}, the set $\{|g| \mid g\in G\}\subseteq[0,\infty)$ is discrete.
Therefore we must have $|g|=|g_n|=0$, i.e., $g$ fixes a point in $X_+$. But that implies that $g$ stabilizes a spherical residue in $\Delta_+$. By a symmetric argument, $g$ also fixes a spherical residue in $\Delta_-$. Hence $g$ is bounded.
\end{proof}
\begin{lemma}
$A = T\cap \tau(G)$.
\end{lemma}
\begin{proof}
The inclusion $A = \tau(T) \subseteq T\cap\tau(G)$ is obvious.
Suppose now we have $g\in G$ with $\tau(g)\in T$. By the Iwasawa decomposition,
$g = bk = utk$ for some $b=ut\in B_+$, $u\in U_+$, $t\in T$, $k\in K$.
Hence $\tau(g) = u\tau(t)\theta(u)^{-1} \in U_+ A U_-$. But by the refined Birkhoff decomposition
(see \cite{KacPeterson85c}*{Proposition~3.3(a), p.~181}, also \cite{Kumar02}*{Theorem~5.2.3(g)}), every element of the big cell $U_+TU_-\subseteq G$ can be written uniquely as $u_+ t' u_-$ with $u_\pm \in U_\pm$ and $t'\in T$. Hence we must have $u=1$ and $\tau(g)=\tau(t)\in A$.
\end{proof}
\begin{thm} \label{nucl(tau(G)) = diagonalizable part of tau(G)}
The set
$\nucl(\tau(G))$ equals the set of diagonalizable elements in $\tau(G)$, which in turn is the set of $K$-conjugates of $A$.
\end{thm}
\begin{proof}
By the preceding proposition, any $g\in\nucl(\tau(G))$ is bounded. By \cref{lem:sym-ss} this implies that $g$ is diagonalizable.
Suppose conversely that $g\in\tau(G)$ is diagonalizable. Then it fixes some chamber $c\in\Delta_+$. But then, since $\theta(g)=g^{-1}$, it also stabilizes the chamber $\theta(c)\in\Delta_-$ opposite $c$. Since $K$ acts transitively on the pairs $(c,\theta(c))$, and since $\tau(G)$ is invariant under $K$-conjugation, this implies that $g$ is contained in
\[\bigcup_{k\in K} T^k \cap \tau(G) = \bigcup_{k\in K} (T\cap \tau(G))^k = \bigcup_{k\in K} A^k
= \bigcup_{k\in K} \nucl(A^k)\subseteq\nucl(\tau(G))
.
\qedhere
\]
\end{proof}
\section{Non-existence of polar and Cartan decompositions}
In this section we apply the results from the previous section to show that if $G$ is of non-spherical type (i.e., its Weyl group $W$ is infinite), then $G$ cannot admit a polar or Cartan decomposition as defined in \cref{def decomps}.
Indeed, by \cref{G=tau(G)K iff nucl(tau(G))=tau(G)} we have $G=\tau(G)K$ if and only if $\nucl(\tau(G))=\tau(G)$ holds. But by \cref{nucl(tau(G)) = diagonalizable part of tau(G)}, $\nucl(\tau(G))$ consists of precisely the diagonalizable elements of $\tau(G)$. We will thus establish that $\tau(G)$ contains elements which are not diagonalizable whenever $W$ is infinite.
To illustrate why this is so, we first consider as an example a Kac-Moody group of affine, non-spherical type $\tilde{A}_n$.
\begin{example}\label{Hole}
Let $n\geq 1$ and consider the affine example $G:=\SL_{n+1}(\mathbb{F}[t,t^{-1}])$ of type $\tilde{A}_n$ with the Cartan--Chevalley involution $\theta(x):=((x^{-1})^T)^\sigma$, where $\sigma$ is the $\mathbb{F}$-linear ring automorphism of $\mathbb{F}[t,t^{-1}]$ which interchanges $t$ and $t^{-1}$. Then let
\begin{align*}
u := \left(\begin{smallmatrix} 1 & 1+t \\ 0 & 1 \\ && \ddots \\ &&& 1 \end{smallmatrix}\right ) \in B_+,\qquad
v:=\tau(u) &= u\theta(u)^{-1} =
\left(\begin{smallmatrix} 1 & 1+t \\ 0 & 1 \\ && \ddots \\ &&& 1\end{smallmatrix}\right )\cdot
\left(\begin{smallmatrix} 1 & 0 \\ 1+t^{-1} & 1 \\ && \ddots \\ &&& 1\end{smallmatrix}\right ) \\
&=
\left(\begin{smallmatrix} 1 + (1+t)(1+t^{-1}) & 1+t \\ 1+t^{-1} & 1
\\ && \ddots \\ &&& 1\end{smallmatrix}\right )
\end{align*}
and the characteristic polynomial of $v$ is
\begin{align*}
c_\lambda(v)
&= \left( \left(\lambda - (1 + (1+t)(1+t^{-1}))\right)(\lambda-1) - (1+t)(1+t^{-1}) \right) \cdot (\lambda-1)^{n-1} \\
&= \left(\lambda^2 - ( t + 4 + t^{-1}) \lambda + 1 \right) \cdot (\lambda-1)^{n-1}.
\end{align*}
However, the polynomial $c_\lambda(v)$ does not split into linear factors over $\mathbb{F}[t,t^{-1}]$, whence $v$ is not conjugate within $G$ to an element of the torus $T$, which consists of diagonal matrices with entries from $\mathbb{F}$.
\end{example}
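The characteristic polynomial computed in \cref{Hole} can be checked with a computer algebra system. The following SymPy sketch verifies only the nontrivial $2\times2$ block over $\mathbb{Q}(t)$; the remaining identity block contributes the factor $(\lambda-1)^{n-1}$, and the restriction to characteristic $0$ is an assumption made for the computation.

```python
import sympy as sp

t, lam = sp.symbols('t lambda')

u = sp.Matrix([[1, 1 + t], [0, 1]])
theta_u_inv = sp.Matrix([[1, 0], [1 + 1 / t, 1]])   # theta(u)^{-1}
v = u * theta_u_inv                                  # the 2x2 block of tau(u)

char_poly = (lam * sp.eye(2) - v).det()
expected = lam**2 - (t + 4 + 1 / t) * lam + 1
assert sp.cancel(char_poly - expected) == 0
```

This confirms the factor $\lambda^2 - (t+4+t^{-1})\lambda + 1$ appearing in the example.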
This failure to diagonalize, which can essentially be reduced to considering the Moufang tree case, i.e., type $\tilde{A}_1$, is at the heart of the general case.
While we can ``fix'' this failure to split by passing to a suitable completion of $\mathbb{F}[t,t^{-1}]$ resp.\ of $G$, doing so is somewhat arbitrary: there are in general multiple ways to form a completion, with different algebraic and geometric properties; moreover, we typically lose the twin building structure in the process.
\begin{lemma} \label{non-spherical => tau(G) non-diag}
Suppose $|W|=\infty$. Then $\tau(G)$ contains non-diagonalizable elements.
\end{lemma}
\begin{proof}
If the Weyl group $W$ is infinite, then by \cite{Speyer:2009}, there exists $w\in W$ such that $\ell(w^n)=n\ell(w)$ for all $n\in\mathbb{N}$; such an element $w$ is called a \Defn{straight element}.
Let $c_+\in\Delta_+$ be the chamber whose stabilizer is $B_+$. It is well-known that $B_+$ acts transitively on the set $X$ of chambers in $\Delta_-$ opposite $c_+$. In an infinite building, $X$ is a connected and thick chamber system. We also have $c_-=\theta(c_+)\in X$. Therefore, there is $g\in B_+$ such that $d(c_-, g.c_-)=w$ holds.
Now $\tau(g) c_-=g\theta(g^{-1})c_-=gc_-$ since $\theta(g^{-1})\in\theta(B_+)=B_-$.
Thus $d(c_-, \tau(g).c_-)=w$ holds. Let $\gamma$ be a minimal gallery from $c_-$ to $\tau(g).c_-$. Then, since $w$ is straight, the concatenation of $\gamma$, $\tau(g).\gamma$, $\tau(g)^2.\gamma$, \dots is still a minimal gallery.
It follows that $d(c_-,\tau(g)^nc_-)=w^n$, so $\tau(g)$ has an unbounded orbit on $G/B_-$.
But by \cref{lem:sym-ss} this means it cannot be diagonalizable.
\end{proof}
\begin{prop} \label{tauG-diag-then-spherical}
Suppose $|W|=\infty$. Then
\begin{enumerate}
\item $\tau(G)\neq\nucl(\tau(G))$.
\item $G$ does \emph{not} admit a polar decomposition.
\item $G$ does \emph{not} admit a Cartan decomposition.
\end{enumerate}
\end{prop}
\begin{proof}
\begin{enumerate}
\item Follows from \cref{nucl(tau(G)) = diagonalizable part of tau(G)} combined with \cref{non-spherical => tau(G) non-diag}.
\item Follows from (a) and \cref{G=tauK iff tautau=tau}.
\item
Suppose we had $G=KAK$. Then $\tau(G)=\tau(KAK)
=\bigcup_{k\in K} k^{-1}\tau(A)k
\subseteq \bigcup_{k\in K} A^k$.
The elements of $A$ and also $A^k$ are diagonalizable, so all elements of $\tau(G)$ are diagonalizable.
Contradiction to \cref{non-spherical => tau(G) non-diag}.
\qedhere
\end{enumerate}
\end{proof}
The fact that there is no Cartan decomposition implies that the Kac-Moody symmetric space $G/K$ is not geodesic if $|W|=\infty$ (i.e., it contains pairs of points which are not connected by a geodesic); the following observation implies that it is nevertheless geodesically connected (i.e., any two points can be connected by a sequence of geodesics). See also \cite{FHHK}*{Theorem 1.8}.
\begin{lemma}
Suppose $|W|=\infty$. Then
\[ G=\bigcup_{n=1}^\infty (KAK)^n. \]
\end{lemma}
\begin{proof}
Recall that $G$ is generated by its fundamental rank 1 subgroups $G_\alpha=\gen{U_\alpha,U_{-\alpha}}$. For these, the Cartan decomposition $G_\alpha=K_\alpha A_\alpha K_\alpha$ holds, where $K_\alpha:=G_\alpha\cap K$ and $A_\alpha:=G_\alpha\cap A$. The claim follows, as $G = \gen{G_\alpha \mid \alpha\in\Pi} \subseteq \bigcup_{n=1}^\infty (KAK)^n \subseteq G$.
\end{proof}
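The rank-1 Cartan decompositions $G_\alpha=K_\alpha A_\alpha K_\alpha$ used in the proof have a classical archimedean analogue: over $\mathbb{R}$, the decomposition $\SL_2(\mathbb{R})=SO(2)\,A\,SO(2)$ is computed by the singular value decomposition. The following NumPy sketch is purely illustrative (the paper works over an arbitrary field, where no such analytic decomposition is available); the specific matrix is an arbitrary choice.

```python
import numpy as np

g = np.array([[2.0, 1.0], [1.0, 1.0]])   # det g = 1
U, s, Vt = np.linalg.svd(g)               # g = U diag(s) Vt
a = np.diag(s)

assert np.allclose(U @ a @ Vt, g)         # Cartan-type decomposition g = k1 a k2
assert np.allclose(U.T @ U, np.eye(2))    # k1 is orthogonal
assert np.allclose(Vt @ Vt.T, np.eye(2))  # k2 is orthogonal
assert np.isclose(s[0] * s[1], 1.0)       # a has determinant |det g| = 1
```

Here the orthogonal factors play the role of $K_\alpha$ and the positive diagonal factor the role of $A_\alpha$.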
\begin{question}
Does $G=(KAK)^N$ hold for some $N\in\mathbb{N}$? If so, can we bound $N$?
\end{question}
Clearly, $N\geq 2$, but what about upper bounds? I suspect that no such $N$ exists.
In closing, we mention the following ``Kostant-type'' decomposition. Geometrically, it implies that any two points in the Kac-Moody symmetric space $G/K$ are ``connected'' by a globally bounded number of horospheres.
\begin{lemma}
There is $N\in\mathbb{N}$ such that $G=(KUK)^N$.
\end{lemma}
\begin{proof}
Similar to the previous proof, the fundamental rank 1 subgroups satisfy $G_\alpha = K_\alpha U_\alpha K_\alpha$.
In particular, $A_\alpha\subseteq KUK$. Let $n$ be the rank of $G$, and $\Pi=\{\alpha_1,\dots,\alpha_n\}$ the set of fundamental roots. Then $A=A_{\alpha_1}\cdots A_{\alpha_n} \subseteq (KUK)^n$. But then $G=UAK=AUK\subseteq (KUK)^{n+1}\subseteq G$.
\end{proof}
\begin{remark}
In the proof above, we chose $N:=n+1$, where $n$ is the rank of $G$. But we can do better: Call a \Defn{spherical covering} of the Dynkin diagram of $G$ any partition $\mathcal{P}$ of its vertices $\{1,\dots,n\}$ such that every $P\in\mathcal{P}$ corresponds to a spherical subdiagram, and let $r$ denote the minimal cardinality of such a covering. Clearly $\{ \{1\},\dots,\{n\}\}$ always is a spherical covering, hence $r\leq n$ holds. A straightforward adaptation of the preceding proof implies $G=(KUK)^{r+1}$.
This is still not optimal: If $G$ is of spherical type, then this gives $r=1$ and $N=2$, even though $G=KUK$ holds, i.e., one can take $N=1$.
\end{remark}
\begin{question}
Does $G=KUK$ hold when $G$ is not of spherical type?
\end{question}
\begin{bibdiv}
\begin{biblist}
\bib{Abramenko/Brown:2008}{book}{
author={Abramenko, Peter},
author={Brown, Kenneth~S.},
title={Buildings -- theory and applications},
series={Graduate Texts in Mathematics},
publisher={Springer},
address={Berlin},
date={2008},
volume={248},
}
\bib{Bridson/Haefliger:1999}{book}{
author={Bridson, Martin R.},
author={Haefliger, Andr{\'e}},
title={Metric spaces of non-positive curvature},
publisher={Springer},
address={Berlin},
date={1999},
volume={319},
}
\bib{Bridson:1999}{article}{
author={Bridson, Martin R.},
title={On the semisimplicity of polyhedral isometries},
journal={Proc. Amer. Math. Soc.},
volume={127},
date={1999},
number={7},
pages={2143--2146},
issn={0002-9939},
}
\bib{Caprace:2009}{article}{
author={Caprace, Pierre-Emmanuel},
title={``Abstract'' homomorphisms of split Kac-Moody groups},
journal={Mem. Amer. Math. Soc.},
volume={198},
date={2009},
number={924},
pages={xvi+84},
}
\bib{Caprace/Muehlherr:2005}{article}{
author = {Caprace, Pierre-Emmanuel},
author = {M{\"u}hlherr, Bernhard},
title = {Isomorphisms of Kac-Moody groups},
journal = {Invent. math.},
year = {2005},
volume = {161},
number = {2},
pages = {361--388},
}
\bib{Caprace/Muehlherr:2006}{article}{
author = {Caprace, Pierre-Emmanuel},
author = {M{\"u}hlherr, Bernhard},
title = {Isomorphisms of Kac-Moody groups which preserve bounded subgroups},
journal = {Adv. Math.},
year = {2006},
volume = {206},
number = {1},
pages = {250--278},
}
\bib{CR09}{article}{
author={Caprace, Pierre-Emmanuel},
author={R{\'e}my, Bertrand},
title={Groups with a root group datum},
date={2009},
journal={Innov. Incidence Geom.},
volume={9},
pages={5--77},
}
\bib{Davis98}{incollection}{
author = {Davis, Michael},
title = {Buildings are {$\mathrm{CAT}(0)$}},
booktitle = {Geometry and cohomology in group theory (Durham, 1994)},
pages = {108--123},
publisher = {Cambridge Univ. Press, Cambridge},
year = {1998},
}
\bib{Medts/Gramlich/Horn:2009}{article}{
author={De Medts, Tom},
author={Gramlich, Ralf},
author={Horn, Max},
title={Iwasawa decompositions of split Kac-Moody groups},
journal={J. Lie Theory},
volume={19},
date={2009},
number={2},
pages={311--337},
issn={0949-5932},
}
\bib{FHHK}{article}{
title={Kac--Moody symmetric spaces},
author={Freyn, Walter},
author={Hartnick, Tobias},
author={Horn, Max},
author={K\"ohl, Ralf},
eprint={https://arxiv.org/abs/1702.08426},
date={2017},
note={Manuscript},
}
\bib{Gramlich/Horn/Muehlherr}{article}{
author = {Gramlich, Ralf},
author = {Horn, Max},
author = {Mühlherr, Bernhard},
title = {Abstract involutions of algebraic groups and of Kac-Moody groups},
journal = {J. Group Theory},
year = {2011},
volume = {14},
number = {2},
pages = {213--249}
}
\bib{Helminck/Wang:1993}{article}{
author={Helminck, A. G.},
author={Wang, S. P.},
title={On rationality properties of involutions of reductive groups},
journal={Adv. Math.},
volume={99},
date={1993},
number={1},
pages={26--96},
issn={0001-8708},
}
\bib{HilgertNeeb12}{book}{
author={Hilgert, Joachim},
author={Neeb, Karl-Hermann},
title={Structure and geometry of Lie groups},
publisher={Springer, New York},
date={2012},
pages={x+744},
isbn={978-0-387-84793-1},
isbn={978-0-387-84794-8},
doi={10.1007/978-0-387-84794-8},
}
\bib{Kac/Peterson:1983}{article}{
author = {Kac, Victor G},
author = {Peterson, Dale H},
title = {Infinite flag varieties and conjugacy theorems},
journal = {Proc. Nat. Acad. Sci. U.S.A.},
year = {1983},
volume = {80},
number = {6 i.},
pages = {1778--1782}
}
\bib{KacPeterson85c}{article}{
author={Kac, Victor},
author={Peterson, Dale},
title={Defining relations of certain infinite-dimensional groups},
note={The mathematical heritage of \'Elie Cartan (Lyon, 1984)},
journal={Ast\'erisque},
date={1985},
pages={165--208},
}
\bib{Kumar02}{book}{
author={Kumar, Shrawan},
title={Kac--{M}oody groups, their flag varieties and representation theory},
date={2002},
pages={xvi+606},
publisher={Birkh\"auser Boston Inc.},
address={Boston, MA},
}
\bib{Speyer:2009}{article}{
author={Speyer, David E.},
title={Conjugacy classes and straight elements in Coxeter groups},
date={2009},
journal={Proc. Amer. Math. Soc.},
volume={137},
pages={1295--1302},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
In this paper we study a particular class of von Neumann $\rho$-invariants and show that they provide a concordance obstruction. We show that these invariants are particularly computable for knots of finite algebraic order and use them to exhibit a new linearly independent family of twist knots.
A knot $K$ is an isotopy class of oriented locally flat embeddings of the circle $S^1$ into the 3-sphere $S^3$. A pair of knots $K$ and $J$ are called \textbf{topologically concordant} if there is a locally flat embedding of the annulus $S^1\times[0,1]$ into $S^3\times[0,1]$ mapping $S^1\times\{1\}$ to a representative of $K$ in $S^3\times\{1\}$ and $S^1\times\{0\}$ to a representative of $J$ in $S^3\times\{0\}$. A knot is called \textbf{slice} if it is concordant to the unknot or equivalently if it is the boundary of a locally flat embedding of the 2-ball $B^2$ into the 4-ball $B^4$.
The set of all knots has the structure of a commutative monoid with identity given by the unknot under the binary operation of connected sum. This monoid is not a group. The only knot with an inverse is the unknot. The quotient by the equivalence relation given by concordance, however, is a group. The inverse of any knot $K$ is given by $-K$, the reverse of the mirror image of $K$. This group is called the \textbf{topological concordance group} and is denoted $\mathcal{C}$.
Given a knot $K$ the \textbf{Alexander module} of $K$, denoted $A_0(K)$, is defined as the rational first homology of the universal abelian cover of the complement of the knot in $S^3$ or equivalently of $M(K)$, where $M(K)$ denotes zero surgery on $K$. In the language of twisted coefficients, $A_0(K) = H_1(M(K);\mathbb{Q}[t^{\pm1}])$ where $t$ is the generator of the regular first homology of $M(K)$. There is a nonsingular $\mathbb{Q}[t^{\pm1}]$-bilinear form \begin{equation*}Bl:A_0(K)\times A_0(K)\to \mathbb{Q}(t)/\mathbb{Q}[t^{\pm1}]\end{equation*} called the \textbf{Blanchfield linking form}.
For a submodule $P$ of $A_0(K)$, the \textbf{orthogonal complement} of $P$ with respect to this bilinear form, denoted $P^\perp$, is given by the set of elements of $A_0(K)$ which annihilate $P$. That is, \begin{equation*}P^\perp = \{q\in A_0(K)|Bl(p,q)=0\text{ for all }p\in P\}.\end{equation*} $P$ is called \textbf{isotropic} if $P\subseteq P^\perp$ and is called \textbf{Lagrangian} or \textbf{self annihilating} if $P=P^\perp$. We call $K$ \textbf{anisotropic} if $A_0(K)$ has no isotropic submodules. On the opposite end of the spectrum, a knot is called \textbf{algebraically slice} if it has a Lagrangian submodule.
The quotient of $\mathcal{C}$ by algebraically slice knots is called the \textbf{algebraic concordance group}. It is shown in \cite{L5} that this quotient is isomorphic to $\mathbb{Z}^\infty \oplus \mathbb{Z}_2^\infty \oplus \mathbb{Z}_4^\infty$. In particular, this shows that the concordance group has infinite rank. There is, however, much more to the concordance group. For example, in \cite[Theorem 5.1]{CG1} Casson and Gordon define a family of invariants and use them to show that, of the algebraically slice twist knots, only the unknot and the $-2$ twist knot (the stevedore knot) are slice. As a consequence of their work Jiang \cite{Ji1} shows that there is an infinite set of algebraically slice twist knots that are linearly independent in $\mathcal{C}$. Since then, the so-called Casson-Gordon invariants have served as useful tools in the detection of non-slice knots.
\begin{figure}[b]
\setlength{\unitlength}{1pt}
\begin{picture}(155,130)
\put(0,0){\includegraphics[width=.4\textwidth]{twistknot.pdf}}
\put(10,50){$n$}
\end{picture}
\caption{The $n$-twist knot.}\label{fig:twist}
\end{figure}
Given any closed oriented 3-manifold $M$ and a homomorphism $\phi:\pi_1(M) \to \Gamma$, the von Neumann $\rho$-invariant, $\rho(M, \phi)\in \mathbb{R}$, is defined. It is an invariant of orientation preserving homeomorphism of the pair $(M,\phi)$. Restricting this invariant to the zero surgery of knots and links gives rise to an isotopy invariant.
In \cite{whitneytowers}, Cochran, Orr and Teichner use $\rho$-invariants to show that there is an infinite rank subgroup of $\mathcal{C}$ of algebraically slice knots on which the Casson-Gordon invariants vanish.
In Section~\ref{easy}, we define the von Neumann $\rho$-invariants in which we are interested. Briefly, $\rho^0$ is the invariant associated with abelianization, $\rho^1$ with the quotient by the second term in the derived series and $\rho^1_p$ with $A_0^p(K)$, a localization of the Alexander module.
We proceed to give an additivity result for these invariants (Proposition~\ref{homomorphism}). As a consequence, $\rho^1_p$ is a homomorphism from the monoid of knots to $\mathbb{R}$. From the same result we deduce that there exist slice knots with non-vanishing $\rho^1_p$. The existence of such knots implies that these invariants are not well defined on the concordance group. In Section~\ref{rho as obstruction} we find that, regardless of this ill-definedness, there is a setting in which these $\rho$-invariants provide concordance information. Theorem~\ref{big theorem 1} below is proven.
\begin{reptheorem}{big theorem 1}
If $p$ is a symmetric prime polynomial, $m_1,\dots, m_n\in \mathbb{Z}$, $K_1, \dots K_n$ are knots, $A_0^p(K_i)$ has no isotropic submodules for each $i$ and $\iterate{\#}{i=1}{n}m_iK_i$ is slice then $\displaystyle \sum_{i=1}^{n}m_i\rho^1_p(K_i)=0$.
\end{reptheorem}
Restricting that theorem to the case that $n=1$ we get an application to the obstruction of torsion in the concordance group.
\begin{repcorollary}{torsion corollary 1}
Let $p$ be a symmetric polynomial and $K$ be a knot with $A_0^p(K)$ having no isotropic submodules. If $K$ is of finite order in the concordance group then $\rho^1_p(K)=0$.
\end{repcorollary}
In fact, Theorem~\ref{big theorem 1} implies that for each symmetric polynomial $p$ the $\rho^1_p$-invariant passes to a homomorphism from the subgroup generated by knots for which $A_0^p$ has no isotropic submodules. For each $p$, this subgroup contains all knots with prime Alexander polynomials. Thus, these invariants each give obstructions to linear dependence amongst these knots.
While Theorem~\ref{big theorem 1} can be compared to the results of \cite{derivatives}, it has the advantage that a knot can potentially be shown to be of infinite order after only one computation, while the results of \cite{derivatives} (for example Theorem 4.2 of that paper) only conclude that a knot is not slice, saying nothing about its concordance order; and even then a $\rho$-invariant must be computed for every isotropic submodule, of which there may be many.
For the same reason this theorem compares favorably to Casson-Gordon invariants. In order to use those invariants to obstruct a knot's being slice one must show that the value corresponding to every metabolizer of a linking pairing is non-zero. Furthermore, since the Casson-Gordon invariants obstruct a knot's being slice but not torsion, if one wishes to conclude that a knot $K$ is not torsion in the concordance group using Casson-Gordon invariants, one must show that these obstructions do not vanish for $\underset{n}{\#}K$ for every $n$.
An unfortunate fact about these invariants, $\rho$-invariants in general, and Casson-Gordon invariants is that it is difficult to do significant computations involving them. Section~\ref{computational tools} addresses this shortcoming for our invariants by proving Theorem~\ref{premain} which relates the $\rho^1_p$-invariant of a knot of finite algebraic order with the $\rho^0$-invariant of a link representing a metabolizer of a Seifert form.
\begin{reptheorem}{premain}
Let $p$ be a symmetric prime polynomial. Let $K$ be a knot of finite algebraic order $n>1$ with $A_0^p(K)$ having no isotropic submodules. Let $\Sigma$ be a genus $g$ Seifert surface for $nK=\underset{n}{\#}K$. Let $L$ be a link of $g$ curves on $\Sigma$ representing a metabolizer for the Seifert form. Let $P$ be the submodule of $A_0^p(nK)$ generated by $L$. Suppose that the meridians about the bands on which the components of $L$ sit form a $\mathbb{Z}$-linearly independent set in ${A^{p(t)}_0\left(nK\right)}/{P}$.
Then $\left| n\rho^1_p(K)-\rho^0(L)\right| \le g-1$.
\end{reptheorem}
This is good news because $\rho^0$ is computable. In many cases it can be expressed in terms of the integral of a simple function on $\mathbb{T}^n$, the $n$-dimensional torus. The application of Theorem~\ref{big theorem 1} together with Theorem~\ref{premain} gives a new and tractable strategy to show that knots which are of finite algebraic concordance order are not of finite topological concordance order. By finding a metabolizing link $L$ for $K$ and computing $\rho^0(L)$ one can hope to conclude that $K$ is of infinite concordance order.
This strategy is employed in Section~\ref{twist} to give a new infinite linearly independent set of twist knots whose every member is of order 2 in the algebraic concordance group. Specifically, the theorem below is proven. This appears to be the first application of von Neumann $\rho$-invariants to the twist knots of finite algebraic order.
\begin{reptheorem}{twist theorem}
For $x$ an integer, let $n(x) = -x^2-x-1$. For $x\ge2$, the set of $n(x)$-twist knots, $\{T_{-7}, T_{-13}, T_{-21}, \dots\}$, is linearly independent in $\mathcal{C}$.
\end{reptheorem}
In related work, a similar set, neither containing nor contained by the one presented here, is given by Tamulis \cite[corollary 1.2]{Tamulis}. In \cite{knotconcordanceandtorsion}, Livingston and Naik find that twist knots which are of algebraic order 4 are of infinite concordance order. In \cite{polynomialSplittingOfCG} Kim uses results of Gilmer to prove that except for the unknot, the $-1$-twist knot and the $-2$-twist knot, no nontrivial linear combination of twist knots is ribbon. {In \cite[Corollary 1.3]{Lis1}, Lisca establishes the smooth concordance order of the twist knots (and of two-bridge knots in general). In the topological setting the concordance order of the twist knots is in general unknown, to say nothing of their linear independence.}
\section{Background information: basic properties of $\rho$ and $\sigma^{(2)}$}
For the definition of the von Neumann $\rho$-invariant see \cite[equation 2.10, definition 2.11]{CT} and \cite[section 3]{Ha2}. For the definition of the $L^2$ signature see \cite[section 3.4, definition 3.21]{L2sign}. Instead of presenting definitions, we give the properties needed for our analysis.
The first property we essentially employ as the definition of the $\rho$-invariant. We even label it as such. The pair $(W,\Lambda)$ is referred to in \cite{Ha2} as a stable null-bordism for $\{(M_i,\Gamma_i)\}_{i=1}^n$ where it is proven that such a definition is independent of the stable null-bordism used.
\begin{definition}\label{rho}
Consider oriented 3-manifolds $M_1, \dots, M_n$, with homomorphisms $\phi_i:\pi_1(M_i)\to \Gamma_i$. Suppose that $M_1\sqcup M_2\sqcup\dots\sqcup M_n$ is the oriented boundary of a compact 4-manifold $W$ and $\psi:\pi_1(W)\to \Lambda$ is a homomorphism such that, for each $i$, there is a monomorphism $\alpha_i:\Gamma_i\to \Lambda$ making the following diagram commute:
\begin{center}$
\begin{diagram}
\node{\pi_1(M_i)} \arrow{e,t}{\phi_i}
\arrow{s,r}{i_*}
\node{\Gamma_i} \arrow{s,r,J}{\alpha_i}\\
\node{\pi_1(W)} \arrow{e,t}{\psi}
\node{\Lambda}
\end{diagram}$
\end{center}
Then $\displaystyle\sum_{i=1}^n\rho(M_i,\phi_i) = \sigma^{(2)}(W,\psi) - \sigma(W)$ where $\sigma$ is the regular signature of $W$ and $\sigma^{(2)}$ is the $L^2$ signature of $W$ twisted by the coefficient system $\psi$.
\end{definition}
The primary tool in this paper for getting information about the $L^2$ signature of a 4-manifold is a bound in terms of the rank of twisted second homology. When $\Gamma$ is PTFA (Poly-Torsion-Free-Abelian, see \cite[Definition 2.1]{whitneytowers}) and more generally whenever $\mathbb{Q}[\Gamma]$ is an Ore domain,
\begin{equation}\label{inequality}
\left|\sigma^{(2)}(W,\phi)\right| \le \operatorname{rank}_{\mathbb{Q}[\Gamma]}\left(\dfrac{H_2\left(W;\mathbb{Q}[\Gamma]\right)}{i_*\left[H_2\left(\ensuremath{\partial} W;\mathbb{Q}[\Gamma]\right)\right]}\right)
\end{equation}
where $i_*:H_2\left(\ensuremath{\partial} W;\mathbb{Q}[\Gamma]\right)\to H_2\left(W;\mathbb{Q}[\Gamma]\right)$ is the inclusion induced map.
This follows from the monotonicity of von Neumann dimension (see \cite[Lemma 1.4]{L2invts}) and the fact that the $L^2$ Betti number agrees with the $\mathbb{Q}[\Gamma]$-rank when $\mathbb{Q}[\Gamma]$ is an Ore domain (see \cite[Lemma 2.4]{Cha3} or \cite[Proposition 2.4]{FrLM}).
\section{The invariants of interest and some easy results}\label{easy}
In this section we define the invariants studied in this paper. Each is the Von Neumann $\rho$-invariant with respect to some abelian or metabelian quotient of the fundamental group of zero surgery on a knot or (in the case of $\rho^0$) a link.
\begin{definition}\label{rho0}
For a link $L$ of $n$ components with zero linking numbers let $\phi^0:\pi_1(M(L))\to \mathbb{Z}^n$ be the abelianization map. Let $\rho^0(L) = \rho(M(L), \phi^0)$ be the corresponding $\rho$-invariant.
\end{definition}
For a knot $K$, $\rho^0(K)$ is equal to the integral of the Levine-Tristram signature function (see \cite[Proposition 5.1]{structureInConcordance}). In particular, this invariant is computable but is zero for every knot of finite order in the algebraic concordance group. Despite this, $\rho^0$ can be used to get concordance information about algebraically slice knots by studying $\rho^0(L)$ where $L$ is a link given by a metabolizer of the knot. In \cite[4.3]{Collins} and \cite[example 5.10]{derivatives}, this strategy is used to give an alternate proof of the result of Casson-Gordon \cite[Theorem 5.1]{CG1} that the algebraically slice twist knots are not slice. In this paper $\rho^0$ of a metabolizer is used to get information about the invariants given in the next two definitions.
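Since $\rho^0(K)$ is the integral of the Levine-Tristram signature function, it can be approximated numerically from a Seifert matrix $V$ by sampling $\sigma_\omega(K)=\operatorname{sign}\left((1-\omega)V+(1-\bar\omega)V^T\right)$ over the unit circle. The following Python sketch is an illustration outside the paper's arguments; the Seifert matrix is that of the right-handed trefoil (an assumption for the example), whose signature function is $-2$ on an arc of normalized length $2/3$, so the integral is $-4/3$.

```python
import numpy as np

V = np.array([[-1, 1], [0, -1]])       # Seifert matrix of the right-handed trefoil

def lt_signature(omega, V):
    """Levine-Tristram signature: signature of (1-omega)V + (1-conj(omega))V^T."""
    M = (1 - omega) * V + (1 - np.conj(omega)) * V.T   # Hermitian matrix
    eigs = np.linalg.eigvalsh(M)
    return int(np.sum(eigs > 1e-9) - np.sum(eigs < -1e-9))

N = 3000
ts = (np.arange(N) + 0.5) / N          # midpoint samples avoid the jump points
rho0 = sum(lt_signature(np.exp(2j * np.pi * t), V) for t in ts) / N
print(rho0)                            # approximately -1.3333, i.e. -4/3
```

The jumps occur at the roots $e^{\pm \pi i/3}$ of the Alexander polynomial on the unit circle, and midpoint sampling keeps the Riemann sum exact up to floating-point error here.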
\begin{definition}\label{rho1}
For a knot $K$ let $\phi^1:\pi_1(M(K))\to \nquotient{M(K)}{2}$ be the projection map. Let $\rho^1(K) = \rho\left(M(K),\phi^1 \right)$ be the corresponding $\rho$-invariant.
\end{definition}
In order to give the definition of the third invariant we must first provide some definitions involving localizations of the Alexander module of a knot. For $p(t)\in \mathbb{Q}[t^{\pm1}]$ let \begin{equation}R_p = \left\{\frac{f}{g}\;\middle|\; f,g\in\mathbb{Q}[t^{\pm1}],\ (g,p)=1\right\}\end{equation} be the \textbf{localization of ${\mathbb{Q}[t^{\pm1}]}$ at ${p}$}. For a knot $K$, let $A_0^p(K) = A_0(K)\otimes R_p$ be the \textbf{localization of the Alexander module of ${K}$ at ${p}$}. (The usual notation for localization would call this the localization at the multiplicative set $S_p$ of polynomials relatively prime to $p$.)
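Membership in $R_p$ is simply a coprimality condition on the denominator, which is easy to test with a computer algebra system. The following SymPy sketch is illustrative: the choice $p=t^2-3t+1$ (the Alexander polynomial of the figure-eight knot) and the helper name are assumptions made for the example.

```python
import sympy as sp

t = sp.symbols('t')
p = t**2 - 3 * t + 1        # Alexander polynomial of the figure-eight knot

def in_R_p(f, g, p):
    """f/g lies in R_p iff gcd(g, p) = 1, i.e. g is invertible in R_p."""
    return sp.gcd(sp.Poly(g, t), sp.Poly(p, t)).degree() == 0

assert in_R_p(1, t - 1, p)        # 1/(t-1) is in R_p since gcd(t-1, p) = 1
assert not in_R_p(1, p, p)        # 1/p is not in R_p
```

In particular, every polynomial denominator coprime to $p$ becomes a unit after localizing, which is the mechanism behind Proposition~\ref{rho prime} below.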
Throughout this paper we need to be flexible with notation. For any CW complex $X$ with $H_1(X)=\langle t\rangle\cong \mathbb{Z}$ we define the Alexander module of $X$, $A_0(X)$, as the rational first homology of the universal abelian cover of $X$. Similarly, we define the localized Alexander modules of $X$, $A_0^p(X)=A_0(X)\otimes R_p$. In this paper, such an $X$, if not zero surgery on a knot, is generally a 4-manifold cobordism between zero surgeries on knots.
\begin{definition}\label{rho1p}
For a polynomial $p$ and a knot $K$ let $\pi_1(M(K))^{(2)}_p$ be the kernel of the composition
\begin{equation*}
\pi_1(M(K))^{(1)}\to\nmquotient{M(K)}{1}{2}\hookrightarrow A_0(K)\to A_0^p(K)
\end{equation*}
Let $\phi^1_p:\pi_1(M(K))\to\pquotient{M(K)}{p}$ be the projection map and $\rho^1_p(K)=\rho\left(M(K), \phi^1_p \right)$ be the $\rho$-invariant associated to this homomorphism.
\end{definition}
These $\rho$-invariants are similar to the first order $\rho$-invariants defined and employed in \cite{derivatives}. One could view our definition of $\rho^1$ as the restriction of their invariants to the case of anisotropic Alexander modules (a setting not considered in that paper). The $\rho^1_p$-invariant does not appear to have been previously considered.
The following theorem describes interactions between these invariants. It can be thought of as suggesting that the $\rho^1_p$-invariant picks up information sitting in the $p$-torsion part of the Alexander module of the knot.
\begin{proposition}\label{rho prime}
Let $\Delta$ be the Alexander polynomial of a knot $K$.
\begin{enumerate}
\item
If $p$ is a polynomial relatively prime to $\Delta$ then $\rho^1_p(K) = \rho^0(K)$.
\item
If $\Delta = p$ then $\rho^1_p(K)=\rho^1(K)$.
\end{enumerate}
\end{proposition}
\begin{proof}
When $p$ is relatively prime to the Alexander polynomial of $K$, then $\Delta\in S_p$ is invertible in $R_p$. Since $\Delta$ annihilates $A_0(K)$, this implies that $A_0^p(K)=A_0(K)\otimes \mathbb{Q}[t^{\pm1}]S_p^{-1}$ is the trivial module, so the kernel of
\begin{equation*}
\pi_1(M(K))^{(1)}\to\nmquotient{M(K)}{1}{2}\hookrightarrow A_0(K)\to A_0^p(K)=0
\end{equation*}
is equal to $\pi_1(M(K))^{(1)}$. Thus, the map $\phi^1_p$ of definition~\ref{rho1p} is the same as the map $\phi^0$ of definition~\ref{rho0} (the abelianization map). This completes the proof of the first claim.
When $p$ is equal to the Alexander polynomial of $K$ the map $A_0(K)\to A_0^p(K)$ is injective. Thus the kernel of
\begin{equation*}
\pi_1(M(K))^{(1)}\to\nmquotient{M(K)}{1}{2}\hookrightarrow A_0(K)\hookrightarrow A_0^p(K)
\end{equation*}
is equal to $\pi_1(M(K))^{(2)}$ and $\phi^1_p$ is exactly the map $\phi^1$ of definition~\ref{rho1}. This completes the proof of the second part.
\end{proof}
Throughout this paper we make use of a pair of additivity properties. The second is a localized version of \cite[Theorem 11.1 parts 4, 5]{C}. The knot $J_\eta(K)$ is the result of infection. For an overview of infection, see \cite[section 8]{C}.
\begin{proposition}\label{homomorphism}
Let $J$ and $K$ be knots and $\eta$ be an unknot in the complement of $J$ such that the linking number between $J$ and $\eta$ is zero. Let $p$ be a polynomial.
\begin{enumerate}
\item $\rho^1_p(J\# K) = \rho^1_p(J) + \rho^1_p(K)$
\item $\displaystyle \rho^1_p(J_\eta(K)) = \left\{
\begin{array}{ccc}
\rho^1_p(J) & \text{if} & \eta=0 \text{ in } A_0^p(J)\\
\rho^1_p(J)+\rho^0(K) & \text{if} & \eta\neq0 \text{ in } A_0^p(J)\\
\end{array}
\right.$
\end{enumerate}
\end{proposition}
\begin{proof}
The proof of part 1 proceeds by constructing a 4-manifold $W$ with $\ensuremath{\partial}(W) = M(J)\sqcup M(K)\sqcup -M(J\#K)$ such that:
\begin{enumerate}
\item
The map induced by the inclusion of each boundary component of $W$ on the first homology is an isomorphism. We let $t$ be the generator of the first homology of $W$ and of every one of its boundary components.
\item
$H_1(W;R_p) \cong H_1(M(J);R_p)\oplus H_1(M(K);R_p)$ and the inclusion induced maps from $H_1(M(J);R_p)$ and $H_1(M(K);R_p)$ are the maps to the first and second factors of this direct sum.
\item
$H_1(M(J\#K);R_p) \cong H_1(W;R_p)$ and the inclusion induced map is an isomorphism.
\item
$H_2(W;\mathbb{Z})$ is carried by $\ensuremath{\partial} W$.
\item
$H_2(W;\mathcal{K})=0$, where $\mathcal{K}$ is the classical field of fractions of the Ore domain $\mathbb{Q}\left[\pquotient{W}{p}\right]$.
\end{enumerate}
Supposing that such a $W$ is found, the next step is to show that the inclusion induced map $\pquotient{M(K)}{p}\to \pquotient{W}{p}$ is injective for each boundary component. Since we reuse this idea with some frequency throughout this paper, we write it down as a lemma:
\begin{lemma}\label{repeated use lemma}
Let $p$ be a polynomial. Suppose that $M(K)$ is a boundary component of a 4-manifold $W$, that the inclusion induced map $H_1(M(K))\to H_1(W)$ is an isomorphism and that the inclusion induced map $i_*:A_0^p(K)\to A_0^p(W)$ is injective.
Then the inclusion induced map $\pquotient{M(K)}{p}\to \pquotient{W}{p}$ is injective.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{repeated use lemma}]
The following diagram commutes and has exact rows:
\begin{equation}\label{lemma-diagram}\begin{diagram}
\node{0}\arrow{e}
\node{\onepquotient{M(K)}{p}} \arrow{e}\arrow{s,r}{\beta}
\node{\pquotient{M(K)}{p}} \arrow{e} \arrow{s,r}{i_*}
\node{\nquotient{M(K)}{1}}\arrow{s,r}{\cong}\arrow{e}
\node{0}\\
\node{0}\arrow{e}
\node{\onepquotient{W}{p}} \arrow{e}
\node{\pquotient{W}{p}} \arrow{e}
\node{\nquotient{W}{1}}\arrow{e}
\node{0}
\end{diagram}
\end{equation}
The vertical map on the right is exactly the inclusion induced map on first homology which is an isomorphism by assumption. If $\beta$ is a monomorphism, then $i_*$ is a monomorphism.
Consider the commutative diagram
\begin{equation}\begin{diagram}
\node{\onepquotient{M(K)}{p}} \arrow{e,J}\arrow{s,l}{\beta}
\node{A_0^p(M)}\arrow{s,J}\\
\node{\onepquotient{W}{p}} \arrow{e,J}
\node{A_0^p(W)}
\end{diagram}
\end{equation}
The two horizontal maps are injective by the definition of $\pi_1(M)^{(2)}_p$, while the rightmost vertical map is injective by assumption. Thus $\beta$ is injective and the proof is complete.
\end{proof}
Applying Lemma~\ref{repeated use lemma} for each boundary component and using properties (1), (2) and (3), the conditions of definition~\ref{rho} are satisfied, so \begin{equation}\rho^1_p(K\#J)-\rho^1_p(K)-\rho^1_p(J) = \sigma^{(2)}\left(W;\phi\right)-\sigma(W),\end{equation}
where $\phi:\pi_1(W)\to\pquotient{W}{p}$ is the quotient map.
By property (4), $\sigma(W)=0$. By property (5) and inequality~\eqref{inequality}, $\sigma^{(2)}\left(W;\phi\right)=0$. It remains only to construct such a $W$.
Let $W$ be given by taking $$M(J) \times [0,1]\sqcup M(K)\times[0,1]$$ and connecting it by gluing together neighborhoods of curves in $M(J)\times\{1\}$ and $M(K)\times\{1\}$ representing the meridians of $J$ and $K$ respectively. This could equally well be described via the addition of a 1-handle and a 2-handle to $M(J) \times [0,1]\sqcup M(K)\times[0,1]$.
It can be seen that $\ensuremath{\partial} W=M(J)\sqcup M(K) \sqcup -M(K\#J)$ by thinking of connected sum as infection along a meridian. We now provide the proof that $W$ has properties (1) through (5).
Consider the last 4 terms of the long exact sequence of the pair $(W,M(J)\sqcup M(K))$:
\begin{equation}\label{end of sequence}
H_2(W,M(J)\sqcup M(K))\to H_1(M(J)\sqcup M(K))\to H_1(W)\to 0.
\end{equation}
Letting $e$ be the relative second homology class given by the core of the added $S^1\times B^2\times [0,1]$ and $\mu_J$, $\mu_K$ be the meridians of $J$ and $K$ respectively, this exact sequence becomes
\begin{equation}\label{first few terms}
\begin{diagram}
\node{\langle e\rangle}\arrow{e,b}{e\mapsto \mu_J-\mu_K}\node{\langle\mu_J, \mu_K\rangle}\arrow{e}\node{ H_1(W)}\arrow{e}\node{ 0}
\end{diagram}
\end{equation}
Thus, $H_1(W)=\mathbb{Z}$ is generated by either $\mu_J$ or $\mu_K$, so the inclusions of $M(K)$ and $M(J)$ each induce an isomorphism on first homology. The meridian of $M(K\#J)$ is isotopic in $W$ to $\mu_J$, so the map from $H_1(M(K\#J))$ to $H_1(W)$ is also an isomorphism. This proves claim (1).
Consider the terms preceding (\ref{end of sequence}) in the long exact sequence:
\begin{equation}\label{next few terms}
\begin{array}{c}H_2(M(J)\sqcup M(K))\overset{i_*}{\to} H_2(W)\overset{p_*}{\to} H_2(W,M(J)\sqcup M(K))\\\overset{\ensuremath{\partial}_*}{\to} H_1(M(J)\sqcup M(K))\end{array}
\end{equation}
We saw in (\ref{first few terms}) that $\ensuremath{\partial}_*$ in (\ref{next few terms}) is injective, so $p_*$ is the zero map and \begin{equation}H_2(M(J)\sqcup M(K))\overset{i_*}{\to} H_2(W)\end{equation} is an epimorphism, proving (4).
Consider the decomposition of $W$ as $W=M(K)\times [0,1]\cup M(J)\times [0,1]$. The two pieces intersect in $S^1\times B^2$, a neighborhood of the identified meridians of $J$ and $K$. The Mayer-Vietoris sequence corresponding to this decomposition with coefficients in $R_p$ is
\begin{equation}\label{MVS}
\begin{array}{c}
H_n(S^1\times B^2;R_p)\overset{i_*}{\to} H_n(M(K);R_p)\oplus H_n(M(J);R_p) \overset{j_*}{\to} \\H_n(W;R_p)\overset{\ensuremath{\partial}_*}{\to} H_{n-1}(S^1\times B^2;R_p).
\end{array}
\end{equation}
The cover of $S^1\times B^2$ corresponding to this coefficient system is $\mathbb{R}\times B^2$, which is contractible, so $H_n(S^1\times B^2;R_p)=0$ for $n\neq 0$. For $n=0$, (\ref{MVS}) becomes
\begin{equation}\label{end of MVS}
\mathbb{Z}\overset{i_*}{\to} \mathbb{Z} \oplus \mathbb{Z} \overset{j_*}{\to} \mathbb{Z} \overset{\ensuremath{\partial}_*}{\to} 0.
\end{equation}
In particular, $i_*$ is injective, so $\ensuremath{\partial}_*: H_1(W;R_p)\to H_{0}(S^1\times B^2;R_p)$
is the zero map.
Thus, for each $n\ge1$, the map
\begin{equation}\label{isom}
H_n(M(K);R_p)\oplus H_n(M(J);R_p) \to H_n(W;R_p)
\end{equation}
is an isomorphism. Setting $n=1$ proves (2).
The isomorphism exhibited in (\ref{isom}) (when $n=0$ it is still an epimorphism) implies that $H_n(W,M(K)\sqcup M(J);R_p)=0$ for all $n$. By Poincar\'e duality, $H^n(W,M(K\#J);R_p)=0$ for all $n$. By the universal coefficient theorem \cite[Theorem 2.36]{DaKi}, $H_n(W,M(K\#J);R_p)=0$. An examination of the long exact sequence of the pair $(W,M(K\#J))$ reveals that the inclusion induced map from $H_n(M(K\#J);R_p)$ to $H_n(W;R_p)$ is an isomorphism for all $n$. Taking $n=1$ proves (3).
Now consider the Mayer-Vietoris sequence in \eqref{MVS} with coefficients in $\mathcal{K}$ instead of $R_p$:
\begin{equation}\label{MVSK}
\begin{array}{c}
H_n(S^1\times B^2;\mathcal{K})\to H_n(M(K);\mathcal{K})\oplus H_n(M(J);\mathcal{K})\to\\ H_n(W;\mathcal{K})\to H_{n-1}(S^1\times B^2;\mathcal{K}).
\end{array}
\end{equation}
By \cite[Corollary 3.12]{C}, $H_n(S^1\times B^2;\mathcal{K})$, $H_n(M(K);\mathcal{K})$ and $H_n(M(J);\mathcal{K})$ vanish for all $n$, implying that $H_n(W;\mathcal{K})=0$. Together with inequality~\eqref{inequality}, this proves (5) and completes the proof of part 1 of the proposition.
The second part of the proposition can be proved in an analogous manner. We give a description of $W$ and allow the interested reader to work out the details.
Let $W$ be given by taking $M(J) \times [0,1]\sqcup M(K)\times [0,1] $ and gluing a neighborhood of $\eta$ in $M(J)\times\{1\}$ to a neighborhood of the meridian in $M(K)\times \{1\}$. The boundary of $W$ is given by $M(J) \sqcup M(K) \sqcup -M\left(J_\eta(K)\right)$.
Similar properties are desired. Specifically:
\begin{enumerate}
\item[(1')] The inclusion induced maps from $H_1(M(J))$ and $H_1(M(J_\eta(K)))$ to $H_1(W)$ are both isomorphisms. The inclusion induced map from $H_1(M(K))$ to $H_1(W)$ is the zero map. Let $t$ be the generator of $H_1(W)$.
\item[(2', 3')] The inclusion induced maps from $H_1(M(J);R_p)$ and $H_1(M(J_\eta(K));R_p)$ to $H_1(W;R_p)$ are both isomorphisms. The composition
\begin{equation*}
H_1(M(K))\to H_1(M(K))\underset{\mathbb{Z}}{\otimes} R_p = H_1(M(K);R_p) \to H_1(W;R_p)
\end{equation*} is the zero map if $\eta$ is zero in $A_0^p(J)$ and otherwise is injective.
\item[(4)]
$H_2(W;\mathbb{Q})$ is carried by $\ensuremath{\partial} W$.
\item[(5)]
$H_2(W;\mathcal{K})=0$, where $\mathcal{K}$ is the classical field of fractions of the Ore domain $\mathbb{Q}\left[\pquotient{W}{p}\right]$.
\end{enumerate}
\end{proof}
Notice that the first part of Proposition~\ref{homomorphism} states that each of these metabelian $\rho$-invariants is a homomorphism from the monoid of knots to $\mathbb{R}$. One might hope that they pass to homomorphisms on the knot concordance group. This is not the case. Consider the pair of slice knots depicted in Figure~\ref{fig:counterexample}. If $\rho^0(K)\neq 0$, for example if $K$ is a trefoil knot, then $\rho^1(R)$ and $\rho^1(R_\eta(K))$ cannot both be zero by the second part of Proposition~\ref{homomorphism}.
\begin{figure}[h]
\setlength{\unitlength}{1pt}
\begin{picture}(140,160)
\put(-90,0){\includegraphics[width=.35\textwidth]{slice3.pdf}}
\put(90,0){\includegraphics[width=.35\textwidth]{infectedslice3.pdf}}
\put(-96,60){$\eta$}
\put(100,60){$K$}
\put(-24,6){$R$}
\put(140,6){$R_\eta(K)$}
\end{picture}
\caption{A pair of slice knots with differing $\rho^1$-invariant.}\label{fig:counterexample}
\end{figure}
This illustrates two obstacles to the use of $\rho^1_p$ as a concordance tool. The first is that it is not well defined on the concordance group. Despite this fact, the next section extracts concordance information from these invariants. The second obstacle is the well known difficulty of actually computing $L^2$ signatures. As a consequence, in the example above we do not give a specific slice knot with non-zero $\rho^1$. Rather, we give a pair, at least one of which has non-zero $\rho^1$.
In Section~\ref{computational tools} we present a solution to the second obstacle by finding computable bounds on $\rho^1$. We compute these bounds in Section~\ref{twist} for a family of twist knots.
\section{a context in which $\rho^1_p$ provides concordance information}\label{rho as obstruction}
The ring $\mathbb{Q}[t^{\pm1}]$ has an involution defined by
\begin{equation}
q(t)\mapsto\overline{q}(t)=q(t^{-1}).
\end{equation}
When $p(t)$ is symmetric, this involution extends to the localization $R_p$. For any knot $K$, the classical Blanchfield form $Bl$ is sesquilinear on $A_0(K)$ with respect to this involution and extends to a sesquilinear form \begin{equation}Bl:A_0^p(K)\times A_0^p(K)\to\dfrac{\mathbb{Q}(t)}{R_p}.\end{equation}
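To see why the symmetry of $p$ is needed for the involution to extend, consider the following sketch; the specific polynomial is an illustrative choice of ours.

```latex
% For the symmetric polynomial p(t) = t^2 - 3t + 1,
%     \overline{p}(t) = p(t^{-1}) = t^{-2} - 3t^{-1} + 1 = t^{-2}\,p(t),
% so \overline{p} = u\,p for the unit u = t^{-2} of Q[t^{\pm 1}].
% Hence q is coprime to p if and only if \overline{q} is, the
% multiplicative set S_p is preserved by the involution, and
% q(t) \mapsto q(t^{-1}) extends to R_p = Q[t^{\pm 1}] S_p^{-1}.
```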
For a submodule $P$ of $A_0^p(K)$, the orthogonal complement of $P$ with respect to the Blanchfield form, denoted $P^\perp$, is given by the set of all elements of $A_0^p(K)$ which pair trivially with every element of $P$, that is,
$$P^\perp = \{q\in A_0^p(K)| Bl(r,q)=0\text{ for all }r\in P\}.$$
A submodule $P$ is called \textbf{isotropic} if $P\subseteq P^\perp$ and is called \textbf{Lagrangian} or \textbf{self-annihilating} if $P=P^\perp$. A knot $K$ is called \textbf{$p$-anisotropic} if $A_0^p(K)$ has no nontrivial isotropic submodules.
The goal of this section is a theorem which allows the use of $\rho^1_p$ to obtain concordance information for knots all of whose prime factors are $p$-anisotropic. One can think of this restriction as allowing us to use the mindset of \cite{derivatives} without having to think about $\rho$-invariants corresponding to the (possibly infinitely) many isotropic submodules of the Alexander module.
\begin{theorem}\label{big theorem 1}
Given a symmetric polynomial $p$, integers $m_1,\dots, m_n$ and $p$-anisotropic knots $K_1, \dots, K_n$, if $m_1K_1\#\dots \# m_nK_n$ is slice then $\displaystyle \sum_{i=1}^n m_i\rho^1_{p}(K_i) = 0$.
\end{theorem}
The proof is delayed to Subsection~\ref{proof of big 1}. We begin with an exploration of its implications as a means of detecting linear dependence in the concordance group. For example, taking $m_1=\dots= m_n$, it yields the following torsion obstruction:
\begin{corollary}\label{torsion corollary 1}
For a symmetric polynomial $p$ and a knot $K$ which decomposes into a connected sum of $p$-anisotropic knots, if $K$ is of finite order in the concordance group, then $\rho^1_p (K) = 0$.
\end{corollary}
Since we are restricting our field of vision to $p$-anisotropic knots, it is worth noting how many knots are $p$-anisotropic. The following theorem shows that there are many knots to which Theorem~\ref{big theorem 1} applies. Its proof is analogous to the proof of \cite[Theorems 4.1 through 4.3]{Go2}.
\begin{proposition}\label{applies}
For a knot $K$ and a polynomial $p$ which has no non-symmetric factors, $A_0^p(K)$ is anisotropic if each factor of $p$ divides the Alexander polynomial of $K$ with multiplicity at most $1$.\end{proposition}
\begin{proof}[Proof of Proposition~\ref{applies}]
Let $\Delta_K$ be the Alexander polynomial of $K$.
$A_0(K)$, being a finitely generated torsion module over the PID $\mathbb{Q}[t^{\pm1}]$, has a decomposition into elementary factors:
\begin{equation}
A_0(K) = \iterate{\oplus}{i=1}{n} \dfrac{\mathbb{Q}[t^{\pm1}]}{(q_i)}
\end{equation}
where $q_i$ divides $q_{i+1}$ for each $i$ and $q_1 q_2\dots q_n=\Delta_K$. Let $h$ be the greatest common divisor of $p$ and the Alexander polynomial of $K$. Each prime factor $f$ of $h$ divides some $q_i$. If $i<n$ then, since $f$ also divides $q_{i+1}$, it follows that $f$ divides the Alexander polynomial of $K$ with multiplicity greater than $1$, contradicting the hypothesis.
Thus, for every $i<n$, $p$ is relatively prime to $q_i$, so $\frac{\mathbb{Q}[t^{\pm1}]}{(q_i)}\otimes R_p=0$ and
\begin{equation}
A_0^p(K) = \dfrac{\mathbb{Q}[t^{\pm1}]}{(q_n)}\otimes R_p = \dfrac{R_p}{(q_n)}
\end{equation}
is cyclic. Since $q_n=h h'$ where $h'$ is relatively prime to $p$ and thus is a unit in $R_p$, the ideals $(q_n)$ and $(h)$ in $R_p$ are equal. Thus,
\begin{equation}
A_0^p(K) = \dfrac{R_p}{(q_n)} = \dfrac{R_p}{(h)}
\end{equation}
Let $\eta$ be a generator of $A_0^p(K)$. Then since the localized Blanchfield form is nonsingular, $Bl(\eta, \eta)=\frac{r}{h}\in\frac{\mathbb{Q}(t)}{R_p}$, where $r$ and $h$ are coprime.
If $Bl(s(t)\eta, s(t)\eta) = \displaystyle\dfrac{s\overline{s}r}{h}=0$, that is, $s(t)\eta$ is an isotropic element, then it must be that $h$ divides $s\overline{s}$. Since $h$ is squarefree and has no non-symmetric factors, this implies that $h$ divides $s$, so $s(t)\eta=0$. Thus, any element of an isotropic submodule of $A_0^p(K)$ must be the zero element, and so $K$ is $p$-anisotropic.
\end{proof}
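The localization step in the proof above can be traced through a small assumed example; the polynomials below are illustrative choices of ours, not drawn from the text.

```latex
% Suppose (illustratively) that
%     A_0(K) = Q[t^{\pm1}]/(q_1) \oplus Q[t^{\pm1}]/(q_2),
% with q_1 = t^2 - t + 1 and q_2 = q_1 (t^2 - 3t + 1), so that
% \Delta_K = q_1 q_2.  Take p = t^2 - 3t + 1; each factor of p
% divides \Delta_K with multiplicity one, as in the hypothesis.
% Then h = gcd(p, \Delta_K) = p, and
%     Q[t^{\pm1}]/(q_1) \otimes R_p = 0        (q_1 is coprime to p),
%     A_0^p(K) = R_p/(q_2) = R_p/(h)           (q_2 = q_1 h, q_1 a unit),
% recovering the cyclic module R_p/(h) on which the Blanchfield
% argument of the proof is carried out.
```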
Thus, Theorem~\ref{big theorem 1} applies for every choice of $p$ when the Alexander polynomial of $K_i$ is square-free for each $i$.
The restriction to knots of finite algebraic concordance order with coprime squarefree Alexander polynomials can be exploited by repeated application of Theorem~\ref{big theorem 1}. Doing so yields the following corollary.
\begin{corollary}\label{big corollary 1}
Let $K_1, K_2,\dots$ be knots of finite algebraic order which have coprime squarefree Alexander polynomials. If $\rho^1(K_i)$ is nonzero for each $i$, then the knots $K_i$ form a linearly independent set in $\mathcal{C}$.
\end{corollary}
\begin{proof}
Let $p_n$ be the Alexander polynomial of $K_n$.
Suppose that some linear combination $m_1K_1\#\dots \# m_jK_j\#\dots$ is slice. Proposition~\ref{applies} gives that $K_i$ is $p_n$-anisotropic for every $i$ so
\begin{equation}\label{big corollary equation}
\displaystyle \sum_{i=1}^\infty m_i\rho^1_{p_n}(K_i) = 0
\end{equation}
by Theorem~\ref{big theorem 1}. Proposition~\ref{rho prime} applies to give that $\rho^1_{p_n}(K_i)=\rho^0(K_i)$ when $i\neq n$ and $\rho^1_{p_n}(K_n)=\rho^1(K_n)\neq 0$. Since $K_i$ is assumed to be of finite algebraic order, $\rho^0(K_i) = 0$.
Thus, all but one of the terms on the left hand side of \eqref{big corollary equation} are zero. Dropping them reveals that $m_n\rho^1(K_n) = 0$. Since $\rho^1(K_n)$ is assumed to be nonzero, it must be that $m_n$ is zero. Repeating this argument for every natural number $n$ gives that these knots are linearly independent.
\end{proof}
\subsection{Examples}\label{subsect:examples}
As applications of Theorem~\ref{big theorem 1} we give some infinite linearly independent subsets of $\mathcal{C}$.
\begin{example}Consider any knot $J$ with non-zero $\rho^0$-invariant. For every integer $n$, excluding those of the form $n=-x^2-x$ for $x\in \mathbb{Z}$, let the knot $K_n$ be given by the connected sum of $T_n(J)$ (the $n$-twisted double of $J$) with $-T_n$ (the reverse of the mirror image of the $n$-twist knot). In this example we show that the set containing all such $K_n$ is linearly independent in $\mathcal{C}$. These knots are depicted in Figure~\ref{fig:linearly independent}.
\begin{figure}[h]
\setlength{\unitlength}{1pt}
\begin{picture}(155,100)
\put(-45,0){\includegraphics[width=.7\textwidth]{jn.pdf}}
\put(-38,27){$n$}
\put(189,27){$-n$}
\put(-37,52){$J$}
\end{picture}
\caption{An infinite family of algebraically slice knots which are linearly independent in $\mathcal{C}$. ($n\neq -x^2-x$)}\label{fig:linearly independent}
\end{figure}
In order to see this, suppose $\iterate{\#}{n}{} c_n (T_n(J)\#-T_n)$ is slice. By Proposition~\ref{applies}, the knots $T_n$ and $T_n(J)$ are $p$-anisotropic for every symmetric polynomial $p$. Thus, Theorem~\ref{big theorem 1} applies to give that
\begin{equation}\label{e1}
\displaystyle \sum_n c_n\left(\rho^1_p(T_n(J))-\rho^1_p(T_n)\right)=0
\end{equation}
Take $p$ to be the Alexander polynomial of $T_m$. Since the twist knots all have distinct prime Alexander polynomials, Proposition~\ref{rho prime} applies to reduce (\ref{e1}) to
\begin{equation}\label{first example equation}
\displaystyle \sum_{n\neq m} \left(c_n\rho^0(T_n(J))-c_n\rho^0(T_n)\right)+c_m\rho^1(T_m(J))-c_m\rho^1(T_m)=0
\end{equation}
By Proposition~\ref{homomorphism}, $c_m\rho^1(T_m(J))-c_m\rho^1(T_m)=c_m\rho^0(J)$. Since $\rho^0$ depends only on the algebraic concordance class of a knot, $\rho^0(T_n(J))=\rho^0(T_n)$. Plugging these values into equation~\eqref{first example equation} yields
\begin{equation}\displaystyle c_m\rho^0(J)=0.\end{equation}
Since $\rho^0(J)$ is nonzero, it must be that $c_m$ is zero. Repeating this argument for every natural number $m$ we see that these knots are linearly independent.
\end{example}
Notice that the knots $K_n$ in the preceding example are algebraically slice and in particular are not anisotropic. Despite this fact, Theorem~\ref{big theorem 1} applied to give a proof that they are linearly independent. The $\rho^1_p$-invariant gives concordance information about knots sitting in the subgroup generated by $p$-anisotropic knots. This group includes many knots which are not $p$-anisotropic and even some algebraically slice knots.
\begin{example}Consider the knot $J_n$ depicted on the left hand side of Figure~\ref{fig:linearly independent order 2}. For $n>0$ it is anisotropic, having prime Alexander polynomial $$\Delta_{J_n}(t) = n^2t^2+(1-2n^2)t+n^2.$$ It has order 2 in the concordance group, so Corollary~\ref{torsion corollary 1} applies to give that $\rho^1(J_n)=0$ for all $n$. For each $n$ pick a curve $\eta_n$ which is nonzero in $A_0(J_n)$. Let $T$ be a knot with nonvanishing $\rho^0$-invariant. Let $K_n$ be the result of the infection depicted in Figure~\ref{fig:linearly independent order 2}. By the second part of Proposition~\ref{homomorphism}, $\rho^1(K_n) = \rho^0(T)$. Corollary~\ref{big corollary 1} then applies to show that the set $\{K_n\}^\infty_{n=1}$ is linearly independent.
\end{example}
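Two elementary facts quoted for $\Delta_{J_n}(t) = n^2t^2+(1-2n^2)t+n^2$, its symmetry and its irreducibility over $\mathbb{Q}$ for $n>0$ (via a negative discriminant), reduce to routine arithmetic and can be spot-checked; the following is a minimal sketch of that check, not part of the argument.

```python
from fractions import Fraction

def delta(n, t):
    # Delta_{J_n}(t) = n^2 t^2 + (1 - 2 n^2) t + n^2, as stated above.
    return n**2 * t**2 + (1 - 2*n**2) * t + n**2

# Symmetry: t^2 * Delta(1/t) == Delta(t), checked at sample rational points.
for n in range(1, 6):
    for t in (Fraction(2), Fraction(3, 7), Fraction(-5, 2)):
        assert t**2 * delta(n, 1 / t) == delta(n, t)

# Irreducibility over Q for n > 0: the discriminant
# (1 - 2n^2)^2 - 4n^4 = 1 - 4n^2 is negative, so the quadratic has
# no real (hence no rational) roots.
for n in range(1, 6):
    disc = (1 - 2*n**2)**2 - 4*n**4
    assert disc == 1 - 4*n**2 and disc < 0
```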
\begin{figure}[h]
\setlength{\unitlength}{1pt}
\begin{picture}(90,140)
\put(-104,30){\includegraphics[width=.35\textwidth]{genus1templatewithcurve.pdf}}
\put(60,30){\includegraphics[width=.35\textwidth]{genus1template.pdf}}
\put(56,75){\includegraphics[width=.09\textwidth]{box.pdf}}
\put(-104,50){\includegraphics[width=.09\textwidth]{box.pdf}}
\put(-72,50){\includegraphics[width=.09\textwidth]{box.pdf}}
\put(56,50){\includegraphics[width=.09\textwidth]{box.pdf}}
\put(88,50){\includegraphics[width=.09\textwidth]{box.pdf}}
\put(-90,62){$n$}
\put(-63,62){$-n$}
\put(69,62){$n$}
\put(97,62){$-n$}
\put(-109,101){$\eta_n$}
\put(69,85){$T$}
\put(-45,10){$J_n$}
\put(110,10){$K_n$}
\end{picture}
\caption{Left: An infinite family of knots of order 2. Right: An infinite family of knots which are algebraically of order 2 and which are linearly independent in $\mathcal{C}$. $n>0$.}\label{fig:linearly independent order 2}
\end{figure}
These examples both hinge on knowledge of the behavior of $\rho^1_p$ under infection. If one wishes to say something about knots which do not result from infection, such as the twist knots, then another tool is needed. Such a tool is found in Section~\ref{computational tools} and is employed to find information about twist knots in Section~\ref{twist}.
\subsection{The proof of Theorem~\ref{big theorem 1}}\label{proof of big 1}
We now prove Theorem~\ref{big theorem 1}. The proof is compartmentalized into several lemmas.
\begin{proof}[Proof of Theorem~\ref{big theorem 1}]
By replacing $K_i$ by $-K_i$ if necessary we may assume that each $m_i$ is non-negative. By replacing $m_iK_i$ by $K_i\#\dots \#K_i$ we may assume that $m_i=1$ for each $i$. A 4-manifold $W$ is constructed whose boundary consists of $M(K_1)\sqcup\dots \sqcup M(K_n)$. From here on $M(K_i)$ is abbreviated as $M_i$. It will be shown using Lemma~\ref{repeated use lemma} as well as Lemmas~\ref{H1 isom} and \ref{kernel is isotropic} that the inclusion induced maps $$\alpha_i:\pquotient{M_i}{p}\to\pquotient{W}{p}$$ are monomorphisms. This 4-manifold will be shown in Lemmas~\ref{second homology by bdry} and \ref{twisted homology} to have signature defect equal to zero with respect to the homomorphism \begin{equation}\psi:\pi_1(W)\to\pquotient{W}{p}.\end{equation} Once this is done, then by definition~\ref{rho}, \begin{equation}\displaystyle \sum_{i=1}^n m_i\rho^1_p(K_i) = 0.\end{equation}
We let $V$ be the connected 4-manifold given by taking the disjoint union $\iterate{\sqcup}{i=1}{n} M_i\times [0,1]$ together with $n-1$ copies of $S^1\times B^2\times[0,1]$ indexed from $1$ to $n-1$ and gluing the $i$th copy of $S^1\times B^2\times\{0\}$ to a neighborhood of a curve representing the meridian in $M_i\times\{1\}$ and the $i$th copy of $S^1\times B^2\times\{1\}$ to a neighborhood of a curve representing the meridian in $M_{i+1}\times\{1\}$. Let $\ensuremath{\partial}_-(V) = \iterate{\sqcup}{i=1}{n}M_i$ denote the disjoint union of the $M_i$ boundary components of $V$. Let $\ensuremath{\partial}_+ V$ denote the $-M(K_1\#\dots \#K_n)$ boundary component of $V$.
Suppose that $K:= K_1\#\dots \#K_n$ is slice. Let $E$ be the complement of a slice disk for $K$ in $B^4$. $\ensuremath{\partial} E = M(K) = -\ensuremath{\partial}_+V$. Let $W$ be the union of $E$ and $V$ glued together along this common boundary component.
\begin{lemma}\label{H1 isom}
For each $1 \le i \le n$ the map induced by the inclusion of $M_i \subset \ensuremath{\partial} (W)$ into $W$ is an isomorphism on first homology.
\end{lemma}
\begin{proof}
Let $\mu_i$ denote the curve in $M_i$ given by the meridian. By a Mayer-Vietoris argument
\begin{equation}H_1(V) = \langle \mu_1, \dots, \mu_n \mid \mu_1=\mu_2 = \dots = \mu_n\rangle \cong \mathbb{Z}\end{equation}
is generated by the meridian of any component of $\ensuremath{\partial}_- V$. The meridian of $K$ in $\ensuremath{\partial}_+ V$ is isotopic in $V$ to the meridian of any one of the components of $\ensuremath{\partial}_- V$, so the inclusion of $\ensuremath{\partial}_+ V$ into $V$ induces an isomorphism on $H_1$. Additionally the inclusion of $\ensuremath{\partial}_+ V = \ensuremath{\partial} E$ into $E$ induces an isomorphism on $H_1$. Combining these facts with another Mayer-Vietoris argument, one sees that $H_1(W) = \mathbb{Z}$ is generated by the meridian of any one of the boundary components of $W$, which completes the proof.
\end{proof}
\begin{lemma}\label{second homology by bdry}
The second homology of $W$ is carried by the boundary of $W$ so $\sigma(W)=0$.
\end{lemma}
\begin{proof}
By a Mayer-Vietoris argument using the decomposition of the 4-ball as the union of $E$ with a neighborhood of a slice disk for $K$, $H_2(E) \cong H_2(S^1 \times B^2) = 0$. $V$ is homotopy equivalent to the union of its boundary together with 1-cells between different components and 2-cells glued to curves which are linearly independent in first homology, so $V$ has second homology carried by its boundary. As previously noted, the inclusion of any of the boundary components of $V$ into $V$ induces an isomorphism on $H_1$. The Mayer-Vietoris long exact sequence associated to $W=V \cup E$ is
\begin{center}
$H_2(V)\oplus H_2(E) \overset{i_*}{\rightarrow}H_2(W)\overset{\ensuremath{\partial}_*}{\rightarrow} H_1(M(K)) \overset {j_*\oplus k_*}{\longrightarrow} H_1(V)\oplus H_1(E)$
\end{center}
In this sequence, $j_*$ is an isomorphism so $j_*\oplus k_*$ is a monomorphism. Thus, $\ensuremath{\partial}_* = 0$ and $i_*$ is an epimorphism. Since $H_2(E)=0$, $i_*$ is an epimorphism from $H_2(V)$ to $H_2(W)$. Hence, $H_2(W)$ is carried by $H_2(V)$ which in turn is carried by $H_2(\ensuremath{\partial}_- V) = H_2(\ensuremath{\partial} W)$.
\end{proof}
\begin{lemma}\label{twisted homology}
Let $\phi:\pi_1(W)\to \Gamma$ be any PTFA coefficient system on $W$ with $\phi(\mu_i)\neq 0$ where $\mu_i$ is the meridian of any one of the boundary components of $W$. Let $\mathcal{K}$ be the classical field of fractions of the Ore domain $\mathbb{Q}[\Gamma]$.
Then $H_2(W;\mathcal{K})=0$ so $\sigma^{(2)}(W;\phi)=0$.
\end{lemma}
\begin{proof}
The submanifold $V$ decomposes as the union of $\ensuremath{\partial}_- V$ together with $n-1$ copies of $S^1\times B^2\times [0,1]$. The intersection of these two sets is given by $2n-2$ copies of $S^1\times B^2$.
The Mayer-Vietoris sequence corresponding to this decomposition with coefficients in $\mathcal{K}$ gives the exact sequence
\begin{equation}
\begin{array}{c}
H_p(\ensuremath{\partial}_- V;\mathcal{K})\oplus \iterate{\oplus}{i=1}{n-1}H_p(S^1\times B^2\times [0,1];\mathcal{K})\to H_p(V;\mathcal{K})\\\to \iterate{\oplus}{i=1}{2n-2}H_{p-1}(S^1\times B^2;\mathcal{K}).
\end{array}
\end{equation}
By \cite[Corollary 3.12]{C}, $H_p(\ensuremath{\partial}_- V;\mathcal{K})$, $H_{p}(S^1\times B^2\times [0,1];\mathcal{K})$ and $H_{p-1}(S^1\times B^2;\mathcal{K})$ all vanish for each $p$, so $H_p(V;\mathcal{K})=0$.
By \cite[2.10 b]{whitneytowers}, since there is an integral homology isomorphism from $S^1$ to $E$, $H_*(E;\mathcal{K}) = H_*(S^1;\mathcal{K}) = 0$. Consider the Mayer-Vietoris exact sequence of the decomposition $W = V\cup E$. Note that $V\cap E = M(K)$.
$$H_p(E;\mathcal{K})\oplus H_p(V;\mathcal{K})\to H_p(W;\mathcal{K})\to H_{p-1}(M(K);\mathcal{K}).$$
We have seen that $H_p(E;\mathcal{K}) = H_p(V;\mathcal{K}) = H_{p-1}(M(K);\mathcal{K}) = 0$, so $H_p(W;\mathcal{K})=0$ for all $p$. Taking $p=2$ completes the proof.
\end{proof}
For the remainder of this section we denote by $t$ the generator of the first homology of $W$. Since the maps on integral first homology induced by the inclusions of each of $V$, $E$, $M_i$ and $\ensuremath{\partial}_+V$ into $W$ are isomorphisms, the first homology of any one of these with coefficients in $\mathbb{Q}[t^{\pm1}]$ or $R_p$ is isomorphic to its Alexander module or localized Alexander module respectively.
\begin{lemma}\label{kernel is isotropic}
The submodule $P$ of $H_1\left(\ensuremath{\partial} W; R_p \right)$ given by the kernel of the map induced by the inclusion of $\ensuremath{\partial} W$ into $W$ is isotropic. For any component $M_i$ of $\ensuremath{\partial} W$ the submodule $Q$ of $H_1\left(M_i; R_p\right)$ given by the kernel of the map induced by inclusion into $W$ is isotropic.
\end{lemma}
\begin{proof}
$R_p$ embeds in the field $\mathbb{Q}(t)$. By Lemma~\ref{twisted homology} and the universal coefficient theorem with field coefficients \cite[Corollary 2.31]{DaKi} \begin{equation}H^2(W;\mathbb{Q}(t))=H_1(W;\mathbb{Q}(t))=0,\end{equation} so the Bockstein homomorphism \begin{equation}B:H^1(W; \mathbb{Q}(t)/R_p) \rightarrow H^2(W;R_p)\end{equation} is an isomorphism.
Consider the following commutative diagram, where $P.D.$ denotes the Poincar\'e duality isomorphism and $\kappa$ denotes the Kronecker map. The composition of the vertical maps on the right gives the Blanchfield form.
\begin {equation}
\begin {diagram}
\node{H_2(W,\ensuremath{\partial} W;R_p)}\arrow{s,r}{P.D.}\arrow{e,t}{\ensuremath{\partial}_*}\node{H_1(\ensuremath{\partial} W;R_p)}\arrow{s,r}{P.D.}\\
\node{H^2(W;R_p)}\arrow{s,r}{B^{-1}}\arrow{e,t}{i^*}\node{H^2(\ensuremath{\partial} W;R_p)}\arrow{s,r}{B^{-1}}\\
\node{H^1(W,\mathbb{Q}(t)/R_p)}\arrow{s,r}{\kappa}\arrow{e,t}{i^*}\node{H^1(\ensuremath{\partial} W;\mathbb{Q}(t)/R_p)}\arrow{s,r}{\kappa}\\
\node{\hom_{R_p}\left(H_1(W;R_p),\mathbb{Q}(t)/R_p\right)}\arrow{e,t}{\left(i_*\right)^{\operatorname{dual}}}\node{\hom_{R_p}\left(H_1(\ensuremath{\partial} W;R_p), \mathbb{Q}(t)/R_p\right)}
\end{diagram}
\end{equation}
If $x$ and $y$ are elements of $P$, then by the exactness of
\begin{equation}
H_2(W,\ensuremath{\partial} W;R_p)\overset{\ensuremath{\partial}_*}{\to} H_1(\ensuremath{\partial} W;R_p)\overset{i_*}{\to} H_1(W;R_p)
\end{equation}
there are elements $X$ and $Y$ of $H_2(W, \ensuremath{\partial} W; R_p)$ with $\ensuremath{\partial}_* X = x$ and $\ensuremath{\partial}_* Y = y$. Thus,
\begin{eqnarray*}
Bl(x,y) &=& ((\kappa \circ B^{-1} \circ (P.D.))(x))(y) \\&=& ((\kappa \circ B^{-1} \circ (P.D.)\circ \ensuremath{\partial}_*)(X))(\ensuremath{\partial}_*Y)
\end{eqnarray*}
Using the commutative diagram above,
\begin{eqnarray*}
Bl(x,y) &=&((i_*^{\operatorname{dual}} \circ \kappa \circ B^{-1} \circ (P.D.))(X))(\ensuremath{\partial}_*Y)\\
& = &((\kappa \circ B^{-1} \circ (P.D.))(X))(i_* \circ \ensuremath{\partial}_* Y)
\end{eqnarray*}
and this is zero since $i_* \circ \ensuremath{\partial}_* = 0$. Thus, for any $x,y \in P$, $Bl(x,y)=0$ and $P\subseteq P^{\perp}$.
Finally, $Q$ is contained in the isotropic submodule $P$ and so is itself isotropic.
\end{proof}
At this point we can begin the final stages of the proof of Theorem~\ref{big theorem 1}. The regular signature of $W$ is zero since its integral second homology is carried by its boundary (by Lemma~\ref{second homology by bdry}). The $L^2$ signature is zero by the inequality~\eqref{inequality} since $H_2(W; \mathcal{K})=0$ (by Lemma~\ref{twisted homology}).
By Lemma~\ref{H1 isom} the homomorphism induced by inclusion of each boundary component $M_i$ into $W$ on first homology is an isomorphism. By Lemma~\ref{kernel is isotropic} and the assumption that there are no nontrivial isotropic submodules, it must be that the induced map on localized Alexander modules is an injection. Lemma~\ref{repeated use lemma} now asserts that the inclusion induced map from $\dfrac{\pi_1(M_i)}{\pi_1(M_i)^{(2)}_p}$ to $\dfrac{\pi_1(W)}{\pi_1(W)^{(2)}_p}$ is injective and by definition~\ref{rho} \begin{equation}\displaystyle \sum_{i=1}^n \rho^1_p(K_i) = \sigma^{(2)}(W;\psi)-\sigma(W) = 0.\end{equation} This concludes the proof of Theorem~\ref{big theorem 1}.
\end{proof}
\begin{remark}
It is not necessary that $E$ be a slice disk complement. The only properties of $E$ that we use are the following:
\begin{enumerate}
\item $\ensuremath{\partial} E = M(K)$ and the map induced by inclusion on first homology is an isomorphism.
\item The kernel of the map induced by inclusion on first homology with coefficients in $R_p$ is isotropic.
\item The signature defect of $E$ corresponding to the quotient map \begin{equation*}\pi_1(E)\to\pquotient{E}{p(t)}\end{equation*} is zero.
\end{enumerate}
These conditions are satisfied when $E$ is a (1.5)-solution for $K$ \cite[Theorems 4.2 and 4.4]{whitneytowers}. Thus, the word slice can be replaced with $(1.5)$-solvable and the concept of linear dependence in $\mathcal{C}$ can be replaced with linear dependence in $\mathcal{C}/\mathcal{F}_{(1.5)}$ in Theorem~\ref{big theorem 1} and Corollaries~\ref{torsion corollary 1} and \ref{big corollary 1}.
\end{remark}
\section{Relating $\rho^1_p$ with $\rho^0$}\label{computational tools}
Let $K$ be a knot which is of finite order $n >1$ in the algebraic concordance group. Let $\Sigma$ be a genus $g$ Seifert surface for $\underset{n}{\#}K$. Let $L$ be a link of $g$ components sitting on $\Sigma$ which represents a metabolizer of the Seifert form. This section establishes a relationship between $\rho^1_p(K)$ and $\rho^0(L)$. The resulting theorem (Theorem~\ref{premain}) is used in Section~\ref{twist} to get information about $\rho^1$.
We need the following piece of notation:
\begin{definition}\label{band}
For a knot $K$ with Seifert surface $\Sigma$ and a link $L$ sitting on $\Sigma$, let $\gamma$ be a component of $L$. A curve $m$ which does not intersect $\Sigma$ is called a \textbf{meridian for the band on which $\gamma$ sits} if $m$ bounds a disk in $S^3$ which intersects $\Sigma$ in an arc which crosses $\gamma$ in a single point and does not intersect any other component of $L$.
\end{definition}
\begin{construction}\label{W for computation}
Let $V$ be as in the proof of Theorem~\ref{big theorem 1}, so $\ensuremath{\partial}_+ V$ is given by $M(J)$, where $ J = \underset{n}{\#} K$. Thinking of $L$ as sitting in $\ensuremath{\partial}_+ V$, let $E$ be given by adjoining to $V$ a two handle along the zero framing of each component of $L$. Let $\ensuremath{\partial}_- E$ be the disjoint union of the $M(K)$ boundary components of $E$. Let $\ensuremath{\partial}_+ E$ be the boundary component of $E$ given by zero surgery on $L$ together with $J$, that is $\ensuremath{\partial}_+ E = M(L \cup J)$.
Sliding $J$ over the handles attached to $L$, one sees that $J$ bounds a disk in $M(L)$ and $\ensuremath{\partial}_+E\cong M(L)\#S^2\times S^1$. Adjoin to $V\cup E$ a three handle along the nonseparating $S^2$ in $\ensuremath{\partial}_+ E$ and call the resulting 4-manifold $W$. We denote by $\ensuremath{\partial}_+W$ the $M(L)$ component of $\ensuremath{\partial} W$ and by $\ensuremath{\partial}_-W=\ensuremath{\partial}_-V$ the disjoint union of the $M(K)$ components of $\ensuremath{\partial} W$. The submanifold $W - V$ is identical to the manifold constructed in \cite[8.1]{derivatives}, where it is called $E$ and is described in more detail.
\end{construction}
Let $p\in \mathbb{Q}[t^{\pm1}]$ be a polynomial.
We wish to use $W$ to make a claim involving the $\rho^1_p$-invariant of $K$ and the $\rho^0$-invariant of $L$. We begin with an overview of the strategy we use to do so. Let $\phi:\pi_1(W)\to\pquotient{W}{p}$ be the projection map. Lemmas~\ref{H2 by bdry} and \ref{H1 injection with coeff} together with Lemma~\ref{repeated use lemma} will give that \begin{equation}n\rho^1_p(K)-\rho^0(L)=\sigma^{(2)}(W;\phi)-\sigma(W).\end{equation} Lemma~\ref{bound} will give bounds on $\sigma^{(2)}(W;\phi)-\sigma(W)$.
\begin{lemma}\label{H2 by bdry}
\begin{enumerate}
\item
For each $M(K)$-component of $\ensuremath{\partial}_- W$ the map induced by inclusion from $H_1(M(K))$ to $H_1(W)$ is an isomorphism.
\item
The kernel of the map induced by inclusion from $A_0^p(K)$ to $A_0^p(W)$ is isotropic.
\end{enumerate}
\end{lemma}
\begin{proof}
The inclusion from $M(K)$ into $V$ induces a first homology isomorphism, as was observed in the proof of Lemma~\ref{H1 isom}. $W$ is obtained by adding 2-cells to null-homologous curves in $V$ and then adding a 3-cell. Neither of these operations changes first homology so the inclusion from $M(K)$ into $W$ induces a first homology isomorphism, which proves (1).
If $x,y\in A_0^p(K)$ are in the kernel of the inclusion induced map, then $(x,0,0,\dots,0), (y,0,0,\dots,0) \in \underset{n}{\oplus} A^p_0(K)= A_0^p\left(\underset{n}{\#} K\right)$ are in the submodule $P$ generated by the homology classes of the lifts of the components of $L$.
By Lemma~\ref{metabolic to Lagrangian}, which is stated and proved at the end of this section, $P$ is isotropic with respect to the linking form on $A_0^p\left(\iterate{\#}{n}{} K\right)$, which is precisely the $n$-fold direct sum of the linking form of $A_0^p(K)$ with itself.
Thus, $Bl_K(x,y)=Bl_{\# K}((x,0,\dots,0),(y,0,\dots,0)) = 0$ and the kernel of the inclusion induced map is isotropic with respect to Blanchfield linking.
\end{proof}
\begin{theorem}\label{H1 injection with coeff}
\vspace{.1in}
\noindent\begin{enumerate}
\item If $K$ is a $p$-anisotropic knot, then the map from $\pquotient{M(K)}{p}$ to $\pquotient{W}{p}$ induced by the inclusion of any one of the $M(K)$-components of $\ensuremath{\partial}_- W$ is injective.
\item Let $m_i$ be the meridian about the band of $\Sigma$ on which $L_i$ sits and let $P$ be the submodule of $A_0^p(J)$ generated by $L$. If $m_1,\dots, m_g$ are $\mathbb{Z}$-linearly independent as elements of $A_0^p(J)/P$, then the inclusion of $M(L) = \ensuremath{\partial}_+ W$ into $W$ induces an injective map from $\nquotient{M(L)}{1}$ to $\pquotient{W}{p}$.
\end{enumerate}
\end{theorem}
\begin{proof}
By Lemma~\ref{H2 by bdry}, the kernel of the inclusion induced map on $A_0^p$ is isotropic, but by assumption the only such submodule of $A_0^p(K)$ is zero, so the inclusion induced map \begin{equation}A_0^p(K)\hookrightarrow A_0^p(W)\end{equation} is a monomorphism. From this, together with Lemma~\ref{H2 by bdry} part 1 and Lemma~\ref{repeated use lemma}, it follows that the induced map \begin{equation}\pquotient{M(K)}{p} \to \pquotient{W}{p}\end{equation} is a monomorphism, completing the proof of (1).
By \cite[prop 8.1 (5)]{derivatives}, the inclusion induced map from $H_1(M(L))$ to $H_1(W)$ is trivial. Thus, the inclusion induced map sends $\pi_1(M(L))$ to $\pi_1(W)^{(1)}$. Consider the composition
\begin{equation}\label{composition}H_1(M(L))\to\dfrac{\pi_1(W)^{(1)}}{\pi_1(W)^{(2)}}\to\onepquotient{W}{p}\to A_0^p(W) = A_0^p(J)/P\end{equation}
By \cite[prop 8.1 (3)]{derivatives} the generators of the left hand side of (\ref{composition}) (meridians of the components of $L$) are isotopic in $W$ to the meridians of the bands on which the components of $L$ sit. By assumption, the latter form a $\mathbb{Z}$-linearly independent set on the right hand side of (\ref{composition}), so this composition is injective. In particular, this means that the composition of the two leftmost maps in (\ref{composition}) is injective, so $H_1(M(L))\to\onepquotient{W}{p}\subseteq \pquotient{W}{p}$ is a monomorphism, which completes the proof.
\end{proof}
Theorem~\ref{H1 injection with coeff} implies
\begin{equation}\label{use W}
\sigma^{(2)}(W,\phi) - \sigma(W) = n \rho^1_p(K)-\rho^0(L).
\end{equation}
Lemma~\ref{bound} bounds the signatures on the left hand side of this equation. It is independent of any anisotropy assumption. Before we can address this lemma we must provide a definition for the Alexander nullity of a link.
In \cite[Definition 7.3.1]{Ka3} the \textbf{Alexander nullity} of an $n$-component link $L$ is defined as \begin{equation*}\eta(L)=\operatorname{rank}_{\mathbb{Q}(\mathbb{Z}^n)}\left(H_1(E(L),p;\mathbb{Q}(\mathbb{Z}^n))\right)-1\end{equation*} where $E(L)=S^3-L$ is the exterior of the link, $p$ is a point in $E(L)$ and the homology is twisted by the abelianization map. A study of the long exact sequence of $(E(L),p)$ reveals that \begin{equation*}\eta(L)=\operatorname{rank}_{\mathbb{Q}(\mathbb{Z}^n)}\left(H_1(E(L);\mathbb{Q}(\mathbb{Z}^n))\right).\end{equation*} In the case of a link with pairwise zero linking numbers a Mayer--Vietoris argument reveals that \begin{equation}\label{eta}\eta(L)=\operatorname{rank}_{\mathbb{Q}(\mathbb{Z}^n)}\left(H_1(M(L);\mathbb{Q}(\mathbb{Z}^n))\right).\end{equation} The last of these interpretations is the most convenient for our purposes.
\begin{lemma}\label{bound} Let $W$ be the 4-manifold given in Construction~\ref{W for computation} and let $\phi:\pi_1(W)\to\pquotient{W}{p}$ be the quotient map. Then:
\vspace{.1in}
\noindent\begin{enumerate}
\item$\sigma(W)=0$.
\item$\left|\sigma^{(2)}(W,\phi) \right| \le g-1-\eta(L)$, where $\eta(L)$ is the Alexander nullity of $L$.
\end{enumerate}
\end{lemma}
\begin{proof}
To see the first claim, notice that the homology of $W$ which is not carried by the $M_i$ components of $\ensuremath{\partial}_- W$ is generated by 2-handles attached to $M(K)$ along the zero framings of $g$ curves which have zero linking numbers with each other. The intersection form is thus given by the $g\times g$ zero matrix, which proves (1).
Let $\mathcal{K}$ be the classical field of fractions of the Ore domain $\mathbb{Q}\left[\pquotient{W}{p}\right]$. To see the second claim we show that \begin{equation*}\operatorname{rank}_\mathcal{K} \left(\dfrac{H_2\left(W;\mathcal{K}\right)}{i_*\left[H_2\left(\ensuremath{\partial} W;\mathcal{K}\right)\right]}\right) = g-1-\eta(L).\end{equation*}
The pair $(W,V)$ consists of $g$ relative 2-handles and $1$ relative 3-handle, so $\chi(W)-\chi(V)=g-1$. Since $V$ has the homotopy type of a (disconnected) closed 3-manifold together with $n$ 1-cells and $n$ 2-cells, $\chi(V)=0$. It must be that $\chi(W)=g-1$. By \cite[Proposition 3.7]{C}, $H_0(W;\mathcal{K})=0$. By \cite[Proposition 3.10]{C}, $H_1(W;\mathcal{K})=0$. $W$ has the homotopy type of a 3-complex, so $H_4(W;\mathcal{K})=0$. Consider the long exact sequence of the pair $(W,\ensuremath{\partial} W)$,
$$H_3(\ensuremath{\partial} W;\mathcal{K})\to H_3(W;\mathcal{K})\to H_3(W,\ensuremath{\partial} W;\mathcal{K}).$$
Employing Poincar\'e duality, $H_3(\ensuremath{\partial} W;\mathcal{K}) = H_0(\ensuremath{\partial} W;\mathcal{K})=0$ and $H_3(W,\ensuremath{\partial} W;\mathcal{K})=H_1(W;\mathcal{K})=0$. Thus, $H_3(W;\mathcal{K})=0$.
Since the alternating sum of the ranks of twisted homology gives the Euler characteristic, $\operatorname{rank}_\mathcal{K} \left(H_2(W;\mathcal{K})\right) = \chi(W) = g-1$.
By Theorem~\ref{H1 injection with coeff} part 2, $$H_1(\ensuremath{\partial}_+ W;\mathcal{K}) = H_1(M(L);\mathbb{Q}(\mathbb{Z}^n))\otimes \mathcal{K} =\mathcal{K}^{\eta(L)},$$
so since $H_1(\ensuremath{\partial}_-(W);\mathcal{K}) = 0$, $H_1(\ensuremath{\partial} W;\mathcal{K}) = \mathcal{K}^{\eta(L)}$. By Poincar\'e duality, $H_2(\ensuremath{\partial} W;\mathcal{K})=H_1(\ensuremath{\partial} W;\mathcal{K})=\mathcal{K}^{\eta(L)}$.
Since $H_3(W,\ensuremath{\partial} W;\mathcal{K})=0$, the exact sequence of the pair indicates that $i_*:H_2(\ensuremath{\partial} W;\mathcal{K})\to H_2(W;\mathcal{K})$ is a monomorphism.
Thus, \begin{eqnarray*}\operatorname{rank}_\mathcal{K} \left(\dfrac{H_2\left(W;\mathcal{K}\right)}{i_*\left[H_2\left(\ensuremath{\partial} W;\mathcal{K}\right)\right]}\right) &=& \operatorname{rank}_\mathcal{K} \left(H_2(W;\mathcal{K})\right)-\operatorname{rank}_\mathcal{K} \left(H_2(\ensuremath{\partial} W;\mathcal{K})\right)\\ &=& (g-1)-\eta(L).\end{eqnarray*}
Finally, $\left|\sigma^{(2)}(W,\phi)\right| \le \operatorname{rank}_\mathcal{K} \left(\dfrac{H_2\left(W;\mathcal{K}\right)}{i_*\left[H_2\left(\ensuremath{\partial} W;\mathcal{K}\right)\right]}\right)$ by inequality~\eqref{inequality}, which completes the proof.
\end{proof}
Now by equation~\eqref{use W}, $\sigma^{(2)}(W,\phi)-\sigma(W) = n\rho^1_p(K) - \rho^0(L)$. Using the bound obtained in Lemma~\ref{bound} the theorem below is proven.
\begin{theorem}\label{premain}
Let $K$ be a $p$-anisotropic knot of finite algebraic order $n>1$. Let $\Sigma$ be a genus $g$ Seifert surface for $\underset{n}{\#}K$. Let $L$ be a link of $g$ curves on $\Sigma$ representing a metabolizer for the Seifert form. Let $P$ be the submodule of $A^p_0\left(\underset{n}{\#}K\right)$ generated by $L$. Suppose that the meridians about the bands on which the components of $L$ sit form a $\mathbb{Z}$-linearly independent set in $A_0^p\left(\underset{n}{\#}K\right)/P$. Then
$$ \left| n\rho^1_p(K)-\rho^0(L)\right| \le g-1-\eta(L).$$
\end{theorem}
All of the results of this section could be rephrased to deal with the case that $K_1,\dots, K_n$ are (possibly distinct) $p$-anisotropic knots and $J=\iterate{\#}{i=1}{n}K_i$ is algebraically slice with a metabolizer $L$. Following the exact same argument, one gets a stronger theorem.
\begin{theorem}\label{postmain}
Suppose that $K_1,\dots, K_n$ are (not necessarily distinct) $p$-anisotropic knots and $K=\iterate{\#}{i=1}{n}K_i$ is algebraically slice. Let $\Sigma$ be a genus $g$ Seifert surface for $K$. Let $L$ be a link of $g$ curves on $\Sigma$ representing a metabolizer for the Seifert form. Let $P$ be the submodule of $A_0^p(K)$ generated by $L$. Suppose that the meridians about the bands on which the components of $L$ sit form a $\mathbb{Z}$-linearly independent set in $A_0^p(K)/P$. Then $$\displaystyle \left| \sum_{i=1}^{n} \rho_p^1(K_i)-\rho^0(L)\right| \le g-1-\eta(L).$$
\end{theorem}
\begin{example}
We provide an application of Theorem~\ref{postmain}. Suppose that $L$ is a 2-component link with pairwise linking number zero and such that $\left|\rho^0(L)\right|>1-\eta(L)$. Such links arise as by-products of the analysis in the next section. Let $i$ be a positive integer. The link $L$ is a metabolizer for the algebraically slice knot $K_i$ depicted in Figure~\ref{fig:antiderivative}. Notice that $K_i$ is given by the connected sum of $J_i(L)$ and $J_i$, both of which have prime Alexander polynomials and so are anisotropic. The meridians of the bands on which the components of $L$ sit are depicted in Figure~\ref{fig:antiderivative}. For some choice of basis of the Alexander module of $K_i$, they represent
\begin{equation}
m_1=\left[\begin{array}{c}
1\\0
\end{array}\right],
m_2=\left[\begin{array}{c}
i(t-1)\\0
\end{array}\right].
\end{equation}
The components of $L$ represent
\begin{equation}
l_1=\left[\begin{array}{c}
ti\\-i
\end{array}\right],
l_2=\left[\begin{array}{c}
i^2(1-t)\\i^2(t-1)
\end{array}\right]
\end{equation}
in
\begin{equation}
\displaystyle A_0(K_i) = \dfrac{\mathbb{Q}[t^{\pm1}]}{(i^2t^2+(1-2i^2)t+i^2)} \oplus \dfrac{\mathbb{Q}[t^{\pm1}]}{(i^2t^2+(1-2i^2)t+i^2)}.
\end{equation} Notice that $m_1, m_2, l_1, l_2$ form a $\mathbb{Z}$-linearly independent set.
\begin{figure}[h]
\setlength{\unitlength}{1pt}
\begin{picture}(150,150)
\put(-90,-75){\includegraphics[width=1.4\textwidth]{antiderivative3.pdf}}
\put(0,0){$J_i(L)$}
\put(135,0){$J_i$}
\put(-44,56){$i$}
\put(-16,56){$-i$}
\put(92,56){$i$}
\put(119,56){$-i$}
\put(-32,80){$L$}
\put(-70,110){$m_1$}
\put(65,110){$m_2$}
\end{picture}
\caption{$J_i(L)\#J_i$ has the link $L$ as a derivative. }\label{fig:antiderivative}
\end{figure}
Theorem~\ref{postmain} applies to give that $\rho^1(J_i(L))+\rho^1(J_i)$ is nonzero. Since $J_i$ is of topological order 2, Theorem~\ref{big theorem 1} gives us that $\rho^1(J_i)=0$. It must be that $\rho^1(J_i(L))$ is not zero. For $i\neq k$ positive integers, $J_i(L)$ and $J_k(L)$ have distinct prime Alexander polynomials, so by Corollary~\ref{big corollary 1} the knots $J_i(L)$, $i>0$, are linearly independent in $\mathcal{C}$.
\end{example}
\subsection{Metabolizers of the Seifert form and isotropy in the Alexander module}
In this subsection we state and prove a fact used in the proof of Lemma~\ref{H2 by bdry}. The proof relies on the formula in \cite[section 8]{BlDuality} for the Blanchfield form in terms of the Seifert matrix.
\begin{lemma}\label{metabolic to Lagrangian}
Suppose that $\Sigma$ is a genus $g$ Seifert surface for a knot $K$ and $L = L_1\cup\dots\cup L_g$ is a $g$-component link sitting on $\Sigma$ which spans a rank $g$ direct summand of $H_1(\Sigma)$ on which the Seifert form vanishes.
The submodule of $A_0^p(K)$ generated by the components of $L$ is isotropic.
\end{lemma}
\begin{proof}
The proof will proceed by showing that the submodule of $A_0(K)$ generated by $L$ is isotropic. Once this is shown, the sesquilinearity of the Blanchfield form will complete the proof in the localized setting. For any \begin{equation}\alpha \otimes \frac{e}{f}, \beta \otimes \frac{g}{h}\end{equation}
in the submodule of $A_0^p(K)$ generated by $L$,
\begin{equation}Bl(\alpha \otimes \frac{e}{f}, \beta \otimes \frac{g}{h}) = Bl(\alpha, \beta) \frac{e\overline g}{f \overline h}\end{equation}
which is zero since $\alpha$ and $\beta$ are in the submodule of $A_0(K)$ generated by $L$.
We now set ourselves to proving that the submodule of the unlocalized Alexander module of $K$ generated by $L$ is isotropic. This submodule will be denoted by $P$.
Let the set $\{L_1,\dots, L_g\}$ be extended to a symplectic basis $\{L_1,\dots, L_g, D_1, \dots, D_g\}$ for $H_1(\Sigma)$. Let $\mu_1, \dots, \mu_g, \nu_1, \dots, \nu_g$ be the dual basis for $H_1(S^3 - \Sigma)$ given by meridians about the bands on which the $L_i$ and $D_i$ sit. The homology classes of the lifts of $\mu_1, \dots, \mu_g, \nu_1, \dots, \nu_g$ to the infinite cyclic cover of $M(K)$ form a generating set for $A_0(K)$ as a $\mathbb{Q}$-vector space. The map from $H_1(\Sigma)$ to the Alexander module induced by lifting $\Sigma$ to the cyclic cover of $M(K)$ is given with respect to these generating sets by the Seifert matrix $V$. The Blanchfield form with respect to the generating set given by the lifts of $\mu_1, \dots, \mu_g, \nu_1, \dots, \nu_g$ is given by \begin{equation}\label{Blanchfield as seifert}Bl(\vec{r}, \vec{s})=(1-t)(\vec{s})^T(V-tV^T)^{-1}(\vec r)\end{equation} (see \cite[section 8]{BlDuality}).
Since $\{L_1,\dots, L_g\}$ is a metabolizer for the Seifert form, $V$ is given by a matrix of the form $\left[\begin{array}{cc}0&A\\B&C\end{array}\right]$, with respect to the basis $\{L_1,\dots, L_g, D_1, \dots, D_g\}$ for $H_1(\Sigma)$ ($A,B,C$ are $g\times g$ matrices). Thus, $(V-tV^T)^{-1}$ is given by a matrix with entries in $\mathbb{Q}[t^{\pm1}]$ of the form $\left[\begin{array}{cc}D&E\\ F&0\end{array}\right]$, where $D,E,F$ are $g\times g$ matrices with polynomial entries.
Consider any $\vec r=V\left[\begin{array}{c}\vec a\\0\end{array}\right]$, $\vec s=V\left[\begin{array}{c}\vec b\\0\end{array}\right]$ in $P$ ($\vec a$ and $\vec b$ are $g$-dimensional column vectors, while $0$ denotes the $g$-dimensional zero vector). Plugging these values into (\ref{Blanchfield as seifert}) we see
\begin{eqnarray*}
Bl(\vec r, \vec s)=(1-t)\left[\begin{array}{c}\vec b\\0\end{array}\right]^T\left[\begin{array}{cc}0&B^T\\A^T&C^T\end{array}\right]\left[\begin{array}{cc}D&E\\ F&0\end{array}\right]\left[\begin{array}{cc}0&A\\B&C\end{array}\right]\left[\begin{array}{c}\vec a\\0\end{array}\right].
\end{eqnarray*}
This is zero by direct computation. Since $P$ is the submodule generated by the lifts of $L_1,\dots, L_g$, this shows that $P$ is isotropic and completes the proof.
\end{proof}
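The direct computation at the end of the proof can be sanity-checked numerically. The sketch below is an editorial illustration, not part of the argument: it specializes $t$ to a fixed rational value, builds a Seifert matrix whose upper-left block vanishes from arbitrarily chosen integer blocks $A,B,C$ (hypothetical test data), and verifies both that the lower-right block of $(V-tV^T)^{-1}$ vanishes and that the pairing is zero on the image of the metabolizer.

```python
from fractions import Fraction

def inv(M):
    """Exact inverse of a square rational matrix by Gauss-Jordan elimination."""
    n = len(M)
    A = [[Fraction(M[i][j]) for j in range(n)]
         + [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)  # find a pivot row
        A[c], A[p] = A[p], A[c]
        piv = A[c][c]
        A[c] = [x / piv for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

g = 2
t = Fraction(3)  # a fixed rational value standing in for the variable t
A_, B_, C_ = [[1, 2], [0, 1]], [[2, 1], [1, 1]], [[0, 1], [1, 0]]
# Seifert matrix in the basis {L_1..L_g, D_1..D_g}: zero upper-left block.
V = [[0] * g + A_[i] for i in range(g)] + [B_[i] + C_[i] for i in range(g)]
M = [[V[i][j] - t * V[j][i] for j in range(2 * g)] for i in range(2 * g)]
Minv = inv(M)
# Lower-right g x g block of (V - tV^T)^{-1} vanishes, as in the proof.
assert all(Minv[g + i][g + j] == 0 for i in range(g) for j in range(g))
# Hence s^T (V - tV^T)^{-1} r = 0 for r, s in the image of the metabolizer.
a, b = [1, 2, 0, 0], [3, 1, 0, 0]
r = [sum(V[i][j] * a[j] for j in range(2 * g)) for i in range(2 * g)]
s = [sum(V[i][j] * b[j] for j in range(2 * g)) for i in range(2 * g)]
assert sum(s[i] * Minv[i][j] * r[j]
           for i in range(2 * g) for j in range(2 * g)) == 0
```

Since the lower-right block of the inverse vanishes whenever the upper-left block of $V$ does, the pairing of any two elements of $P$ is zero, which is exactly the direct computation invoked in the proof.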
\section{An infinite family of twist knots of algebraic order $2$ with distinct prime Alexander polynomials and nonzero $\rho^1$-invariant.} \label{twist}
Consider the $n$-twist knot $T_n$ depicted in Figure~\ref{fig:twist}. The goal of this section is the proof of Theorem~\ref{rho twist}.
\begin{theorem}\label{rho twist}
Let $n(x) = -x^2-x-1$. If $x\ge2$, then $\rho^1\left(T_{n(x)}\right)<0$.
\end{theorem}
From this theorem we get the following immediate corollary.
\begin{corollary}\label{twist theorem}
The twist knots $T_{-7}, T_{-13}, T_{-21}, \dots, T_{-x^2-x-1}, \dots$ form a linearly independent set in $\mathcal{C}$.
\end{corollary}
\begin{proof}[Proof of Corollary~\ref{twist theorem}]
These twist knots have nonzero $\rho^1$-invariants and distinct prime Alexander polynomials $$\Delta_{T_{n}}(t)=nt^2+(1-2n)t+n.$$ By Corollary~\ref{big corollary 1} these knots form a linearly independent set.
\end{proof}
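The primality claim can be verified directly: a quadratic $nt^2+(1-2n)t+n$ with integer $n$ is irreducible over $\mathbb{Q}$ exactly when its discriminant $(1-2n)^2-4n^2 = 1-4n$ is not a perfect square, and for $n=-x^2-x-1$ one has $1-4n = 4x^2+4x+5$, which lies strictly between $(2x+1)^2$ and $(2x+2)^2$ for $x\ge1$. A sketch of this check (an editorial illustration, not part of the proof):

```python
import math

def alexander_poly_is_prime(x):
    # Delta(t) = n t^2 + (1-2n) t + n is irreducible over Q iff its
    # discriminant (1-2n)^2 - 4n^2 = 1 - 4n is not a perfect square.
    n = -x * x - x - 1
    disc = 1 - 4 * n                  # = 4x^2 + 4x + 5
    r = math.isqrt(disc)
    return r * r != disc

# Irreducible for every tested x; the polynomials are pairwise distinct
# since n(x) = -x^2 - x - 1 is strictly decreasing in x.
assert all(alexander_poly_is_prime(x) for x in range(1, 10000))
assert len({-x * x - x - 1 for x in range(1, 10000)}) == 9999
```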
We move on to the proof of Theorem~\ref{rho twist}, which occupies us for the remainder of this paper.
For each $x$, the knot $T_{n(x)}$ is algebraically of order two (see \cite[Corollary 23]{Le10}). The knot $T_{n(x)}\#T_{n(x)}$ has the following as its Seifert matrix taken with respect to the obvious basis for the first homology of the Seifert surface depicted in Figure~\ref{fig:metabaslink}.
\begin{equation}\label{seif mat}
\left[
\begin{array}{cccc}
n(x)&1&0&0\\
0&1&0&0\\
0&0&n(x)&1\\
0&0&0&1
\end{array}
\right].
\end{equation}
This matrix has a metabolizer generated by
\begin{equation}
v_1 = \left[
\begin{array}{c}
1\\
x\\
0\\
1
\end{array}
\right],
v_2=\left[
\begin{array}{c}
0\\
1\\
1\\
-x-1
\end{array}
\right].
\end{equation}
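As a quick check (not needed for the argument), one can verify directly that the Seifert form given by the matrix~\eqref{seif mat} vanishes on the span of $v_1$ and $v_2$ for every $x$; the sketch below does this for a range of values of $x$.

```python
# Sanity check: the Seifert form u^T V w vanishes for all pairs u, w
# drawn from {v1, v2}, where V is the Seifert matrix with n = n(x) = -x^2-x-1.
def seifert_form_vanishes(x):
    n = -x * x - x - 1
    V = [[n, 1, 0, 0],
         [0, 1, 0, 0],
         [0, 0, n, 1],
         [0, 0, 0, 1]]
    v1 = [1, x, 0, 1]
    v2 = [0, 1, 1, -x - 1]
    form = lambda u, w: sum(u[i] * V[i][j] * w[j]
                            for i in range(4) for j in range(4))
    return all(form(u, w) == 0 for u in (v1, v2) for w in (v1, v2))

assert all(seifert_form_vanishes(x) for x in range(1, 100))
```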
\begin{figure}[b]
\setlength{\unitlength}{1pt}
\begin{picture}(180,180)
\put(-98,-65){\includegraphics[width=1\textwidth]{consumtwistknotmetab.pdf}}
\put(55,57){$+1$}
\put(240,57){$+1$}
\put(-86,57){$n(x)$}
\put(100,57){$n(x)$}
\put(-105,98){$m_1$}
\put(85,98){$m_2$}
\put(8,154){$x+1$ strands}
\put(205,154){$x+2$ strands}
\end{picture}
\caption{$L_x$ as a link in $S^3$ sitting on a Seifert surface for $T_{n(x)}\#T_{n(x)}$. (In this picture $x=2$.)}\label{fig:metabaslink}
\end{figure}
As homology classes, $v_1$ and $v_2$ are represented by the link $L_x$ also depicted in Figure~\ref{fig:metabaslink}. Meridians for the bands on which the components of $L_x$ sit (also depicted in Figure~\ref{fig:metabaslink}) represent generators for the Alexander module:
\begin{equation}
m_1=\left[\begin{array}{c}
1\\0
\end{array}\right],
m_2=\left[\begin{array}{c}
0\\1
\end{array}\right]
\end{equation}
while the components of the link are given by:
\begin{equation}
l_1=\left[\begin{array}{c}
t(n(x)+xn(x)+x)+1\\t(n(x)+1)-1
\end{array}\right],
l_2=\left[\begin{array}{c}
t(n(x)+1)-1\\t(-xn(x)-n(x)-1)+x+1
\end{array}\right].
\end{equation}
In \begin{equation}A_0(T_{n(x)}\#T_{n(x)}) = \underset{2}{\oplus}\left(\dfrac{\mathbb{Q}[t^{\pm1}]}{(n(x)t^2+(1-2n(x))t+n(x))}\right)\end{equation} the elements $m_1, m_2, l_1, l_2$ form a $\mathbb{Z}$-linearly independent set.
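This independence claim can be checked by hand, or by the short computation sketched below (an editorial illustration): writing each class in coordinates with respect to the $\mathbb{Q}$-basis $\{1,t\}$ of each cyclic summand (the polynomial is quadratic, so degree-one representatives suffice), $\mathbb{Z}$-linear independence in a $\mathbb{Q}$-vector space amounts to a nonzero determinant.

```python
# Verify that m1, m2, l1, l2 are Z-linearly independent in
# (Q[t]/Delta)^2 with Delta = n t^2 + (1-2n) t + n, n = -x^2-x-1.
# Coordinates are (const_1, t_1, const_2, t_2) in the basis {1, t} of
# each summand; Z-independence over Q <=> nonzero determinant.
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def independent(x):
    n = -x * x - x - 1
    m1 = [1, 0, 0, 0]
    m2 = [0, 0, 1, 0]
    l1 = [1, n + x * n + x, -1, n + 1]
    l2 = [-1, n + 1, x + 1, -x * n - n - 1]
    return det([m1, m2, l1, l2]) != 0

assert all(independent(x) for x in range(2, 40))
```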
Theorem~\ref{premain} provides us with a strategy to prove Theorem~\ref{rho twist}. Specifically, if $\rho^0(L_x) < -1$ then $\rho^1(T_{n(x)})<0$.
\begin{theorem}\label{twist2}
When $x$ is an integer greater than $1$, $\rho^0(L_x)<-1$.
\end{theorem}
\begin{proof}
An important workhorse in the proof of this theorem is Lemma~\ref{surgery}. It gives bounds on how surgery along a nullhomologous curve changes $\rho^0$. We use this lemma to reduce the computation of $\rho^0(L_x)$ for every $x$ to a single computation for a simpler link. The proof of the lemma is easy and we leave it to the end.
\begin{lemma}\label{surgery}
If a link $L'$ is given by performing $+1$ surgery on $L$ along a nullhomologous curve in the complement of $L$ then $\rho^0(L')\le\rho^0(L)$.
\end{lemma}
As is depicted in Figure~\ref{fig:L3fromL2}, the link $L_x$ can be realized as $+1$ surgery along nullhomologous curves on $L_{x-1}$. Lemma~\ref{surgery} then implies that for $x>2$ \begin{equation}\label{Lx < L2}\rho^0(L_x)\le\rho^0(L_{x-1})\le\dots \le \rho^0(L_2).\end{equation}
\begin{figure}[h]
\setlength{\unitlength}{1pt}
\begin{picture}(180,160)
\put(-90,0){\includegraphics[width=1\textwidth]{L3fromL2.pdf}}
\put(36,58){\includegraphics[width=.06\textwidth]{box.pdf}}
\put(42,65){\SMALL{$+1$}}
\put(235,63){\includegraphics[width=.07\textwidth]{box.pdf}}
\put(241,70){$+1$}
\end{picture}
\caption{If one performs $+1$ surgery on the depicted nullhomologous curves, then the link depicted is $L_{x+1}$. If the surgery curves are erased, the link is $L_x$ ($x=2$).}\label{fig:L3fromL2}
\end{figure}
The link $L_2$ is realized in Figure~\ref{fig:L2} as $+1$ surgery on the unlink along four nullhomologous curves. Consider the link $L'$ depicted in Figure~\ref{fig:L'} obtained by performing only two of these four surgeries. By Lemma~\ref{surgery}, \begin{equation}\label{L2 < L'}\rho^0(L_2)\le\rho^0(L').\end{equation}
\begin{figure}[ht]
\setlength{\unitlength}{1pt}
\begin{picture}(100,140)
\put(-90,0){\includegraphics[width=.8\textwidth]{L2.pdf}}
\end{picture}
\caption{The link $L_2$ as the result of $+1$ surgery on the unlink along nullhomologous curves.}\label{fig:L2}
\setlength{\unitlength}{1pt}
\begin{picture}(100,140)
\put(-90,0){\includegraphics[width=.8\textwidth]{L_.pdf}}
\put(200,45){$\gamma_1$}
\put(165,55){$\gamma_2$}
\end{picture}
\caption{The link $L'$ as the result of $+1$ surgery on the unlink along fewer curves than $L_2$.}\label{fig:L'}
\end{figure}
\begin{figure}[h]
\setlength{\unitlength}{1pt}
\begin{picture}(130,140)
\put(-90,0){\includegraphics[width=.8\textwidth]{alsoL_.pdf}}
\put(200,45){$\gamma_1$}
\put(200,33){$\gamma_2$}
\end{picture}
\caption{The link $L'$ after an isotopy.}\label{fig:alsoL'}
\end{figure}
The diagram for $L'$ in Figure~\ref{fig:L'} is of $+1$ surgery on nullhomologous curves $\gamma_1$, $\gamma_2$ on the unlink. Using this fact we build a 4-manifold with $M(L')$ as its boundary. Start with $V$, the boundary connected sum of two copies of $S^1\times B^3$. The boundary of $V$ is zero surgery on the two component unlink. Thinking of $\gamma_1$ and $\gamma_2$ as sitting on $\ensuremath{\partial} V$, attach two-handles to their $+1$ framings and call the resulting manifold $W$. The inclusion induced map from $H_1(M(L'))$ to $H_1(W)$ is a monomorphism so \begin{equation}\label{W computes L'}\rho^0(L')=\sigma^{(2)}(W,\phi)-\sigma(W),\end{equation} where $\phi$ is the abelianization map $\pi_1(W)\to H_1(W)$.
The untwisted intersection matrix of $W$ is given by the $2\times2$ identity matrix so $\sigma(W)=2$.
\begin{figure}[h]
\setlength{\unitlength}{1pt}
\begin{picture}(180,140)
\put(-50,0){\includegraphics[width=.4\textwidth]{link_B_zoom.pdf}}
\put(80,-10){\includegraphics[width=.4\textwidth]{link_B_handleslide_zoom.pdf}}
\put(70,65){$\cong$}
\put(-37,55){$+1$}
\put(-37,95){$+1$}
\put(90,55){$+2$}
\put(87,95){$+1$}
\put(57,100){$\gamma_1$}
\put(57,50){$\gamma_2$}
\put(185,100){$\gamma_1$}
\put(150,45){$\gamma_2'$}
\end{picture}
\caption{Left: A closer view of the surgery curves in the diagram for $L'$ in Figure~\ref{fig:alsoL'}. Right: A diagram for $L'$ obtained by performing a handle slide.}\label{fig:alsoL'zoom}
\end{figure}
Consider the result of performing the handle slide depicted in Figure~\ref{fig:alsoL'zoom} on the diagram in Figure~\ref{fig:alsoL'} (an isotopy of the diagram in Figure~\ref{fig:L'}). The resulting diagram for $W$ is of a 4-manifold gotten from $V$ by first gluing a 2-handle to the $+2$ framing of a null-homotopic curve $(\gamma_2')$ and then another to a curve which is non-torsion in $H_1(V;\mathbb{Q}[\mathbb{Z}^2])$ $(\gamma_1)$. The second of these additions affects neither second homology nor the twisted intersection form. Let $W'$ be the 4-manifold gotten by gluing a two handle to $V$ along the $+2$ framing of $\gamma'_2$. By the above, \begin{equation}\label{W equals W'}\sigma^{(2)}(W,\phi)=\sigma^{(2)}(W',\phi).\end{equation} A Kirby diagram for $W'$ is given in Figure~\ref{fig:W'}. A convenient isotopy of the diagram is given in Figure~\ref{fig:W'isotope}.
\begin{figure}[h]
\setlength{\unitlength}{1pt}
\begin{picture}(180,120)
\put(-20,-50){\includegraphics[width=.7\textwidth]{link_B_.pdf}}
\put(25,50){$+2$}
\put(80,50){$\gamma_2'$}
\put(25,15){$\bullet$}
\put(25,83){$\bullet$}
\end{picture}
\caption{A Kirby diagram for $W'$.}\label{fig:W'}
\setlength{\unitlength}{1pt}
\begin{picture}(180,140)
\put(-20,-10){\includegraphics[width=.7\textwidth]{link_B_isotope.pdf}}
\put(150,120){$+2$}
\put(120,105){$\gamma_2'$}
\put(5,53){$\bullet$}
\put(85,53){$\bullet$}
\end{picture}
\caption{The diagram in Figure~\ref{fig:W'} after an isotopy. It is convenient that the $+2$ pushoff of $\gamma_2'$ is the blackboard pushoff in this diagram.}\label{fig:W'isotope}
\end{figure}
Notice that while $\gamma_2'$ does not bound an embedded disk in $\ensuremath{\partial} V$, its lift to the abelian cover of $\ensuremath{\partial} V$ does. This disk $D$ is depicted in Figure~\ref{fig:isect'n}. The twisted second homology of $W'$ is $\mathbb{Q}[t^{\pm1},s^{\pm1}]$ generated by the 2-sphere $S$ given by the core of the 2-handle glued to $\gamma_2'$ together with $D$. The twisted intersection matrix of $W'$ is given by the equivariant self intersection of $S$. The sphere $S$ has a pushoff which crosses $S$ exactly where $D$ intersects the lifts of the $+2$ pushoff of $\gamma_2'$. Thus, the self intersection matrix of $S$ is given by counting the number of times (with coefficients) that $D$ intersects $\gamma_2'$. These intersection points are depicted in Figure~\ref{fig:isect'n}.
\begin{figure}[t]
\setlength{\unitlength}{1pt}
\begin{picture}(180,190)
\put(-90,45){\includegraphics[width=.9\textwidth]{link_B_isotopesurface.pdf}}
\put(20,50){$\gamma'_2$}
\put(20,155){$t \gamma'_2$ }
\put(210,50){$t^{-1}\gamma'_2$}
\put(210,155){$\gamma'_2$}
\put(-78,27){The portion of $D$ sitting in}
\put(-78,14){sheet 1 of $\widetilde{V}$ contributes $+t$ }
\put(-78,1){to the self intersection of $S$.}
\put(112,27){The portion of $D$ sitting in}
\put(112,14){sheet $t$ of $\widetilde{V}$ contributes $+t^{-1}$ }
\put(112,1){to the self intersection of $S$.}
\end{picture}
\caption{The lifts of the $+2$ push-off of $\gamma_2'$ in the cover of ${V}$ together with a disk, $D$, bounded by a lift of $\gamma_2'$.}\label{fig:isect'n}
\end{figure}
By counting these intersection points, the twisted signature of $W'$ is equal to the $L^2$ signature of the $1\times 1$ matrix, $[t+t^{-1}]$. A Fourier transform (see \cite[example 1.15]{L2algebraic}) sends this matrix to the $1\times1$ matrix $[z+z^{-1}]$ over $L^2(S^1)$ ($S^1$ is the unit circle in $\mathbb C$ with normalized Lebesgue measure). The signature of this matrix is equal to the measure of the subset of $S^1$ on which \begin{equation}z+z^{-1} = z+\overline z = 2\operatorname{re}(z)\end{equation} is positive minus the measure of the subset on which it is negative. These sets have equal measure and so \begin{equation}\label{signature of W'}\sigma^{(2)}(W';\phi)=0.\end{equation}
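The measure computation in this last step can be sanity-checked numerically: $2\operatorname{re}(z)=2\cos\theta$ is positive on exactly half of the circle, so the signed Riemann sum below should vanish up to boundary effects. (This numerical sketch is an editorial illustration, not part of the proof.)

```python
import math

# Signed Riemann sum of sign(2 cos theta) over the unit circle with
# normalized Lebesgue measure; the positive and negative sets each have
# measure 1/2, so the sum vanishes up to boundary terms.
N = 100000
count = sum(1 if math.cos(2 * math.pi * k / N) > 0 else -1 for k in range(N))
assert abs(count) <= 2
```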
Combining equations (\ref{Lx < L2}), (\ref{L2 < L'}), (\ref{W computes L'}), (\ref{W equals W'}) and (\ref{signature of W'}), for each $x\ge2$
\begin{equation}
\begin{array}{rl}
\rho^0(L_x)&\le\rho^0(L_2)\le\rho^0(L')=\sigma^{(2)}(W,\phi)-\sigma(W)\\&=\sigma^{(2)}(W',\phi)-\sigma(W)=0-2<-1
\end{array}
\end{equation} and the proof is complete.
\end{proof}
\subsection{Proof of Lemma~\ref{surgery}}
We now state and prove a marginally stronger version of Lemma~\ref{surgery}. As promised, it is not difficult to prove.
\begin{lemma}\label{surgery stronger}
\begin{enumerate}
\item
If a link $L'$ is given by performing $+1$ surgery on $L$ along a nullhomologous curve $\gamma$ then $\rho^0(L')\le\rho^0(L)\le \rho^0(L')+2$.
\item
If $L'$ is given by performing $-1$ surgery on $L$ along a nullhomologous curve then $\rho^0(L')-2\le\rho^0(L)\le \rho^0(L')$.
\end{enumerate}
\end{lemma}\begin{proof}
Let $W$ be the 4-manifold obtained by adding to $M(L)\times[0,1]$ a 2-handle along the $+1$ framing of $\gamma$ in $M(L)\times\{1\}$. $\ensuremath{\partial}(W)$ is given by $-M(L)\sqcup M(L')$ and both inclusions induce first homology isomorphisms. The intersection form on $H_2(W)/H_2(\ensuremath{\partial} W)$ is given by the $1\times 1$ matrix whose only entry is $1$ so $W$ has regular signature $\sigma(W)=1$.
Thus, \begin{equation}\label{surgery equation}\rho^0(L')-\rho^0(L) = \sigma^{(2)}(W,\phi)-1,\end{equation} where $\phi:\pi_1(W)\to H_1(W)$ is the abelianization map. $(W,\ensuremath{\partial}_-W)$ has only one 2-handle so \begin{equation}|\sigma^{(2)}(W,\phi)|\le 1.\end{equation} Rearranging~(\ref{surgery equation}) and applying this bound, \begin{equation}\left|\rho^0(L')-\rho^0(L)+1\right|\le 1\end{equation} and $\rho^0(L')\le\rho^0(L)\le \rho^0(L')+2$, completing the proof of the first claim.
The proof of the second claim is identical except that the signature of the bounded 4-manifold is $-1$.
\end{proof}
We close with some related questions which we would like to address.
\begin{question}
What about the case $x=1$? The $-3$ twist knot has infinite order in the concordance group (proven in \cite[Corollary 1.2]{Tamulis} using Casson--Gordon invariants). If one follows the technique of this paper in the case $x=1$, one finds that $\rho^0(L_1)=-1$, so $\rho^1(T_{-3})\in[-1,0]$. Is there another choice of metabolizing link with $\rho^0<-1$? Is it the case that $\rho^1(T_{-3})= 0$?
\end{question}
\begin{question}
There are many more twist knots of algebraic order 2 whose concordance orders are unknown. If one can find a derivative link for the connected sum of each such twist knot with itself and can compute the associated $\rho^0$-invariant, then one will have bounds on the $\rho^1$-invariant of the twist knots. Presumably these bounds will imply that most of the twist knots have non-vanishing $\rho^1$-invariant. If one can do this then one will have shown that most of the twist knots form a linearly independent subset of the concordance group.
\end{question}
\bibliographystyle{plain}
{\em Discussion}:
In our picture, the domain walls resemble quantum Hall edges. They
support currents, alternating in direction from wall to wall, in the
ground state.
Our calculation has been highly idealized, of course. For one thing,
we have ignored any possibility of energy intrinsically associated
with the fictitious field (other than the implicit constraint
enforcing values $\pm B$). While this approximation is in the spirit
of fictitious Chern-Simons fields, or of fields implementing
constraints, which have no independent dynamics, in the absence of a
detailed microscopic model we cannot assess its accuracy. Similarly,
we have been shamelessly opportunistic in maneuvering back and forth
between lattice and continuum. With all due reserve, it nevertheless
seems appropriate to mention that the sort of model discussed here is
remarkable and virtually unique, as far as we are aware, in predicting
a non-trivial preferred dopant density along the stripes. Moreover,
the preferred numerical value, which has emerged from a dynamical
calculation containing no disposable continuous parameters, is
consistent with the observed one.
{\em Acknowledgments:} We thank M.~M. Fogler and A. Zee for helpful
discussions. Research supported in part by DOE grant
DE-FG02-90ER40542.
\section{Introduction}
The importance of investigating the Early Universe through measurements of the Cosmic Microwave Background Polarization (CMBP) is widely accepted by the astrophysical community. By measuring the angular power spectra (APS) of the CMB it is possible to study the inflationary era, scalar (density) and tensor (gravitational-wave) primordial perturbations, and the formation of the first stars and galaxies \cite{zss97}\cite{kinney99}\cite{cen03}, as well as their implications for High Energy Physics.
Despite their challenging nature, at least with presently available technology, measurements of CMBP offer the strong possibility of providing truly remarkable new insights into Cosmology and Fundamental Physics.
The importance of tools such as the $E$-mode and $B$-mode APS has been discussed by other authors, including elsewhere in this volume, and will not be covered here. Nor will the current status of CMBP experiments be reviewed; the starting point is rather the overall picture provided by their results as reported in the literature.
A plethora of experiments attempting to measure the $E$-mode APS are already operating or in progress, and we may assume that they will succeed. Updated references can be found in \cite{montroy05}, which reports the most recent result in the field. An almost complete list of suborbital experiments aimed at CMB investigations can be found at the web site http://lambda.gsfc.nasa.gov/product/suborbit/su\_experiments.cfm.
However, the full characterization of the $E$-mode APS may still be beyond the capabilities of these experiments, which are constrained by their intrinsic limits to making only statistical detections at some angular scales. A first significant result towards the full characterization of the $E$-mode APS is expected from the Planck ESA satellite after 2007, unlikely before.
As in the case of CMB anisotropy (and the black-body spectrum), whose measurements began a few decades ago, the early experimental approaches rely on instruments derived from a previous generation. The well-known history, in fact, is that the first attempts to detect the anisotropy were made with radioastronomical receivers, before it was understood that new designs and techniques were fundamental requirements for success. This was mainly due to the difficulty of removing both systematics and foregrounds at the required level, which differs somewhat from the radioastronomical one.
For CMBP the situation looks similar: present CMBP experiments, in fact, are based on receivers previously designed for measuring the anisotropy, even though using state-of-the-art components. Even the foreground removal techniques were initially based on data taken in total intensity, and the knowledge of polarized foregrounds is limited to surveys done at frequencies far from the 70--100 GHz window, which looks suitable for CMBP observations. More recently the study of polarized Galactic emission has been boosted by polarization observations of some low-emission areas, from which important implications for CMBP measurements have been derived \cite{CPR06} \cite{CBC06}. Thus, we should expect that a significant step towards the full characterization of the CMBP $E$-mode shall require:
\begin{itemize}
\item{} Dedicated instrument design
\item{} Dedicated observing strategies
\item{} Dedicated foreground removal techniques
\end{itemize}
The way to measure the $B$-mode and its APS, which is a factor of 10--40 smaller
(Fig.~\ref{cbFig}) than the $E$-mode, looks definitely harder. At the moment
no experiments aimed at CMBP $B$-mode detection are operating at all, and we
expect this situation to continue at least for the next few years.
\begin{figure}
\centering
\includegraphics[angle=0, width=10cm]{cb.ps}
\caption{The CMB angular power spectra: anisotropy (red), $E$-mode (blue) and $B$-mode polarization (others). The latter is plotted for different $T/S$ values.}
\label{cbFig}
\end{figure}
This paper is aimed at giving a contribution to the discussion of the first item in the list above, that is, the leading design criteria for a new generation of instruments dedicated to investigating the CMBP $B$-mode. The starting point will be the analysis already done to design the receivers of the SPOrt \cite{CTAL04} and BaR-SPOrt \cite{CTAL03} experiments, which represent the first attempt to realize instruments expressly designed to measure the $Q$ and $U$ Stokes parameters.
Section 2 will review the main receiver architectures that can be used for CMBP experiments. Section 3 will then introduce the instrument systematics, which are expected to play the major role in the race to measure the CMBP $B$-mode. Section 4 will compare the different design architectures with respect to systematics contamination and will allow us to draw some conclusions.
\section{Measuring the Polarization of the CMB}
The polarization of the CMB can be expressed by the $Q$ and $U$ Stokes parameters, which describe the linear component. The circular component $V$, in fact, can be considered negligible when the polarization is generated by Thomson scattering, as in the case of the CMB. The foregrounds, Galactic synchrotron and dust, are also considered to be only linearly polarized.
In principle, there exist three main ways to obtain the $Q$ and $U$ Stokes parameters:
\begin{enumerate}
\item{} By correlation of the two circularly polarized components:
\begin{eqnarray}
Q = \Re(E_R E_L^*); \\
U = \Im(E_R E_L^*).
\label{circCorrEq}
\end{eqnarray}
It has the advantage of measuring both $Q$ and $U$ simultaneously, thus providing a 100\% time efficiency.
\item{} By correlation of the two linearly polarized components:
\begin{eqnarray}
U = \Re(E_x E_y^*).
\label{linCorrEq}
\end{eqnarray}
It is similar to the previous scheme, but measures only $U$. $Q$ is obtained by rotating the reference frame by 45$^\circ$, that is, by rotating the same receiver or by using another (independent) receiver. The time efficiency drops to 50\%.
\item{} Either by direct or off-line difference of the two linearly polarized components:
\begin{eqnarray}
Q = {|E_x|^2 - |E_y|^2 \over 2}.
\label{diffEq}
\end{eqnarray}
Similarly to the previous method, it provides just one parameter at a time, and the time efficiency is again 50\%.
\end{enumerate}
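The equivalence of these three extraction schemes can be illustrated with a short numerical sketch; the simulated field, its polarized fraction, and the sample size are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy partially polarized signal: many samples of the transverse field.
n = 100_000
Ex = rng.normal(size=n) + 1j * rng.normal(size=n)
Ey = 0.3 * Ex + 0.5 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Scheme 1: correlation of the circular components E_R, E_L.
ER = (Ex + 1j * Ey) / np.sqrt(2)
EL = (Ex - 1j * Ey) / np.sqrt(2)
corr_RL = np.mean(ER * np.conj(EL))
Q1, U1 = corr_RL.real, corr_RL.imag

# Scheme 2: correlation of the linear components (gives U only).
U2 = np.mean(Ex * np.conj(Ey)).real

# Scheme 3: difference of the linear components (gives Q only).
Q3 = np.mean(np.abs(Ex) ** 2 - np.abs(Ey) ** 2) / 2

print(Q1, Q3)  # the two Q estimates agree
print(U1, U2)  # the two U estimates agree
```

Sample by sample, $\Re(E_R E_L^*)$ equals $(|E_x|^2-|E_y|^2)/2$ and $\Im(E_R E_L^*)$ equals $\Re(E_x E_y^*)$, so the agreement is exact up to rounding.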
These options are summarized in Table~\ref{archTab}.
\begin{table}
\centering
\caption{The Stokes parameters detected and time efficiency are summarized for different architectures.}
\begin{tabular}{@{}clcclc@{}}
& & & & & \\
& {\bf Architecture} & {\bf Stokes par.s} & {\bf Time Efficiency} & & \\
& & {\bf detected} & & & \\
\hline
& & & & & \\
& Correlation: & $Q$, $U$ & 100\% & & \\
& Circular Polarizations & & & & \\
& & & & & \\
& Correlation: & $U$ & 50\% & & \\
& Linear Polarizations & & & & \\
& & & & & \\
& Difference of Linear & $Q$ & 50\% & & \\
& Polarizations & & & & \\
& & & & & \\
& Difference of Linear & $Q$ & 50\% & & \\
& Polarizations (Off-line)& & & & \\
\hline
\end{tabular}
\label{archTab}
\end{table}
An important consideration is that all of these schemes, in principle, can use both
HEMTs and bolometric sensors. In fact, even the correlation schemes,
which are normally thought to be specific to coherent (HEMT-based) receivers,
can be coupled to bolometers.
These would be used as diodes providing the square-law detection
of the signal after the correlation unit (Figure~\ref{archFig}).
\begin{figure}
\centering
\includegraphics[angle=0, width=10cm]{hemt_bol.eps}
\caption{A correlation architecture using either HEMTs (left) or bolometers (right).}
\label{archFig}
\end{figure}
Figure~\ref{archFig} reports an example of how the same correlation scheme
can adopt either HEMTs or bolometers.
In the first case the signal is
amplified by HEMTs, after the OMT has separated the two components, and then is correlated by the
correlation unit. The diodes provide the square-law detection of the signals.
Low-noise (HEMT) amplifiers are necessary here because of the noise generated by
the diodes.
In the second case, the signal exiting the OMT is
first correlated (without pre-amplification) and then detected
by low-noise bolometers, which thus act as diodes.
These considerations are highly relevant when defining the baselines for the design of a
CMBP instrument. In particular we can state that:
\begin{itemize}
\item The overall architecture can be chosen almost independently of the
sensors to be used, leaving the designer free to optimize the architecture with
respect to the purity of $Q$ and $U$ measurements. Any decision about the sensor type shall be driven by other criteria.
\item The sensor choice will be only marginally influenced by
the architecture design, and other important parameters like sensitivity,
cost, availability, overall thermal design will drive the sensor choice.
\end{itemize}
Under these assumptions, a discussion of the systematics
generation and of the purity of polarization measurements for
the most common architectures is in order.
\section{Instrument systematics and their effects in CMB Polarization measurements}
As introduced in Section 1, the measurement of $B$--modes,
for which even a 0.1\% leakage from the Temperature
anisotropy might wash out any detection, requires instruments with extremely low contamination.
One should also note that the contamination from the anisotropy term
is not easily removed by destriping techniques, its level
not being constant over the scans
performed to observe the sky (e.g. see
\cite{RKA00}, \cite{SCC03} and references
therein for descriptions of destriping techniques), so that
an $intrinsically$ clean instrument is necessary to avoid complex data reduction.
A clear understanding of the contamination
from the unpolarized background introduced by
the instrument itself is thus mandatory to properly design instruments.
Several works have been written on this subject (e.g.
\cite{CTC01}\cite{LYH02}\cite{DELA02}\cite{MBB02}\cite{FFT03}\cite{PTAL03}\cite{CCS04})
and the leading effects identified so far can be divided into:
\begin{itemize}
\item Off-axis instrumental polarization. It is generated by the optics and is independent
of the architecture adopted for the receiver;
\item On-axis instrumental polarization. It is generated by the receiver
itself, in particular by the other components of the antenna assembly (e.g. OMT),
and by the correlation devices;
\item Systematic errors from thermal fluctuations. These are secondary systematics due
to fluctuations of the offset generated by the on-axis instrumental polarization.
\end{itemize}
\subsection{Optics}
The optical properties of an antenna are described by the co-polar $g(\theta,\varphi)$
and cross-polar $\chi(\theta,\varphi)$ patterns. Both of them
are defined for the two polarizations, so that one deals with the four
functions $g_x$, $g_y$, $\chi_x$, $\chi_y$.
$g$ describes the capability of the optics to collect the wanted polarization as a
function of the direction in the sky. For instance, $g_x(\theta, \varphi)$ describes how
the $E_x(\theta, \varphi)$ electric field is collected into
the polarization state $X$ of the receiver.
The cross--polarization $\chi$, instead, measures how the other (wrong)
polarization is gathered, introducing thus a contamination.
For example, $\chi_x$
measures how much $E_y$ enters the polarization $X$ of the receiver.
The instrumental polarization, defined as the polarization signal generated
by this device in presence of a pure unpolarized signal,
is instead described by a third antenna pattern: the instrumental
polarization pattern $\Pi$ (for details see \cite{CCS04}).
It is a combination of $g$ and $\chi$ and its expression is given by
\begin{equation}
\Pi(\theta, \phi) =
\Pi_Q(\theta, \phi)+j \,\,\Pi_U(\theta, \phi)
\label{piEq}
\end{equation}
with
\begin{eqnarray}
\Pi_Q
&=& { |g_x|^2 + |\chi_x|^2
-|g_y|^2 - |\chi_y|^2
\over 2}, \label{fQeq} \\
\Pi_U
&=& \Re\,(g_x \chi_y^* +
g_y \chi_x^*).
\label{fUeq}
\end{eqnarray}
This result depends only on the optics features and applies
to any receiver scheme (correlation, difference between the
linear polarizations).
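As a toy illustration of Equations (\ref{fQeq}) and (\ref{fUeq}), the sketch below evaluates $\Pi$ in a single sky direction; the pattern values are made-up placeholders, not measured or simulated optics data.

```python
import numpy as np

# Toy co-polar and cross-polar amplitudes in one sky direction
# (placeholder values; real patterns come from electromagnetic
# simulation or measurement of the optics).
g_x, chi_x = 1.00 + 0.0j, 0.01 + 0.005j
g_y, chi_y = 0.99 + 0.0j, 0.012 - 0.004j

# Instrumental polarization pattern, Pi = Pi_Q + j Pi_U:
Pi_Q = (abs(g_x) ** 2 + abs(chi_x) ** 2 - abs(g_y) ** 2 - abs(chi_y) ** 2) / 2
Pi_U = (g_x * np.conj(chi_y) + g_y * np.conj(chi_x)).real

Pi = Pi_Q + 1j * Pi_U
print(abs(Pi))  # magnitude of the spurious polarization response
```

Even a 1\% co-polar imbalance dominates over the cross-polar cross terms here, which is why gain matching matters as much as cross-polarization in this kind of budget.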
An example of this pattern is given in Figure~\ref{sportPIFig},
where the case of the SPOrt experiment is reported.
\begin{figure*}
\centering
\includegraphics[angle=0, width=0.49\hsize]{PiIp.eps}
\includegraphics[angle=0, width=0.49\hsize]{PiAng.eps}
\includegraphics[angle=0, width=0.49\hsize]{PiQ.eps}
\includegraphics[angle=0, width=0.49\hsize]{PiU.eps}
\caption{$\Pi$ pattern for the 90~GHz horn of the SPOrt experiment: $|\Pi|$ (top left),
polarization angle (top right), $\Pi_Q$ (bottom left) and $\Pi_U$
(bottom right). It is worth noting the axisymmetry of $\Pi$ and
the tangential pattern of the polarization angles.(From \cite{CCS04})}
\label{sportPIFig}
\end{figure*}
$\Pi$ describes just the response of the system to a single point source.
The contamination in the general case is instead given by its convolution
with the unpolarized field $T_b(\theta, \varphi)$ of the sky emission.
In the antenna reference frame and in brightness temperature units the
equation reads (see again \cite{CCS04})
\begin{eqnarray}
Q_{\rm sp} + j\,U_{\rm sp} &=&
{1 \over 4 \pi} \int_\Omega T_b(\theta, \phi) \,\, \Pi(\theta,\phi)
\,d\Omega. \label{tspZeq}
\end{eqnarray}
where $Q_{\rm sp}$ and $U_{\rm sp}$ are the contaminations on the two
linear Stokes parameters.
In general, evaluating this convolution and its effects
on CMBP measurements requires a numerical approach. However,
the special case of axisymmetric antennae allows some simplifications
which lead to an analytical solution and, in turn, help in
understanding the main characteristics of this contaminant.
Moreover, a few important classes of antennae, like circular horns and
on-axis mirror optics, belong to this special case,
whose analysis is thus important in itself.
In fact, the $\Pi$ pattern
of Equation~(\ref{piEq}) becomes
\begin{equation}
\Pi = \Pi_Q + j\Pi_U
= {|g_0(\theta)|^2 - |g_{\pi/2}(\theta)|^2 \over 2}\,e^{j\,2\phi}
\label{symmfeq}
\end{equation}
where $g_0(\theta)$ and $g_{\pi/2}(\theta)$ are the cuts of the ${g_x(\theta, \phi)}$ pattern at $\phi=0$ and $\phi=\pi/2$, respectively. The intensity depends only on the angular distance $\theta$ from
the axis, and the polarization angle has a radial (or tangential) pattern
with respect to the beam main axis
\begin{eqnarray}
\alpha
&=& 0.5 \arctan \;{U_{\rm sp} \over Q_{\rm sp}} \nonumber\\
&=& \left\{
\begin{array}{ll}
\varphi & {\rm for\,|g_0|^2 > |g_{\pi/2}|^2}\\
\varphi+90^\circ & {\rm otherwise}.\\
\end{array}
\right.
\end{eqnarray}
An example is given by the circular feed horn of
SPOrt reported in Figure~\ref{sportPIFig}: the $\Pi$ intensity has
axial symmetry, while the polarization angle has a tangential pattern,
perpendicular to the radial directions.
First of all, we observe that the maximum sensitivity is not at the centre
of the observed field, but along an off-axis ring about one FWHM across.
Thus, we are dealing with an off-axis instrumental polarization.
Then, $\Pi_Q$ and $\Pi_U$ have quadrilobe patterns with both
positive and negative lobes. This makes the contamination sensitive
only to the anisotropy pattern, since the contributions of the lobes
due to the mean emission cancel each other out.
This important feature makes the contamination proportional
only to the anisotropy radiation, much fainter than the mean CMB emission.
Finally, even more important is the radial structure of $\Pi$, which is
typical of an $E$--mode.
This is confirmed by the computation of its polarized angular power spectra
\begin{eqnarray}
C_{E\ell}^\Pi &=& {1\over 2\ell+1} |a_{2,\ell0}^\Pi|^2,\nonumber\\
C_{B\ell}^\Pi &=& 0. \label{fspeceq}
\end{eqnarray}
which show no power in the $B$--mode ($a_{2,\ell m}^\Pi$ are the spin-2 spherical
harmonic coefficients of $\Pi$). As an example, the $\Pi$ spectra for
the 90~GHz SPOrt antenna
are reported in Figure~\ref{sportPIPSFig}.
It is worth noting that the $E$--mode has its maximum power
on the FWHM scale and rapidly decreases on larger angular scales: at $\ell = 2$
(90$^\circ$ scale) its value is 8 orders of magnitude lower than that of the
co-polar pattern.
This interesting feature concentrates the maximum contamination on the smallest angular scale
accessible to the optics, while leaving the largest ones cleaner.
\begin{figure*}
\centering
\includegraphics[angle=0, width=0.49\hsize]{sportE_PI_PS.ps}
\caption{$E$--mode power spectrum of the $\Pi$ pattern
for the 90~GHz feed horn of the SPOrt experiment (from \cite{CCS04}).
The spectrum $W_\ell$ of the
co-polar pattern is also reported for comparison.}
\label{sportPIPSFig}
\end{figure*}
This lack of power in the $B$--mode
is reflected in the effects generated on the final map of the experiment.
As described by Equation~(\ref{tspZeq}), the contamination map
is the convolution between the
Temperature map and the $\Pi$ pattern function.
An efficient way to evaluate the impact on CMBP is
the computation of the power spectrum, which represents
the most relevant quantity for CMBP purposes.
In general no simple analytic solution exists,
and each optics configuration needs a numerical evaluation.
The axisymmetric case, however,
once again allows an exact solution, giving an idea of
what happens in the general case as well
(again, see \cite{CCS04}).
In fact, in the axisymmetric case a sort of convolution theorem holds,
and the power spectrum of the contamination
is the product of the scalar spectrum of the Temperature
map and the $E$-- and $B$--mode spectra of the $\Pi$ pattern
\begin{eqnarray}
C_{E\ell}^{T\otimes \Pi} &=& {1 \over 4\pi} \, C_{E\ell}^\Pi \, C_{T\ell},\\
C_{B\ell}^{T\otimes \Pi} &=& {1 \over 4\pi} \, C_{B\ell}^\Pi \, C_{T\ell} = 0.
\label{cecbeq}
\end{eqnarray}
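These relations reduce the leakage estimate to a per-$\ell$ multiplication, as the sketch below shows; the spectral values are illustrative placeholders, not the actual SPOrt numbers.

```python
import numpy as np

# Leakage of the Temperature spectrum into the polarized components
# through an axisymmetric Pi pattern.  All spectral values are
# illustrative placeholders, not measured instrument data.
ells = np.array([2, 50, 200])
C_T = np.array([1.0e3, 2.0e3, 5.0e3])        # toy Temperature spectrum [uK^2]
C_E_Pi = np.array([1.0e-8, 1.0e-5, 1.0e-4])  # toy Pi-pattern E-mode spectrum

C_E_leak = C_T * C_E_Pi / (4.0 * np.pi)      # E-mode contamination spectrum
C_B_leak = np.zeros_like(C_E_leak)           # exactly zero: axisymmetric optics

for ell, c in zip(ells, C_E_leak):
    print(ell, c)
```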
Hence, $C_{E\ell}^\Pi$ and $C_{B\ell}^\Pi$ allow a quick evaluation of
the effects of the $T_b$ leakage: their values directly give
the leakage of the $C^T$ spectrum into the polarized components.
An example is given in Figure~\ref{sportContPSFig}, where the spectra
of the contamination map for the SPOrt case are compared to the expected CMBP
signal.
\begin{figure*}
\centering
\includegraphics[angle=0, width=0.49\hsize]{sportE_Cont_PS.ps}
\caption{$E$--mode power spectrum of the contamination map
from optics instrumental polarization for the SPOrt experiment (solid). The expected spectrum of CMBP is reported for comparison (dashed)}
\label{sportContPSFig}
\end{figure*}
A very important result for CMBP $B$-mode experiments is that
axisymmetric systems, having no $B$-mode component in their $\Pi$ patterns,
show no contamination of this CMBP component, thus representing
the best choice for such experiments.
However, the faint CMBP signal requires many receivers in the focal plane to reach
the needed sensitivity and, apart from the central receiver, they will
not be in such an optimal condition even with on-axis optics.
Therefore, some contamination of the $B$-mode will necessarily occur, and
the optics must be carefully designed to keep the leakage
into this component under control.
However, as a representative example for $B$-mode experiments, we
report a summary of the analysis carried out for the BaR-SPOrt case. The leading part of the CMBP
$B$-mode emission is at degree scales, and an experiment aimed at
measuring it requires an angular resolution of at least FWHM~$= 0.5^\circ$.
BaR-SPOrt, with its $0.4^\circ$ resolution and a
design already oriented to the CMBP signal (even if conceived
for the $E$-mode only), allows a reliable evaluation of what
is achievable with present technology.
Actually, BaR-SPOrt has optics with very good cross-polarization
(about 40--45~dB for the feed horn and much better for the mirror optics),
and its performance provides a good
indication of what is achievable to date for instruments with
sub-degree resolution.
Figure~\ref{barsportPSFig} shows the power spectrum of
the contamination map.
\begin{figure*}
\centering
\includegraphics[angle=0, width=0.5\hsize]{contSpecBar.ps}
\caption{$E$-mode spectrum of the optics instrumental polarization
for the BaR-SPOrt experiment ($C^{T\cdot\Pi}$). The Temperature
($C^{T}$), $E$--mode ($C^{E}$) and $B$--mode ($C^{B}$) power spectra
of the cosmological model in agreement with the WMAP data are also
reported for comparison. For the $B$--mode a $T/S=0.03$ ratio has been assumed.}
\label{barsportPSFig}
\end{figure*}
Since BaR-SPOrt has an on-axis configuration, the results on the $B$--mode
contamination are obviously optimal (no contamination at all); at the
same time, this is not representative of off-axis systems, for which
the $\Pi$ pattern is a mixture of both $E$ and $B$ contributions.
However, the contamination of the other component
($E$-mode) is an estimate of the whole achievable pollution and can
therefore be used as a worst-case indication of the contamination of the
$B$-mode in off-axis systems, where the global contamination
is a mixture of the two modes.
In this frame, the contamination of the optics on the $B$-mode seems to
leave the $\ell = 100$ peak free even for a low $T/S$ value, indicating that a
cross-polarization of 40--45~dB should allow us to
keep this contaminant under control.
This source of contamination can be minimized by taking care of the optics design;
moreover, this result tells us that present technology should already have
the required maturity.
\subsection{Receiver}\label{recInstPolSec}
The signal collected through the antenna is elaborated and
detected by the receiver. Sometime, as in the case of HEMT-based receivers, is amplified before detection. Since the receiver must provide the Stokes
parameters $Q$ and $U$, it should have zero output when the signal entering
the antenna is unpolarized.
In real cases, however, the receiver introduces some
instrumental correlation between the two polarizations, producing non-zero outputs.
Differently from the optics, this part of the instrument
deals with the total power collected by the antenna independently of the incoming
direction, thus generating an on-axis instrumental polarization.
This spurious signal acts as an offset and can be seen as a correlation
of the unpolarized signal generated by the receiver itself.
Although this offset, if constant, can easily be removed by means of
destriping techniques, it amplifies the effect of the gain instabilities
responsible for the 1/$f$ noise of the detectors.
In fact, the radiometer sensitivity equation reads
\begin{eqnarray}
\sigma &=& T_{\rm sys}\; \sqrt{
{k \over \Delta\nu\;\tau} +
\left({T_{\rm offset} \over T_{\rm sys}}\right)^2
\left({\Delta G \over G}\right)^2
}
\end{eqnarray}
where $\Delta\nu$ is the bandwidth, $\tau$ the integration time,
$T_{\rm sys}$ the system noise temperature, $T_{\rm offset}$ the offset, $\Delta G / G$ the gain fluctuation and $k$ a receiver-dependent constant (e.g. $k=1$ and $k=1/2$ for total-power and correlation receivers, respectively).
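A quick numerical evaluation of this sensitivity equation, with illustrative parameter values rather than those of any specific instrument, shows how the offset-driven gain term can dominate:

```python
import numpy as np

def radiometer_sigma(T_sys, bandwidth, tau, T_offset, dG_over_G, k=1.0):
    """Radiometer sensitivity including the gain-fluctuation term (sketch)."""
    white = k / (bandwidth * tau)                     # ideal white-noise term
    gain = (T_offset / T_sys) ** 2 * dG_over_G ** 2   # offset-driven gain term
    return T_sys * np.sqrt(white + gain)

# Toy numbers: 10 GHz bandwidth, 1 s integration, correlation receiver (k = 1/2)
T_sys = 50.0   # K
bw = 10e9      # Hz
dG = 1e-4      # fractional gain fluctuation

# With a large offset the gain term dominates; with a small one the
# white-noise term does.
for T_off in (50.0, 0.05):
    print(T_off, radiometer_sigma(T_sys, bw, 1.0, T_off, dG, k=0.5))
```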
The first term is the white noise, which, decreasing with long
integration times, represents the ideal behaviour of the receiver noise.
The second term, instead, is due to the gain fluctuations,
which, worsening the ideal behaviour, can jeopardize the advantage
of long integration times.
Being driven by the offset, this term requires the instrumental polarization
to be kept as low as possible.
The total noise properties are better described by the noise power spectrum,
which reads
\begin{eqnarray}
P(f) &=& \sigma_0^2\;\left[ 1 + \left({f_{\rm knee} \over f}\right)^\alpha\right].
\end{eqnarray}
where $f$ is the frequency and $\sigma_0^2$ is the 1-second sensitivity.
The first term is constant and represents the white noise. The second
term is the contribution of the gain fluctuations and, following a power law
with index $\alpha \sim$ 1 \cite{WOL95}, is called 1/$f$ noise.
The most important parameter is the knee frequency $f_{\rm knee}$: the
frequency at which the 1/$f$ noise equals the white noise. It thus
defines the time scale on which the gain fluctuations become dominant, making
longer integration times useless.
Destriping techniques are typically able to remove instabilities on time scales
larger than $1/f_{\rm knee}$, which thus represents a time limit for the duration
of the scans covering the observed sky area.
Gain instabilities are typical of active devices like HEMTs and bolometers.
As already mentioned above,
a way to minimize the gain fluctuations is to reduce the receiver offset.
In fact, the $f_{\rm knee}$ of a receiver is related to that of its amplifiers
($f_{\rm knee}^{\rm LNA}$) by the formula
\begin{eqnarray}
f_{\rm knee} &=& \left({T_{\rm offset}\over T_{\rm sys}}\right)^{2/\alpha}\,
f_{\rm knee}^{\rm LNA}
\label{fkneeEq}
\end{eqnarray}
The present InP technology provides HEMTs with high sensitivity
but poor stability: typical values at 100~GHz are around
\begin{equation}
f_{\rm knee}^{\rm LNA}\sim 10^3\;{\rm Hz}.
\end{equation}
These high values require receivers with very low offset generation in order
to be compatible with the $\sim 100$~s scanning times of space experiments
(see Planck and WMAP). Considering Equation~(\ref{fkneeEq}), a ratio better than
\begin{eqnarray}
{T_{\rm offset}\over T_{\rm sys}} < 3\times10^{-3}
\label{totsysEq}
\end{eqnarray}
is recommended.
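Equation (\ref{fkneeEq}) can be evaluated directly with the HEMT figure quoted above and $\alpha=1$ (a back-of-the-envelope sketch):

```python
# Receiver knee frequency from the amplifier one, assuming a 1/f
# spectral index alpha = 1 and the typical InP HEMT value at 100 GHz.
f_knee_lna = 1.0e3   # Hz
alpha = 1.0

for ratio in (1.0, 3.0e-3):                  # T_offset / T_sys
    f_knee = ratio ** (2.0 / alpha) * f_knee_lna
    print(ratio, f_knee)

# ratio = 1    -> f_knee = 1000 Hz  (total-power case: unusable)
# ratio = 3e-3 -> f_knee ~ 9e-3 Hz (1/f_knee > 100 s, compatible with
#                                   the scanning times quoted above)
```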
The situation is better for bolometers: they have a typical knee frequency of \cite{DELA98}
\begin{equation}
f_{\rm knee}^{\rm bol}\sim 10^{-2}\;{\rm Hz},
\end{equation}
which already ensures the stability needed to keep the 1/$f$ instabilities
under control, even in the worst case of total power receivers,
where $T_{\rm offset} = T_{\rm sys}$.
Constant offsets, like those from the constant 2.7~K CMB signal or
from the system noise,
have an impact just on the sensitivity, but do not contaminate
the $Q$ and $U$ anisotropies, the relevant information
for CMBP investigations.
However, by its own definition, the offset is proportional to the
sky emission, so that the anisotropy can generate a variable contamination
which is not trivially removable by destriping techniques.
Thanks to its on-axis nature, this effect can be described by a transfer
function $S\!P$ defined as the ratio between the instrumental polarization
and the input unpolarized signal
\begin{equation}
S\!P = {Q_{\rm sp}+jU_{\rm sp} \over T}.
\end{equation}
To avoid confusing these two effects, here we call {\it offset} the term related
to a constant unpolarized background (e.g. the isotropic part of the CMB, the instrumental
noise) and {\it instrumental polarization} the term related to the variable
anisotropy of the sky emission $\Delta T$.
Considering the level of the $B$-mode component with
respect to the CMB anisotropy, the leakage from $\Delta T$
should be at most of the order of
\begin{equation}
|S\!P| \sim 10^{-3}
\label{spCondEq}
\end{equation}
to ensure a clean detection (also in combination with off-line data reduction).
By their own nature, these contaminations can be generated only in the parts where
the two polarizations propagate together, i.e. the antenna system and the correlation unit (CU).
Let us discuss only the antenna system, which
represents the most contaminating
part. In fact, as discussed in \cite{CTC01},
the CU can be inserted in a lock-in ring which, introducing a modulation
onto the signal, rejects the offset generated by the CU with high efficiency.
A first type of polarimeter is based on the correlation of the two circularly polarized components (Figure~\ref{radioFig}). In this architecture a polarizer is introduced after the horn to
insert a $90^\circ$ phase delay between the
two linear polarizations $E_x$ and $E_y$ collected by the horn itself.
\begin{figure}
\centering
\includegraphics[angle=0, width=0.6\hsize]{radio2.ps}
\caption{Scheme of polarimeter based on the correlation of the two
circular polarizations.}
\label{radioFig}
\end{figure}
After that, an Orthomode Transducer (OMT) rotated by $45^\circ$ extracts the two circularly
polarized components
\begin{eqnarray}
E_R &=& {E_x + jE_y \over \sqrt{2}} \nonumber \\
E_L &=& {E_x - jE_y \over \sqrt{2}}.
\label{erelEq}
\end{eqnarray}
The offset produced in the antenna system is due to the polarizer and the OMT,
through the expression (Carretti et al. 2001)
\begin{eqnarray}
(Q+jU)_{\rm offset}
&=& S\!P_{\rm omt}
\left(T_{\rm sky} + T_{\rm atm} +
T_{\rm noise}^{\rm Ant}
\right) \nonumber \\
&+& S\!P_{\rm pol}
\left(T_{\rm sky} + T_{\rm atm} +
T_{\rm noise}^{\rm h} -
A_{\rm h}\;T_{\rm ph}^{\rm p}
\right),\nonumber \\
\label{AB0TN0eq}
\end{eqnarray}
where $S\!P_{\rm omt}$ and $S\!P_{\rm pol}$ are
\begin{eqnarray}
S\!P_{\rm omt} & = & A_{\rm omt}\;\left(S_{A1}S_{B1}^* + S_{A2}S_{B2}^*\right),
\label{spomtEq} \\
\nonumber \\
S\!P_{\rm pol} & = & {1\over 2} \left(1 - {A_{\parallel}\over
A_{\perp}}\right)
= {1\over 2} \; {A_{\perp} - A_{\parallel}
\over A_{\perp}},
\label{sppolEq}
\end{eqnarray}
$S_{A1}$, $S_{B2}$ the transmission parameters of the two OMT arms,
$S_{A2}$, $S_{B1}$ their isolation terms, $A_{\rm h}$, $A_{\rm omt}$
the attenuations of the horn and the OMT, respectively,
and $A_{\perp}$, $A_{\parallel}$ the attenuations
of the polarizer along its two main polarizations.
The offset sources are thus
the physical temperature of the polarizer
$T_{\rm ph}^{\rm p}$ and the signals propagating in the antenna system:
the signal collected from the sky ($T_{\rm sky}$), the
atmospheric emission ($T_{\rm atm}$) and the noise temperatures
of the horn alone ($T_{\rm noise}^{\rm h}$)
and of the whole antenna system
\begin{equation}
T_{\rm noise}^{\rm Ant} = T_{\rm noise}^{\rm h} +
A_{\rm h}\; T_{\rm noise}^{\rm p} +
A_{\rm h}\;A_{\rm p}\; T_{\rm noise}^{\rm omt},
\label{tantnoiseeq}
\end{equation}
with $A_{\rm p}$ the mean attenuation of the two polarizer arms and
\begin{eqnarray}
T_{\rm noise}^{\rm h} & = &
(A_{\rm h} - 1)\;T_{\rm ph}^{\rm h}\label{thneq}\\
T_{\rm noise}^{\rm p} & = &
(A_{\rm p } - 1)\;T_{\rm ph}^{\rm p}\label{tpneq}\\
T_{\rm noise}^{\rm omt} & = &
(A_{\rm omt } - 1)\;T_{\rm ph}^{\rm omt}\label{toneq}
\end{eqnarray}
with $T_{\rm ph}^{\rm h}$, $T_{\rm ph}^{\rm p}$, $T_{\rm ph}^{\rm omt}$
the physical temperatures of horn, polarizer and OMT.
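As a numerical illustration of the noise cascade above, the following Python sketch evaluates the chain for hypothetical attenuation and temperature values (chosen for illustration only, not taken from the SPOrt design):

```python
def device_noise(A, T_ph):
    """Noise temperature of a lossy device with attenuation A >= 1
    (linear units) at physical temperature T_ph."""
    return (A - 1.0) * T_ph

def antenna_noise(A_h, A_p, A_omt, T_h, T_p, T_omt):
    """Noise temperature of the horn + polarizer + OMT chain:
    each stage's noise is weighted by the attenuation of the
    stages preceding it, as in the equation for T_noise^Ant."""
    return (device_noise(A_h, T_h)
            + A_h * device_noise(A_p, T_p)
            + A_h * A_p * device_noise(A_omt, T_omt))

# Hypothetical values: 0.1 dB horn, 0.2 dB polarizer, 0.15 dB OMT,
# all at a physical temperature of 300 K.
dB = lambda x: 10 ** (x / 10.0)   # dB -> linear attenuation
T_ant = antenna_noise(dB(0.1), dB(0.2), dB(0.15), 300.0, 300.0, 300.0)
print(round(T_ant, 1))  # a few tens of kelvin for ~0.45 dB total loss
```

Even fractions of a dB of front-end loss at room temperature thus contribute tens of kelvin of noise, which is why the $S\!P$ coefficients multiplying these terms must be kept small.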
A complete description of derivation
and implications of Eq.~(\ref{AB0TN0eq}) is given in
Carretti et al. (2001). Here we
just point out that the offset is generated by OMT and polarizer
that partially correlate the antenna noise
as well as the sky and atmosphere emissions.
The part of Equation~(\ref{AB0TN0eq}) concerning $T_{\rm sky}$ provides the instrumental
polarization related to the CMB anisotropy:
\begin{eqnarray}
(Q+jU)_{\rm sp} &=& \left( S\!P_{\rm omt} + S\!P_{\rm pol}\right) \; \Delta T_{\rm sky}
\label{quspDTEq}
\end{eqnarray}
so that $S\!P_{\rm omt}$ and $S\!P_{\rm pol}$ directly provide the leakage
$S\!P$ of $\Delta T_{\rm sky}$ into $Q+jU$.
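In numbers, the leakage relation above can be evaluated for hypothetical coupling values of the order discussed later in the text:

```python
def spurious_polarization(sp_omt, sp_pol, dT_sky):
    """Leakage of the anisotropy signal dT_sky into Q+jU:
    (SP_omt + SP_pol) * dT_sky."""
    return (sp_omt + sp_pol) * dT_sky

# With SP couplings of 1e-3 each (hypothetical, but of the order quoted
# for the best correlation polarimeters), a 100 uK anisotropy leaks
# about 0.2 uK into the polarized channel.
leak = spurious_polarization(1e-3, 1e-3, 100.0)
print(leak)  # ~0.2 (uK)
```

This order of magnitude makes clear why $S\!P$ values at or below the $10^{-3}$ level are required before $\mu$K-level polarization can be measured against the much stronger anisotropy signal.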
A second class of polarimeters is that of correlators of the two linear polarizations.
The antenna system is very similar to the previous one,
except that the polarizer is not inserted in the chain.
The OMT thus directly extracts the two linear polarizations, which are correlated
to provide $U$ only, as in Equation~(\ref{linCorrEq}).
Apart from the lack of the polarizer term,
the expression describing the offset is very similar to Equation~(\ref{AB0TN0eq}) and
reads
\begin{eqnarray}
Q_{\rm sp}+jU_{\rm sp}
&=& S\!P_{\rm omt}
\left(T_{\rm sky} + T_{\rm atm} +
T_{\rm noise}^{\rm Ant}
\right).
\label{linOffEq}
\end{eqnarray}
Similarly, the leakage from CMB anisotropy into the polarized signal is
\begin{eqnarray}
(Q+jU)_{\rm sp} &=& S\!P_{\rm omt}\;\Delta T_{\rm sky}.
\label{linQuspDTEq}
\end{eqnarray}
In general, the major term is that concerning the OMT,
so that the lack of the polarizer does not
give relevant differences between the two correlation
schemes in terms of offset and instrumental polarization generation.
The third polarimeter scheme consists of a differential receiver subtracting
the two linear polarizations and providing $Q$ directly.
Discussed by \cite{LYH02}, it is realized by a dual-polarization
antenna directly feeding an OMT, whose outputs are $E_x$ and $E_y$.
They are then differenced
by a PLANCK-like receiver, with the second polarization substituting
the 4~K reference load: $E_x$ and $E_y$ enter a magic-T providing
their sum and difference ($(E_x + E_y)/\sqrt{2}$ and $(E_x - E_y)/\sqrt{2}$).
After amplification, these enter a second magic-T which provides two
quantities proportional to $E_x$ and $E_y$. After square-law detection performed
by diodes, they are differenced, thus providing $Q$.
The offset generated by the antenna system is given by
\begin{eqnarray}
(Q+jU)_{\rm offset}
&=& S\!P_{\rm omt}^{\rm diff}
\left(T_{\rm sky} + T_{\rm atm} +
T_{\rm noise}^{\rm h} -
A_{\rm h}\;T_{\rm ph}^{\rm omt}
\right),
\label{diffOffEqeq}
\end{eqnarray}
with
\begin{eqnarray}
S\!P_{\rm omt}^{\rm diff}&=&
{1\over 2} \; {A_{Y}^{\rm omt} - A_{X}^{\rm omt}
\over A^{\rm omt}},
\end{eqnarray}
where $T_{\rm ph}^{\rm omt}$ is the physical temperature of the OMT,
$A_{X}^{\rm omt}$ and $A_{Y}^{\rm omt}$ the attenuations
of the two OMT arms and $A^{\rm omt}$ the mean OMT attenuation.
Therefore, in this case the differential attenuation of the OMT, instead of
the polarizer, is the main cause of offset generation.
Similarly, the instrumental polarization proportional to the CMB anisotropy is
given by
\begin{eqnarray}
(Q+jU)_{\rm sp}
&=& S\!P_{\rm omt}^{\rm diff}\;\Delta T_{\rm sky}.
\label{diffspEqeq}
\end{eqnarray}
Finally, polarization can be measured also with a pure bolometric
technique.
Polarization Sensitive Bolometers (PSBs) are able to separately measure the
power of the two linear polarizations ($|E_x|^2$ and $|E_y|^2$).
The Stokes parameter $Q$ is computed by differencing the power of the
two polarizations so detected. The Stokes parameter $U$ cannot be directly measured
and, as for differential polarimeters, has to be captured through a second
measurement rotated by $45^\circ$.
As with OMTs in coherent receivers, there is leakage between
the two polarizations.
Following \cite{JON02} this leakage is described through $\epsilon_x$,
the fraction of power of $|E_y|^2$ detected in the $|E_x|^2$ channel,
and through $\epsilon_y$, the equivalent for the other polarization.
The instrumental polarization from anisotropy signal is thus given by
\cite{JON02}
\begin{eqnarray}
Q_{\rm sp}^{\rm psb} &=& SP_{\rm psb} \Delta T_{\rm sky},
\label{psbInst1Eq}
\end{eqnarray}
with
\begin{eqnarray}
SP_{\rm psb} &=& {\epsilon_x - \epsilon_y \over 2},
\label{psbInst2Eq}
\end{eqnarray}
which provides a way to measure the non-ideality of PSB detectors.
\subsection{Thermal fluctuations}
Thermal fluctuations are widely recognized as a possible source of errors in faint CMB measurements \cite{PTAL03,MBB02,CZM04,PIAT03} and, besides a quiet environment, receivers with
low sensitivity to temperature fluctuations
are required for the detection of signals as weak as the CMBP.
The importance of such systematics depends on the receiver scheme and,
in some cases, can be crucial.
Total power architectures suffer from variations induced
in the data, which are only slightly damped with respect to the
temperature fluctuations themselves. The detection
of a weak ($\mu$K) signal is thus a real challenge
even in very stable thermal environments.
Carretti et al. (2004b) describe such effects in terms of transfer functions
from physical temperature variations to fluctuations in the data.
In fact, these transfer functions allow a direct
estimate of the sensitivity to temperature fluctuations of the
different architectures and, in turn, a direct comparison among these schemes.
For correlation receivers like those of SPOrt and BaR-SPOrt, they found that the
variations on the data are generated by the thermal fluctuations
$\Delta T_{\rm ph}^{\rm h}$, $\Delta T_{\rm ph}^{\rm p}$,
$\Delta T_{\rm ph}^{\rm omt}$ of OMT, polarizer and feed horn, respectively,
and are described by the equation
\begin{eqnarray}
\Delta(Q+jU) & = &
H_{\rm h}\; \Delta T_{\rm ph}^{\rm h}\nonumber \\
&+& H_{\rm p}\; \Delta T_{\rm ph}^{\rm p}\nonumber \\
&+& H_{\rm omt}\; \Delta T_{\rm ph}^{\rm omt}
\label{offFluct1eq}
\end{eqnarray}
where the transfer functions $H$ of the three devices are defined by
\begin{eqnarray}
H_{\rm h} &=& {3\over 2}\,(A_{\rm h} - 1)
\;[S\!P_{\rm omt}(1+h^{\rm omt}) + S\!P_{\rm pol}(1+h^{\rm p})],\nonumber\\
\nonumber\\
H_{\rm p} &=& {3\over 2}\,A_{\rm h}\;[S\!P_{\rm omt}\,(A_{\rm p} - 1)\,
(1+p^{\rm omt})-
S\!P_{\rm pol}(1+p^{\rm p})], \nonumber \\
\nonumber\\
H_{\rm omt} &=& {3\over 2}\,A_{\rm h}\;A_{\rm p}\;(A_{\rm omt} - 1)
\;S\!P_{\rm omt}\;(1+O^{\rm omt}),
\label{hhheq}
\end{eqnarray}
for horn (h), polarizer (p), and OMT (omt), respectively. $A$ are
the attenuations of the devices, while the corrective
terms $h$, $p$, $O$ are
\begin{eqnarray}
h^{\rm omt} &=& (A_{\rm p} - 1)\,{T_{\rm ph}^{\rm p}\over
3\;T_{\rm ph}^{\rm h}}
+A_{\rm p}\,(A_{\rm omt} - 1)\,{T_{\rm ph}^{\rm omt}\over
3\;T_{\rm ph}^{\rm h}},\nonumber\\
h^{\rm p} &=& -{T_{\rm ph}^{\rm p}\over
3\;T_{\rm ph}^{\rm h}},\nonumber\\
p^{\rm omt} &=& (A_{\rm omt}-1)\,{T_{\rm ph}^{\rm omt}\over
3\;T_{\rm ph}^{\rm p}},\nonumber\\
p^{\rm p} &=& -\left(
{A_{\rm h}-1 \over A_{\rm h}}\, {T_{\rm ph}^{\rm h}\over
3\;T_{\rm ph}^{\rm p}}
+ {1 \over A_{\rm h}}\, {T_{\rm sky}+T_{\rm atm}\over
3\;T_{\rm ph}^{\rm p}}
\right), \nonumber\\
O^{\rm omt} &=& {A_{\rm h}-1 \over A_{\rm h}\,A_{\rm p}}\,
{T_{\rm ph}^{\rm h}\over 6\;T_{\rm ph}^{\rm omt}}
+ {A_{\rm p}-1 \over A_{\rm p}} \,
{T_{\rm ph}^{\rm p}\over 6\;T_{\rm ph}^{\rm omt}}
+ {A_{\rm omt}-1 \over 6} \nonumber\\
& &+ {1 \over A_{\rm h}\,A_{\rm p}}\,
{T_{\rm sky}+T_{\rm atm}\over 6\;T_{\rm ph}^{\rm omt}},
\nonumber\\
\label{hpOeq}
\end{eqnarray}
which, dominated by $(A-1)$ terms,
are in general much lower than 1.
The damping factors with respect to the thermal fluctuations
are thus given by the noise generation terms ($A-1$) and by
the extra terms ($S\!P_{\rm omt}$, $S\!P_{\rm pol}$) typical of the correlation
architecture.
The total power scheme, instead, does not benefit from such extra factors.
Actually, transfer functions for this scheme are \cite{CZM04}
\begin{eqnarray}
H_{\rm h}^{TP} &=& {3\over 2}\;(A_{\rm h} - 1)\;(1+h^{\rm omt}),\\
H_{\rm p}^{TP} &=& {3\over 2}\;A_{\rm h}\;(A_{\rm p} - 1)\;(1+p^{\rm omt}), \\
H_{\rm omt}^{TP}&=& {3\over 2}\;A_{\rm h}\;A_{\rm p}\;(A_{\rm omt} - 1).
\label{hhhTPeq}
\end{eqnarray}
Considering that $S\!P_{\rm omt}$ and $S\!P_{\rm pol}$ can be as low as
$10^{-3}$ (see Section~\ref{recInstPolSec}),
the advantages of correlation architectures become obvious.
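The suppression afforded by the correlation scheme can be made concrete with a short sketch that evaluates the leading terms of Equations~(\ref{hhheq}) and (\ref{hhhTPeq}) side by side; the corrective terms $h$, $p$, $O$ are neglected and all numerical values are illustrative:

```python
def H_correlation(A_h, A_p, A_omt, sp_omt, sp_pol):
    """Leading terms of the correlation-scheme transfer functions,
    neglecting the small corrective factors h, p, O."""
    H_h = 1.5 * (A_h - 1) * (sp_omt + sp_pol)
    H_p = 1.5 * A_h * (sp_omt * (A_p - 1) - sp_pol)
    H_omt = 1.5 * A_h * A_p * (A_omt - 1) * sp_omt
    return H_h, H_p, H_omt

def H_total_power(A_h, A_p, A_omt):
    """Transfer functions of the total power scheme."""
    return (1.5 * (A_h - 1),
            1.5 * A_h * (A_p - 1),
            1.5 * A_h * A_p * (A_omt - 1))

# Illustrative values: ~0.1 dB losses everywhere and SP couplings of 1e-3.
A = 10 ** 0.01
Hc = H_correlation(A, A, A, 1e-3, 1e-3)
Htp = H_total_power(A, A, A)
# For the OMT term the ratio reduces exactly to SP_omt:
print(Hc[2] / Htp[2])  # ~1e-3: the SP factor suppresses the thermal transfer
```

The three orders of magnitude between the two schemes in this toy evaluation are precisely the extra factors $S\!P_{\rm omt}$, $S\!P_{\rm pol}$ discussed above.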
Besides the transfer functions, which describe the capability of a scheme
to be insensitive to temperature fluctuations, the contamination
of the final map of a CMB experiment has to be evaluated to check
its importance with respect to the CMB signal.
The effects on the final map (and on their spectra, the relevant quantities
for CMB studies) are too complex to be analytically estimated: they
depend on the amplitude of the thermal fluctuations,
their statistical behaviour and, finally, the scanning strategy of the experiment.
Simulations are thus needed and a general rule cannot be provided:
estimates have to be performed for every single experiment.
However, as an example, we can consider the SPOrt case, which, on board the
International Space Station, will not benefit from optimal thermal
conditions. In fact, the ISS orbit is characterized by
a daily modulation of the Sun illumination, generating
significant environmental temperature fluctuations.
Therefore, it represents a sort of worst-case analysis of the
performance achievable by an experiment devoted to the CMBP $B$--mode.
The best case would be a SPOrt-like correlation architecture (see next section) coupled with an orbit characterized by an optimal thermal environment (e.g. the L2 point of the Sun--Earth system).
The analysis of \cite{CZM04} shows that, despite the non-optimal
thermal environment, the contamination from thermal fluctuations is at
a low level for this experiment.
In fact, the correlation scheme,
along with the custom developed devices (OMT and polarizer),
allows very small transfer functions
(about $5\times 10^{-4}$ for the worst device). In combination with an active control
of the temperature fluctuations, which reduces to $\pm 0.2$~K the
{\it natural} thermal fluctuations induced by the Sun modulation,
this allows the contamination from thermal instabilities to be kept under control.
Figure 6 in \cite{CZM04} shows the thermally induced noise in comparison with
the expected $E$--mode spectra, the main scientific target of the SPOrt experiment:
the contamination is well below the sky signal at the multipole $\ell$ where
it provides the major contribution.
A further point is the relevance of the thermal fluctuation behaviour
\cite{MBB02,CZM04}. In fact, fluctuations synchronous
with the scanning period generate the major contribution. The thermal design has
thus to be optimized to minimize this component. This work has been carried out
on both the Planck and SPOrt experiments.
The case of the SPOrt experiment shows that a good thermal design, in combination
with a receiver generating very low offsets, allows thermal
fluctuation effects to be kept under control even in non-optimal orbits.
Even better results are expected for SPOrt-like instruments in
orbits which are optimal from the thermal stability point of view.
Therefore, this kind of systematics can be minimized by a careful instrument thermal design and by a mission design able to reduce the orbit synchronous thermal fluctuations.
\section{Architectures vs Systematics contamination}
From the point of view of the instrumental polarization by optics, all the architectures are
equivalent, the contamination depending on the optics features only.
Optics design must be carefully studied, but the analysis of the
BaR-SPOrt case gives indications that present technology can provide antennae sufficiently free from this
contaminant, even for $B$--mode detection.
The other relevant source of contamination is the instrumental
polarization by the receiver. In this case there are relevant differences among the various
schemes.
The purity can be evaluated through the $S\!P$ coefficient alone, which gives
the instrumental polarization from CMB anisotropy (Section 3.2), because the
offset generation and the related 1/$f$ noise effects are in general less critical.
In fact, both the offset and instrumental polarization are generated by the
same coefficient, but the condition to be matched for the instrumental polarization
(Equation~(\ref{spCondEq})) is more severe than
that for the offset (Equation~(\ref{totsysEq})).
Table~\ref{coeffTab} provides the coefficient $S\!P$
for the different architectures discussed here.
The computations have been performed assuming the best performances available to date. In particular, for the correlation polarimeters we have assumed the
$S\!P$ values of the SPOrt/BaR-SPOrt experiments,
which presently represent the state-of-the-art of such a scheme in terms of
$Q$ and $U$ purity. For these experiments $S\!P\sim 2 \times 10^{-3}$ (90 GHz channel of SPOrt) represents the worst case, while a better $S\!P\sim 2 \times 10^{-4}$ has been obtained for the 32 GHz channel of BaR-SPOrt.
\begin{table}
\centering
\caption{Instrumental polarization coefficient $S\!P$ for various polarimeter
schemes (brackets report values achieved at 32~GHz which can
be taken as goals for future developments).}
\begin{tabular}{@{}clcc@{}}
\\
& {\bf Architecture} & $S\!P$ &\\
\hline \\
& Correlation: & $\sim 2 \times 10^{-3}$ & \\
& Circular Polarizations & ($2\times 10^{-4}$ at 32~GHz) & \\
& & & \\
& Correlation: & see circular polarizations& \\
& Linear Polarizations & & \\
& & & \\
& Difference of Linear & $\sim 1 \times 10^{-2}$ & \\
& Polarizations & ($3\times10^{-3}$ at 30~GHz)& \\
& & & \\
& Difference of Linear & & \\
& Polarizations (Off-line)& $\sim 1 \times 10^{-2}$ & \\
& and PSB & & \\
\hline
\end{tabular}
\label{coeffTab}
\end{table}
For the differential scheme, instead, we have assumed a differential attenuation of
$0.1$~dB. This is the performance of the
WMAP 94~GHz channel \cite{JAR03},
but it should be noted that the 30~GHz SPOrt OMTs allow better results.
Finally, typical values for PSB leakage are $\epsilon_i < 0.05$ \cite{JON02},
about 4--5 orders of magnitude worse than the
best OMTs, leading to an instrumental polarization coefficient of the order of
$SP_{\rm psb} \sim 1\times 10^{-2}$, similar to that of the differential
receiver.
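As a cross-check of the table entry for the differential scheme, the expression for $S\!P_{\rm omt}^{\rm diff}$ can be evaluated for a $0.1$~dB differential attenuation. This is only a sketch; it assumes the quoted figure is the arm-to-arm attenuation difference in dB:

```python
def sp_diff(delta_dB):
    """SP_omt^diff = (A_Y - A_X) / (2 A), for two OMT arms whose
    attenuations differ by delta_dB, with A the mean attenuation."""
    a_x = 1.0                       # reference arm (normalized)
    a_y = 10 ** (delta_dB / 10.0)   # the lossier arm
    a_mean = (a_x + a_y) / 2.0
    return (a_y - a_x) / (2.0 * a_mean)

print(round(sp_diff(0.1), 4))  # ~1e-2, consistent with Table coeffTab
```

A 0.1~dB imbalance thus yields $S\!P \sim 10^{-2}$, matching the order of magnitude quoted for the difference-of-linear-polarizations scheme.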
\section{Conclusions}
The above discussion brings to the fore that there is a scheme already matching (or very close to) the
level of purity required for a $B$--mode experiment ($S\!P \cong 10^{-3}$): the correlation scheme of the SPOrt and BaR-SPOrt experiments (see Table~\ref{coeffTab}).
That scheme was studied taking into account the major requirement of \textit{high purity} for CMBP measurements, and the sources generating
contamination have been analyzed in depth.
A design-to-goal philosophy has been followed in developing the most critical
components. This custom approach, based on the optimization of the
devices with respect to instrumental polarization generation, has led to this important result.
Even though the correlation of the two circular polarizations
may already satisfy the requirements, at least in some cases, other schemes
like the differential ones could have margin for improvement.
Further developments, in fact, should allow
the improvement by the $\approx 1$ order of magnitude
required by differential architectures to reach the same result.
However, it is worth remarking that only the correlation of the two circular polarizations allows
simultaneous detection of the two Stokes parameters $Q$ and $U$, which offers two invaluable advantages with respect to other schemes:
\begin{itemize}
\item the time efficiency is a factor of 2 better than for the others, allowing either a $\sqrt{2}$ better sensitivity or, alternatively,
half the number of receivers; in turn, lower costs and system resources (mass,
power) are realistic;
\item $Q$ and $U$ are detected by the same receiver, which avoids
the complex coupling of data coming from
different instruments and more complex calibration procedures.
\end{itemize}
In the light of this, the design of future CMBP experiments aiming at $B$--modes detection could start by taking the overall architecture of SPOrt as a baseline design for developing arrays of receivers, in order to achieve the needed sensitivity. Thus, the capability of realizing arrays of hundred receivers with suitable polarimetric performances will represent the major challenge for next generation of instruments.
It is worthwhile noting that SPOrt-like architectures are suitable for use even with bolometers, taking advantage of their higher sensitivity. In this case the signal amplification would not be required before detection, thus avoiding the additional noise introduced by front-end amplifiers.
However, this solution implies a more complicated cryogenic design, due to the need of
cooling the bolometers down to $\sim 1$~K; even the front-end (feed, OMT, and correlation
unit) must be cryogenically cooled (down to a few K) to reduce its thermal noise. In that case the
cooling must be performed using liquid helium and, apart from its intrinsic
complexity, the time duration of the mission will also be constrained.
The alternative of using Low Noise Amplifiers (LNAs) before detection with diodes does not
require a liquid-helium reservoir, since the required temperatures
($\approx 15$--20~K) can be provided by mechanical coolers.
In the light of this, a very preliminary road map for future \textit{B--mode--oriented} CMBP experiments can be written down:
\begin{itemize}
\item adopt a correlation scheme of circular components to have both $Q$ and $U$ simultaneously;
\item develop arrays of receivers based on the SPOrt design for the passive devices;
\item assess the \textit{pro et contra} of using bolometers instead of LNAs and diodes in terms of the impact on the cryogenic design;
\item put most of the effort into the technological development of either LNAs or bolometers, depending on the cryogenic assessment.
\end{itemize}
The above list suggests that cryogenics could represent the watershed. If space cryogenics able to ensure proper cooling of bolometers (and of the front-end) becomes available soon, that is, in time for upcoming CMBP space missions, then the best effort should be directed towards bolometers, in order to make them competitive at frequencies around 100 GHz. On the other hand, if the first available space cryogenics is that suitable for HEMT technology, then the goal should be the realization of arrays of W-band LNAs with improved performance.
\section*{Introduction \label{intro}}
Recognizing faces and facial expressions with high accuracy is central for many cognitive and social tasks that primates (and possibly other animals) perform every day. Several studies reported single neurons in the ventral visual stream -- and particularly in the so-called ``face patches'' of the inferotemporal (IT) cortex -- that are exquisitely sensitive to faces \cite{tsao2006cortical,tsao2008mechanisms}.
A recent landmark study greatly contributed to shed light on the neural code for facial identity in the IT of macaques \cite{chang2017}. This study reported that faces might be represented as feature vectors in a relatively low-dimensional ($\sim$50D) {\it face space} \cite{valentine2016}, with IT neurons tuned to single axes of variation of the face space and insensitive to changes in other, orthogonal axes\footnote{
Indeed, neurons are believed to encode principal components {\it linearly} but not necessarily one-to-one, see \cite{chang2017}. In particular, if $\bf y$ is the vector of neurons' normalized firing rates and ${\bf x}'$ is the vector of principal components in the face space, an orthogonal matrix $O$ relates $\bf y$ and ${\bf x}'$: ${\bf y} = O {\bf x'}$.}.
Interestingly, distinct subpopulations of neurons appear to project faces onto two distinct sets of axes, which encode the geometric shape of a face and its texture separately. The shape coordinates describe the main facial proportions, whereas the texture coordinates bring information about the detailed form of facial soft tissues, the skin texture and tonality, and cues to the facial shape in the depth dimension (through the light reflection).
From a computational perspective, these findings suggest that the IT cortex might form a generative model in which shape- and texture-related information is \emph{decoupled} into separate factors (aka disentangled or factorised). The resulting \emph{decoupled coding} (${\cal R}_{\rm D}$) resembles closely a computer vision model called the Active Appearance Model (AAM) \cite{edwards1998interpreting}. A recent computational study indicates that the AAM provides a very good fit for the single cell IT data of \cite{chang2017}, outperforming most standard deep network models of visual processing in the ventral stream \cite{chang2021explaining}. While the deep networks achieve a high score in face (or object) recognition, they do so by \emph{multiplexing} the same information into different neurons, which is the opposite of the decoupling strategy reported in IT neurons by \cite{chang2017}. In keeping, another computational study \cite{higgins2021} showed that using a deep generative model ($\beta$-VAE), with the explicit objective of disentangling facial images into separate latent factors, provides a good account of IT neural firings \cite{chang2017}.
In a series of neural network simulations \cite{chang2017,chang2021explaining}, the face processing performance of the decoupled coding (${\cal R}_{\rm D}$) that emerges from IT recordings is compared with a simpler scheme that is standard in computer vision: \emph{eigenface} (${\cal R}_{\rm E}$) \emph{coding} \cite{sirovichLowdimensionalProcedureCharacterization1987,Turk:1991:ER:1326887.1326894,valentine2016}. In both ${\cal R}_{\rm E}$ and ${\cal R}_{\rm D}$ codings, each neuron ``projects'' faces linearly onto one axis of variation of the face space. However, the decoupling is different in the two. In ${\cal R}_{\rm E}$ coding, the neurons simply encode the projection of the input face into the axes of variation of the original set of known facial images. Rather, in ${\cal R}_{\rm D}$ coding, the input facial image is first divided into two sources of information: a (shape-free) average-shaped or {\it uniformed} facial image whose texture corresponds to that of the input face, and a (texture-free) vector of Cartesian coordinates of some facial reference points called {\it landmarks}, describing the input face shape. Then, in ${\cal R}_{\rm D}$ coding, one set of neurons encodes the linear projection of the input {\it uniformed} facial image on the axes of variation {\it of uniformed images}, whereas another set of neurons encodes the projection of the input vector of landmark coordinates on the axes of variation of vectors of landmark coordinates. As reported in \cite{chang2017}, the decoupled coding scheme ${\cal R}_{\rm D}$ explains a significantly higher fraction of neural data variance than the ${\cal R}_{\rm E}$ coding.
While the above studies assess that decoupling information is a key ingredient of facial processing in primates, it is still unclear why this is the case. A plausible formal rationale for the decoupling of shape and texture parameters (as done in the AAM and related models) is that they might vary independently in real life conditions. For example, small variations in facial expressions entail a significant change of shape but not texture, whereas different conditions of luminosity and age may induce significant variations in texture but not shape \cite{chang2017}. This line of reasoning leads to the untested idea that decoupled coding entails not just a more \emph{accurate} but also a more \emph{efficient} (or compact) description of facial data.
Indeed, a general principle for neural coding is obtaining the most \emph{efficient coding} of the data from a source \cite{attneave1954some,barlow1961possible}. A formal measure of code efficiency is its {\it description length}: the best model is the one that minimizes the amount of information (bits) required to encode both the data, in terms of the model's latent variables, and the model parameters themselves \cite{rissanen1978modeling,rissanen1999hypothesis}. This implies that a more complex model, which has more free parameters and requires more memory to be encoded, will only outperform a simpler model if it affords significantly more data compression -- which in turn requires that it captures well the statistical structure of the data.
Here we use the notion of efficient coding to ask whether and in which conditions the decoupled (${\cal R}_{\rm D}$) coding revealed in monkey IT neurons is more efficient than eigenface (${\cal R}_{\rm E}$) coding. For this, we compare the description length of the two coding schemes of facial processing, using the same stimuli as in the monkey study of \cite{chang2017}. To preview our results, we show on info-theoretical grounds that decoupled (${\cal R}_{\rm D}$) coding is more efficient than eigenface (${\cal R}_{\rm E}$) coding for facial processing. Our results indicate that the advantage of decoupled coding increases with image resolution, and when encoding variants of training set images that differ in facial expression. This is interesting, as it shows that decoupled coding is most effective in conditions that are frequent during social cognitive tasks, such as the identification of changes of expression or age in known faces.
Finally, to further consolidate our findings, we show that decoupled coding outperforms eigenface coding in a range of cognitively relevant tasks, which include the generation of novel faces, the synthesis of unknown faces and the recognition of facial identities and gender.
\section*{Materials and methods \label{materials}}
\subsection*{Database under study}
In our analysis, we use the FEI database \cite{thomazNewRankingMethod2010,fei}, also used in the characterisation of the neural code of facial identity in macaques \cite{chang2017}. The FEI database comprises $N=400$ pictures, accompanied by the spatial coordinates of $n_{\ell}=46$ standard landmarks for each image.
\subsection*{Texture and shape coordinates}
Let the training set consist of $N_{\rm tr}$ facial images, ${\cal I}=\{{\bf I}(n)\}_{n=1}^{N_{\rm tr}}$, where ${\bf I}(n)$ is the $n$-th image, and of $N_{\rm tr}$ vectors of shape coordinates ${\cal L}=\{{\bm\ell}(n)\}_{n=1}^{N_{\rm tr}}$, where ${\bm\ell}(n)$ is the vector of shape coordinates characterising the geometry of the $n$-th facial image. All images are vectors ${\bf I}(n)=(I_1(n),\ldots,I_{d_{\rm t}}(n))$ of dimension $d_{\rm t}=w\times h$, where $w,h$ are the width and height of the images in pixels (grid spacing units). The ${\bm \ell}$-vector components are the $\sf x$ or $\sf y$ Cartesian coordinates of $n_{\ell}$ representative landmarks of the $n$-th facial image: ${\bm \ell}(n)=(\ell_1(n),\ldots,\ell_{d_{\rm s}}(n))$, with $d_{\rm s}=2n_{\ell}$.
\subsection*{Formal definitions of eigenface (${\cal R}_{\rm E}$) and decoupled (${\cal R}_{\rm D}$) codings}
Here, we compare the efficiency (measured as description length) of two alternative neural codes for facial images: \emph{eigenface} (${\cal R}_{\rm E}$) \emph{coding} and \emph{decoupled} (${\cal R}_{\rm D}$) \emph{coding}; see Table \ref{table1} and Figure \ref{fig:coordinates}. Both the ${\cal R}_{\rm D}$ and the ${\cal R}_{\rm E}$ codings represent facial images in terms of Principal Components (PCs), but define PCs over different facial coordinates of the training set (i.e., over different databases). Specifically, they represent a generic image ${\bf I}$ as follows:
\begin{enumerate}
\item \emph{Eigenface coding} (${\cal R}_{\rm E}$) represents the image in terms of its PCs, ${\bf I}'$. In mathematical terms, ${\bf I}' = E_p^{\rm (E)} \cdot {\bf I}$, where $E_p^{\rm (E)}$ is the $p\times d_{\rm t}$ matrix composed of the first $p$ (row) eigenvectors of the unbiased estimator of the correlation matrix $C$ of training-set images, $C_{ij} = \<I_i(n) I_j(n)\>$, where $\<\cdot\>=(1/N_{\rm tr}\sum_n \cdot)$ is the empirical average over the training-set, and where all the vector components are null-averaged, $\<x_i\>=0$. This representation does not make use of the shape coordinates.
\item \emph{Decoupled coding} (${\cal R}_{\rm D}$) represents the image in terms of two sets of PCs, one for shape and one for texture facial coordinates. To obtain these coordinates, each original image ${\bf I}(n)$ in the training set is first deformed by means of image-deformation algorithms (see \cite{image_deformation, ibanez2019} and the Supporting information for details), in such a way that its landmark coordinates ${\bm \ell}(n)$ will be dragged to the {\it average position of the landmark coordinates in the training-database}, and that the rest of the image pixels are deformed coherently (so that the resulting facial image is as realistic as possible). The resulting image will be called the {\it uniformed} image $\hat{\bf I}(n)$ (see Figure~\ref{fig:coordinates}). We refer to {\it uniformed texture coordinates}, or simply {\it texture coordinates}, as the {\it uniformed} (shape-free) image coordinates $\hat {\bf I}$, of an image ${\bf I}$ given $\bm \ell$ (and the average position of the landmarks $\<{\bm \ell}\>={\bf 0}$). This procedure permits decoupling the original database into two databases of coordinates: the (texture-free) shape coordinates $\cal L$ and the (shape-free) uniformed images $\hat{\cal I}=\{\hat {\bf I}(n)\}_{n=1}^{N_{\rm tr}}$.
The novel image ${\bf I}$ to be represented is then decomposed in PCs in texture and shape spaces separately, $\hat{{\bf I}}'=E_p^{({\rm t})}\cdot\hat {\bf I}$, ${\bm\ell}'=E_p^{({\rm s})}\cdot{\bm \ell}$, where $E_p^{({\rm t})}$ are the eigenvectors of $C^{({\rm t})}_{ij}=\<\hat I_i(n) \hat I_j(n) \>$, and $E_p^{({\rm s})}$ those of $C^{({\rm s})}_{ij}=\<\ell_i(n)\ell_j(n)\>$.
\end{enumerate}
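The projection steps of the two codings can be summarized in a purely schematic pure-Python sketch; the uniformation (image warping) step that produces $\hat{\bf I}$ from ${\bf I}$ is omitted, and the 2-pixel ``images'' and the basis below are toy values, not database quantities:

```python
import math

def project(E_rows, x):
    """Project vector x onto the rows of E_rows (the first p eigenvectors)."""
    return [sum(e * v for e, v in zip(row, x)) for row in E_rows]

def eigenface_code(E_E, image):
    """R_E coding: a single projection of the raw image onto image PCs."""
    return project(E_E, image)

def decoupled_code(E_t, E_s, uniformed_image, landmarks):
    """R_D coding: separate projections of the shape-free (uniformed)
    image and of the landmark (shape) coordinates."""
    return project(E_t, uniformed_image), project(E_s, landmarks)

# Toy 2-pixel "image" and a 2x2 orthonormal basis (a rotation matrix).
c, s = math.cos(0.3), math.sin(0.3)
E = [[c, s], [-s, c]]
code = eigenface_code(E, [1.0, 2.0])
# An orthonormal projection preserves the squared norm (1^2 + 2^2 = 5).
assert abs(sum(v * v for v in code) - 5.0) < 1e-12

# In R_D, the texture and shape bases would each come from their own
# database (uniformed images and landmark vectors, respectively).
tex_pcs, shape_pcs = decoupled_code(E, E, [1.0, 2.0], [0.5, -0.5])
assert abs(sum(v * v for v in tex_pcs) - 5.0) < 1e-12
```

The only structural difference between the two codes is thus whether one projection is applied to the raw image, or two independent projections are applied to the decoupled texture and shape coordinates.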
\begin{figure}
\begin{center}
\makebox[\textwidth][c]{
\includegraphics[width=0.14\textwidth]{figures/Ecode-crop} \hspace{1cm} \includegraphics[width=0.45\textwidth]{figures/Dcode-crop}}
\caption{Schematic illustration of eigenface coding (${\cal R}_{\rm E}$, left) and decoupled coding (${\cal R}_{\rm D}$, right). See the main text for explanation.}
\label{fig:coordinates}
\end{center}
\end{figure}
\begin{table}[!ht]
\caption{\bf Outline of the two alternative schemes for the representation of facial images: Eigenface coding (${\cal R}_{\rm E}$) and Decoupled coding (${\cal R}_{\rm D}$).\label{table1}}
\begin{tabular}{l|ccccc}
Code & \\
\hline
${\cal R}_{\rm E}$ & $({\bf I},{\bm\ell})$ & & & $\begin{subarray}{c} {\rm projection} \\ \to \end{subarray}$ & ${\bf I}'_p=E^{\rm (E)}_p\cdot {\bf I}$ \\
\hline
${\cal R}_{\rm D}$ & $({\bf I},{\bm\ell})$ & $\begin{subarray}{c} {\rm uniformation} \\ \to \end{subarray}$ & $(\hat{\bf I},{\bm \ell})$ & $\begin{subarray}{c} {\rm projection} \\ \to \end{subarray}$ & $\begin{subarray}{c} \hat{\bf I}'_p=E^{({\rm t})}_p\cdot \hat{\bf I} \\ {\bm\ell}'_p=E^{({\rm s})}_p\cdot {\bm \ell}\end{subarray}$ \\
\hline
\end{tabular}
\end{table}
\section*{Results \label{results}}
\subsection*{Description length analysis}
\label{sec:description_length}
\subsubsection*{Intuition behind the description length analysis}
The Principal Component Analysis representation of a given set of coordinates in the face space with $p$ principal components ($p$-PCA) can be viewed both as a generative model, inducing a Gaussian distribution over facial coordinates, and as a form of data compression and dimensionality reduction \cite{beyeler2019}.
These two aspects are naturally linked by the notion of {\it description length} \cite{mackay2003}. PCA is a form of dimensionality reduction, since it describes each $d$-dimensional vector ${\bf x}$ as a shorter, $p$-dimensional vector ${\bf x}'_p=E_p\cdot {\bf x}$. In turn, this implies a {\it compression} ability. Consider a database in which each coordinate $x_i$ (say, each pixel value, if the vectors ${\bf x}$ are images) varies in a range $R$. In the absence of any prior knowledge regarding the database content, the amount of information per sample and coordinate needed to store the raw database with precision $\epsilon$ per coordinate is simply $l_0=\log_2 (R/\epsilon)$ bits. Normally, the information needed to store the $p$ principal components of each vector of the database ${\cal D}'=\{{\bf x}'_p(s)\}_{s=1}^N$ is lower than $l_0$, even if $p=d$. Indeed, if the database exhibits significant pairwise correlations between variables, many principal components will exhibit a variance $\lambda_i$ much lower than $R^2$, and they will consequently require fewer bits to be stored.
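The claim above can be checked numerically with a hedged sketch: using the standard Gaussian coding-length approximation (roughly $(1/2)\log_2(2\pi e\lambda_i)-\log_2\epsilon$ bits per coordinate, which is an assumption of this sketch, not the paper's exact expression), a database with strong pairwise correlations needs fewer bits than an independent-pixel database with the same per-coordinate variance.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 1e-3   # precision per coordinate

def gaussian_bits(lam, eps):
    """Approximate bits per sample to encode Gaussian coordinates with
    variances lam at precision eps."""
    return np.sum(0.5 * np.log2(2 * np.pi * np.e * lam) - np.log2(eps))

d, N = 30, 5000
# Independent pixels: flat eigenvalue spectrum, unit variance per pixel.
X_ind = rng.normal(size=(N, d))
# Strongly correlated pixels: same unit variance, inhomogeneous spectrum.
z = rng.normal(size=(N, 1))
X_cor = np.sqrt(0.9) * z + np.sqrt(0.1) * rng.normal(size=(N, d))

bits = []
for X in (X_ind, X_cor):
    lam = np.clip(np.linalg.eigvalsh(np.cov(X.T)), 1e-12, None)
    bits.append(gaussian_bits(lam, eps))
bits_ind, bits_cor = bits
```

Both databases have the same variance per pixel, yet the correlated one concentrates it in one large eigenvalue, so most components are cheap to store.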
The amount of information necessary to encode a database $\cal D$ in terms of (the latent variables of) a probabilistic model ${\cal M}$ of the database vectors is called the {\it description length}, $L_{\cal M}({{\cal D}})$. Crucially, the description length is formally related to the Bayesian data evidence, or joint marginal likelihood of the database $\cal D$ according to the model ${\cal M}$, in the following way: $L_{\cal M}({\cal D}) = -\log_2 [P_{\cal M}({{\cal D}})\, \epsilon^d]$, where $P_{\cal M}({\cal D})$ is the data evidence according to ${\cal M}$ and $\epsilon$ is the precision per coordinate with which the database should be described. Description length is therefore equivalent to -- and provides an information-theoretic interpretation of -- Bayesian model evidence. The value of $p$ for which the database presents the highest Bayesian evidence is the one with the optimal accuracy/complexity trade-off and, consequently, the one with the lowest description length. In other words, description length analysis evaluates the efficiency of a particular code, taking into account both its accuracy and its complexity. In this perspective, a good code is one that does not employ too much information to describe a given input with a given tolerance. Indeed, the model that presents the lowest description length at fixed precision is also the one that manages to describe the database with the smallest error, $d\log_2 \epsilon = -\log_2 P_{\cal M}({\cal D}) -L$, when the amount $L$ of available storage information is fixed.
In the case that we study here, the model ${\cal M}$ is $p$-PCA and the explicit expression for $P({\cal D})$ is easily interpretable \cite{mackay2003}. Indeed, the description length may be decomposed into two terms, $L({\cal D}) = {\sf S({\cal D}|\bm{\theta}^*)} + {\sf O}(\bm{\theta}^*)$, that we will call the {\it empirical entropy} ${\sf S({\cal D}|\bm{\theta}^*)}$ and the {\it Occam length} ${\sf O}(\bm{\theta}^*)$.\footnote{$\bm{\theta}^*$ are the model parameters (the eigenvector matrix $E_p$ and the vector of averages $\bm \mu$) fitted as the Maximum Likelihood value for a training set ${\cal D}_{\rm tr}$, which may be different from ${\cal D}$.} These two terms may be interpreted, respectively, as the amount of information needed to encode (without losses) the database ${\cal D}$ in terms of $p$ principal components, and the amount needed to encode the model parameters $\bm{\theta}^*$ once for all the vectors (which are needed to recover each vector ${\bf x}$ from its principal components ${\bf x}'$). When increasing the number of model parameters $p$, the empirical entropy of the training database decreases, but the Occam length generally increases, since more eigenvectors $E_p$ must be stored -- and they must be stored with a higher precision. Overfitting occurs when this balance is no longer favourable, and the description length increases with increasing $p$.
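The entropy/Occam trade-off can be sketched numerically. The sketch below is an assumption-laden toy: the empirical entropy uses a Gaussian coding length with the discarded variance lumped into a single residual (a common $p$-PCA noise model), and the Occam term is a BIC-style parameter cost, not the exact evidence of \cite{minka2000} used in the paper. It only illustrates that the total length has an interior minimum.

```python
import numpy as np

rng = np.random.default_rng(2)
N, d = 200, 40
# Low-rank data: 5 strong directions plus weak noise.
X = rng.normal(size=(N, 5)) @ rng.normal(size=(5, d)) + 0.1 * rng.normal(size=(N, d))
lam = np.sort(np.clip(np.linalg.eigvalsh(np.cov(X.T)), 1e-12, None))[::-1]
eps = 1e-3

def description_length(p):
    # Empirical entropy S: bits to encode the whole database given p-PCA.
    # Discarded directions share a single residual variance.
    if p < d:
        resid = lam[p:].mean()
        var = np.concatenate([lam[:p], np.full(d - p, resid)])
    else:
        var = lam
    S = N * np.sum(0.5 * np.log2(2 * np.pi * np.e * var) - np.log2(eps))
    # Occam term O: BIC-style cost of the p*d eigenvector parameters.
    O = 0.5 * p * d * np.log2(N)
    return S + O

L = [description_length(p) for p in range(d + 1)]
p_star = int(np.argmin(L))   # optimal trade-off: close to the true rank 5
```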
\subsubsection*{Definition of the \emph{information gap} criterion for the comparison of decoupled coding and eigenface coding}
We measured the description length (in bits) of the alternative coding schemes ${\cal R}_{\rm E}$ and ${\cal R}_{\rm D}$ of facial images ${\bf I}$ that belong to a set of known images that have been used to train the model (training set) and to a set of unknown images that have not been used to train the model (test set).
On the one hand, the Eigenface coding (${\cal R}_{\rm E}$) encodes the principal components of the original images ${\bf I}$. We denote the description length associated to the compression of a database ${\cal I}$ according to ${\cal R}_{\rm E}$ as $L_{{\cal I}_{\rm tr},p}({\cal I})$. The two sub-indices of $L$ specify the model: they are, respectively, the training set with which the model parameters have been trained,\footnote{The model parameters of the multivariate Normal distribution are the covariance matrix $C$ and the average $\bm \mu$. Given ${\cal I}_{\rm tr}$, they are set to the unbiased estimates of these quantities in the database.} and the value of $p$. This information is enough to completely define the $p$-PCA model.
On the other hand, the Decoupled coding (${\cal R}_{\rm D}$) exploits both the shape and the texture coordinates ($\bm \ell$ and $\hat {\bf I}$) of each facial vector; hence it has to store both sets of principal components ($\bm \ell'$ and $\hat {\bf I}'$) to represent the original image. Moreover, it has to store the principal axes in both the texture and the shape coordinate spaces. The extra information cost required to store the shape coordinates might be compensated by the smaller cost of storing the {\it uniformed} texture principal components $\hat {\bf I}'$. This is because the uniformed set of images $\hat {\bf I}$ might be compressed more easily, given that any inhomogeneity induced by differences in landmark positions has been removed from the database -- at least, if the resolution of the image is large enough. This implies that encoding the uniformed images could require a smaller number of PCs, without loss of precision, with respect to the set of raw images.
To quantify the difference in description length between ${\cal R}_{\rm D}$ and ${\cal R}_{\rm E}$, we define a summary measure that we call the {\it information gap}, which jointly considers two factors. The first factor ($\mathcal G_{1}$) is the difference between the description lengths of the non-uniformed and uniformed image databases:
\begin{eqnarray}
\label{eq:gain}
{\mathcal G_{1}} = L_{{\cal I}_{\rm tr},p}({\cal I}) &-& L_{\hat{\cal I}_{\rm tr},\hat{p}}(\hat{\cal I}) \\
\begin{subarray}{l} \textrm{bits to compress the database} \\ \textrm{of non-uniformed images } {\cal I} \end{subarray} &-&
\begin{subarray}{l} \textrm{bits to compress the database} \\ \textrm{of uniformed images } \hat{\cal I} \end{subarray} \nonumber
\end{eqnarray}
where $\hat {\cal I}$ is the database composed of the uniformed facial images in $\cal I$. Note that in both description length terms, the model ${\cal M}$ is assumed to be $p$-PCA. In these equations, $p$ may be taken as the optimal value according to Bayesian model selection, i.e., the value (say, $p^*$) for which the description length of $\cal I$ is minimum, and analogously for $\hat p^*$.
The second factor ($\mathcal G_{2}$) is the description length of the set of shape coordinates ${\cal L}=\{{\bm \ell}(n)\}_n$:
\begin{eqnarray}
\label{eq:dlength}
{\mathcal G_{2}} = L_{{\cal L}_{\rm tr},p}({\cal L})
\end{eqnarray}
where $\cal L$ is the set of landmarks corresponding to the images $\cal I$, and ${\cal L}_{\rm tr}$ to those in ${\cal I}_{\rm tr}$.
The information gap combines these two factors ($\mathcal G = \mathcal G_{1} - \mathcal G_{2}$) and measures the efficiency (in information-theoretic terms) of decoupled coding ${\cal R}_{\rm D}$ compared to eigenface coding ${\cal R}_{\rm E}$. Decoupled coding can be considered more efficient if the information gap $\mathcal G$ is greater than or equal to zero:
\begin{eqnarray}
\label{eq:conditionongain}
{\rm Information\ gap\ in\ favour\ of\ {\cal R}_{\rm D}:\ } {\mathcal G} = {\mathcal G}_1 - {\mathcal G}_2 \ge 0
\end{eqnarray}
In other words, given a database of facial images, the representation of ${\cal R}_{\rm D}$ is more efficient than that of ${\cal R}_{\rm E}$ to the extent that it provides a more accurate description of the database, using the same amount of available information (see also the Supporting information, sec. \ref{sec:supplementary}).
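Structurally, the gap is a difference of three description lengths. The sketch below mirrors that structure on synthetic data; `dl_bits` is a deliberately crude stand-in (full-Gaussian coding length rather than the exact $p$-PCA evidence of \cite{minka2000}), and the "uniformed" database is simply simulated as a more strongly correlated one with the same per-pixel variance.

```python
import numpy as np

rng = np.random.default_rng(3)
eps = 1e-3

def dl_bits(X):
    """Stand-in description length (bits) of a database X under a Gaussian
    model; the paper uses the exact p-PCA Bayesian evidence instead."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(X.T)), 1e-12, None)
    return len(X) * np.sum(0.5 * np.log2(2 * np.pi * np.e * lam) - np.log2(eps))

N, d_t, d_s = 1000, 200, 10
common = rng.normal(size=(N, 1))
# Non-uniformed images: weakly correlated pixels.
I_raw = 0.6 * common + 0.8 * rng.normal(size=(N, d_t))
# Uniformed images: same per-pixel variance, much stronger correlations.
I_hat = 0.9 * common + np.sqrt(1 - 0.81) * rng.normal(size=(N, d_t))
L_shape = rng.normal(size=(N, d_s))        # toy shape coordinates

G1 = dl_bits(I_raw) - dl_bits(I_hat)       # texture gap, as in eq. (G1)
G2 = dl_bits(L_shape)                      # shape-coordinate cost, eq. (G2)
G = G1 - G2                                # information gap
```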
We are able to compute $L_{{\cal D},p}$, since the Bayesian evidence of a multivariate Normal distribution can be calculated analytically \cite{minka2000}. Note that in the case of texture coordinates (which are strongly undersampled, $N\ll d_{{\rm t}}$), it is essential to use the exact expression for the Bayesian evidence of a multivariate Normal distribution \cite{minka2000}, instead of its more common Bayesian Information Criterion approximation \cite{mackay2003,Myung2000}; see the Supporting information.
\subsubsection*{Information gap for known facial images in the training set, at different resolutions}
In this section, we analyse how the coding efficiency of ${\cal R}_{\rm D}$ varies as a function of the resolution of the database images. We expect the information gap to increase with the resolution. If the resolution is so low that the distance between pixels (normalised to the image height, $h^{-1}$) is of the same order as the typical deviation of landmark coordinates from their average, $\<\ell_i^2\>^{1/2}$, the uniformation will have no effect and consequently the ${\cal R}_{\rm D}$ code may not be worthwhile in terms of description length. In the opposite situation, $h^{-1} \ll \<\ell_i^2\>^{1/2}$, we expect a larger information gap.
To test this hypothesis, we calculate $p^*$ for every kind of coordinate and resolution, as the minimum of the $L_p$ curves. $p^*$ turns out to be lower than $N$ for the three kinds of coordinates (shape, non-uniformed texture, uniformed texture). Figure~\ref{fig:trainingevidences} shows the description length of uniformed images in the training set (i.e., taking $\hat{\cal I}=\hat{\cal I}_{\rm tr}$ in equation \ref{eq:gain}, where $\hat{\cal I}$ is the whole database of $N=400$ uniformed smiling and neutral images) as a function of $p$, and for four different resolutions.
\begin{figure}
\begin{center}
\makebox[\textwidth][c]{\includegraphics[width=\textwidth]{figures/training_evidences_vs_p}}
\caption{Description length of uniformed images in the training set. See the main text for explanation.}
\label{fig:trainingevidences}
\end{center}
\end{figure}
The description length of the image databases is slightly superlinear in the number of pixels $d_{\rm t}$, as shown by the lack of superposition of the curves in figure \ref{fig:trainingevidences}. Indeed, the largest images actually contain more information per pixel: this is the information that, according to the $p$-PCA model, is lost when lowering their resolution to construct the lower-resolution databases.
As a reference value for the analysis of description length curves, it is useful to compare the values in the figure with the {\it uniform length} $l_0/(d_{\rm t}N)$, i.e., the minimum amount of information per sample and pixel that it would take to store a database consisting of images whose pixels fluctuate {\it independently} around their average value in an interval of length $R$, with $R$ such that the variance per pixel is equal to the empirical average variance $\bar v_{\rm t}$ of the database $\cal I$ (whose square root $\bar v_{\rm t}^{1/2}$ is roughly equal to $37$ units per pixel out of $256$ in $8$-bit grayscale encoding).\footnote{In other words, if one assumes that the pixel values are uniformly distributed around their average in a $d_{\rm t}$-dimensional hypercube of size $R=(12 \bar v_{\rm t})^{1/2}$, then $l_0/(d_{\rm t}N)=(1/2)\log_2(12 \bar v_{\rm t})-\log_2 \epsilon$. This value is very close to the empirical entropy of the database corresponding to a PCA model with $p=0$ (see the Supporting information): $L_0={\sf S}_0=(1/2)\{\log_2(2\pi e)+\log_2 \tilde v\}-\log_2 \epsilon$, where $\log_2 \tilde v=\overline{\log_2 \lambda}$ (see the proximity of $L_0$ and $l_0$ in figure \ref{fig:trainingevidences}).}
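The closeness of $l_0$ and $L_0$ asserted in the footnote can be checked numerically on a toy database with roughly independent pixels; the specific sizes and variance used below are arbitrary assumptions of this check.

```python
import numpy as np

rng = np.random.default_rng(4)
N, d = 500, 50
X = 6.0 * rng.normal(size=(N, d))        # toy "pixels", standard deviation ~6
lam = np.clip(np.linalg.eigvalsh(np.cov(X.T)), 1e-12, None)

v_bar = lam.mean()                       # empirical average variance per pixel
# Uniform length per sample and pixel (dropping the common -log2(eps) term):
l0 = 0.5 * np.log2(12 * v_bar)
# p=0 Gaussian description length per sample and pixel, log2(v~) = mean log2(lam):
L0 = 0.5 * np.log2(2 * np.pi * np.e) + 0.5 * np.log2(lam).mean()
```

For uncorrelated pixels the two lengths differ only by the small constant $(1/2)\log_2(2\pi e/12)\approx 0.25$ bits plus a Jensen-gap correction.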
Figure \ref{fig:trainingevidences} also shows the description length of non-uniformed images for the largest resolution, $w\times h=250\times 300$. We see that, for this resolution, the information gap in equation \ref{eq:gain} is positive: the uniformed images are better compressed than the non-uniformed images, for all values of $p$. The information gap per sample and coordinate ${\cal G}/(Nd_{{\rm t}})$ is, as expected, an increasing function of the resolution -- see the inset of Figure \ref{fig:trainingevidences} -- indicating that the information gap increases faster than linearly in $d_{\rm t}$. In contrast, for the two lowest resolutions, decoupled coding does not imply a significant information gap. Indeed, for $w=25$ the information gap is negative, roughly minus one hundred bits per sample.
The information gap per sample of the ${\cal R}_{\rm D}$ coding increases rapidly with the number of pixels $d_{\rm t}$, and it reaches more than $10000$ bits per image for $w=250$. This is evident in Figure \ref{fig:gap_info_vs_resolution}, which shows the information gap per sample ${\cal G}/N_{\rm tr}$ of training set images as a function of the resolution. Figure~\ref{fig:gap_info_vs_resolution} also shows the description length of shape coordinates, $L_{{\cal L},p^*_{\rm s}}({\cal L}_{\rm tr})$, which is independent of the image resolution (horizontal line, see the details in the Supporting information). The information gap of the image degrees of freedom is comparable to the shape coordinates' description length $L_{{\cal L},p^*_{\rm s}}({\cal L}_{\rm tr})$ for $w=150$, but it is much larger for the largest resolution $w=250$. Summarising, for the largest image resolution, the overall description length of decoupled coding is much smaller than that of eigenface coding: for $d_{\rm t}\gtrsim 10^4$, the condition in equation \ref{eq:conditionongain} is satisfied.
\begin{figure}
\begin{center}
\makebox[\textwidth][c]{\includegraphics[width=\textwidth]{figures/gap_info_vs_resolution_all}}
\caption{Information gap for facial images in the training set. See the main text for details.}
\label{fig:gap_info_vs_resolution}
\end{center}
\end{figure}
A significant observation is that the standard deviation per pixel, $\bar v_{\rm t}^{1/2}$, of the order of $37$ units, is {\it not significantly smaller} in the database of uniformed images $\hat{\cal I}$. This means that uniformed images are more easily compressible not simply because the database is less varying, or more homogeneous, but because of the presence of stronger pairwise correlations between pixels in the uniformed images. Stronger correlations induce a more inhomogeneous spectrum of $C^{({\rm t})}$, say $\lambda^{({\rm t})}_1,\ldots,\lambda^{({\rm t})}_p$, and, consequently, a lower empirical entropy $\sf S$ of the associated Gaussian distribution, which, up to an additive constant, depends on the eigenvalues as $(1/2)\sum_{i\le p}\ln\lambda^{({\rm t})}_i$. Indeed, for the highest resolution, the excess of standard deviation per pixel and sample of non-uniformed images is $\simeq 0.8$. Neglecting the correlations, this would amount to an increment of the {\it uniform length} $l_0/(d_{\rm t}N)$ (or, equivalently, of $L_0/(d_{\rm t}N)$) of $\simeq 0.04$, which is one order of magnitude lower than the texture gap ${\cal G}_1$.
As a consistency analysis, we have calculated the information gap in two different ways: first, taking $p^*$ according to Bayesian model selection, $p^*=\arg\min_p L_{{\cal D}_{\rm tr},p}({\cal D}_{\rm tr})$; second, taking the value that maximises the validation-set (out-of-sample) likelihood, by $K$-fold cross-validation of the validation/training separation of the original database (cf. the details in the Appendix). Both ways of computing the training-set information gap are consistent within the (cross-validation) statistical errors (which is not obvious, especially considering that $d_{\rm t}\gg N$).
\subsubsection*{Information gap for known facial images in the training set that show different facial expressions}
In this analysis, we test the hypothesis, proposed in the introduction, that decoupled coding is particularly effective when encoding variants of known facial images which differ only in facial expression. By definition, variations in facial expression are expected to change mainly the shape coordinates, and much less the texture coordinates (which are independent of the positions of the landmarks and hence nearly independent of the facial expression). The information gap should increase in this situation, since the texture coordinates of facial images differing in expression should be more redundant, correlated and easily compressed -- or, in the language of probability, they should exhibit a larger likelihood.
To test this hypothesis, we computed the training-set information gap for two databases of length $N_{\rm tr}=200$: the former (called ``neutral'') consisting of neutral-expression images of $200$ different subjects, and the latter (called ``mixed'') consisting of both the neutral and the smiling portraits of the same $100$ (randomly selected) subjects. The red shadowed area in figure \ref{fig:gap_info_vs_resolution} indicates the difference in information gap between the ``mixed'' and the ``neutral'' training sets. While the information gap of the ``mixed'' database is indistinguishable from that of the ``full'' database of $N=400$ images, the ``neutral'' database presents a lower information gap.
This analysis is consistent with our initial hypothesis. Notice that this result is not a trivial consequence of the fact that the ``mixed'' database (consisting of $N=200$ portraits of the same $100$ subjects) is more easily compressible than the ``neutral'' one (consisting of $N=200$ portraits of $200$ different subjects): indeed, for {\it both uniformed and non-uniformed} facial images, the description length of the ``mixed'' database is lower than that of the ``neutral'' database. What is less trivial is that {\it the gap $\cal G$ is higher for mixed images}.
\subsubsection*{Information gap for unknown facial images in the test set that show different facial expressions}
\label{sec:info_gap_novel}
Here, we perform a variant of the above analysis, aimed at testing that the decoupling is particularly efficient when encoding {\it unknown} facial images (not belonging to the training set) that correspond to subjects that {\it are present} in the training set with a different facial expression.
We have already seen that the training set of uniformed images exhibits a lower description length (and empirical entropy) than the training set of raw database images. It is hence reasonable to suppose that decoupling does not only reduce the {\it bias error} (on the training set) but also the {\it variance error} in the description of unknown facial images belonging to a test set.\footnote{In the language of probability, we have seen before that the uniformed images present stronger between-pixel correlations $C_{ij}$ while presenting a roughly equal total variance (or ${\rm tr\,}(C)$). This is the reason why, for uniformed faces, the training-set empirical entropy ($\sum_i \ln\lambda_i$, up to a constant) is lower (hence the likelihood is higher). A lower test-set entropy would simply imply that also the term ${\rm tr\,}(C_{\rm te}\cdot C_{\rm tr}^{-1}/2)$ (the difference between test- and training-set entropies, up to a constant) is significantly lower. We will call the terms $\sum_i \ln\lambda_i$ and ${\rm tr\,}(C_{\rm te}\cdot C_{\rm tr}^{-1}/2)$ the bias and variance terms of the entropy, respectively.\label{footnote:biasvariance}}
To test this hypothesis, we calculated the information gap of equation \ref{eq:gain} for a test-set database $\cal I$ composed of $N/K=80$ (with $K=5$) images of smiling subjects {\it whose neutral-expression images do belong to the training set}. Notice that we will call such a set simply the ``test set''. All the information-theoretical quantities are then cross-validated over the $K$ training/test partitions of the original database (by means of the $K$-fold cross-validation algorithm).
Figure \ref{fig:gap_info_vs_resolution} reveals that the information gap of the test set is significantly higher than the information gap of the training set, with a p-value lower than $10^{-4}$ (notice the small error bars of the training-set information gap in the figure). The increment in information gap per sample (roughly $1/6$ of the test-set gap) corresponds to the bits that one saves by using the decoupled coding for unknown smiling faces not belonging to the training set. This implies, as expected, that the ${\cal R}_{\rm D}$ coding reduces both the bias and the variance errors for variants of known faces differing in facial expression only.
It is interesting to compare this result with the texture information gap of a different test-set database, which we call ``non-overlapping'', in which the single folds are composed of $N/K=80$ facial images corresponding to $40$ subjects with both smiling and non-smiling expressions. The non-overlapping test set contains, in this way, images of subjects {\it that are not present in the training set} (so that test and training sets contain information regarding different subject identities). Figure \ref{fig:gap_info_vs_resolution} shows that the texture gap of the non-overlapping database is even lower than the training-set gaps. This result shows that the decoupling code ${\cal R}_{\rm D}$ is less efficient at encoding unknown facial images corresponding to unknown subjects. Furthermore, this result shows that while uniforming variants of known faces that differ only in facial expression implies a reduction of both the bias and the variance terms of the entropy (see \ref{footnote:biasvariance}), uniforming facial images of unknown individuals leads to a reduction of the bias term {\it but to a positive increment of the variance term}. In any case, we stress that the overall difference between the entropies of the uniformed and non-uniformed versions of the non-overlapping test database is negative. Furthermore, the texture information gap ${\cal G}_1$ is still larger than ${\cal G}_2$. Consequently, the decoupling code is more efficient even when processing unknown-identity facial images, albeit in this case the description length gap is lower.
\subsubsection*{Summary of the results of the description length analyses}
In sum, our analysis shows that decoupled coding leads to a more efficient encoding of known facial images (i.e., in the training set) compared to eigenface coding, when the images are shown at high enough resolutions and in particular when they differ in facial expressions. Furthermore, our results show that the efficiency of the decoupled coding is magnified when the task consists in encoding unknown variants of known faces differing in facial expression.
\subsection*{Analysis of the performance of decoupled and eigenface coding in face processing tasks}\label{sec:classification_experiments}
So far, we have used the normative construct of description length to assess the efficiency of decoupled coding (${\cal R}_{\rm D}$) and compare it to eigenface coding (${\cal R}_{\rm E}$). Here, we ask how the normative advantage of decoupled coding translates into a better performance in facial processing tasks and what exactly these advantages are. For this, we compare the performance of eigenface and decoupled coding in three face processing simulations that help illustrate the most important differences between the coding schemes; namely, (1) sampling artificial facial images from the learned generative model, (2) recognising facial identity, and (3) reconstructing unknown faces. Please see the Supporting information for a supplementary (gender classification) simulation.
\subsubsection*{Simulation 1: Sampling synthetic faces from the trained generative model}
The generation of artificial faces is a widely used task in AI to demonstrate the quality of a learning algorithm or encoder. In this simulation, our goal is not to challenge the performance of mainstream machine learning approaches that use deep nets with millions of parameters \cite{dosovitskiy2016,otoole2018,suchow2018learned,liu2019}, but rather to test the hypothesis that a very simple (20 degrees of freedom) linear model can generate realistic images, when it is based on decoupled coding.
Each PCA-based representation of the training set ${\cal I}$ induces a simple generative model of faces (see the Supporting information for details). In particular, ${\cal R}_{\rm E}$ induces a multivariate Gaussian distribution in the space of facial images. In contrast, ${\cal R}_{\rm D}$ induces two separate Gaussian distributions, over uniformed texture and shape coordinates, respectively. It is possible to create {\it synthetic} facial images by sampling from the respective probability distributions of ${\cal R}_{\rm E}$ and ${\cal R}_{\rm D}$. In the case of ${\cal R}_{\rm D}$, after sampling from both probability distributions, it is necessary to de-uniformize the texture and shape coordinates (see the Supporting information for details about the de-uniformation procedure).
Figure \ref{fig:sampling} shows example synthetic images created by sampling ${\bf I}$ and $(\hat{{\bf I}},{\bm \ell})$ from the models ${\cal R}_{\rm E}$ and ${\cal R}_{\rm D}$, respectively. In both cases, we used $p=20$ degrees of freedom, randomly chosen among the first $40$ principal components of each model.\footnote{In particular, we sample $20$ principal components $x'_i$ from their respective distributions, where the index $i$ may take the values $1,\ldots,p=40$. The remaining $20$ coordinates among the first $40$ are set to zero.} Please note that, in general, the larger the value of $p$, the higher the dimension of (the vector space of) the sampled facial images. When small values of $p$ are used, the generative models produce low-dimensional variations of the average face; this implies that the synthetic faces are realistic (free from artefacts) but very stereotyped, with low variability. In contrast, using larger values of $p$ is a more compelling task, since the generative models are free to produce faces with high variability -- but at the same time it is harder for them to produce realistic faces that are free from artefacts.
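The sampling step itself can be sketched as follows. This is a minimal illustration on random stand-in data, assuming independent Gaussian principal components with variances given by the PCA eigenvalues; the de-uniformation (warping) step that recombines sampled textures and shapes into an image is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

def fit_pca(X):
    """Mean, eigenvalues (descending) and eigenvectors of the sample covariance."""
    mu = X.mean(axis=0)
    lam, V = np.linalg.eigh(np.cov((X - mu).T))
    order = np.argsort(lam)[::-1]
    return mu, np.clip(lam[order], 0.0, None), V[:, order]

def sample_faces(X, p, n_samples, rng):
    """Sample from the Gaussian induced by p-PCA on X: each retained principal
    component x'_i ~ N(0, lam_i); the remaining components are set to zero."""
    mu, lam, V = fit_pca(X)
    coeffs = rng.normal(size=(n_samples, p)) * np.sqrt(lam[:p])
    return mu + coeffs @ V[:, :p].T

# Toy stand-ins for uniformed textures and shape coordinates; for R_D each
# space is sampled separately, then the results would be de-uniformized.
textures = rng.normal(size=(80, 64))
shapes = rng.normal(size=(80, 10))
synth_texture = sample_faces(textures, 20, 5, rng)
synth_shape = sample_faces(shapes, 10, 5, rng)
```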
\begin{figure}
\begin{center}
\makebox[\textwidth][c]{\includegraphics[width=\textwidth]{figures/random_samples}}
\caption{Examples of synthetic faces sampled from the generative models of ${\cal R}_{\rm E}$ (top row) and a simple variant of the decoupling ${\cal R}_{\rm D}$ coding scheme, which we call concatenated coding, ${\cal R}_{\rm c}$ (bottom row). See the main text for details.}
\label{fig:sampling}
\end{center}
\end{figure}
Figure \ref{fig:sampling} permits appreciating that, with a relatively high value $p=20$, both eigenface coding (${\cal R}_{\rm E}$) and decoupled coding (${\cal R}_{\rm D}$) produce faces with high variability. However, only the faces produced by the latter are realistic and free from artefacts. This simulation therefore shows that a very simple linear model based on decoupled coding (but not on eigenface coding) can produce realistic and varied facial images.
Please note that for this comparison we consider a simple variant of the decoupling ${\cal R}_{\rm D}$ coding scheme, which we call concatenated coding, ${\cal R}_{\rm c}$ (see the Supporting information in section~\ref{sec:supplementary} for a formal description). The reason is that ${\cal R}_{\rm c}$ permits using a single number of principal components, $p$, in common with ${\cal R}_{\rm E}$ -- which therefore permits comparing the two neural codes with the same number of parameters. The results of figure \ref{fig:sampling} are qualitatively identical if one directly compares ${\cal R}_{\rm E}$ with ${\cal R}_{\rm D}$, varying $p_{\rm t}=p$ and fixing $p_{\rm s}$ to its maximum value ($d_{\rm s}=92$).
\subsubsection*{Simulation 2: Facial identity recognition}\label{sec:facial_identity}
The results of the description length analysis show that decoupled coding may be particularly suited to efficiently encode natural variants of known facial images, which consist in variations of facial expression. This is because variants of a known face may affect one set of coordinates (shape or texture) while leaving the other essentially unaffected. For example, recognising a known face with a different facial expression might benefit from the reuse of texture coordinates, which would be relatively unaffected.
In this simulation, we test whether and how the description length advantage of decoupled coding translates into a better capability to recognise faces with different facial expressions. For this, we exploit the fact that the FEI database includes $400$ facial images of $200$ subjects, with two images of each subject that vary only in facial expression: neutral or smiling. The task consists in recognising which of the $200$ images showing a neutral expression corresponds to a target image (excluded from the training set) in which the same person shows a smiling expression, and vice versa.
We implement the recognition task through a nearest-neighbour classifier, using a {\it distance} ${\sf d}_p({\bf z},{\bf x}(s))$ between the facial coordinates of the target smiling face $\bf z$ and those of all the $200$ neutral-expression images ${\bf x}(s)$. The distance ${\sf d}_p(\cdot,\cdot)$ between facial coordinates is the Mahalanobis ($+L_2$) distance, known to be a better measure of facial similarity than the simple Euclidean metric between images \cite{Moghaddam1998}\cite[Chs.~5-6]{Wechsler2007}.\footnote{Although surely not the most efficient method for a supervised classification analysis, we choose the Mahalanobis distance algorithm, since it is the one that uses only the information defining our working models, i.e., $C_p$ for each kind of coordinate.} The Mahalanobis metric between a pair of vectors is a scalar product between their (normalised) first $p$ principal components,\footnote{${\sf d}_p({\bf u},{\bf v})=[({\bf u}-{\bf v})^\dag\cdot E^\dag_p\cdot \Lambda_p^{-1}\cdot E_p\cdot ({\bf u}-{\bf v})]^{1/2}$, where $\Lambda_p$ is the diagonal matrix of the largest $p$ eigenvalues.} which depends on the eigenvalues and eigenvectors of the correlation matrix and, hence, on the training set. It is important to remark that the target image coordinates are excluded from the training set (see the details in the Supporting information).
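A truncated Mahalanobis nearest-neighbour matcher can be sketched in a few lines: project on the first $p$ principal axes of the gallery, divide each component by the square root of its eigenvalue, and match by Euclidean distance in that whitened space. The toy gallery below is random data, not the FEI images, and the helper names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(6)

def mahalanobis_nn(train, queries, p):
    """Nearest-neighbour matching with the rank-p Mahalanobis distance:
    whiten the first p principal components, then use Euclidean distance."""
    mu = train.mean(axis=0)
    lam, V = np.linalg.eigh(np.cov((train - mu).T))
    order = np.argsort(lam)[::-1][:p]
    E = V[:, order]                              # d x p principal axes
    s = np.sqrt(np.clip(lam[order], 1e-12, None))
    Zt = (train - mu) @ E / s                    # whitened gallery
    Zq = (queries - mu) @ E / s                  # whitened probes
    d2 = ((Zq[:, None, :] - Zt[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)                     # best-matching gallery index

# Toy gallery of 30 "identities"; probes are small perturbations of them
# (mimicking an expression change on otherwise stable coordinates).
gallery = rng.normal(size=(30, 40))
probes = gallery + 0.1 * rng.normal(size=gallery.shape)
pred = mahalanobis_nn(gallery, probes, p=15)
accuracy = (pred == np.arange(30)).mean()
```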
Figure \ref{fig:recognition} shows the results of the facial identity recognition task. The figure reports the misclassification error as a function of the number of principal components considered, $p$, for each kind of coordinate (uniformed images, non-uniformed images and shape coordinates, or ${\bf x}={\bf I}$, $\hat{\bf I}$, ${\bm \ell}$, respectively). The errors decrease and reach a plateau for both eigenface coding (${\cal R}_{\rm E}$) and decoupled coding (${\cal R}_{\rm D}$), but the latter consistently shows a better performance. Interestingly, this simulation permits appreciating the relative contributions of texture and shape coordinates to the task. As expected, almost all of the advantage of decoupled coding is due to texture coordinates, whereas the performance of shape coordinates is significantly lower, with a minimum error around $0.4$ (notice that in such a recognition task, the random-choice error rate is $1-1/N_{\rm train}\simeq 0.996$, see the Supporting information). However, we notice that this last conclusion depends on the arbitrary choice of the image resolution (determining the image dimension $d_{{\rm t}}=w\times h$) and on the number of landmarks $n_{\ell}$ (determining the shape coordinate dimension $d_{\rm s}=2n_{\ell}$). Using a larger number of landmarks (as is presumably the case in the neural code \cite{chang2017}) would enhance the relative relevance of shape coordinates.
Finally, figure \ref{fig:recognition} also reveals that the distance based on the concatenated code ${\cal R}_{\rm c}$, which exploits the correlation between shape and uniformed texture coordinates, does not perform significantly better than that based on uniformed images. This is due to the fact that the images contain much more information than the shape coordinates, and that shape and texture coordinates are only weakly correlated (see the Supporting information).
\begin{figure}
\begin{center}
\includegraphics[width=0.85\textwidth]{figures/recog_succ_rate_inset}
\caption{Performance in the face recognition task using the uniformed images, the non-uniformed images, and the shape coordinates. See the main text for explanation.}
\label{fig:recognition}
\end{center}
\end{figure}
\subsubsection*{Simulation 3: Reconstructing novel faces}
The reconstruction task consists in representing novel facial images that do not belong to the training set in terms of an expansion in $p$ principal components only. In mathematical terms, if ${\bf x}$ is a facial image, or its shape coordinates, the reconstruction of ${\bf x}$ in terms of the first $p$ principal axes is ${\bf x}_p=E_p^\dag\cdot E_p\cdot {\bf x}$, where $E_p$ is the $p\times d$ matrix of the first $p$ (row) eigenvectors.\footnote{If ${\bf x}$ corresponds to an image, the reconstructed image is different from the original one even with $p=N$ coordinates, since the matrix $E_p^\dag\cdot E_p$ is different from the identity matrix, its rank being $p\le N< d_{{\rm t}}$.} In the case of the ${\cal R}_{\rm E}$ code, we simply project the original image (${\bf x}={\bf I}$) onto the first $p$ eigenfaces.
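The truncated expansion ${\bf x}_p=E_p^\dag\cdot E_p\cdot {\bf x}$ can be sketched as follows. This is an illustrative implementation only; as an additional assumption, the training mean is subtracted before projecting and added back afterwards:

```python
import numpy as np

def pca_reconstruct(x, X_train, p):
    """Reconstruct x from its first p principal components: x_p = E_p^T E_p x.

    E_p is the (p x d) matrix of the top-p (row) eigenvectors of the
    training-set covariance; since rank(E_p^T E_p) = p < d in general,
    the reconstruction differs from x (as noted in the text).
    """
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train - mu, rowvar=False)
    lam, E = np.linalg.eigh(cov)          # ascending eigenvalue order
    Ep = E[:, ::-1][:, :p].T              # (p, d) top-p row eigenvectors
    return mu + Ep.T @ (Ep @ (x - mu))    # project, then expand back
```

With $p=d$ the projector becomes the identity and the reconstruction is exact; for smaller $p$ the error grows as components are discarded.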
Figure \ref{fig:reconstruction} shows the reconstruction of a novel face according to ${\cal R}_{\rm E}$ and ${\cal R}_{\rm c}$ for different values of $p$. The figure illustrates that ${\cal R}_{\rm E}$ produces a border artifact; this is because linear combinations of different facial images with different facial contours result in an image which tends to be blurred at the margin of the face. By contrast, the border artifact is absent for ${\cal R}_{\rm c}$, and the representation for high $p$ is slightly more faithful. The results for different test-set images are qualitatively identical. As for Simulation 1, the results are qualitatively identical if one directly compares ${\cal R}_{\rm E}$ with ${\cal R}_{\rm D}$, varying $p_{\rm t}=p$ and fixing $p_{\rm s}$ to its maximum value $d_{\rm s}=92$.
\begin{figure}
\begin{center}
\makebox[\textwidth][c]{\includegraphics[width=\textwidth]{figures/reconstruction_2codes}}
\caption{Reconstruction of a test-set facial image with $p$ principal components according to the representations: eigenface coding (${\cal R}_{\rm E}$, first row) and mixed coding (${\cal R}_{\rm c}$, second row). The 5 columns of the matrix of images represent, respectively: $p=5$, $20$, $50$, $100$, $395$, and the original image. The image is $100\times 120$.}
\label{fig:reconstruction}
\end{center}
\end{figure}
\subsubsection*{Summary of the results of the face processing tasks}
In sum, our analysis shows that a tiny model using decoupled coding and only 20 degrees of freedom can be sampled to produce realistic synthetic images, whereas sampling from a model using eigenface coding produces less realistic faces with artefacts. Furthermore, decoupled coding greatly facilitates the recognition of familiar faces with novel facial expressions -- especially thanks to the fact that texture coordinates remain stable across different expressions. Finally, decoupled coding outperforms eigenface coding in the reconstruction of novel faces. Please see the Supporting information for an additional simulation of gender classification using the two coding schemes.
\section*{Discussion and Conclusions}
\label{sec:discussion}
Recent research in neuroscience revealed that the neural coding for facial identity in the inferotemporal (IT) cortex of macaques \cite{chang2017,chang2021explaining} uses a decoupled coding scheme in which distinct subpopulations of neurons project faces onto two distinct sets of axes, which encode the geometric shape of a face and its texture separately. From a computational perspective, this decoupled coding (${\cal R}_{\rm D}$) affords accurate face processing, permitting the linear decoding of facial features from single cell responses; and it outperforms widely used schemes in vision research, such as eigenface coding (${\cal R}_{\rm E}$) \cite{chang2017,chang2021explaining,higgins2021}.
In this article, we aim to elucidate the normative reasons for this advantage, by appealing to the notion of \emph{description length}, which permits quantifying the efficiency of alternative neural coding schemes in info-theoretic terms. The general idea is that the best model is the one that minimizes the amount of memory (bits) required to encode both the data and the model parameters themselves \cite{rissanen1978modeling,rissanen1999hypothesis}.
We compared the efficiency of two alternative schemes: decoupled coding (${\cal R}_{\rm D}$), which requires coding both the principal components of uniformed facial images and their shape (landmark) coordinates; and the widely used eigenface coding (${\cal R}_{\rm E}$), which only requires storing the principal components of the original images. Our simulations using the same FEI database as in the monkey study of \cite{chang2017} show that decoupled coding (${\cal R}_{\rm D}$) requires less information to represent the images compared to eigenface coding (${\cal R}_{\rm E}$), even though the latter does not require coding for the geometric coordinates of faces. Remarkably, the efficiency gain of decoupled coding (${\cal R}_{\rm D}$) is already apparent in a small database in which $N$ is $\sim 200$; and it is especially prominent for high resolution images and for variants of training set images that only differ in facial expressions.
Furthermore, we found that the probabilistic generative model induced by decoupled coding (${\cal R}_{\rm D}$) achieves good performance in face processing tasks, including sampling artificial or novel faces, recognising face identity and reconstructing novel faces with $p$ principal components. By contrast, a model using eigenface coding (${\cal R}_{\rm E}$) performs less accurately and produces less realistic faces with artefacts.
Taken together, these results shed light on the normative advantages of the decoupled coding for faces that was empirically reported in the inferotemporal (IT) cortex of macaques \cite{chang2017,chang2021explaining}, showing that it is both more efficient (in info-theoretic terms) and more accurate than the alternative eigenface scheme widely used in computer vision.
The fact that decoupled coding is more efficient than eigenface coding seems paradoxical, given that the former requires encoding both the principal components of uniformed facial images and their shape (landmark) coordinates. However, despite this apparent disadvantage, decoupled coding is more efficient, because the database of uniformed faces can be described with fewer principal components. The pixels in it are more correlated and hence can be compressed more (even though, perhaps counter-intuitively, the total variance of the set of uniformed facial images is not significantly lower than that of non-uniformed facial images).
The accuracy advantage of decoupled coding is mainly due to the fact that it separates facial landmarks ({\it shape} coordinates) and the image texture at fixed landmark position ({\it texture} coordinates) -- two sets of coordinates that carry information regarding naturally different aspects of human faces and can vary independently. Namely, variations in facial expressions and variations in perspective (e.g., small rotations) mainly modify shape coordinates, but not texture coordinates. Rather, variations in luminosity, suntan, or make-up only modify texture coordinates \cite{laurentini2014}. This perspective helps explain our finding that decoupled coding is more advantageous when encoding variants of known faces (in the training set) or unknown faces (in the test set). This is because when processing variants, one of the two sets of (shape and texture) coordinates will tend to remain the same as in the known, reference image.
These advantages of decoupled coding come at the price of performing some prerequisite nonlinear computations over facial images, such as landmark detection and image deformation (see also the Supporting information). We speculate that these computations might be putatively realised by early visual areas, which lie below IT in the neural hierarchy. This speculation remains to be tested in future research.
\section*{Supporting information}
\label{sec:supplementary}
\paragraph*{Relation with Bayesian Model Selection.} Bayesian model selection consists in choosing the model $\cal M$ that maximises the Bayesian evidence of a given database ${\cal D}$. The best model $\cal M$ is, equivalently \cite{mackay2003}, the one that minimises the description length $\min_{\cal M} L_{\cal M}({\cal D})$. To verify the validity of the condition (\ref{eq:conditionongain}), we have, instead, compared the description length of {\it different databases}: $\cal I$, $\hat{\cal I}$, $\cal L$, according to {\it the same probabilistic model}, which corresponds to the normal distributions (whose maximum likelihood correlation matrices take, respectively, the values $C_{p}$, $C^{({\rm t})}_{p_{{\rm t}}}$ and $C^{({\rm s})}_{p_{{\rm s}}}$, for the three databases).
This is, hence, the opposite situation with respect to Bayesian model selection, in which one compares the evidence of the same database according to different models. It is important to remark that, in the present work, we do not aim to perform a comparison, on Bayesian grounds, between eigenface and decoupled codings, understood as probabilistic models {\it over the common database of original, non-uniformed facial images}. Indeed, the representation ${\cal R}_{\rm D}$ induces a probability distribution in the space of non-uniformed images ${\bf I}$, that is no longer a Gaussian distribution (even if the distributions over $\hat {\bf I}$ and $\bm \ell$ are) since it involves the nonlinear image deformation operations, that we have completely neglected in our information-theoretical analysis. Within our working hypothesis, we neglect the {\it uniformation} ${\cal I},{\cal L} \to \hat{\cal I}$ (previous to the PCA) and {\it de-uniformation} ${\cal I}_p,{\cal L}_p \leftarrow \hat{\cal I}',{\cal L}'$ (posterior to the PCA) operations from the information-theoretical analysis.
In other words, here we consider three probabilistic descriptions over separate spaces: non-uniformed images, uniformed images and shape coordinates. Our conclusions merely rely on the information-theoretical interpretation of the Bayesian evidence of a database according to a model, which is related to the amount of information needed to store the database in terms of the model's latent variables, within a given precision. The decoupled code, understood as a probabilistic model {\it on the space of the original images}, would be much more complex than Gaussian. It would implicitly contain, in some (texture) latent variables, a description of the input image somehow invariant under shape transformations; other (shape) latent variables would be invariant under texture transformations of the original image. Our current analysis is to be understood as an estimation of the information-theoretical gain of the facial representation in terms of (principal components of) uniformed images and landmarks, neglecting the non-linear operations of landmark detection and image deformation that lead to these two facial coordinates, from the original database of images.
Instead, we do perform a genuine Bayesian model selection when choosing the values of ${p}$, ${p_{{\rm t}}}$, ${p_{{\rm s}}}$ that minimise the description length (that maximise the Bayesian evidence) of each database, i.e., of each type of coordinate.
\paragraph*{Image uniformation and de-uniformation.} The creation of the {\it uniformed} texture coordinates $\hat{{\bf I}}$ from the original images ${{\bf I}}$ and their shape coordinates $\bm\ell$ in ${\cal R}_{\rm D}$ is implemented, as said before, through image deformation algorithms based on similarity transformations \cite{image_deformation}. Such algorithms map the original image into an image whose landmark positions $\bm \ell$ will now occupy their average value in the database $\<{\bm\ell}\>$. Vice versa, the reconstruction of novel images in ${\cal R}_{\rm D}$ requires creating a non-uniformed facial image from the reconstructed shape and texture coordinates ${\bm\ell}_p,\hat{\bf I}_p$. This operation will be called {\it de-uniformation}:
\begin{eqnarray}
({\bm \ell},{\bf I}) &\begin{subarray}{c} \<{\bm \ell}\> \\ \rightarrow \end{subarray}& (\<{\bm \ell}\>,\hat{\bf I}) \qquad \textrm{uniformation} \\
({\bm \ell},{\bf I}) &\begin{subarray}{c} {\bm \ell} \\ \leftarrow \end{subarray}& (\<{\bm \ell}\>,\hat{\bf I}) \qquad \textrm{de-uniformation}
\end{eqnarray}
where the subscripted arrow indicates the image deformation algorithm transforming an image $({\bm \ell}_1,{\bf I}_1)\substack{{\bm\ell}_2 \\ \to} ({\bm \ell}_2,{\bf I}_2)$ so that the pixel values of ${\bf I}_2$ in the positions given by ${\bm \ell}_2$ are those of ${\bf I}_1$ in ${\bm \ell}_1$ (say, ${\bf I}_2(\vec{{\ell}_2}_j)={\bf I}_1(\vec{{\ell}_1}_j)$ where $\vec{{\ell}_1}_j$ are the original Cartesian positions of the $j$-th landmark), and the rest of the pixel values of ${\bf I}_2$ are adjusted accordingly, under the requirement of smoothness. As a consistency check, we have verified that uniforming and then de-uniforming database images leads to new images that are visually indistinguishable from the initial ones.
In Fig.~\ref{fig:facedeform} we illustrate the effect of the image deformation algorithm used on a picture of the FEI database.
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{figures/facedeform.png}\\
\caption{An example of the image deformation software in use. The centre image is the deformation of the right image onto the landmarks corresponding to the left image.}
\label{fig:facedeform}
\end{center}
\end{figure}
\paragraph*{Likelihood and evidence of the normal distribution.} We report the well-known expression for the Bayesian evidence, and related formulae, of the normal distribution associated with the $p$-PCA representation with $p$ principal components. Given a database $\cal D$ composed of $N$ $d$-dimensional vectors, $p$-PCA induces a likelihood probability distribution which is the normal distribution (supposing null averages):
\begin{align}
\ln P({\cal D}|\bm{\theta},p)/(Nd) = -\frac{1}{2} \left[ \ln (2\pi) + \ln\det(C) +\text{tr}\left( C^{-1} \cdot \Sigma \right) \right]
\end{align}
where $\Sigma$ is the unbiased estimator of the correlation matrix of the data ${\cal D}$, and where the parameter $\bm{\theta}=C$ is the theoretical correlation matrix, which in $p$-PCA is constrained to have its $d-p$ lowest eigenvalues equal to a common noise-level value $v$. The maximum-likelihood estimates $\bm{\theta}^*$ for $C$ and $v$ are: $C^*=U\Lambda U^\dag$, where $U$ is an orthogonal matrix whose top $p$ eigenvectors are those of $\Sigma$, and where the diagonal matrix $\Lambda$ contains the top $p$ eigenvalues of $\Sigma$, $\Lambda_{ii}=\lambda_i$ for $i\le p$, with the remaining $d-p$ diagonal elements equal to $\Lambda_{ii}=v_p$ for $i>p$, where $v_p=(d-p)^{-1}\sum_{j>p}\lambda_j$.
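The maximum-likelihood construction of $C^*$ and $v_p$ above can be sketched directly (illustrative only; a synthetic $\Sigma$ stands in for the empirical correlation matrix):

```python
import numpy as np

def ppca_ml_correlation(Sigma, p):
    """Maximum-likelihood p-PCA correlation matrix C* = U Lambda U^T.

    The top-p eigenvalues/eigenvectors of the empirical matrix Sigma
    are kept; the remaining d-p eigenvalues are replaced by their
    common noise level v_p = (d-p)^{-1} sum_{j>p} lambda_j.
    """
    lam, U = np.linalg.eigh(Sigma)
    lam, U = lam[::-1], U[:, ::-1]      # descending eigenvalue order
    d = len(lam)
    v_p = lam[p:].mean() if p < d else 0.0
    Lam = np.concatenate([lam[:p], np.full(d - p, v_p)])
    return (U * Lam) @ U.T, v_p         # U diag(Lam) U^T
```

Note that the construction preserves the trace of $\Sigma$, since the discarded eigenvalues are replaced by their mean.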
For completeness, we report the expressions for the description length, the empirical entropy and the Occam factor, making explicit the dependence on the number of principal components $p$:
\begin{align}
L_p({\cal D}) &= -\ln P_p({\cal D}) - d\ln \epsilon \\
&={\sf S}_p({\cal D}|\bm{\theta}^*)+{\sf O}_p(\bm{\theta}^*) \\
{\sf S}_p({\cal D}|\bm{\theta}) &=-\ln P_p({\cal D}|\bm{\theta})-d\ln \epsilon \\
\ln P_p({\cal D}) &= \ln P_p({\cal D}|\bm{\theta}^*) - {\sf O}_p(\bm{\theta}^*)
\end{align}
where, as mentioned in the main text, in these equations $\bm{\theta}^*$ refers to the maximum likelihood estimator. The equation for the Bayesian evidence (under certain assumptions on the prior variance) takes the form, up to a constant factor, and for sufficiently large $N$ \cite{minka2000}:
\begin{align}
\ln P({\cal D})/(Nd) &\simeq \ln P({\cal D}|\bm{\theta}^*)/(Nd) - {\sf O}(\bm{\theta}^*)/(Nd) \\
-{\sf O}(\bm{\theta}^*)/(Nd) &:= \frac{1}{2Nd}\left( (m+p+1) \ln(2 \pi) -p \ln N -\ln|A| + \ln p_U \right) \\
m &:= dp - p(p+1)/2 \\
\ln p_U &:= -p\ln 2 + \sum_{j=1}^p \left[\ln \Gamma\left(\frac{d-j+1}{2}\right) - \frac{d-j+1}{2}\ln\pi\right]
\end{align}
and where:
\begin{align}
\ln |A| &= \sum_{i=1}^p \Bigg\{ \sum_{j=i+1}^d \left[ \ln(\hat\lambda_j^{-1}-\hat\lambda_i^{-1}) + \ln(\lambda_i-\lambda_j) \right] \Bigg\} + m \ln N
\end{align}
where the $\lambda$'s are the eigenvalues of $\Sigma$ in decreasing order, and $\hat\lambda_j=\lambda_j$ for $j\le p$, while $\hat\lambda_j=v_p$ for $j>p$.
In the case $d>N$, this last term takes the form:
\begin{align}
\ln |A| &= \sum_{i=1}^p \Bigg\{ \sum_{j=i+1}^p \left[ \ln(\lambda_j^{-1}-\lambda_i^{-1}) + \ln(\lambda_i-\lambda_j) \right] + \\
&+ (d-p)\ln(v_p^{-1}-\lambda_i^{-1}) + \sum_{j=p+1}^N \ln(\lambda_i-\lambda_j) + \\
&+(d-N) \ln\lambda_i \Bigg\} +m \ln N
\end{align}
while for $d\le N$, it is:
\begin{align}
\ln |A| &= \sum_{i=1}^p \Bigg\{ \sum_{j=i+1}^p \left[ \ln(\lambda_j^{-1}-\lambda_i^{-1}) + \ln(\lambda_i-\lambda_j) \right] + \\
&+ (d-p)\ln(v_p^{-1}-\lambda_i^{-1}) + \sum_{j=p+1}^d \ln(\lambda_i-\lambda_j) \Bigg\}+m \ln N
.
\end{align}
\paragraph*{Likelihood and evidence of shape coordinates.} For shape coordinates, and for the databases considered here, it is $d_{\rm s}<N$. In figure \ref{fig:evidences_shape} we show (left) the behaviour of the training- and test-set (logarithms of the) likelihoods, along with the training- and test-set (logarithms of the) Bayesian evidences of shape coordinates (respectively, $\ln P({\cal L}_{\text{tr}}|C^{{\rm s}})$, $\ln P({\cal L}_{\text{te}}|C^{{\rm s}})$, $\ln P({\cal L}_{\text{tr}})$, $\ln P({\cal L}_{\text{te}})$). We observe that the training-set evidence behaviour is qualitatively similar to that of the test-set likelihood (contrary to the case of texture coordinates, see below).
When commenting on the results of figure \ref{fig:gap_info_vs_resolution}, we mentioned the fact that the empirical entropy of shape coordinates does not depend on the resolution. Indeed, changing the resolution in the database of shape coordinates amounts to multiplying the landmarks' Cartesian coordinates by a factor ($w/w'$ for horizontal, $h/h'$ for vertical coordinates). However, the relevant quantity in these experiments is not the absolute value of the coordinates in the $w\times h$ grid units, but their normalised value in units of the image height $h$. If normalised coordinates are considered, the precision should be consequently normalised to be inversely proportional to $h$. In figure \ref{fig:evidences_shape}-Right we plot the training empirical entropy $S_p({\cal L}_\text{tr}|C^{{\rm s}})=\ln P({\cal L}_\text{tr}|C^{{\rm s}})-d_{\rm s}\ln \epsilon_{\rm s}$ for different resolutions, using the resolution-dependent precision $\epsilon_{\rm s} = 0.1 (h_\text{max}/h)$. The overlap of different curves is a consequence of the fact that no information has been lost when scaling both the coordinates and the precision.
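The scaling argument can be checked numerically: rescaling Gaussian data by a factor $s$ while rescaling the precision by the same factor leaves the empirical entropy unchanged, since the shift $d\ln s$ in the negative log-likelihood cancels against the $-d\ln\epsilon$ term. A minimal sketch with synthetic data standing in for the landmark coordinates:

```python
import numpy as np

def empirical_entropy(X, eps):
    """Per-sample Gaussian empirical entropy S = -ln P - d ln(eps),
    for the maximum-likelihood Gaussian fitted to X."""
    N, d = X.shape
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / N                    # ML covariance estimate
    sign, logdet = np.linalg.slogdet(cov)
    # per-sample NLL of the ML Gaussian: tr(C^{-1} Sigma) = d exactly
    nll = 0.5 * (d * np.log(2 * np.pi) + logdet + d)
    return nll - d * np.log(eps)

rng = np.random.default_rng(3)
L = rng.normal(size=(200, 6))     # stand-in for landmark coordinates
s = 0.5                           # resolution change h'/h
S1 = empirical_entropy(L, eps=0.1)
S2 = empirical_entropy(s * L, eps=0.1 * s)
# S1 and S2 coincide: scaling coordinates and precision together
# leaves the empirical entropy unchanged
```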
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{figures/likelihoods_evidences_comparison}
\includegraphics[width=12cm]{figures/testempiricalentropy_geom_vs_p}
\caption{Left: Shape coordinates' likelihoods and evidences (in BIC approximation) of the test and training sets. Right: Empirical entropy of shape coordinates for the full $N_{\rm tr}=400$ training set, for different resolutions.}
\label{fig:evidences_shape}
\end{center}
\end{figure}
\paragraph*{Likelihood and evidence of texture coordinates.} In figure \ref{fig:textures_shape} we show the behaviour of the training- and test-set (logarithms of the) likelihoods, along with the training- and test-set (logarithms of the) Bayesian evidences of texture coordinates. In this case, unlike for shape coordinates, $d_{\rm t}>N$; the BIC approximation for the evidence is no longer good, and the evidence behaves differently from the test-set likelihood. We conclude that, in order to perform model selection in this case, or to estimate the Occam contribution to the description length, it is necessary to use the aforementioned expression of the Bayesian evidence due to Minka.
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{figures/likelihoods_evidences_texturecoords}
\caption{Uniformed texture coordinates' likelihoods and evidences of the test and training sets.}
\label{fig:textures_shape}
\end{center}
\end{figure}
\paragraph*{The concatenated code representation, ${\cal R}_{\rm c}$} (see figure \ref{fig:concatenated_code}), simply consists in concatenating the uniformed texture and shape coordinates into a single vector ${\bf y}=({\bm \ell},\hat{\bf I})$, and keeping the first $p_\text{c}$ principal components of the set of concatenated vectors ${\bf y}'=E^{(\text{c})}{\bf y}$, hence treating shape and texture coordinates on the same footing. The concatenated code would exactly coincide with ${\cal R}_{\rm D}$ (taking $p_\text{c}=p_{\rm s}+p_{\rm t}$) for an image database in which shape and texture coordinates were completely uncorrelated (say, $\<\ell_m I_i\>=0$ $\forall m,i$). The performance of the ${\cal R}_{\rm c}$ code in the face processing tasks presented in the Results section turns out to be almost identical to that obtained using texture coordinates only. The reason is that shape coordinates carry a lower amount of aggregated information and, in any case, the correlations between shape and texture coordinates are significantly smaller than those in the diagonal blocks of $C^{(\text{c})}$. The advantage of using ${\cal R}_{\rm c}$ is that one may fix a single number of principal components. The daydream generation of novel facial images with the ${\cal R}_{\rm D}$ code (fixing $p_{\rm s}=d_{\rm s}$ at its maximum value) leads to results almost identical to those of ${\cal R}_{\rm c}$ in figure \ref{fig:sampling}.
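The construction of the concatenated code can be sketched as follows (illustrative only; synthetic arrays stand in for the shape and uniformed texture coordinates, and the mean is subtracted before the PCA as an additional assumption):

```python
import numpy as np

def concatenated_code(L, I_hat, p_c):
    """R_c: concatenate shape (L) and uniformed texture (I_hat)
    coordinates and keep the first p_c principal components of the
    joint vectors y = (l, I_hat).

    L     : (N, d_s) shape coordinates
    I_hat : (N, d_t) uniformed texture coordinates
    """
    Y = np.hstack([L, I_hat])                # y = (l, I_hat)
    Yc = Y - Y.mean(axis=0)
    cov = Yc.T @ Yc / (len(Y) - 1)           # C^(c)
    lam, E = np.linalg.eigh(cov)
    Ep = E[:, ::-1][:, :p_c].T               # top p_c row eigenvectors
    return Yc @ Ep.T                         # y' = E^(c) y
```

By construction, the retained components are mutually decorrelated, their variances being the largest eigenvalues of $C^{(\text{c})}$.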
\begin{figure}
\begin{center}
\includegraphics[width=5cm]{figures/mixedcode-crop}
\caption{Schematic representation of the concatenated code.}
\label{fig:concatenated_code}
\end{center}
\end{figure}
\paragraph*{Details of the classification algorithms.} The classification tasks are performed via a nearest-neighbour classifier: every vector $\bf{x}$ is assigned to the class that minimizes the distance from $\bf{x}$. If a class contains more than one element, as in the gender classification task (in which the male and the female classes contain $200$ vectors each, corresponding to half of the raw FEI database), the distance is computed between $\bf{x}$ and the average of the elements belonging to the class.
For the gender identification task we follow a leave-one-out approach: for each vector $\bf{x}$, the training set (from which we compute the correlation matrix, defining in its turn the Mahalanobis distance ${\sf d}_p(\cdot,\cdot)$) is composed of all the database vectors except for $\bf{x}$ itself. This training set is also the set from which the average vector of each class is constructed.
In the facial recognition task, we use a more economical strategy: we construct $K=5$ training/test-set divisions (with $N_\text{te}=400/K=80$, $N_\text{tr}=320$) by $K$-folding, in such a way that the test set contains at most one vector per individual, and that it contains $N_\text{te}/2$ vectors corresponding to smiling portraits, and $N_\text{te}/2$ corresponding to neutral portraits. For each of the vectors of facial coordinates in the test set, we search for its nearest neighbour among the $N_\text{tr}=320$ vectors of facial coordinates in the training set. The training set is, again, the set from which we compute $C_p$ and consequently define ${\sf d}_p(\cdot,\cdot)$. Afterwards, the average value and the standard deviation of the mean of the success rate is computed by cross-validation over the $K=5$ iterations.
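The fold construction can be sketched as follows, under a hypothetical indexing convention (not stated in the text) in which individual $i$ has neutral portrait $i$ and smiling portrait $200+i$. Fold $k$ takes the neutral portraits of one chunk of individuals and the smiling portraits of the previous chunk, so that each fold contains 40 neutral and 40 smiling portraits of 80 distinct individuals:

```python
import numpy as np

def recognition_folds(n_ids=200, K=5, seed=0):
    """Build K disjoint test folds of size 2*n_ids/K such that each
    fold holds at most one portrait per individual, half smiling and
    half neutral. Indexing convention (an assumption): individual i
    has neutral portrait i and smiling portrait n_ids + i."""
    rng = np.random.default_rng(seed)
    ids = rng.permutation(n_ids)
    per = n_ids // K                         # 40 individuals per chunk
    folds = []
    for k in range(K):
        neutral = ids[k * per:(k + 1) * per]           # 40 neutral
        kp = (k - 1) % K                               # previous chunk
        smiling = ids[kp * per:(kp + 1) * per] + n_ids # 40 smiling
        folds.append(np.concatenate([neutral, smiling]))
    return folds
```

Since consecutive chunks are disjoint, no individual appears twice within a fold, while the union of the folds covers all $400$ portraits exactly once.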
\paragraph*{Results of the gender classification task.} In figure \ref{fig:genderclassification} we present the results of the gender classification task. We observe that the shape coordinates alone are sufficient to achieve roughly $90\%$ success with fewer than $30$ PC's. Consistently with the rest of the article's results, the classification performed in terms of (principal components of) uniformed facial images achieves higher success rates than that using (principal components of) the original facial images (i.e., the ${\cal R}_{\rm E}$ representation). Furthermore, the success-rate plateau is reached for a lower number of PC's ($p\simeq30$ versus $p\simeq40$ for ${\cal R}_{\rm E}$).
\begin{figure}[]
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth]{figures/gender_succ_rate-comparison.pdf}
\caption{}
\label{subfig:gender_far}
\end{subfigure}
\quad
\begin{subfigure}{0.48\textwidth}
\includegraphics[width=\linewidth]{figures/gender_focus.pdf}
\caption{}
\label{subfig:gender_close}
\end{subfigure}
\caption{The success rates in the gender classification task are displayed (left) as functions of the number $p$ of principal components; (right) the same data (except for the shape coordinates) are shown in a close-up.}
\label{fig:genderclassification}
\end{figure}
\paragraph*{Different regularisation schemes.} For each generic set of facial coordinates (say, ${\cal D}$), we have so far estimated its description length according to the $p$-PCA model, whose number of principal components $p$ is the one that minimises the description length, $p^*=\arg \min_p L_{{\cal D},p}({\cal D})$. The inferred probability distribution is a normal distribution whose correlation matrix $C_{p^*}$ is consequently different from the empirical correlation matrix, say $C_{p=\min\{N,d\}}$, since not all empirical eigenvalues and eigenvectors are statistically significant given the database finiteness.\footnote{Strictly speaking, in the $N<d$ case, the inferred correlation matrix $C_p$ is different from the empirical matrix even if $p=N$, since it has to be regularised so that its rank is $d$ (and not $N$).} The normal distribution whose correlation is the empirical matrix would correspond, instead, to maximum-likelihood inference.
Actually, there are different ways, besides $p$-PCA, in which the correlation matrix may be inferred beyond the maximum-likelihood criterion. An alternative is {\it linear (identity) shrinkage} (see, for example, \cite{bun2017}). Linear shrinkage leads to a correlation matrix which is a convex combination between the unbiased (maximum-likelihood) empirical estimator $C$ and a completely biased (and null-variance) matrix, such as the identity matrix in $d$ dimensions, $1_d$. In other words, the ``regularised'' shrunk matrix is $C_\alpha=(1-\alpha) C+\alpha\, 1_d$, where $\alpha$ is a real number in $[0,1]$ that may be chosen by maximum (cross-validated) out-of-sample likelihood. In the $p$-PCA scheme, $p=0$ and $p=\min\{N,d\}$ are the arguments of the minimum and maximum training likelihood respectively, and $p^*$ is comprised between them. Within the shrinkage scheme, these extreme cases correspond to $\alpha=1$ and $0$, respectively.
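A minimal sketch of linear shrinkage with the out-of-sample choice of $\alpha$ follows. This is illustrative only: a simple grid search over $\alpha$, with synthetic data standing in for the facial coordinates, rather than any particular estimator from the literature:

```python
import numpy as np

def shrunk_correlation(C, alpha):
    """Linear (identity) shrinkage: C_alpha = (1 - alpha) C + alpha 1_d."""
    return (1 - alpha) * C + alpha * np.eye(len(C))

def gaussian_nll(X, C):
    """Average negative log-likelihood of a zero-mean Gaussian N(0, C)."""
    sign, logdet = np.linalg.slogdet(C)
    Ci = np.linalg.inv(C)
    quad = np.einsum('ni,ij,nj->n', X, Ci, X).mean()
    return 0.5 * (len(C) * np.log(2 * np.pi) + logdet + quad)

def choose_alpha(X_train, X_test, grid=np.linspace(0.01, 0.99, 50)):
    """Pick alpha by maximum out-of-sample (test) likelihood."""
    C = X_train.T @ X_train / len(X_train)   # ML estimator
    nlls = [gaussian_nll(X_test, shrunk_correlation(C, a)) for a in grid]
    return grid[int(np.argmin(nlls))]
```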
\begin{figure}
\begin{center}
\makebox[\textwidth][c]{\includegraphics[width=\textwidth]{figures/gap_info_vs_resolution_all_withshrinkage}}
\caption{The same as figure \ref{fig:gap_info_vs_resolution} but with the addition of the test-set texture gap per sample ${\cal G}_1/N$ computed with the shrinkage regularisation method.}
\label{fig:gap_info_vs_resolution_shrinkage}
\end{center}
\end{figure}
In order to check the robustness of our results with respect to the regularisation scheme, we have computed the information gaps (actually, the gaps in empirical entropy)\footnote{We note that, in the case of $p$-PCA, and for texture coordinates, the texture gap ${\cal G}_1$ is essentially given by the gap between empirical entropies. The difference between the Occam factors of non-uniformed and uniformed images is negligible compared to it.} resulting from the normal probability distributions associated not with $p$-PCA but with linear shrinkage. We have observed that the results are qualitatively consistent with those presented here. While the landmark gap ${\cal G}_2$ is consistent with the one shown in figure \ref{fig:gap_info_vs_resolution}, the description length gap ${\cal G}_1$ is significantly larger for the largest resolution, as can be seen in figure \ref{fig:gap_info_vs_resolution_shrinkage}. Consequently, the information gap is even larger when regularising the correlation matrices with the shrinkage method.
\paragraph*{Visualisation of the eigenvectors of the concatenated code ${\cal R}_{\rm c}$.} Figure \ref{fig:principal_axes_nc} presents a graphical visualisation of the first five principal axes of the whole database according to the concatenated code ${\cal R}_{\rm c}$ (the first five eigenvectors of $C^{(\text{c})}$).
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{figures/conc_PCs.png}
\caption{First five principal axes of the concatenated code ${\cal R}_{\rm c}$ (the five largest-eigenvalue eigenvectors of $C^{(\text{c})}$). The $i$-th row of the table represents points whose coordinates, in the basis of the principal axes, are all zero except for the $i$-th one, which ranges from $-2\sigma$ (left) to $+2\sigma$ (right); $\sigma$ is taken equal to the square root of the largest eigenvalue $\lambda_1$ of the correlation matrix. In other words, the image ${\bf I}$ in the $i$-th row, $j$-th column is obtained by de-uniformation $(\<\bm \ell\>,\hat{\bf I}) \to ({\bm \ell},{\bf I})$, where ${\bm \ell}$ and $\hat {\bf I}$ are obtained as $({\bm\ell},\hat{\bf I})={\bf y}=E^\dag\cdot {\bf y}'$, and where ${\bf y}'$ is the vector whose principal components are all null except for the $i$-th, $y'_i=(j-3)\lambda_1^{1/2}$, and $E$ is the matrix of row eigenvectors of $C^{(\text{c})}$.}
\label{fig:principal_axes_nc}
\end{center}
\end{figure}
\section*{Acknowledgments}
We thank Matteo Marsili for his comments on the manuscript. This research received funding from the European Union’s Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement No. 945539 (Human Brain Project SGA3) to GP and the European Research Council under the Grant Agreement No. 820213 (ThinkAhead) to GP. M. I.-B. is supported by the grant EU FESR-FSE PON Ricerca e Innovazione 2014-2020 BraVi.
\subsection{CDV Methodology}\label{sc:cdvmethod}
In CDV, an iterative process of test generation, execution, coverage collection and analysis is used to achieve coverage closure over several cycles. In practice, engineering input is required to interpret the data and to guide test generation towards closing coverage holes. This is either achieved simply by allowing further pseudorandom tests to be generated, by adding constraints to bias test generation, by employing model-based test generation or, as a last resort, by directed testing. If model-based test generation has already been applied, modifications to the formal model may yield new tests.
It is important to note that further test generation is not always the only appropriate response to a coverage hole or a requirement violation. The following options should also be considered: 1) the SUT has a bug, to be referred to the design team; 2) modifications to one or more of the requirements models (e.g.\ assertions or formal properties) are needed to more accurately reflect the actual requirements and/or design of the SUT; and/or 3) modifications to one or more of the testbench components are needed. This third decision may be reached if the tests and requirements models are deemed appropriate but the testbench does not allow the SUT's full range of functions to be exercised and observed.
\section{Coverage-Driven Verification} \label{sc:CDV}
\subsection{Structure of a CDV Testbench}
In CDV, a verification plan must be constructed before the testing process begins~\cite{Piziali2004}. This plan includes the aspects of the SUT that need to be verified, e.g.\ a requirements list or a functional description of the SUT, and a coverage strategy. The coverage strategy indicates how to achieve effective coverage, i.e.\ the exploration of the SUT and advancement of the verification progress, through the design of the testbench components, especially the Test Generator, the Checker and the Coverage Collector. The coverage strategy also specifies how to measure the coverage, e.g.\ a requirements model or a functional model to traverse.
In testing, the SUT is placed into a test environment, a {\em testbench}. The testbench represents (a model of) the SUT's target environment. The process of testing is realised using the following four core components, as shown in Fig.~\ref{testbench}:
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\textwidth]{cdvtestbench.pdf}
\caption{Structure of a basic CDV testbench}
\label{testbench}
\end{figure}
\begin{itemize}
\item the {\bf Test Generator} is the component that generates stimulus for
the SUT;
\item the {\bf Driver} is the component that takes a test, potentially at a
high level of abstraction, translates it into the level of abstraction used
in the simulation, and drives it to stimulate the SUT;
\item the {\bf Checker} is the component that checks the response of the SUT to
the stimulus and detects failures;
\item the {\bf Coverage Collector} is the component that records the quality
of the generated tests with respect to a set of complementary coverage models.
\end{itemize}
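The four components above can be wired into a single generate--drive--check--collect loop. The following is a minimal sketch, not our actual testbench: all class and method names, and the toy SUT, are invented for illustration.

```python
import random

class TestGenerator:
    """Produces abstract tests: here, random action sequences (illustrative)."""
    ACTIONS = ['sendsignal', 'setparam', 'receivesignal']
    def next_test(self, length=3):
        return [random.choice(self.ACTIONS) for _ in range(length)]

class Driver:
    """Translates an abstract test into concrete stimulus for the SUT."""
    def drive(self, sut, test):
        return [sut.step(action) for action in test]

class Checker:
    """Flags responses that violate a (toy) requirement: no missing response."""
    def check(self, responses):
        return all(r is not None for r in responses)

class CoverageCollector:
    """Records which actions each test exercised."""
    def __init__(self):
        self.covered = set()
    def collect(self, test):
        self.covered.update(test)

class ToySUT:
    """Stand-in for the robot code; acknowledges every action."""
    def step(self, action):
        return 'ack:' + action

# One CDV cycle: generate, drive, check, collect coverage.
random.seed(0)  # fixed seed: reproducible pseudorandom tests
gen, drv, chk, cov = TestGenerator(), Driver(), Checker(), CoverageCollector()
sut = ToySUT()
for _ in range(20):
    test = gen.next_test()
    responses = drv.drive(sut, test)
    assert chk.check(responses)
    cov.collect(test)
print(sorted(cov.covered))
```

In a real testbench, coverage analysis at the end of each cycle would feed back into the generator's constraints.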
A key objective in the design of a CDV testbench is to achieve a fully autonomous environment, so that verification engineers can
concentrate on areas requiring intelligent input, namely efficient and effective test generation, bug detection, reliable tracking of progress and timely completion.
In the following sections we describe each testbench component in more detail, explaining how they can be used for verification in robotics.
\subsection{Checker}\label{sc:checker}
The automation of test generation prompts the need for automatic and test independent checkers, i.e.\ self-checking testbenches. Assertion-based verification~\cite{abd} allows locating checkers, in the form of assertion monitors, close to the code that is being observed.
Requirements to verify can be expressed as Temporal Logic properties. Assertion monitors can be derived automatically from these
properties~\cite{Havelund2002}, in an automata-based form. Since simulation runs are time-bounded, safety properties defined over infinite traces (e.g., using an \verb+always+ Temporal Logic operator) are bounded to the duration of a simulation run. Related work in~\cite{Armoni2006} mentions the advantages of automatic generation of monitors as automata, including the reduction of errors caused by manual translation.
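As an illustration of such an automata-based monitor evaluated over a finite trace, the sketch below checks a bounded-response property (``every request is answered within a fixed number of steps''); the property, event names and bound are invented for illustration.

```python
class BoundedResponseMonitor:
    """Automaton-style monitor for 'every request is answered within
    `bound` steps', checked over a finite (time-bounded) trace."""
    def __init__(self, bound):
        self.bound = bound
        self.waiting = None   # steps elapsed since an open request, else None
        self.verdict = True
    def step(self, event):
        if self.waiting is not None:
            if event == 'response':
                self.waiting = None        # obligation discharged
            else:
                self.waiting += 1
                if self.waiting > self.bound:
                    self.verdict = False   # deadline missed
        if event == 'request' and self.waiting is None:
            self.waiting = 0

mon = BoundedResponseMonitor(bound=2)
for ev in ['request', 'idle', 'response', 'request', 'idle', 'idle', 'idle']:
    mon.step(ev)
print(mon.verdict)  # False: the second request is never answered in time
```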
For requirements about the low-level continuous behaviour of the SUT (e.g., trajectories computed by the motion planning), the monitoring can be performed in a quasi-continuous manner, considering computational limitations. Otherwise, over-approximations or interpolation can be performed to predict events at instants of time between computations, such as the overlapping of regions in the 3D space for collision avoidance.
\section{Conclusions}\label{sc:conclusions}
We advocated the use of CDV for robot code in the context of HRI. By promoting automation, CDV can provide a faster route to coverage closure, compared with manually directed testing. CDV is typically used in Software-in-the-Loop simulations, but it can also be used in conjunction with Hardware-in-the-Loop simulation, Human-in-the-Loop simulation or with emulation.
The flexibility of CDV with regard to the levels of abstraction used in both the requirements models and the SUT makes it particularly well suited to verification of HRI.
The principal drawback of CDV, compared with directed testing, is the overhead effort associated with building an automated testbench. Directed testing produces early results, but CDV significantly accelerates the approach towards coverage closure once the testbench is in place. Hence CDV is an appropriate choice for systems in which complete coverage is difficult to achieve due to a broad and varied state space that includes rare but important events, as is typically the case for HRI.
We proposed implementations of four automatic testbench components, the Test Generator, the Driver, the Checker and the Coverage Collector, that suit the HRI domain. Different test generation strategies were considered: pseudorandom, constrained pseudorandom and model-based to complement each other in the quest for meaningful tests and exploration of unexpected behaviours. Assertions were proposed for the Checker, accommodating requirements at different levels of abstraction, an important feature for HRI. Different coverage models were proposed for the Coverage Collector: requirements (assertion), code statements, and cross-product.
The potential for CDG (Coverage-Directed test Generation), through the implementation of automated feedback loops, has been considered. Nevertheless, we believe a large part of the feedback work needs to be performed by the verification engineer, since CDG is difficult to implement in practice.
A handover example demonstrated the feasibility of implementing the CDV testbench as a ROS-Gazebo based simulator. The results show the relative merits of our proposed testbench components, and indicate how feedback loops in the testbench can be explored to seek coverage closure. Several key observations can be noted from these results. Pseudorandom test generation allows a degree of unpredictability in the environment, so that unexpected behaviours of the SUT may be exposed. Model-based test generation usefully complements this technique by systematically directing tests according to the requirements of the SUT. This requires the development of a formal model of the system, which additionally enables exhaustive verification through formal methods, as explored by previous authors for HRI~\cite{BFS09:HRIshort,Cowley2011,Kouskoulas2013,Mohammed2010,Muradore2011,webster14formalshort}.
If the requirements are translated into Temporal Logic properties for model checking, assertion monitors can be derived automatically. In future work, we will be exploring generation of monitors for different levels of abstraction in the simulation (e.g., events-based, or checked at every clock cycle) in a more formal manner. We will further explore the use of bisimulation analysis to ensure equivalence between a robot's high-level control code and any associated formal models. We intend to incorporate probabilistic models of the human, the environment and other elements in the simulator, to enable more varied stimulation of an SUT. We also intend to verify a more comprehensive set of requirements for the handover task, e.g., according to the safety standard ISO~15066 (currently under development) for collaborative industrial robots.
Our approach is scalable, as more complex systems can be verified using the same CDV approach, for the actual system's code. We are confident CDV can be used for the verification and validation of autonomous systems in general. Open source platforms and established tools can serve to create simulators and models at different abstraction levels for the same SUT.
\subsection{Coverage Collector}\label{sc:coverage}
Automatic test generation necessitates monitoring the quality of the tests to gain an understanding of the verification progress. To achieve this, statistics can be collected on the tests, the driven stimulus (external events), the SUT's response, and the SUT's internal state, including assertion monitors. In general, we distinguish between {\em code} coverage models and {\em functional} coverage models. A comprehensive account of coverage can be found in~\cite{Piziali2004}.
The collected coverage data provides information on unexplored (coverage ``holes'') or lightly covered areas. {\em Coverage closure} is the process of identifying coverage holes and creating new tests that address these holes. This introduces a feedback loop from coverage collection/analysis to test generation, termed Coverage-Directed test Generation (CDG)~\cite{Piziali2004}. Attempts have been made to automate CDG using machine learning techniques~\cite{ioannides}. However, CDG remains a difficult challenge in practice.
In principle, coverage collection and analysis techniques can be transferred directly into the domain of robotics verification. In fact, it is interesting to note that functional coverage in the form of ``cross-product'' coverage~\cite{wile}, as widely used in hardware design verification, has recently been proposed (independently) for the verification of autonomous robots in~\cite{Alexander2015}, where it is termed {\em situation} coverage and includes combinations of external events only.
\subsection{Driver}\label{sc:driver}
The Driver is a fully automated component that translates a (potentially high-level) description of a test into signal-level stimulus that can be applied to the interfaces of the SUT in order to expose the SUT to the situation prescribed by the test. The Driver may comprise an interacting network of modules corresponding to the distinct interfaces of the SUT. The SUT reacts to the stimuli provided on its interfaces. The Driver runs in parallel with the SUT and responds to it, if necessary; i.e., the Driver can be reactive. The automation of the Driver makes it feasible to execute batches of abstract tests, to accelerate testing.
In HRI, the Driver comprises a model of the human, a physics model, and communication channels to represent any interactions that do not require detailed physical simulation. For example, if the human element in the simulator is driving the robot's code, the Driver would execute the corresponding high-level action sequence, one item at a time, by translating it into the respective sequence of input signals, potentially passing through the physics model before exposing the signals to the robot's input channels.
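The translation step can be sketched as follows; the action names loosely follow the test format used in our example tests, but the signal tuples and channel names are invented for illustration.

```python
def drive_action(action, params):
    """Illustrative translation of one high-level test action into the
    low-level signals a simulated robot would see (names are made up)."""
    if action == 'sendsignal':
        return [('signal_channel', params['name'])]
    if action == 'setparam':
        # A physical parameter may expand into one or more sensor readings.
        return [('sensor', params['key'], params['value'])]
    if action == 'receivesignal':
        return [('wait_for', params['name'])]
    raise ValueError('unknown action: %s' % action)

# A two-step abstract test, flattened into signal-level stimulus.
test = [('sendsignal', {'name': 'activateRobot'}),
        ('setparam', {'key': 'hgazeOk', 'value': True})]
stimulus = [sig for act, p in test for sig in drive_action(act, p)]
print(stimulus)  # [('signal_channel', 'activateRobot'), ('sensor', 'hgazeOk', True)]
```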
\section{CDV Implementation}\label{sc:implementation}
A case study from a collaborative manufacture scenario is presented. We demonstrate the transferability of CDV into the HRI domain by constructing a CDV testbench for this case study using a combination of established open-source tools and custom components. Our implementation showcases the potential of CDV to verify robotic code used in HRI.
\subsection{Case Study: Robot to Human Object Handover Task}\label{Example}
Our case study is an object handover, a critical subtask in a broader scenario where a robot (BERT2~\cite{lenz2010bert2}) and a person work together to assemble a table. The handover starts with an activation signal from the person to the robot. The robot then picks up an object and holds it near the person. The robot indicates it is ready for the person to receive the object. The person is then expected to hold the object simultaneously, moving closer if necessary, and to look at it --- indicating the person's readiness. The robot collects data through two different sensing systems: ``pressure'' sensors, which determine whether just the robot, or the robot and the person simultaneously, are holding the object; and ``location'' and ``gaze'' sensors, an `EgoSphere' system that tracks whether the human hand is close to the object and whether the human head is directed towards the object~\cite{lenz2010bert2}. Based on these sensors, the robot determines whether the release condition is satisfied and decides on a course of action: if the human is ready, the robot will release the object and allow the person to take it; if not, the robot will not release the object. The robot or human may disengage from the task (look or move away). The sensors are considered perfect.
Following the handover task's interaction protocol, a ROS `node' for the robot was developed in Python, comprising 209 code statements. This node was structured as a state machine, using the SMACH modules~\cite{SMACH}, to facilitate modularity.
The states, with their transitions, can be enumerated as shown below. Each state transitions to the next in sequence, except where indicated otherwise. The code is also depicted as a flow chart in Figure~\ref{codecoverage}.
\begin{enumerate}
\item \verb|reset| - The robot moves to its starting position, with gripper open.
\item \verb|receive_signal| - Read signals. If `startRobot' is received, transition to \verb|move|; elseif timeout, transition to \verb|done|; else, loop back to present state.
\item \verb|move| - Plan trajectory of hand to piece. Move arm. Close gripper. Plan trajectory of hand to human. Move arm.
\item \verb|send_signal| - Send signal to inform human of handover start.
\item \verb|receive_signal| - Read signals. If `humanIsReady' is received, transition to \verb|sense|; elseif timeout, transition to \verb|done|; else loop back to present state.
\item \verb|sense| - Read sensors. If timeout, transition to \verb|done|; elseif not all signals available, loop back to present state; else, transition to \verb|decide|.
\item \verb|decide| - If all sensors are satisfied, transition to \verb|release|; else, transition to \verb|done| (without releasing).
\item \verb|release| - Open the gripper. Wait for 2 seconds.
\item \verb|done| - End of sequence.
\end{enumerate}
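The control flow above can be sketched as a small dispatch-style state machine. This is a condensed, plain-Python sketch rather than the actual SMACH implementation: the movement and sensing steps are collapsed, and the event names supplied by the environment are illustrative.

```python
def run_handover(events):
    """Condensed sketch of the handover state machine. `events` supplies,
    in order, the signals/sensor outcomes each waiting state observes."""
    ev = iter(events)
    state, trace = 'reset', []
    while state != 'done':
        trace.append(state)
        if state == 'reset':
            state = 'receive_signal_start'
        elif state == 'receive_signal_start':
            state = 'move' if next(ev) == 'startRobot' else 'done'
        elif state == 'move':                      # plan, grasp, move arm
            state = 'send_signal'
        elif state == 'send_signal':               # inform human of start
            state = 'receive_signal_ready'
        elif state == 'receive_signal_ready':
            state = 'sense' if next(ev) == 'humanIsReady' else 'done'
        elif state == 'sense':                     # read all sensors
            state = 'decide'
        elif state == 'decide':
            state = 'release' if next(ev) == 'sensors_ok' else 'done'
        elif state == 'release':                   # open gripper, wait
            state = 'done'
    trace.append('done')
    return trace

print(run_handover(['startRobot', 'humanIsReady', 'sensors_ok']))
```

On the happy path the trace visits every state in sequence; a timeout or failed sensor check at any waiting state jumps straight to \verb|done|.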
\subsection{Requirements}\label{ssc:requirements}
Requirements were derived from ISO~13482:2014 and desired functionality of the robot in the interaction~\cite{Grigore2011}:
\begin{enumerate}
\item If the gaze, pressure and location are sensed as correct, then the object shall be released.
\item If the gaze, pressure or location are sensed as incorrect, then the object shall not be released.
\item The robot shall make a decision before a threshold of time.
\item The robot shall always either time out, decide to release the object, or decide not to release the object.
\item The robot shall not close the gripper when the human is too close.
\end{enumerate}
Requirements 1 to 4 refer to sequences of high-level events over time, whereas Requirement 5 refers to a lower-level safety requirement of the continuous state space of the robot in the HRI. Thus, the former can be both targeted with model-based techniques and implemented as assertion monitors, whereas the latter is only suitable for implementation as an assertion monitor.
\subsection{CDV Testbench Implementation}
ROS is a widely used open-source platform for the design of code for robots in C++ and/or Python. ROS allows interfacing directly with robots. Gazebo is a robot simulation tool designed for compatibility with ROS, that is able to emulate the physics of our world. Thus, the combination ROS-Gazebo provides a means of developing a robotic simulator, as shown in Figure~\ref{Simulatorphoto}.
\begin{figure}[h]
\centering
\includegraphics[width=0.4\textwidth]{bert.png}
\includegraphics[width=0.4\textwidth, trim= 70mm 40mm 170mm 30mm, clip]{sim3.jpg}
\caption{BERT2 robot and a human, and the simulator in ROS-Gazebo}
\label{Simulatorphoto}
\end{figure}
Figure~\ref{simstructure} shows the structure of our CDV testbench implementation, incorporating the robot's high-level control code. The Driver incorporates the Gazebo physics simulator and the MoveIt!\footnote{http://moveit.ros.org/} packages for path-planning and inverse kinematics of the robot's motion. The human is embodied as a floating head and hand for simplicity; in future, this representation can be replaced by one that is anatomically accurate. The implementation in ROS ensures that assertion monitors and coverage collection can access parameters internal to the robot code as well as the external physics model and other interfaces, such as signals. Observability of the external behaviour allows validating the robot's actions. In real life experiments, this is equivalent to observing the robot's physical behaviour to see if its responses are as expected.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{simulator_testbench.pdf}
\caption{Testbench and simulator elements in ROS-Gazebo}
\label{simstructure}
\end{figure}
\subsection{Test Generator and Driver}
Tests were generated pseudorandomly, by concatenating randomly selected elements from the set of high-level actions belonging to the handover workflow, forming random action sequences and instantiating relevant parameters. These randomized sequences represent environmental settings that do not necessarily comply with the interaction protocol. Thus, pseudorandom action sequence generation produces stimuli that correspond to unexpected behaviours not previously considered in the requirements. Subsequently, constraints were introduced to bias the pseudorandom generation towards tests that do comply with the interaction protocol (e.g., enforcing particular sequences of actions).
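The two strategies can be sketched as follows. This is a minimal illustration, not our generator: the action names follow the format of our example tests, while the parameter ranges and the fixed-seed scheme are assumptions.

```python
import random

ACTIONS = ['sendsignal', 'receivesignal', 'setparam']
SIGNALS = ['activateRobot', 'humanIsReady', 'informHumanOfHandoverStart']
PARAMS  = {'time': range(5, 60), 'hgazeOk': [True, False]}  # illustrative ranges

def random_step(rng):
    act = rng.choice(ACTIONS)
    if act == 'setparam':
        key = rng.choice(list(PARAMS))
        return (act, key, rng.choice(list(PARAMS[key])))
    return (act, rng.choice(SIGNALS))

def pseudorandom_test(rng, length=6):
    """Unconstrained: any action sequence, protocol-compliant or not."""
    return [random_step(rng) for _ in range(length)]

def constrained_test(rng, length=6):
    """Constrained: force immediate robot activation, then randomise."""
    prefix = [('sendsignal', 'activateRobot')]
    return prefix + [random_step(rng) for _ in range(length - 1)]

rng = random.Random(42)   # fixed seed => reproducible ("pseudo") randomness
print(constrained_test(rng)[0])  # always the activation signal
```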
The handover interaction protocol was formalized as a set of six automata, in particular Probabilistic-Timed Automata (PTA)~\cite{Hartmanns2009}, comprising the robot, the workflow, the gaze, the location, the pressure, and the sensors. Behaviours of the different elements (e.g., protocol compliant actions to activate the robot through signals) were abstracted in terms of state transitions and variable assignments. The structure of the robot's code guided the abstraction process, and the abstraction was verified via bisimulation analysis~\cite{CCSbook}.
Model-based test templates were obtained from witness traces (examples or counterexamples) produced by model checking the product automaton~\cite{Nielsen2003}. These witnesses contain combined sequences of states from the different automata. Requirements 1 to 4 (Section~\ref{ssc:requirements}) were used to derive model-based test templates that would trigger corresponding assertion monitors. We employed UPPAAL\footnote{http://www.uppaal.org/}, a model checker for PTA that produces witnesses automatically. Projections over these traces with respect to the workflow, gaze, location, pressure and sensors automata remove the elements that correspond to the robot's activities, to form a test template. Based on these test templates, tests were generated pseudorandomly.
A test template for our simulator consists of a sequence of high-level actions (workflow) to activate the robot expressed as a state machine. A test comprises, besides the high-level actions, the pseudorandom instantiation of parameters, from well-defined sets (e.g., ranges of values for gaze correct or gaze incorrect). An example is shown in Figure~\ref{exampletest}. The Driver produces responses in the physical model in Gazebo, signals to be communicated to the robot, and sensor readings.
\begin{figure}[h]
\scriptsize
\centering
\begin{tabular}{r|lllll|lll}
\cline{2-6}
1&&\verb+sendsignal+&&\verb+activateRobot+ &&& &\\
2&&\verb+setparam+ && \verb+time = 40+& && This \verb+time+ instantiation produces &\\
3&&\verb+receivesignal+ && \verb+informHumanOfHandoverStart+ &&&a waiting time of $40 \times 0.05$ seconds.&\\
4&&\verb+sendsignal+ && \verb+humanIsReady+ &&&&\\
5&&\verb+setparam+ && \verb+time = 10+ &&&&\\
6&&\verb+setparam+ && \verb+honTask = true+ && &&\\
7&&\verb+setparam+ && \verb+hgazeOk = true+ && &Gaze instantiation for \verb+true+: choosing offset,&\\
8&&\verb+setparam+ && \verb+hpressureOk = true+ && & distance and angle, from ranges $ \{[0.1,0.2],$&\\
9&&\verb+setparam+ && \verb+hlocationOk = true+ && & $[0.5,0.6],[15,40)\}$, e.g., $(0.1,0.5,30)$&\\
\cline{2-6}
\end{tabular}
\caption{Example test from a test template, comprising high-level actions and some parameter instantiations (time and gaze)}
\label{exampletest}
\end{figure}
An example of a constraint for constrained pseudorandom generation is the enforcement of the sequence of actions in lines 1 to 4 of Figure~\ref{exampletest}, followed by any other action sequence. This constraint ensures the immediate activation of the robot, when a simulation starts.
An added benefit from the development of a formal model for test generation is that this allows formal verification through model checking~\cite{ClarkeMC}. Formal verification can thus complement CDV. However, properties that hold for abstract models must still be verified at the code level. Model checkers for code (e.g., CBMC\footnote{http://www.cprover.org/cbmc/}, Java PathFinder\footnote{http://javapathfinder.sourceforge.net/}) target runtime bugs in general, such as arrays out of bounds or unbounded loop executions. These are, however, at a different level than the complex functional behaviours we aim to verify. In~\cite{webster14formalshort}, the runtime detail is abstracted, giving way to high-level behaviour models where functional requirements can be verified with respect to the model only.
\subsection{Checker}
Assertion monitors were implemented for all the requirements in Section~\ref{ssc:requirements}. Requirements 1 to 4 were translated first into CTL properties, and then automata-based assertion monitors were generated manually. This process will be automated in the future. For example, Requirement 1 corresponds to the property:
$$E <> sgazeOk \wedge spressureOk \wedge slocationOk \wedge releasedTrue.$$
The resulting monitor is triggered when reading a sensors signal indicating the gaze, pressure and location are correct. Then, the automaton transitions when receiving a signal of the object's release. If the latter signal event happens within a time threshold (3 seconds), a \verb+True+ result is reported. Finally, the automaton returns to the initial state.
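The monitor just described can be sketched as a small event-driven automaton. The event names, timestamp format and Pass/Fail/Inconclusive bookkeeping below are illustrative, not our implementation.

```python
class Req1Monitor:
    """Monitor for Requirement 1: once gaze, pressure and location are all
    sensed correct, the object must be released within `threshold` seconds.
    Events are (timestamp, name) pairs; names are illustrative."""
    IDLE, ARMED = 0, 1
    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.state = self.IDLE
        self.results = []          # one verdict per activation of the monitor
    def on_event(self, t, name):
        if self.state == self.IDLE and name == 'sensors_all_ok':
            self.state, self.armed_at = self.ARMED, t   # trigger the monitor
        elif self.state == self.ARMED and name == 'released':
            self.results.append('Pass' if t - self.armed_at <= self.threshold
                                else 'Fail')
            self.state = self.IDLE  # return to the initial state
    def on_end_of_test(self):
        if self.state == self.ARMED:     # triggered but check never completed
            self.results.append('Inconclusive')
            self.state = self.IDLE

mon = Req1Monitor()
for t, name in [(0.0, 'sensors_all_ok'), (1.2, 'released')]:
    mon.on_event(t, name)
mon.on_end_of_test()
print(mon.results)  # ['Pass']
```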
Requirement 2 corresponds to the CTL property:
$$E<> (sgazeNotOk \vee spressureNotOk \vee slocationNotOk) \wedge releasedFalse.$$
This monitor is triggered when any of the gaze, pressure or location are incorrect in a sensing signal. Then, the automaton transitions to either a \verb+False+ result when receiving a signal of the object's release, or a \verb+True+ result if some time has elapsed (2 seconds) and no release signal has been received. Finally, the automaton returns to the initial state.
Requirement 5 refers to physical space details abstracted from our PTA model, and it cannot be expressed as a Temporal Logic property over that model. Hence, it was directly implemented as an automaton-based assertion monitor. When the robot grabs the object, it needs to make sure the human's hand (or any other body part) is at a safe distance. The monitor is triggered every time the code invokes the \verb+hand(close)+ function, which causes the motion of the robot's hand joints. The location of the human hand is then read from the Gazebo model (the head is ignored, since the model is abstracted to a head and a hand). If this location is close to the robot's hand (within a 0.05\,m distance between the two mass centres), the monitor registers a \verb+False+ result, and \verb+True+ otherwise.
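A minimal sketch of the distance check at the core of this monitor, assuming point positions for the two mass centres (the function name and coordinate format are invented):

```python
import math

SAFE_DISTANCE = 0.05  # metres between mass centres (from the requirement)

def req5_check(robot_hand, human_hand, safe=SAFE_DISTANCE):
    """Requirement 5 check, fired when the code invokes hand(close):
    the human hand must not be within `safe` metres of the robot hand.
    Positions are (x, y, z) in the simulator's world frame."""
    dist = math.dist(robot_hand, human_hand)
    return dist >= safe   # True = pass, False = violation

print(req5_check((0.0, 0.0, 0.0), (0.3, 0.0, 0.0)))   # hand far away: True
```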
The monitors automatically generate report files, indicating their activation time, and the result of the checks if completed.
\subsection{Coverage Collector}
We implemented two coverage models: code (statement) coverage and functional coverage in the form of requirements (assertion) coverage. The statement coverage was implemented through the `coverage'\footnote{http://nedbatchelder.com/code/coverage/} module for Python. For each test run, statistics on the number of executed code statements are gathered. The assertion coverage is obtained by recording which assertion monitors are triggered by each test. If all the assertions have been triggered by the end of the test runs, the testbench has achieved 100\% requirements coverage.
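The assertion-coverage bookkeeping can be sketched as follows; the monitor names and reporting interface are illustrative, not our implementation.

```python
class AssertionCoverage:
    """Tracks which assertion monitors the test runs trigger and reports
    the requirements-coverage percentage."""
    def __init__(self, monitor_names):
        self.monitors = set(monitor_names)
        self.triggered = set()
    def record(self, monitor_name):
        if monitor_name in self.monitors:
            self.triggered.add(monitor_name)
    def holes(self):
        """Coverage holes: monitors no test has triggered yet."""
        return sorted(self.monitors - self.triggered)
    def percent(self):
        return 100.0 * len(self.triggered) / len(self.monitors)

cov = AssertionCoverage(['req1', 'req2', 'req3', 'req4', 'req5'])
for fired in ['req2', 'req3', 'req4', 'req5']:   # e.g. one batch's reports
    cov.record(fired)
print(cov.percent(), cov.holes())   # 80.0 ['req1']
```

The remaining hole would then feed back into test generation, e.g.\ by adding constraints or a model-based test template targeting the untriggered monitor.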
\section{Introduction}\label{sc:introduction}
Human-Robot Interaction (HRI) is a rapidly advancing sector within the field of robotics. Robotic assistants that engage in collaborative physical tasks with humans are increasingly being developed for use in industrial and domestic settings. However, for these technologies to translate into commercially viable products, they must be demonstrably safe and functionally sound, and they must be deemed trustworthy by their intended users~\cite{ROMAN14}. In existing industrial robotics, safety is achieved predominantly by physical separation or through limiting the robot's physical capabilities (e.g., speed, force) to thresholds, according to predefined interaction zones. To fully realize the potential of collaborative robots, the correctness of the software with respect to safety and functional (liveness) requirements needs to be verified.
HRI systems present a substantial challenge for software verification --- the process used to gain confidence in the correctness of an implementation, i.e.\ the robot's code, with respect to the requirements. {\em The robot responds to an environment that is multifaceted and highly unpredictable.} This is especially true for robots involved in direct interaction with humans, whether this is in an unstructured home environment or in the more structured setting of collaborative manufacturing. We require a verification methodology that is sufficiently realistic (models the system with sufficient detail) while thoroughly exploring the range of possible outcomes, without exceeding resource constraints.
Prior work~\cite{BFS09:HRIshort,Cowley2011,Kouskoulas2013,Mohammed2010,Muradore2011,webster14formalshort} has explored the use of formal methods to verify HRI. Formal methods can achieve full coverage of a highly abstracted model of the interactions, but are limited in the level of detail that can practically be modelled. {\em Sensors, motion and actuation in a continuous world present a challenge for models and requirement formulation in formal verification.} Physical experiments or simulation-based testing may be used to achieve greater realism, and to allow a larger set of requirements to be verified over the real robot's code. However, neither of these can be performed exhaustively in practice.
{\em Robotic code is typically characterised by a high level of concurrency between the communicating modules} (e.g., nodes and topics used in the Robot Operating System, ROS\footnote{http://www.ros.org/}) that control and monitor the robot’s sensors and actuators, and its decision making. Parallels can be drawn here to the design of microelectronics hardware, which consists of many interacting functional blocks, all active at the same time. Hence it is natural to ask: `Can techniques from the microelectronics field be employed to achieve comprehensive verification of HRI systems?'
In this paper, we present the use of Coverage-Driven Verification (CDV) for the high-level control code of robotic assistants, in simulation-based testing. CDV is widely used in functional verification of hardware designs, and its adoption in the HRI domain is an innovative response to the challenge of verifying code for robotic assistants. CDV is a systematic approach that promotes achieving coverage closure efficiently, i.e.\ generation of effective tests to explore a System Under Test (SUT), efficient coverage data collection, and consequently efficient verification of the SUT with respect to the requirements. The resulting efficiency is critical in our application, given the challenge of achieving comprehensive verification with limited resources.
{\em The extension of CDV to HRI requires the development of practical tools that are compatible with established robotics tools and methods.} The microelectronics industry benefits from the availability of hardware description languages, which streamline the application of systematic V\&V techniques. No practical verification tool exists for Python or C++, common languages for robotics code~\cite{Trojanek2014}. A novel contribution of this paper is the development of a CDV testbench specifically for HRI; this implementation makes use of established open-source tools where possible, while custom tools have been created as necessary to complete and connect the testbench components (Test Generator, Driver, Checker and Coverage Collector). Additionally, we outline the relevant background to ensure robust implementation of CDV.
To demonstrate the feasibility and potential benefits of the method, we applied CDV to an object-handover task, a critical component of a cooperative manufacture scenario, implemented as a ROS and Gazebo\footnote{http://gazebosim.org/} based simulator. Our automated testbench conveniently allows the actual robot code to be used in the simulation. Model-based and constrained pseudorandom test generation strategies form the Test Generator. A Driver applies the tests to the simulation components. The Checker comprises assertion monitors, which also provide requirements coverage. The Coverage Collector records code coverage in addition to requirements coverage.
We verified selected safety and liveness (functional) requirements of the handover task to showcase the potential of CDV in the HRI domain.
The paper proceeds with an overview of the CDV testbench components and verification methodology in Section~\ref{sc:CDV}. The handover scenario is introduced in Section~\ref{sc:implementation}, where we then present the CDV testbench we used to verify the code that implements the robot's part of the handover task. Section~\ref{sc:results} discusses the verification and coverage results for this example. Conclusions and future work are given in Section~\ref{sc:conclusions}.
\section{Experiments and Verification Results}\label{sc:results}
The CDV testbench described in Section~\ref{sc:implementation} was used
\textit{(a)} to demonstrate the benefits of CDV in the context of HRI;
\textit{(b)} to obtain an insight into the verification results, including unexpected behaviours or requirement violations; and
\textit{(c)} to explore options to achieve coverage closure (from Section~\ref{sc:cdvmethod}).
The requirements mentioned in Section~\ref{Example} were verified using a CDV testbench in ROS (version Indigo) and Gazebo (2.2.5), and through model checking in UPPAAL (version 4.0.14), using the model we developed for model-based test generation. We used a PC with Intel i5-3230M 2.60\,GHz CPU, 8\,GB of RAM, running Ubuntu 14.04.
Table~\ref{results1} presents the assertion coverage for the handover, and the verification results from model checking. In model checking, the requirements were verified as true (T) or false (F). Through model checking, we were only able to cover Requirements 1 to 4. From each of the model checking witnesses (test templates) of Requirements 1 to 4, we generated a test (model-based generation). We also generated 100 pseudorandom (unconstrained) tests, and 100 constrained pseudorandom tests that enforced the activation of the robot as explained in Section~\ref{sc:implementation}. We verified Requirements 1 to 5 in simulation, and recorded the results of the assertion monitors: Pass (P), Fail (F), Not Triggered (NT), or Inconclusive (I) when the monitor was triggered but the check was not completed within the simulation run. The same setup was used to compute both assertion and statement coverage, allowing the comparison of the test generation strategies in terms of coverage efficiency.
\begin{table}
\caption{Requirements (assertion) coverage and model checking results}
\centering
\scriptsize
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|cccc|cccc|cccc|}
\hline
Req. & Model & \multicolumn{12}{|c|}{Simulation-Based Testing} \\ \cline{3-14}
& Checking&\multicolumn{4}{|c|}{Pseudorandom} & \multicolumn{4}{|c|}{Constrained-Pseudorandom} &\multicolumn{4}{|c|}{Model-Based} \\
\cline{3-14}
& & P & F & NT & I & P & F & NT & I & P & F & NT & I\\
\hline
1 & T & 0/100 &0/100 & 100/100 & 0/100 & 0/100 & 0/100 & 100/100 & 0/100 & 3/4 & 0/4 & 1/4 & 0/4\\
2 & T & 33/100 & 0/100 & 67/100 & 0/100 & 87/100 & 0/100 & 13/100 & 0/100 & 1/4 & 0/4 & 3/4 & 0/4\\
3 & T & 33/100 & 0/100 & 67/100 & 0/100 & 87/100 & 0/100 & 13/100 & 0/100 & 4/4 & 0/4 & 0/4 & 0/4 \\
4 & T & 98/100 & 0/100 & 0/100 & 2/100 & 98/100 & 0/100 & 0/100 & 2/100 & 4/4 & 0/4 & 0/4 & 0/4\\ \hline
5 & - & 46/100 & 0/100 & 54/100 & 0/100 & 93/100 & 0/100 & 7/100 & 0/100 & 4/4 & 0/4 & 0/4 & 0/4\\
\hline
\end{tabular} \label{results1}
\end{table}
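As an illustration of how the four verdicts in Table~\ref{results1} are recorded, the following Python sketch implements the monitor outcome logic. The class and method names are our assumptions for this sketch and do not reproduce the actual monitor code of the testbench.

```python
from enum import Enum

class Outcome(Enum):
    PASS = "P"
    FAIL = "F"
    NOT_TRIGGERED = "NT"
    INCONCLUSIVE = "I"

class AssertionMonitor:
    """Records one of P/F/NT/I for a single requirement over a simulation run."""
    def __init__(self):
        self.triggered = False
        self.completed = False
        self.ok = False

    def trigger(self):
        # The precondition of the requirement was observed during the run.
        self.triggered = True

    def check(self, ok):
        # The postcondition was evaluated before the run ended.
        self.completed = True
        self.ok = ok

    def outcome(self):
        if not self.triggered:
            return Outcome.NOT_TRIGGERED
        if not self.completed:
            return Outcome.INCONCLUSIVE  # triggered, but the run ended first
        return Outcome.PASS if self.ok else Outcome.FAIL
```

In this reading, the inconclusive verdicts for Requirement 4 correspond to runs in which `trigger` fired but the simulation ended before `check` was reached.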
The results in Table~\ref{results1} confirm our expectations for the different test generation strategies. For assertion-based functional coverage, pseudorandom and constrained-pseudorandom test generation are less efficient than model-based test generation, which triggered all five assertions with just four tests. Requirement 1 was not covered by either the pseudorandom or the constrained pseudorandom strategy. If either of these strategies was used alone, the coverage hole could potentially be closed by adding further constraints or by using a more sophisticated test generation strategy such as model-based test generation.
The assertion monitor checks for Requirement 4 were inconclusive for some of the pseudorandom and constrained-pseudorandom generated tests. This occurs because in these tests the robot is activated long after the start of the handover task (when the robot is reset and proceeds to wait for a signal). These tests do not comply with the protocol, which requires the robot to be activated at the start of the task and within a given time threshold.
This coverage result could trigger different actions: the assertion monitor could be modified to choose either pass or fail at the end of the simulation; the Driver could be modified to extend the simulation duration; or the inconclusive checks could be dismissed as trivial, in which case the efficiency of any further tests could be improved by directing them away from such cases. As noted in Section~\ref{sc:cdvmethod}, further test generation is not always the sole appropriate response to a coverage hole. It is worth noting that this scenario was exposed only by pseudorandom and constrained-pseudorandom test generation, demonstrating a unique benefit of these approaches: by exploring the SUT's behaviour beyond the assumptions of the verification engineer, they provide a useful complement to the more directed approach of model-based test generation.
Figure~\ref{codecoverage} illustrates the code coverage (statements) achieved with each test generation strategy over 206 statements (the actual percentages may vary $\pm 2$\% due to decision branches with 1 or 2 lines of code each). The lines of code are grouped using the state machine structure in the Python module, to facilitate visualization. The block of code corresponding to the ``release'' state is not covered by the pseudorandom and constrained pseudorandom generated tests, hence it is shown in white. This coverage hole could be closed by applying the test template produced by model-based test generation for Requirement 1.
Because our code is structured as a finite state machine (FSM), it would be appropriate to also incorporate structural coverage models in the future. A comprehensive test suite would include tests that visit all states, trigger all possible state transitions, and traverse all paths.
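A minimal sketch of such a structural coverage model, assuming the FSM is available as a set of (source, target) transition pairs, is given below; the interface and the state names are hypothetical, not taken from our Python module.

```python
class FSMCoverage:
    """State and transition coverage over observed execution traces of an FSM."""
    def __init__(self, transitions):
        # transitions: set of (source, target) pairs defining the FSM
        self.transitions = set(transitions)
        self.states = {s for t in transitions for s in t}
        self.seen_states = set()
        self.seen_transitions = set()

    def observe(self, trace):
        # trace: sequence of states visited during one simulation run
        for src, dst in zip(trace, trace[1:]):
            if (src, dst) in self.transitions:
                self.seen_transitions.add((src, dst))
        self.seen_states.update(s for s in trace if s in self.states)

    def state_coverage(self):
        return len(self.seen_states) / len(self.states)

    def transition_coverage(self):
        return len(self.seen_transitions) / len(self.transitions)
```

Path coverage would additionally require enumerating the (finitely many, or bounded-length) paths of the FSM and matching each observed trace against them.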
\begin{figure}[h]
\subfloat[\label{subfig-3:a}]{
\includegraphics[width=0.27\textwidth]{coverage3.pdf}
}
\hfill
\subfloat[\label{subfig-3:b}]{
\includegraphics[width=0.36\textwidth]{coverage2.pdf}
}
\hfill
\subfloat[\label{subfig-3:c}]{
\includegraphics[width=0.27\textwidth]{coverage1.pdf}
}
\caption{Code coverage (percent values) obtained in simulation with (a) model-based (4 tests), (b) pseudorandom (100 tests), and (c) constrained-pseudorandom test generation (100 tests)
}
\label{codecoverage}
\end{figure}
The generation of effective tests, targeting both the exploration of the SUT and the verification progress, is fundamental to maximising the efficiency with which a CDV testbench reaches coverage closure. The overall results show that the three test generation approaches have complementary strengths that compensate for their respective weaknesses in terms of coverage. While model-based test generation ensures that the requirements are covered in an efficient manner, pseudorandom test generation can construct scenarios that the verification engineer has not foreseen. Such cases are useful for exposing flawed or missing assumptions in the design of the testbench or the requirements.
\subsection{Test Generator}\label{sc:testgen}
The test generator aims to exercise the SUT for verification (activation of faults), while working towards full coverage. Test generators in CDV make use of pseudorandom generation techniques. Using pseudorandom as opposed to random generation allows repeatability of the tests. The generated tests must be valid (realistic, like a sensor input that reflects a valid scene). An effective set of tests includes a good variety that explores unexpected conditions and addresses the scenarios of interest as per the requirements list. An efficient set of tests maximises the coverage and verification progress, whilst minimising the number of tests needed. To achieve the former while allowing for the latter, pseudorandom test generation can be biased using constraints. These constraints can be derived from the SUT's functional requirements or from the verification plan~\cite{Piziali2004}. However, supplying effective constraints requires significant engineering skill and application knowledge. It is particularly difficult to generate meaningful sequences of actions, whether these are transactions on the interface of a system-on-chip, or interactions between humans and robots.
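The difference between unconstrained and constrained pseudorandom generation can be sketched as follows; the parameter names and the single activation-time constraint are simplifications we assume for illustration, not the full test structure used in the testbench.

```python
import random

def pseudorandom_test(seed, horizon=60.0):
    """Unconstrained: any activation time within the simulation horizon."""
    rng = random.Random(seed)  # seeded, hence the test is repeatable
    return {"activation_time": rng.uniform(0.0, horizon)}

def constrained_test(seed, threshold=5.0):
    """Constrained: bias activation to within the protocol's time threshold."""
    rng = random.Random(seed)
    return {"activation_time": rng.uniform(0.0, threshold)}
```

Seeding the generator is what makes the strategy pseudorandom rather than random: rerunning with the same seed reproduces the same test, which is essential for debugging a failure.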
Constrained pseudorandom test generation can be complemented with model-based techniques~\cite{Haedicke2012,Lakhotia2009} to generate sequences that address specific use cases, such as interaction protocols between human and robot in a collaborative manufacturing environment. In model-based test generation, a model is explored or traversed to obtain abstract tests, i.e.\ tests at the same level of abstraction as the model. These abstract tests can serve as test templates, or constraints, for tests that target specific behaviours~\cite{Lackner2012,Nielsen2003}. For this, a model needs to be implemented, e.g.\ one that captures the intended behaviours of the robot when interacting with humans and/or its environment. In robotics, the degree of abstraction between such a model and the simulation often differs significantly compared to that observed in microelectronics~\cite{Nielsen2014}. Many low-level implementation details such as motion control, sensing models or path planning are abstracted from (e.g., as in~\cite{webster14formalshort}) to keep these models within manageable size. For model-based testing to be credible and effective, the correctness of the behavioural model with respect to the robot's code needs to be established. However, this is beyond the scope of this paper.
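The use of an abstract test as a template can be sketched as follows; the action names and the way timing parameters are sampled are hypothetical, intended only to illustrate how a model checking witness constrains a concrete test.

```python
import random

def concretize(template, seed, max_delay=1.0):
    """Turn an abstract action sequence (e.g. from a model checking witness)
    into a concrete test by sampling a delay before each action."""
    rng = random.Random(seed)
    return [(action, rng.uniform(0.0, max_delay)) for action in template]

# A hypothetical template obtained from a witness trace:
template = ["activate", "request", "sense_ok", "release"]
```

The template fixes the order of actions (the behaviour targeted by the requirement), while the sampled delays retain some pseudorandom variety between generated tests.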
\section{Introduction}
This paper is a sequel to~\cite{RS}. It studies
the moduli space of stable maps whereas~\cite{RS}
studied the moduli space of stable marked nodal Riemann surfaces.
The latter can be considered as a special case of the former
by taking the target manifold $M$ to be a point. In both cases
the moduli space is the orbit space of a groupoid where
the objects are compact surfaces with additional structure.
(We think of a map from a surface to another
manifold as a structure on the surface.)
In both cases the difficulty is that to achieve compactness of
this moduli space it is necessary to include objects
whose underlying surfaces are not homeomorphic.
Here we study only that part of the moduli space
of stable maps which can be represented by
regular stable maps. Only by restricting attention
to regular stable maps can we hope to construct
an orbifold structure. We also limit attention
to target manifolds $M$ which are integrable
complex and not just almost complex.
As in~\cite{RS} we make heavy use of ``Hardy decompositions''.
The idea is to decompose a Riemann surface $\Sigma$
into two surfaces $\Sigma'$ and $\Sigma''$
intersecting in their common boundary $\Gamma$.
A holomorphic map from $\Sigma$ into a complex manifold $M$
is uniquely determined by its restriction to $\Gamma$ and so the
space of all such holomorphic maps can be embedded
into the space $\mathcal{V}$ of smooth maps from $\Gamma$ to $M$.
In this way we identify the holomorphic maps
with $\mathcal{V}'\cap\mathcal{V}''$ where $\mathcal{V}'$ and $\mathcal{V}''$ are the maps
from $\Gamma$ to $M$ which extend holomorphically
to $\Sigma'$ and $\Sigma''$ respectively.
(In the case where $\Sigma$ is the
Riemann sphere, $M={\mathbb{C}}\cup\{\infty\}$,
and $\Gamma$ is the equator,
$\mathcal{V}'$ would consist of those maps whose negative
Fourier coefficients vanish and
$\mathcal{V}''$ would consist of those maps whose positive
Fourier coefficients vanish. Hence the name
{\em Hardy decomposition}.)
The importance of this construction becomes
clear when we consider a parameterized family
$\{\Sigma_b\}_{b\in B}$ of Riemann surfaces.
By judiciously choosing the decomposition
we can arrange that the one dimensional manifolds
$\Gamma_b$ are all diffeomorphic, even though
the manifolds $\Sigma'_b$ are not all homeomorphic.
Then we identify the various $\Gamma_b$ with
a disjoint union $\Gamma$ of circles.
Under suitable hypotheses we are able
to
represent the holomorphic maps
from $\Sigma_b$ to $M$
(for varying $b$) as a submanifold of
the manifold of smooth maps from
$\Gamma\cong{\partial}\Sigma'_b={\partial}\Sigma''_b$ to $M$.
Our theorems lead to a theory of Fredholm triples
in Section~\ref{sec:interfred}.
These are triples $(X,X',X'')$ where $X$ is a Hilbert
manifold and $X'$, $X''$ are Hilbert submanifolds
such that $T_xX'\cap T_xX''$ and $T_xX/(T_xX'+T_xX'')$
are finite dimensional for every $x\in X'\cap X''$.
We prove a finite dimensional
reduction theorem for morphisms of such triples.
We hope this theory has separate interest.
In Section~\ref{sec:topology} we show that the orbifold
topology is the same as the well known
topology of Gromov convergence.
Naming the additional structures which occur in this paper
as opposed to~\cite{RS} caused us to exhaust the
Latin and Greek alphabets. Accordingly we
have changed notation somewhat. For example,
the aforementioned decomposition
$\Sigma=\Sigma'\cup\Sigma''$
was
$\Sigma=\Delta\cup\Omega$
in~\cite{RS}. We also
use the following notations
$$
\begin{array}{lcl}
{\mathsf{g}} &:=&\mbox{arithmetic genus of }\Sigma/\nu, \\
{\mathsf{n}} &:=&\mbox{number of marked points}, \\
{\mathsf{k}} &:=&\mbox{number of nodal points}, \\
{\mathsf{a}} &:=&\mbox{complex dimension of } A,\\
{\mathsf{b}} &:=&\mbox{complex dimension of } B,\\
{\mathsf{m}} &:=&\mbox{complex dimension of } M.\\
\end{array}
$$
We have used the \verb$\mathsf$ font for these integers so that we
can write $a\in A$, $b\in B$ for the elements. We will also use the
symbol ${\mathsf{d}}$ to denote a homology class in $H_2(M;{\mathbb{Z}})$.
\section{Stable maps}\label{sec:stablemap}
\begin{PARA}\rm
Throughout let $(M,J)$ be a complex manifold without boundary.
A \jdef{configuration} in $M$ is a tuple $(\Sigma,s_*,\nu,j,v)$ where
$(\Sigma,s_*,\nu,j)$ is a marked nodal Riemann surface (see~\cite[\S3]{RS})
whose quotient $\Sigma/\nu$ is connected
and $v:\Sigma\to M$
is a smooth map satisfying the nodal conditions
$$
\{x,y\}\in\nu\implies v(x)=v(y).
$$
Thus $v$ descends to the quotient $\Sigma/\nu$ and we write
$v:\Sigma/\nu\to M$ for a smooth map $v:\Sigma\to M$ satisfying
the nodal conditions.
We say that the configuration has \jdef{type} $({\mathsf{g}},{\mathsf{n}})$
if the marked nodal surface $(\Sigma,s_*,\nu)$
has type $({\mathsf{g}},{\mathsf{n}})$ in the sense of \cite[Definition~3.7]{RS}
and that it has \jdef{type} $({\mathsf{g}},{\mathsf{n}},{\mathsf{d}})$
if in addition the map $v$
sends the fundamental class of $\Sigma$ to the homology
class ${\mathsf{d}}\in H_2(M;{\mathbb{Z}})$.
The configurations form the objects of a groupoid;
an isomorphism
$$
\phi:(\Sigma',s'_*,\nu',j',v')\to(\Sigma,s_*,\nu,j,v)
$$
is an isomorphism $\phi:\Sigma'\to\Sigma$ of the underlying
marked nodal Riemann surfaces such that
$$
v' = v\circ\phi.
$$
Given two nonnegative integers ${\mathsf{g}}$ and ${\mathsf{n}}$ and a homology
class ${\mathsf{d}}\in H_2(M;{\mathbb{Z}})$ we denote by
$\mathcal{B}_{{\mathsf{g}},{\mathsf{n}}}(M,J)$ the groupoid of configurations of type $({\mathsf{g}},{\mathsf{n}})$
and by $\mathcal{B}_{{\mathsf{g}},{\mathsf{n}},{\mathsf{d}}}(M,J)$ the subgroupoid of configurations of type
$({\mathsf{g}},{\mathsf{n}},{\mathsf{d}})$.
\end{PARA}\rm
\begin{PARA}\rm
The configuration $(\Sigma,s_*,\nu,j,v)$ is called \jdef{holomorphic} if the
map $v$ is holomorphic, i.e. if
$$
\bar{\partial}_{j,J}(v):=\mbox{$\frac12$}\left(dv + J(v) dv\circ j\right)=0.
$$
A \jdef{stable map} is a holomorphic configuration
whose automorphism group is finite.
This means that each genus-0 component of $\Sigma$
on which $v$ is constant carries at least three special points
and each genus-1 component of $\Sigma$
on which $v$ is constant carries at least one special point.
A component on which $v$ is constant is commonly
called a {\em ghost component} so a stable map
is a holomorphic configuration such that each ghost component
is stable in the sense of \cite[Definition~3.7]{RS}.
The stable maps of type $({\mathsf{g}},{\mathsf{n}})$ are a subgroupoid of $\mathcal{B}_{{\mathsf{g}},{\mathsf{n}}}(M,J)$;
the orbit space $\overline{\mathcal{M}}_{{\mathsf{g}},{\mathsf{n}}}$ of this subgroupoid is
(set theoretically) the \jdef{moduli space} of stable maps of type $({\mathsf{g}},{\mathsf{n}})$.
Similarly define the subset $\overline{\mathcal{M}}_{{\mathsf{g}},{\mathsf{n}},{\mathsf{d}}}$.
Our goal is to construct
a canonical orbifold structure on the regular part
of this space.
\end{PARA}\rm
\begin{definition}\rm \label{def:regular}
A holomorphic configuration $(\Sigma,s_*,\nu,j,v)$ is called
\jdef{regular} if
\begin{equation}\label{eq:regular}
\Omega^{0,1}_j(\Sigma,v^*TM)
= \im\,D_v + dv\cdot\Omega^{0,1}_j(\Sigma,T\Sigma)
\end{equation}
where
$$
D_v:\Omega^0(\Sigma/\nu,v^*TM)\to\Omega^{0,1}_j(\Sigma,v^*TM)
$$
is the linearized Cauchy--Riemann operator (see~\cite[page~41]{MS} and~\ref{Dv} below).
\end{definition}\rm
\begin{PARA}\rm\label{regular} Fix $\nu$ and $s_*$.
Let $\mathcal{J}(\Sigma)\subset\End(T\Sigma)$
denote the manifold of complex structures on $\Sigma$ and let
$$
\mathcal{B}:=\mathcal{J}(\Sigma)\times C^{\infty}(\Sigma/\nu,M).
$$
Form the vector bundle $\mathcal{E}\to\mathcal{B}$ with fiber
$$
\mathcal{E}_{j,v}:=\Omega^{0,1}_j(\Sigma,v^*TM)
$$
and let $\mathcal{S}:\mathcal{B}\to\mathcal{E}$ denote the section defined by
the nonlinear Cauchy--Riemann operator
$$
\mathcal{S}(j,v):=\bar{\partial}_{j,J}(v).
$$
A configuration $(j,v)$ is holomorphic if and only if $\mathcal{S}(j,v)=0$.
The intrinsic derivative of $\mathcal{S}$ at a zero $(j,v)\in\mathcal{S}^{-1}(0)$
is the operator
$
\mathcal{D}_{j,v}:T_{j,v}\mathcal{B}\to\mathcal{E}_{j,v}
$
given by
$$
\mathcal{D}_{j,v}({\hat{\jmath}},\hat{v})=D_v\hat{v}+\mbox{$\frac12$} J(v)\, dv\cdot {\hat{\jmath}}.
$$
{\em A holomorphic configuration $(j,v)$
is regular if and only if the operator $\mathcal{D}_{j,v}$ is surjective.}
This follows from the following three assertions:
(1)~the tangent space to $\mathcal{B}$ at $(j,v)$ is
$$
T_{j,v}\mathcal{B}=
\Omega^{0,1}_j(\Sigma,T\Sigma)\times \Omega^0(\Sigma,v^*TM)
$$
(2)~When $v$ is holomorphic, we have $J(v)\, dv\cdot {\hat{\jmath}}=dv\cdot j{\hat{\jmath}}$.
(3)~The map
$$
\Omega^{0,1}_j(\Sigma,T\Sigma)\to \Omega^{0,1}_j(\Sigma,T\Sigma):
{\hat{\jmath}}\mapsto j{\hat{\jmath}}
$$
is bijective.
Hence, for a regular holomorphic configuration,
the zero set of $\mathcal{S}$ is a Fr\'echet manifold near $(j,v)$
with tangent space $\ker\,\mathcal{D}_{j,v}$.
This zero set is the ``stratum'' consisting of the holomorphic
configurations of type $({\mathsf{g}},{\mathsf{n}})$ obtained by fixing $\nu$ and varying $(j,v)$.
Fixing $j$ gives the vector bundle over $C^{\infty}(\Sigma/\nu,M)$ with fibers
$\Omega^{0,1}_j(\Sigma,v^*TM)$.
When the configuration $(j,v)$ is holomorphic,
the operator $D_v$ is the intrinsic derivative of
the section $v\mapsto\mathcal{S}(j,v)$.
\end{PARA}\rm
\begin{PARA}\rm\label{Diff}
The section $(j,v)\mapsto\mathcal{S}(j,v)=\bar{\partial}_{j,J}(v)$ is equivariant
under the action of the group $\Diff(\Sigma,\nu)$
of orientation preserving diffeomorphisms that preserve the
nodal structure. The Lie algebra of $\Diff(\Sigma,\nu)$ is the
space
$$
\Vect(\Sigma,\nu)
:=\{\xi\in\Omega^0(\Sigma,T\Sigma)\,|\,\xi(z)=0\,\forall z\in\cup\nu\}
$$
of vector fields on $\Sigma$ that vanish on the nodal set.
The infinitesimal equivariance condition is
\begin{equation}\label{eq:Ddbar}
D_v(dv\cdot\xi) = dv\cdot\bar{\partial}_j\xi
\end{equation}
for every $\xi\in\Vect(\Sigma,\nu)$.
The diffeomorphism group $\Diff(\Sigma,\nu)$ acts on
the space
$$
\mathcal{Z}_{\mathsf{n}}(\Sigma,\nu;M,J):= (\Sigma^{\mathsf{n}}\setminus\Delta)\times\mathcal{S}^{-1}(0)
$$
(where $\Delta$ is the fat diagonal) by
\begin{equation}\label{eq:Diff}
g^*(s_1,\dots,s_{\mathsf{n}},j,v)
:=(g^{-1}(s_1),\dots,g^{-1}(s_{\mathsf{n}}),g^*j,v\circ g)
\end{equation}
for $g\in\Diff(\Sigma,\nu)$. Let $\mathcal{P}_{\mathsf{n}}(\Sigma,\nu;M,J)\subset \mathcal{Z}_{\mathsf{n}}(\Sigma,\nu;M,J)$
denote the subset of stable maps, i.e.\ the subset where $\Diff(\Sigma,\nu)$
acts with finite isotropy.
Then the quotient space
$$
\mathcal{M}_{\mathsf{n}}(\Sigma,\nu;M,J)
:= \mathcal{P}_{\mathsf{n}}(\Sigma,\nu;M,J)/\Diff(\Sigma,\nu)
$$
is a stratum of the moduli space $\overline{\mathcal{M}}_{{\mathsf{g}},{\mathsf{n}}}(M,J)$ of all
stable maps of genus ${\mathsf{g}}$ with ${\mathsf{n}}$ marked points.
The stratum can also be expressed as the quotient
$
\mathcal{M}_{\mathsf{n}}(\Sigma,\nu;M,J)
= \mathcal{S}^{-1}(0)_\mathrm{stable}/\Diff(\Sigma,\nu,s_*)
$
where $\Diff(\Sigma,\nu,s_*)\subset\Diff(\Sigma,\nu)$
denotes the subgroup of all diffeomorphisms $\phi\in\Diff(\Sigma,\nu)$
that satisfy $\phi(s_{\mathsf{i}})=s_{\mathsf{i}}$ for ${\mathsf{i}}=1,\dots,{\mathsf{n}}$.
\end{PARA}\rm
\begin{PARA}\rm\label{Dv}
Let $(\Sigma,\nu,j)$ be a nodal Riemann surface
and $v:\Sigma\to M$ be a smooth map.
Fix a connection on $TM$ and
define
\begin{equation}\label{eq:Dv}
D_v\hat{v}
:=\mbox{$\frac12$}\left(\nabla\hat{v}+J(v)\nabla\hat{v}\circ j\right)
- \mbox{$\frac12$} J(v)\nabla_{\hat{v}}J(v){\partial}_{j,J}(v).
\end{equation}
(See~\cite[page~41]{MS}.)
The definition for $D_v$ is meaningful even when
$J$ is not integrable.
If $\bar{\partial}_{j,J}(v)=0$, then
the right hand side of~(\ref{eq:Dv})
is independent of the choice of
the connection $\nabla$
and is the operator of Definition~\ref{def:regular}.
If $J$ is integrable,
$v^*TM\to\Sigma$ is a holomorphic vector bundle
and $D_v$ is its Cauchy--Riemann operator.
If $\nabla$ is the Levi-Civita
connection of a K\"ahler metric, then $\nabla J=0$ and the
last term vanishes. In general (assuming neither integrability nor
that $(j,v)$ is a zero) the formula for $D_v$ still defines
a Cauchy--Riemann operator on $v^*TM$ which depends
however on the connection and might not be complex linear,
but it is always Fredholm.
\end{PARA}\rm
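For orientation we record the Fredholm index of $D_v$; for a smooth closed surface $\Sigma$ of genus ${\mathsf{g}}$ (without nodes) the standard Riemann--Roch theorem (cf.~\cite{MS}) gives

```latex
$$
\mathrm{index}_{\mathbb{R}}\,D_v
= {\mathsf{m}}\,\chi(\Sigma) + 2\left\langle c_1(TM),{\mathsf{d}}\right\rangle
= 2{\mathsf{m}}(1-{\mathsf{g}}) + 2\left\langle c_1(TM),{\mathsf{d}}\right\rangle,
$$
where ${\mathsf{d}}=v_*[\Sigma]\in H_2(M;{\mathbb{Z}})$.
```

In particular the index depends only on the topological data $({\mathsf{g}},{\mathsf{m}},{\mathsf{d}})$ and not on the choice of connection; for nodal $\Sigma$ the count is modified by the nodal matching conditions.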
\section{Unfoldings of stable maps}\label{sec:unfolding}
\begin{PARA}\rm\label{family}
Fix two nonnegative integers ${\mathsf{g}}$ and ${\mathsf{n}}$
and a homology class ${\mathsf{d}}\in H_2(M;{\mathbb{Z}})$.
A \jdef{(holomorphic) family of maps
(of type $({\mathsf{g}},{\mathsf{n}},{\mathsf{d}})$)}
is a triple
$$
(\pi:Q\to B,S_*,H)
$$
where $(\pi,S_*)$ is a marked nodal Riemann family
(of type $({\mathsf{g}},{\mathsf{n}})$) and
$$
H:Q\to M
$$
is a holomorphic map such that the restriction of $H$
to each fiber $Q_b$ represents the homology class ${\mathsf{d}}$.
A desingularization $u:\Sigma\to Q_b$ of a fiber induces
a holomorphic configuration $(\Sigma,s_*,\nu,j,v)$ with
$$
v:=H\circ u.
$$
The family of maps is called \jdef{stable} if each configuration
that arises from a desingularization of a fiber is a stable map.
Given two families of maps ${(\pi_A:P\to A,R_*,H_A)}$
and ${(\pi_B:Q\to B,S_*,H_B)}$ a map $f:P_a\to Q_b$ is called
a \jdef{fiber isomorphism} if it is a fiber isomorphism
of marked nodal Riemann families and
$$
H_A|P_a = H_B\circ f.
$$
A \jdef{morphism} between two families of maps
$(\pi_A,R_*,H_A)$ and $(\pi_B,S_*,H_B)$ is a
commutative diagram
$$
\xymatrix{
&&&& M\\
P\ar[rrr]^\Phi\ar[d]_{\pi_A}\ar[rrrru]^{H_A} &&& Q\ar[d]^{\pi_B}\ar[ru]_{H_B} \\
A\ar[rrr]^\phi &&& B \\
}
$$
such that, for each $a\in A$, the restriction of $\Phi$
to the fiber $P_a$ is a fiber isomorphism.
The morphism is called continuous, continuously differentiable,
smooth, or holomorphic if both maps $\phi$ and $\Phi$ are.
\end{PARA}\rm
\begin{definition}\rm\label{def:univ}
An \jdef{unfolding of maps} is a quadruple $(\pi_B,S_*,H_B,b)$
where $(\pi_B,S_*,H_B)$ is a family of maps and $b\in B$.
An unfolding $(\pi_B,S_*,H_B,b)$ is called \jdef{universal}
if, for every other unfolding $(\pi_A,R_*,H_A,a)$
and every fiber isomorphism ${f:P_a\to Q_b}$,
there is a unique morphism
$$
(\phi,\Phi):(\pi_A,R_*,H_A,a)\to(\pi_B,S_*,H_B,b)
$$
of families of maps such that
$$
\Phi|P_a=f.
$$
This is to be understood in the sense of germs; the morphism
may only be defined after shrinking $A$, and two morphisms
are considered equal if they agree on some neighborhood of
$P_a$.
\end{definition}\rm
\begin{definition}\rm\label{def:infuniv}
Let $(\pi:Q\to B,S_*,H,b)$ be an unfolding of maps and
${u:\Sigma\to Q_b}$ be a desingularization with induced
structures $s_*$, $\nu$, $j$, and $v$ on $\Sigma$.
Define the spaces
$$
\mathcal{X}_u:=\left\{\hat{u}\in\Omega^0(\Sigma/\nu,u^*TQ)\,|\,
d\pi(u)\hat u\equiv\mathrm{constant},\;
\hat u(s_{\mathsf{i}})\in T_{u(s_{\mathsf{i}})}S_{\mathsf{i}}
\right\},
$$
$$
\mathcal{Y}_u:=\{\eta\in\Omega^{0,1}_j(\Sigma,u^*TQ)\,|\,d\pi(u)\eta=0\},
$$
$$
\mathcal{X}_v := \Omega^0(\Sigma/\nu,v^*TM),\qquad
\mathcal{Y}_v := \Omega^{0,1}_j(\Sigma,v^*TM).
$$
Consider the diagram
\begin{equation}\label{eq:XY}
\xymatrix{
{\mathcal{X}_u}\ar[r]^{dH(u)}\ar[d]_{D_u}&{\mathcal{X}_v}\ar[d]^{D_v} \\
{\mathcal{Y}_u}\ar[r]^{dH(u)}&{\mathcal{Y}_v}
}
\end{equation}
where the vertical maps are the restrictions
to the indicated subspaces
of the linearized Cauchy--Riemann operators (see~\ref{Dv})
$$
D_u:\Omega^0(\Sigma,u^*TQ)\to\Omega^{0,1}(\Sigma,u^*TQ),
$$
$$
D_v:\Omega^0(\Sigma,v^*TM)\to\Omega^{0,1}(\Sigma,v^*TM)
$$
associated to the holomorphic maps $u$ and $v$.
Thus
$D_v$
is the intrinsic derivative in~\ref{def:regular}.
The diagram~(\ref{eq:XY}) commutes because $H$ is
holomorphic and hence
$\bar{\partial}_{j,J_M}(H\circ u)=dH(u)\cdot\bar{\partial}_{j,J_Q}(u)$.
The commutative diagram~(\ref{eq:XY}) determines maps
\begin{equation}\label{eq:dH}
dH(u):\ker D_u\to \ker D_v,\qquad
dH(u):\coker D_u\to \coker D_v.
\end{equation}
The unfolding is called \jdef{infinitesimally universal} if the maps
in~(\ref{eq:dH}) are both bijective.
\end{definition}\rm
\begin{remark}\rm\label{rmk:regular}
{\em Let $(\Sigma,s_*,\nu,j,v)$ be induced by a desingularization
$u:\Sigma\to Q_b$ of an unfolding $(\pi:Q\to B,S_*,H,b)$.
Then $(\Sigma,s_*,\nu,j,v)$ is regular if and only if
the map $dH(u):\coker D_u\to \coker D_v$ is surjective.}
To see this note that $dH(u):\coker D_u\to \coker D_v$ is surjective
if and only if
\begin{equation}\label{eq:regular1}
\mathcal{Y}_v = \im\, D_v + \im\,(dH(u):\mathcal{Y}_u\to\mathcal{Y}_v).
\end{equation}
Since $u$ is an immersion, the map
$$
T_j\mathcal{J}(\Sigma)=\Omega^{0,1}_j(\Sigma,T\Sigma)\to \mathcal{Y}_u:
\eta\mapsto du\cdot \eta
$$
is an isomorphism. But $v=H\circ u$ so
$dv\cdot\eta=dH(u)\circ du\cdot\eta$ so
$$
dv\cdot\Omega^{0,1}_j(\Sigma,T\Sigma)=\im\,(dH(u):\mathcal{Y}_u\to\mathcal{Y}_v).
$$
Hence equation~(\ref{eq:regular}) is equivalent to
equation~(\ref{eq:regular1}) which asserts that the
holomorphic configuration $(\Sigma,s_*,\nu,j,v)$ is regular.
\end{remark}\rm
When $M$ is a point the above definitions and the following theorems
agree with the corresponding ones in~\cite{RS}.
\begin{theorem}\label{thm:exists}
A holomorphic configuration $(\Sigma,s_*,\nu,j,v)$ admits
an infinitesimally universal unfolding if and only if
it is a regular stable map.
\end{theorem}
\begin{proof}
The hard part of the proof is to show that `if' holds under the
additional assumption that the underlying marked nodal Riemann
surface $(\Sigma,s_*,\nu,j)$ is stable. We will prove this
in Section~\ref{sec:proof}. Here we give the easy parts
of the proof.
We prove `if' (assuming the aforementioned result of
Section~\ref{sec:proof}). By adding marked points in the
appropriate components we may construct a stable map
whose underlying marked nodal Riemann
surface is stable. Hence, by backwards induction, it
is enough to prove the following
\medskip
\noindent{\bf Claim.} {\it If a stable map admits an
infinitesimally universal unfolding and the configuration
which results on deleting a marked point is also a stable map,
then it too admits an infinitesimally universal unfolding.}
\medskip
To prove the claim let $(\pi:Q\to B,S_1,\dots,S_{\mathsf{n}},H,b_0)$ be an infinitesimally
universal unfolding of $(\Sigma,s_1,\dots,s_{\mathsf{n}},\nu,j,v)$ with
associated desingularization $u:\Sigma\to Q_{b_0}$
and assume that $(\Sigma,s_1,\dots,s_{{\mathsf{n}}-1},\nu,j,v)$ is still stable.
We will construct an infinitesimally universal unfolding
$
(\pi:Q'\to B',S_1',\ldots,S_{{\mathsf{n}}-1}',H',b_0)
$
such that $B'$ is a submanifold of $B$,
$Q':=\pi^{-1}(B')$ is a submanifold of $Q$, $H':=H|Q'$,
and $S_{\mathsf{i}}'=S_{\mathsf{i}}\cap Q'$ for ${\mathsf{i}}=1,\ldots,{\mathsf{n}}-1$.
Define the space
$$
\Tilde{\mathcal{X}}_u:=\left\{\hat{u}\in\Omega^0(\Sigma/\nu,u^*TQ)\,|\,
d\pi(u)\hat u\equiv\mathrm{constant},\,
\hat u(s_{\mathsf{i}})\in T_{u(s_{\mathsf{i}})}S_{\mathsf{i}}
\mbox{ for }{\mathsf{i}}<{\mathsf{n}}\right\}.
$$
Note that $\Tilde{\mathcal{X}}_u$ is obtained from $\mathcal{X}_u$ by removing the constraint
on the value $\hat u(s_{\mathsf{n}})$ at the last marked point. Thus $\mathcal{X}_u$ is a
subspace of $\Tilde{\mathcal{X}}_u$ of complex codimension one;
a complement of $\mathcal{X}_u$ in $\Tilde{\mathcal{X}}_u$ is spanned
by any vertical vector field along $u$, satisfying the nodal condition,
that vanishes at the marked points $s_{\mathsf{i}}$ for ${\mathsf{i}}<{\mathsf{n}}$
and does not vanish at $s_{\mathsf{n}}$. Denote by
$$
\Tilde{D}_u:\Tilde{\mathcal{X}}_u\to\mathcal{Y}_u
$$
the operator given by the same formula as $D_u$ on the larger domain.
Note that the diagram~(\ref{eq:XY}) continues to commute when
we replace $\mathcal{X}_u$ and $D_u$ by $\Tilde{\mathcal{X}}_u$ and $\Tilde{D}_u$,
respectively. We prove the following.
\begin{description}
\item[(a)]
$\mathrm{im}\,D_u=\mathrm{im}\,\Tilde{D}_u$
and $\ker D_u\subset\ker\Tilde{D}_u$ is a
subspace of codimension one.
\item[(b)]
There is an element $\hat u\in\ker\Tilde{D}_u$ with
$dH(u)\hat u\equiv 0$ and $\hat b:=d\pi(u)\hat u\ne 0$.
\end{description}
With this understood we choose a complex submanifold
$B'\subset B$ of codimension one such that
$\pi$ is transverse to $B'$ and $\hat b\notin T_{b_0}B'$.
Then the kernel of the resulting operator $D_u'$ is a complex
subspace of the kernel of $\Tilde{D}_u$ of codimension one.
Since $\hat b\notin T_{b_0}B'$, the kernel of $D_u'$
is mapped under $dH(u)$ isomorphically onto the kernel
of $D_v$. Since $D_u'$ has the same image as $\Tilde{D}_u$
and $D_u$ we deduce that $dH(u)$ also induces an
isomorphism from the cokernel of $D'_u$ to that of $D_v$.
Hence $(\pi:Q'\to B',S_1',\dots,S_{{\mathsf{n}}-1}',H',b_0)$ is an
infinitesimally universal unfolding
of $(\Sigma,s_1,\dots,s_{{\mathsf{n}}-1},\nu,j,v)$
as claimed.
\smallbreak
It remains to prove~(a) and~(b). To prove~(a) note
that $\Tilde{D}_u$ has the same image as $D_u$.
(If $\eta\in\mathcal{Y}_u$ belongs to the image of $\Tilde{D}_u$
then $dH(u)\eta\in\mathrm{im}\,D_v$ and, since
the second map in~(\ref{eq:dH}) is injective, this
implies that $\eta$ belongs to the image of $D_u$.)
Hence~(a) follows from the fact that $\mathcal{X}_u$ has
codimension one in $\Tilde{\mathcal{X}}_u$.
To prove~(b) we use the fact that the first map in~(\ref{eq:dH})
is surjective and $dH(u)$ maps the kernel of $\Tilde{D}_u$
to the kernel of $D_v$. Hence there is an element
$$
\hat u\in\ker\Tilde{D}_u\cap\ker dH(u)\setminus\ker D_u.
$$
Any such element satisfies
$$
d\pi(u)\hat u\ne 0.
$$
Otherwise there is a vector field $\xi\in\Vect(\Sigma)$ with
$\hat u=du\cdot\xi$; since $\hat u\in\Tilde{\mathcal{X}}_u$ this implies
that $\xi$ belongs to the Lie algebra of the stabilizer subgroup
of $(\Sigma,s_1,\dots,s_{{\mathsf{n}}-1},\nu,j,v)$, contradicting stability.
Thus we have proved~(a) and~(b) and hence the claim.
We prove `only if'.
Let $(\Sigma,s_*,\nu,j,v)$ be induced by a
desingularization $u:\Sigma\to Q_b$ of the
infinitesimally universal unfolding $(\pi:Q\to B,S_*,H,b)$.
Then the holomorphic configuration $(\Sigma,s_*,\nu,j,v)$
is regular, by Remark~\ref{rmk:regular}.
Next we argue as in~\cite{RS}.
Assume that $(\Sigma,s_*,\nu,j,v)$ is regular but not stable.
Then either $\Sigma$ has genus one,
$v$ is constant, and there are no special points
or else $\Sigma$ contains a component of
genus zero on which $v$ is constant and which carries
at most two special points. In either case there
is an abelian complex Lie group $A$ (namely $A=\Sigma$ in the former case
and $A={\mathbb{C}}^*$ in the latter) and an effective holomorphic action
$$
A\times\Sigma\to\Sigma:(a,z)\mapsto a_\Sigma(z)
$$
that preserves the given structures.
Let $P:=A\times\Sigma$, $\pi_A$ be the projection
on the first factor, $R_*:=A\times s_*$,
$H_A(a,z):=v(z)$, and $a_0\in A$ be the identity.
If $u_0:\Sigma\to Q$ is any desingularization
of a fiber $Q_{b_0}$ of an unfolding $(\pi_B:Q\to B,S_*,H_B,b_0)$
which induces the given structures on~$\Sigma$, then
$$
\Phi_1(a,z):=u_0(z),\qquad \Phi_2(a,z):=u_0(a_\Sigma(z))
$$
are distinct morphisms from $(\pi_A,R_*,H_A,a_0)$
to $(\pi_B,S_*,H_B,b_0)$ which extend the fiber isomorphism
${P_{a_0}\to Q_{b_0}:(a_0,z)\mapsto u_0(z)}$.
Hence $(\pi_B,S_*,H_B,b_0)$ is not a universal unfolding.
\end{proof}
\begin{theorem}\label{thm:universal}
An unfolding of a regular stable map
is universal if and only if it is infinitesimally universal.
\end{theorem}
\begin{proof}
We prove `if' in Section~\ref{sec:proof}.
For `only if' we argue as in~\cite{RS}.
A composition of morphisms (of nodal families of maps)
is again a morphism. The only morphism which is the identity
on the central fiber of a universal unfolding is the identity.
It follows that any two universal unfoldings of the same
holomorphic configuration are isomorphic.
By Theorem~\ref{thm:exists} there is an
infinitesimally universal unfolding and by `if' it is universal
and hence isomorphic to every other universal unfolding.
Any unfolding isomorphic to an infinitesimally
universal unfolding is itself infinitesimally universal.
\end{proof}
\begin{example}\rm
Here is an example of an unfolding which is universal
but not infinitesimally universal.
Let $B={\mathbb{C}}$, $b_0=0$, $\Sigma$ be a Riemann surface
of genus ${\mathsf{g}}\ge1$, $Q=M=B\times\Sigma$,
$\pi_B:Q\to B$ be the projection on the first factor,
and $H_B:Q\to M$ be the identity map.
This is trivially universal as follows.
If $(\pi_A,H_A,a_0)$ is another unfolding
and $f_0:P_{a_0}\to Q_{b_0}$ is a fiber isomorphism
as in~\ref{family}, then $f_0=H_A|P_{a_0}$,
the unique solution of $H_B\circ\Phi=H_A$ is $\Phi=H_A$,
and $\phi$ is uniquely determined by the condition
$\pi_B\circ\Phi=\phi\circ\pi_A$.
To show that the example is not infinitesimally universal
it is enough (by Theorem~\ref{thm:exists})
to show that the fiber is not regular, i.e. that
$$
\im\,D_v + dv\cdot\Omega^{0,1}_j(\Sigma,T\Sigma)
\subsetneq \Omega^{0,1}_j(\Sigma,v^*TM)
$$
where $v:\Sigma\to M$ is the map $v(z):=(b_0,z)$.
Now $TM$ is the direct sum of $dv\cdot T\Sigma$ with a trivial bundle, so
it is enough to show that $D_v$ followed by the projection onto the trivial
bundle is not surjective. But this is the linear operator
${\overline\partial}:\Omega^0(\Sigma)\to \Omega^{0,1}_j(\Sigma)$.
Its cokernel is the space of holomorphic $1$-forms
and it has dimension ${\mathsf{g}}$.
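(As a consistency check: by Riemann--Roch the operator
${\overline\partial}:\Omega^0(\Sigma)\to\Omega^{0,1}_j(\Sigma)$
has complex index
$$
\dim_{\mathbb{C}}\ker{\overline\partial}
-\dim_{\mathbb{C}}\coker{\overline\partial}=1-{\mathsf{g}},
$$
its kernel consists of the constant functions since $\Sigma$ is compact
and connected, so $\dim_{\mathbb{C}}\ker{\overline\partial}=1$ and hence
$\dim_{\mathbb{C}}\coker{\overline\partial}={\mathsf{g}}$, as asserted.)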
\end{example}\rm
\begin{theorem}\label{thm:stable}
If an unfolding $(\pi,S_*,H,b_0)$ is infinitesimally universal, then
the unfolding $(\pi,S_*,H,b)$ is infinitesimally universal
for $b$ sufficiently near $b_0$.
\end{theorem}
\begin{PARA}\rm\label{univ-family}
Fix two nonnegative integers ${\mathsf{g}}$ and ${\mathsf{n}}$ and a homology
class ${\mathsf{d}}\in H_2(M;{\mathbb{Z}})$.
A \jdef{universal family of maps} of type $({\mathsf{g}},{\mathsf{n}},{\mathsf{d}})$
is a marked nodal family of maps $(\pi_B:Q\to B,S_*,H_B)$
satisfying the following conditions.
\begin{description}
\item[(1)]
$(\pi_B,S_*,H_B,b)$ is a universal unfolding of maps of type $({\mathsf{g}},{\mathsf{n}},{\mathsf{d}})$
for every $b\in B$.
\item[(2)]
Every regular stable map of type $({\mathsf{g}},{\mathsf{n}},{\mathsf{d}})$
arises from a desingularization of at least one fiber of $\pi_B$.
\item[(3)]
$B$ is second countable.
\end{description}
The existence of a universal marked nodal family of maps
for every triple $({\mathsf{g}},{\mathsf{n}},{\mathsf{d}})$
follows immediately from
Theorems~\ref{thm:exists}, \ref{thm:universal}, and~\ref{thm:stable}
as in~\cite[Proposition~6.3]{RS}.
\end{PARA}\rm
\begin{PARA}\rm\label{B-Gamma}
Every universal family $(\pi_B:Q\to B,S_*,H_B)$
of maps of type $({\mathsf{g}},{\mathsf{n}},{\mathsf{d}})$ determines a groupoid
$(B,\Gamma,s,t,e,i,m)$ as in~\cite[Definition~6.4]{RS};
here $\Gamma$ denotes the set of all triples $(a,f,b)$
such that $a,b\in B$ and $f:Q_a\to Q_b$ is a fiber isomorphism
satisfying $H_B\circ f = H_B|Q_a$, and the structure maps
$s,t:\Gamma\to B$, $e:B\to\Gamma$, $i:\Gamma\to\Gamma$,
and $m:\Gamma\Times{s}{t} \Gamma\to\Gamma$
are defined by
$$
s(a,f,b):=a,\qquad
t(a,f,b):=b,\qquad
e(a):=(a,\id,a),
$$
$$
i(a,f,b):=(b,f^{-1},a),\qquad
m((b,g,c),(a,f,b)):=(a,g\circ f,c).
$$
The associated groupoid is equipped with a functor
$
B\to\bar\mathcal{B}^\mathrm{ reg}_{{\mathsf{g}},{\mathsf{n}},{\mathsf{d}}}(M,J):b\mapsto\Sigma_b
$
to the groupoid of Definition~\ref{def:regular}, i.e. $\iota_b:\Sigma_b\to Q_b$
denotes the canonical desingularization in~\cite[Remark~4.4]{RS}.
By definition the induced map
$$
B/\Gamma\to\bar\mathcal{M}^\mathrm{ reg}_{{\mathsf{g}},{\mathsf{n}},{\mathsf{d}}}(M,J)
$$
on orbit spaces is bijective. As in~\cite[Theorem~6.5]{RS}
the groupoid $(B,\Gamma)$ equips the moduli space
$\bar\mathcal{M}^\mathrm{ reg}_{{\mathsf{g}},{\mathsf{n}},{\mathsf{d}}}(M,J)$ with an orbifold structure which
is independent of the choice of the universal family.
\end{PARA}\rm
\begin{theorem}\label{thm:proper}
Let $(\pi_B:Q\to B,S_*,H_B)$
be a universal family of maps of
type $({\mathsf{g}},{\mathsf{n}},{\mathsf{d}})$ as in~\ref{univ-family}.
Then the associated
groupoid $(B,\Gamma)$ constructed in~\ref{B-Gamma} is proper
in the sense of~\cite[2.2]{RS}.
\end{theorem}
\begin{proof}
See Section~\ref{sec:proof}.
\end{proof}
\begin{corollary}
Fix a homology class ${\mathsf{d}}\in H_2(M;{\mathbb{Z}})$.
Then the moduli space $\bar\mathcal{M}^\mathrm{ reg}_{{\mathsf{g}},{\mathsf{n}},{\mathsf{d}}}(M,J)$
of isomorphism classes of regular stable maps of genus ${\mathsf{g}}$
with ${\mathsf{n}}$ marked points representing the class ${\mathsf{d}}$
is a complex orbifold of dimension
$$
\dim_{\mathbb{C}}\bar\mathcal{M}^\mathrm{ reg}_{{\mathsf{g}},{\mathsf{n}},{\mathsf{d}}}(M,J)
=({\mathsf{g}}-1)(3-\dim_{\mathbb{C}} M) +\inner{c_1(TM)}{{\mathsf{d}}} +{\mathsf{n}}.
$$
\end{corollary}
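\begin{example}\rm
As a sanity check of the dimension formula, take $M={\mathbb{C}} P^2$
with its standard complex structure, ${\mathsf{g}}=0$, ${\mathsf{n}}=0$, and
let ${\mathsf{d}}$ be the class of a line, so that
$\inner{c_1(TM)}{{\mathsf{d}}}=3$. Then
$$
\dim_{\mathbb{C}}\bar\mathcal{M}^\mathrm{ reg}_{0,0,{\mathsf{d}}}({\mathbb{C}} P^2,J)
=(0-1)(3-2)+3+0=2,
$$
in agreement with the fact that the unparametrized lines in
${\mathbb{C}} P^2$ form the dual projective plane.
\end{example}\rm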
\begin{remark}\rm
If $(M,\omega,J)$ is a K\"ahler manifold with a transitive action by
a compact Lie group $G$, then every genus zero configuration in
$M$ is regular (see~\cite{RT} or~\cite[Proposition~7.4.3]{MS}).
Hence the moduli space $\bar\mathcal{M}_{0,{\mathsf{n}},{\mathsf{d}}}(M,J)$
is a (compact) complex orbifold for every ${\mathsf{d}}\in H_2(M;{\mathbb{Z}})$.
For $M={\mathbb{C}} P^{\mathsf{m}}$
this result is due to Fulton and Pandharipande~\cite{FP}.
Their result applies to all projective manifolds
whenever all the stable maps are regular.
In such cases they show that the moduli space is an algebraic orbifold.
In contrast, our result shows that the set of regular maps
into {\em any} complex manifold is an orbifold.
\end{remark}\rm
\section{Stable maps without nodes}\label{sec:nonodes}
In this section we restrict attention to regular stable maps without nodes.
Let $(\Sigma,s_*,j_0,v_0)$ be a regular stable map
of type
$({\mathsf{g}},{\mathsf{n}},{\mathsf{d}})$
without nodes.
We will construct an infinitesimally universal unfolding
$(\pi_B,S_*,H_B,b_0)$ of $(\Sigma,s_*,j_0,v_0)$,
show that it is universal, and prove that every other
infinitesimally universal unfolding of $(\Sigma,s_*,j_0,v_0)$
is isomorphic to the one we have constructed.
\begin{PARA}\rm\label{cBcQ}
Fix two nonnegative integers ${\mathsf{n}}$ and ${\mathsf{g}}$,
a homology class ${\mathsf{d}}\in H_2(M;{\mathbb{Z}})$,
and a compact oriented surface
$\Sigma$ without boundary
of genus ${\mathsf{g}}$.
Denote
$$
\mathcal{P}:=\left\{(s_1,\dots,s_{\mathsf{n}},j,v)\,\Bigg|\,\begin{array}{l}
s_*\in\Sigma^{\mathsf{n}}\setminus\Delta,\,
j\in\mathcal{J}(\Sigma),\,v\in C^{\infty}(\Sigma,M) \\
\bar{\partial}_{j,J}(v)=0,\,[v]={\mathsf{d}}\\
\mathcal{D}_{j,v}\mbox{ is onto},\,
(s_*,j,v)\mbox{ is stable}
\end{array}
\right\}
$$
where $\Delta\subset\Sigma^{\mathsf{n}}$ denotes the fat diagonal,
$[v]:=v_*[\Sigma]$ denotes the homology class represented by $v$,
and
$$
\mathcal{D}_{j,v}:
\Omega^{0,1}_j(\Sigma,T\Sigma)\times
\Omega^0(\Sigma,v^*TM)\to\Omega^{0,1}(\Sigma,v^*TM)
$$
denotes the linearized Cauchy--Riemann operator
of~\ref{regular}. Thus $\mathcal{P}$ is the regular part of the
space $\mathcal{P}_{{\mathsf{n}},{\mathsf{d}}}(\Sigma;M,J)$ in~\ref{Diff}.
The group
$$
\mathcal{G}:=\Diff_0(\Sigma)
$$
of orientation preserving diffeomorphisms of $\Sigma$
that are isotopic to the identity acts on $\mathcal{P}$
as in equation~(\ref{eq:Diff}):
$$
g^*(s_1,\dots,s_{\mathsf{n}},j,v)
:= (g^{-1}(s_1),\dots,g^{-1}(s_{\mathsf{n}}),g^*j,g^*v)
$$
for $g\in\mathcal{G}$.
\end{PARA}\rm
\begin{remark}\rm
Roughly speaking, the tuple $(\mathcal{Q}\to\mathcal{B},\mathcal{S}_*,\mathcal{H})$
defined by
$$
\mathcal{B} := \mathcal{P}/\mathcal{G},\qquad \mathcal{Q} := \mathcal{P}\times_\mathcal{G}\Sigma,
$$
$$
\mathcal{H}\left([s_1,\dots,s_{\mathsf{n}},j,v,z]\right) := v(z),\quad
\mathcal{S}_{\mathsf{i}} := \left\{[s_1,\dots,s_{\mathsf{n}},j,v,z]\in\mathcal{Q}\,|\,z=s_{\mathsf{i}}\right\}
$$
is a universal family. Our task is to make sense of these quotients.
In the case
$$
{\mathsf{n}}>2-2{\mathsf{g}}
$$
the action is free. In general, the action is only semi-free,
i.e.~the isotropy group of a point in~$\mathcal{P}$ is always finite
but it might be nontrivial.
(Example: ${\mathsf{n}}=0$, $\Sigma=M=S^2$, $v(z)=z^2$; here the
isotropy group contains the involution $g(z)=-z$, which fixes $j$
and satisfies $g^*v=v\circ g=v$.)
In this case the quotient spaces $\mathcal{B}$ and $\mathcal{Q}$ cannot
be manifolds and hence do not qualify as universal unfoldings.
However, we shall prove that even in this case every point in $\mathcal{P}$
admits a holomorphic local slice for the $\mathcal{G}$-action and
that these slices can be used to construct universal unfoldings.
\end{remark}\rm
\begin{PARA}\rm\label{TP}
The space $\mathcal{P}$ is an infinite dimensional Fr\'echet manifold.
Its tangent space at a point $p=(s_*,j,v)\in\mathcal{P}$
is the space $T_p\mathcal{P}$ of all tuples $\hat p=(\hat s_*,{\hat{\jmath}},\hat v)$
with $\hat s_{\mathsf{i}}\in T_{s_{\mathsf{i}}}\Sigma$, ${\hat{\jmath}}\in T_j\mathcal{J}(\Sigma)$,
$\hat v\in\Omega^0(\Sigma,v^*TM)$ that satisfy
\begin{equation}\label{eq:TP}
D_v\hat v + \frac12 J(v)dv\circ{\hat{\jmath}}=0.
\end{equation}
The Lie algebra of $\mathcal{G}$ is $\Lie(\mathcal{G})=\Vect(\Sigma)$ and
its (contravariant) infinitesimal action at $p\in\mathcal{P}$ is the operator
$\mathcal{L}_p:\Vect(\Sigma)\to T_p\mathcal{P}$
defined by
\begin{equation}\label{eq:cL}
\mathcal{L}_p\xi:= \left.\frac{d}{dt}g_t^*p\right|_{t=0}
\end{equation}
where $p=(s_*,j,v)\in\mathcal{P}$ and ${\mathbb{R}}\to\mathcal{G}:t\mapsto g_t$ satisfies
\begin{equation}\label{eq:gxi}
g_0=\id,\qquad \left.\frac{d}{dt}g_t\right|_{t=0}=\xi.
\end{equation}
(The right hand side of~(\ref{eq:cL}) is independent of the choice of $g_t$
satisfying~(\ref{eq:gxi}).)
Since $2j\bar{\partial}_j\xi=\mathcal{L}_\xi j\in T_j\mathcal{J}(\Sigma)$ is the
Lie derivative of $j$ in the direction $\xi$, equation~(\ref{eq:cL}) may be written
\begin{equation}\label{eq:infinitesimal}
\mathcal{L}_p\xi=(-\xi(s_1),\dots,-\xi(s_{\mathsf{n}}),2j\bar{\partial}_j\xi,dv\cdot\xi),\qquad
p=(s_1,\dots,s_{\mathsf{n}},j,v).
\end{equation}
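(For instance, the sign in the first component
of~(\ref{eq:infinitesimal}) can be checked directly: differentiating
the identity $g_t(g_t^{-1}(s_{\mathsf{i}}))=s_{\mathsf{i}}$ at $t=0$
and using $g_0=\id$ gives
$$
\left.\frac{d}{dt}g_t^{-1}(s_{\mathsf{i}})\right|_{t=0}=-\xi(s_{\mathsf{i}}).
$$
The other two components follow similarly from the definitions
of $g^*j$ and $g^*v$.)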
The image of $\mathcal{L}_p$ is the tangent space $T_p\mathcal{G}^*p$
to the $\mathcal{G}$-orbit of $p$. The space $T_p\mathcal{P}$ carries
a natural complex structure
$\mathcal{I}(p):T_p\mathcal{P}\to T_p\mathcal{P}$ given by
\begin{equation}\label{eq:complex}
\mathcal{I}(p)(\hat s_1,\dots,\hat s_{\mathsf{n}},{\hat{\jmath}},\hat v):=
(j(s_1)\hat s_1,\dots,j(s_{\mathsf{n}})\hat s_{\mathsf{n}},j{\hat{\jmath}},J(v)\hat v)
\end{equation}
for $p=(s_1,\dots,s_{\mathsf{n}},j,v)\in\mathcal{P}$. The tangent space $T_p\mathcal{P}$
is invariant under $\mathcal{I}(p)$ because the differential $dv$
and the operator $D_v$ are complex linear.
The $\mathcal{G}$-action preserves this complex structure
and the formula
$$
\mathcal{L}_p(j\xi) = \mathcal{I}(p)\mathcal{L}_p\xi,\qquad p=(s_*,j,v)\in\mathcal{P},
$$
shows that $T_p\mathcal{G}^*p$ is a complex subspace of $T_p\mathcal{P}$.
In other words, the orbits of $\mathcal{G}$ are
complex submanifolds of $\mathcal{P}$ and the complex structure
descends to the quotient $\mathcal{P}/\mathcal{G}$.
The space $\mathcal{P}$ (without marked points) is the zero set
of the section $(j,v)\mapsto\bar{\partial}_{j,J}(v)$ of an infinite
dimensional vector bundle. The intrinsic differential of this section
at a zero $(j,v)$ is the operator $\mathcal{D}_{j,v}$ in~\ref{regular} and this
operator is surjective by assumption. Condition~(\ref{eq:TP}) asserts
that the pair $({\hat{\jmath}},\hat v)$ belongs to the kernel of $\mathcal{D}_{j,v}$.
Choosing a suitable Sobolev completion $\mathcal{P}^s$ of $\mathcal{P}$
(see the proof of Theorem~\ref{thm:slice} below)
we can deduce that $\mathcal{P}^s$ is a smooth Hilbert manifold whose tangent space
is given by~(\ref{eq:TP}). The action of $\mathcal{G}$ on this Hilbert manifold
is not smooth; on any Sobolev completion its differential takes
values in another Sobolev completion with one derivative less.
However, in the Fr\'echet category, where $B$ is a finite dimensional
smooth manifold, the notion of a smooth map $\iota:B\to\mathcal{P}$ and its differential
$d\iota(b):T_bB\to T_{\iota(b)}\mathcal{P}$ have well defined meanings
via evaluation maps.
\end{PARA}\rm
\begin{lemma}\label{le:hol-unfolding}
Let $A$ be a complex manifold (with complex structure $\sqrt{-1}$),
$$
A\to\mathcal{P}:a\mapsto p(a)=(r_1(a),\dots,r_{\mathsf{n}}(a),j(a),v(a))
$$
be a smooth map and $\eta:TA\to\Vect(\Sigma)$ be
a $1$-form on $A$ with values in the space
of vector fields on $\Sigma$ such that
\begin{equation}\label{eq:jeta1}
\eta(a,\sqrt{-1}\hat{a})=-j(a)\eta(a,\hat{a})
\end{equation}
for all $(a,\hat{a})\in TA$.
Define an almost complex structure $J_P$ on $P:=A\times\Sigma$,
sections $R_1,\dots,R_{\mathsf{n}}\subset P$, and a map $H_A:P\to M$ by
\begin{equation}\label{eq:JP}
J_P(a,z)(\hat a,\hat z)
:= \left(\sqrt{-1}\hat a,j(a)(z)\hat z+\eta(a,\hat a)(z)\right),
\end{equation}
\begin{equation}\label{eq:RH}
R_{\mathsf{i}} := \left\{(a,r_{\mathsf{i}}(a))\,|\,a\in A\right\},\qquad
H_A(a,z) := v(a)(z).
\end{equation}
Then the following are equivalent.
\begin{description}
\item[(i)]
The tuple $(\pi_A,R_*,H_A)$ is a (holomorphic) family of maps,
i.e.~$J_P$ is integrable, each $R_{\mathsf{i}}$ is a complex submanifold of $P$,
and $H_A:P\to M$ is holomorphic.
\item[(ii)]
$p$ and $\eta$ satisfy the differential equation
\begin{equation}\label{eq:vortex}
dp(a)\hat a + \mathcal{I}(p(a))dp(a)\sqrt{-1}\hat a
- \mathcal{L}_{p(a)}\eta(a,\sqrt{-1}\hat a) = 0
\end{equation}
for every $a\in A$ and every $\hat a\in T_aA$.
\end{description}
\end{lemma}
\begin{proof}
We prove that~(i) implies~(ii). If the almost
complex structure $J_P$ is integrable then,
by~\cite[Corrigendum, Lemma~A]{RS}, we have
\begin{equation}\label{eq:jeta2}
dj(a)\hat a + j(a)dj(a)\sqrt{-1}\hat a
- \mathcal{L}_{\eta(a,\sqrt{-1}\hat a)}j(a)=0.
\end{equation}
Moreover, for ${\mathsf{i}}=1,\dots,{\mathsf{n}}$ the set $R_{\mathsf{i}}$
is a complex submanifold of $A\times\Sigma$
if and only if
$$
dr_{\mathsf{i}}(a)\hat a + j(a)dr_{\mathsf{i}}(a)\sqrt{-1}\hat a
+ \eta(a,\sqrt{-1}\hat a)(r_{\mathsf{i}}(a)) = 0
$$
and $H_A:A\times\Sigma\to M$
is holomorphic if and only if
$$
(dv(a)\hat a)(z)
+ J(v(a)(z))(dv(a)\sqrt{-1}\hat a)(z)
- d(v(a))(z)\eta(a,\sqrt{-1}\hat a)(z)
= 0.
$$
In the last formula $(dv(a)\hat{a})(z)$ denotes the derivative
of $v(a)(z)$ with respect to $a$ and $d(v(a))(z)\hat{z}$
denotes the derivative of $v(a)(z)$ with respect to $z$.
This proves that~(i) implies~(ii).
Conversely, assume~(ii) and, without loss of generality,
that $A$ is an open set in ${\mathbb{C}}^{\mathsf{a}}$.
Fix two vectors $\hat a,\hat b\in {\mathbb{C}}^{\mathsf{a}}$ and, for $a\in A$,
define $\zeta(a)\in\Vect(\Sigma)$ by
\begin{eqnarray*}
\zeta(a)
&:=&
{\partial}_1\eta(a,\hat a)\sqrt{-1}\hat b - j(a){\partial}_1\eta(a,\hat a)\hat b \\
&&
- {\partial}_1\eta(a,\hat b)\sqrt{-1}\hat a + j(a){\partial}_1\eta(a,\hat b)\hat a
+ [\eta(a,\hat a),\eta(a,\hat b)].
\end{eqnarray*}
Then
$$
\mathcal{L}_{\zeta(a)}j(a)=0,\qquad
\zeta(a)(r_i(a))=0,\qquad
\mathcal{L}_{\zeta(a)}v(a) = 0
$$
for $a\in A$ and $i=1,\dots,{\mathsf{n}}$. Here the first
equation follows from~\cite[Corrigendum, Lemma~B]{RS}
and the other two equations follow from similar,
though somewhat lengthy, calculations.
Now it follows from the stability condition in the definition
of $\mathcal{P}$ that $\zeta(a)=0$ for every $a\in A$ and hence,
by~\cite[Corrigendum, Lemma~A]{RS} the almost complex
structure $J_P$ is integrable.
This proves the lemma.
\end{proof}
\begin{PARA}\rm\label{slice}
Let $p_0:=(s_{0,*},j_0,v_0)\in\mathcal{P}$, $B$ be a complex manifold
with base point $b_0\in B$, and $\iota:B\to\mathcal{P}$ be a smooth map
such that $\iota(b_0)=p_0$.
The map $\iota$ is called \jdef{holomorphic} if its
differential $d\iota(b):T_bB\to T_{\iota(b)}\mathcal{P}$
is complex linear for every $b\in B$.
The map $\iota$ is called a \jdef{slice} at $b_0$ if
for every smooth map $p:(A,a_0)\to(\mathcal{P},p_0)$ there is a
neighborhood $A_0$ of $a_0$ in $A$
and unique smooth maps $\Phi:(A_0,a_0)\to(\mathcal{G},\id)$ and
$\phi:(A_0,a_0)\to(B,b_0)$ such that
$$
p(a)=\Phi(a)^*\iota(\phi(a))
$$
for $a\in A_0$. The map $\iota$ is called an \jdef{infinitesimal slice}
at $b_0$ if
\begin{equation}\label{eq:slice}
\mathrm{im}\,d\iota(b_0)\oplus T_{p_0}\mathcal{G}^*p_0
=T_{p_0}\mathcal{P},\qquad \ker d\iota(b_0) = 0.
\end{equation}
Write $\iota(b)=:(\sigma_1(b),\dots,\sigma_{\mathsf{n}}(b),j(b),v(b))$.
Then~(\ref{eq:slice}) can be expressed as follows.
\begin{description}
\item[($\dagger$)]
If $\hat b\in T_{b_0}B$ and $\hat u\in\Vect(\Sigma)$ satisfy
\begin{equation}\label{eq:slice1}
\left.
\begin{aligned}
d\sigma_{\mathsf{i}}(b_0)\hat b-\hat u(s_{0,{\mathsf{i}}}) &=0\\
dj(b_0)\hat b+2j_0\bar{\partial}_{j_0}\hat u&=0\\
dv(b_0)\hat b+dv_0\cdot\hat u &=0
\end{aligned}
\right\}
\qquad\implies\qquad
\hat b=0,\;\;\hat u=0.
\end{equation}
\item[($\ddagger$)]
If $\hat s_{\mathsf{i}}\in T_{s_{0,{\mathsf{i}}}}\Sigma$, ${\hat{\jmath}}\in T_{j_0}\mathcal{J}(\Sigma)$,
and $\hat v\in\Omega^0(\Sigma,v_0^*TM)$ satisfy~(\ref{eq:TP})
then there exists a pair $(\hat b,\hat u)\in T_{b_0}B\times\Vect(\Sigma)$
such that
\begin{equation}\label{eq:slice2}
\begin{aligned}
d\sigma_{\mathsf{i}}(b_0)\hat b-\hat u(s_{0,{\mathsf{i}}})&=\hat s_{\mathsf{i}},\\
dj(b_0)\hat b + 2j_0\bar{\partial}_{j_0}\hat u&={\hat{\jmath}},\\
dv(b_0)\hat b + dv_0\cdot\hat u&=\hat v.
\end{aligned}
\end{equation}
\end{description}
\end{PARA}\rm
\begin{theorem}[Slice Theorem]\label{thm:slice}
{\bf (i)} A smooth infinitesimal slice is a slice.
\smallskip\noindent{\bf (ii)}
If $\iota:B\to\mathcal{P}$ is an infinitesimal slice at $b_0\in B$
then it is an infinitesimal slice at $b$
for $b$ sufficiently near $b_0$.
\smallskip\noindent{\bf (iii)}
Every point in $\mathcal{P}$ admits a holomorphic infinitesimal slice
$\iota:B\to\mathcal{P}$ of complex dimension
$
\dim_{\mathbb{C}} B= ({\mathsf{m}}-3)(1-{\mathsf{g}}) + \inner{c_1}{{\mathsf{d}}} + {\mathsf{n}}.
$
\end{theorem}
\begin{proof}
Choose an integer $s\ge3$ and let $\mathcal{G}^s$ denote the Sobolev completion
of $\mathcal{G}$ in the $H^s$ topology and $\mathcal{P}^s$ denote the Sobolev completion
of $\mathcal{P}$ in the $H^{s-1}$ topology on $j$ and the $H^s$ topology on $v$.
Then
$$
\mathcal{P}^s\subset\Sigma^{\mathsf{n}}\times\mathcal{J}^{s-1}(\Sigma)\times H^s(\Sigma,M)
$$
is a smooth Hilbert submanifold.
Now let $\iota:(B,b_0)\to(\mathcal{P},p_0)$ be a smooth infinitesimal slice.
\medskip\noindent{\bf Claim 1: } {\em The map
$$
B\times\mathcal{G}^s\to\mathcal{P}^s:(b,g)\mapsto
\mathcal{F}^s(b,g) := g^*\iota(b)
$$
is a $C^{s-2}$ map between Hilbert manifolds.
The tangent space of $\mathcal{G}^s$ at $\phi=\id$ is the space
$H^s(\Sigma,T\Sigma)$ of vector fields of class $H^s$ and
the differential of $\mathcal{F}^s$ at the pair $(b,\id)$ is
$$
d\mathcal{F}^s(b,\id)(\hat b,\xi)
= d\iota(b)\hat b + \mathcal{L}_{\iota(b)}\xi
$$
for $\hat b\in T_bB$ and $\xi\in H^s(\Sigma,T\Sigma)$.
(See~(\ref{eq:infinitesimal}) for the definition of $\mathcal{L}_{\iota(b)}$.)}
\medskip\noindent
Denote the value of $\iota(b)$ at a point $x\in\Sigma$ by
$$
\iota(b)(x)= (\sigma_{1,b},\ldots,\sigma_{{\mathsf{n}},b},j_b(x),v_b(x)).
$$
The maps $\sigma_i:B\to\Sigma$, $j:B\times\Sigma\to\End(T\Sigma)$,
and $v:B\times\Sigma\to M$ are all smooth by hypothesis.
The map $\mathcal{G}^s\to\mathcal{G}^s:g\mapsto g^{-1}$ is smooth.
Hence the map $B\times\mathcal{G}^s\to\Sigma:(b,g)\mapsto g^{-1}(\sigma_{i,b})$ is
as smooth as the evaluation map $\mathcal{G}^s\times\Sigma\to\Sigma$,
i.e. it is $C^{s-2}$ by the Sobolev embedding theorem. Moreover, the map $g\mapsto dg$
is smooth as a map from $H^s$ to $H^{s-1}$.
Since $(g^*j_b)(x)=dg(x)^{-1}j_b(g(x))dg(x)$ this shows that the map
$$
B\times\mathcal{G}^s\to\mathcal{J}^{s-1}(\Sigma):(b,g)\mapsto g^*j_b
$$
is smooth. The map $B\times\mathcal{G}^s\to H^s(\Sigma,M):(b,g)\mapsto v_b\circ g$
is smooth because the map $v:B\times\Sigma\to M$ is smooth.
This proves claim~1.
\medskip\noindent{\bf Claim 2:} {\it The operator $d\mathcal{F}^s(b,\id)$ is bijective
if and only if $\iota$ is an infinitesimal slice at $b$.}
\medskip\noindent
To see this, assume first that $\iota$ is an infinitesimal slice at $b$.
Then, by elliptic regularity, every element in the
kernel of $d\mathcal{F}^s(b,\id)$ is smooth and hence
the operator is injective by~($\dagger$).
For surjectivity we observe that the image of $d\mathcal{F}^s(b,\id)$
is closed by the elliptic estimate, that the smooth elements
are dense in $T_{\iota(b)}\mathcal{P}^s$, and that the smooth elements
of $T_{\iota(b)}\mathcal{P}^s$ are contained in the image of $d\mathcal{F}^s(b,\id)$
by~($\ddagger$). Conversely, if $d\mathcal{F}^s(b,\id)$ is bijective,
it follows from elliptic regularity that $\iota$ satisfies the
infinitesimal slice conditions~($\dagger$) and~($\ddagger$)
at $b$. This proves claim~2.
\smallbreak
Shrinking $B$ if necessary, we may assume that
$d\mathcal{F}^s(b,\id)$ is bijective for every $b\in B$. By Claim~2
this implies that $\iota$ is an infinitesimal slice at every point
$b\in B$ and $d\mathcal{F}^{s'}(b,\id)$ is bijective for every $b$ and every~$s'$.
Hence, by equivariance, $d\mathcal{F}^{s'}(b,g)$ is bijective for
every integer $s'\ge 2$, every $b\in B$, and every $g\in\mathcal{G}^{s'}$.
In particular, we have proved~(ii).
\smallbreak
Now fix an integer $s_0\ge 3$. Then it follows from the inverse
function theorem that $\mathcal{F}^{s_0}$ maps an open $H^{s_0}$
neighborhood of $(b_0,\id)$ in $B\times\mathcal{G}^{s_0}$ by a
$C^{s_0-2}$-diffeomorphism onto an open neighborhood
of $p_0$ in $\mathcal{P}^{s_0}$. Given a smooth map $p:(A,a_0)\to(\mathcal{P},p_0)$
choose $A_0\subset A$ to be the preimage of this neighborhood
of $p_0$ and define the $C^{s_0-2}$ map
$$
A_0\to B\times\mathcal{G}^{s_0}:a\mapsto(\phi(a),\Phi(a))
$$
by
$$
(\phi(a),\Phi(a)):=(\mathcal{F}^{s_0})^{-1}(p(a)).
$$
Then
$$
p(a) = \Phi(a)^*\iota(\phi(a))
$$
for every $a\in A_0$. Since the complex structures on $\Sigma$
associated to $\iota\circ\phi(a)$ and $p(a)$ are smooth it follows
from elliptic regularity that $\Phi(a)\in\mathcal{G}$ is smooth for every
$a\in A_0$. Thus $\Phi(a)\in\mathcal{G}^s$ and $\mathcal{F}^s(\phi(a),\Phi(a))=p(a)$
for every $a\in A_0$ and every $s$.
Since the differential $d\mathcal{F}^s(\phi(a),\Phi(a))$ is bijective
for every $a\in A_0$ and every integer $s\ge 2$, it follows that
the map $a\mapsto(\phi(a),\Phi(a))$ is a $C^{s-2}$ map
from $A_0$ to $B\times\mathcal{G}^s$ for every integer $s\ge 3$.
Hence this map is smooth. This proves~(i).
\smallbreak
We prove~(iii).
Fix an element $(s_{0,*},j_0,v_0)\in\mathcal{P}$.
Let $\mathrm{G}\subset\mathcal{G}$ denote the identity component of the
isotropy subgroup of the tuple $(s_{0,*},j_0)$. Thus
\begin{equation}\label{eq:G}
\mathrm{G} := \left\{\begin{array}{ll}
\{\id\},&\mbox{if }{\mathsf{n}}>2-2{\mathsf{g}}, \\
{\mathbb{T}}^2,&\mbox{if }{\mathsf{g}}=1,\,{\mathsf{n}}=0, \\
{\mathbb{C}}^*,&\mbox{if }{\mathsf{g}}=0,\,{\mathsf{n}}=2, \\
{\mathbb{C}}^*\ltimes{\mathbb{C}},&\mbox{if }{\mathsf{g}}=0,\,{\mathsf{n}}=1, \\
\mathrm{PSL}(2,{\mathbb{C}}),&\mbox{if }{\mathsf{g}}=0,\,{\mathsf{n}}=0.
\end{array}\right.
\end{equation}
First we choose a $\mathrm{G}$-invariant holomorphic map
$$
\iota_0:A\to(\Sigma^{\mathsf{n}}\setminus\Delta)\times\mathcal{J}(\Sigma),\qquad
\iota_0(a)=(\sigma_1(a),\dots,\sigma_{\mathsf{n}}(a),j(a)),
$$
defined on an open neighborhood $A\subset{\mathbb{C}}^{3{\mathsf{g}}-3+{\mathsf{n}}+\dim_{\mathbb{C}}\mathrm{G}}$
of a point $a_0$, that is transverse to the $\mathcal{G}$-action and satisfies
$$
\iota_0(a_0)=(s_{0,*},j_0).
$$
We do this as follows. In the case ${\mathsf{n}}>2-2{\mathsf{g}}$ we choose a
slice in Teichm\"uller space $\mathcal{T}_{{\mathsf{g}},{\mathsf{n}}}$ as in the proof
of~\cite[Theorem~8.9]{RS}. There are two cases with ${\mathsf{n}}\le 2-2{\mathsf{g}}$.
If ${\mathsf{g}}=1$ (so $\Sigma\cong{\mathbb{T}}^2$) and ${\mathsf{n}}=0$
we take $A={\mathbb{H}}$ to be the upper half plane and define
$\iota_0:A\to\mathcal{J}(\Sigma)$ as the standard map to the complex
structures on the torus (see~\cite[Section~7]{RS}).
If ${\mathsf{g}}=0$ (so $\Sigma\cong S^2$) and ${\mathsf{n}}\le 2$
we take $A$ to be a point. Note that
\begin{equation}\label{eq:dimAG}
\dim_{\mathbb{C}} A - \dim_{\mathbb{C}}\mathrm{G} = 3{\mathsf{g}}-3+{\mathsf{n}}
\end{equation}
in all cases and that $\mathrm{G}$ is the isotropy group of
each element of the slice, i.e. for $g\in\mathcal{G}$ and $a\in A$
we have $g^*\iota_0(a)=\iota_0(a)$ if and only if $g\in\mathrm{G}$.
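(For instance, equation~(\ref{eq:dimAG}) can be checked case by case:
if ${\mathsf{g}}=0$ and ${\mathsf{n}}=0$ then $A$ is a point and
$\dim_{\mathbb{C}}\mathrm{G}=\dim_{\mathbb{C}}\mathrm{PSL}(2,{\mathbb{C}})=3$,
so both sides equal $-3$; if ${\mathsf{g}}=1$ and ${\mathsf{n}}=0$ then
$\dim_{\mathbb{C}} A=\dim_{\mathbb{C}}\mathrm{G}=1$ and both sides vanish;
if ${\mathsf{n}}>2-2{\mathsf{g}}$ then $\mathrm{G}$ is trivial and $A$
has dimension $3{\mathsf{g}}-3+{\mathsf{n}}$.)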
The map $\iota_0$ gives rise to an infinite dimensional
vector bundle
$$
\mathcal{E}\to A\times C^{\infty}(\Sigma,M)
$$
with fibers
$$
\mathcal{E}_{a,v} := \Omega^{0,1}_{j(a)}(\Sigma,v^*TM).
$$
The Cauchy--Riemann operator defines a section
\begin{equation}\label{eq:dbarjJ}
A\times C^{\infty}(\Sigma,M)\to\mathcal{E}:(a,v)\mapsto\bar{\partial}_{j(a),J}(v)
\end{equation}
whose intrinsic derivative at a point $(a,v)$ is the operator
$$
\mathcal{D}_{a,v}:T_aA\times\Omega^0(\Sigma,v^*TM)
\to\Omega^{0,1}_{j(a)}(\Sigma,v^*TM)
$$
given by
\begin{equation}\label{eq:Dav}
\mathcal{D}_{a,v}(\hat a,\hat v)
:= \mathcal{D}_{j(a),v}(dj(a)\hat a,\hat v)
= D_v\hat v + \frac12J(v)dv\cdot dj(a)\hat a.
\end{equation}
Since the operator $\mathcal{D}_{j_0,v_0}$ is surjective and $\iota_0$
is an infinitesimal slice, it follows that the section~(\ref{eq:dbarjJ})
is transverse to the zero section at $(a_0,v_0)$. Hence it follows
from the implicit function theorem in suitable Sobolev completions
(see e.g.~\cite[Chapter~3]{MS}) that a neighborhood of $(a_0,v_0)$
in the zero set of~(\ref{eq:dbarjJ}) is a smooth submanifold of
$A\times C^{\infty}(\Sigma,M)$. It is denoted by
$$
Z := \left\{(a,v)\in A\times C^{\infty}(\Sigma,M)\,\Big|\,
\bar{\partial}_{j(a),J}(v)=0,\,\sup_{z\in\Sigma}d_M(v(z),v_0(z))<{\varepsilon}\right\}
$$
for a sufficiently small constant ${\varepsilon}>0$.
The group $\mathrm{G}$ acts on $Z$.
Since
$$
\mathrm{index}_{\mathbb{R}}(D_v) = {\mathsf{m}}(2-2{\mathsf{g}}) + 2\inner{c_1}{{\mathsf{d}}}
$$
by the Riemann--Roch theorem, it follows from~(\ref{eq:dimAG})
that
$$
\dim_{\mathbb{R}} Z - \dim_{\mathbb{R}}\mathrm{G}
= ({\mathsf{m}}-3)(2-2{\mathsf{g}}) + 2\inner{c_1}{{\mathsf{d}}} + 2{\mathsf{n}}.
$$
Since $\iota_0$ is holomorphic and $J$ is integrable,
the operator~(\ref{eq:Dav}) is complex linear for all
$(a,v)\in Z$. This shows that $Z$ is a finite dimensional
submanifold of ${A\times C^{\infty}(\Sigma,M)}$ whose tangent
space at each point $(a,v)\in Z$ is a complex subspace
of $T_aA\times\Omega^0(\Sigma,v^*TM)$.
The almost complex structure on any such submanifold
is integrable, because $C^{\infty}(\Sigma,M)$ is a complex
manifold and the graph of a smooth function between
complex vector spaces is a complex submanifold
if and only if the function is holomorphic.
With this understood we obtain the desired
infinitesimal slice from a holomorphic slice $B\subset Z$
for the $\mathrm{G}$ action. This proves the theorem.
\end{proof}
\begin{remark}\rm
In the proof of part~(iii) of Theorem~\ref{thm:slice}
one can reduce the case ${\mathsf{n}}\le2-2{\mathsf{g}}$ with $\mathrm{G}\ne\{\id\}$
to the case ${\mathsf{n}}>2-2{\mathsf{g}}$ with $\mathrm{G}=\{\id\}$ by a similar
argument as we used in the proof of Theorem~\ref{thm:exists}.
\end{remark}\rm
\begin{PARA}\rm\label{slice-unfolding}
Let $(s_{0,*},j_0,v_0)\in\mathcal{P}$,
$B$ be a manifold with base point $b_0\in B$, and
$$
B\to\mathcal{P}:b\mapsto\iota(b)
=\left(\sigma_1(b),\dots,\sigma_{\mathsf{n}}(b),j(b),v(b)\right)
$$
be a holomorphic map such that
$$
j(b_0)=j_0,\qquad v(b_0)=v_0,\qquad
\sigma_{\mathsf{i}}(b_0)=s_{0,{\mathsf{i}}},\qquad {\mathsf{i}}=1,\dots,{\mathsf{n}}.
$$
Define the unfolding $(\pi_\iota:Q_\iota\to B,S_{\iota,*},H_\iota,b_0)$ by
$$
Q_\iota := B\times\Sigma,\qquad
J_\iota(b,z) := \left(\begin{array}{cc}
\sqrt{-1} & 0 \\ 0 & j(b)(z)\end{array}\right)
$$
where $\sqrt{-1}$ denotes the complex structure on $B$ and
$$
H_\iota(b,z):=v(b)(z),\qquad
S_{\iota,{\mathsf{i}}}:=\left\{(b,\sigma_{\mathsf{i}}(b))\,|\,b\in B\right\},\qquad
{\mathsf{i}}=1,\dots,{\mathsf{n}}.
$$
\end{PARA}\rm
\begin{lemma}\label{le:slice}
Let $(\pi_\iota,S_{\iota,*},H_\iota,b_0)$ be the unfolding
associated to a holomorphic map $\iota:B\to\mathcal{P}$
as in~\ref{slice-unfolding}. Then the following are equivalent.
\begin{description}
\item[(i)]
The unfolding $(\pi_\iota,S_{\iota,*},H_\iota,b_0)$
is infinitesimally universal.
\item[(ii)]
The map $\iota$ is an infinitesimal slice at $b_0$.
\end{description}
\end{lemma}
\begin{proof}
Let $u_0:(\Sigma,j_0)\to Q_\iota$ be the holomorphic embedding
$$
u_0(z):=(b_0,z)
$$
so that $H_\iota\circ u_0=v_0$.
Then the operator $D_{u_0}$ has domain
$$
\mathcal{X}_u:=\left\{(\hat u,\hat b)
\in\Omega^0(\Sigma,T\Sigma)\times T_{b_0}B\,|\,
\hat u(s_{0,{\mathsf{i}}})=d\sigma_{\mathsf{i}}(b_0)\hat b\right\},
$$
target space $\mathcal{Y}_u:=\Omega^{0,1}_{j_0}(\Sigma,T\Sigma)$,
and is given by
$$
D_{u_0}(\hat u,\hat b)
= {\overline\partial}_{j_0}\hat u -\frac12 j_0dj(b_0)\hat b.
$$
The linearized operator in~\ref{Dv} is
$$
D_{v_0}:\mathcal{X}_v\to\mathcal{Y}_v,\qquad
\mathcal{X}_v:=\Omega^0(\Sigma,v_0^*TM),\qquad
\mathcal{Y}_v:=\Omega^{0,1}(\Sigma,v_0^*TM).
$$
The homomorphisms
\begin{equation}\label{eq:kercoker}
\ker D_{u_0}\to\ker D_{v_0},\qquad
\coker D_{u_0}\to\coker D_{v_0}
\end{equation}
are induced by the maps
$$
\mathcal{X}_u\to\mathcal{X}_v:(\hat u,\hat b)
\mapsto dv_0\cdot\hat u + dv(b_0)\hat b,\qquad
\mathcal{Y}_u\to\mathcal{Y}_v:\eta\mapsto dv_0\cdot \eta.
$$
We must prove that the maps in~(\ref{eq:kercoker}) are
isomorphisms if and only if~(ii) holds.
Note that the second
map in~(\ref{eq:kercoker}) is necessarily surjective
because $(\Sigma,s_{0,*},j_0,v_0)$ is a regular stable map.
\smallbreak
We prove that~(ii) implies~(i).
We prove that the first map in~(\ref{eq:kercoker})
is bijective. Let $(\hat u,\hat b)\in\ker D_{u_0}$ and
assume that its image in $\ker D_{v_0}$ vanishes. Then
$$
{\overline\partial}_{j_0}\hat u - \frac12 j_0dj(b_0)\hat b=0,\qquad
dv_0\cdot \hat u + dv(b_0)\hat b = 0.
$$
Since $(\hat u,\hat b)\in\mathcal{X}_u$, we have $d\sigma_{\mathsf{i}}(b_0)\hat b=\hat u(s_{0,{\mathsf{i}}})$
for ${\mathsf{i}}=1,\dots,{\mathsf{n}}$ and hence, by~(ii) and~($\dagger$) in~\ref{slice},
$\hat b=0$ and $\hat u=0$. Thus we have proved that the
homomorphism $\ker D_{u_0}\to \ker D_{v_0}$ is injective.
Next we prove that this map is surjective.
Let $\hat v\in\Omega^0(\Sigma,v_0^*TM)$ be a vector
field along $v_0$ such that ${D_{v_0}\hat v=0}$.
Then the tuple $(\hat s_1,\dots,\hat s_{\mathsf{n}},{\hat{\jmath}},\hat v)$
with $\hat s_{\mathsf{i}}=0$ and ${\hat{\jmath}}=0$ satisfies~(\ref{eq:TP}).
Hence, by~(ii) and~($\ddagger$) in~\ref{slice},
there is a pair $(\hat u,\hat b)$ such that
$$
d\sigma_{\mathsf{i}}(b_0)\hat b - \hat u(s_{0,{\mathsf{i}}}) =0,\qquad
dj(b_0)\hat b + 2j_0{\overline\partial}_{j_0}\hat u = 0,\qquad
dv(b_0)\hat b + dv_0\cdot\hat u = \hat v.
$$
This implies
$$
{\overline\partial}_{j_0}\hat u - \frac{1}{2}j_0dj(b_0)\hat b =0,\qquad
\hat v = dv(b_0)\hat b + dv_0\cdot\hat u
$$
and so $\hat v$ belongs to the image of the map
$\ker D_{u_0}\to \ker D_{v_0}$. This shows that
the first map in~(\ref{eq:kercoker}) is an isomorphism.
Next we prove that the second map in~(\ref{eq:kercoker})
is bijective. Let $\eta\in\mathcal{Y}_u$ such that
$dv_0\cdot\eta\in\mathrm{im}\,D_{v_0}$ and choose
$\hat v\in\Omega^0(\Sigma,v_0^*TM)$ such that
$$
dv_0\cdot\eta + D_{v_0}\hat v = 0.
$$
Then $\hat v$ and ${\hat{\jmath}}:=-2j_0\eta$ satisfy~(\ref{eq:TP}).
Hence, by~(ii) and~($\ddagger$) in~\ref{slice},
there is a pair $(\hat u,\hat b)$ such that
$$
d\sigma_{\mathsf{i}}(b_0)\hat b - \hat u(s_{0,{\mathsf{i}}}) =0,\qquad
dj(b_0)\hat b + 2j_0{\overline\partial}_{j_0}\hat u = {\hat{\jmath}},\qquad
dv(b_0)\hat b + dv_0\cdot\hat u = \hat v.
$$
This implies
$$
(\hat u,\hat b)\in\mathcal{X}_u,\qquad
D_{u_0}(\hat u,\hat b)=-\frac12j_0{\hat{\jmath}}=-\eta,
$$
and hence $\eta\in\mathrm{im}\,D_{u_0}$.
This shows that the second map in~(\ref{eq:kercoker})
is injective and, since we have already proved surjectivity,
it is an isomorphism. Thus we have proved that~(ii) implies~(i).
\smallbreak
We prove that~(i) implies~(ii).
Assume that the maps in~(\ref{eq:kercoker})
are bijective. If $\hat u$ and $\hat b$
satisfy~(\ref{eq:slice1}) then $(\hat u,\hat b)\in\mathcal{X}_u$,
$D_{u_0}(\hat u,\hat b)=0$, and the image of $(\hat u,\hat b)$
under the homomorphism $\mathcal{X}_u\to\mathcal{X}_v$ vanishes.
Since the first map in~(\ref{eq:kercoker}) is injective,
this implies $\hat u=0$ and $\hat b=0$.
Now suppose that ${\hat{\jmath}}$ and $\hat v$
satisfy~(\ref{eq:TP}) with $v=v_0$, i.e.
$$
0=D_{v_0}\hat v + \frac12J(v_0)dv_0\circ{\hat{\jmath}}
= D_{v_0}\hat v + dv_0\circ\eta,\qquad \eta := \frac12j_0{\hat{\jmath}}.
$$
Hence
$
dv_0\circ \eta=-D_{v_0}\hat v\in\mathrm{im}\,D_{v_0}.
$
Since the second map in~(\ref{eq:kercoker}) is injective
this implies $\eta\in\mathrm{im}\,D_{u_0}$. Choose a pair
$(\hat u,\hat b)\in\mathcal{X}_u$ such that $D_{u_0}(\hat u,\hat b)=-\eta$.
Then $\hat u$ and $\hat b$ satisfy
$$
d\sigma_{\mathsf{i}}(b_0)\hat b-\hat u(s_{0,{\mathsf{i}}})=0,\qquad
{\hat{\jmath}} = -2j_0\eta = 2j_0D_{u_0}(\hat u,\hat b)
= 2j_0\bar{\partial}_{j_0}\hat u + dj(b_0)\hat b.
$$
Hence
$$
D_{v_0}\hat v = -dv_0\cdot \eta
= dv_0\cdot D_{u_0}(\hat u,\hat b)
= D_{v_0}\left(dv_0\cdot\hat u + dv(b_0)\hat b\right).
$$
The last equation follows from the fact that
the diagram~(\ref{eq:XY}) in Definition~\ref{def:infuniv}
commutes, reading $H_\iota(p,z)=v(b)(z)$ for $H$.
Since the first map in~(\ref{eq:kercoker}) is surjective, there
exists a pair $(\hat u_0,\hat b_0)\in\ker D_{u_0}$ such that
$$
\hat v = dv_0\cdot(\hat u+\hat u_0) + dv(b_0)(\hat b+\hat b_0).
$$
Hence the pair $(\hat u+\hat u_0,\hat b+\hat b_0)$
satisfies~(\ref{eq:slice2}) with $\hat s_{\mathsf{i}}=0$.
In the case $\hat s_{\mathsf{i}}\ne 0$ choose first a vector field
$\hat u_0\in\Vect(\Sigma)$ such that $-\hat u_0(s_{0,{\mathsf{i}}})=\hat s_{\mathsf{i}}$
for ${\mathsf{i}}=1,\dots,{\mathsf{n}}$ and denote
$$
{\hat{\jmath}}_1 := {\hat{\jmath}} - 2j_0\bar{\partial}_{j_0}\hat u_0,\qquad
\hat v_1 := \hat v - dv_0\cdot \hat u_0.
$$
This pair still satisfies~(\ref{eq:TP}). Hence, by what we have
already proved, there exists a pair $(\hat u_1,\hat b_1)$ that
satisfies~(\ref{eq:slice2}) with $(\hat s_{\mathsf{i}},{\hat{\jmath}},\hat v)$
replaced by $(0,{\hat{\jmath}}_1,\hat v_1)$. Hence the pair
$\hat u:=\hat u_0+\hat u_1$, $\hat b:=\hat b_1$ satisfies~(\ref{eq:slice2}).
Thus we have proved that~(i) implies~(ii). This completes
the proof of the lemma.
\end{proof}
\begin{lemma}\label{le:holomorphic}
Fix a regular stable map $(\Sigma,s_{0,*},j_0,v_0)$ and let
$$
B\to\mathcal{P}:b\mapsto\iota(b)=(\sigma_1(b),\dots,\sigma_{\mathsf{n}}(b),j(b),v(b))
$$
be a holomorphic infinitesimal slice such that
$$
\iota(b_0)=(s_{0,*},j_0,v_0).
$$
Let $(\pi_\iota,S_{\iota,*},H_\iota,b_0)$
be the unfolding constructed in~\ref{slice-unfolding}.
Then every continuously differentiable morphism $(\phi,\Phi)$ from
$(\pi_A:P\to A,R_*,H_A,a_0)$ to $(\pi_\iota,S_{\iota,*},H_\iota,b_0)$
is holomorphic.
\end{lemma}
\begin{proof}
Choose a smooth trivialization
$$
A\times\Sigma\to P:(a,z)\mapsto\tau(a,z)=\tau_a(z)
$$
so that $\tau_a:\Sigma\to P_a$ is a desingularization
(with no singularities) for every $a\in A$. The stable map
on $\Sigma$, induced by $\tau_a$, is the tuple
$$
p(a) := \iota\circ\phi(a)
= \bigl(\sigma_1(\phi(a)),\dots,\sigma_{\mathsf{n}}(\phi(a)),j(\phi(a)),v(\phi(a))\bigr)
\in \mathcal{P}.
$$
The complex structure on $A\times\Sigma$ induced by $\tau$
has the form
$$
(\hat a,\hat z)\mapsto\left(\sqrt{-1}\hat a,
j(\phi(a))(z)\hat z+\eta(a,\hat a)(z)\right)
$$
for a suitable $1$-form $T_aA\to\Vect(\Sigma):\hat a\mapsto\eta(a,\hat a)$.
Since this complex structure is integrable, the map
$H_A\circ\tau:A\times\Sigma\to M$ is holomorphic, and
$\tau^{-1}(R_{\mathsf{i}})$ is a complex submanifold of $A\times\Sigma$
for every ${\mathsf{i}}$, it follows from Lemma~\ref{le:hol-unfolding} that
$$
dp(a)\hat a + \mathcal{I}(p(a))dp(a)\sqrt{-1}\hat a - \mathcal{L}_{p(a)}\eta(a,\sqrt{-1}\hat a) = 0
$$
for every $a\in A$ and every $\hat a\in T_aA$.
Since $p=\iota\circ\phi$ and $\iota$ is holomorphic, this implies
$$
d\iota(\phi(a))
\left(d\phi(a)\hat a + \sqrt{-1}d\phi(a)\sqrt{-1}\hat a\right)
= \mathcal{L}_{\iota(\phi(a))}\eta(a,\sqrt{-1}\hat a)
$$
for all $a$ and $\hat a$. Since $\iota$ is a slice this implies
that $\eta\equiv0$ and $\phi$ is holomorphic. Hence $\Phi$
is holomorphic as well and this proves the lemma.
\end{proof}
\begin{theorem}\label{thm:nonodes}
Theorems~\ref{thm:exists}, \ref{thm:universal}, and~\ref{thm:stable}
hold for regular stable maps without nodes. Moreover,
if $(\pi_B:Q\to B,S_*,H_B,b_0)$ is any universal unfolding without
nodes and $(\phi,\Phi)$ is a continuously differentiable
morphism from ${(\pi_A:P\to A,R_*,H_A,a_0)}$ to $(\pi_B,S_*,H_B,b_0)$
then $\phi$ and $\Phi$ are holomorphic.
\end{theorem}
\begin{proof}
{\bf Step 1.}
{\em Theorem~\ref{thm:exists} holds for stable maps without nodes. }
We proved ``only if'' immediately after the statement of
Theorem~\ref{thm:exists}; we prove ``if'' here.
Fix a regular stable map $(\Sigma,s_{0,*},j_0,v_0)$, let
$\iota:B\to\mathcal{P}$ be a holomorphic infinitesimal slice such that
$
\iota(b_0)=(s_{0,*},j_0,v_0),
$
and let $(\pi_\iota,S_{\iota,*},H_\iota,b_0)$
be the unfolding constructed in~\ref{slice-unfolding}.
Then it follows from Lemma~\ref{le:slice} that
$(\pi_\iota,S_{\iota,*},H_\iota,b_0)$ is infinitesimally universal.
{\bf Step 2.}
{\em The unfolding $(\pi_\iota,S_{\iota,*},H_\iota,b_0)$ is universal.}
Let $(\pi_A,R_*,H_A,a_0)$ be an unfolding
of $(\Sigma,s_{0,*},j_0,v_0)$ and $f_0:P_{a_0}\to Q_{b_0}$
be a fiber isomorphism. Assume w.l.o.g.~that
$$
P=A\times\Sigma,\qquad f_0(a_0,z)=(b_0,z).
$$
Denote by $p(a)=(r_*(a),j(a),v(a))\in\mathcal{P}$
the regular stable map on the fiber over $a$ determined by
$(\pi_A,R_*,H_A,a_0)$. Then
$$
p(a_0) = (s_{0,*},j_0,v_0) = \iota(b_0).
$$
Now any two smooth maps $\phi:A\to B$ and
$\Phi:P\to Q_\iota$ that intertwine the projections
and satisfy $\Phi|P_{a_0}=f_0$ have the form
$$
\Phi(a,z) = (\phi(a),\Phi_a(z)),
$$
where $A\to\Diff(\Sigma):a\mapsto\Phi_a$ is a smooth map
such that
$
\Phi_{a_0}=\id.
$
The pair $(\phi,\Phi)$ is a
smooth morphism from $(\pi_A,R_*,H_A,a_0)$
to $(\pi_\iota,S_{\iota,*},H_\iota,b_0)$
if and only if
$$
p(a) = \Phi_a^*\iota(\phi(a))
$$
for every $a\in A$. Hence the existence and uniqueness
of smooth morphisms follows from Theorem~\ref{thm:slice}~(i).
That every smooth morphism is holomorphic follows from
Lemma~\ref{le:holomorphic}.
{\bf Step 3.}
{\em Every infinitesimally universal unfolding
of $(\Sigma,s_{0,*},j_0,v_0)$ is isomorphic to
$(\pi_\iota,S_{\iota,*},H_\iota,b_0)$.}
Let $(\pi_A,R_*,H_A,a_0)$ be an infinitesimally universal unfolding
and
$$
f_0:P_{a_0}\to Q_{b_0}
$$
be a fiber isomorphism. By Step~2, there exists
a holomorphic morphism $(\phi,\Phi)$ from
$(\pi_A,R_*,H_A,a_0)$ to $(\pi_\iota,S_{\iota,*},H_\iota,b_0)$.
The map
$$
p:=\iota\circ\phi:A\to\mathcal{P}
$$
is holomorphic. Since $(\pi_A,R_*,H_A,a_0)$ is infinitesimally
universal, $p$ is an infinitesimal slice at $a_0$, by Lemma~\ref{le:slice}.
Hence the differential $d\phi(a_0)$ is bijective. This implies that
$(\phi,\Phi)$ is an isomorphism.
{\bf Step 4.}
Since every infinitesimally universal unfolding
of $(\Sigma,s_{0,*},j_0,v_0)$ is isomorphic to
$(\pi_\iota,S_{\iota,*},H_\iota,b_0)$ and $(\pi_\iota,S_{\iota,*},H_\iota,b_0)$
is universal we have proved Theorem~\ref{thm:universal} for
stable maps without nodes. By
Lemma~\ref{le:slice} and Theorem~\ref{thm:slice},
the unfolding $(\pi_\iota,S_{\iota,*},H_\iota,b)$
is infinitesimally universal for $b$ near $b_0$ and hence
Theorem~\ref{thm:stable} holds for stable maps without nodes.
The `moreover' assertion follows from Lemma~\ref{le:holomorphic}
and Step~3. This proves Theorem~\ref{thm:nonodes}.
\end{proof}
\section{Hardy decompositions}\label{sec:Hardy}
This section follows closely Sections~9 and~11 of~\cite{RS}.
It is convenient to use slightly different notation;
for example $P=N\cup M$ in~\cite{RS} becomes $P=P'\cup P''$
and the open sets $U,V\subset Q$ in~\cite{RS} are replaced by $U',U''$.
With these changes we review the notation from~\cite{RS}.
\begin{PARA}\rm\label{AB}
Throughout this section
$$
(\pi_A:P\to A,R_*,H_A,a_0),\qquad (\pi_B:Q\to B,S_*,H_B,b_0)
$$
are unfoldings of maps,
$$
f_0: P_{a_0}\to Q_{b_0}
$$
is a fiber isomorphism, and $p_1,p_2,\ldots,p_{\mathsf{k}}$ are
the nodal points of the central fiber~$P_{a_0}$,
so $q_{\mathsf{i}}:=f_0(p_{\mathsf{i}})$ (for ${\mathsf{i}}=1,\ldots,{\mathsf{k}}$)
are the nodal points of the central fiber~$Q_{b_0}$.
As in~\cite{RS} we denote by $C_A\subset P$ and
$C_B\subset Q$ the critical points of $\pi_A$ and $\pi_B$,
respectively.
\end{PARA}\rm
\begin{PARA}\rm\label{UQ}
Let $U'\subset Q$ be an open neighborhood of $C_B$
equipped with nodal coordinates. This means
$$
U'=U'_1\cup\cdots\cup U'_{\mathsf{k}}
$$
where the sets $U'_{\mathsf{i}}$ have pairwise disjoint closures,
each $U'_{\mathsf{i}}$ is a connected neighborhood of one of the components
of $C_B$, and for ${\mathsf{i}}=1,\ldots,{\mathsf{k}}$ there is a holomorphic coordinate
system
$$
(\zeta_{\mathsf{i}},\tau_{\mathsf{i}}):B\to {\mathbb{C}}\times{\mathbb{C}}^{{\mathsf{b}}-1}
$$
and holomorphic functions $\xi_{\mathsf{i}},\eta_{\mathsf{i}}:U'_{\mathsf{i}}\to{\mathbb{C}}$ such that
$$
(\xi_{\mathsf{i}},\eta_{\mathsf{i}},\tau_{\mathsf{i}}\circ\pi_B):U'_{\mathsf{i}}\to{\mathbb{C}}\times{\mathbb{C}}\times{\mathbb{C}}^{{\mathsf{b}}-1}
$$
is a holomorphic coordinate system and $\xi_{\mathsf{i}}\eta_{\mathsf{i}}=\zeta_{\mathsf{i}}\circ\pi_B$.
Assume that $\bar U'\cap S_*=\emptyset$.
Let $U''\subset Q$ be an open set such that
$$
Q=U'\cup U'', \qquad \bar U''\cap C_B=\emptyset,
$$
and $U_{\mathsf{i}}'\cap U''$ intersects each fiber $Q_b$ in two open annuli
with $|\xi_{\mathsf{i}}|>|\eta_{\mathsf{i}}|$ on one component and $|\xi_{\mathsf{i}}|<|\eta_{\mathsf{i}}|$ on the other.
Introduce the abbreviations
$$
U:=U'\cap U'', \quad U_{\mathsf{i}}:=U_{\mathsf{i}}'\cap U'', \quad U_{{\mathsf{i}},1}:=\{|\xi_{\mathsf{i}}|>|\eta_{\mathsf{i}}|\},
\quad U_{{\mathsf{i}},2}:=\{|\xi_{\mathsf{i}}|<|\eta_{\mathsf{i}}|\},
$$
$$
U'_b:= U'\cap Q_b, \qquad U''_b:=U''\cap Q_b, \qquad U_b:=U\cap Q_b.
$$
\end{PARA}\rm
\begin{PARA}\rm\label{Hardy}
As in~\cite{RS} we use a Hardy decomposition
$$
P=P'\cup P'', \qquad {\partial} P'={\partial} P'' = P'\cap P'',
$$
for $(\pi_A,R_*,a_0)$. Thus
$P'$ and $P''$ are submanifolds of $P$
intersecting in their common boundary and
$$
P'=P'_1\cup\cdots \cup P'_{\mathsf{k}},
$$
where $P'_{\mathsf{i}}$ is a closed neighborhood of $p_{\mathsf{i}}$
disjoint from the elements of $R_*$,
the $P'_{\mathsf{i}}$ are pairwise disjoint,
and each $P'_{\mathsf{i}}$ is the domain of a nodal coordinate system.
The latter consists of three holomorphic maps
$$
(x_{\mathsf{i}},y_{\mathsf{i}}):P'_{\mathsf{i}}\to{\mathbb{D}}^2,\qquad z_{\mathsf{i}}: A \to{\mathbb{D}},\qquad t_{\mathsf{i}}:A\to{\mathbb{C}}^{{\mathsf{a}}-1},
$$
such that each map
$$
A\to {\mathbb{D}}\times{\mathbb{C}}^{{\mathsf{a}}-1}:a\mapsto(z_{\mathsf{i}}(a),t_{\mathsf{i}}(a))
$$
is a holomorphic coordinate system, each map
$$
P'_{\mathsf{i}}\to {\mathbb{D}}^2\times {\mathbb{C}}^{{\mathsf{a}}-1}:
p\mapsto \bigl(x_{\mathsf{i}}(p),y_{\mathsf{i}}(p),t_{\mathsf{i}}(\pi_A(p))\bigr)
$$
is a holomorphic coordinate system, and
$$
x_{\mathsf{i}}(p_{\mathsf{i}})=y_{\mathsf{i}}(p_{\mathsf{i}})=0,\qquad z_{\mathsf{i}}\circ\pi_A =x_{\mathsf{i}} y_{\mathsf{i}}.
$$
Restricting to a fiber gives a decomposition
$$
P_a=P'_a\cup P''_a, \qquad
P'_a:=P'\cap P_a, \qquad
P''_a:=P''\cap P_a,
$$
where $P''_a$ is a Riemann surface with boundary
and each component of $P'_a$ is either
a closed annulus or a pair of transverse closed disks.
Abbreviate
$$
\Gamma_a:=P'_a\cap P''_a = {\partial} P'_a = {\partial} P''_a.
$$
The nodal coordinate system determines a trivialization
\begin{equation}\label{eq:trivialize}
\iota:A\times\Gamma\to{\partial} P',\qquad
\Gamma:=\bigcup_{{\mathsf{i}}=1}^{\mathsf{k}}\{({\mathsf{i}},1),({\mathsf{i}},2)\}\times S^1,
\end{equation}
given by
$$
\begin{array}{ll}
\iota^{-1}(p):=(\pi_A(p),({\mathsf{i}},1),x_{\mathsf{i}}(p)),&\qquad p\in{\partial}_1P'_{\mathsf{i}}:=\{|x_{\mathsf{i}}|=1\}, \\
\iota^{-1}(q):=(\pi_A(q),({\mathsf{i}},2),y_{\mathsf{i}}(q)),&\qquad q\in{\partial}_2P'_{\mathsf{i}}:=\{|y_{\mathsf{i}}|=1\}.
\end{array}
$$
For $a\in A$ and ${\mathsf{i}}=1,\dots,{\mathsf{k}}$
define $\iota_a:\Gamma\to\Gamma_a$ by
$\iota_a(\lambda):=\iota(a,\lambda)$ and denote
$$
{\partial}_{{\mathsf{i}},1}P'_a:={\partial}_1P'_{\mathsf{i}}\cap P_a,\qquad
{\partial}_{{\mathsf{i}},2}P'_a:={\partial}_2P'_{\mathsf{i}}\cap P_a,\qquad
P'_{a,{\mathsf{i}}}:=P'_a\cap P'_{\mathsf{i}}.
$$
\end{PARA}\rm
\begin{figure}[htp]
\centering
\includegraphics[scale=0.6]{figure-hardy}
\caption{{A Hardy decomposition of $P$.}}\label{fig:hardy}
\end{figure}
\begin{PARA}\rm\label{f=id}
Lemma~11.3 in~\cite{RS} asserts that,
after shrinking $A$ and $B$ if necessary,
there is a Hardy decomposition $P=P'\cup P''$ as
in~\ref{Hardy} and there are open
subsets $U'=U'_1\cup\cdots\cup U'_{\mathsf{k}}$, $U''$, $U$ of $Q$
and functions $\xi_{\mathsf{i}},\eta_{\mathsf{i}},\zeta_{\mathsf{i}},\tau_{\mathsf{i}}$ as described in~\ref{UQ}
such that
$$
f_0(P'_{a_0})\subset U'_{b_0},\qquad f_0(P''_{a_0})\subset U''_{b_0},
$$
$$
\xi_{\mathsf{i}}\circ f_0\circ x_{\mathsf{i}}^{-1}(x,0,0)=x,\qquad
\eta_{\mathsf{i}}\circ f_0\circ y_{\mathsf{i}}^{-1}(0,y,0)=y
$$
for $x,y\in{\mathbb{D}}$. Fix a Hardy decomposition
$P=P'\cup P''$ for $(\pi_A,R_*,a_0)$,
open subsets $U'=U'_1\cup\cdots\cup U'_{\mathsf{k}}$, $U''$, $U$ of $Q$,
and functions $\xi_{\mathsf{i}},\eta_{\mathsf{i}},\zeta_{\mathsf{i}},\tau_{\mathsf{i}}$ as described in~\ref{UQ},
such that these conditions are satisfied.
\end{PARA}\rm
\begin{PARA}\rm\label{cU}
Fix an integer $s+1/2>1$. For $a\in A$ and $b\in B$ define an open subset
$$
\mathcal{U}(a,b)\subset H^s(\Gamma_a,U_b)
$$
by the condition that for $\alpha\in H^s(\Gamma_a,U_b)$
we have $\alpha\in\mathcal{U}(a,b)$ if
$$
\alpha\bigl({\partial}_{{\mathsf{i}},1}P'_a\bigr)\subset U_{{\mathsf{i}},1},\qquad
\alpha\bigl({\partial}_{{\mathsf{i}},2}P'_a\bigr)\subset U_{{\mathsf{i}},2},
$$
(see~\ref{UQ} for the notation $U_{{\mathsf{i}},1}$ and $U_{{\mathsf{i}},2}$)
and the curves $\xi_{\mathsf{i}}\circ \alpha\circ x_{\mathsf{i}}^{-1}$ and
$\eta_{\mathsf{i}}\circ \alpha\circ y_{\mathsf{i}}^{-1}$ from $S^1$ to ${\mathbb{C}}\setminus0$
both have winding number one about the origin. Define
\begin{equation*}
\begin{split}
\mathcal{U}'(a,b)&:=
\left\{\alpha\in\mathcal{U}(a,b) \,\Bigg|\,
\begin{aligned}
& \exists f'\in \Hol^{s+1/2}(P'_a,U'_b) : \alpha=f'|\Gamma_a\\
& \mbox{and } f'(C_A\cap P_a)=C_B\cap Q_b
\end{aligned}\right\}, \\
\mathcal{U}''(a,b)&:=
\left\{\alpha\in\mathcal{U}(a,b)\,\Bigg|\,
\begin{aligned}
& \exists f''\in \Hol^{s+1/2}(P''_a,U''_b):\alpha=f''|\Gamma_a\\
& \mbox{and } f''(R_*\cap P_a)=S_*\cap Q_b
\end{aligned}\right\}.
\end{split}
\end{equation*}
Here $\Hol^{s+1/2}(X,Y)$ denotes the set of maps
of class $H^{s+1/2}$ from $X$ to $Y$ which are holomorphic
on the interior of $X$. Holomorphicity at a nodal point
is defined as in~\cite[\S11.1]{RS}.
Note that the function
$f':P'_a\to U'_b$ in the definition of $\mathcal{U}'(a,b)$ maps the
boundary $\Gamma_a={\partial} P'_a$ into $U_b=U'_b\cap U''_b$;
similarly for $f''$ in the definition of $\mathcal{U}''(a,b)$.
Define
$$
\mathcal{U}_a:=\bigsqcup_{b\in B}\mathcal{U}(a,b),\qquad
\mathcal{U}'_a:=\bigsqcup_{b\in B}\mathcal{U}'(a,b),\qquad
\mathcal{U}''_a:=\bigsqcup_{b\in B}\mathcal{U}''(a,b),
$$
$$
\mathcal{U}:=\bigsqcup_{a\in A}\mathcal{U}_a,\qquad
\mathcal{U}':=\bigsqcup_{a\in A}\mathcal{U}'_a,\qquad
\mathcal{U}'':=\bigsqcup_{a\in A}\mathcal{U}''_a.
$$
Our notation means that the three formulas
$(a,\alpha,b)\in\mathcal{U}$, $(\alpha,b)\in\mathcal{U}_a$,
and $\alpha\in\mathcal{U}(a,b)$ have the same meaning.
\end{PARA}\rm
\begin{PARA}\rm\label{cU0}
We use the nodal coordinate system of~\ref{Hardy} to
construct an auxiliary Hilbert manifold structure on $\mathcal{U}$.
The domains of the maps in this space vary with $a$
so we replace them with a constant domain by using
an appropriate trivialization. Define an open set
$$
\mathcal{U}_0\subset \left\{(a,\alpha,b)\in A\times H^s(\Gamma,U)\times B\,|\,
\pi_B\circ\alpha=b\right\}
$$
by the condition that the map
$$
\mathcal{U}_0\to\mathcal{U}:(a,\alpha,b)\mapsto(a,\alpha\circ\iota_a^{-1},b)
$$
is a bijection. In particular $\alpha(({\mathsf{i}},1)\times S^1)\subset U_{{\mathsf{i}},1}$
and $\alpha(({\mathsf{i}},2)\times S^1)\subset U_{{\mathsf{i}},2}$ for $(a,\alpha,b)\in\mathcal{U}_0$.
(By a standard construction $H^s(\Gamma,U)$ is a complex
Hilbert manifold and the subset $\{(a,\alpha,b)\,|\,\pi_B\circ\alpha=b\}$
is a complex Hilbert submanifold of $A\times H^s(\Gamma,U)\times B$.
This is because the map $H^s(\Gamma,U)\to H^s(\Gamma,B)$
induced by $\pi_B$ is a holomorphic submersion.
Note that $\mathcal{U}_0$ is a connected component
of $\{(a,\alpha,b)\,|\,\pi_B\circ\alpha=b\}$ and hence inherits
its Hilbert manifold structure.)
We emphasize that the resulting Hilbert manifold structure
on $\mathcal{U}$ depends on the choice of the Hardy trivialization.
Two different Hardy trivializations give rise to a homeomorphism
which is of class $C^\ell$ on the dense subset $\mathcal{U}\cap H^{s+\ell}$.
\end{PARA}\rm
\begin{PARA}\rm\label{shrinking}
The fiber isomorphism $f_0:P_{a_0}\to Q_{b_0}$ determines
a point
$$
(a_0,\alpha_0:=f_0|\Gamma_{a_0},b_0)\in\mathcal{U};
$$
this point lies in $\mathcal{U}'\cap\mathcal{U}''$ as
$$
\alpha_0=f'_0|\Gamma_{a_0}=f''_0|\Gamma_{a_0},
\qquad\mbox{where} \qquad
f'_0:=f_0|P'_{a_0},\quad f''_0:=f_0|P''_{a_0}.
$$
In the sequel we will denote neighborhoods of $a_0$ in $A$ and
$(a_0,\alpha_0,b_0)$ in $\mathcal{U}'$, $\mathcal{U}''$, or $\mathcal{U}$ by the same letters
$A$, respectively $\mathcal{U}'$, $\mathcal{U}''$, or $\mathcal{U}$, and signal this with
the text ``shrinking $A$, $\mathcal{U}'$, $\mathcal{U}''$, or $\mathcal{U}$, if necessary''.
\end{PARA}\rm
\begin{lemma}\label{le:fgamma}
For every $(a,\alpha,b)\in\mathcal{U}'\cap\mathcal{U}''$
there is a unique fiber isomorphism
$f:P_a\to Q_b$ with $f|\Gamma_a=\alpha$.
\end{lemma}
\begin{proof}
This follows immediately from~\cite[Lemma~9.4]{RS}.
\end{proof}
\begin{theorem}\label{thm:U}
Fix an integer $s+1/2>4$.
After shrinking $A$, $\mathcal{U}'$, $\mathcal{U}''$, $\mathcal{U}$,
if necessary, the following holds.
\begin{description}
\item[(i)]
For each $a\in A$, $\mathcal{U}'_a$ and $\mathcal{U}''_a$
are complex submanifolds of~$\mathcal{U}_a$.
\item[(ii)]
Let $(a,\alpha,b)\in\mathcal{U}'\cap\mathcal{U}''$ and $f:P_a\to Q_b$ be the
associated fiber isomorphism with $\alpha=f|\Gamma_a$.
Let $w:\Sigma\to P_a$ be a desingularization with induced
structures $j,\nu,s_*,u:=f\circ w$ on~$\Sigma$ and $D_u$
be the operator in Definition~\ref{def:infuniv}.
Then
$$
\ker D_u\cong
T_{(\alpha,b)}\mathcal{U}'_a\cap T_{(\alpha,b)}\mathcal{U}''_a,\qquad
\coker D_u\cong
\frac{T_{(\alpha,b)}\mathcal{U}_a}
{T_{(\alpha,b)}\mathcal{U}'_a+T_{(\alpha,b)}\mathcal{U}''_a}.
$$
\item[(iii)]
$\mathcal{U}'$ and $\mathcal{U}''$ are complex submanifolds of $\mathcal{U}$.
\item[(iv)]
The projections $\mathcal{U}\to A$, $\mathcal{U}'\to A$, $\mathcal{U}''\to A$
are holomorphic submersions.
\end{description}
\end{theorem}
\begin{proof}
Theorems~9.5 and~11.9 in~\cite{RS}.
The condition $s+1/2>4$ is used in compactness arguments
for the proofs of~(i) and~(iii). These compactness arguments
can be eliminated by modifying the definition of $\mathcal{U}''$ along
the lines of the definition of $\mathcal{V}''$ in~\ref{cV} below.
\end{proof}
\begin{PARA}\rm\label{hardyTrivialization}
As in~\cite[Definition~11.6]{RS}, we use a Hardy trivialization for
$(\pi_A:P\to A,R_*,a_0)$, i.e. a triple $(P'\cup P'',\iota,\rho)$
where $P=P'\cup P''$ is a Hardy decomposition with
corresponding trivialization $\iota:A\times\Gamma\to {\partial} P'$
as in~\ref{Hardy} and
$$
\rho:P''\to P''_{a_0}=:\Omega
$$
is a trivialization such that $\rho_a:=\rho|P''_a:P''_a\to\Omega$
is a diffeomorphism satisfying
$$
\rho_{a_0} = \id,\qquad
\rho_a\circ\iota_a=\iota_{a_0}
$$
for $a\in A$. We require further that $\rho$ is holomorphic
in a neighborhood of the boundary.
\end{PARA}\rm
\begin{PARA}\rm\label{cV}
Let $(\pi_A:P\to A,R_*,a_0)$ be an unfolding of marked nodal
Riemann surfaces and $h_0:P_{a_0}\to M$ be a holomorphic map.
Choose a Hardy decomposition $P=P'\cup P''$ as in~\ref{Hardy}
and a Hardy trivialization $\rho:P''\to\Omega$ as
in~\ref{hardyTrivialization}.
We would like to imitate Theorem~\ref{thm:U}
and define subsets $\mathcal{V}'_a,\mathcal{V}''_a\subset H^s(\Gamma_a,M)$
of those maps $\beta\in\mathcal{V}_a$ which extend holomorphically
to $P'_a,P''_a$ respectively, but it is convenient
to restrict the extensions. Let
$$
V'=V'_1\cup\cdots\cup V'_{\mathsf{k}}\subset M
$$
be an open neighborhood of the image $h_0(P_{a_0}\cap C_A)$
of the nodal set so that each pair $(V'_{\mathsf{i}},h_0(p_{\mathsf{i}}))$
is holomorphically diffeomorphic to the open unit ball
in ${\mathbb{C}}^{\mathsf{m}}$ centered at the origin, the closures of the
sets $V'_{\mathsf{i}}$ are pairwise disjoint, and
$$
h_0(P_{a_0}\cap P'_{\mathsf{i}}) \subset V'_{\mathsf{i}}.
$$
For $a\in A$ abbreviate
$$
\mathcal{V}_a := H^s(\Gamma_a,M).
$$
Let $\mathcal{V}'_a\subset\mathcal{V}_a$ be the subspace of those
$\beta$ that extend holomorphically to $P_a'$, {i.e.}
$$
\mathcal{V}'_a:=\left\{\beta\in\mathcal{V}_a\,\big|\,\exists\,
h'\in\Hol^{s+1/2}(P'_a,M)\ \mbox{s.t.}\ h'(P'_{a,{\mathsf{i}}})\subset V'_{\mathsf{i}}
\mbox{ and }
\beta=h'|\Gamma_a\right\}.
$$
Let $\mathcal{W}_0$ be a neighborhood of $h_0|\Omega$
in $H^{s+1/2}(\Omega,M)$, where $\Omega=P_{a_0}''$
as in~\ref{hardyTrivialization}. Via the trivialization
$\rho_a:P_a''\to \Omega$ this determines an open subset
$$
\mathcal{W}_a:=\left\{h''\in H^{s+1/2}(P''_a,M)\,\big|\,
h''\circ\rho_a^{-1}\in\mathcal{W}_0\right\}
$$
of $H^{s+1/2}(P''_a,M)$ for $a\in A$. Let
$$
\mathcal{V}''_a:=\left\{\beta\in\mathcal{V}_a\,\big|\,\exists\,
h''\in\mathcal{W}_a\cap\Hol^{s+1/2}(P''_a,M)\ \mbox{s.t.}\
\beta=h''|\Gamma_a\right\}.
$$
Define
$$
\mathcal{V}:=\bigsqcup_{a\in A}\mathcal{V}_a,\qquad
\mathcal{V}':=\bigsqcup_{a\in A}\mathcal{V}'_a,\qquad
\mathcal{V}'':=\bigsqcup_{a\in A}\mathcal{V}''_a.
$$
Then every pair $(a,\beta)\in\mathcal{V}'\cap\mathcal{V}''$ determines
a holomorphic map $h:P_a\to M$ such that
$h|\Gamma_a=\beta$. As in~\ref{cU0} we use the
nodal coordinate system of~\ref{Hardy}
to construct an auxiliary Hilbert
manifold structure on $\mathcal{V}$ via the bijection
\begin{equation}\label{eq:Vtriv}
\mathcal{V}\to A\times H^s(\Gamma,M):
(a,\beta)\mapsto (a,\beta\circ\iota_a).
\end{equation}
\end{PARA}\rm
\begin{theorem}\label{thm:V}
Continue the notation of~\ref{Hardy}, \ref{hardyTrivialization},
and~\ref{cV}. Fix an integer $s+1/2>1$. After shrinking
$A$ and $\mathcal{W}_0$, if necessary, the following holds.
\begin{description}
\item[(i)]
For each $a\in A$, $\mathcal{V}'_a$ and $\mathcal{V}''_a$
are complex submanifolds of~$\mathcal{V}_a$.
\item[(ii)]
Let $(a,\beta)\in\mathcal{V}'\cap\mathcal{V}''$ and
$h:P_a\to M$ be the associated holomorphic map
with $\beta=h|\Gamma_a$.
Let $w:\Sigma\to P_a$ be a desingularization with
induced structures $s_*$, $\nu$, $j$, $v:=h\circ w$
on~$\Sigma$ and $D_v$ be the operator in~\ref{def:infuniv}.
Then
$$
\ker D_v\cong
T_\beta\mathcal{V}'_a\cap T_\beta\mathcal{V}''_a,\qquad
\coker D_v\cong
\frac{T_\beta\mathcal{V}_a}
{T_\beta\mathcal{V}'_a+T_\beta\mathcal{V}''_a}.
$$
\item[(iii)]
$\mathcal{V}'$ and $\mathcal{V}''$ are complex submanifolds of $\mathcal{V}$.
\item[(iv)]
The projections $\mathcal{V}\to A$, $\mathcal{V}'\to A$, $\mathcal{V}''\to A$
are holomorphic submersions.
\end{description}
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:V}~(i) and~(ii).]
In parts~(i) and~(ii) the point $a$ is fixed.
We introduce the following notation to make
the proof look more like the proof of~\cite[Theorem~9.5]{RS}.
Use the notation of part~(ii). Abbreviate
$$
\Sigma' := w^{-1}(P_a'),\qquad
\Sigma'' := w^{-1}(P_a'').
$$
Thus $\Sigma'$ and $\Sigma''$ are submanifolds of $\Sigma$
such that
$$
\Sigma=\Sigma'\cup\Sigma'',\qquad
{\partial}\Sigma'={\partial}\Sigma''=\Sigma'\cap\Sigma''.
$$
Now $w^{-1}\circ\iota_a$ is a diffeomorphism from
$\Gamma$ in~(\ref{eq:trivialize}) to $\Sigma'\cap\Sigma''$.
To simplify the notation we assume that
$
\Gamma=\Sigma'\cap\Sigma''.
$
The submanifold $\Sigma'$ is a disjoint union
$$
\Sigma'=\Sigma'_1\cup\dots\cup \Sigma'_{\mathsf{k}}
$$
where each set $\Sigma'_{\mathsf{i}}$ is either an embedded
closed annulus or else the union of two disjoint embedded closed
disks centered at two equivalent nodal points. It follows that every
pair of equivalent nodal points appears in some~$\Sigma'_{\mathsf{i}}$.
In case $\Sigma'_{\mathsf{i}}$ is a disjoint union of two disks, say
$\Sigma'_{\mathsf{i}}=\Sigma'_{{\mathsf{i}},1}\cup\Sigma'_{{\mathsf{i}},2}$, choose holomorphic
diffeomorphisms $x_{\mathsf{i}}:\Sigma'_{{\mathsf{i}},1}\to{\mathbb{D}}$ and $y_{\mathsf{i}}:\Sigma'_{{\mathsf{i}},2}\to{\mathbb{D}}$
which send the nodal point to $0$. In case $\Sigma'_{\mathsf{i}}$ is an annulus
choose a holomorphic diffeomorphism $x_{\mathsf{i}}:\Sigma'_{\mathsf{i}}\to{\mathbb{A}}(\delta_{\mathsf{i}},1)$
and define $y_{\mathsf{i}}:\Sigma'_{\mathsf{i}}\to{\mathbb{A}}(\delta_{\mathsf{i}},1)$ by $y_{\mathsf{i}}=\delta_{\mathsf{i}}/x_{\mathsf{i}}$.
Let $\mathcal{V}'_0\subset H^s(\Gamma,M)$ be the subspace of those
$H^s$-functions $\gamma:\Gamma\to M$ that extend
holomorphically to $H^{s+1/2}$-functions $v':\Sigma'\to M$
which map each pair of equivalent nodal points to the
same point in $M$ and take $\Sigma'_{\mathsf{i}}$ to $V'_{\mathsf{i}}$.
Let $\mathcal{V}''_0\subset H^s(\Gamma,M)$ be the subspace of
those $H^s$-functions $\gamma:\Gamma\to M$ that extend
holomorphically to $H^{s+1/2}$-functions $v'':\Sigma''\to M$
such that
$
h'':=v''\circ w^{-1}|P_a''\in\mathcal{W}_a.
$
In this notation part~(i) asserts that $\mathcal{V}_0'$ and
$\mathcal{V}_0''$ are complex submanifolds of~$H^s(\Gamma,M)$.
We prove that $\mathcal{V}_0'$ is a complex submanifold of $H^s(\Gamma,M)$.
Choose coordinate charts $\psi_{\mathsf{i}}:V'_{\mathsf{i}}\to{\mathbb{C}}^{\mathsf{m}}$ such that
$\psi_{\mathsf{i}}(h_0(p_{\mathsf{i}}))=0$ and $\psi_{\mathsf{i}}(V'_{\mathsf{i}})$ is the
open unit ball in ${\mathbb{C}}^{\mathsf{m}}$ for every ${\mathsf{i}}$. Define the map
\begin{equation}\label{eq:V0}
\mathcal{V}_0'\to(H^s(S^1,{\mathbb{C}}^{\mathsf{m}}))^{2{\mathsf{k}}}:
\gamma\mapsto(\xi_1,\eta_1,
\dots,\xi_{\mathsf{k}},\eta_{\mathsf{k}})
\end{equation}
by
\begin{equation}\label{eq:xietai}
\xi_{\mathsf{i}}:=\psi_{\mathsf{i}}\circ\gamma\circ x_{\mathsf{i}}^{-1},\qquad
\eta_{\mathsf{i}}:=\psi_{\mathsf{i}}\circ\gamma\circ y_{\mathsf{i}}^{-1}.
\end{equation}
The image of~(\ref{eq:V0}) is the set of all tuples
$(\xi_1,\eta_1,\dots,\xi_{\mathsf{k}},\eta_{\mathsf{k}})$ in
$(H^s(S^1,{\mathbb{C}}^{\mathsf{m}}))^{2{\mathsf{k}}}$ that satisfy the following conditions.
\begin{description}
\item[(a)]
The functions $\xi_{\mathsf{i}},\eta_{\mathsf{i}}:S^1\to{\mathbb{C}}^{\mathsf{m}}$
take values in the open unit ball.
\item[(b)]
If $\Sigma'_{\mathsf{i}}$ is the disjoint union of two discs then all negative Fourier
coefficients of $\xi_{\mathsf{i}}$ and $\eta_{\mathsf{i}}$ vanish and the zeroth coefficients agree.
\item[(c)]
If $\Sigma'_{\mathsf{i}}$ is an annulus then $\xi_{\mathsf{i}}$ extends
holomorphically to an $H^{s+1/2}$ function on the annulus
${\mathbb{A}}(\delta_{\mathsf{i}},1)$ and $\eta_{\mathsf{i}}(y) = \xi_{\mathsf{i}}(\delta_{\mathsf{i}}/y)$
for every $y\in S^1$.
\end{description}
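Condition~(b) encodes holomorphic extension to the two disks.
Indeed, a loop $\xi_{\mathsf{i}}(e^{i\theta})=\sum_{n\in{\mathbb{Z}}}\xi_{{\mathsf{i}},n}e^{in\theta}$
of class $H^s$ extends to the holomorphic disk
$$
v(re^{i\theta})=\sum_{n\ge0}\xi_{{\mathsf{i}},n}r^ne^{in\theta}
$$
of class $H^{s+1/2}$ if and only if $\xi_{{\mathsf{i}},n}=0$ for all $n<0$,
and in that case $v(0)=\xi_{{\mathsf{i}},0}$; hence the extensions over the
two disks of $\Sigma'_{\mathsf{i}}$ agree at the nodal point precisely when
the zeroth Fourier coefficients of $\xi_{\mathsf{i}}$ and $\eta_{\mathsf{i}}$ agree.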
Conditions~(b) and~(c) define a closed subspace of
$(H^s(S^1,{\mathbb{C}}^{\mathsf{m}}))^{2{\mathsf{k}}}$ and condition~(a)
defines an open set in this subspace. Hence the image
of~(\ref{eq:V0}) is an open set in a Hilbert subspace and
this shows that $\mathcal{V}_0'$ is a Hilbert submanifold of $H^s(\Gamma,M)$.
We prove that $\Hol^{s+1/2}(\Sigma'',M)$ is a complex
submanifold of $H^{s+1/2}(\Sigma'',M)$.
To see this note that the Cauchy--Riemann operator
$v''\mapsto\bar{\partial}_{j,J}(v'')$
defines a holomorphic section of the vector bundle
$
\mathcal{E}\to\mathcal{B}:=H^{s+1/2}(\Sigma'',M)
$
with fibers
$$
\mathcal{E}_{v''}:=H^{s-1/2}(\Sigma'',
\Lambda^{0,1}T^*\Sigma''\otimes(v'')^*TM).
$$
The intrinsic derivative of this section at a zero $v''$
is the Cauchy--Riemann operator
$
D_{v''}:T_{v''}\mathcal{B}\to\mathcal{E}_{v''}
$
of the holomorphic vector bundle $(v'')^*TM\to\Sigma''$.
Since each component of $\Sigma''$ has nonempty boundary
the operator $D_{v''}$ is surjective;
a right inverse can be
constructed from an appropriate Lagrangian boundary condition
(see~\cite[Appendix~C.1.10]{MS}).
This proves that
$\Hol^{s+1/2}(\Sigma'',M)$ is a complex submanifold
of $H^{s+1/2}(\Sigma'',M)$.
We prove that $\mathcal{V}_0''$ is a complex submanifold of $H^s(\Gamma,M)$.
The restriction map
$$
\Hol^{s+1/2}(\Sigma'',M)\to H^s(\Gamma,M):v''\mapsto v''|\Gamma
$$
is an injective holomorphic immersion.
That it is holomorphic is obvious, that it is injective follows
from unique continuation, and that it is an immersion follows
from the elliptic boundary estimate in~\cite[Theorem~B.4]{RS}.
It follows that the image of a sufficiently small neighborhood
of $h_0\circ\rho_a\circ w|\Sigma''$ under the restriction map is
a complex submanifold of $H^s(\Gamma,M)$; this image is $\mathcal{V}_0''$.
This proves~(i).
We prove~(ii). It follows directly from the definitions that
there is a map
$$
\ker\,D_v\to T_\beta\mathcal{V}_a'\cap T_\beta\mathcal{V}_a'':
\hat v\mapsto \hat v\circ w^{-1}|\Gamma_a.
$$
As in the proof of Theorem~9.5~(ii) in~\cite{RS} this map is injective
by unique continuation and is surjective by elliptic regularity.
Now define a map
$$
\coker\,D_v\to\frac{T_\beta\mathcal{V}_a}{T_\beta\mathcal{V}_a'+T_\beta\mathcal{V}_a''}:
[\eta]\mapsto[\hat\beta]
$$
as follows. Given $\eta\in\Omega^{0,1}(\Sigma,v^*TM)$ choose two vector
fields $\xi'$ along $v':=v|\Sigma'$ and $\xi''$ along $v'':=v|\Sigma''$
that satisfy
$$
D_{v'}\xi'=\eta|\Sigma',\qquad D_{v''}\xi''=\eta|\Sigma'',\qquad
\xi'|\Gamma-\xi''|\Gamma = \hat\beta\circ w|\Gamma.
$$
One verifies as in the proof of~\cite[Theorem~9.5~(iii)]{RS}
that this map is well defined and bijective. That this map
is well defined follows directly from the definitions and that it
is injective uses elliptic regularity. The proof of surjectivity
is based on the following two assertions.
\begin{description}
\item[(a)]
Each element in the quotient
$T_\beta\mathcal{V}_a/(T_\beta\mathcal{V}_a'+T_\beta\mathcal{V}_a'')$
can be represented by a smooth vector field along $\beta$.
\item[(b)]
For every smooth vector field $\hat\beta$ along $\beta$ there exist
vector fields $\xi'$ along $v'$ and $\xi''$ along $v''$ such that
$\xi'|\Gamma-\xi''|\Gamma = \hat\beta\circ w|\Gamma$
and the $(0,1)$-form $\eta$ along $v$ defined by
$\eta|\Sigma':=D_{v'}\xi'$ and $\eta|\Sigma'':=D_{v''}\xi''$
is smooth.
\end{description}
One first proves~(b) by an argument in local coordinates,
using the construction due to Emile Borel of a smooth function with
a prescribed Taylor series at a point. Once~(b) is established
assertion~(a) follows from the observation that the subspace
of those elements of the quotient
$T_\beta\mathcal{V}_a/(T_\beta\mathcal{V}_a'+T_\beta\mathcal{V}_a'')$ that admit
smooth representatives is both finite dimensional and dense.
The details are exactly as in the proof of~\cite[Theorem~9.5~(iii)]{RS}
and will be omitted. Thus we have proved~(ii).
The proofs of~(iii) and~(iv) are given below after some preparation.
\end{proof}
\begin{PARA}\rm\label{standardNode}
Let ${\mathbb{D}}\subset{\mathbb{C}}$ be the closed unit disc.
The \jdef{standard node} is defined as the map
$$
N\to\INT({\mathbb{D}}): (x,y)\mapsto xy, \qquad N:=\{(x,y)\in{\mathbb{D}}\times{\mathbb{D}}\,|\,\Abs{xy}<1\}.
$$
For $z\in\INT({\mathbb{D}})$ denote
$$
N_z:=\{(x,y)\in{\mathbb{D}}\times{\mathbb{D}}\,|\,xy=z\}.
$$
The boundary ${\partial} N_z$ has two components
$$
{\partial}_1N_z:=\{(x,y)\in N_z\,|\,\Abs{x}=1\},\qquad
{\partial}_2N_z:=\{(x,y)\in N_z\,|\,\Abs{y}=1\}
$$
which can be identified with the unit circle $S^1={\partial}{\mathbb{D}}\subset{\mathbb{C}}$
via the embeddings $\iota_{1,z},\iota_{2,z}:S^1\to N_z$ given by
$$
\iota_{1,z}(e^{i\theta}) := (e^{i\theta},e^{-i\theta}z),\qquad
\iota_{2,z}(e^{i\theta}) := (e^{-i\theta}z,e^{i\theta}).
$$
We study the set of all triples $(z,\xi,\eta)$
where $z\in\INT({\mathbb{D}})$ and $\xi:S^1\to{\mathbb{C}}^{\mathsf{m}}$,
$\eta:S^1\to{\mathbb{C}}^{\mathsf{m}}$ are the boundary values
of a holomorphic map $v:N_z\to{\mathbb{C}}^{\mathsf{m}}$, namely
$$
\xi := v\circ\iota_{1,z},\qquad
\eta := v\circ\iota_{2,z}.
$$
At $z=0$, the functions $\xi$ and $\eta$ extend to the
closed unit disc and agree at the origin.
More precisely, fix an integer $s+1/2>1$.
For $z\in\INT({\mathbb{D}})\setminus 0$ let $\Hol^{s+1/2}(N_z,{\mathbb{C}}^{\mathsf{m}})$ be the space of
all maps $v:N_z\to{\mathbb{C}}^{\mathsf{m}}$ of class $H^{s+1/2}$ which are holomorphic
in $\INT(N_z)$. The space $N_0$ consists
of two disks ${\mathbb{D}}\times 0$ and $0\times{\mathbb{D}}$ intersecting in $(0,0)$.
In this case let $\Hol^{s+1/2}(N_0,{\mathbb{C}}^{\mathsf{m}})$ denote the space of
all continuous maps $v:N_0\to{\mathbb{C}}^{\mathsf{m}}$ such that $v_1:=v|{\mathbb{D}}\times 0$
and $v_2:=v|0\times{\mathbb{D}}$ are holomorphic in the interior
and restrict to $H^s$ functions on the boundary.
In both cases the trace theorem gives rise to a map
$$
\Hol^{s+1/2}(N_z,{\mathbb{C}}^{\mathsf{m}})\to H^s(S^1,{\mathbb{C}}^{\mathsf{m}})\times H^s(S^1,{\mathbb{C}}^{\mathsf{m}});
v\mapsto(v\circ\iota_{1,z},v\circ\iota_{2,z}).
$$
The norm on $ H^s(S^1,{\mathbb{C}}^{\mathsf{m}})$ is given by
$$
\left\|\zeta\right\|_s:=\sqrt{\sum_{n\in{\mathbb{Z}}} (1+|n|)^{2s}|\zeta_n|^2},\qquad
\zeta(e^{i\theta})=\sum_{n\in{\mathbb{Z}}} \zeta_n e^{in\theta}.
$$
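As an illustration (not part of the argument), this norm can be approximated from sample values via the discrete Fourier transform; the sketch below uses $s=1$ and $\zeta(e^{i\theta})=2+e^{i\theta}$, for which $\left\|\zeta\right\|_1^2=(1+0)^2\cdot|2|^2+(1+1)^2\cdot|1|^2=8$. All names are illustrative.

```python
import numpy as np

# Illustrative computation of the H^s norm from Fourier coefficients.
# Test function: zeta(e^{i theta}) = 2 + e^{i theta}, with s = 1, so that
# ||zeta||_1^2 = (1+0)^2 * |2|^2 + (1+1)^2 * |1|^2 = 8.
s, M = 1, 32
theta = 2 * np.pi * np.arange(M) / M
zeta = 2 + np.exp(1j * theta)

# np.fft.fft(zeta)/M recovers the coefficients zeta_n for |n| < M/2;
# fftfreq gives the corresponding integer frequencies n.
coeffs = np.fft.fft(zeta) / M
n = np.fft.fftfreq(M, d=1.0 / M)

norm_s = np.sqrt(np.sum((1 + np.abs(n)) ** (2 * s) * np.abs(coeffs) ** 2))
print(abs(norm_s - np.sqrt(8)) < 1e-12)  # True
```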
\end{PARA}
\begin{lemma}\label{le:localmodel}
{\bf (i)} The set
$$
\mathcal{N}:= \left\{(z,\xi,\eta)\,\big|\,
\exists\,v\in \Hol^{s+1/2}(N_z,{\mathbb{C}}^{\mathsf{m}})\mbox{ s.t. }
\xi=v\circ\iota_{1,z},\,\eta=v\circ\iota_{2,z}\right\}
$$
is a complex submanifold of
$\INT({\mathbb{D}})\times H^s(S^1,{\mathbb{C}}^{\mathsf{m}})\times H^s(S^1,{\mathbb{C}}^{\mathsf{m}})$.
\smallskip\noindent{\bf (ii)}
The projection $\mathcal{N}\to\INT({\mathbb{D}}):(z,\xi,\eta)\mapsto z$
is a surjective submersion.
\smallskip\noindent{\bf (iii)}
Let $A\subset\INT({\mathbb{D}})\times{\mathbb{C}}^{{\mathsf{a}}-1}$ be an open set and
$A\to\mathcal{N}:(z,t)\mapsto(z,\xi_{z,t},\eta_{z,t})$ be a holomorphic
map. Then the map
$$
H:\left\{(x,y,t)\in{\mathbb{C}}^{{\mathsf{a}}+1}\,|\,x,y\in\INT({\mathbb{D}}),\,(xy,t)\in A\right\}\to{\mathbb{C}}^{\mathsf{m}}
$$
defined by
$$
H(x,y,t) := \left\{\begin{array}{ll}
\xi_{xy,t}(x),&\mbox{if } y\ne 0,\\
\eta_{xy,t}(y),&\mbox{if } x\ne 0,\\
\xi_{0,t}(0)=\eta_{0,t}(0),&\mbox{if }x=y=0,
\end{array}\right.
$$
is well defined and holomorphic.
\end{lemma}
\begin{proof}
Let $(z,\xi,\eta)\in\INT({\mathbb{D}})\times H^s(S^1,{\mathbb{C}}^{\mathsf{m}})\times H^s(S^1,{\mathbb{C}}^{\mathsf{m}})$
and write
$$
\xi(x)=:\sum_{n\in{\mathbb{Z}}}\xi_nx^n,\qquad
\eta(y)=:\sum_{n\in{\mathbb{Z}}}\eta_ny^n,
$$
i.e. $\xi_n,\eta_n\in{\mathbb{C}}^{\mathsf{m}}$ are the Fourier coefficients
of $\xi,\eta$.
When $(z,\xi,\eta)\in\mathcal{N}$ each of these series converges on the annulus
with inner radius $\Abs{z}$ and outer radius one.
(This was used in the definition of $H$.)
When $z\ne 0$ we have
$$
(z,\xi,\eta)\in\mathcal{N}\qquad\iff\qquad
\eta_{-n} = z^n\xi_n\mbox{ for all }n\in{\mathbb{Z}},
$$
while
$$
(0,\xi,\eta)\in\mathcal{N}\qquad\iff\qquad
\xi_0=\eta_0,\;\;\xi_n=\eta_n=0\mbox{ for }n<0.
$$
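As a numerical sanity check (not part of the argument), the characterization of $\mathcal{N}$ for $z\ne 0$ can be verified for a truncated Laurent series with ${\mathsf{m}}=1$; all names in the following sketch are illustrative.

```python
import numpy as np

# Numerical check of the characterization of N for z != 0 with m = 1:
# for a finite Laurent series v(x) = sum_{|n|<=N} a_n x^n (holomorphic on
# the annulus |z| <= |x| <= 1), the boundary traces
#   xi  = v o iota_{1,z},  i.e. xi(e^{i theta})  = v(e^{i theta}),
#   eta = v o iota_{2,z},  i.e. eta(e^{i theta}) = v(z e^{-i theta}),
# satisfy eta_{-n} = z^n xi_n for all n.
rng = np.random.default_rng(0)
z, N, M = 0.3 + 0.2j, 4, 64
a = {n: rng.normal() + 1j * rng.normal() for n in range(-N, N + 1)}

theta = 2 * np.pi * np.arange(M) / M
x = np.exp(1j * theta)
xi = sum(a[n] * x ** n for n in a)
eta = sum(a[n] * (z / x) ** n for n in a)  # v evaluated at z e^{-i theta}

def coeff(samples, n):
    # n-th Fourier coefficient, computed as a discrete mean
    return np.mean(samples * np.exp(-1j * n * theta))

err = max(abs(coeff(eta, -n) - z ** n * coeff(xi, n)) for n in range(-N, N + 1))
print(err < 1e-9)  # True
```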
Denote by $H^s_\pm(S^1,{\mathbb{C}}^{\mathsf{m}})\subset H^s(S^1,{\mathbb{C}}^{\mathsf{m}})$
the Hardy space of all $\zeta\in H^s(S^1,{\mathbb{C}}^{\mathsf{m}})$ whose
Fourier coefficients $\zeta_n$ vanish for $\mp n\ge 0$.
For $z\in\INT({\mathbb{D}})$ define the bounded linear operator
$\mathcal{T}_z:H^s_+(S^1,{\mathbb{C}}^{\mathsf{m}})\to H^s_-(S^1,{\mathbb{C}}^{\mathsf{m}})$ by
$$
\mathcal{T}_z\left(\sum_{n>0}c_n e^{in\theta}\right)
:= \sum_{n>0} z^nc_ne^{-in\theta}.
$$
Then the resulting map
$$
\INT({\mathbb{D}})\times H^s_+(S^1,{\mathbb{C}}^{\mathsf{m}})\to H^s_-(S^1,{\mathbb{C}}^{\mathsf{m}}):
(z,\zeta_+)\mapsto \mathcal{T}_z(\zeta_+)
$$
is holomorphic. Moreover, the set $\mathcal{N}$
can be written in the form
$$
\mathcal{N}=\left\{
(z,\xi_++\lambda+\mathcal{T}_z(\eta_+),\eta_++\lambda+\mathcal{T}_z(\xi_+))\,\bigg|\,
\begin{array}{l}
\xi_+,\eta_+\in H^s_+(S^1,{\mathbb{C}}^{\mathsf{m}}),\\
\lambda\in{\mathbb{C}}^{\mathsf{m}},\,z\in\INT({\mathbb{D}})
\end{array}
\right\}.
$$
Hence $\mathcal{N}$ is a complex Hilbert submanifold of the space
$$
{\mathbb{C}}\times H^s(S^1,{\mathbb{C}}^{\mathsf{m}})^2\cong
{\mathbb{C}}\times H^s_+(S^1,{\mathbb{C}}^{\mathsf{m}})^2\times({\mathbb{C}}^{\mathsf{m}})^2\times H^s_-(S^1,{\mathbb{C}}^{\mathsf{m}})^2.
$$
The formula shows that the projection $\mathcal{N}\to\INT({\mathbb{D}})$ is a
surjective submersion. This proves~(i) and~(ii).
To prove~(iii) we observe that the projection
$H^s(S^1,{\mathbb{C}}^{\mathsf{m}})\to H^s_+(S^1,{\mathbb{C}}^{\mathsf{m}})$ and the evaluation map
$\INT({\mathbb{D}})\times H^s_+(S^1,{\mathbb{C}}^{\mathsf{m}})\to{\mathbb{C}}^{\mathsf{m}}:(z,\zeta)\mapsto\zeta(z)$
are holomorphic. Hence~(iii) follows from the identification
$$
H(x,y,t) = \xi_{xy,t,+}(x) + \eta_{xy,t,+}(y) + \lambda(xy,t)
$$
where $\lambda(z,t)$ denotes the common constant term
of the power series $\xi_{z,t}$ and~$\eta_{z,t}$.
This proves the lemma.
\end{proof}
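The splitting used in the proof of part~(iii) can be illustrated numerically, with ${\mathsf{m}}=1$ and the parameter $t$ suppressed: for a polynomial $v(x,y)=\sum_{p,q}c_{pq}x^py^q$ one has $\xi_z(x)=v(x,z/x)$, $\eta_z(y)=v(z/y,y)$, and $v(x,y)=\xi_{xy,+}(x)+\eta_{xy,+}(y)+\lambda(xy)$. The following sketch checks this identity; all names are illustrative.

```python
import numpy as np

# Illustration of the splitting H = xi_+ + eta_+ + lambda with m = 1 and the
# parameter t suppressed.  For a polynomial v(x, y) = sum c_{pq} x^p y^q,
# restricting to N_z (where y = z/x) gives
#   xi_z(x)  = v(x, z/x) = sum c_{pq} z^q x^{p-q},
#   eta_z(y) = v(z/y, y) = sum c_{pq} z^p y^{q-p},
# and v(x, y) = xi_{xy,+}(x) + eta_{xy,+}(y) + lambda(xy), where "+" keeps
# the strictly positive powers and lambda(z) = sum_p c_{pp} z^p is the
# common constant term.
rng = np.random.default_rng(2)
deg = 3
c = rng.normal(size=(deg + 1, deg + 1))
pairs = [(p, q) for p in range(deg + 1) for q in range(deg + 1)]

def v(x, y):
    return sum(c[p, q] * x ** p * y ** q for p, q in pairs)

def xi_plus(z, x):   # strictly positive powers of xi_z
    return sum(c[p, q] * z ** q * x ** (p - q) for p, q in pairs if p > q)

def eta_plus(z, y):  # strictly positive powers of eta_z
    return sum(c[p, q] * z ** p * y ** (q - p) for p, q in pairs if q > p)

def lam(z):          # common constant term lambda(z)
    return sum(c[p, p] * z ** p for p in range(deg + 1))

x, y = 0.4 + 0.3j, -0.2 + 0.5j
lhs = v(x, y)
rhs = xi_plus(x * y, x) + eta_plus(x * y, y) + lam(x * y)
print(abs(lhs - rhs) < 1e-12)  # True
```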
\begin{proof}[Proof of Theorem~\ref{thm:V}~(iii) and~(iv).]
We prove that $\mathcal{V}'$ is a complex Hil\-bert submanifold of $\mathcal{V}$.
As in the proof of~(i) we choose holomorphic coordinate
charts $\psi_{\mathsf{i}}:V'_{\mathsf{i}}\to{\mathbb{C}}^{\mathsf{m}}$ such that $\psi_{\mathsf{i}}(p_{\mathsf{i}})=0$
and $\psi_{\mathsf{i}}(V'_{\mathsf{i}})$ is the open unit disc in ${\mathbb{C}}^{\mathsf{m}}$ for every ${\mathsf{i}}$.
Define the map
$$
\mathcal{V}'\to A\times(H^s(S^1,{\mathbb{C}}^{\mathsf{m}}))^{2{\mathsf{k}}}:
(a,\beta)\mapsto(a,\xi_1,\eta_1,\dots,\xi_{\mathsf{k}},\eta_{\mathsf{k}})
$$
by
$$
\xi_{\mathsf{i}}:=\psi_{\mathsf{i}}\circ\beta\circ x_{\mathsf{i}}^{-1},\qquad
\eta_{\mathsf{i}}:=\psi_{\mathsf{i}}\circ\beta\circ y_{\mathsf{i}}^{-1}
$$
as in~(\ref{eq:xietai}). The image of this map is the subset
$$
\left\{(a,\xi_1,\eta_1,\dots,\xi_{\mathsf{k}},\eta_{\mathsf{k}})
\in A\times (H^s(S^1,{\mathbb{C}}^{\mathsf{m}}))^{2{\mathsf{k}}}\,\big|\,
(z_{\mathsf{i}}(a),\xi_{\mathsf{i}},\eta_{\mathsf{i}})\in\mathcal{N}\;\forall\;{\mathsf{i}}\right\}.
$$
By Lemma~\ref{le:localmodel}, this set
is a complex Hilbert submanifold of $A\times(H^s(S^1,{\mathbb{C}}^{\mathsf{m}}))^{2{\mathsf{k}}}$.
Hence $\mathcal{V}'$ is a complex Hilbert submanifold of $\mathcal{V}$ and the
projection $\mathcal{V}'\to A$ is a submersion.
The proof that $\mathcal{V}''$ is a complex Hilbert submanifold of $\mathcal{V}$ follows
the argument in the proof of~\cite[Theorem~11.9~(ii)]{RS}.
Define
\begin{equation*}
\begin{split}
\mathcal{B} &:= \bigl\{(a,h'')\,\big|\,a\in A,\,h''\in H^{s+1/2}(P''_a,M)\bigr\}, \\
\mathcal{Z} &:= \bigl\{(a,h'')\in\mathcal{B}\,\big|\,h''\in\Hol^{s+1/2}(P''_a,M)\bigr\}.
\end{split}
\end{equation*}
We construct an auxiliary Hilbert manifold structure on $\mathcal{B}$
and show that $\mathcal{Z}$ is a smooth submanifold of $\mathcal{B}$.
Fix a Hardy trivialization $(P=P'\cup P'',\iota,\rho)$
as in~\ref{hardyTrivialization} and denote
$$
\mathcal{B}_0:=\bigl\{(a,w)\,\big|\,a\in A,\,w\in H^{s+1/2}(\Omega,M)\bigr\}.
$$
This space is a Hilbert manifold and the Hardy trivialization
induces a bijection
$$
\mathcal{B}_0\to\mathcal{B}:(a,w)\mapsto(a,h'':=w\circ\rho_a).
$$
This defines the Hilbert manifold structure on $\mathcal{B}$.
The bijection $\mathcal{B}_0\to\mathcal{B}$ identifies the subset
$\mathcal{Z}\subset\mathcal{B}$ with the subset $\mathcal{Z}_0\subset\mathcal{B}_0$ given by
$$
\mathcal{Z}_0:=\bigl\{(a,w)\in\mathcal{B}_0\,\big|\,w\in\Hol^{s+1/2}((\Omega,j(a)),M)\bigr\},
$$
where $j(a):=(\rho_a)_*(J_P|P''_a)$, $\rho_a:P''_a\to \Omega$
is the Hardy trivialization, and $J_P$ is the complex structure on $P$.
(The map $a\mapsto j(a)$ need not be holomorphic.)
We prove that $\mathcal{Z}_0$ is a smooth Hilbert submanifold of $\mathcal{B}_0$.
The tangent space of $\mathcal{B}_0$ at a pair $(a,w)$ is
$$
T_{a,w}\mathcal{B}_0 = T_aA\times H^{s+1/2}(\Omega,w^*TM).
$$
Let $\mathcal{E}\to\mathcal{B}_0$ be the complex Hilbert space bundle
whose fiber
$$
\mathcal{E}_{a,w} := H^{s-1/2}(\Omega,
\Lambda_{j(a)}^{0,1}T^*\Omega\otimes w^*TM)
$$
over $(a,w)\in\mathcal{B}_0$ is the Sobolev space of $(0,1)$-forms
on $(\Omega,j(a))$ of class $H^{s-1/2}$ with values in the
pullback tangent bundle $w^*TM$. As before the
Cauchy--Riemann operator defines a smooth
section ${\overline\partial}:\mathcal{B}_0\to\mathcal{E}$ given by
\begin{equation}\label{eq:section}
{\overline\partial}(a,w) := \bar{\partial}_{j(a),J}(w)
= \frac12\left(dw + J\circ dw\circ j(a)\right).
\end{equation}
Here $J$ denotes the complex structure on $M$.
The zero set of this section is the set $\mathcal{Z}_0$ defined above.
It follows as in the proof of~(i) that the
linearized operator $D_{a,w}:T_{a,w}\mathcal{B}_0\to\mathcal{E}_{a,w}$
is surjective and has a right inverse. Hence the zero set $\mathcal{Z}_0$
is a smooth Hilbert submanifold of $\mathcal{B}_0$.
Again as in the proof of~(i) restriction to
the boundary gives rise to a smooth injective immersion
$$
\mathcal{Z}_0\to\mathcal{V}:(a,w)\mapsto(a,\beta),\qquad
\beta:=w\circ\rho_a|\Gamma_a.
$$
The image of a sufficiently small neighbourhood
of $(a_0,w_0:=H_A|\Omega)$ under this immersion is $\mathcal{V}''$;
the neighborhood is $\mathcal{Z}_0\cap(A\times\mathcal{W}_0)$ after shrinking
$A$ and $\mathcal{W}_0$, if necessary. Hence $\mathcal{V}''$ is a smooth
Hilbert submanifold of $\mathcal{V}$. That it is a complex submanifold
follows, as in the proof of Theorem~11.9 in~\cite{RS},
by introducing an auxiliary (almost) complex structure on $\mathcal{Z}_0$.
Namely, the push forward of the complex structure on $P''$
by the Hardy trivialization
$$
\pi_A\times\rho:P''\to A\times\Omega
$$
of~\ref{hardyTrivialization} has the form~(\ref{eq:JP})
for a smooth map $j:A\to\mathcal{J}(\Omega)$
and a smooth $1$-form $\eta:TA\to\Vect(\Omega)$
satisfying~(\ref{eq:jeta1}) and~(\ref{eq:jeta2}).
Since $\rho$ is holomorphic near ${\partial} P'$ with respect to
the complex structure of $\Omega$ it follows that
$\eta$ vanishes near $A\times{\partial}\Omega$. The tangent space
$T_{(a,w)}\mathcal{Z}_0$ is the kernel of the operator $\mathcal{D}_{a,w}$
from $T_{(a,w)}\mathcal{B}_0$ to $\Omega^{0,1}_{j(a)}(\Omega,w^*TM)$
given by
\begin{equation}\label{eq:Daw}
\mathcal{D}_{(a,w)}(\hat{a},\hat{w})=D_w\hat{w} +\frac12 J(w)dw\cdot dj(a)\hat{a}.
\end{equation}
It follows from~(\ref{eq:jeta1}) and~(\ref{eq:jeta2}) that the automorphisms
$$
\left(\hat{a},\hat{w}\right)\mapsto
\left(\sqrt{-1}\hat{a}, J(w)\hat{w}-dw\cdot\eta(a,\hat{a})\right)
$$
define an almost complex structure on $\mathcal{Z}_0$. Since $\eta$ vanishes
near the boundary, the embedding
$$
\mathcal{Z}_0\to A\times H^s(\Gamma,M): (a,w)\mapsto(a,w\circ\iota_{a_0})
$$
is holomorphic. Hence $\mathcal{V}''$ is a complex submanifold
of $\mathcal{V}$ as claimed.
That the projection
$\mathcal{V}''\to A$ is a submersion follows from the fact that the
linearized operator~(\ref{eq:Daw}) of the section~(\ref{eq:section})
is already surjective when differentiating in the direction
of a vector field $\hat w$ along~$w$.
This completes the proof of Theorem~\ref{thm:V}.
\end{proof}
\begin{definition}\rm\label{def:core}
Let $\pi_A:P\to A$ be a nodal family
and denote by
$$
C_1,\dots,C_{\mathsf{k}}\subset P
$$
the components of the singular set near $P_{a_0}$.
The set
$$
A_0:=\pi_A(C_1)\cap\cdots\cap\pi_A(C_{\mathsf{k}})
$$
is called the \jdef{core} of the family.
Recall from~\cite[Definition~12.1]{RS} that we
call $\pi_A$ \jdef{regular nodal} if
the submanifolds $\pi_A(C_{\mathsf{i}})$ intersect transversally.
In this case, the core $A_0$ is a complex submanifold of $A$
of codimension ${\mathsf{k}}$.
We call an unfolding $(\pi_A:P\to A,R_*,a_0)$ \jdef{regular nodal}
iff the ambient family $\pi_A:P\to A$ is regular nodal.
In~\cite[Theorem~5.6]{RS} we constructed a universal
unfolding which is regular nodal.
By the uniqueness of universal unfoldings it follows
(after shrinking $A$ if necessary)
that every universal unfolding is regular nodal.
\end{definition}
\begin{theorem}\label{thm:transverse}
Continue the notation of~\ref{Hardy}, \ref{hardyTrivialization},
\ref{cV}, and Definition~\ref{def:core}, and
fix an integer $s+1/2>1$. Assume that the unfolding
$(\pi_A,R_*,a_0)$ (of marked nodal Riemann surfaces)
is universal. Let $w_0:\Sigma\to P_{a_0}$
be a desingularization with induced structures
$s_{0,*}$, $\nu_0$, $j_0$, $v_0:=h_0\circ w_0$ on~$\Sigma$.
Then the configuration $(\Sigma,s_{0,*},\nu_0,j_0,v_0)$
is stable; assume that it is regular. Then the following holds.
\begin{description}
\item[(i)]
$\mathcal{V}'$ and $\mathcal{V}''$ intersect transversally in $\mathcal{V}$ at
$(a_0,\beta_0:=h_0|\Gamma_{a_0})$.
\item[(ii)]
The projection $\mathcal{V}'\cap\mathcal{V}''\to A$ is transverse
to $A_0$ at $(a_0,\beta_0)$.
\end{description}
\end{theorem}
\begin{proof}
Recall the auxiliary Hilbert manifold structure on $\mathcal{V}$ from~\ref{cV}
given by the bijection~(\ref{eq:Vtriv}). The tangent space at
$(a,\gamma)\in A\times H^s(\Gamma,M)$
is the set of pairs $(\hat{a},\hat{\gamma})$ with $\hat{a}\in T_aA$
and $\hat{\gamma}\in H^s(\Gamma,\gamma^*TM)$. We abuse notation
and write
$$
T_{(a,\beta)}\mathcal{V} = T_aA \times H^s(\Gamma,\gamma^*TM),\qquad
\gamma:=\beta\circ\iota_a.
$$
Below we prove the following.
\medskip\noindent
{\bf Claim:} {\it If $\hat\gamma\in\Omega^0(\Gamma,\gamma_0^*TM)$ is a smooth
vector field along $\gamma_0:=\beta_0\circ\iota_{a_0}$ then the pair
$(0,\hat\gamma)$ belongs to the sum $T_{(a_0,\beta_0)}\mathcal{V}'+T_{(a_0,\beta_0)}\mathcal{V}''$.}
\medskip\noindent
We show first that this claim implies~(i).
By part~(ii) of Theorem~\ref{thm:V} the sum
$T_{\beta_0}\mathcal{V}'_{a_0}+T_{\beta_0}\mathcal{V}''_{a_0}$ is a closed
subspace of $T_{\beta_0}\mathcal{V}_{a_0}$ and hence
$T_{(a_0,\beta_0)}\mathcal{V}'+T_{(a_0,\beta_0)}\mathcal{V}''$ is a closed
subspace of $T_{(a_0,\beta_0)}\mathcal{V}$. Hence the claim implies
that every vertical tangent vector $(0,\hat\gamma)$ with
$\hat\gamma\in H^s(\Gamma,\gamma^*TM)$ is contained in the sum
$T_{(a_0,\beta_0)}\mathcal{V}'+T_{(a_0,\beta_0)}\mathcal{V}''$. Since the projection
$\mathcal{V}'\to A$ is a submersion by part~(iv), this implies
$$
T_{(a_0,\beta_0)}\mathcal{V}'+T_{(a_0,\beta_0)}\mathcal{V}''=T_{(a_0,\beta_0)}\mathcal{V}.
$$
Thus we have proved that~(i) follows from the claim.
The desingularization $w_0:\Sigma\to P_{a_0}$ induces a decomposition
$$
\Sigma=\Sigma'\cup\Sigma'',\qquad
\Sigma':=w_0^{-1}(P'_{a_0}),\qquad
\Sigma'':=w_0^{-1}(P''_{a_0}).
$$
The intersection $\Sigma'\cap\Sigma''={\partial}\Sigma'={\partial}\Sigma''$ is diffeomorphic
to the $1$-manifold $\Gamma$ in~(\ref{eq:trivialize}).
To simplify the notation we assume that
$$
\Gamma=\Sigma'\cap\Sigma''.
$$
The core admits a smooth desingularization
$$
\iota:A_0\times\Sigma\to P_0:=\pi_A^{-1}(A_0)
$$
that agrees with $w_0:\Sigma\to P_{a_0}$ at the base point $a_0$ and
with the trivialization~(\ref{eq:trivialize}) on $A_0\times\Gamma$.
Choose $\iota$ so that it maps each component of $A_0\times\cup\nu$
to the corresponding component $C_{\mathsf{i}}$ of the singular set and so
that
$$
\iota^{-1}(R_{\mathsf{i}}) = A_0\times\{s_{0,{\mathsf{i}}}\},\qquad
{\mathsf{i}}=1,\dots,{\mathsf{n}}.
$$
For $a\in A_0$ define the desingularization
$\iota_a:\Sigma\to P_a$ by
$$
\iota_a(z):=\iota(a,z).
$$
The trivialization induces a map $j:A_0\to\mathcal{J}(\Sigma)$ determined by the
condition that $\iota_a$ is holomorphic with respect to $j(a)$ for every
$a\in A_0$. Since $(\pi_A,R_*,a_0)$ is a universal unfolding as in~\cite{RS},
the map $j:A\to\mathcal{J}(\Sigma)$ contains a local slice of the $\Diff(\Sigma)$-action.
We prove the claim. Let $\hat\gamma\in\Omega^0(\Gamma,\gamma_0^*TM)$
be a smooth vector field along~$\gamma_0$. There exist
$\xi'\in \Omega^0(\Sigma',v_0^*TM)$,
$\xi''\in \Omega^0(\Sigma'',v_0^*TM)$, and
$\eta\in\Omega^{0,1}(\Sigma,v_0^*TM)$
such that
$$
\hat{\gamma}=(\xi'-\xi'')|\Gamma,\qquad
D_{v_0}\xi'=\eta|\Sigma',\qquad
D_{v_0}\xi''=\eta|\Sigma''.
$$
To see this take $\xi'=0$ and construct $\xi''$
so that $D_{v_0}\xi''$ vanishes to infinite order along $\Gamma$.
(The equation determines the Taylor expansion of $\xi''$ along $\Gamma$;
one then uses Emile Borel's extension theorem.) By the hypothesis
that the stable map $(\Sigma,s_{0,*},\nu_0,j_0,v_0)$ is regular,
there exist $\hat{a}\in T_{a_0}A$ and $\hat v\in\Omega^0(\Sigma/\nu_0,v_0^*TM)$
such that
$$
\eta=\mathcal{D}_{a_0,v_0}(\hat{a},\hat v)
:=D_{v_0}\hat v + \frac12dv_0\cdot j_0dj(a_0)\hat{a}.
$$
It follows that the pair $(-\hat a,(\xi'-\hat v)|\Gamma)$ represents a tangent vector
to $\mathcal{V}'$ and the pair $(-\hat a,(\xi''-\hat v)|\Gamma)$ represents a tangent vector
to $\mathcal{V}''$. Their difference is equal to $(0,\hat\gamma)$. This proves the claim
and hence part~(i) of the theorem.
We prove~(ii).
By~(i) and Theorem~\ref{thm:V}~(ii), the intersection
$\mathcal{V}'\cap\mathcal{V}''$ has complex dimension
\begin{eqnarray*}
\dim_{\mathbb{C}}(\mathcal{V}'\cap\mathcal{V}'')
&=& \INDEX_{\mathbb{C}}(D_{v_0})+\dim_{\mathbb{C}}(A) \\
&=& ({\mathsf{m}}-3)(1-{\mathsf{g}}) + \inner{c_1}{{\mathsf{d}}} + {\mathsf{n}}
\end{eqnarray*}
where ${\mathsf{d}}:=[v_0]\in H_2(M;{\mathbb{Z}})$
denotes the homology class represented by $v_0$.
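For the reader's convenience we spell out the arithmetic behind this
dimension count: by the index formula
$\INDEX_{\mathbb{C}}(D_{v_0})={\mathsf{m}}(1-{\mathsf{g}})+\inner{c_1}{{\mathsf{d}}}$
stated below, and $\dim_{\mathbb{C}}(A)=3{\mathsf{g}}-3+{\mathsf{n}}$
(consistent with $\dim_{\mathbb{C}}A_0=3{\mathsf{g}}-3+{\mathsf{n}}-{\mathsf{k}}$,
since the core has codimension ${\mathsf{k}}$), we have
$$
{\mathsf{m}}(1-{\mathsf{g}})+\inner{c_1}{{\mathsf{d}}}+3{\mathsf{g}}-3+{\mathsf{n}}
= ({\mathsf{m}}-3)(1-{\mathsf{g}})+\inner{c_1}{{\mathsf{d}}}+{\mathsf{n}}.
$$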
Now abbreviate
$$
\gamma_0 := v_0|\Gamma = \beta_0\circ\iota_{a_0}:\Gamma\to M.
$$
Assertion~(ii) follows from the fact that the subspace
$$
\mathcal{X}_0 := \left\{(\hat a,\hat\gamma)
\in T_{(a_0,\beta_0)}\mathcal{V}'\cap T_{(a_0,\beta_0)}\mathcal{V}''\,\big|\,
\hat a\in T_{a_0}A_0\right\}
$$
has dimension
\begin{equation}\label{eq:dimX0}
\dim_{\mathbb{C}}\mathcal{X}_0 = ({\mathsf{m}}-3)(1-{\mathsf{g}}) + \inner{c_1}{{\mathsf{d}}} + {\mathsf{n}} - {\mathsf{k}}.
\end{equation}
To prove this we observe that the pair
$(\hat a,\hat\gamma)\in T_{a_0}A\times\Omega^0(\Gamma,\gamma_0^*TM)$
belongs to the intersection $T_{(a_0,\beta_0)}\mathcal{V}'\cap T_{(a_0,\beta_0)}\mathcal{V}''$
if and only if there exists a vector field $\hat v\in\Omega^0(\Sigma/\nu,v_0^*TM)$
satisfying
$$
\mathcal{D}_{a_0,v_0}(\hat{a},\hat v)
= D_{v_0}\hat v + \frac12dv_0\cdot j_0dj(a_0)\hat{a}
= 0,\qquad
\hat v|\Gamma=\hat\gamma.
$$
Since the restriction of the operator
$$
D_{v_0}:\Omega^0(\Sigma/\nu,v_0^*TM)\to\Omega^{0,1}(\Sigma,v_0^*TM)
$$
is Fredholm with index
$$
\INDEX_{\mathbb{C}}(D_{v_0}) = {\mathsf{m}}(1-{\mathsf{g}}) + \inner{c_1}{{\mathsf{d}}}
$$
and
$$
\dim_{\mathbb{C}} A_0 = 3{\mathsf{g}}-3 + {\mathsf{n}} - {\mathsf{k}}
$$
and the augmented operator
$$
\mathcal{D}_{a_0,v_0}:T_{a_0}A_0\times
\Omega^0(\Sigma/\nu,v_0^*TM)
\to\Omega^{0,1}(\Sigma,v_0^*TM)
$$
is surjective, this implies~(\ref{eq:dimX0}) and hence part~(ii)
of the theorem.
\end{proof}
\begin{PARA}\rm\label{infuniv}
For every $a\in A$ there is a map
\begin{equation}\label{eq:HB}
\mathcal{U}_a\to\mathcal{V}_a:(\alpha,b)\mapsto \beta:=H_B\circ\alpha
\end{equation}
which sends $\mathcal{U}'_a$ to $\mathcal{V}'_a$ and $\mathcal{U}''_a$ to $\mathcal{V}''_a$.
It follows from our definitions and Theorems~\ref{thm:U}
and~\ref{thm:V} that the unfolding $(\pi_B,S_*,H_B,b)$ is
infinitesimally universal if and only if the operator
$$
dH_B(\alpha):T_{(\alpha,b)}\mathcal{U}_a\to T_\beta\mathcal{V}_a
$$
induces isomorphisms
$$
dH_B(\alpha):T_{(\alpha,b)}\mathcal{U}'_a\cap T_{(\alpha,b)}\mathcal{U}''_a
\to T_\beta\mathcal{V}'_a\cap T_\beta\mathcal{V}''_a,
$$
$$
dH_B(\alpha):
\frac{T_{(\alpha,b)}\mathcal{U}_a}{T_{(\alpha,b)}\mathcal{U}'_a+T_{(\alpha,b)}\mathcal{U}''_a}
\to \frac{T_\beta\mathcal{V}_a}{T_\beta\mathcal{V}'_a+T_\beta\mathcal{V}''_a}
$$
for some (and hence every) unfolding $(\pi_A,R_*,H_A,a)$
and fiber isomorphism $f:P_a\to Q_b$. Thus~(\ref{eq:HB})
is an exact morphism of Fredholm quadruples as in~\ref{fredh}
below.
\end{PARA}
\section{Fredholm intersection theory} \label{sec:interfred}
\begin{PARA}\rm\label{fredE}
Let $E$ be a Hilbert space and $E',E''\subset E$ be closed subspaces.
We call $(E,E',E'')$ a \jdef{Fredholm triple} (of subspaces) if
the intersection $E'\cap E''$ is finite dimensional,
the sum $E'+E''$ is a closed subspace of $E$,
and the quotient $E/(E'+E'')$ is finite dimensional.
The triple $(E,E',E'')$ is Fredholm if and only if the operator
\begin{equation}\label{eq:E}
E'\times E''\to E:(x',x'')\mapsto x'+x''
\end{equation}
is Fredholm. The \jdef{Fredholm index} of the triple is defined as the
Fredholm index of the operator~(\ref{eq:E}). The image of~(\ref{eq:E})
is the sum $E'+E''$ and its kernel is isomorphic to $E'\cap E''$
via the inclusion
$$
E'\cap E''\to E'\times E'':x\mapsto(x,-x).
$$
Hence the index of the triple $(E,E',E'')$ is
$$
\INDEX(E,E',E'') := \dim(E'\cap E'') - \dim(E/(E'+E'')).
$$
Standard Fredholm theory implies that the Fredholm property
and the index are stable under small deformations
of the subspaces $E'$ and $E''$.
\end{PARA}
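In finite dimensions every such triple is Fredholm, with index $\dim E'+\dim E''-\dim E$. The following sketch (with generic random subspaces of ${\mathbb{R}}^5$, all names illustrative) checks the index formula against the Fredholm index of the operator~(\ref{eq:E}).

```python
import numpy as np

# Finite dimensional illustration of a Fredholm triple (E, E', E''):
# E = R^5 with E', E'' spanned by the columns of generic random matrices.
# The index dim(E' cap E'') - dim(E/(E'+E'')) agrees with the Fredholm
# index of the sum operator E' x E'' -> E.
rng = np.random.default_rng(3)
dim_E = 5
A = rng.normal(size=(dim_E, 3))  # columns span E'
B = rng.normal(size=(dim_E, 3))  # columns span E''

dim_Ep = np.linalg.matrix_rank(A)                   # dim E'
dim_Epp = np.linalg.matrix_rank(B)                  # dim E''
dim_sum = np.linalg.matrix_rank(np.hstack([A, B]))  # dim(E' + E'')

dim_cap = dim_Ep + dim_Epp - dim_sum  # dim(E' cap E'')
dim_quot = dim_E - dim_sum            # dim E/(E' + E'')
index_triple = dim_cap - dim_quot

# Fredholm index of (x', x'') -> x' + x'', represented by [A B]:
S = np.hstack([A, B])
rank_S = np.linalg.matrix_rank(S)
index_op = (S.shape[1] - rank_S) - (dim_E - rank_S)  # dim ker - dim coker

print(index_triple == index_op == 1)  # True
```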
\begin{PARA}\rm\label{fredX}
Let $X$ be a Hilbert manifold, $X',X''\subset X$
be smooth submanifolds, and $x_0\in X'\cap X''$.
We call the quadruple $(X,X',X'',x_0)$ \jdef{Fredholm}
if the triple $(T_{x_0}X,T_{x_0}X',T_{x_0}X'')$ is Fredholm.
Define its \jdef{Fredholm index} to be the index of the triple.
If $(X,X',X'',x_0)$ is Fredholm then so is $(X,X',X'',x)$
for every $x\in X'\cap X''$ sufficiently close to $x_0$, and both
quadruples have the same Fredholm index.
\end{PARA}
\begin{lemma}[Normal coordinates]\label{le:fredX}
Let $(X,X',X'',x_0)$ be a Fredholm qua\-druple as in~\ref{fredX}
and abbreviate
$$
E:=T_{x_0}X,\qquad E':=T_{x_0}X',\qquad E'':=T_{x_0}X''.
$$
Then there are coordinates $u,x',x'',\xi$ defined in a neighborhood
of $x_0$ in $X$ satisfying the following conditions.
\begin{description}
\item[(i)]
$u$ takes values in $E'\cap E''$ and $u(x_0)=0$.
\item[(ii)]
$x'$ takes values in a complement to $E'\cap E''$
in $E'$ and $x'(x_0)=0$.
\item[(iii)]
$x''$ takes values in a complement to $E'\cap E''$
in $E''$ and ${x''(x_0)=0}$.
\item[(iv)]
$\xi$ takes values in a complement to $E'+E''$
in $E$ and $\xi(x_0)=0$.
\item[(v)]
Near $x_0$ the submanifolds $X'$, $X''$ and the subset $X'\cap X''$
are given by
$$
X''=\{x'=0,\xi=0\},\qquad X'=\{x''=0,\xi=f(u,x')\},
$$
$$
X'\cap X'' = \{x'=0,x''=0,\xi=0,f(u,0)=0\}
$$
for a smooth function $f$ with $f(0,0)=0$ and $df(0,0)=0$.
\end{description}
\end{lemma}
\begin{proof}
Choose any coordinate chart $(X'',x_0)\to(E'',0)$ whose differential
at $x_0$ is the identity. This coordinate chart can be written as
$(u,x'')$ where $u$ takes values in $E'\cap E''$ and $x''$
takes values in a complement of $E'\cap E''$ in $E''$.
Extend $(u,x'')$ to a coordinate chart $(X,x_0)\to(E,0)$.
This extended coordinate chart can be written as $(u,x',x'',\xi)$
where $x'$ takes values in a complement of $E'\cap E''$ in $E'$
and $\xi$ takes values in a complement of $E'+E''$ in $E$.
In these coordinates we have
$$
X''=\{x'=0,\xi=0\},\qquad X'=\{x''=\phi(u,x'),\xi=f(u,x')\}
$$
where $\phi(0,0)=0$, $d\phi(0,0)=0$ and $f(0,0)=0$,
$df(0,0)=0$. Now replace $x''$ by $x''-\phi(u,x')$
to obtain the required coordinate system.
\end{proof}
\begin{corollary}\label{cor:U'U''}
Let $(X,X',X'',x_0)$ be as in Lemma~\ref{le:fredX}.
Then there exists a neighborhood $X_0$ of $x_0$ in $X$
and finite dimensional submanifolds $U$, $U'$, $U''$
of $X$, $X'$, $X''$, respectively, passing through $x_0$
such that
$$
U'=U\cap X',\qquad U''=U\cap X'',\qquad
U'\cap U'' =X_0\cap X'\cap X''
$$
and, for $x\in U'\cap U''$, we have
$$
T_xU'\cap T_xU''=T_xX'\cap T_xX'',\qquad
\frac{T_xU}{T_xU'+T_xU''}
\cong \frac{T_xX}{T_xX'+T_xX''}.
$$
We call $(U,U',U'',x_0)$ a \jdef{finite dimensional reduction}.
\end{corollary}
\begin{proof}
Let $X_0$ be the domain of the normal form coordinates $u,x',x'',\xi$
introduced in Lemma~\ref{le:fredX}. Then
$$
X_0\cap X'\cap X'' = \{(u,0,0,0)\,|\,f(u,0)=0\},
$$
$$
T_xX'\cap T_xX'' = \left\{(\hat u,0,0,0)\,|\,
df(u,0)(\hat u,0)=0\right\},
$$
$$
T_xX'+T_xX'' = \left\{(\hat u,\hat x',\hat x'',\hat\xi)\,\bigg|\,
\hat\xi-\frac{{\partial} f}{{\partial} x'}\hat x' \in\im\frac{{\partial} f}{{\partial} u}\right\}
$$
for $x=(u,0,0,0)\in X_0\cap X'\cap X''$. Hence the submanifolds
\begin{equation}\label{eq:U}
U:=\{(u,0,0,\xi)\},\qquad
U' := \{(u,0,0,f(u,0))\},\qquad
U'':=\{(u,0,0,0)\}
\end{equation}
satisfy the requirements of the corollary.
\end{proof}
\begin{PARA}\rm\label{fredh}
A \jdef{morphism} from $(X,X',X'',x_0)$ to $(Y,Y',Y'',y_0)$
is a smooth map $h:X\to Y$ such that
$$
h(X')\subset Y',\qquad h(X'')\subset Y'',\qquad h(x_0)=y_0.
$$
The morphism $h$ is called \jdef{exact (at $x_0$)} if the differential
$dh(x_0):T_{x_0}X\to T_{y_0}Y$ induces isomorphisms
$$
dh(x_0):T_{x_0}X'\cap T_{x_0}X''
\to T_{y_0}Y'\cap T_{y_0}Y''
$$
and
$$
dh(x_0):\frac{T_{x_0}X}{T_{x_0}X'+T_{x_0}X''}
\to\frac{T_{y_0}Y}{T_{y_0}Y'+T_{y_0}Y''}.
$$
The inclusion of a finite dimensional reduction is an example of an
exact morphism.
\end{PARA}
\begin{theorem}\label{thm:fredh}
Let $h:(X,X',X'',x_0)\to(Y,Y',Y'',y_0)$ be a morphism
of Fredholm quadruples. Then the following are equivalent.
\begin{description}
\item[(i)]
$h$ is exact at $x_0$.
\item[(ii)]
There exist finite dimensional reductions
$(U,U',U'',x_0)$ of $(X,X',X'',x_0)$ and $(V,V',V'',y_0)$ of
$(Y,Y',Y'',y_0)$ such that $h$ maps $U$, $U'$, $U''$
diffeomorphically onto $V$, $V'$, $V''$, respectively.
\end{description}
\end{theorem}
\begin{proof}
We prove that~(ii) implies~(i). By~(ii), the
homomorphism $dh(x_0)$ from $T_{x_0}X'\cap T_{x_0}X''$
to $T_{y_0}Y'\cap T_{y_0}Y''$ can be written as the composition
$$
T_{x_0}X'\cap T_{x_0}X''
= T_{x_0}U'\cap T_{x_0}U''
\stackrel{dh(x_0)}{\longrightarrow} T_{y_0}V'\cap T_{y_0}V''
= T_{y_0}Y'\cap T_{y_0}Y''
$$
and hence is an isomorphism. Similarly for the
map from $T_{x_0}X/(T_{x_0}X'+T_{x_0}X'')$ to
${T_{y_0}Y/(T_{y_0}Y'+T_{y_0}Y'')}$.
We prove that~(i) implies~(ii).
Let $u,x',x'',\xi$ be the normal coordinates
on $X$ introduced in Lemma~\ref{le:fredX} and choose similar
normal coordinates $v,y',y'',\eta$ on $Y$ at $y_0$.
Thus
\begin{equation}\label{eq:Y1}
Y''=\{y'=0,\eta=0\},\qquad Y'=\{y''=0,\eta=g(v,y')\},
\end{equation}
\begin{equation}\label{eq:Y2}
Y'\cap Y'' = \{y'=0,y''=0,\eta=0,g(v,0)=0\}
\end{equation}
for a smooth function $g$ with $g(0,0)=0$ and $dg(0,0)=0$.
In these coordinates the morphism $h=(h_1,h_2,h_3,h_4)$
satisfies
\begin{equation}\label{eq:h1}
h_2(u,0,x'',0)=0,\qquad h_4(u,0,x'',0)=0
\end{equation}
(because $h(X'')\subset Y''$),
\begin{equation}\label{eq:h2}
h_3(u,x',0,f(u,x'))=0,
\end{equation}
\begin{equation}\label{eq:h3}
h_4(u,x',0,f(u,x'))=g(h_1(u,x',0,f(u,x')),h_2(u,x',0,f(u,x')))
\end{equation}
(because $h(X')\subset Y'$), and
\begin{equation}\label{eq:h4}
\det({\partial} h_1/{\partial} u)(0,0,0,0)\ne 0,\qquad
\det({\partial} h_4/{\partial}\xi)(0,0,0,0)\ne 0
\end{equation}
(because $h$ is exact). By~(\ref{eq:h1}) and~(\ref{eq:h4}), the restriction of
$h$ to a neighborhood of $x_0$ in $U$ is an embedding.
Shrinking the domain $X_0\subset X$ of the normal coordinates,
if necessary, we may assume that $h|U:U\to Y$ is an embedding.
Denote
$$
V:=h(U),\qquad V':=h(U'),\qquad V'':=h(U'').
$$
We must prove that $(V,V',V'',y_0)$ is a finite dimensional
reduction.
\begin{description}
\item[(a)]
The set $V$ consists of all quadruples of the form $(v,y',y'',\eta)$
where
$$
y':=h_2(u,0,0,\xi),\qquad y'':=h_3(u,0,0,\xi)
$$
and $u,\xi$ are defined by $h_1(u,0,0,\xi)=v$, $h_4(u,0,0,\xi)=\eta$.
\item[(b)]
The set $V'$ consists of all quadruples of the form $(v,y',0,g(v,y'))$
where
$$
y':=h_2(u,0,0,f(u,0)),\qquad
h_1(u,0,0,f(u,0)):=v.
$$
\item[(c)]
The set $V''$ consists of all quadruples of the form $(v,0,y'',0)$
where
$$
y'':=h_3(u,0,0,0),\qquad
h_1(u,0,0,0):=v.
$$
\end{description}
Thus a point in the intersection $V'\cap V''$ has the form
$(v,0,0,0)$ where $v$ satisfies the conditions
\begin{description}
\item[(i)] $g(v,0)=0$.
\item[(ii)] If $u$ is defined by $h_1(u,0,0,f(u,0)):=v$
then $h_2(u,0,0,f(u,0))=0$.
\item[(iii)] If $u$ is defined by $h_1(u,0,0,0):=v$
then $h_3(u,0,0,0)=0$.
\end{description}
We show that~(i) implies~(ii) and~(iii) whenever $v$ is sufficiently small.
For~(ii) we define $u$ as the unique solution of $h_1(u,0,0,f(u,0))=v$
so that
\begin{equation}\label{eq:gh2h4}
g(v,0)=0,\qquad
g(v,h_2(u,0,0,f(u,0)))= h_4(u,0,0,f(u,0)).
\end{equation}
We claim that for $v$ sufficiently small this implies $f(u,0)=0$.
To see this we use first that the solution $u$ of the equation
$h_1(u,0,0,f(u,0))=v$ satisfies an inequality
\begin{equation}\label{eq:fu1}
\Norm{u}+\Norm{f(u,0)}\le c\Norm{v}
\end{equation}
for $v$ sufficiently small. Next we use the fact that
$h_2(u,0,0,0)=0$ and hence
\begin{equation}\label{eq:fu2}
\Norm{h_2(u,0,0,\xi)}\le c\Norm{\xi}.
\end{equation}
Third, we have that $h_4(u,0,0,0)=0$ and ${\partial} h_4/{\partial}\xi$
is invertible at the point $(0,0,0,0)$, hence also at the point
$(u,0,0,0)$ for $u$ sufficiently small. Hence we have
an inequality
\begin{equation}\label{eq:fu3}
\Norm{h_4(u,0,0,\xi)}\ge c^{-1}\Norm{\xi}
\end{equation}
for a suitable constant $c>0$ and $u$ and $\xi$ sufficiently small.
Fourth, since $g(0,0)=0$ and $dg(0,0)=0$, there is an inequality
\begin{equation}\label{eq:fu4}
\Norm{g(v,y')-g(v,0)}\le c\left(\Norm{v}+\Norm{y'}\right)\Norm{y'}
\end{equation}
for a suitable constant $c$. Putting these four inequalities together
and inserting $\xi=f(u,0)$ and $y'=h_2(u,0,0,f(u,0))$ we deduce
$$
\begin{array}{lclr}
\Norm{f(u,0)}
&\le&
c \Norm{h_4(u,0,0,f(u,0))} & \mbox{by }(\ref{eq:fu3}) \\
&=&
c\Norm{g(v,h_2(u,0,0,f(u,0)))-g(v,0)} & \mbox{by }(\ref{eq:gh2h4}) \\
&\le&
c^2\left(\Norm{v}+\Norm{h_2(u,0,0,f(u,0))}\right)
\Norm{h_2(u,0,0,f(u,0))} & \mbox{by }(\ref{eq:fu4}) \\
&\le&
c^3\left(\Norm{v}+c\Norm{f(u,0)}\right)\Norm{f(u,0)}
& \mbox{by }(\ref{eq:fu2}) \\
&\le&
(c^3+c^5)\Norm{v}\Norm{f(u,0)} & \mbox{by }(\ref{eq:fu1})
\end{array}
$$
for $v$ sufficiently small. With $(c^3+c^5)\Norm{v}<1$ this implies
$$
f(u,0)=0
$$
as claimed and hence $h_2(u,0,0,f(u,0))=0$,
by~(\ref{eq:h1}). Thus we have proved that~(i) implies~(ii).
Since $f(u,0)=0$ we also deduce that our $u$
is the unique solution of $h_1(u,0,0,0)=v$ needed in~(iii).
Using $f(u,0)=0$ again we obtain $h_3(u,0,0,0)=0$,
by~(\ref{eq:h2}). Thus we have proved that~(i) implies~(ii)
and~(iii) and hence
$$
V'\cap V'' = \left\{(v,0,0,0)\,|\,g(v,0)=0\right\}
= Y_0\cap Y'\cap Y''
$$
for a suitable open neighborhood $Y_0$ of $y_0$ in $Y$.
Next we examine the tangent spaces of $V$, $V'$, and $V''$
at a point
$$
y:=(v,0,0,0)\in V'\cap V'',\qquad g(v,0)=0.
$$
Let $x=(u,0,0,0)\in U'\cap U''$ with $f(u,0)=0$ be the
element with $h(x)=y$.
\begin{description}
\item[(A)]
The tangent space $T_yV$ consists of all vectors
$\hat y=(\hat v,\hat y',\hat y'',\hat \eta)$ where
$$
\hat y':=\frac{{\partial} h_2}{{\partial}\xi}\hat\xi,\qquad
\hat y'':=\frac{{\partial} h_3}{{\partial} u}\hat u+\frac{{\partial} h_3}{{\partial}\xi}\hat\xi
$$
and $\hat u,\hat\xi$ are defined by
\begin{equation}\label{eq:uhat}
\hat u:= \left(\frac{{\partial} h_1}{{\partial} u}\right)^{-1}
\left(\hat v-\frac{{\partial} h_1}{{\partial}\xi}\hat\xi\right)
\end{equation}
\begin{equation}\label{eq:etahat}
\hat\xi:=\left(\frac{{\partial} h_4}{{\partial}\xi}\right)^{-1}\hat\eta.
\end{equation}
Here and below all partial derivatives of $h$ are evaluated
at $x=(u,0,0,0)$ and we have used the fact that
${\partial} h_2/{\partial} u$ and ${\partial} h_4/{\partial} u$ vanish at $x$, by~(\ref{eq:h1}).
\item[(B)]
The tangent space $T_yV'$ consists of all vectors
$\hat y=(\hat v,\hat y',0,\hat \eta)$ where
\begin{equation}\label{eq:uhat-etahat}
\hat y':=\frac{{\partial} h_2}{{\partial}\xi}\frac{{\partial} f}{{\partial} u}\hat u,\qquad
\hat\eta:=\frac{{\partial} g}{{\partial} v}\hat v+\frac{{\partial} g}{{\partial} y'}\hat y'
\end{equation}
and $\hat u$ is defined by
\begin{equation}\label{eq:uhatB}
\hat u:= \left(\frac{{\partial} h_1}{{\partial} u}+\frac{{\partial} h_1}{{\partial}\xi}\frac{{\partial} f}{{\partial} u}\right)^{-1}\hat v.
\end{equation}
Here and below all partial derivatives of $f$ are evaluated
at $(u,0)$ and all partial derivatives of $g$ at $(v,0)$.
\item[(C)]
The tangent space $T_yV''$ consists of all vectors
$\hat y=(\hat v,0,\hat y'',0)$ where
\begin{equation}\label{eq:C}
\hat y'':=-\frac{{\partial} h_3}{{\partial}\xi}\frac{{\partial} f}{{\partial} u}\hat u,\qquad
\hat u:=\left(\frac{{\partial} h_1}{{\partial} u}\right)^{-1}\hat v.
\end{equation}
Note that $-({\partial} h_3/{\partial}\xi)({\partial} f/{\partial} u)={\partial} h_3/{\partial} u$, by~(\ref{eq:h2}).
\end{description}
We prove that the intersection $T_yV'\cap T_yV''$ consists of
all vectors $\hat y=(\hat v,0,0,0)$ where $\hat v$ satisfies
the conditions
\begin{equation}\label{eq:vhat1}
\frac{{\partial} g}{{\partial} v}\hat v=0,
\end{equation}
\begin{equation}\label{eq:vhat2}
\frac{{\partial} f}{{\partial} u}\hat u=0
\end{equation}
where $\hat u$ is given by~(\ref{eq:uhatB}).
First assume $\hat v$ satisfies~(\ref{eq:vhat1}) and~(\ref{eq:vhat2}).
We show that $\hat y:=(\hat v,0,0,0)\in T_yV'\cap T_yV''$.
By~(\ref{eq:vhat2}), we have $\hat y'=0$ in~(\ref{eq:uhat-etahat}) and hence,
by~(\ref{eq:vhat1}), $\hat\eta =({\partial} g/{\partial} v)\hat v=0$ in~(\ref{eq:uhat-etahat}).
Thus $\hat y\in T_yV'$.
Moreover the vector $\hat u$ in~(\ref{eq:uhatB})
satisfies $({\partial} h_1/{\partial} u)\hat u=\hat v$ by~(\ref{eq:vhat2})
and, also by~(\ref{eq:vhat2}), we have $\hat y''=0$ in~(\ref{eq:C}).
Thus $\hat y\in T_yV''$.
Conversely assume $\hat y\in T_yV'\cap T_yV''$.
We show that $\hat y=(\hat v,0,0,0)$ where $\hat v$
satisfies~(\ref{eq:vhat1}) and~(\ref{eq:vhat2}).
That $\hat y$ has the form $(\hat v,0,0,0)$ follows immediately
from~(B) and~(C). Equation~(\ref{eq:vhat1}) follows immediately
from~(B) and the fact that $\hat y'=0$. To prove that $\hat v$
satisfies~(\ref{eq:vhat2}) we differentiate equation~(\ref{eq:h3})
at the point $x=(u,0,0,0)$ with respect to $u$ to obtain
\begin{equation}\label{eq:h5}
\frac{{\partial} h_4}{{\partial}\xi}\frac{{\partial} f}{{\partial} u}
= \frac{{\partial} g}{{\partial} v}\left(\frac{{\partial} h_1}{{\partial} u} + \frac{{\partial} h_1}{{\partial}\xi}\frac{{\partial} f}{{\partial} u}\right)
+ \frac{{\partial} g}{{\partial} y'} \frac{{\partial} h_2}{{\partial}\xi}\frac{{\partial} f}{{\partial} u}.
\end{equation}
Here we have used the fact that ${\partial} h_2/{\partial} u$ and ${\partial} h_4/{\partial} u$
vanish at $x$, by~(\ref{eq:h1}). Evaluating~(\ref{eq:h5})
in the direction of the vector $\hat u$ in~(\ref{eq:vhat2})
gives
$$
\frac{{\partial} h_4}{{\partial}\xi}\frac{{\partial} f}{{\partial} u}\hat u
= \frac{{\partial} g}{{\partial} v}\hat v + \frac{{\partial} g}{{\partial} y'}\hat y'
= 0.
$$
Since ${\partial} h_4/{\partial}\xi$ is invertible this proves~(\ref{eq:vhat2}).
We prove that
\begin{equation}\label{eq:TV'TV''}
T_yV'\cap T_yV''=\left\{(\hat v,0,0,0)\,\Big|\,\frac{{\partial} g}{{\partial} v}\hat v=0\right\},
\end{equation}
i.e. that~(\ref{eq:vhat1}) implies~(\ref{eq:vhat2}).
Let $\hat u$ be given by~(\ref{eq:uhatB}) and abbreviate
$$
\hat\xi := \frac{{\partial} f}{{\partial} u}\hat u.
$$
Evaluating~(\ref{eq:h5}) again in the direction of the vector
$\hat u$ in~(\ref{eq:vhat2}) and using~(\ref{eq:vhat1}) we obtain
$$
\frac{{\partial} h_4}{{\partial}\xi}\hat\xi
= \frac{{\partial} g}{{\partial} y'} \frac{{\partial} h_2}{{\partial}\xi}\hat\xi.
$$
Since ${\partial} g/{\partial} y'$ vanishes at the origin it is small when
$v$ is small and hence, in this case, $\hat\xi=0$ as claimed.
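In more detail, since ${\partial} h_4/{\partial}\xi$ is invertible the displayed identity can be rewritten as
$$
\hat\xi = \left(\frac{{\partial} h_4}{{\partial}\xi}\right)^{-1}
\frac{{\partial} g}{{\partial} y'}\frac{{\partial} h_2}{{\partial}\xi}\,\hat\xi,
$$
and for $v$ sufficiently small the operator on the right has norm less than one, so its only fixed point is $\hat\xi=0$.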
This proves~(\ref{eq:TV'TV''}).
By~(\ref{eq:Y1}), the right hand side of~(\ref{eq:TV'TV''})
is $T_yY'\cap T_yY''$.
This proves that
$$
T_yV'\cap T_yV''=T_yY'\cap T_yY''.
$$
It remains to prove that
\begin{equation}\label{eq:VY1}
\frac{T_yV}{T_yV'+T_yV''}
\cong \frac{T_yY}{T_yY'+T_yY''}.
\end{equation}
Since $T_yV'\cap T_yV''=T_yY'\cap T_yY''$ and the Fredholm
quadruples $(V,V',V'',y)$ and $(Y,Y',Y'',y)$ have the same Fredholm index
for $y\in V'\cap V''$ sufficiently small, both quotient spaces have the same
dimension. Hence condition~(\ref{eq:VY1}) is equivalent to
\begin{equation}\label{eq:VY2}
T_yV\cap(T_yY'+T_yY'') \subset T_yV'+T_yV''.
\end{equation}
The sum $T_yY'+T_yY''$ is the set of all vectors
$\hat y=(\hat v,\hat y',\hat y'',\hat \eta)$ that satisfy
\begin{equation}\label{eq:TY}
\hat\eta-\frac{{\partial} g}{{\partial} y'}\hat y' \in\im\left(\frac{{\partial} g}{{\partial} v}\right).
\end{equation}
To prove~(\ref{eq:VY2}) fix a vector
$\hat y=(\hat v,\hat y',\hat y'',\hat \eta)\in T_yV\cap(T_yY'+T_yY'')$.
By~(\ref{eq:TY}) there is a vector $\hat v'$ such that
\begin{equation}\label{eq:v'}
\hat\eta-\frac{{\partial} g}{{\partial} y'}\hat y' = \frac{{\partial} g}{{\partial} v}\hat v'.
\end{equation}
We prove that
\begin{equation}\label{eq:yhat}
(\hat v',\hat y',0,\hat\eta)\in T_yV',\qquad
(\hat v'',0,\hat y'',0)\in T_yV'',\qquad \hat v'' := \hat v - \hat v'.
\end{equation}
To see this define the vectors $\hat u$ and $\hat\xi$ by
\begin{equation}\label{eq:uhatxihat}
\frac{{\partial} h_1}{{\partial} u}\hat u+\frac{{\partial} h_1}{{\partial}\xi}\hat\xi=\hat v,\qquad
\frac{{\partial} h_4}{{\partial}\xi}\hat\xi=\hat\eta
\end{equation}
as in~(A) so that
\begin{equation}\label{eq:yhat'}
\hat y' =\frac{{\partial} h_2}{{\partial}\xi}\hat\xi,\qquad
\hat y'' =\frac{{\partial} h_3}{{\partial} u}\hat u+\frac{{\partial} h_3}{{\partial}\xi}\hat\xi.
\end{equation}
Next define $\hat u'$ and $\hat u''$ by
\begin{equation}\label{eq:uhat'}
\frac{{\partial} h_1}{{\partial} u}\hat u'
+\frac{{\partial} h_1}{{\partial}\xi}\frac{{\partial} f}{{\partial} u}\hat u':=\hat v',\qquad
\frac{{\partial} h_1}{{\partial} u}\hat u'' := \hat v''.
\end{equation}
Then, by~(\ref{eq:h5}), (\ref{eq:v'}), and~(\ref{eq:uhatxihat})--(\ref{eq:uhat'}), we have
\begin{eqnarray*}
\frac{{\partial} h_4}{{\partial}\xi}\left(\frac{{\partial} f}{{\partial} u}\hat u'-\hat\xi\right)
&=&
\frac{{\partial} g}{{\partial} v}
\left(\frac{{\partial} h_1}{{\partial} u} + \frac{{\partial} h_1}{{\partial}\xi}\frac{{\partial} f}{{\partial} u}\right)\hat u'
+ \frac{{\partial} g}{{\partial} y'}\frac{{\partial} h_2}{{\partial}\xi}\frac{{\partial} f}{{\partial} u}\hat u'
- \hat\eta \\
&=&
\frac{{\partial} g}{{\partial} v}\hat v' + \frac{{\partial} g}{{\partial} y'}\hat y'
- \hat\eta + \frac{{\partial} g}{{\partial} y'}\frac{{\partial} h_2}{{\partial}\xi}
\left(\frac{{\partial} f}{{\partial} u}\hat u'-\hat\xi\right) \\
&=&
\frac{{\partial} g}{{\partial} y'}\frac{{\partial} h_2}{{\partial}\xi}
\left(\frac{{\partial} f}{{\partial} u}\hat u'-\hat\xi\right).
\end{eqnarray*}
Since ${\partial} g/{\partial} y'$ is small when $v$ is small this implies
$$
\hat\xi = \frac{{\partial} f}{{\partial} u}\hat u',\qquad \hat u'+\hat u''=\hat u.
$$
Here the last equation follows from the first
and~(\ref{eq:uhatxihat}) and~(\ref{eq:uhat'}).
Now it follows from~(\ref{eq:yhat'}) that
$$
\hat y'' =\frac{{\partial} h_3}{{\partial} u}\hat u+\frac{{\partial} h_3}{{\partial}\xi}\hat\xi
= \left(\frac{{\partial} h_3}{{\partial} u}+\frac{{\partial} h_3}{{\partial}\xi}\frac{{\partial} f}{{\partial} u}\right)\hat u'
+ \frac{{\partial} h_3}{{\partial} u}\hat u''
= \frac{{\partial} h_3}{{\partial} u}\hat u''.
$$
Combining this with~(C) and~(\ref{eq:uhat'}) we find that
$(\hat v'',0,\hat y'',0)\in T_yV''$. Likewise it follows from~(B)
and~(\ref{eq:v'}), (\ref{eq:yhat'}) and~(\ref{eq:uhat'})
that $(\hat v',\hat y',0,\hat\eta)\in T_yV'$. Thus we have
proved~(\ref{eq:yhat}). This completes the proof
of~(\ref{eq:VY2}) and the theorem.
\end{proof}
Let $A\subset X$ and $B\subset Y$ be arbitrary subsets.
Recall that $\phi:A\to B$ is by definition a diffeomorphism
if it is bijective and $\phi$ and $\phi^{-1}$ are smooth,
i.e.\ for every point $x\in A$ there is a smooth
extension of $\phi$ from a neighborhood of $x$ in $X$ to $Y$,
and for every point $y\in B$ there is a smooth
extension of $\phi^{-1}$ from a neighborhood
of $y$ in $Y$ to $X$ (see~\cite{MILNOR}).
\begin{corollary}\label{cor:fredh}
Let $h:(X,X',X'',x_0)\to(Y,Y',Y'',y_0)$ be an exact morphism
of Fredholm quadruples. Then the following holds.
\smallskip\noindent{\bf (I)}
$h$ maps a neighborhood of $x_0$ in $X'\cap X''$
diffeomorphically onto a neighborhood of $y_0$ in $Y'\cap Y''$.
\smallskip\noindent{\bf (II)}
$h$ is exact at every point $x\in X'\cap X''$ sufficiently close to $x_0$.
\end{corollary}
\begin{proof}
Of course $X'\cap X''$ need not be a manifold.
Let $(U,U',U'')$ and $(V,V',V'')$ be the finite dimensional reductions
of Theorem~\ref{thm:fredh}. Then assertion~(I) follows from
the fact that $h^{-1}:V\to U$ extends to a smooth map from
a neighborhood of $V$ to~$X$. Assertion~(II) follows from
the equivalence of~(i) and~(ii) in Theorem~\ref{thm:fredh};
namely, if~(ii) holds for $x_0$ then it also holds for every point
$x\in X'\cap X''$ sufficiently close to $x_0$ (with the same
finite dimensional reductions). This proves the corollary.
\end{proof}
\begin{theorem}\label{thm:fredstable}
Let $h_\lambda:(X,X'_\lambda,X''_\lambda)\to(Y,Y'_\lambda,Y''_\lambda)$
be a smooth family of morphisms of Fredholm triples parametrized by
$\lambda\in\Lambda$, where $\Lambda$ is a finite dimensional
manifold, i.e. the map
$$
h:\Lambda\times X\to\Lambda\times Y,\qquad
h(\lambda,x):=(\lambda,h_\lambda(x)),
$$
is smooth, the sets
$$
X':=\bigsqcup_\lambda X_\lambda',\qquad
X'':=\bigsqcup_\lambda X_\lambda''
$$
are smooth submanifolds of $\Lambda\times X$, the sets
$$
Y':=\bigsqcup_\lambda Y_\lambda',\qquad
Y'':=\bigsqcup_\lambda Y_\lambda''
$$
are smooth submanifolds of $\Lambda\times Y$, and the
projections from $X',X'',Y',Y''$ to~$\Lambda$ are submersions.
Let $\lambda_0\in\Lambda$,
$x_0\in X'_{\lambda_0}\cap X''_{\lambda_0}$,
and $y_0:=h_{\lambda_0}(x_0)$. Then the following holds.
\begin{description}
\item[(i)]
The Fredholm indices are related by
\begin{equation*}
\begin{split}
\INDEX(\Lambda\times X,X',X'',(\lambda_0,x_0))
&=\INDEX(X_{\lambda_0},X'_{\lambda_0},X''_{\lambda_0},x_0)
+\dim\Lambda, \\
\INDEX(\Lambda\times Y,Y',Y'',(\lambda_0,y_0))
&=\INDEX(Y_{\lambda_0},Y'_{\lambda_0},Y''_{\lambda_0},y_0)
+\dim\Lambda.
\end{split}
\end{equation*}
\item[(ii)]
$h_{\lambda_0}$ is exact at $x_0$ if and only if
$h$ is exact at $(\lambda_0,x_0)$.
\end{description}
\end{theorem}
\bigbreak
\begin{proof}
There is a commutative diagram
$$
\Rectangle{T_{x_0}X'_{\lambda_0}\times T_{x_0}X''_{\lambda_0}}
{}{T_{x_0}X_{\lambda_0}}
{}{}
{T_{(\lambda_0,x_0)}X'\times T_{(\lambda_0,x_0)}X''}{}
{T_{(\lambda_0,x_0)}X}
$$
of Fredholm operators where the horizontal arrows
are as in~\ref{fredE} and the vertical arrows are inclusions.
The Fredholm index of the top horizontal arrow is
$\INDEX(X_{\lambda_0},X'_{\lambda_0},X''_{\lambda_0},x_0)$,
the index of the bottom horizontal arrow is
$\INDEX(\Lambda\times X,X',X'',(\lambda_0,x_0))$,
that of the left vertical arrow is $-2\dim\Lambda$,
and that of the right vertical arrow is $-\dim\Lambda$.
(Here we have used the fact that the projections
$X'\to\Lambda$ and $X''\to\Lambda$ are submersions.)
Hence assertion~(i) follows from the fact that the Fredholm
index of a composition is the sum of the Fredholm indices.
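Explicitly, the composition of the left vertical arrow with the bottom horizontal arrow has the same Fredholm index as the composition of the top horizontal arrow with the right vertical arrow, so
$$
\INDEX(\Lambda\times X,X',X'',(\lambda_0,x_0))-2\dim\Lambda
=\INDEX(X_{\lambda_0},X'_{\lambda_0},X''_{\lambda_0},x_0)-\dim\Lambda,
$$
which is the first identity in~(i); the second identity follows in the same manner.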
We prove~(ii).
Assume first that $h_{\lambda_0}$ is exact at $x_0$
and denote ${y_0:=h_{\lambda_0}(x_0)}$.
We prove that the induced homomorphism
\begin{equation}\label{eq:hl1}
dh(\lambda_0,x_0):
T_{(\lambda_0,x_0)}X'\cap T_{(\lambda_0,x_0)}X''
\to T_{(\lambda_0,y_0)}Y'\cap T_{(\lambda_0,y_0)}Y''
\end{equation}
is injective. If
$(\hat\lambda,\hat x)\in T_{(\lambda_0,x_0)}X'\cap T_{(\lambda_0,x_0)}X''$
and $dh(\lambda_0,x_0)(\hat\lambda,\hat x)=0$
then
$$
\hat\lambda=0,\qquad dh_{\lambda_0}(x_0)\hat x=0.
$$
Since the projections $X'\to\Lambda$ and
$X''\to\Lambda$ are submersions we have
$\hat x\in T_{x_0}X'_{\lambda_0}\cap T_{x_0}X''_{\lambda_0}$.
By assumption, this implies $\hat x=0$.
This shows that~(\ref{eq:hl1}) is injective, as claimed.
We prove that the induced homomorphism
\begin{equation}\label{eq:hl2}
dh(\lambda_0,x_0):\frac{T_{(\lambda_0,x_0)}(\Lambda\times X)}
{T_{(\lambda_0,x_0)}X'+T_{(\lambda_0,x_0)}X''}
\to \frac{T_{(\lambda_0,y_0)}(\Lambda\times Y)}
{T_{(\lambda_0,y_0)}Y'+T_{(\lambda_0,y_0)}Y''}
\end{equation}
is surjective. Let
$(\hat\lambda,\hat y)\in T_{(\lambda_0,y_0)}(\Lambda\times Y)$.
Since the projection $X'\to\Lambda$ is a submersion,
there is a vector $\hat x\in T_{x_0}X$ such that
$(\hat\lambda,\hat x)\in T_{(\lambda_0,x_0)}X'$.
Define $\hat y_0\in T_{y_0}Y$ by
$$
(0,\hat y_0):=(\hat\lambda,\hat y)-dh(\lambda_0,x_0)(\hat\lambda,\hat x).
$$
By assumption, there exists a vector
$\hat x_0\in T_{x_0}X$ such that
$$
\hat y_0-dh_{\lambda_0}(x_0)\hat x_0\in T_{y_0}Y'+T_{y_0}Y''.
$$
Hence
$$
(0,\hat y_0)-dh(\lambda_0,x_0)(0,\hat x_0)
\in T_{(\lambda_0,y_0)}Y'+T_{(\lambda_0,y_0)}Y''
$$
and hence, since $dh(\lambda_0,x_0)$ maps $T_{(\lambda_0,x_0)}X'$ into $T_{(\lambda_0,y_0)}Y'$ and $(\hat\lambda,\hat x)\in T_{(\lambda_0,x_0)}X'$,
$$
(\hat\lambda,\hat y)-dh(\lambda_0,x_0)(0,\hat x_0)
\in T_{(\lambda_0,y_0)}Y'+T_{(\lambda_0,y_0)}Y''.
$$
This shows that~(\ref{eq:hl2}) is surjective, as claimed.
Moreover, by~(i) the quadruples
$(\Lambda\times X,X',X'',(\lambda_0,x_0))$ and
$(\Lambda\times Y,Y',Y'',(\lambda_0,y_0))$
have the same Fredholm index. Hence~(\ref{eq:hl1})
and~(\ref{eq:hl2}) are bijective and so $h$ is exact at
$(\lambda_0,x_0)$.
Conversely, assume that $h$ is exact at $(\lambda_0,x_0)$ so
that~(\ref{eq:hl1}) and~(\ref{eq:hl2}) are bijective.
We prove that the induced homomorphism
\begin{equation}\label{eq:h01}
dh_{\lambda_0}(x_0):T_{x_0}X'_{\lambda_0}\cap T_{x_0}X''_{\lambda_0}
\to T_{y_0}Y'_{\lambda_0}\cap T_{y_0}Y''_{\lambda_0}
\end{equation}
is injective. Let
$\hat x\in T_{x_0}X'_{\lambda_0}\cap T_{x_0}X''_{\lambda_0}$
and suppose that $dh_{\lambda_0}(x_0)\hat x=0$. Then
$$
(0,\hat x)\in T_{(\lambda_0,x_0)}X'\cap T_{(\lambda_0,x_0)}X'',\qquad
dh(\lambda_0,x_0)(0,\hat x)=(0,0).
$$
Since~(\ref{eq:hl1}) is injective, this implies $\hat x=0$.
This shows that~(\ref{eq:h01}) is injective.
We prove that the induced homomorphism
\begin{equation}\label{eq:h02}
dh_{\lambda_0}(x_0):\frac{T_{x_0}X}
{T_{x_0}X'_{\lambda_0}+T_{x_0}X''_{\lambda_0}}
\to \frac{T_{y_0}Y}
{T_{y_0}Y'_{\lambda_0}+T_{y_0}Y''_{\lambda_0}}
\end{equation}
is surjective.
Let $\hat y\in T_{y_0}Y$. Since~(\ref{eq:hl2})
is surjective, there exists a pair
$(\hat\lambda,\hat x)\in T_{(\lambda_0,x_0)}(\Lambda\times X)$
such that
$$
(0,\hat y)-dh(\lambda_0,x_0)(\hat\lambda,\hat x)
\in T_{(\lambda_0,y_0)}Y'+T_{(\lambda_0,y_0)}Y''.
$$
Write
\begin{equation}\label{eq:yhat1}
(0,\hat y)-
dh(\lambda_0,x_0)(\hat\lambda,\hat x)
= (\hat\lambda',\hat y') + (\hat\lambda'',\hat y'')
\end{equation}
where
$$
(\hat\lambda',\hat y')\in T_{(\lambda_0,y_0)}Y',\qquad
(\hat\lambda'',\hat y'')\in T_{(\lambda_0,y_0)}Y''.
$$
Since the projections ${X'\to\Lambda}$ and $X''\to\Lambda$
are submersions, there exist tangent vectors $\hat x',\hat x''\in T_{x_0}X$
such that
$$
(\hat\lambda',\hat x')\in T_{(\lambda_0,x_0)}X',\qquad
(\hat\lambda'',\hat x'')\in T_{(\lambda_0,x_0)}X''.
$$
Define the tangent vectors $\hat y_0',\hat y_0''\in T_{y_0}Y$
by
\begin{equation}\label{eq:yhat2}
\begin{split}
(0,\hat y_0')
&:= (\hat\lambda',\hat y')-dh(\lambda_0,x_0)(\hat\lambda',\hat x')
\in T_{(\lambda_0,y_0)}Y',\\
(0,\hat y_0'')
&:= (\hat\lambda'',\hat y'')-dh(\lambda_0,x_0)(\hat\lambda'',\hat x'')
\in T_{(\lambda_0,y_0)}Y''.
\end{split}
\end{equation}
Since the projections $Y'\to\Lambda$ and $Y''\to\Lambda$
are submersions we have
$$
\hat y_0'\in T_{y_0}Y'_{\lambda_0},\qquad
\hat y_0''\in T_{y_0}Y''_{\lambda_0}.
$$
Moreover, by~(\ref{eq:yhat1}), we have
$$
\hat\lambda+\hat\lambda'+\hat\lambda''=0
$$
and hence, by~(\ref{eq:yhat1}) and~(\ref{eq:yhat2}),
$$
\hat y - dh_{\lambda_0}(x_0)(\hat x+\hat x'+\hat x'')
= \hat y_0'+\hat y_0''
\in T_{y_0}Y'_{\lambda_0}+T_{y_0}Y''_{\lambda_0}.
$$
Hence~(\ref{eq:h02}) is surjective, as claimed.
Now it follows again from the index identities in~(i)
that~(\ref{eq:h01}) and~(\ref{eq:h02}) are bijective
and hence $h_{\lambda_0}$ is exact at $x_0$.
This proves the theorem.
\end{proof}
\begin{corollary}\label{cor:fredstable}
Let $h_\lambda:(X,X'_\lambda,X''_\lambda)\to(Y,Y'_\lambda,Y''_\lambda)$
be as in Theorem~\ref{thm:fredstable} and suppose that $h_{\lambda_0}$
is exact at $x_0\in X'_{\lambda_0}\cap X''_{\lambda_0}$.
Then the following holds.
\begin{description}
\item[(i)]
If $\lambda$ is sufficiently close to $\lambda_0$ and
$x\in X'_\lambda\cap X''_\lambda$ is sufficiently close to $x_0$
then $h_\lambda$ is exact at $x$.
\item[(ii)]
If $\Lambda\to Y:\lambda\mapsto y_\lambda$ is a smooth
map such that $y_\lambda\in Y'_\lambda\cap Y''_\lambda$
for every~$\lambda$ then, after shrinking $\Lambda$
if necessary, there exists a unique smooth map
$\Lambda\to X:\lambda\mapsto x_\lambda$
such that $x_\lambda\in X'_\lambda\cap X''_\lambda$
and $h_\lambda(x_\lambda)=y_\lambda$ for every~$\lambda$.
\end{description}
\end{corollary}
\begin{proof}
This follows immediately from Theorem~\ref{thm:fredstable} and Corollary~\ref{cor:fredh}.
\end{proof}
\begin{remark}\rm\label{rmk:sh}
All the results of this section continue to hold
in the complex category, i.e. all Hilbert spaces are complex,
all Hilbert manifolds are complex, all maps are complex,
the family $\{h_\lambda\}_{\lambda\in\Lambda}$
in Theorem~\ref{thm:fredstable} is a holomorphic
family of holomorphic morphisms of complex Fredholm triples, etc.
As a result the map $\Lambda\to X$ in Corollary~\ref{cor:fredstable}
is holomorphic.
\end{remark}
\section{Proofs of the main theorems}\label{sec:proof}
\begin{proof}[Proof of Theorem~\ref{thm:stable}.]
Assume the unfolding $(\pi_B:Q\to B,S_*,H_B,b_0)$
is infinitesimally universal.
Let $\mathcal{U},\mathcal{U}',\mathcal{U}''$ be the manifolds in~\ref{cU}
and let $\mathcal{V},\mathcal{V}',\mathcal{V}''$ be the manifolds in~\ref{cV}
for
$$
P=Q,\qquad A=B,\qquad \pi_A=\pi_B,\qquad
R_*=S_*,\qquad H_A=H_B,
$$
and an appropriate Hardy decomposition
$Q=Q'\cup Q''$. For $a\in A=B$ denote $b_a:=a$,
let $\alpha_a:\Gamma_a\to Q_{b_a}$ be the inclusion
of $\Gamma_a:=Q'_a\cap Q''_a$ into $Q_{b_a}$,
and abbreviate $\beta_a:=H_B\circ\alpha_a:\Gamma_a\to M$.
Then the morphism
\begin{equation}\label{eq:QQ}
\mathcal{U}_a\to\mathcal{V}_a:(\alpha,b)\mapsto\beta:=H_B\circ\alpha
\end{equation}
from the Fredholm quadruple $(\mathcal{U}_a,\mathcal{U}'_a,\mathcal{U}''_a,(\alpha_a,b_a))$
to $(\mathcal{V}_a,\mathcal{V}'_a,\mathcal{V}''_a,\beta_a)$ is exact for $a=a_0=b_0$,
by Theorems~\ref{thm:U} and~\ref{thm:V} (see~\ref{infuniv}).
The same theorems assert that the family~(\ref{eq:QQ})
of morphisms of Fredholm quadruples satisfies the
requirements of Theorem~\ref{thm:fredstable}.
Hence it follows from Corollary~\ref{cor:fredstable}
that~(\ref{eq:QQ}) is exact for $a=b$ sufficiently close to
$a_0=b_0$. Hence, again by Theorems~\ref{thm:U} and~\ref{thm:V},
the unfolding $(\pi_B:Q\to B,S_*,H_B,b)$
is infinitesimally universal for $b$ sufficiently close
to $b_0$. This proves the theorem.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:universal}.]
We proved `only if' in Section~\ref{sec:unfolding}. To prove `if'
assume that $(\pi_B:Q\to B,S_*,H_B,b_0)$ is an infinitesimally
universal unfolding. We prove that it is universal.
Let $(\pi_A:P\to A,R_*,H_A,a_0)$ be another unfolding
of maps and $f_0:P_{a_0}\to Q_{b_0}$ be a fiber isomorphism.
Choose a Hardy decomposition $P=P'\cup P''$
and open subsets $U'$, $U''$, and $U:=U'\cap U''$
of $Q$ as in~\ref{UQ}, \ref{Hardy}, and~\ref{f=id}.
Let $\mathcal{U}$, $\mathcal{U}'$, $\mathcal{U}''$ be as in~\ref{cU}
and $\mathcal{V}$, $\mathcal{V}'$, $\mathcal{V}''$ be as in~\ref{cV}.
Then
$$
(\alpha_0:=f_0|\Gamma_{a_0},b_0)\in\mathcal{U}'_{a_0}\cap\mathcal{U}''_{a_0},\qquad
\beta_0:=H_A|\Gamma_{a_0}\in\mathcal{V}'_{a_0}\cap\mathcal{V}''_{a_0}.
$$
Since the unfolding $(\pi_B,S_*,H_B,b_0)$ is
infinitesimally universal, the map
$$
\mathcal{U}_{a_0}\to\mathcal{V}_{a_0}:(\alpha,b)\mapsto \beta:=H_B\circ\alpha
$$
is an exact morphism of Fredholm triples as in~\ref{fredh}
(see~\ref{infuniv}). By Theorems~\ref{thm:U} and~\ref{thm:V}
the family of maps
$$
\mathcal{U}_a\to\mathcal{V}_a:(\alpha,b)\mapsto \beta:=H_B\circ\alpha,
$$
parametrized by $a\in A$ satisfies the hypotheses of
Theorem~\ref{thm:fredstable} (in the complex category). Moreover, there is
a holomorphic map
$$
A\to \mathcal{V}:a\mapsto(a,\beta_a),\qquad
\beta_a:=H_A|\Gamma_a\in\mathcal{V}'_a\cap\mathcal{V}''_a.
$$
Hence it follows from Corollary~\ref{cor:fredstable} and Remark~\ref{rmk:sh}
that, after shrinking $A$ if necessary, there exists a
unique holomorphic map
\begin{equation}\label{eq:2ndmap}
A\to\mathcal{U}:a\mapsto(a,\alpha_a,b_a),\qquad
(\alpha_a,b_a)\in\mathcal{U}'_a\cap\mathcal{U}''_a,
\end{equation}
such that
$
\beta_a=H_B\circ\alpha_a
$
for every $a\in A$. Define $\phi:A\to B$ by $\phi(a):=b_a$,
for every $a\in A$ let $f_a:P_a\to Q_{b_a}$ be the unique
fiber isomorphism with $f_a|\Gamma_a=\alpha_a$, and
define $\Phi:P\to Q$ by $\Phi|P_a:=f_a$. Then $\phi$ is holomorphic.
That the restriction of $\Phi$ to $\INT(P')$ is holomorphic
follows from~\cite[Lemma~10.18]{RS}. To prove that the restriction
of $\Phi$ to $\INT(P'')$ is holomorphic we write it as the composition
$$
\INT(P'')\to A\times\Omega\to \mathcal{U}''\times\Omega\to Q
$$
where the first map is $\pi_A\times\rho$,
the second map is the product of~(\ref{eq:2ndmap})
with the identity, and the third map is the evaluation map
$(a,f'',z)\mapsto f''(\rho_a^{-1}(z))$.
All four spaces are complex manifolds and
all three maps are holomorphic.
The argument is as in Step~3 in the
proof of~\cite[Theorem~5.3]{RS}.
It is important to remember that the complex structure
on the factor $\Omega$ depends on $a\in A$ and is twisted
by $\eta(a,\hat{a})$ as in~(\ref{eq:JP}).
This proves that $\Phi$ is holomorphic on $P\setminus{\partial} P'$.
Since $\Phi$ is continuous, it is holomorphic everywhere.
This proves the theorem.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:exists}.]
Given the work done in Section~\ref{sec:unfolding}, it remains to prove `if'
under the assumptions that $(\Sigma,s_{0,*},\nu_0,j_0,v_0)$
is a regular stable map and the underlying marked nodal
Riemann surface $(\Sigma,s_{0,*},\nu_0,j_0)$ is still stable.
Let $(\pi_A:P\to A,R_*,a_0)$ be a universal unfolding of this
marked nodal Riemann surface (in the sense of~\cite[Definition 5.1]{RS})
and $w_0:\Sigma\to P_{a_0}$ be a desingularization of the
central fiber. Define the holomorphic map $h_0:P_{a_0}\to M$
by $h_0\circ w_0:=v_0$. Choose a Hardy decomposition
$$
P = P'\cup P'',\qquad
\Gamma_a:=P_a\cap P'\cap P'',
$$
as in~\ref{Hardy}, fix an integer $s$ with $s+1/2>1$,
and define $\mathcal{V}$, $\mathcal{V}'$, $\mathcal{V}''$ as in~\ref{cV}.
The desingularization $w_0:\Sigma\to P_{a_0}$
induces a decomposition
$$
\Sigma=\Sigma'\cup\Sigma'',\qquad \Sigma'\cap\Sigma''
={\partial}\Sigma'={\partial}\Sigma'',
$$
with $\Sigma':=w_0^{-1}(P')$ and $\Sigma'':=w_0^{-1}(P'')$.
As in the proof of Theorem~\ref{thm:V} the map
$w_0^{-1}\circ\iota_{a_0}$ is a diffeomorphism from $\Gamma$
in~(\ref{eq:trivialize}) to $\Sigma'\cap\Sigma''$ and, to simplify the notation,
we assume that $\Gamma=\Sigma'\cap\Sigma''$ so that
$\iota_{a_0}=w_0|\Gamma:\Gamma\to P_{a_0}$.
The infinitesimally universal unfolding of the stable
map $(\Sigma,s_{0,*},\nu_0,j_0,v_0)$ is the tuple
$$
(\pi_B:Q\to B,S_*,H_B,b_0)
$$
defined by
\begin{equation}\label{eq:BQS}
\renewcommand{\arraystretch}{1.9}
\begin{array}{l}
B:=\mathcal{V}'\cap\mathcal{V}'', \quad
Q:=\bigl\{(p,\beta)\in P\times B\,\big| \,
\beta\in\mathcal{V}'_{\pi_A(p)}\cap\mathcal{V}''_{\pi_A(p)}\,\bigr\}, \\
\pi_B(p,\beta):=(\pi_A(p),\beta),\qquad
b_0:=(a_0,\beta_0), \\
S_{\mathsf{i}}:= \bigl\{(p,\beta)\in Q\,\big|\,p\in R_{\mathsf{i}}\bigr\},\qquad
H_B(p,\beta):= h_\beta(p),
\end{array}
\end{equation}
where $h_\beta:P_a\to M$ is the unique holomorphic map
with
$$
h_\beta|\Gamma_a=\beta.
$$
As in~\ref{cV}, $\mathcal{V}$ is a complex Hilbert manifold and by part~(iii)
of Theorem~\ref{thm:V} the sets $\mathcal{V}'$ and $\mathcal{V}''$ are complex
submanifolds of $\mathcal{V}$. By part~(i) of Theorem~\ref{thm:transverse},
the submanifolds $\mathcal{V}'$ and $\mathcal{V}''$ intersect transversally at
$(a_0,\beta_0)$ and hence $B=\mathcal{V}'\cap\mathcal{V}''$ is a complex
submanifold of $\mathcal{V}$ (after shrinking $\mathcal{V}'$
and $\mathcal{V}''$ if necessary). By Theorem~\ref{thm:V},
$B$ has dimension
\begin{equation}\label{eq:dimB}
\dim_{\mathbb{C}} B = ({\mathsf{m}}-3)(1-{\mathsf{g}}) + \inner{c_1}{{\mathsf{d}}} + {\mathsf{n}}.
\end{equation}
We prove that $Q$ is a complex submanifold of $P\times\mathcal{V}$.
Define
$$
f:B\to A \qquad\mbox{by}\qquad f(a,\beta):=a
$$
for $(a,\beta)\in B=\mathcal{V}'\cap\mathcal{V}''$.
Then the projection $\pi_B:Q\to B$ is the \jdef{pullback}
of the projection $\pi_A:P\to A$ by the map $f$, i.e.
$Q$ is the preimage of the diagonal in $A\times A$ under the
holomorphic map
$$
\pi_A\times f:P\times B\to A\times A
$$
and $\pi_B$ is the restriction to $Q$ of the projection onto the first factor.
The map $\pi_A\times f$ is transverse to the diagonal if and only if
\begin{equation}\label{eq:TPV}
T_{\pi_A(p)}A = \mathrm{im}\,d\pi_A(p)
+ d\pi_\mathcal{V}(a,\beta)\left(T_{(a,\beta)}\mathcal{V}'\cap T_{(a,\beta)}\mathcal{V}''\right)
\end{equation}
for every $p\in P$ and every $\beta\in\mathcal{V}'_a\cap\mathcal{V}''_a$
with $a=\pi_A(p)$, where $\pi_\mathcal{V}:\mathcal{V}\to A$ denotes the
obvious projection. Equation~(\ref{eq:TPV}) follows
immediately from part~(ii) of Theorem~\ref{thm:transverse}.
Hence $Q$ is a complex submanifold of $P\times\mathcal{V}$
and the projection $\pi_B:Q\to B$ is holomorphic.
We prove that
the map $\pi_B$ is a nodal family of Riemann surfaces
in Lemma~\ref{le:pullback} below.
The subset $S_{\mathsf{i}}\subset Q$ is the transverse
intersection of the complex submanifolds
$R_{\mathsf{i}}\times\mathcal{V}$ and $Q$, and hence is a complex
submanifold of $Q$ (of codimension one).
We prove that $H_B:Q\to M$ is holomorphic.
For this we use the Hardy decomposition
$$
Q=Q'\cup Q'',\qquad
Q':=Q\cap(P'\times\mathcal{V}),\qquad
Q'':=Q\cap(P''\times\mathcal{V}).
$$
That $H_B$ is holomorphic in the interior of $Q'$ follows from
Lemma~\ref{le:localmodel}~(iii). To prove that $H_B$
is holomorphic in the interior of $Q''$ write it
as the composition
$$
\INT(Q'') \to B\times\Omega \to \mathcal{V}''\times\Omega \to M
$$
where the first map is given by a Hardy trivialization
$\pi_B\times\rho$, the second by the inclusion $B\to\mathcal{V}''$,
and the third is the evaluation map
$
((a,\beta),z)\mapsto(h''_\beta(\rho_a^{-1}(z)))
$
where $h''_\beta:P_a''\to M$ is the unique holomorphic map
with $h''_\beta|\Gamma_a=\beta$.
As in the proof of Theorem~\ref{thm:universal} all four
spaces are complex manifolds and all three maps are holomorphic.
This proves that $H_B$ is holomorphic in $Q\setminus{\partial} Q'$.
Since $H_B$ is continuous it is holomorphic everywhere.
We prove that the unfolding $(\pi_B:Q\to B,S_*,H_B,b_0)$
is infinitesimally universal. Note that
$
Q_{b_0} = P_{a_0}\times\{\beta_0\}
$
and define $u_0:\Sigma\to Q_{b_0}$ by
$$
u_0(z) := (w_0(z),\beta_0).
$$
Since $h_{\beta_0}\circ w_0=v_0$ we have
$$
H_B\circ u_0(z) = H_B(w_0(z),\beta_0)
=h_{\beta_0}(w_0(z)) = v_0(z)
$$
for every $z\in\Sigma$. As before we denote by
$f:B=\mathcal{V}'\cap\mathcal{V}''\to A$ the obvious projection
and by $b_0=(a_0,\beta_0)\in B$ the base point.
Then the kernel of the derivative $df(b_0):T_{b_0}B\to T_{a_0}A$
is the intersection $T_{\beta_0}\mathcal{V}'_{a_0}\cap T_{\beta_0}\mathcal{V}''_{a_0}$.
Hence, for $z\in\Sigma$ we have $p:=w_0(z)\in P_{a_0}$,
$q:=u_0(z)=(w_0(z),\beta_0)\in Q_{b_0}$, and
$$
\ker d(f\circ\pi_B)(q) = \ker d\pi_A(p)\times
\left(T_{\beta_0}\mathcal{V}'_{a_0}\cap T_{\beta_0}\mathcal{V}''_{a_0}\right).
$$
The restriction of
$dH_B(q):T_qQ\to T_{v_0(z)}M$
to this space is
$$
dH_B(u_0(z))(\hat p,\hat\beta) = \hat v(z) + dv_0(z)\hat z
$$
where $\hat z\in T_z\Sigma$ is the unique element with
$dw_0(z)\hat z=\hat p$ and $\hat v\in\Omega^0(\Sigma/\nu,v_0^*TM)$
is the unique vector field along $v_0$ that satisfies the nodal
condition, belongs to the kernel of $D_{v_0}$, and
satisfies $\hat v|\Gamma=\hat\beta\circ\iota_{a_0}$.
We prove that the induced map
\begin{equation}\label{eq:keruv}
dH_B(u_0):\ker D_{u_0}\to\ker\,D_{v_0}
\end{equation}
is bijective. The domain of $D_{u_0}$ is the space
$$
\mathcal{X}_{u_0}:=\left\{(\hat w,\hat b)
\in\Omega^0(\Sigma/\nu,w_0^*TP)\times T_{b_0}B\,\bigg|\,
\begin{aligned}
&\hat w(s_{0,{\mathsf{i}}})\in T_{w_0(s_{0,{\mathsf{i}}})}R_{\mathsf{i}} \\
&d\pi_A(w_0)\hat w\equiv df(b_0)\hat b
\end{aligned}
\right\},
$$
the target space can be identified with
$$
\mathcal{Y}_{u_0} = \mathcal{Y}_{w_0} = \left\{\eta\in\Omega^{0,1}(\Sigma,w_0^*TP)\,|\,
d\pi_A(w_0)\eta\equiv 0\right\},
$$
and the operator is given by
$$
D_{u_0}(\hat w,\hat b) := D_{w_0}\hat w.
$$
Since the unfolding $(\pi_A,R_*,a_0)$
(of marked nodal Riemann surfaces) is universal, the operator
$$
D_{w_0}:
\mathcal{X}_{w_0}:=\left\{\hat w\in\Omega^0(\Sigma/\nu,w_0^*TP)\,\bigg|\,
\begin{aligned}
&\hat w(s_{0,{\mathsf{i}}})\in T_{w_0(s_{0,{\mathsf{i}}})}R_{\mathsf{i}} \\
&d\pi_A(w_0)\hat w\equiv\mbox{ constant}
\end{aligned}
\right\}
\to\mathcal{Y}_{u_0}
$$
is bijective.
It follows that the projection $(\hat w,\hat b)\mapsto\hat b$
is an isomorphism from the kernel of $D_{u_0}$ to the kernel
of the linear map $df(b_0):T_{b_0}B\to T_{a_0}A$.
Now recall that $f:B=\mathcal{V}'\cap\mathcal{V}''\to A$ denotes the obvious projection.
Then the kernel of $df(a_0,\beta_0):T_{(a_0,\beta_0)}(\mathcal{V}'\cap\mathcal{V}'')\to T_{a_0}A$
is the intersection $T_{\beta_0}\mathcal{V}'_{a_0}\cap T_{\beta_0}\mathcal{V}''_{a_0}$
which, by Theorem~\ref{thm:V}~(ii), is isomorphic to the kernel of $D_{v_0}$.
The composite isomorphism
$$
\ker\,D_{u_0}\to\ker\,df(a_0,\beta_0)\to \ker\,D_{v_0}
$$
is given by $(0,\hat b)\mapsto\hat\beta\mapsto\hat v$
where $\hat b=(0,\hat\beta)$ and $\hat v$ is the unique
element in the kernel of $D_{v_0}$ with
$\hat v|\Gamma=\hat\beta\circ\iota_{a_0}$.
This map is precisely~(\ref{eq:keruv}) which is therefore
an isomorphism.
Now it follows from Theorem~\ref{thm:transverse}~(ii)
that the nodal family $(\pi_B,S_*,b_0)$ is regular nodal,
i.e.~the projections of the critical manifolds intersect transversally
at $b_0$.
Hence, by~\cite[Lemma~12.2]{RS},
the operator $D_{u_0}$ has Fredholm index
\begin{eqnarray*}
\INDEX_{\mathbb{C}}(D_{u_0})
&=&
3-3{\mathsf{g}} - {\mathsf{n}} + \dim_{\mathbb{C}} B \\
&=&
{\mathsf{m}}(1-{\mathsf{g}}) + \inner{c_1}{{\mathsf{d}}} \\
&=&
\INDEX_{\mathbb{C}}(D_{v_0}).
\end{eqnarray*}
Here the second equality follows from~(\ref{eq:dimB}).
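Spelled out, substituting~(\ref{eq:dimB}) into the first line gives
$$
3-3{\mathsf{g}}-{\mathsf{n}}+\dim_{\mathbb{C}} B
= 3(1-{\mathsf{g}})+({\mathsf{m}}-3)(1-{\mathsf{g}})+\inner{c_1}{{\mathsf{d}}}
= {\mathsf{m}}(1-{\mathsf{g}})+\inner{c_1}{{\mathsf{d}}}.
$$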
Since the kernels are isomorphic, it follows that the cokernels
of $D_{u_0}$ and $D_{v_0}$ have the same dimension.
Moreover, the induced homomorphism
$
dH_B(u_0):\coker D_{u_0}\to\coker\,D_{v_0}
$
is surjective, by Remark~\ref{rmk:regular},
and hence is bijective. This completes the proof
of Theorem~\ref{thm:exists}.
\end{proof}
\begin{lemma} \label{le:pullback}
Let
$\pi_A:P\to A$ be a nodal family and $f:B\to A$
be a holomorphic map such that
$f\times \pi_A:B\times P\to A\times A$
is transverse to the diagonal. Then the
pullback $\pi_B:Q\to B$ of $\pi_A$ by $f$
is a nodal family.
\end{lemma}
\begin{proof}
The pullback is defined by
$$
Q:=\left\{(b,p)\in B\times P\,|\,\pi_A(p)=f(b)\right\},\qquad
\pi_B(b,p):=b.
$$
The condition that $f\times \pi_A:B\times P\to A\times A$
is transverse to the diagonal implies that $Q$ is a submanifold
of $B\times P$.
We prove that
\begin{description}
\item[(i)]
$(b,p)\in Q$ is a regular point of $\pi_B$
if $p$ is a regular point of $\pi_A$, and
\item[(ii)]
$(b,p)\in Q$ is a nodal point of $\pi_B$
if $p$ is a nodal point of $\pi_A$.
\end{description}
To prove~(i) assume w.l.o.g.~that $P={\mathbb{C}}\times A$
so $Q={\mathbb{C}}\times\mathrm{graph}(f)$.
Then $\pi_B(b,z,f(b))=b$ so $\pi_B$ is a submersion.
To prove~(ii) assume w.l.o.g.~that
$P={\mathbb{C}}\times {\mathbb{C}}\times U$,
$A={\mathbb{C}}\times U$,
$\pi_A(x,y,u)=(xy,u)$, and
$f(b)=(\zeta(b),g(b))\in{\mathbb{C}}\times U$.
Then
$$
Q=\{(b,x,y,u)\,|\,xy=z=\zeta(b),\;\; u=g(b)\}.
$$
The condition that $f\times\pi_A$ is transverse to the diagonal
at $(b,x,y,u)\in Q$ is that for all
$
(\hat{z}_1,\hat{u}_1,\hat{z}_2,\hat{u}_2)\in
T_{(z,u)}A\times T_{(z,u)}A={\mathbb{C}}\times T_uU\times{\mathbb{C}}\times T_uU
$
the equations
\begin{eqnarray*}
\hat{z}_1&=&d\zeta(b)\hat{b}+\hat{z}\\
\hat{u}_1&=& dg(b)\hat{b}+\hat{u}\\
\hat{z}_2&=&\hat{x}y+x\hat{y}+\hat{z}\\
\hat{u}_2&=&\hat{v}+\hat{u}
\end{eqnarray*}
have a solution
$$
\hat{b}\in T_bB,\quad
(\hat{x},\hat{y},\hat{v})\in T_{(x,y,u)}P={\mathbb{C}}^2\times T_uU,\quad
(\hat{z},\hat{u})\in T_aA={\mathbb{C}}\times T_uU.
$$
At a nodal point we have $x=y=0$ so transversality implies
that $d\zeta(b)\ne0$. This implies that there is a coordinate system
on $B$ with $\zeta$ as its first element.
The pullback to $Q$ of the coordinates other than
$\zeta$ together with the functions $x$ and $y$ give the desired nodal
coordinates on $Q$.
This proves~(ii) and the lemma.
\end{proof}
\begin{corollary}\label{cor:transverse-core}
Let $\pi_A:P\to A$ be a regular nodal family
and $f:B\to A$ be a holomorphic map which is transverse
to the core $A_0$ of $\pi_A$. Then the hypothesis of
Lemma~\ref{le:pullback} holds,
the pullback $\pi_B:Q\to B$ is regular nodal,
and its core is $B_0:=f^{-1}(A_0)$.
\end{corollary}
\begin{proof} Denote by
$
C_1,\dots,C_{\mathsf{k}}\subset P
$
the components of the singular set of $\pi_A$.
The proof of Lemma~\ref{le:pullback} shows that the
hypothesis that $f\times\pi_A$ is transverse to the diagonal
is equivalent to the hypothesis that $f$ is transverse
to each $\pi_A(C_{\mathsf{i}})$. The hypothesis that $\pi_A$ is regular
nodal is that
these projections $\pi_A(C_{\mathsf{i}})$ of the critical manifolds intersect transversally.
Hence $T_aA_0=\bigcap_{\mathsf{i}} T_a\pi_A(C_{\mathsf{i}})$ so $f$ is certainly
transverse to each $\pi_A(C_{\mathsf{i}})$ and the hypothesis of
Lemma~\ref{le:pullback} holds.
The hypothesis that $\pi_A$ is regular nodal
implies that in a neighborhood of each point of the core $A_0$ of $\pi_A$ there
are coordinates $z_1,\ldots,z_{\mathsf{k}},u_1,\ldots$
on $A$ such that for each ${\mathsf{i}}$, $z_{\mathsf{i}}$ together with the remaining
coordinates form the base coordinates of a nodal coordinate system.
In particular, $\pi_A(C_{\mathsf{i}})=\{z_{\mathsf{i}}=0\}$.
The transversality hypothesis implies
that the functions $f^*z_{\mathsf{i}}$ are independent, i.e.
the sequence $f^*z_1,\ldots,f^*z_{\mathsf{k}}$ extends to a coordinate system on $B$.
Now the proof of Lemma~\ref{le:pullback} shows that
for each ${\mathsf{i}}$ a reordering of these coordinates
which puts $f^*z_{\mathsf{i}}$ first is the base coordinate system of a nodal
coordinate system. The core $B_0$ is then defined by
$f^*z_1=\cdots=f^*z_{\mathsf{k}}=0$ which shows that $B_0=f^{-1}(A_0)$.
\end{proof}
\begin{definition}\rm\label{def:proper}
Let $(\pi_A:P\to A,R_*,H_A,a_0)$ and $(\pi_B:Q\to B,S_*,H_B,b_0)$
be two unfoldings of type
$({\mathsf{g}},{\mathsf{n}},{\mathsf{d}})$.
A sequence of fiber isomorphisms ${f_k:P_{a_k}\to Q_{b_k}}$
is said to \jdef{DMG converge} to a fiber isomorphism
$f_0:P_{a_0}\to Q_{b_0}$ if ${a_k\to a_0}$, $b_k\to b_0$, and
for every Hardy decomposition $P=P'\cup P''$ as in~\ref{Hardy}
the sequence
${f_k\circ\iota_{a_k}:\Gamma\to Q}$ converges to
${f_0\circ\iota_{a_0}:\Gamma\to Q}$ in the $C^{\infty}$ topology.
(DMG convergence of fiber isomorphisms is essentially
the same as DM convergence in~\cite[Definition~13.7]{RS}.
The only difference is that in the former case we deal with unfoldings
of stable maps whereas in the latter case we deal with unfoldings
of marked nodal Riemann surfaces, i.e.~the two notions of fiber
isomorphism differ.)
\end{definition}\rm
\begin{lemma}\label{le:proper}
Let $(\pi_A:P\to A,R_*,H_A,a_0)$ and $(\pi_B:Q\to B,S_*,H_B,b_0)$
be two universal unfoldings of type $({\mathsf{g}},{\mathsf{n}},{\mathsf{d}})$,
$(\Phi,\phi):(P,A)\to(Q,B)$ be the germ of a morphism satisfying
$H_B\circ\Phi=H_A$, $\phi(a_0)=b_0$, and $\Phi_{a_0}=f_0$,
$a_k\in A$ and $b_k\in B$ be two sequences with
$a_k\to a_0$ and $b_k\to b_0$,
and $f_k:P_{a_k}\to Q_{b_k}$ be a sequence of fiber isomorphisms.
Then the following are equivalent.
\begin{description}
\item[(i)]
The sequence $(a_k,f_k,b_k)$ DMG converges to $(a_0,f_0,b_0)$.
\item[(ii)]
For $k$ sufficiently large we have $\phi(a_k)=b_k$ and $\Phi_{a_k}=f_k$.
\end{description}
\end{lemma}
\begin{proof}
That~(ii) implies~(i) is obvious. We prove that~(i) implies~(ii).
Recall the Hardy decomposition in the definition of
the spaces $\mathcal{U}$, $\mathcal{U}'$, $\mathcal{U}''$ in~\ref{cU} and
$\mathcal{V}$, $\mathcal{V}'$, $\mathcal{V}''$ in~\ref{cV}. Then
$$
\bigl(a,\Phi_a|\Gamma_a,\phi(a)\bigr) \in \mathcal{U}'\cap \mathcal{U}'',\qquad
(a_k,f_k|\Gamma_{a_k},b_k)\in\mathcal{U}'\cap \mathcal{U}''
$$
for every $a\in A$ and every sufficiently large $k$,
by DMG convergence. The sequences
$(a_k,\Phi_{a_k}|\Gamma_{a_k},\phi(a_k))$
and $(a_k,f_k|\Gamma_{a_k},b_k)$ converge
to the same point $(a_0,f_0|\Gamma_{a_0},b_0)\in\mathcal{U}'\cap\mathcal{U}''$.
Moreover, their images under the Fredholm map
$$
\mathcal{U}'\cap\mathcal{U}''\to\mathcal{V}'\cap\mathcal{V}'':(a,\alpha,b)\mapsto(a,H_B\circ\alpha)
$$
agree because
$$
H_B\circ f_k = H_A|P_{a_k} = H_B\circ\Phi_{a_k}.
$$
Moreover it follows from infinitesimal universality and
Theorems~\ref{thm:U}, \ref{thm:V}, and~\ref{thm:fredstable}
that the map $(a,\alpha,b)\mapsto(a,H_B\circ\alpha)$
from $(\mathcal{U},\mathcal{U}',\mathcal{U}'',(a_0,f_0|\Gamma_{a_0},b_0))$
to $(\mathcal{V},\mathcal{V}',\mathcal{V}'',(a_0,H_B\circ f_0|\Gamma_{a_0}))$
is an exact morphism of Fredholm quadruples
(see~\ref{fredh}). Hence
$(f_k|\Gamma_{a_k},b_k)=(\Phi_{a_k}|\Gamma_{a_k},\phi(a_k))$
for $k$ sufficiently large, by Corollary~\ref{cor:fredh},
and hence also $f_k=\Phi_{a_k}$.
This proves the lemma.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:proper}.]
Let $(\pi:Q\to B,S_*,H)$ be a universal family and denote by
$(B,\Gamma)$ the associated etale groupoid of~\ref{B-Gamma}.
We prove that this groupoid is proper.
Thus let $(a_k,f_k,b_k)$ be a sequence in $\Gamma$ such that
$a_k$ converges to $a_0$ and $b_k$ converges to $b_0$.
We must show that there is a fiber isomorphism $f_0:Q_{a_0}\to Q_{b_0}$
such that a suitable subsequence of $f_k$ DMG converges to $f_0$.
To see this we assume first that
the underlying marked nodal Riemann surface associated to
a desingularization of $Q_{a_0}$ is stable. Then the same holds
for $Q_{b_0}$ and we may assume w.l.o.g.~that our universal
unfolding has the form~(\ref{eq:BQS}) as constructed in the proof
of Theorem~\ref{thm:exists} near $a_0$ and $b_0$.
It then follows that $(a_k,f_k,b_k)$ induces a sequence
$(a_k',f_k',b_k')$ of fiber isomorphisms for the underlying
universal family $(\pi':Q'\to B',S'_*)$ of stable marked nodal
Riemann surfaces such that $a_k'$ and $b_k'$ converge to
$a_0'$ and $b_0'$, respectively. By~\cite[Theorem~6.6]{RS},
the sequence $f_k'$ DM-converges to a fiber isomorphism
$f_0':Q'_{a'_0}\to Q'_{b'_0}$. Since $H_B\circ f_k=H_B|Q_{a_k}$,
we find that $f_0'$ induces a fiber isomorphism
$f_0:Q_{a_0}\to Q_{b_0}$ and it follows from the definitions
that $f_k$ DMG converges to $f_0$.
This proves the assertion
under the stability assumption for the underlying marked nodal
Riemann surface. If that does not hold, we choose
an embedding of our universal family into another
family $(\pi':Q'\to B',S'_*,T'_*,H')$ that is a universal
unfolding of each of its fibers and remains stable after
discarding $H'$. Then the existence of a
DMG-convergent subsequence follows immediately from
what we have already proved.
\end{proof}
\section{The Gromov topology}\label{sec:topology}
In this section we prove that the topology
on the moduli space of (regular) stable maps that is induced
by the orbifold structure agrees with the topology used elsewhere
in the literature. To define convergence of a sequence in this topology
we need to recall the notion of {\em deformation}
from~\cite[Definition~13.2]{RS}.
\begin{PARA}\rm\label{suture}
Let $\Sigma$ be a compact oriented surface
and $\gamma\subset\Sigma$ be a disjoint union of
embedded circles. We denote by $\Sigma_\gamma$
the compact surface with boundary which results
by \jdef{cutting open} $\Sigma$ along $\gamma$.
This implies that there is a local embedding
$$
\sigma:\Sigma_\gamma\to\Sigma
$$
which
maps $\INT(\Sigma_\gamma)$ one to one onto $\Sigma\setminus\gamma$
and maps ${\partial} \Sigma_\gamma$ two to one onto $\gamma$.
One might call $\sigma$ the {\em suture map} and $\gamma$ the {\em incision}.
\end{PARA}\rm
\begin{definition}\rm \label{deformation}
Let $(\Sigma',\nu')$ and $(\Sigma,\nu)$ be nodal surfaces.
A smooth map $\phi:\Sigma'\setminus\gamma'\to\Sigma$
is called a $(\nu',\nu)$-\jdef{deformation}
iff $\gamma'\subset\Sigma'\setminus\bigcup\nu'$ is a disjoint union
of embedded circles such that (where $\sigma:\Sigma'_{\gamma'}\to\Sigma'$
is the suture map just defined) we have
\begin{itemize}
\item
$
\phi_*\nu':=\bigl\{ \{\phi(y'_1),\phi(y'_2)\}\,|\,
\{y'_1,y'_2\}\in\nu'\bigr\} \subset\nu.
$
\item
$\phi$ is a diffeomorphism from $\Sigma'\setminus \gamma'$
onto $\Sigma\setminus \gamma$, where
$\gamma:=\bigcup(\nu\setminus\phi_*\nu')$.
\item
$\phi\circ\sigma|\INT(\Sigma'_{\gamma'})$
extends to a continuous surjective map
$\Sigma'_{\gamma'}\to\Sigma$ such that
the preimage of each nodal point in $\gamma$
is a component of ${\partial}\Sigma'_{\gamma'}$ and two boundary components
which map under $\sigma$ to the same component of $\gamma'$
map to a nodal pair $\{x,y\}\in\gamma$.
\end{itemize}
Each component of $\gamma'$ is called a {\em vanishing cycle}
of the deformation $\phi$. A sequence
$\phi_k:(\Sigma_k\setminus\gamma_k,\nu_k)\to(\Sigma,\nu)$
of $(\nu_k,\nu)$-deformations
is called \jdef{monotypic} if $(\phi_k)_*\nu_k$
is independent of $k$.
\end{definition}\rm
\begin{definition}\rm\label{def:convergence}
Let $M$ be a complex manifold.
A sequence $(\Sigma_k,s_{k,*},\nu_k,j_k,v_k)$ of configurations
in $M$ of type
$({\mathsf{g}},{\mathsf{n}},{\mathsf{d}})$
is said to \jdef{converge monotypically} to
a configuration $(\Sigma,s_*,\nu,j,v)$ of type
$({\mathsf{g}},{\mathsf{n}},{\mathsf{d}})$
iff there is a monotypic sequence
$\phi_k:\Sigma_k\setminus\gamma_k \to\Sigma\setminus\gamma$
of $(\nu_k,\nu)$-deformations satisfying the following conditions.
\begin{description}
\item[(Marked points)]
For ${\mathsf{i}}=1,\ldots,{\mathsf{n}}$ the sequence $\phi_k(s_{k,{\mathsf{i}}})$
converges to $s_{\mathsf{i}}$ in $\Sigma$.
\item[(Complex structure)]
The sequence $(\phi_k)_*j_k$ of complex structures
on $\Sigma\setminus \gamma$ converges to
$j|(\Sigma\setminus \gamma)$ in the $C^{\infty}$ topology.
\item[(Map)]
The sequence $(\phi_k)_*v_k:=v_k\circ\phi_k^{-1}$
converges to $v|(\Sigma\setminus \gamma)$
in the $C^{\infty}$ topology on $C^{\infty}(\Sigma\setminus\gamma,M)$.
\item[(Energy)]
For some (and hence every) pair of Riemannian metrics on
$\Sigma$ and~$M$ we have
$$
\lim_{{\varepsilon}\to 0}\lim_{k\to\infty}\int_{B_{\varepsilon}(\gamma)}\Abs{d(v_k\circ\phi_k^{-1})}^2
= 0,
$$
where $B_{\varepsilon}(\gamma)\subset\Sigma$ denotes the ${\varepsilon}$-neighborhood
of $\gamma\subset\cup\nu$.
\end{description}
The sequence $(\Sigma_k,s_{k,*},\nu_k,j_k,v_k)$ is said to
\jdef{Gromov converge} to $(\Sigma,s_*,\nu,j,v)$ if,
after discarding finitely many terms, it is the disjoint union
of finitely many sequences which converge
monotypically to $(\Sigma,s_*,\nu,j,v)$.
\end{definition}\rm
\begin{figure}[htp]
\centering
\includegraphics[scale=0.6]{figure-gromov}
\caption{{Gromov convergence.}}\label{fig:gromov}
\end{figure}
\begin{theorem}\label{thm:gromovConvergence}
Let $(\Sigma,s_*,\nu,j,v)$ be a stable map,
$(\pi:Q\to B,S_*,H,b_0)$ be a universal unfolding,
${u_0:\Sigma\to Q_{b_0}}$ be a desingularization with induced
structures $s_*$, $\nu$, $j$, and $v$ on $\Sigma$, and
$(\Sigma_k,s_{k,*},\nu_k,j_k,v_k)$ be a sequence of stable maps.
Then the following are equivalent.
\begin{description}
\item[(i)]
The sequence $(\Sigma_k,s_{k,*},\nu_k,j_k,v_k)$
Gromov converges to $(\Sigma,s_*,\nu,j,v)$.
\item[(ii)]
After discarding finitely many terms, there exist $b_k\in B$
and desingularizations ${u_k:\Sigma_k\to Q_{b_k}}$ inducing
$s_{k,*}$, $\nu_k$, $j_k$, $v_k$ such that $b_k$ converges to~$b_0$.
\end{description}
If~(i) holds with a sequence of deformations
$\phi_k:\Sigma_k\setminus\gamma_k\to\Sigma$
then the sequence $u_k$ in~(ii) can be chosen such that
$u_k(\gamma_k)$ converges to the nodal set in $Q_{b_0}$
and $u_k\circ\phi_k^{-1}:\Sigma\setminus\cup\nu\to Q$ converges
to $u_0|(\Sigma\setminus\cup\nu)$ in the $C^{\infty}$ topology.
\end{theorem}
\begin{proof}
We prove~(ii) implies~(i).
Recall from the statement of the theorem the desingularization
$u_0:\Sigma\to Q_{b_0}$.
Assume that $b_k$ converges to $b_0$ and that
$u_k:\Sigma_k\to Q_{b_k}$ is a sequence of desingularizations
inducing $s_{k,*}$, $\nu_k$, $j_k$, $v_k$.
As in the proof of~\cite[Theorem~13.6]{RS}
there are maps $\psi_b:Q_b\to Q_{b_0}$ and deformations
$\phi_k:\Sigma_k\setminus \gamma_k\to\Sigma$
such that $\psi_b$ agrees with a smooth trivialization away from
the nodal set, $\psi_{b_0}$ is the identity, and
$$
u_0\circ\phi_k = \psi_{b_k}\circ u_k:
\Sigma_k\setminus \gamma_k\to Q_{b_0}.
$$
Assume w.l.o.g. that the sequence $\phi_k$ is monotypic
so that there is a subset $\gamma\subset\cup\nu$ such
that $\phi_k:\Sigma_k\setminus \gamma_k\to\Sigma\setminus\gamma$
is a diffeomorphism.
As in~\cite{RS} the sequence $\phi_k(s_{k,{\mathsf{i}}})$
converges to $s_{\mathsf{i}}$ in $\Sigma$ and
the sequence $(\phi_k)_*j_k$ of complex structures
on $\Sigma\setminus \gamma$
converges to $j|(\Sigma\setminus \gamma)$ in the $C^{\infty}$ topology.
Now
$
\psi_{b_k}^{-1}\circ u_0= u_k\circ\phi_k^{-1}
$
so
$$
H\circ\psi_{b_k}^{-1}\circ u_0
= H\circ u_k\circ\phi_k^{-1}=v_k\circ\phi_k^{-1}.
$$
Since $\psi_{b_0}$ is the identity the left hand side
(and hence also $(\phi_k)_*v_k=v_k\circ\phi_k^{-1}$)
converges to $v|(\Sigma\setminus \gamma)$
in the $C^{\infty}$ topology on $C^{\infty}(\Sigma\setminus\gamma,M)$.
We prove~(i) implies~(ii) under the additional hypothesis that
the marked nodal Riemann surface $(\Sigma,s_*,\nu,j)$
is stable. By the uniqueness of universal unfoldings
we may assume that $(\pi,S_*,H,b_0)$ is given by~(\ref{eq:BQS}).
By assumption, the sequence $(\Sigma_k,s_{k,*},\nu_k,j_k)$
obtained by discarding the maps $v_k$ consists
of stable marked nodal Riemann surfaces and it
DM-converges to $(\Sigma,s_*,\nu,j)$ as in~\cite[Definition~13.3]{RS}.
Hence Theorem~13.6 in~\cite{RS} asserts that there exists
a sequence $a_k\in A$ converging to $a_0$ and,
for sufficiently large $k$, desingularizations
$w_k:\Sigma_k\to P_{a_k}$ inducing the structures
$s_{k,*}$, $\nu_k$, $j_k$ on $\Sigma_k$.
By~\cite[Remark~13.9]{RS}, the desingularizations
$w_k$ can be chosen such that the sequence
$$
w_k\circ\phi_k^{-1}:\Sigma\setminus\cup\nu\to P
$$
converges to $w_0$ in the $C^{\infty}$ topology.
Define ${h_k:P_{a_k}\to M}$
and ${h_0:P_{a_0}\to M}$~by
$$
h_k\circ w_k:=v_k,\qquad h_0\circ w_0:=v_0.
$$
Since $w_k\circ\phi_k^{-1}$ converges to $w_0$,
the sequence $\phi_k\circ w_k^{-1}\circ\rho_{a_k}^{-1}$
(with $\rho$ as in~\ref{hardyTrivialization})
converges to $w_0^{-1}$ in the $C^{\infty}$ topology
on $\Omega=P_{a_0}''$. This implies that the sequence
$$
h_k\circ\rho_{a_k}^{-1} = (v_k\circ\phi_k^{-1})
\circ(\phi_k\circ w_k^{-1}\circ\rho_{a_k}^{-1})
$$
converges to $v_0\circ w_0^{-1}=h_0$ in the
$C^{\infty}$ topology on $\Omega$.
By definition of $\mathcal{V}''$, this implies
$$
b_k := (a_k,\beta_k)\in\mathcal{V}'\cap\mathcal{V}''=B,\qquad
\beta_k:=h_k|\Gamma_{a_k}\in\mathcal{V}_{a_k}''
$$
for $k$ sufficiently large. Here we have also used the fact
that $h_k|P'_{a_k}$ takes values in $\mathcal{V}'$ for large $k$, by
the {\it (Energy)} axiom and the standard compactness arguments
for pseudoholomorphic curves (see~\cite[Chapter~4]{MS}).
Since $a_k$ converges to $a_0$ and
$\beta_k\circ\rho_{a_k}^{-1}|{\partial}\Omega$ converges
to $\beta_0:=h_0|\Gamma_{a_0}=h_0|{\partial}\Omega$,
we deduce that $b_k$ converges to $b_0:=(a_0,\beta_0)$.
Thus we have proved that~(i) implies~(ii) under the assumption
that the marked nodal Riemann surface $(\Sigma,s_*,\nu,j)$
is stable.
We prove~(i) implies~(ii) in general.
Suppose the sequence $(\Sigma_k,s_{k,*},\nu_k,j_k,v_k)$
Gromov converges to $(\Sigma,s_*,\nu,j,v)$ and the underlying
marked nodal Riemann surface $(\Sigma,s_*,\nu,j)$ is not
stable. Then we can add marked points to $\Sigma_k$
and $\Sigma$ such that the resulting sequence still
Gromov converges and the augmented marked nodal Riemann
surface $(\Sigma,s_*,t_*,\nu,j)$ is stable. By what we have
already proved, the augmented sequence
$(\Sigma_k,s_{k,*},t_{k,*},\nu_k,j_k,v_k)$ satisfies~(ii).
Let $(\pi_A:P\to A,R_*,T_*,H_A,a_0)$ be a universal
unfolding of the augmented stable map. Removing the
additional sections $T_*$ results in an unfolding that is no longer
universal but, by definition of universal, admits a morphism to
$(\pi:Q\to B,S_*,H,b_0)$. Hence the original sequence
$(\Sigma_k,s_{k,*},\nu_k,j_k,v_k)$ also satisfies~(ii).
This proves the theorem.
\end{proof}
\section{Concluding remarks}
It would be interesting to know to what extent the techniques developed
in this paper extend to the nonintegrable case. Since the linearized
Cauchy--Riemann operators $D_v$ are not complex linear in this case
the resulting moduli space will at best be a smooth (not a complex)
orbifold. In the definition of a universal unfolding we can
at most expect the existence of a smooth morphism
${(\Phi,\phi):\pi_A\to\pi_B}$. An analogue of the universal unfolding
theorem (Theorem~\ref{thm:universal}) for the nonintegrable case
will depend on an answer to the following question.
Given an almost complex structure $J$ on ${\mathbb{R}}^{2{\mathsf{m}}}$
and a complex number $z\in\INT({\mathbb{D}})$ define the set
$$
\mathcal{N}_z:= \left\{(\xi,\eta)\in
H^s(S^1,{\mathbb{R}}^{2{\mathsf{m}}})^2\,\Bigg|\,
\begin{array}{l}
\exists\mbox{ a $J$-holomorphic map } \\
v:N_z\to{\mathbb{R}}^{2{\mathsf{m}}}\mbox{ in } H^{s+1/2} \\
\mbox{s.t. }\xi=v\circ\iota_{1,z},\;
\eta=v\circ\iota_{2,z}
\end{array}
\right\}
$$
where $N_z$ is as in~\ref{standardNode}.
It is easy to prove that this set is a smooth submanifold
of $H^s(S^1,{\mathbb{R}}^{2{\mathsf{m}}})\times H^s(S^1,{\mathbb{R}}^{2{\mathsf{m}}})$
for every $z$. A natural question to ask is if the disjoint union
$$
\mathcal{N} := \bigcup_{z\in\INT({\mathbb{D}})}\{z\}\times\mathcal{N}_z
$$
is a smooth submanifold of
$\INT({\mathbb{D}})\times H^s(S^1,{\mathbb{R}}^{2{\mathsf{m}}})\times H^s(S^1,{\mathbb{R}}^{2{\mathsf{m}}})$.
In Lemma~\ref{le:localmodel} this was proved in the integrable
case. However, we have examples of finite dimensional analogues
where this fails. On the other hand, we expect that the Hadamard
proof of the unstable manifold theorem carries over
to the infinite dimensional setting and shows that the $\mathcal{N}_z$
form a continuous family of smooth submanifolds. This would
give an alternative approach to the gluing theorem for
pseudoholomorphic curves. Moreover, one could then carry
over the techniques of this paper to prove that, in the
almost complex case, the regular stable maps form a $C^0$ orbifold.
\subsection{Kernel of the type of Green's function}\label{subsection:2.1}
Let $r \geq 0$ be an integer and
assume that the kernel $\kappa$ of the integral operator $\mathcal {K}$ defined in (\ref{eq:1.1}) has the following properties.
\begin{enumerate}
\item Let $ \Psi = [0, 1] \times [0, 1] \times \R.$ The partial derivative
$ \displaystyle {\ell (s, t, u) = \frac {\partial \kappa ( s, t, u) } { \partial u}}$
is continuous for all $ (s, t, u) \in \Psi. $
\item Let
$ \Psi_1 = \{ (s, t, u): 0 \leq t \leq s \leq 1, \; u \in \R \},\;\;\;
\Psi_2 = \{ (s, t, u): 0 \leq s \leq t \leq 1, \; u \in \R \}.$
There are functions $\ell_i \in C^{r+1} ( \Psi_i ), i = 1, 2, $ with
$$
\ell (s,t, u) = \left\{ {\begin{array}{ll}
\ell_1 (s, t, u), \;\;\; (s, t, u) \in \Psi_1, \\
\ell_2 (s, t, u), \;\;\; (s, t, u) \in \Psi_2.
\end{array}}\right.
$$
\item
There are two functions $\kappa_i \in C^{r+1} ( \Psi_i ), i = 1, 2, $ such that
$$
\kappa (s,t, u) = \left\{ {\begin{array}{ll}
\kappa_1 (s, t, u), \;\;\; (s, t, u) \in \Psi_1, \\
\kappa_2 (s, t, u), \;\;\; (s, t, u) \in \Psi_2.
\end{array}}\right.
$$
\item
$ \displaystyle {\frac { \partial^2 \kappa} {\partial u^2} \in C ( \Psi).}$
\end {enumerate}
Following Atkinson-Potra \cite{AtkP1}, if the kernel $ \kappa$ satisfies the above conditions,
then we say that $ \kappa$ is of class $\mathcal{G}_2 ({r+1}, 0).$
Let $ f \in C^{r+1} [0, 1].$ Then, by Corollary 3.2 of Atkinson-Potra \cite{AtkP1}, $\varphi \in C^{r+1} [0, 1].$ If $ r = 0,$ then it is assumed that $ f \in C^{2} [0, 1]$ so that $\varphi \in C^{2} [0, 1].$ We assume that $\mathcal{K}$ is twice Fr\'echet differentiable and that $1$ is not an eigenvalue of $\mathcal {K}' (\varphi).$
\subsection{Nystr\"{o}m approximation}
Let $m \in \mathbb{N}$ and
consider the following uniform partition of $[0, 1]:$
\begin{equation}\label{eq:2.1}
0 < \frac{1} {m} < \cdots < \frac{m-1} {m} < 1.
\end{equation}
Let
$\displaystyle { \tilde{h} = \frac {1} {m} \;\; \mbox {and} \;\; s_i = \frac {i} {m}, \;\; i = 0, \ldots, m.}$
Consider a basic quadrature rule of the form
\begin{equation}\nonumber
\int_0^1 f (t) d t \approx \sum_{q=1}^{\rho} w_q f (\mu_q),
\end{equation}
where the weights $w_q > 0$ and the nodes $\mu_q \in [0, 1].$ It is assumed that the
quadrature rule is exact
at least for polynomials of degree $\leq 2 r. $
Then $\displaystyle{\sum_{q=1}^{\rho} w_q = 1}.$
A composite integration rule with respect to the partition (\ref{eq:2.1})
is then defined as
\begin{eqnarray}\label{eq:2.2}
\int_0^1 f (t) d t &=& \sum_{i =1}^m \int_{s_{i -1}}^{s_i} f (t) d t
\approx \tilde h \sum_{i=1}^m \sum_{q =1}^{\rho} w_q \; f (\zeta_q^i ), \;\;\; \zeta_q^i = s_{i -1} + \mu_q \tilde{h}.
\end{eqnarray}
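As a numerical illustration (not part of the paper), the composite rule (\ref{eq:2.2}) with a Gauss--Legendre basic rule can be sketched as follows; the function name \texttt{composite\_gauss} and its arguments are ours.

```python
import numpy as np

def composite_gauss(f, m, rho):
    # Basic rho-point Gauss-Legendre rule mapped from [-1, 1] to [0, 1]:
    # nodes mu_q in [0, 1], positive weights w_q summing to 1.
    x, w = np.polynomial.legendre.leggauss(rho)
    mu, wq = (x + 1.0) / 2.0, w / 2.0
    h = 1.0 / m                                # tilde{h} = 1/m
    s = h * np.arange(m)                       # left endpoints s_{i-1}
    zeta = s[:, None] + mu[None, :] * h        # zeta_q^i = s_{i-1} + mu_q * tilde{h}
    return h * np.sum(wq[None, :] * f(zeta))   # eq. (2.2)
```

With $\rho = r+1$ points the basic Gauss rule is exact for polynomials of degree $\leq 2r+1$, which covers the exactness of degree $\leq 2r$ assumed above.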
We replace the integral in (\ref{eq:1.1}) by the numerical quadrature formula (\ref{eq:2.2}) and define
the Nystr\"{o}m operator as
\begin{equation}\nonumber
\mathcal{K}_m (x) (s) = \tilde {h} \sum_{i=1}^m \sum_{q=1}^\rho w_q \; \kappa \left (s, \zeta_q^i, x \left (\zeta_q^i \right ) \right), \;\;\; s \in [0, 1].
\end{equation}
Note that $\mathcal{K}_m$ is twice Fr\'echet differentiable and
\begin{equation}\nonumber
\mathcal {K}_m' (x) v (s) = \tilde {h} \sum_{i=1}^m \sum_{q=1}^\rho w_q \; \frac {\partial \kappa } {\partial u} (s, \zeta_q^i, x (\zeta_q^i)) v (\zeta_q^i), \;\;\; s \in [0, 1],
\end{equation}
and
\begin{equation}\nonumber
\mathcal {K}_m'' (x) v (s) = \tilde {h} \sum_{i=1}^m \sum_{q=1}^\rho w_q \; \frac {\partial^2 \kappa } {\partial u^2} (s, \zeta_q^i, x (\zeta_q^i)) v (\zeta_q^i), \;\;\; s \in [0, 1].
\end{equation}
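The Nystr\"{o}m operator itself is a finite sum and is cheap to evaluate. A minimal sketch, assuming a Gauss--Legendre basic rule as above (the helper \texttt{quad\_nodes} and all names are ours):

```python
import numpy as np

def quad_nodes(m, rho):
    # All composite quadrature nodes zeta on [0, 1] and the corresponding
    # composite weights tilde{h} * w_q, flattened into single vectors.
    x, w = np.polynomial.legendre.leggauss(rho)
    mu, wq = (x + 1.0) / 2.0, w / 2.0
    h = 1.0 / m
    zeta = ((h * np.arange(m))[:, None] + mu[None, :] * h).ravel()
    W = h * np.tile(wq, m)
    return zeta, W

def nystrom_K(kappa, x, s, m, rho):
    # K_m(x)(s) = tilde{h} * sum_{i, q} w_q * kappa(s, zeta_q^i, x(zeta_q^i))
    zeta, W = quad_nodes(m, rho)
    return np.sum(W * kappa(s, zeta, x(zeta)))
```

The derivatives $\mathcal{K}_m'$ and $\mathcal{K}_m''$ are obtained in the same way by replacing $\kappa$ with $\partial\kappa/\partial u$ and $\partial^2\kappa/\partial u^2$.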
For $ \delta_0 > 0, $ let
$ \displaystyle {\mathcal{B}(\varphi ,\delta_0) = \{ \psi \in \mathcal{X}: \| \varphi - \psi \|_\infty <
\delta_0 \}.}$
Define
$$ C_1 = \max_{\stackrel {s, t \in [0, 1]}{|u| \leq \|\varphi \|_\infty + \delta_0 }}
\left | \frac {\partial \kappa } {\partial u} (s, t, u) \right | \; \mbox {and} \;\;\; C_2 = \max_{\stackrel {s, t \in [0, 1]}{|u| \leq \|\varphi \|_\infty + \delta_0 }}
\left | \frac {\partial^2 \kappa } {\partial u^2} (s, t, u) \right |. $$
Then for $x, y \in \mathcal{B}(\varphi ,\delta_0),$
\begin{eqnarray}\label{eq:2.3}
\|\mathcal {K}_m' (x) v \|_\infty \leq C_1 \|v\|_\infty
\end{eqnarray}
and
\begin{eqnarray}\label{eq:2.4}
\|\mathcal {K}_m' (x) - \mathcal {K}_m' (y)\| \leq C_2 \|x - y \|_\infty.
\end{eqnarray}
Even though the numerical quadrature is assumed to be exact for polynomials of degree $\leq 2 r,$ since the kernel
lacks smoothness along $s = t,$ we only have the following order of convergence from Atkinson-Potra \cite {AtkP2}:
If $x \in C^2 [0, 1],$ then
\begin{equation}\label{eq:2.5}
\|\mathcal{K} (x) - \mathcal{K}_m (x) \|_\infty = O \left (\tilde{h}^2 \right ).
\end{equation}
In the Nystr\"{o}m method, (\ref{eq:1.2}) is approximated by
\begin{equation}\nonumber
x_m - \mathcal{K}_m (x_m) = f.
\end{equation}
For all sufficiently large $m,$ the above equation has a unique solution $\varphi_m$ in $\mathcal{B} (\varphi, \delta_0 )$
and
\begin{eqnarray}\label{eq:2.6}
\|\varphi - \varphi_m \|_\infty \leq C_3 \|\mathcal{K} (\varphi) - \mathcal{K}_m (\varphi ) \|_\infty = O \left (\tilde{h}^{2}\right).
\end{eqnarray}
See Atkinson \cite{Atk1}.
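In practice the Nystr\"{o}m equation $x_m - \mathcal{K}_m(x_m) = f$ reduces to a nonlinear system for the node values $x_m(\zeta_q^i)$. The sketch below (our own construction, not the paper's) solves this system by Picard iteration, which converges when the iteration contracts; the test kernel $\kappa(s,t,u) = stu$ is linear in $u$ with exact solution $\varphi(s) = 3s/2$ for $f(s)=s$.

```python
import numpy as np

def nystrom_solve(kappa, f, m, rho, iters=200):
    # Solve x - K_m(x) = f for the node values X_j = x_m(zeta_j) by the
    # Picard iteration X <- f + K_m(X); this assumes the iteration
    # contracts (e.g. a Lipschitz constant of K_m smaller than 1).
    x, w = np.polynomial.legendre.leggauss(rho)
    mu, wq = (x + 1.0) / 2.0, w / 2.0
    h = 1.0 / m
    zeta = ((h * np.arange(m))[:, None] + mu[None, :] * h).ravel()
    W = h * np.tile(wq, m)                 # composite weights
    F = f(zeta)
    X = F.copy()
    for _ in range(iters):
        K = (W[None, :] * kappa(zeta[:, None], zeta[None, :], X[None, :])).sum(axis=1)
        X = F + K
    return zeta, X
```

For the test kernel the node values are reproduced exactly, since the quadrature is exact for the quadratic integrand.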
We quote the following result from Krasnoselskii et al \cite{KraV} for future reference:
If $v_1, v_2 \in \mathcal{B} (\varphi, \delta_0),$ then by the generalized Taylor's theorem,
\begin{eqnarray}\label{eq:2.7}
\mathcal{K}_m ( v_2 ) (s) - \mathcal{K}_m ( v_1 ) (s) - \mathcal{K}_m' ( v_1) (v_2 - v_1) (s) &= &
R (v_2 - v_1) (s), \; s \in [0, 1],
\end{eqnarray}
where
\begin{eqnarray}\nonumber
&&R (v_2 - v_1) (s)
= \int_0^1 {(1 - \theta) } \mathcal {K}_m^{''} \left (v_1 + \theta
(v_2 - v_1) \right ) (v_2 - v_1)^2 (s) d \theta.
\end{eqnarray}
It then follows that
\begin{equation}\label{eq:2.8}
\| R (v_2 - v_1) \|_\infty \leq C_2 \|v_2 - v_1\|_\infty^2.
\end{equation}
\subsection {Discrete orthogonal projection}
Let $n \in \mathbb{N}$ and consider the following uniform partition of $[0, 1]:$
\begin{equation}\label{eq:2.9}
\Delta: 0 < \frac{1} {n} < \cdots < \frac{n-1} {n} < 1.
\end{equation}
Define
\begin{equation}\nonumber
t_j = \frac {j} {n}, \;\;\;\Delta_j = [t_{j-1}, t_j] \;\;\; \mbox {and} \;\;\; h = t_{j} - t_{j-1} = \frac {1} {n}, \;\;\; j = 1, \ldots, n.
\end{equation}
For $r \geq 0,$ let $\mathcal{X}_n$ denote the space of piecewise polynomials of degree $\leq r $ with respect to
the partition (\ref{eq:2.9}) of $[0, 1].$ Assume that the values at
$t_j^-, \; j = 1, \ldots, n,$ are defined by continuity. Then the dimension of $\mathcal{X}_n$ is $ n (r+1).$
\noindent
For $\eta = 0, 1, \ldots, r,$ let $L_\eta $ denote the Legendre polynomial of degree $\eta$ on $[-1, 1].$ Define
\begin{eqnarray}\nonumber
\varphi_{1, \eta} (t) &=& \left\{ {\begin{array}{ll}
\sqrt {\frac {2} { h}} L_\eta \left ( \frac {2 t - t_1 - t_0} {h} \right ), \;\;\; t \in [t_{0}, t_1], \\
0, \;\;\; \mbox{otherwise}.
\end{array}}\right.
\end{eqnarray}
For $j = 2, \ldots, n,$ and for $\eta = 0, 1, \ldots, r,$ define
\begin{eqnarray}\nonumber
\varphi_{j, \eta} (t) &=& \left\{ {\begin{array}{ll}
\sqrt {\frac {2} { h}} L_\eta \left ( \frac {2 t - t_j - t_{j-1}} {h} \right ), \;\;\; t \in (t_{j-1}, t_j], \\
0, \;\;\; \mbox{otherwise}.
\end{array}}\right.
\end{eqnarray}
Note that $\displaystyle { \|\varphi_{j, \eta} \|_\infty = \max_{t \in [t_{j-1}, t_j]} |\varphi_{j, \eta} (t)| = \sqrt {\frac {2} { h}} \|L_\eta\|_\infty.}$
From now onwards we assume that $m = p n$ for some $p \in \N.$ Thus each interval $\Delta_j$ is
divided into $p$ equal parts and the integral over each interval $\Delta_j$ is approximated by using the quadrature formula (\ref {eq:2.2}) restricted to the interval $\Delta_j.$
For $f, g \in C (\Delta_j),$
define
\begin{equation}\label{eq:2.10}
\inp {f} {g}_{\Delta_j} = \tilde {h} \sum_{\nu = 1}^p \sum_{q = 1}^{\rho} w_q ~ f {\left (\zeta_q^{(j-1) p + \nu} \right )} ~ g{\left (\zeta_q^{(j-1) p + \nu} \right )},
\end{equation}
where $\zeta_q^{(j-1) p + \nu}$ are defined in (\ref{eq:2.2}).
Note that $\displaystyle {\inp {f} {g}_{\Delta_j} = \inp {g} {f}_{\Delta_j}.}$
Define
\begin{equation}\nonumber
\| f \|_{\Delta_j, \infty} = \max_{t \in [t_{j-1}, t_j]} |f (t)|.
\end{equation}
Then
\begin{equation}\label{eq:2.11}
\left |\inp {f} {g}_{\Delta_j} \right | \leq \| f \|_{\Delta_j, \infty} \| g \|_{\Delta_j, \infty} h.
\end{equation}
Since the quadrature rule is exact for polynomials of degree $\leq 2 r ,$ it follows that
$$ \inp {\varphi_{j, \eta}}{\varphi_{j, \eta'}} = \int_{0}^{1} \varphi_{j, \eta} (t) \varphi_{j, \eta'} (t) d t =
\inp {\varphi_{j, \eta}}{\varphi_{j, \eta'}}_{\Delta_j}.$$
Thus,
$\displaystyle {\{\varphi_{j, \eta}, j = 1, \ldots, n, \; \eta = 0, \ldots, r \}}$
forms an orthonormal basis for $\mathcal{X}_n \subset L^\infty [0, 1].$
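The basis and the discrete inner product (\ref{eq:2.10}) can be checked numerically. In the sketch below (names ours) we take $L_\eta$ to be the $L^2$-normalized Legendre polynomial on $[-1,1]$, i.e.\ the standard polynomial multiplied by $\sqrt{(2\eta+1)/2}$, which is the normalization under which the stated orthonormality holds.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def phi(j, eta, t, n):
    # phi_{j, eta} on Delta_j = [(j-1)/n, j/n], built from the
    # L^2-normalized Legendre polynomial of degree eta on [-1, 1].
    h = 1.0 / n
    t0, t1 = (j - 1) * h, j * h
    s = (2 * t - t1 - t0) / h                       # affine map Delta_j -> [-1, 1]
    Leta = legval(s, np.eye(eta + 1)[eta]) * np.sqrt((2 * eta + 1) / 2.0)
    return np.where((t >= t0) & (t <= t1), np.sqrt(2.0 / h) * Leta, 0.0)

def discrete_inner(f, g, j, n, p, rho):
    # The quadrature inner product <f, g>_{Delta_j} of eq. (2.10):
    # Delta_j is split into p parts, each integrated by rho-point Gauss.
    x, w = leggauss(rho)
    mu, wq = (x + 1.0) / 2.0, w / 2.0
    ht = 1.0 / (p * n)
    left = (j - 1) / n + ht * np.arange(p)
    zeta = left[:, None] + mu[None, :] * ht
    return ht * np.sum(wq[None, :] * f(zeta) * g(zeta))
```

Since a product of two basis polynomials has degree $\leq 2r$, a basic rule that is exact for degree $\leq 2r$ makes the discrete inner product agree with the $L^2$ one on $\mathcal{X}_n$, as noted above.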
Let $\mathcal{P}_{r, \Delta_j}$ denote the space of polynomials of degree $\leq r $ on $\Delta_j.$
Define the discrete orthogonal projection $Q_{n,j}: C (\Delta_j) \rightarrow \mathcal{P}_{r, \Delta_j}$ as follows:
\begin{equation}\nonumber
Q_{n, j} x = \sum_{\eta = 0}^r \inp {x} {\varphi_{j, \eta}}_{\Delta_j} \varphi_{j, \eta}.
\end{equation}
It follows that
\begin{eqnarray}\nonumber
\inp {Q_{n, j} x} {y}_{\Delta_j} = \inp { x} {Q_{n, j} y}_{\Delta_j}, \;\;\; Q_{n, j}^2 = Q_{n, j} \;\;\; \mbox {and} \;\;\; Q_{n, j} Q_{n, i} = 0 \;\;\; \mbox {for} \;\;\; i \neq j.
\end{eqnarray}
Also,
\begin{eqnarray}\nonumber
\|Q_{n, j} x\|_{\Delta_j, \infty} &\leq& \sum_{\eta = 0}^r \left | \inp {x} {\varphi_{j, \eta}}_{\Delta_j} \right |
\|\varphi_{j, \eta}\|_{\Delta_j, \infty}
\leq \left (2 \sum_{\eta = 0}^r \|L_\eta\|_\infty^2 \right ) \|x\|_\infty.
\end{eqnarray}
A discrete orthogonal projection $Q_n: C[0, 1] \rightarrow \mathcal{X}_n$ is defined as follows:
\begin{eqnarray}\label{eq:2.12}
Q_n x = \sum_{j=1}^n Q_{n, j} x.
\end{eqnarray}
Using the Hahn-Banach extension theorem, as in Atkinson et al \cite{AtkG}, $Q_n$ can be extended to $L^\infty [0, 1].$
Then
\begin{eqnarray}\label{eq:2.13}
Q_n^2 = Q_n \;\;\;\mbox{and} \;\;\; \|Q_n\| \leq 2 \sum_{\eta = 0}^r \|L_\eta\|_\infty^2 = C_4.
\end{eqnarray}
The following estimate is standard:
If $f \in C^{r+1} (\Delta_j),$ then we have,
\begin{eqnarray}\label{eq:2.14}
\|(I - Q_{n,j})f \|_{\Delta_j, \infty} &\leq& C_5 \|f^{(r+1)} \|_{\Delta_j, \infty} h^{r+1}.
\end{eqnarray}
Thus, if $f \in C^{r+1} [0, 1],$ then
\begin{eqnarray}\label{eq:2.15}
\|(I - Q_{n})f \|_{ \infty} &=& O \left ( h^{r+1} \right ).
\end{eqnarray}
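The local projection $Q_{n,j}$ is an expansion in the scaled Legendre basis with coefficients computed by the discrete inner product. The following sketch (our own names; $L_\eta$ again taken $L^2$-normalized on $[-1,1]$) reproduces polynomials of degree $\leq r$ exactly, so the error in (\ref{eq:2.14}) comes entirely from the nonpolynomial part of $f$.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

def project(f, j, n, p, rho, r):
    # Return Q_{n,j} f as a callable on Delta_j = [(j-1)/n, j/n].
    h = 1.0 / n
    t0 = (j - 1) * h
    x, w = leggauss(rho)
    mu, wq = (x + 1.0) / 2.0, w / 2.0
    ht = h / p
    zeta = (t0 + ht * np.arange(p))[:, None] + mu[None, :] * ht
    ref = (2 * zeta - (2 * t0 + h)) / h          # affine map Delta_j -> [-1, 1]
    norm = np.sqrt((2 * np.arange(r + 1) + 1) / h)
    # Basis values at the quadrature nodes, shape (r+1, p, rho).
    B = np.array([norm[k] * legval(ref, np.eye(r + 1)[k]) for k in range(r + 1)])
    coeff = ht * np.sum(wq * B * f(zeta), axis=(1, 2))   # <f, phi_{j,k}>_{Delta_j}
    def Qf(t):
        s = (2 * t - (2 * t0 + h)) / h
        return sum(c * nk * legval(s, np.eye(r + 1)[k])
                   for k, (c, nk) in enumerate(zip(coeff, norm)))
    return Qf
```

For example, with $r = 2$ the quadratic $f(t) = t^2$ is reproduced exactly on $\Delta_j$ whenever the basic rule is exact for the degree-$2r$ integrands in the coefficients.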
\subsection{Discrete Projection Methods}
We define below the discrete versions of various projection methods given in Section 1 by replacing the
integral operator $ \mathcal{K}$ by the Nystr\"{o}m operator $\mathcal{K}_m$ and the orthogonal projection
$\pi_n$ by the discrete orthogonal projection $Q_n.$
\noindent
Discrete Galerkin Method:
\begin{equation}\nonumber
z_n^G - Q_n \mathcal{K}_m (z_n^G) = Q_n f.
\end{equation}
Discrete Iterated Galerkin Method:
\begin{equation}\nonumber
{z}_n^S - \mathcal{K}_m (Q_n {z}_n^S) = f.
\end{equation}
The discrete modified projection operator is defined as
\begin{equation}\label{eq:2.16}
\tilde {\mathcal{K}}_n^M (x) = Q_n \mathcal{K}_m (x) + \mathcal{K}_m (Q_n x) - Q_n \mathcal{K}_m (Q_n x).
\end{equation}
Discrete Modified Projection method:
\begin{equation}\label{eq:2.17}
{z_n^M - \tilde {\mathcal{K}}_n^M (z_n^M) = f.}
\end{equation}
\noindent
Discrete Iterated Modified Projection method:
\begin{equation}\label{eq:2.18}
{\tilde{z}_n^M = {\mathcal{K}}_m (z_n^M) + f.}
\end{equation}
\setcounter{equation}{0}
\section{Piecewise polynomial approximation : $\mathbf{ r \geq 1 }$ }
In this section we consider the case $r \geq 1$ and obtain orders of convergence in the discrete modified projection method and its iterated version.
\subsection{Preliminary results}
In Proposition 3.1 we first obtain an error estimate for the term
$\| \mathcal {K}_m' (\varphi) (I - Q_n) v \|_\infty, $ where $v \in C^{r+1} [0, 1].$ Note that the term
$\mathcal {K}_m' (\varphi) (I - Q_n) v (s)$ needs to be treated differently depending upon whether $s$ is a partition point of the partition (\ref{eq:2.9}) or $s \in (t_{i-1}, t_i)$ for some $i.$ Using this result, we obtain an error estimate for the term
$\| \mathcal {K}_m' (\varphi) (I - Q_n) \mathcal {K}_m' (\varphi) (I - Q_n)v \|_\infty $ in
Proposition 3.2. These estimates are used in obtaining orders of convergence of $z_n^M$ and $\tilde{z}_n^M.$
Let
\begin{equation}\nonumber
\ell_* (s, t) = \ell (s, t, \varphi (t)), \;\;\; 0 \leq s, t \leq 1,
\end{equation}
where $ \displaystyle {\ell (s, t, u) = \frac {\partial \kappa ( s, t, u) } { \partial u}}$. Then
$$
\ell_* (s,t) = \left\{ {\begin{array}{ll}
\ell_{1, *} (s, t) = \ell_1 (s, t, \varphi (t)), \;\;\; 0 \leq t \leq s \leq 1, \\
\ell_{2, *} (s, t) = \ell_2 (s, t, \varphi (t)), \;\;\; 0 \leq s \leq t \leq 1.
\end{array}}\right.
$$
Since $\varphi \in C^{r+1} [0, 1],$ it follows that
$$ \ell_{1, *} \in C^{r+1} ( \{ 0 \leq t \leq s \leq 1\}) \; \mbox {and} \; \ell_{2, *} \in C^{r+1} ( \{ 0 \leq s \leq t \leq 1\}).$$
\noindent
We introduce the following notation.
For a fixed $s \in [0, 1],$ define
\begin{equation}\nonumber
\ell_{*, s} (t) = \ell_* (s, t), \; t \in [0, 1].
\end{equation}
Note that
\begin{eqnarray}\nonumber
\mathcal {K}_m' (\varphi) v (s) & = & \tilde {h} \sum_{j = 1}^n \sum_{\nu =1}^p \sum_{q=1}^\rho w_q \;
\ell_* (s, \zeta_q^{(j-1) p + \nu} ) v (\zeta_q^{(j-1) p + \nu}) = \sum_{j=1}^n \inp {\ell_{*, s}} {v}_{\Delta_j}.
\end{eqnarray}
Let
$$ C_6 = \max_{1 \leq j \leq r+1} \left \{ \max_{\stackrel { 0 \leq t \leq s \leq 1 }{|u| \leq \|\varphi \|_\infty }}
\left | D^{(0,j, 0)} \ell_{1} (s, t, u) \right |, \max_{\stackrel { 0 \leq s \leq t \leq 1 }{|u| \leq \|\varphi \|_\infty }}
\left | D^{(0, j, 0)} \ell_{2} (s, t, u) \right | \right \}.$$
The following proposition is crucial. It will be used several times in what follows.
\begin{proposition}\label{prop:3.1}
If $v \in C^{ r + 1} [0, 1],$
then
\begin{eqnarray}\label{eq:3.1}
\| \mathcal {K}_m' (\varphi) (I - Q_n) v \|_\infty
& \leq & (C_5)^2 C_6 \|v^{(r+1)} \|_\infty h^{r +3}.
\end{eqnarray}
\end{proposition}
\begin{proof}
For $s \in [0, 1],$
\begin{eqnarray*}
\mathcal {K}_m' (\varphi) (I - Q_n ) v (s)
& = & \sum_{j=1}^n \inp { \ell_{*, s}} {(I - Q_{n,j} )v}_{\Delta_j} = \sum_{j=1}^n \inp {(I - Q_{n,j} ) \ell_{*, s}} {(I - Q_{n,j} )v}_{\Delta_j}.
\end{eqnarray*}
Case 1: $ s = t_i$ for some $i \in \left\{0, 1, \ldots, n \right\}.$ Then
$\ell_{*, s} \in C^{r+1} (\Delta_j)$
for $j = 1, \ldots, n.$ Since $v \in C^{r+1} [0, 1],$
it follows from (\ref{eq:2.14}) that
\begin{eqnarray}\nonumber
|\mathcal{K}_m' (\varphi) (I - Q_n ) v (s) | &\leq& \sum_{j=1}^n \|(I - Q_{n,j} ) \ell_{*, s}\|_{\Delta_j, \infty}
\|(I - Q_{n, j}) v\|_{\Delta_j, \infty} h\\\label{eq:3.2}
& \leq & (C_5)^2 C_6 \|v^{(r+1)}\|_\infty h^{2 r + 2}.
\end{eqnarray}
Case 2: $s \in (t_{i-1}, t_i)$ for some $i \in \left\{1, 2, \ldots, n \right\}.$ We write
\begin{eqnarray}\nonumber
\mathcal {K}_m' (\varphi) (I - Q_n ) v (s) & = &
\sum_{\stackrel {j=1} {j \neq i}}^n \inp { (I - Q_{n,j} ) \ell_{*, s}} {(I - Q_{n,j} )v}_{\Delta_j}\\\label{eq:3.3}
&&~ + \inp { (I - Q_{n,i} ) \ell_{*, s}} {(I - Q_{n,i} )v}_{\Delta_i}.
\end{eqnarray}
For $j \neq i,$ $\ell_{*, s} \in C^{r+1} (\Delta_j)$ and $v \in C^{r+1} (\Delta_j).$
Hence
\begin{eqnarray}\label{eq:3.4}
\left |\sum_{\stackrel {j=1} {j \neq i}}^n \inp { (I - Q_{n,j} ) \ell_{*, s}} {(I - Q_{n,j} )v}_{\Delta_j} \right |
\leq (C_5)^2 C_6 \|v^{(r+1)}\|_\infty (n-1) h^{2 r + 3}.
\end{eqnarray}
We now consider the case $j = i.$ Note that $\ell_{*, s}$ is only continuous on $[t_{i-1}, t_i].$
Define a constant function:
$$ g_i (t) = \ell_{*, s} (s) , \;\; t \in [t_{i-1}, t_i].$$
Note that
$\hspace*{0.5 cm}
\displaystyle {\inp { (I - Q_{n,i} ) \ell_{*, s}} {(I - Q_{n,i} )v}_{\Delta_i} = \inp { \ell_{*, s} - g_i} {(I - Q_{n,i} )v}_{\Delta_i}.}$
For $t \in [t_{i-1}, t_i],$
\begin{eqnarray*}
\ell_{*, s} (t) - g_i (t)
& = & \left\{ {\begin{array}{ll}
D^{(0,1)} \ell_{1, *} (s, \theta_t) (t - s), \;\;\; \theta_t \in (t, s), \\
D^{(0,1)} \ell_{2, *} (s, \eta_t ) (t - s), \;\;\; \eta_t \in (s, t).
\end{array}}\right.
\end{eqnarray*}
Thus,
\begin{eqnarray}\label{eq:3.5}
\left |\inp { (I - Q_{n,i} ) \ell_{*, s}} {(I - Q_{n,i} )v}_{\Delta_i} \right |
& \leq & C_5 C_6 \|v^{(r+1)} \|_\infty h^{r+3}.
\end{eqnarray}
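In more detail: the discrete inner product over $\Delta_i$ carries total weight $\tilde{h} \sum_{\nu = 1}^p \sum_{q=1}^\rho w_q = \tilde{h} p = h,$ while the mean value expansions above give $\|\ell_{*, s} - g_i\|_{\Delta_i, \infty} \leq C_6 h$ (with $C_6$ bounding $|D^{(0,1)} \ell_{1, *}|$ and $|D^{(0,1)} \ell_{2, *}|$) and $\|(I - Q_{n,i}) v\|_{\Delta_i, \infty} \leq C_5 \|v^{(r+1)}\|_\infty h^{r+1}$ by (\ref{eq:2.14}). Hence

```latex
\left |\inp { \ell_{*, s} - g_i} {(I - Q_{n,i} )v}_{\Delta_i} \right |
\leq h \, \|\ell_{*, s} - g_i\|_{\Delta_i, \infty} \, \|(I - Q_{n,i} ) v\|_{\Delta_i, \infty}
\leq C_5 C_6 \|v^{(r+1)} \|_\infty h^{r+3}.
```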
Without loss of generality, let $C_5 \geq 1.$
From (\ref{eq:3.3}), (\ref{eq:3.4}) and (\ref{eq:3.5}) we obtain,
\begin{eqnarray}\nonumber
|\mathcal {K}_m' (\varphi) (I - Q_n ) v (s)|
& \leq & (C_5)^2 C_6 \|v^{(r+1)} \|_\infty h^{r + 3}.
\end{eqnarray}
Combining the above estimate with (\ref{eq:3.2}) we obtain the required result.
\end{proof}
\begin{proposition}\label{prop:3.2}
If $v \in C^{ r + 1} [0, 1],$
then
\begin{equation}\label{eq:3.6}
\| \mathcal {K}_m' (\varphi) (I - Q_n ) \mathcal {K}_m' (\varphi) (I - Q_n) v \|_\infty =
O \left ( h^{r + 5} \right ).
\end{equation}
Also,
\begin{eqnarray}\label{eq:3.7}
\left \| \mathcal {K}_m' (\varphi) (I - Q_n ) \mathcal {K}_m' (\varphi) \right \| = O \left ( h^{ 2 } \right ).
\end{eqnarray}
\end{proposition}
\begin{proof}
The proof of (\ref{eq:3.6}) is similar to that of (\ref{eq:3.1}). For $s \in [0, 1],$ we write
\begin{eqnarray*}
\mathcal {K}_m' (\varphi) (I - Q_n ) \mathcal {K}_m' (\varphi) (I - Q_n ) v (s)
& = & \sum_{j=1}^n \inp { (I - Q_{n,j} ) \ell_{*, s} } { \mathcal {K}_m' (\varphi) (I - Q_n ) v}_{\Delta_j}.
\end{eqnarray*}
If $s = t_i$ for some $i,$ then using (\ref{eq:2.14}) and (\ref{eq:3.1})
we obtain
\begin{eqnarray}\nonumber
\left | \mathcal {K}_m' (\varphi) (I - Q_n ) \mathcal {K}_m' (\varphi) (I - Q_n ) v (t_i) \right |
\leq (C_5)^3 (C_6)^2 \|v^{(r+1)} \|_\infty h^{2 r + 4}.
\end{eqnarray}
If $s \in (t_{i-1}, t_i),$ then we write
\begin{eqnarray*}
\mathcal {K}_m' (\varphi) (I - Q_n ) \mathcal {K}_m' (\varphi) (I - Q_n ) v (s)
& = & \sum_{\stackrel {j=1} {j \neq i}}^n \inp { (I - Q_{n,j} ) \ell_{*, s}} { \mathcal {K}_m' (\varphi) (I - Q_n ) v}_{\Delta_j}\\
& + & \inp { \ell_{*, s} - g_i } { \mathcal {K}_m' (\varphi) (I - Q_n ) v}_{\Delta_i}.
\end{eqnarray*}
Proceeding as in the proof of Proposition 3.1, we obtain
\begin{eqnarray}\nonumber
|\mathcal {K}_m' (\varphi) (I - Q_n ) \mathcal {K}_m' (\varphi) (I - Q_n ) v (s) |
&\leq & (C_5)^3 (C_6)^2 \|v^{(r+1)} \|_\infty h^{ r + 5 }.
\end{eqnarray}
The estimate (\ref{eq:3.6}) follows from the above two estimates.
In order to prove (\ref{eq:3.7}), consider $v \in C [0, 1].$ Let
$ s = t_i$ for some $i.$ Then
\begin{eqnarray}\label{eq:3.8}
\left | \mathcal {K}_m' (\varphi) (I - Q_n ) v (s) \right |
& \leq & C_5 C_6 \|v \|_\infty h^{ r + 1 }.
\end{eqnarray}
Now
let $s \in (t_{i-1}, t_i). $ We write
\begin{eqnarray}\nonumber
\mathcal {K}_m' (\varphi) (I - Q_n ) v (s)
= \sum_{\stackrel {j=1} {j \neq i}}^n \inp { (I - Q_{n,j} ) \ell_{*, s}} { v}_{\Delta_j}
+ \inp { (I - Q_{n,i} ) (\ell_{*, s} - g_i) } {v}_{\Delta_i}
\end{eqnarray}
and obtain
\begin{eqnarray}\nonumber
|\mathcal {K}_m' (\varphi) (I - Q_n ) v (s) |
&\leq & (1 + C_4 + C_5) C_6 \|v \|_\infty h^{ 2 }.
\end{eqnarray}
Combining (\ref{eq:3.8}) and the above estimate, we obtain
\begin{eqnarray}\label{eq:3.9}
\|\mathcal {K}_m' (\varphi) (I - Q_n ) v \|_\infty
&\leq & (1 + C_4 + C_5) C_6 \|v \|_\infty h^{ 2 }.
\end{eqnarray}
Since from (\ref{eq:2.3}),
$ \| \mathcal {K}_m' (\varphi) v\|_\infty \leq C_1 \|v\|_\infty,$
we obtain
\begin{eqnarray}\nonumber
\|\mathcal {K}_m' (\varphi) (I - Q_n ) \mathcal {K}_m' (\varphi) v \|_\infty
&\leq & \; C_1 (1 + C_4 + C_5) C_6 \|v \|_\infty h^{ 2}
\end{eqnarray} and the required result follows by taking the supremum over the unit ball in $C[0, 1].$
\end{proof}
\subsection{Error in the discrete modified projection method}
As in Grammont \cite{Gram1}, it can be shown that there is a $\delta_0 > 0$ such that (\ref{eq:2.17}) has a unique
solution $z_n^M$ in $\mathcal{B} (\varphi, \delta_0)$ and that
\begin{eqnarray}\label{eq:3.10}
\| z_n^M - \varphi\|_\infty
& \leq & 6 \left \| \left (I - \mathcal{K}' (\varphi) \right )^{-1} \right \| \left ( \|\mathcal{K} (\varphi) -
\mathcal{K}_m (\varphi)\|_\infty + \|\mathcal{K}_m (\varphi) -\tilde{\mathcal{K}}_n^M (\varphi)\|_\infty \right ).
\end{eqnarray}
In the following theorem, we obtain the order of convergence of the discrete modified projection solution.
\begin{theorem}\label{thm:3.3}
Let $r \geq 1,$ $ \kappa$ be of class $\mathcal{G}_2 ({r+1}, 0)$
and $ f \in C^{r+1} [0, 1].$
Let $\varphi$ be the unique solution of (\ref{eq:1.2}) and assume that $1$ is not an eigenvalue of $\mathcal{K}' (\varphi).$
Let $\mathcal{X}_n$ be the space of piecewise polynomials of degree $\leq r $ with respect to the partition (\ref{eq:2.9})
and $Q_n$ be the discrete orthogonal projection defined by
(\ref{eq:2.12}). Let $z_n^M $ be the discrete modified projection solution in $\mathcal{B} (\varphi, \delta_0 ).$ Then
\begin{equation}\label{eq:3.11}
\|z_n^M - \varphi \|_\infty = O (\max \{\tilde{h}^{2}, h^{r + 3}\}).
\end{equation}
\end{theorem}
\begin{proof}
From (\ref{eq:2.5}),
\begin{eqnarray}\label{eq:3.12}
\|\mathcal{K} (\varphi) - \mathcal{K}_m (\varphi)\|_\infty = O \left ( \tilde{h}^{2} \right ).
\end{eqnarray}
Since $\varphi \in C^{r+1} [0, 1],$ it follows from (\ref{eq:2.15}) that $\displaystyle {\| Q_n \varphi - \varphi \|_\infty = O (h^{r+1}).}$
Note that
\begin{eqnarray}\nonumber
\|\mathcal{K}_m (\varphi) - \tilde{\mathcal{K}}_n^M (\varphi) \|_\infty
& \leq & \| (I - Q_n) (\mathcal{K}_m (Q_n\varphi) - \mathcal{K}_m (\varphi) - \mathcal {K}_m' (\varphi) (Q_n \varphi - \varphi ) )\|_\infty\\\nonumber
&& + \| (I - Q_n) \mathcal {K}_m' (\varphi) (Q_n \varphi - \varphi )\|_\infty.
\end{eqnarray}
From (\ref{eq:2.7}), (\ref{eq:2.8}) and (\ref{eq:2.15}),
\begin{eqnarray}\nonumber
\| \mathcal{K}_m (Q_n\varphi) - \mathcal{K}_m (\varphi) - \mathcal {K}_m' (\varphi) (Q_n \varphi - \varphi ) \|_\infty \leq {C_2 } \| Q_n \varphi - \varphi \|_\infty^2 = O (h^{2 r + 2}).
\end{eqnarray}
By (\ref{eq:2.13}) and Proposition \ref{prop:3.1},
\begin{eqnarray}\nonumber
\| (I - Q_n) \mathcal {K}_m' (\varphi) (Q_n \varphi - \varphi )\|_\infty
\leq (1 + C_4) \| \mathcal {K}_m' (\varphi) (Q_n \varphi - \varphi )\|_\infty = O (h^{r + 3}).
\end{eqnarray}
Since $r \geq 1,$ it then follows that
\begin{eqnarray}\nonumber
\|\mathcal{K}_m (\varphi) - \tilde{\mathcal{K}}_n^M (\varphi) \|_\infty = O (h^{r + 3}).
\end{eqnarray}
The required result follows from (\ref{eq:3.10}), (\ref{eq:3.12}) and the above estimate.
\end{proof}
\begin{remark}
It can be shown that
\begin{equation}\label{*}
\|z_n^G - \varphi \|_\infty = O \left (\max \left \{\tilde{h}^{2}, h^{r + 1 } \right \} \right ),
\;\;\; \|z_n^S - \varphi \|_\infty = O \left (\max \left \{\tilde{h}^{2}, h^{r +3} \right \} \right ).
\end{equation}
Thus the order of convergence of $z_n^S$ and $z_n^M$ is the same. We prove the estimate (\ref{eq:3.11}) as
it is needed for obtaining the order of convergence in the iterated discrete modified projection method.
\end{remark}
\subsection{Error in the discrete iterated modified projection method}
Note that
\begin{equation}\nonumber
\tilde{z}_n^M - \varphi_m = \mathcal {K}_m ({z}_n^M) - \mathcal {K}_m (\varphi_m).
\end{equation}
From (\ref{eq:2.7}) and (\ref{eq:2.8}),
\begin{eqnarray}\nonumber
\mathcal {K}_m ({z}_n^M) - \mathcal {K}_m (\varphi_m) = \mathcal {K}_m' (\varphi_m) (z_n^M - \varphi_m) +
O \left ( \|z_n^M - \varphi_m\|_\infty^2 \right ).
\end{eqnarray}
From (\ref{eq:2.6}) and Theorem \ref{thm:3.3}, we obtain
\begin{equation}\nonumber
\|z_n^M - \varphi_m \|_\infty \leq \|z_n^M - \varphi \|_\infty + \|\varphi - \varphi_m \|_\infty = O (\max \{\tilde{h}^{2},
h^{r + 3}\}).
\end{equation}
Thus,
\begin{eqnarray}\label{eq:3.13}
\tilde{z}_n^M - \varphi_m = \mathcal {K}_m' (\varphi_m) (z_n^M - \varphi_m) + O (\max \{\tilde{h}^{2},
h^{r + 3}\}^2).
\end{eqnarray}
We quote the following result from Kulkarni-Rakshit \cite{Kul1}:
\begin{eqnarray}\nonumber
&&\mathcal{K}_m' (\varphi_m) ( z_n^M - \varphi_m ) \\\nonumber
& = &
- \left [ I - {\mathcal{K}}_m' (\varphi_m) \right]^{-1} \mathcal{K}_m' (\varphi_m) \left \{ \mathcal {K}_m (\varphi_m) - \tilde{\mathcal{K}}_n^M (\varphi_m) \right \}\\\nonumber
&&+ \left [ I - {\mathcal{K}}_m' (\varphi_m) \right]^{-1} \mathcal{K}_m' (\varphi_m) \left \{
\tilde{\mathcal{K}}_n^M (z_n^M) - \tilde{\mathcal{K}}_n^M (\varphi_m) - \left ( \tilde{\mathcal{K}}_n^M \right )' (\varphi_m)
(z_n^M - \varphi_m) \right \}\\\label{eq:3.14}
&& + \left [ I - {\mathcal{K}}_m' (\varphi_m) \right]^{-1} \mathcal{K}_m' (\varphi_m) \left \{
\left (\left ( \tilde{\mathcal{K}}_n^M \right )' (\varphi_m) - \mathcal{K}_m' (\varphi_m) \right ) (z_n^M - \varphi_m) \right \}.
\end{eqnarray}
We obtain below orders of convergence for the three terms in (\ref{eq:3.14}).
\begin{proposition}\label{prop:3.4}
Let $\varphi_m$ be the Nystr\"{o}m solution. Then
\begin{eqnarray}\nonumber
\left \| \mathcal {K}_m' (\varphi_m) \left (\mathcal {K}_m (\varphi_m) - \tilde{\mathcal{K}}_n^M (\varphi_m) \right ) \right \|_\infty = O \left ( h^{4} \max \{\tilde{h}^{2}, h^{r +1}\} \right ).
\end{eqnarray}
\end{proposition}
\begin{proof}
Let $v \in C[0, 1].$ Then from (\ref{eq:2.4}), (\ref{eq:2.6}), (\ref{eq:2.13}) and (\ref{eq:3.9}),
\begin{eqnarray}\nonumber
\| \mathcal{K}_m' (\varphi_m)(I - Q_n) v \|_\infty &\leq& \| \left [ \mathcal{K}_m' (\varphi_m) - \mathcal{K}_m' (\varphi) \right ]
(I - Q_n) v \|_\infty
+ \| \mathcal{K}_m' (\varphi)(I - Q_n) v \|_\infty \\\nonumber
&\leq & C_2 (1 + C_4) \|v\|_\infty \|\varphi_m - \varphi \|_\infty
+ (1 + C_4 + C_5) C_6 \|v\|_\infty h^{2}\\\label{eq:3.15}
&\leq & C_7 \|v\|_\infty h^{2}.
\end{eqnarray}
Note that
\begin{eqnarray}\nonumber
\mathcal{K}_m (\varphi_m) - \tilde{\mathcal{K}}_n^M (\varphi_m)
& = & - (I - Q_n) (\mathcal{K}_m (Q_n\varphi_m) - \mathcal{K}_m (\varphi_m) - \mathcal {K}_m' (\varphi_m) (Q_n \varphi_m - \varphi_m ) )\\\label{eq:3.16}
&& - (I - Q_n) \mathcal {K}_m' (\varphi_m) (Q_n \varphi_m - \varphi_m ).
\end{eqnarray}
Let
\begin{eqnarray}\nonumber
y_n = \mathcal{K}_m (Q_n\varphi_m) - \mathcal{K}_m (\varphi_m) - \mathcal {K}_m' (\varphi_m) (Q_n \varphi_m - \varphi_m )
= R (Q_n \varphi_m - \varphi_m ).
\end{eqnarray}
Then by (\ref{eq:3.15})
\begin{eqnarray}\nonumber
\| \mathcal{K}_m' (\varphi_m)(I - Q_n) y_n \|_\infty &\leq& C_7 \|y_n\|_\infty h^{2}.
\end{eqnarray}
By (\ref{eq:2.8})
\begin{eqnarray}\nonumber
\|y_n \|_{\infty} = \|R (Q_n \varphi_m - \varphi_m )\|_{ \infty } \leq
C_2 \|Q_n \varphi_m - \varphi_m \|_\infty^2.
\end{eqnarray}
Using (\ref{eq:2.6}), (\ref{eq:2.13}) and (\ref{eq:2.15}), we obtain
\begin{eqnarray}\label{eq:3.17}
\|Q_n \varphi_m - \varphi_m \|_\infty
= O (\max \{\tilde{h}^{2}, h^{r+1 }\}).
\end{eqnarray}
Thus,
\begin{eqnarray}\label{eq:3.18}
\| \mathcal{K}_m' (\varphi_m)(I - Q_n) y_n \|_\infty = O \left ( h^{2} \max \{\tilde{h}^{2}, h^{r +1}\}^2 \right ).
\end{eqnarray}
Using (\ref{eq:3.6}) it can be checked
that
\begin{eqnarray*}\nonumber
&&\left \|\mathcal {K}_m' (\varphi_m) (I - Q_n) \mathcal {K}_m' (\varphi_m) (I - Q_n) \varphi_m \right \|_{\infty}
= O \left ( h^{4} \max \{\tilde{h}^{2}, h^{r +1}\} \right ).
\end{eqnarray*}
The required result then follows from (\ref{eq:3.16}), (\ref{eq:3.18}) and the above estimate.
\end{proof}
\begin{proposition}\label{prop:3.5}
Let $\varphi_m$ be the Nystr\"{o}m solution and $z_n^M$ be the discrete modified projection solution. Then
\begin{eqnarray}\nonumber
&&\left \|\tilde {\mathcal{K}}_n^M (z_n^M) - \tilde{\mathcal{K}}_n^M (\varphi_m) - \left ( \tilde{\mathcal{K}}_n^M \right )' (\varphi_m)
(z_n^M - \varphi_m) \right \|_\infty = O \left (\max \left \{ \tilde{h}^{2}, h^{ r + 3} \right \}^2 \right).
\end{eqnarray}
\end{proposition}
\begin{proof}
Note that for $m$ and $n$ large enough,
$\displaystyle {\varphi_m, z_n^M \in \mathcal {B} \left (\varphi, \delta_0 \right ). }$
By the generalized Taylor's theorem,
\begin{eqnarray*}
&&\tilde {\mathcal{K}}_n^M (z_n^M) (s) - \tilde{\mathcal{K}}_n^M (\varphi_m) (s) - \left ( \tilde{\mathcal{K}}_n^M \right )' (\varphi_m)
(z_n^M - \varphi_m) (s) \\
&& \hspace*{0.5 in} =
\int_0^1 ( 1 - \theta) \left ( \tilde{\mathcal{K}}_n^M \right )'' \left (\varphi_m + \theta (z_n^M - \varphi_m) \right ) (z_n^M - \varphi_m)^2
(s) \; d \theta.
\end{eqnarray*}
Hence
\begin{eqnarray}\nonumber
&&\left \|\tilde {\mathcal{K}}_n^M (z_n^M) - \tilde{\mathcal{K}}_n^M (\varphi_m) - \left ( \tilde{\mathcal{K}}_n^M \right )' (\varphi_m)
(z_n^M - \varphi_m) \right \|_\infty \\\nonumber
&& \hspace*{0.5 in} \leq \frac {1} {2} \max_{0 \leq \theta \leq 1}
\left \|\left (\tilde {\mathcal{K}}_n^M \right )'' \left (\varphi_m + \theta (z_n^M - \varphi_m) \right ) \right \| \| z_n^M - \varphi_m\|_\infty^2.
\end{eqnarray}
It can be shown that
\begin{eqnarray}\nonumber
\max_{0 \leq \theta \leq 1}
\left \|\left (\tilde {\mathcal{K}}_n^M \right )'' \left (\varphi_m + \theta (z_n^M - \varphi_m) \right ) \right \|
\leq C_8.
\end{eqnarray}
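Indeed, differentiating (\ref{eq:2.16}) twice at $x = \varphi_m + \theta (z_n^M - \varphi_m)$ gives, for $v_1, v_2 \in C[0, 1],$

```latex
\left ( \tilde{\mathcal{K}}_n^M \right )'' (x) (v_1, v_2)
= Q_n \mathcal{K}_m'' (x) (v_1, v_2) + (I - Q_n) \mathcal{K}_m'' (Q_n x) (Q_n v_1, Q_n v_2),
```

so the claimed bound follows from the uniform boundedness of $Q_n$ in (\ref{eq:2.13}), assuming, as in Section 2, that $\mathcal{K}_m''$ is bounded uniformly on $\mathcal{B} (\varphi, \delta_0).$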
We skip the remaining details.
The required result then follows from Theorem 3.3.
\end{proof}
\begin{proposition}\label{prop:3.6}
Let $\varphi_m$ be the Nystr\"{o}m solution and $z_n^M$ be the discrete modified projection solution. Then
\begin{eqnarray}\nonumber
&&\left \|\mathcal{K}_m' (\varphi_m)
\left (\left ( \tilde{\mathcal{K}}_n^M \right )' (\varphi_m) - \mathcal{K}_m' (\varphi_m) \right ) (z_n^M - \varphi_m)
\right \|_\infty
= O \left ( h^2 \max \left \{ \tilde{h}^{2}, h^{r + 3} \right \} \right).
\end{eqnarray}
\end{proposition}
\begin{proof}
Note that
\begin{eqnarray*}
\mathcal{K}_m' (\varphi_m) \left (\left ( \tilde{\mathcal{K}}_n^M \right )' (\varphi_m) - \mathcal{K}_m' (\varphi_m) \right )
& = &\mathcal{K}_m' (\varphi_m) (I - Q_n) (\mathcal{K}_m' (Q_n \varphi_m ) - \mathcal{K}_m' (\varphi_m)) Q_n \\
& - & \mathcal{K}_m' (\varphi_m) (I - Q_n) \mathcal{K}_m' (\varphi_m) (I - Q_n).
\end{eqnarray*}
Using (\ref{eq:2.3}) and (\ref{eq:3.15}) it can be shown that
$$\| \mathcal{K}_m' (\varphi_m) (I - Q_n) \mathcal{K}_m' (\varphi_m) \| = O (h^{ 2}).$$
By (\ref{eq:2.6}) and (\ref{eq:3.17}),
\begin{eqnarray*}
\|\mathcal{K}_m' (Q_n \varphi_m ) - \mathcal{K}_m' (\varphi_m)\|\leq C_2 \|Q_n \varphi_m - \varphi_m \|_\infty =
O (\max \{\tilde{h}^2, h^{r+1} \}).
\end{eqnarray*}
Since
by (\ref{eq:2.3}),
$\displaystyle { \|\mathcal{K}_m' (\varphi_m) \| \leq C_1},$
it follows that
\begin{eqnarray}\nonumber
\left \|\mathcal{K}_m' (\varphi_m) \left (\left ( \tilde{\mathcal{K}}_n^M \right )' (\varphi_m) - \mathcal{K}_m' (\varphi_m) \right ) \right \|= O (h^2).
\end{eqnarray}
The required result follows using the estimate $\|z_n^M - \varphi_m \|_\infty = O (\max \{\tilde{h}^{2}, h^{r + 3}\})$ obtained from Theorem \ref{thm:3.3} and (\ref{eq:2.6}).
\end{proof}
We now prove our main result about
the order of convergence in the discrete iterated modified projection method.
\begin{theorem}\label{thm:3.7}
Let $r \geq 1,$ $ \kappa$ be of class $\mathcal{G}_2 ({r+1}, 0)$
and $ f \in C^{r+1} [0, 1].$
Let $\varphi$ be the unique solution of (\ref{eq:1.2}) and assume that $1$ is not an eigenvalue of $\mathcal{K}' (\varphi).$
Let $\mathcal{X}_n$ be the space of piecewise polynomials of degree $\leq r $ with respect to the partition
(\ref{eq:2.9})
and $Q_n$ be the discrete orthogonal projection defined by
(\ref{eq:2.12}). Let $\tilde{z}_n^M$ be the discrete iterated modified projection solution defined by (\ref{eq:2.18}).
Then
\begin{eqnarray}\label{eq:3.19}
\|\tilde{z}_n^M - \varphi \|_\infty = O \left ( \max \left \{\tilde{h}^2,
h^{r + 5} \right \}\right).
\end{eqnarray}
\end{theorem}
\begin{proof}
We have from (\ref{eq:3.13})
\begin{eqnarray}\nonumber
\tilde{z}_n^M - \varphi_m = \mathcal {K}_m' (\varphi_m) (z_n^M - \varphi_m) + O (\max \{\tilde{h}^{2},
h^{r + 3}\}^2).
\end{eqnarray}
From (\ref{eq:3.14}) recall that
\begin{eqnarray}\nonumber
&&\mathcal{K}_m' (\varphi_m) ( z_n^M - \varphi_m ) \\\nonumber
& = &
- \left [ I - {\mathcal{K}}_m' (\varphi_m) \right]^{-1} \mathcal{K}_m' (\varphi_m) \left \{ \mathcal {K}_m (\varphi_m) - \tilde{\mathcal{K}}_n^M (\varphi_m) \right \}\\\nonumber
&&+ \left [ I - {\mathcal{K}}_m' (\varphi_m) \right]^{-1} \mathcal{K}_m' (\varphi_m) \left \{
\tilde{\mathcal{K}}_n^M (z_n^M) - \tilde{\mathcal{K}}_n^M (\varphi_m) - \left ( \tilde{\mathcal{K}}_n^M \right )' (\varphi_m)
(z_n^M - \varphi_m) \right \}\\\nonumber
&& + \left [ I - {\mathcal{K}}_m' (\varphi_m) \right]^{-1} \mathcal{K}_m' (\varphi_m) \left \{
\left (\left ( \tilde{\mathcal{K}}_n^M \right )' (\varphi_m) - \mathcal{K}_m' (\varphi_m) \right ) (z_n^M - \varphi_m) \right \}.
\end{eqnarray}
By Proposition 4.2 from Kulkarni-Rakshit \cite{Kul1}, we have
$$ \left \|\left [ I - {\mathcal{K}}_m' (\varphi_m) \right]^{-1} \right \| \leq
4 \left \| \left ( I - \mathcal{K}' (\varphi ) \right )^{-1} \right \|.$$
Hence by Proposition \ref{prop:3.4},
Proposition \ref{prop:3.5},
and Proposition \ref{prop:3.6},
\begin{eqnarray}\nonumber
&&\left \| \mathcal{K}_m' (\varphi_m) ( z_n^M - \varphi_m ) \right \|_\infty
= O \left ( \max \left \{ h^4 \max \{\tilde{h}^{2}, h^{r +1}\}, h^{2} \max \{\tilde{h}^{2}, h^{r +3}\}\right \} \right ).
\end{eqnarray}
It follows that
\begin{eqnarray}\nonumber
&&\left \| \tilde{z}_n^M - \varphi_m \right \|_\infty =
O \left ( h^{2} \max \{\tilde{h}^{2}, h^{r +3}\} \right ).
\end{eqnarray}
Since $\tilde{z}_n^M - \varphi = \tilde{z}_n^M - \varphi_m + \varphi_m - \varphi $ and $ \left \| \varphi - \varphi_m \right \|_\infty = O \left ( \tilde{h}^{2} \right ),$
the required result follows.
\end{proof}
\setcounter{equation}{0}
\section{Piecewise constant polynomial approximation: $\mathbf{r = 0}$}
In this section we assume that $ \kappa$ is of class $\mathcal{G}_2 ({2}, 0).$
If we follow the development in Section 3, then we obtain the following orders of convergence:
\begin{equation}\nonumber
\| z_n^M - \varphi \|_\infty = O (h^2 ), \;\;\;
\| \tilde{z}_n^M - \varphi \|_\infty = O ( \max \{\tilde{h}^2, h^3 \} ).
\end{equation}
But by examining the proofs more carefully, we are able to improve the above estimates.
More specifically, while for $r \geq 1$ and $v \in C^{r+1} [0, 1]$ both $\| \mathcal {K}_m' (\varphi) (I - Q_n) v \|_\infty $
and $\|(I- Q_n) \mathcal {K}_m' (\varphi) (I - Q_n) v \|_\infty $ are of the same order, for $r = 0$ we show that
$$ \| \mathcal {K}_m' (\varphi) (I - Q_n) v \|_\infty = O (h^2) \;\;\; \mbox{and} \;\;\;
\| (I - Q_n) \mathcal {K}_m' (\varphi) (I - Q_n) v \|_\infty = O (h^3).$$
This is the essential point in proving the estimates (\ref{eq:1.5}).
Consider $\mathcal{X}_n$ to be the space of piecewise constant functions with respect to the partition (\ref{eq:2.9}). Thus, $r = 0.$
We
choose the Gauss $2$-point rule as the basic quadrature rule:
\begin{equation}\nonumber
\int_0^1 f (t) d t \approx w_1 f (\mu_1) + w_2 f (\mu_2),
\end{equation}
where
\begin{eqnarray*}
w_1 = w_2 = \frac {1} {2}, \; \mu_1 = \frac {1} {2} - \frac {1} {2 \sqrt{3}}, \;
\mu_2 = \frac {1} {2} + \frac {1} {2 \sqrt{3}}.
\end{eqnarray*}
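The Gauss $2$-point rule is exact for polynomials of degree $\leq 3.$ For instance, for $f (t) = t^2,$

```latex
w_1 f (\mu_1) + w_2 f (\mu_2)
= \frac {1} {2} \left [ \left ( \frac {1} {2} - \frac {1} {2 \sqrt{3}} \right )^{\!2}
+ \left ( \frac {1} {2} + \frac {1} {2 \sqrt{3}} \right )^{\!2} \right ]
= \frac {1} {2} \left ( \frac {1} {2} + \frac {1} {6} \right ) = \frac {1} {3} = \int_0^1 t^2 \, d t.
```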
A composite integration rule with respect to the fine partition (\ref{eq:2.1})
is then defined as
\begin{eqnarray}\nonumber
\int_0^1 f (t) d t &\approx& {\tilde h} \sum_{i =1}^m \sum_{q =1}^{2} w_q \; f (\zeta_q^i ), \;\;\;
\;\;\; \zeta_q^i = s_{i -1} + \mu_q \tilde{h}.
\end{eqnarray}
Since $m = n p,$ the above
rule can be written as
\begin{equation}\nonumber
\int_{0}^1 f (t) d t \approx
\tilde {h} \ \sum_{j=1}^n \sum_{\nu = 1}^p \sum_{q = 1}^{2} w_q f \left (\zeta_q^{(j-1) p + \nu} \right ).
\end{equation}
The Nystr\"{o}m operator is defined as
\begin{eqnarray}\label{eq:4.1}
\mathcal{K}_m (x) (s) & = & \tilde {h} \sum_{j=1}^n \sum_{\nu = 1}^p \sum_{q = 1}^{2} w_q ~\kappa \left (s, \zeta_q^{(j-1) p + \nu}, x{\left ( \zeta_q^{(j-1) p + \nu} \right )} \right ).
\end{eqnarray}
Recall from (\ref{eq:2.10}) that
for $f, g \in C (\Delta_j),$
\begin{equation}\nonumber
\inp {f} {g}_{\Delta_j} = \tilde {h} \sum_{\nu = 1}^p \sum_{q = 1}^{2} w_q ~ f{\left (\zeta_q^{(j-1) p + \nu}\right )} ~ g{\left (\zeta_q^{(j-1) p + \nu} \right)}.
\end{equation}
The discrete orthogonal projection $Q_{n,j}: C (\Delta_j) \rightarrow \mathcal{P}_{0, \Delta_j}$ is defined
as follows:
\begin{eqnarray}\nonumber
(Q_{n, j} v) (t) =
\frac {1} {p } \left [ \sum_{\nu = 1}^p \sum_{q = 1}^2
w_q v \left (\zeta_q^{(j-1) p + \nu} \right ) \right ], \;\;\; t \in (t_{j-1}, t_j], \;\;\;
\end{eqnarray}
and
\begin{eqnarray}\nonumber
(Q_{n, 1} v) (0)
&=& \frac {1} {p } \left [ \sum_{\nu = 1}^p \sum_{q = 1}^2
w_q v \left (\zeta_q^{ \nu} \right ) \right ].
\end{eqnarray}
A discrete orthogonal projection $Q_n: C[0, 1] \rightarrow \mathcal{X}_n$ is defined as
\begin{eqnarray}\label{eq:4.2}
{Q_n v = \sum_{j=1}^n Q_{n, j} v.}
\end{eqnarray}
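Since $w_1 + w_2 = 1,$ we have $\displaystyle {\frac {1} {p} \sum_{\nu = 1}^p \sum_{q = 1}^2 w_q = 1},$ so each $Q_{n, j}$ reproduces constants: if $v \equiv c$ on $\Delta_j,$ then

```latex
(Q_{n, j} v) (t) = \frac {1} {p} \sum_{\nu = 1}^p \sum_{q = 1}^2 w_q \, c = c, \;\;\; t \in (t_{j-1}, t_j],
```

confirming that $Q_n$ is indeed a projection onto the space $\mathcal{X}_n$ of piecewise constant functions.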
The following result is crucial in obtaining improved orders of convergence in the discrete modified
projection method and its iterated version.
\begin{proposition}\label{prop:4.1}
If $v \in C^1 [0, 1],$ then
\begin{eqnarray}\label{eq:4.3}
\|(I - Q_{n} ) \mathcal{K}_m'(\varphi) (I - Q_n ) v \|_\infty = O (h^3),
\end{eqnarray}
\begin{eqnarray}\label{eq:4.4}
\left \| \mathcal{K}_m'(\varphi) (I - Q_n ) \mathcal{K}_m'(\varphi) (I - Q_n ) v \right \|_\infty = O (h^4).
\end{eqnarray}
\end{proposition}
\begin{proof}
Note that
\begin{eqnarray*}
\| (I - Q_n ) \mathcal{K}_m'(\varphi) (I - Q_n ) v \|_\infty
& = & \max _{1 \leq i \leq n} \max_{s \in [t_{i-1}, t_i]}
\left |(I - Q_{n, i} ) \mathcal{K}_m'(\varphi) (I - Q_n ) v (s) \right|.
\end{eqnarray*}
For $s \in [t_{i-1}, t_i],$
\begin{eqnarray} \nonumber
&&(I - Q_{n, i} ) \mathcal{K}_m'(\varphi) (I - Q_n ) v (s) \\\nonumber
&& \hspace*{1 cm} = \frac {1} {p } \sum_{\nu = 1}^p \sum_{q = 1}^2
w_q \left \{ \mathcal{K}_m'(\varphi) (I - Q_n ) v (s) - \mathcal{K}_m'(\varphi) (I - Q_n ) v \left (\zeta_q^{(i-1) p + \nu} \right ) \right \}
\\\nonumber
&& \hspace*{1 cm} = \frac {1} {p } \sum_{\nu = 1}^p \sum_{q = 1}^2 \sum_{ \stackrel {j =1} {j \neq i}}^n
w_q \inp {\ell_{*,s} - \ell_{*,\zeta_q^{(i-1) p + \nu}} } {(I - Q_{n, j} ) v}_{\Delta_j}\\\label{eq:4.5}
&& \hspace*{1 cm} + \frac {1} {p } \sum_{\nu = 1}^p \sum_{q = 1}^2
w_q \inp {\ell_{*,s} - \ell_{*,\zeta_q^{(i -1) p + \nu}} } {(I - Q_{n, i} ) v}_{\Delta_i}.
\end{eqnarray}
For $j \neq i,$
\begin{eqnarray*}
\inp {\ell_{*,s} - \ell_{*,\zeta_q^{(i-1) p + \nu}} } {(I - Q_{n, j} ) v}_{\Delta_j}
= (s - \zeta_q^{(i-1) p + \nu}) \inp {D^{(1, 0)} \ell_{*} (\eta_q^{{(i-1) p + \nu}}, \cdot) } {(I - Q_{n, j} ) v}_{\Delta_j},
\end{eqnarray*}
for some $\eta_q^{{(i-1) p + \nu}} \in (t_{i - 1}, t_i).$
Define the following constant function
$$ g_q^{{(i-1) p + \nu}} (t) = D^{(1, 0)} \ell_{*} {\left (\eta_q^{{(i-1) p + \nu}}, \frac {t_{j-1} + t_j} {2} \right )} , \; \; \;
t \in [t_{j - 1}, t_j].$$
Then
\begin{eqnarray*}
&& \inp {\ell_{*,s} - \ell_{*,\zeta_q^{(i-1) p + \nu}} } {(I - Q_{n, j} ) v}_{\Delta_j} \\
& & \hspace*{1 cm} = \left (s - \zeta_q^{(i-1) p + \nu}\right ) \inp { D^{(1, 0)} \ell_{*} (\eta_q^{(i-1) p + \nu}, \cdot) - g_q^{(i-1) p + \nu } } {(I - Q_{n, j} ) v}_{\Delta_j}.
\end{eqnarray*}
From (\ref{eq:2.11}) and (\ref{eq:2.14}),
\begin{eqnarray*}
&& \left | \inp {\ell_{*,s} - \ell_{*,\zeta_q^{(i-1) p + \nu}} } {(I - Q_{n, j} ) v}_{\Delta_j} \right |
\leq C_5 \left ( \max_{s \neq t} \left |D^{(1, 1)} \ell_{*} (s, t)
\right | \right ) \|v'\|_{ \infty} h^4.
\end{eqnarray*}
On the other hand, from (\ref{eq:3.5}) with $r=0,$
\begin{eqnarray*}
\left | \inp {\ell_{*,s} - \ell_{*,\zeta_q^{(i-1) p + \nu}} } {(I - Q_{n, i} ) v}_{\Delta_i} \right |
& \leq & 2 C_5 C_6 \|v'\|_\infty h^3, \;\;\; \nu = 1, \ldots, p.
\end{eqnarray*}
Thus, from (\ref{eq:4.5}) and the above two estimates,
\begin{eqnarray}\nonumber
\|(I - Q_{n, i} ) \mathcal{K}_m'(\varphi) (I - Q_n ) v \|_{\Delta_i, \infty}
& \leq & C_5 \max\left\{ 2 C_6, \max_{s \neq t} \left |D^{(1, 1)} \ell_{*} (s, t)
\right | \right\}
\|v'\|_\infty h^3.
\end{eqnarray}
This completes the proof of (\ref{eq:4.3}).
In order to prove (\ref{eq:4.4}), as before we consider two cases.
If $s = t_i$ for some $i,$ then
\begin{eqnarray*}
\mathcal{K}_m'(\varphi) (I - Q_n ) \mathcal{K}_m'(\varphi) (I - Q_n ) v (s)
& = & \sum_{j=1}^n \inp { (I - Q_{n,j} ) \ell_{*,s} } { (I - Q_{n,j} ) \mathcal{K}_m'(\varphi) (I - Q_n )v}_{\Delta_j}.
\end{eqnarray*}
If $s \in (t_{i-1}, t_i),$ then we write
\begin{eqnarray}\nonumber
\mathcal{K}_m'(\varphi) (I - Q_n ) \mathcal{K}_m'(\varphi) (I - Q_n ) v (s)
& = & \sum_{\stackrel {j=1} {j \neq i}}^n \inp { (I - Q_{n,j} ) \ell_{*,s} } { (I - Q_{n,j} ) \mathcal{K}_m'(\varphi) (I - Q_n )v}_{\Delta_j}\\\nonumber
& + & \inp { \ell_{*,s} - g_i} { (I - Q_{n,i} )\mathcal{K}_m'(\varphi) (I - Q_n )v}_{\Delta_i}.
\end{eqnarray}
Proceeding as in the proof of Proposition 3.2 and using the estimate (\ref{eq:4.3}), we obtain the required result.
\end{proof}
\begin{theorem}\label{thm:4.2}
Let $ \kappa$ be of class $\mathcal{G}_2 (2, 0)$
and $ f \in C^{2} [0, 1].$
Let $\varphi$ be the unique solution of (\ref{eq:1.2}) and assume that $1$ is not an eigenvalue of $\mathcal{K}' (\varphi).$
Let $\mathcal{X}_n$ be the space of piecewise constant functions with respect to the partition (\ref{eq:2.9})
and $Q_n: L^\infty [0, 1] \rightarrow \mathcal {X}_n$ be the discrete orthogonal projection defined by (\ref{eq:4.2}).
Let $z_n^M $ be the discrete modified projection solution in $\mathcal{B} (\varphi, \delta_0 ).$ Then
\begin{equation}\label{eq:4.6}
\|z_n^M - \varphi \|_\infty = O (\max \{\tilde{h}^2, h^3 \}).
\end{equation}
\end{theorem}
\begin{proof}
Recall from (\ref{eq:3.10}) that
\begin{eqnarray}\nonumber
\| z_n^M - \varphi\|_\infty
& \leq & 6 \left \| \left (I - \mathcal{K}' (\varphi) \right )^{-1} \right \| \left ( \|\mathcal{K} (\varphi) -
\mathcal{K}_m (\varphi)\|_\infty + \|\mathcal{K}_m (\varphi) -\tilde{\mathcal{K}}_n^M (\varphi)\|_\infty \right ).
\end{eqnarray}
From (\ref{eq:2.5}) we have
\begin{eqnarray}\label{eq:4.7}
\|\mathcal{K} (\varphi) - \mathcal{K}_m (\varphi)\|_\infty = O (\tilde{h}^2).
\end{eqnarray}
On the other hand,
\begin{eqnarray}\nonumber
\|\mathcal{K}_m (\varphi) - \tilde{\mathcal{K}}_n^M (\varphi) \|_\infty
& \leq & \| (I - Q_n) (\mathcal{K}_m (Q_n\varphi) - \mathcal{K}_m (\varphi) - \mathcal {K}_m' (\varphi) (Q_n \varphi - \varphi ) )\|_\infty\\\label{new1}
&& + \| (I - Q_n) \mathcal {K}_m' (\varphi) (Q_n \varphi - \varphi )\|_\infty.
\end{eqnarray}
Recall from (\ref{eq:2.7}) that
\begin{eqnarray}\nonumber
\mathcal{K}_m (Q_n\varphi) - \mathcal{K}_m (\varphi) - \mathcal {K}_m' (\varphi) (Q_n \varphi - \varphi ) =
R ( Q_n \varphi - \varphi ),
\end{eqnarray}
where
\begin{align}\nonumber
R ( Q_n \varphi - \varphi )(s) = \int_{0}^{1} \mathcal{K}_m''(\varphi + \theta(Q_n \varphi - \varphi )) (Q_n \varphi - \varphi )^2(s) (1-\theta) d\theta.
\end{align}
Note that
\begin{align*}
&\mathcal {K}_m'' (\varphi + \theta(Q_n \varphi - \varphi )) (Q_n \varphi - \varphi )^2 (s) \\
&\hspace{1cm} = \tilde {h} \sum_{i=1}^m \sum_{q=1}^{2} w_q \; \frac {\partial^2 \kappa } {\partial u^2} \left (s, \zeta_q^i, (\varphi + \theta(Q_n \varphi - \varphi )) (\zeta_q^i) \right ) (Q_{n} \varphi - \varphi )^2 (\zeta_q^i).
\end{align*}
Define
\begin{align*}
\sigma_n (s,t) = \frac {\partial^2 \kappa } {\partial u^2} \left (s, t, \varphi(t) + \theta(Q_n \varphi - \varphi )(t) \right )
\end{align*}
and for a fixed $s \in [0, 1],$ let
\begin{equation*}
\sigma_{n, s} (t) = \sigma_n (s,t), \; t \in [0, 1].
\end{equation*}
Let
\begin{eqnarray*}
C_9 = \max\left\{ \sup_{ \stackrel {0\leq t<s\leq 1} {|u|\leq \|\varphi\|_\infty + \delta_0}}
\left |D^{(0,1,1) } \ell_1 (s, t, u) \right | , \sup_{ \stackrel {0\leq s<t\leq 1} {|u|\leq \|\varphi\|_\infty + \delta_0}}
\left |D^{(0,1,1) } \ell_2 (s, t, u) \right | \right\}.
\end{eqnarray*}
Note that
\begin{eqnarray*}
\mathcal {K}_m'' (\varphi + \theta(Q_n \varphi - \varphi )) (Q_n \varphi - \varphi )^2 (s)
& = &\sum_{j = 1}^n \left< (I-Q_{n, j})\sigma_{n, s}, ((I- Q_{n, j}) \varphi )^2 \right>_{\Delta_j}.
\end{eqnarray*}
If $s = t_i$ for some $i,$ then for all $j,$ and if $s \in (t_{i-1}, t_i)$ for some $i,$ then for $j \neq i,$ we have
\begin{eqnarray*}
\|(I-Q_{n, j})\sigma_{n, s}\|_{\Delta_j, \infty} \leq C_5 C_9 h.
\end{eqnarray*}
We then obtain
$$ \left \|\mathcal {K}_m'' (\varphi + \theta(Q_n \varphi - \varphi )) (Q_n \varphi - \varphi )^2 \right \|_\infty = O (h^3).$$
It follows that
\begin{eqnarray}\label{eq:4.8}
\| (I - Q_n) (\mathcal{K}_m (Q_n\varphi) - \mathcal{K}_m (\varphi) - \mathcal {K}_m' (\varphi) (Q_n \varphi - \varphi ) )\|_\infty
& = & O (h^3).
\end{eqnarray}
Using the estimate (\ref{eq:4.3}) of Proposition 4.1 and (\ref{new1}), we thus obtain
\begin{eqnarray*}
\|\mathcal{K}_m (\varphi) - \tilde{\mathcal{K}}_n^M (\varphi) \|_\infty = O (h^3).
\end{eqnarray*}
The required result follows from (\ref{eq:4.7}) and the above estimate.
\end{proof}
\begin{theorem}\label{thm:4.3}
Let $ \kappa$ be of class $\mathcal{G}_2 (2, 0)$
and $ f \in C^{2} [0, 1].$
Let $\varphi$ be the unique solution of (\ref{eq:1.2}) and assume that $1$ is not an eigenvalue of $\mathcal{K}' (\varphi).$
Let $\mathcal{X}_n$ be the space of piecewise constant functions with respect to the partition (\ref{eq:2.9})
and $Q_n: L^\infty [0, 1] \rightarrow \mathcal {X}_n$ be the discrete orthogonal projection defined by
(\ref{eq:4.2}). Let $\tilde{z}_n^M$ be the discrete iterated modified projection solution defined by (\ref{eq:2.18}). Then
\begin{eqnarray}\label{eq:4.9}
\|\tilde{z}_n^M - \varphi \|_\infty = O \left ( \max \left \{\tilde{h}^2, h^{4} \right \} \ \right).
\end{eqnarray}
\end{theorem}
\begin{proof}
Recall from Section 3.3 that
\begin{eqnarray}\nonumber
\tilde{z}_n^M - \varphi_m &=& \mathcal {K}_m' (\varphi_m) (z_n^M - \varphi_m) + O (\|z_n^M - \varphi \|_\infty^2).
\end{eqnarray}
Hence by Theorem 4.2,
\begin{eqnarray}\label{eq:4.10}
\tilde{z}_n^M - \varphi_m = \mathcal {K}_m' (\varphi_m) (z_n^M - \varphi_m) +
O \left (\max \{\tilde{h}^{2}, h^{ 3} \}^2 \right ).
\end{eqnarray}
We now obtain estimates for the three terms in the expression for $\mathcal {K}_m' (\varphi_m) (z_n^M - \varphi_m)$ given in (\ref{eq:3.14}).
Note that
\begin{eqnarray}\label{eq:4.11}
\|\mathcal{K}_m (\varphi_m) - \tilde{\mathcal{K}}_n^M (\varphi_m) \|_\infty
& \leq & \| (I - Q_n) (\mathcal{K}_m (Q_n\varphi_m) - \mathcal{K}_m (\varphi_m) - \mathcal {K}_m' (\varphi_m) (Q_n \varphi_m - \varphi_m ) )\|_\infty \nonumber\\
&& + \| (I - Q_n) \mathcal {K}_m' (\varphi_m) (Q_n \varphi_m - \varphi_m )\|_\infty.
\end{eqnarray}
Recall from (\ref{eq:2.7}) that
\begin{eqnarray}\nonumber
y_n = \mathcal{K}_m (Q_n\varphi_m) - \mathcal{K}_m (\varphi_m) - \mathcal {K}_m' (\varphi_m) (Q_n \varphi_m - \varphi_m ) =
R ( Q_n \varphi_m - \varphi_m ).
\end{eqnarray}
Now proceeding as in the proof of Theorem 4.2, we obtain
\begin{align*}
\|y_n \|_\infty = \|R ( Q_n \varphi_m - \varphi_m )\|_\infty = O\left( \max\left\{ \tilde{h}^2 , h^3 \right\} \right).
\end{align*}
Note that
\begin{align}\label{eq:4.12}
\|\mathcal{K}_m'(\varphi_m) (I - Q_n )y_n\|_\infty \leq C_7 \|y_n\|_\infty h = O\left(h \max\left\{ \tilde{h}^2, h^3 \right\} \right).
\end{align}
Using (\ref{eq:4.4}) it can be seen that
\begin{eqnarray*}
\left \|\mathcal {K}_m' (\varphi_m) (I - Q_n) \mathcal {K}_m' (\varphi_m) (I - Q_n) \varphi_m \right \|_{\infty} = O \left (h^4 \right ).
\end{eqnarray*}
Thus, from \eqref{eq:4.11}, \eqref{eq:4.12} and the above estimate, we obtain
\begin{eqnarray}\label{eq:4.18}
\|\mathcal{K}_m' (\varphi_m) ( \mathcal {K}_m (\varphi_m) - \tilde{\mathcal{K}}_n^M (\varphi_m) )\|_\infty = O\left(h \max\left\{ \tilde{h}^2, h^3 \right\} \right).
\end{eqnarray}
We recall the following result from Proposition 3.5:
\begin{eqnarray}\nonumber
\left \|\tilde{\mathcal{K}}_n^M (z_n^M) - \tilde{\mathcal{K}}_n^M (\varphi_m) - \left ( \tilde{\mathcal{K}}_n^M \right )' (\varphi_m) (z_n^M - \varphi_m) \right \|_\infty &\leq & C_8 \left \|z_n^M - \varphi_m \right \|_\infty^2 \\\label{eq:4.19}
& = & O (\max \{\tilde{h}^{2}, h^{ 3} \}^2).
\end{eqnarray}
Note that
\begin{eqnarray}\nonumber
\left \| \mathcal{K}_m' (\varphi_m)
\left (\left ( \tilde{\mathcal{K}}_n^M \right )' (\varphi_m) - \mathcal{K}_m' (\varphi_m) \right ) \right \| = O (h).
\end{eqnarray}
Hence
\begin{equation}\label{eq:4.20}
\left \|\mathcal{K}_m' (\varphi_m) \left (\left ( \tilde{\mathcal{K}}_n^M \right )' (\varphi_m) - \mathcal{K}_m' (\varphi_m) \right ) ( z_n^M - \varphi_m ) \right \|_\infty
= O (h \max \{\tilde{h}^2, h^3 \}).
\end{equation}
We thus obtain the following estimate using (\ref{eq:3.14}), (\ref{eq:4.18}), (\ref{eq:4.19}) and (\ref{eq:4.20}):
\begin{eqnarray*}
\|\mathcal {K}_m' (\varphi_m) (z_n^M - \varphi_m) \|_\infty = O (h \max \{\tilde{h}^2, h^3 \}).
\end{eqnarray*}
From (\ref{eq:4.10}) it follows that
\begin{eqnarray*}
\|\tilde{z}_n^M - \varphi_m \|_\infty = O (h \max \{\tilde{h}^2, h^3 \}).
\end{eqnarray*}
Since
$ \|\varphi - \varphi_m \|_\infty = O (\tilde{h}^2),$
the required result follows.
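Explicitly, the triangle inequality combines the last two estimates:
\begin{eqnarray*}
\|\tilde{z}_n^M - \varphi \|_\infty &\leq& \|\tilde{z}_n^M - \varphi_m \|_\infty + \|\varphi_m - \varphi \|_\infty \\
&=& O (h \max \{\tilde{h}^2, h^3 \}) + O (\tilde{h}^2) = O \left ( \max \left \{\tilde{h}^2, h^{4} \right \} \right),
\end{eqnarray*}
since $h \max \{\tilde{h}^2, h^3 \} = \max \{h \tilde{h}^2, h^{4} \} \leq \max \{\tilde{h}^2, h^{4} \}.$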
\end{proof}
\begin{remark}
It can be shown that
\begin{equation}\label{**}
\|z_n^G - \varphi \|_\infty = O \left (h \right ),
\;\;\; \|z_n^S - \varphi \|_\infty = O \left ({h}^{2} \right ).
\end{equation}
\end{remark}
\setcounter{equation}{0}
\section{Numerical Results}
For the sake of illustration, we quote some numerical results from Grammont et al.~\cite{Gram3} for the following example considered
in Atkinson and Potra~\cite{AtkP1}.
Consider
\begin{equation}\label{eq:5.1}
x (s) - \int_0^1 \kappa (s, t) \left [ f (t, x (t)) \right ] d t = \int_0^1 \kappa (s, t) z (t) d t, \;\;\; 0 \leq s \leq 1,
\end{equation}
where
$$
\kappa (s,t) = \left\{ {\begin{array}{ll}
(1 - s) t, & 0 \leq t \leq s \leq 1, \\
s ( 1 - t), & 0 \leq s \leq t \leq 1,
\end{array}}\right. \;\;\; \mbox{and} \;\;\; f (t, u) = \frac {1} {1 + t + u}
$$
with $z (t)$ so chosen that
$$ \varphi (t) = \frac { t (1 - t)} { t + 1}$$
is the solution of (\ref{eq:5.1}).
In this example,
$r$ can be chosen as large as we want.
\subsection{Piecewise Constant Functions ($r = 0$)}
Let $\mathcal{X}_n$ be the space of piecewise constant functions
with respect to the partition (\ref{eq:2.12}) and
$ Q_n: L^\infty [0, 1] \rightarrow \mathcal{X}_n$ be the
discrete orthogonal projection defined by (\ref{eq:4.4})-(\ref{eq:4.6}).
The numerical quadrature is chosen to be the composite Gauss 2 point rule with respect to the partition (\ref{eq:2.1}) with
$m = n^2$ subintervals. Then $\tilde{h} = h^2.$
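Indeed, assuming uniform partitions with $h = 1/n$ and $\tilde{h} = 1/m,$
$$ \tilde{h} = \frac{1}{m} = \frac{1}{n^2} = h^2, \qquad \mbox{so that} \qquad \max \left\{ \tilde{h}^2, h^4 \right\} = h^4, $$
consistent with the fourth-order convergence predicted by (\ref{eq:4.9}) for the discrete iterated modified projection method.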
In the following table, $\delta_G, \; \delta_S, \; \delta_M $ and $\delta_{IM}$ denote the computed orders of convergence in the
discrete Galerkin, discrete iterated Galerkin, discrete modified projection and discrete iterated modified projection methods, respectively. It can be seen that the computed orders of convergence
match well with the theoretically predicted values in (\ref{eq:4.6}), (\ref{eq:4.9}) and (\ref{**}).
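The computed orders of convergence are presumably obtained from the errors at successive values of $n$ in the standard way, since $n$ doubles from one row of the table to the next:
$$ \delta = \log_2 \left( \frac{\| \varphi - z_n \|_\infty}{\| \varphi - z_{2n} \|_\infty} \right). $$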
\begin{center}
Table 5.1
\bigskip
\begin{tabular} {|c|cc|cc|cc|cc|}\hline
$n$ & $\| \varphi - z_n^G \|_\infty$ & $\delta_G$ & $\| \varphi - z_n^S \|_\infty$ & $\delta_S$
& $\| \varphi - z_n^M \|_\infty$ & $\delta_M$ & $\| \varphi - \tilde{z}_n^M \|_\infty$ & $\delta_{IM}$\\
\hline
2& $ 1.22 \times 10^{-1} $ & & $ 8.40 \times 10^{-3} $ & & $ 4.34 \times 10^{-3} $ & & $ 5.23 \times 10^{-3} $ & \\
4& $ 8.65 \times 10^{-2} $ & $ 0.49 $ & $ 2.35 \times 10^{-3} $ & $ 1.84 $ & $ 4.31 \times 10^{-4} $ & $ 3.33 $ & $ 3.14 \times 10^{-4} $ & $ 4.06$\\
8& $ 5.09 \times 10^{-2} $ & $ 0.77 $ & $ 6.22 \times 10^{- 4} $ & $ 1.92 $ & $ 5.28 \times 10^{- 5 } $ & $ 3.03 $ & $ 1.89 \times 10^{-5} $ & $4.05$\\
16& $ 2.70 \times 10^{-2} $ & $ 0.91 $ & $ 1.59 \times 10^{-4} $ & $ 1.96 $ & $ 6.92 \times 10^{- 6} $ & $ 2.93 $ & $ 1.36 \times 10^{-6} $ & $3.80$\\
32& $ 1.33 \times 10^{-2} $ & $ 1.02 $ & $ 4.02 \times 10^{-5} $ & $ 1.98 $ & $ 8.38 \times 10^{- 7} $ & $ 3.05 $ & $ 4.55 \times 10^{-8} $ & $4.90$\\\hline
\end{tabular}
\end{center}
\subsection{Piecewise Linear Functions ($ r = 1$)}
Let $\mathcal{X}_n$ be the space of piecewise linear polynomials
with respect to the partition (\ref{eq:2.12}) and
$ Q_n: L^\infty [0, 1] \rightarrow \mathcal{X}_n$ be the
discrete orthogonal projection defined by (\ref{eq:2.15}).
The numerical quadrature is chosen to be the composite Gauss 2 point rule with $ n^2$ intervals for the Galerkin and the iterated Galerkin
method and the composite Gauss 2 point rule with $ n^3$ intervals for the modified projection and the iterated modified projection methods. In the latter case $\tilde{h}^2 = h^6.$ As a consequence,
it follows from (\ref{eq:3.11}), (\ref{*}) and (\ref{eq:3.19}) that the expected orders of convergence
in the discrete Galerkin, the discrete iterated Galerkin, the discrete modified projection and the discrete iterated modified projection methods are
$2, 4, 4$ and $6,$ respectively. The computational results given below match well with these orders.
\begin{center}
Table 5.2
\bigskip
\begin{tabular} {|c|cc|cc|cc|cc|}\hline
$n$ & $\| \varphi - \varphi_n^G \|_\infty$ & $\delta_G$ & $\| \varphi - \varphi_n^S \|_\infty$ & $\delta_S$
& $\| \varphi - \varphi_n^M \|_\infty$ & $\delta_M$ & $\| \varphi - \tilde{\varphi}_n^M \|_\infty$ & $\delta_{IM}$\\
\hline
2& $ 1.32 \times 10^{-1} $ & & $ 4.97 \times 10^{-3} $ & & $ 1.54 \times 10^{-3} $ & & $ 1.34 \times 10^{-3} $ & \\
4& $ 4.98 \times 10^{-2} $ & $ 1.41 $ & $ 4.46 \times 10^{-4} $ & $ 3.48 $ & $ 1.12 \times 10^{-4} $ & $ 3.78 $ & $ 1.89 \times 10^{-5} $ & $ 6.15 $\\
8& $ 1.58 \times 10^{-2} $ & $ 1.66 $ & $ 3.89 \times 10^{- 5} $ & $ 3.52 $ & $ 1.06 \times 10^{- 5 } $ & $ 3.40 $ & $ 2.48\times 10^{-7} $ & $6.25$\\
16& $ 4.51 \times 10^{-3} $ & $ 1.81 $ & $ 3.15 \times 10^{- 6} $ & $ 3.62 $ & $ 9.10 \times 10^{- 7 } $ & $ 3.54$& $ 2.92\times 10^{-9} $ & $6.41$\\
\hline
\end{tabular}
\end{center}
\section{Introduction}
The key element of effective long-time robot localization is the ability to reduce the accumulated drift when a robot revisits an already known location~\cite{tutorialPerception}.
The so-called loop closure can be performed based on a variety of sensors with the GPS and the camera being the prime examples.
The GPS signal is sometimes unavailable and therefore, the appearance-based loop closure is used to determine place similarity solely on its visual characteristic and without prior geometric assumptions.
The systems to detect loop closures using images from RGB cameras are already used in real-world scenarios~\cite{fabmap,dbow,seqslam}.
The performance of these methods depends on the image quality that degenerates at night, in adverse weather conditions, or when the sun shines directly into the lens blinding the camera.
Fortunately, most of the autonomous cars are equipped with other sensors like 3D LiDARs to provide necessary robustness in these conditions.
The scans from 3D LiDARs provide information about the geometry of the surroundings of the robot, complementing RGB images from cameras to form a more complete view of the environment.
\begin{figure}[htbp!]
\centering
\includegraphics[width=\columnwidth]{figures/catchy_camerab.png}
\caption{We compare loop closure performed on descriptors trained on RGB images (camera-based), LiDAR intensity values (LiDAR-based) and joint input of RGB images and LiDAR intensity values (camera-LiDAR-based) in varying weather conditions on multiple runs on the same trajectory using USyD dataset.
}
\label{fig:catchy_image}
\end{figure}
In contrast to camera-based loop closure, the problem of loop closure using LiDAR data is still actively researched, with most current efforts focusing on proper point cloud representation for deep learning~\cite{seqlpd,pointnetvlad,lpdnet}.
To the best knowledge of the authors, the joint RGB-LiDAR loop closure is still an unexplored research direction.
The goal of the presented article is to determine the real-world conditions in which the camera-based, LiDAR-based, and camera-LiDAR-based loop closures provide satisfactory or poor results (Fig.~\ref{fig:catchy_image}), providing the first camera-LiDAR-based loop closure pipeline.
In order to achieve this goal, we utilize the University of Sydney Campus Dataset~\cite{usyd} that provides camera images with 16-line LiDAR that was gathered across varying weather conditions.
Our processing pipeline, based on~\cite{singleview,multiview}, is used with minor modifications for all considered versions to focus on performance under changing weather conditions while reducing the influence that could stem from different processing pipelines.
The contribution of our work can be summarized as:
\begin{itemize}
\item the first experimental verification of the LiDAR-based loop closure in changing weather conditions.
\item the first experimental comparison between camera-based and LiDAR-based loop closures using similar processing pipelines on the same sequences.
\item the first multi-sensory camera-LiDAR-based loop closure system with extensive experimental verification.
\end{itemize}
\section{Related work}
The appearance-based loop closure using RGB images from cameras is a well-researched topic with several established solutions that can be divided into two groups of approaches.
The first is based on constructing a global descriptor from local features, like FABMAP~\cite{fabmap} or DBoW~\cite{dbow}.
These methods use the global descriptor with the bag of visual words (BoVW) approach to determine the similarity of locations usually based on data from a single location.
The second group is based on utilizing simpler and faster to compute global descriptors, but relying on sequences of these descriptors to achieve the desired efficiency, like SeqSLAM~\cite{seqslam} or FastABLE~\cite{wpc}.
The advent of CNNs resulted in approaches with learnable features that are more robust to weather conditions or lighting changes, e.g. as in the work of Naseer \textit{et al.}~\cite{burgardRGB}.
Due to the increasing popularity of LiDARs in the automotive industry, the 3D LiDARs are getting cheaper while at the same time, the typical number of scan lines increases, resulting in a denser representation of the environment and new application possibilities, i.e. to use them for loop closure.
Before the advent of deep learning, the feature-based approaches to LiDAR-based solutions were popular with either specific local interest points~\cite{depthLFeat1, depthLFeat2} or global frame description~\cite{depthGFeat1}.
In the deep learning era, we see further improvements in local~\cite{burgardDepth} and global~\cite{ishot} descriptors.
Notably, in~\cite{ishot}, the authors propose a descriptor that joins depth measurements with the returning signal intensity to obtain a globally invariant place descriptor.
Nevertheless, most articles on point cloud-based loop closure focus on the point cloud representation for deep learning that is used to train the descriptor, e.g. as in~\cite{seqlpd}.
The PointNet representation with NetVLAD as in~\cite{pointnetvlad} or
graph-based neighborhood aggregation as in~\cite{lpdnet} can be used, but there seems to still be room for improvement with better point cloud representations.
On the other hand, the SegMatch~\cite{segmatch} avoids the problem of proper point cloud representation by matching hand-crafted descriptors of the segmented parts of a point cloud.
In our comparison, similarly to SegMatch, we wanted to avoid the problem of proper point cloud representation for deep learning to keep our pipeline similar for RGB and LiDAR data.
Therefore, following remarks highlighting the importance of LiDAR intensity and its invariance to lighting conditions~\cite{ishot}, we focus only on the LiDAR intensity information ignoring depth measurements and utilizing 2D image representation for intensity measurements.
Our approach is based on RGB image descriptor learning with triplet loss, as in~\cite{multiview}, that is applied in the same way for both RGB and LiDAR intensity input.
With such an approach, our processing pipeline is similar to~\cite{locnet}, where CNN on range image with depth measurements from LiDAR is trained with the contrastive loss to achieve robust place descriptor.
\section{Processing pipeline}
The network used in our comparison was proposed by F\'acil \textit{et al.}~\cite{singleview,multiview} and is presented in Fig.~\ref{fig:triplet_changed}.
The network consists of three identical processing pipelines that take three $224\times224$ pixel, 3-channel images as an input.
The training input is comprised of an anchor image (reference image), an image that is a positive match, and an image that is a negative match to the anchor image.
The network is trained simultaneously with positive and negative pairs, which makes the learning process more stable and efficient.
As a result of training, the descriptors obtained from the same place are getting more similar according to the chosen Euclidean norm, while the distance of the descriptors obtained from different places increases.
The initial part of the network is a pre-trained part of the VGG-16 model extended by a fully-connected layer without activation function directly after the max-pooling layer.
For training, we use the Wohlhart-Lepetit loss (also called triplet loss) proposed in \cite{tripleloss}:
\begin{equation} \label{loss_equ}
E = \max \Bigg\{ 0, \; 1 - \frac{d_n}{margin + d_p} \Bigg\}
\end{equation}
where $E$ is the loss value, $d_n$ is the distance between the anchor and the negative input, $d_p$ is the distance between the anchor and the positive input, and \textit{margin} is an additional parameter which limits the difference between those two distances.
In all the experiments the margin parameter was set to 1. The value of the loss is bounded between 0 and 1, and the function returns 0 whenever the distance to the negative example exceeds the distance to the positive example by at least the margin.
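A minimal sketch of this loss on precomputed descriptor vectors may look as follows (illustrative only; the actual training applies it to the network outputs):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Wohlhart-Lepetit (triplet) loss on three descriptor vectors.

    d_p: Euclidean distance between anchor and positive descriptor,
    d_n: Euclidean distance between anchor and negative descriptor.
    The loss is bounded in [0, 1] and vanishes once the negative is
    sufficiently farther from the anchor than the positive.
    """
    d_p = np.linalg.norm(anchor - positive)
    d_n = np.linalg.norm(anchor - negative)
    return max(0.0, 1.0 - d_n / (margin + d_p))
```

For a well-separated negative the loss is zero, while a hard negative close to the anchor drives the loss towards one.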
The network at its final layer generates a concise descriptor of the place that is used to detect loop closures.
\begin{figure*}[h!]
\centering
\includegraphics[width=0.65\textwidth]{figures/Fig_2_v3.png}
\caption{The descriptor for each location is the last layer of the deep neural network trained with triplet loss. The overall architecture of the networks is the same for camera-based, LiDAR-based, and joint camera-LiDAR-based loop closures with each solution trained on its own database}
\label{fig:triplet_changed}
\end{figure*}
In the case of the RGB images, we only resize the available images to fit the assumed input size of the image.
In the case of the LiDAR, we only utilize the intensity channel and thus we decided to represent it as intensity images, also resized to fit the required input size.
In both cases our input is an image and thus we have a similar network pipeline for RGB and point cloud data to infer the influence of the weather conditions.
\section{Evaluation methodology}
The loop closure algorithms are usually evaluated on Nordlandsbanen~\cite{nordland}, Oxford RobotCar dataset~\cite{robotcar}, or KITTI dataset~\cite{kitti}.
Neither of these datasets fits the requirements to perform a reliable comparison between camera-based and LiDAR-based loop closures using a location description from a single image or a single LiDAR scan.
We wanted to use a single instance of measurement to create a global descriptor that later can be extended with known DBoW or NetVLAD multi-place frameworks.
The Nordlandsbanen lacks the LiDAR, the Oxford RobotCar dataset contains only 4-layer LiDARs, and KITTI does not have enough varying conditions for a reliable comparison.
Even though the depth data from the RobotCar dataset could be made denser by combining multiple scans based on odometry, we wanted to achieve an independent global descriptor for each location without the necessity to stop to gather dense LiDAR scans, i.e. as performed in~\cite{denseLaser}.
\subsection{University of Sydney Campus Dataset}
In our comparison, we utilize the University of Sydney dataset (USyd)~\cite{usyd} that contains recordings of data collected by multiple sensors: cameras, 3D LiDAR (Velodyne VLP-16), u-blox GPS, IMU, and others while driving almost the same route once a week for more than one year. Currently, it consists of over 50 recordings covering different illumination and weather conditions as well as infrastructural, environmental, and traffic variations making it a perfect experimental setup to compare the camera-based and LiDAR-based loop closures.
In our comparison, we are interested in timestamped measurements from front-facing camera images, 3D LiDAR scans, and corresponding GPS measurements.
The GPS data is converted into the local, metric coordinate system with UTM (Universal Transverse Mercator) conversion.
Since the sensors recorded information with different timestamps and different frequencies, the location from the GPS is linearly interpolated in metric coordinates to provide a location for each considered RGB image and LiDAR scan.
The role of the GPS information is to provide ground truth locations of the processed images and LiDAR scans and to make it possible to match measurements representing the same place from multiple recorded runs.
The LiDAR scans used by our processing pipeline are not motion-compensated resulting in distortions while moving with greater speed similarly to distortions observed for moving rolling-shutter cameras.
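The timestamp alignment described above can be sketched as follows (a simple sketch assuming sorted GPS timestamps and metric UTM coordinates; function and variable names are illustrative):

```python
import numpy as np

def interpolate_gps(gps_times, gps_xy, sensor_times):
    """Linearly interpolate UTM positions at sensor timestamps.

    gps_times    : (N,) sorted timestamps of GPS fixes
    gps_xy       : (N, 2) metric UTM coordinates (easting, northing)
    sensor_times : (M,) timestamps of camera images / LiDAR scans
    Returns an (M, 2) array of interpolated positions.
    """
    x = np.interp(sensor_times, gps_times, gps_xy[:, 0])
    y = np.interp(sensor_times, gps_times, gps_xy[:, 1])
    return np.column_stack([x, y])
```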
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{figures/week_linking.png}
\caption{Locations chosen in the data are separated by $d_p = 5$ meters. We consider that two places from week A and week B are matching only if the distance between them does not exceed $d_w = 10$ meters}
\label{fig:gps}
\end{figure}
We took the first recording and divided it into discrete places with each location separated by at least $d_p = 5$ meters from any other, as presented in Fig.~\ref{fig:gps}.
We omitted the places with no GPS data available, as we would not be able to use them in training or verification.
In total, we obtained 718 distinct locations in the experimental environment.
From these locations, we used 446 places for training and 163 places for testing.
To provide separation of training and testing locations, we rejected 109 places.
In some parts, the route on which the data was collected covers the same streets but in a different direction.
Although the VLP-16 records the surrounding scene in its full 360\textdegree\ range, we treated the same physical location traversed with reverse heading as a different place, to enable a direct comparison with the RGB-based solution.
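The place discretization and cross-week matching above can be sketched as follows (a simplified sketch that assumes a forward-moving trajectory and enforces the spacing only between consecutive picks; names are illustrative):

```python
import numpy as np

def select_places(positions, d_p=5.0):
    """Greedily pick locations at least d_p metres apart along one run.

    positions : (N, 2) metric coordinates along the recorded trajectory.
    Returns the indices of the selected discrete places.
    """
    selected = [0]
    for i in range(1, len(positions)):
        if np.linalg.norm(positions[i] - positions[selected[-1]]) >= d_p:
            selected.append(i)
    return selected

def is_same_place(p_a, p_b, d_w=10.0):
    """Two observations from different weeks match if within d_w metres."""
    return np.linalg.norm(np.asarray(p_a) - np.asarray(p_b)) <= d_w
```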
\subsection{Training}
We used transfer learning to train our networks with the weights (coefficients) of the VGG-16 pre-trained on the Places database~\cite{places}.
The same pre-trained model was used for RGB and LiDAR intensity images.
The idea of this procedure is to start from a trained model which, to some extent, is already capable of recognizing locations, and to adapt it to the new task.
The proposed network is trained with a triplet loss that requires defining positive and negative examples.
We consider two RGB images (or two intensity images from LiDAR) to be a positive example if the distance between their locations is within the threshold of $d_w = 10$ meters. The threshold was chosen based on the knowledge of the limited dynamic accuracy of the typical GPS.
The pairs of data from the outside of that threshold are considered negative examples.
For each location determined in the USyd dataset, we found all of the images and LiDAR scans that fit within the assumed threshold and then generated positive pairs based on these matches.
Each positive pair has an associated negative pair that in the original database was chosen randomly as long as the distance to the anchor was greater than $t_n = 50$ meters.
Based on the USyD sequences, we obtained approximately 4.3 million triplets that were used for training.
The random choice of a negative example with $t_n = 50$ meters leads to a plateau in training as the number of non-active triplets greatly exceeds the number of active cases during later stages of training.
To overcome this issue, we prepared a separate database with hard negative examples, chosen with the distance to the anchor equal to $25$ meters.
The new database was used to train the network (fine-tune) once the training plateau on the original database was observed and proved to significantly increase our recognition accuracy by approximately $7\%$ on the testing set.
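The triplet generation can be sketched as follows (function names and the hard-negative selection rule are illustrative approximations of the procedure described above):

```python
import numpy as np
import random

def make_triplets(places_a, places_b, d_w=10.0, t_n=50.0, hard=False):
    """Build (anchor, positive, negative) index triplets (sketch).

    places_a, places_b : (N, 2) / (M, 2) metric positions from two runs.
    A pair is positive when the places are closer than d_w metres;
    negatives are drawn at random beyond t_n metres, or, for the
    hard-negative database, taken near 25 m from the anchor.
    """
    triplets = []
    for i, pa in enumerate(np.asarray(places_a)):
        dists = np.linalg.norm(np.asarray(places_b) - pa, axis=1)
        positives = np.where(dists <= d_w)[0]
        if hard:
            # hard negatives: pick the candidate closest to 25 m away
            cand = np.where(dists > d_w)[0]
            if len(cand) == 0 or len(positives) == 0:
                continue
            j_neg = int(cand[np.argmin(np.abs(dists[cand] - 25.0))])
        else:
            cand = np.where(dists > t_n)[0]
            if len(cand) == 0 or len(positives) == 0:
                continue
            j_neg = int(random.choice(cand))
        for j in positives:
            triplets.append((i, int(j), j_neg))
    return triplets
```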
\section{Experimental results}
In the presented experiments, we assumed that the loop closure is trained on images but operates in an unknown environment and is expected to determine place similarity based on a single previous observation.
With this assumption, the data from one week of the USyD dataset is treated as a reference and the data from another week is considered as testing to measure loop closure recognition accuracy.
The original dataset consists of 52 weeks but only 38 of these contained correct 3D LiDAR, RGB images, and GPS data needed for reliable comparison.
Based on the dataset authors' annotations, we group the obtained results by the weather conditions observed for each week into 6 categories: sunny (\textbf{S}), cloudy (\textbf{C}), sunny/cloudy (\textbf{S/C}), after rain (\textbf{AR}), sunset (\textbf{SS}), and very cloudy (\textbf{VC}).
This clustering makes it possible to verify if and how the weather conditions influence the performance of the loop closure.
\begin{table}[htbp!]
\caption{Number of testing locations based on reference (Ref.) and testing (Test) week when divided into categories based on weather conditions: sunny (\textbf{S}), cloudy (\textbf{C}), sunny/cloudy (\textbf{S/C}), after rain (\textbf{AR}), sunset (\textbf{SS}), very cloudy (\textbf{VC})}
\label{tab:testsize}
\centering
\begin{tabular}{c|cccccc}
\diagbox{Test}{Ref.} & \textbf{S} & \textbf{C} & \textbf{S/C} & \textbf{AR} & \textbf{SS} & \textbf{VC} \\ \hline
\textbf{S} & 43738 & 16782 & 7771 & 9193 & 5063 & 2617 \\
\textbf{C} & 16782 & 5468 & 2852 & 3275 & 1851 & 953 \\
\textbf{S/C} & 7771 & 2852 & 864 & 1539 & 854 & 442 \\
\textbf{AR} & 9193 & 3275 & 1539 & 1318 & 1007 & 519 \\
\textbf{SS} & 5063 & 1851 & 854 & 1007 & 288 & 287 \\
\textbf{VC} & 2617 & 953 & 442 & 519 & 287 & 0
\end{tabular}
\end{table}
Naturally, our dataset is not well-balanced, as can be observed from the number of testing locations in Tab.~\ref{tab:testsize}; the distribution reflects the frequency of these conditions in the real world.
Since it mirrors real-world conditions, we do not take any special measures to balance the distribution in our dataset.
\subsection{Camera-based loop closure}
The accuracy of the camera-based loop closure was verified on all of the available test locations.
We compared the descriptor of the testing location to all of the descriptors of the locations available in the reference.
If the most similar location based on the similarity of the single-place descriptor was within $\pm 10$ meters of the location measured from the GPS, the found location was assumed to be correct.
In all other cases, the testing location was marked as incorrect.
The threshold of $\pm 10$ meters was chosen experimentally as a sufficient accuracy of the appearance-based solution that should converge to real metric localization if a geometric approach, like ICP~\cite{icp}, would be used.
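The evaluation protocol can be expressed compactly as follows (a sketch; the descriptor arrays stand for the network outputs and the positions for the interpolated GPS, with illustrative names):

```python
import numpy as np

def recognition_accuracy(desc_test, pos_test, desc_ref, pos_ref, tol=10.0):
    """Fraction of test places whose nearest reference descriptor
    (in Euclidean distance) lies within `tol` metres of the true
    GPS position of the test place."""
    correct = 0
    for d, p in zip(desc_test, pos_test):
        # nearest neighbour in descriptor space over all reference places
        j = np.argmin(np.linalg.norm(desc_ref - d, axis=1))
        if np.linalg.norm(pos_ref[j] - p) <= tol:
            correct += 1
    return correct / len(desc_test)
```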
\begin{table}[htbp!]
\centering
\caption{The recognition accuracy in percentages based on the reference (columns) and testing (rows) weather conditions for camera-based loop closure. Notice the lowered performance compared to average when testing in sunny conditions}
\label{tab:rgb}
\begin{tabular}{c|cccccc|c}
\textbf{Camera} & \textbf{S} & \textbf{C} & \textbf{S/C} & \textbf{AR} & \textbf{SS} & \textbf{VC} & \textbf{Mean} \\ \hline
\textbf{S} & 80.32 & 83.20 & 85.39 & 84.61 & 83.33 & 83.03 & \cellcolor{red!25}82.08 \\
\textbf{C} & 83.26 & 85.55 & 86.68 & 87.27 & 87.74 & 86.99 & 84.78 \\
\textbf{S/C} & 86.00 & 87.24 & 87.73 & 89.99 & 88.06 & 88.46 & 86.98 \\
\textbf{AR} & 83.67 & 85.74 & 89.02 & 90.14 & 89.87 & 86.71 & 85.53 \\
\textbf{SS} & 79.50 & 84.39 & 83.37 & 89.77 & 90.63 & 83.28 & \cellcolor{red!25}82.39 \\
\textbf{VC} & 83.15 & 84.78 & 88.24 & 88.25 & 86.41 & - & 84.68 \\ \hline
\textbf{Mean} & 81.82 & 84.37 & 86.15 & 86.47 & 85.66 & 84.72 & \textbf{83.49}
\end{tabular}
\end{table}
Based on all tests, the recognition accuracy of the RGB image loop closure was measured to be equal to $83.49\%$.
The exact performance depending on varying weather conditions is presented in Tab.~\ref{tab:rgb}.
The poorest performance, marked by red background color, was observed for sunny and sunset conditions.
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.7\columnwidth]{figures/LIDAR_lepszy/681_w13_w5.png}
\caption{The visual comparison of images of the same location taken during sunset (A) and in sunny conditions (B) that are challenging for camera-based loop closure due to direct camera sunlight}
\label{fig:rgbSun}
\end{figure}
In these cases, direct sunlight is a factor that can negatively influence the image acquisition process leading to overexposure that drastically changes the apparent perception of the location, i.e. as presented in Fig.~\ref{fig:rgbSun}.
There is no easy way to improve the quality of images from the chosen camera in such cases. In practice, the easiest workaround is to use another camera facing backward or to rely on another type of sensor, like LiDAR.
\subsection{LiDAR-based loop closure}
Similarly to the camera-based loop closure, we also trained and analyzed the version operating on LiDAR intensities represented as an image.
The overall successful recognition accuracy was measured to be equal to $81.11\%$, which is lower than the recognition accuracy reported for the camera-based solution.
We believe that it has to be expected as the Velodyne VLP-16 LiDAR used in the USyD dataset has only 16 independent horizontal lines that have to be significantly upscaled to match the expected input size of the network.
In the case of the camera, the original image contains more independent information that has to be downsampled to fit the input of the network.
The more in-detail results across different weather conditions are presented in Tab.~\ref{tab:lidar}.
\begin{table}[htbp!]
\caption{The recognition accuracy in percentages based on the reference (columns) and testing (rows) weather conditions for LiDAR-based loop closure. Notice the overall similar performance apart from after rain (\textbf{AR}) conditions}
\label{tab:lidar}
\begin{tabular}{c|cccccc|c}
\textbf{LiDAR} & \textbf{S} & \textbf{C} & \textbf{S/C} & \textbf{AR} & \textbf{SS} & \textbf{VC} & \textbf{Mean} \\ \hline
\textbf{S} & 81.21 & 81.27 & 82.05 & 81.05 & 80.56 & 81.47 & \cellcolor{green!25}81.25 \\
\textbf{C} & 81.22 & 78.58 & 80.29 & 78.69 & 79.15 & 79.64 & 80.24 \\
\textbf{S/C} & 82.87 & 82.71 & 82.99 & 82.00 & 81.50 & 84.16 & 82.71 \\
\textbf{AR} & 80.39 & 77.80 & 79.66 & 83.31 & 84.81 & 78.03 & \cellcolor{red!25}80.24 \\
\textbf{SS} & 81.02 & 79.90 & 77.87 & 86.30 & 89.58 & 82.58 & \cellcolor{green!25}81.39 \\
\textbf{VC} & 82.12 & 83.32 & 82.35 & 80.35 & 80.49 & - & 82.09 \\\hline
\textbf{Mean} & 81.29 & 80.55 & 81.26 & 81.15 & 81.10 & 81.05 & \textbf{81.11} \\
\end{tabular}
\end{table}
In most cases, the LiDAR-based solution performs similarly across conditions, showing some robustness to weather, with a drop when the data acquisition was performed after rain (red background), which is expected as raindrops increase the number of missing measurements in the LiDAR data.
On the other hand, the LiDAR-based loop closure is robust to changes in lighting conditions working in sunny and sunset conditions (green background).
\begin{figure}[htbp!]
\centering
\includegraphics[width=\columnwidth]{figures/530_w13_w22_w15.png}
\caption{Example location observed at different times: the LiDAR-based solution working more reliably given more structure and intense sun (A), similar performance of the camera- and LiDAR-based versions in typical conditions (B), and the camera-based version outperforming LiDAR in ideal lighting conditions (C)}
\label{fig:rgblidar}
\end{figure}
A closer look reveals that the performance of the LiDAR-based solution relies more on the geometry of the scene than on its visual appearance.
Such a case is visible in Fig.~\ref{fig:rgblidar} when a lack of a poster on the wall misguides the camera-based loop closure but the LiDAR-based version is more robust.
Nevertheless, the LiDAR-based loop closure performs overall worse than the camera-based solution.
\subsection{Camera-LiDAR-based loop closure}
The results obtained from the camera-based loop closure could be improved when information from a sensor providing good performance in sunny and sunset situations could supplement the original data.
Therefore, we verified the camera-LiDAR-based loop closure that was formed by joining a LiDAR intensity image with a camera image to form an artificial image.
The artificial image creation process is presented in Fig.~\ref{fig:fusion}.
In this artificial image, the first 16 rows contain the resized LiDAR intensity information, while the remaining 208 rows contain the resized RGB image.
Similarly to previous versions, we prepared a new training database, trained and then verified the performance of the network.
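The artificial image construction can be sketched as follows (a dependency-free sketch using nearest-neighbour resizing; the exact interpolation method used in the actual pipeline is an assumption):

```python
import numpy as np

def fuse_camera_lidar(intensity_img, rgb_img, size=224, lidar_rows=16):
    """Stack a resized LiDAR intensity image (top rows) over a resized
    RGB image into one size x size x 3 artificial input image."""
    if intensity_img.ndim == 2:
        # replicate the single intensity channel to 3 channels
        intensity_img = np.stack([intensity_img] * 3, axis=-1)

    def nn_resize(img, h, w):
        # nearest-neighbour resize via integer index maps
        rows = np.arange(h) * img.shape[0] // h
        cols = np.arange(w) * img.shape[1] // w
        return img[rows][:, cols]

    top = nn_resize(intensity_img, lidar_rows, size)
    bottom = nn_resize(rgb_img, size - lidar_rows, size)
    return np.concatenate([top, bottom], axis=0)
```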
\begin{figure}[htbp!]
\centering
\includegraphics[width=\columnwidth]{figures/Fig5_nizsze.png}
\caption{The visual representation of the camera-LiDAR-based input: the resized LiDAR intensities are joined with the resized RGB image to form the artificial image, which has the same size as the inputs of the camera-based and LiDAR-based solutions}
\label{fig:fusion}
\end{figure}
The camera-LiDAR-based loop closure achieved a correct recognition rate of $86.91\%$ on the testing sequence, exceeding the results obtained from each of the individual sensors.
The exact performance in the analyzed weather conditions was measured and is presented in Tab.~\ref{tab:rgblidar}.
\begin{table}[htbp!]
\caption{The recognition accuracy in percentages based on the reference (columns) and testing (rows) weather conditions for camera-LiDAR-based loop closure. Notice the overall increase in the performance when compared to camera-based and LiDAR-based solutions}
\label{tab:rgblidar}
\centering
\begin{tabular}{c|cccccc|c}
\textbf{\begin{tabular}[c]{@{}c@{}}Camera\\ LiDAR\end{tabular}} & \textbf{S} & \textbf{C} & \textbf{S/C} & \textbf{AR} & \textbf{SS} & \textbf{VC} & \textbf{Mean} \\ \hline
\textbf{S} & 83.86 & 86.74 & 88.25 & 87.64 & 87.70 & 88.54 & \cellcolor{green!25}85.61 \\
\textbf{C} & 86.31 & 87.75 & 90.08 & 90.14 & 90.49 & 89.93 & \cellcolor{green!25}87.67 \\
\textbf{S/C} & 88.05 & 90.32 & 89.81 & 92.92 & 90.16 & 92.08 & \cellcolor{green!25}89.38 \\
\textbf{AR} & 87.27 & 89.59 & 91.36 & 92.94 & 92.55 & 91.52 & \cellcolor{green!25}88.99 \\
\textbf{SS} & 85.52 & 90.01 & 88.41 & 92.55 & 94.79 & 90.24 & \cellcolor{green!25}87.86 \\
\textbf{VC} & 87.08 & 89.19 & 90.95 & 90.94 & 92.33 & - & \cellcolor{green!25}88.58 \\ \hline
\textbf{Mean} & 85.29 & 87.81 & 89.14 & 89.42 & 89.36 & 89.56 & \textbf{86.91}
\end{tabular}
\end{table}
The camera-LiDAR-based loop closure performs best in all of the analyzed weather conditions when compared to the camera-based and LiDAR-based solutions.
Compared to the camera-based loop closure, the presented version achieves the largest gains in sunset conditions (a $5.47$ percentage point increase in recognition rate) and sunny conditions (a $3.56$ percentage point increase).
This indicates that the additional LiDAR intensity data make the loop closure more invariant to direct sunlight.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{figures/false_recognition.png}
\caption{Examples of incorrect recognition for camera-LiDAR-based loop closure.
The input image (A) is incorrectly matched to two locations (B, C) based on the descriptor. The correct match (D) has only the third-best descriptor match. The visual comparison shows that loop closure based on a single image/scan is difficult even for a human.}
\label{fig:fusion2}
\end{figure}
We also took a closer look at cases in which the camera-LiDAR-based loop closure failed. One such case is presented in Fig.~\ref{fig:fusion2}: the correct match ranked only third among matches to the input image based on the trained descriptor.
In this case, the loop closure was not recognized due to structural changes in the environment, as the wall was no longer present in the input image.
Such real-world situations are hard to predict, but considering more than a single location descriptor could increase the loop closure recognition rate.
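The closing suggestion above can be realized with a simple top-$k$ retrieval over the learned descriptors; the Euclidean metric and $k = 3$ here are illustrative assumptions, not the paper's method:

```python
import numpy as np

def topk_loop_candidates(query: np.ndarray, db: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k database descriptors closest to the query
    (Euclidean distance). A loop closure can then be accepted if any of the
    k candidates passes a later verification step, instead of trusting only
    the single best descriptor match."""
    dists = np.linalg.norm(db - query, axis=1)
    return np.argsort(dists)[:k]
```

In the failure case of Fig.~\ref{fig:fusion2}, the correct location would be retained as the third candidate and could be recovered by such a verification stage.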
\section{Conclusions}
We present a comparison of camera-based, LiDAR-based, and joint camera-LiDAR-based loop closures across varying weather conditions on the same trajectories using the publicly available USyD dataset.
Since the processing pipeline architecture is the same for all considered solutions, we can conclude that the camera-based solution degrades in direct sunlight, while the LiDAR-based solution utilizing intensities performs similarly regardless of lighting conditions, with worse performance after rain.
These observations lead to the creation of a camera-LiDAR-based solution that performs best in all considered cases.
The presented experimental evaluation shows that multi-sensory loop closure should be considered in real-world scenarios, as it provides a more robust solution.
In our future work, we plan to utilize pipelines better suited to the input from each sensor, which could lead to a system that can be directly compared with existing state-of-the-art camera-based and LiDAR-based loop closures.
\section*{References}
\bibliographystyle{apsrev}
\section{450 $\mu$m polarization measurements for M17}
\label{app:SHARPvector}
\setcounter{table}{0}
\renewcommand*\thetable{\Alph{section}.\arabic{table}}
\begin{deluxetable}{c c c c c c c}
\tabletypesize{\footnotesize}
\tablecaption{450 $\mu$m polarization measurements for M17}
\tablecolumns{7}
\tablewidth{0pt}
\tablehead{\colhead{$\Delta\alpha$\,\tablenotemark{a}} &
\colhead{$\Delta\delta$\,\tablenotemark{a}} &
\colhead{P} &
\colhead{$\sigma_{p}$} &
\colhead{P.A.\,\tablenotemark{b}} &
\colhead{$\sigma_\text{P.A.}$} &
\colhead{I\,\tablenotemark{c}} \\
\colhead{(arcsec)} &
\colhead{(arcsec)} &
\colhead{(\%)} &
\colhead{(\%)} &
\colhead{(deg)} &
\colhead{(deg)} &
\colhead{(-)} }
\startdata
80.0 & -38.0 & 5.0 & 2.5 & -0.2 & 11.7 & 0.25\\
70.2 & -114.0 & 4.8 & 2.0 & 56.3 & 11.0 & 0.19\\
70.2 & -95.0 & 3.8 & 1.6 & 36.8 & 11.5 & 0.19\\
70.2 & -76.0 & 3.4 & 1.6 & 27.2 & 11.6 & 0.20\\
70.2 & -66.5 & 2.7 & 1.3 & 21.6 & 12.8 & 0.21\\
70.1 & -57.0 & 3.2 & 1.1 & 30.8 & 8.8 & 0.24\\
70.1 & -38.0 & 2.5 & 1.2 & 8.5 & 11.6 & 0.26\\
70.1 & -19.0 & 3.3 & 1.2 & 12.2 & 9.3 & 0.26\\
60.3 & -104.5 & 2.1 & 1.1 & 58.2 & 12.8 & 0.21\\
60.3 & -95.0 & 2.6 & 1.0 & 25.4 & 10.6 & 0.20\\
60.3 & -76.0 & 3.2 & 1.2 & 36.1 & 9.7 & 0.20\\
60.3 & -66.5 & 2.6 & 1.3 & 19.6 & 13.1 & 0.22\\
60.3 & -57.0 & 1.7 & 0.7 & 30.7 & 11.5 & 0.26\\
60.3 & -47.5 & 2.1 & 0.6 & 36.9 & 7.5 & 0.29\\
60.3 & -38.0 & 1.4 & 0.6 & 24.1 & 11.6 & 0.30\\
60.2 & -19.0 & 2.6 & 0.6 & 18.9 & 6.3 & 0.28\\
60.2 & -9.5 & 1.8 & 0.6 & 27.6 & 8.1 & 0.27\\
60.2 & -0.0 & 2.3 & 0.7 & 15.7 & 8.2 & 0.25\\
60.2 & 9.5 & 2.6 & 0.5 & 9.5 & 5.8 & 0.24\\
60.2 & 19.0 & 3.2 & 0.8 & 29.3 & 6.8 & 0.23\\
60.2 & 28.5 & 4.4 & 1.0 & 28.8 & 5.9 & 0.22\\
60.2 & 38.0 & 3.1 & 1.1 & 20.6 & 9.6 & 0.21\\
60.2 & 47.5 & 2.0 & 0.9 & 56.0 & 11.1 & 0.21\\
60.2 & 57.0 & 3.0 & 1.4 & 72.6 & 11.3 & 0.20\\
60.2 & 104.5 & 7.0 & 3.6 & 6.9 & 11.5 & 0.17\\
50.4 & -95.0 & 1.7 & 0.6 & 41.9 & 9.1 & 0.21\\
50.4 & -85.5 & 2.5 & 0.6 & 47.5 & 6.0 & 0.21\\
50.4 & -76.0 & 2.7 & 0.7 & 35.6 & 7.3 & 0.22\\
50.4 & -57.0 & 0.9 & 0.5 & 29.8 & 14.0 & 0.27\\
50.4 & -19.0 & 1.4 & 0.6 & 15.4 & 12.1 & 0.27\\
50.4 & -0.0 & 1.7 & 0.6 & 7.9 & 8.8 & 0.25\\
50.4 & 9.5 & 1.3 & 0.4 & -2.5 & 8.6 & 0.25\\
50.4 & 28.5 & 2.6 & 0.6 & 19.9 & 6.2 & 0.23\\
50.3 & 38.0 & 2.6 & 1.0 & 11.4 & 9.8 & 0.22\\
50.3 & 47.5 & 2.3 & 0.7 & 34.2 & 8.5 & 0.21\\
50.3 & 57.0 & 1.9 & 0.9 & 46.0 & 12.3 & 0.21\\
40.5 & -123.5 & 2.0 & 0.9 & 40.3 & 11.6 & 0.21\\
40.5 & -114.0 & 1.3 & 0.7 & 25.2 & 14.1 & 0.21\\
40.5 & -85.5 & 1.9 & 0.5 & 31.9 & 7.8 & 0.23\\
40.5 & -76.0 & 1.2 & 0.6 & 38.8 & 12.8 & 0.23\\
40.5 & -66.5 & 2.7 & 0.9 & 43.0 & 8.7 & 0.25\\
40.5 & -57.0 & 1.3 & 0.5 & 34.7 & 9.7 & 0.27\\
40.5 & -47.5 & 1.2 & 0.5 & 50.3 & 9.8 & 0.27\\
40.5 & -28.5 & 1.4 & 0.4 & 37.2 & 7.6 & 0.30\\
40.5 & -9.5 & 0.8 & 0.4 & -2.6 & 13.4 & 0.26\\
40.5 & -0.0 & 1.5 & 0.5 & 3.0 & 8.9 & 0.26\\
40.5 & 9.5 & 2.0 & 0.4 & 2.2 & 5.7 & 0.25\\
40.5 & 19.0 & 2.3 & 0.6 & 21.0 & 6.4 & 0.25\\
40.5 & 28.5 & 1.2 & 0.5 & 30.7 & 11.5 & 0.25\\
40.5 & 38.0 & 3.0 & 0.7 & 16.5 & 5.5 & 0.23\\
40.5 & 47.5 & 1.7 & 0.8 & 12.7 & 12.5 & 0.22\\
40.5 & 57.0 & 1.3 & 0.7 & 20.6 & 13.0 & 0.21\\
40.5 & 66.5 & 2.2 & 0.9 & 26.4 & 10.7 & 0.21\\
30.6 & -123.5 & 1.2 & 0.7 & 42.1 & 13.4 & 0.24\\
30.6 & -114.0 & 1.4 & 0.5 & 40.7 & 10.6 & 0.24\\
30.6 & -104.5 & 1.4 & 0.5 & 45.2 & 9.0 & 0.24\\
30.6 & -95.0 & 1.9 & 0.4 & 37.8 & 6.4 & 0.24\\
30.6 & -85.5 & 2.0 & 0.4 & 42.1 & 6.1 & 0.25\\
30.6 & -76.0 & 2.7 & 0.5 & 44.3 & 5.3 & 0.25\\
30.6 & -66.5 & 1.7 & 0.5 & 33.0 & 8.3 & 0.27\\
30.6 & -57.0 & 1.2 & 0.4 & 19.0 & 9.0 & 0.28\\
30.6 & -38.0 & 1.7 & 0.4 & 12.3 & 6.4 & 0.27\\
30.6 & -28.5 & 1.8 & 0.5 & 12.8 & 6.8 & 0.27\\
30.6 & -9.5 & 1.1 & 0.4 & -3.5 & 8.8 & 0.27\\
30.6 & -0.0 & 1.3 & 0.4 & 6.0 & 7.9 & 0.27\\
30.6 & 9.5 & 1.8 & 0.4 & -5.3 & 5.7 & 0.27\\
30.6 & 28.5 & 1.8 & 0.6 & 18.0 & 8.4 & 0.25\\
30.6 & 38.0 & 1.8 & 0.4 & 27.6 & 6.7 & 0.25\\
30.6 & 47.5 & 1.5 & 0.5 & 30.9 & 9.9 & 0.24\\
30.6 & 57.0 & 1.4 & 0.6 & 21.2 & 11.6 & 0.23\\
30.6 & 76.0 & 2.0 & 1.0 & -1.7 & 12.0 & 0.21\\
20.7 & -104.5 & 2.0 & 0.5 & 51.5 & 7.4 & 0.25\\
20.7 & -95.0 & 2.0 & 0.4 & 44.9 & 6.4 & 0.26\\
20.7 & -85.5 & 2.5 & 0.4 & 36.7 & 3.9 & 0.28\\
20.7 & -76.0 & 3.0 & 0.5 & 38.5 & 4.2 & 0.28\\
20.7 & -66.5 & 1.8 & 0.4 & 43.3 & 6.4 & 0.29\\
20.7 & -57.0 & 1.1 & 0.3 & 29.5 & 7.1 & 0.34\\
20.7 & -47.5 & 1.8 & 0.3 & 20.1 & 4.9 & 0.35\\
20.7 & -38.0 & 1.9 & 0.4 & 6.3 & 5.2 & 0.30\\
20.7 & -28.5 & 2.4 & 0.4 & 1.1 & 4.5 & 0.29\\
20.7 & -9.5 & 1.4 & 0.3 & 5.4 & 6.7 & 0.29\\
20.7 & -0.0 & 1.9 & 0.3 & 4.6 & 5.0 & 0.29\\
20.7 & 9.5 & 0.8 & 0.4 & -0.1 & 12.4 & 0.28\\
20.7 & 28.5 & 2.0 & 0.5 & 24.3 & 6.8 & 0.26\\
20.7 & 38.0 & 1.4 & 0.4 & 31.0 & 7.4 & 0.27\\
20.7 & 47.5 & 1.5 & 0.5 & 19.6 & 8.4 & 0.25\\
20.7 & 66.5 & 2.3 & 0.6 & 28.1 & 7.1 & 0.24\\
20.7 & 76.0 & 2.3 & 0.7 & 28.8 & 8.3 & 0.23\\
20.7 & 85.5 & 2.3 & 0.8 & 23.7 & 9.0 & 0.22\\
20.7 & 95.0 & 2.7 & 1.2 & -4.6 & 11.6 & 0.21\\
20.7 & 104.5 & 2.9 & 1.2 & -12.8 & 10.9 & 0.18\\
20.7 & 123.5 & 4.3 & 2.4 & -15.2 & 13.4 & 0.18\\
10.8 & -95.0 & 3.8 & 1.6 & 39.6 & 11.5 & 0.21\\
10.8 & -85.5 & 2.2 & 0.6 & 12.1 & 6.8 & 0.29\\
10.8 & -76.0 & 2.2 & 0.7 & 30.2 & 8.2 & 0.31\\
10.8 & -66.5 & 2.3 & 0.4 & 29.9 & 4.9 & 0.37\\
10.8 & -57.0 & 1.6 & 0.2 & 17.1 & 4.3 & 0.42\\
10.8 & -47.5 & 1.2 & 0.2 & 0.0 & 4.1 & 0.48\\
10.8 & -38.0 & 1.7 & 0.2 & -5.0 & 3.5 & 0.43\\
10.8 & -28.5 & 2.1 & 0.3 & -6.5 & 3.9 & 0.36\\
10.8 & -19.0 & 2.2 & 0.4 & 2.6 & 5.2 & 0.35\\
10.8 & -9.5 & 1.9 & 0.3 & 1.4 & 4.6 & 0.35\\
10.8 & -0.0 & 1.6 & 0.3 & 3.8 & 4.9 & 0.32\\
10.8 & 9.5 & 1.2 & 0.3 & 14.8 & 7.4 & 0.30\\
10.8 & 19.0 & 1.2 & 0.3 & 14.6 & 7.6 & 0.28\\
10.8 & 28.5 & 1.5 & 0.4 & 18.6 & 6.2 & 0.27\\
10.8 & 38.0 & 2.0 & 0.4 & 16.7 & 5.2 & 0.27\\
10.8 & 47.5 & 1.8 & 0.4 & 22.6 & 6.0 & 0.27\\
10.8 & 57.0 & 1.5 & 0.5 & 12.1 & 7.8 & 0.27\\
10.8 & 66.5 & 2.1 & 0.6 & 14.3 & 7.3 & 0.27\\
10.8 & 76.0 & 2.7 & 0.8 & 20.6 & 7.8 & 0.24\\
10.8 & 85.5 & 1.6 & 0.6 & 23.6 & 9.9 & 0.25\\
10.8 & 95.0 & 1.7 & 0.9 & 5.0 & 14.1 & 0.23\\
10.8 & 114.0 & 3.4 & 1.4 & 27.3 & 10.4 & 0.20\\
0.9 & -104.5 & 1.8 & 0.6 & 31.9 & 9.4 & 0.25\\
0.9 & -95.0 & 3.6 & 1.8 & 20.2 & 11.7 & 0.17\\
0.9 & -85.5 & 1.6 & 0.5 & 20.1 & 8.6 & 0.30\\
0.9 & -76.0 & 2.2 & 0.5 & 7.3 & 5.9 & 0.37\\
0.9 & -66.5 & 2.1 & 0.3 & 0.6 & 4.4 & 0.45\\
0.9 & -57.0 & 2.2 & 0.3 & -5.0 & 3.9 & 0.47\\
0.9 & -47.5 & 2.1 & 0.2 & -9.1 & 2.9 & 0.50\\
0.9 & -38.0 & 2.6 & 0.2 & -6.7 & 2.0 & 0.44\\
0.9 & -28.5 & 2.4 & 0.2 & -7.7 & 2.4 & 0.46\\
0.9 & -19.0 & 1.5 & 0.2 & -2.0 & 4.8 & 0.45\\
0.9 & -9.5 & 1.6 & 0.2 & -1.9 & 4.0 & 0.44\\
0.9 & -0.0 & 1.8 & 0.3 & -10.2 & 4.0 & 0.39\\
0.9 & 9.5 & 1.1 & 0.3 & -4.4 & 6.3 & 0.34\\
0.9 & 19.0 & 1.1 & 0.3 & -0.0 & 8.3 & 0.31\\
0.9 & 38.0 & 1.4 & 0.3 & 10.5 & 6.6 & 0.29\\
0.9 & 47.5 & 1.7 & 0.3 & 7.7 & 4.7 & 0.29\\
0.9 & 57.0 & 1.5 & 0.3 & 18.0 & 5.3 & 0.29\\
0.9 & 66.5 & 1.3 & 0.4 & 22.2 & 7.4 & 0.30\\
0.9 & 76.0 & 0.8 & 0.4 & 16.2 & 13.2 & 0.33\\
0.9 & 95.0 & 1.3 & 0.6 & 22.4 & 11.9 & 0.29\\
0.9 & 114.0 & 5.1 & 2.0 & 49.0 & 10.1 & 0.21\\
0.9 & 133.0 & 11.3 & 6.3 & 35.4 & 13.3 & 0.20\\
-9.0 & -76.0 & 2.9 & 0.7 & -5.2 & 6.3 & 0.37\\
-9.0 & -66.5 & 1.9 & 0.5 & -4.8 & 6.7 & 0.36\\
-9.0 & -57.0 & 1.9 & 0.4 & -11.2 & 6.2 & 0.36\\
-9.0 & -47.5 & 2.1 & 0.4 & -5.9 & 5.5 & 0.39\\
-9.0 & -38.0 & 2.5 & 0.3 & -8.1 & 3.1 & 0.46\\
-9.0 & -28.5 & 2.4 & 0.2 & -9.0 & 2.0 & 0.57\\
-9.0 & -19.0 & 2.1 & 0.2 & -16.0 & 2.2 & 0.59\\
-9.0 & -9.5 & 1.8 & 0.2 & -10.3 & 2.8 & 0.53\\
-9.0 & -0.0 & 1.4 & 0.2 & -11.6 & 4.3 & 0.50\\
-9.0 & 9.5 & 1.7 & 0.2 & -20.6 & 3.6 & 0.39\\
-9.0 & 19.0 & 1.4 & 0.2 & -16.8 & 4.9 & 0.36\\
-9.0 & 28.5 & 1.2 & 0.4 & -12.4 & 8.7 & 0.32\\
-9.0 & 38.0 & 1.7 & 0.4 & -6.0 & 6.5 & 0.32\\
-9.0 & 57.0 & 1.2 & 0.3 & 15.0 & 7.3 & 0.33\\
-9.0 & 66.5 & 1.0 & 0.3 & 11.7 & 9.0 & 0.36\\
-9.0 & 76.0 & 1.1 & 0.2 & -26.6 & 5.1 & 0.50\\
-9.0 & 85.5 & 1.1 & 0.2 & -16.8 & 4.4 & 0.72\\
-9.0 & 95.0 & 1.3 & 0.3 & 16.7 & 7.3 & 0.46\\
-9.0 & 114.0 & 2.8 & 1.4 & 30.0 & 12.6 & 0.23\\
-9.0 & 133.0 & 14.3 & 7.6 & 38.0 & 11.7 & 0.20\\
-18.9 & -123.5 & 6.9 & 3.1 & -60.0 & 9.9 & 0.28\\
-18.9 & -95.0 & 4.8 & 2.0 & -41.6 & 9.9 & 0.22\\
-18.9 & -76.0 & 5.2 & 2.5 & -39.7 & 11.4 & 0.22\\
-18.9 & -47.5 & 1.1 & 0.4 & 6.3 & 8.6 & 0.38\\
-18.9 & -38.0 & 1.7 & 0.2 & -0.0 & 4.2 & 0.49\\
-18.9 & -28.5 & 1.8 & 0.2 & -6.4 & 3.2 & 0.57\\
-18.9 & -19.0 & 2.3 & 0.2 & -17.2 & 2.0 & 0.61\\
-18.9 & -9.5 & 1.7 & 0.1 & -22.7 & 2.4 & 0.58\\
-18.9 & -0.0 & 1.4 & 0.2 & -27.4 & 3.3 & 0.57\\
-18.9 & 9.5 & 1.3 & 0.2 & -26.6 & 3.4 & 0.56\\
-18.9 & 19.0 & 1.5 & 0.2 & -28.3 & 3.1 & 0.47\\
-18.9 & 28.5 & 1.3 & 0.2 & -20.1 & 4.8 & 0.41\\
-18.9 & 38.0 & 1.2 & 0.2 & -16.4 & 5.4 & 0.38\\
-18.9 & 47.5 & 0.9 & 0.2 & -15.7 & 6.6 & 0.41\\
-18.9 & 57.0 & 1.0 & 0.2 & -2.5 & 6.1 & 0.46\\
-18.9 & 66.5 & 1.7 & 0.2 & -2.4 & 4.1 & 0.53\\
-18.9 & 76.0 & 1.1 & 0.2 & -0.9 & 4.1 & 0.70\\
-18.9 & 85.5 & 1.5 & 0.1 & 2.6 & 2.5 & 0.88\\
-18.9 & 95.0 & 1.4 & 0.2 & 11.4 & 4.4 & 0.59\\
-18.9 & 104.5 & 1.1 & 0.4 & 30.0 & 9.6 & 0.41\\
-28.8 & -85.5 & 3.2 & 1.2 & -77.2 & 9.3 & 0.28\\
-28.8 & -47.5 & 0.7 & 0.3 & -30.0 & 9.9 & 0.40\\
-28.8 & -38.0 & 1.1 & 0.3 & -17.1 & 7.0 & 0.45\\
-28.8 & -28.5 & 1.9 & 0.2 & -7.3 & 3.8 & 0.44\\
-28.8 & -19.0 & 2.8 & 0.4 & -7.8 & 3.7 & 0.44\\
-28.8 & -9.5 & 1.4 & 0.2 & -19.2 & 3.9 & 0.54\\
-28.8 & -0.0 & 1.1 & 0.2 & -21.3 & 3.9 & 0.58\\
-28.8 & 9.5 & 1.1 & 0.2 & -28.9 & 4.6 & 0.60\\
-28.8 & 19.0 & 1.1 & 0.2 & -34.4 & 4.6 & 0.58\\
-28.8 & 28.5 & 0.9 & 0.2 & -34.5 & 5.6 & 0.56\\
-28.8 & 38.0 & 1.1 & 0.3 & -16.1 & 6.7 & 0.44\\
-28.8 & 47.5 & 0.7 & 0.2 & -9.6 & 6.5 & 0.54\\
-28.8 & 57.0 & 0.9 & 0.1 & 11.8 & 4.3 & 0.65\\
-28.8 & 66.5 & 1.0 & 0.1 & -0.6 & 3.1 & 0.90\\
-28.8 & 76.0 & 1.0 & 0.1 & -9.7 & 3.7 & 0.93\\
-28.8 & 85.5 & 1.4 & 0.1 & 3.9 & 3.2 & 0.89\\
-28.8 & 95.0 & 1.1 & 0.2 & 13.5 & 4.1 & 0.78\\
-28.8 & 104.5 & 1.1 & 0.3 & 33.0 & 7.9 & 0.57\\
-38.7 & -104.5 & 1.7 & 0.6 & -32.6 & 9.9 & 0.64\\
-38.7 & -76.0 & 4.1 & 1.5 & -70.3 & 9.3 & 0.25\\
-38.7 & -66.5 & 1.6 & 1.0 & -81.5 & 13.9 & 0.27\\
-38.7 & -38.0 & 1.0 & 0.3 & -20.1 & 9.5 & 0.35\\
-38.7 & -28.5 & 1.6 & 0.3 & -12.9 & 5.3 & 0.35\\
-38.7 & -19.0 & 2.7 & 0.3 & -10.7 & 2.8 & 0.36\\
-38.7 & -9.5 & 2.2 & 0.2 & -8.2 & 2.7 & 0.45\\
-38.7 & -0.0 & 1.4 & 0.1 & -16.0 & 2.7 & 0.60\\
-38.7 & 9.5 & 1.2 & 0.1 & -33.0 & 3.1 & 0.63\\
-38.7 & 19.0 & 1.4 & 0.2 & -33.0 & 3.3 & 0.61\\
-38.7 & 28.5 & 0.8 & 0.1 & -40.1 & 4.8 & 0.75\\
-38.7 & 38.0 & 0.9 & 0.2 & -44.4 & 5.6 & 0.68\\
-38.7 & 47.5 & 0.6 & 0.2 & -25.3 & 11.5 & 0.59\\
-38.7 & 76.0 & 0.9 & 0.2 & 17.2 & 6.2 & 0.79\\
-38.7 & 85.5 & 0.8 & 0.3 & 18.0 & 10.1 & 0.73\\
-38.7 & 95.0 & 1.2 & 0.2 & 16.6 & 3.6 & 0.88\\
-38.7 & 104.5 & 0.6 & 0.2 & 19.4 & 8.9 & 0.79\\
-38.7 & 123.5 & 4.7 & 2.3 & 64.5 & 11.9 & 0.27\\
-48.6 & -104.5 & 2.3 & 0.9 & -60.6 & 9.8 & 0.49\\
-48.6 & -95.0 & 4.7 & 0.9 & -35.1 & 4.9 & 0.39\\
-48.6 & -76.0 & 4.4 & 1.8 & -75.2 & 10.3 & 0.21\\
-48.6 & -66.5 & 2.7 & 1.2 & -70.9 & 10.9 & 0.26\\
-48.6 & -57.0 & 3.5 & 0.9 & -72.1 & 7.4 & 0.26\\
-48.6 & -38.0 & 1.0 & 0.3 & -34.5 & 9.3 & 0.34\\
-48.6 & -28.5 & 1.4 & 0.3 & -22.5 & 7.2 & 0.35\\
-48.6 & -19.0 & 2.6 & 0.4 & -9.4 & 3.8 & 0.36\\
-48.6 & -9.5 & 1.9 & 0.2 & -9.1 & 3.6 & 0.41\\
-48.6 & -0.0 & 1.7 & 0.1 & -17.5 & 2.1 & 0.59\\
-48.6 & 9.5 & 1.3 & 0.1 & -26.2 & 2.1 & 0.68\\
-48.6 & 19.0 & 0.9 & 0.1 & -22.8 & 3.7 & 0.66\\
-48.6 & 28.5 & 0.5 & 0.1 & -34.9 & 5.0 & 0.71\\
-48.6 & 38.0 & 0.7 & 0.1 & -43.1 & 4.3 & 0.68\\
-48.6 & 47.5 & 0.5 & 0.1 & -38.9 & 6.7 & 0.60\\
-48.6 & 57.0 & 0.3 & 0.1 & -29.3 & 12.9 & 0.63\\
-48.6 & 66.5 & 1.0 & 0.2 & 4.0 & 4.4 & 0.73\\
-48.6 & 76.0 & 0.7 & 0.2 & -19.5 & 8.1 & 0.63\\
-48.6 & 95.0 & 1.3 & 0.3 & 26.2 & 6.1 & 0.71\\
-48.6 & 104.5 & 1.2 & 0.3 & 28.0 & 7.8 & 0.60\\
-58.5 & -114.0 & 5.3 & 1.7 & -60.7 & 8.0 & 0.33\\
-58.5 & -104.5 & 2.9 & 0.6 & -52.5 & 5.3 & 0.61\\
-58.5 & -95.0 & 3.5 & 0.6 & -38.3 & 4.6 & 0.53\\
-58.5 & -85.5 & 5.3 & 1.2 & -39.5 & 5.8 & 0.35\\
-58.5 & -76.0 & 5.6 & 2.3 & -57.0 & 9.7 & 0.24\\
-58.5 & -57.0 & 3.0 & 0.9 & -72.4 & 8.0 & 0.26\\
-58.5 & -47.5 & 1.7 & 0.8 & -65.5 & 11.6 & 0.27\\
-58.5 & -38.0 & 1.0 & 0.5 & -25.1 & 13.3 & 0.29\\
-58.5 & -28.5 & 2.0 & 0.4 & -36.4 & 5.3 & 0.32\\
-58.5 & -19.0 & 2.5 & 0.4 & -16.8 & 4.0 & 0.35\\
-58.5 & -9.5 & 2.6 & 0.3 & -9.0 & 3.4 & 0.39\\
-58.5 & -0.0 & 1.5 & 0.2 & -11.3 & 3.7 & 0.53\\
-58.5 & 9.5 & 1.0 & 0.1 & -7.4 & 3.0 & 0.83\\
-58.5 & 19.0 & 0.4 & 0.1 & -16.8 & 7.6 & 0.82\\
-58.5 & 28.5 & 0.5 & 0.1 & -30.6 & 6.3 & 0.86\\
-58.5 & 38.0 & 0.7 & 0.1 & -46.2 & 5.5 & 0.75\\
-58.5 & 47.5 & 0.7 & 0.2 & -58.6 & 6.9 & 0.60\\
-58.5 & 57.0 & 1.3 & 0.2 & -30.0 & 4.6 & 0.49\\
-58.5 & 66.5 & 2.5 & 0.3 & -7.3 & 3.3 & 0.44\\
-58.5 & 76.0 & 1.2 & 0.3 & -11.5 & 7.6 & 0.46\\
-68.4 & -114.0 & 8.9 & 2.0 & -49.6 & 4.6 & 0.49\\
-68.4 & -104.5 & 5.3 & 0.8 & -46.2 & 3.5 & 0.63\\
-68.4 & -95.0 & 3.4 & 0.5 & -43.5 & 3.9 & 0.65\\
-68.4 & -85.5 & 3.8 & 0.7 & -53.4 & 4.8 & 0.47\\
-68.4 & -76.0 & 3.7 & 1.1 & -45.5 & 7.5 & 0.40\\
-68.4 & -66.5 & 3.3 & 1.4 & -59.7 & 10.0 & 0.26\\
-68.4 & -57.0 & 2.0 & 1.1 & -63.7 & 13.2 & 0.26\\
-68.4 & -47.5 & 2.4 & 1.0 & -51.0 & 10.2 & 0.26\\
-68.4 & -38.0 & 1.9 & 0.7 & -60.8 & 9.2 & 0.28\\
-68.4 & -28.5 & 2.1 & 0.4 & -37.4 & 5.6 & 0.33\\
-68.4 & -19.0 & 1.9 & 0.3 & -24.9 & 4.0 & 0.39\\
-68.4 & -9.5 & 1.8 & 0.2 & -22.1 & 3.8 & 0.41\\
-68.4 & -0.0 & 1.5 & 0.2 & -19.6 & 2.9 & 0.55\\
-68.4 & 9.5 & 0.9 & 0.1 & -18.6 & 3.0 & 0.79\\
-68.4 & 19.0 & 0.8 & 0.1 & -14.5 & 3.2 & 0.83\\
-68.4 & 28.5 & 0.5 & 0.1 & -2.3 & 5.3 & 0.85\\
-68.4 & 38.0 & 0.3 & 0.1 & -65.3 & 11.3 & 0.78\\
-68.4 & 47.5 & 0.9 & 0.2 & -73.6 & 5.2 & 0.54\\
-68.4 & 57.0 & 1.1 & 0.2 & -58.7 & 6.3 & 0.43\\
-68.4 & 66.5 & 1.7 & 0.4 & -31.7 & 6.1 & 0.36\\
-68.4 & 76.0 & 1.8 & 0.5 & -16.5 & 7.3 & 0.34\\
-68.4 & 85.5 & 3.1 & 1.1 & -22.1 & 8.8 & 0.29\\
-78.3 & -104.5 & 5.8 & 1.9 & -45.4 & 7.6 & 0.68\\
-78.3 & -95.0 & 6.4 & 1.0 & -42.8 & 3.7 & 0.58\\
-78.3 & -85.5 & 4.6 & 0.8 & -44.0 & 4.0 & 0.53\\
-78.3 & -76.0 & 5.0 & 1.1 & -52.9 & 5.2 & 0.36\\
-78.3 & -47.5 & 2.1 & 1.2 & -53.1 & 13.7 & 0.25\\
-78.3 & -38.0 & 4.4 & 0.8 & -67.3 & 4.9 & 0.28\\
-78.3 & -28.5 & 2.4 & 0.7 & -62.5 & 8.3 & 0.31\\
-78.3 & -19.0 & 1.2 & 0.5 & -40.6 & 9.5 & 0.43\\
-78.3 & -9.5 & 1.3 & 0.4 & -41.3 & 7.6 & 0.44\\
-78.3 & -0.0 & 0.6 & 0.3 & -24.8 & 11.3 & 0.51\\
-78.3 & 9.5 & 1.4 & 0.2 & -16.2 & 4.4 & 0.58\\
-78.3 & 19.0 & 0.5 & 0.2 & 15.6 & 10.9 & 0.66\\
-78.3 & 28.5 & 0.8 & 0.2 & -1.4 & 5.3 & 0.70\\
-78.3 & 66.5 & 1.9 & 0.5 & -35.2 & 6.8 & 0.33\\
-78.3 & 76.0 & 1.7 & 0.6 & -52.9 & 9.5 & 0.31\\
-88.2 & -95.0 & 10.0 & 2.6 & -49.6 & 4.8 & 0.36\\
-88.2 & -85.5 & 5.4 & 1.6 & -52.6 & 6.7 & 0.53\\
-88.2 & -76.0 & 5.8 & 1.9 & -50.9 & 6.0 & 0.42\\
-88.2 & -66.5 & 4.9 & 1.8 & -42.9 & 8.5 & 0.28\\
-88.2 & -57.0 & 3.0 & 1.6 & -50.3 & 12.8 & 0.28\\
-88.2 & -47.5 & 5.6 & 2.0 & -54.5 & 7.8 & 0.25\\
-88.2 & -38.0 & 3.5 & 0.9 & -56.0 & 6.8 & 0.27\\
-88.2 & -28.5 & 2.6 & 0.8 & -38.1 & 8.3 & 0.28\\
-88.2 & -19.0 & 2.3 & 0.8 & -22.1 & 9.5 & 0.31\\
-88.2 & -9.5 & 3.1 & 0.7 & -25.9 & 6.3 & 0.32\\
-88.2 & -0.0 & 2.4 & 0.6 & -43.5 & 7.9 & 0.35\\
-88.2 & 28.5 & 0.5 & 0.3 & -44.4 & 13.7 & 0.56\\
-88.2 & 47.5 & 1.5 & 0.4 & -53.7 & 7.8 & 0.44\\
-88.2 & 57.0 & 3.1 & 1.4 & -88.5 & 11.1 & 0.42\\
-88.2 & 66.5 & 1.8 & 0.9 & -57.7 & 13.3 & 0.32\\
-88.2 & 76.0 & 2.3 & 1.2 & 89.6 & 12.4 & 0.37\\
-88.2 & 85.5 & 2.0 & 1.0 & -71.7 & 12.0 & 0.39\\
-98.1 & -66.5 & 3.3 & 1.7 & -48.9 & 12.1 & 0.29\\
-98.1 & -57.0 & 4.0 & 1.4 & -51.4 & 8.9 & 0.29\\
-98.1 & -38.0 & 3.8 & 1.5 & -63.4 & 10.4 & 0.27\\
-98.1 & -19.0 & 1.8 & 1.0 & -18.9 & 13.4 & 0.30\\
-98.1 & -9.5 & 3.4 & 1.0 & -20.3 & 7.8 & 0.32\\
-98.1 & -0.0 & 2.3 & 0.6 & -44.2 & 7.5 & 0.35\\
-98.1 & 9.5 & 1.0 & 0.6 & -46.5 & 15.2 & 0.36\\
-98.1 & 19.0 & 1.3 & 0.7 & -12.7 & 12.0 & 0.40\\
-98.1 & 47.5 & 1.2 & 0.3 & -46.7 & 8.6 & 0.44\\
-98.1 & 57.0 & 1.4 & 0.5 & -36.6 & 11.4 & 0.43\\
-98.1 & 76.0 & 3.6 & 2.0 & -80.0 & 10.1 & 0.43\\
-98.1 & 85.5 & 3.8 & 2.1 & -66.1 & 11.1 & 0.47\\
\enddata
\tablenotetext{a}{Offsets are given with respect to $18^\text{h} 20^\text{m} 25^\text{s}$, $-16\arcdeg 13\arcmin 02\arcsec$ (J2000).}
\tablenotetext{b}{Position angle of electric vector measured east from north.}
\tablenotetext{c}{Intensity normalized to 1.00 at peak.}
\end{deluxetable}
\section{Introduction}
\label{sec:intrm17}
Magnetic fields are believed to play an important role in the dynamics and evolution of galactic molecular clouds and hence affect the star forming processes therein. Since the dust temperature of a typical molecular cloud is $\sim$ 10 -- 20 K, the submillimeter part of the electromagnetic spectrum is a very important window for studying the physics of star formation. Submillimeter polarimetry provides one of the best tools for mapping interstellar magnetic fields in star forming regions \citep{1999ApJ...520..706C}, because asymmetric dust grains are partially aligned by magnetic fields. The physics of the alignment process is an active area of research.
There are several theoretical models for magnetic alignment of interstellar dust grains (see reviews by \citealp{2003JQSRT..79..881L, 2007JQSRT.106..225L}). Among them, the ``radiative alignment torques'' (RAT) model is the most favored candidate. In this model, photons from an anisotropic radiation field produce a net radiative alignment torque on irregularly shaped grains, because the grains present different cross sections to right- and left-handed circularly polarized photons \citep{1972Ap&SS..18..337D, 1976Ap&SS..43..257D, 1996ApJ...470..551D, 1997ApJ...480..633D, 2003JQSRT..79..881L, 2007JQSRT.106..225L}. As in the case for other grain alignment theories, the grain axis with the largest moment of inertia is aligned parallel to the spin axis, and furthermore the spin axis is aligned with the local magnetic field. Since the grains will emit and absorb most efficiently along the long grain axis, polarization is observed perpendicular to the magnetic field in emission, but parallel to the field in absorption (or extinction). As a result, measurement of the direction of polarization provides knowledge of the orientation of the interstellar magnetic field, as projected onto the plane of the sky \citep{2003JQSRT..79..881L, 2007JQSRT.106..225L}.
By observing the wavelength dependence of both the magnitude of polarization (polarization fraction) and the polarization angle, we can characterize the dust grain properties and the physical conditions in a cloud. As discussed below, in the M17 cloud we find a compression shock front that is viewed edge-on. Thus this cloud provides a unique opportunity to study the polarization spectrum in regions having an anisotropic radiation field that varies spatially. This allows an experimental test of the RAT theory of grain alignment. M17 is also known as the Omega Nebula and is located in the constellation Sagittarius. This cloud is a premier example of a young, massive star formation region in the Galaxy. It is one of the brightest infrared and thermal radio sources in the sky. Its distance has been measured to be 1.6 $\pm$ 0.3 kpc \citep{2001A&A...377..273N}, and it spans an area of about $11\arcmin \times 9\arcmin$ across the sky.
A geometric model of M17 was presented by \cite{2007ApJ...658.1119P}. In the inner part of the nebula, a bright, photoionized region with a hollow conical shape surrounds a central star cluster. This region expands outward in several directions into adjacent molecular gas. There is a large, unobscured optical \ion{H}{2} region spreading into the low density medium at the eastern edge of the molecular cloud. X-ray observations \citep{2003ApJ...590..306D, 2003ApJ...593..874T} have shown that the interior of the \ion{H}{2} region is filled by hot ($T \sim 10^6$--$10^7$ K) gas that is flowing out to the east. \cite{2003ApJ...590..306D} noted that this region is too young to have produced a supernova remnant and interpreted the X-ray emission as hot gas filling a superbubble blown by the OB stellar winds. In the middle of the nebula, velocity studies have shown an ionized shell having a diameter of about 6 pc \citep{2003ApJ...590..306D}. Toward some portions at the border of the ionized region, warm and hot gases are truncated by a wall of dense, cold molecular material that includes the dense cores known as M17 Southwest (M17 SW) and M17 North (M17 N). These cores exhibit many signposts of ongoing massive star formation. \cite{1977Afz....13..569G} characterized members of the young stellar cluster NGC 6618 that is responsible for radiatively exciting the nebula.
In Section~\ref{sec:obs} of this paper, we describe new 450 $\mu$m polarimetric observations obtained for M17. Section~\ref{sec:genres} shows that our results for the magnetic field orientation are in good agreement with those from previous observations in the far-IR and submillimeter. In Section~\ref{sec:aveps}, we discuss the far-IR/submillimeter polarization spectrum of M17. In Section~\ref{sec:magchange}, we analyze the change of magnetic field across the shock front and find a correlation between the polarization angle and the location along an axis orthogonal to the shock front. We also find that the $P_{450}$/$P_{350}$ polarization ratio appears to be correlated with the strength of the radiation field, as discussed in Section~\ref{sec:pschange}. We explain both of these correlations in terms of the effects of stars in the central star cluster.
\section{Observations}
\label{sec:obs}
The 450 $\mu$m polarimetric data presented here were collected using the SHARP instrument \citep{2008ApOpt..47..422L} at the Caltech Submillimeter Observatory (CSO). SHARP is a fore-optics module that converts the SHARC II bolometer camera \citep{2003SPIE.4855...73D} into a sensitive imaging polarimeter with a spatial resolution of $\sim 11\arcsec$ at 450 $\mu$m, and $9\arcsec$ at 350 $\mu$m. The function of the fore-optics is to split the incident radiation in a $55\arcsec \times 55\arcsec$ field of view (FOV) into two orthogonally polarized beams that are then reimaged onto 12 $\times$ 12 pixel ``subarrays" at opposite ends of the 32 $\times$ 12 pixel array in SHARC II. The polarization signal is modulated by a stepped rotating half-wave plate (HWP) located skyward of the polarization-splitting optics. The observations were obtained during three nights in July 2010. The total integration time was about 9 hours, with an average zenith opacity $\tau \approx 1.3$ at 450 $\mu$m.
\section{Results}
\label{sec:results}
\subsection{General Results}
\label{sec:genres}
Figure~\ref{fig:SHARPpol} shows our 450 $\mu$m polarization map of M17 superposed on contours of dust emission intensity, also taken from SHARP data. The map is centered at the J2000 coordinate ($18^\text{h} 20^\text{m} 25.1^\text{s}$, $-16\arcdeg 13\arcmin 02.1\arcsec$) and covers an area of about $4\arcmin 25\arcsec \times 2\arcmin 45\arcsec$ overlapping M17 SW (see Figure~\ref{fig:Spitzer-optical}). Taking the distance to M17 to be 1.6 kpc, our map coverage corresponds to an area of 2.1 pc $\times$ 1.3 pc. The 450 $\mu$m M17 polarization measurements from SHARP are listed in Appendix~\ref{app:SHARPvector}. All the polarization magnitudes presented in this work, including the ones in figures and tables, are corrected for positive bias using the method described in \cite{2006PASP..118.1340V}.
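For reference, the widely used first-order debiasing that the \cite{2006PASP..118.1340V} estimator refines can be sketched as follows; this quadrature form is a common approximation, not the exact method applied to the tabulated values:

```python
import numpy as np

def debias_polarization(p: np.ndarray, sigma_p: np.ndarray) -> np.ndarray:
    """Classical first-order correction for positive bias in the measured
    polarization fraction: p_db = sqrt(p^2 - sigma_p^2), clipped at zero
    when p < sigma_p. (A stand-in for the full Bayesian estimator of
    Vaillancourt 2006.)"""
    return np.sqrt(np.clip(p**2 - sigma_p**2, 0.0, None))
```

Because the measured $p$ is positive-definite, low signal-to-noise vectors are biased high; the correction matters most for the faint, thin ($2\sigma$--$3\sigma$) vectors in the maps.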
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{f01.eps}
\caption{M17 450 $\mu$m polarization vectors superposed on a map of the 450 $\mu$m intensity, also from SHARP data. Contours range from 10\% to 100\% of the peak intensity, in steps of 10\%. The three main flux peaks visible in this map correspond to the Northern, Central, and Southern Condensations of Figure 1b of \cite{1996ApJ...470..566D}. Thick vectors are detected with greater than or equal to $3\sigma$ sensitivity ($p \ge 3\sigma_{p}$) and thin vectors are between the $2\sigma$ and $3\sigma$ levels ($2\sigma_{p} \le p < 3\sigma_{p}$). All vectors on the plot are corrected for positive bias. The orientation of each vector indicates the direction of the electric vector of the measured polarization. The key at bottom left shows the vector length corresponding to a polarization magnitude of 4\%. The circle on the bottom right shows the SHARP beam size. Right ascension and declination offsets are given with respect to $18^\text{h} 20^\text{m} 25^\text{s}$, $-16\arcdeg 13\arcmin 02\arcsec$ (J2000).}
\label{fig:SHARPpol}
\end{figure}
As can be seen in Figure~\ref{fig:SHARPpol}, for regions of high submillimeter intensity, the average polarization fraction is lower than that for low intensity regions. This may be caused by any or all of the following effects: (1) If the magnetic field orientation varies along the line of sight (LOS), the measured polarization fraction becomes diluted upon integration along the LOS; (2) If the magnetic field within the dense part of the cloud is more ``tangled," averaging over the finite beam will cause a reduction in the observed polarization; (3) Grain alignment may be less efficient deep inside the dense part of the molecular cloud perhaps due to the weaker radiation field \citep{2005ApJ...631..361C, 2008ApJ...674..304W}.
For polarized emission, the magnetic field projected onto the plane of the sky is inferred by rotating the polarization E-vectors by $90\arcdeg$. Figure~\ref{fig:Spitzer-optical} shows the inferred magnetic field vectors from 100 $\mu$m \citep{1996ApJ...470..566D}, 450 $\mu$m (SHARP) and optical observations \citep{1981A&A....95...94S}. Our results for the magnetic field orientation are in good agreement with those from previous observations at far-IR wavelengths (Stokes instrument, 60 and 100 $\mu$m, $22\arcsec$ and $35\arcsec$ spatial resolution respectively; \citealp{1996ApJ...470..566D, 2000ApJS..128..335D}) and submillimeter wavelengths (Hertz instrument, 350 $\mu$m, $20\arcsec$ resolution; \citealp{2002ApJ...569..803H}), but have finer angular resolution. The 8.0 $\mu$m intensity from Spitzer GLIMPSE that is shown in the figure predominantly traces polycyclic aromatic hydrocarbon (PAH) molecular emission. The brightest 8.0 $\mu$m emission traces the boundary of the \ion{H}{2} region where the PAHs are being illuminated by the UV radiation from the central OB cluster, as seen in typical photon-dominated regions such as MonR2 and the Orion Bar \citep{2009ApJ...706L.160B, 2009A&A...498..161V}.
\begin{figure*}
\centering
\includegraphics[width=0.65\textwidth]{f02.eps}
\caption{Vectors show inferred magnetic field orientations from SHARP (red, 450 $\mu$m), Stokes (black, 100 $\mu$m; \citealp{1996ApJ...470..566D}) and optical observations (yellow; \citealp{1981A&A....95...94S}) superposed on a Spitzer/IRAC 8.0 $\mu$m image from GLIMPSE (Galactic Legacy Infrared Mid-Plane Survey Extraordinaire). The green box outlines the region displayed in Figure~\ref{fig:SHARPpol}. For clarity, not all vectors shown in Figure~\ref{fig:SHARPpol} are shown here. The hollow conical shape area in the center containing most of the yellow vectors is the \ion{H}{2} region. The cloud to the north of the \ion{H}{2} region is M17 N. The portion on the bottom right is M17 SW. The magnetic field orientations inferred from SHARP and Stokes data are perpendicular to the measured polarization angles, while those from optical polarization measurements are parallel to the polarization angles. Vectors shown here are plotted with uniform length and serve to indicate the inferred field orientation only. Thick vectors are detected with greater than or equal to $3\sigma$ sensitivity ($p \ge 3\sigma_{p}$) and thin vectors are between the $2\sigma$ and $3\sigma$ levels ($2\sigma_{p} \le p < 3\sigma_{p}$). Right ascension and declination offsets are given with respect to $18^\text{h} 20^\text{m} 25^\text{s}$, $-16\arcdeg 13\arcmin 02\arcsec$ (J2000).}
\label{fig:Spitzer-optical}
\end{figure*}
\clearpage
\cite{1996ApJ...470..566D} notes that the magnetic field revealed by the 100 $\mu$m polarization data bulges away from the \ion{H}{2} region (see black vectors in Figure~\ref{fig:Spitzer-optical}) and points out that this is consistent with the suggestion that the \ion{H}{2} region is expanding into its surroundings. A similar situation was seen in the molecular cloud G333.6-0.2 by \cite{2006ApJ...648..340L}. In this cloud, the authors found evidence of distortion of magnetic fields by an expanding photo-ionized gas bubble. In the Galactic center, \cite{2000ApJ...529..241N} found that the expansion of the non-thermal shell source Sgr A (East) into a molecular cloud causes a similar effect and point out that, due to flux freezing, the magnetic field in an edge-on compression front should tend to run parallel to the compression front. Indeed, this is approximately what is suggested by both the black \citep{1996ApJ...470..566D} and red (our work) vectors in Figure~\ref{fig:Spitzer-optical}, provided that we restrict ourselves to the boundary of the \ion{H}{2} region, where the 8.0 $\mu$m Spitzer GLIMPSE emission is strong.
\subsection{The Polarization Spectrum of M17}
\label{sec:aveps}
Previous investigators have compared polarimetric data for various samples of molecular clouds at wavelengths ranging from the far-IR to submillimeter. If the source of the polarized emission is a single population of dust grains having identical polarization properties and temperature, the magnitude of the polarization (polarization fraction) is expected to be nearly independent of wavelength longward of 50 $\mu$m \citep{1988QJRAS..29..327H, 1999ApJ...516..834H}. However, the observed far-IR/submillimeter polarization spectra yield a different result, as shown in Figure~\ref{fig:M17PS} (see also \citealp{1999ApJ...516..834H, 2002ApJS..142...53V, 2004ASPC..309..515H, 2008ApJ...679L..25V, 2012ApJS..201...13V}). The polarization spectrum has been observed to fall from 60 to $\sim$ 350 $\mu$m (negative slope region) before rising again to 850 and 1300 $\mu$m (positive slope region), with its minimum located near 350 $\mu$m. In order to ensure that the computed polarization spectra are meaningful, several criteria are used when comparing the polarization fraction at two different wavelengths. For example, confusion can arise if the inclination of the field with respect to the LOS varies along the LOS. The likelihood of confusion due to this effect can be reduced by imposing the following criterion: The difference between the respective polarization angles at the two wavelengths, $|\Delta\phi|$, must be smaller than 10$\arcdeg$. This then leaves the alignment efficiency as a dominant factor affecting the polarization spectrum. This constraint is discussed in detail by \cite{2012ApJS..201...13V}.
\begin{figure}[h!b]
\centering
\includegraphics[width=0.50\textwidth]{f03.eps}
\caption{Polarization spectrum of various interstellar molecular clouds from VM12 \citep{2012ApJS..201...13V} and this work. The green circle represents the median ratio for 15 clouds. The median polarization ratios are normalized to the value at 350 $\mu$m. Our results for the eastern part of M17 SW are in good agreement with the results of VM12 at 60, 100, and (by definition) 350 $\mu$m. In contrast to the results for OMC-1, our work shows that the eastern part of M17 SW has lower median polarization at 450 $\mu$m than at 350 $\mu$m. In this part of M17, the polarization spectrum falls monotonically from 60 $\mu$m to 450 $\mu$m.}
\label{fig:M17PS}
\end{figure}
Models containing two or more dust grain populations have been proposed to explain the observed structure in the polarization spectrum \citep{1999ApJ...516..834H, 2002ApJS..142...53V, 2007EAS....23..147V}. In such models, each dust grain population contributes a flux of $F_i(\nu) \propto \nu^{\beta_i} B_\nu(T_i)$, where $\nu$ is frequency, $B_\nu(T)$ is the Planck spectrum, and $\beta_i$ and $T_i$ are the spectral index and temperature of the dust population $i$, respectively. Correlation between the alignment efficiency and $\beta_i$ or $T_i$ for each dust population can result in a wavelength-dependent polarization spectrum. We will not refer to these early models in the remainder of this paper, focusing instead on more recent models that will be described below.
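As a minimal numerical sketch of the flux law quoted above, the relative contribution of each population $F_i(\nu) \propto \nu^{\beta_i} B_\nu(T_i)$ can be evaluated directly. The temperatures, spectral indices, and equal normalizations used below are illustrative assumptions only, not fitted values from the cited models:

```python
import numpy as np

H = 6.626e-34    # Planck constant [J s]
K_B = 1.381e-23  # Boltzmann constant [J/K]
C = 2.998e8      # speed of light [m/s]

def planck(nu, T):
    """Planck spectrum B_nu(T) in SI units."""
    return (2.0 * H * nu**3 / C**2) / np.expm1(H * nu / (K_B * T))

def component_flux(wavelength_um, beta, T):
    """Relative flux F_i ~ nu^beta * B_nu(T_i) of one dust population
    (arbitrary common normalization)."""
    nu = C / (wavelength_um * 1e-6)
    return nu**beta * planck(nu, T)

# Illustrative only: share of the 100-um emission from a warm
# (40 K, beta = 1.5) vs. a cool (15 K, beta = 2.0) population,
# assuming equal normalizations.
warm = component_flux(100.0, 1.5, 40.0)
cool = component_flux(100.0, 2.0, 15.0)
print(warm / (warm + cool))
```

If each population also carries its own alignment efficiency, weighting the polarized flux by these component fractions produces the wavelength-dependent polarization spectrum the models describe.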
It has been pointed out that uncertainty could be introduced if one compares the polarization fraction ratio from two instruments having different chop throws, polarization efficiencies and/or beam sizes \citep{2012ApJS..201...13V}. The analysis presented here carefully combines the 450 $\mu$m SHARP polarimetric data ($\sim$ $11\arcsec$ resolution) with data collected at three other wavelengths. First, the 450 $\mu$m SHARP maps were smoothed to the same resolution as the 60 $\mu$m (Stokes, $22\arcsec$ resolution), 100 $\mu$m (Stokes, $35\arcsec$ resolution) and 350 $\mu$m (Hertz, $20\arcsec$ resolution) maps. Assuming that all beams were Gaussian, new maps of the Stokes parameters $I$, $Q$, and $U$ were created from the original 450 $\mu$m maps by smoothing them with different Gaussian kernel sizes to match the resolution of the 60, 100 and 350 $\mu$m data. Then, the polarization fractions and angles at the new resolution were calculated by resampling and combining the Stokes parameters in the new maps.
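The smoothing-and-recombination step can be sketched as follows. This assumes Gaussian beams, so the convolving kernel FWHM is the quadrature difference of the output and input beam FWHMs; the function and variable names are ours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_TO_SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def smooth_stokes(I, Q, U, fwhm_in, fwhm_out, pix_scale):
    """Smooth Stokes I, Q, U maps from an input beam FWHM to a coarser
    output FWHM (same units as pix_scale per pixel), assuming Gaussian
    beams, then recompute polarization fraction and position angle."""
    # Kernel that convolves the input beam up to the output beam.
    fwhm_kernel = np.sqrt(fwhm_out**2 - fwhm_in**2)
    sigma_pix = fwhm_kernel * FWHM_TO_SIGMA / pix_scale
    Is, Qs, Us = (gaussian_filter(m, sigma_pix) for m in (I, Q, U))
    p = np.sqrt(Qs**2 + Us**2) / Is               # polarization fraction
    angle = 0.5 * np.degrees(np.arctan2(Us, Qs))  # E-vector P.A. [deg]
    return p, angle
```

Smoothing $Q$ and $U$ rather than $p$ itself is the essential point: polarization fractions and angles do not average linearly, but the Stokes parameters do.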
At a given wavelength, the polarization vectors that are to be compared with vectors from the smoothed 450 $\mu$m map are chosen based on the following criteria adopted from \cite{1999ApJ...516..834H}: (1) The vectors are spatially separated by no more than 1$\arcsec$ in both RA and Dec; (2) The difference between the two polarization angles $|\Delta\phi|$ must be less than $10\arcdeg$; (3) The vectors are from the cloud envelope, not from high density cores; (4) All vectors are detected with signal to noise ratios greater than or equal to the $3 \sigma$ level.
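Criteria (1), (2) and (4) are simple enough to express in code; a sketch is given below. Criterion (3), keeping only envelope vectors, requires column density information and is omitted, and all names here are illustrative:

```python
def select_pairs(ra1, dec1, pa1, p1, sig1,
                 ra2, dec2, pa2, p2, sig2,
                 max_sep=1.0, max_dphi=10.0, min_snr=3.0):
    """Match polarization vectors from two wavelengths using criteria
    (1), (2) and (4): offsets within max_sep arcsec in both RA and Dec,
    |delta phi| < max_dphi deg (mod 180), and p >= min_snr * sigma_p at
    both wavelengths. Criterion (3), envelope-only vectors, needs
    column density information and is not modeled here."""
    pairs = []
    for i in range(len(ra1)):
        for j in range(len(ra2)):
            if abs(ra1[i] - ra2[j]) > max_sep or abs(dec1[i] - dec2[j]) > max_sep:
                continue
            dphi = abs(pa1[i] - pa2[j]) % 180.0
            dphi = min(dphi, 180.0 - dphi)  # 180-deg orientation ambiguity
            if dphi >= max_dphi:
                continue
            if p1[i] < min_snr * sig1[i] or p2[j] < min_snr * sig2[j]:
                continue
            pairs.append((i, j))
    return pairs
```

The angle difference is reduced modulo $180\arcdeg$ before comparison, since position angles of $2\arcdeg$ and $178\arcdeg$ differ by only $4\arcdeg$ on the orientation circle.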
\begin{figure}[h!b]
\centering
\includegraphics[width=0.45\textwidth]{f04.eps}
\caption{Vectors selected for polarization spectrum analysis at 60 $\mu$m (yellow, Stokes, $22\arcsec$ resolution), 100 $\mu$m (green, Stokes, $35\arcsec$ resolution) and 350 $\mu$m (blue, Hertz, $20\arcsec$ resolution) superposed on the 450 $\mu$m intensity map from SHARP observations. Vectors are selected by comparing them with 450 $\mu$m data smoothed to matching angular resolution and then applying the selection criteria listed in Section~\ref{sec:aveps}. All vectors shown here are in the common region where we have data from all four wavelengths, i.e., the RA Offset $>$ 0 region (see data with $\Delta\alpha > 0$ in Tables~\ref{table:45060}, \ref{table:450100} and \ref{table:450350}). Right ascension and declination offsets are given with respect to $18^\text{h} 20^\text{m} 25^\text{s}$, $-16\arcdeg 13\arcmin 02\arcsec$ (J2000).}
\label{fig:M17PSarea}
\end{figure}
The M17 cloud spans a large area across the sky. Because we do not have polarimetric data for the entire cloud at all wavelengths, we can only compare the polarization ratios in a common region where we have data from all four wavelengths. This region is in the east portion of M17 SW with RA Offset $\geq$ 0 in Figure~\ref{fig:M17PSarea}. Polarization spectrum vectors for pairs of wavelengths (450 vs. 60, 450 vs. 100, and 450 vs. 350 $\mu$m) in the common region are listed in Tables~\ref{table:45060}, \ref{table:450100} and \ref{table:450350}. Table~\ref{table:450350} also lists the $P_{450}$/$P_{350}$ data for the RA Offset $<$ 0 region that will be discussed in Section~\ref{sec:pschange}. The polarization vectors plotted in Figure~\ref{fig:M17PSarea} are those from the RA Offset ($\Delta\alpha$) $>$ 0 region and are taken from Table~\ref{table:45060}, Table~\ref{table:450100} and part of Table~\ref{table:450350}. The M17 polarization spectrum resulting from this work is calculated from the data in the common region. It is superposed on previous spectra in Figure~\ref{fig:M17PS}, and tabulated in Table~\ref{table:450PS}. Our main result is that $P_{450}<P_{350}<P_{100}<P_{60}$ in the east portion of M17 SW.
\clearpage
\begin{deluxetable}{ c c c c c c c c c c c c c}
\tablecaption{Polarization ratio data for 450 $\mu$m vs. 60 $\mu$m with 22$\arcsec$ resolution}
\tabletypesize{\footnotesize}
\tablecolumns{13}
\tablewidth{0pt}
\tablehead{
\colhead{$\Delta\alpha$\,\tablenotemark{a}} &
\colhead{$\Delta\delta$\,\tablenotemark{a}} &
\colhead{$P_{450}$} &
\colhead{$\sigma_{p450}$} &
\colhead{P.A.\,\tablenotemark{b}} &
\colhead{$\sigma_\text{P.A.}$} &
\colhead{$I_{450}$\,\tablenotemark{c}} &
\colhead{$P_{60}$} &
\colhead{$\sigma_{p60}$} &
\colhead{P.A.\,\tablenotemark{b}} &
\colhead{$\sigma_\text{P.A.}$} &
\colhead{$I_{60}$\,\tablenotemark{c}} &
\colhead{$P_{450}/P_{60}$\,\tablenotemark{d}} \\
\colhead{(arcsec)} &
\colhead{(arcsec)} &
\colhead{(\%)} &
\colhead{(\%)} &
\colhead{(deg)} &
\colhead{(deg)} &
\colhead{(-)} &
\colhead{(\%)} &
\colhead{(\%)} &
\colhead{(deg)} &
\colhead{(deg)} &
\colhead{(-)} &
\colhead{(-)} }
\startdata
80 & 19 & 2.1 & 0.5 & 18.5 & 5.9 & 0.26 & 6.7 & 0.5 & 21.7 & 2.2 & 0.47 & 0.32\\
70 & -29 & 1.5 & 0.3 & 21.9 & 5.2 & 0.34 & 4.5 & 0.3 & 23.0 & 1.6 & 0.66 & 0.33\\
63 & -7 & 1.8 & 0.2 & 17.6 & 3.8 & 0.30 & 4.5 & 0.2 & 22.8 & 1.4 & 0.71 & 0.40\\
60 & -76 & 2.5 & 0.4 & 38.2 & 4.2 & 0.24 & 4.8 & 0.5 & 33.6 & 2.8 & 0.32 & 0.52\\
58 & 14 & 2.0 & 0.2 & 15.4 & 3.4 & 0.28 & 5.3 & 0.2 & 20.6 & 1.3 & 0.60 & 0.38\\
50 & 36 & 2.4 & 0.3 & 23.4 & 3.0 & 0.25 & 5.7 & 0.3 & 26.0 & 1.7 & 0.56 & 0.42\\
43 & -12 & 1.1 & 0.2 & 7.3 & 4.3 & 0.32 & 2.8 & 0.2 & 14.3 & 1.8 & 0.83 & 0.39\\
38 & -83 & 2.1 & 0.2 & 40.6 & 2.7 & 0.26 & 5.2 & 0.4 & 43.5 & 2.1 & 0.31 & 0.40\\
33 & -62 & 1.4 & 0.2 & 33.0 & 3.0 & 0.33 & 4.7 & 0.3 & 28.8 & 1.7 & 0.45 & 0.30\\
31 & 29 & 1.6 & 0.2 & 19.7 & 2.9 & 0.30 & 3.8 & 0.2 & 15.8 & 1.9 & 0.73 & 0.42\\
28 & -40 & 1.4 & 0.1 & 12.7 & 2.6 & 0.36 & 3.7 & 0.2 & 17.9 & 1.6 & 0.67 & 0.38\\
23 & 52 & 1.6 & 0.2 & 20.4 & 3.2 & 0.28 & 3.4 & 0.5 & 21.3 & 3.7 & 0.75 & 0.47\\
21 & -19 & 1.5 & 0.1 & 1.8 & 2.1 & 0.37 & 2.7 & 0.2 & 10.2 & 1.7 & 0.88 & 0.56\\
13 & -67 & 1.5 & 0.1 & 26.4 & 2.2 & 0.43 & 3.8 & 0.3 & 34.4 & 2.0 & 0.39 & 0.40\\
8 & 24 & 1.2 & 0.1 & 10.6 & 3.0 & 0.34 & 2.9 & 0.2 & 16.4 & 2.0 & 0.92 & 0.41\\
\enddata
\tablenotetext{a}{Offsets are given with respect to $18^\text{h} 20^\text{m} 25^\text{s}$, $-16\arcdeg 13\arcmin 02\arcsec$ (J2000).}
\tablenotetext{b}{Position angle of electric vector measured east from north.}
\tablenotetext{c}{Intensity normalized to 1.00 at peak.}
\tablenotetext{d}{Median = 0.398, mean = 0.405 and std = 0.067 (see Table~\ref{table:450PS}).}
\label{table:45060}
\end{deluxetable}
\begin{deluxetable}{ c c c c c c c c c c c c c}
\tablecaption{Polarization ratio data for 450 $\mu$m vs. 100 $\mu$m with 35$\arcsec$ resolution}
\tabletypesize{\footnotesize}
\tablecolumns{13}
\tablewidth{0pt}
\tablehead{\colhead{$\Delta\alpha$\,\tablenotemark{a}} &
\colhead{$\Delta\delta$\,\tablenotemark{a}} &
\colhead{$P_{450}$} &
\colhead{$\sigma_{p450}$} &
\colhead{P.A.\,\tablenotemark{b}} &
\colhead{$\sigma_\text{P.A.}$} &
\colhead{$I_{450}$\,\tablenotemark{c}} &
\colhead{$P_{100}$} &
\colhead{$\sigma_{p100}$} &
\colhead{P.A.\,\tablenotemark{b}} &
\colhead{$\sigma_\text{P.A.}$} &
\colhead{$I_{100}$\,\tablenotemark{c}} &
\colhead{$P_{450}/P_{100}$\,\tablenotemark{d}} \\
\colhead{(arcsec)} &
\colhead{(arcsec)} &
\colhead{(\%)} &
\colhead{(\%)} &
\colhead{(deg)} &
\colhead{(deg)} &
\colhead{(-)} &
\colhead{(\%)} &
\colhead{(\%)} &
\colhead{(deg)} &
\colhead{(deg)} &
\colhead{(-)} &
\colhead{(-)} }
\startdata
91 & -78 & 3.0 & 0.5 & 36.9 & 4.7 & 0.25 & 3.9 & 0.3 & 27.5 & 2.3 & 0.21 & 0.76\\
81 & -43 & 1.4 & 0.2 & 30.7 & 4.4 & 0.36 & 4.4 & 0.2 & 23.0 & 1.6 & 0.26 & 0.32\\
71 & -7 & 1.6 & 0.2 & 16.1 & 3.0 & 0.33 & 3.8 & 0.3 & 20.4 & 2.2 & 0.29 & 0.42\\
64 & -124 & 1.5 & 0.3 & 50.5 & 6.1 & 0.25 & 3.5 & 0.3 & 51.9 & 2.9 & 0.14 & 0.42\\
61 & 31 & 2.2 & 0.2 & 22.0 & 2.4 & 0.28 & 4.2 & 0.3 & 26.3 & 2.0 & 0.24 & 0.52\\
57 & -88 & 2.0 & 0.2 & 41.4 & 2.7 & 0.27 & 3.5 & 0.2 & 35.5 & 1.9 & 0.18 & 0.57\\
52 & 64 & 1.7 & 0.2 & 28.6 & 3.6 & 0.26 & 3.4 & 0.4 & 34.7 & 3.1 & 0.17 & 0.50\\
34 & -17 & 1.2 & 0.1 & 5.7 & 1.9 & 0.39 & 2.3 & 0.2 & 10.1 & 1.8 & 0.44 & 0.52\\
27 & 19 & 1.3 & 0.1 & 9.8 & 1.9 & 0.35 & 2.8 & 0.2 & 13.2 & 2.4 & 0.39 & 0.46\\
17 & 57 & 1.5 & 0.1 & 17.2 & 1.8 & 0.35 & 2.0 & 0.1 & 25.5 & 1.9 & 0.36 & 0.75\\
10 & -62 & 1.4 & 0.1 & 11.4 & 1.4 & 0.51 & 2.7 & 0.3 & 21.3 & 3.0 & 0.37 & 0.52\\
7 & 93 & 1.1 & 0.1 & 10.7 & 2.6 & 0.43 & 1.2 & 0.2 & 13.9 & 4.8 & 0.41 & 0.93\\
\enddata
\tablenotetext{a}{Offsets are given with respect to $18^\text{h} 20^\text{m} 25^\text{s}$, $-16\arcdeg 13\arcmin 02\arcsec$ (J2000).}
\tablenotetext{b}{Position angle of electric vector measured east from north.}
\tablenotetext{c}{Intensity normalized to 1.00 at peak.}
\tablenotetext{d}{Median = 0.521, mean = 0.558 and std = 0.165 (see Table~\ref{table:450PS}).}
\label{table:450100}
\end{deluxetable}
\begin{deluxetable}{ c c c c c c c c c c c c c}
\tablecaption{Polarization ratio data for 450 $\mu$m vs. 350 $\mu$m with 20$\arcsec$ resolution}
\tabletypesize{\footnotesize}
\tablecolumns{13}
\tablewidth{0pt}
\tablehead{\colhead{$\Delta\alpha$\,\tablenotemark{a}} &
\colhead{$\Delta\delta$\,\tablenotemark{a}} &
\colhead{$P_{450}$} &
\colhead{$\sigma_{p450}$} &
\colhead{P.A.\,\tablenotemark{b}} &
\colhead{$\sigma_\text{P.A.}$} &
\colhead{$I_{450}$\,\tablenotemark{c}} &
\colhead{$P_{350}$} &
\colhead{$\sigma_{p350}$} &
\colhead{P.A.\,\tablenotemark{b}} &
\colhead{$\sigma_\text{P.A.}$} &
\colhead{$I_{350}$\,\tablenotemark{c}} &
\colhead{$P_{450}/P_{350}$\,\tablenotemark{d}} \\
\colhead{(arcsec)} &
\colhead{(arcsec)} &
\colhead{(\%)} &
\colhead{(\%)} &
\colhead{(deg)} &
\colhead{(deg)} &
\colhead{(-)} &
\colhead{(\%)} &
\colhead{(\%)} &
\colhead{(deg)} &
\colhead{(deg)} &
\colhead{(-)} &
\colhead{(-)} }
\startdata
63 & -55 & 1.6 & 0.3 & 35.4 & 5.4 & 0.30 & 2.0 & 0.2 & 27.8 & 3.5 & 0.26 & 0.79\\
53 & -19 & 1.1 & 0.2 & 20.4 & 4.9 & 0.34 & 1.5 & 0.1 & 13.1 & 2.6 & 0.31 & 0.72\\
51 & -76 & 2.1 & 0.3 & 40.2 & 4.2 & 0.25 & 2.3 & 0.3 & 35.1 & 3.7 & 0.21 & 0.91\\
48 & -2 & 1.4 & 0.2 & 5.9 & 3.7 & 0.29 & 2.3 & 0.1 & 4.5 & 1.7 & 0.29 & 0.60\\
46 & 71 & 1.8 & 0.4 & 30.1 & 6.5 & 0.22 & 2.3 & 0.4 & 24.5 & 5.1 & 0.18 & 0.77\\
43 & 14 & 1.6 & 0.2 & 9.7 & 3.3 & 0.29 & 2.0 & 0.2 & 7.3 & 2.9 & 0.28 & 0.80\\
36 & -24 & 1.1 & 0.2 & 15.3 & 4.1 & 0.32 & 2.0 & 0.1 & 5.9 & 1.2 & 0.37 & 0.54\\
33 & 50 & 1.7 & 0.2 & 22.0 & 3.8 & 0.26 & 1.3 & 0.2 & 15.0 & 3.9 & 0.25 & 1.31\\
31 & -7 & 1.3 & 0.2 & 2.4 & 3.3 & 0.32 & 2.0 & 0.1 & 3.0 & 1.0 & 0.38 & 0.64\\
29 & 67 & 1.7 & 0.3 & 21.9 & 4.3 & 0.26 & 1.2 & 0.2 & 17.2 & 4.6 & 0.25 & 1.41\\
26 & 10 & 1.3 & 0.1 & 2.8 & 3.2 & 0.31 & 2.2 & 0.1 & 5.0 & 1.0 & 0.39 & 0.59\\
24 & -45 & 1.4 & 0.1 & 12.0 & 2.3 & 0.39 & 1.9 & 0.1 & 6.7 & 0.9 & 0.48 & 0.74\\
21 & 26 & 1.3 & 0.1 & 17.7 & 3.2 & 0.30 & 1.8 & 0.1 & 7.9 & 1.3 & 0.36 & 0.72\\
19 & -29 & 1.8 & 0.1 & -0.1 & 1.9 & 0.39 & 2.2 & 0.1 & 180.0 & 0.6 & 0.49 & 0.82\\
16 & -86 & 2.2 & 0.2 & 35.2 & 2.4 & 0.31 & 1.7 & 0.2 & 30.9 & 3.4 & 0.31 & 1.30\\
14 & 116 & 2.0 & 0.7 & 11.8 & 9.0 & 0.21 & 1.6 & 0.2 & 12.4 & 4.3 & 0.21 & 1.25\\
14 & -12 & 1.5 & 0.1 & 1.3 & 2.0 & 0.41 & 2.1 & 0.1 & 178.7 & 0.7 & 0.58 & 0.71\\
11 & 62 & 1.7 & 0.2 & 19.3 & 2.7 & 0.31 & 1.0 & 0.1 & 14.6 & 3.4 & 0.36 & 1.70\\
9 & 5 & 1.4 & 0.1 & 0.5 & 2.2 & 0.39 & 2.0 & 0.0 & 177.7 & 0.6 & 0.56 & 0.70\\
6 & -50 & 1.7 & 0.1 & -0.9 & 1.5 & 0.52 & 1.8 & 0.1 & 178.6 & 0.8 & 0.55 & 0.94\\
4 & 21 & 1.1 & 0.1 & 4.6 & 3.1 & 0.35 & 1.4 & 0.1 & 177.3 & 1.1 & 0.52 & 0.78\\
1 & 95 & 1.0 & 0.2 & 15.6 & 5.3 & 0.39 & 1.2 & 0.1 & 5.6 & 1.3 & 0.50 & 0.82\\
1 & -33 & 2.2 & 0.1 & -7.8 & 1.1 & 0.55 & 2.1 & 0.1 & 177.1 & 0.6 & 0.66 & 1.05\\
-4 & -17 & 1.8 & 0.1 & -9.7 & 1.1 & 0.60 & 2.0 & 0.0 & 173.2 & 0.5 & 0.75 & 0.90\\
-6 & -74 & 2.0 & 0.2 & 0.4 & 2.9 & 0.43 & 0.6 & 0.1 & 2.3 & 3.3 & 0.36 & 3.36\\
-6 & 57 & 1.2 & 0.1 & 11.3 & 2.8 & 0.38 & 0.5 & 0.1 & 6.6 & 2.9 & 0.60 & 2.44\\
-9 & -0 & 1.4 & 0.1 & -15.1 & 1.4 & 0.58 & 1.6 & 0.0 & 167.3 & 0.6 & 0.78 & 0.87\\
-11 & -57 & 1.6 & 0.2 & -8.5 & 2.7 & 0.44 & 1.1 & 0.1 & 173.4 & 1.7 & 0.43 & 1.45\\
-11 & 74 & 1.1 & 0.1 & -5.2 & 2.3 & 0.57 & 0.8 & 0.0 & 177.8 & 1.2 & 0.87 & 1.37\\
-14 & 17 & 1.3 & 0.1 & -22.2 & 1.8 & 0.50 & 1.1 & 0.0 & 162.1 & 0.8 & 0.77 & 1.18\\
-16 & 90 & 1.2 & 0.1 & 3.8 & 1.8 & 0.78 & 1.3 & 0.0 & 4.8 & 0.6 & 0.94 & 0.92\\
-16 & -38 & 2.0 & 0.1 & -7.4 & 1.4 & 0.54 & 1.2 & 0.1 & 174.5 & 1.2 & 0.54 & 1.67\\
-19 & 33 & 1.1 & 0.1 & -18.6 & 2.3 & 0.49 & 0.7 & 0.0 & 156.9 & 1.6 & 0.76 & 1.56\\
-21 & -21 & 2.1 & 0.1 & -13.5 & 1.0 & 0.61 & 1.7 & 0.0 & 171.4 & 0.6 & 0.69 & 1.23\\
-24 & 50 & 0.8 & 0.1 & -6.5 & 2.7 & 0.59 & 0.4 & 0.0 & 174.3 & 3.7 & 0.84 & 1.98\\
-26 & -5 & 1.5 & 0.1 & -19.8 & 1.3 & 0.66 & 1.4 & 0.0 & 167.1 & 0.7 & 0.79 & 1.07\\
-31 & 12 & 1.2 & 0.1 & -28.6 & 1.3 & 0.71 & 0.9 & 0.0 & 161.1 & 1.1 & 0.93 & 1.33\\
-33 & -45 & 0.8 & 0.1 & -20.1 & 4.5 & 0.46 & 0.8 & 0.1 & 153.8 & 2.1 & 0.43 & 1.00\\
-33 & 86 & 1.0 & 0.1 & 5.9 & 1.7 & 0.98 & 0.9 & 0.0 & 9.4 & 0.9 & 1.00 & 1.11\\
-38 & 102 & 1.0 & 0.1 & 20.0 & 2.5 & 0.88 & 1.3 & 0.0 & 11.7 & 0.7 & 0.81 & 0.77\\
-38 & -26 & 1.8 & 0.1 & -13.2 & 1.8 & 0.45 & 1.1 & 0.1 & 159.0 & 1.8 & 0.55 & 1.64\\
-43 & -10 & 1.9 & 0.1 & -12.9 & 1.2 & 0.55 & 1.4 & 0.1 & 165.7 & 1.3 & 0.67 & 1.36\\
-46 & 64 & 0.6 & 0.1 & -5.7 & 2.8 & 0.81 & 0.4 & 0.0 & 2.3 & 2.5 & 0.89 & 1.48\\
-51 & 81 & 0.7 & 0.1 & -8.5 & 4.0 & 0.74 & 0.6 & 0.0 & 179.1 & 2.0 & 0.67 & 1.15\\
-53 & -105 & 3.4 & 0.3 & -46.1 & 2.6 & 0.56 & 2.1 & 0.2 & 141.6 & 2.3 & 0.38 & 1.62\\
-56 & -31 & 1.5 & 0.1 & -27.3 & 2.7 & 0.39 & 1.4 & 0.1 & 143.0 & 2.0 & 0.50 & 1.07\\
-58 & -88 & 3.8 & 0.4 & -43.5 & 2.4 & 0.46 & 2.5 & 0.1 & 140.7 & 1.4 & 0.51 & 1.51\\
-66 & 2 & 1.2 & 0.1 & -16.6 & 1.3 & 0.75 & 0.9 & 0.1 & 164.5 & 2.5 & 0.78 & 1.34\\
-68 & 76 & 1.5 & 0.2 & -26.3 & 3.4 & 0.43 & 0.5 & 0.1 & 151.3 & 3.1 & 0.59 & 3.03\\
-71 & 19 & 0.7 & 0.0 & -11.9 & 1.8 & 0.94 & 0.7 & 0.1 & 177.0 & 3.1 & 0.91 & 1.01\\
-71 & -109 & 4.8 & 0.4 & -47.7 & 2.2 & 0.63 & 2.8 & 0.3 & 136.5 & 3.4 & 0.24 & 1.72\\
-73 & 93 & 0.7 & 0.3 & -21.7 & 9.4 & 0.45 & 0.5 & 0.1 & 161.5 & 3.1 & 0.68 & 1.51\\
-75 & -93 & 4.5 & 0.3 & -45.5 & 1.9 & 0.63 & 3.7 & 0.2 & 136.9 & 1.3 & 0.38 & 1.22\\
-75 & -19 & 1.6 & 0.1 & -29.3 & 2.5 & 0.45 & 1.5 & 0.1 & 147.7 & 1.9 & 0.51 & 1.07\\
-80 & 55 & 0.7 & 0.1 & -55.4 & 5.0 & 0.52 & 0.2 & 0.1 & 123.4 & 10.8 & 0.72 & 4.00\\
-85 & 71 & 1.4 & 0.2 & -49.6 & 4.9 & 0.40 & 0.9 & 0.1 & 128.6 & 1.8 & 0.68 & 1.55\\
-90 & 88 & 1.4 & 0.4 & -63.9 & 7.4 & 0.46 & 0.6 & 0.1 & 119.9 & 2.3 & 0.76 & 2.44\\
-98 & -81 & 6.2 & 0.9 & -49.8 & 3.3 & 0.41 & 3.3 & 0.3 & 135.8 & 2.9 & 0.27 & 1.90\\
-103 & 67 & 1.7 & 0.4 & -52.9 & 6.9 & 0.50 & 0.8 & 0.2 & 128.3 & 7.6 & 0.56 & 2.13\\
-103 & -64 & 4.3 & 1.1 & -50.4 & 6.5 & 0.33 & 3.6 & 0.3 & 139.3 & 2.3 & 0.33 & 1.19\\
-103 & 10 & 1.3 & 0.3 & -39.6 & 8.1 & 0.43 & 1.8 & 0.5 & 134.1 & 7.1 & 0.60 & 0.69\\
\enddata
\tablenotetext{a}{Offsets are given with respect to $18^\text{h} 20^\text{m} 25^\text{s}$, $-16\arcdeg 13\arcmin 02\arcsec$ (J2000).}
\tablenotetext{b}{Position angle of electric vector measured east from north.}
\tablenotetext{c}{Intensity normalized to 1.00 at peak.}
\tablenotetext{d}{For vectors with $\Delta\alpha \ge 0$, median = 0.790, mean = 0.897 and std = 0.294 (see Table~\ref{table:450PS}).}
\label{table:450350}
\end{deluxetable}
\begin{deluxetable}{c c c c c c}
\tablecaption{M17 Polarization Spectrum Data}
\tablecolumns{6}
\tablewidth{0pt}
\tablehead{\colhead{Ratio} &
\colhead{\# of Points} &
\colhead{Median} &
\colhead{Mean} &
\colhead{Std} &
\colhead{Ref.} }
\startdata
$P_{450}/P_{60}$ & 15 & 0.398 & 0.405 & 0.067 & Table~\ref{table:45060}\\
$P_{450}/P_{100}$ & 12 & 0.521 & 0.558 & 0.165 & Table~\ref{table:450100}\\
$P_{450}/P_{350}$ & 23 & 0.790 & 0.897 & 0.294 & Table~\ref{table:450350}\\
\enddata
\label{table:450PS}
\end{deluxetable}
\clearpage
Past experience with other clouds shows that at a fixed wavelength, SHARP tends to produce higher polarization magnitudes than Hertz, even after smoothing to the same angular resolution (unpublished result by JEV). Our present work shows that M17 has lower median polarization at 450 $\mu$m (SHARP data) relative to that at 350 $\mu$m (Hertz data), for the east side of our map. If the above-mentioned offset between SHARP and Hertz is present in our data, then the actual median $P_{450}$/$P_{350}$ polarization ratio must be even lower than reported here. Thus, we are confident that the observed monotonic decrease in polarization ratio from 60 to 450 $\mu$m is a robust result. This suggests that for M17 the negative slope region extends beyond 350 $\mu$m, at least to 450 $\mu$m, which is different from what is seen in OMC-1 (Figure~\ref{fig:M17PS}).
According to the RAT theory of grain alignment, the alignment efficiency should be a function of grain environment. Simulations of a molecular cloud modeled as a mixture of aspherical graphite and silicate grains aligned by RATs result in a polarization spectrum that rises from 100 to 450 $\mu$m and is flat at longer wavelengths \citep{2007ApJ...663.1055B}. \cite{2009ApJ...696....1D} present similar models composed of both aspherical and spherical grains of both silicate and graphite composition. \cite{2009ApJ...696....1D} obtained their observational constraints on grain alignment from the optical to near-infrared polarization spectrum, and they obtained a monotonic polarization spectrum with a positive slope, even longward of 450 $\mu$m.
\begin{figure}[h!b]
\centering
\includegraphics[width=0.45\textwidth]{f05.eps}
\caption{Qualitative picture for how the cloud polarization spectrum might be expected to change as the power emitted by young stars in the cloud increases. The solid curve (A) represents a cloud with no internal sources. The dashed (B) and dot-dashed (C) curves show the expectation for clouds containing, respectively, less powerful and more powerful internal sources. $\lambda_B$ and $\lambda_C$ are the locations of the respective polarization spectrum minima.}
\label{fig:RATalign}
\end{figure}
Figure~\ref{fig:RATalign} qualitatively illustrates our hypotheses concerning the polarization spectra for different clouds shown in Figure~\ref{fig:M17PS} in the context of the RAT alignment theory. To understand this, note that the two models described in the previous paragraph do not include internal radiation sources. These two models yield a monotonically increasing (on average) polarization spectrum, i.e. positive slope (see curve A of Figure~\ref{fig:RATalign}); they cannot account for the negative slope region of the polarization spectrum, which seems to be generally dominant between about 60 $\mu$m and 350 $\mu$m, corresponding roughly to the far-IR band (Figure~\ref{fig:M17PS}). However, note that the cool (10 -- 20 K) dust grain population that is considered in these models cannot explain the generally quite high intensity levels observed in the far-IR. The far-IR emission from molecular clouds must instead be primarily due to dust heated to much higher temperatures by the intense radiation field created by embedded young stellar objects (YSOs) and young stars. This warmer, highly irradiated dust would be expected to be very well aligned if grains are indeed aligned by the RAT mechanism. Furthermore, the very hottest dust components should be the best aligned, which is precisely the recipe for a negative slope polarization spectrum. The observed far-IR/submillimeter spectra of Figure~\ref{fig:M17PS} may thus roughly be explained by drawing two component curves, a negative slope curve in the far-IR (warm dust irradiated by internal sources) and a positive slope curve at the longer submillimeter wavelengths (cool dust far from internal sources of radiation). The position of the minimum is crudely set by the intersection of these two component curves (see Figure~\ref{fig:RATalign}).
\begin{figure*}
\centering
\includegraphics[width=0.65\textwidth]{f06.eps}
\caption{Magnetic field vectors from SHARP plotted over the (21 cm)/(450 $\mu$m) intensity ratio map in arbitrary units (gray scale and contours), illustrating the compression shock front that is passing through the cloud. The contours range from 10\% to 100\% of the peak intensity ratio, in steps of 10\%. A new X-Y coordinate system is rotated by about $66\arcdeg$ with respect to the RA-Dec coordinates. The X axis aligns with the 10\% contour level. The shock proceeds in the $-$Y direction. The Y = $0\arcsec$ and Y = $-50\arcsec$ lines separate the cloud into post-shock (Y $>$ $0\arcsec$), shock front ($-50\arcsec$ $<$ Y $<$ $0\arcsec$) and pre-shock (Y $<$ $-50\arcsec$) regions, as determined from variations in inferred magnetic field orientation (see Figure~\ref{fig:phiy}). For clarity, not all vectors from Figure~\ref{fig:SHARPpol} are shown here. Right ascension and declination offsets are given with respect to $18^\text{h} 20^\text{m} 25^\text{s}$, $-16\arcdeg 13\arcmin 02\arcsec$ (J2000).}
\label{fig:rotation}
\end{figure*}
In this picture, clouds with no internal sources would be expected to have only one component curve and thus only a positive slope region, shown as curve A in Figure~\ref{fig:RATalign}. (This conjecture cannot be tested with present data, as no such clouds are included in the sample shown in Figure~\ref{fig:M17PS} because such quiescent clouds are faint shortward of 800 $\mu$m and have not been observed polarimetrically at these shorter wavelengths.) If we include a few internal sources in a cloud, then we expect the negative slope component curve to be evident at least at the very shortest wavelengths where the contribution from cool dust far from radiation sources is negligible. This is shown in Figure~\ref{fig:RATalign} as curve B. Adding still more internal sources might be expected to increase the influence of the negative slope component curve, thus yielding curve C which has its minimum shifted to longer wavelengths. In this interpretation, the fact that the minimum for the eastern part of M17 SW is shifted to the right with respect to that of OMC-1, i.e. from 350 $\mu$m to 450 $\mu$m or beyond (see Figure~\ref{fig:M17PS}), would indicate the existence of a stronger internal radiation field in the eastern portion of M17 SW in comparison with that in OMC-1.
\subsection{Changes in Magnetic Field Direction across the Shock Front}
\label{sec:magchange}
In Section~\ref{sec:intrm17} we reviewed how the central OB stars in M17 heat the \ion{H}{2} region and carve a hollow conical shape into the surrounding molecular cloud, and we noted that the M17 SW region provides a nearly edge-on view of the corresponding shock front. We can trace the progress of this shock front across M17 SW by studying the ratio of atomic column density to total column density. This ratio will be higher for the post-shock region. Figure~\ref{fig:rotation} shows 450 $\mu$m inferred magnetic field vectors superposed on a map of the ratio of 21 cm line intensity (VLA, \citealp{2001ApJ...560..821B}) to 450 $\mu$m continuum intensity (SHARP). This ``intensity ratio'' is a reasonable proxy for the ratio of atomic column density to total column density. We now define a new X-Y coordinate system for which the X axis is approximately coincident with the 10\% contour level of the normalized intensity ratio, as shown in Figure~\ref{fig:rotation}. This coordinate system is rotated counter-clockwise by an angle of about 66$\arcdeg$ with respect to the RA-Dec system. The shock front proceeds along the -Y direction.
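The coordinate rotation and the region boundaries just described can be sketched as follows. The sign convention of the rotation is our assumption; only the $\sim 66\arcdeg$ angle and the Y = $0\arcsec$ and Y = $-50\arcsec$ boundaries come from the text:

```python
import numpy as np

def to_shock_frame(d_ra, d_dec, rot_deg=66.0):
    """Rotate RA/Dec offsets (arcsec) into an X-Y frame rotated
    counter-clockwise by ~66 deg with respect to the RA-Dec axes;
    the shock proceeds along -Y."""
    th = np.radians(rot_deg)
    x = d_ra * np.cos(th) + d_dec * np.sin(th)
    y = -d_ra * np.sin(th) + d_dec * np.cos(th)
    return x, y

def classify_region(y_arcsec):
    """Post-shock (Y > 0), shock front (-50 < Y < 0), or pre-shock
    (Y < -50), with Y in arcsec."""
    if y_arcsec > 0.0:
        return "post-shock"
    return "shock front" if y_arcsec > -50.0 else "pre-shock"
```

With `rot_deg = 0` the transform reduces to the identity, which provides a quick sanity check of the rotation matrix.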
In Figure~\ref{fig:phiy}, the polarization angle for multi-wavelength data toward M17 SW is shown as a function of the Y coordinate defined above. We can clearly see a strong correlation between the two quantities. Based on the variation of polarization angle seen in Figure~\ref{fig:phiy}, we use two lines, defined by Y = 0$\arcsec$ and Y = -50$\arcsec$, to separate the cloud into three regions: post-shock, shock front and pre-shock. These three regions are indicated in both Figure~\ref{fig:rotation} and Figure~\ref{fig:phiy}.
\begin{figure}[h!b]
\centering
\includegraphics[width=0.5\textwidth]{f07.eps}
\caption{Polarization angles as a function of perpendicular distance Y from post-shock/shock-front boundary line (see Figure~\ref{fig:rotation}). The post-shock region has Y $> 0\arcsec$ and the pre-shock region has Y $< -50\arcsec$. The 60 $\mu$m data points are in yellow, 100 $\mu$m data in green, 350 $\mu$m data in blue, and 450 $\mu$m data in red. We only show data from the region within $-120\arcsec <$ RA Offset $< 70\arcsec$ and $-120\arcsec <$ Dec Offset $< 120\arcsec$ (see Figure~\ref{fig:rotation}).}
\label{fig:phiy}
\end{figure}
In Section~\ref{sec:genres} we reviewed the arguments presented by \cite{1996ApJ...470..566D} and others regarding the manner in which a compression front or shock front is expected to distort a cloud's magnetic field. Specifically, the effect of the compression should be to force the magnetic field nearly parallel to the observed compression front. Figure~\ref{fig:phiy} gives the polarization angle values corresponding to magnetic field parallel to and perpendicular to the shock front, which are respectively at 66$\arcdeg$ and -24$\arcdeg$. In order to quantitatively explore the effects of the shock, we computed mean polarization angles for the pre-shock, shock front, and post-shock regions. Due to the 180$\arcdeg$ ambiguity in polarization angle, the mean angle is, strictly speaking, not well defined. The equal weight Stokes mean (EWSM) technique \citep{2006ApJ...648..340L} provides a simple and unambiguous method for computing an effective mean angle, and using this technique we find mean polarization angles of -46$\arcdeg$, 0$\arcdeg$, and 18$\arcdeg$, respectively, for the pre-shock, shock front, and post-shock regions. From the above values, we can see that the pre-shock region has its inferred field rotated by 70$\arcdeg$ counter-clockwise with respect to the shock front, while for the post-shock region the field is 48$\arcdeg$ clockwise from the shock front. It appears that, as expected, the shock front has changed the field direction in the sense of making it lie closer to parallelism with the shock front (see also Figure~\ref{fig:phiy}). However, the angle between the post-shock magnetic field and the shock front is still substantial, for reasons that are not clear.
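The EWSM effective mean angle can be computed by averaging unit pseudo-Stokes vectors; the following sketch is our reading of the technique, not code from \cite{2006ApJ...648..340L}:

```python
import numpy as np

def ewsm_mean_angle(pa_deg):
    """Equal weight Stokes mean: average unit pseudo-Stokes vectors
    (cos 2theta, sin 2theta), which removes the 180-degree ambiguity,
    then convert back to an effective mean position angle in degrees."""
    th = np.radians(np.asarray(pa_deg, dtype=float))
    q = np.mean(np.cos(2.0 * th))
    u = np.mean(np.sin(2.0 * th))
    return 0.5 * np.degrees(np.arctan2(u, q))

# Angles of 178 and 2 deg are only 4 deg apart on the orientation
# circle; a naive arithmetic mean gives 90 deg, the EWSM gives ~0 deg.
print(ewsm_mean_angle([178.0, 2.0]))
```

Doubling the angles maps the $180\arcdeg$-periodic orientations onto the full circle, where ordinary vector averaging is well defined; halving the result maps the mean back.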
The magnetic field in the pre-shock region of M17 SW may be taken as representative of the unperturbed field of the M17 molecular cloud. This field has an average direction of 44$\arcdeg$, and thus lies within 17$\arcdeg$ of the Galactic plane which is at position angle 27$\arcdeg$ (see Figure~\ref{fig:rotation}). This result is consistent with the conclusion by \cite{2006ApJ...648..340L} that Giant Molecular Cloud magnetic fields run preferentially parallel to the Galactic plane.
We have argued that we can understand gross features of the magnetic field in M17 SW in terms of simple arguments previously advanced by \cite{1996ApJ...470..566D}, \cite{2000ApJ...529..241N}, and \cite{2006ApJ...648..340L}. Our two-dimensional analysis of cloud structure and polarization reveals some physical processes in the shock front region, which has an edge-on shell structure. However, we still lack a full understanding of the magnetic field in this source. In particular, both the magnetic field and the shell structure are inherently three-dimensional. A full three-dimensional model including the results of Zeeman mapping of the line-of-sight field (e.g., \citealp{2001ApJ...560..821B}) might be helpful, but is beyond the scope of the present paper.
\subsection{Changes in Polarization Spectrum across the Shock Front}
\label{sec:pschange}
In warmer regions with higher radiation fields, the dust grains would be expected to be better aligned, if RAT theory is correct. Statistically higher polarization fractions are observed in warmer regions \citep{2012ApJS..201...13V}. We cannot observe this easily in M17 since the magnetic field inclination with respect to the LOS likely varies in a complex manner, due to the shock front in M17 \citep{1999ApJ...515..304B}. However, this geometric effect should not affect the polarization spectrum, due to our use of the $|\Delta\phi| < 10\arcdeg$ criterion (see Section~\ref{sec:aveps}). Thus, we turn to the issue of polarization spectrum variations across the shock front.
The average polarization spectrum plotted in Figure~\ref{fig:M17PS} is dominated by vectors in the post-shock region, owing to the limited spatial extent of the 60 and 100 $\mu$m data. In this region $P_{450}$ is smaller than $P_{350}$. However, in the shock front and pre-shock regions, we find that the median of $P_{450}$ is greater than that of $P_{350}$. Using the data presented in Table~\ref{table:450350}, Figure~\ref{fig:21cm-M17} shows the $P_{450}$/$P_{350}$ polarization ratio vectors superposed on the (21 cm)/(450 $\mu$m) intensity ratio map. Most of the blue vectors ($P_{450} < P_{350}$) are in the post-shock region. Although some red ($P_{450} > P_{350}$) vectors are distributed around the densest part of the cloud, the contour level (21 cm)/(450 $\mu$m) = 0.1 roughly separates the respective areas for blue and red vectors. Figure~\ref{fig:Hist} shows separate histograms of the $P_{450}$/$P_{350}$ polarization ratio for the pre-shock and post-shock regions. The median polarization ratios of these two spatial regions are 1.33 and 0.81, respectively. Measurements of the polarization ratio corresponding to a negative spectral slope are more common in the post-shock region, and the opposite is true in the pre-shock region. If the RAT theory is correct, the stronger radiation field in the post-shock east part of the cloud should cause the minimum in the polarization spectrum to shift toward longer wavelength (see Figure~\ref{fig:RATalign}) for this region. Thus the $P_{450} / P_{350}$ ratio should become smaller in the post-shock region, which is exactly what we see in Figure~\ref{fig:Hist}.
\begin{figure}[h!b]
\centering
\includegraphics[width=0.48\textwidth]{f08.eps}
\caption{The $P_{450}$/$P_{350}$ polarization ratio vectors superposed on the (21 cm)/(450 $\mu$m) intensity ratio map from Figure~\ref{fig:rotation}. The length of each vector is proportional to the ratio of $P_{450}/P_{350}$. The orientations of the vectors are parallel to the polarization angles of the 450 $\mu$m data. The blue vectors represent $P_{450}$ $<$ $P_{350}$, while the red vectors correspond to $P_{450}$ $>$ $P_{350}$. The scale at bottom left is equivalent to $P_{450}/P_{350} = 1.0$. Right ascension and declination offsets are given with respect to $18^\text{h} 20^\text{m} 25^\text{s}$, $-16\arcdeg 13\arcmin 02\arcsec$ (J2000).}
\label{fig:21cm-M17}
\end{figure}
\begin{figure}[h!b]
\centering
\includegraphics[width=0.50\textwidth]{f09.eps}
\caption{Histograms of $P_{450}$/$P_{350}$ polarization ratio for pre-shock and post-shock regions. The median ratios of these two spatial regions are respectively 1.33 and 0.81, with standard deviations of 0.42 and 0.46. More measurements of polarization ratio corresponding to negative slope are found in the post-shock region, and more positive slope ratios are found in the pre-shock region.}
\label{fig:Hist}
\end{figure}
Figure~\ref{fig:binned} shows the correlation between the $P_{450}$/$P_{350}$ polarization ratio and the (21 cm)/(450 $\mu$m) intensity ratio. In this figure, data shown in Figure~\ref{fig:21cm-M17} are binned into four bins having the following (21 cm)/(450 $\mu$m) intensity ratios: (0.024 -- 1.748), (1.748 -- 2.289), (2.289 -- 7.695) and (7.695 -- 36.877). The bin sizes have been chosen to make the error bars in $P_{450}$/$P_{350}$ polarization ratio (vertical error bars) approximately equal. The data points and vertical error bars in Figure~\ref{fig:binned} show the mean value of the $P_{450}/P_{350}$ ratio for each bin and the uncertainty of this value, respectively. These uncertainties were calculated assuming Gaussian errors in the individual ratios shown in Figure~\ref{fig:21cm-M17}. (That is, the plotted uncertainty is the r.m.s. of the ratios divided by the square root of the number of ratios used to compute the corresponding mean value.) The bin sizes are represented by the horizontal error bars. Although the vertical error bars are large, we do see a trend of falling $P_{450}/P_{350}$ ratio as we progress from molecular-dominated to atomic-dominated regions. The horizontal axis of Figure~\ref{fig:binned} provides a measurement of the strength of the radiation field. We see that the atomic-dominated regions, which have greater exposure to radiation sources in comparison with the molecular-dominated regions, exhibit a shift to a negative slope, just as expected if the minimum is being pushed toward longer wavelength as illustrated in Figure~\ref{fig:RATalign}. Future observations covering more wavelengths could potentially reduce the error bars in the plot of Figure~\ref{fig:binned}, confirming the existence of this tentative correlation between dust grain environment and polarization spectrum.
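The binning procedure described above can be sketched in a few lines of Python. The intensity ratios and polarization ratios below are hypothetical stand-ins for the measured values plotted in Figure~\ref{fig:21cm-M17}; only the bin edges match those quoted in the text.

```python
import math

def bin_means(x, y, edges):
    """Bin the y-values by their x-values and return, per bin,
    (mean of y, standard error of the mean), where the s.e.m. is
    the r.m.s. scatter divided by the square root of the count."""
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        ys = [yi for xi, yi in zip(x, y) if lo <= xi < hi]
        mean = sum(ys) / len(ys)
        rms = math.sqrt(sum((yi - mean) ** 2 for yi in ys) / len(ys))
        out.append((mean, rms / math.sqrt(len(ys))))
    return out

# Hypothetical (21 cm)/(450 um) intensity ratios and P450/P350 ratios
intensity = [0.5, 1.0, 2.0, 2.2, 5.0, 8.0, 20.0, 30.0]
pol_ratio = [1.4, 1.3, 1.0, 0.9, 0.8, 0.7, 0.75, 0.65]
edges = [0.024, 1.748, 2.289, 7.695, 36.877]   # bin edges from the text
stats = bin_means(intensity, pol_ratio, edges)
```

With real data, the edges would additionally be tuned until the per-bin standard errors are approximately equal, as described above.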
\begin{figure}[h!b]
\centering
\includegraphics[width=0.48\textwidth]{f10.eps}
\caption{The $P_{450}$/$P_{350}$ polarization ratio plotted against the (21 cm)/(450 $\mu$m) intensity ratio. The data used for this plot are the same as shown in Figure~\ref{fig:21cm-M17}. The bins for the data points are (0.024 -- 1.748), (1.748 -- 2.289), (2.289 -- 7.695) and (7.695 -- 36.877) in the (21 cm)/(450 $\mu$m) intensity ratio. For each bin, the data point and vertical error bar respectively represent the mean value of the $P_{450}/P_{350}$ ratio for that bin and the uncertainty of this mean value. The horizontal error bars show the ranges of the bins. These ranges were determined so as to keep the vertical error bars for the bins as close as possible to one another.}
\label{fig:binned}
\end{figure}
\section{Summary}
The combination of general multi-wavelength studies and polarimetric data in the far-IR and submillimeter allows us to probe the physical structure and evolution of the M17 cloud. At large scales, young OB-type stars in the center of the cloud heat the \ion{H}{2} region to $10^6$--$10^7$ K and drive a high-energy fountain toward the southeast. The \ion{H}{2} wind pushes the \ion{H}{1} and $\text{H}_2$ regions outward, creating a hollow region with a conical shape. The magnetic field is found to bulge away from the \ion{H}{2} region, as noted earlier by \cite{1996ApJ...470..566D}.
At small scales, within M17 SW, a compression shock front is found at the boundary between the \ion{H}{1} and $\text{H}_2$ regions. There are significant differences in the polarization spectrum between the pre-shock and post-shock regions. Specifically, the negative-slope regime is dominant (extending from 60 to 450 $\mu$m) in the east (post-shock) part of M17 SW, where the radiation field is stronger. In the west (pre-shock) part, where the radiation is weaker, the positive-slope regime begins to dominate by 350 $\mu$m. We have suggested that this change is in qualitative agreement with predictions of the radiative alignment torque (RAT) model for grain alignment. Grains in molecular clouds are not always perfectly aligned with the magnetic field, and any model that seeks to explain the polarization spectrum should take into account the variation of interstellar physical conditions across a cloud as well as along the line of sight.
\acknowledgments
We thank C. Brogan for providing the 21 cm intensity data. We are grateful to A. Chepurnov, R. Hildebrand, and A. Lazarian for illuminating discussions. This material is based upon work at the Caltech Submillimeter Observatory, which is operated by the California Institute of Technology under cooperative agreement with the National Science Foundation (AST-0838261). We thank the National Science Foundation for supporting SHARP, via grant AST-0909030 to Northwestern University.
\bibliographystyle{apj}
\section*{Abstract}
Dengue is a major threat to public health in Brazil, the world's sixth most populous country, with over 1.5 million cases recorded in 2019 alone. Official data on dengue case counts is delivered incrementally and, for many reasons, often subject to delays of weeks. In contrast, data on dengue-related \emph{Google} searches and \emph{Twitter} messages is available in full with no delay. Here, we describe a model which uses online data to deliver improved weekly estimates of dengue incidence in Rio de Janeiro. We address a key shortcoming of previous online data disease surveillance models by explicitly accounting for the incremental delivery of case count data, to ensure that our approach can be used in practice. We also draw on data from \emph{Google Trends} and \emph{Twitter} in tandem, and demonstrate that this leads to slightly better estimates than a model using only one of these data streams alone. Our results provide evidence that online data can be used to improve both the accuracy and precision of rapid estimates of disease incidence, even where the underlying case count data is subject to long and varied delays.
\section*{Introduction}
Dengue is the most common mosquito-borne disease worldwide, with 50 to 100 million cases reported each year \cite{StanawayEtAl2016} and almost 4 billion people at risk \cite{BradyEtAl2012}. Typical symptoms of dengue include high fever, rashes, muscle aches and joint pain. A small proportion of patients develop a severe dengue infection, known as dengue haemorrhagic fever (DHF), which can involve massive bleeding and lead to death \cite{who2009}. The annual global number of dengue infections continues to grow, having already risen by a factor of 30 over the last 50 years \cite{WHO2012}. Unfortunately, there is currently no antiviral treatment to reduce severe illness \cite{Endy2014}, nor an effective vaccine.
Dengue has been endemic in Brazil since 1986, with all four serotypes circulating since 2012. Large epidemics occur every three to five years, causing disruption in the health system. Given its high incidence rate, dengue is not only life-threatening but also a serious burden on the Brazilian economy. To help mitigate dengue outbreaks, policymakers would greatly benefit from accurate, rapidly available information on the current number of cases of the disease.
In reality, official data on the number of dengue cases is often delayed. In Brazil, some of these delays are caused by a lack of dedicated staff to complete the notification paperwork, as well as poor infrastructure in healthcare settings. While Brazil has an online reporting system, healthcare centres often do not have a good internet connection. In such situations, notification of each dengue case is often recorded on a paper sheet, which is then filed locally and sent to the municipal or state health secretariat for online submission. Delays are worsened further when surveillance teams are involved in other emergencies.
In recent years, researchers have started to look at alternative sources of data which may provide rapid indicators of disease case counts. Instead of forecasting the incidence of the disease, the goal here is to ``nowcast'' the current number of cases, before the delayed official data is released. Previous work has investigated whether rapidly available data on people searching for a disease on \emph{Google} or discussing the disease on \emph{Twitter} could provide rapid insights into the incidence of a disease. For example, data on \emph{Google} searches has been shown to improve nowcasts of influenza case counts, in comparison to a model that makes estimates using official data alone \cite{Ginsberg2009,Preis2014,Yang2015,Lampos2015}. For dengue, relationships have been found between case counts and the use of online services such as \emph{Google} and \emph{Twitter} \cite{Gomide2011,Chan2011,Souza2015,Marques2017,Yang2017}, complementing other work that has sought to use rapidly available weather data \cite{Luz2008,Hii2012,Ramadona2016}.
However, delayed official data on the number of cases of a disease is often made available incrementally. For example, in Rio de Janeiro, around 25\% of dengue cases are entered into the system after a week, and less than 50\% after two weeks. Previous analyses of the value of online data in nowcasting dengue have not taken this incremental delivery into account, modelling official dengue data releases as lagged, full releases by working at a lower temporal granularity such as months \cite{Yang2017}.
Here, we seek to investigate whether online data can help improve weekly dengue case count nowcasts in a more realistic scenario where the official data is released incrementally. To do this, we build on a time series analysis framework for generating nowcasts of current disease case counts using historic, incrementally released case count data, introduced by Bastos and colleagues \cite{Bastos}. In addition, in contrast to previous approaches, we draw on data from \emph{Google Trends} and \emph{Twitter} in tandem, to investigate whether combining these two data sources can lead to better estimates than only using one at a time. We examine whether online data can improve both the accuracy and the precision of nowcasting estimates. The model we present is designed to be used in practice in the surveillance system \emph{InfoDengue}, which serves to detect dengue outbreaks in hundreds of Brazilian cities based on weekly official data \cite{codeco2016}.
\section*{Materials and Methods}
In this section, we detail the data sources used and the models analysed in the present study. The models that we consider all seek to deliver weekly estimates of dengue case counts in Rio de Janeiro. We carry out our analysis on the basis of epidemiological weeks, which are defined as starting on a Sunday.
\subsection*{Data sources}
Here we describe the three main sources of data used in this study.
\begin{description}
\item[Official data on dengue cases.] This is a list of suspected dengue cases for the city of Rio de Janeiro during the period from 1st January 2012 to 23rd July 2016. Each case has a \emph{date of notification} and a \emph{date of system entry}. The \emph{date of notification} is the date on which the patient visits the doctor and dengue is diagnosed. The \emph{date of system entry} is the date on which the information about this case is inserted into the official database and becomes available for analysis, for example in nowcasting models such as those described here.
Note that suspected dengue cases later confirmed to be a disease other than dengue are removed from the list. The data was obtained from the Health Secretariat of Rio de Janeiro, via the \emph{InfoDengue} project.
\item[Google Trends.] Data on search behaviour was obtained via the \emph{Google Extended Trends API for Health}. We obtained daily data for the whole period of analysis from 1st January 2012 to 23rd July 2016. In order to identify searches relating to the topic of \emph{dengue}, we searched for the topic using \emph{Wikidata}\footnote{\href{https://www.wikidata.org/}{https://www.wikidata.org/}}, and then used the identified topic's Freebase identifier to query the \emph{Google Extended Trends API for Health}. For the topic of \emph{dengue fever} (referred to as \emph{dengue} from now on), the Freebase ID is \emph{/m/09wsg}. We chose the topic \emph{dengue fever} rather than \emph{dengue virus} as search volume for the latter was much lower. In Brazil, the finest geographical resolution for data retrieved from the \textit{Google Extended Trends API for Health} is state level. We therefore requested data on searches made in the state of Rio de Janeiro only. The data then returned by the API represents the probability of a few consecutive searches relating to dengue, including typos and indirect descriptions of the disease, within the state of Rio de Janeiro on each day in the period of analysis.
Since 2015, the \textit{Zika} arbovirus has presented an additional risk in Rio de Janeiro, with considerable media coverage in some years. This disease is spread by the same mosquito as dengue, and also shares some symptoms. The same is true of a further arbovirus, \textit{chikungunya}, which has also been present in Rio de Janeiro since 2015, although with lower case counts. To allow us to investigate whether data on \textit{Google} searches relating to these two arboviruses might act as an additional potential signal for dengue incidence, we also retrieve searches relating to the topics of \emph{Zika virus} (referred to as \emph{Zika} from now on, Freebase ID \emph{/m/080m\_5j}; chosen instead of \emph{Zika fever} due to higher search volume) and \emph{chikungunya} (Freebase ID \emph{/m/01\_\_7l}).
\item[Twitter.] We also analyse data on the volume of tweets relating to dengue that were posted to \emph{Twitter} during each week between 1st January 2012 and 23rd July 2016, for which the user location was determined to be in Rio de Janeiro city. Location was inferred on the basis of the user location specified in the \emph{Twitter} user's user information, as described in more detail by Gomide et al. \cite{Gomide2011}. The data reflects the volume of tweets that meet both the criteria of containing the word `dengue' and expressing personal experience of dengue (e.g., in English, ``You know I have had dengue?'') \cite{Gomide2011}. This dataset was made available to us by the \emph{Observatorio da Dengue}\footnote{\href{http://www.observatorio.inweb.org.br/dengue/}{http://www.observatorio.inweb.org.br/dengue/}} via the \emph{InfoDengue} project.
\end{description}
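As a minimal illustration of how the reporting delay for each case follows from the two dates recorded above, the Python sketch below (with hypothetical dates) computes the delay $\tau$ in whole epidemiological weeks, taking weeks to start on a Sunday as in our analysis.

```python
from datetime import date, timedelta

def epiweek_start(d):
    """Start (Sunday) of the epidemiological week containing d.
    Python's weekday() has Monday=0 ... Sunday=6, so the number of
    days since the preceding Sunday is (weekday + 1) mod 7."""
    return d - timedelta(days=(d.weekday() + 1) % 7)

def reporting_delay_weeks(notified, entered):
    """Delay tau, in whole epidemiological weeks, between the date
    of notification and the date of system entry of a case."""
    return (epiweek_start(entered) - epiweek_start(notified)).days // 7

# Hypothetical case: notified mid-week, entered three weeks later
tau = reporting_delay_weeks(date(2013, 4, 10), date(2013, 5, 1))
```

A case notified and entered within the same Sunday-to-Saturday week has $\tau = 0$; only these cases are visible to analysts by the end of the notification week.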
We depict all the time series described above in Fig. \ref{fig:Fig1}. It is possible to see that there is a correlation between the number of cases of dengue notified to doctors in a given week (Fig. \ref{fig:Fig1}A, black) and both the volume of \emph{Google} searches (Fig. \ref{fig:Fig1}B; Kendall's $\tau = 0.506$, $N=238$, $p<0.001$) and tweets (Fig. \ref{fig:Fig1}C; Kendall's $\tau = 0.557$, $N=238$, $p<0.001$) relating to the topic of dengue. Whereas data on \emph{Google} searches and tweets is available almost immediately, only a very small fraction of dengue cases are entered into the surveillance system and therefore known to policymakers and analysts in the same week in which the patient visits the doctor (Fig. \ref{fig:Fig1}A, red). Indeed, there is a mean delay of 9 weeks before 95\% of the cases notified to doctors in a given week are entered into the system (Fig. \ref{fig:Fig2}). This means that in any given week, the official data on dengue cases in previous weeks is also notably incomplete. This presents clear obstacles for autoregressive models that seek to infer the number of cases in a given week by drawing on complete knowledge about previous weeks. It can also be seen that the number of cases entered into the system in the same week in which the patient visited the doctor cannot simply be multiplied by a given constant in order to determine the total number of cases notified to doctors in that week (Fig. \ref{fig:Fig1}A).
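The rank correlations quoted above can be reproduced with a simple implementation of Kendall's $\tau$; the sketch below uses the tie-free $\tau_a$ form (equivalent to $\tau_b$ in the absence of ties) and short hypothetical weekly series rather than the actual data.

```python
def kendall_tau(x, y):
    """Kendall's tau-a rank correlation (no tie correction):
    (concordant pairs - discordant pairs) / (n*(n-1)/2)."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            d = (x[j] - x[i]) * (y[j] - y[i])
            if d > 0:
                s += 1      # concordant pair
            elif d < 0:
                s -= 1      # discordant pair
    return s / (n * (n - 1) / 2)

# Hypothetical weekly series: dengue case counts vs. search volume
cases = [120, 300, 900, 2400, 1100, 400, 150]
searches = [0.2, 0.5, 1.1, 2.9, 1.6, 0.6, 0.3]
tau = kendall_tau(cases, searches)
```

These toy series are perfectly rank-concordant, giving $\tau = 1$; the real weekly series yield the intermediate values reported above.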
\begin{figure}[!h]
\thisfloatpagestyle{empty}
\centering
\includegraphics[width = \linewidth]{figures/Fig1.pdf}
\caption{{\bf Dengue case count data compared to data from \textit{Google} and \textit{Twitter}.}
(A) In black, we depict official data on the total number of dengue cases recorded in official data for each week in Rio de Janeiro, from January 2012 until July 2016. The city frequently experiences dengue seasons during which thousands of people are infected. In red, we depict the total number of dengue cases known to the authorities by the end of each week. It is clear that only a very small fraction of dengue cases are entered into the database by the end of each week (see Fig. \ref{fig:Fig2} for further details). (B) We investigate whether rapidly available data on \textit{Google} searches relating to dengue can help improve our understanding of the number of dengue cases in the previous week. It can be seen that peaks in dengue related searches occur at roughly the same time as peaks in dengue cases. However, we note that the size of the peak in searches often does not directly correspond to the size of the peak in dengue cases. (C) We also examine the relationship between dengue case counts and the number of tweets in the city of Rio de Janeiro that express personal experience of dengue. Again, we see that peaks in tweets occur at roughly the same time as peaks in cases, but the relative size of the peaks does not always correspond. (D) Since 2015, the \textit{Zika} arbovirus has presented an additional risk in Rio de Janeiro, with considerable media coverage in some years. This disease is spread by the same mosquito as dengue, and also shares some symptoms. We therefore also investigate whether data on \textit{Google} searches relating to Zika might act as an additional potential signal for dengue incidence. (E) For similar reasons, we also consider data on \textit{Google} searches relating to the arbovirus \textit{chikungunya}. In Brazil, \textit{Google} data is made available via the \textit{Google Extended Trends API for Health} at state level and therefore relates to searches in the state of Rio de Janeiro.}
\label{fig:Fig1}
\end{figure}
\FloatBarrier
\begin{figure}[!h]
\thisfloatpagestyle{empty}
\centering
\adjustbox{valign=t}{\begin{minipage}[c]{0.55\linewidth}
\includegraphics[width = \linewidth]{figures/Fig2.pdf}
\end{minipage}}\hfill
\adjustbox{valign=t}{\begin{minipage}[c]{0.42\linewidth}
\caption{{\bf Delays in official data on dengue case counts.}
(A) We examine the true nature of the delays in the availability of official data on cases of dengue in Rio de Janeiro. We consider data from the 15\textsuperscript{th} epidemiological week of 2013 as an example. It can be seen that only a very small fraction of dengue cases have been entered into the surveillance system by the end of the week. Indeed, data relating to this week continues to arrive over a period of six months. Furthermore, by the end of the 15\textsuperscript{th} epidemiological week of 2013, data on dengue cases in the previous weeks is also severely incomplete. This creates problems for autoregressive methods that seek to use complete knowledge about previous weeks to compensate for delays in the arrival of data relating to the current week. (B) In contrast to official data on dengue cases, data on \textit{Google} searches in the 15\textsuperscript{th} epidemiological week of 2013 is available in full by the end of the week. The same applies to data on tweets posted on \textit{Twitter}. This opens up possibilities to use data on \textit{Google} searches and tweets relating to dengue to improve estimates of the number of dengue cases in a given week. (C) We further examine the rate with which dengue cases for a given week are added into the system. Here, we depict the empirical distribution of the delays in dengue case count entry over the whole time series. The blue line depicts the mean fraction of cases entered into the system after a given delay. The dark shading indicates 80\% of the empirical distribution of the fraction of cases notified after a given delay, and the light shading 95\% of the empirical distribution. It can be seen that there is a mean delay of 9 weeks before 95\% of dengue cases for a given week are entered into the system.}
\label{fig:Fig2}
\end{minipage}}
\end{figure}
\FloatBarrier
For the reasons outlined when introducing the \emph{Google Trends} data above, we also examine the volume of \emph{Google} searches for the topics of \emph{Zika} (Fig. \ref{fig:Fig1}D) and \emph{chikungunya} (Fig. \ref{fig:Fig1}E). We find a correlation between the number of dengue cases notified to doctors in a given week and \emph{Google} searches for both \emph{Zika} and \emph{chikungunya} in the same week, both when considering the whole period of analysis (\emph{Zika} searches: Kendall's $\tau = 0.127$, $N=238$, $p<0.01$; \emph{chikungunya} searches: Kendall's $\tau = -0.09$, $N=238$, $p<0.05$) and the period beginning in the 1\textsuperscript{st} epidemiological week in 2015, the year in which Zika and chikungunya became present in Rio de Janeiro (\emph{Zika} searches: Kendall's $\tau = 0.499$, $N=81$, $p<0.001$; \emph{chikungunya} searches: Kendall's $\tau = 0.526$, $N=81$, $p<0.001$).
\subsection*{Models}
We investigate whether rapidly available data on \emph{Google} searches and tweets relating to dengue or other arboviruses present in Rio de Janeiro can enhance weekly estimates of the number of cases of dengue in Rio reported to doctors in the previous week. Importantly, we carry out these investigations while taking into account the incremental delivery of dengue case count data described in the previous section. We therefore compare the following seven models.
\begin{description}
\item[Baseline.] We first consider a model developed by Bastos \emph{et al.} \cite{Bastos} that aims to infer the number of cases of dengue in the previous week using the delayed dengue case count alone. In simple terms, the model aims to estimate the number of cases of dengue that will be reported for each week with a given number of weeks delay. The approach therefore explicitly models the gradual delivery of information relating to dengue cases in a given week over the following weeks.
Formally, let $n_{t,\tau}$ be the number of cases that occurred in week $t$ and were reported in week $t+\tau$, thus with delay $\tau$.
We assume that $n_{t,\tau}$ follows a negative binomial distribution
\begin{equation*}
n_{t,\tau} \sim \mathcal{NB}(\lambda_{t,\tau},\phi), \quad t=0,1,2,\ldots \quad \tau=0,1,2,\ldots
\label{eqn:nttau}
\end{equation*}
which has the following form
\begin{equation*}
P(n_{t,\tau} = k) = \binom{\lambda_{t,\tau}+k-1}{k}(1-\phi)^{\lambda_{t,\tau}}\phi^k, \quad k=0,1,2,\ldots
\label{eqn:negbin}
\end{equation*}
where the mean $\lambda_{t,\tau}$ is given by
\begin{equation*}
\log{(\lambda_{t,\tau})} = \mu + \alpha_t + \beta_{\tau},
\label{eqn:lambdattau}
\end{equation*}
$\mu$ is a constant and $\alpha_t$ and $\beta_{\tau}$ are random effects with an autoregressive structure
\begin{equation*}
\begin{split}
\alpha_t & \sim \alpha_{t-1} + \mathcal{N}(0, \eta_{\alpha}),\\
\beta_{\tau} & \sim \beta_{\tau-1} + \mathcal{N}(0, \eta_{\beta}).
\end{split}
\label{eqn:atbtau}
\end{equation*}
Parameters are fit using the Integrated Nested Laplace Approximation (INLA) method \cite{rueApproximateBayesianInference2009}.
Values of $n_{t,\tau}$ are estimated using sampling. The total number of cases at week $t$ is then given by
\begin{equation*}
n_{t} = \sum_{\tau} n_{t,\tau}.
\label{eqn:nttot}
\end{equation*}
We use the first twenty weeks of data in 2012 for training only, and begin generating estimates in epidemiological week 21 in 2012, which began on Sunday 20th May 2012. The model is fit to the data again every week, using all data available from the start of 2012 until week $t$. For efficiency, in fitting the model we discard all cases for which entry of the case into the surveillance system was delayed for over 26 weeks (i.e., half a year). We then set the maximum value of $\tau$ -- the number of weeks for which system entry was delayed -- to the number of weeks delay required to include 95\% of the remaining cases in training, or 8 weeks if this is greater. Remaining cases with a longer delay are omitted from training. The same approach is used for all of the following models, apart from the naive model.
\item[Google (Dengue).] This model is the same as the baseline model, with data on \emph{Google} searches related to the topic of \emph{dengue} added as an external regressor.
The mean $\lambda_{t,\tau}$ is now calculated as
\begin{equation*}
\log{(\lambda_{t,\tau})} = \mu + \alpha_t + \beta_{\tau} + \gamma^d\log{(G^{d}_t)},
\label{eqn:lambdattau.dengue}
\end{equation*}
where $G^{d}_t$ is the volume of \emph{Google} searches related to \emph{dengue} in week $t$ and $\gamma^d$ is a regression coefficient.
\item[Twitter.] This model is the same as the baseline model, with data on the volume of tweets that express personal experience of dengue added as an external regressor. The mean $\lambda_{t,\tau}$ is now calculated as
\begin{equation*}
\log{(\lambda_{t,\tau})} = \mu + \alpha_t + \beta_{\tau} + \delta\log{(T_t)},
\label{eqn:lambdattau.twitter}
\end{equation*}
where $T_t$ is the volume of \emph{Twitter} posts in week $t$ and $\delta$ is a regression coefficient.
\item[Google (Dengue) + Twitter.] This model is the same as the baseline model, with data on \emph{Google} searches related to the topic of \emph{dengue} and the volume of tweets that express personal experience of dengue added as external regressors.
The mean $\lambda_{t,\tau}$ is now calculated as
\begin{equation*}
\log{(\lambda_{t,\tau})} = \mu + \alpha_t + \beta_{\tau} + \gamma^d\log{(G^{d}_t)} + \delta\log{(T_t)}.
\label{eqn:lambdattau.both}
\end{equation*}
\item[Google (all diseases).] This model is the same as the baseline model, with data on \emph{Google} searches related to the topics of \emph{dengue}, \emph{Zika} and \emph{chikungunya} added as external regressors.
The mean $\lambda_{t,\tau}$ is now calculated as
\begin{equation*}
\log{(\lambda_{t,\tau})} = \mu + \alpha_t + \beta_{\tau} + \gamma^d\log{(G^{d}_t)} + \gamma^z\log{(G^{z}_t)} + \gamma^c\log{(G^{c}_t)},
\label{eqn:lambdattau.alldiseases}
\end{equation*}
where $G^{z}_t$ and $G^{c}_t$ are the volumes of \emph{Google} searches in week $t$ related to \emph{Zika} and \emph{chikungunya}, and $\gamma^z$ and $\gamma^c$ are regression coefficients.
\item[Google (all diseases) + Twitter.] This model is the same as the baseline model, with data on \emph{Google} searches related to the topics of \emph{dengue}, \emph{Zika} and \emph{chikungunya} and the volume of tweets that express personal experience of dengue added as external regressors.
The mean $\lambda_{t,\tau}$ is now calculated as
\begin{equation*}
\log{(\lambda_{t,\tau})} = \mu + \alpha_t + \beta_{\tau} +
\gamma^d\log{(G^{d}_t)} + \gamma^z\log{(G^{z}_t)} + \gamma^c\log{(G^{c}_t)} + \delta\log{(T_t)}.
\label{eqn:lambdattau.alldiseasestw}
\end{equation*}
\item[Naive.] Following Yang et al. \cite{Yang2017}, this model uses the number of known cases for the previous week as the estimate of the number of dengue cases for the current week.
\end{description}
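To make the sampling step concrete, the Python sketch below illustrates (outside the INLA machinery) how a nowcast of $n_t$ can be assembled once the model is fitted: counts already reported for past delays are kept as observed, and counts for the delays not yet observed are drawn from the negative binomial above via its gamma--Poisson mixture representation. The $\lambda$ values and $\phi$ used here are hypothetical stand-ins for fitted parameters.

```python
import math
import random

def sample_negbin(rng, lam, phi):
    """Draw from the negative binomial with pmf
    C(lam+k-1, k) * (1-phi)^lam * phi^k (mean lam*phi/(1-phi)),
    via the gamma--Poisson mixture: N ~ Poisson(Gamma(lam, phi/(1-phi)))."""
    rate = rng.gammavariate(lam, phi / (1 - phi))
    # Knuth's Poisson sampler (adequate for moderate rates)
    limit, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def nowcast_total(rng, observed, lam_future, phi, n_samples=2000):
    """Nowcast of n_t: the sum of counts already reported for past
    delays plus sampled counts for the not-yet-observed delays.
    Returns the mean of the sampled totals."""
    base = sum(observed)
    totals = [base + sum(sample_negbin(rng, lam, phi) for lam in lam_future)
              for _ in range(n_samples)]
    return sum(totals) / n_samples

rng = random.Random(42)
# Hypothetical week: 310 cases already in the system (delays 0 and 1),
# two outstanding delay bins, each with fitted mean 50 (lam=50, phi=0.5)
est = nowcast_total(rng, observed=[200, 110], lam_future=[50.0, 50.0], phi=0.5)
```

In the actual model the samples come from the joint posterior fitted with INLA, so that the uncertainty in $\mu$, $\alpha_t$, $\beta_\tau$ and $\phi$ also propagates into the prediction intervals; retaining the full set of sampled totals, rather than only their mean, is what yields those intervals.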
\subsection*{Evaluation of results}
We investigate two elements of model performance: accuracy and precision.
To evaluate accuracy, we consider the size of the \emph{prediction errors} generated by a model; that is, the difference between the number of dengue cases estimated by the model for a given week and the true number of cases in that week. A more accurate model would produce smaller prediction errors. To compare the size of prediction errors generated by different models, we calculate the \textit{mean absolute error} (MAE). This error metric is easy to interpret, as it is measured in numbers of dengue cases.
In Fig. \ref{S1_Fig}, we discuss choice of error metric further and consider alternative metrics to the MAE.
To evaluate precision, we consider the size of the 95\% \emph{prediction intervals} generated by a model; that is, the size of the range of values within which the model estimates that there is a 95\% probability that the true number of dengue cases falls. A more precise model would generate smaller prediction intervals (assuming that 95\% of the true data points do fall within these prediction intervals, which we verify). To compare the size of prediction intervals generated by different models, we calculate the \textit{mean prediction interval} (MPI). We define the MPI as the mean width of the 95\% prediction interval for all estimates generated.
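Both metrics are straightforward to compute; the following sketch implements the MAE and the MPI exactly as defined above, using hypothetical weekly truths, point estimates and 95\% prediction intervals.

```python
def mean_absolute_error(truth, estimates):
    """MAE, measured in numbers of dengue cases."""
    return sum(abs(t - e) for t, e in zip(truth, estimates)) / len(truth)

def mean_prediction_interval(lowers, uppers):
    """MPI: the mean width of the 95% prediction intervals."""
    return sum(u - l for l, u in zip(lowers, uppers)) / len(lowers)

# Hypothetical weekly case counts, nowcasts and 95% intervals
truth = [800, 1200, 600]
est = [750, 1300, 650]
lo = [600, 1000, 480]
hi = [950, 1650, 820]
mae = mean_absolute_error(truth, est)
mpi = mean_prediction_interval(lo, hi)
```

A relative MAE of the kind used later in the paper is then simply the MAE of a model divided by the MAE of the naive model on the same weeks.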
The dengue case count time series is characterised by a sequence of peaks and troughs. The error metrics we outline here will be affected by the model's performance during both peaks and troughs. However, accurate, precise information may be of most use to policymakers during epidemics when case counts are high. We therefore carry out sub-analyses in which we focus specifically on model accuracy and precision during periods of epidemics. To identify periods of epidemics, we apply the \textit{Moving Epidemic Method} (MEM \cite{Vega2013}) to historic data for Rio de Janeiro. This is a method which can be used to determine the minimum number of dengue cases per week that would be expected during epidemics. By applying this methodology to the official dengue case count data, we obtain an epidemic threshold of 550 dengue cases per week. Weekly counts below this threshold are considered inter-epidemic activity.
By adding online data streams to the models we consider, we introduce extra parameters into the models, potentially increasing the danger of overfitting. In our main error metric analyses, we test our models out-of-sample, thereby guarding against such overfitting as well as mimicking operational implementation. However, when evaluating the models, we also consider a further metric of model quality, the \textit{Watanabe-Akaike information criterion} (WAIC). This model quality metric rewards goodness of fit but explicitly penalises models for the presence of additional parameters.
\section*{Results}
Following Yang et al. \cite{Yang2017}, we begin by comparing the accuracy of all models proposed to the accuracy of the naive model. Again, the naive model uses the known case count for the previous week as the estimate for the case count in the current week. To evaluate model accuracy, we calculate the \textit{mean absolute error} (MAE) for each model. To facilitate comparison of the models, we also calculate the \textit{relative MAE} for each model \cite{Reich2016a}. We define the relative MAE as the MAE of a given model divided by the MAE of the naive model. The relative MAE of the naive model is therefore 1.
Table \ref{tab:maes.naive} shows that the naive model is vastly outperformed by all other models. The MAE for all other models is at least 37\% smaller than the MAE of the naive model. The best performing model is the \textit{Google (Dengue) + Twitter} model, for which the relative MAE is 0.502. As the performance of the naive model is considerably worse than that of all other models, we disregard it in further analyses.
\begin{table}[!ht]
\centering
\caption{{\bf Accuracy of all dengue nowcasting models compared to a naive model.} Following Yang et al. \cite{Yang2017}, we compare the accuracy of the naive model to all other models. We define the relative mean absolute error (relative MAE) as the MAE of a given model divided by the MAE of the naive model. The relative MAE of the naive model is therefore 1. We find that the naive model is vastly outperformed by all other models. Note that the baseline model is a more advanced model than the naive model, and is explicitly designed to account for the incremental delivery of the dengue case count data \cite{Bastos}. All models other than the naive model build on the baseline model. The best performing model is the \textit{Google (Dengue) and Twitter} model (bold), which exhibits an MAE 49.8\% smaller than that of the naive model.}
\begin{tabular}{lcc}
\toprule
Model & MAE & relative MAE \\
\midrule
Baseline & 267.2 & 0.629 \\
\emph{Google (Dengue)} & 215.4 & 0.507\\
\emph{Twitter} & 223.3 & 0.525 \\
\textbf{\emph{Google (Dengue)} + \emph{Twitter}} & \textbf{213.3} & \textbf{0.502} \\
\emph{Google (all diseases)} & 218.8 & 0.515\\
\emph{Google (all diseases)} + \emph{Twitter} & 213.7 & 0.503\\
Naive & 425.0 & 1 \\
\bottomrule
\end{tabular}
\label{tab:maes.naive}
\end{table}
For the remainder of our analyses, we focus on comparing the models that use \textit{Google} and \textit{Twitter} data to the baseline model. We redefine the relative MAE as the MAE of a given model divided by the MAE of the baseline model. The relative MAE of the baseline model is therefore 1.
Table \ref{tab:maes} shows that all the models enhanced with online data from either \textit{Google} or \textit{Twitter} outperform the baseline model. Across the full time period analysed, the baseline model exhibits an MAE of 267.2 cases. The model enhanced with data on tweets relating to dengue exhibits an MAE 16.4\% smaller than the baseline model, at 223.3 cases. The model enhanced with data on \textit{Google} searches relating to dengue exhibits an MAE 19.4\% smaller than the baseline model, at 215.4 cases. As was already seen in Table \ref{tab:maes.naive} however, the best performing model is the \emph{Google (Dengue) + Twitter} model, which draws on data on both \textit{Google} searches and tweets relating to dengue in tandem. This model exhibits an MAE of 213.3 cases, 20.2\% smaller than that of the baseline model (Fig. \ref{fig:Fig3}B).
\begin{table}[!ht]
\centering
\caption{{\bf Accuracy of dengue nowcasting models using \textit{Google} and \textit{Twitter} data compared to the baseline model.} We redefine the relative mean absolute error (relative MAE) as the MAE of a given model divided by the MAE of the baseline model. The relative MAE of the baseline model is therefore 1. We find that all the models using online data outperform the baseline model. The best performing model is the \emph{Google (Dengue) + Twitter} model (bold), which exhibits an MAE 20.2\% smaller than that of the baseline model.}
\begin{tabular}{lcc}
\toprule
Model & MAE & relative MAE \\
\midrule
Baseline & 267.2 & 1 \\
\emph{Google (Dengue)} & 215.4 & 0.806 \\
\emph{Twitter} & 223.3 & 0.836 \\
\textbf{\emph{Google (Dengue) + Twitter}} & \textbf{213.3} & \textbf{0.798} \\
\emph{Google (all diseases)}& 218.8 & 0.819 \\
\emph{Google (all diseases)} + \emph{Twitter} & 213.7 & 0.800 \\
\bottomrule
\end{tabular}
\label{tab:maes}
\end{table}
\begin{figure}[ht!]
\vspace{-3.5cm}
\thisfloatpagestyle{empty}
\centering
\includegraphics[width = \linewidth]{figures/Fig3.pdf}
\caption{{\bf Improving accuracy and reducing uncertainty for dengue case count estimates with \textit{Google} and \textit{Twitter}.}
(A) We compare the performance of the baseline model with a model drawing on data from \textit{Google} and \textit{Twitter}. In black, we depict official data on the total number of dengue cases recorded for each week in Rio de Janeiro, from January 2012 until July 2016. In green, we depict the total number of dengue cases known to the authorities by the end of each week, which constitute a very small fraction of the total cases. In red, we depict estimates of the number of dengue cases generated by the baseline model for each week at the end of the corresponding week. The baseline model uses the official dengue case count data only and was designed to explicitly take into account the nature of the delays in the dengue data \cite{Bastos}, going beyond standard autoregressive approaches. It is clear that this model generally succeeds in capturing the timing and magnitude of the peaks. In blue, we depict estimates of the number of dengue cases generated by the \textit{Google (Dengue) + Twitter} model. It can be seen that the estimates enriched with \textit{Google} and \textit{Twitter} are often even closer to the final weekly dengue case count, in particular during the large peaks in case counts in 2012 and 2013. The blue shaded areas represent the 80\% (dark blue) and 95\% (light blue) prediction intervals for the \textit{Google (Dengue) + Twitter} model. (B) We compare the weekly absolute error for the baseline model and the \textit{Google (Dengue) + Twitter} model. While the mean absolute error (MAE) for the baseline model is 267.2 dengue cases per week, the MAE for the \textit{Google (Dengue) + Twitter} model is lower, at 213.3 dengue cases per week. The \textit{Google (Dengue) + Twitter} model is therefore more accurate. (C) An ideal model for estimating dengue case counts would produce accurate estimates with low uncertainty. 
To evaluate the level of uncertainty in the estimates produced by each model, we examine the \textit{relative mean prediction interval} (rMPI) for each model. We define the \textit{mean prediction interval} (MPI) as the mean width of the 95\% prediction interval for the full period for which estimates are generated. We define the rMPI as the MPI for the model divided by the MPI for the baseline model. The rMPI for the baseline model is therefore 1, whereas the rMPI for the \textit{Google (Dengue) + Twitter} model is lower at 0.899. The \textit{Google (Dengue) + Twitter} model therefore also generates more precise estimates.}
\label{fig:Fig3}
\end{figure}
The accuracy of estimates generated by the models which additionally draw on data on \textit{Google} searches relating to Zika and chikungunya is similar, with the \emph{Google (all diseases) + Twitter} model exhibiting an MAE of 213.7 cases, 20.0\% smaller than that of the baseline model. Overall, it therefore does not appear that integrating this extra \textit{Google} data relating to other arboviruses present in Rio de Janeiro improves accuracy of estimates of dengue incidence.
The performance of the models during epidemics is of particular importance. We therefore examine whether the estimates generated by the \textit{Google (Dengue) + Twitter} model are more accurate when considering periods of epidemics alone. Using the \textit{Moving Epidemic Method} (MEM \cite{Vega2013}), we determine the epidemic threshold for Rio de Janeiro to be 550 dengue cases per week. For each week in which the final number of notified dengue cases was 550 or over, we calculate the absolute error of the estimates generated by the baseline model and the \textit{Google (Dengue) + Twitter} model. We find that during epidemics, the baseline model exhibits an MAE of 774.8 cases. In contrast, the \textit{Google (Dengue) + Twitter} model exhibits an MAE of 596.0 cases, 23.1\% lower than the baseline model (Fig. \ref{fig:Fig4}A).
\FloatBarrier
\begin{figure}[ht!]
\thisfloatpagestyle{empty}
\centering
\includegraphics[width = \linewidth]{figures/Fig4.pdf}
\caption{{\bf Further analyses of the quality of dengue nowcasting models including \textit{Google} and \textit{Twitter} data.} (A) The performance of the models during epidemics is of particular importance. Using the \textit{Moving Epidemic Method} (MEM \cite{Vega2013}), we determine the epidemic threshold for Rio de Janeiro to be 550 dengue cases per week. For each week in which the final number of notified dengue cases was 550 or over, we determine the absolute error of the estimates generated by the baseline model and the \textit{Google (Dengue) + Twitter} model, and plot the distribution using a kernel density estimate. We find that the \textit{mean absolute error} (MAE) for the \textit{Google (Dengue) + Twitter} model (596.0 dengue cases per week; blue) is again considerably lower than the MAE for the baseline model (774.8 dengue cases per week; red). (B) In addition to evaluating the accuracy and precision of out-of-sample estimates generated by the models, here we examine a further metric of model quality, the \textit{Watanabe-Akaike information criterion} (WAIC). The WAIC rewards goodness of fit but explicitly penalises models for the presence of additional parameters, such as data on \textit{Google} searches or tweets. We evaluate the quality of all six models explored in our main analysis: the baseline model (red), the \textit{Google (Dengue)} model (purple), the \textit{Twitter} model (green), the \textit{Google (Dengue) + Twitter} model (blue), the \textit{Google (all diseases)} model (orange) and the \textit{Google (all diseases) + Twitter} model (pink). As the model is fit each week when new data arrives, we calculate a WAIC value for each of the six models for every week. To facilitate comparison of these weekly WAIC values, for each week we normalise the six WAIC values by the WAIC for the baseline model. The resulting value for the baseline model is therefore always 1 (red line). A lower WAIC indicates a higher quality model. 
It can be observed that the models enhanced by online data generally exhibit lower WAIC values than the baseline model. We note that, again, the \textit{Google (Dengue) + Twitter} model (blue) performs particularly well.}
\label{fig:Fig4}
\end{figure}
The inclusion of extra parameters in a model, such as data on \textit{Google} searches or tweets, increases the likelihood of overfitting. While the analyses detailed so far have considered estimates generated out-of-sample, thereby guarding against this danger, we also calculate the \textit{Watanabe-Akaike information criterion} (WAIC) model quality metric for each of our six models. The WAIC rewards goodness of fit whilst penalising models for the inclusion of extra parameters. As the model is fit each week when new data arrives, we calculate a WAIC value for each of the six models for every week.
Fig. \ref{fig:Fig4}B depicts the weekly WAIC values for all six models, relative to the baseline model.
A lower WAIC value indicates a higher quality model. We find that models enhanced by online data generally exhibit lower WAIC values than the baseline model. In most weeks, the lowest WAIC is again obtained by the \emph{Google (Dengue) + Twitter} model, which draws on data on both \textit{Google} searches and tweets relating to dengue in tandem.
An ideal model for estimating current dengue case counts would not only produce accurate estimates, but would also produce precise estimates, where uncertainty about the true value was low. We therefore examine whether dengue nowcasting models enhanced by online data generate estimates that are more precise, as well as more accurate. To evaluate the precision of estimates produced by each model, we calculate the \textit{mean prediction interval} (MPI), the mean width of the 95\% prediction interval for all estimates generated. To facilitate comparison to the performance of the baseline model, we also calculate the \textit{relative MPI} (rMPI), which we define as the MPI for a given model divided by the MPI for the baseline model. The rMPI for the baseline model is therefore 1.
Table \ref{tab:shrinkings} shows that the rMPI for all models enhanced by online data is lower than 1. This indicates that the estimates generated by the models enhanced by online data are more precise than those generated by the baseline model. The \textit{Twitter} model is the most precise model, exhibiting an MPI which is 11.1\% lower than the MPI of the baseline model. The \textit{Google (Dengue)} model, drawing on data on \textit{Google} searches relating to dengue, achieves a smaller but still notable improvement of 8.8\%. The combined \textit{Google (Dengue) + Twitter} model, which produced the most accurate estimates, generates the second most precise estimates, with an MPI 10.1\% lower than the MPI of the baseline model (Fig. \ref{fig:Fig3}C).
\begin{table}[!ht]
\caption{{\bf Precision of dengue nowcasting models using \textit{Google} and \textit{Twitter} data compared to the baseline model.} We define the mean prediction interval (MPI) as the mean width of the 95\% prediction interval for all estimates generated. The MPI for the baseline model is given in parentheses. We define the relative mean prediction interval (rMPI) as the MPI for the model divided by the MPI for the baseline model. The rMPI for the baseline model is therefore 1. We find that models using online data generate more precise estimates, reflected by lower rMPIs. The most precise model is the \textit{Twitter} model (bold), followed by the \textit{Google (Dengue) + Twitter} model. We also verify that the 95\% prediction intervals reliably represent the range within which 95\% of true data points fall. We find that whether considering all weeks, weeks with 550 or more cases (i.e., during epidemics) or weeks with fewer than 550 cases (i.e., outside epidemics), the 95\% prediction intervals appear to behave as desired.}
\centering
\begin{tabular}{lcccc}
\toprule
& relative Mean &\multicolumn{3}{c}{Percentage points within} \\
Model & Prediction Interval &\multicolumn{3}{c}{95\% prediction interval } \\
\cmidrule{3-5}
&& all & $\geq550$ & $<550$ \\
\midrule
Baseline & 1 (1554.6) & 95.0 & 93.7 & 95.5 \\
\emph{Google (Dengue)} & 0.912 & 94.5 & 93.7 & 94.8 \\
\emph{Twitter} & \textbf{0.889} & 95.4 & 96.9 & 94.8 \\
\emph{Google (Dengue)} + \emph{Twitter} & 0.899 & 94.5 & 95.3 & 94.2 \\
\emph{Google (all diseases)} & 0.938 & 95.4 & 96.9 & 94.8\\
\emph{Google (all diseases)} + \emph{Twitter} & 0.901 & 95.4 & 95.3 & 95.5 \\
\bottomrule
\end{tabular}
\label{tab:shrinkings}
\end{table}
The precision of estimates generated by models which additionally draw on data on \textit{Google} searches relating to Zika and chikungunya is again similar, with the \emph{Google (all diseases) + Twitter} model exhibiting an MPI 9.9\% lower than the MPI of the baseline model. It therefore does not appear that integrating this extra \textit{Google} data relating to other arboviruses present in Rio de Janeiro improves the precision of estimates of dengue incidence.
We verify whether the 95\% prediction intervals continue to reliably represent the range within which 95\% of true data points fall. Table \ref{tab:shrinkings} demonstrates that whether considering all weeks, weeks with 550 or more cases (i.e., during epidemics) or weeks with fewer than 550 cases (i.e., outside epidemics), the 95\% prediction intervals appear to behave as desired. In other words, this 10\% improvement in the precision of estimates does not come at the cost of the reliability of the prediction intervals.
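This reliability check is an empirical coverage calculation: the fraction of true weekly counts that fall inside the reported intervals. A sketch with toy data (variable names are ours), splitting by the 550-case epidemic threshold:

```python
import numpy as np

def coverage(observed, lower_95, upper_95):
    """Fraction of true weekly counts falling inside the 95% interval."""
    observed = np.asarray(observed)
    inside = (observed >= np.asarray(lower_95)) & (observed <= np.asarray(upper_95))
    return inside.mean()

# Invented weekly counts and interval bounds, split at the epidemic threshold
obs = np.array([300, 700, 1200, 400])
lo  = np.array([200, 500, 1000, 350])
hi  = np.array([450, 900, 1100, 500])
epi = obs >= 550
print(coverage(obs, lo, hi))                 # overall: 0.75 (1200 falls outside)
print(coverage(obs[epi], lo[epi], hi[epi]))  # epidemic weeks only: 0.5
```

For well-calibrated 95\% intervals, these fractions should remain close to 0.95 in all three splits, as Table \ref{tab:shrinkings} reports.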
The characteristics of the dengue season in Rio de Janeiro vary from year to year. In some years, over 5\,000 cases a week are reported at the height of the season, whereas in other years, the case count is much lower (Fig. \ref{fig:Fig1}). In addition, previous research has highlighted that the relationship between online data and case counts may vary across time \cite{Preis2014}. We therefore investigate whether the use of online data helps deliver more accurate estimates of dengue incidence in Rio de Janeiro in each of the years covered in our analysis.
In Table \ref{tab:maes.years}, we report the relative MAE for each model for each year of analysis. We note that statistics for 2012 and 2016 are based on incomplete years, as the analyses begin in Week 21 of 2012 and end in Week 29 of 2016. We find that in 2012, 2013, 2014 and 2015, the accuracy of all models using online data is greater than the accuracy of the baseline model. Using the \emph{Google (Dengue) + Twitter} model, the MAE is reduced by between 11\% and 32\%.
\begin{table}[!ht]
\caption{{\bf Evaluating the accuracy of dengue nowcasting models using \textit{Google} and \textit{Twitter} across different years.} For each year, we define the relative mean absolute error (relative MAE) as the MAE of a given model divided by the MAE of the baseline model. The MAE is given in parentheses. In bold, we highlight the lowest relative MAE for each year. We find that in 2012, 2013, 2014 and 2015, the accuracy of all models using online data is greater than the accuracy of the baseline model. In 2016, we find that the baseline model delivers the most accurate estimates. However, Fig. \ref{fig:Fig3}A shows that in 2016, the performance of the baseline model itself is notably worse than in previous years. We discuss the particular circumstances of 2016 in more detail in the text.}
\begin{adjustwidth}{-2.25in}{0in}
\flushright
\begin{tabular}{llllll}
\toprule
&\multicolumn{5}{c}{Relative mean absolute error}\\
\cmidrule{2-6}
Model & 2012 & 2013 & 2014 & 2015 & 2016 \\
\midrule
Baseline & 1 (678.4) & 1 (354.3) & 1 (19.0) & 1 (123.0) & \textbf{1 (369.1)} \\
\emph{Google (Dengue)} & 0.69 & 0.76 & 0.98 & 0.93 & 1.03 \\
\emph{Twitter} & 0.74 & 0.78 & 0.91 & 0.96 & 1.03 \\
\emph{Google (Dengue)} + \emph{Twitter} & 0.68 & \textbf{0.74} & \textbf{0.86} & \textbf{0.89} & 1.08 \\
\emph{Google (all diseases)} & 0.73 & 0.77 & 0.96 & 0.90 & 1.02 \\
\emph{Google (all diseases)} + \emph{Twitter} & \textbf{0.65} & 0.80 & 0.87 & 0.92 & 1.02 \\
\bottomrule
\end{tabular}
\end{adjustwidth}
\label{tab:maes.years}
\end{table}
In 2016, however, we find that the baseline model delivers the most accurate estimates, and that the MAE of estimates generated by the \emph{Google (Dengue) + Twitter} model is 8\% higher. At the same time, we note that the MAE for the baseline model in 2016 (369.1 cases per week) is relatively high given the size of the peak. For example, the MAE for the baseline model in 2013 was similar, at 354.3 cases per week, but the peak number of dengue cases per week in 2013 was 6\,430, in comparison to a peak of 2\,973 cases per week in 2016. This diminished performance in 2016 can also be seen in Fig. \ref{fig:Fig3}A.
Why might we observe differing results for 2016 in comparison to earlier years? A potential answer can be found by examining the nature of the delays in the entry of dengue cases into the surveillance system around this period. Fig. \ref{S3_Fig} illustrates that from January 2012 to May 2015, there was a mean delay of 4.9 weeks until 80\% of dengue cases for a given week were entered into the surveillance system, with a standard deviation of 1.5 weeks. From June 2015 to December 2015, however, this delay was notably reduced, to a mean of 2 weeks.
From January 2016 to the end of the dataset in July 2016, the mean delay until 80\% of dengue cases for a given week were entered into the surveillance system increased again, to 4.6 weeks.
This abnormally large variation in delays may have made it particularly difficult for the baseline model to correctly model the delay structure, leading to a higher baseline MAE for 2016.
It is also worth noting that there was a Zika outbreak in Brazil during the 2016 dengue season. Zika is not only spread by the same mosquito as dengue, but also shares some symptoms. Difficulty in discerning the symptoms of dengue from the symptoms of Zika before a laboratory analysis has taken place will have led to some cases of dengue being recorded as suspected cases of Zika, and vice versa. The Zika outbreak was also covered widely in the media, and it is possible that people with dengue may have searched for information relating to Zika instead. Fig. \ref{fig:Fig1}D shows that there was a surge in searches relating to Zika in 2016, and Fig. \ref{fig:Fig1}E shows that a similar surge occurred for searches relating to a further arbovirus present in Rio de Janeiro, chikungunya. Indeed, Table \ref{tab:maes.years} shows that for 2016, the best performing models using online data are the \emph{Google (all diseases)} model and the \emph{Google (all diseases) + Twitter} model, both of which additionally draw on data on \textit{Google} searches relating to Zika and chikungunya. However, both models still generate estimates with errors which were 2\% greater than the errors generated by the baseline model.
\section*{Discussion}
Here, we investigate whether data on \emph{Google} searches and \emph{Twitter} posts relating to dengue can be used to improve nowcasts of dengue case counts, when official case count data is not only delayed but also released incrementally, as is frequently the case. Using Rio de Janeiro in Brazil as a case study, we present analyses which show that by drawing on \emph{Google} and \emph{Twitter} data in parallel, weekly estimates of the current number of dengue case counts can be made both more accurate and more precise than estimates that use historic official data alone. The explicit modelling of the true incremental delivery of the case count data means that this approach can be used in practice, with no need to aggregate data up to a coarser temporal granularity such as months. Our results also illustrate the potential value of considering multiple online data streams in parallel, instead of focusing on the relationship between case count data and one online data stream alone.
The only year in which we find that online data does not improve estimates is 2016, when there was also a Zika outbreak in Rio de Janeiro. As Zika and dengue share symptoms, it is possible that people were searching for information about one disease when they were suffering from the other. Future work could look to build a combined model of the incidence of the arboviruses dengue, Zika and chikungunya, to better exploit the relationships between the three diseases that exist in both case count and online data. An extended model could also look to draw on other rapidly available data sources, such as weather data \cite{Luz2008,Hii2012,Ramadona2016}. The framework described here has been developed for use in the \emph{InfoDengue} surveillance system, used in hundreds of Brazilian cities \cite{codeco2016}. Extensions of this work could also verify whether this online data approach would benefit other cities and countries too.
Dengue is a global burden, and a lack of timely data on case counts leaves policymakers without the information they need to intervene early in an outbreak. We hope that careful development of analysis frameworks to exploit rapidly available alternative data sources, integrated into surveillance systems such as \emph{InfoDengue}, will help mitigate this problem.
\section*{Acknowledgements}
{\small
GM acknowledges EPSRC grant EP/L015374/1. TP and HSM were supported by Research Councils UK grant EP/K039830/1, the University of Warwick Brazil Partnership Fund, and The Alan Turing Institute under the EPSRC grant EP/N510129/1 (awards TU/B/000006 and TU/B/000008). GM, TP and HSM are also grateful for support provided by the University of Warwick GRP Behavioural Science. LSB acknowledges CAPES grant 88881.068124/2014-01 and FAPERJ E-26/201.277/2021. CTC acknowledges CNPq grant 305553/2014-3 and InfoDengue support from the SVS/Brazilian Ministry of Health. The authors are grateful to the Secretaria Municipal de Sa\'{u}de do Rio de Janeiro for providing access to the data on dengue cases and to the Observat\'{o}rio da Dengue (UFMG) for the data on the volume of tweets related to dengue.}
\section{Introduction}
The ultimate fate of a close binary composed of a neutron star and a
black hole was first discussed (\cite{wheel}) shortly after the
discovery of neutron stars. It has been pointed out that the
coalescence of such a binary would make a promising site for the
r--process nucleosynthesis (\cite{latt}); the same authors already
suggested that the coalescence may give rise to a gamma--ray burst
(GRB), but the correctly estimated event rate was thought to be too
low in the then prevailing paradigm of Galactic sources for GRBs. As
discussed in the next section, recent observations led to a revived
interest in black hole--neutron star binaries as sources of GRBs.
A seemingly separate problem is that of the fate of a neutron star
with mass below the stability limit
(e.g. \cite{page},~\cite{sumiyoshi}). It has been thought that such stars
would undergo a violent explosion, but no reliable production sites had been identified.
In this Letter we report on our Newtonian simulations of the final
stages of evolution of a black hole--neutron star
binary. Surprisingly, our results suggest a possible unification of
the disparate paths of investigation mentioned above.
\section{Black hole--neutron star coalescence as a potential source of GRBs}
The properties of dim optical transients
(\cite{vanP},~\cite{djorg},~\cite{metzger}) associated with gamma-ray
bursts (GRBs) reinforce the view (\cite{bp},~\cite{meeg}), hitherto
held on statistical grounds, that the sources of the observed GRBs are
not located in the Galaxy or the nearby clusters of galaxies. All
facts are consistent with a ``cosmological'' origin of GRBs
(\cite{fish}). In fact, the isotropy of GRBs and the distribution of
their peak flux favour a typical distance between $\sim100\,$Mpc and
$\sim1\,$Gpc to the closest sources of the observed GRBs. The reported
redshift (\cite{metzger}) of $z=0.8$ to the optical counterpart of
GRB970508 should settle the issue of the intrinsic luminosity of the
GRB sources. A distance of $\sim1\,$Gpc implies that up to $10^{51}$
ergs must be released in gamma rays to account for the observed
fluences of $\sim10^{-7.5}$ to $\sim10^{-3}$erg/cm$^2$. All models
(\cite{colg},~\cite{bpap},
~\cite{eich},~\cite{bp},~\cite{usov},~\cite{mesz},~\cite{woos})
involve the birth or death of a neutron star or a star like it.
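The energy requirement quoted above follows from the isotropic-equivalent relation $E = 4\pi d^{2} F$ for a burst of fluence $F$ at distance $d$. The following back-of-the-envelope sketch (our own, with no beaming or cosmological corrections, which this order-of-magnitude argument ignores) illustrates the scale:

```python
import math

GPC_IN_CM = 3.086e27  # one gigaparsec in centimetres

def isotropic_energy(fluence_erg_cm2, distance_cm):
    """Isotropic-equivalent energy E = 4*pi*d^2*F (no beaming correction)."""
    return 4.0 * math.pi * distance_cm**2 * fluence_erg_cm2

# A fluence of 1e-5 erg/cm^2, mid-range of the observed 1e-7.5 to 1e-3,
# from a source at 1 Gpc corresponds to roughly 1.2e51 erg.
print(isotropic_energy(1e-5, GPC_IN_CM))   # about 1.2e51
```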
To be efficiently converted to observed gamma rays, the energy
released in the primary event must have a line of sight to the
observer which is sufficiently baryon-free to allow a relativistic
blast wave (\cite{bpap},~\cite{mesz},~\cite{rees}) to expand at
velocities close to the speed of light. It has been argued
(\cite{rys}) that the interaction of such relativistic outflow with
the interstellar medium will result in shock acceleration of electrons
and amplification of magnetic fields, yielding significant emission of
gamma rays through synchrotron radiation. The expected afterglow
(\cite{viet}) may be consistent with the X-ray and optical transients
detected by the Beppo-SAX satellite and follow-up observations
(\cite{vanP}). We are looking, then, for a process which would
release a sufficient amount of energy in a baryon-free direction, and
one whose characteristic timescales correspond to the variability and
durations of the observed GRBs (in the shocked fireball model the GRB
timescales must arise at the source (\cite{piran})).
A sufficiently small baryon loading of the plasma is obtained
(\cite{pa}) in a natural way in the mergers of two strange stars,
because the strange-quark matter making up their bulk is self-bound
and hence immune to lofting by radiation. But the disruption of a
strange star would pollute the Galactic environment with strange-quark
nuggets which would preclude (\cite{cald}) the further formation of
young pulsars (neutron stars). Thus, the merger of a strange star with
anything else is excluded as a source of GRBs (\cite{wk}).
The most conservative scenario of GRB formation involves the
coalescence of a binary system composed of two neutron stars
(\cite{bpap},~\cite{eich}). These events are certain to occur and a
satisfactory lower limit to their rate can reliably be inferred
(\cite{latt},~\cite{nara}), e.g. from the statistics of the known
Hulse-Taylor type neutron star binaries. There is disagreement as to
the outcome of the last stages of evolution of such
binaries. Newtonian simulations give an insufficient neutrino
luminosity to power a GRB (\cite{ruff}) while general relativistic
calculations indicate no blast wave will be formed, although a GRB
with a smooth time profile is the computed outcome
(\cite{wilson},~\cite{wilson97}).
It has been proposed (\cite{bp}) that in the binary coalescence of a
neutron star with a black hole the star would be disrupted into a
torus which would accrete on the viscous timescale, thus extending the
duration of the burst. Our simulations show a rather different
outcome, but it remains true that the process is extended in time (for
a different reason). Theoretical estimates (\cite{latt},~\cite{nara})
give $\sim10^{-6}$ per year per galaxy for the rate of coalescence of
such binaries, in agreement with the observed rate of GRBs. The energy
release is comparable to that in the double neutron star
mergers. Thus, the process seems to share all the advantages of the
coalescing neutron stars scenario, while avoiding its main
shortcomings. This motivated our study.
\section{Numerical Method}
For the computations presented in this letter, we have used a fully
Newtonian smooth particle hydrodynamics (SPH) code
(\cite{Lucy},~\cite{GM}). A detailed description of the code will be
published elsewhere (\cite{longpaper}). In calibration runs of the
code, we have replicated (\cite{my}) in detail all features of the
binary neutron star mergers computed by \cite{rasio}. The neutron star
was modeled as a polytrope with a stiff equation of state (adiabatic
index $\Gamma=3$) with 17,000 particles. The black hole was modeled as
a point mass with an absorbing boundary at $r_{g}=2GM/c^{2}$. Any
particle that comes closer than $r_{g}$ to the black hole is absorbed,
the mass and momentum of the black hole are adjusted so that the total
mass and linear momentum are conserved. The detailed results presented
here were obtained for initial conditions corresponding to a tidally
locked neutron star. Initial synchronized equilibrium configurations
can be constructed via a relaxation technique for a range of binary
separations, allowing the polytrope to respond to the presence of the
tidal field (\cite{RS92}). During the dynamical coalescence, we also
calculate the gravitational radiation waveforms emitted by the
system, in the quadrupole approximation. These waveforms are
presented elsewhere (\cite{eamaldi},b).
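As a sanity check on the scales involved, the absorbing-boundary radius $r_{g}=2GM/c^{2}$ can be evaluated directly (a minimal sketch; the constants and the helper name are ours, and the masses correspond to the $q=1$ run discussed in the next section):

```python
# Absorbing-boundary radius r_g = 2GM/c^2 used for the point-mass black hole.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

def r_g_km(mass_in_solar_masses):
    """Gravitational radius 2GM/c^2 in kilometres."""
    return 2 * G * mass_in_solar_masses * M_sun / c**2 / 1e3

# Initial 1.4 M_sun black hole of the q = 1 run, and the final 2.25 M_sun hole:
print(r_g_km(1.4), r_g_km(2.25))   # roughly 4.1 km and 6.6 km
```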
\section{Results and Discussion}
In the coalescence, the two components of the binary are brought
together by the loss of angular momentum to gravitational radiation. A
particularly interesting case occurs when the mass of the black hole
is close to that of the neutron star. In this case a dynamical
instability appears, and the orbit decays on a dynamical
timescale. The results of model calculations with an initial mass of
the neutron star of 1.4 M$_\odot$ and unperturbed radius of the
polytrope of 13.4 km are presented in Figures \ref{fig1} and
\ref{fig2}. Upon relaxing the polytrope to a synchronized state in the
binary system, we find the onset of instability at a distance of 37
km; this is the initial binary separation in the simulation presented
in Figures \ref{fig1} and \ref{fig2}.
Figure \ref{fig2} shows density contour snapshots during a dynamical
simulation with an initial mass ratio of one ($q=1$). A transient
massive accretion torus forms around the black hole, but the neutron
star is not completely disrupted as a result of this encounter. To the
limit of our resolution ($10^{-4}$M$_{\odot}$), a baryon--free line of
sight, parallel to the rotation axis of the binary, remains present
throughout the simulation. Higher resolution runs are needed to
determine whether the baryon content is below $10^{-5}$M$_{\odot}$, as
required by the blast--wave model of GRBs (\cite{rees}). The total
energy released through viscous heating is $\approx 5\times 10^{52}$
erg. In this case, mass transfer is essentially over in approximately
five initial orbital periods (11 ms) and a remnant core containing
0.43 M$_{\odot}$ is left orbiting around a 2.25 M$_{\odot}$ black
hole.
In Figure \ref{fig1} we have plotted (solid line) the mass accretion
rate onto the black hole, showing that the accretion event is very
brief ($\sim 2\,$ms); the dashed line is the mass of the black hole as a
function of time. The configuration resulting from the unstable mass
transfer in a binary of initial mass ratio $q=1$ is that of a black
hole and a lighter remnant core left in orbit of greater separation
($\sim 60\,$km) and a greatly altered mass ratio ($q_{final}=0.19$).
The orbital separation in this new binary system will again decrease
due to continuing emission of gravitational waves and, after
$\sim0.1\,$s, Roche-lobe overflow will occur as described below. In
the initial mass transfer for $q=1$, the black hole was a messy eater
and $\sim 0.1$M$_{\odot}$ of mass from the original neutron star remains
scattered around the binary system. With the current resolution of
our computations we were unable to determine the exact distribution of
this matter after 0.1 s. It is possible that some of this matter will
eventually be accreted onto the black hole, potentially releasing up
to $10^{51}$ erg in energy. We expect that much of the remaining
neutron matter will release its nuclear binding energy on the beta
decay timescale ($\tau\approx 15$ minutes).
For mass ratios not too close to unity ($q\equiv M_{ns}/M_{BH}<0.8$),
we find no dynamical instability. Once the components are brought
sufficiently close an episode of mass transfer through Roche--lobe
overflow from the neutron star onto the black hole ensues. This causes
the neutron star to move away from the black hole (by conservation of
angular momentum). The event is ``clean'': all the mass lost by the
neutron star is accreted by the black hole. These results are quite
different from those of early estimates, which suggested that the
neutron star will be tidally disrupted (\cite{wheel},~\cite{latt}) and
that a few per cent of the mass will be ejected (\cite{latt}) to
infinity, although it should be noted that our calculations are
completely Newtonian. If gravitational radiation backreaction is
neglected, the peak accretion rate is about 2M$_\odot/$s, but only
about one percent of the neutron star mass is transferred in each
episode.
The accretion rate and the mass transferred in such an episode are
illustrated in Figure \ref{fig3}, for a 1.4M$_\odot$ neutron star
orbiting a 4.5M$_\odot$ black hole ($q=0.31$). Here, the critical
distance corresponding to Roche--lobe overflow is 50.4 km. After an
interval of time comparable with the duration of the accretion event,
$\sim 4\,$ms, gravitational radiation again forces the binary into a
configuration where mass transfer occurs again. Clearly, the number of
such accretion events would be $\sim M_{ns}/\Delta M_{BH}\sim 100$ and
the total duration of the process a few seconds. However,
gravitational radiation losses cannot be ignored in this case, since
the time scale for decay for the orbit (from angular momentum losses
to gravitational waves) in the point mass approximation is 3.5 ms and
the duration of the mass transfer episode presented in Figure
\ref{fig3} is 10 ms. To explore how these angular momentum losses to
gravitational radiation will affect the binary, we have calculated,
using the quadrupole approximation for angular momentum loss, the
evolution of the same binary assuming that the gravitational potential
is that of two point masses. After 10 ms of mass transfer, the binary
separation has increased by about 0.06\% and the mass of the neutron
star is 0.85 M$_{\odot}$. Thus, this approximation also leads to the
conclusion that the binary will survive with an altered mass ratio and
separation, and the total time scale of the coalescence process is
extended from a few milliseconds to at least several tens of
milliseconds. Full hydrodynamical simulations involving a backreaction
force are required to explore the evolution of such a binary in
greater detail.
Note that we have identified the final stages of evolution of the
black-hole neutron-star binary as the only known astrophysical process
leading to the creation of a low-mass neutron star. The coalescence
ends with an explosion (\cite{colp},~\cite{sumiyoshi}) when the mass
of the surviving core drops below the lower stability limit of neutron
stars. This in itself could also give rise to an observable
transient. As pointed out by the referee, the black hole member of the
binary will be left behind with a large linear velocity as a result of
the explosion and the associated recoil. Our simulations suggest a
velocity on the order of $10^{4}$ km/s.
In summary, we have identified several unexpected features in the
binary coalescence of a neutron star with a black hole, which may make
such events promising candidate sources for the central engine of
gamma-ray bursters, at least for the shorter bursts in the apparently
bimodal distribution (\cite{kouveliotou}). The Newtonian numerical
calculations presented here assumed that the rotation of the neutron
star was synchronized with the orbital period. In fact, tidal locking
is not expected (\cite{bild}). Our preliminary simulations for a
non--synchronized system with an initial mass ratio of $q=0.31$ show
that the core of the neutron star survives the initial mass transfer
episode and could be driven below the minimum mass required for
stability. Thus the outcome is similar to that for the tidally locked
binary. Finally, all of our results are predicated on the assumption
that the neutron star will not collapse to a black hole before the
onset of mass transfer; relativistic simulations are required to
address the validity of this assumption (\cite{wilson}).
\acknowledgements
This work was supported in part by Poland's Committee for Scientific
Research under grant KBN 2P03D01311 and by DGAPA--UNAM. We thank the
referee for helpful comments.
\section{INTRODUCTION}
As one of the fundamental building blocks of quantum theory, the uncertainty principle has attracted considerable attention since the birth of quantum mechanics. Ever since Heisenberg proposed various notions of uncertainty related to the measurement of non-commuting observables in 1927 \cite{h1927}, much research has been devoted to quantifying the uncertainty of measurement outcomes, for instance in terms of noise and disturbance \cite{om2004,blw2013}, successive measurements \cite{dd1983,smd2003,dp2013,bfs2014,zzy2015,cf2015}, informational resources \cite{ww2010}, entropic quantities \cite{mu1988,pmms2017,fss2020,wym2009,r2013,rp2014,npg2016}, Wigner-Yanase skew information \cite{ls2003, cfl2016, zgy2021} and majorization techniques \cite{br2011,prz2013,fgg2013}.
Based on the variance of two arbitrary observables $A$ and $B$, Robertson derived the following well-known uncertainty relation \cite{r1929},
\begin{equation}\label{a1}
\Delta A\Delta B\geq \frac{1}{2}|\langle \psi| [A,B]|\psi\rangle|,
\end{equation}
where $\Delta \Omega=\sqrt{\langle \Omega^2\rangle-\langle\Omega\rangle^2}$ is the standard deviation of an observable $\Omega$ and $[A,B]=AB-BA$ is the commutator. For measurements on suitable states, this uncertainty relation is nontrivial for non-commuting observables: their non-commutativity is captured by the non-zero lower bound in (\ref{a1}). However, when the measured state $|\psi\rangle$ is an eigenvector of either $A$ or $B$, the lower bound in (\ref{a1}) is trivially zero. To deal with such problems, uncertainty relations based on the sum of variances have been considered. In \cite{mp2014} Maccone and Pati presented the following relations,
\begin{equation}\label{a2}
(\Delta A)^2+(\Delta B)^2\geq \pm\langle\psi|[A,B]|\psi\rangle + |\langle\psi |A\pm iB|\psi^\bot \rangle|^2,
\end{equation}
\begin{equation}\label{a3}
(\Delta A)^2+(\Delta B)^2\geq \frac{1}{2}[\Delta(A+B)]^2,
\end{equation}
where $\langle \psi|\psi^\bot\rangle=0$ and the signs $\pm$ on the right-hand side of (\ref{a2}) are so taken such that the lower bound attains the maximum.
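Both two-observable bounds are easy to verify numerically. The sketch below (our own, not part of the original derivation) checks (\ref{a1}) and (\ref{a3}) for $A=\sigma_x$, $B=\sigma_y$ on a generic qubit state:

```python
import numpy as np

# Pauli observables and a generic qubit state cos(t/2)|0> + e^{i phi} sin(t/2)|1>.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

def variance(A, psi):
    mean = np.vdot(psi, A @ psi).real
    return np.vdot(psi, A @ A @ psi).real - mean**2

t, phi = 0.7, 1.2
psi = np.array([np.cos(t / 2), np.exp(1j * phi) * np.sin(t / 2)])

dA, dB = np.sqrt(variance(sx, psi)), np.sqrt(variance(sy, psi))
comm = sx @ sy - sy @ sx

robertson = 0.5 * abs(np.vdot(psi, comm @ psi))    # Robertson lower bound
sum_bound = 0.5 * variance(sx + sy, psi)           # Maccone-Pati sum bound

assert dA * dB >= robertson - 1e-12
assert dA**2 + dB**2 >= sum_bound - 1e-12
```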
Besides the uncertainty relations for pairs of incompatible observables like position and momentum, uncertainty relations for three incompatible observables, like the three components of spin and angular momentum \cite{kw2014,zdsw2015,sq2016,wbym2017}, have also been investigated. Uncertainty relations for general multiple observables have been further studied in either product form \cite{qfl2016,xj2016} or sum form of variances \cite{ccfl2016, cbf2015, slpq2017, ccf2016, cwl2019}.
Song $et\ al.$ derived in \cite{slpq2017} an improved variance-based uncertainty relation,
\begin{equation} \label{a5}
\sum_{i=1}^N(\Delta A_i)^2 \geq \frac{1}{N}\Bigg\{ [\Delta (\sum_{{i=1}}^{N} A_i)]^2+\frac{2}{N(N-1)}[\sum_{1\leq i<j\leq N} \Delta (A_i-A_j)]^2 \Bigg\}
\end{equation}
for arbitrary $N$ incompatible observables, which is stronger than the one derived from the uncertainty inequality for two observables \cite{mp2014}.
The skew information also provides a way to characterize uncertainty relation \cite{ls2003}. The Wigner-Yanase skew information of a state $\rho$ with respect to an operator $A$ is given by \cite{wy1963},
\begin{equation}
I_{\rho}(A)=-\frac{1}{2}tr([\sqrt{\rho},A]^2)=\frac{1}{2}\| [\sqrt{\rho},A]\|^2,
\end{equation}
where $\|\bullet \|$ denotes the Frobenius norm. The Wigner-Yanase skew information characterizes the intrinsic features of the state $\rho$ and the observable $A$. It coincides with the variance for pure states, but is in general fundamentally different from it \cite{lz2004}. The skew information describes the non-commutativity between the square root of $\rho$ and the observable, while the variance describes the non-commutativity between the state $\rho$ and the observable.
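These two properties are simple to check numerically; the following sketch (our own, using an eigendecomposition-based matrix square root) verifies that the skew information coincides with the variance on a pure state and is dominated by it on a mixed state:

```python
import numpy as np

def _sqrtm(rho):
    """Square root of a positive semi-definite Hermitian matrix."""
    w, v = np.linalg.eigh(rho)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def skew_information(rho, A):
    """Wigner-Yanase skew information I_rho(A) = (1/2) ||[sqrt(rho), A]||_F^2."""
    s = _sqrtm(rho)
    return 0.5 * np.linalg.norm(s @ A - A @ s, 'fro')**2

sx = np.array([[0, 1], [1, 0]], dtype=complex)
psi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)
rho_pure = np.outer(psi, psi.conj())

# On a pure state the skew information equals the variance ...
var_pure = (psi.conj() @ sx @ sx @ psi).real - ((psi.conj() @ sx @ psi).real)**2
assert abs(skew_information(rho_pure, sx) - var_pure) < 1e-10

# ... while on a mixed state it is bounded above by the variance.
rho_mixed = 0.7 * rho_pure + 0.3 * np.eye(2) / 2
var_mixed = np.trace(rho_mixed @ sx @ sx).real - np.trace(rho_mixed @ sx).real**2
assert skew_information(rho_mixed, sx) <= var_mixed + 1e-10
```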
In ref. \cite{cfl2016}, Chen $et\ al.$ provided the sum uncertainty relation based on Wigner-Yanase skew information for finite $N$ observables,
\begin{equation}\label{a6}
\sum_{i=1}^N I_{\rho}(A_i) \geq \frac{1}{N-2}\Bigg\{ \sum_{1\leq i<j\leq N} I_{\rho}(A_i+A_j)-\frac{1}{(N-1)^2}[\sum_{1\leq i<j\leq N} \sqrt{I_{\rho}(A_i+A_j)}]^2 \Bigg\}.
\end{equation}
Recently, Zhang $et\ al.$ improved the above sum uncertainty relation \cite{zgy2021},
\begin{equation} \label{a7}
\sum_{i=1}^N I_{\rho}(A_i) \geq \frac{1}{N}\Bigg\{ I_{\rho}(\sum_{{i=1}}^{N} A_i)+\frac{2}{N(N-1)}[\sum_{1\leq i<j\leq N} \sqrt{I_{\rho}(A_i-A_j)}]^2 \Bigg\}.
\end{equation}
Both uncertainty inequalities (\ref{a6}) and (\ref{a7}) capture the incompatibility of the observables in the sense that their lower bounds are nonzero as long as the observables are not commutative with the measured state.
In this paper, we investigate sum uncertainty relations based on Wigner-Yanase skew information and variance for arbitrary $N$ incompatible observables. In Sec. \uppercase\expandafter{\romannumeral2}, we present a pair of uncertainty inequalities in terms of variance and compare them with existing relations through a detailed example, which shows that our uncertainty relations can provide tighter bounds. In Sec. \uppercase\expandafter{\romannumeral3}, we obtain two uncertainty relations via skew information; detailed examples show their validity and their advantage in capturing incompatibility. We conclude in Sec. \uppercase\expandafter{\romannumeral4}.
\section{UNCERTAINTY RELATION VIA VARIANCE}
In this section, we study stronger sum uncertainty relations based on the eigensystems of the observables. Let $A_i=\sum_{k=1}^du_{ik}| u_{ik}\rangle\langle u_{ik}|$ be the observable with the $k$-th eigenvalue $u_{ik}$ and eigenstate $|u_{ik}\rangle$. Then the variance is given by $(\Delta A_i)^2=\sum_{k=1}^d \bar{u}_{ik}^2\langle| u_{ik}\rangle\langle u_{ik}|\rangle$, where $\bar{u}_{ik}=u_{ik}-\langle A_i\rangle$ and $\langle| u_{ik}\rangle\langle u_{ik}|\rangle$ is the probability of projecting the
measured state onto the basis vector $|u_{ik}\rangle$. Set $a_i=(a_{i1},a_{i2},\dots,a_{id})=(|\bar{u}_{i1}|\sqrt{\langle |u_{i1}\rangle\langle u_{i1}|\rangle},|\bar{u}_{i2}|\sqrt{\langle |u_{i2}\rangle\langle u_{i2}|\rangle},\dots,|\bar{u}_{id}|\sqrt{\langle |u_{id}\rangle\langle u_{id}|\rangle})$. Then $(\Delta A_i)^2=\sum_{k=1}^d a_{ik}^2=\|a_i\|^2$.
We have the following conclusion.
\begin{theorem}
Let $A_1, A_2, \dots, A_N$ be arbitrary $N$ observables. The following variance-based sum uncertainty relation holds for any quantum state $\rho$,
\begin{equation} \label{th3eq1}
\sum_{i=1}^N(\Delta A_i)^2 \geq \max_{\pi_i,\pi_j \in S_d} \frac{1}{2N-2}\Bigg\{ \sum_{1\leq i<j\leq N} \Lambda_{\pi_i(i)\pi_j(j)}^2+\frac{2}{N(N-1)}[\sum_{1\leq i<j\leq N} \bar{\Lambda}_{\pi_i(i)\pi_j(j)}]^2 \Bigg\},
\end{equation}
where
\[\begin{aligned}&{\Lambda }_{\pi_i(i)\pi_j(j)}^2=\sum_{k=1}^d (|\bar{u}_{i{\pi_i(k)}}|\sqrt{\langle
|u_{i{\pi_i(k)}}\rangle \langle u_{i{\pi_i(k)}}|\rangle }+|\bar{u}_{j{\pi_j(k)}}|\sqrt{\langle
|u_{j{\pi_j(k)}}\rangle \langle u_{j{\pi_j(k)}}|\rangle })^2,\\&\bar{\Lambda
}_{\pi_i(i)\pi_j(j)}^2=\sum_{k=1}^d (|\bar{u}_{i{\pi_i(k)}}|\sqrt{\langle |u_{i{\pi_i(k)}}\rangle \langle
u_{i{\pi_i(k)}}|\rangle }-|\bar{u}_{j{\pi_j(k)}}|\sqrt{\langle |u_{j{\pi_j(k)}}\rangle \langle
u_{j{\pi_j(k)}}|\rangle })^2, \end{aligned}\]
$\pi_i,\pi_j\in S_d$ are arbitrary $d$-element permutations.
\end{theorem}
{\sf [Proof]} To prove the inequality (\ref{th3eq1}), we need the following identity for vectors $a_i$,
\begin{equation*}
(2N-2)\sum_{i=1}^{N} \| a_i\|^2 = \sum_{1\leq i<j \leq N} \| a_i+a_j \|^2 + \sum_{1\leq i<j \leq N} \| a_i-a_j \|^2,
\end{equation*}
where $\|\bullet \|$ stands for the norm of a vector defined by inner product. Using the Cauchy-Schwarz inequality,
\begin{equation*}
\sum_{1\leq i<j \leq N} \| a_i - a_j \|^2 \geq \frac{2}{N(N-1)} (\sum_{1\leq i<j \leq N} \| a_i - a_j \|)^2,
\end{equation*}
we obtain
\begin{equation}\label{th3pf1}
\sum_{i=1}^{N} \| a_i^{\pi_i} \|^2 \geq \frac{1}{2N-2}[\frac{2}{N(N-1)}(\sum_{1\leq i<j \leq N} \| a_i^{\pi_i} - a_j^{\pi_j} \|)^2 + \sum_{1\leq i<j \leq N} \| a_i ^{\pi_i} + a_j^{\pi_j} \|^2],
\end{equation}
where \(a_i^{\pi_i}=(a_{i{\pi_i(1)}},a_{i{\pi_i(2)}},\dots ,a_{i{\pi_i(d)}}).\) Since $\|a_i^{\pi_i}\|=\|a_i\|=\Delta A_i$ for any permutation $\pi_i$, maximizing the right-hand side over the permutations yields (\ref{th3eq1}). This completes the proof. $\Box$
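The identity and the Cauchy-Schwarz step underlying the proof can be verified numerically for random vectors; the sketch below (our own, with the permutations taken as the identity) illustrates this:

```python
import numpy as np
from itertools import combinations

# Random stand-ins for the vectors a_i of the proof.
rng = np.random.default_rng(0)
N, d = 4, 5
a = rng.random((N, d))

# Identity: (2N-2) * sum ||a_i||^2 = sum ||a_i + a_j||^2 + sum ||a_i - a_j||^2.
lhs = (2 * N - 2) * sum(v @ v for v in a)
rhs = sum((a[i] + a[j]) @ (a[i] + a[j]) + (a[i] - a[j]) @ (a[i] - a[j])
          for i, j in combinations(range(N), 2))
assert abs(lhs - rhs) < 1e-10

# Cauchy-Schwarz step and the resulting lower bound on sum ||a_i||^2.
sum_diff = sum(np.linalg.norm(a[i] - a[j]) for i, j in combinations(range(N), 2))
sum_plus = sum(np.linalg.norm(a[i] + a[j])**2 for i, j in combinations(range(N), 2))
bound = (2 / (N * (N - 1)) * sum_diff**2 + sum_plus) / (2 * N - 2)
assert sum(v @ v for v in a) >= bound - 1e-10
```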
Arranging the components of the vector $b_i^{\uparrow}=(b_{i1},b_{i2},\dots, b_{id})^{\uparrow}=(|\bar{u}_{i1}|\sqrt{\langle |u_{i1}\rangle\langle u_{i1}|\rangle},|\bar{u}_{i2}|\sqrt{\langle |u_{i2}\rangle\langle u_{i2}|\rangle},\dots,|\bar{u}_{id}|\sqrt{\langle |u_{id}\rangle\langle u_{id}|\rangle})^{\uparrow}$
in increasing order, that is, $b_{ik} \leq b_{i,k+1}$,
Chen $et\ al.$ introduced in \cite{cwl2019} a stronger uncertainty relation,
\begin{equation} \label{chen1}
\sum_{i=1}^N(\Delta A_i)^2 \geq \frac{1}{2^{H(2-N)}N-2}\Bigg\{ \sum_{1\leq i<j\leq N} K_{ij}^2+\frac{H(2-N)-1}{(N-1)^2}(\sum_{1\leq i<j\leq N} K_{ij})^2 \Bigg\},
\end{equation}
where $$K_{ij}^2= \sum_{k=1}^d (b_{ik}^{\uparrow} + b_{jk}^{\uparrow})^2,$$
and $H(x)$ is the unit step function with zero for $x<0$ and one for $x\geq 0$.
For two incompatible observables case, the inequality (\ref{th3eq1}) becomes
\begin{equation*}
(\Delta A_1)^2+(\Delta A_2)^2 \geq \max_{\pi_1,\pi_2 \in S_2} \frac{1}{2}\Bigg\{ \Lambda_{\pi_1(1)\pi_2(2)}^2+ \bar{\Lambda}_{\pi_1(1)\pi_2(2)}^2 \Bigg\}.
\end{equation*}
The uncertainty relation (\ref{chen1}) gives rise to
\begin{equation*}
(\Delta A_1)^2+(\Delta A_2)^2 \geq \frac{1}{2}K_{12}^2.
\end{equation*}
Since $\max_{\pi_1,\pi_2 \in S_2} \Lambda_{\pi_1(1)\pi_2(2)}^2=K_{12}^2$,
our Theorem 1 has a tighter lower bound than (\ref{chen1}) due to the extra non-negative term.
\emph{Example 1.}
To illustrate that our uncertainty inequality (\ref{th3eq1}) is tighter than (\ref{a5}) and (\ref{chen1}), we consider the pure state,
\begin{equation}\label{ex1}
|\psi\rangle =\cos{\frac{\theta}{2}}|1\rangle+e^{i\phi}\sin{\frac{\theta}{2}}|0\rangle,
\end{equation}
where $0\leq\theta\leq \pi$ and $0\leq\phi\leq 2\pi$. We take the observables $A_1=-|0\rangle\langle 1|-|1\rangle\langle 0|$, $A_2=-i|0\rangle\langle 1|+i|1\rangle\langle 0|$ and $A_3=|0\rangle\langle 0|-|1\rangle\langle 1|$.
Set $\phi=\pi/4$. The comparison between the lower bounds from (\ref{th3eq1}), (\ref{a5}) and (\ref{chen1}) is shown in Fig. \ref{fig1}. For the sake of simplicity, ${\rm LB}$, $\overline{\rm LB}1$ and $\overline{\rm LB}2$ represent the lower bounds of (\ref{th3eq1}), (\ref{a5}) and (\ref{chen1}), respectively. The bound of (\ref{th3eq1}) is tighter than (\ref{a5}) and (\ref{chen1}) for certain $\theta$.
\begin{figure}[htb]
\centering
\includegraphics[width=15cm]{TUfigure1.pdf}
\caption{ Blue (solid), pink (dot-dashed) and green (dashed) lines represent the lower bounds of (\ref{th3eq1}), (\ref{chen1}) and (\ref{a5}), respectively.}
\label{fig1}
\end{figure}
\section{UNCERTAINTY RELATIONS VIA SKEW INFORMATION}
We now provide stronger sum uncertainty inequalities based on
Wigner-Yanase skew information for $N$ incompatible observables.
\begin{theorem}\label{th2}
For arbitrary finite $N$ observables $A_1, A_2, \dots, A_N$, the following sum uncertainty relations hold:
\begin{equation}\label{th2eq1}
\sum_{i=1}^N I_{\rho} (A_i)\geq \frac{1}{2N-2}\Bigg\{\frac{2}{N(N-1)} [\sum_{1\leq i<j\leq N} \sqrt{I_{\rho}(A_i+A_j)}]^2+\sum_{1\leq i<j\leq N} I_{\rho} (A_i-A_j) \Bigg\},
\end{equation}
and
\begin{equation}\label{th2eq2}
\sum_{i=1}^N I_{\rho} (A_i)\geq \frac{1}{2N-2}\Bigg\{ \frac{2}{N(N-1)}[\sum_{1\leq i<j\leq N} \sqrt{I_{\rho} (A_i-A_j)}]^2 + \sum_{1\leq i<j\leq N} I_{\rho}(A_i+A_j) \Bigg\}.
\end{equation}
\end{theorem}
{\sf [Proof]}
Using the following identity for any Hermitian matrices $a_i$,
\begin{equation*}
(2N-2)\sum_{i=1}^{N} \| a_i\|^2 = \sum_{1\leq i<j \leq N} \| a_i+a_j \|^2 + \sum_{1\leq i<j \leq N} \| a_i-a_j \|^2,
\end{equation*}
where $\|\bullet \|$ stands for the Frobenius norm,
and the Cauchy-Schwarz inequalities,
\begin{equation*}
\sum_{1\leq i<j \leq N} \| a_i + a_j \|^2 \geq \frac{2}{N(N-1)}
(\sum_{1\leq i<j \leq N} \| a_i + a_j \|)^2,
\end{equation*}
and
\begin{equation*}
\sum_{1\leq i<j \leq N} \| a_i - a_j \|^2 \geq \frac{2}{N(N-1)} (\sum_{1\leq i<j \leq N} \| a_i - a_j \|)^2,
\end{equation*}
we have
\begin{equation}\label{th2pf1}
\sum_{i=1}^{N} \| a_i\|^2 \geq \frac{1}{2N-2}[\frac{2}{N(N-1)}(\sum_{1\leq i<j \leq N} \| a_i + a_j \|)^2 + \sum_{1\leq i<j \leq N} \| a_i - a_j \|^2]
\end{equation}
and
\begin{equation}\label{th2pf2}
\sum_{i=1}^{N} \| a_i\|^2 \geq \frac{1}{2N-2}[\frac{2}{N(N-1)}(\sum_{1\leq i<j \leq N} \| a_i - a_j \|)^2 + \sum_{1\leq i<j \leq N} \| a_i + a_j \|^2].
\end{equation}
Denote $a_i=[\sqrt{\rho},A_i]$. Then $\| a_i \|^2=2I_{\rho} (A_i)$,
$\| a_i - a_j \|^2=2I_{\rho} (A_i - A_j)$ and $\| a_i + a_j \|^2=2I_{\rho} (A_i + A_j)$.
Substituting the above relations into the inequalities (\ref{th2pf1}) and (\ref{th2pf2})
we obtain (\ref{th2eq1}) and (\ref{th2eq2}). $\Box$
In fact, by using the parallelogram law in Hilbert space,
$I_{\rho} (A)+I_{\rho} (B) \geq \frac{1} {2}I_{\rho}(A+B)$ and
$I_{\rho} (A)+I_{\rho} (B) \geq \frac{1} {2}I_{\rho}(A-B)$, one can also get for $N$ observables,
\begin{equation}\label{a10}
\sum_{i=1}^N I_{\rho} (A_i)\geq \frac{1}{2N-2} \sum_{1\leq i<j\leq N} I_{\rho}(A_i+A_j)
\end{equation}
and
\begin{equation}\label{a11}
\sum_{i=1}^N I_{\rho} (A_i)\geq \frac{1}{2N-2} \sum_{1\leq i<j\leq N} I_{\rho}(A_i-A_j).
\end{equation}
Obviously, our Theorem 2 provides tighter uncertainty inequalities than the above inequalities (\ref{a10}) and (\ref{a11}) due to the extra non-negative terms.
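As a numerical illustration (our own sketch, not part of the proof), both bounds of Theorem 2 can be checked for the Pauli observables on an arbitrary qubit state:

```python
import numpy as np
from itertools import combinations

def _sqrtm(rho):
    """Square root of a positive semi-definite Hermitian matrix."""
    w, v = np.linalg.eigh(rho)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def skew(rho, A):
    """Wigner-Yanase skew information I_rho(A) = (1/2) ||[sqrt(rho), A]||_F^2."""
    s = _sqrtm(rho)
    return 0.5 * np.linalg.norm(s @ A - A @ s, 'fro')**2

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
obs = [sx, sy, sz]
N = len(obs)

r = [0.3, 0.5, -0.2]   # an arbitrary Bloch vector with |r| < 1
rho = 0.5 * (np.eye(2) + r[0] * sx + r[1] * sy + r[2] * sz)

lhs = sum(skew(rho, A) for A in obs)
pairs = list(combinations(obs, 2))
b1 = (2 / (N * (N - 1)) * sum(np.sqrt(skew(rho, A + B)) for A, B in pairs)**2
      + sum(skew(rho, A - B) for A, B in pairs)) / (2 * N - 2)
b2 = (2 / (N * (N - 1)) * sum(np.sqrt(skew(rho, A - B)) for A, B in pairs)**2
      + sum(skew(rho, A + B) for A, B in pairs)) / (2 * N - 2)
assert lhs >= b1 - 1e-10 and lhs >= b2 - 1e-10
```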
We give two examples to show that our inequalities are tighter than (\ref{a7}). For convenience, $\overline{\rm Lb}$, Lb1 and Lb2 denote the right-hand sides of (\ref{a7}), (\ref{th2eq1}) and (\ref{th2eq2}), respectively.
\begin{figure}[tbp]
\centering
\includegraphics[width=15cm]{TUfigure2.pdf}
\caption{Blue (dashed) line is the bound (\ref{a7}). Red (dotted) and green (dot-dashed) lines represent the bounds of (\ref{th2eq1}) and (\ref{th2eq2}), respectively. Clearly, the bounds of (\ref{th2eq1}) and (\ref{th2eq2}) are strictly tighter than the bound of (\ref{a7}) in some cases.}
\label{fig2}
\end{figure}
\emph{Example 2.}
We first consider the mixed state given by the Bloch vector $\vec{r}=\{ \frac{\sqrt{3}}{2}\cos{\theta}, \frac{\sqrt{3}}{2}\sin{\theta}, 0 \}$,
\begin{equation}
\rho=\frac{I_2 + \vec{r}\cdot\vec{\sigma}}{2},
\end{equation}
where $\vec{\sigma} = \{\sigma_x, \sigma_y, \sigma_z\}$ is a vector given by standard Pauli matrices and $I_2$ is the $2\times2$ identity matrix.
Choosing $\sigma_x$, $\sigma_y$ and $\sigma_z$ as the observables, we get
$$I_{\rho}(\sigma_x)+I_{\rho}(\sigma_y)+I_{\rho}(\sigma_z) = 1,\quad I_{\rho}(\sigma_x + \sigma_y+ \sigma_z)= 1-\cos\theta \sin\theta,$$
$$I_{\rho}(\sigma_x + \sigma_y) = \frac{1}{2}-\cos\theta \sin\theta, \quad I_{\rho}(\sigma_x + \sigma_z) =\frac{1}{4}(3-\cos 2\theta),\quad I_{\rho}(\sigma_y + \sigma_z) = \frac{1}{4}(3+\cos 2\theta), $$
$$ I_{\rho}(\sigma_x - \sigma_y) = \frac{1}{2}(1+\sin 2\theta), \quad I_{\rho}(\sigma_x - \sigma_z) =\frac{1}{4}(3-\cos 2\theta),\quad I_{\rho}(\sigma_y - \sigma_z) =\frac{1}{4}(3+\cos 2\theta).$$
The comparison between the lower bounds of Theorem 2 and (\ref{a7}) is given in Fig. \ref{fig2}.
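These closed-form values can be confirmed numerically; the sketch below (our own) checks a few of them at an arbitrary sample point $\theta$:

```python
import numpy as np

def _sqrtm(rho):
    """Square root of a positive semi-definite Hermitian matrix."""
    w, v = np.linalg.eigh(rho)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

def skew(rho, A):
    """Wigner-Yanase skew information I_rho(A) = (1/2) ||[sqrt(rho), A]||_F^2."""
    s = _sqrtm(rho)
    return 0.5 * np.linalg.norm(s @ A - A @ s, 'fro')**2

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

theta = 0.9   # arbitrary sample point
rx, ry = np.sqrt(3) / 2 * np.cos(theta), np.sqrt(3) / 2 * np.sin(theta)
rho = 0.5 * (np.eye(2) + rx * sx + ry * sy)

total = skew(rho, sx) + skew(rho, sy) + skew(rho, sz)
assert abs(total - 1) < 1e-10   # the sum equals 1 for any theta
assert abs(skew(rho, sx + sy) - (0.5 - np.cos(theta) * np.sin(theta))) < 1e-10
assert abs(skew(rho, sx + sz) - (3 - np.cos(2 * theta)) / 4) < 1e-10
assert abs(skew(rho, sx - sy) - 0.5 * (1 + np.sin(2 * theta))) < 1e-10
```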
\begin{figure}[tbp]
\centering
\subfigure[]
{
\label{fig:subfig:a}
\includegraphics[width=8cm]{TUfigure3a.pdf}
}
\subfigure[]
{
\label{fig:subfig:b}
\includegraphics[width=7.5cm]{TUfigure3d.pdf}}
\caption{(\textbf{a}) The green and blue surfaces represent our lower bound (\ref{th2eq2}) and the lower bound (\ref{a7}), respectively. The green surface lies entirely above the blue one. (\textbf{b}) Set $\phi=\pi/2$. The black (solid) line represents the sum of skew information $I_{\rho} (L_x)+I_{\rho}(L_y)+I_{\rho}(L_z)$. Blue (dashed) is the bound (\ref{a7}); red (dotted) and green (dot-dashed) lines represent the bounds of (\ref{th2eq1}) and (\ref{th2eq2}), respectively. The lower bounds (\ref{th2eq1}) and (\ref{th2eq2}) in Theorem 2 are tighter than (\ref{a7}) in certain cases.}
\label{fig3}
\end{figure}
\emph{Example 3.} We consider the following quantum state in spin-$1$ system,
\begin{equation}
|\psi\rangle = \sin{\theta}\cos{\phi}|1\rangle + \sin{\theta}\sin{\phi}|0\rangle + \cos{\theta}|-1\rangle ,
\end{equation}
where $\theta\in [0, \pi]$ and $\phi\in [0,2\pi]$. We take angular momentum operators ($\hbar=1$) as the observables:
\begin{equation*}
\begin{gathered}
L_x=\frac{1}{\sqrt{2}}
\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1&0 \end{pmatrix},
\quad
L_y=\frac{1}{\sqrt{2}}
\begin{pmatrix} 0 & -i & 0 \\ i & 0 & -i \\ 0 & i&0 \end{pmatrix},
\quad
L_z=
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0&-1 \end{pmatrix}.
\end{gathered}
\end{equation*}
We have the sum of the skew information for the state $|\psi\rangle$,
\begin{equation*}
{\rm Sum}:=I_{\rho} (L_x)+I_{\rho}(L_y)+I_{\rho}(L_z)=2-(\cos^2\theta -\sin^2\theta \cos^2\phi)^2-2\sin^2\theta \sin^2\phi(\cos\theta+\sin\theta \cos\phi)^2.
\end{equation*}
We show in Fig. \ref{fig3} the comparison between the lower bounds (\ref{th2eq1}), (\ref{th2eq2}) and (\ref{a7}). The figures show that our bounds are better in this case.
\section{CONCLUSION}
We have derived uncertainty inequalities for arbitrary $N$ incompatible observables based on sums of variances and of the Wigner-Yanase skew information, which improve upon the existing related uncertainty inequalities. The simple approach used in this work can also be applied to investigate other kinds of uncertainty relations.
\bigskip
\noindent{\bf Acknowledgments}\, This work is supported by NSFC (Grant No. 12075159), Beijing Natural Science Foundation (Z190005), Academy for Multidisciplinary Studies, Capital Normal University, the Academician Innovation Platform of Hainan Province, Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology (No. SIQSE202001).
\subsubsection{\@startsection{subsubsection}{3}{10pt}{-1.25ex plus -1ex minus -.1ex}{0ex plus 0ex}{\normalsize\bf}}
\def\paragraph{\@startsection{paragraph}{4}{10pt}{-1.25ex plus -1ex minus -.1ex}{0ex plus 0ex}{\normalsize\textit}}
\renewcommand\@biblabel[1]{#1}
\renewcommand\@makefntext[1
{\noindent\makebox[0pt][r]{\@thefnmark\,}#1}
\makeatother
\renewcommand{\figurename}{\small{Fig.}~}
\sectionfont{\large}
\subsectionfont{\normalsize}
\fancyfoot{}
\fancyfoot[LO,RE]{\vspace{-7pt}\includegraphics[height=9pt]{headers/LF}}
\fancyfoot[CO]{\vspace{-7.2pt}\hspace{12.2cm}\includegraphics{headers/RF}}
\fancyfoot[CE]{\vspace{-7.5pt}\hspace{-13.5cm}\includegraphics{headers/RF}}
\fancyfoot[RO]{\footnotesize{\sffamily{1--\pageref{LastPage} ~\textbar \hspace{2pt}\thepage}}}
\fancyfoot[LE]{\footnotesize{\sffamily{\thepage~\textbar\hspace{3.45cm} 1--\pageref{LastPage}}}}
\fancyhead{}
\renewcommand{\headrulewidth}{1pt}
\renewcommand{\footrulewidth}{1pt}
\setlength{\arrayrulewidth}{1pt}
\setlength{\columnsep}{6.5mm}
\setlength\bibsep{1pt}
\twocolumn[
\begin{@twocolumnfalse}
\noindent\LARGE{\textbf{Mechanics of large folds in thin interfacial films}}
\vspace{0.6cm}
\noindent\large{\textbf{Vincent Démery,$^\ast$ Benny Davidovitch, and Christian D. Santangelo}}\vspace{0.5cm}
\noindent\textit{\small{\textbf{Received Xth XXXXXXXXXX 20XX, Accepted Xth XXXXXXXXX 20XX\newline
First published on the web Xth XXXXXXXXXX 200X}}}
\noindent \textbf{\small{DOI: 10.1039/b000000x}}
\vspace{0.6cm}
\noindent \normalsize{A thin film at a liquid interface responds to uniaxial confinement by wrinkling and then by folding; its shape and energy have been computed exactly before self contact.
Here, we address the mechanics of large folds, i.e. folds that absorb a length much larger than the wrinkle wavelength. With scaling arguments and numerical simulations, we show that the antisymmetric fold is energetically favorable and can absorb any excess length at zero pressure.
Then, motivated by puzzles arising in the comparison of this simple model to experiments on lipid monolayers and capillary rafts, we discuss how to incorporate film weight, self-adhesion and energy dissipation.
}
\vspace{0.5cm}
\end{@twocolumnfalse}
]
\footnotetext{\textit{Department of Physics, University of Massachusetts, Amherst, MA 01003, USA.}}
\footnotetext{\textit{$^\ast$~E-mail: vdemery@physics.umass.edu}}
\section{Introduction}\label{}
Deforming soft two dimensional objects by means of capillarity opened a new route to design three dimensional structures at the micro and nanoscale \cite{Py2007,Roman2010,Leong2010}.
However, attaching these thin films to soft substrates subjects them to a wealth of morphological instabilities such as wrinkling, crumpling or folding \cite{Li2012}.
Such instabilities have been observed in lipid monolayers \cite{Ries1979,Milner1989,Saint_Jalmes1998,Ybert2002,Zhang2005,Gopal2006,Pu2006,Lee2008}, nanoparticles films \cite{Leahy2010}, capillary rafts\cite{Vella2004,Protiere2010,Abkarian2013} and thin polymer sheets resting on a gel \cite{Pocivavsek2008,Brau2013} or a liquid substrate \cite{Holmes2010,King2012}.
Their complete characterization is a necessary step towards their control and use in the fabrication of small structures.
A simple setup where some of these instabilities arise consists of a thin film at an initially flat liquid interface that is confined in one horizontal direction (see Fig.~\ref{fig:picture}).
The film responds to confinement by wrinkling and folding in a universal way resulting from the competition between the bending energy of the film and the gravitational energy needed to lift the liquid\cite{Pocivavsek2008}. Minimizing these energies leads to an integrable equation for the shape of the film\cite{Diamant2011,Rivetti2013,Diamant2013}, allowing one to obtain an analytical expression for the energy as a function of the confinement length. However, this exact result holds only up to self-contact of the film, which occurs as soon as the confinement length reaches approximately the wavelength of the wrinkles.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{folds.jpg}
\end{center}
\caption{A thin interfacial film responds to uniaxial confinement first by wrinkling (top), and then by forming a large fold (bottom). The invariance of the system along $\hat\pmb{z}$ allows one to parametrize the shape of the film by a function $\pmb{r}(s)=(x(s),y(s))$.}
\label{fig:picture}
\end{figure}
On the other hand, the experimental range of confinement for lipid monolayers\cite{Ries1979,Milner1989,Zhang2005,Gopal2006,Pu2006,Lee2008} and capillary rafts\cite{Protiere2010,Abkarian2013} goes far beyond self contact.
In the first case, folds are formed\cite{Ries1979} abruptly, causing jerky monolayer dynamics\cite{Gopal2006}. In a folding event, a length $\sim 2\,\mu\textrm{m}$ is absorbed in a fold in $\sim 0.1\,\textrm{s}$.
It has already been noted that this characteristic time is anomalously fast\cite{Oppenheimer2013}, but what sets the characteristic length is also unclear.
In capillary rafts, large folds -- involving a length much larger than the wrinkle wavelength -- are formed and eventually get destabilized under their own weight\cite{Protiere2010}.
To understand the behavior of the film in these experiments, a systematic study of the mechanics of large folds is required.
In this article, we address the following questions: what is the shape of a fold after self contact? What is its energy?
\section{Model}\label{}
A thin film at a liquid interface is submitted to uniaxial confinement along $\hat\pmb{x}$; the system is invariant in the $\hat\pmb{z}$ direction (see Fig.~\ref{fig:picture}).
Soon after the confinement length exceeds the threshold value for the wrinkling instability, the film responds as if it were nearly inextensible, and can be modeled as a rod parametrized by $\pmb{r}(s)=(x(s),y(s))$ in a vertical plane ($s$ is the arc length).
\citet{Pocivavsek2008} found that the bending energy of the film and the gravitational energy of the displaced fluid are responsible for the wrinkle to fold transition. Those energies are, respectively,
\begin{align}
U\ind{bend} & = \frac{B}{2}\int \pmb{r}''(s)^2ds, \label{eq:ubend}\\
U\ind{grav} & = \frac{\rho g}{2}\int y(s)^2x'(s)ds, \label{eq:ugrav}
\end{align}
where $B$ is the bending modulus of the film,
$\rho$ is the mass density difference between the fluids below and above the sheet, $g$ is the gravitational acceleration and energies are given per unit length in the orthogonal direction.
For a continuous material, the bending modulus is given by $B=Et^3/[12(1-\nu^2)]$, where $E$ is the Young modulus, $\nu$ the Poisson ratio, and $t$ the thickness of the film.
These parameters allow one to define the characteristic length $l=(B/\rho g)^{1/4}$.
Rescaling lengths by $l$ and energies by $B/l$, we are left with a system with no dimensionless parameters (in the following, we use only dimensionless quantities).
We focus on the dependence of the energy on the confinement length $\Delta=L-[x(L)-x(0)]$, $L$ being the length of the film in the confined direction.
The system defined by the energies (\ref{eq:ubend}-\ref{eq:ugrav}) is integrable \cite{Diamant2011,Rivetti2013,Diamant2013}. For a given confinement length $\Delta$, there is a continuous family of solutions with the same energy
\begin{equation}\label{eq:exact}
U^0(\Delta)=2\Delta-\frac{\Delta^3}{48},
\end{equation}
among which are the symmetric and antisymmetric configurations pictured in Fig.~\ref{fig:d_u_sym_asym}.
Two points are noteworthy: first, this energy is always lower than the energy of the wrinkled state, $U^\mathrm{wr}=2\Delta$~\cite{Milner1989}; second, this energy has a maximum and may even become negative. This is prevented by self contact, where the exact solutions cease to be valid. Self contact occurs at $\Delta\simeq 5.6$ for the symmetric fold, just before the maximum, and at $\Delta\simeq 6.6$ for the antisymmetric fold, just after $U^0$ reaches its maximum, meaning that there exists an antisymmetric fold with negative pressure.
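These features of Eq.~(\ref{eq:exact}) are easy to verify numerically; the following is a quick sanity check (an illustrative snippet, not part of the analysis):

```python
import numpy as np

def u_exact(delta):
    """Dimensionless fold energy before self contact: U^0 = 2*Delta - Delta^3/48."""
    return 2.0 * delta - delta ** 3 / 48.0

# stationary point: dU/dDelta = 2 - Delta^2/16 = 0  ->  Delta = 4*sqrt(2) ~ 5.66
delta_max = 4.0 * np.sqrt(2.0)
u_max = u_exact(delta_max)        # equals 16*sqrt(2)/3 ~ 7.5
```

In particular, $U^0(\Delta)\leq 2\Delta$ for all $\Delta\geq 0$, and $U^0$ changes sign at $\Delta=\sqrt{96}\simeq 9.8$, beyond the self-contact values quoted above.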
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{D_U_complete.pdf}
\end{center}
\caption{Fold energy as a function of the imposed displacement for the symmetric (squares) and antisymmetric (circles) folds. The blue line is the exact solution (\ref{eq:exact}) valid before self-contact.
Symmetric and antisymmetric configurations are shown before self contact (left, exact solutions from \citet{Diamant2011}) and after self contact (right). After self contact, the size of the fold $\Delta/2$ absorbs the excess length while bending is localized in highly curved zones of length $l'$.}
\label{fig:d_u_sym_asym}
\end{figure}
\section{Scaling arguments and numerical solution}\label{}
We start our investigation of large folds with a scaling analysis.
Large symmetric and antisymmetric folds are depicted in Fig.~\ref{fig:d_u_sym_asym}.
A fold is characterized by two lengths: its size $\Delta/2$, which is given by the confinement length (assuming that the whole confinement length is absorbed into the fold), and the size $l'$ of the highly curved zone(s) where bending is concentrated. The size $l'$ is determined by an energy balance.
In the symmetric case, the bending and gravitational energies are respectively $1/l'$ and $\Delta l'^2$ (the volume of fluid contained in the highly curved zone is $l'^2$ and its displacement is $\Delta$). Minimizing over $l'$ gives $l'\sim \Delta ^{-1/3}$ and the scaling law
\begin{equation}\label{eq:asymptotic_sym}
U^\mathrm{sym}\sim\Delta^{1/3}.
\end{equation}
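Explicitly, writing the estimate as $E(l')\sim 1/l'+\Delta l'^{2}$ and minimizing over $l'$ (prefactors of order unity omitted),
\begin{equation*}
\frac{dE}{dl'}=-\frac{1}{l'^{2}}+2\Delta l'=0
\quad\Longrightarrow\quad
l'\sim\Delta^{-1/3},
\qquad
E\sim\frac{1}{l'}\sim\Delta^{1/3}.
\end{equation*}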
Note that this scaling is strictly different from the result of a scaling argument in \citet{Pocivavsek2008}, which neglected the effect of self avoidance.
On the other hand, in the antisymmetric case, the displacement of the fluid inside the highly curved zones does not depend on the fold size $\Delta$. The bending and gravitational energies are, respectively, $1/l'$ and $l'^3$, leading to $l'\sim 1$ and
\begin{equation}\label{eq:asymptotic_asym}
U^\mathrm{antisym}\sim 1.
\end{equation}
Since the fold occurs at $\Delta>1$, the antisymmetric fold has a lower energy than the symmetric one. Moreover, its energy does not depend on the size of the fold: once it is formed, it can absorb length at negligible cost.
In order to completely characterize the behavior of the fold, we have to investigate the crossover between the energy at self contact, given by Eq.~(\ref{eq:exact}), and the asymptotic behaviors of Eqs.~(\ref{eq:asymptotic_sym}) and (\ref{eq:asymptotic_asym}). Besides this crossover, we want to determine the asymptotic value of the energy of the antisymmetric fold.
We resort to a numerical computation of the film shape to answer these questions.
The rod parametrized by $\pmb{r}(s)$ is modeled as a chain of beads
with bending and gravitational energies, a stretching energy with a very large stretching modulus and a short-range repulsion energy between the beads to prevent self-crossing.
The equilibrium rod configuration is given by minimization of the full energy, and its energy is computed with the bending and gravitational contributions only.
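As an illustrative sketch of such a discrete energy (the bending, gravity, stretching and contact terms below use a simple turning-angle/midpoint discretization, and the parameter values are illustrative, not necessarily those of our computations):

```python
import numpy as np

def rod_energy(coords, ds, k_stretch=1e4, k_rep=1e2, d_rep=0.1):
    """Sketch of the discrete bead-chain energy: bending, gravity of the
    displaced fluid, a stiff stretching penalty enforcing near-inextensibility,
    and a short-range bead-bead repulsion preventing self-crossing.
    coords: flat array of (x, y) bead positions; ds: rest bond length."""
    r = coords.reshape(-1, 2)
    bond = np.diff(r, axis=0)
    length = np.linalg.norm(bond, axis=1)
    tang = bond / length[:, None]
    # bending: (1/2) * sum of (turning angle)^2 / ds
    cos_t = np.clip((tang[:-1] * tang[1:]).sum(axis=1), -1.0, 1.0)
    e_bend = 0.5 * np.sum(np.arccos(cos_t) ** 2) / ds
    # gravity: (1/2) \int y^2 x'(s) ds, midpoint rule
    y_mid = 0.5 * (r[:-1, 1] + r[1:, 1])
    e_grav = 0.5 * np.sum(y_mid ** 2 * bond[:, 0])
    # near-inextensibility
    e_stretch = 0.5 * k_stretch * np.sum((length - ds) ** 2)
    # short-range repulsion between non-adjacent beads
    e_rep = 0.0
    for i in range(len(r) - 2):
        d = np.linalg.norm(r[i + 2:] - r[i], axis=1)
        e_rep += 0.5 * k_rep * np.sum(np.maximum(d_rep - d, 0.0) ** 2)
    return e_bend + e_grav + e_stretch + e_rep
```

The equilibrium shape is then obtained by minimizing this energy over the bead positions at fixed end-to-end distance (e.g. with a standard quasi-Newton routine), retaining only the bending and gravitational contributions when reporting the energy, as described above.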
We perform two kinds of simulations: in the first, we find the energy minimizing configuration of the complete rod; in the second, we consider one half of the rod and impose a symmetric configuration.
In the first case, the energy minimizing configuration is always found to be antisymmetric after self-contact.
The energies of the symmetric and antisymmetric configurations are plotted as a function of the displacement in Fig.~\ref{fig:d_u_sym_asym}: they are equal before self-contact and differ strongly after self-contact. The energy of the symmetric fold keeps increasing while that of the antisymmetric fold decreases monotonically to a plateau well below its maximum value (the symmetric fold shown in Fig.~\ref{fig:d_u_sym_asym} is pointing down, but the corresponding configuration with the fold pointing up has the same energy).
The energy of the antisymmetric fold reaches its maximum $U\ind{max}=16\sqrt{2}/3\simeq 7.5$ before self-contact and then decreases to its plateau value $U_\infty\simeq 5.2 \simeq 0.7U\ind{max}$ (see~Fig.~\ref{fig:num_asym}).
Four steps can be identified after the energy maximum, pictured in the inset of Fig.~\ref{fig:num_asym}: (i) Between the maximum and self-contact ($4\sqrt{2}\simeq 5.7\leq\Delta\leq 6.5$), the two highly curved zones get closer, reducing the gravitational energy.
(ii) Just after self-contact ($6.5\leq\Delta\leq 9$) the size of the highly curved zones increases.
(iii) Once the highly curved zones have reached their optimal size, they start to move apart until a trilayer is formed between them ($9\leq\Delta\leq 25$).
(iv) The highly curved zones move apart at constant energy and shape, increasing the trilayer length between them ($\Delta\geq 25$).
The transition from the flat to the folded film resembles the monolayer to trilayer transition observed in nanosphere rafts\cite{Leahy2010}.
\begin{figure}
\begin{center}
\includegraphics[width=\linewidth]{D_U_confs.pdf}
\end{center}
\caption{Fold energy as a function of the imposed displacement. The blue line is the exact solution (\ref{eq:exact}) that is valid before self contact, i.e. for $\Delta\lesssim 6.6$.
Inset: typical configurations showing the different folding steps (numbers indicate the imposed displacement).}
\label{fig:num_asym}
\end{figure}
\section{Discussion}\label{}
The picture that emerges from our analysis is that large folds are antisymmetric and energetically cheap; it does not fit the observations of folds in capillary rafts or in lipid monolayers:
\begin{itemize}
\item folds in capillary rafts are often symmetric\cite{Protiere2010,Abkarian2013};
\item folds in lipid monolayers have a well defined length, i.e., creating several folds is favorable to enlarging one fold\cite{Gopal2006}.
\end{itemize}
These discrepancies indicate that bending and gravity alone do not account for the observed properties: other effects must be involved.
As a preliminary exploration of such effects, we discuss possible extensions of the simple model used here and their potential effect on the energy and dynamics of large folds.
The film weight plays a crucial role in capillary rafts, leading to fold instability and breaking\cite{Protiere2010,Abkarian2013}. Here, we discuss its effect on the shape and energy of the fold.
The film weight enters in the energy via an additional term
\begin{equation}
U\ind{weight}=M \int y(s)\,ds,
\end{equation}
where $M=\rho\ind{f}[g/(B\rho^3)]^{1/4}$ and $\rho\ind{f}$ is the effective mass of the film per unit area (taking into account its buoyancy).
It is noteworthy that for the very small deformations involved in the wrinkled phase, the gravitational energy is approximated by $U\ind{grav}\simeq(1/2)\int y(s)^2\,ds$, thus the film weight can be absorbed in a shift of the $y$ coordinate and has no effect.
For large folds, it is straightforward to incorporate it into the scaling analysis: it contributes to the downward symmetric fold (that is selected if $M>0$) as $U\ind{weight}\sim -M \Delta^2$ and it does not contribute to the antisymmetric fold.
This negative contribution may thus make the symmetric fold favorable and even unstable since its energy $U^\mathrm{sym}\sim \Delta^{1/3}-M\Delta^2$ decreases to $-\infty$ after its maximum at $\Delta\ind{c}\sim M^{-3/5}$.
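Indeed, setting the derivative of this estimate to zero (again omitting prefactors of order unity),
\begin{equation*}
\frac{d}{d\Delta}\left(\Delta^{1/3}-M\Delta^{2}\right)
=\frac{1}{3}\Delta^{-2/3}-2M\Delta=0
\quad\Longrightarrow\quad
\Delta\ind{c}\sim M^{-3/5}.
\end{equation*}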
Moreover, a tension $T\sim\Delta$ is induced in the film that will eventually break;
this behavior is observed in heavy capillary rafts~\cite{Protiere2010,Abkarian2013}.
On the other hand, for monolayers, a rough estimate gives $M\simeq 10^{-3}$ in dimensionless units, meaning that the weight of the film may have an effect only when $\Delta\ind{c}\simeq 100$, i.e. for very large folds. A strong effect of the weight of the monolayers on their folding is thus unlikely.
We turn to self-attraction, that can hold two parts of the fold together\cite{Holmes2010} and has been suggested as a mechanism to drive the folding of lipid monolayers\cite{Lee2008}.
It can be modelled by an energy gain $\Gamma$ per unit area of the film in contact with itself.
The adhesion energy $\Gamma$ may differ on either side of the film: not only can the two sides of the film be different, as is the case for lipid monolayers, but the interaction of the film with itself can also depend on the surrounding liquid.
The symmetric fold is the first to experience self-contact, thus it may be favored in the presence of self attraction. Self attraction leads to an energy gain $\Gamma\Delta$, giving $U^\mathrm{sym}\sim\Delta^{1/3}-\Gamma\Delta$. Thus, depending on $\Gamma$, the film may be unstable at self contact.
If self-adhesion prevents relative motion and fluid flow between two sections of the film in contact with each other, the size of the highly curved zone cannot decrease as $l'\sim\Delta^{-1/3}$ (this would require fluid flow from the highly curved zone to the upper reservoir) and remains equal to its value at self contact, $l'\sim 1$, resulting in the total energy $U^\mathrm{sym}\sim(1-\Gamma)\Delta$.
Let us now consider the antisymmetric fold, with self-attraction on the two sides: the energy gain is higher than in the symmetric case (although self contact occurs later), but relative motion of sticking parts is required, thus, if the upper fluid is sufficiently viscous, the symmetric fold is preferable.
If only one side experiences self attraction, the relative motion of sticking parts can be avoided, the energy gain is the same as in the symmetric case but the gravitational cost (of the liquid phase) is lower: the antisymmetric fold is still favored.
Lastly, energy dissipation may occur during the fold formation due to flow between nearly touching parts of the film (symmetric fold) or the relative motion of nearly touching parts of the film (antisymmetric fold).
When the symmetric fold grows, the highly curved zone shrinks as $l'\sim\Delta^{-1/3}$ under the effect of increasing hydrostatic pressure and an upward flow is generated in the narrow neck formed by the two parts of the film that are close to self contact. The radius of the highly curved zone shrinks slowly, thus the dissipation decreases as the fold size increases.
In the antisymmetric fold, the effect is slightly different: the length of the highly curved zones does not change, but proximal parts of the film are in relative motion (in Fig.~\ref{fig:num_asym}, inset $\Delta=35$, the top part of the trilayer moves left, the center part does not move and the bottom part moves right). A flow is needed to lubricate this relative motion, and the dissipation increases as $\Delta$, the length of the portions of the film in self contact.
Energy dissipation is thus lower in the symmetric fold, which is therefore favored if the formation of the fold is rapid. Once formed, the symmetric fold will eventually relax to the antisymmetric configuration.
A more precise analysis of the sources of energy dissipation would consider the effect of self-attraction, that may reduce the thickness of the fluid layer between parts of the film and thus increase dissipation.
\section{Conclusion}\label{}
We have investigated the behavior of large folds that may appear in thin interfacial films under uniaxial confinement.
Under the assumption that the system is controlled by bending and gravity~\cite{Pocivavsek2008}, we have shown that large folds are antisymmetric and that their energy, after passing through a maximum reached before self contact, decreases to a universal value well below this maximum (see Fig.~\ref{fig:num_asym}). The antisymmetric folds are energetically cheap -- one fold can absorb all the excess length at a finite cost -- and stable -- they do not unfold spontaneously at zero tension.
On the other hand, the energy of symmetric folds increases monotonically, and these folds are thus less favorable energetically.
Although antisymmetric folds may be the actual cause of the ``tri-layers'' observed, for instance, by \citet{Leahy2010}, their development from the wrinkled state has not been directly observed.
We have shown that they do not explain
the preferred fold size observed in compressed lipid monolayers\cite{Gopal2006}. Together with the kinetic puzzle encountered in trying to predict the folding timescale of monolayers~\cite{Oppenheimer2013}, this suggests that other interactions must be included in the model.
We discussed the effect of the weight of the film, its self-attraction, and energy dissipation,
finding that the symmetric fold may be favored in some cases. A more thorough study of these effects is however needed to draw quantitative predictions on the modifications of the folding behavior presented here.
On the other hand, Rivetti and Antkowiak\cite{Rivetti2013b,Rivetti2013} have observed the exact solutions of the model presented here~\cite{Diamant2011,Rivetti2013,Diamant2013}.
The large size system -- the characteristic length is $l\sim 1\,\text{cm}$ -- used in their experiment appears to be accurately described by bending and gravity only, and is thus likely to exhibit the folding behavior predicted here.
\section*{Acknowledgements}
V.D. thanks S. Proti\`ere and M. Abkarian for stimulating discussions about the shape and stability of folds appearing in confined granular rafts, and A. R. C. Romaguera and J. Paulsen for useful advice on the numerical computations.
The authors acknowledge financial support by the KECK foundation Award 37086 (V.D.), and NSF CAREER Award DMR-11-51780 (B.D.).
\section{Introduction}
Let $X_1,X_2$ and $X_3$ be three independent copies of a
regular diffusion on $[0,1]$ with absorbing boundaries.
Eventually, either at least two of the diffusions are absorbed
at the upper boundary of the interval or at least two are absorbed
at the lower boundary. In this way, the diffusions determine a
\emph{majority decision} between 0 and 1.
In order to identify this decision we run the three processes --not
simultaneously, but switching from one to another-- until we
observe at least two of them reaching a common boundary point. Our aim is
to switch between the processes in a way that minimises the total time
required to find the majority decision.
More precisely, we allocate our time between the three
processes according to a suitably adapted $[0,\infty)^3$-valued increasing
process $\mathcal{C}$ with $\sum_{i=1}^3 \mathcal{C}_i(t) = t$. Such a process is called a \emph{strategy} and $\mathcal{C}_i(t)$ represents
the amount of time spent observing $X_i$ after $t \geq 0$ units
of calendar time have elapsed. Accordingly, the process we observe is
\[
X^\mathcal{C} \overset{\mathrm{def}}{=} ( X_1(\mathcal{C}_1(t)), X_2(\mathcal{C}_2(t)), X_3(\mathcal{C}_3(t)); t \geq 0),
\]
and the \emph{decision time} $\tau^\mathcal{C}$ for the strategy $\mathcal{C}$ is the first
time that two components of $X^\mathcal{C}$ are absorbed at the same end point
of $[0,1]$, i.e.
\[
\tau^\mathcal{C} \overset{\mathrm{def}}{=} \inf\{t \geq 0: X^\mathcal{C}_i(t) = X^\mathcal{C}_j(t) \in \{0,1\}~\mathrm{for~distinct}~i,j\}.
\]
In this paper we find a strategy ${\mathcal{C}^\star}$ that minimises this time. Roughly speaking, ${\mathcal{C}^\star}$ runs whichever diffusion is currently observed to have
``middle value'' (see Lemma \ref{l:stratexists} for a precise description).
Our main theorem is that the decision time $\tau^{{\mathcal{C}^\star}}$ of this strategy is the \emph{stochastic minimum} of all possible decision times, i.e.
\begin{theorem}\label{t:stochminimality}
The decision time $\tau^{{\mathcal{C}^\star}}$ of the ``run the middle''
strategy ${\mathcal{C}^\star}$ given in Lemma \ref{l:stratexists} satisfies
\[
\mathop{\mathbb{P}}\nolimits(\tau^{{\mathcal{C}^\star}} > t) = \inf_{\mathcal{C}} \mathop{\mathbb{P}}\nolimits(\tau^\mathcal{C} > t), \;\mathrm{for~every~} t \geq 0,
\]
where the infimum is taken over all strategies and $\tau^\mathcal{C}$
is the corresponding decision time.
\end{theorem}
This model fits into the existing literature on optimal
dynamic resource allocation (see section \ref{ss:banditsetc} below for a brief review) but
our original motivation for studying it was to gain an understanding of the
problem of evaluating the ``recursive majority of three'' function
on random input. The latter can be described as follows -- take
the complete ternary tree on $n$ levels, place independent Bernoulli($p$) ($0 < p < 1$)
random variables on each of the $3^n$ leaves and recursively define the internal nodes to take the majority value of their children.
We wish to determine the value of the root node, but may only accrue knowledge
about the tree by sequentially observing leaves, paying \textsterling 1 each time for
the privilege. It remains an open problem to determine the strategy
with least expected cost, $r_n$. However, $r$ is
sub-multiplicative (i.e. $r_{n+m} \leq r_n r_m$ for any $n,m \in {\mathbb N}$)
and so, by Fekete's lemma
\[
\gamma \overset{\mathrm{def}}{=} \lim_{n\to\infty} r_n^{1/n}
\]
exists with the trivial bounds $2 \leq \gamma \leq 3$.
The value of $\gamma$, despite attracting the attention
of several investigators, is not known (see section \ref{ss:rmtrevisited}). Our idea was to study it by considering a continuous approximation to the large $n$
tree. It was this continuous approximation that inspired the diffusive model
introduced in this paper, but the reader is warned that
the results we present here do not shed light on the value of $\gamma$.
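To fix ideas at depth one: the simple (not necessarily optimal) strategy that reads two leaves and reads the third only if they disagree has expected cost $2+2p(1-p)$. The following illustrative snippet (not from the analysis below) checks this against simulation:

```python
import random

def depth_one_cost(p):
    """Expected number of leaf queries for majority-of-three at depth 1
    under the strategy: read two leaves, read the third only on disagreement."""
    return 2.0 + 2.0 * p * (1.0 - p)

def simulate(p, trials=100_000, seed=0):
    """Monte Carlo estimate of the same expected cost."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        a, b = rng.random() < p, rng.random() < p
        total += 2 if a == b else 3
    return total / trials
```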
However, the problem of switching between diffusions is worthwhile in
its own right. It has a similar flavour to the continuous multi-armed bandit problem but does not seem to have the same mathematical anatomy.
Nevertheless there is an
interesting structure to be revealed -- in particular we make use of the heuristic equation \eqref{e:heureq4} in order to evaluate the value function for the discounted problem,
and the same equation plays a central role in proving the much stronger
stochastic minimality property.
\subsection{Dynamic resource allocation}\label{ss:banditsetc}
The problem we have described is one of optimal dynamic resource allocation
in continuous time. The most widely studied example of this is
the continuous multi-armed bandit problem (see, for example, El Karoui and Karatzas \cite{elkaroui1994dap}, Mandelbaum and Kaspi \cite{kaspi98multi-armed}). Here, a gambler chooses
the rates at which he will pull the arms on different slot machines.
Each slot machine rewards the gambler at rates which follow a stochastic
process independent of the reward processes for the other machines.
These general bandit problems find application in several fields
where agents must choose between exploration and exploitation,
as typified by applications in economics and clinical trials.
An optimal strategy is surprisingly easy to describe. Associated to each machine
is a process known as the Gittins index, which may be interpreted as
the equitable surrender value. It is a celebrated theorem that at each instant, we should play whichever machine currently has the largest Gittins index. This is in direct analogy to the discrete time result of Gittins and Jones \cite{Git74}.
There is no optimal strategy of index type for our problem.
This reflects the fact that the reward processes
associated to running each of the diffusions are not independent -- once two
of the diffusions are absorbed, it may be pointless to run the third.
In \cite{mandelbaum1990osb}, a different dynamic allocation problem is considered. It has a similar
flavour in that one must choose the rates at which to run
two Brownian motions on $[0,1]$, and we stop once \emph{one} of the processes hits an endpoint.
The rates are chosen to maximise a terminal payoff, as specified by a function
defined on the boundary of the square (the generalisation of this problem to several
Brownian motions is considered in \cite{vanderbei92}). An optimal strategy is determined
by a partition of the square into regions of indifference, preference for the
first Brownian motion and preference for the second. However, there is no notion of a reward (cost) being \emph{accrued} as in our problem. So, our problem, in which
time is costly \emph{and} there is a terminal cost of minus infinity
for finishing on a part of $\partial \mathcal{S}$ which does not give a majority decision,
could be seen as lying between continuous bandits and the Brownian switching in \cite{mandelbaum1990osb}.
\subsection{Overview of paper}
The rest of the paper is laid out as follows. Section \ref{s:probstatement}
contains a precise statement of the problem and our assumptions and a clarification of
Theorem \ref{t:stochminimality}. The proof of this theorem begins in Section 2,
where we show that the Laplace transform of the distribution
of the decision time $\tau^\straTopt$
solves certain differential equations. This fact is then
used in Section 3 to show that the tail of $\tau^{{\mathcal{C}^\star}}$
solves, in a certain sense, the appropriate Hamilton-Jacobi-Bellman PDE.
From here, martingale optimality arguments complete the proof.
Section 4 shows the existence and uniqueness of the strategy ${\mathcal{C}^\star}$
and in Section 5 we explain the connection between the controlled process
and doubly perturbed diffusions. In the final section, we make a conjecture
about an extension to the model and then, to close, we ask a few questions
relating to the discrete recursive majority of three problem that
motivated us originally.
\subsection{Problem statement and solution}\label{s:probstatement}
We are given a complete probability space $(\Omega, {\mathcal F}, \mathop{\mathbb{P}}\nolimits)$ supporting three
independent It\^{o} diffusions $(X_i(t),\; t \geq 0)$, $i \in V = \{1,2,3\}$,
each of which is started in the unit interval $[0,1]$ and absorbed
at the endpoints. The diffusions all satisfy the same stochastic
differential equation
\begin{equation}\label{e:itosde}
dX_i(t) = \sigma(X_i(t))dB_i(t) + \mu(X_i(t))dt, \; t \geq 0,
\end{equation}
where $\sigma:[0,1]\to(0,\infty)$ is continuous, $\mu:[0,1]\to{\mathbb R}$ is Borel and
$(B_i(t),\; t \geq 0)$, $i \in V$ are independent Brownian motions.
We denote by $\mathcal{S}$ the unit cube $[0,1]^3$, by ${\mathbb R}_+$ the set of non-negative real numbers $[0,\infty)$ and $\preceq$ its usual partial order on ${\mathbb R}^3_+$.
It is assumed that we have a standard Markovian setup, i.e. there is a
family of probability measures $(\mathop{\mathbb{P}}\nolimits_x; x \in \mathcal{S})$ under which
$X(0) = x$ almost surely and the filtration ${\mathcal F}_i = \left( {\mathcal F}_i(t); t \geq 0\right)$
generated by $X_i$ is augmented to satisfy the usual conditions.
From here, we adopt the framework for continuous dynamic allocation
models proposed by Mandelbaum in \cite{mandelbaum1987cma}. This approach
relies on the theory of multiparameter time changes; the reader may consult
Appendix A for a short summary of this.
For $\eta \in {\mathbb R}_+^3$ we define the $\sigma$-algebra
\[
{\mathcal F}(\eta) \overset{\mathrm{def}}{=} \sigma({\mathcal F}_1(\eta_1), {\mathcal F}_2(\eta_2), {\mathcal F}_3(\eta_3)),
\]
which corresponds to the information revealed by running
$X_i$ for $\eta_i$ units of time. The family $({\mathcal F}(\eta); \eta \in {\mathbb R}_+^3)$ is called
a \emph{multiparameter filtration} and satisfies the ``usual conditions''
of right continuity, completeness and property (F4) of Cairoli and Walsh \cite{cairoli75}.
It is in terms of this filtration that we define the sense
in which our strategies must be adapted.
A \emph{strategy} is an ${\mathbb R}_+^3$-valued stochastic process
\[
\mathcal{C} = \left(\mathcal{C}_1(t), \mathcal{C}_2(t), \mathcal{C}_3(t); t \geq 0\right)
\]
such that
\begin{itemize}
\item[(C1)] for $i = 1,2,3$, $\mathcal{C}_i(0) = 0$ and $\mathcal{C}_i(\cdot)$ is nondecreasing,
\item[(C2)] for every $t \geq 0$, $\mathcal{C}_1(t) + \mathcal{C}_2(t) + \mathcal{C}_3(t) = t$ and
\item[(C3)] $\mathcal{C}(t)$ is a stopping \emph{point} of the multiparameter
filtration $({\mathcal F}(\eta); \eta \in {\mathbb R}_+^3)$, i.e.
\[
\{\mathcal{C}(t) \preceq \eta\} \in {\mathcal F}(\eta) \;\mathrm{for~every}\; \eta \in {\mathbb R}_+^3.
\]
\end{itemize}
\begin{remark}
In the language of multiparameter processes, $\mathcal{C}$ is an
\emph{optional increasing path} after Walsh \cite{walsh1981oip}.
\end{remark}
\begin{remark}
Conditions (C1) and (C2) together imply that for any $s \leq t$,
$|\mathcal{C}_i(t) - \mathcal{C}_i(s)| \leq t - s$. It follows that the
measure $dC_i$ is absolutely continuous and so it makes sense
to talk about the \emph{rate} $\dot \mathcal{C}_i(t) = d\mathcal{C}_i(t)/dt$, $t \geq 0$,
at which $X_i$ is to be run.
\end{remark}
The interpretation is that $\mathcal{C}_i(t)$ models the total amount of time spent running $X_i$ by calendar time $t$, and accordingly, the \emph{controlled process} $X^\mathcal{C}$ is defined
by
\[
X^\mathcal{C}(t) \overset{\mathrm{def}}{=} (X_1(\mathcal{C}_1(t)), X_2(\mathcal{C}_2(t)), X_3(\mathcal{C}_3(t))), \; t \geq 0.
\]
Continuity of $\mathcal{C}$ implies that $X^\mathcal{C}$ is a continuous process in $\mathcal{S}$ that is adapted to the (one parameter) filtration ${\mathcal F}^\mathcal{C}$ defined by
\[
{\mathcal F}^\mathcal{C}(t) \overset{\mathrm{def}}{=} \left\{ F \in {\mathcal F} : F \cap \{ \mathcal{C}(t) \preceq \eta \} \in {\mathcal F}(\eta)~\mathrm{for~every}~\eta \in {\mathbb R}^3_+ \right\},\; t \geq 0.
\]
The \emph{decision time} $\tau^\mathcal{C}$ for a time allocation strategy $\mathcal{C}$ is
the first time that $X^\mathcal{C}$ hits the \emph{decision set}
\[
D \overset{\mathrm{def}}{=} \{(x_1,x_2,x_3) \in \mathcal{S} : x_{i} = x_{j} \in \{0,1\}~\mathrm{for~some}~1 \leq i < j \leq 3 \}
\]
The objective is to find a strategy whose associated decision time is a stochastic minimum.
Clearly, it is possible to do very badly by only
ever running one of the processes as a decision may never be reached (these strategies
do not need to be ruled out in our model).
A more sensible thing to do is to pick two of the processes and run them until
they are absorbed. Only if they disagree do we run the third. This strategy is much better
than the pathological one (the decision time is almost surely finite!) but we can do better.
We do not think it is obvious what the best strategy is.
In the situation that $X_1(0)$ is close to zero and $X_3(0)$ close to one, it is probable that $X_1$ and $X_3$ will be absorbed at different end points of $[0,1]$.
So, if $X_2(0)$ is close to $0.5$ say, it seems likely that $X_2$ will be
pivotal and so we initially run it, even though $X_1$ and $X_3$ might be
absorbed much more quickly. Our guess is to run the diffusion
whose value lies between that of the other two processes. But if all the
processes are near one, it is not at all clear this is the best thing to do.
For example, one could be tempted to run the process with largest value in the hope that it will give a decision very quickly.
It turns out that we must always ``run the middle''. That is, if, at any moment $t \geq 0$, we have
$X^\mathcal{C}_1(t) < X_2^\mathcal{C}(t) < X^\mathcal{C}_3(t)$, then we should run $X_2$ exclusively until it hits $X^\mathcal{C}_1(t)$ or $X^\mathcal{C}_3(t)$. We need not concern ourselves with what happens when the processes are equal. This is because there is, almost surely, only one strategy that runs the middle of the three
diffusions when they are separated.
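As an illustration, the strategy can be simulated with a simple Euler scheme (an illustrative sketch, not used in the proofs; the time step and the absorption-by-clipping rule are simulation conveniences):

```python
import numpy as np

def run_the_middle(x0, dt=1e-3, seed=0):
    """Monte Carlo sketch of the 'run the middle' strategy: three driftless
    Brownian motions on [0,1], absorbed at the endpoints, with only the
    middle-valued process advanced at each step.
    Returns (majority decision, elapsed calendar time)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    t = 0.0
    while True:
        # stop once two processes are absorbed at the same endpoint
        for b in (0.0, 1.0):
            if np.count_nonzero(x == b) >= 2:
                return int(b), t
        # the middle-valued process cannot be absorbed before a decision,
        # so it is always available to run
        i = np.argsort(x)[1]
        x[i] += np.sqrt(dt) * rng.standard_normal()
        x[i] = 0.0 if x[i] <= 0.0 else (1.0 if x[i] >= 1.0 else x[i])
        t += dt
```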
To state this result, let us say that for a strategy $\mathcal{C}$,
component $\mathcal{C}_i$ increases at time
$t \geq 0$ if $\mathcal{C}_i(u) > \mathcal{C}_i(t)$ for every $u > t$.
\begin{lemma}\label{l:stratexists}
There exists a unique time allocation strategy ${\mathcal{C}^\star}$
such that
$\mathcal{C}^\star_i$ increases only at times $t \geq 0$
such that $X_j^{{\mathcal{C}^\star}}(t) \leq X_i^{{\mathcal{C}^\star}}(t) \leq X_k^{{\mathcal{C}^\star}}(t)$
under some labelling $\{i,j,k\} = V$ of the processes.
If $\mathcal{C}$ is any other strategy with this property, then
$\mathcal{C}(t) = {\mathcal{C}^\star}(t)$ for all $t \geq 0$ almost surely (with respect
to any of the measures $\mathop{\mathbb{P}}\nolimits_x$).
\end{lemma}
This lemma is proved in section \ref{s:stratexists} and Theorem \ref{t:stochminimality} states
that ${\mathcal{C}^\star}$ gives a stochastic minimum for the decision time.
In the sequel, the drift term $\mu$ is assumed to vanish.
This is not a restriction, for if a drift is present we
may eliminate it by rewriting the problem in natural scale.
\section{The Laplace transform of the distribution of $\tau^\straTopt$}\label{s:laplacetransform}
The proof of Theorem \ref{t:stochminimality} begins by computing
the Laplace transform
\[
\hat v_r(x) \overset{\mathrm{def}}{=} \mathop{\mathbb{E}}\nolimits_x \left(\exp(-r \tau^\straTopt) \right),
\]
of the distribution of the decision time.
This non-trivial task is carried out using the ``guess and verify''
method. Loosely, the guess is inspired by comparing the payoffs of doing
something optimal against doing something nearly optimal. This leads to
a surprisingly tractable heuristic equation from which $\hat v_r$ can
be recovered.
The argument which motivates the heuristic proceeds as follows.
From any strategy $\mathcal{C}$ it is possible to construct (but we omit the details)
another strategy, $\hat \mathcal{C}$, that begins by running $X_1$ for some small time $h > 0$
(i.e. $\hat \mathcal{C}(t) = (t,0,0)$ for $0 \leq t \leq h$) and then does not run
$X_1$ again until $\mathcal{C}_1$ exceeds $h$, if ever.
In the meantime, $\hat \mathcal{C}_2$ and $\hat \mathcal{C}_3$ essentially follow $\mathcal{C}_2$ and
$\mathcal{C}_3$ with the effect that once $\mathcal{C}_1$ exceeds $h$, $\mathcal{C}$ and $\hat \mathcal{C}$ coincide.
This means that if the amount of time, $\mathcal{C}_1(\tau^\mathcal{C})$, that $\mathcal{C}$ spends
running $X_1$ is at least $h$, then $\tau^{\hat \mathcal{C}}$ and $\tau^{\mathcal{C}}$ are identical.
On the other hand, if $\mathcal{C}_1(\tau^\mathcal{C}) < h$, then $\hat \mathcal{C}$ runs $X_1$
for longer than $\mathcal{C}$, with some of the time $\hat \mathcal{C}$ spends
running $X_1$ being wasted. In fact, outside a set with probability $o(h)$ we have
\begin{equation}\label{e:tauhat}
\tau^{\hat \mathcal{C}} = \tau^\mathcal{C} + \left(h - T_1\right)^+,
\end{equation}
where $T_i = \mathcal{C}_i(\tau^\mathcal{C})$ is the amount of time that $\mathcal{C}$ spends running
$X_i$ while determining the decision.
We compare $\hat \mathcal{C}$ with the strategy that runs $X_1$ for time $h$ and
then behaves \emph{optimally}. If we suppose that ${\mathcal{C}^\star}$ itself is optimal
and recall that $\hat v_r$ is the corresponding payoff, this yields the inequality
\begin{equation}\label{e:heurineq}
\mathop{\mathbb{E}}\nolimits_x \left( \exp( -r \tau^{\hat \mathcal{C}}) \right) \leq \mathop{\mathbb{E}}\nolimits_x\left( \exp(-rh) \hat v_r(X_1(h),X_2(0), X_3(0)) \right).
\end{equation}
Now, we take $\mathcal{C} = {\mathcal{C}^\star}$ and use \eqref{e:tauhat} to
see that the left hand side of \eqref{e:heurineq}
is equal to
\[
\mathop{\mathbb{E}}\nolimits_x \left( \exp( -r ( \tau^{\mathcal{C}^\star} + (h - T_1)^+)) \right) + o(h),
\]
which, in turn, may be written as
\begin{equation}\label{e:heur1}
\hat v_r(x) + \mathop{\mathbb{E}}\nolimits_x\left( \left(\exp(-r(\tau^\straTopt + h)) - \exp(-r\tau^\straTopt)\right)\Indi{T_1 = 0}\right) + o(h).
\end{equation}
On the other hand, if we assume $\hat v_r$ is suitably smooth, the right hand side of \eqref{e:heurineq} is
\begin{equation}\label{e:heur2}
\hat v_r(x) + h \left(\mathcal{G}^1 - r \right)\hat v_r(x) + o(h),\; x_1 \in (0,1),
\end{equation}
where we have introduced the differential operator $\mathcal{G}^i$ defined by
\[
\mathcal{G}^i f(x) \overset{\mathrm{def}}{=} \frac{1}{2}\sigma^2(x_i) \frac{\partial^2 }{\partial x_i^2} f(x),\; x_i \in (0,1).
\]
After substituting these expressions back into \eqref{e:heurineq} and
noticing that there was nothing special about choosing $X_1$ to be the process that we
moved first, we see that
\begin{equation}\label{e:heurineq1}
\mathop{\mathbb{E}}\nolimits_x\left(\exp(-r(\tau^\straTopt + h)) - \exp(-r\tau^\straTopt); T_i = 0\right) \leq h \left(\mathcal{G}^i - r \right)\hat v_r(x) + o(h),
\end{equation}
for each $x_i \in (0,1)$ and $i \in V$.
Dividing both sides by $h$ and taking the limit $h \to 0$ yields the inequality
\begin{equation}\label{e:heur3}
\left(\mathcal{G}^i - r \right)\hat v_r(x)
\geq - r \mathop{\mathbb{E}}\nolimits_x\left(\exp(-r\tau^\straTopt); T_i = 0\right).
\end{equation}
Now, in some simpler, but nevertheless related problems, we can
show that \eqref{e:heur3} is true with an \emph{equality} replacing the inequality.
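Consider, for instance, the one-dimensional analogue in which a single diffusion $X$ is observed and the decision is made when it exits $(0,1)$. There the decision time is $\rho = \inf\{t \geq 0 : X(t) \notin (0,1)\}$, the transform $\hat v_r(x) = \mathop{\mathbb{E}}\nolimits_x\left(\exp(-r\rho)\right)$ satisfies $\left(\tfrac{1}{2}\sigma^2(x)\frac{d^2}{dx^2} - r\right)\hat v_r(x) = 0$ on $(0,1)$, and the time spent running $X$ equals $\rho$, which is strictly positive from any interior starting point; hence both sides of the analogue of \eqref{e:heur3} vanish and equality holds.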
This prompts us to \emph{construct} a function
satisfying \eqref{e:heur3} with equality. Our effort culminates in
\begin{lemma}\label{l:heurfnexists} There exists a continuous function $h_r: \mathcal{S} \to {\mathbb R}$ such that
\begin{itemize}
\item $h_r(x) = 1$ for $x \in D$,
\item the partial derivatives $\frac{\partial^2 h_r}{\partial x_i \partial x_j}$ exist
and are continuous on $\{ x \in \mathcal{S} \backslash D : x_i, x_j \in (0,1)\}$ (for any $i,j \in V$ not necessarily distinct) and
\item furthermore, for each $i \in V$ and $x \notin D$ with $x_i \in (0,1)$,
\[
\left(\mathcal{G}^i - r \right)h_r(x)
= - r \hat f^i_r(x),
\]
where $\hat f^i_r(x) \overset{\mathrm{def}}{=} \mathop{\mathbb{E}}\nolimits_x\left(\exp(-r\tau^\straTopt)\Indi{T_i = 0}\right)$.
\end{itemize}
\end{lemma}
\begin{proof}
We begin by factorising $\hat f_r^i(x)$ into a product of Laplace transforms of
diffusion exit time distributions. This factorisation is useful as it allows us to construct $h$ by solving a series of ordinary
differential equations. Note that in this proof, we will typically suppress the $r$ dependence for notational convenience.
The diffusions all obey the same stochastic differential equation and so
we lose nothing by assuming that the components of $x$ satisfy
$0 \leq x_1 \leq x_2 \leq x_3 \leq 1$. Further, we suppose that $x \notin D$
because otherwise $T_i = 0$ $\mathop{\mathbb{P}}\nolimits_x$-almost-surely.
In this case, $T_2 > 0$ $\mathop{\mathbb{P}}\nolimits_x$-almost surely, because for any $t > 0$, there exist times $t_1, t_3 < t/2$
at which $X_1(t_1) < x_1 \leq x_2 \leq x_3 < X_3(t_3)$ and so it is certain our
strategy allocates time to $X_2$. It follows that $\hat f^2(x)$ vanishes.
Now consider $\hat f^1$. There is a $\mathop{\mathbb{P}}\nolimits_x$-negligible set off which
$T_1 = 0$ occurs if, and only if,
both of the independent diffusions $X_2$ and $X_3$ exit the interval
$(X_1(0),1)$ at the upper boundary. Furthermore, $\tau^\straTopt$ is just
the sum of the exit times. That is, if
\begin{equation}\label{e:xihita}
\mathfrak{m}^{(i)}_a \overset{\mathrm{def}}{=} \inf\{t > 0: X_i(t) = a\}, \; a \in I, i \in V,
\end{equation}
then
\[
\hat f^1(x) = \mathop{\mathbb{E}}\nolimits_x\left(\exp(-r (\mathfrak{m}^{(2)}_1 + \mathfrak{m}^{(3)}_1))
\Indi{ \mathfrak{m}^{(2)}_1 < \mathfrak{m}^{(2)}_{x_1}, \mathfrak{m}^{(3)}_1 < \mathfrak{m}^{(3)}_{x_1}}\right).
\]
Using independence of $X_2$ and $X_3$, we have the factorisation
\[
\hat f^1(x) = \prod_{i=2}^3 \mathop{\mathbb{E}}\nolimits_x\left(\exp(-r \mathfrak{m}^{(i)}_1)\Indi{\mathfrak{m}^{(i)}_1 < \mathfrak{m}^{(i)}_{x_1}}\right).
\]
Note that our assumption $x \notin D$ guarantees that $x_1 < 1$.
To write this more cleanly, let us introduce, for $0 \leq a < b \leq 1$,
the functions
\[
h^+_{a,b}(u) \overset{\mathrm{def}}{=} \mathop{\mathbb{E}}\nolimits_{u}\left(\exp(-r \mathfrak{m}^{(1)}_b);\mathfrak{m}^{(1)}_b < \mathfrak{m}^{(1)}_{a}\right),
\]
where the expectation operator $\mathop{\mathbb{E}}\nolimits_u$ corresponds to the (marginal) law of
$X_1$ when it begins at $u \in [0,1]$.
The diffusions obey the same SDE, and so
\[
\hat f^1(x) = h^+_{x_1,1}(x_2)h^+_{x_1,1}(x_3).
\]
Similarly,
\[
\hat f^3(x) = h^-_{0,x_3}(x_1)h^-_{0,x_3}(x_2)
\]
where
\[
h^-_{a,b}(u) \overset{\mathrm{def}}{=} \mathop{\mathbb{E}}\nolimits_u\left(\exp(-r \mathfrak{m}^{(1)}_a);\mathfrak{m}^{(1)}_a < \mathfrak{m}^{(1)}_{b}\right).
\]
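As a numerical sanity check on the definition of $h^\pm_{a,b}$, one can specialise to $\sigma \equiv 1$ (standard Brownian motion) and compare a crude Monte Carlo estimate against the classical closed form $h^+_{a,b}(u) = \sinh(\sqrt{2r}\,(u-a))/\sinh(\sqrt{2r}\,(b-a))$. The sketch below is ours: the function names, discretisation and seed are illustrative assumptions, not part of the text.

```python
import math
import random

def h_plus_ab_exact(u, a, b, r):
    # Classical formula for E_u[exp(-r m_b); m_b < m_a] when sigma == 1,
    # built from the solutions exp(+/- sqrt(2r) x) of (1/2) f'' = r f.
    s = math.sqrt(2.0 * r)
    return math.sinh(s * (u - a)) / math.sinh(s * (b - a))

def h_plus_ab_mc(u, a, b, r, n_paths=8000, dt=2e-3, seed=1):
    # Crude Euler walk: run a Brownian path from u until it leaves (a, b),
    # then average exp(-r * exit_time) over the paths that exit at b.
    rng = random.Random(seed)
    sd = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x, t = u, 0.0
        while a < x < b:
            x += rng.gauss(0.0, sd)
            t += dt
        if x >= b:
            total += math.exp(-r * t)
    return total / n_paths

exact = h_plus_ab_exact(0.5, 0.0, 1.0, 1.0)
approx = h_plus_ab_mc(0.5, 0.0, 1.0, 1.0)
print(exact, approx)  # agreement up to Monte Carlo and discretisation error
```

The Euler walk overshoots the boundary slightly, so the match is only up to a discretisation bias of order $\sqrt{dt}$ on top of the Monte Carlo error.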
We take, as building blocks for the construction of $h$, the functions $h^\pm_{0,1}$, abbreviated to $h^\pm$ in the sequel. The regularity of each of our (non-singular) diffusions
together with the Markov property shows that if $a < b$, $u \in [a,b]$ then
\[
h^+(u) = h^+_{a,b}(u)h^+(b) + h^-_{a,b}(u)h^+(a)
\]
and
\[
h^-(u) = h^+_{a,b}(u)h^-(b) + h^-_{a,b}(u)h^-(a).
\]
Solving these equations gives
\begin{equation}\label{e:hpab}
h^+_{a,b}(u) = \frac{h^-(a)h^+(u) - h^-(u)h^+(a)}{h^-(a)h^+(b)- h^-(b)h^+(a)}
\end{equation}
and
\begin{equation}\label{e:hmab}
h^-_{a,b}(u) = \frac{h^-(u)h^+(b) - h^-(b)h^+(u)}{h^-(a)h^+(b)- h^-(b)h^+(a)}.
\end{equation}
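Formulas \eqref{e:hpab} and \eqref{e:hmab} can be checked against closed forms in the illustrative special case $\sigma \equiv 1$, $r = 1$, where $h^\pm$ are ratios of hyperbolic sines. The sketch below (our names and parameters) verifies both the Markov-property decompositions above and the two quotient formulas.

```python
import math

S = math.sqrt(2.0)  # sqrt(2r) with r = 1; sigma == 1 (Brownian motion) assumed throughout

def hp(u):  # h^+ = h^+_{0,1}: discounted exit of (0,1) at the upper boundary
    return math.sinh(S * u) / math.sinh(S)

def hm(u):  # h^- = h^-_{0,1}: discounted exit of (0,1) at the lower boundary
    return math.sinh(S * (1.0 - u)) / math.sinh(S)

def hp_ab(u, a, b):  # right-hand side of (e:hpab)
    return (hm(a) * hp(u) - hm(u) * hp(a)) / (hm(a) * hp(b) - hm(b) * hp(a))

def hm_ab(u, a, b):  # right-hand side of (e:hmab)
    return (hm(u) * hp(b) - hm(b) * hp(u)) / (hm(a) * hp(b) - hm(b) * hp(a))

a, b = 0.2, 0.9
for u in [0.2, 0.35, 0.5, 0.75, 0.9]:
    # the Markov-property decompositions used to derive (e:hpab)/(e:hmab)...
    assert abs(hp(u) - (hp_ab(u, a, b) * hp(b) + hm_ab(u, a, b) * hp(a))) < 1e-12
    assert abs(hm(u) - (hp_ab(u, a, b) * hm(b) + hm_ab(u, a, b) * hm(a))) < 1e-12
    # ...and the direct sinh formulas for the sub-interval (a, b)
    assert abs(hp_ab(u, a, b) - math.sinh(S * (u - a)) / math.sinh(S * (b - a))) < 1e-12
    assert abs(hm_ab(u, a, b) - math.sinh(S * (b - u)) / math.sinh(S * (b - a))) < 1e-12
print("identities verified")
```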
The functions $h^+$ and $h^-$ are $C^2$ on $(0,1)$ and continuous on $[0,1]$.
Furthermore, they solve $\mathcal{G} f = rf$ where $\mathcal{G} f \overset{\mathrm{def}}{=} \frac{1}{2}\sigma^2(\cdot) f^{\prime \prime}$.
In light of this, and remembering our assumption that the components of $x$ are ordered,
we will look for functions $\lambda^+$ and $\lambda^-$ of $x_1$ and $x_3$ such that
\begin{equation}\label{e:hinlambda}
h(x) = \lambda^-(x_1,x_3)h^-(x_2) + \lambda^+(x_1,x_3)h^+(x_2)
\end{equation}
has the desired properties. For other values of $x \notin D$ we will define $h$ by symmetry.
To get started, use \eqref{e:hpab} and \eqref{e:hmab} to see that $\hat f^i(x)$ has a linear
dependence on $h^+(x_2)$ and $h^-(x_2)$, that is, there are functions $\psi_\pm^i$ such that
\[
\hat f^i(x) = \psi^i_-(x_1,x_3)h^-(x_2) + \psi^i_+(x_1,x_3)h^+(x_2).
\]
For example,
\[
\psi^1_+(x_1,x_3) = \frac{h^-(x_1)h^+(x_3) - h^-(x_3)h^+(x_1) }{h^-(x_1)}
\]
and
\[
\psi^1_-(x_1,x_3) = - \frac{h^+(x_1)}{h^-(x_1)} \psi^1_+(x_1,x_3).
\]
Linearity of the operator $\left( \mathcal{G}^i - r \right)$ and linear independence of
$h^-$ and $h^+$ then show that the requirement $(\mathcal{G}^i - r)h = -r\hat f^i$
boils down to requiring
\[
\left( \mathcal{G}^i - r\right)\lambda_\pm = -r\psi^i_\pm.
\]
Of course, the corresponding homogeneous (eigenfunction) problems are solved with
linear combinations of $h^+$ and $h^-$ -- what remains is the essentially
computational task of finding particular integrals and some constants.
This endeavour begins with repeated application of Lagrange's variation of parameters
method, determining constants using the boundary conditions $h(x) = 1$
for $x \in D$ where possible. Eventually we are left wanting
only some real constants, an unknown function of $x_1$ and another of $x_3$. At this point we appeal
to the ``smooth pasting'' conditions
\begin{equation}\label{e:smoothpasting}
\left.\left( \frac{\partial}{\partial x_i} - \frac{\partial}{\partial x_j} \right)h\right|_{x_i = x_j} = 0, \; i,j \in V.
\end{equation}
After some manipulation, we are furnished with differential equations for our unknown functions
and equations for the constants. These we solve with little difficulty and, in doing so,
determine that
\begin{eqnarray*}
\lambda_-(x_1,x_3) &=& h^-(x_1) - h^+(x_1)h^+(x_3)\int_{x_3}^1 \frac{\frac{\partial}{\partial u} h^-(u)}{h^+(u)^2} du \\
& & + h^-(x_1) h^+(x_3) \int_{0}^{x_1} \frac{\frac{\partial}{\partial u} h^+(u)}{h^-(u)^2} du \\
& & + \frac{2r h^-(x_3) }{\phi} \int_0^{x_1}\left( \frac{h^+(u)}{\sigma(u)h^-(u)} \right)^2 \left( h^-(x_1)h^+(u)- h^-(u)h^+(x_1) \right) du,
\end{eqnarray*}
and
\begin{eqnarray*}
\lambda_+(x_1,x_3) &= & h^+(x_3) + h^-(x_1) h^-(x_3) \int_{0}^{x_1} \frac{\frac{\partial}{\partial u} h^+(u)}{h^-(u)^2} du \\
& & - h^-(x_1)h^+(x_3)\int_{x_3}^1 \frac{\frac{\partial}{\partial u} h^-(u)}{h^+(u)^2} du \\
&& + \frac{2r h^+(x_1) }{\phi} \int_{x_3}^{1}\left( \frac{h^-(u)}{\sigma(u)h^+(u)} \right)^2 \left( h^-(u)h^+(x_3) - h^-(x_3)h^+(u) \right) du,
\end{eqnarray*}
where $\phi$ denotes the constant value of $h^-(u)\frac{\partial}{\partial u} h^+(u) - h^+( u)\frac{\partial}{\partial u} h^-(u)$.
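The constant $\phi$ appearing above is a Wronskian-type invariant of the equation $\mathcal{G}f = rf$. For the illustrative special case $\sigma \equiv 1$, $r = 1$ (the closed forms and names below are our assumptions), its constancy can be checked directly:

```python
import math

S = math.sqrt(2.0)  # sqrt(2r) with r = 1; sigma == 1 (Brownian motion) assumed

def hp(u):
    return math.sinh(S * u) / math.sinh(S)

def hm(u):
    return math.sinh(S * (1.0 - u)) / math.sinh(S)

def dhp(u):  # exact derivative of the sinh expression for h^+
    return S * math.cosh(S * u) / math.sinh(S)

def dhm(u):  # exact derivative of the sinh expression for h^-
    return -S * math.cosh(S * (1.0 - u)) / math.sinh(S)

def phi(u):  # h^-(u) d/du h^+(u) - h^+(u) d/du h^-(u)
    return hm(u) * dhp(u) - hp(u) * dhm(u)

# phi should not depend on u; for these h^pm it collapses to S / sinh(S).
values = [phi(u) for u in [0.1, 0.3, 0.5, 0.7, 0.9]]
assert max(values) - min(values) < 1e-12
assert abs(values[0] - S / math.sinh(S)) < 1e-12
print("phi is constant:", values[0])
```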
These expressions for $\lambda^\pm$ are valid for any $x$ not lying in $D$ with weakly ordered components; so $h$ is defined outside of $D$ via \eqref{e:hinlambda}. Naturally,
we define $h$ to be equal to one on $D$.
Having defined $h$, we now show that it is continuous and has the required partial derivatives.
Continuity is inherited from $h^+$ and $h^-$ on the whole of $\mathcal{S}$ apart from at
the exceptional points $(0,0,0)$ and $(1,1,1)$ in $D$. For these two points, a few lines of
justification are needed. We shall demonstrate continuity at the origin; continuity at the upper right hand
corner $(1,1,1)$ follows by the same argument. Let $x^n$ be a sequence of points in $\mathcal{S}$ that converge
to $(0,0,0)$; we must show $h(x^n) \to h(0,0,0) = 1$. Without loss
of generality assume that
the components of $x^n$ are ordered $x^n_1 \leq x^n_2 \leq x^n_3$ and that
$x^n$ is not in $D$ (if $x^n \in D$, then $h(x^n) = 1$ and it may be discarded from the sequence).
By examining the expressions for $\lambda^\pm$, we see that it is sufficient to check that
\[
(i)\;\; \lambda^-(x^n_1, x^n_3) \to 1 \; \mathrm{and} \; (ii) \;\; h^+(x^n_2)\lambda^+(x^n_1, x^n_3) \to 0.
\]
For (i), the only doubt is whether the term involving the first integral in the expression
for $\lambda^-$ vanishes in the limit. That it
does can be proved with the Dominated Convergence Theorem. The term is
\begin{eqnarray*}
h^+(x^n_1)h^+(x^n_3) \int_{x^n_3}^1 \frac{\frac{\partial}{\partial u} h^-(u)}{h^+(u)^2} du
= \int_{0}^1 \Indi{u > x^n_3} \frac{h^+(x^n_1)h^+(x^n_3)}{h^+(u)^2} \frac{\partial}{\partial u} h^-(u) du.
\end{eqnarray*}
The ratio $\frac{h^+(x^n_1)h^+(x^n_3)}{h^+(u)^2}$ is bounded above by one when $u > x^n_3 \geq x^n_1$
since $h^+$ is increasing. Further, the derivative of $h^-$ is integrable and so
the integrand is dominated by an integrable function, and converges to zero.
For the second limit (ii), there are two terms to check. Firstly, that
\[
h^+(x^n_2)h^-(x^n_1)h^+(x^n_3)\int_{x^n_3}^1 \frac{\frac{\partial }{\partial u} h^-(u)}{h^+(u)^2} du \to 0
\]
follows from essentially the same argument as before. The second term of concern is
\[
h^+(x^n_1)\int_{x^n_3}^{1}\left( \frac{h^-(u)}{\sigma(u)h^+(u)} \right)^2 \left( h^-(u)h^+(x^n_3) - h^-(x^n_3)h^+(u) \right) du.
\]
Again, one may write this as the integral of a dominated function (recalling
that $\sigma$ is bounded away from zero) that converges to zero. Thus, the integral above converges to zero as required.
Now that we have established continuity of $h$, we can begin tackling the partial derivatives.
When the components of $x$ are distinct, differentiability comes from that of our building blocks $h^+$
and $h^-$. It is at the switching boundaries, when two or more components are equal,
where we have to be careful. The key here is to remember that we constructed $h$ to
satisfy the smooth pasting property \eqref{e:smoothpasting} -- this allows us to show that the one-sided partial derivatives are equal in $(0,1)$. For example, provided the limit exists,
\[
\left.\frac{\partial}{\partial x_1}h(x_1,x_2,x_3)\right|_{x_1 = x_2=x_3} =
\lim_{\epsilon \to 0} \frac{1}{2\epsilon}\left( h(x_1 + \epsilon, x_1,x_1) - h(x_1 - \epsilon,x_1,x_1) \right).
\]
Using \eqref{e:hinlambda} and the differentiability of $\lambda$, the limit from above is
\[
\left.\frac{\partial}{\partial x_3}\left( \lambda^-(x_1,x_3)h^-(x_2) + \lambda^+(x_1,x_3)h^+(x_2) \right)\right|_{x_1 = x_2=x_3}.
\]
This is equal to the limit from below,
\[
\left.\frac{\partial}{\partial x_1}\left( \lambda^-(x_1,x_3)h^-(x_2) + \lambda^+(x_1,x_3)h^+(x_2) \right)\right|_{x_1 = x_2=x_3},
\]
by the smooth pasting property.
The other first order partial derivatives exist by similar arguments.
Note that we do not include in our hypothesis the requirement that these first order partial
derivatives exist at the boundary points of $I$.
The second order derivatives are only slightly more laborious to check.
As before it is at switching boundaries where we must take care in checking that
the limits from above and below agree. For the partial derivatives
$\frac{\partial^2}{\partial x_i^2}h$ at a point $x$ not in $D$ with $x_i \in (0,1)$,
it is continuity of $\hat f^i$ at $x$ that allows us to equate the limits
and show that the result is continuous, rather than smooth pasting.
For the mixed partial derivatives, a priori, we do not have this helping hand.
Instead, when two components are equal, we can always assume that
one is the ``middle'' component that enters through the terms $h^+$ and
$h^-$ in \eqref{e:hinlambda} while the other is an ``upper'' or ``lower'' term that
enters through $\lambda^+$ and $\lambda^-$. This makes it easy to check that the
partial derivatives of $h$ commute at $x$. Now we can use the smooth pasting
condition \eqref{e:smoothpasting} to show that, for example,
\[
\left. \frac{\partial^2 }{\partial x_3 \partial x_2}h(x_1,x_2,x_3)\right|_{x_1=x_2=x_3} =
\left.\frac{\partial^2 }{\partial x_1 \partial x_2}h(x_1,x_2,x_3)\right|_{x_1=x_2=x_3}.
\]
Thus, $h$ satisfies all the properties we required.
\end{proof}
From here, we need a verification lemma to check that the function we constructed really is equal
to $\hat v_r$. The following result does just that, and, as a corollary, shows that $\hat v_r$ is maximal among Laplace transforms of decision time distributions
(note that this is weaker than the stochastic minimality claimed in Theorem \ref{t:stochminimality}).
The result is essentially that Bellman's principle of optimality holds
(specialists in optimal control will notice that
the function we constructed in Lemma \ref{l:heurfnexists}
satisfies the Hamilton-Jacobi-Bellman PDE).
\begin{lemma}\label{l:vhatverification} Suppose that $h_r: \mathcal{S} \to {\mathbb R}$ satisfies
\begin{itemize}
\item $h_r$ is continuous on $\mathcal{S}$,
\item for $i,j \in V$, $\frac{\partial^2 h_r}{\partial x_i \partial x_j}$ exists and is continuous on $\{ x \in \mathcal{S} \backslash D: x_i, x_j \in (0,1)\}$,
\item $h_r(x) = 1$ for $x \in D$, and
\item $\left(\mathcal{G}^i - r\right)h_r(x) \leq 0$ for each $i \in V$ and $x \notin D$ with $x_i \in (0,1)$.
\end{itemize}
Then,
\[
h_r(x) \geq \sup_{\mathcal{C}} \mathop{\mathbb{E}}\nolimits_x\left(\exp(-r \tau^{\mathcal{C}})\right).
\]
Furthermore, if $\left(\mathcal{G}^i - r\right)h_r(x)$ vanishes whenever $x_j \leq x_i \leq x_k$ (under some labelling of the indices), then
\[
h_r(x) = \hat v_r(x) = \mathop{\mathbb{E}}\nolimits_x\left(\exp(-r \tau^\straTopt)\right).
\]
\end{lemma}
\begin{proof}
Let $\mathcal{C}$ be an arbitrary strategy and define the function $g:\mathcal{S}\times [0,\infty) \to {\mathbb R}$
by $g(x,t) \overset{\mathrm{def}}{=} \exp(-rt)h_r(x)$. Then, by hypothesis, $g$ is $C^{2,1}$ on
$(0,1)^3 \times [0,\infty)$.
Thus, if $\mathrm{dist}$ denotes Euclidean distance and $\rho_n \overset{\mathrm{def}}{=} \inf\{t \geq 0: \mathrm{dist}(X^\mathcal{C}(t), \partial \mathcal{S}) < n^{-1}\}$, Ito's
formula shows that,
\begin{eqnarray*}
g(X^\mathcal{C}(\rho_n),\rho_n) - g(X^\mathcal{C}(0),0) &=& \sum_i \int_0^{\rho_n} \frac{\partial}{\partial x_i}g(X^\mathcal{C}(s),s)dX^\mathcal{C}_i(s) \\
&& + \int_0^{\rho_n} \frac{\partial}{\partial t}g(X^\mathcal{C}(s),s)ds\\
&& + \frac{1}{2} \sum_{i,j} \int_0^{\rho_n} \frac{\partial^2}{\partial x_i\partial x_j}g(X^\mathcal{C}(s),s)d[X^\mathcal{C}_i,X^\mathcal{C}_j]_s.
\end{eqnarray*}
Theorem \ref{t:MPCSM} implies
$[X^\mathcal{C}_i]_s = [X_i]_{\mathcal{C}_i(s)}$ and that
$X^\mathcal{C}_i$ and $X^\mathcal{C}_j$ are orthogonal martingales. Hence,
using absolute continuity of $\mathcal{C}$ and Proposition 1.5, Chapter V
of \cite{revuzyor},
\begin{eqnarray*}
g(X^\mathcal{C}(\rho_n),\rho_n) - g(X^\mathcal{C}(0),0) &=& \sum_i \int_0^{\rho_n} \frac{\partial}{\partial x_i}g(X^\mathcal{C}(s),s)dX^\mathcal{C}_i(s) \\
&& + \sum_i \int_0^{\rho_n} \exp(-rs) \left(\mathcal{G}^i - r \right) h(X^\mathcal{C}(s)) \dot \mathcal{C}_i(s) ds.
\end{eqnarray*}
The integrand of the stochastic integral against the square integrable martingale $X^\mathcal{C}_i$ is continuous and hence bounded on each compact subset of $(0,1)^3$.
Thus, the integral's expectation vanishes, i.e.
\[
\mathop{\mathbb{E}}\nolimits_x\left( \int_0^{\rho_n} \frac{\partial}{\partial x_i}g(X^\mathcal{C}(s),s)dX^\mathcal{C}_i(s) \right) = 0.
\]
Next, the fact that $\left(\mathcal{G}^i -r \right) h$ is non-positive
gives
\[
\mathop{\mathbb{E}}\nolimits_x \left(\int_0^{\rho_n} \exp(-rs) \left(\mathcal{G}^i -r \right) h(X^\mathcal{C}(s)) \dot \mathcal{C}_i(s) ds\right) \leq 0,
\]
and so
\begin{equation}\label{e:lemver1}
\mathop{\mathbb{E}}\nolimits_x \left(\exp(-r\rho_n)h(X^\mathcal{C}(\rho_n))\right) - h(x)
\leq 0.
\end{equation}
Now, the times $\rho_n$ taken for $X^\mathcal{C}$ to come within distance $n^{-1}$
of the boundary of $\mathcal{S}$ converge to $\rho \overset{\mathrm{def}}{=} \inf\{t \geq 0: X^\mathcal{C}(t) \in \partial\mathcal{S} \}$
as $n \to \infty$. So, the continuity of $h$ and the Dominated Convergence Theorem together imply
\begin{equation}\label{e:lemver2}
\mathop{\mathbb{E}}\nolimits_x \left(\exp(-r\rho)h(X^\mathcal{C}(\rho))\right) \leq h(x).
\end{equation}
In summary, inequality \eqref{e:lemver2} arises by applying
the three dimensional Ito formula
to $g$ composed with the controlled process stopped inside $(0,1)^3$ and
then using continuity of $h$. But, from time $\rho$ onwards, our controlled process
runs on a face or an edge of the cube and Ito's formula in three dimensions
does not apply. This is not a problem though -- a similar
argument with Ito's formula in one (or two) dimensions does the trick. That is,
if $\rho^\prime$ denotes the first time that $X^\mathcal{C}$ hits an edge of $\mathcal{S}$
(so $0 \leq \rho \leq \rho^\prime \leq \tau^\mathcal{C}$),
then both
\begin{equation}\label{e:lemver3}
\mathop{\mathbb{E}}\nolimits_x \left(\exp(-r\rho^\prime)h(X^\mathcal{C}(\rho^\prime)) - \exp(-r\rho)h(X^\mathcal{C}(\rho))\right) \leq 0,
\end{equation}
and
\begin{equation}\label{e:lemver4}
\mathop{\mathbb{E}}\nolimits_x \left(\exp(-r\tau^\mathcal{C})h(X^\mathcal{C}(\tau^\mathcal{C})) - \exp(-r\rho^\prime)h(X^\mathcal{C}(\rho^\prime))\right) \leq 0.
\end{equation}
Summing these differences and using the boundary condition $h(x) = 1$ for
$x \in D$ yields
\[
\mathop{\mathbb{E}}\nolimits_x \left(\exp(-r\tau^\mathcal{C})\right) = \mathop{\mathbb{E}}\nolimits_x \left(\exp(-r\tau^\mathcal{C})h(X^\mathcal{C}(\tau^\mathcal{C}))\right)
\leq h(x).
\]
Thus $h$ is an upper bound for the Laplace transform of the distribution of
the decision time arising from any strategy.
It remains to prove that $h$ is equal to the Laplace transform $\hat v_r$.
Suppose that $\mathcal{C}$ is the strategy ${\mathcal{C}^\star}$ from Lemma \ref{l:stratexists}; then for almost every $s \geq 0$, $\dot \mathcal{C}_i(s)$ is positive only when $X^\mathcal{C}_j(s) \leq X^\mathcal{C}_i(s) \leq X^\mathcal{C}_k(s)$ under some labelling.
So, $\left(\mathcal{G}^i -r \right) h(X^\mathcal{C}(s))\dot \mathcal{C}_i(s)$
vanishes for almost every $s \geq 0$ and \eqref{e:lemver1} is an equality.
Taking limits shows that \eqref{e:lemver2} -- \eqref{e:lemver4}
are also equalities.
\end{proof}
So, $\hat v_r$ is twice differentiable in each component and satisfies the heuristic
equation
\begin{equation}\label{e:heureq4}
\left(\mathcal{G}^i - r \right)\hat v_r(x)
= - r \hat f^i_r(x), \; x \notin D, \; x_i \in (0,1).
\end{equation}
In the next section we will show that $\mathop{\mathbb{P}}\nolimits_x(\tau^\straTopt > t)$ is the probabilistic
solution to certain parabolic partial differential equations. To do this, we
need to rewrite $\hat v_r$ in a more convenient form.
It is convenient to introduce the notation $X^{(1)}(t) = (X_1(t),X_2(0),X_3(0))$,
$X^{(2)}(t) = (X_1(0),X_2(t),X_3(0))$ and
$X^{(3)}(t) = (X_1(0),X_2(0),X_3(t))$ for each $t \geq 0$.
We define $\rho^{(i)}$ to be
the absorption time of
$X_i$, i.e.
\[
\rho^{(i)} \overset{\mathrm{def}}{=} \inf\{t \geq 0: X_i(t) \notin (0, 1)\}.
\]
\begin{lemma}\label{l:vrhatrep}
For any $x \notin D$, $\hat v_r$ can be written as
\[
\hat v_r(x) = \mathop{\mathbb{E}}\nolimits_x \left( \exp(-r \rho^{(i)}) \hat v_r(X^{(i)}(\rho^{(i)}))
+
r\int_0^{\rho^{(i)}} \hat f^i_r(X^{(i)}(s)) \exp(-rs) ds \right).
\]
\end{lemma}
\begin{proof}
Fix $x \notin D$, then the function $x_i \mapsto \hat v_r(x)$ is $C^2$ on $(0,1)$ and $C^0$ on $[0,1]$.
Introduce the a.s. finite ${\mathcal F}_i$ stopping time $\rho^{(i)}_n \overset{\mathrm{def}}{=} \inf\{t \geq 0: X_i(t) \notin (n^{-1}, 1-n^{-1})\}$, so Ito's formula (in one dimension) gives
\begin{eqnarray*}
\exp(-r\rho^{(i)}_n)\hat v_r(X^{(i)}(\rho^{(i)}_n)) - \hat v_r(X(0)) & = &
\int_0^{\rho^{(i)}_n} \exp(-rs) \frac{\partial }{\partial x_i} \hat v_r(X^{(i)}(s))dX_i(s) \\
&& + \int_0^{\rho^{(i)}_n} \exp(-rs)\left( \mathcal{G}^i - r\right) \hat v_r(X^{(i)}(s))ds.
\end{eqnarray*}
The function $\frac{\partial }{\partial x_i} \hat v_r$ is continuous on $(0,1)$ and hence bounded on the compact subsets $[n^{-1}, 1-n^{-1}]$. It follows that
the expectation of the stochastic integral against $dX_i$
vanishes.
So, using equation \eqref{e:heureq4},
\begin{eqnarray*}
\hat v_r(x) & = & \mathop{\mathbb{E}}\nolimits_x\left( \exp(-r\rho^{(i)}_n)\hat v_r(X^{(i)}(\rho^{(i)}_n))\right)
\\
&& +r\mathop{\mathbb{E}}\nolimits_x\left(\int_0^{\rho^{(i)}_n} \exp(-rs)\hat f^i_r(X^{(i)}(s))ds\right).
\end{eqnarray*}
The stopping times $\rho^{(i)}_n$ converge to $\rho^{(i)}$ as $n \to \infty$ and so by continuity of $X_i$,
$\hat v_r$, the exponential function and the integral,
\[
\exp(-r\rho^{(i)}_n)\hat v_r(X^{(i)}(\rho^{(i)}_n)) \to \exp(-r\rho^{(i)})\hat v_r(X^{(i)}(\rho^{(i)}))~\mathrm{and}
\]
\[
\int_0^{\rho^{(i)}_n} \exp(-rs)\hat f^i_r(X^{(i)}(s))ds \to \int_0^{\rho^{(i)}} \exp(-rs)\hat f^i_r(X^{(i)}(s))ds.
\]
To finish the proof, use the Dominated Convergence Theorem to exchange the limit and expectation.
\end{proof}
\begin{remark}\label{r:Etau}
We can generalise our heuristic to value functions of the form
\[
J(x,t) \overset{\mathrm{def}}{=} \mathop{\mathbb{E}}\nolimits_x(g(\tau^\straTopt + t)),\; x \in \mathcal{S}, \; t \geq 0,
\]
for differentiable $g$. It reads
\begin{equation}\label{e:heurgeneric}
\left(\mathcal{G}^i + \frac{\partial }{\partial t}\right) J(x,t) = \mathop{\mathbb{E}}\nolimits_x(g^\prime(\tau^\straTopt + t);T_i = 0).
\end{equation}
Equation \eqref{e:heureq4} is the specialisation $g(t) = \exp(-rt)$. Such a choice of $g$ is helpful because it effectively removes the time dependence in \eqref{e:heurgeneric},
making it easier to solve. The benefit is the same if $g$ is linear
and it is not difficult to construct and verify (compare
Lemmas \ref{l:heurfnexists} and \ref{l:vhatverification})
an explicit expression for $J(x) \overset{\mathrm{def}}{=} \mathop{\mathbb{E}}\nolimits_x(\tau^\straTopt)$.
In terms of the integrals
\[
I_k(x_1) \overset{\mathrm{def}}{=} \int_0^{x_1} \frac{G(u)}{(1-u)^k}du~\mathrm{and}~ J_k(x_3) \overset{\mathrm{def}}{=} \int_{x_3}^1 \frac{G(u)}{u^k}du,\; k \in {\mathbb N},
\]
the expression for $J$ reads,
\begin{eqnarray*}
J(x) & = & G(x_2) + (1-x_1)^{-2}G(x_1)\left( (1-x_2)((1-x_1)-(1-x_3))+(1-x_1)(1-x_3)\right) \\
&& -2I_3(x_1)\left( (1-x_2)((1-x_1)+(1-x_3))+(1-x_1)(1-x_3)\right) + \\
&& 6I_4(x_1)(1-x_2)(1-x_1)(1-x_3) + x_3^{-2}G(x_3)\left(x_2(x_3-x_1) + x_1 x_3 \right) \\
&& -2J_3(x_3)\left( x_2(x_3+x_1) + x_1 x_3 \right) + 6J_4(x_3)x_1 x_2 x_3.
\end{eqnarray*}
\end{remark}
\section{A representation for $\mathop{\mathbb{P}}\nolimits_x\left(\tau^\straTopt > T\right)$}
The aim of this section is to connect the tail probability $v:\mathcal{S}\times[0,\infty) \to [0,1]$ defined by
\[
v(x,t) \overset{\mathrm{def}}{=} \mathop{\mathbb{P}}\nolimits_x\left(\tau^\straTopt > t\right),
\]
to the formula for $\hat v_r$ from Lemma \ref{l:vrhatrep}.
Before continuing, let us explain the key idea. Just for a moment, suppose that $v$ is smooth
and consider the Laplace transform of $\left(\mathcal{G}^i - \frac{\partial}{\partial t} \right)v(x,\cdot)$.
It is straightforward to show that the Laplace transform of $v$ satisfies (see \eqref{e:laplacev}),
\[
\int_0^\infty v(x,t) \exp(-rt) dt = r^{-1} \left( 1 - \hat v_r(x)\right).
\]
Bringing $\mathcal{G}^i$ through the integral and integrating by parts in $t$,
\[
\int_0^\infty \exp(-rt) \left(\mathcal{G}^i - \frac{\partial}{\partial t} \right)v(x,t) dt = -r^{-1} \left( \mathcal{G}^i - r\right) \hat v_r(x).
\]
Combining this with the heuristic equation \eqref{e:heureq4} gives
\begin{equation}\label{e:genddtvxtLT}
\int_0^\infty \exp(-rt) \left(\mathcal{G}^i - \frac{\partial}{\partial t} \right)v(x,t) dt =
\hat f_r^i(x).
\end{equation}
This shows that $\left(\mathcal{G}^i - \frac{\partial}{\partial t} \right)v$ is non-negative
(i.e. $v$ satisfies the associated Hamilton-Jacobi-Bellman equation). From here,
one could use Ito's formula (c.f. the proof of Lemma \ref{l:vhatverification}) to see that
$\left(v(X^\mathcal{C}(t), T-t), 0 \leq t \leq T\right)$ is a sub-martingale for any strategy $\mathcal{C}$.
In particular,
\[
\mathop{\mathbb{P}}\nolimits_x(\tau^{\mathcal{C}}>T) = \mathop{\mathbb{E}}\nolimits_x\left(v(X^\mathcal{C}(T), 0)\right) \geq v(x,T).
\]
So, ideally, to prove Theorem \ref{t:stochminimality}, we would establish that $v$ is
smooth enough to apply Ito's formula.
We are given some hope by noticing that if we can show that $\hat f_r^i$ is the Laplace transform of a function $f_i$, say, then \eqref{e:genddtvxtLT} implies $v$ solves
\begin{equation}\label{e:genddtvxt}
\left(\mathcal{G}^i - \frac{\partial}{\partial t} \right)v(x,t) = f_i(x,t).
\end{equation}
We can show such a density $f_i$ exists (Lemma \ref{l:densityexists} below) but
(surprisingly) not that it is H\"{o}lder continuous. Unfortunately, without the latter
we cannot show that \eqref{e:genddtvxt} has a classical solution. Nevertheless,
we can deduce the sub-martingale inequality by showing merely that $v$ solves
\eqref{e:genddtvxt} in a weaker sense (Lemma \ref{l:rep}).
To commence, let us first verify the claim that $\hat f_r^i$ is the
Laplace transform of a function.
\begin{lemma}\label{l:densityexists} For each $x \notin D$ and $i \in V$, the Borel measure $B \mapsto \mathop{\mathbb{P}}\nolimits_x(\tau^\straTopt \in B, T_i = 0)$ has a (defective) density $f_i:\mathcal{S}\times[0,\infty) \to [0,\infty)$, i.e.
\[
\mathop{\mathbb{P}}\nolimits_x(\tau^\straTopt \in dt, T_i = 0) = f_i(x,t)dt,\; t \geq 0.
\]
\end{lemma}
\begin{proof}
This is essentially a corollary of the decomposition of $\tau^\straTopt$ on $\{ T_i = 0 \}$ that
was discussed in the proof of Lemma \ref{l:heurfnexists}.
Recall that if $\mathfrak{m}^{(i)}_a$ is the first hitting time of level $a$ by $X_i$ (defined
in \eqref{e:xihita}), then for $x_1 \leq x_2 \leq x_3$,
\[
\mathop{\mathbb{P}}\nolimits_x\left(\tau^\straTopt \in B, T_1 = 0\right) = \mathop{\mathbb{P}}\nolimits_x\left(\mathfrak{m}^{(2)}_1 + \mathfrak{m}^{(3)}_1 \in B, \mathfrak{m}^{(2)}_1 < \mathfrak{m}^{(2)}_{x_1}, \mathfrak{m}^{(3)}_1 < \mathfrak{m}^{(3)}_{x_1}\right).
\]
This is the convolution of the sub-probability measures
\[
\mathop{\mathbb{P}}\nolimits_x\left(\mathfrak{m}^{(i)}_1 \in \cdot, \mathfrak{m}^{(i)}_1 < \mathfrak{m}^{(i)}_{x_1}\right), i = 2,3.
\]
If $x_1 = x_2$, then $T_1 > 0$ almost surely under $\mathop{\mathbb{P}}\nolimits_x$, and when $x \notin D$, $x_2 < 1$.
So, we may assume that $x_2$ is in the interval $(x_1,1)$. In this case, $\{\mathfrak{m}^{(2)}_1 < \mathfrak{m}^{(2)}_{x_1}\}$ is not null and $X_2$ can be conditioned,
via a Doob $h$-transform, to exit $(x_1,1)$ at the upper boundary.
The conditioned process is again a diffusion and so the arguments of \S{4.11} of
\cite{itomckean74} show that $\mathop{\mathbb{P}}\nolimits_x\left(\mathfrak{m}^{(2)}_1 \in \cdot, \mathfrak{m}^{(2)}_1 < \mathfrak{m}^{(2)}_{x_1}\right)$ is absolutely continuous. Hence, $\mathop{\mathbb{P}}\nolimits_x\left(\tau^\straTopt \in \cdot, T_1 = 0\right)$ is the convolution of two measures, at least one of which has a density.
The other cases are treated with essentially identical arguments.
\end{proof}
The next step is to show that $v$ solves \eqref{e:genddtvxt} in a probabilistic sense.
\begin{lemma}\label{l:rep}
Fix $i \in V$ and define the function $u:\mathcal{S} \times [0,\infty) \to {\mathbb R}$ by
\begin{equation}\label{e:rep}
u(x,t) \overset{\mathrm{def}}{=} \mathop{\mathbb{E}}\nolimits_x\left( v( X^{(i)}(t \wedge \rho^{(i)}), (t - \rho^{(i)})^+) - \int_0^{t \wedge \rho^{(i)}} f_i(X^{(i)}(s), t-s)ds\right),
\end{equation}
where $\rho^{(i)} = \inf\{t \geq 0: X_i(t) \notin (0,1)\}$ and $f_i$ is the density from
Lemma \ref{l:densityexists}.
Then,
\begin{itemize}
\item[(a)] for each $x \notin D$, $u(x,\cdot)$ has the same Laplace transform as $v(x,\cdot)$,
\item[(b)] both $u(x,\cdot)$ and $v(x,\cdot)$ are right continuous, and as a result
\item[(c)] the tail probability $v$ is equal to $u$ and so has the representation given in \eqref{e:rep}.
\end{itemize}
\end{lemma}
\begin{proof}
(a) The Laplace transform of the tail probability
is, for $x \notin D$,
\begin{eqnarray*}
\int_0^\infty v(x,t) \exp(-rt) dt & = & \mathop{\mathbb{E}}\nolimits_x \int_0^{\infty} \Indi{\tau^\straTopt > t} \exp(-r t) dt \\
& = & \mathop{\mathbb{E}}\nolimits_x \int_0^{\tau^\straTopt} \exp(-r t) dt \\
& = & r^{-1} \left( 1 - \hat v_r(x)\right),
\end{eqnarray*}
by Fubini's Theorem since the integrand is
non-negative. Furthermore, for $x \in D$, both $v(x,t)$ and $1 - \hat v_r(x)$ vanish
and so in fact, for \emph{any} $x \in \mathcal{S}$ we have
\begin{equation}\label{e:laplacev}
\int_0^\infty v(x,t) \exp(-rt) dt = r^{-1} \left( 1 - \hat v_r(x)\right).
\end{equation}
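Identity \eqref{e:laplacev} is the elementary relation $\int_0^\infty \mathop{\mathbb{P}}\nolimits(\tau > t)e^{-rt}dt = r^{-1}(1 - \mathop{\mathbb{E}}\nolimits \exp(-r\tau))$, valid for any non-negative random variable $\tau$. A quick numerical check, taking an exponentially distributed time purely for illustration (the rates and step sizes below are ours):

```python
import math

lam, r = 2.0, 1.5  # illustrative rates: tau ~ Exp(lam), discount rate r

def tail(t):  # P(tau > t) for an exponential time
    return math.exp(-lam * t)

# Left side: midpoint-rule quadrature of exp(-r t) P(tau > t) on [0, T];
# the tail beyond T is negligible for these rates.
T, n = 30.0, 300000
dt = T / n
lhs = sum(tail((k + 0.5) * dt) * math.exp(-r * (k + 0.5) * dt) for k in range(n)) * dt

# Right side: r^{-1} (1 - E exp(-r tau)) with E exp(-r tau) = lam / (lam + r)
rhs = (1.0 - lam / (lam + r)) / r

assert abs(lhs - rhs) < 1e-6
print(lhs, rhs)  # both equal 1 / (lam + r) ~ 0.2857
```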
Now, we consider the Laplace transform of $u$. By linearity of the expectation operator
\[
u(x,t) = \mathop{\mathbb{E}}\nolimits_x\left( v( X^{(i)}(t \wedge \rho^{(i)}), (t - \rho^{(i)})^+)\right) - \mathop{\mathbb{E}}\nolimits_x\left(\int_0^{t \wedge \rho^{(i)}} f_i(X^{(i)}(s), t-s)ds\right).
\]
First consider the Laplace transform of the first member of the right hand side,
\[
\int_0^\infty \mathop{\mathbb{E}}\nolimits_x\left( v( X^{(i)}(t \wedge \rho^{(i)}), (t - \rho^{(i)})^+)\right)\exp(-rt)dt.
\]
Applying Fubini's theorem, the preceding expression becomes
\[
\mathop{\mathbb{E}}\nolimits_x\left( \int_0^\infty v( X^{(i)}(t \wedge \rho^{(i)}), (t - \rho^{(i)})^+)\exp(-rt)dt\right),
\]
which can be decomposed into the sum
\[
\mathop{\mathbb{E}}\nolimits_x\left(\int_0^{\rho^{(i)}} v( X^{(i)}(t), 0)\exp(-rt)dt\right) + \mathop{\mathbb{E}}\nolimits_x\left(\int_{\rho^{(i)}}^\infty v( X^{(i)}(\rho^{(i)}), t - \rho^{(i)})\exp(-rt)dt\right).
\]
The first term is
\begin{equation}\label{e:laplaceu1}
\mathop{\mathbb{E}}\nolimits_x \int_0^{\rho^{(i)}} v( X^{(i)}(t), 0)\exp(-rt)dt = r^{-1} \mathop{\mathbb{E}}\nolimits_x\left(1 - \exp(-r\rho^{(i)})\right),
\end{equation}
because when $x \notin D$, $\mathop{\mathbb{P}}\nolimits_x$-almost surely we have $X^{(i)}(t) \notin D$ for $t < \rho^{(i)}$.
The second term is
\[
\mathop{\mathbb{E}}\nolimits_x \int_{\rho^{(i)}}^\infty v( X^{(i)}(\rho^{(i)}), t - \rho^{(i)})\exp(-rt)dt.
\]
If we shift the variable of integration to $u = t - \rho^{(i)}$ and then use \eqref{e:laplacev}, the last expression becomes
\begin{equation}\label{e:laplaceu2}
r^{-1} \mathop{\mathbb{E}}\nolimits_x\left( \exp(-r \rho^{(i)})( 1- \hat v_r(X^{(i)}(\rho^{(i)})) ) \right).
\end{equation}
The treatment of
\begin{equation}\label{e:laplaceu3}
\int_0^\infty \mathop{\mathbb{E}}\nolimits_x \left( \int_0^{t \wedge \rho^{(i)}} f_i(X^{(i)}(s), t-s)ds \right) \exp(-rt) dt
\end{equation}
proceeds in a similar fashion -- exchange the expectation and
outer integral and then decompose the integrals into $t < \rho^{(i)}$ and $t \geq \rho^{(i)}$.
The integral over $t < \rho^{(i)}$ is
\[
\mathop{\mathbb{E}}\nolimits_x \int_0^{\rho^{(i)}} \int_0^{t} f_i(X^{(i)}(s), t-s)ds \exp(-rt) dt.
\]
Exchanging the integrals in $t$ and $s$ gives
\[
\mathop{\mathbb{E}}\nolimits_x \int_0^{\rho^{(i)}} \int_s^{\rho^{(i)}} f_i(X^{(i)}(s), t-s) \exp(-rt) dtds.
\]
For the integral over $t \geq \rho^{(i)}$, we again exchange the integrals in $t$ and $s$ to give
\[
\mathop{\mathbb{E}}\nolimits_x \int_0^{\rho^{(i)}} \int_{\rho^{(i)}}^{\infty} f_i(X^{(i)}(s), t-s) \exp(-rt) dtds.
\]
Summing these final two expressions and substituting $u = t-s$ shows that \eqref{e:laplaceu3}
is equal to
\[
\mathop{\mathbb{E}}\nolimits_x \int_0^{\rho^{(i)}} \int_{0}^{\infty} f_i(X^{(i)}(s), u) \exp(-ru)du \exp(-rs)ds.
\]
The Laplace transform is a linear operator, and so we may add
\eqref{e:laplaceu1} and \eqref{e:laplaceu2} and subtract \eqref{e:laplaceu3} to show that the
Laplace transform of $u$ is equal to
\begin{equation}\label{e:laplaceu4}
r^{-1}\mathop{\mathbb{E}}\nolimits_x\left(1 - \exp(-r \rho^{(i)}) \hat v_r(X^{(i)}(\rho^{(i)}))\right) - \mathop{\mathbb{E}}\nolimits_x \left(\int_0^{\rho^{(i)}} \hat f_r^i(X^{(i)}(s)) \exp(-rs)ds\right),
\end{equation}
where we have used
\[
\int_{0}^{\infty} f_i(x, u) \exp(-ru)du = \hat f_r^i(x)
\]
for $x \notin D$.
But, \eqref{e:laplaceu4} is exactly what we get by substituting the representation for
$\hat v_r$ from Lemma \ref{l:vrhatrep} into \eqref{e:laplacev}, and so we are done.

(b) Right-continuity of $v$ in $t$ follows from the Monotone Convergence Theorem. A little more
work is required to see that $u$ is right-continuous. We begin by observing that
if $\rho^{(i)} > t$ then $X_i$ has not been absorbed by time $t$ and so, if $x \notin D$,
there is a $\mathop{\mathbb{P}}\nolimits_x$-negligible set outside of which $X^{(i)}(t) \notin D$.
It follows that $\{X^{(i)}(t) \notin D, \rho^{(i)} > t \} = \{ \rho^{(i)} > t \}$ almost surely. Combining this with
the fact that $v(\cdot,0) = \Indi{\cdot \notin D}$ shows
\[
\mathop{\mathbb{E}}\nolimits_x\left( v(X^{(i)}(t \wedge \rho^{(i)}),(t - \rho^{(i)})^+); \rho^{(i)} > t \right) = \mathop{\mathbb{P}}\nolimits_x\left(\rho^{(i)} > t \right)~\mathrm{for}~x \notin D.
\]
The latter is right-continuous in $t$ by the Monotone Convergence Theorem.
The complementary expectation
\[
\mathop{\mathbb{E}}\nolimits_x\left( v(X^{(i)}(t \wedge \rho^{(i)}),(t - \rho^{(i)})^+); \rho^{(i)} \leq t \right)
\]
is equal to
\[
\mathop{\mathbb{E}}\nolimits_x\left( v(X^{(i)}(\rho^{(i)}),t - \rho^{(i)}); \rho^{(i)} \leq t \right),
\]
the right continuity of which follows from that of $v$ and the indicator $\Indi{\rho^{(i)} \leq t}$, together with the Dominated Convergence Theorem.
We now consider the expectation of the integral,
\[
\mathop{\mathbb{E}}\nolimits_x\left( \int_0^{t \wedge \rho^{(i)}} f_i(X^{(i)}(s), t-s)ds \right).
\]
Using Fubini's theorem we may exchange the integral and expectation to get
\begin{equation}\label{e:rightctyxikilled}
\int_0^{t} \mathop{\mathbb{E}}\nolimits_x\left(f_i(X^{(i)}(s), t-s); \rho^{(i)} > s \right)ds.
\end{equation}
This suggests the introduction of $(p^\dagger_s; s \geq 0)$, the transition density of $X_i$
killed (and sent to a cemetery state) on leaving $(0,1)$. Such a density exists
by the arguments of \S{4.11} of \cite{itomckean74}.
For notational ease, let us assume $i=1$; then \eqref{e:rightctyxikilled} can be written
\[
\int_0^{t} \int_0^1 p^\dagger_s(x_1,y) f_1((y,x_2,x_3), t-s)dy ds.
\]
Finally, changing the variable of integration from $s$ to $s^\prime = t-s$ gives
\[
\int_0^{t} \int_0^1 p^\dagger_{t - s^\prime}(x_1,y) f_1((y,x_2,x_3), s^\prime)dy ds^\prime,
\]
and so regularity of \eqref{e:rightctyxikilled} in $t$ is inherited from $p^\dagger$.
This is sufficient because $p_t^\dagger$ is continuous in $t > 0$ (again see \cite{itomckean74}).

(c) It follows from (a) that for each $x \notin D$, $u(x,t)$ and
$v(x,t)$ are equal for almost every $t \geq 0$. Hence, right continuity is enough to show $v(x,t) = u(x,t)$ for every $t \geq 0$.
\end{proof}
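Identity \eqref{e:laplacev} is, at bottom, Fubini's theorem applied to a non-negative random time, so it can be sanity-checked numerically with any surrogate law. The Python sketch below uses a hypothetical exponential time in place of $\tau^{\mathcal{C}^\star}$ (whose law we cannot sample in closed form); both sides of the identity are estimated from the same sample.

```python
import numpy as np

rng = np.random.default_rng(0)
r, lam = 0.7, 1.3
# Toy stopping time: exponential with rate lam, standing in for the decision time.
tau = np.sort(rng.exponential(1.0 / lam, size=200_000))

# Left side: integral of the empirical tail probability P(tau > t) against exp(-r t).
ts = np.linspace(0.0, 25.0, 25_001)
tail = 1.0 - np.searchsorted(tau, ts, side="right") / tau.size
integrand = tail * np.exp(-r * ts)
lhs = float(np.sum((integrand[1:] + integrand[:-1]) / 2.0) * (ts[1] - ts[0]))

# Right side: r^{-1} (1 - E exp(-r tau)) computed on the same sample.
rhs = float((1.0 - np.exp(-r * tau).mean()) / r)
```

For the exponential surrogate both sides should be close to the exact value $1/(\lambda + r)$.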
From the probabilistic representation for $v$, we need to deduce some sub-martingale type inequalities
for $v(X^\mathcal{C}(t), T-t)$, $0\leq t \leq T$. As we will see later,
it is enough to consider strategies that, for some $\epsilon > 0$, run only one process
during each interval $(k\epsilon, (k+1)\epsilon)$, for integers $k \geq 0$. In other words, the rates for each process are either zero or one and are constant over $(k\epsilon, (k+1)\epsilon)$. More specifically,
\begin{definition}[$\epsilon$-strategy]\label{d:epsilonstrat} For $\epsilon > 0$ we let $\Pi_\epsilon$
denote the set of strategies $\mathcal{C}^\epsilon$ such that for any integer $k \geq 0$,
\[
\mathcal{C}^\epsilon(t) = \mathcal{C}^\epsilon(k\epsilon) + (t - k\epsilon)\xi_k,\;
k \epsilon \leq t \leq (k+1)\epsilon,
\]
where $\xi_k$ takes values in the set of standard basis elements $\{ (1,0,0), (0,1,0), (0,0,1)\}$.
\end{definition}
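Concretely, an $\epsilon$-strategy spends each block $[k\epsilon, (k+1)\epsilon)$ running exactly one process at unit rate. A minimal Python sketch of Definition \ref{d:epsilonstrat}, with the choices $\xi_k$ encoded (our convention) as indices in $\{0,1,2\}$:

```python
import numpy as np

def epsilon_strategy(xi, eps, t):
    """Value C^eps(t) of the epsilon-strategy driven by the choices xi.

    xi[k] in {0, 1, 2} is the index of the process run at unit rate on
    the block [k*eps, (k+1)*eps); the other two rates are zero there.
    """
    C = np.zeros(3)
    k = 0
    while (k + 1) * eps <= t:        # completed blocks
        C[xi[k]] += eps
        k += 1
    C[xi[k]] += t - k * eps          # the partially completed block
    return C
```

Each component is then non-decreasing and piecewise linear, and the total time allocated satisfies $\mathcal{C}^\epsilon_1(t)+\mathcal{C}^\epsilon_2(t)+\mathcal{C}^\epsilon_3(t) = t$.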
\begin{lemma}\label{l:submgeachcmpnt}
Suppose $x \in \mathcal{S}$ and $0 \leq t \leq T$, then the following
sub-martingale inequalities hold.
\begin{itemize}
\item[(a)] For $i \in V$,
\[
\mathop{\mathbb{E}}\nolimits_x \left( v(X^{(i)}(t), T-t)\right) \geq v(x,T).
\]
\item[(b)] If $\mathcal{C}^\epsilon \in \Pi_\epsilon$ then
\[
\mathop{\mathbb{E}}\nolimits_x\left( v(X^{\mathcal{C}^\epsilon}(t), T-t) \right) \geq v(x,T).
\]
\end{itemize}
\end{lemma}
\begin{proof}
(a) Consider first the quantity
\begin{equation}\label{e:lsubmg1}
\mathop{\mathbb{E}}\nolimits_x \left( \mathop{\mathbb{E}}\nolimits_{X^{(i)}(t)} \left( v( X^{(i)}((T-t) \wedge \rho^{(i)}), (T- t - \rho^{(i)})^+) \right)\right).
\end{equation}
Our Markovian setup comes with a shift operator $\theta = \theta^{(i)}$ for $X^{(i)}$ defined by $X^{(i)}\circ\theta_s(\omega,t) = X^{(i)}(\theta_s\omega,t) = X^{(i)}(\omega,s+t)$ for each $\omega \in \Omega$.
In terms of this operator, \eqref{e:lsubmg1} becomes
\[
\mathop{\mathbb{E}}\nolimits_x \left( \mathop{\mathbb{E}}\nolimits_x \left( v( X^{(i)}( (T-t) \wedge \rho^{(i)}), (T- t - \rho^{(i)})^+) \circ \theta_t|{\mathcal F}_i(t) \right) \right).
\]
From here, use the Tower Property and the fact that $\rho^{(i)} \circ \theta_t = (\rho^{(i)} - t) \vee 0$
to find that \eqref{e:lsubmg1} equals
\begin{equation}\label{e:lsubmgineq1}
\mathop{\mathbb{E}}\nolimits_x \left( v( X^{(i)}(T \wedge \rho^{(i)}), (T - \rho^{(i)})^+)\right).
\end{equation}
We can give a similar treatment for
\begin{equation}\label{e:lsubmg2}
\mathop{\mathbb{E}}\nolimits_x \left( \mathop{\mathbb{E}}\nolimits_{X^{(i)}(t)} \left( \int_0^{(T-t) \wedge \rho^{(i)}} f_i(X^{(i)}(s), T-t-s)ds\right) \right).
\end{equation}
Using the Markov property of $X^{(i)}$, \eqref{e:lsubmg2} becomes
\[
\mathop{\mathbb{E}}\nolimits_x \left( \mathop{\mathbb{E}}\nolimits_x \left( \int_0^{(T-t) \wedge \rho^{(i)}} f_i(X^{(i)}(s), T-t-s)ds \circ \theta_t|{\mathcal F}_i(t)\right) \right).
\]
Substituting in for $X^{(i)}\circ \theta_t$ and $\rho^{(i)} \circ \theta_t$
and using the Tower Property, the latter expectation is seen to be
\[
\mathop{\mathbb{E}}\nolimits_x \left( \int_0^{(T-t) \wedge ((\rho^{(i)}-t) \vee 0)} f_i(X^{(i)}(s+t), T-t-s)ds\right).
\]
Now make the substitution $u = s + t$ in the integral and use the fact that $f_i$ is non-negative
to show that \eqref{e:lsubmg2} is less than or equal to
\begin{equation}\label{e:lsubmgineq2}
\mathop{\mathbb{E}}\nolimits_x \left( \int_0^{T \wedge \rho^{(i)}} f_i(X^{(i)}(u), T-u)du\right).
\end{equation}
The final step is to note that, by Lemma \ref{l:rep},
\begin{eqnarray*}
v(x, T-t) &=& \mathop{\mathbb{E}}\nolimits_x \left( v( X^{(i)}((T-t) \wedge \rho^{(i)}), (T- t - \rho^{(i)})^+)\right) \\
&& \; - \mathop{\mathbb{E}}\nolimits_x \left( \int_0^{(T-t) \wedge \rho^{(i)}} f_i(X^{(i)}(s), T-t-s)ds\right),
\end{eqnarray*}
and so $\mathop{\mathbb{E}}\nolimits_x(v(X^{(i)}(t), T-t))$ is equal to \eqref{e:lsubmg1} minus \eqref{e:lsubmg2},
which by the argument above is greater than or equal to
\[
\mathop{\mathbb{E}}\nolimits_x \left(v( X^{(i)}(T \wedge \rho^{(i)}), (T - \rho^{(i)})^+)\right) - \mathop{\mathbb{E}}\nolimits_x\left(\int_0^{T \wedge \rho^{(i)}} f_i(X^{(i)}(u), T-u)du\right).
\]
Again appealing to Lemma \ref{l:rep} shows that the latter is exactly $v(x,T)$.

(b) It is sufficient to prove that for $k \epsilon \leq t \leq (k+1)\epsilon$ we
have
\begin{equation}\label{e:discretestratsubmg}
\mathop{\mathbb{E}}\nolimits_x\left( v(X^{\mathcal{C}^\epsilon}(t), T-t)|{\mathcal F}^{\mathcal{C}^\epsilon}(k\epsilon)\right) \geq
v(X^{\mathcal{C}^\epsilon}(k\epsilon), T-k\epsilon).
\end{equation}
The desired result then follows by applying the Tower Property of conditional expectation and iterating this inequality.
Let us take $\nu \overset{\mathrm{def}}{=} \mathcal{C}^\epsilon(k\epsilon)$ and $\mathcal{H} \overset{\mathrm{def}}{=} {\mathcal F}^{\mathcal{C}^\epsilon}(k\epsilon)$. Then $\nu$ takes values in the
grid $\mathcal{Z} \overset{\mathrm{def}}{=} \{0,\epsilon, 2\epsilon, \ldots\}^3$ and $\Lambda \in \mathcal{H}$ implies
that $\Lambda \cap \{\nu = z\}$ is an element of the $\sigma$-field ${\mathcal F}(z) = \sigma({\mathcal F}_1(z_1),{\mathcal F}_2(z_2),{\mathcal F}_3(z_3))$ for $z \in \mathcal{Z}$. It follows
from the definition of conditional expectation that
$\mathop{\mathbb{P}}\nolimits_x$ almost surely we have
\begin{equation}\label{e:condexpSP}
\mathop{\mathbb{E}}\nolimits_x\left( \cdot | \mathcal{H} \right) = \mathop{\mathbb{E}}\nolimits_x \left( \cdot | {\mathcal F}(z) \right)~\mathrm{on}~\{\nu = z\}.
\end{equation}
Now, by continuity of $\mathcal{C}^\epsilon_i$ and right-continuity of ${\mathcal F}^{\mathcal{C}^\epsilon}$, $\xi_k$ must be $\mathcal{H}$-measurable. So, if $A \overset{\mathrm{def}}{=} A_1 \times A_2 \times A_3$ with $A_i$ Borel measurable for each $i \in V$,
\eqref{e:condexpSP} gives the equality
\[
\mathop{\mathbb{E}}\nolimits_x\left(\Indi{\nu = z,\; X^{\mathcal{C}^{\epsilon}}(t) \in A,\; \xi_k = e_i} | \mathcal{H} \right) =
\Indi{\nu = z,\; \xi_k = e_i} \mathop{\mathbb{E}}\nolimits_x\left(\Indi{X(z+(t-k\epsilon)e_i) \in A} | {\mathcal F}(z) \right),
\]
where, as before, $X(z) = (X_1(z_1), X_2(z_2), X_3(z_3))$.
Next we use the facts that $\Indi{X_j(z_j) \in A_j}$ is ${\mathcal F}(z)$ measurable for each $j$ and that the filtration ${\mathcal F}_i$ of $X_i$ is independent of ${\mathcal F}_j$ for $j \neq i$, to show that the preceding expression is equal to
\[
\Indi{\nu = z,\; \xi_k = e_i, X_j(z_j) \in A_j, j \neq i} \mathop{\mathbb{E}}\nolimits_x\left(\Indi{X_i(z_i+(t-k\epsilon)) \in A_i} | {\mathcal F}_i(z_i) \right).
\]
Finally, the Markov property of $X_i$ allows us to write this as
\[
\Indi{\nu = z,\xi_k = e_i} \mathop{\mathbb{E}}\nolimits_{X(z)}\left(\Indi{X^{(i)}(t-k\epsilon) \in A}\right).
\]
As $\mathop{\mathbb{E}}\nolimits_x(v(X^{(i)}(t),s))$ is Borel measurable for any $s, t \geq 0$, this is enough to conclude that in our original notation, on $\{\xi_k = e_i\}$,
\begin{equation}\label{e:discretestratFinalEq}
\mathop{\mathbb{E}}\nolimits_x\left(v(X^{\mathcal{C}^{\epsilon}}(t),T-t)|{\mathcal F}^{\mathcal{C}^\epsilon}(k\epsilon)\right)
= \mathop{\mathbb{E}}\nolimits_{X^{\mathcal{C}^{\epsilon}}(k\epsilon)}\left(v(X^{(i)}(t-k\epsilon),T-t) \right).
\end{equation}
But part (a) shows that
\[
\mathop{\mathbb{E}}\nolimits_{x}\left(v(X^{(i)}(t-k\epsilon),(T - k\epsilon) -(t - k\epsilon))\right) \geq v(x,T-k\epsilon),
\]
and so the right hand side of \eqref{e:discretestratFinalEq} is greater than or equal to
$v(X^{\mathcal{C}^{\epsilon}}(k\epsilon), T-k\epsilon)$.
\end{proof}
It is now relatively painless to combine the ingredients above. We take an arbitrary strategy
$\mathcal{C}$, use Lemma \ref{l:approxstrat} to approximate it by the family $\mathcal{C}^\epsilon$, $\epsilon > 0$ and then use Lemma \ref{l:submgeachcmpnt} part (b) with $t = T \geq 0$ to show that
\[
\mathop{\mathbb{P}}\nolimits_x( \tau^{\mathcal{C}^\epsilon} > T ) = \mathop{\mathbb{E}}\nolimits_x v(X^{\mathcal{C}^\epsilon}(T), 0) \geq v(x,T)
\]
for any $x \notin D$ (equality holds trivially for $x \in D$).
The approximations are such that $\mathcal{C}(t) \preceq \mathcal{C}^\epsilon(t + M\epsilon)$ for some constant $M > 0$. Thus, $\tau^\mathcal{C} \leq t$ implies that $\tau^{\mathcal{C}^\epsilon} \leq t+ M\epsilon$.
More usefully, the contrapositive is that $\tau^{\mathcal{C}^\epsilon} > t + M \epsilon$ implies $\tau^\mathcal{C} > t$ and so monotonicity of the probability measure $\mathop{\mathbb{P}}\nolimits_x$ then ensures
\[
\mathop{\mathbb{P}}\nolimits_x( \tau^{\mathcal{C}} > t ) \geq \mathop{\mathbb{P}}\nolimits_x( \tau^{\mathcal{C}^\epsilon} > t + M\epsilon) \geq v(x,t + M\epsilon).
\]
Taking the limit $\epsilon \to 0$ and using right continuity of $v(x,t)$ in $t$
completes the proof.
\section{Existence and almost sure uniqueness of ${\mathcal{C}^\star}$}\label{s:stratexists}
In this section we give a proof for Lemma \ref{l:stratexists}. Recall that we
wish to study strategies $\mathcal{C}$ that satisfy the property
(RTM) $\mathcal{C}_i$ increases at time $t \geq 0$ (i.e. for every $s > t$, $\mathcal{C}_i(s) > \mathcal{C}_i(t)$)
only if, under some labelling of the processes,
\[
X_j^{\mathcal{C}}(t) \leq X_i^{\mathcal{C}}(t) \leq X_k^{\mathcal{C}}(t).
\]
Our idea is to reduce the existence and uniqueness of our strategy to a one-sided
problem. Then, we can use the following result, taken from Proposition 5 and Corollary 13 in
\cite{mandelbaum1987cma} (alternatively \S{5.1} of \cite{KaspiMandelbaum95}).
\begin{lemma}\label{l:mandelbaumstrat}
Suppose that $(Y_i(t); t \geq 0)$, $i = 1,2$ are independent and identically distributed regular Ito diffusions on ${\mathbb R}$, beginning at the origin and with
complete, right continuous filtrations $(\mathcal{H}_i(t); t \geq 0)$. Then
\begin{itemize}
\item[(a)] there exists a strategy $\gamma = (\gamma_1(t), \gamma_2(t); t \geq 0)$
(with respect to the multiparameter filtration $\mathcal{H} = (\sigma(\mathcal{H}_1(z_1),\mathcal{H}_2(z_2)); z \in {\mathbb R}_+^2)$) such that $\gamma_i$ increases only at times $t \geq 0$ with
\[
Y^\gamma_i(t) = Y^\gamma_1(t) \wedge Y^\gamma_2(t),
\]
i.e. ``$\gamma$ follows the minimum of $Y_1$ and $Y_2$''.
\item[(b)] if $\gamma^\prime$ is another strategy with this property, then, almost surely,
$\gamma^\prime(t) = \gamma(t)$ for every $t \geq 0$. That is, $\gamma$ is a.s. unique.
\item[(c)] the maximum $Y^\gamma_1(t) \vee Y^\gamma_2(t)$ increases with $t$.
\end{itemize}
\end{lemma}
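In discrete time the strategy of Lemma \ref{l:mandelbaumstrat} is easy to emulate: at every tick, advance whichever walk currently sits at the (weak) minimum. The sketch below does this for two $\pm 1$ random walks; it is only an illustration of the lemma, which concerns genuine diffusions, but because only the lower walk ever moves, the maximum of the pair never decreases, mirroring part (c).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
steps = rng.choice([-1, 1], size=(2, n))  # increments of two i.i.d. walks
clock = [0, 0]    # gamma_i: how far each walk has been run
val = [0, 0]      # the controlled values Y_i(gamma_i)
max_path = []     # running record of max(Y_1, Y_2)

for _ in range(n):
    i = 0 if val[0] <= val[1] else 1      # only the (weak) minimum is run
    val[i] += steps[i][clock[i]]
    clock[i] += 1
    max_path.append(max(val))
```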
We first consider the question of uniqueness; it will then be obvious how ${\mathcal{C}^\star}$ must
be defined. Suppose that $\mathcal{C}$ is a strategy satisfying (RTM).
If $X_1(0) < X_2(0) = X_3(0)$, then $\mathcal{C}$ cannot run $X_1$ (i.e. $\mathcal{C}_1$ does not increase)
before the first time $\nu$ that either $X^\mathcal{C}_2$ or $X^\mathcal{C}_3$ hits $X_1(0)$.
Until then (or until a decision is made, whichever comes first), $\mathcal{C}_2$ may
increase only at times $t \geq 0$ when $X^\mathcal{C}_2(t) \leq X^\mathcal{C}_3(t)$
and $\mathcal{C}_3$ only when $X^\mathcal{C}_3(t) \leq X^\mathcal{C}_2(t)$. Hence,
on $\{\tau^\mathcal{C} \wedge \nu \geq t\}$, the value of $\mathcal{C}(t)$ is determined by the strategy
in Lemma \ref{l:mandelbaumstrat}. Now, $X^\mathcal{C}_2 \vee X^\mathcal{C}_3$ increases
during this time, and so if $\nu < \tau^\mathcal{C}$, we have
\[
X_1(0) = X^\mathcal{C}_1(\nu) = X^\mathcal{C}_2(\nu) \wedge X^\mathcal{C}_3(\nu) < X^\mathcal{C}_2(\nu) \vee X^\mathcal{C}_3(\nu).
\]
So again, we are in a position to apply the argument above,
and can do so repeatedly until a decision is made. In fact, it takes
only a finite number of iterations of the argument to determine $\mathcal{C}(t)$
for each $t \geq 0$ (on $\tau^\mathcal{C} \geq t$) because each diffusion $X_i$ is continuous,
the minimum $X^\mathcal{C}_1 \wedge X^\mathcal{C}_2 \wedge X^\mathcal{C}_3$ is decreasing
and the maximum $X^\mathcal{C}_1 \vee X^\mathcal{C}_2 \vee X^\mathcal{C}_3$ increasing.
If $X_1(0) < X_2(0) < X_3(0)$ then $\mathcal{C}$ must run $X_2$ exclusively until it hits
either $X_1(0)$ or $X_3(0)$. From then on, the arguments of the previous case apply.
The remaining possibility is that $X_1(0) = X_2(0) = X_3(0) = a \in (0,1)$.
We shall define random times $\nu_\epsilon$, $0 < \epsilon < (1 - a)\wedge a$ such that
\begin{itemize}
\item $\mathcal{C}(\nu_\epsilon)$ is determined by the property (RTM),
\item under some labelling,
\[
a - \epsilon < X_1^\mathcal{C}(\nu_\epsilon) < a < X^\mathcal{C}_2(\nu_\epsilon) = X^\mathcal{C}_3(\nu_\epsilon)= a+\epsilon,
\]
and
\item $\nu_\epsilon \to 0$ as $\epsilon \to 0$.
\end{itemize}
Again, we may then use the one-sided argument to see that, almost surely, on $\nu_\epsilon \leq t \leq \tau^\mathcal{C}$, $\mathcal{C}(t)$ is determined by (RTM). This is sufficient because $\nu_\epsilon \to 0$ as $\epsilon \to 0$.
To construct $\nu_\epsilon$, suppose, without loss of generality, that $X_1$ and $X_2$
both exit $(a -\epsilon, a+ \epsilon)$ at the upper boundary. We denote by $\alpha_i$
the finite time taken for this to happen, i.e.
\[
\alpha_i \overset{\mathrm{def}}{=} \inf\{ t > 0: X_i(t) \notin (a - \epsilon, a + \epsilon) \}.
\]
Define
\[
l_i \overset{\mathrm{def}}{=} \inf_{0 \leq s \leq \alpha_i} X_i(s)
\]
to be the lowest value attained by $X_i$ before it exits
$(a - \epsilon, a + \epsilon)$. By Proposition 5 of \cite{mandelbaum1987cma}, it is
almost sure that the $l_i$ are not equal and so we may assume that
$l_3 < l_2 < l_1$ (by relabelling
if necessary).
Intuitively, (RTM) means that $X^\mathcal{C}_1$ and $X^\mathcal{C}_2$ should hit
$a+\epsilon$ together while $X^\mathcal{C}_3$ gets left behind at $l_2$.
We already know it takes time $\alpha_i$ for $X_i$ to hit $a+\epsilon$ ($i = 1,2$)
and $X_3$ takes time
\[
\beta_3 \overset{\mathrm{def}}{=} \inf \{ t > 0 : X_3(t) = l_2 \}
\]
to reach $l_2$. So, we set $\nu_\epsilon = \alpha_1 + \alpha_2 + \beta_3$,
and claim that
\[
\mathcal{C}(\nu_\epsilon) = (\alpha_1,\alpha_2,\beta_3).
\]
The proof proceeds by examining the various cases. Firstly, if
$\mathcal{C}_1(\nu_\epsilon) > \alpha_1$ and $\mathcal{C}_2(\nu_\epsilon) \geq \alpha_2$, then necessarily $\mathcal{C}_3(\nu_\epsilon) < \beta_3$
and $X_3(z_3) > l_2$ for any $z_3 \leq \mathcal{C}_3(\nu_\epsilon)$. But, then there
exist times $\alpha^\prime_i < \mathcal{C}_i(\nu_\epsilon)$ ($i=1,2$) with
\[
l_2 = X_2(\alpha^\prime_2) < X_3(z_3) < X_1(\alpha^\prime_1) = a + \epsilon
\]
for any $z_3 \leq \mathcal{C}_3(\nu_\epsilon)$, contradicting (RTM).
The second case is that $\mathcal{C}_1(\nu_\epsilon) < \alpha_1$
and $\mathcal{C}_2(\nu_\epsilon) \leq \alpha_2$. Necessarily we then have
$\mathcal{C}_3(\nu_\epsilon) > \beta_3$.
Now, $X_i(z_i) \geq l_2$ for $z_i \leq \alpha_i$, $i=1,2$ and so
(RTM) implies that $X_3(z_3) \geq l_2$ as well for $z_3 \leq \mathcal{C}_3(\nu_\epsilon)$.
In addition, (RTM) and $\mathcal{C}_3(\nu_\epsilon) > \beta_3$ imply that
\[
\mathcal{C}_2(\nu_\epsilon) \geq \inf\{ t>0: X_2(t) = l_2 \}
\]
(otherwise $X_3(\beta_3) < X_i(z_i)$ for $z_i \leq \mathcal{C}_i(\nu_\epsilon)$, $i=1,2$).
So, both $X_2$ and $X_3$ have attained $l_2$
and then stayed above it for a positive amount of time.
But, by Proposition 5 in \cite{mandelbaum1987cma}, this event that
``the lower envelopes of $X_2$ and $X_3$ are simultaneously flat'' has probability zero.
The final case, $\mathcal{C}_1(\nu_\epsilon) \leq \alpha_1$ and $\mathcal{C}_2(\nu_\epsilon) \geq \alpha_2$ (or the same with the labels $1$ and $2$ exchanged),
has two subcases, $\mathcal{C}_3(\nu_\epsilon) \leq \beta_3$ and $\mathcal{C}_3(\nu_\epsilon) > \beta_3$
-- both can be eliminated by the methods above. The only
remaining possibility is that $\mathcal{C}_i(\nu_\epsilon) = \alpha_i$ for $i = 1,2$
and $\mathcal{C}_3(\nu_\epsilon) = \beta_3$.
The discussion above tells us how to define ${\mathcal{C}^\star}$ -- if $X_1(0) < X_2(0) \leq X_3(0)$
under some labelling, then we just alternate the one-sided construction from Lemma
\ref{l:mandelbaumstrat} repeatedly to give a strategy satisfying (C1) -- (C3).
If $X_1(0) = X_2(0) = X_3(0) = a \in (0,1)$, take $0< \epsilon < a \wedge (1-a)$
and define ${\mathcal{C}^\star}(\nu_u)$, $0 < u \leq \epsilon$ via the construction above.
Now, $\nu_u$ is only left continuous, so we have yet to define ${\mathcal{C}^\star}$ on
the stochastic intervals $(\nu_u, \nu_{u+}]$, $u \leq \epsilon$. But, this is easily done because
$X^{\mathcal{C}^\star}(\nu_u)$ has exactly two components equal and so we can again use the one-sided construction. We define ${\mathcal{C}^\star}$ on $(\nu_\epsilon, \tau^{\mathcal{C}^\star}]$ similarly.
The properties (C1) and (C2) are readily verified. To confirm (C3), we
first observe that ${\mathcal{C}^\star}$ satisfies (RTM). But (RTM)
gives us almost sure uniqueness of the paths of ${\mathcal{C}^\star}$. It follows that
our definition of ${\mathcal{C}^\star}$ does not depend on $\epsilon$.
The second observation is that $\nu_u \to 0$ as $u \to 0$. As a consequence,
for $\eta \in {\mathbb R}^3_+$ and $\delta > 0$,
\begin{eqnarray*}
\{{\mathcal{C}^\star}(t) \preceq \eta \} &=& \{{\mathcal{C}^\star}(t) \preceq \eta, \nu_u < \delta~\mathrm{for~some}~u < \epsilon \} \\
&=& \bigcup_{q}\{{\mathcal{C}^\star}(t) \preceq \eta, \nu_q < \delta\},
\end{eqnarray*}
where the union is over rational numbers $0 < q < \epsilon$. Using the fact that
${\mathcal F}$ is complete,
\[
\{{\mathcal{C}^\star}(t) \preceq \eta, \nu_q < \delta\} \in {\mathcal F}(\eta_1 + \delta, \eta_2 + \delta, \eta_3 + \delta).
\]
From this we conclude that $\{{\mathcal{C}^\star}(t) \preceq \eta\} \in {\mathcal F}(\eta)$ because ${\mathcal F}$ is right continuous. This confirms (C3).
\section{$X^{\mathcal{C}^\star}$ as a doubly perturbed diffusion}
We now turn our attention to the optimally controlled process $X^{\mathcal{C}^\star}$.
For convenience, we will work with the minimum
\[
I_t \overset{\mathrm{def}}{=} X^{\mathcal{C}^\star}_1(t) \wedge X^{\mathcal{C}^\star}_2(t)\wedge X^{\mathcal{C}^\star}_3(t),
\]
maximum
\[
S_t \overset{\mathrm{def}}{=} X^{\mathcal{C}^\star}_1(t) \vee X^{\mathcal{C}^\star}_2(t)\vee X^{\mathcal{C}^\star}_3(t),
\]
and middle value
\[
M_t \overset{\mathrm{def}}{=} (X^{\mathcal{C}^\star}_1(t) \vee X^{\mathcal{C}^\star}_2(t)) \wedge (X^{\mathcal{C}^\star}_1(t) \vee X^{\mathcal{C}^\star}_3(t)) \wedge (X^{\mathcal{C}^\star}_2(t) \vee X^{\mathcal{C}^\star}_3(t)), t \geq 0
\]
of the components of $X^{\mathcal{C}^\star}$ (so, if $X^{\mathcal{C}^\star}_1(t) \leq X^{\mathcal{C}^\star}_2(t) \leq X^{\mathcal{C}^\star}_3(t)$, then $I_t = X^{\mathcal{C}^\star}_1(t), M_t = X^{\mathcal{C}^\star}_2(t), S_t = X^{\mathcal{C}^\star}_3(t)$).
There is no ambiguity when the values of the components are equal since we are not formally
\emph{identifying} $I_t$, $M_t$ and $S_t$ with a particular component of $X^{\mathcal{C}^\star}$.
Clearly,
$M$ behaves as an Ito diffusion solving \eqref{e:itosde} away from
the extrema $I$ and $S$, while at the extrema it experiences a perturbation.
This behaviour is reminiscent of \emph{doubly perturbed Brownian motion},
which is defined as the (pathwise unique) solution $(X^\prime_t; t \geq 0)$
of the equation
\begin{equation*}\label{e:DPBM}
X^\prime_t = B^\prime_t + \alpha \sup_{s \leq t} X^\prime_s + \beta \inf_{s \leq t} X^\prime_s,
\end{equation*}
where $\alpha, \beta < 1$ and $(B^\prime_t; t \geq 0)$ is a Brownian motion
starting from the origin. This process was introduced by Le Gall and Yor
in \cite{le1986excursions}; the reader may consult the survey \cite{perman1997pbm}
and introduction of \cite{Chaumont2000219} for further details. In \S{2} of \cite{Chaumont2000219}, this definition is
generalised to accommodate non-zero initial values for the maximum
and minimum processes in the obvious way -- if $i_0, s_0 \geq 0$,
we take
\[
X^\prime_t = B^\prime_t + \alpha \left( \sup_{s \leq t} X^\prime_s - s_0\right)^+ - \beta
\left( \inf_{s \leq t} X^\prime_s + i_0\right)^-,
\]
i.e. $X^\prime$ hits $-i_0$ or $s_0$ before the perturbations begin. As usual
$a^+ = \max(a,0)$ and $a^- = \max(-a,0)$.
Our suspicion that $M$ should solve this
equation if the underlying processes are Brownian motions
is confirmed in the following
\begin{lemma}\label{l:MisaDPBM}
Suppose that $0 \leq i_0 \leq m_0 \leq s_0 \leq 1$ and $\sigma = 1$.
Then, under $\mathop{\mathbb{P}}\nolimits_{(i_0,m_0,s_0)}$,
there is a standard Brownian motion $(B^\prime_t; t \geq 0)$ (adapted to ${\mathcal F}^{\mathcal{C}^\star}$)
for which the process $M^\prime_t = M_t - m_0$, $t \geq 0$, satisfies
\[
M^\prime_t = B^\prime_t - \left(\sup_{s \leq t} M^\prime_s - s_0^\prime \right)^+
+ \left(\inf_{s \leq t} M^\prime_s + i_0^\prime \right)^-,
\]
where $i_0^\prime = m_0 - i_0$ and $ s_0^\prime = s_0 - m_0$. In other words,
$M$ is a doubly perturbed Brownian motion with parameters $\alpha = \beta = -1$.
\end{lemma}
\begin{proof}
The multiparameter martingale $( X_1(z_1) + X_2(z_2) + X_3(z_3); z \in {\mathbb R}^3_+)$
is bounded and right continuous. Hence, Theorem \ref{t:MPCSM} implies that
\[
\xi_t \overset{\mathrm{def}}{=} X^{\mathcal{C}^\star}_1(t) + X^{\mathcal{C}^\star}_2(t) + X^{\mathcal{C}^\star}_3(t), t \geq 0
\]
is a continuous (single parameter) martingale with respect to the
filtration ${\mathcal F}^{\mathcal{C}^\star}$. But, the $X_i$ are independent Brownian
motions and so the same argument applies to the multiparameter martingale
\[
\left( (X_1(z_1) + X_2(z_2) + X_3(z_3))^2 - (z_1 + z_2 + z_3); z \in {\mathbb R}^3_+\right),
\]
i.e. $\xi^2_t - t$ is a martingale. It follows that $(\xi_t; t \geq 0)$ is a
Brownian motion with $\xi_0 = i_0 + m_0 +s_0$ and we can
take $B^\prime = \xi - (i_0 + m_0 +s_0)$.
Now, ${\mathcal{C}^\star}$ always ``runs $M$'' away from the extrema $I$ and $S$ of
$X^{\mathcal{C}^\star}$ and so it is no surprise that
\[
I_t = \inf_{s \leq t} M_s \wedge i_0, \; S_t = \sup_{s \leq t} M_s \vee s_0,
\]
relationships which can be proved using the arguments of section \ref{s:stratexists}.
It follows that
\[
M^\prime_t = M_t - m_0 = \xi_t - m_0 - S_t - I_t = B^\prime_t
- \sup_{s \leq t} M_s \vee s_0 + s_0 - \inf_{s \leq t} M_s \wedge i_0 + i_0.
\]
The result now follows by noting that for real $a$ and $b$
we have $a \wedge b - b = -(a - b)^-$
and $a \vee b - b = (a - b)^+$.
\end{proof}
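As a discrete sanity check of this bookkeeping, one can run three independent Gaussian walks under ``run the middle'' (ignoring absorption at $0$ and $1$) and verify that $I_t + M_t + S_t$ equals the sum of the controlled walks, while $I$ and $S$ track, up to one step of discretisation error, the running extrema of $M$ capped at $i_0$ and $s_0$. The step size and tolerances below are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(2)
i0, m0, s0 = 0.3, 0.5, 0.8           # initial minimum, middle and maximum
val = np.array([i0, m0, s0])
inf_M = sup_M = m0                    # running extrema of the middle value
err_I = err_S = 0.0                   # worst deviation from the lemma's relations

for _ in range(5000):
    mid = np.argsort(val)[1]          # index of the middle component
    val[mid] += 0.02 * rng.standard_normal()   # only the middle is run
    I, M, S = np.sort(val)
    inf_M, sup_M = min(inf_M, M), max(sup_M, M)
    err_I = max(err_I, abs(I - min(inf_M, i0)))
    err_S = max(err_S, abs(S - max(sup_M, s0)))
```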
Lemma \ref{l:MisaDPBM} is relevant because $\tau^\straTopt$ is precisely
the time taken for the doubly perturbed Brownian motion $M$ to exit
the interval $(0,1)$. In particular,
the expression we find for the Laplace transform
$\hat v_r(x)$ can be recovered from Theorems 4 and 5 in
Chaumont and Doney \cite{chaumontdoney00}.
We have so far assumed that $\sigma = 1$ and have yet to say anything about more
general ``perturbed diffusion processes''.
There are several papers that
consider this problem. Doney and Zhang \cite{doneyzhang05} consider the existence and
uniqueness of diffusions perturbed at their maximum. More recently,
Luo \cite{luo2009} has shown that solutions to
\begin{equation}\label{e:DPDP}
X^\prime_t = \int_0^t \mu(s,X^\prime_s)ds + \int_0^t \sigma(s,X^\prime_s)dB^\prime_s + \alpha \sup_{s \leq t} X^\prime_s + \beta \inf_{s \leq t}X^\prime_s,
\end{equation}
exist and are unique, but only in the case that
$|\alpha| + |\beta| < 1$. A more general perturbed process is considered in
\cite{lanyingyong2009} but similar restrictions on $\alpha$ and $\beta$ apply.
That is, there are no existence and uniqueness results for doubly
perturbed diffusions which cover our choice of $\alpha$ and $\beta$,
and less still for the Laplace transform of the distribution of
the time taken to exit an interval.
This is where our results seem to contribute something new. Lemma \ref{l:MisaDPBM}
generalises easily to continuous $\sigma > 0$, and this, combined with
the other results in this paper, lets us see that if
$\mu$ is bounded and Borel measurable and $\sigma > 0$ is continuous, then there is a solution to
\[
M^\prime_t = \int_0^t \mu(M^\prime_s) ds + \int_0^t \sigma(M^\prime_s) dB^\prime_s - \sup_{s \leq t} M^\prime_s - \inf_{s \leq t}M^\prime_s.
\]
Furthermore, we can compute the Laplace transform of the distribution
of the time taken for any solution of
this equation to exit any interval $(-a,b)$ when $\mu$ is zero.
\section{Concluding remarks and future work}
\subsection{Majority decisions of $2k+1$ diffusions and veto voting}
The problem that we have solved has natural generalisations in which there are $m$ diffusions instead of the three that we have considered.
In particular, one might ask for the majority decision of an odd
number of `diffusive voters' $(X_i(t); t \geq 0)$, $i = 1,\ldots, m$.
Again, we believe that the optimal strategy is to ``run the middle''. In other words,
if $m = 2k+1$,
and
\[
X^{\mathcal{C}^\star}_1(t) \leq \ldots \leq X^{\mathcal{C}^\star}_k(t) < X^{\mathcal{C}^\star}_{k+1}(t)
< X^{\mathcal{C}^\star}_{k+2}(t) \leq \ldots \leq X^{\mathcal{C}^\star}_m(t)
\]
then $\mathcal{C}^\star_{k+1}$ should increase at unit rate
until $X^{\mathcal{C}^\star}_{k+1}$ hits either $X^{\mathcal{C}^\star}_k(t)$ or $X^{\mathcal{C}^\star}_{k+2}(t)$.
This prescribes that until then, all other components of $X^{\mathcal{C}^\star}$ are constant.
A special case of majority voting is `veto voting', where
we have an arbitrary number $m^\prime > 0$ of diffusions,
and declare a negative decision if at least $k \leq m^\prime$ of them get
absorbed at the lower boundary (otherwise no veto occurs and a positive decision is made). To see that this is a majority voting problem, suppose that there is no veto
if the majority of voters return positive decisions (i.e. $2k < m^\prime$).
This is equivalent to asking for a majority of $m = 2(m^\prime-k)+1$
diffusive voters, with $m^\prime+1-2k$ of them beginning in a state of
absorption at the origin. The case $2k \geq m^\prime$ admits a similar
majority voting description and in particular, our conjecture for veto voting
is that if
\[
X^{\mathcal{C}^\star}_1(t) \leq \ldots \leq X^{\mathcal{C}^\star}_{k-1}(t) < X^{\mathcal{C}^\star}_{k}(t)
< X^{\mathcal{C}^\star}_{k+1}(t) \leq \ldots \leq X^{\mathcal{C}^\star}_{m^\prime}(t)
\]
then $\mathcal{C}^\star_{k}$ should increase at unit rate
until $X^{\mathcal{C}^\star}_{k}$ hits either $X^{\mathcal{C}^\star}_{k-1}(t)$ or $X^{\mathcal{C}^\star}_{k+1}(t)$.
In other words, we ``run the component with the $k^{\mathrm{th}}$ order statistic''.
The extreme of this is true veto voting in which a single diffusion being
absorbed at zero will veto the others.
This is the case $k=1$, and the conjecture is that we should
always ``run the minimum'' of the diffusions.
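The counting behind the reduction from veto voting to majority voting can be checked exhaustively for small electorates. In the sketch below (function names are ours), the phantom voters are the $m - m^\prime$ extra diffusions that start absorbed at the origin and therefore always vote negatively.

```python
def veto_decision(neg, k):
    """Positive overall decision iff fewer than k of the m' voters veto."""
    return neg < k

def majority_with_phantoms(neg, m_prime, k):
    """Majority decision of m = 2(m'-k)+1 voters: the m' real voters
    plus m - m' phantoms absorbed at the origin (permanently negative)."""
    m = 2 * (m_prime - k) + 1
    positives = m_prime - neg         # phantoms contribute no positive votes
    return positives >= (m + 1) // 2  # strict majority of the m voters
```

Sweeping all vote counts for small $m^\prime$ and $k$ with $2k < m^\prime$ confirms that the two decisions agree.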
One might also consider diffusions which obey different stochastic
differential equations. We have found an implicit equation for the
switching boundaries in the optimal strategy for the $m^\prime = 2$, $k = 1$ `veto voting' problem by solving a free boundary problem. However, we have no
conjecture for the general solution.
\subsection{Recursive majority revisited}\label{ss:rmtrevisited}
To close, we return to the discrete recursive majority model that motivated us originally (see discussion in the introduction). Recall that $r_n$ denotes
the expected cost of the optimal strategy for the $n$ layer tree.
The best lower bound for the limit $\gamma = \lim_{n\to \infty} r_n^{1/n}$
in the literature\footnote{It has been communicated to us that Oded Schramm
and Mark Braverman improved this bound to 2.28 but did not publish details.} is
$\frac{9}{4}$, which one arrives at by computing the Fourier coefficients
of the recursive majority function and applying either
an equality due to O'Donnell and Servedio
(see \S{3} of \cite{peres2007random}) or Theorem 1.8 of \cite{schramm2005quantitative}.
But, numerics suggest $\gamma \approx 2.45$, leaving a big gap. The best upper bound
known to us is $\gamma \leq 2.472$.
In the introduction, we hinted at a continuous approximation to the discrete tree.
What we had in mind was to replace each of the Bernoulli random variables
on the leaves with a Brownian motion starting from $p$. These
Brownian motions are absorbed at the endpoints of $(0,1)$ and scaled so that the expected absorption time is one.
As with the diffusion model treated in this paper, the observer is billed
for the time they spend running each Brownian motion.
Let $R_n$ denote the least expected (time) cost for the Brownian tree.
In this paper (see Remark \ref{r:Etau}), we have shown
\[
R_1(p) = -\frac{6}{p(1-p)}\left( p (1-p) + p^2\ln(p) + (1-p)^2\ln(1-p)\right),
\]
so $R_1 \leq r_1 = 2(1 + p(1-p))$.
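The inequality $R_1 \leq r_1$ is easy to check numerically; the following sketch (illustrative only, not part of any proof) samples both cost functions on a grid over $(0,1)$:

```python
import math

def R1(p):
    # Least expected time cost of the one-layer Brownian tree (Remark r:Etau).
    return -6.0 / (p * (1.0 - p)) * (
        p * (1.0 - p) + p**2 * math.log(p) + (1.0 - p) ** 2 * math.log(1.0 - p)
    )

def r1(p):
    # Optimal expected cost of the one-layer discrete tree.
    return 2.0 * (1.0 + p * (1.0 - p))

# R1 <= r1 across a grid in the open interval (0, 1).
assert all(R1(p) <= r1(p) for p in (i / 200.0 for i in range(1, 200)))
```

For instance, at $p=1/2$ one gets $R_1(1/2)=12\ln 2-6\approx 2.318$, against $r_1(1/2)=5/2$.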
Now, any strategy for the discrete model can also be used in this Brownian case and so
$R_n$ is not greater than $r_n$.
It follows that
\[
\gamma \geq \limsup_{n\to \infty} R_n^{1/n},
\]
from which we conclude that studying the Brownian tree may
help give a lower bound for $\gamma$.
Often, the Brownian version of a difficult discrete problem is easier to solve because we have the heavy machinery of stochastic calculus
at our disposal. But, we concede that there is no particular
reason to think that the $n$ layer Brownian model
may be more tractable than the discrete counterpart. Indeed,
while we have treated the $n=1$ case in this paper, we are unable
even to give a conjecture on the $n=2$ optimal strategy.
Still, we might ask, even if it is
not possible to determine the optimal strategy,
can we say anything about the asymptotics of
the expected cost $R_n$, and in doing so sharpen the bound
on $\gamma$? We have not, for example, been able to prove that
$R_n^{1/n}$ is eventually monotone in $n$. Nor do we have the
sub-multiplicative structure to guarantee that
$\Gamma = \lim_{n \to \infty} R_n^{1/n}$ even exists.
If the limit $\Gamma$ does exist, is it equal to $\gamma$? One is
tempted to guess affirmatively but it is possible that the optimal
strategy runs an exponentially growing number of leaf Brownian
motions for very short time, leading
to $\Gamma < \gamma$. To us at least, this seems a tough question to answer.
\section{\label{sec:General_disks}Continuous growth dynamics in cylindrical
geometry}
\subsection{Kinematics}
We consider a cylindrical tube, consisting of an incompressible isotropic hyperelastic material, the inner wall of which is attached
to a fixed solid nucleus, with the outer wall unconstrained (see Figure \ref{fig:Kinematics_general}). We restrict to growth and deformations only in the cross section, such that the cylindrical geometry is always maintained and there is no axial strain. Moreover, we assume that there are no external forces, so that any deformation is caused purely by growth and the elastic response.
\begin{figure}[htpb]
\centering\includegraphics[width=0.7\textwidth]{growing_cylinder_clean}
\caption{\label{fig:Kinematics_general}Sketch of kinematic setup.}
\end{figure}
Geometrically, we work in a planar polar coordinate basis $\left\{ \mathbf{e}^{R},\mathbf{e}^{\theta}\right\} $ (the same basis vectors apply to both initial and current configurations), in which the deformation can be described by the map $\mathbf{x}:\mathcal{B}_0\to\mathcal{B}_t$ given by:
\begin{equation}
\mathbf{x}=r\left(R^{0}\right)\mathbf{e}^{R}\:.\label{eq:deformation_map_disk}
\end{equation}
For this map, the deformation gradient is
\begin{equation}
\mathbf{F}=r'\left(R^{0}\right)\mathbf{e}^{R}\otimes\mathbf{e}^{R}+\frac{r}{R^{0}}\mathbf{e}^{\theta}\otimes\mathbf{e}^{\theta}.
\end{equation}
The elastic deformation gradient takes the form
\begin{equation}
\mathbf{A}=\alpha^{R}\mathbf{e}^{R}\otimes\mathbf{e}^{R}+\alpha^{\theta}\mathbf{e}^{\theta}\otimes\mathbf{e}^{\theta}.
\end{equation}
Incompressibility requires $\det\mathbf{A}=1$; we thus define
$\alpha:=\alpha^{\theta}$, so that $\alpha^{R}=\alpha^{-1}$. We assume a diagonal growth tensor
\begin{equation}
\mathbf{G}=\gamma^{R}\mathbf{e}^{R}\otimes\mathbf{e}^{R}+\gamma^{\theta}\mathbf{e}^{\theta}\otimes\mathbf{e}^{\theta},
\end{equation}
where the difference between radial growth ($\gamma^R>1$) and circumferential growth ($\gamma^\theta>1$) is shown schematically in Figure \ref{fig:growth_types}.
In matrix form (with the basis $\left\{ \mathbf{e}^{R},\mathbf{e}^{\theta}\right\} $ implied), we have
\begin{equation}
\mathbf{F}=\begin{pmatrix}\frac{\mathrm{d}r}{\mathrm{d}R^{0}} & 0\\
0 & \frac{r}{R^{0}}
\end{pmatrix}\:,\qquad\mathbf{A}=\begin{pmatrix}\alpha^{-1} & 0\\
0 & \alpha
\end{pmatrix}\:,\qquad\mathbf{G}=\begin{pmatrix}\gamma^{R} & 0\\
0 & \gamma^{\theta}
\end{pmatrix}\:.
\end{equation}
\begin{figure}[htpb]
\centering\includegraphics{growth_types}
\caption{\label{fig:growth_types}Illustration of isotropic and anisotropic
growth. }
\end{figure}
In the initial (stress-free) reference configuration $\mathcal{B}_{0}$, the inner
cylinder wall is located at $R^{0}=A_{0}$ and the outer wall is located
at $R^{0}=B_{0}$. From the morphoelastic decomposition $\mathbf{F}=\mathbf{AG}$,
we find $r'=\gamma^{R}/\alpha$ and $r/R^{0}=\alpha\gamma^{\theta}$.
By eliminating $\alpha$, we obtain
\begin{equation}
r\left(R^{0}\right)r'\left(R^{0}\right)=\gamma^{R}\left(R^{0}\right)\gamma^{\theta}\left(R^{0}\right)R^{0}.\label{eq:bvp-kinematics}
\end{equation}
Imposing the boundary condition $r\left(A_{0}\right)=A_{0}$, due to
the unmoving solid nucleus, we
can integrate \eqref{eq:bvp-kinematics} as
\begin{equation}
r=\sqrt{A_{0}^{2}+2\int_{A_{0}}^{R^{0}}\!\!\!\gamma^{R}(\tilde{R})\gamma^{\theta}(\tilde{R})\tilde{R}\ \mathrm{d}\tilde{R}}.\label{eq:radial_map_general}
\end{equation}
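For spatially constant growth components the integral in \eqref{eq:radial_map_general} is elementary, giving $r=\sqrt{A_{0}^{2}+\gamma^{R}\gamma^{\theta}\left[\left(R^{0}\right)^{2}-A_{0}^{2}\right]}$. A short numerical sketch (illustrative parameter values) confirms this against direct quadrature:

```python
import math

A0, gR, gTheta = 1.0, 1.2, 1.4  # illustrative constant growth values

def r_exact(R):
    # Closed form of the radial map for constant growth.
    return math.sqrt(A0**2 + gR * gTheta * (R**2 - A0**2))

def r_quad(R, n=10000):
    # Direct trapezoidal quadrature of the integral in the radial map.
    h = (R - A0) / n
    f = lambda x: gR * gTheta * x
    total = 0.5 * (f(A0) + f(R)) + sum(f(A0 + i * h) for i in range(1, n))
    return math.sqrt(A0**2 + 2.0 * h * total)

assert abs(r_quad(2.0) - r_exact(2.0)) < 1e-9   # quadrature matches closed form
assert abs(r_exact(A0) - A0) < 1e-12            # fixed inner boundary r(A0) = A0
```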
\subsection{Mechanics}
Given that all deformations are diagonal in the coordinate basis considered
here, the Cauchy stress is also diagonal
\begin{equation}
\mathbf{T}=T^{RR}\mathbf{e}^{R}\otimes\mathbf{e}^{R}+T^{\theta\theta}\mathbf{e}^{\theta}\otimes\mathbf{e}^{\theta}.
\end{equation}
Let $W\left(\alpha^{R},\alpha^{\theta}\right)$ be the strain-energy density, which relates to the Cauchy stress tensor by $\mathbf{T} = \mathbf{A}W_\mathbf{A} - p\mathbf{1}$, where $p$ is the Lagrange multiplier enforcing incompressibility. In components, this reads
\begin{equation}
T^{RR}=\alpha^{R}\frac{\partial W}{\partial\alpha^{R}}-p\:,\qquad T^{\theta\theta}=\alpha^{\theta}\frac{\partial W}{\partial\alpha^{\theta}}-p\:.
\end{equation}
With no external loads, mechanical equilibrium requires $\text{div }\mathbf{T}=0$, which takes the form
\begin{equation}
\frac{\partial T^{RR}}{\partial r}=\frac{T^{\theta\theta}-T^{RR}}{r}.\label{eq:linear_momentum_balance}
\end{equation}
Defining $\widehat{W}\left(\alpha\right):=W\left(\alpha^{-1},\alpha\right)$, we have
\begin{equation}
T^{\theta\theta}-T^{RR}=\alpha\widehat{W}'(\alpha).
\end{equation}
In this paper we restrict analysis to a neo-Hookean strain-energy density
\begin{equation}
\widehat{W}\left(\alpha\right)=\frac{\mu}{2}\left(\alpha^{2}+\alpha^{-2}-2\right)\:,\label{eq:neo_Hookean}
\end{equation}
for which \eqref{eq:linear_momentum_balance} becomes
\begin{equation}
\frac{\mathrm{d}T^{RR}}{\mathrm{d}R^{0}} =\frac{2\mu\gamma^{R}}{R^{0}\gamma^{\theta}}\left[1-\frac{\left(R^{0}\right)^{4}\left(\gamma^{\theta}\right)^{4}}{r^{4}}\right].
\label{eq:bvp-mechanics}
\end{equation}
Along with \eqref{eq:bvp-mechanics} we impose $T^{RR}\left(B_{0}\right)=0$, i.e. the outer edge is stress-free. Equations
\eqref{eq:bvp-kinematics} and \eqref{eq:bvp-mechanics}, along with the boundary condition $T^{RR}\left(B_{0}\right)=0$, completely determine the deformation and stress state. Due to the fixed inner boundary condition, for a given growth tensor, \eqref{eq:bvp-kinematics} can be integrated separately, i.e. the deformation is determined independently of the stress, and the radial Cauchy stress is then determined by integrating \eqref{eq:bvp-mechanics}.
Once the radial stress component $T^{RR}$ is determined, the
circumferential component satisfies
\begin{equation}
T^{\theta\theta}=T^{RR}+\frac{2\mu r^{2}}{\left(R^{0}\right)^{2}\left(\gamma^{\theta}\right)^{2}}\left[1-\frac{\left(R^{0}\right)^{4}\left(\gamma^{\theta}\right)^{4}}{r^{4}}\right]\:.\label{eq:t2_general}
\end{equation}
Note also that for constant $\gamma^R$ and $\gamma^\theta$, these integrals may be performed analytically, giving explicit expressions for the stress and deformation in terms of the growth. As we show later, the same holds when extending from one layer to multiple layers; if the growth in each layer is constant, the stress components may be written explicitly. It is this fact that we exploit below in formulating a discretized growth dynamics. This is the main motivating reason for the fixed core geometry we consider. Under different boundary conditions, the deformation and stress would be coupled, requiring for instance a root finding exercise to determine the outer radius for which the stress boundary condition is satisfied. In such a case, the framework below applies at the expense of added computational complexity.
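As a concrete illustration, radial equilibrium can be verified numerically for a single layer with constant growth. The sketch below (illustrative parameter values, with trapezoidal quadrature standing in for the closed-form integrals) integrates \eqref{eq:bvp-mechanics} inward from the stress-free outer wall and checks \eqref{eq:linear_momentum_balance} by finite differences:

```python
import math

mu, A0, B0 = 1.0, 1.0, 2.0
gR, gT = 1.1, 1.3  # illustrative constant growth components

def r_of(R):
    # Deformed radius for constant growth (closed-form kinematics).
    return math.sqrt(A0**2 + gR * gT * (R**2 - A0**2))

def dTRR_dR(R):
    # Right-hand side of the radial stress ODE (eq. bvp-mechanics).
    return 2.0 * mu * gR / (R * gT) * (1.0 - (R * gT)**4 / r_of(R)**4)

def TRR(R, n=20000):
    # Trapezoidal integration from the stress-free outer wall, T^{RR}(B0) = 0.
    h = (R - B0) / n
    total = 0.5 * (dTRR_dR(B0) + dTRR_dR(R))
    total += sum(dTRR_dR(B0 + i * h) for i in range(1, n))
    return h * total

def TTT(R):
    # Circumferential stress from eq. (t2_general).
    return TRR(R) + 2.0 * mu * r_of(R)**2 / (R * gT)**2 * (
        1.0 - (R * gT)**4 / r_of(R)**4)

assert abs(TRR(B0)) < 1e-12  # outer boundary is stress-free by construction
# Radial equilibrium dT^{RR}/dr = (T^{θθ} - T^{RR}) / r, checked by differences:
R, eps = 1.5, 1e-4
lhs = (TRR(R + eps) - TRR(R - eps)) / (r_of(R + eps) - r_of(R - eps))
rhs = (TTT(R) - TRR(R)) / r_of(R)
assert abs(lhs - rhs) < 1e-3
```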
\subsection{\label{subsec:Growth-law-general-setup}Growth law}
We now impose a homeostasis driven growth law of the form \eqref{eq1}. In the plane polar geometry, this takes the form
\begin{equation}
\begin{aligned}\dot{\gamma}^{R} & =\left\{ K^{RR}\left[T^{RR}-\left(T^{RR}\right)^{*}\right]+K^{R\theta}\left[T^{\theta\theta}-\left(T^{\theta\theta}\right)^{*}\right]\right\} \gamma^{R}\:,\\
\dot{\gamma}^{\theta} & =\left\{ K^{\theta R}\left[T^{RR}-\left(T^{RR}\right)^{*}\right]+K^{\theta\theta}\left[T^{\theta\theta}-\left(T^{\theta\theta}\right)^{*}\right]\right\} \gamma^{\theta}\:.
\end{aligned}
\label{eq:growth_dynamics_general}
\end{equation}
Here $K^{RR}:=\mathcal{K}^{RRRR}$, $K^{R\theta}:=\mathcal{K}^{RR\theta\theta}$,
$K^{\theta R}:=\mathcal{K}^{\theta\theta RR}$, $K^{\theta\theta}:=\mathcal{K}^{\theta\theta\theta\theta}$ are the only non-vanishing components of the fourth order tensor $\mathcal{\boldsymbol{K}}$, and are assumed to be constant in space and time.
\subsection{Discretisation approach.}
For given homeostatic stress values and components of $\mathcal{\boldsymbol{K}}$, the growth dynamics is fully defined, with the growth components evolving according to \eqref{eq:growth_dynamics_general}. Even in the simplified cylindrical geometry, this comprises a system of nonlinear partial differential equations. Moreover, viewing the dynamics as a discrete process is still complicated by the fact that at each time step updating the growth requires knowing the stress components, which requires integration of \eqref{eq:bvp-mechanics}, which requires integration of \eqref{eq:bvp-kinematics}, which cannot be done analytically for general spatially dependent $\gamma^R$ and $\gamma^\theta$.
However, as stated above, for constant $\gamma^R$ and $\gamma^\theta$, the integrals determining stress may be computed analytically. This suggests a discretization process whereby the annular domain is divided into discrete layers, each with constant growth, and such that the growth in each layer evolves according to averaged values of the stress. In this way, analytical expressions may be determined for both the stress and the average stress, and hence the dynamics is reduced to a set of ordinary differential equations for the growth components.
The inhomogeneity of the full model is replaced
by a piecewise homogeneous model. This preserves the key idea of inhomogeneity
(allowing, for instance, circumferential growth to be higher near
the nucleus than away from it), but is more analytically
tractable and allows for precise statements about the long-term dynamics, stability, and qualitative investigation such as the influence of radial versus circumferential stress on the growth dynamics.
\section{\label{sec:two_disks}Growth dynamics for 2-layer system.}
\subsection{Kinematics}
We first consider two elastic layers attached to a solid nucleus and in perfect mechanical contact at their interface. In the initial
reference configuration $\mathcal{B}_{0}$, the inner wall has the
radial coordinate $R^{0}=A_{0}$, the middle wall at $R^{0}=A_{1}$
and the outer wall at $R^{0}=A_{2}$. In the current configuration
$\mathcal{B}_{t}$, the same material points have coordinates $r\left(A_{0}\right)=A_{0}$,
$r\left(A_{1}\right)=a_{1}$ and $r\left(A_{2}\right)=a_{2}$ (see Figure \ref{fig:Kinematics_two_layers}).
\begin{figure}[htpb]
\includegraphics[width=10cm]{two_layers_kinematic_setup}\hfill{}
\caption{\label{fig:Kinematics_two_layers}Kinematic setup for the two-layer system.
The innermost layer is attached to an unmoving nucleus ($a_{0}=A_{0}$)
and the boundary condition at the outer layer is no pressure $T^{RR}\left(A_{2}\right)=0$. }
\end{figure}
We impose that in the reference configuration the two annular layers
enclose the same area $\pi\Delta^{2}$. The initial reference radii
of the two rings thus satisfy
\begin{equation}
\Delta^{2}=A_{2}^{2}-A_{1}^{2}=A_{1}^{2}-A_{0}^{2}\:.
\end{equation}
The deformation follows the same equations formulated in Section \ref{sec:General_disks}, but with piecewise homogeneous growth
\begin{equation}
\gamma\left(R^{0}\right)=\begin{cases}
\gamma_{1} & \text{if }A_{0}\leq R^{0}\leq A_{1}\\
\gamma_{2} & \text{if }A_{1}<R^{0}\leq A_{2}\:.
\end{cases}\label{eq:growth_piecewise_homogeneous}
\end{equation}
where $\gamma_1$ and $\gamma_2$ are constant. Note that our convention is to use subscripts to denote different layers and superscripts for the coordinate basis index. Here, we have imposed isotropic growth, i.e. $\gamma_{1}^{R}=\gamma_{1}^{\theta}=\gamma_{1}$
and $\gamma_{2}^{R}=\gamma_{2}^{\theta}=\gamma_{2}$. The same ideas apply for anisotropic growth, but this simplification reduces the dynamics to a 2D phase space for $\gamma_1$, $\gamma_2$. In principle, one could also have piecewise material properties and piecewise $\boldsymbol{K}$ values; however, our objective is to consider the dynamics in a reduced parameter space, hence the only distinction between the layers is the different growth rates.
The deformation in each layer comes from integrating \eqref{eq:radial_map_general}, subject to $r\left(A_{0}\right)=A_{0}$ and $r\left(A_{1}\right)=a_{1}$. We obtain
\begin{equation}
r\left(R^{0}\right)=\begin{cases}
r_{1}\left(R^{0}\right):=\sqrt{A_{0}^{2}+\gamma_{1}^{2}\left[\left(R^{0}\right)^{2}-A_{0}^{2}\right]} & \text{if }A_{0}\leq R^{0}\leq A_{1}\:,\\
r_{2}\left(R^{0}\right):=\sqrt{A_{0}^{2}+\gamma_{1}^{2}\Delta^{2}+\gamma_{2}^{2}\left[\left(R^{0}\right)^{2}-A_{1}^{2}\right]} & \text{if }A_{1}< R^{0}\leq A_{2}\:.
\end{cases}\label{eq:r_piecewise}
\end{equation}
Note that at $R^{0}=A_{1}$, $r$ is continuous but not differentiable.
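Both properties, continuity of $r$ at $R^{0}=A_{1}$ and the kink in $r'$, are easy to confirm numerically (a small sketch with illustrative values):

```python
import math

A0 = 1.0
Delta2 = 1.5                       # common reference area / pi of each layer
A1 = math.sqrt(A0**2 + Delta2)
g1, g2 = 1.3, 0.9                  # illustrative constant layer growths

def r1(R):
    # Deformation in the inner layer, A0 <= R <= A1.
    return math.sqrt(A0**2 + g1**2 * (R**2 - A0**2))

def r2(R):
    # Deformation in the outer layer, A1 < R <= A2.
    return math.sqrt(A0**2 + g1**2 * Delta2 + g2**2 * (R**2 - A1**2))

# r is continuous across the interface R = A1 ...
assert abs(r1(A1) - r2(A1)) < 1e-12
# ... but r' jumps there (a kink), since g1 != g2:
eps = 1e-6
slope_in = (r1(A1) - r1(A1 - eps)) / eps
slope_out = (r2(A1 + eps) - r2(A1)) / eps
assert abs(slope_in - slope_out) > 0.1
```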
\subsection{Mechanics}
The stress balance \eqref{eq:bvp-mechanics} determines the radial stress as
\begin{eqnarray}
&&T^{RR}\left(R^{0}\right)=\\
\nonumber &&\begin{cases}
T_{1}^{RR}\left(R^{0}\right):=T_{1}^{RR}\left(A_{1}\right)+\mu\int_{A_{1}}^{R^{0}}\frac{2}{\tilde{R}}\left(1-\frac{\tilde{R}^{4}\gamma_{1}^{4}}{r_{1}^{4}}\right)\mathrm{d}\tilde{R}, & R^{0}\in[A_{0},A_{1}]\\
T_{2}^{RR}\left(R^{0}\right):=\underbrace{T_{2}^{RR}\left(A_{2}\right)}_{0}+\mu\int_{A_{2}}^{R^{0}}\frac{2}{\tilde{R}}\left(1-\frac{\tilde{R}^{4}\gamma_{2}^{4}}{r_{2}^{4}}\right)\mathrm{d}\tilde{R}, & R^{0}\in[A_{1},A_{2}].
\end{cases}\label{eq:annulus_t1_piecewise11}
\end{eqnarray}
Continuity of traction at the interface fixes the constant $T_{1}^{RR}\left(A_{1}\right)=T_{2}^{RR}\left(A_{1}\right)$. From \eqref{eq:t2_general}, we then obtain the circumferential stress $T^{\theta\theta}\left(R^{0}\right)$:
\begin{eqnarray}
&& T^{\theta\theta}\left(R^{0}\right)=\\
\nonumber&&\begin{cases}
T_{1}^{\theta\theta}\left(R^{0}\right):=T_{1}^{RR}\left(R^{0}\right)+\mu\frac{2r_{1}^{2}}{\gamma_{1}^{2}\left(R^{0}\right)^{2}}\left[1-\frac{\left(R^{0}\right)^{4}\gamma_{1}^{4}}{r_{1}^{4}}\right], & R^{0}\in[A_{0},A_{1}]\\
T_{2}^{\theta\theta}\left(R^{0}\right):=T_{2}^{RR}\left(R^{0}\right)+\mu\frac{2r_{2}^{2}}{\gamma_{2}^{2}\left(R^{0}\right)^{2}}\left[1-\frac{\left(R^{0}\right)^{4}\gamma_{2}^{4}}{r_{2}^{4}}\right], & R^{0}\in[A_{1},A_{2}].
\end{cases}\label{eq:annulus_t2_piecewise22}
\end{eqnarray}
\begin{figure}[htpb]
\centering\includegraphics[width=1\textwidth]{stresses_static_2016}\hfill{}
\caption{\label{fig:stress_profiles-constant-gamma}Radial (top) and circumferential
(bottom) components of Cauchy stress for $A_{0}=1$, $A_{1}=\sqrt{5/2}$,
$A_{2}=2$, $\Delta=\sqrt{5/2}$, $\mu=1$, $\gamma_{2}=1$ and $\gamma_{1}$
as indicated. }
\end{figure}
The expressions $T_{1}^{RR}$ and $T_{2}^{RR}$ as well as $T_{1}^{\theta\theta}$
and $T_{2}^{\theta\theta}$ can be determined analytically as functions
of $A_{0}$, $A_{1}$, $A_{2}$, $\mu$, $\gamma_{1}$ and $\gamma_{2}$, though the exact expressions are long and have been suppressed here.
Sample stress profiles for varying values of $\gamma_1$ (with $\gamma_2=1$) are given in Figure \ref{fig:stress_profiles-constant-gamma}. With $\gamma_1>1$, the inner layer grows uniformly, hence its reference state is a uniformly expanded annulus; however, it is constrained by attachment to the core and to the ungrowing outer layer. Thus the inside of the inner layer is in radial tension (the inner edge is ``stretched'' radially to match the core), the outside is in radial compression, and the entire layer is in compression in the hoop direction. The outer layer, on the other hand, is forced to expand circumferentially to accommodate the growing inner layer and is in circumferential compression; this is balanced by a compression in the radial direction. The inverse effect occurs with $\gamma_1<1$.
\subsection{\label{subsec:Growth-law-2-layers-for-bif-diagram}Growth law}
We define the average stresses $\overline{T_{1}}$ and $\overline{T_{2}}$,
for both radial and circumferential stress components, as
\begin{equation}
\overline{T_{1}}=\frac{2}{\Delta^{2}}\int_{A_{0}}^{A_{1}}T_{1}\left(\tilde{R}\right)\tilde{R}\ \mathrm{d}\tilde{R}\,,\qquad\overline{T_{2}}=\frac{2}{\Delta^{2}}\int_{A_{1}}^{A_{2}}T_{2}\left(\tilde{R}\right)\tilde{R}\ \mathrm{d}\tilde{R}.\label{eq:stress_average_two}
\end{equation}
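The averaging operator in \eqref{eq:stress_average_two} is an area-weighted mean over the reference annulus. A minimal sketch (trapezoidal rule, with two test fields whose averages are known in closed form):

```python
def layer_average(T, Ra, Rb, n=4000):
    # bar T = 2/(Rb^2 - Ra^2) * integral_Ra^Rb T(R) R dR  (trapezoidal rule).
    h = (Rb - Ra) / n
    f = lambda R: T(R) * R
    total = 0.5 * (f(Ra) + f(Rb)) + sum(f(Ra + i * h) for i in range(1, n))
    return 2.0 * h * total / (Rb**2 - Ra**2)

# The average of a constant field is that constant ...
assert abs(layer_average(lambda R: 1.0, 1.0, 2.0) - 1.0) < 1e-12
# ... and for T(R) = R^2 the exact area-weighted mean is (Ra^2 + Rb^2)/2.
assert abs(layer_average(lambda R: R**2, 1.0, 2.0) - 2.5) < 1e-6
```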
Our approach is to modify the growth dynamics so that the (constant) growth in each layer evolves according to the averaged stress values. That is, we study the system
\begin{equation}
\begin{aligned}\dot{\gamma}_{1} & =\gamma_{1}\left\{ K^{RR}\left[\overline{T_{1}^{RR}}-\left(T_{1}^{RR}\right)^{*}\right]+K^{\theta\theta}\left[\overline{T_{1}^{\theta\theta}}-\left(T_{1}^{\theta\theta}\right)^{*}\right]\right\} \,\\
\dot{\gamma}_{2} & =\gamma_{2}\left\{ K^{RR}\left[\overline{T_{2}^{RR}}-\left(T_{2}^{RR}\right)^{*}\right]+K^{\theta\theta}\left[\overline{T_{2}^{\theta\theta}}-\left(T_{2}^{\theta\theta}\right)^{*}\right]\right\}.
\end{aligned}
\label{eq:annulus_growth_law}
\end{equation}
Note that the isotropic growth enforces $K^{RR}=K^{\theta R}$ and $K^{\theta\theta}=K^{R\theta}$,
hence there are only two (rather than four) growth rate constants $K^{RR}$ and
$K^{\theta\theta}$.
To further reduce the parameter space, we make the additional assumption that the homeostatic stress
values are equivalent in layers 1 and 2, that is
\begin{equation}
\left(T^{RR}\right)^{*}:=\left(T_{1}^{RR}\right)^{*}=\left(T_{2}^{RR}\right)^{*}\qquad\text{and}\qquad\left(T^{\theta\theta}\right)^{*}:=\left(T_{1}^{\theta\theta}\right)^{*}=\left(T_{2}^{\theta\theta}\right)^{*}\,.\label{eq:homeostatic-stress-equal-in-both-layers}
\end{equation}
We emphasize that while $\overline{T_{i}^{RR}}$ and $\overline{T_{i}^{\theta\theta}}$
for $i=1,2$ are averages over actual stresses according to \eqref{eq:stress_average_two},
the homeostatic values $\left(T_{i}^{RR}\right)^{*}$ and $\left(T_{i}^{\theta\theta}\right)^{*}$
for $i=1,2$ are prescribed values that may, but need not, correspond to averages of physically realizable stresses.
To facilitate the analysis, we rescale all stress quantities by a characteristic value $\sigma$, e.g. $\hat{T}^{RR}=T^{RR}/\sigma$, and rescale time as $\hat{t}=t\sigma K^{\theta\theta}$. We also introduce
\begin{equation}
\tilde{K}:=K^{RR}/K^{\theta\theta}\qquad\text{and}\qquad \hat{T}^{*}:=\tilde{K}\left(\hat{T}^{RR}\right)^{*}+\left(\hat{T}^{\theta\theta}\right)^{*}\,.\label{eq:ktilde-tstar}
\end{equation}
The parameter $\tilde{K}$ is a measure of anisotropy of the mechanical
feedback, i.e. a weighting of the contribution of radial vs. circumferential
stress to the (isotropic) growth response.
The rescaled growth law is then
\begin{equation}
\begin{aligned}\dot{\gamma}_{1} & =\gamma_{1}\left[\tilde{K}\overline{T_{1}^{RR}}+\overline{T_{1}^{\theta\theta}}-T^{*}\right]\\
\dot{\gamma}_{2} & =\gamma_{2}\left[\tilde{K}\overline{T_{2}^{RR}}+\overline{T_{2}^{\theta\theta}}-T^{*}\right].
\end{aligned}
\label{eq:dynamical_system_two_layers}
\end{equation}
Here we have redefined the overdot as the derivative with respect
to the rescaled time, and we have dropped all hats for notational convenience. Note that all stress averages depend nonlinearly
on $\gamma_{1}$ and $\gamma_{2}$, but not on the spatial coordinate
$R^{0}$, which has been integrated out.
\subsection{Stability analysis}
To investigate the behavior of the growth dynamics, we can now apply standard techniques of dynamical systems to \eqref{eq:dynamical_system_two_layers}; i.e. we seek equilibria satisfying
$\dot{\gamma}_{1}=0$ and $\dot{\gamma}_{2}=0$ and compute their stability. Let $\left\{ \gamma_{1}^{\text{eq}},\gamma_{2}^{\text{eq}}\right\} $ denote an equilibrium state. The nonlinear nature of the dependence of $\overline{T_{1}^{RR}}$, $\overline{T_{2}^{RR}}$, $\overline{T_{1}^{\theta\theta}}$
and $\overline{T_{2}^{\theta\theta}}$ on $\gamma_{1}$, $\gamma_{2}$
makes it difficult to compute analytically the number and
location of equilibrium states as a function of the parameters $\tilde{K}$
and $T^{*}$ and we shall use numerical methods to this end.
For a given equilibrium state, we then perform a linear stability analysis. Let $0<\varepsilon\ll1$ and expand as
\begin{equation}
\begin{aligned}\gamma_{1} & =\gamma_{1}^{\text{eq}}+\varepsilon\overline{\gamma}_{1}+\mathcal{O}\left(\varepsilon^{2}\right),\\
\gamma_{2} & =\gamma_{2}^{\text{eq}}+\varepsilon\overline{\gamma}_{2}+\mathcal{O}\left(\varepsilon^{2}\right).
\end{aligned}
\label{eq:linear-expansion-two-layers}
\end{equation}
Introducing $\boldsymbol{\gamma}=\left(\gamma_{1},\gamma_{2}\right)$
to describe the state of the system \eqref{eq:dynamical_system_two_layers},
its linearly expanded version (to order $\varepsilon$) takes the
form
\begin{equation}
\dot{\overline{\boldsymbol{\gamma}}}=\mathbf{J}\boldsymbol{\overline{\gamma}}
\end{equation}
where the Jacobian matrix has entries
\begin{equation}
J_{ij}=\left[\frac{\partial\dot{\gamma}_{i}}{\partial\gamma_{j}}\right]_{\boldsymbol{\gamma}=\boldsymbol{\gamma}^{\text{eq}}}.
\end{equation}
Stability is determined in the usual way by the form of eigenvalues of $\mathbf{J}$, which are the roots of the characteristic equation
\begin{equation}
0=(J_{11}-\lambda)(J_{22}-\lambda)-J_{12}J_{21}.
\end{equation}
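In practice we evaluate $\mathbf{J}$ numerically. The following sketch, using a toy planar system with a known equilibrium at $(1,1)$ standing in for \eqref{eq:dynamical_system_two_layers}, combines a finite-difference Jacobian with the eigenvalues obtained from the characteristic equation:

```python
def jacobian(f, x, eps=1e-6):
    # Central-difference Jacobian of f: R^2 -> R^2 at the point x.
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        fp, fm = f(xp), f(xm)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2.0 * eps)
    return J

def eigenvalues_2x2(J):
    # Roots of (J11 - l)(J22 - l) - J12 J21 = 0 by the quadratic formula.
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    disc = tr * tr - 4.0 * det
    root = abs(disc) ** 0.5
    if disc >= 0.0:
        return ((tr + root) / 2.0, (tr - root) / 2.0)
    return (complex(tr / 2.0, root / 2.0), complex(tr / 2.0, -root / 2.0))

# Toy growth-law-like system with equilibrium (1, 1); both eigenvalues of the
# Jacobian there are negative, so this equilibrium is a stable node.
f = lambda g: (g[0] * (1.0 - g[0] * g[1]), g[1] * (3.0 - g[0] - 2.0 * g[1]))
lams = eigenvalues_2x2(jacobian(f, [1.0, 1.0]))
assert all(complex(l).real < 0.0 for l in lams)
```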
\subsection{Bifurcation diagram}
The number of equilibrium states and their stability depend on the values of $\tilde{K}$ and $T^*$. In Figure \ref{fig:two_layered_bifurcation_diagram}(a) we present a phase diagram that shows four regions with distinct dynamical behavior. These can be summarized as follows:
\begin{itemize}
\item \textbf{Region I} has four equilibrium states, of which one is a stable node, two are saddles, and the fourth is either an
unstable node or an unstable focus.
\item \textbf{Region II} has four equilibrium states: two are saddles
and the other two are either stable nodes or a stable focus and a stable node. A Hopf bifurcation at the interface of Regions I
\& II transforms the unstable focus into a stable focus.
\item \textbf{Region III} has two equilibrium states, one of which is a
stable node, the other a saddle. At the interface between Regions II and III, a saddle-node bifurcation occurs that annihilates a stable node and
a saddle of Region II.
\item \textbf{Region IV} has no equilibrium states.
\end{itemize}
In Figure
\ref{fig:two_layered_bifurcation_diagram}(b) we show phase portraits for the selected points P1--P5. Nullclines are plotted as blue and green curves, illustrating the appearance and disappearance of equilibrium states as categorized above.
As is evident in Figure \ref{fig:two_layered_bifurcation_diagram}, there is a wealth of possible dynamical behavior exhibited in this system. That an idealized two-layer model with isotropic growth and equivalent homeostatic values in each layer has such a rich structure highlights a more generic complex nature of mechanically driven growth. Our intent is not to fully categorize the behavior; rather this system should be seen as a paradigm to illustrate complex dynamics. Nevertheless, several observations are in order.
One observation from the phase portraits in Figure \ref{fig:two_layered_bifurcation_diagram}(b) is that unbounded growth is not only possible but ``common'', at least in the sense that many parameter choices and initial conditions lead to trajectories for which $\gamma_i\to\infty$. Perhaps the most natural initial condition is to set $\gamma_1=\gamma_2=1$, which corresponds to letting the system evolve from an initial state with no growth. Examining the trajectories in Figure \ref{fig:two_layered_bifurcation_diagram}(b) shows that points P1 and P2 would not evolve towards the single stable state, but rather would grow without bound.
Another point of interest is that while regions I, II and III contain stable equilibria, the stable states in Regions I and III satisfy $\gamma_{1}^{\text{eq}}\gamma_{2}^{\text{eq}}<1$. These are equilibria for which one of the layers has lost mass (at least one of the $\gamma_i<1$). Growth in both layers requires both $\gamma_i>1$, and we find that such an equilibrium only exists in a small subset of Region II, shaded dark blue in Figure \ref{fig:two_layered_bifurcation_diagram}. We further see that $T^*<0$ in the dark blue region, and $\tilde{K}$ approximately in the range 10 to 17. This implies that in order for a stable equilibrium to exist where both layers have grown, the homeostatic stress must be compressive in one or both components, and the system must respond more strongly to radial than to circumferential stress.
\begin{figure}[htpb]
\centering\includegraphics[width=1\textwidth]{bifurcation_collage}
\caption{\label{fig:two_layered_bifurcation_diagram}(a) Bifurcation diagram
for two layered actively growing piecewise homogeneous system. (b)
Equilibrium states and their dynamical characterization. Parameter values were $A_0=1$, $A_1=1.562$, $A_2=1.970$.}
\end{figure}
\paragraph{Admissible versus inadmissible homeostatic values.}
In Figure \ref{fig:two_layered_bifurcation_diagram} we imposed the homeostatic
stress $T^*$ to be equal in each layer. Moreover, $T^*$ could take any value, and thus had no direct correspondence to a physically realizable stress state. We now define an {\it admissible homeostatic value} as the average over a stress field that can be physically realized with the given geometry and boundary conditions. Such an admissible homeostatic stress state derives from a homeostatic growth, i.e. a given growth field $\boldsymbol{\gamma}^{*}=\left(\gamma_{1}^{*},\gamma_{2}^{*}\right)^{T}$ defines a spatially dependent stress, and averaging according to \eqref{eq:stress_average_two} then gives admissible values for the homeostatic stress:
\begin{equation}
\overline{T_{i}^{RR}}\left(\boldsymbol{\gamma}^{*}\right)\qquad\text{and}\qquad\overline{T_{i}^{\theta\theta}}\left(\boldsymbol{\gamma}^{*}\right),\qquad i=1,2\,.\label{eq:homeostatic-compatibility-2-disks}
\end{equation}
An {\it inadmissible homeostatic value} is one that cannot be expressed as an average over an actual stress, i.e. there exists
no $\boldsymbol{\gamma}^{*}$ defining $\overline{\mathbf{T}}^{*}$.
\begin{figure}[htpb]
\includegraphics[width=0.99\textwidth]{phase_portrait_detailed}\hfill{}
\caption{\label{fig:spiral-dynamics-trajectories}Trajectories and layer sizes
for highly anisotropic growth law with admissible homeostatic state.
\textbf{(a)} Contours for $\dot{\gamma}_{1}=0$ and $\dot{\gamma}_{2}=0$
for the system \eqref{eq:dynamics-N2}. As can be
confirmed from the stream plots \textbf{(b)} and \textbf{(c)}, the
equilibria are one stable spiral, two saddles, and one stable node.
The saddle point P4 in (b) is the homeostatic equilibrium $\left(\gamma_{1}^{*},\gamma_{2}^{*}\right)$.
Parameters: $\mu=2$, $\Delta=\sqrt{3}$ ($A_{0}=1$, $A_{2}=\sqrt{7}$).
$\tilde{K}=23.5$. Homeostatic growth: $\gamma_{1}^{*}=5.867$, $\gamma_{2}^{*}=3$. }
\end{figure}
\paragraph{Growth law with admissible homeostatic values.}
To conclude our analysis of the two-layer system, we return to the same growth law, but for admissible homeostatic values. Due to the spatial inhomogeneity of the stress profile in the two-layer cylinder (see for instance Figure \ref{fig:stress_profiles-constant-gamma}), it is not possible to have equal homeostatic values in layers 1 and 2.
The growth law with admissible homeostatic values reads
\begin{equation}
\begin{aligned}\dot{\gamma}_{1} & =\gamma_{1}\left\{ \tilde{K}\left[\overline{T_{1}^{RR}}\left(\boldsymbol{\gamma}\right)-\overline{T_{1}^{RR}}\left(\boldsymbol{\gamma}^{*}\right)\right]+\left[\overline{T_{1}^{\theta\theta}}\left(\boldsymbol{\gamma}\right)-\overline{T_{1}^{\theta\theta}}\left(\boldsymbol{\gamma}^{*}\right)\right]\right\} \\
\dot{\gamma}_{2} & =\gamma_{2}\left\{ \tilde{K}\left[\overline{T_{2}^{RR}}\left(\boldsymbol{\gamma}\right)-\overline{T_{2}^{RR}}\left(\boldsymbol{\gamma}^{*}\right)\right]+\left[\overline{T_{2}^{\theta\theta}}\left(\boldsymbol{\gamma}\right)-\overline{T_{2}^{\theta\theta}}\left(\boldsymbol{\gamma}^{*}\right)\right]\right\} \,.
\end{aligned}
\label{eq:dynamics-N2}
\end{equation}
The phase space for this system is now inherently three dimensional, as the homeostatic stress values are defined by the two choices $\gamma_i^*$ as opposed to the single value $T^*$. Here we restrict our analysis to a single example, with $\gamma_{1}^{*}=5.867$, $\gamma_{2}^{*}=3$, and $\tilde{K}=23.5$, thus representing a preferred state defined by significant growth in each layer, and with strongly anisotropic growth dynamics due to the large value of $\tilde{K}$. The dynamics are presented in Figure \ref{fig:spiral-dynamics-trajectories}. The contour plot in Figure
\ref{fig:spiral-dynamics-trajectories}(a) shows that there are in
total four equilibrium states. The streamlines and trajectory
plots in Figure \ref{fig:spiral-dynamics-trajectories}(b) and (c)
reveal that the equilibria consist of a stable spiral, two saddles, and one
stable node. It is interesting to note that P4, which is the equilibrium state at which both $\gamma_{i}^{\text{eq}}=\gamma_i^*$, is unstable; that is, the system does not remain at the equilibrium state through which the homeostatic values were defined.
Included in Figure \ref{fig:spiral-dynamics-trajectories}(b) are three sample trajectories, with the size of each layer shown at different times, and illustrative of the variety of dynamical behavior. The green trajectory quickly settles to a stable state marked by significant resorption (both $\gamma_i<1$); the blue and red trajectories sit outside the basin of attraction of P1 and show an initial period of resorption followed by significant growth. The red trajectory is in the basin of attraction of the stable focus and thus oscillates between growth and decay as it approaches the stable point at P3, while the blue trajectory, just outside the basin of attraction, ultimately grows without bound, never reaching an equilibrium state.
\section{\label{sec:N_disks}Growth of a discrete $N$-layer system}
Next, we generalize the dynamical system of the previous
section from two to $N$ layers, where the growth is constant throughout each layer. If $N$ is sufficiently large, a system of $N$ layers can be used as a suitable spatial discretisation of a continuous growth profile on which precise statements can be obtained. In this case, we can generalize Equations \eqref{eq:dynamics-N2} to $N$ coupled ODEs. We will analyze the stability
of this system near a homeostatic equilibrium, and show to what
extent the results obtained for $N=2$ remain unchanged as the discretisation is refined
($N$ increases), which informs the stability of the continuous ($N\rightarrow\infty$)
system.
A major difference compared to the two-layer model
is the method to obtain homeostatic values. Previously,
homeostatic values were prescribed via the homeostatic growth values $\gamma_{1}^{*}$,
$\gamma_{2}^{*}$. In the present model, homeostatic values are obtained
by assuming the existence of a prescribed continuous homeostatic growth profile $\gamma^{*}\left(R^{0}\right)$.
The homeostatic values $\left\{ \gamma_{i}^{*}\right\} $ are then obtained
through local averaging of the prescribed profile $\gamma^{*}\left(R^{0}\right)$
over an interval by generalizing Equations \eqref{eq:stress_average_two}. These values are admissible by construction.
Since growth is
taken as constant in each layer, the stresses can be determined fully
analytically and a stability analysis can then be performed. The stability
analysis will inform under which conditions the dynamical system will either
relax to a homeostatic state after a small perturbation or lead to an instability.
\subsection{Kinematics}
\begin{figure}[htpb]\centering
\includegraphics[width=8cm]{N_layers}\caption{\label{fig:Kinematics_N_layers}Kinematic setup for an isotropically
growing $N$ layered system. Note that the discretization is chosen such that the areas of each layer are equal.}
\end{figure}
We consider $N$ perfectly connected annuli, separated by $N+1$ interfaces,
which in the initial reference configuration have the radial coordinate
values $\left\{ A_{0},A_{1},\ldots,A_{N}\right\} $ as sketched in Figure \ref{fig:Kinematics_N_layers}.
The $K$-th annulus is defined by $A_{K-1}\leq R^{0}\leq A_{K}$ for $K\in\left\{ 1,\ldots,N\right\} $.
We choose a particular discretization so that the area between layers, $\pi\Delta^{2}$, is constant:
\begin{equation}
A_{K}^{2}-A_{K-1}^{2}:=\Delta^{2}=\text{const.}
\end{equation}
We can write
$A_{K}$ explicitly as
\begin{equation}
A_{K}^{2}=A_{0}^{2}+K\Delta^{2}\,.\label{eq:Ak-explicit}
\end{equation}
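The equal-area construction can be sketched in a few lines; the parameter values below are those of the example discretisation used later in this section ($A_{0}=1$, $A_{N}=5$, $N=8$).

```python
import math

# Parameters matching the example discretisation of this section
A0, AN, N = 1.0, 5.0, 8
Delta2 = (AN**2 - A0**2) / N          # Delta^2: each annulus has area pi*Delta^2

# Interface radii from the explicit formula A_K^2 = A_0^2 + K * Delta^2
A = [math.sqrt(A0**2 + K * Delta2) for K in range(N + 1)]

# Every annulus encloses the same area pi * Delta^2
areas = [A[K]**2 - A[K - 1]**2 for K in range(1, N + 1)]
```

Note that the interfaces cluster towards the outer wall: equal areas mean the radial widths $A_{K}-A_{K-1}$ shrink as $K$ grows.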
Given a continuous curve $\gamma\left(R^{0}\right)$ we define the
piecewise constant growth profile by taking the average
\begin{equation}
\gamma_{K}:=\overline{\gamma\left(R^{0}\right)}=\frac{2}{\Delta^{2}}\int_{A_{K-1}}^{A_{K}}\gamma\left(\tilde{R}\right)\tilde{R}\,\mathrm{d}\tilde{R},\quad K=1,\ldots,N.\label{eq:gamma-piecewise}
\end{equation}
The growth value $\gamma_{K}$ is constant within each layer.
We demonstrate the construction of the discrete profile $\left\{ \gamma_{K}\right\} $
from the continuous profile $\gamma\left(R^{0}\right)$ in Figure
\ref{fig:averaging-gamma}, in which we consider as an example the
continuous function
\begin{equation}
\gamma\left(R^{0}\right)=2-\frac{3}{2}\sin\left(\pi\frac{R^{0}-A_{0}}{A_{N}-A_{0}}\right).\label{eq:gamma-continuous-example}
\end{equation}
\begin{figure}\centering
\includegraphics[width=0.8\textwidth]{doodle1_17_10_2016}
\caption{\label{fig:averaging-gamma}Growth $\gamma$ continuous vs. averaged.
The continuous curve \eqref{eq:gamma-continuous-example} is plotted
in blue, and the average over a particular discretisation according
to \eqref{eq:gamma-piecewise} is shown by a solid piecewise constant black curve
($N=8$ with $A_{0}=1$, $A_{N}=5$ and $\Delta=\sqrt{3}$). }
\end{figure}
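The averaging \eqref{eq:gamma-piecewise} applied to the example profile \eqref{eq:gamma-continuous-example} can be reproduced numerically. A useful sanity check, exploited below, is that for equal-area layers the arithmetic mean of the $\gamma_{K}$ equals the area average of the continuous profile over the whole annulus. The quadrature routine is a plain composite Simpson rule.

```python
import math

A0, AN, N = 1.0, 5.0, 8
Delta2 = (AN**2 - A0**2) / N
A = [math.sqrt(A0**2 + K * Delta2) for K in range(N + 1)]

def gamma(R):
    # continuous profile of Eq. (gamma-continuous-example)
    return 2.0 - 1.5 * math.sin(math.pi * (R - A0) / (AN - A0))

def simpson(f, a, b, n=400):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

# layer averages according to Eq. (gamma-piecewise)
gammas = [2.0 / Delta2 * simpson(lambda R: gamma(R) * R, A[K - 1], A[K])
          for K in range(1, N + 1)]

# area average of the continuous profile over the whole annulus
global_avg = 2.0 / (AN**2 - A0**2) * simpson(lambda R: gamma(R) * R, A0, AN)
```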
Once $\left\{ \gamma_{K}\right\} $ are obtained, we compute the radial
map $r_{K}\left(R^{0}\right)$ from the discrete profile $\left\{ \gamma_{K}\right\} $.
Note that while $\gamma_{K}$ is a constant throughout the $K$-th
layer, the radial map $r_{K}$ is a function of the radial coordinate
$R^{0}$:
\begin{equation}
r_{K}^{2}\left(R^{0}\right)=r_{K-1}^{2}\left(A_{K-1}\right)+\gamma_{K}^{2}\left[\left(R^{0}\right)^{2}-A_{K-1}^{2}\right],\qquad r_{0}^{2}\left(R^{0}\right)=A_{0}^{2}.\label{eq:radial-function-recursive}
\end{equation}
Explicitly, this implies
\begin{align}
r_{K}^{2}\left(R^{0}\right) & =A_{0}^{2}+\left(\Delta^{2}\sum_{i=1}^{K-1}\gamma_{i}^{2}\right)+\gamma_{K}^{2}\left[\left(R^{0}\right)^{2}-A_{K-1}^{2}\right].\label{eq:radial-function-explicit}
\end{align}
Notice that the recursive expression \eqref{eq:radial-function-recursive}
and the explicit expression \eqref{eq:radial-function-explicit} are
consistent with the requirement
\begin{equation}
r_{K-1}\left(A_{K-1}\right)=r_{K}\left(A_{K-1}\right),\label{eq:continuity-of-rk}
\end{equation}
which means that $r_{K}$ is continuous at the boundary layer $A_{K-1}$.
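The recursion \eqref{eq:radial-function-recursive} and the explicit form \eqref{eq:radial-function-explicit}, together with the continuity requirement \eqref{eq:continuity-of-rk}, can be cross-checked numerically; the growth values below are illustrative.

```python
import math

A0, AN, N = 1.0, 5.0, 8
Delta2 = (AN**2 - A0**2) / N
A = [math.sqrt(A0**2 + K * Delta2) for K in range(N + 1)]
# an arbitrary positive piecewise-constant growth profile (illustrative values)
gam = [1.0, 0.8, 0.6, 0.7, 1.1, 1.5, 1.8, 1.9]          # gamma_1 .. gamma_N

def r2_explicit(K, R0):
    # Eq. (radial-function-explicit); the equal-area choice gives Delta^2 * sum
    return (A0**2 + Delta2 * sum(g**2 for g in gam[:K - 1])
            + gam[K - 1]**2 * (R0**2 - A[K - 1]**2))

# recursive construction, Eq. (radial-function-recursive), at the interfaces
r2_interface = [A0**2]                                   # r_0^2(A_0)
for K in range(1, N + 1):
    r2_interface.append(r2_interface[-1]
                        + gam[K - 1]**2 * (A[K]**2 - A[K - 1]**2))
```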
\subsection{Mechanics}
\paragraph{Stress components.}
In the continuous version, the radial stress $T^{RR}$ is obtained from \eqref{eq:bvp-mechanics}.
The discrete version reads
\begin{equation}
\frac{\partial T_{K}^{RR}}{\partial R^{0}}=\frac{2\mu}{R^{0}}\left[1-\frac{\gamma_{K}^{4}\left(R^{0}\right)^{4}}{r_{K}^{4}\left(R^{0}\right)}\right],\qquad T_{N}^{RR}\left(A_{N}\right)=0\,.\label{eq:TRR-differential}
\end{equation}
Traction continuity at the interfaces implies
\begin{equation}
T_{K}^{RR}\left(A_{K}\right)=T_{K+1}^{RR}\left(A_{K}\right)\,.\label{eq:continuity-of-TRR}
\end{equation}
We define $\tau_{K}^{RR}\left(R^{0}\right)$ as the indefinite integral
of the right-hand side of \eqref{eq:TRR-differential} (dropping
the integration constant),
\begin{equation}
\tau_{K}^{RR}\left(R^{0}\right):=-\mu\frac{r_{K-1}^{2}\left(A_{K-1}\right)-A_{K-1}^{2}\gamma_{K}^{2}}{r_{K}^{2}\left(R^{0}\right)}-\mu\log\left[\frac{r_{K}^{2}\left(R^{0}\right)}{\left(R^{0}\right)^{2}}\right],
\end{equation}
from which we express the radial stress in the $K$-th layer as
\begin{equation}
T_{K}^{RR}\left(R^{0}\right)=\tau_{K}^{RR}\left(R^{0}\right)-\tau_{N}^{RR}\left(A_{N}\right)+\sum_{i=K}^{N-1}\mu\frac{A_{i}^{2}\left(\gamma_{i+1}^{2}-\gamma_{i}^{2}\right)}{r_{i}^{2}\left(A_{i}\right)}\,.\label{eq:TRR-explicit}
\end{equation}
\begin{figure}\centering
\includegraphics[width=0.8\textwidth]{radial_MAP_profile}
\caption{\label{fig:radius-discrete-and-continuous}Radial function $r_{K}\left(R^{0}\right)$
for the case of discrete growth $\gamma_{i}$, computed according
to \eqref{eq:radial-function-explicit}. The dashed line represents
the case of no deformation $r=R^{0}$; everything below the dashed
line is resorption (``shrinking''), everything above this line is
growth (Parameters as in Figure \ref{fig:averaging-gamma}).}
\end{figure}
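The explicit expression \eqref{eq:TRR-explicit} can be verified numerically: by construction it must vanish at the outer wall, $T_{N}^{RR}\left(A_{N}\right)=0$, and satisfy traction continuity \eqref{eq:continuity-of-TRR} at every interface. In the sketch below the layer growth values are midpoint samples of the profile \eqref{eq:gamma-continuous-example}, a simple stand-in for the exact layer averages.

```python
import math

mu = 1.0
A0, AN, N = 1.0, 5.0, 8
Delta2 = (AN**2 - A0**2) / N
A = [math.sqrt(A0**2 + K * Delta2) for K in range(N + 1)]
# midpoint samples of the continuous profile (illustrative stand-in for averages)
gam = [2.0 - 1.5 * math.sin(math.pi * (0.5 * (A[K - 1] + A[K]) - A0) / (AN - A0))
       for K in range(1, N + 1)]

r2i = [A0**2]                                    # interface values r_K^2(A_K)
for K in range(1, N + 1):
    r2i.append(r2i[-1] + gam[K - 1]**2 * (A[K]**2 - A[K - 1]**2))

def r2(K, R0):                                   # r_K^2(R0) for K = 1..N
    return r2i[K - 1] + gam[K - 1]**2 * (R0**2 - A[K - 1]**2)

def tau(K, R0):                                  # tau_K^RR, constant dropped
    c = r2i[K - 1] - A[K - 1]**2 * gam[K - 1]**2
    return -mu * c / r2(K, R0) - mu * math.log(r2(K, R0) / R0**2)

def TRR(K, R0):                                  # Eq. (TRR-explicit)
    jump = sum(mu * A[i]**2 * (gam[i]**2 - gam[i - 1]**2) / r2i[i]
               for i in range(K, N))
    return tau(K, R0) - tau(N, A[N]) + jump
```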
The circumferential stress $T^{\theta\theta}$
is related to the radial stress $T^{RR}$ through \eqref{eq:t2_general}.
The discrete version of the relationship between $T^{RR}$ and $T^{\theta\theta}$
is given by
\begin{equation}
T_{K}^{\theta\theta}\left(R^{0}\right)=T_{K}^{RR}\left(R^{0}\right)+\kappa_{K}\left(R^{0}\right),\label{eq:TthetaTheta-discrete}
\end{equation}
where
\begin{equation}
\kappa_{K}\left(R^{0}\right):=\frac{2\mu r_{K}^{2}\left(R^{0}\right)}{\gamma_{K}^{2}\left(R^{0}\right)^{2}}\left[1-\frac{\gamma_{K}^{4}\left(R^{0}\right)^{4}}{r_{K}^{4}\left(R^{0}\right)}\right].\label{eq:kappa-discrete}
\end{equation}
Stress profiles corresponding to the growth law \eqref{eq:gamma-continuous-example} are depicted in Figure \ref{fig:discrete-stress-profile}(a) (radial) and Figure \ref{fig:discrete-stress-profile}(b) (circumferential).
\begin{figure}[htpb]\centering
\includegraphics[width=0.8\textwidth]{circumferential_stress_profile}
\caption{\label{fig:discrete-stress-profile}Stress profile and stress averages
for the growth profile \eqref{eq:gamma-continuous-example}. \textbf{(a)} Radial stress profile $T^{RR}$
and average stress profile $\overline{T^{RR}}$. The analytical curve
was obtained from \eqref{eq:TRR-explicit} and the numerical curve
(for validation) was obtained from \eqref{eq:bvp-mechanics}. In both
the numerical and analytical case, the piecewise growth profile $\gamma_{i}$
according to \eqref{eq:gamma-piecewise} was used. The average stress
was computed according to \eqref{eq:TRR-average-explicit} with the
same growth profile as the other curves. \textbf{(b)} Circumferential
stress profile $T^{\theta\theta}$ and average stress profile $\overline{T^{\theta\theta}}$.
The analytical curve was obtained from \eqref{eq:TthetaTheta-discrete}
and the numerical curve (for validation) was obtained from \eqref{eq:t2_general}.
The average stress was computed according to \eqref{eq:circumferential-stress-average}.
All other parameters are as in Figure \ref{fig:averaging-gamma}, with shear modulus $\mu=1$. }
\end{figure}
\paragraph{Average stress.}
As in the two-layer case, average values for the radial and circumferential stress can be computed exactly. The average radial stress in the $K\text{-th}$ layer $\overline{T_{K}^{RR}}$
is
\begin{equation}
\overline{T_{K}^{RR}}=-\tau_{N}^{RR}\left(A_{N}\right)+\sum_{i=K}^{N-1}\mu\frac{A_{i}^{2}\left(\gamma_{i+1}^{2}-\gamma_{i}^{2}\right)}{r_{i}^{2}\left(A_{i}\right)}+\frac{2}{\Delta^{2}}\left[\nu_{K}^{rr}\left(A_{K}\right)-\nu_{K}^{rr}\left(A_{K-1}\right)\right]\label{eq:TRR-average-explicit}
\end{equation}
where $\nu_{K}^{rr}\left(R^{0}\right)$ is defined as
\begin{equation}
\nu_{K}^{rr}\left(R^{0}\right):=\mu\left[A_{K-1}^{2}-\frac{r_{K-1}^{2}\left(A_{K-1}\right)}{\gamma_{K}^{2}}\right]\log\left[r_{K}^{2}\left(R^{0}\right)\right]-\frac{1}{2}\mu\left(R^{0}\right)^{2}\log\left[\frac{r_{K}^{2}\left(R^{0}\right)}{\left(R^{0}\right)^{2}}\right].
\end{equation}
We have seen in \eqref{eq:TthetaTheta-discrete} how the circumferential
stress $T^{\theta\theta}$ relates to the radial stress $T^{RR}$.
The average over that expression is
\begin{equation}
\overline{T_{K}^{\theta\theta}}=\overline{T_{K}^{RR}}+\overline{\kappa_{K}},\label{eq:circumferential-stress-average}
\end{equation}
where $\overline{\kappa_{K}}$, the average of the expression $\kappa_{K}$ given in \eqref{eq:kappa-discrete}, is
\begin{align}
\overline{\kappa_{K}} & =\frac{2\mu\left[r_{K}^{2}\left(A_{K}\right)-\gamma_{K}^{2}A_{K}^{2}\right]}{\Delta^{2}\gamma_{K}^{2}}\log\left[\frac{A_{K}^{2}r_{K}^{2}\left(A_{K}\right)}{A_{K-1}^{2}r_{K-1}^{2}\left(A_{K-1}\right)}\right].\label{eq:kappa-average-explicit}
\end{align}
According to \eqref{eq:circumferential-stress-average}, the expression
for $\overline{T_{K}^{\theta\theta}}$ is the sum of $\overline{\kappa_{K}}$
(see \eqref{eq:kappa-average-explicit}) and $\overline{T_{K}^{RR}}$
(see \eqref{eq:TRR-average-explicit}). The average radial and circumferential stress components are depicted as horizontal lines in the respective
layers in Figure \ref{fig:discrete-stress-profile}(a) (radial) and Figure \ref{fig:discrete-stress-profile}(b) (circumferential).
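As a consistency check, the closed form \eqref{eq:kappa-average-explicit} can be compared against direct numerical quadrature of \eqref{eq:kappa-discrete}; the sketch below uses an illustrative growth profile, and the analogous check applies to \eqref{eq:TRR-average-explicit}.

```python
import math

mu = 1.0
A0, AN, N = 1.0, 5.0, 8
Delta2 = (AN**2 - A0**2) / N
A = [math.sqrt(A0**2 + K * Delta2) for K in range(N + 1)]
gam = [1.0, 0.8, 0.6, 0.7, 1.1, 1.5, 1.8, 1.9]      # illustrative gamma_1..gamma_N

r2i = [A0**2]
for K in range(1, N + 1):
    r2i.append(r2i[-1] + gam[K - 1]**2 * (A[K]**2 - A[K - 1]**2))

def r2(K, R0):
    return r2i[K - 1] + gam[K - 1]**2 * (R0**2 - A[K - 1]**2)

def kappa(K, R0):                                    # Eq. (kappa-discrete)
    g2 = gam[K - 1]**2
    return 2 * mu * r2(K, R0) / (g2 * R0**2) * (1 - g2**2 * R0**4 / r2(K, R0)**2)

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

def kappa_avg_formula(K):                            # Eq. (kappa-average-explicit)
    g2 = gam[K - 1]**2
    return (2 * mu * (r2i[K] - g2 * A[K]**2) / (Delta2 * g2)
            * math.log(A[K]**2 * r2i[K] / (A[K - 1]**2 * r2i[K - 1])))

def kappa_avg_quad(K):                               # direct area average
    return 2 / Delta2 * simpson(lambda R: kappa(K, R) * R, A[K - 1], A[K])
```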
\subsection{\label{subsec:Generating-homeostatic-state}Generating a homeostatic state from a prescribed growth profile.}
The discretization and averaging process described above provides a concise framework for studying growth dynamics. As the homeostatic state is defined by a growth profile -- a function that is only constrained to be positive -- a generic classification of the dynamic behaviour is likely intractable. Our intent, rather, is to briefly investigate stability and the rate of convergence in terms of the number of layers. For this, we restrict attention to a linear homeostatic growth profile $\gamma^{*}\left(R^{0}\right)$, characterized by a single parameter, $C_{1}$,
\begin{equation}
\gamma^{*}\left(R^{0}\right)=1+C_{1}\left(R^{0}-A_{0}\right),\qquad C_{1}\left(A_{N}-A_{0}\right)>-1.\label{eq:linear-growth-profile}
\end{equation}
Note that this growth profile satisfies $\gamma^{*}\left(A_{0}\right)=1$,
i.e. no growth at the inner boundary.
We obtain the discrete homeostatic growth profile $\left\{ \gamma_{i}^{*}\right\} $
from the continuous profile $\gamma^{*}\left(R^{0}\right)$ by computing
the average according to \eqref{eq:gamma-piecewise}. The homeostatic
stress $\mathbf{T}\left(\boldsymbol{\gamma}^{*}\right)$ is
computed from the discrete homeostatic growth profile $\left\{ \gamma_{i}^{*}\right\} $
according to \eqref{eq:TRR-explicit} and \eqref{eq:TthetaTheta-discrete}.
The homeostatic values $\overline{\mathbf{T}}\left(\boldsymbol{\gamma}^{*}\right)$
are obtained as averages according to \eqref{eq:TRR-average-explicit}
and \eqref{eq:circumferential-stress-average}. It is important to
note that the homeostatic stress is generated by prescribing a growth
profile \eqref{eq:linear-growth-profile}, which by definition ensures that
the homeostatic stress is admissible.
\subsection{\label{subsec:N-disks-stability}Growth dynamics}
We consider a growth law that generalizes \eqref{eq:dynamics-N2} to $N$ layers. The main difference with \eqref{eq:dynamics-N2} is that the homeostatic stress values are obtained from the linear growth profile.
The growth law reads
\begin{equation}
\dot{\gamma}_{K}=\gamma_{K}\left\{ \tilde{K}\left[\overline{T_{K}^{RR}}\left(\boldsymbol{\gamma}\right)-\overline{T_{K}^{RR}}\left(\boldsymbol{\gamma}^{*}\right)\right]+\overline{T_{K}^{\theta\theta}}\left(\boldsymbol{\gamma}\right)-\overline{T_{K}^{\theta\theta}}\left(\boldsymbol{\gamma}^{*}\right)\right\} ,\qquad K=1,\ldots,N.\label{eq:growth-dynamics-N-layers}
\end{equation}
In order to consider the stability of \eqref{eq:growth-dynamics-N-layers}
in the neighborhood of the homeostatic state, we expand growth around its equilibrium values:
\begin{equation}
\gamma_{K}=\gamma_{K}^{*}+\varepsilon\tilde{\gamma}_{K}+\mathcal{O}\left(\varepsilon^{2}\right),\qquad K=1,\ldots,N.\label{eq:gamma-near-homeostasis}
\end{equation}
To linear order in $\varepsilon$, the dynamical system simplifies
to
\begin{equation}
\dot{\tilde{\boldsymbol{\gamma}}}=\mathbf{J}\tilde{\boldsymbol{\gamma}}.
\end{equation}
The eigenvalues of the Jacobian matrix $\mathbf{J}$ characterize the stability of
\eqref{eq:growth-dynamics-N-layers} near the homeostatic state. The
components of the $N\times N$ matrix $\mathbf{J}$ are
\begin{align}
J_{ij} & =\left[\gamma_{i}\left(\tilde{K}\frac{\partial\overline{T_{i}^{RR}}\left(\boldsymbol{\gamma}\right)}{\partial\gamma_{j}}+\frac{\partial\overline{T_{i}^{\theta\theta}}\left(\boldsymbol{\gamma}\right)}{\partial\gamma_{j}}\right)\right]_{\boldsymbol{\gamma}=\boldsymbol{\gamma}^{*}},\quad i,j=1,\ldots,N.\label{eq:Jacobian-N-layers}
\end{align}
We characterize the stability in the neighborhood of the homeostatic
state as a function of two non-dimensional parameters: the mechanical
feedback anisotropy parameter $\tilde{K}$ and the slope of the homeostatic
growth profile $C_{1}$. The latter appears in \eqref{eq:Jacobian-N-layers}
through $\boldsymbol{\gamma}^{*}$ (see Section \ref{subsec:Generating-homeostatic-state}).
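The linearization \eqref{eq:Jacobian-N-layers} can be sketched numerically using the analytic layer averages \eqref{eq:TRR-average-explicit} and \eqref{eq:circumferential-stress-average}, here specialized to $N=2$ so that the eigenvalues follow from the trace and determinant of a $2\times2$ matrix. The values $\gamma_{1}^{*}=5.867$, $\gamma_{2}^{*}=3$ and $\tilde{K}=23.5$ are the two-layer example quoted earlier; the geometry ($A_{0}=1$, $\Delta^{2}=3$) is an illustrative assumption.

```python
import math, cmath

mu, Ktilde = 1.0, 23.5
A0, Delta2, N = 1.0, 3.0, 2              # illustrative geometry
A = [math.sqrt(A0**2 + K * Delta2) for K in range(N + 1)]
gstar = [5.867, 3.0]                     # homeostatic growth values from the text

def averages(gam):
    # analytic layer averages of T^RR and T^thetatheta for piecewise growth
    r2i = [A0**2]
    for K in range(1, N + 1):
        r2i.append(r2i[-1] + gam[K - 1]**2 * (A[K]**2 - A[K - 1]**2))
    def r2(K, R0):
        return r2i[K - 1] + gam[K - 1]**2 * (R0**2 - A[K - 1]**2)
    def tau(K, R0):
        c = r2i[K - 1] - A[K - 1]**2 * gam[K - 1]**2
        return -mu * c / r2(K, R0) - mu * math.log(r2(K, R0) / R0**2)
    def nu(K, R0):
        return (mu * (A[K - 1]**2 - r2i[K - 1] / gam[K - 1]**2) * math.log(r2(K, R0))
                - 0.5 * mu * R0**2 * math.log(r2(K, R0) / R0**2))
    TRR, Tth = [], []
    for K in range(1, N + 1):
        jump = sum(mu * A[i]**2 * (gam[i]**2 - gam[i - 1]**2) / r2i[i]
                   for i in range(K, N))
        trr = -tau(N, A[N]) + jump + 2.0 / Delta2 * (nu(K, A[K]) - nu(K, A[K - 1]))
        g2 = gam[K - 1]**2
        kbar = (2 * mu * (r2i[K] - g2 * A[K]**2) / (Delta2 * g2)
                * math.log(A[K]**2 * r2i[K] / (A[K - 1]**2 * r2i[K - 1])))
        TRR.append(trr)
        Tth.append(trr + kbar)
    return TRR, Tth

def rhs(gam):
    # growth law: gamma_K * {Ktilde*(TRRbar - TRRbar*) + (Tthbar - Tthbar*)}
    TRR, Tth = averages(gam)
    TRRs, Tths = averages(gstar)
    return [gam[i] * (Ktilde * (TRR[i] - TRRs[i]) + Tth[i] - Tths[i])
            for i in range(N)]

# central finite-difference Jacobian at the homeostatic state
h = 1e-6
J = [[0.0] * N for _ in range(N)]
for j in range(N):
    gp, gm = list(gstar), list(gstar)
    gp[j] += h
    gm[j] -= h
    fp, fm = rhs(gp), rhs(gm)
    for i in range(N):
        J[i][j] = (fp[i] - fm[i]) / (2 * h)

# eigenvalues of the 2x2 Jacobian via trace and determinant
tr = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
lam = [(tr + disc) / 2, (tr - disc) / 2]
```

By construction the homeostatic state is a fixed point of the dynamics; the sign of the largest real part of `lam` then classifies its linear stability.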
\begin{figure}[htpb]\centering
\includegraphics[width=0.8\textwidth]{conclusion_plot}
\caption{\label{fig:Bifurcation-diagram-N-layer}Bifurcation diagram and convergence
for the $N$-layered cylinder system. \textbf{(a)}\textendash\textbf{(d)}:
The unstable (orange) and stable (blue) regions retain their shape
for increasing values of $N$. \textbf{(e)}: For a representative
sample of points P1 to P4, the convergence of the largest eigenvalue
is very good (see interpretation in text). The $\left(\tilde{K}^{-1},C_{1}\right)$
coordinates are $\text{P1}\left(0.1,2.5\right)$, $\text{P2}\left(-0.25,-0.5\right)$,
$\text{P3}\left(0.5,-0.5\right)$, $\text{P4}\left(1,-0.5\right)$.
Other parameters are $\mu_{1}=1$, $A_{0}=1$, $A_{N}=2$.}
\end{figure}
Figure \ref{fig:Bifurcation-diagram-N-layer}(a) shows a bifurcation
diagram of the stability of the dynamical system \eqref{eq:growth-dynamics-N-layers}
as a function of $\tilde{K}^{-1}$ and $C_{1}$ for $N=9$ layers
(note that unlike in Figure \ref{fig:two_layered_bifurcation_diagram},
here we use the inverse of $\tilde{K}$ to focus on large circumferential
stress). The regions are colored
according to the largest real part of the eigenvalues $\lambda_{i}$
of $\mathbf{J}$, that is $\lambda=\max\left(\operatorname{Re}\lambda_{1},\operatorname{Re}\lambda_{2},\ldots,\operatorname{Re}\lambda_{N}\right)$.
There are three parameter regions: an unstable region (orange), a
stable region (blue), and an undecidable region (green) for which
$\lambda$ is within a small tolerance of zero. This last region is included as it is typically within numerical error, and its inclusion allows us to make precise statements about stability. This relatively shallow
region of $\lambda$ is further explored in Figure \ref{fig:shallow-region}
and allows us to identify the clearly stable and clearly unstable
regions of the diagram. Figure \ref{fig:Bifurcation-diagram-N-layer}(b)\textendash (e)
shows that for increasing values of $N$ (that is, a refinement of
the discretisation), the regions are practically unchanged (b\textendash d),
and that the largest eigenvalue of four selected points converges
reliably to a finite positive (P1 \& P2) or negative (P3 \& P4) eigenvalue.
The green shallow region is more explicitly visualized in Figure \ref{fig:shallow-region}.
This plot shows in the vertical axis the value of the largest real
eigenvalue computed at $\left(\tilde{K}^{-1},C_{1}\right)$ from the
Jacobian matrix \eqref{eq:Jacobian-N-layers}. The planes $\lambda=\text{tol}$
and $\lambda=-\text{tol}$ are shown in dark gray, and eigenvalues
between them lie in the shallow (green) region in which stability
cannot be decided from an expansion of $\boldsymbol{\gamma}$ according
to \eqref{eq:gamma-near-homeostasis} to first order in $\varepsilon$.
Thus, we see that there exists a region of stability and a region of instability,
which both persist (for large enough $N$) independently of the discretisation.
A strongly anisotropic growth law ($\tilde{K}^{-1}$ close to zero
or negative) is required for the system to be unstable. We also considered the convergence as $N$ increases for a representative sample of points in the stable and unstable regions and confirmed that there was no significant change in $\lambda$.
We expect that the stable and unstable regions represent the true behavior
of the full (inhomogeneous) system discussed in Section \ref{sec:General_disks}.
The intermediate (green) shallow region of eigenvalues has a more complicated
structure due to the discretisation that is not expected in the full system.
\begin{figure}[htpb]\centering\includegraphics[width=0.8\textwidth]{Shallow_Region}\hfill{}
\caption{\label{fig:shallow-region}Detailed depiction of the shallow (green)
region from Figure \ref{fig:Bifurcation-diagram-N-layer}. The large
shallow region has a fine structure which is an artifact of the discretisation
and is not expected in the full inhomogeneous system. For this reason,
we choose a three color system in Figure \ref{fig:Bifurcation-diagram-N-layer}(a)\textendash (d),
in which the shallow region and its fine structure are merged into
one region defined by $-\text{tol}\leq\lambda\leq\text{tol}$. The
two planes serving as upper and lower bounds of this region are depicted
in dark gray. Values above $\lambda=\text{tol}$ are unstable, below
$\lambda=-\text{tol}$ are stable.}
\end{figure}
\section{Conclusion}
It is now well appreciated that growth can induce mechanical instabilities \cite{gobe05,bego05}. The related problem that we have considered in this paper is the stability of a grown state through its slow-growth evolution. The question is therefore not about mechanical instability but about the dynamic stability of a preferred homeostatic state. While the former is characterized by a bifurcation from a base geometry to a more complex buckled geometry, occurring on a fast elastic time-scale, the latter involves the system evolving away from a given stress state on the slow growth time-scale. In general, the homeostatic state is not homogeneous, hence the issue of stability requires the analysis of partial differential equations defined on multiple configurations with free boundaries. There are no standard mathematical tools available to study this problem even for simple non-homogeneous systems. An alternative is to consider the stability of states that are piecewise homogeneous (in space). The problem is then to establish the stability of coupled ordinary differential equations describing locally homogeneous states through the traditional methods of dynamical systems. Within this framework we considered two relatively simple problems.
First, we considered the dynamical stability of a two-layer tube with different, but constant, growth tensors in each layer. We characterized the dynamics of the full nonlinear system, and showed that the number of equilibria
and their stability varies greatly and gives rise to highly intricate
dynamics which we organized via several bifurcations. We identified
a parameter region where the system is stable. We found that the growth dynamics of tubular
structures in the neighborhood of the homeostatic equilibrium depends in a nontrivial way
on the anisotropy of the growth response, and that the equilibrium
becomes unstable for highly anisotropic growth laws. This
complexity of dynamics naturally raises the question about stability of
homeostatic equilibria for more general systems.
Second, we showed that given a continuous law in a cylindrical geometry, we can introduce a suitable discretization of the problem that keeps all the characteristics of the continuous problem. We showed that for a linear growth law, there are
clear regions where stability and instability persist independently of
the discretisation (for sufficiently large $N$). We expect that these
regions represent the true behavior of the full inhomogeneous system.
This result allows us to characterize the stability of a morphoelastic
growing cylinder.
While we have only scratched the surface of the complex dynamic behaviour that exists in such systems, the framework presented here provides a tool to explore growth dynamics and stability of homeostatic states and finally address some of the fundamental challenges of morphoelasticity \cite{goriely17}: What growth laws, in general, would lead to dynamically stable homeostatic states? What is the final size of a growing organism for a given growth law? What are the conditions under which growth dynamics produces oscillatory growth?
\begin{acknowledgements}
We thank Dr. Thomas Lessinnes for many useful discussions in the early stages of this project.
The support for Alain Goriely by the Engineering and Physical Sciences Research Council of Great Britain under research grant EP/R020205/1 is gratefully acknowledged.
\end{acknowledgements}
\bibliographystyle{spmpsci}
\section{\label{sec:General_disks}Continuous growth dynamics in cylindrical
geometry}
\subsection{Kinematics}
We consider a cylindrical tube, consisting of an incompressible isotropic hyperelastic material, the inner wall of which is attached
to a fixed solid nucleus, with the outer wall unconstrained (see Figure \ref{fig:Kinematics_general}). We restrict to growth and deformations only in the cross section, such that the cylindrical geometry is always maintained and there is no axial strain. Moreover, we assume that there are no external forces, so that any deformation is caused purely by growth and the elastic response.
\begin{figure}[htpb]
\centering\includegraphics[width=0.7\textwidth]{growing_cylinder_clean}
\caption{\label{fig:Kinematics_general}Sketch of kinematic setup.}
\end{figure}
Geometrically, we work in a planar polar coordinate basis $\left\{ \mathbf{e}^{R},\mathbf{e}^{\theta}\right\} $ (the same basis vectors apply to both initial and current configurations), in which the deformation can be described by the map $\mathbf{x}:\mathcal{B}_0\to\mathcal{B}_t$ given by:
\begin{equation}
\mathbf{x}=r\left(R^{0}\right)\mathbf{e}^{R}\:.\label{eq:deformation_map_disk}
\end{equation}
For this map, the deformation gradient is
\begin{equation}
\mathbf{F}=r'\left(R^{0}\right)\mathbf{e}^{R}\otimes\mathbf{e}^{R}+\frac{r}{R^{0}}\mathbf{e}^{\theta}\otimes\mathbf{e}^{\theta}.
\end{equation}
The elastic deformation gradient takes the form
\begin{equation}
\mathbf{A}=\alpha^{R}\mathbf{e}^{R}\otimes\mathbf{e}^{R}+\alpha^{\theta}\mathbf{e}^{\theta}\otimes\mathbf{e}^{\theta}.
\end{equation}
Incompressibility requires $\det\mathbf{A}=1$; we thus define
$\alpha:=\alpha^{\theta}$, so that $\alpha^{R}=\alpha^{-1}$. We assume a diagonal growth tensor
\begin{equation}
\mathbf{G}=\gamma^{R}\mathbf{e}^{R}\otimes\mathbf{e}^{R}+\gamma^{\theta}\mathbf{e}^{\theta}\otimes\mathbf{e}^{\theta},
\end{equation}
where the difference between radial growth ($\gamma^R>1$) and circumferential growth ($\gamma^\theta>1$) is shown schematically in Figure \ref{fig:growth_types}.
In matrix form (with the basis $\left\{ \mathbf{e}^{R},\mathbf{e}^{\theta}\right\} $ implied), we have
\begin{equation}
\mathbf{F}=\begin{pmatrix}\frac{\mathrm{d}r}{\mathrm{d}R^{0}} & 0\\
0 & \frac{r}{R^{0}}
\end{pmatrix}\:,\qquad\mathbf{A}=\begin{pmatrix}\alpha^{-1} & 0\\
0 & \alpha
\end{pmatrix}\:,\qquad\mathbf{G}=\begin{pmatrix}\gamma^{R} & 0\\
0 & \gamma^{\theta}
\end{pmatrix}\:.
\end{equation}
\begin{figure}[htpb]
\centering\includegraphics{growth_types}
\caption{\label{fig:growth_types}Illustration of isotropic and anisotropic
growth. }
\end{figure}
In the initial (stress-free) reference configuration $\mathcal{B}_{0}$, the inner
cylinder wall is located at $R^{0}=A_{0}$ and the outer wall is located
at $R^{0}=B_{0}$. From the morphoelastic decomposition $\mathbf{F}=\mathbf{AG}$,
we find $r'=\gamma^{R}/\alpha$ and $r/R^{0}=\alpha\gamma^{\theta}$.
By eliminating $\alpha$, we obtain
\begin{equation}
r\left(R^{0}\right)r'\left(R^{0}\right)=\gamma^{R}\left(R^{0}\right)\gamma^{\theta}\left(R^{0}\right)R^{0}.\label{eq:bvp-kinematics}
\end{equation}
Imposing the boundary condition $r\left(A_{0}\right)=A_{0}$, due to
the unmoving solid nucleus, we
can integrate \eqref{eq:bvp-kinematics} as
\begin{equation}
r=\sqrt{A_{0}^{2}+2\int_{A_{0}}^{R^{0}}\!\!\!\gamma^{R}(\tilde{R})\gamma^{\theta}(\tilde{R})\tilde{R}\ \mathrm{d}\tilde{R}}.\label{eq:radial_map_general}
\end{equation}
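The radial map \eqref{eq:radial_map_general} is straightforward to evaluate by quadrature; for constant growth the integral collapses to the closed form $r^{2}=A_{0}^{2}+\gamma^{R}\gamma^{\theta}\left[\left(R^{0}\right)^{2}-A_{0}^{2}\right]$, which provides a check. The growth profiles below are illustrative assumptions.

```python
import math

A0 = 1.0

def gammaR(R): return 1.0 + 0.2 * (R - A0)   # illustrative radial growth profile
def gammaT(R): return 1.0 - 0.1 * (R - A0)   # illustrative circumferential profile

def simpson(f, a, b, n=1000):
    # composite Simpson rule (n even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

def r(R0, gR=gammaR, gT=gammaT):
    # Eq. (radial_map_general): r^2 = A0^2 + 2 * int_{A0}^{R0} gR*gT*R dR
    return math.sqrt(A0**2 + 2.0 * simpson(lambda R: gR(R) * gT(R) * R, A0, R0))
```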
\subsection{Mechanics}
Given that all deformations are diagonal in the coordinate basis considered
here, the Cauchy stress is also diagonal
\begin{equation}
\mathbf{T}=T^{RR}\mathbf{e}^{R}\otimes\mathbf{e}^{R}+T^{\theta\theta}\mathbf{e}^{\theta}\otimes\mathbf{e}^{\theta}.
\end{equation}
Let $W\left(\alpha^{R},\alpha^{\theta}\right)$ be the strain-energy density, which relates to the Cauchy stress tensor by $\mathbf{T} = \mathbf{A}W_\mathbf{A} - p\mathbf{1}$, where $p$ is the Lagrange multiplier enforcing incompressibility. In components, this reads
\begin{equation}
T^{RR}=\alpha^{R}\frac{\partial W}{\partial\alpha^{R}}-p\:,\qquad T^{\theta\theta}=\alpha^{\theta}\frac{\partial W}{\partial\alpha^{\theta}}-p\:.
\end{equation}
With no external loads, mechanical equilibrium requires $\text{div }\mathbf{T}=0$, which takes the form
\begin{equation}
\frac{\partial T^{RR}}{\partial r}=\frac{T^{\theta\theta}-T^{RR}}{r}.\label{eq:linear_momentum_balance}
\end{equation}
Defining $\widehat{W}\left(\alpha\right):=W\left(\alpha^{-1},\alpha\right)$, we have
\begin{equation}
T^{\theta\theta}-T^{RR}=\alpha\widehat{W}'(\alpha).
\end{equation}
In this paper we restrict analysis to a neo-Hookean strain-energy density
\begin{equation}
\widehat{W}\left(\alpha\right)=\frac{\mu}{2}\left(\alpha^{2}+\alpha^{-2}-2\right)\:,\label{eq:neo_Hookean}
\end{equation}
for which \eqref{eq:linear_momentum_balance} becomes
\begin{equation}
\frac{\mathrm{d}T^{RR}}{\mathrm{d}R^{0}} =\frac{2\mu\gamma^{R}}{R^{0}\gamma^{\theta}}\left[1-\frac{\left(R^{0}\right)^{4}\left(\gamma^{\theta}\right)^{4}}{r^{4}}\right].
\label{eq:bvp-mechanics}
\end{equation}
Along with \eqref{eq:bvp-mechanics} we impose $T^{RR}\left(B_{0}\right)=0$, i.e. the outer edge is stress-free. Equations
\eqref{eq:bvp-kinematics} and \eqref{eq:bvp-mechanics}, along with this boundary condition, completely determine the deformation and stress state. Due to the fixed inner boundary condition, for a given growth tensor, \eqref{eq:bvp-kinematics} can be integrated separately, i.e. the deformation is determined independently of the stress, and the radial Cauchy stress is then obtained by integrating \eqref{eq:bvp-mechanics} using the radial map \eqref{eq:radial_map_general}.
Once the radial stress component $T^{RR}$ is determined, the
circumferential component satisfies
\begin{equation}
T^{\theta\theta}=T^{RR}+\frac{2\mu r^{2}}{\left(R^{0}\right)^{2}\left(\gamma^{\theta}\right)^{2}}\left[1-\frac{\left(R^{0}\right)^{4}\left(\gamma^{\theta}\right)^{4}}{r^{4}}\right]\:.\label{eq:t2_general}
\end{equation}
Note also that for constant $\gamma^R$ and $\gamma^\theta$, these integrals may be performed analytically, giving explicit expressions for the stress and deformation in terms of the growth. As we show later, the same holds when extending from one layer to multiple layers; if the growth in each layer is constant, the stress components may be written explicitly. It is this fact that we exploit below in formulating a discretized growth dynamics. This is the main motivating reason for the fixed core geometry we consider. Under different boundary conditions, the deformation and stress would be coupled, requiring, for instance, a root-finding exercise to determine the outer radius for which the stress boundary condition is satisfied. In such a case, the framework below applies at the expense of added computational complexity.
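For constant growth, the consistency of \eqref{eq:bvp-mechanics} and \eqref{eq:t2_general} with the equilibrium equation \eqref{eq:linear_momentum_balance} can be checked directly: by the chain rule, $\mathrm{d}T^{RR}/\mathrm{d}R^{0}$ must equal $\left(T^{\theta\theta}-T^{RR}\right)r'/r$ with $r'=\gamma^{R}\gamma^{\theta}R^{0}/r$. A minimal sketch with illustrative constant growth components:

```python
import math

mu, gR, gT = 1.0, 1.3, 0.8     # illustrative constant growth components
A0 = 1.0

def r(R0):
    # closed form of Eq. (radial_map_general) for constant growth
    return math.sqrt(A0**2 + gR * gT * (R0**2 - A0**2))

def dTRR_dR0(R0):
    # right-hand side of Eq. (bvp-mechanics)
    return 2 * mu * gR / (R0 * gT) * (1 - R0**4 * gT**4 / r(R0)**4)

def hoop_minus_radial(R0):
    # T^thetatheta - T^RR from Eq. (t2_general)
    return 2 * mu * r(R0)**2 / (R0**2 * gT**2) * (1 - R0**4 * gT**4 / r(R0)**4)
```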
\subsection{\label{subsec:Growth-law-general-setup}Growth law}
We now impose a homeostasis-driven growth law of the form \eqref{eq1}. In the plane polar geometry, this takes the form
\begin{equation}
\begin{aligned}\dot{\gamma}^{R} & =\left\{ K^{RR}\left[T^{RR}-\left(T^{RR}\right)^{*}\right]+K^{R\theta}\left[T^{\theta\theta}-\left(T^{\theta\theta}\right)^{*}\right]\right\} \gamma^{R}\:,\\
\dot{\gamma}^{\theta} & =\left\{ K^{\theta R}\left[T^{RR}-\left(T^{RR}\right)^{*}\right]+K^{\theta\theta}\left[T^{\theta\theta}-\left(T^{\theta\theta}\right)^{*}\right]\right\} \gamma^{\theta}\:.
\end{aligned}
\label{eq:growth_dynamics_general}
\end{equation}
Here $K^{RR}:=\mathcal{K}^{RRRR}$, $K^{R\theta}:=\mathcal{K}^{RR\theta\theta}$,
$K^{\theta R}:=\mathcal{K}^{\theta\theta RR}$, $K^{\theta\theta}:=\mathcal{K}^{\theta\theta\theta\theta}$ are the only non-vanishing components of the fourth order tensor $\mathcal{\boldsymbol{K}}$, and are assumed to be constant in space and time.
\subsection{Discretisation approach.}
For given homeostatic stress values and components of $\mathcal{\boldsymbol{K}}$, the growth dynamics is fully defined, with the growth components evolving according to \eqref{eq:growth_dynamics_general}. Even in the simplified cylindrical geometry, this comprises a system of nonlinear partial differential equations. Moreover, viewing the dynamics as a discrete process is still complicated by the fact that at each time step updating the growth requires knowing the stress components, which requires integration of \eqref{eq:bvp-mechanics}, which requires integration of \eqref{eq:bvp-kinematics}, which cannot be done analytically for general spatially dependent $\gamma^R$ and $\gamma^\theta$.
However, as stated above, for constant $\gamma^R$ and $\gamma^\theta$, the integrals determining stress may be computed analytically. This suggests a discretization process whereby the annular domain is divided into discrete layers, each with constant growth, and such that the growth in each layer evolves according to averaged values of the stress. In this way, analytical expressions may be determined for both the stress and the average stress, and hence the dynamics is reduced to a set of ordinary differential equations for the growth components.
The inhomogeneity of the full model is replaced
by a piecewise homogeneous model. This preserves the key idea of inhomogeneity
(allowing, for instance, circumferential growth to be higher near
the nucleus than away from it), but is more analytically
tractable and allows for precise statements about the long-term dynamics and stability, as well as qualitative investigation of, for instance, the influence of radial versus circumferential stress on the growth dynamics.
\section{\label{sec:two_disks}Growth dynamics for the two-layer system.}
\subsection{Kinematics}
We first consider two elastic layers attached to a solid nucleus and in perfect mechanical contact at their interface. In the initial
reference configuration $\mathcal{B}_{0}$, the inner wall is at the
radial coordinate $R^{0}=A_{0}$, the middle wall at $R^{0}=A_{1}$,
and the outer wall at $R^{0}=A_{2}$. In the current configuration
$\mathcal{B}_{t}$, the same material points have coordinates $r\left(A_{0}\right)=A_{0}$,
$r\left(A_{1}\right)=a_{1}$ and $r\left(A_{2}\right)=a_{2}$ (see Figure \ref{fig:Kinematics_two_layers}).
\begin{figure}[htpb]
\includegraphics[width=10cm]{two_layers_kinematic_setup}\hfill{}
\caption{\label{fig:Kinematics_two_layers}Kinematic setup for the two-layer system.
The innermost layer is attached to an unmoving nucleus ($a_{0}=A_{0}$)
and the boundary condition at the outer wall is zero traction, $T^{RR}\left(A_{2}\right)=0$. }
\end{figure}
We impose that in the reference configuration the two annular layers
enclose the same area $\pi\Delta^{2}$. The initial reference radii
of the two rings thus satisfy
\begin{equation}
\Delta^{2}=A_{2}^{2}-A_{1}^{2}=A_{1}^{2}-A_{0}^{2}\:.
\end{equation}
The deformation follows the same equations formulated in Section 1.1, but with piecewise homogeneous growth
\begin{equation}
\gamma\left(R^{0}\right)=\begin{cases}
\gamma_{1} & \text{if }A_{0}\leq R^{0}\leq A_{1}\\
\gamma_{2} & \text{if }A_{1}<R^{0}\leq A_{2}\:.
\end{cases}\label{eq:growth_piecewise_homogeneous}
\end{equation}
Here, $\gamma_{1}$ and $\gamma_{2}$ are constants. Note that our convention is to use subscripts to denote different layers and superscripts for the coordinate basis index. We impose isotropic growth, i.e. $\gamma_{1}^{R}=\gamma_{1}^{\theta}=\gamma_{1}$
and $\gamma_{2}^{R}=\gamma_{2}^{\theta}=\gamma_{2}$. The same ideas apply for anisotropic growth, but this simplification reduces the dynamics to a 2D phase space for $\gamma_1$, $\gamma_2$. In principle, one could also have piecewise-constant material properties and piecewise $\boldsymbol{K}$ values; however, our objective is to consider the dynamics in a reduced parameter space, hence the only distinction between the layers is the different growth rates.
The deformation in each layer comes from integrating \eqref{eq:radial_map_general}, subject to $r\left(A_{0}\right)=A_{0}$ and $r\left(A_{1}\right)=a_{1}$. We obtain
\begin{equation}
r\left(R^{0}\right)=\begin{cases}
r_{1}\left(R^{0}\right):=\sqrt{A_{0}^{2}+\gamma_{1}^{2}\left[\left(R^{0}\right)^{2}-A_{0}^{2}\right]} & \text{if }A_{0}\leq R^{0}\leq A_{1}\:,\\
r_{2}\left(R^{0}\right):=\sqrt{A_{0}^{2}+\gamma_{1}^{2}\Delta^{2}+\gamma_{2}^{2}\left[\left(R^{0}\right)^{2}-A_{1}^{2}\right]} & \text{if }A_{1}< R^{0}\leq A_{2}\:.
\end{cases}\label{eq:r_=00007Bp=00007Diecewise}
\end{equation}
Note that at $R^{0}=A_{1}$, $r$ is continuous but not differentiable.
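As a quick numerical check of the piecewise radial map, the following Python sketch (the radii $A_{0}=1$, $A_{1}=\sqrt{5/2}$, $A_{2}=2$ are those used in Figure \ref{fig:stress_profiles-constant-gamma}; the growth values are arbitrary illustrative choices) verifies that $r$ fixes the nucleus and is continuous at $A_{1}$:

```python
import math

# Reference radii as in the stress-profile figure; growth values are illustrative.
A0, A1, A2 = 1.0, math.sqrt(2.5), 2.0
g1, g2 = 1.3, 0.9   # constant growth in the inner / outer layer

def r_piecewise(R):
    """Current radius r(R^0) for two layers with piecewise-constant growth."""
    if R <= A1:  # inner layer
        return math.sqrt(A0**2 + g1**2 * (R**2 - A0**2))
    # outer layer: start from the deformed interface radius a1
    return math.sqrt(A0**2 + g1**2 * (A1**2 - A0**2) + g2**2 * (R**2 - A1**2))

assert abs(r_piecewise(A0) - A0) < 1e-12  # the nucleus is fixed
assert abs(r_piecewise(A1 - 1e-9) - r_piecewise(A1 + 1e-9)) < 1e-6  # continuity at A1
```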
\subsection{Mechanics}
The stress balance \eqref{eq:bvp-mechanics} determines the radial stress as
\begin{eqnarray}
&&T^{RR}\left(R^{0}\right)=\\
\nonumber &&\begin{cases}
T_{1}^{RR}\left(R^{0}\right):=T_{1}^{RR}\left(A_{1}\right)+\mu\int_{A_{1}}^{R^{0}}\frac{2}{\tilde{R}}\left(1-\frac{\tilde{R}^{4}\gamma_{1}^{4}}{r_{1}^{4}}\right)\mathrm{d}\tilde{R}, & R^{0}\in[A_{0},A_{1}]\\
T_{2}^{RR}\left(R^{0}\right):=\underbrace{T_{2}^{RR}\left(A_{2}\right)}_{0}+\mu\int_{A_{2}}^{R^{0}}\frac{2}{\tilde{R}}\left(1-\frac{\tilde{R}^{4}\gamma_{2}^{4}}{r_{2}^{4}}\right)\mathrm{d}\tilde{R}, & R^{0}\in[A_{1},A_{2}].
\end{cases}\label{eq:annulus_t1_piecewise11}
\end{eqnarray}
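Since the integrands are explicit, $T^{RR}$ can also be evaluated by direct quadrature. The following sketch (Python/NumPy; geometry as in Figure \ref{fig:stress_profiles-constant-gamma}, growth values illustrative) integrates the stress balance inward from the traction-free boundary and checks the boundary condition and the no-growth limit:

```python
import numpy as np

# Geometry as in the stress-profile figure: A0 = 1, A1 = sqrt(5/2), A2 = 2; mu = 1.
mu, A0, A1, A2 = 1.0, 1.0, np.sqrt(2.5), 2.0

def T_RR(R, g1, g2, n=20001):
    """Radial stress T^RR(R^0), integrating the stress balance from the
    traction-free outer boundary (T^RR(A2) = 0) down to R (trapezoid rule)."""
    Rs = np.linspace(A2, R, n)
    g = np.where(Rs <= A1, g1, g2)
    rsq = np.where(Rs <= A1,
                   A0**2 + g1**2 * (Rs**2 - A0**2),
                   A0**2 + g1**2 * (A1**2 - A0**2) + g2**2 * (Rs**2 - A1**2))
    y = 2.0 * mu / Rs * (1.0 - g**4 * Rs**4 / rsq**2)
    return np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(Rs))

assert abs(T_RR(A2, 1.3, 0.9)) < 1e-12   # boundary condition at the outer wall
assert abs(T_RR(A0, 1.0, 1.0)) < 1e-12   # no growth implies no stress
```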
From \eqref{eq:t2_general}, we then obtain the circumferential stress $T^{\theta\theta}\left(R^{0}\right)$:
\begin{eqnarray}
&& T^{\theta\theta}\left(R^{0}\right)=\\
\nonumber&&\begin{cases}
T_{1}^{\theta\theta}\left(R^{0}\right):=T_{1}^{RR}\left(R^{0}\right)+\mu\frac{2r_{1}^{2}}{\gamma_{1}^{2}\left(R^{0}\right)^{2}}\left[1-\frac{\left(R^{0}\right)^{4}\gamma_{1}^{4}}{r_{1}^{4}}\right], & R^{0}\in[A_{0},A_{1}]\\
T_{2}^{\theta\theta}\left(R^{0}\right):=T_{2}^{RR}\left(R^{0}\right)+\mu\frac{2r_{2}^{2}}{\gamma_{2}^{2}\left(R^{0}\right)^{2}}\left[1-\frac{\left(R^{0}\right)^{4}\gamma_{2}^{4}}{r_{2}^{4}}\right], & R^{0}\in[A_{1},A_{2}].
\end{cases}\label{eq:annulus_t2_piecewise22}
\end{eqnarray}
\begin{figure}[htpb]
\centering\includegraphics[width=1\textwidth]{stresses_static_2016}\hfill{}
\caption{\label{fig:stress_profiles-constant-gamma}Radial (top) and circumferential
(bottom) components of Cauchy stress for $A_{0}=1$, $A_{1}=\sqrt{5/2}$,
$A_{2}=2$, $\Delta=\sqrt{5/2}$, $\mu=1$, $\gamma_{2}=1$ and $\gamma_{1}$
as indicated. }
\end{figure}
The expressions $T_{1}^{RR}$ and $T_{2}^{RR}$ as well as $T_{1}^{\theta\theta}$
and $T_{2}^{\theta\theta}$ can be determined analytically as functions
of $A_{0}$, $A_{1}$, $A_{2}$, $\mu$, $\gamma_{1}$ and $\gamma_{2}$, though the exact expressions are long and have been suppressed here.
Sample stress profiles for varying values of $\gamma_1$ (with $\gamma_2=1$) are given in Figure \ref{fig:stress_profiles-constant-gamma}. With $\gamma_1>1$, the inner layer grows uniformly, hence its unstressed state is a uniformly expanded annulus; however, it is constrained by attachment to the core and to the non-growing outer layer. Thus the inside of the inner layer is in radial tension (the inner edge is ``stretched'' radially to match the core), the outside is in radial compression, and the entire layer is in compression in the hoop direction. The outer layer, on the other hand, is forced to expand circumferentially to accommodate the growing inner layer and is in circumferential tension; this is balanced by a compression in the radial direction. The inverse effect occurs for $\gamma_1<1$.
\subsection{\label{subsec:Growth-law-2-layers-for-bif-diagram}Growth law}
We define the average stresses $\overline{T_{1}}$ and $\overline{T_{2}}$,
for both radial and circumferential stress components, as
\begin{equation}
\overline{T_{1}}=\frac{2}{\Delta^{2}}\int_{A_{0}}^{A_{1}}T_{1}\left(\tilde{R}\right)\tilde{R}\mathrm{d}\tilde{R}\,,\qquad\overline{T_{2}}=\frac{2}{\Delta^{2}}\int_{A_{1}}^{A_{2}}T_{2}\left(\tilde{R}\right)\tilde{R}\mathrm{d}\tilde{R}.\label{eq:stress_average_two}
\end{equation}
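These averages can be assembled numerically in one pass: compute $T^{RR}$ on a grid by cumulative quadrature, obtain $T^{\theta\theta}$ from the hoop relation, then average each component over each layer. A minimal sketch (Python/NumPy; same illustrative geometry as above):

```python
import numpy as np

mu, A0, A1, A2 = 1.0, 1.0, np.sqrt(2.5), 2.0
D2 = A1**2 - A0**2              # Delta^2; both layers enclose the same area

def average_stresses(g1, g2, n=40001):
    """Layer averages of T^RR and T^{theta theta} for piecewise growth (g1, g2).
    Returns a dict with keys 'RR1', 'RR2', 'tt1', 'tt2'."""
    R = np.linspace(A0, A2, n)
    g = np.where(R <= A1, g1, g2)
    rsq = np.where(R <= A1,
                   A0**2 + g1**2 * (R**2 - A0**2),
                   A0**2 + g1**2 * D2 + g2**2 * (R**2 - A1**2))
    h = 1.0 - g**4 * R**4 / rsq**2
    dT = 2.0 * mu / R * h
    # cumulative trapezoid from A0; shift so that T^RR(A2) = 0
    S = np.concatenate(([0.0], np.cumsum((dT[1:] + dT[:-1]) / 2.0 * np.diff(R))))
    Trr = S - S[-1]
    Ttt = Trr + 2.0 * mu * rsq / (g**2 * R**2) * h   # hoop relation
    out = {}
    for name, T in (("RR", Trr), ("tt", Ttt)):
        for lab, m in (("1", R <= A1), ("2", R >= A1)):
            y = (T * R)[m]
            out[name + lab] = 2.0 / D2 * np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(R[m]))
    return out

# Without growth the stress vanishes identically, so all averages are zero:
assert all(abs(v) < 1e-9 for v in average_stresses(1.0, 1.0).values())
```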
Our approach is to modify the growth dynamics so that the (constant) growth in each layer evolves according to the averaged stress values. That is, we study the system
\begin{equation}
\begin{aligned}\dot{\gamma}_{1} & =\gamma_{1}\left\{ K^{RR}\left[\overline{T_{1}^{RR}}-\left(T_{1}^{RR}\right)^{*}\right]+K^{\theta\theta}\left[\overline{T_{1}^{\theta\theta}}-\left(T_{1}^{\theta\theta}\right)^{*}\right]\right\} \,\\
\dot{\gamma}_{2} & =\gamma_{2}\left\{ K^{RR}\left[\overline{T_{2}^{RR}}-\left(T_{2}^{RR}\right)^{*}\right]+K^{\theta\theta}\left[\overline{T_{2}^{\theta\theta}}-\left(T_{2}^{\theta\theta}\right)^{*}\right]\right\}.
\end{aligned}
\label{eq:annulus_growth_law}
\end{equation}
Note that isotropic growth enforces $K^{RR}=K^{\theta R}$ and $K^{\theta\theta}=K^{R\theta}$,
hence there are only two (rather than four) growth rate constants $K^{RR}$ and
$K^{\theta\theta}$.
To further reduce the parameter space, we make the additional assumption that the homeostatic stress
values are equivalent in layers 1 and 2, that is
\begin{equation}
\left(T^{RR}\right)^{*}:=\left(T_{1}^{RR}\right)^{*}=\left(T_{2}^{RR}\right)^{*}\qquad\text{and}\qquad\left(T^{\theta\theta}\right)^{*}:=\left(T_{1}^{\theta\theta}\right)^{*}=\left(T_{2}^{\theta\theta}\right)^{*}\,.\label{eq:homeostatic-stress-equal-in-both-layers}
\end{equation}
We emphasize that while $\overline{T_{i}^{RR}}$ and $\overline{T_{i}^{\theta\theta}}$
for $i=1,2$ are averages over actual stresses according to \eqref{eq:stress_average_two},
the homeostatic values $\left(T_{i}^{RR}\right)^{*}$ and $\left(T_{i}^{\theta\theta}\right)^{*}$
for $i=1,2$ are prescribed values that may, but need not, correspond to averages of physically realizable stresses.
To facilitate the analysis, we rescale all stress quantities by a characteristic value $\sigma$, e.g. $\hat{T}^{RR}=T^{RR}/\sigma$, and rescale time as $\hat{t}=t\sigma K^{\theta\theta}$. We also introduce
\begin{equation}
\tilde{K}:=K^{RR}/K^{\theta\theta}\qquad\text{and}\qquad \hat{T}^{*}:=\tilde{K}\left(\hat{T}^{RR}\right)^{*}+\left(\hat{T}^{\theta\theta}\right)^{*}\,.\label{eq:ktilde-tstar}
\end{equation}
The parameter $\tilde{K}$ is a measure of anisotropy of the mechanical
feedback, i.e. a weighting of the contribution of radial vs. circumferential
stress to the (isotropic) growth response.
The rescaled growth law is then
\begin{equation}
\begin{aligned}\dot{\gamma}_{1} & =\gamma_{1}\left[\tilde{K}\overline{T_{1}^{RR}}+\overline{T_{1}^{\theta\theta}}-T^{*}\right]\\
\dot{\gamma}_{2} & =\gamma_{2}\left[\tilde{K}\overline{T_{2}^{RR}}+\overline{T_{2}^{\theta\theta}}-T^{*}\right].
\end{aligned}
\label{eq:dynamical_system_two_layers}
\end{equation}
Here we have re-defined the overdot as the derivative with respect
to the rescaled time, and we have dropped all hats for notational convenience. Note that all stress averages depend nonlinearly
on $\gamma_{1}$ and $\gamma_{2}$, but not on the spatial coordinate
$R^{0}$, which has been integrated out.
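The right-hand side of this system can be assembled from the averaged stresses; the sketch below (Python/NumPy; the value of $\tilde{K}$, the geometry, and the Euler step size are illustrative choices, not taken from the text) checks that, with $T^{*}=0$, the no-growth state $\gamma_{1}=\gamma_{2}=1$ is an equilibrium, since stress vanishes identically without growth:

```python
import numpy as np

# Illustrative choices: geometry as above, Ktil = K^RR / K^tt = 2 (not from the text).
mu, A0, A1, A2, Ktil = 1.0, 1.0, np.sqrt(2.5), 2.0, 2.0
D2 = A1**2 - A0**2

def bar_stresses(g1, g2, n=40001):
    """(T1RR, T2RR, T1tt, T2tt): layer-averaged stresses by quadrature."""
    R = np.linspace(A0, A2, n)
    g = np.where(R <= A1, g1, g2)
    rsq = np.where(R <= A1, A0**2 + g1**2 * (R**2 - A0**2),
                   A0**2 + g1**2 * D2 + g2**2 * (R**2 - A1**2))
    h = 1.0 - g**4 * R**4 / rsq**2
    dT = 2.0 * mu / R * h
    S = np.concatenate(([0.0], np.cumsum((dT[1:] + dT[:-1]) / 2.0 * np.diff(R))))
    Trr = S - S[-1]                                   # enforce T^RR(A2) = 0
    Ttt = Trr + 2.0 * mu * rsq / (g**2 * R**2) * h
    def avg(T, m):
        y = (T * R)[m]
        return 2.0 / D2 * np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(R[m]))
    m1, m2 = R <= A1, R >= A1
    return avg(Trr, m1), avg(Trr, m2), avg(Ttt, m1), avg(Ttt, m2)

def rhs(g1, g2, Tstar):
    """Right-hand side of the rescaled two-layer growth law."""
    b1r, b2r, b1t, b2t = bar_stresses(g1, g2)
    return np.array([g1 * (Ktil * b1r + b1t - Tstar),
                     g2 * (Ktil * b2r + b2t - Tstar)])

# With no growth the stress vanishes, so gamma = (1, 1) is an equilibrium iff T* = 0:
assert np.allclose(rhs(1.0, 1.0, Tstar=0.0), 0.0, atol=1e-9)

# One forward-Euler step (step size is an arbitrary choice); a negative T*
# pushes the no-growth state towards growth in both layers:
g = np.array([1.0, 1.0]) + 1e-2 * rhs(1.0, 1.0, Tstar=-0.5)
assert np.all(g > 1.0)
```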
\subsection{Stability analysis}
To investigate the behavior of the growth dynamics, we can now apply standard techniques of dynamical systems to \eqref{eq:dynamical_system_two_layers}; i.e. we seek equilibria satisfying
$\dot{\gamma}_{1}=0$ and $\dot{\gamma}_{2}=0$ and compute their stability. Let $\left\{ \gamma_{1}^{\text{eq}},\gamma_{2}^{\text{eq}}\right\} $ denote an equilibrium state. The nonlinear nature of the dependence of $\overline{T_{1}^{RR}}$, $\overline{T_{2}^{RR}}$, $\overline{T_{1}^{\theta\theta}}$
and $\overline{T_{2}^{\theta\theta}}$ on $\gamma_{1}$, $\gamma_{2}$
makes it difficult to compute analytically the number and
location of equilibrium states as a function of the parameters $\tilde{K}$
and $T^{*}$ and we shall use numerical methods to this end.
For a given equilibrium state, we then perform a linear stability analysis. Let $0<\varepsilon\ll1$ and expand as
\begin{equation}
\begin{aligned}\gamma_{1} & =\gamma_{1}^{\text{eq}}+\varepsilon\overline{\gamma}_{1}+\mathcal{O}\left(\varepsilon^{2}\right),\\
\gamma_{2} & =\gamma_{2}^{\text{eq}}+\varepsilon\overline{\gamma}_{2}+\mathcal{O}\left(\varepsilon^{2}\right).
\end{aligned}
\label{eq:linear-expansion-two-layers}
\end{equation}
Introducing $\boldsymbol{\gamma}=\left(\gamma_{1},\gamma_{2}\right)$
to describe the state of the system \eqref{eq:dynamical_system_two_layers},
its linearly expanded version (to order $\varepsilon$) takes the
form
\begin{equation}
\dot{\overline{\boldsymbol{\gamma}}}=\mathbf{J}\boldsymbol{\overline{\gamma}}
\end{equation}
where the Jacobian matrix has entries
\begin{equation}
J_{ij}=\left[\frac{\partial\dot{\gamma}_{i}}{\partial\gamma_{j}}\right]_{\boldsymbol{\gamma}=\boldsymbol{\gamma}^{\text{eq}}}.
\end{equation}
Stability is determined in the usual way by the form of eigenvalues of $\mathbf{J}$, which are the roots of the characteristic equation
\begin{equation}
0=(J_{11}-\lambda)(J_{22}-\lambda)-J_{12}J_{21}.
\end{equation}
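In practice the Jacobian can be approximated by central finite differences and its eigenvalues computed numerically. A generic sketch (Python/NumPy), using a simple test field unrelated to the growth system so the answer is known in closed form:

```python
import numpy as np

def jacobian_fd(f, x, h=1e-6):
    """Central finite-difference approximation of the Jacobian of f at x."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((len(x), len(x)))
    for j in range(len(x)):
        e = np.zeros(len(x)); e[j] = h
        J[:, j] = (np.asarray(f(x + e)) - np.asarray(f(x - e))) / (2.0 * h)
    return J

# Test field: f(x, y) = (x^2 y, x + y^2). At (1, 1) its Jacobian is
# [[2, 1], [1, 2]], with eigenvalues 1 and 3; both having positive real
# part would signal an unstable equilibrium.
J = jacobian_fd(lambda v: np.array([v[0]**2 * v[1], v[0] + v[1]**2]), [1.0, 1.0])
lam = np.linalg.eigvals(J)
assert np.allclose(sorted(lam.real), [1.0, 3.0], atol=1e-5)
```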
\subsection{Bifurcation diagram}
The number of equilibrium states and their stability depend on the values of $\tilde{K}$ and $T^*$. In Figure \ref{fig:two_layered_bifurcation_diagram}(a) we present a phase diagram that shows four regions with distinct dynamical behavior. These can be summarized as follows:
\begin{itemize}
\item \textbf{Region I} has four equilibrium states, of which one is a
stable node, two are saddles, and the fourth is either an unstable node
or an unstable focus.
\item \textbf{Region II} has four equilibrium states: two are saddles,
and the other two are stable (either two stable nodes, or a stable focus
and a stable node). A Hopf bifurcation at the interface of Regions I
\& II transforms the unstable focus into a stable focus.
\item \textbf{Region III} has two equilibrium states, one of which is a
stable node, the other a saddle. At the interface between Regions II and
III, a saddle-node bifurcation occurs that annihilates a stable node and
a saddle of Region II.
\end{itemize}
In Figure
\ref{fig:two_layered_bifurcation_diagram}(b) we show phase portraits for the selected points P1--P5. Nullclines are plotted as blue and green curves, illustrating the appearance and disappearance of equilibrium states as categorized above.
As is evident in Figure \ref{fig:two_layered_bifurcation_diagram}, there is a wealth of possible dynamical behavior exhibited in this system. That an idealized two-layer model with isotropic growth and equivalent homeostatic values in each layer has such a rich structure highlights the generically complex nature of mechanically driven growth. Our intent is not to fully categorize the behavior; rather, this system should be seen as a paradigm to illustrate complex dynamics. Nevertheless, several observations are in order.
One observation from the phase portraits in Figure \ref{fig:two_layered_bifurcation_diagram}(b) is that unbounded growth is not only possible but ``common'', at least in the sense that many parameter choices and initial conditions lead to trajectories for which $\gamma_i\to\infty$. Perhaps the most natural initial condition is to set $\gamma_1=\gamma_2=1$, which corresponds to letting the system evolve from an initial state with no growth. Examining the trajectories in Figure \ref{fig:two_layered_bifurcation_diagram}(b) shows that points P1 and P2 would not evolve towards the single stable state, but rather would grow without bound.
Another point of interest is that while Regions I, II and III contain stable equilibria, the stable states in Regions I and III satisfy $\gamma_{1}^{\text{eq}}\gamma_{2}^{\text{eq}}<1$. These are equilibria for which one of the layers has lost mass (at least one of the $\gamma_i<1$). Growth in both layers requires both $\gamma_i>1$, and we find that such an equilibrium only exists in a small subset of Region II, shaded dark blue in Figure \ref{fig:two_layered_bifurcation_diagram}. We further see that $T^*<0$ in the dark blue region, with $\tilde{K}$ approximately in the range 10 to 17. This implies that in order for a stable equilibrium to exist where both layers have grown, the homeostatic stress must be compressive in one or both components, and the system must respond more strongly to radial than to circumferential stress.
\begin{figure}[htpb]
\centering\includegraphics[width=1\textwidth]{bifurcation_collage}
\caption{\label{fig:two_layered_bifurcation_diagram}(a) Bifurcation diagram
for the two-layer, actively growing, piecewise homogeneous system. (b)
Equilibrium states and their dynamical characterization. Parameter values were $A_0=1$, $A_1=1.562$, $A_2=1.970$.}
\end{figure}
\paragraph{Admissible versus inadmissible homeostatic values.}
In Figure \ref{fig:two_layered_bifurcation_diagram} we imposed the homeostatic
stress $T^*$ to be equal in each layer. Moreover, $T^*$ could take any value, and thus had no direct correspondence to a physically realizable stress state. We now define an {\it admissible homeostatic value} as the average over a stress field that can be physically realized with the given geometry and boundary conditions. Such an admissible homeostatic stress state derives from a homeostatic growth, i.e. a given growth field $\boldsymbol{\gamma}^{*}=\left(\gamma_{1}^{*},\gamma_{2}^{*}\right)^{T}$ defines a spatially dependent stress, and averaging according to \eqref{eq:stress_average_two} then gives admissible values for the homeostatic stress:
\begin{equation}
\overline{T_{i}^{RR}}\left(\boldsymbol{\gamma}^{*}\right)\qquad\text{and}\qquad\overline{T_{i}^{\theta\theta}}\left(\boldsymbol{\gamma}^{*}\right),\qquad i=1,2\,.\label{eq:homeostatic-compatibility-2-disks}
\end{equation}
An {\it inadmissible homeostatic value} is one that cannot be expressed as an average over an actual stress, i.e. there exists
no $\boldsymbol{\gamma}^{*}$ defining $\overline{\mathbf{T}}^{*}$.
\begin{figure}[htpb]
\includegraphics[width=0.99\textwidth]{phase_portrait_detailed}\hfill{}
\caption{\label{fig:spiral-dynamics-trajectories}Trajectories and layer sizes
for highly anisotropic growth law with admissible homeostatic state.
\textbf{(a)} Contours for $\dot{\gamma}_{1}=0$ and $\dot{\gamma}_{2}=0$
for the system \eqref{eq:dynamics-N2}. As can be
confirmed from the stream plots \textbf{(b)} and \textbf{(c)}, there
is one stable spiral, two saddles, and one stable node.
The saddle point P4 in (b) is the homeostatic equilibrium $\left(\gamma_{1}^{*},\gamma_{2}^{*}\right)$.
Parameters: $\mu=2$, $\Delta=\sqrt{3}$ ($A_{0}=1$, $A_{2}=\sqrt{7}$).
$\tilde{K}=23.5$. Homeostatic growth: $\gamma_{1}^{*}=5.867$, $\gamma_{2}^{*}=3$. }
\end{figure}
\paragraph{Growth law with admissible homeostatic values.}
To conclude our analysis of the two-layer system, we return to the same growth law, but for admissible homeostatic values. Due to the spatial inhomogeneity of the stress profile in the two-layer cylinder (see for instance Figure \ref{fig:stress_profiles-constant-gamma}), it is not possible to have equal homeostatic values in layers 1 and 2.
The growth law with admissible homeostatic values reads
\begin{equation}
\begin{aligned}\dot{\gamma}_{1} & =\gamma_{1}\left\{ \tilde{K}\left[\overline{T_{1}^{RR}}\left(\boldsymbol{\gamma}\right)-\overline{T_{1}^{RR}}\left(\boldsymbol{\gamma}^{*}\right)\right]+\left[\overline{T_{1}^{\theta\theta}}\left(\boldsymbol{\gamma}\right)-\overline{T_{1}^{\theta\theta}}\left(\boldsymbol{\gamma}^{*}\right)\right]\right\} \\
\dot{\gamma}_{2} & =\gamma_{2}\left\{ \tilde{K}\left[\overline{T_{2}^{RR}}\left(\boldsymbol{\gamma}\right)-\overline{T_{2}^{RR}}\left(\boldsymbol{\gamma}^{*}\right)\right]+\left[\overline{T_{2}^{\theta\theta}}\left(\boldsymbol{\gamma}\right)-\overline{T_{2}^{\theta\theta}}\left(\boldsymbol{\gamma}^{*}\right)\right]\right\} \,.
\end{aligned}
\label{eq:dynamics-N2}
\end{equation}
The parameter space for this system is now inherently three dimensional, as the homeostatic stress values are defined by the two choices $\gamma_i^*$ as opposed to the single value $T^*$. Here we restrict our analysis to a single example, with $\gamma_{1}^{*}=5.867$, $\gamma_{2}^{*}=3$, and $\tilde{K}=23.5$, thus representing a preferred state defined by significant growth in each layer, and with strongly anisotropic growth dynamics due to the large value of $\tilde{K}$. The dynamics are presented in Figure \ref{fig:spiral-dynamics-trajectories}. The contour plot in Figure
\ref{fig:spiral-dynamics-trajectories}(a) shows that there are in
total four equilibrium states. The streamlines and trajectory
plots in Figure \ref{fig:spiral-dynamics-trajectories}(b) and (c)
reveal that the equilibria consist of a stable spiral, two saddles, and one
stable node. It is interesting to note that P4, which is the equilibrium state at which both $\gamma_{i}^{\text{eq}}=\gamma_i^*$, is unstable; that is, the system does not remain at the equilibrium state through which the homeostatic values were defined.
Included in Figure \ref{fig:spiral-dynamics-trajectories}(b) are three sample trajectories, with the size of each layer shown at different times, and illustrative of the variety of dynamical behavior. The green trajectory quickly settles to a stable state marked by significant resorption (both $\gamma_i<1$); the blue and red trajectories sit outside the basin of attraction of P1 and show an initial period of resorption followed by significant growth. The red trajectory is in the basin of attraction of the stable focus and thus oscillates between growth and decay as it approaches the stable point at P3, while the blue trajectory, just outside the basin of attraction, ultimately grows without bound, never reaching an equilibrium state.
\section{\label{sec:N_disks}Growth of discrete $N$ layer system}
Next, we generalize the dynamical system of the previous
section from two to $N$ layers, with growth constant throughout each layer. If $N$ is sufficiently large, a system of $N$ layers can be used as a suitable spatial discretisation of a continuous growth profile, about which precise statements can be obtained. In this case, we can generalize Equations \eqref{eq:dynamics-N2} to $N$ coupled ODEs. We will analyze the stability
of this system near a homeostatic equilibrium, and show to what
extent the results obtained for $N=2$ remain unchanged as the discretisation is refined
($N$ increases), which informs the stability of the continuous ($N\rightarrow\infty$)
system.
A major difference compared to the two-layer model
is the method to obtain homeostatic values. Previously,
homeostatic values were prescribed via the homeostatic growth values $\gamma_{1}^{*}$,
$\gamma_{2}^{*}$. In the present model, homeostatic values are obtained
by assuming the existence of a prescribed continuous homeostatic growth profile $\gamma^{*}\left(R^{0}\right)$.
The homeostatic values $\left\{ \gamma_{i}^{*}\right\} $ are then obtained
through local averaging of the prescribed profile $\gamma^{*}\left(R^{0}\right)$
over an interval by generalizing Equations \eqref{eq:stress_average_two}. These values are admissible by construction.
Since growth is
taken as constant in each layer, the stresses can be determined fully
analytically and a stability analysis can then be performed. The stability
analysis will inform under which conditions the dynamical system will either
relax to a homeostatic state after a small perturbation or lead to an instability.
\subsection{Kinematics}
\begin{figure}[htpb]\centering
\includegraphics[width=8cm]{N_layers}\caption{\label{fig:Kinematics_N_layers}Kinematic setup for an isotropically
growing $N$-layered system. Note that the discretization is chosen such that all layers have equal area.}
\end{figure}
We consider $N$ perfectly connected annuli, separated by $N+1$ interfaces,
which in the initial reference configuration have the radial coordinate
values $\left\{ A_{0},A_{1},\ldots,A_{N}\right\} $ as sketched in Figure \ref{fig:Kinematics_N_layers}.
The $K$-th annulus is defined by $A_{K-1}\leq R^{0}\leq A_{K}$ for $K\in\left\{ 1,\ldots,N\right\} $.
We choose a particular discretization so that the area of each layer, $\pi\Delta^{2}$, is constant:
\begin{equation}
A_{K}^{2}-A_{K-1}^{2}:=\Delta^{2}=\text{const.}
\end{equation}
We can write
$A_{K}$ explicitly as
\begin{equation}
A_{K}^{2}=A_{0}^{2}+K\Delta^{2}\,.\label{eq:Ak-explicit}
\end{equation}
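The interface radii can thus be generated directly; a short check (Python/NumPy, with illustrative values of $A_{0}$, $\Delta^{2}$ and $N$):

```python
import numpy as np

A0, Delta2, N = 1.0, 3.0, 8                      # illustrative values
A = np.sqrt(A0**2 + Delta2 * np.arange(N + 1))   # A_K^2 = A_0^2 + K * Delta^2
assert np.allclose(np.diff(A**2), Delta2)        # every layer has area pi * Delta^2
```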
Given a continuous curve $\gamma\left(R^{0}\right)$ we define the
piecewise constant growth profile by taking the average
\begin{equation}
\gamma_{K}:=\overline{\gamma\left(R^{0}\right)}=\frac{2}{\Delta^{2}}\int_{A_{K-1}}^{A_{K}}\gamma\left(\tilde{R}\right)\tilde{R}\mathrm{d}\tilde{R},\quad K=1,\ldots,N\label{eq:gamma-piecewise}
\end{equation}
Each growth value $\gamma_{K}$ is constant throughout the $K$-th layer.
We demonstrate the construction of the discrete profile $\left\{ \gamma_{K}\right\} $
from the continuous profile $\gamma\left(R^{0}\right)$ in Figure
\ref{fig:averaging-gamma}, in which we consider as an example the
continuous function
\begin{equation}
\gamma\left(R^{0}\right)=2-\frac{3}{2}\sin\left(\pi\frac{R^{0}-A_{0}}{A_{N}-A_{0}}\right).\label{eq:gamma-continuous-example}
\end{equation}
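The layer averages \eqref{eq:gamma-piecewise} of this profile can be computed by simple quadrature; a sketch (Python/NumPy; the values $A_{0}=1$, $A_{N}=5$, $N=8$, $\Delta=\sqrt{3}$ match Figure \ref{fig:averaging-gamma}):

```python
import numpy as np

A0, AN, N = 1.0, 5.0, 8                    # values as in the averaging figure
Delta2 = (AN**2 - A0**2) / N               # equal-area layers: Delta^2 = 3
A = np.sqrt(A0**2 + Delta2 * np.arange(N + 1))

def layer_averages(gamma, n=4001):
    """gamma_K = (2/Delta^2) * int_{A_{K-1}}^{A_K} gamma(R) R dR (trapezoid rule)."""
    out = []
    for K in range(1, N + 1):
        R = np.linspace(A[K - 1], A[K], n)
        y = gamma(R) * R
        out.append(2.0 / Delta2 * np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(R)))
    return np.array(out)

gam = layer_averages(lambda R: 2.0 - 1.5 * np.sin(np.pi * (R - A0) / (AN - A0)))
# Sanity check: averaging preserves a constant profile exactly.
assert np.allclose(layer_averages(lambda R: np.full_like(R, 1.7)), 1.7)
```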
\begin{figure}\centering
\includegraphics[width=0.8\textwidth]{doodle1_17_10_2016}
\caption{\label{fig:averaging-gamma}Growth $\gamma$ continuous vs. averaged.
The continuous curve \eqref{eq:gamma-continuous-example} is plotted
in blue, and the average over a particular discretisation according
to \eqref{eq:gamma-piecewise} is shown by a solid piecewise constant black curve
($N=8$ with $A_{0}=1$, $A_{N}=5$ and $\Delta=\sqrt{3}$). }
\end{figure}
Once $\left\{ \gamma_{K}\right\} $ are obtained, we compute the radial
map $r_{K}\left(R^{0}\right)$ from the discrete profile $\left\{ \gamma_{K}\right\} $.
Note that while $\gamma_{K}$ is a constant throughout the $K$-th
layer, the radial map $r_{K}$ is a function of the radial coordinate
$R^{0}$:
\begin{equation}
r_{K}^{2}\left(R^{0}\right)=r_{K-1}^{2}\left(A_{K-1}\right)+\gamma_{K}^{2}\left[\left(R^{0}\right)^{2}-A_{K-1}^{2}\right],\qquad r_{0}^{2}\left(R^{0}\right)=A_{0}^{2}.\label{eq:radial-function-recursive}
\end{equation}
Explicitly, this implies
\begin{align}
r_{K}^{2}\left(R^{0}\right) & =A_{0}^{2}+\left(\Delta^{2}\sum_{i=1}^{K-1}\gamma_{i}^{2}\right)+\gamma_{K}^{2}\left[\left(R^{0}\right)^{2}-A_{K-1}^{2}\right].\label{eq:radial-function-explicit}
\end{align}
Notice that the recursive expression \eqref{eq:radial-function-recursive}
and the explicit expression \eqref{eq:radial-function-explicit} are
consistent with the requirement
\begin{equation}
r_{K-1}\left(A_{K-1}\right)=r_{K}\left(A_{K-1}\right),\label{eq:continuity-of-rk}
\end{equation}
which means that $r_{K}$ is continuous at the layer boundary $A_{K-1}$.
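This consistency is easy to verify numerically; the sketch below (Python/NumPy, with arbitrary illustrative growth values) implements both the recursion \eqref{eq:radial-function-recursive} and the closed form \eqref{eq:radial-function-explicit} and checks agreement and interface continuity:

```python
import numpy as np

A0, Delta2, N = 1.0, 3.0, 8
A = np.sqrt(A0**2 + Delta2 * np.arange(N + 1))
g = np.array([1.2, 0.8, 1.5, 1.1, 0.9, 1.3, 1.0, 0.7])  # illustrative growth values

def r_sq_recursive(K, R):
    """r_K^2(R) from the interface recursion, with r_0^2 := A_0^2."""
    if K == 0:
        return A0**2
    return r_sq_recursive(K - 1, A[K - 1]) + g[K - 1]**2 * (R**2 - A[K - 1]**2)

def r_sq_explicit(K, R):
    """r_K^2(R) from the closed form with the partial sum over gamma_i^2."""
    return A0**2 + Delta2 * np.sum(g[:K - 1]**2) + g[K - 1]**2 * (R**2 - A[K - 1]**2)

# The two expressions agree, and r is continuous across each interface:
mid = 0.5 * (A[4] + A[5])
assert np.isclose(r_sq_recursive(5, mid), r_sq_explicit(5, mid))
assert np.isclose(r_sq_explicit(3, A[2]), r_sq_explicit(2, A[2]))
```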
\subsection{Mechanics}
\paragraph{Stress components.}
In the continuous version, the radial stress $T^{RR}$ is obtained from \eqref{eq:bvp-mechanics}.
The discrete version reads
\begin{equation}
\frac{\partial T_{K}^{RR}}{\partial R^{0}}=\frac{2\mu}{R^{0}}\left[1-\frac{\gamma_{K}^{4}\left(R^{0}\right)^{4}}{r_{K}^{4}\left(R^{0}\right)}\right],\qquad T_{N}^{RR}\left(A_{N}\right)=0\,.\label{eq:TRR-differential}
\end{equation}
Traction continuity at the interfaces implies
\begin{equation}
T_{K}^{RR}\left(A_{K}\right)=T_{K+1}^{RR}\left(A_{K}\right)\,.\label{eq:continuity-of-TRR}
\end{equation}
We define $\tau_{K}^{RR}\left(R^{0}\right)$ as the indefinite integral
over the right hand side of \eqref{eq:TRR-differential} (dropping
the integration constant),
\begin{equation}
\tau_{K}^{RR}\left(R^{0}\right):=-\mu\frac{r_{K-1}^{2}\left(A_{K-1}\right)-A_{K-1}^{2}\gamma_{K}^{2}}{r_{K}^{2}\left(R^{0}\right)}-\mu\log\left[\frac{r_{K}^{2}\left(R^{0}\right)}{\left(R^{0}\right)^{2}}\right],
\end{equation}
from which we express the radial stress in the $K$-th layer as
\begin{equation}
T_{K}^{RR}\left(R^{0}\right)=\tau_{K}^{RR}\left(R^{0}\right)-\tau_{N}^{RR}\left(A_{N}\right)+\sum_{i=K}^{N-1}\mu\frac{A_{i}^{2}\left(\gamma_{i+1}^{2}-\gamma_{i}^{2}\right)}{r_{i}^{2}\left(A_{i}\right)}\label{eq:TRR-explicit}
\end{equation}
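As a consistency check of \eqref{eq:TRR-explicit}, the sketch below (Python/NumPy, with illustrative growth values) evaluates the explicit formula and compares it against direct layer-by-layer quadrature of \eqref{eq:TRR-differential}; it also confirms the traction-free outer boundary and continuity at an interior interface:

```python
import numpy as np

mu, A0, Delta2, N = 1.0, 1.0, 3.0, 8
A = np.sqrt(A0**2 + Delta2 * np.arange(N + 1))
g = np.linspace(0.8, 1.4, N)                  # illustrative piecewise growth values

def r_sq(K, R):
    """r_K^2(R), with r_0^2 := A_0^2 (fixed nucleus)."""
    if K == 0:
        return A0**2
    return A0**2 + Delta2 * np.sum(g[:K - 1]**2) \
        + g[K - 1]**2 * (np.asarray(R, float)**2 - A[K - 1]**2)

def tau(K, R):
    """Antiderivative of the stress-balance right-hand side in layer K."""
    c = r_sq(K - 1, A[K - 1]) - A[K - 1]**2 * g[K - 1]**2
    return -mu * c / r_sq(K, R) - mu * np.log(r_sq(K, R) / np.asarray(R, float)**2)

def T_RR(K, R):
    """Explicit radial stress in layer K (the closed-form expression above)."""
    jumps = sum(mu * A[i]**2 * (g[i]**2 - g[i - 1]**2) / r_sq(i, A[i])
                for i in range(K, N))
    return tau(K, R) - tau(N, A[N]) + jumps

# Traction-free outer boundary and continuity at an interior interface:
assert abs(T_RR(N, A[N])) < 1e-12
assert abs(T_RR(3, A[3]) - T_RR(4, A[3])) < 1e-9

# Cross-check against direct layer-by-layer quadrature from A_N down to A_0:
T_num = 0.0
for K in range(N, 0, -1):
    R = np.linspace(A[K], A[K - 1], 4001)
    y = 2.0 * mu / R * (1.0 - g[K - 1]**4 * R**4 / r_sq(K, R)**2)
    T_num += np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(R))
assert abs(T_num - T_RR(1, A0)) < 1e-5
```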
\begin{figure}\centering
\includegraphics[width=0.8\textwidth]{radial_MAP_profile}
\caption{\label{fig:radius-discrete-and-continuous}Radial function $r_{K}\left(R^{0}\right)$
for the case of discrete growth $\gamma_{i}$, computed according
to \eqref{eq:radial-function-explicit}. The dashed line represents
the case of no deformation $r=R^{0}$; everything below the dashed
line is resorption (``shrinking''), everything above this line is
growth (Parameters as in Figure \ref{fig:averaging-gamma}).}
\end{figure}
The circumferential stress $T^{\theta\theta}$
is related to the radial stress $T^{RR}$ through \eqref{eq:t2_general}.
The discrete version of the relationship between $T^{RR}$ and $T^{\theta\theta}$
is given by
\begin{equation}
T_{K}^{\theta\theta}\left(R^{0}\right)=T_{K}^{RR}\left(R^{0}\right)+\kappa_{K}\left(R^{0}\right),\label{eq:TthetaTheta-discrete}
\end{equation}
where
\begin{equation}
\kappa_{K}\left(R^{0}\right):=\frac{2\mu r_{K}^{2}\left(R^{0}\right)}{\gamma_{K}^{2}\left(R^{0}\right)^{2}}\left(1-\frac{\gamma_{K}^{4}\left(R^{0}\right)^{4}}{r_{K}^{4}}\right).\label{eq:kappa-discrete}
\end{equation}
Stress profiles corresponding to the growth law \eqref{eq:gamma-continuous-example} are depicted in Figure \ref{fig:discrete-stress-profile}(a) (radial) and Figure \ref{fig:discrete-stress-profile}(b) (circumferential).
\begin{figure}[htpb]\centering
\includegraphics[width=0.8\textwidth]{circumferential_stress_profile}
\caption{\label{fig:discrete-stress-profile}Stress profile and stress averages
for the growth profile \eqref{eq:gamma-continuous-example}. \textbf{(a)} Radial stress profile $T^{RR}$
and average stress profile $\overline{T^{RR}}$. The analytical curve
was obtained from \eqref{eq:TRR-explicit} and the numerical curve
(for validation) was obtained from \eqref{eq:bvp-mechanics}. In both
the numerical and analytical case, the piecewise growth profile $\gamma_{i}$
according to \eqref{eq:gamma-piecewise} was used. The average stress
was computed according to \eqref{eq:TRR-average-explicit} with the
same growth profile as the other curves. \textbf{(b)} Circumferential
stress profile $T^{\theta\theta}$ and average stress profile $\overline{T^{\theta\theta}}$.
The analytical curve was obtained from \eqref{eq:TthetaTheta-discrete}
and the numerical curve (for validation) was obtained from \eqref{eq:t2_general}.
The average stress was computed according to \eqref{eq:circumferential-stress-average}.
All other parameters are as in Figure \ref{fig:averaging-gamma}, with shear modulus $\mu=1$. }
\end{figure}
\paragraph{Average stress.}
As in the two-layer case, average values for the radial and circumferential stress can be computed exactly. The average radial stress in the $K\text{-th}$ layer $\overline{T_{K}^{RR}}$
is
\begin{equation}
\overline{T_{K}^{RR}}=-\tau_{N}^{RR}\left(A_{N}\right)+\sum_{i=K}^{N-1}\mu\frac{A_{i}^{2}\left(\gamma_{i+1}^{2}-\gamma_{i}^{2}\right)}{r_{i}^{2}\left(A_{i}\right)}+\frac{2}{\Delta^{2}}\left[\nu_{K}^{rr}\left(A_{K}\right)-\nu_{K}^{rr}\left(A_{K-1}\right)\right]\label{eq:TRR-average-explicit}
\end{equation}
where $\nu_{K}^{rr}\left(R^{0}\right)$ is defined as
\begin{equation}
\nu_{K}^{rr}\left(R^{0}\right):=\mu\left[A_{K-1}^{2}-\frac{r_{K-1}^{2}\left(A_{K-1}\right)}{\gamma_{K}^{2}}\right]\log\left[r_{K}^{2}\left(R^{0}\right)\right]-\frac{1}{2}\mu\left(R^{0}\right)^{2}\log\left[\frac{r_{K}^{2}\left(R^{0}\right)}{\left(R^{0}\right)^{2}}\right].
\end{equation}
We have seen in \eqref{eq:TthetaTheta-discrete} how the circumferential
stress $T^{\theta\theta}$ relates to the radial stress $T^{RR}$.
The average over that expression is
\begin{equation}
\overline{T_{K}^{\theta\theta}}=\overline{T_{K}^{RR}}+\overline{\kappa_{K}}\,.\label{eq:circumferential-stress-average}
\end{equation}
We have presented an expression for $\kappa_{K}$ in \eqref{eq:kappa-discrete}.
The average over $\kappa_{K}$ is
\begin{align}
\overline{\kappa_{K}} & =\frac{2\mu\left[r_{K}^{2}\left(A_{K}\right)-\gamma_{K}^{2}A_{K}^{2}\right]}{\Delta^{2}\gamma_{K}^{2}}\log\left[\frac{A_{K}^{2}r_{K}^{2}\left(A_{K}\right)}{A_{K-1}^{2}r_{K-1}^{2}\left(A_{K-1}\right)}\right].\label{eq:kappa-average-explicit}
\end{align}
According to \eqref{eq:circumferential-stress-average}, the expression
for $\overline{T_{K}^{\theta\theta}}$ is the sum of $\overline{\kappa_{K}}$
(see \eqref{eq:kappa-average-explicit}) and $\overline{T_{K}^{RR}}$
(see \eqref{eq:TRR-average-explicit}). The average radial and circumferential stress components are depicted as horizontal lines in the respective
layers in Figure \ref{fig:discrete-stress-profile}(a) (radial) and Figure \ref{fig:discrete-stress-profile}(b) (circumferential).
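The closed form \eqref{eq:kappa-average-explicit} can likewise be validated against direct quadrature of \eqref{eq:kappa-discrete}; a sketch (Python/NumPy, with illustrative growth values):

```python
import numpy as np

mu, A0, Delta2, N = 1.0, 1.0, 3.0, 8
A = np.sqrt(A0**2 + Delta2 * np.arange(N + 1))
g = np.linspace(0.8, 1.4, N)                  # illustrative growth values

def r_sq(K, R):
    if K == 0:
        return A0**2
    return A0**2 + Delta2 * np.sum(g[:K - 1]**2) \
        + g[K - 1]**2 * (np.asarray(R, float)**2 - A[K - 1]**2)

def kappa(K, R):
    """Hoop-minus-radial stress difference kappa_K(R)."""
    R = np.asarray(R, float)
    rs = r_sq(K, R)
    return 2.0 * mu * rs / (g[K - 1]**2 * R**2) * (1.0 - g[K - 1]**4 * R**4 / rs**2)

def kappa_bar_analytic(K):
    """Closed-form layer average of kappa_K."""
    c = r_sq(K, A[K]) - g[K - 1]**2 * A[K]**2
    return (2.0 * mu * c / (Delta2 * g[K - 1]**2)) * np.log(
        A[K]**2 * r_sq(K, A[K]) / (A[K - 1]**2 * r_sq(K - 1, A[K - 1])))

def kappa_bar_numeric(K, n=20001):
    R = np.linspace(A[K - 1], A[K], n)
    y = kappa(K, R) * R
    return 2.0 / Delta2 * np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(R))

for K in range(1, N + 1):
    assert abs(kappa_bar_analytic(K) - kappa_bar_numeric(K)) < 1e-6
```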
\subsection{\label{subsec:Generating-homeostatic-state}Generating a homeostatic state from a prescribed growth profile.}
The discretization and averaging process described above provides a concise framework for studying growth dynamics. As the homeostatic state is defined by a growth profile -- a function that is only constrained to be positive -- a generic classification of dynamic behaviour is likely intractable. Our intent, rather, is to briefly investigate stability and the rate of convergence in terms of the number of layers. For this, we restrict attention to a linear homeostatic growth profile $\gamma^{*}\left(R^{0}\right)$, characterized by a single parameter, $C_{1}$,
\begin{equation}
\gamma^{*}\left(R^{0}\right)=1+C_{1}\left(R^{0}-A_{0}\right),\qquad C_{1}\left(A_{N}-A_{0}\right)<1.\label{eq:linear-growth-profile}
\end{equation}
Note that this growth profile satisfies $\gamma^{*}\left(A_{0}\right)=1$,
i.e. no growth at the inner boundary.
We obtain the discrete homeostatic growth profile $\left\{ \gamma_{i}^{*}\right\} $
from the continuous profile $\gamma^{*}\left(R^{0}\right)$ by computing
the average according to \eqref{eq:gamma-piecewise}. The homeostatic
stress $\mathbf{T}\left(\boldsymbol{\gamma}^{*}\right)$ is
computed from the discrete homeostatic growth profile $\left\{ \gamma_{i}^{*}\right\} $
according to \eqref{eq:TRR-explicit} and \eqref{eq:TthetaTheta-discrete}.
The homeostatic values $\overline{\mathbf{T}}\left(\boldsymbol{\gamma}^{*}\right)$
are obtained as averages according to \eqref{eq:TRR-average-explicit}
and \eqref{eq:circumferential-stress-average}. It is important to
note that the homeostatic stress is generated by prescribing a growth
profile \eqref{eq:linear-growth-profile}, which by definition ensures that
the homeostatic stress is admissible.
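The discretization step above can be illustrated numerically. The sketch below (Python; the helper names are illustrative, and it assumes that the average in \eqref{eq:gamma-piecewise} is the arithmetic mean of $\gamma^{*}$ over each reference interval $[A_{K-1}, A_{K}]$) compares a quadrature average of the linear profile \eqref{eq:linear-growth-profile} with its closed form.

```python
# Discretize the linear homeostatic growth profile
# gamma*(R) = 1 + C1*(R - A0) into N layer averages.
# Assumption: the layer average is the arithmetic mean of gamma*
# over each reference interval [A_{K-1}, A_K].

def layer_averages(C1, A0, AN, N):
    """Exact layer averages of the linear profile."""
    dA = (AN - A0) / N
    edges = [A0 + k * dA for k in range(N + 1)]
    # The average of a linear function over [a, b] is its midpoint value.
    return [1.0 + C1 * (0.5 * (a + b) - A0)
            for a, b in zip(edges[:-1], edges[1:])]

def layer_averages_quad(C1, A0, AN, N, m=1000):
    """Same averages by midpoint-rule quadrature, for comparison."""
    dA = (AN - A0) / N
    out = []
    for k in range(N):
        a = A0 + k * dA
        s = sum(1.0 + C1 * (a + (j + 0.5) * dA / m - A0) for j in range(m))
        out.append(s / m)
    return out
```

With $A_0=1$, $A_N=2$ (the values used in the figures), any slope with $C_1(A_N-A_0)<1$ keeps the discrete profile positive, as required.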
\subsection{\label{subsec:N-disks-stability}Growth Dynamics}
We consider a growth law that generalizes \eqref{eq:dynamics-N2} to $N$ layers. The main difference from \eqref{eq:dynamics-N2} is that the homeostatic stress values are obtained from the linear growth profile.
The growth law reads
\begin{equation}
\dot{\gamma}_{K}=\gamma_{K}\left\{ \tilde{K}\left[\overline{T_{K}^{RR}}\left(\boldsymbol{\gamma}\right)-\overline{T_{K}^{RR}}\left(\boldsymbol{\gamma}^{*}\right)\right]+\overline{T_{K}^{\theta\theta}}\left(\boldsymbol{\gamma}\right)-\overline{T_{K}^{\theta\theta}}\left(\boldsymbol{\gamma}^{*}\right)\right\} ,\qquad K=1\ldots N.\label{eq:growth-dynamics-N-layers}
\end{equation}
In order to consider the stability of \eqref{eq:growth-dynamics-N-layers}
in the neighborhood of the homeostatic state, we expand growth around its equilibrium values:
\begin{equation}
\gamma_{K}=\gamma_{K}^{*}+\varepsilon\tilde{\gamma}_{K}+\mathcal{O}\left(\varepsilon^{2}\right),\qquad K=1,\ldots,N.\label{eq:gamma-near-homeostasis}
\end{equation}
To linear order in $\varepsilon$, the dynamical system simplifies
to
\begin{equation}
\dot{\tilde{\boldsymbol{\gamma}}}=\mathbf{J}\tilde{\boldsymbol{\gamma}}.
\end{equation}
The eigenvalues of the Jacobian matrix $\mathbf{J}$ characterize the stability of
\eqref{eq:growth-dynamics-N-layers} near the homeostatic state. The
components of the $N\times N$ matrix $\mathbf{J}$ are
\begin{align}
J_{ij} & =\left[\gamma_{i}\left(\tilde{K}\frac{\partial\overline{T_{i}^{RR}}\left(\boldsymbol{\gamma}\right)}{\partial\gamma_{j}}+\frac{\partial\overline{T_{i}^{\theta\theta}}\left(\boldsymbol{\gamma}\right)}{\partial\gamma_{j}}\right)\right]_{\boldsymbol{\gamma}=\boldsymbol{\gamma}^{*}},\quad i,j=1,\ldots,N.\label{eq:Jacobian-N-layers}
\end{align}
We characterize the stability in the neighborhood of the homeostatic
state as a function of two non-dimensional parameters: the mechanical
feedback anisotropy parameter $\tilde{K}$ and the slope of the homeostatic
growth profile $C_{1}$. The latter appears in \eqref{eq:Jacobian-N-layers}
through $\boldsymbol{\gamma}^{*}$ (see Section \ref{subsec:Generating-homeostatic-state}).
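The linear-stability test based on \eqref{eq:Jacobian-N-layers} follows the standard numerical recipe: assemble the Jacobian of the growth law at the homeostatic state and inspect the largest real part of its eigenvalues. A minimal sketch (Python with NumPy; the right-hand side passed in is a generic placeholder, not the stress expressions of the paper):

```python
import numpy as np

def jacobian_fd(rhs, gamma_star, eps=1e-6):
    """Finite-difference Jacobian of rhs at the equilibrium gamma_star."""
    n = len(gamma_star)
    J = np.empty((n, n))
    f0 = rhs(gamma_star)
    for j in range(n):
        g = gamma_star.copy()
        g[j] += eps
        J[:, j] = (rhs(g) - f0) / eps
    return J

def max_real_eig(J):
    """lambda = max_i Re(lambda_i), the stability indicator of the text."""
    return np.max(np.linalg.eigvals(J).real)

def classify(lam, tol=1e-8):
    """Three-way classification as in the bifurcation diagram."""
    if lam > tol:
        return "unstable"
    if lam < -tol:
        return "stable"
    return "undecidable"
```

For instance, the toy relaxation law $\dot{\boldsymbol{\gamma}} = -(\boldsymbol{\gamma}-\boldsymbol{\gamma}^{*})$ gives $\mathbf{J}=-\mathbf{I}$, hence $\lambda=-1$ and a "stable" classification.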
\begin{figure}[htpb]\centering
\includegraphics[width=0.8\textwidth]{conclusion_plot}
\caption{\label{fig:Bifurcation-diagram-N-layer}Bifurcation diagram and convergence
for $N$-layered cylinder system. \textbf{(a)}\textendash\textbf{(d)}:
The unstable (orange) and stable (blue) regions retain their shape
for increasing values of $N$. \textbf{(e)}: For a representative
sample of points P1 to P4, the convergence of the largest eigenvalue
is very good (see interpretation in text). The $\left(\tilde{K}^{-1},C_{1}\right)$
coordinates are $\text{P1}\left(0.1,2.5\right)$, $\text{P2}\left(-0.25,-0.5\right)$,
$\text{P3}\left(0.5,-0.5\right)$, $\text{P4}\left(1,-0.5\right)$.
Other parameters are $\mu_{1}=1$, $A_{0}=1$, $A_{N}=2$.}
\end{figure}
Figure \ref{fig:Bifurcation-diagram-N-layer}(a) shows a bifurcation
diagram of the stability of the dynamical system \eqref{eq:growth-dynamics-N-layers}
as a function of $\tilde{K}^{-1}$ and $C_{1}$ for $N=9$ layers
(note that unlike in Figure \ref{fig:two_layered_bifurcation_diagram},
here we use the inverse of $\tilde{K}$ to focus on large circumferential
stress). The regions are colored
according to the largest real part of the eigenvalues $\lambda_{i}$
of $\mathbf{J}$, that is $\lambda=\max\{\operatorname{Re}\lambda_{1}, \operatorname{Re}\lambda_{2}, \ldots, \operatorname{Re}\lambda_{N}\}$.
There are three parameter regions: an unstable region (orange), a
stable region (blue), and an undecidable region (green) for which
$\lambda$ is within a small tolerance of zero. This last region is included as it is typically within numerical error, and its inclusion allows us to make precise statements about stability. This relatively shallow
region of $\lambda$ is further explored in Figure \ref{fig:shallow-region}
and allows us to identify the clearly stable and clearly unstable
regions of the diagram. Figure \ref{fig:Bifurcation-diagram-N-layer}(b)\textendash (e)
shows that for increasing values of $N$ (that is, a refinement of
the discretisation), the regions are practically unchanged (b\textendash d),
and that the largest eigenvalue of four selected points converges
reliably to a finite positive (P1 \& P2) or negative (P3 \& P4) eigenvalue.
The green shallow region is more explicitly visualized in Figure \ref{fig:shallow-region}.
This plot shows in the vertical axis the value of the largest real
eigenvalue computed at $\left(\tilde{K}^{-1},C_{1}\right)$ from the
Jacobian matrix \eqref{eq:Jacobian-N-layers}. The planes $\lambda=\text{tol}$
and $\lambda=-\text{tol}$ are shown in dark gray, and eigenvalues
between are assumed to be in the shallow (green) region in which stability
cannot be decided from an expansion of $\boldsymbol{\gamma}$ according
to \eqref{eq:gamma-near-homeostasis} to first order in $\varepsilon$.
Thus, we see that there exists a region of stability and a region of instability,
which both persist (for large enough $N$) independently of the discretisation.
A strongly anisotropic growth law ($\tilde{K}^{-1}$ close to zero
or negative) is required for the system to be unstable. We also considered the convergence as $N$ increases for a representative sample of points in the stable and unstable regions and confirmed that there was no significant change in $\lambda$.
We expect that the stable and unstable regions represent the true behavior
of the full (inhomogeneous) system discussed in Section \ref{sec:General_disks}.
The intermediate (green) shallow region of eigenvalues has a more complicated
structure due to the discretisation that is not expected in the full system.
\begin{figure}[htpb]\centering\includegraphics[width=0.8\textwidth]{Shallow_Region}\hfill{}
\caption{\label{fig:shallow-region}Detailed depiction of the shallow (green)
region from Figure \ref{fig:Bifurcation-diagram-N-layer}. The large
shallow region has a fine structure which is an artifact of discretisation
and is not expected in the full inhomogeneous system. For this reason,
we choose a three color system in Figure \ref{fig:Bifurcation-diagram-N-layer}(a)\textendash(d),
in which the shallow region and its fine structure are merged into
one region defined by $-\text{tol}\leq\lambda\leq\text{tol}$. The
two planes serving as upper and lower bounds of this region are depicted
in dark gray. Values above $\lambda=\text{tol}$ are unstable, below
$\lambda=-\text{tol}$ are stable.}
\end{figure}
\section{Conclusion}
It is now well appreciated that growth can induce mechanical instabilities \cite{gobe05,bego05}. The related problem that we have considered in this paper is the stability of a grown state through its slow-growth evolution. The question is therefore not about mechanical instability but about the dynamic stability of a preferred homeostatic state. While the former is characterized by a bifurcation from a base geometry to a more complex buckled geometry, occurring on a fast elastic time-scale, the latter involves the system evolving away from a given stress state on the slow growth time-scale. In general the homeostatic state is not homogeneous, hence the issue of stability requires the analysis of partial differential equations defined on multiple configurations with free boundaries. There are no standard mathematical tools available to study this problem even for simple non-homogeneous systems. An alternative is to consider the stability of states that are piecewise homogeneous (in space). The problem is then to establish the stability of coupled ordinary differential equations describing locally homogeneous states through the traditional methods of dynamical systems. Within this framework we considered two relatively simple problems.
First, we considered the dynamical stability of a two-layer tube with different, but constant, growth tensors in each layer. We characterized the dynamics of the full nonlinear system, and showed that the number of equilibria
and their stability varies greatly and gives rise to highly intricate
dynamics which we organized via several bifurcations. We identified
a parameter region where the system is stable. We found that the growth dynamics of tubular
structures in the neighborhood of the homeostatic equilibrium depends in a nontrivial way
on the anisotropy of the growth response, and that the equilibrium
becomes unstable for highly anisotropic growth laws. This
complexity of dynamics naturally raises the question about stability of
homeostatic equilibria for more general systems.
Second, we showed that given a continuous growth law in a cylindrical geometry, we can introduce a suitable discretization of the problem that keeps all the characteristics of the continuous problem. We showed that for a linear growth law, there are
clear regions where stability and instability persist independently of
the discretisation (for sufficiently large $N$). We expect that these
regions represent the true behavior of the full inhomogeneous system.
This result allows us to characterize the stability of a morphoelastic
growing cylinder.
While we have only scratched the surface of the complex dynamic behaviour that exists in such systems, the framework presented here provides a tool to explore growth dynamics and stability of homeostatic states and finally address some of the fundamental challenges of morphoelasticity \cite{goriely17}: What growth laws, in general, would lead to dynamically stable homeostatic states? What is the final size of a growing organism for a given growth law? What are the conditions under which growth dynamics produces oscillatory growth?
\begin{acknowledgements}
We thank Dr. Thomas Lessinnes for many useful discussions in the early stages of this project.
The support for Alain Goriely by the Engineering and Physical Sciences Research Council of Great Britain under research grant EP/R020205/1 is gratefully acknowledged.
\end{acknowledgements}
\bibliographystyle{spmpsci}
\section{Introduction}
This paper is devoted to studying the global dynamics of solutions to the energy subcritical defocusing semilinear wave equation
\begin{equation}
\label{eq:NLW:semi:3d}
\Box\phi=|\phi|^{p-1}\phi,\quad \phi(0, x)=\phi_0(x),\quad \pa_t\phi(0, x)=\phi_1(x)
\end{equation}
in $\mathbb{R}^{3+1}$ with $1<p<5$.
The existence of global classical solutions was first obtained by J\"{o}rgens in \cite{Jorgens61:energysub:NLW:lowerd}, with various kinds of extensions in \cite{Brenner79:globalregularity:NW}, \cite{Brenner81:globalregularity:d9}, \cite{Pecher76:NLW:global}, \cite{segal63:semigroup}, \cite{Vonwahl75:NW}, \cite{velo85:global:sol:NLW}, \cite{Velo89:globalsolution:NLW} and references therein.
Strauss in \cite{Strauss:NLW:decay} investigated the asymptotics of the global solution for the superconformal case $3\leq p<5$. He showed that the solution scatters to a linear wave in $H^1$ for compactly supported initial data (for asymptotic behaviours of this type, we refer to the author's companion paper \cite{yang:scattering:NLW} and references therein). This scattering result relied on the pointwise decay estimate $t^{\ep-1}$ for any $\epsilon >0$, which was later improved to $t^{-1}$ for the strictly superconformal case $3<p<5$ and to $t^{-1}\ln t$ for the conformal case $p=3$ by von Wahl in \cite{vonWahl72:decay:NLW:super}. The starting point of these asymptotic decay estimates is the conservation of the approximate conformal energy derived by using the conformal Killing vector field as multiplier. The superconformal structure of the equation ensures that the corresponding conformal energy (including the potential energy contributed by the nonlinearity) is controlled by the initial data, as the spacetime integral arising from the nonlinearity is nonnegative; this in particular implies the uniform bound of the $L^2$ norm of the solution and the time decay of the potential energy. These a priori bounds force the solution to decay in time by viewing the nonlinearity as an inhomogeneous term of the linear wave equation.
Another geometric point of view on this conformal structure is the method of conformal compactification (see \cite{ChristodoulouYangM}, \cite{ChDNull}). Based on this, together with the representation formula for the linear wave equation,
Bieli-Szpak obtained sharper decay estimates for the solution with compactly supported initial data in \cite{Roger:3DNLW:symmetry}, \cite{Bieli:3DNLW}.
To go beyond the superconformal case, Pecher in \cite{Pecher82:decay:3d} observed that the potential energy still decays in time, but with a weaker decay rate. In this case, the spacetime integral arising from the nonlinearity mentioned above changes sign and becomes negative. This term can be controlled by using Gronwall's inequality, at the price that the conformal energy grows in time at a rate depending linearly on the coefficient. Since the conformal energy contains weights in $t$, the potential energy still decays in time when $p$ is not too small (sufficiently close to $3$ so that the coefficient is small). This weaker energy decay estimate is sufficiently strong to conclude the pointwise decay estimate for the solution when $p>\frac{1+\sqrt{13}}{2}$. As a consequence, the solution scatters in the energy space for $p>2.7005$.
The aim of this paper is twofold: first, we obtain pointwise decay estimates for the solution with data in a weighted energy space that is weaker than the conformal energy space required in previous works. We prove that the solution decays as quickly as linear waves (with the same initial data) for all $p>\frac{1+\sqrt{17}}{2}$, covering an additional part of the subconformal range. Second, for even smaller $p$ with lower bound $2$, we show that the solution decays at least as fast as $t^{-\frac{1}{3}}$. This decay estimate immediately leads to the scattering result in the energy space for $p>2.3542$, hence refining Pecher's pointwise decay estimates and scattering result in $\mathbb{R}^{3+1}$.
More precisely, for some fixed constant $1<\ga_0<2$ define the weighted energy norm of the initial data
\begin{align*}
\mathcal{E}_{k,\ga_0}[\phi]=\sum\limits_{l\leq k}\int_{\mathbb{R}^3}(1+|x|)^{\ga_0+2l}(|\nabla^{l+1}\phi_0|^2+|\nabla^l \phi_1|^2)+(1+|x|)^{\ga_0}|\phi_0|^{p+1}dx.
\end{align*}
Then we have
\begin{Thm}
\label{thm:main}
Consider the Cauchy problem to the energy subcritical defocusing semilinear wave equation \eqref{eq:NLW:semi:3d}. For initial data $(\phi_0, \phi_1)$ bounded in $\mathcal{E}_{1, \ga_0}[\phi]$ for some constant $1<\ga_0<2$, the solution is global in time and satisfies the following decay estimates:
\begin{itemize}
\item For the case when
\[
\frac{1+\sqrt{17}}{2}<p<5, \quad \max\{\frac{4}{p-1}-1, 1\}<\ga_0<\min\{p-1, 2\},
\]
then
\begin{equation*}
|\phi(t, x)|\leq C (1+\mathcal{E}_{1, \ga_0}[\phi] )^{\frac{p-1}{2}}(1+t+|x|)^{-1}(1+||x|-t|)^{-\frac{\ga_0-1}{2}};
\end{equation*}
\item Otherwise if $2<p\leq \frac{1+\sqrt{17}}{2}$ and $1<\ga_0<p-1$, then
\begin{equation*}
|\phi(t, x)|\leq C \sqrt{\mathcal{E}_{1, \ga_0}[\phi] } (1+t+|x|)^{-\frac{3+(p-2)^2}{(p+1)(5-p)}\ga_0}(1+||x|-t|)^{-\frac{\ga_0}{p+1}}
\end{equation*}
for some constant $C$ depending on $\ga_0$, $p$ and the zeroth order weighted energy $\mathcal{E}_{0, \ga_0}[\phi] $.
\end{itemize}
\end{Thm}
As a consequence of the above pointwise decay estimate, we extend Pecher's scattering result to a larger range of $p$. Recall the linear operator $\mathbf{L}(t)$ defined in \cite{yang:scattering:NLW}
\begin{align*}
\Box\mathbf{L}(t)(f, g)=0,\quad \mathbf{L}(0)(f, g)=f(x),\quad \pa_t \mathbf{L}(0)(f, g)=g(x).
\end{align*}
\begin{Cor}
\label{cor:scattering:3D}
For $p>p_*$ (defined in the last section and $p_*<2.3542$) and initial data bounded in $\mathcal{E}_{1, p-1}[\phi]$, the solution $\phi$ of \eqref{eq:NLW:semi:3d} is uniformly bounded in the following mixed spacetime norm
\begin{align*}
\|\phi\|_{L_t^p L_x^{2p}}<\infty.
\end{align*}
Consequently the solution scatters in energy space, that is, there exists pairs $(\phi_0^{\pm}(x), \phi_1^{\pm}(x))$ such that
\begin{align*}
\lim\limits_{t\rightarrow\pm\infty}\|\phi(t, x)-\mathbf{L}(t)(\phi_0^{\pm}(x), \phi_1^{\pm}(x))\|_{\dot{H}_x^1}+\| \pa_t\phi(t,x)-\pa_t \mathbf{L}(t)(\phi_0^{\pm}(x), \phi_1^{\pm}(x))\|_{ L_x^2}=0.
\end{align*}
\end{Cor}
We give several remarks.
\begin{Remark}
One can also derive the pointwise decay estimates for the derivatives of the solution by assuming the boundedness of the second order weighted energy of the initial data.
\end{Remark}
\begin{Remark}
The precise decay estimate obtained by Pecher in \cite{Pecher82:decay:3d} is the following
\[
|\phi|\leq C t^{\frac{6+2p-2p^2}{3+p}+\ep},\quad \frac{1+\sqrt{13}}{2}<p\leq 3
\]
with initial data bounded in $\mathcal{E}_{1, 2}[\phi]$. Theorem \ref{thm:main} improves this decay estimate with weaker assumption on the initial data.
\end{Remark}
\begin{Remark}
Note that the solution to the linear wave equation
\[
\Box\phi^{lin}=0,\quad \phi^{lin}(0, x)=\phi_0(x),\quad \pa_t\phi^{lin}(0, x)=\phi_1(x)
\]
with data $(\phi_0, \phi_1)$ bounded in $\mathcal{E}_{1, \ga_0}[\phi]$ for some $1<\ga_0<2$ has the following pointwise decay property
\begin{align*}
|\phi^{lin}(t, x)|\leq C \sqrt{\mathcal{E}_{1, \ga_0}}(1+t+|x|)^{-1}(1+|t-|x||)^{-\frac{\ga_0-1}{2}}
\end{align*}
for some universal constant $C$. Thus when $\frac{1+\sqrt{17}}{2}<p<5$, for arbitrarily large data $(\phi_0, \phi_1)$ bounded in $\mathcal{E}_{1, \ga_0}[\phi]$, the solution to the nonlinear equation \eqref{eq:NLW:semi:3d} decays as quickly as the solution to the linear equation with the same initial data. This pointwise decay property is consistent with the scattering result obtained in the author's companion paper \cite{yang:scattering:NLW}, in which it was shown that the solution to \eqref{eq:NLW:semi:3d} scatters in the critical Sobolev space $\dot{H}^{\frac{3}{2}-\frac{2}{p-1}}$ and the energy space $\dot{H}^{1}$ when $\frac{1+\sqrt{17}}{2}<p<5$.
\end{Remark}
\begin{Remark}
Our scattering result in the energy space applies to powers even below the Strauss exponent $p_c=1+\sqrt{2}$: for the pure power semilinear wave equation, small data global existence and scattering hold for powers above $p_c$ (see for example \cite{Pecher88:scattering:sharpp:3D}), while finite time blow up can occur for powers below $p_c$ (see John's work in \cite{John79:blowup:NLW:3d}).
\end{Remark}
As mentioned above, the existing approach (see for example \cite{Pecher82:decay:3d}, \cite{Velo87:decay:NLW}) to study the asymptotic behavior of solutions to \eqref{eq:NLW:semi:3d} relied on the following time decay of the potential energy
\begin{align}
\label{eq:timedecay:3D:Pecher}
\int_{\mathbb{R}^3}|\phi|^{p+1}dx\leq C (1+t)^{\max\{4-2p, -2\}},\quad 1<p<5,
\end{align}
which is based on the following energy estimate
\begin{align*}
\int_{\mathbb{R}^{3}}t^2 |\phi|^{p+1}(t, x)dx+\int_0^{t}\int_{\mathbb{R}^3}(2p-6)s|\phi|^{p+1}(s, x)dxds\leq C\mathcal{E}_{0, 2}[\phi]
\end{align*}
obtained by using the conformal Killing vector field $t^2\pa_t+r^2 \pa_r$ ($r=|x|$) as multiplier. Here the constant $C$ depends only on $p$. With this a priori decay estimate for the solution, a type of $L^q$ estimate for linear wave equation (prototype of Strichartz estimate, see for example \cite{Brenner75:Lp:LW}) yields the pointwise decay estimate for the solution. This approach only makes use of the time decay of the solution. However, it is well known that linear waves have improved decay away from light cone, which can be quantified by decay in $u=t-|x|$. Our improvement comes from thoroughly utilizing such $u$ decay of linear waves.
The method we used to explore this $u$ decay is the vector field method originally introduced by Dafermos-Rodnianski in \cite{newapp}. The new ingredient is the $r$-weighted energy estimate derived by using the vector field $r^{\ga}(\pa_t+\pa_r)$ as multiplier with $0\leq \ga\leq 2$.
Applying this to equation \eqref{eq:NLW:semi:3d}, we obtain that
\begin{align*}
\iint_{\mathbb{R}^{3+1}}\frac{p-1-\ga}{p+1}r^{\ga-1}|\phi|^{p+1}dxdt\leq C \mathcal{E}_{0, \ga}[\phi].
\end{align*}
See details in \cite{yang:scattering:NLW}. To obtain a useful estimate for the solution, we require that $0<\ga<p-1$. On the other hand, combined with an integrated local energy estimate obtained by using the vector field $f(r)\pa_r$ as multiplier and the classical energy conservation, the energy flux through the outgoing null hypersurface $\H_u$ (constant $u$ hypersurface) decays in terms of $u$. In particular,
\begin{align*}
\int_{\H_u} |\phi|^{p+1}d\sigma \leq C(1+|u|)^{-\ga}\mathcal{E}_{0, \ga}[\phi].
\end{align*}
Integrating in terms of $u$, we then get that
\begin{align*}
\iint_{\mathbb{R}^{3+1}} (1+|u|)^{\ga-1-\ep}|\phi|^{p+1}dxdt\leq C\mathcal{E}_{0, \ga}[\phi],\quad \forall \ep>0
\end{align*}
by assuming that $\ga>1$ (this forces that $p>2$). This together with the above $r$-weighted energy estimate leads to the spacetime bound
\begin{align*}
\iint_{\mathbb{R}^{3+1}}(1+t+|x|)^{\ga-1-\ep}|\phi|^{p+1}dxdt\leq C\mathcal{E}_{0, \ga}[\phi],
\end{align*}
which is one of the main results obtained in \cite{yang:scattering:NLW} as restated precisely in the following Proposition \ref{prop:spacetime:bd}. For the subconformal case $p<3$, since $\ga$ can be as large as $p-1$, in terms of time decay, this spacetime bound is stronger than \eqref{eq:timedecay:3D:Pecher} as $p-2>2p-5$. Our improvement on the asymptotic decay properties of the solution heavily relies on this uniform spacetime bound.
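Schematically, the final step above is a coarea decomposition along the outgoing null foliation $\{\H_u\}$ combined with the flux decay through each $\H_u$ (constants suppressed and the exterior contribution omitted):

```latex
% Schematic u-integration step: decompose the spacetime integral along
% the null foliation {H_u} and insert the flux decay
% \int_{H_u}|phi|^{p+1} \les (1+|u|)^{-\ga} E_{0,\ga}[phi].
\begin{align*}
\iint (1+|u|)^{\ga-1-\ep}|\phi|^{p+1}\,dxdt
 &\les \int (1+|u|)^{\ga-1-\ep}\Big(\int_{\H_u}|\phi|^{p+1}\,d\sigma\Big)du\\
 &\les \mathcal{E}_{0,\ga}[\phi]\int (1+|u|)^{-1-\ep}\,du
  \les \mathcal{E}_{0,\ga}[\phi].
\end{align*}
```

The $u$-integral on the last line converges precisely because $\ga>1$ has been traded for the extra $\ep$ of decay, which is why the lower bound $\ga>1$ (hence $p>2$) is needed.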
To show the pointwise decay estimate of solution to \eqref{eq:NLW:semi:3d}, we start by obtaining a uniform weighted energy flux bound through the backward light cone $\N^{-}(q)$ emanating from the point $q=(t_0, x_0)$. Consider the vector field
\begin{align*}
X=u_+^{\ga}(\pa_t-\pa_r)+v_+^{\ga}(\pa_t+\pa_r),\quad v_+=\sqrt{1+(t+|x|)^2},\quad u_+=\sqrt{1+u^2}.
\end{align*}
The case when $\ga=2$ corresponds to the conformal Killing vector field while the case when $1<\ga<2$ has been widely used (see for examples
\cite{Dafermos17:C0Kerr}, \cite{LindbladMKG}). Applying this vector field as multiplier to the region bounded by the backward light cone $\N^{-}(q)$, we obtain that
\begin{align*}
\int_{\mathcal{N}^{-}(q)}\big((1+\frac{x\cdot(x-x_0)}{|x||x-x_0|})v_+^{\ga}+u_+^{\ga}\big)|\phi|^{p+1}
\leq C \mathcal{E}_{0, \ga_0}[\phi],\quad \ga<\ga_0
\end{align*}
for which the above uniform spacetime bound plays the role that it controls the spacetime integral without a definite sign (see details in Proposition \ref{prop:EF:cone:NW:3d}).
Once we have this uniform potential energy bound, we apply the representation formula to demonstrate the pointwise decay for the solution. The nonlinear term can be estimated by interpolation between the $L^\infty$ estimate of the solution and the above potential energy bound.
When $p>\frac{1+\sqrt{17}}{2}$ is sufficiently large, it turns out that the coefficient of the $L^\infty$ norm of the solution is integrable from $0$ to $t_0$. Thus Gronwall's inequality leads to the decay properties of the solution. On the other hand, when $2<p\leq \frac{1+\sqrt{17}}{2}$ (the lower bound for $p$ comes from the fact that we need $\ga>1$), we split the integral of the nonlinear term into the region close to the tip point $q$ and the region far away. The argument for the region close to $q$ is the same as in the case when $p>\frac{1+\sqrt{17}}{2}$, due to the fact that the coefficient is still integrable on a small interval. The integral over the region far away can be bounded directly by using the above uniform potential energy bound, which however loses decay.
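The Gronwall step for large $p$ can be rendered schematically as follows. Here $M(t_0)$ is a hypothetical shorthand for the weighted sup-norm of $\phi$ up to time $t_0$, $c(t)$ collects the weights produced by interpolating the nonlinearity between the $L^\infty$ bound and the potential energy flux, and $C_0$ denotes a constant depending on the initial weighted energy:

```latex
% Schematic Gronwall argument: the representation formula bounds the
% weighted sup-norm by the data plus an integrable multiple of itself,
% so the sup-norm stays bounded uniformly in t_0.
\begin{align*}
M(t_0) \les C_0 + \int_0^{t_0} c(t)\,M(t)\,dt
\quad\Longrightarrow\quad
M(t_0)\les C_0\exp\Big(\int_0^{t_0}c(t)\,dt\Big)\les C_0,
\end{align*}
```

provided $\int_0^{\infty} c(t)\,dt<\infty$, which is exactly the integrability of the coefficient discussed above.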
The above argument only works in the exterior region $\{t+1\leq |x|\}$. A similar argument with minor modifications also applies to the interior region $\{t+1>|x|\}$ after a conformal transformation (for simplicity we only consider the solution in the future $t\geq 0$). Pick the hyperboloid $\mathbb{H}$ passing through the 2-sphere $\{t=0, |x|=1\}$. The region enclosed by $\mathbb{H}$ contains the interior region and is conformally equivalent to the compact backward cone $\{\tilde{t}+|\tilde{x}|\leq \frac{5}{6}\}$. The study of the asymptotic behavior of the solution to \eqref{eq:NLW:semi:3d} in the interior region is then reduced to controlling the growth of solutions to a class of semilinear wave equations on this compact cone, with initial data determined by the original solution on the hyperboloid $\mathbb{H}$, which has already been understood. The argument to control the solution on the compact region is similar to that used for studying the solution in the exterior region.
The plan of the paper is as follows: In Section 2, we define some notations. In Section 3, we use the vector field method to derive a uniform weighted energy estimate for the solution through backward light cones, based on which we obtain the pointwise decay estimate for the solution in the exterior region. In addition, we establish the quantitative properties of the solution on the hyperboloid $\mathbb{H}$ needed as initial data for the solution in the interior region. In Section 4, we study a class of semilinear wave equations on a compact backward cone. The approach is similar, but this section is independent of the others. In the last section, we carry out the conformal transformation and apply the result of Section 4 to conclude the pointwise decay estimate for the solution in the interior region.
\section{Preliminaries and notations}
\label{sec:notation}
We use the standard polar local coordinate system $(t, r,
\om)$ of Minkowski space as well as the null coordinates $u=\frac{t-r}{2}$, $v=\frac{t+r}{2}$, in which $\om$ is the coordinate of the unit sphere.
Introduce a null frame $\{L, \Lb, e_1, e_2\}$ with
\[
L=\pa_v=\pa_t+\pa_r,\quad \Lb=\pa_u=\pa_t-\pa_r
\]
and $\{e_1, e_2\}$ an orthonormal basis of the sphere with
constant radius $r$. At any fixed point $(t, x)$, we may choose $e_1$, $e_2$ such that
\begin{equation}
\label{eq:nullderiv}
\begin{split}
&\nabla_{e_i}L=r^{-1}e_i,\quad \nabla_{e_i}\Lb=-r^{-1}e_i, \quad \nabla_{e_1}e_2=\nabla_{e_2}e_1=0,\quad \nabla_{e_i}e_i=-r^{-1}\pa_r,
\end{split}
\end{equation}
where $\nabla$ is the covariant derivative in Minkowski space. Define the functions
\[
u_+=\sqrt{1+u^2},\quad v_+=\sqrt{1+v^2}.
\]
Throughout this paper, the exterior region will refer to $\{(t, x)|\, u=\frac{t-|x|}{2}\leq -1, t\geq 0\}$ while the interior region will be $\{(t, x)|\, u\geq -1, t\geq 0\}$.
Let $\H_u$ be the outgoing null hypersurface $\{t-|x|=2u, |x|\geq 2\}$ and $\Hb_v$ be the incoming null hypersurface $\{t+|x|=2v, |x|\geq 2\}$. In the exterior region, we may also use the truncated ones $\H_u^{v_1}$, $\Hb_{v}^{u_1}$ defined as follows
\begin{align*}
\H_u^{v_1}=\H_u\cap\{-u\leq v\leq v_1\}, \quad
\Hb_v^{u_1}=\Hb_v\cap\{-v\leq u\leq u_1\}
\end{align*}
and the domain $\mathcal{D}_{u_1}^{ u_2}$ bounded by $\H_{u_1}^{-u_2}$, $\Hb_{-u_2}^{u_1}$ and the initial hypersurface for all $u_2<u_1\leq -1$.
Additional to the above null hypersurfaces, we will also use the hyperboloid
\begin{equation}
\label{eq:def4Hyperboloid}
\Hy:=\left\{(t, x)|(t^*)^2-|x|^2=2R^* t^*\right\}, \quad t^*=t+3, \quad R^*=\frac{5}{6},
\end{equation}
which splits into the future part $\Hy^{+}=\Hy\cap \{t\geq 0\}$ and the past part $\Hy^{-}=\Hy\cap \{t<0\}$. We may note here that the interior region defined above lies inside this hyperboloid.
For any $q=(t_0, x_0)\in \mathbb{R}\times \mathbb{R}^3$ and $r>0$, denote $\B_q(r)$ as the 3-dimensional ball at time $t_0$ with radius $r$ centered at point $q$, that is,
\begin{align*}
\B_q( r)=\{(t,x)| t=t_0, |x-x_0|\leq r\}.
\end{align*}
The boundary of $\B_q( r)$ is the 2-sphere $\S_q(r)$. On the initial hypersurface $\{t=0\}$, we use $\B_{r_1}^{r_2}$ to denote the annulus $\{t=0, r_1\leq |x|\leq r_2\}$.
For any fixed point $q=(t_0, x_0)$, let $\mathcal{N}^{-}(q)$ be the past null cone of the point $q$ (for simplicity we are only concerned for the solution in the future $\{t\geq 0\}$) and $\mathcal{J}^{-}(q)$ to be the past of the point $q$, that is, the region bounded by $\mathcal{N}^{-}(q)$ and the initial hypersurface.
Additional to the standard coordinates $(t, x)$ as well as the associated polar coordinates, let $(\tilde{t}, \tilde{x})$ be the new coordinates centered at the point $q=(t_0, x_0)$
\[
\tilde{t}=t-t_0,\quad \tilde{x}=x-x_0,\quad \tilde{r}=|\tilde{x}|,\quad \tilde{\om}=\frac{\tilde{x}}{|\tilde{x}|},\quad \tilde{u}=\f12 (\tilde{t}-\tilde{r}),\quad \tilde{v}=\f12(\tilde{t}+\tilde{r}).
\]
We also have the associated null frame $\{\tilde{L}, \tilde{\Lb}, \tilde{e}_1, \tilde{e}_2\}$ verifying the same relation \eqref{eq:nullderiv}. Under these new coordinates, the past null cone $\mathcal{N}^{-}(q)$ can be characterized by $\{\tilde{v}=0\}\cap\{0\leq t\leq t_0\}$. Throughout this paper, the coordinates $(\tilde{t}, \tilde{x})$ always refer to the translated ones centered at the point $q=(t_0, x_0)$ unless clearly emphasized otherwise.
For simplicity, for integrals in this paper, we will omit the volume form unless it is specified. More precisely we will use
\begin{align*}
\int_{\mathcal{D}}f,\quad \int_{\H} f, \quad \int_{\Hb}f, \quad \int_{\{t=constant\}} f
\end{align*}
to be short for
\begin{align*}
\int_{\mathcal{D}}f dxdt, \quad \int_{\H} f 2r^{2}dv d\om , \quad \int_{\Hb}f 2r^{2}dud\om, \quad \int_{\{t=constant\}} f dx
\end{align*}
respectively. Here $\om$ are the standard coordinates of unit sphere.
Finally, to avoid too many constants, we make the convention throughout this paper that $A\les B$ means that there exists a constant $C$, depending possibly on $p$, $\ga_0$ and the weighted energy $\mathcal{E}_{0, \ga_0}[\phi]$, such that $A\leq CB$.
\section{A uniform weighted energy flux bound}
In this section, we establish a uniform weighted energy flux bound on any backward light cone in terms of the zeroth order initial energy, based on the spacetime bound for the solution derived in the author's companion paper \cite{yang:scattering:NLW}, from which we recall the following:
\begin{Prop}
\label{prop:spacetime:bd}
For all $2<p\leq 5 $ and $1<\ga_0<\min\{2, p-1\}$, the solution $\phi$ of \eqref{eq:NLW:semi:3d} is uniformly bounded in the following sense
\begin{align}
\label{eq:spacetime:bd}
\iint_{\mathbb{R}^{3+1}} v_+^{\ga_0-\ep-1}|\phi|^{p+1}dxdt \leq C \mathcal{E}_{0, \ga_0}[\phi]
\end{align}
for some constant $C$ depending only on $p$, $\ep$ and $\ga_0$.
\end{Prop}
\begin{proof}
See the main theorem in \cite{yang:scattering:NLW}.
\end{proof}
Using this spacetime bound, we establish the following uniform weighted energy flux bound.
\begin{Prop}
\label{prop:EF:cone:NW:3d}
Let $q=(t_0, x_0)$ be any point in $\mathbb{R}^{3+1}$ with $t_0\geq 0$. Then for solution $\phi$ of the nonlinear wave equation \eqref{eq:NLW:semi:3d} and for all $1< \ga<\ga_0< \min\{2, p-1\}$, we have the following uniform bound
\begin{equation}
\label{eq:Eflux:ex:EF}
\begin{split}
&\int_{\mathcal{N}^{-}(q)}\big((1+\tau)v_+^{\ga}+u_+^{\ga}\big)|\phi|^{p+1}
\leq C \mathcal{E}_{0, \ga_0}[\phi]
\end{split}
\end{equation}
for some constant $C$ depending only on $p$, $\ga_0$ and $\ga$ and independent of the point $q$. Here $\tau=\om\cdot \tilde{\om}$, $r_0=|x_0|$ and the tilde components are measured under the coordinates $(\tilde{t}, \tilde{x})$ centered at the point $q=(t_0, x_0)$.
\end{Prop}
\begin{proof}
Define the energy momentum tensor for the scalar field $\phi$
\begin{align*}
T[\phi]_{\mu\nu}=\pa_{\mu}\phi\pa_{\nu}\phi-\f12 m_{\mu\nu}(\pa^\ga \phi \pa_\ga\phi+\frac{2}{p+1} |\phi|^{p+1}),
\end{align*}
where $m_{\mu\nu}$ is the flat Minkowski metric.
For any vector field $X$ and any function $\chi$, define the current
\begin{equation*}
J^{X, \chi}_\mu[\phi]=T[\phi]_{\mu\nu}X^\nu -
\f12\pa_{\mu}\chi \cdot|\phi|^2 + \f12 \chi\pa_{\mu}|\phi|^2.
\end{equation*}
By using Stokes' formula, we have the energy identity
\begin{equation}
\label{eq:energy:id}
\iint_{\pa\mathcal{D}}i_{ J^{X, \chi}[\phi]} d\vol =\iint_{\mathcal{D}} T[\phi]^{\mu\nu}\pi^X_{\mu\nu}+
\chi \pa_\mu\phi\pa^\mu\phi -\f12\Box\chi\cdot|\phi|^2 +\chi \phi\Box\phi+X(\phi)(\Box\phi-|\phi|^{p}\phi) d\vol
\end{equation}
for any domain $\mathcal{D}$ in $\mathbb{R}^{3+1}$. Here $\pi^X=\f12 \cL_X m$ is the deformation tensor of the metric $m$ along the vector field $X$ and $i_Z d\vol$ is the contraction of the vector field with the volume form $d\vol$.
For the weighted energy flux estimate \eqref{eq:Eflux:ex:EF}, we choose the vector field $X$ as follows:
\[
X=v_+^{\gamma} L+u_+^\gamma \Lb.
\]
Take the region $\cD$ to be $\mathcal{J}^{-}(q)$, which is bounded by the backward light cone $\N^{-}(q)$ and the initial hypersurface. For this choice of the vector field $X$, we can compute that
\[
\nabla_{L}X=\gamma v_+^{\gamma-2}v L,\quad \nabla_{\Lb}X= \gamma u_+^{\gamma-2}u \Lb,\quad \nabla_{e_i}X=r^{-1}(v_+^\gamma-u_+^{\gamma}) e_i.
\]
Then the non-vanishing components of the deformation tensor $\pi_{\mu\nu}^X$ are
\[
\pi^X_{L\Lb}=-\gamma \left(v_+^{\gamma-2}v+u_+^{\gamma-2}u\right),\quad \pi^X_{e_i e_i}=r^{-1}(v_+^{\ga}-u_+^{\gamma}).
\]
Therefore we can compute that
\begin{align*}
&T[\phi]^{\mu\nu}\pi^X_{\mu\nu}=2T[\phi]^{L\Lb}\pi^X_{L\Lb}+T[\phi]^{e_ie_i}\pi^X_{e_ie_i}\\
&=-\f12\ga (v_+^{\gamma-2}v+u_+^{\gamma-2}u)(|\nabb\phi|^2+\frac{2}{p+1}|\phi|^{p+1})+r^{-1}(v_+^{\ga}-u_+^{\gamma})(|\nabb\phi|^2-\pa^\mu \phi \pa_\mu\phi-\frac{2}{p+1}|\phi|^{p+1})\\
&=\left(-\f12\ga(v_+^{\gamma-2}v+u_+^{\gamma-2}u)+r^{-1}(v_+^{\ga}-u_+^{\gamma})\right)|\nabb\phi|^2-r^{-1}(v_+^{\ga}-u_+^{\gamma})\pa^\mu\phi \pa_\mu\phi\\
&\quad +\left(-\f12\ga(v_+^{\gamma-2}v+u_+^{\gamma-2}u)-r^{-1}(v_+^{\ga}-u_+^{\gamma})\right)\frac{2}{p+1}|\phi|^{p+1}.
\end{align*}
Now choose the function $\chi$ as follows:
\[
\chi=r^{-1}(v_+^{\ga}-u_+^{\gamma}).
\]
For such a spherically symmetric function $\chi$ (with respect to the coordinates $(t, x)$), we can compute that
\begin{align*}
\Box \chi=-r^{-1}L\Lb (r\chi)=-2r^{-1}L\Lb(v_+^\ga-u_+^\ga)=0,\quad r>0.
\end{align*}
Near $r=0$ it grows at most like $r^{\gamma-3}$. Therefore we can write that
\begin{align*}
&T[\phi]^{\mu\nu}\pi^X_{\mu\nu}+
\chi \pa_\mu\phi \pa^\mu\phi -\f12\Box\chi\cdot|\phi|^2+\chi\phi\Box\phi \\
&=\left(\chi-\f12\ga(v_+^{\gamma-2}v+u_+^{\gamma-2}u)\right)|\nabb\phi|^2 + \left(\frac{p-1}{p+1}\chi-\frac{(v_+^{\ga-2}v+u_+^{\ga-2}u)\ga}{p+1}\right)|\phi|^{p+1}.
\end{align*}
Denote $f(s)=(1+s^2)^{\frac{\ga}{2}}$. Then $\chi=\frac{f(v)-f(u)}{v-u}$. It can be checked directly that the derivative $f'(s)$ is concave for $s\geq 0$ and that $f'(s)/s$ is decreasing. Since $t\geq 0$ in the region under consideration, we have $v\geq |u|$, and we conclude that
\begin{align*}
\chi=\frac{f(v)-f(u)}{v-u}\geq \f12 (f'(v)+f'(u))=\f12 \ga(v_+^{\ga-2}v+u_+^{\ga-2}u).
\end{align*}
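The divided-difference inequality for $\chi$ above can be sampled numerically. The following sketch (illustrative only, not part of the proof; the exponent $\gamma=1.5$ and the sampling grid are our choices) checks that $\frac{f(v)-f(u)}{v-u}\geq \f12(f'(v)+f'(u))$ for pairs $(u,v)$ with $v\geq|u|$, i.e. $t=u+v\geq 0$ as in the region $\mathcal{J}^{-}(q)$.

```python
GAMMA = 1.5  # illustrative exponent with 1 < gamma < 2

def f(s):
    # f(s) = (1 + s^2)^(gamma/2); in the paper's convention r*chi = f(v) - f(u)
    return (1.0 + s * s) ** (GAMMA / 2.0)

def fp(s):
    # f'(s) = gamma * s * (1 + s^2)^(gamma/2 - 1)
    return GAMMA * s * (1.0 + s * s) ** (GAMMA / 2.0 - 1.0)

# Sample pairs (u, v) with v >= |u| (equivalently t = u + v >= 0)
ok = True
for i in range(-40, 41):
    u = 0.125 * i
    for j in range(1, 41):
        v = abs(u) + 0.125 * j
        chi = (f(v) - f(u)) / (v - u)
        ok = ok and chi >= 0.5 * (fp(v) + fp(u)) - 1e-10

assert ok
```

Note that without the restriction $v\geq|u|$ the inequality can fail, since $f'$ is odd and hence not globally concave.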
Therefore the coefficient of $|\nabb\phi|^2$ is nonnegative. On the other hand, the coefficient of $|\phi|^{p+1}$ can be trivially bounded by
\begin{align*}
|\frac{p-1}{p+1}\chi-\frac{(v_+^{\ga-2}v+u_+^{\ga-2}u)\ga}{p+1}|\leq Cv_+^{\ga-1}
\end{align*}
for some constant $C$ depending only on $p$ and $\ga$. We remark here that this coefficient is also nonnegative for the super-conformal case when $p\geq 3$. We use Proposition \ref{prop:spacetime:bd} to control this potential term for the sub-conformal case when the sign is indefinite.
We next compute the boundary integrals on the left-hand side of the energy identity \eqref{eq:energy:id}, which consist of the integral on the initial hypersurface $\B_{(0, x_0)}(t_0)$ and the integral on the backward light cone $\mathcal{N}^{-}(q)$. Let us first compute the boundary integral on the initial hypersurface, under the coordinate system $(t, x)$. As the initial hypersurface $\B_{(0, x_0)}(t_0)$ has the volume form $dx$, the contraction reads
\begin{align*}
i_{J^{X, \chi}[\phi ]}d\vol&= T[\phi ]_{0 L}X^L+T[\phi ]_{\Lb 0}X^{\Lb}- \f12 \pa_t\chi |\phi|^2+ \f12\chi \pa_t|\phi|^2\\
&= \f12 v_+^\ga( |L\phi|^2+|\nabb\phi|^2+\frac{2}{p+1}|\phi|^{p+1})-\f12\pa_t\chi \cdot |\phi|^2+\f12\chi \pa_t|\phi|^2 \\
&\quad +\f12 u_+^\ga(|{\Lb}\phi|^2+|\nabb\phi|^2+\frac{2}{p+1}|\phi|^{p+1})\\
&=\f12(u_+^\ga+v_+^\ga)( |\nabb\phi|^2+\frac{2}{p+1}|\phi|^{p+1})+\f12 v_+^\ga r^{-2}|L(r\phi)|^2\\
&\quad +\f12 u_+^\ga r^{-2}|{\Lb}(r\phi)|^2-\div(\om r^{-1}|\phi|^2(u_+^\ga+v_+^\ga)).
\end{align*}
Here $\om=\frac{x}{|x|}$ can be viewed as a vector on $\mathbb{R}^{3}$ and the divergence is taken over the initial hypersurface $\B_{(0, x_0)}(t_0)$. The integral of the divergence term can be computed by using integration by parts. Under the coordinates $\tilde{x}=x-x_0$ on the initial hypersurface, we have
\begin{align*}
\int_{\B_{(0, x_0)}(t_0)} \div (\om r^{-1}|\phi|^2(u_+^\ga+v_+^\ga))dx&=\int_{\B_{(0, x_0)}(t_0)} \div (\om r^{-1}|\phi|^2(u_+^\ga+v_+^\ga))d\tilde{x}\\
&=\int_{\S_{(0, x_0)}(t_0)} \tilde{r}^2 \tilde{\om} \cdot\om r^{-1}|\phi|^2(u_+^\ga+v_+^\ga)d\tilde{\om}.
\end{align*}
In particular we derive that
\begin{equation}
\label{eq:PWE:ex:bxt0}
\begin{split}
&\int_{\B_{(0, x_0)}(t_0)} i_{J^{X,\chi}[\phi]}d\vol + \int_{\S_{(0, x_0)}(t_0)} \tilde{r}^2 \tilde{\om} \cdot\om r^{-1}|\phi|^2(u_+^\ga+v_+^\ga)d\tilde{\om}\\
&=\f12 \int_{\B_{(0, x_0)}(t_0)} v_+^{\ga} r^{-2}|L(r\phi)|^2 + (u_+^{\ga}+v_+^{\ga})(|\nabb\phi|^2+\frac{2}{p+1}|\phi|^{p+1})+u_+^{\ga}r^{-2}|\Lb(r\phi)|^2 dx \\
&\leq C \mathcal{E}_{0, \ga}[\phi]
\end{split}
\end{equation}
for some constant $C$ depending only on $\ga$.
For the boundary integral on the backward light cone $\mathcal{N}^{-}(q)$, we shift to the coordinates centered at the point $q=(t_0, x_0)$.
Recall the volume form
\[
d\vol=dxdt=d\tilde{x}d\tilde{t}=2\tilde{r}^2 d\tilde{v}d\tilde{u}d\tilde{\om}.
\]
Since the backward light cone $\mathcal{N}^{-}(q)$ can be characterized by $\{\tilde{v}=0\}$ under these new coordinates $(\tilde{t}, \tilde{x})$, we therefore have
\begin{align*}
-i_{J^{X, \chi}[\phi]}d\vol=J_{\tilde{\Lb}}^{X, \chi}[\phi]\tilde{r}^2d\tilde{u}d\tilde{\om}= ( T[\phi]_{\tilde{\Lb}\nu}X^\nu -
\f12(\tilde{\Lb}\chi) |\phi|^2 + \f12 \chi\cdot\tilde{\Lb}|\phi|^2 ) \tilde{r}^2d\tilde{u}d\tilde{\om}.
\end{align*}
For the main quadratic terms, we first compute that
\begin{align*}
T[\phi]_{\tilde{\Lb}\nu}X^\nu =T[\phi]_{\tilde{\Lb}\tilde{\Lb}}X^{\tilde{\Lb}}+T[\phi]_{\tilde{\Lb}\tilde{L}}X^{\tilde{L}}+T[\phi]_{\tilde{\Lb}\tilde{e}_i}X^{\tilde{e}_i}.
\end{align*}
Since the vector field $X$ is given under the coordinates $(t, x)$, we need to write it under the new null frame $\{\tilde{L}, \tilde{\Lb}, \tilde{e}_1, \tilde{e}_2\}$ centered at the point $q$. Note that
\begin{align*}
\pa_r=\om \cdot \nabla=\om \cdot \tilde{\nabla}=\om\cdot \tilde{\om}\pa_{\tilde{r}}+ \om\cdot (\tilde{\nabla}-\tilde{\om}\pa_{\tilde{r}}).
\end{align*}
Here $\om=\frac{x}{|x|}$, $\nabla=(\pa_{x^1}, \pa_{x^2}, \pa_{x^3})$. Thus we can write that
\begin{align*}
X &=(v_+^\ga+u_+^\ga)\pa_{t}+(v_+^\ga-u_+^\ga)\pa_r\\
&=(v_+^\ga+u_+^\ga)\pa_{\tilde{t}}+(v_+^\ga-u_+^\ga)(\om\cdot \tilde{\om}\pa_{\tilde{r}}+ \om\cdot \tilde{\nabb})\\
&=\f12 \left(u_+^\ga+v_+^\ga+(v_+^\ga-u_+^\ga)\om\cdot \tilde{\om}\right)\tilde{L}+\f12 \left(u_+^\ga+v_+^\ga-(v_+^\ga-u_+^\ga)\om\cdot \tilde{\om}\right)\tilde{\Lb}+(v_+^\ga-u_+^\ga)\om\cdot \tilde{\nabb}.
\end{align*}
Here $\tilde{\nabb}=\tilde{\nabla}-\pa_{\tilde{r}}$. Thus we can compute the quadratic terms
\begin{align*}
T[\phi]_{\tilde{\Lb}\nu}X^\nu
=&\left((1-\tau)v_+^\ga+(1+\tau)u_+^\ga\right)|{\tilde{\Lb}}\phi|^2 +\left((1+\tau)v_+^\ga+(1-\tau)u_+^\ga\right)(|\tilde{\nabb}\phi|^2+\frac{2}{p+1}|\phi|^{p+1})\\
&+2 (v_+^\ga-u_+^\ga) ({\tilde{\Lb}}\phi) (\om\cdot \tilde{\nabb})\phi.
\end{align*}
Here recall that $\tau=\om\cdot \tilde{\om}$.
It turns out that these terms are nonnegative, but we need to estimate them together with the lower-order terms arising from the function $\chi$.
We compute that
\begin{align*}
&\tilde{\Lb}(r)=-\pa_{\tilde{r}}(r)=-\tilde{\om}_i\pa_i(r)=-\tilde{\om}\cdot \om =-\tau,\\
&\tilde{\nabb}(r)=(\tilde{\nabla}-\tilde{\om}\pa_{\tilde{r}})(r)=\om-\tilde{\om}\tau.
\end{align*}
Therefore we can write
\begin{align*}
&-\f12 r^2 (\tilde{\Lb}{\chi})|\phi|^2+\f12 r^2\chi \tilde{\Lb}|\phi|^2=(r\chi)( {\tilde{\Lb}}(r\phi)+\tau \phi) \phi-\f12(r\tilde{\Lb}(r\chi)+\tau r\chi)|\phi|^2,\\
&r^2|\tilde{\Lb}\phi|^2=|{\tilde{\Lb}}(r\phi)-\tilde{\Lb}(r)\phi|^2=|{\tilde{\Lb}}(r\phi)|^2+\tau^2|\phi|^2+2 {\tilde{\Lb}}(r\phi) \tau\phi,\\
& r^2|\tilde{\nabb}\phi|^2
=|\tilde{\nabb}(r\phi)|^2+(1-\tau^2)|\phi|^2-2(\om-\tilde{\om}\tau)\cdot\tilde{\nabb}(r\phi) \phi,\\
& r^2 ({\tilde{\Lb}}\phi) (\om\cdot \tilde{\nabb})\phi={\tilde{\Lb}}(r\phi) (\om \cdot \tilde{\nabb})(r\phi)-\tau(1-\tau^2)|\phi|^2+\phi \tau(\om\cdot \tilde{\nabb})(r\phi) -(1-\tau^2){\tilde{\Lb}}(r\phi)\phi.
\end{align*}
Notice that
\[
\om\cdot \tilde{\nabb}=\om\cdot (\tilde{\om}\times \tilde{\nabla})=\om\times \tilde{\om}\cdot \tilde{\nabb}.
\]
Since $v_+\geq u_+$, we can therefore show that the quadratic terms are nonnegative:
\begin{align*}
&\left((1-\tau)v_+^\ga+(1+\tau)u_+^\ga\right)|\tilde{\Lb}(r\phi)|^2+\left((1+\tau)v_+^\ga+(1-\tau)u_+^\ga\right)|\tilde{\nabb}(r\phi)|^2+2 (v_+^\ga-u_+^\ga){\tilde{\Lb}}(r\phi) (\om \cdot \tilde{\nabb})(r\phi)\\
&\geq 2\sqrt{(v_+^{\ga}+u_+^{\ga})^2-\tau^2(v_+^{\ga}-u_+^{\ga})^2}|\tilde{\nabb}(r\phi)||\tilde{\Lb}(r\phi)|-2 (v_+^\ga-u_+^\ga)\sqrt{1-\tau^2}|{\tilde{\Lb}}(r\phi)| |\tilde{\nabb}(r\phi)|\\
& \geq 0.
\end{align*}
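The nonnegativity above reduces to a determinant condition for the quadratic form. As a numerical sanity check (illustrative only; the sampling grids are our choices), the sketch below verifies that with $a=(1-\tau)V+(1+\tau)U$, $b=(1+\tau)V+(1-\tau)U$ and cross coefficient $|c|\leq (V-U)\sqrt{1-\tau^2}$, one always has $ab\geq c^2$, which is exactly the discriminant condition $ab=(V+U)^2-\tau^2(V-U)^2\geq (1-\tau^2)(V-U)^2$.

```python
# For V = v_+^ga >= U = u_+^ga >= 0 and tau = om . om~ in [-1, 1], the form
#   a X^2 + b Y^2 + 2 c X Y
# is nonnegative for all X, Y if and only if a, b >= 0 and a*b >= c^2.
ok = True
for i in range(41):
    V = 0.25 * i
    for j in range(21):
        U = V * j / 20.0                      # 0 <= U <= V
        for k in range(41):
            tau = -1.0 + k / 20.0
            a = (1 - tau) * V + (1 + tau) * U
            b = (1 + tau) * V + (1 - tau) * U
            ab = a * b                         # = (V+U)^2 - tau^2 (V-U)^2
            c2 = (1 - tau * tau) * (V - U) ** 2  # largest admissible c^2
            ok = ok and a >= -1e-12 and b >= -1e-12 and ab >= c2 - 1e-9

assert ok
```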
For the other lower order terms, we compute that
\begin{align*}
&\left((1-\tau)v_+^\ga+(1+\tau)u_+^\ga\right)(\tau^2|\phi|^2+2{\tilde{\Lb}}(r\phi)\tau\phi )+(r\chi)( {\tilde{\Lb}}(r\phi)+\tau \phi) \phi-\f12(r\tilde{\Lb}(r\chi)+\tau r\chi)|\phi|^2\\
&+\left((1+\tau)v_+^\ga+(1-\tau)u_+^\ga\right)\big((1-\tau^2)|\phi|^2-2(\om-\tilde{\om}\tau)\tilde{\nabb}(r\phi) \phi\big)\\
&+(v_+^\ga-u_+^\ga)\big(-2\tau(1-\tau^2)|\phi|^2+2\tau \phi (\om\cdot \tilde{\nabb})(r\phi)-2\phi (1-\tau^2){\tilde{\Lb}}(r\phi) \big)\\
&=(-\f12 r\tilde{\Lb}(r\chi)+v_+^\ga+u_+^\ga)|\phi|^2
+2(v_+^\ga+u_+^\ga)(\tau {\tilde{\Lb}}-\om\cdot \tilde{\nabb})(r\phi) \phi\\
&=-r^2\tilde{r}^{-1} \tilde{\Om}_{ij}(r^{-3}(v_+^\ga+u_+^\ga) \om_j\tilde{\om}_i |r\phi|^2)+\tilde{r}^{-2}r^2\tilde{\Lb}(r^{-1}\tau\tilde{r}^2(v_+^\ga+u_+^\ga) |\phi|^2)\\
&+(-\f12 r\tilde{\Lb}(r\chi)+v_+^\ga+u_+^\ga)|\phi|^2-\tilde{r}^{-2}r^2\tilde{\Lb}(r^{-3}\tau\tilde{r}^2(v_+^\ga+u_+^\ga)) |r\phi|^2+r^2 \tilde{r}^{-1}\tilde{\Om}_{ij}(r^{-3}(v_+^\ga+u_+^\ga) \om_j\tilde{\om}_i) |r\phi|^2.
\end{align*}
We can compute that
\begin{align*}
&\tilde{r}^{-1}\tilde{\Om}_{ij}(r^{-3}\om_j\tilde{\om}_i)=-2r^{-4}(1-2\tau^2)-2\tau \tilde{r}^{-1}r^{-3},\\
&\tilde{r}^{-2}r^4\tilde{\Lb}(r^{-3}\tilde{r}^2\tau)=4\tau^2-1-2r\tilde{r}^{-1}\tau.
\end{align*}
Thus the coefficients of $|\phi|^2$ in the last line of the previous equation verify
\begin{align*}
&(-\f12 r\tilde{\Lb}(r\chi)+v_+^\ga+u_+^\ga)-\tilde{r}^{-2}r^4\tilde{\Lb}(r^{-3}\tau\tilde{r}^2(v_+^\ga+u_+^\ga)) +r^4 \tilde{r}^{-1} \tilde{\Om}_{ij}(r^{-3}(v_+^\ga+u_+^\ga) \om_j\tilde{\om}_i) \\
&=-r(\pa_t-\tilde{\om}\cdot \nabla)(v_+^\ga-u_+^\ga)-r\tau (\pa_t-\tilde{\om}\cdot \nabla)(u_+^\ga+v_+^\ga)+r(\pa_r-\tau \tilde{\om}\cdot \nabla)(u_+^\ga+v_+^\ga)\\
&\quad +(u_+^\ga+v_+^\ga)\left(1-(4\tau^2-1-2r\tilde{r}^{-1}\tau)-2(1-2\tau^2)+2\tau \tilde{r}^{-1}r\right)\\
&=r(\pa_t+\pa_r)u_+^\ga+r(\pa_r-\pa_t)v_+^\ga-\tau r(\pa_t+\pa_r)u_+^\ga-\tau r(\pa_t-\pa_r)v_+^\ga=0.
\end{align*}
The above computations imply that the lower order terms can be written as a divergence form and hence can be estimated by using integration by parts:
\begin{align*}
&\int_{\mathcal{N}^{-}(q)}\big(-r^2\tilde{r}^{-1} \tilde{\Om}_{ij}(r^{-3}(v_+^\ga+u_+^\ga) \om_j\tilde{\om}_i |r\phi|^2)+\tilde{r}^{-2}r^2\tilde{\Lb}(r^{-1}\tau\tilde{r}^2(v_+^\ga+u_+^\ga) |\phi|^2)\big)r^{-2}\tilde{r}^2 d\tilde{u}d\tilde{\om}\\
&= \int_{\S_{(0, x_0)}(t_0)}r^{-1}\tau \tilde{r}^2 (u_+^\ga+v_+^\ga)|\phi|^2d\tilde{\om}.
\end{align*}
This term is an integral on the sphere on the initial hypersurface and cancels the one arising from the boundary integral on $\B_{(0, x_0)}(t_0)$.
Keeping the potential part and discarding the quadratic terms, which are nonnegative, we therefore derive that
\begin{equation*}
\begin{split}
&\frac{2}{p+1}\int_{\mathcal{N}^{-}(q)}((1+\tau)v_+^\ga+(1-\tau)u_+^\ga ) |\phi|^{p+1} \tilde{r}^2d\tilde{u}d\tilde{\om}\\
&\leq -\int_{\mathcal{N}^{-}(q)}i_{J^{X,\chi}[\phi]}d\vol+ \int_{\S_{(0, x_0)}(t_0)}r^{-1}\tau \tilde{r}^2 (u_+^\ga+v_+^\ga)|\phi|^2d\tilde{\om}.
\end{split}
\end{equation*}
Combining this estimate with \eqref{eq:PWE:ex:bxt0} and by using the uniform spacetime bound of Proposition \ref{prop:spacetime:bd}, we then derive that
\begin{align*}
\frac{2}{p+1}\int_{\mathcal{N}^{-}(q)}((1+\tau)v_+^\ga+(1-\tau)u_+^\ga ) |\phi|^{p+1}
& \leq \iint_{\mathcal{J}^{-}(q)}\left|\frac{p-1}{p+1}\chi-\frac{(v_+^{\ga-2}v+u_+^{\ga-2}u)\ga}{p+1}\right||\phi|^{p+1}\\
&\quad +\int_{\pa\mathcal{J}^{-}(q)}i_{J^{X, \chi}[\phi]}d\vol\\
&\leq C \mathcal{E}_{0, \ga+\ep}[\phi]
\end{align*}
for some constant $C$ depending only on $\ga$, $p$ and $0<\ep<p-1-\ga$. The proposition then follows by letting $0<\ep<\ga_0-\ga$.
\end{proof}
\section{The pointwise decay of the solution in the exterior region}
In this section, we make use of the weighted energy flux bound derived in the previous section to investigate the asymptotic behaviour of the solution in the exterior region $\{t+2\leq |x|\}$.
We need the following integration lemma.
\begin{Lem}
\label{lem:integration:ex:ab}
Assume $1<\ga<2$ and that $\a$, $\b$ are nonnegative with $\b+\a\ga>2$. Fix $q=(t_0, x_0)$ in the exterior region. For the $2$-sphere $\S_{(t_0-\tilde{r}, x_0)}(\tilde{r})$ on the backward light cone $\mathcal{N}^{-}(q)$, we have
\begin{equation}
\label{eq:integration:ex:ab}
\begin{split}
&\int_{\S_{(t_0-\tilde{r}, x_0)}(\tilde{r})} ((1+\tau)r^{\ga}+(r_0-t_0)^\ga)^{-\a}r^{-\b}d\tilde{\om}\\
&\leq C (r_0-\tilde{r})^{2-\b-\ga+\ep} r_0^{-2}\left((r_0-\tilde{r})^{(1-\a)\ga}+(r_0-t_0)^{(1-\a)\ga}\right)
\end{split}
\end{equation}
for some constant $C$ depending only on $\ep$, $\ga$, $\a$ and $\b$.
Here $\tau=\om\cdot \tilde{\om}$, $r_0=|x_0|$ and $0\leq \tilde{r}\leq t_0<r_0$. The small positive constant $\ep$ appears only in the case $\a=1$.
\end{Lem}
\begin{proof}
Denote $s=-\om_0\cdot \tilde{\om}$ with $\om_0=r_0^{-1}x_0$. Note that
\begin{align*}
&r^2=|x_0+\tilde{x}|^2=\tilde{r}^2+r_0^2+2r_0\tilde{r}\om_0\cdot \tilde{\om}=(\tilde{r}-r_0s)^2+(1-s^2)r_0^2,\\
&(1+\tau)r=r+r\om\cdot \tilde{\om}=r+(\tilde{x}+x_0)\cdot \tilde{\om}=r+\tilde{r}-r_0s.
\end{align*}
We can write the integral as
\begin{align*}
&\int_{\S_{(t_0-\tilde{r}, x_0)}(\tilde{r})} ((1+\tau)r^{\ga}+(r_0-t_0)^\ga)^{-\a}r^{-\b}d\tilde{\om} =2\pi\int_{-1}^1 r^{-\b}(r^{\ga-1}(r+\tilde{r}-r_0s)+(r_0-t_0)^{\ga})^{-\a}ds.
\end{align*}
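This reduction of the spherical integral to a one-dimensional integral in $s$ can be sanity-checked numerically. The sketch below (illustrative only; the smooth profile $F$ and the choice $\om_0=(0,0,1)$ are ours) compares a direct midpoint quadrature over the unit sphere with $2\pi\int_{-1}^{1}F(s)\,ds$ for an integrand depending only on $s=-\om_0\cdot\tilde{\om}$.

```python
import numpy as np

def F(s):
    # illustrative smooth profile depending only on s = -om_0 . om~
    return (2.0 + s) ** (-1.5)

# Direct midpoint quadrature over the unit sphere in angles (theta, phi),
# with om_0 = (0, 0, 1), so that s = -cos(theta).
n = 800
dth, dph = np.pi / n, 2.0 * np.pi / n
th = (np.arange(n) + 0.5) * dth
row = F(-np.cos(th)) * np.sin(th)          # integrand, independent of phi
sphere = np.sum(np.outer(row, np.ones(n))) * dth * dph

# One-dimensional reduction: 2*pi * int_{-1}^{1} F(s) ds (midpoint rule).
m = 20000
ds = 2.0 / m
s = -1.0 + (np.arange(m) + 0.5) * ds
reduced = 2.0 * np.pi * np.sum(F(s)) * ds

assert abs(sphere - reduced) < 1e-4 * reduced
```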
Without loss of generality, we may assume that $t_0\geq \frac{9}{10}r_0$. Otherwise we trivially have
\begin{align*}
\int_{\S_{(t_0-\tilde{r}, x_0)}(\tilde{r})} ((1+\tau)r^{\ga}+(r_0-t_0)^\ga)^{-\a}r^{-\b}d\tilde{\om}
\leq 4\pi\, 10^{\b+\a\ga} r_0^{-\b-\a\ga}
\end{align*}
since $r\geq r_0-t_0\geq \frac{1}{10}r_0$.
For the integral on $s\leq 0$, we trivially bound that
\begin{align*}
r\geq r_0,\quad r+\tilde{r}-r_0s\geq r+\tilde{r}\geq r_0.
\end{align*}
Thus
\begin{align*}
\int_{-1}^0 r^{-\b}(r^{\ga-1}(r+\tilde{r}-r_0s)+(r_0-t_0)^{\ga})^{-\a}ds\leq r_0^{-\b-\a\ga}.
\end{align*}
Define $s_0=1-(1-\tilde{r}r_0^{-1})^2$. On the interval $[0, s_0]$, note that
\begin{align*}
\sqrt{1-s} \ r_0\geq r_0-\tilde{r}.
\end{align*}
Therefore, we can show that
\begin{align*}
\tilde{r}-r_0s \leq r_0(1-s)\leq r_0\sqrt{1-s},\quad r_0s-\tilde{r}\leq r_0-\tilde{r}\leq r_0\sqrt{1-s}
\end{align*}
as $\tilde{r}\leq t_0<r_0$. This in particular implies that
\begin{align*}
\sqrt{1-s}\ r_0\leq r\leq \sqrt{2(1-s)}\ r_0,\quad \sqrt{(\tilde{r}-r_0s)^2+(1-s^2)r_0^2}+\tilde{r}-r_0s\geq \frac{1}{3}\sqrt{1-s}r_0.
\end{align*}
Here the second inequality follows from the inequality
\[
\sqrt{a^2+b^2}+b\geq (\sqrt{2}-1)|a|,\quad \forall |b|\leq |a|
\]
together with the bound $|\tilde{r}-r_0s|\leq \sqrt{1-s}r_0\leq \sqrt{1-s^2}r_0$.
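This elementary inequality can be checked numerically; the following sketch (illustrative sampling grid, not part of the proof) verifies $\sqrt{a^2+b^2}+b\geq (\sqrt{2}-1)|a|$ over the range $|b|\leq|a|$, where equality is attained at $b=-|a|$.

```python
import math

# Check sqrt(a^2 + b^2) + b >= (sqrt(2) - 1) |a| whenever |b| <= |a|.
ok = True
for i in range(-40, 41):
    a = 0.25 * i
    for j in range(-20, 21):
        b = abs(a) * j / 20.0                 # samples with |b| <= |a|
        lhs = math.sqrt(a * a + b * b) + b
        ok = ok and lhs >= (math.sqrt(2.0) - 1.0) * abs(a) - 1e-9

assert ok
```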
Therefore on the interval $[0, s_0]$, we can estimate that
\begin{align*}
\int_{0}^{s_0} r^{-\b}(r^{\ga-1}(r+\tilde{r}-r_0s)+(r_0-t_0)^{\ga})^{-\a} ds
&\leq 3^{\a} \int_{0}^{s_0}(\sqrt{1-s} r_0)^{-\b-\a\ga} ds\\
&\leq \frac{2 \times 3^{\a} }{\b+\a\ga-2}r_0^{-2}(r_0-\tilde{r})^{2-\b-\a\ga}.
\end{align*}
Here we used the assumption $\b+\a\ga> 2$.
Finally on the interval $[s_0, 1]$, notice that
\begin{align*}
2r\geq r_0s-\tilde{r}+\sqrt{1-s}r_0=r_0-\tilde{r}+(\sqrt{1-s}-(1-s))r_0\geq r_0-\tilde{r}.
\end{align*}
Moreover
\begin{align*}
r+\tilde{r}-r_0s=\frac{(1-s^2)r_0^2}{r+r_0s-\tilde{r}}\geq\frac{(1-s)r_0^2}{4(r_0-\tilde{r})}.
\end{align*}
Therefore we can estimate that
\begin{align*}
&\int_{s_0}^{1} r^{-\b}(r^{\ga-1}(r+\tilde{r}-r_0s)+(r_0-t_0)^{\ga})^{-\a} ds\\
&\leq 2^{\b} \int_{s_0}^{1}\big((r_0-t_0)^{\ga}+2^{\ga-3}(r_0-\tilde{r})^{\ga-2}(1-s)r_0^2\big)^{-\a}(r_0-\tilde{r})^{-\b}ds\\
&= 2^{\b+3-\ga}(\a-1)^{-1}(r_0-\tilde{r})^{2-\b-\ga} r_0^{-2}\left((r_0-t_0)^{(1-\a)\ga}-((r_0-t_0)^{\ga}+2^{\ga-3}(r_0-\tilde{r})^{\ga})^{1-\a}\right)\\
&\leq C_\ep (r_0-\tilde{r})^{2-\b-\ga+\ep} r_0^{-2}\left((r_0-\tilde{r})^{(1-\a)\ga}+(r_0-t_0)^{(1-\a)\ga}\right)
\end{align*}
for some constant $C_{\ep}$ depending only on $\ep$, $\a$, $\b$ and $\ga$.
The loss of $\ep$ occurs only in the case $\a=1$, where the integral over $[s_0, 1]$ produces a logarithm, which we absorb into the factor $(r_0-\tilde{r})^{\ep}$. Since
\begin{align*}
r_0^{-\a\ga-\b}\leq (r_0-\tilde{r})^{2-\b-\ga+\ep} r_0^{-2}\left((r_0-\tilde{r})^{(1-\a)\ga}+(r_0-t_0)^{(1-\a)\ga}\right)
\end{align*}
due to the assumption $\b+\a\ga> 2$, the lemma follows.
\end{proof}
We are now ready to prove the following decay estimates for the solution in the exterior region.
\begin{Prop}
\label{prop:NLW:def:3d:ex}
In the exterior region $\{2+t\leq |x|\}$,
the solution $\phi$ to the equation \eqref{eq:NLW:semi:3d} satisfies the following $L^\infty$ decay estimates:
\begin{itemize}
\item when $p$ and $\ga_0$ satisfy the relation
\[
\frac{1+\sqrt{17}}{2}<p<5, \quad \max\{\frac{4}{p-1}-1, 1\}<\ga_0<\min\{p-1, 2\},
\]
then
\begin{equation}
\label{eq:phi:pt:Br:largep}
|\phi(t_0, x_0)|\leq C (1+t_0+|x_0|)^{-1}(1+|x_0|-t_0)^{-\frac{\ga_0-1}{2}}\sqrt{\mathcal{E}_{1, \ga_0}[\phi] };
\end{equation}
\item when $2<p\leq \frac{1+\sqrt{17}}{2}$ and $1<\ga_0<p-1$, then
\begin{equation}
\label{eq:phi:pt:Br:smallp}
|\phi(t_0, x_0)|\leq C |x_0|^{-\frac{3+(p-2)^2}{(p+1)(5-p)}\ga_0}(|x_0|-t_0)^{-\frac{(p-1)\ga_0}{p+1}} \sqrt{\mathcal{E}_{1, \ga_0}[\phi] }
\end{equation}
for some constant $C$ depending on $\ga_0$, $p$ and the zeroth order weighted energy $\mathcal{E}_{0, \ga_0}[\phi] $.
\end{itemize}
\end{Prop}
\begin{proof}
The proof of this decay estimate relies on the representation formula for linear wave equations. The nonlinearity will be controlled by using the weighted energy estimate of Proposition \ref{prop:EF:cone:NW:3d}. Note that for $q=(t_0, x_0)$ in the exterior region, we have
\begin{equation}
\label{eq:rep4phi:ex}
\begin{split}
4\pi\phi(t_0, x_0)&=\int_{\tilde{\om}}t_0 \phi_1(x_0+t_0\tilde{\om})d\tilde{\om}+\pa_{t_0}\big(\int_{\tilde{\om}}t_0 \phi_0(x_0+t_0\tilde{\om})d\tilde{\om} \big)-\int_{\mathcal{N}^{-}(q)}|\phi|^{p-1} \phi \ \tilde{r} d\tilde{r}d\tilde{\om}.
\end{split}
\end{equation}
For the linear evolution part, one can use the standard vector field method to show that
\begin{align*}
|\int_{\tilde{\om}}t_0 \phi_1(x_0+t_0\tilde{\om})d\tilde{\om}+\pa_{t_0}\big(\int_{\tilde{\om}}t_0 \phi_0(x_0+t_0\tilde{\om})d\tilde{\om} \big)|
&\les r_0^{-1} (r_0-t_0)^{-\frac{\ga_0-1}{2}}\sqrt{\mathcal{E}_{1, \ga_0}[\phi] }
\end{align*}
for $\ga_0>1$ and $r_0=|x_0|\geq t_0+2$.
For the case when $ \frac{1+\sqrt{17}}{2}<p<5$, by using the weighted energy estimate \eqref{eq:Eflux:ex:EF} and the bound \eqref{eq:integration:ex:ab} with $\a=\frac{p-1}{2}$, $\b=\frac{p+1}{2}$, we can estimate that
\begin{align*}
&|\int_{\mathcal{N}^{-}(q)}\Box \phi \ \tilde{r} d\tilde{r}d\tilde{\om}|\\
&\leq \left(\int_{\mathcal{N}^{-}(q)}((1+\tau)r^{\ga_0}+(r_0-t_0)^{\ga_0})|\phi|^{p+1}\ \tilde{r}^{2} d\tilde{r}d\tilde{\om}\right)^{\frac{p-1}{p+1}}\\
&\quad \cdot \left(\int_{\mathcal{N}^{-}(q)}((1+\tau)r^{\ga_0}+(r_0-t_0)^{\ga_0})^{-\frac{p-1}{2}}|\phi|^{\frac{p+1}{2}}\ \tilde{r}^{\frac{3-p}{2}} d\tilde{r}d\tilde{\om}\right)^{\frac{2}{p+1}}\\
&\les \left(\int_{0}^{t_0}(\sup\limits_{x}|r\phi|^{\frac{p+1}{2}}) (r_0-\tilde{r})^{\frac{3-p}{2}-\ga_0+\ep} r_0^{-2}\left((r_0-\tilde{r})^{\frac{3-p}{2}\ga_0}+(r_0-t_0)^{\frac{3-p}{2}\ga_0}\right)\tilde{r}^{\frac{3-p}{2}} d\tilde{r} \right)^{\frac{2}{p+1}}.
\end{align*}
When $p\geq 3$, we estimate that
\begin{align*}
(r_0-\tilde{r})^{\frac{3-p}{2}-\ga_0+\ep} r_0^{\frac{p-3}{2}}\left((r_0-\tilde{r})^{\frac{3-p}{2}\ga_0}+(r_0-t_0)^{\frac{3-p}{2}\ga_0}\right)\leq (2+t_0-\tilde{r})^{\frac{3-p}{2}-\ga_0+\ep} (2+t_0)^{\frac{p-3}{2}}.
\end{align*}
When $p<3$, we can choose $\ep$ sufficiently small such that $$\frac{3-p}{2}-\ga_0+\ep+\frac{3-p}{2}\ga_0\leq 0$$ due to the assumption $(p-1)(\ga_0+1)>4$ for this case. Thus we can bound that
\begin{align*}
(r_0-\tilde{r})^{\frac{3-p}{2}-\ga_0+\ep} r_0^{\frac{p-3}{2}}\left((r_0-\tilde{r})^{\frac{3-p}{2}\ga_0}+(r_0-t_0)^{\frac{3-p}{2}\ga_0}\right)\leq (2+t_0-\tilde{r})^{\frac{3-p}{2}+\frac{1-p}{2}\ga_0+\ep} (2+t_0)^{\frac{p-3}{2}}.
\end{align*}
We therefore derive that
\begin{align*}
|\phi(t_0, x_0)|&\les \sqrt{\mathcal{E}_{1, \ga_0}[\phi](\B_2^{\infty})}r_0^{-1}(r_0-t_0)^{\frac{1-\ga_0}{2}}\\
&+\left(r_0^{-\frac{p+1}{2}}\int_{0}^{t_0}(\sup\limits_{x}|r\phi|^{\frac{p+1}{2}})(2+t_0-\tilde{r})^{\frac{3-p}{2}-\ga_0+\ep+\frac{\ga_0}{2}\max\{0, 3-p\}} (2+t_0)^{\frac{p-3}{2}} \tilde{r}^{\frac{3-p}{2}} d\tilde{r} \right)^{\frac{2}{p+1}}.
\end{align*}
Now define the function
\begin{equation*}
\mathcal{M}(t_0)=\sup\limits_{|x|\geq t_0+2} |u_+^{\frac{\ga_0-1}{2}}r\phi|^{\frac{p+1}{2}}
\end{equation*}
and
\begin{equation*}
f(t_0, \tilde{r})=(2+t_0-\tilde{r})^{\frac{3-p}{2}-\ga_0+\ep+\frac{\ga_0}{2}\max\{0, 3-p\}} (2+t_0)^{\frac{p-3}{2}} \tilde{r}^{\frac{3-p}{2}}.
\end{equation*}
Here $u_+=1+\f12|t-r|$. Since $u_+\geq \f12 (r_0-t_0)$ on the cone $\mathcal{N}^{-}(q)$, the previous inequality leads to
\begin{align*}
\mathcal{M}(t_0)\les (\mathcal{E}_{1, \ga_0}[\phi] )^{\frac{p+1}{4}}+\int_{0}^{t_0}\mathcal{M}(t_0-\tilde{r})f(t_0, \tilde{r}) d\tilde{r}.
\end{align*}
When $p\geq 3$, by the assumption on $p$ and $\ga_0$ of the main Theorem \ref{thm:main}, we in particular have $\ga_0>1$. Choose $\ep$ sufficiently small such that $0<\ep<\ga_0-1$. Then since $3\leq p<5$, we can bound that
\begin{align*}
\int_0^{t_0}f(t_0,\tilde{r})d\tilde{r} &=\int_0^{t_0}(2+t_0-\tilde{r})^{\frac{3-p}{2}-\ga_0+\ep} (2+t_0)^{\frac{p-3}{2}} \tilde{r}^{\frac{3-p}{2}}d\tilde{r}\\
&\les \int_0^{\f12 t_0}(2+t_0)^{-\ga_0+\ep} \tilde{r}^{\frac{3-p}{2}}d\tilde{r}+\int_{\f12 t_0}^{t_0}(2+t_0-\tilde{r})^{\frac{3-p}{2}-\ga_0+\ep} d\tilde{r}\\
&\les (2+t_0)^{-\ga_0+\ep}t_0^{\frac{5-p}{2}}+2^{\frac{5-p}{2}-\ga_0+\ep}+(2+t_0)^{\frac{5-p}{2}-\ga_0+\ep}\\
&\les 1
\end{align*}
as $\frac{5-p}{2}+\ep<\ga_0$. The implicit constant relies only on $p$, $\ga_0$ and $\ep$. For the case when $p<3$, the assumption implies that $(\ga_0+1)(p-1)>4$. Let $\ep>0$ verify the relation $$\frac{5-p}{2}-\frac{(p-1)\ga_0}{2}+\ep<0.$$ Then
\begin{align*}
\int_0^{t_0}f(t_0,\tilde{r})d\tilde{r} &=\int_0^{t_0}(2+t_0-\tilde{r})^{\frac{3-p}{2}-\ga_0+\frac{(3-p)\ga_0}{2}+\ep} (2+t_0)^{\frac{p-3}{2}} \tilde{r}^{\frac{3-p}{2}}d\tilde{r}\\
&\les \int_0^{\f12 t_0}(2+t_0)^{-\frac{\ga_0(p-1)}{2}+\ep} \tilde{r}^{\frac{3-p}{2}}d\tilde{r}+\int_{\f12 t_0}^{t_0}(2+t_0-\tilde{r})^{\frac{3-p}{2}-\frac{p-1}{2}\ga_0+\ep} d\tilde{r}\\
&\les (2+t_0)^{-\frac{p-1}{2}\ga_0+\ep}t_0^{\frac{5-p}{2}}+2^{\frac{5-p}{2}-\frac{p-1}{2}\ga_0+\ep}+(2+t_0)^{\frac{5-p}{2}-\frac{p-1}{2}\ga_0+\ep}\\
&\les 1.
\end{align*}
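For completeness, note that such an $\ep$ exists: by the assumption $(\ga_0+1)(p-1)>4$ we have
\begin{align*}
(p-1)\ga_0=(p-1)(\ga_0+1)-(p-1)>4-(p-1)=5-p,
\end{align*}
so $\frac{5-p}{2}-\frac{(p-1)\ga_0}{2}<0$ and any sufficiently small $\ep>0$ verifies the required relation.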
In either case the function $f(t_0, \tilde{r})$ is integrable on the interval $[0, t_0]$, with a bound independent of $t_0$. Thus by using Gronwall's inequality, we conclude that
\begin{align*}
\mathcal{M}(t_0)\les (\mathcal{E}_{1, \ga_0}[\phi] )^{\frac{p+1}{4}}.
\end{align*}
The decay estimate for $\phi$ then follows from the definition of $\mathcal{M}(t_0)$ for the case when $p>\frac{1+\sqrt{17}}{2}$.
\bigskip
For the smaller power $p$, we instead have weaker decay estimates for the solution. Since in this case we assumed $1<\ga_0<p-1\leq \frac{\sqrt{17}-1}{2}$, in particular we have $$p\ga_0>2, \quad (p-1)(\ga_0+1)>2,\quad \frac{(p-1)\ga_0}{5-p}<\frac{(p-1)^2}{5-p}<1.$$ To estimate the nonlinear term, we split the integral on the backward light cone $\mathcal{N}^{-}(q)$ into two parts: on the first part, where $\tilde{r}\leq \f12 t_0^{\frac{(p-1)\ga_0}{5-p}}$, we use the same argument as above, while the second part, where $\tilde{r}\geq \f12 t_0^{\frac{(p-1)\ga_0}{5-p}}$, can be directly estimated by the weighted energy flux. More precisely we have
\begin{align*}
&|\int_{\mathcal{N}^{-}(q)\cap\{\tilde{r}\geq \f12 t_0^{\frac{p-1}{5-p}\ga_0}\}}\Box \phi \ \tilde{r} d\tilde{r}d\tilde{\om}|\\
&\les \left(\int_{\mathcal{N}^{-}(q)}((1+\tau)r^{\ga_0}+(r_0-t_0)^{\ga_0})|\phi|^{p+1}\ \tilde{r}^{2} d\tilde{r}d\tilde{\om}\right)^{\frac{p}{p+1}}\\
&\quad \cdot \left(\int_{\mathcal{N}^{-}(q)\cap\{\tilde{r}\geq \f12 t_0^{\frac{p-1}{5-p}\ga_0}\}}((1+\tau)r^{\ga_0}+(r_0-t_0)^{\ga_0})^{-p} \tilde{r}^{1-p} d\tilde{r}d\tilde{\om}\right)^{\frac{1}{p+1}}\\
&\les \left(\int_{\f12 t_0 ^{\frac{p-1}{5-p}\ga_0}}^{t_0} (r_0-\tilde{r})^{2-\ga_0} r_0^{-2}(r_0-t_0)^{(1-p)\ga_0} \tilde{r}^{1-p} d\tilde{r} \right)^{\frac{1}{p+1}}\\
&\les r_0^{-\frac{3+(p-2)^2}{(p+1)(5-p)}\ga_0}(r_0-t_0)^{\frac{(1-p)\ga_0}{p+1}}.
\end{align*}
Here we used estimate \eqref{eq:integration:ex:ab} without the loss of $\ep$ as $p<3$ (see the proof of Lemma \ref{lem:integration:ex:ab}). Thus we derive that
\begin{align*}
|\int_{\mathcal{N}^{-}(q) }\Box \phi \ \tilde{r} d\tilde{r}d\tilde{\om}|
\les &r_0^{-\frac{3+(p-2)^2}{(p+1)(5-p)}\ga_0}(r_0-t_0)^{\frac{(1-p)\ga_0}{p+1}}\\
&+ \left(\int_{0}^{ \f12 t_0^{\frac{(p-1)\ga_0}{5-p}}}(\sup\limits_{x}|r\phi|^{\frac{p+1}{2}}) (r_0-\tilde{r})^{1-\frac{(p-1)(\ga_0+1)}{2}} r_0^{-2} \tilde{r}^{\frac{3-p}{2}} d\tilde{r} \right)^{\frac{2}{p+1}}.
\end{align*}
Again here we do not lose the $\ep$ decay of the bound \eqref{eq:integration:ex:ab} as $p\leq \frac{1+\sqrt{17}}{2}<3$. Since $\tilde{r}\leq \f12 t_0^{\frac{(p-1)\ga_0}{5-p}}\leq \f12 t_0$, we in particular have that $r_0\les r$, $r_0\les r_0-\tilde{r}$. Define
\begin{align*}
\mathcal{M}_1(t)=\sup\limits_{|x|\geq t+2}( |\phi(t, x)| |x|^{\frac{3+(p-2)^2}{(p+1)(5-p)}\ga_0 }(|x|-t)^{\frac{(p-1)\ga_0}{p+1}})^{\frac{p+1}{2}}.
\end{align*}
We then derive that
\begin{align*}
\mathcal{M}_1(t_0)&\les 1+\int_0^{\f12 t_0^{\frac{(p-1)\ga_0}{5-p}}}\mathcal{M}_1(t_0-\tilde{r}) r_0^{\frac{p+1}{2}+1-\frac{(p-1)(\ga_0+1)}{2}-2}\tilde{r}^{\frac{3-p}{2}}d\tilde{r}\\
& \les 1+\int_0^{\f12 t_0^{\frac{(p-1)\ga_0}{5-p}}}\mathcal{M}_1(t_0-\tilde{r}) t_0^{ -\frac{(p-1)\ga_0 }{2} } \tilde{r}^{\frac{3-p}{2}}d\tilde{r}.
\end{align*}
Here we note that
\begin{align*}
\frac{3+(p-2)^2}{(p+1)(5-p)}\ga_0\leq 1,\quad \frac{3+(p-2)^2}{(p+1)(5-p)}\ga_0+\frac{(p-1)\ga_0}{p+1}\leq \frac{\ga_0+1}{2}.
\end{align*}
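As a sanity check, the second of these inequalities reduces to elementary algebra: since $3+(p-2)^2=p^2-4p+7$,
\begin{align*}
\frac{3+(p-2)^2}{(p+1)(5-p)}\ga_0+\frac{(p-1)\ga_0}{p+1}=\frac{p^2-4p+7+(p-1)(5-p)}{(p+1)(5-p)}\ga_0=\frac{2\ga_0}{5-p}\leq \frac{\ga_0+1}{2},
\end{align*}
where the last inequality is equivalent to $(p-1)(\ga_0+1)\leq 4$, which holds since $\ga_0<p-1$ and $p(p-1)\leq 4$ for $p\leq \frac{1+\sqrt{17}}{2}$. The first inequality follows from a similar direct computation.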
By using Gronwall's inequality, we conclude that
\begin{align*}
\mathcal{M}_1(t_0)\les 1.
\end{align*}
The pointwise decay estimate for $\phi$ for the case when $2<p\leq \frac{1+\sqrt{17}}{2}$ then follows.
\end{proof}
In order to study the asymptotic behaviour of the solution in the interior region $\{t+2\geq |x|\}$, we use the method of conformal compactification, which requires understanding the solution on the hyperboloid $\Hy$ defined in Section \ref{sec:notation}.
\begin{Prop}
\label{prop:energyflux:H:ex}
Assume that $p$ and $\ga_0$ verify the relation
\begin{align*}
2<p<5,\quad 1<\ga_0<\min\{2, p-1\}.
\end{align*}
Then we have the following weighted energy flux bound through the future part of the hyperboloid
\begin{equation}
\label{eq:EE:hyB}
\begin{split}
E[\phi](\Hy)+\int_{\mathbb{H}^+}r^{\ga_0}|L(r\phi)|^2+|{\Lb} \phi|^2+r^2| L \phi|^2+|\nabb (r\phi)|^2+\frac{2r^2}{p+1}|\phi|^{p+1} dtd\om\leq C \mathcal{E}_{0, \ga_0}[\phi]
\end{split}
\end{equation}
for some universal constant $C$.
In addition, for the large $p$ case
\[
\frac{1+\sqrt{17}}{2}<p<5, \quad \max\{\frac{4}{p-1}-1, 1\}<\ga_0<\min\{p-1, 2\},
\]
we also have the energy bound for the first order derivatives
\begin{equation}
\label{eq:EE:hyB:1derivative}
\begin{split}
E[Z\phi](\Hy)+\int_{\mathbb{H}^+}r^{\ga_0}|L Z(r\phi)|^2+|{\Lb} Z \phi|^2+r^2| L Z\phi|^2+|\nabb Z (r\phi)|^2 dtd\om\leq C \mathcal{E}_{1, \ga_0}[\phi]^{p-1}
\end{split}
\end{equation}
for all $Z\in \Gamma=\{\pa_{\mu}, \Om_{\mu\nu}=x^{\mu}\pa_{\nu}-x^{\nu}\pa_\mu\}$ and some constant $C$ depending on $\mathcal{E}_{0, \ga_0}[\phi]$, $p$ and $\ga_0$. Here the hyperboloid $\Hy$ is parameterized by $(t, \om)$ and $E[\phi](\Hy)$ denotes the energy flux of $\phi$ through $\Hy$.
\end{Prop}
\begin{proof}
The proof proceeds in the same manner, by applying the energy identity \eqref{eq:energy:id} to the same vector fields $X$, $Y$ and function $\chi$ as in the proof of the main theorem in the author's companion paper \cite{yang:scattering:NLW}, but with the domain $\mathcal{D}$ taken to be the subregion of the exterior region bounded by $\Hy^+$, the initial hypersurface and the incoming null hypersurface $\Hb_{v_0}$. The bulk integral and the boundary integrals on the initial hypersurface as well as on the incoming null hypersurface $\Hb_{v_0}$ can be found in Section 4 of \cite{yang:scattering:NLW}.
It remains to compute the boundary integral on the hyperboloid $\Hy^+$.
Define the functions
\[
\tau_1=\f12\sqrt{(t^*-R^*)^2+r^2},\quad \tau_0=\frac{t^*-R^*}{r}=\sqrt{1+r^{-2}(R^*)^2}.
\]
Here recall that $R^*=\frac{5}{6}$, $t^*=t+3$. In particular we have
\[
d\tau_1=\f12\tau_1^{-1}((t^*-R^*)dt+rdr),\quad \pa_{\tau_1}=\frac{\tau_1}{t^*-R^*}\pa_t+\frac{\tau_1}{r}\pa_r.
\]
Then the hyperboloid $\mathbb{H}^+$ can be parameterized by $(\tau_1, \om)$ or $(t, \om)$.
We therefore can compute that
\begin{align*}
-2\int_{\mathbb{H}^+\cap\{v\leq v_0\}}i_{J^{X, Y, \chi}[\phi]}d\vol&=2\int_{\mathbb{H}^+ \cap\{v\leq v_0\}} (J^{X, Y, \chi}[\phi])^u (dt+dr)r^2d\om+(J^{X, Y, \chi}[\phi])^v (dr-dt) r^2 d\om\\
&=\int_{\mathbb{H}^+ \cap\{v\leq v_0\} } (1+\tau_0)r^{\ga_0}|L(r\phi)|^2dtd\om - \int_{\mathbb{H}^+ \cap\{v\leq v_0\}}\pa_\tau (r^{\ga_0+1}|\phi|^2)d\tau d\om \\
&\quad +\int_{\mathbb{H}^+ \cap\{v\leq v_0\} }(\tau_0-1)r^{\ga_0}(|\nabb(r\phi)|^2+\frac{2r^2}{p+1}|\phi|^{p+1}) dt d\om.
\end{align*}
Here for the particular choice of $X=r^{\ga_0} L$, $Y=\f12 \ga_0 r^{\ga_0-2}|\phi|^2 L$ and $\chi=r^{\ga_0-1}$, we can compute that
\begin{align*}
-2r^2(J^{X, Y, \chi}[\phi])^u&= r^{\ga_0}|L(r\phi)|^2-\f12 L(r^{\ga_0+1}|\phi|^2),\\
2r^2(J^{X, Y, \chi}[\phi])^v&= -r^{\ga_0}(|\nabb(r\phi)|^2+\frac{2r^2}{p+1}|\phi|^{p+1})-\f12 \Lb(r^{\ga_0+1}|\phi|^2).
\end{align*}
By using integration by parts, we have the identity
\begin{align*}
-\int_{\mathbb{H}^+ \cap\{v\leq v_0\}}\pa_\tau (r^{\ga_0+1}|\phi|^2)d\tau d\om+\int_{\B_2^{\infty}\cap\{v\leq v_0\}} \pa_r(r^{\ga_0+1}|\phi|^2)r^{-2}dx+\int_{\Hb_{v_0}}\Lb(r^{\ga_0+1}|\phi|^2)du d\om=0.
\end{align*}
We therefore conclude from the energy identity \eqref{eq:energy:id} that
\begin{align*}
\int_{\Hy^+}r^{\ga_0}|L(r\phi)|^2 dtd\om\leq \int_{\B_{2}^{\infty}} r^{\ga_0}( r^{-2}|L(r\phi)|^2 +|\nabb\phi|^2+\frac{2}{p+1}|\phi|^{p+1}) dx\leq C \mathcal{E}_{0, \ga_0}[\phi]
\end{align*}
for some universal constant $C$. For more details, we refer the interested reader to \cite{yang:scattering:NLW}.
Now to prove the estimate \eqref{eq:EE:hyB}, we carry out the classical energy estimate with the vector field $\pa_t$ as multiplier. It suffices to compute the energy flux through the hyperboloid $\Hy$ for the solution $\phi$ of \eqref{eq:NLW:semi:3d}. For this we compute that
\begin{align*}
E[\phi](\Hy)&=-2\int_{\Hy}i_{J^{\pa_t, 0, 0}[\phi]}d\vol\\
&=-2\int_{\mathbb{H} } (J^{\pa_t, 0, 0}[\phi])^0 r^2 dr d\om-(J^{\pa_t, 0, 0}[\phi])^r r^2 dt d\om\\
&=\int_{\mathbb{H} }(\tau_0 (|\pa\phi|^2+\frac{2}{p+1}|\phi|^{p+1} ) +2\pa_t\phi \pa_r\phi ) \quad r^2 dt d\om\\
&=\int_{\mathbb{H} }(\tau_0 (|\nabb\phi|^2+\frac{2}{p+1}|\phi|^{p+1} )+\frac{\tau_0-1}{2}|\Lb\phi|^2+\frac{1+\tau_0}{2}|L\phi|^2 ) \quad r^2 dt d\om.
\end{align*}
Since $1\leq \tau_0\leq 2$ on $\Hy^+$ and $\tau_0-1=(R^*)^2(1+\tau_0)^{-1}r^{-2}$, the energy conservation then leads to
\begin{align*}
\int_{\Hy^+} (|L\phi|^2+|\nabb\phi|^2+\frac{2}{p+1}|\phi|^{p+1}+r^{-2}|\Lb\phi|^2)r^2 dtd\om\leq C \mathcal{E}_{0, 0}[\phi]
\end{align*}
for some universal constant $C$.
This together with the above weighted energy bound for $|L(r\phi)|$ implies the inequality \eqref{eq:EE:hyB}.
As for the energy estimate \eqref{eq:EE:hyB:1derivative} for the derivatives,
consider the equation for $Z\phi$
\[
\Box Z\phi=Z(|\phi|^{p}\phi)
\]
with nonlinearity $Z(|\phi|^p\phi)$. The associated energy momentum tensor for $Z\phi$ is
\begin{align*}
T[Z\phi]_{\mu\nu}=\pa_{\mu}Z\phi \pa_\nu Z\phi-\f12 m_{\mu\nu}\pa^\ga Z\phi \pa_\ga Z\phi.
\end{align*}
The energy identity \eqref{eq:energy:id} still holds but without the potential part $|\phi|^{p}\phi$. The above computations for $\phi$ then lead to
\begin{equation}
\label{eq:bd4LZrphi}
\begin{split}
&\int_{\mathbb{H}^+}r^{\ga_0}|L Z(r\phi)|^2+|{\Lb} Z \phi|^2+r^2| L Z\phi|^2+|\nabb Z (r\phi)|^2 dtd\om\\
&\les \mathcal{E}_{1, \ga_0}[\phi] +\iint_{\mathcal{D}}|(X (Z\phi)+\chi Z\phi) \Box Z\phi|+ |\Box Z\phi||\pa_t Z\phi|dxdt\\
&\les \mathcal{E}_{1, \ga_0}[\phi] +\iint_{\mathcal{D}}(r^{\ga_0-1}|L(r Z\phi)|+|\pa_t Z\phi|)|Z\phi| |\phi|^{p-1} dx dt.
\end{split}
\end{equation}
Here recall that the region $\mathcal{D}$ is bounded by the hyperboloid $\Hy^+$ and the initial hypersurface, and by our convention, the implicit constant depends only on $\mathcal{E}_{0, \ga_0}[\phi]$, $\ga_0$ and $p$.
To bound the bulk integral on the right hand side of the above inequality, we instead apply the above $r$-weighted energy estimate and the classical energy estimate to the domain $\cD_{u_1}^{u_2}$ bounded by the outgoing null hypersurface $\H_{u_1}$, the incoming null hypersurface $\Hb_{-u_2}$ and the initial hypersurface. The $r$-weighted energy estimate with the same choice of the vector fields $X$, $Y$ and the function $\chi$ shows that
\begin{align*}
\int_{\H_{u_1}}r^{\ga_0} |L(rZ\phi)|^2 dvd\om &\les \mathcal{E}_{1, \ga_0}[\phi] +\iint_{\cD_{u_1}^{u_2}}r^{\ga_0-1}|L(rZ\phi)||Z\phi||\phi|^{p-1} dxdt\\
&\les \mathcal{E}_{1, \ga_0}[\phi] +\iint_{\cD_{u_1}^{u_2}} r^{\ga_0-2}|L(rZ\phi)|^2u_+^{-1-\ep}+ u_+^{1+\ep} r^{\ga_0}|Z\phi|^2|\phi|^{2p-2} dxdt.
\end{align*}
The integral of the first term on the right hand side can be absorbed by using Gronwall's inequality. For the integral of the nonlinearity, we rely on the pointwise decay estimate of $\phi$ obtained in Proposition \ref{prop:NLW:def:3d:ex} as well as the energy estimate for $\phi$ in Proposition \ref{prop:EF:cone:NW:3d}. First we note that in the exterior region
\begin{align*}
|Z\phi|^2\les |L(r\phi)|^2+ u_+^2|\Lb \phi|^2+ r^2 |\nabb\phi|^2+|\phi|^2,\quad \forall t+2\leq |x|.
\end{align*}
The integral of $|L(r\phi)|^2+r^2|\nabb\phi|^2$ can be bounded by using the weighted energy estimate \eqref{eq:Eflux:ex:EF}. For $|\Lb\phi|^2$, recall the energy estimate for $\phi$
\begin{align*}
\int_{\Hb_{-u_2}^{u_1}}|\Lb\phi|^2 \les (u_1)_+^{-\ga_0}\mathcal{E}_{0, \ga_0}[\phi] ,\quad \forall u_2<u_1\leq -1.
\end{align*}
To bound the integral of $|\phi|^2$, we rely on the $r$-weighted energy estimate for $\phi$ through the outgoing null hypersurface $\H_u$
\begin{align*}
\int_{\H_u}r^{-1-\ep}|\phi|^2\les \int_{\om}|r\phi(-u, u, \om)|^2 d\om+u_+^{1-\ga_0}\int_{\H_u}r^{\ga_0}|L(r\phi)|^2 dvd \om \les u_+^{1-\ga_0}\mathcal{E}_{0, \ga_0}[\phi],\quad \forall u\leq -1.
\end{align*}
By our assumption on $p$ and $\ga_0$, we in particular have the following lower bound for $p$, and we choose $\ep$ such that
\[
p>\frac{1+\sqrt{17}}{2}>\frac{5}{2}, \quad (\ga_0+1)(p-1)>4+3\ep.
\]
Therefore
we can show that
\begin{align*}
&\iint_{\cD_{u_1}^{u_2}} u_+^{1+\ep} r^{\ga_0}|Z\phi|^2|\phi|^{2p-2} dxdt\\
&\les \mathcal{E}_{1, \ga_0}[\phi]^{p-1} \iint_{\cD_{u_1}^{u_2}} u_+^{1+\ep-(\ga_0-1)(p-1)} r^{\ga_0-2p+2}(|L(r\phi)|^2+ u_+^2|\Lb \phi|^2+ r^2 |\nabb\phi|^2+|\phi|^2)\\
&\les \mathcal{E}_{1, \ga_0}[\phi]^{p-1} \iint_{\cD_{u_1}^{u_2}} u_+^{\ga_0-2-\ep} r^{-1-\ep}(u_+^2|\Lb \phi|^2+|\phi|^2)+r^{\ga_0-3}(|L(r\phi)|^2 + r^2 |\nabb\phi|^2 )\\
&\les \mathcal{E}_{1, \ga_0}[\phi]^{p-1} \mathcal{E}_{0, \ga_0}[\phi].
\end{align*}
This in particular implies that
\begin{align*}
\int_{\H_{u_1}}r^{\ga_0} |L(rZ\phi)|^2 dvd\om +\iint_{\cD_{u_1}^{u_2}}r^{\ga_0-1}|L(rZ\phi)||Z\phi||\phi|^{p-1} dxdt \les \mathcal{E}_{1, \ga_0}[\phi]^{p-1}.
\end{align*}
Here without loss of generality we may assume that $\mathcal{E}_{1, \ga_0}[\phi]\geq 1.$ Based on these computations and by using the energy estimate for $Z\phi$, we further can show that
\begin{align*}
\int_{\H_{u_1}^{u_2}}|L Z\phi|^2+\int_{\Hb_{-u_2}^{u_1}}|\Lb Z\phi|^2 &\les \mathcal{E}_{1, 0}[\phi] +\iint_{\cD_{u_1}^{u_2}}|\pa_t Z\phi||Z\phi||\phi|^{p-1} dxdt\\
&\les (u_1)_+^{-\ga_0}\mathcal{E}_{1, \ga_0}[\phi] +\iint_{\cD_{u_1}^{u_2}}r^{-1-\ep}|\pa_t Z\phi|^2+r^{1+\ep}|Z\phi|^2|\phi|^{2p-2} dxdt.
\end{align*}
The integral of the first term can be absorbed by using Gronwall's inequality while the second term has been estimated above by choosing $\ep$ such that $1+\ep<\ga_0$. We hence conclude that
\begin{align*}
\iint_{\cD_{u_1}^{u_2}}|\pa_t Z\phi||Z\phi||\phi|^{p-1} \les (u_1)_+^{-\ga_0}\mathcal{E}_{1, \ga_0}[\phi]+(u_1)_+^{-\ga_0}\mathcal{E}_{1, \ga_0}[\phi]^{p-1}\les (u_1)_+^{-\ga_0}\mathcal{E}_{1, \ga_0}[\phi]^{p-1}.
\end{align*}
The weighted energy estimate \eqref{eq:EE:hyB:1derivative} then follows in view of \eqref{eq:bd4LZrphi}.
\end{proof}
The above proposition shows that the solution has uniformly bounded energy flux in the interior region. The method for proving the decay estimates is similar to the above argument for deriving the decay estimates for the solution in the exterior region after conformal transformation. The nonlinearity will be controlled by the weighted energy flux through backward light cones. To use Gronwall's inequality, one first needs to bound the linear evolution with prescribed data, which, in the interior region, is the data on the hyperboloid $\Hy$. For large $p$, when the solution decays sufficiently fast, one can use the standard energy estimates to control the linear evolution; this, however, fails for the smaller $p$ case. We instead rely on the representation formula together with the uniform weighted energy flux bound through backward light cones.
Define the region enclosed by the hyperboloid $\Hy$
\begin{align*}
\mathbf{D}:=\left\{(t, x)|(t^*)^2-|x|^2\geq (R^*)^{-1} t^*\right\},\quad \mathbf{D}^+=\mathbf{D}\cap\{t\geq 0\}, \quad R^*=\frac{5}{6},\quad t^*=t+3.
\end{align*}
Let $\phi^{lin}_{H}$ be the linear evolution in $\mathbf{D}$, that is,
\begin{align*}
\Box \phi^{lin}_{H}=|\phi|^{p-1}\phi (1-\mathbf{1}_{\mathbf{D}^+}),\quad \phi^{lin}_H(0, x)=\phi_0,\quad \pa_t \phi_H^{lin}(0, x)=\phi_1,
\end{align*}
where $\mathbf{1}_{\mathbf{D}^+}$ stands for the characteristic function of the set $\mathbf{D}^+$. We see that $\phi^{lin}_H$ coincides with $\phi$ in the region $(\mathbb{R}^{1+3}/\mathbf{D})\cap\{t\geq 0\}$.
We have the following estimate for $\phi^{lin}_H$ inside $\mathbf{D}$.
\begin{Prop}
\label{prop:NLW:def:3d:ex:ID:H}
Let $p$ and $\ga_0$ verify the same assumptions as in Proposition \ref{prop:NLW:def:3d:ex}. Then inside the hyperboloid $\mathbf{D}$, for large $p>\frac{1+\sqrt{17}}{2}$, we have
\begin{equation}
\label{eq:phi:pt:Br:largep:lin}
|\phi^{lin}_H(t_0, x_0)|\leq C (2+t_0+|x_0|)^{-1}(2+||x_0|-t_0|)^{-\frac{\ga_0-1}{2}}\mathcal{E}_{1, \ga_0}[\phi]^{\frac{p-1}{2}},
\end{equation}
while for the case $2<p\leq \frac{1+\sqrt{17}}{2}$ and $1<\ga_0<p-1$, we have
\begin{equation}
\label{eq:phi:pt:Br:smallp:lin}
|\phi^{lin}_H (t_0, x_0)|\leq C (2+t_0+|x_0|)^{-\frac{3+(p-2)^2}{(p+1)(5-p)}\ga_0}(1+||x_0|-t_0|)^{-\frac{\ga_0}{p+1}} \sqrt{\mathcal{E}_{1, \ga_0}[\phi] }
\end{equation}
for some constant $C$ depending on $\ga_0$, $p$ and the zeroth order weighted energy $\mathcal{E}_{0, \ga_0}[\phi] $.
\end{Prop}
\begin{proof}
The larger $p$ case of estimate \eqref{eq:phi:pt:Br:largep:lin} follows directly from the standard energy method, in view of the weighted energy bounds \eqref{eq:EE:hyB}, \eqref{eq:EE:hyB:1derivative} of the previous Proposition applied to the initial data for $\phi^{lin}_H$ on $\Hy$. Details of this decay estimate for linear waves can be found, for example, in \cite{yang1}.
For the small $2<p\leq \frac{1+\sqrt{17}}{2}$ case, which requires that $\ga_0<p-1$, the above energy method fails. Denote
\[
u_0=1+|t_0-|x_0||,\quad v_0=2+t_0+|x_0|.
\]
Recall that for $q=(t_0, x_0)$, we have
\begin{equation*}
\begin{split}
4\pi\phi^{lin}_H(t_0, x_0)&=\int_{\tilde{\om}}t_0 \phi_1(x_0+t_0\tilde{\om})d\tilde{\om}+\pa_{t_0}\big(\int_{\tilde{\om}}t_0 \phi_0(x_0+t_0\tilde{\om})d\tilde{\om} \big)-\int_{\mathcal{N}^{-}(q)/\mathbf{D}}|\phi|^{p-1} \phi \ \tilde{r} d\tilde{r}d\tilde{\om}.
\end{split}
\end{equation*}
Decay estimates for the linear evolution part can be carried out by using the standard vector field method
\begin{align*}
|\int_{\tilde{\om}}t_0 \phi_1(x_0+t_0\tilde{\om})d\tilde{\om}|+|\pa_{t_0}\big(\int_{\tilde{\om}}t_0 \phi_0(x_0+t_0\tilde{\om})d\tilde{\om} \big)|\leq C v_0^{-1}u_0^{-\frac{\ga_0-1}{2}}\sqrt{\mathcal{E}_{1, \ga_0}[\phi]}.
\end{align*}
We now need to control the contribution of the nonlinear part from the exterior region. The case when $t_0\leq 20$ is trivial since in this case $t_0+|x_0|$ is bounded by a fixed constant (the point being confined in $\mathbf{D}$). A minor modification of the argument for estimating the nonlinear terms in Proposition \ref{prop:NLW:def:3d:ex} also applies to the case $|t_0-|x_0||\leq 2$ (that is, Lemma \ref{lem:integration:ex:ab} holds for $|t_0-|x_0||\leq 10$). Alternatively, by moving the origin around, the decay estimates of Proposition \ref{prop:NLW:def:3d:ex} are also valid for $q=(t_0, x_0)$ with $|t_0-|x_0||\leq 10$. Hence in the sequel, it suffices to consider the case when $t_0\geq 20$ and $t_0>|x_0|+10$.
First we can estimate that
\begin{align*}
|\int_{\mathcal{N}^{-}(q)/\mathbf{D}}|\phi|^{p-1} \phi \ \tilde{r} d\tilde{r}d\tilde{\om}|&\leq \left(\int_{\mathcal{N}^{-}(q)}|\phi|^{p+1}((1+\tau)v_+^{\ga}+u_+^{\ga})\tilde{r}^2 d\tilde{r}d\tilde{\om}\right)^{\frac{p}{p+1}}\\
& \quad \cdot \left(\int_{\mathcal{N}^{-}(q)/\mathbf{D}}((1+\tau)v_+^{\ga}+u_+^{\ga})^{-p}\tilde{r}^{1-p} d\tilde{r}d\tilde{\om}\right)^{\frac{1}{p+1}}\\
&\leq (\mathcal{E}_{0, \ga_0}[\phi])^{\frac{p}{p+1}} \left(\int_{\mathcal{N}^{-}(q)/\mathbf{D}}((1+\tau)v_+^{\ga}+u_+^{\ga})^{-p}\tilde{r}^{1-p} d\tilde{r}d\tilde{\om}\right)^{\frac{1}{p+1}}
\end{align*}
for all $1<\ga<\ga_0$.
Denote $s=-\om_0\cdot \tilde{\om}$ with $\om_0=r_0^{-1}x_0$. Recall that
\begin{align*}
&r^2=|x|^2=(\tilde{r}-r_0s)^2+(1-s^2)r_0^2,\quad r\tau=\tilde{r}-r_0s.
\end{align*}
On $\mathcal{N}^{-}(q)/\mathbf{D}$, $\tilde{r}$ and $s$ have to verify the relation
\begin{align*}
r^2+4\geq t^2,\quad t=t_0-\tilde{r},
\end{align*}
that is,
\begin{align*}
s\leq s_*, \quad 2\tilde{r}(t_0-r_0s_*)+4= (t_0-r_0)(t_0+r_0).
\end{align*}
As $-1\leq s\leq 1$, to make the set $\mathcal{N}^{-}(q)/\mathbf{D}$ non-empty, it in particular requires that $$\tilde{r}\geq \frac{t_0-r_0}{2}-2\geq \frac{1}{5}u_0.$$
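Indeed, with $u_0=1+t_0-r_0$ and $t_0-r_0\geq 10$, so that $u_0\geq 11$, the last lower bound follows from
\begin{align*}
\frac{t_0-r_0}{2}-2=\frac{u_0-5}{2}\geq \frac{u_0}{5}\Longleftrightarrow 3u_0\geq 25.
\end{align*}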
Here keep in mind that we have assumed that $t_0\geq 20$, $t_0-r_0\geq 10$. For the case when $\tilde{r}\geq \frac{t_0+r_0}{2}-2$, it can be shown that
\[
s_*\geq 1,\quad r\tau=\tilde{r}-r_0s\geq 0,\quad r\geq \frac{1}{2}(\tilde{r}-r_0s+\sqrt{1-s^2}r_0)\geq \frac{1}{4}(\tilde{r}-r_0+\sqrt{1-s}r_0).
\]
Therefore we can estimate that
\begin{align*}
& \int_{\mathcal{N}^{-}(q)/\mathbf{D}\cap\{\tilde{r}\geq\frac{t_0+r_0}{2}-2\}}((1+\tau)v_+^{\ga}+u_+^{\ga})^{-p}\tilde{r}^{1-p} d\tilde{r}d\tilde{\om}\\
&\les \int_{-1}^{1}\int_{\frac{t_0+r_0}{2}-2}^{t_0} (\tilde{r}-r_0+\sqrt{1-s}r_0 )^{-p\ga}\tilde{r}^{1-p}d\tilde{r}ds\\
&\les t_0^{1-p} u_0 \int_{-1}^{1}(u_0+\sqrt{1-s}r_0)^{-p\ga}ds.
\end{align*}
For the case when $r_0\leq \f12 t_0$, we trivially have that
\begin{align*}
\int_{-1}^{1}(u_0+\sqrt{1-s}r_0)^{-p\ga}ds\les t_0^{-p\ga}
\end{align*}
as $u_0=t_0-r_0+1\geq \frac{1}{4}t_0$. When $r_0\geq \f12 t_0$, we show that
\begin{align*}
\int_{-1}^{1}(u_0+\sqrt{1-s}r_0)^{-p\ga}ds &\les \int_{-1}^{1-r_0^{-2}u_0^2} (\sqrt{1-s}r_0)^{-p\ga}ds+\int_{1-r_0^{-2}u_0^2}^{1} u_0^{-p\ga}ds\\
&\les r_0^{-2}u_0^{2-p\ga}+ r_0^{-p\ga}(r_0^{-2}u_0^2)^{1-\frac{p\ga}{2}}\\
&\les t_0^{-2}u_0^{2-p\ga}.
\end{align*}
Here notice that $p\ga>2$. We thus conclude that
\begin{align*}
\int_{\mathcal{N}^{-}(q)/\mathbf{D}\cap\{\tilde{r}\geq\frac{t_0+r_0}{2}-2\}}((1+\tau)v_+^{\ga}+u_+^{\ga})^{-p}\tilde{r}^{1-p} d\tilde{r}d\tilde{\om} \les t_0^{-1-p} u_0^{3-p\ga}.
\end{align*}
Next we consider the case when $\frac{t_0-r_0}{2}-2\leq \tilde{r}\leq \frac{t_0+r_0}{2}-2$. By the definition of $s_*$, we have
\begin{align*}
& \int_{\mathcal{N}^{-}(q)/\mathbf{D}\cap\{\frac{t_0-r_0}{2}-2\leq \tilde{r}\leq\frac{t_0+r_0}{2}-2\}}((1+\tau)v_+^{\ga}+u_+^{\ga})^{-p}\tilde{r}^{1-p} d\tilde{r}d\tilde{\om}\\
&\les \int_{\frac{t_0-r_0}{2}-2}^{\frac{t_0+r_0}{2}-2} \int_{-1}^{s_*} ((r+\tilde{r}-r_0s) r^{\ga-1}+u_+^{\ga})^{-p}\tilde{r}^{1-p} d\tilde{r}ds.
\end{align*}
For the case when $s_*\leq r_0^{-1}\tilde{r}$, that is,
\begin{align*}
s_*=\frac{2\tilde{r}t_0+4-t_0^2+r_0^2}{2\tilde{r}r_0}\leq r_0^{-1}\tilde{r}\Longleftrightarrow (\tilde{r}-\f12 t_0)^2\geq 2+\f12 r_0^2-\frac{1}{4}t_0^2,
\end{align*}
which in particular always holds in the situation $2+\f12 r_0^2\leq \frac{1}{4}t_0^2$, we have
\begin{align*}
r\geq \frac{1}{4}(\tilde{r}-r_0s+\sqrt{1-s}r_0).
\end{align*}
Let us distinguish two cases. When $r_0$ is small compared to $t_0$, that is, $r_0\leq \frac{1}{10}t_0$, then
\begin{align*}
r\geq \frac{1}{4}(\tilde{r}-r_0)\geq \frac{1}{100}t_0.
\end{align*}
Otherwise we have $\frac{1}{10}t_0\leq r_0\leq\frac{\sqrt{2}}{2}\sqrt{t_0^2-8} $ and then we can show that
\begin{align*}
\tilde{r}-r_0s+\sqrt{1-s}r_0&\geq \tilde{r}-r_0+\sqrt{\frac{(t_0-r_0)(t_0+r_0-2\tilde{r})-4}{2\tilde{r}r_0}}r_0\\
&\geq \tilde{r}-r_0+\frac{1}{20}\sqrt{r_0(\frac{t_0+r_0}{2}-\tilde{r})}\\
&\geq \frac{1}{100}t_0,\quad \forall s\leq s_*,\quad \frac{t_0-r_0}{2}-2\leq \tilde{r}\leq \frac{t_0+r_0}{2}-2.
\end{align*}
Thus for the case when $8+2r_0^2\leq t_0^2$, we always have
\begin{align*}
\int_{\frac{t_0-r_0}{2}-2}^{\frac{t_0+r_0}{2}-2} \int_{-1}^{s_*} ((r+\tilde{r}-r_0s) r^{\ga-1}+u_+^{\ga})^{-p}\tilde{r}^{1-p} d\tilde{r}ds\les t_0^{2-p-p\ga}.
\end{align*}
Now it remains to consider the case when $2+\f12 r_0^2>\frac{1}{4}t_0^2$. For the integral on $r_0\leq \tilde{r}\leq \frac{t_0+r_0}{2}-2$, similarly we can estimate that
\begin{align*}
& \int_{r_0}^{\frac{t_0+r_0}{2}-2} \int_{-1}^{s_*} ((r+\tilde{r}-r_0s) r^{\ga-1}+u_+^{\ga})^{-p}\tilde{r}^{1-p} d\tilde{r}ds\\
&\les \int_{r_0}^{\frac{t_0+r_0}{2}-2} \int_{-1}^{s_*} (\tilde{r}-r_0+\sqrt{1-s}r_0)^{-p\ga}t_0^{1-p} d\tilde{r}ds\\
&\les \int_{r_0}^{\frac{t_0+r_0}{2}-2} (1-s_*)^{1-\f12 p\ga}t_0^{1-p-p\ga} d\tilde{r}\\
&\les t_0^{-1-p}u_0^{1-\frac{p\ga}{2}}
\end{align*}
as $p\ga\leq p(p-1)\leq 4$.
Now we need to estimate the integral on $[\frac{t_0-r_0}{2}-2, r_0]$. We first consider the case when $\tilde{r}\leq \f12 r_0$.
Since $2+\f12 r_0^2\geq \frac{1}{4}t_0^2$, $t_0\geq 20$ and $s\leq s_*$, in particular we have
\begin{align*}
&r\geq t\geq t_0-\tilde{r}\geq t_0-\f12 r_0\geq \frac{1}{10}t_0,\\
& r+\tilde{r}-r_0s=\frac{(1-s^2)r_0^2}{r+r_0s-\tilde{r}}\geq \frac{1}{100}(1-s)t_0,\\
&1-s_*=\frac{(t_0-r_0)(t_0+r_0-2\tilde{r})-4}{2\tilde{r}r_0}\geq \frac{u_0}{10 \tilde{r}}.
\end{align*}
The second inequality holds trivially when $\tilde{r}-r_0s\geq 0$. Otherwise we use the bound $r\leq 2r_0$.
Therefore we can estimate that
\begin{align*}
&\int^{\f12 r_0}_{\frac{t_0-r_0}{2}-2}\int_{-1}^{s_{*}}((r+\tilde{r}-r_0s) r^{\ga-1}+u_+^{\ga})^{-p}\tilde{r}^{1-p} d\tilde{r}ds\\
&\les \int^{\f12 r_0}_{\frac{t_0-r_0}{2}-2}\int_{-1}^{s_{*}}(1-s)^{-p}t_0^{-p\ga} \tilde{r}^{1-p} d\tilde{r}ds\\
&\les \int^{\f12 r_0}_{\frac{t_0-r_0}{2}-2} (u_0\tilde{r}^{-1})^{1-p}t_0^{-p\ga} \tilde{r}^{1-p} d\tilde{r}\\
&\les u_0^{1-p}t_0^{1-p\ga}.
\end{align*}
Finally it remains to consider the integral on $[\f12 r_0, r_0]$ with $2+\f12 r_0^2>\frac{1}{4}t_0^2$.
Denote $$s_{**}=\min\{s_*, r_0^{-1}\tilde{r}\}.$$
For the integral restricted to $-1\leq s\leq s_{**}$, note that
\begin{align*}
r^2=(\tilde{r}-r_0s)^2+(1-s^2)r_0^2 &\geq \f12 (\tilde{r}-r_0s_*)^2+\f12 (1-s_*^2)r_0^2+\f12 (1-s)r_0^2\\
&\geq \f12 (t_0-r_0)^2+\f12 (1-s)r_0^2.
\end{align*}
Here recall that $s_*$ is defined such that $r=t=t_0-\tilde{r}$.
Therefore we can show that
\begin{align*}
&\int_{\f12 r_0}^{r_0}\int_{-1}^{s_{**}}((r+\tilde{r}-r_0s) r^{\ga-1}+u_+^{\ga})^{-p}\tilde{r}^{1-p} d\tilde{r}ds\\
&\les \int^{ r_0}_{\f12 r_0}\int_{-1}^{s_{**}}(u_0^2+(1-s)r_0^2)^{-\f12 p\ga} t_0^{1-p} d\tilde{r}ds\\
&\les \int^{ r_0}_{\f12 r_0} (r_0^{-2}u_0^2+1-r_0^{-1}\tilde{r})^{1-\frac{p\ga}{2}} t_0^{1-p-p\ga} d\tilde{r}\\
&\les t_0^{2-p-p\ga} (1+(r_0^{-2}u_0^2)^{2-\frac{p\ga}{2}})\\
&\les t_0^{2-p-p\ga}
\end{align*}
as $p\ga<4$ and $u_0< r_0$.
Lastly we consider the integral on the range $s_{**}=r_0^{-1}\tilde{r}<s\leq s_*$, which in particular requires that
\begin{align*}
\tilde{r}\leq r_*=\f12 t_0+\sqrt{2+\f12 r_0^2-\frac{1}{4}t_0^2}<r_0.
\end{align*}
Moreover
\begin{align*}
r+\tilde{r}-r_0s=\frac{(1-s^2)r_0^2}{r+r_0s-\tilde{r}}\geq \frac{(1-s)r_0^2}{ r},\quad r\leq 2r_0,\quad \forall \tilde{r}\leq r_0.
\end{align*}
Therefore
\begin{align*}
(r+\tilde{r}-r_0s)r^{\ga-1}\geq (1-s)r_0^{2}r^{\ga-2}\geq 2^{\ga-2}(1-s)r_0^{\ga}\geq 2^{-2}(1-s)t_0^{\ga}.
\end{align*}
This leads to
\begin{align*}
&\int_{\f12 r_0}^{r_*}\int_{r_0^{-1}\tilde{r}}^{s_{*}}((r+\tilde{r}-r_0s) r^{\ga-1}+u_+^{\ga})^{-p}\tilde{r}^{1-p} d\tilde{r}ds\\
&\les \int^{ r_*}_{\f12 r_0}\int_{r_0^{-1}\tilde{r}}^{s_{*}}(1+(1-s)t_0^{\ga})^{-p} t_0^{1-p} d\tilde{r}ds\\
&\les t_0^{1-p}\int^{ r_*}_{\f12 r_0}t_0^{-\ga}(1+(1-s_*)t_0^{\ga})^{1-p} d\tilde{r} \\
&\les t_0^{1-p}\int^{ r_*}_{\f12 r_0}t_0^{-\ga}(1+t_0^{\ga-2}u_0 (t_0+r_0-2\tilde{r}))^{1-p} d\tilde{r} \\
&\les t_0^{3-p-2\ga}u_0^{-1}.
\end{align*}
Since $2<p<\frac{1+\sqrt{17}}{2}$, $1<\ga<p-1$ and $u_0<t_0$, gathering all the above estimates, we have shown that
\begin{align*}
|\int_{\mathcal{N}^{-}(q)/\mathbf{D}}|\phi|^{p-1} \phi \ \tilde{r} d\tilde{r}d\tilde{\om}|
\les (\mathcal{E}_{0, \ga_0}[\phi])^{\frac{p}{p+1}} t_0^{\frac{3-p-2\ga}{p+1}}u_0^{-\frac{1}{p+1}}.
\end{align*}
Now we compute that
\begin{align*}
3-p-2\ga+\frac{p^2-4p+7}{5-p}\ga
=-\frac{3-p}{5-p}(2p-4+(p+1)(\ga-1))< \frac{9-p^2}{5-p} (1-\ga)<1-\ga_0
\end{align*}
by choosing $\ga$ sufficiently close to $\ga_0$. This demonstrates that
\begin{align*}
|\phi^{lin}_H(t_0, x_0)|\les t_0^{\frac{3-p-2\ga}{p+1}}u_0^{-\frac{1}{p+1}}\les t_0^{-\frac{(p-2)^2+3}{(p+1)(5-p)}\ga_0}u_0^{-\frac{\ga_0}{p+1}}.
\end{align*}
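For the reader's convenience, the first step in the above exponent computation can be checked directly:
\begin{align*}
3-p-2\ga+\frac{p^2-4p+7}{5-p}\ga&=(3-p)+\frac{p^2-2p-3}{5-p}\ga=(3-p)-\frac{(3-p)(p+1)}{5-p}\ga\\
&=-\frac{3-p}{5-p}\big(p-5+(p+1)\ga\big)=-\frac{3-p}{5-p}\big(2p-4+(p+1)(\ga-1)\big).
\end{align*}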
This proves \eqref{eq:phi:pt:Br:smallp:lin} and hence finishes the proof of the Proposition.
\end{proof}
\section{Semilinear wave equation on a truncated backward light cone}
\label{sec:comp}
We study the solution to a class of semilinear wave equations on a compact region with smooth initial data $(\phi_0, \phi_1)$ which may blow up on the boundary. This is motivated by the study of the asymptotic behaviour of solutions to the subcritical defocusing nonlinear wave equation in the interior region. However, this section is self-contained and may be of independent interest.
Let $R>1$ be a constant and $\B_R$ be the ball with radius $R$ in $\mathbb{R}^3$. Denote by $\cJ^+(\B_R)$ the future maximal Cauchy development, that is, $(t, x)\in\mathbb{R}\times \mathbb{R}^{3}$ belongs to $\cJ^+(\B_R)$ if and only if $x+t\om\in \B_R$ for all $\om\in \mathbb{S}^2$.
Consider the Cauchy problem to the following nonlinear wave equation
\begin{equation}
\label{eq:NLW:3D:conf}
\begin{cases}
\Box \phi= \La^{3-p}|\phi|^{p-1}\phi,\\
\phi(0, x)=\phi_0,\quad \pa_t\phi(0, x)=\phi_1, \quad x\in \B_R
\end{cases}
\end{equation}
on $\mathcal{J}^{+}(\B_R)$ with $\La=((R-t)^2-|x|^2)^{-1}$.
For any fixed point $q=(t_0, x_0)\in \mathcal{J}^+(\B_R)$, recall that $\mathcal{N}^{-}(q)$ is the past null cone of the point $q$ in $\mathcal{J}^{+}(\B_R)$ (as pointed out before, we are only concerned with the solution in the future) and $\mathcal{J}^{-}(q)$ is the past of the point $q$, that is, the region bounded by $\mathcal{N}^{-}(q)$ and $\B_R$. As defined in Section \ref{sec:notation}, the tilde coordinates and quantities refer to the corresponding ones with coordinates centered at the given point $q=(t_0, x_0)$.
At the fixed point $q=(t_0, x_0)$, define the following functions
\[
u_*=R-t+r, \quad v_*=R-t-r,\quad \tau=\frac{x\cdot (x-x_0)}{|x||x-x_0|}=\om\cdot \tilde{\om}.
\]
In particular $\La=u_*^{-1}v_*^{-1}$.
Assume that the initial data $(\phi_0, \phi_1)$ are bounded in the following weighted energy norm
\[
\tilde{\mathcal{E}}_{0, \gamma}=\int_{\B_R}(R-|x|)^{\gamma}|L\phi|^2+|{\Lb}\phi|^2+|\nabb\phi_0|^2+|\phi_0|^2+(R-|x|)^{p-3+\ga}|\phi_0|^{p+1} dx
\]
for some constant $0<\ga<1$. Define
\begin{align*}
\mathcal{I}=\iint_{\cJ^+(\B_R)} \La^{3-p}|\phi|^{p+1}v_{*}^{\ga-1} d\vol.
\end{align*}
First we establish a weighted energy flux bound for the potential.
\begin{Prop}
\label{prop:EF:cone:gamma}
For any point $q=(t_0, x_0)\in \mathcal{J}^{+}(\B_R)$, the solution verifies the uniform bound
\begin{equation}
\label{eq:comp:v:EF}
\begin{split}
&\int_{\mathcal{N}^{-}(q)} ( v_*^\ga+(1-\tau)u_*^{\ga}) \La^{3-p} |\phi|^{p+1} \leq C (\tilde{\cE}_{0, \ga}+\mathcal{I})
\end{split}
\end{equation}
for some constant $C$ depending only on $R$, $p$ and $\ga$.
\end{Prop}
\begin{proof}
The proof is similar to that for Proposition \ref{prop:EF:cone:NW:3d}. For solution $\phi$ of the equation \eqref{eq:NLW:3D:conf}, define the associated energy momentum tensor
\begin{align*}
T[\phi]_{\mu\nu}=\pa_{\mu}\phi\pa_{\nu}\phi-\f12 m_{\mu\nu}(\pa^\ga \phi \pa_\ga\phi+\frac{2}{p+1}\La^{3-p}|\phi|^{p+1}).
\end{align*}
Here $m_{\mu\nu}$ is the flat Minkowski metric.
Then
\begin{align*}
\pa^\mu T[\phi]_{\mu\nu}=&(\Box\phi-\La^{3-p} |\phi|^{p-1}\phi)\pa_\nu\phi+\frac{p-3}{p+1}\La^{2-p}\pa_\nu\La |\phi|^{p+1}.
\end{align*}
Recall the current $J^{X, \chi}[\phi]$ defined for any vector field $X$ and any function $\chi$:
\begin{equation*}
J^{X, \chi}_\mu[\phi]=T[\phi]_{\mu\nu}X^\nu -
\f12\pa_{\mu}\chi \cdot|\phi|^2 + \f12 \chi\pa_{\mu}|\phi|^2.
\end{equation*}
For solution $\phi$ of equation \eqref{eq:NLW:3D:conf}, we derive the following energy identity
\begin{equation}
\label{eq:energy:id:conf}
\iint_{\mathcal{D}}\pa^\mu J^{X,\chi}_\mu[\phi] d\vol =\iint_{\mathcal{D}}\frac{p-3}{p+1}\La^{2-p}X(\La) |\phi|^{p+1}+ T[\phi]^{\mu\nu}\pi^X_{\mu\nu}+
\chi \pa_\mu\phi\pa^\mu\phi -\f12\Box\chi\cdot|\phi|^2+\chi\phi\Box\phi d\vol
\end{equation}
for any domain $\mathcal{D}$ in $\mathcal{J}^+(\B_{R})$.
Apply the above energy identity to the domain $\mathcal{J}^{-}(q)$ for some fixed point $q=(t_0, x_0)$ with vector field $X$
\[
X=(R-t-r)^{\gamma} L+(R-t+r)^\gamma \Lb=v_*^\ga L+u_*^\ga \Lb.
\]
Since the bulk integral on the right hand side is an integral over a spacetime region, we can compute it in the null frame $(L, \Lb, e_1, e_2)$. We first compute
\[
\nabla_{L}X=-2\gamma v_*^{\gamma-1}L,\quad \nabla_{\Lb}X=-2\gamma u_*^{\gamma-1}\Lb,\quad \nabla_{e_i}X=r^{-1}(v_*^\gamma-u_*^{\gamma}) e_i.
\]
In particular, the non-vanishing components of the deformation tensor $\pi_{\mu\nu}^X$ are
\[
\pi^X_{L\Lb}=2\gamma \left(v_*^{\gamma-1}+u_*^{\gamma-1}\right),\quad \pi^X_{e_i e_i}=r^{-1}(v_*^{\ga}-u_*^{\gamma}).
\]
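For the reader's convenience, these covariant derivatives follow from the frame derivatives of the weights $u_*$ and $v_*$, namely
\begin{align*}
L u_*=0,\quad L v_*=-2,\quad \Lb u_*=-2,\quad \Lb v_*=0,\quad e_i u_*=e_i v_*=0,
\end{align*}
together with the standard relations $\nabla_{e_i}L=r^{-1}e_i$ and $\nabla_{e_i}\Lb=-r^{-1}e_i$ for the null frame in Minkowski space.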
Therefore we have
\begin{align*}
&T[\phi]^{\mu\nu}\pi^X_{\mu\nu}=2T[\phi]^{L\Lb}\pi^X_{L\Lb}+T[\phi]^{e_ie_i}\pi^X_{e_ie_i}\\
&=\ga (v_*^{\gamma-1}+u_*^{\gamma-1})(|\nabb\phi|^2+\frac{2}{p+1}\La^{3-p}|\phi|^{p+1})+r^{-1}(v_*^{\ga}-u_*^{\gamma})(|\nabb\phi|^2-\pa^\mu \phi \pa_\mu\phi-\frac{2}{p+1}\La^{3-p}|\phi|^{p+1})\\
&=\left(\ga(v_*^{\gamma-1}+u_*^{\gamma-1})+r^{-1}(v_*^{\ga}-u_*^{\gamma})\right)|\nabb\phi|^2-r^{-1}(v_*^{\ga}-u_*^{\gamma})\pa^\mu\phi \pa_\mu\phi\\
&\quad +\left(\ga(v_*^{\gamma-1}+u_*^{\gamma-1})+r^{-1}(u_*^{\ga}-v_*^{\gamma})\right)\frac{2}{p+1}\La^{3-p}|\phi|^{p+1}.
\end{align*}
Now take the function $\chi$ to be
\[
\chi=r^{-1}(v_*^{\ga}-u_*^{\gamma}).
\]
We may note that
\begin{align*}
\Box \chi=-r^{-1}L\Lb (r\chi)=-r^{-1}L\Lb(v_*^\ga-u_*^\ga)=0,\quad r>0.
\end{align*}
Moreover we can compute that
\begin{align*}
X(\La)=(u_*^\ga \Lb+v_*^\ga L)(u_*v_*)^{-1}=-(u_*v_*)^{-2}(-2 u_*^{\ga}v_*-2v_*^{\ga}u_*)=2\La^{2}(u_*^{\ga}v_*+v_*^{\ga}u_*).
\end{align*}
Therefore, for a solution $\phi$ of \eqref{eq:NLW:3D:conf}, we have
\begin{align*}
&T[\phi]^{\mu\nu}\pi^X_{\mu\nu}+
\chi \pa_\mu\phi \pa^\mu\phi -\f12\Box\chi\cdot|\phi|^2 +\chi\phi\Box\phi +\frac{p-3}{p+1}\La^{2-p}X(\La) |\phi|^{p+1}\\
&=\left(\ga(v_*^{\gamma-1}+u_*^{\gamma-1})+r^{-1}(v_*^{\ga}-u_*^{\gamma})\right)|\nabb\phi|^2+r^{-1}(v_*^{\ga}-u_*^{\gamma}) \La^{3-p}|\phi|^{p+1}\\
&\quad +\left(\ga(v_*^{\gamma-1}+u_*^{\gamma-1})+r^{-1}(u_*^{\ga}-v_*^{\gamma})\right)\frac{2}{p+1}\La^{3-p}|\phi|^{p+1}+2\frac{p-3}{p+1}\La^{4-p}(u_*v_*^{\ga}+u_*^{\ga}v_*) |\phi|^{p+1}\\
&=\left(\ga(v_*^{\gamma-1}+u_*^{\gamma-1})-r^{-1}(u_*^{\ga}-v_*^{\gamma})\right)(|\nabb\phi|^2+\frac{2}{p+1}\La^{3-p}|\phi|^{p+1})\\
&\quad + \La^{3-p}|\phi|^{p+1} \frac{p-3}{p+1}(2(v_*^{\gamma-1}+u_*^{\gamma-1})-r^{-1}(u_*^{\ga}-v_*^{\gamma})).
\end{align*}
Now note that $v_*\geq 0$, $u_*\geq 0$ and $2r=u_*-v_*$. Define $f(u)=u^{\ga}$ for $u>0$. As $0< \ga<1$, the function $f'(u)=\ga u^{\ga-1}$ is convex. Therefore
\begin{align*}
\frac{f(u_*)-f(v_*)}{u_*-v_*}=\int_{0}^{1}f'(su_*+(1-s)v_*)ds \leq \int_{0}^{1}sf'(u_*)+(1-s)f'(v_*)ds=\frac{f'(u_*)+f'(v_*)}{2},
\end{align*}
which implies that
\[
\ga(v_*^{\gamma-1}+u_*^{\gamma-1})+r^{-1}(v_*^{\ga}-u_*^{\gamma})\geq 0.
\]
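Here the convexity of $f'$ can be verified directly: for $0<\ga<1$ and $u>0$,
\[
f'''(u)=\ga(\ga-1)(\ga-2)u^{\ga-3}>0,
\]
since both factors $\ga-1$ and $\ga-2$ are negative.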
Since $0< \ga<1$, we conclude that
\[
2(v_*^{\gamma-1}+u_*^{\gamma-1})-r^{-1}(u_*^{\ga}-v_*^{\gamma})\geq 0.
\]
Therefore the bulk integral on the right hand side of the energy identity \eqref{eq:energy:id:conf} is nonnegative in the super-conformal case $p\geq 3$. Otherwise, we make use of the a priori bound $\mathcal{I}$. This leads to the following energy estimate
\begin{equation}
\label{eq:positive:ga:comp}
\begin{split}
\iint_{\mathcal{J}^{-}(q)}\pa^\mu J^{X, \chi}_\mu[\phi]d\vol&=\int_{\mathcal{N}^{-}(q)}i_{ J^{X, \chi}[\phi]}d\vol+\int_{\mathcal{J}^{-}(q)\cap{\B_R}}i_{ J^{X, \chi}[\phi]}d\vol\\
&\geq - \frac{4|p-3|}{p+1}\iint_{\cJ^{-}(q)}\La^{3-p}|\phi|^{p+1} v_*^{\gamma-1} d\vol\\
&\geq -\frac{4|p-3|}{p+1}\mathcal{I}
\end{split}
\end{equation}
by using Stokes' formula. Here we note that $0<\ga<1$ and $u_*\geq v_*$. The boundary integral on the initial hypersurface $\mathcal{J}^{-}(q)\cap{\B_R}$ can be bounded by the initial data. The above inequality then gives control on the weighted energy flux through the backward light cone $\mathcal{N}^{-}(q)$. To find the explicit form of this weighted energy flux, we shift to the coordinates centered at the point $q=(t_0, x_0)$.
Recall from the proof of Proposition \ref{prop:EF:cone:NW:3d} that
\begin{align*}
-i_{J^{X,\chi}[\phi]}d\vol=J_{\tilde{\Lb}}^{X,\chi}[\phi]\tilde{r}^2d\tilde{u}d\tilde{\om}= ( T[\phi]_{\tilde{\Lb}\nu}X^\nu -
\f12(\tilde{\Lb}\chi) |\phi|^2 + \f12 \chi\cdot\tilde{\Lb}|\phi|^2 ) \tilde{r}^2d\tilde{u}d\tilde{\om}.
\end{align*}
For the main quadratic terms, we first compute
\begin{align*}
T[\phi]_{\tilde{\Lb}\nu}X^\nu =T[\phi]_{\tilde{\Lb}\tilde{\Lb}}X^{\tilde{\Lb}}+T[\phi]_{\tilde{\Lb}\tilde{L}}X^{\tilde{L}}+T[\phi]_{\tilde{\Lb}\tilde{e}_i}X^{\tilde{e}_i}.
\end{align*}
We expand the vector field $X$ under the new null frame $\{\tilde{L}, \tilde{\Lb}, \tilde{e}_1, \tilde{e}_2\}$ centered at the point $q$. Recalling the computations in the proof of Proposition \ref{prop:EF:cone:NW:3d}, we can write
\begin{align*}
X
&=\f12 \left(u_*^\ga+v_*^\ga+(v_*^\ga-u_*^\ga)\om\cdot \tilde{\om}\right)\tilde{L}+\f12 \left(u_*^\ga+v_*^\ga-(v_*^\ga-u_*^\ga)\om\cdot \tilde{\om}\right)\tilde{\Lb}+(v_*^\ga-u_*^\ga)\om\cdot \tilde{\nabb}.
\end{align*}
Here $\tilde{\nabb}=\tilde{\nabla}-\pa_{\tilde{r}}$. Recall that $\tau=\om\cdot \tilde{\om}$. Then we can compute the quadratic terms
\begin{align*}
T[\phi]_{\tilde{\Lb}\nu}X^\nu
=&\left((1-\tau)v_*^\ga+(1+\tau)u_*^\ga\right)|{\tilde{\Lb}}\phi|^2 +\left((1+\tau)v_*^\ga+(1-\tau)u_*^\ga\right)(|\tilde{\nabb}\phi|^2+\frac{\La^{3-p}}{p+1}|\phi|^{p+1})\\
&+2 (v_*^\ga-u_*^\ga) ({\tilde{\Lb}}\phi) (\om\cdot \tilde{\nabb})\phi.
\end{align*}
Similar to the proof of Proposition \ref{prop:EF:cone:NW:3d}, we write the above quantity in terms of $r\phi$ and show that the quadratic terms are nonnegative.
Indeed, since $u_*^\ga\geq v_*^\ga$, we can bound
\begin{align*}
&\left((1-\tau)v_*^\ga+(1+\tau)u_*^\ga\right)|\tilde{\Lb}(r\phi)|^2+\left((1+\tau)v_*^\ga+(1-\tau)u_*^\ga\right)|\tilde{\nabb}(r\phi)|^2+2 (v_*^\ga-u_*^\ga){\tilde{\Lb}}(r\phi) (\om \cdot \tilde{\nabb})(r\phi)\\
&\geq \left((1-\tau)v_*^\ga+(1+\tau)u_*^\ga\right)|\tilde{\Lb}(r\phi)|^2+\left((1+\tau)v_*^\ga+(1-\tau)u_*^\ga\right)|\tilde{\nabb}(r\phi)|^2\\
&\quad -2 (u_*^\ga-v_*^\ga)\sqrt{1-\tau^2}|{\tilde{\Lb}}(r\phi)| |\tilde{\nabb}(r\phi)|\\
&\geq \frac{2u_*^\ga v_*^\ga}{(1-\tau)u_*^\ga+(1+\tau)v_*^\ga} |\tilde{\Lb}(r\phi)|^2 +\frac{2u_*^\ga v_*^\ga}{(1+\tau)u_*^\ga+(1-\tau)v_*^\ga}|\tilde{\nabb}(r\phi)|^2 \geq 0.
\end{align*}
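The last step is an instance of an elementary bound for quadratic forms: writing $a=(1-\tau)v_*^\ga+(1+\tau)u_*^\ga$, $b=(1+\tau)v_*^\ga+(1-\tau)u_*^\ga$ and $c=(u_*^\ga-v_*^\ga)\sqrt{1-\tau^2}$, a direct expansion gives $ab-c^2=4u_*^\ga v_*^\ga$, and hence
\[
\Big(a-\frac{2u_*^\ga v_*^\ga}{b}\Big)\Big(b-\frac{2u_*^\ga v_*^\ga}{a}\Big)=ab-4u_*^\ga v_*^\ga+\frac{4u_*^{2\ga}v_*^{2\ga}}{ab}=c^2+\frac{4u_*^{2\ga}v_*^{2\ga}}{ab}\geq c^2,
\]
so that $aX^2+bY^2-2cXY\geq \frac{2u_*^\ga v_*^\ga}{b}X^2+\frac{2u_*^\ga v_*^\ga}{a}Y^2$ for all real $X$, $Y$.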
For the other lower order terms, we compute that
\begin{align*}
&\left((1-\tau)v_*^\ga+(1+\tau)u_*^\ga\right)(\tau^2|\phi|^2+2{\tilde{\Lb}}(r\phi)\tau\phi )+(r\chi)( {\tilde{\Lb}}(r\phi)+\tau \phi) \phi-\f12(r\tilde{\Lb}(r\chi)+\tau r\chi)|\phi|^2\\
&+\left((1+\tau)v_*^\ga+(1-\tau)u_*^\ga\right)\big((1-\tau^2)|\phi|^2-2(\om-\tilde{\om}\tau)\tilde{\nabb}(r\phi) \phi\big)\\
&+(v_*^\ga-u_*^\ga)\big(-2\tau(1-\tau^2)|\phi|^2+2\phi( \tau(\om\cdot \tilde{\nabb})(r\phi)-(1-\tau^2){\tilde{\Lb}}(r\phi))\big)\\
&=(-\f12 r\tilde{\Lb}(r\chi)+v_*^\ga+u_*^\ga)|\phi|^2
+2(v_*^\ga+u_*^\ga)(\tau {\tilde{\Lb}}-\om\cdot \tilde{\nabb})(r\phi) \phi\\
&=-r^2\tilde{r}^{-1} \tilde{\Om}_{ij}(r^{-3}(v_*^\ga+u_*^\ga) \om_j\tilde{\om}_i |r\phi|^2)+\tilde{r}^{-2}r^2\tilde{\Lb}(r^{-1}\tau\tilde{r}^2(v_*^\ga+u_*^\ga) |\phi|^2)\\
&+(-\f12 r\tilde{\Lb}(r\chi)+v_*^\ga+u_*^\ga)|\phi|^2-\tilde{r}^{-2}r^2\tilde{\Lb}(r^{-3}\tau\tilde{r}^2(v_*^\ga+u_*^\ga)) |r\phi|^2+r^2 \tilde{r}^{-1}\tilde{\Om}_{ij}(r^{-3}(v_*^\ga+u_*^\ga) \om_j\tilde{\om}_i) |r\phi|^2.
\end{align*}
Similarly we can compute that
\begin{align*}
&\tilde{r}^{-1}\tilde{\Om}_{ij}(r^{-3}\om_j\tilde{\om}_i)=-2r^{-4}(1-2\tau^2)-2\tau \tilde{r}^{-1}r^{-3},\\
&\tilde{r}^{-2}r^4\tilde{\Lb}(r^{-3}\tilde{r}^2\tau)=4\tau^2-1-2r\tilde{r}^{-1}\tau.
\end{align*}
Thus the coefficients of $|\phi|^2$ in the last line of the previous equation verify
\begin{align*}
&(-\f12 r\tilde{\Lb}(r\chi)+v_*^\ga+u_*^\ga)-\tilde{r}^{-2}r^4\tilde{\Lb}(r^{-3}\tau\tilde{r}^2(v_*^\ga+u_*^\ga)) +r^4 \tilde{r}^{-1} \tilde{\Om}_{ij}(r^{-3}(v_*^\ga+u_*^\ga) \om_j\tilde{\om}_i) \\
&=-r(\pa_t-\tilde{\om}\cdot \nabla)(v_*^\ga-u_*^\ga)-r\tau (\pa_t-\tilde{\om}\cdot \nabla)(u_*^\ga+v_*^\ga)+r(\pa_r-\tau \tilde{\om}\cdot \nabla)(u_*^\ga+v_*^\ga)\\
&\quad +(u_*^\ga+v_*^\ga)\left(1-(4\tau^2-1-2r\tilde{r}^{-1}\tau)-2(1-2\tau^2)+2\tau \tilde{r}^{-1}r\right)\\
&=r(\pa_t+\pa_r)u_*^\ga+r(\pa_r-\pa_t)v_*^\ga-\tau r(\pa_t+\pa_r)u_*^\ga-\tau r(\pa_t-\pa_r)v_*^\ga=0.
\end{align*}
By using integration by parts on the backward light cone $\mathcal{N}^{-}(q)$, we derive that
\begin{align*}
&\int_{\mathcal{N}^{-}(q)}\big(-r^2\tilde{r}^{-1} \tilde{\Om}_{ij}(r^{-3}(v_*^\ga+u_*^\ga) \om_j\tilde{\om}_i |r\phi|^2)+\tilde{r}^{-2}r^2\tilde{\Lb}(r^{-1}\tau\tilde{r}^2(v_*^\ga+u_*^\ga) |\phi|^2)\big)r^{-2}\tilde{r}^2 d\tilde{u}d\tilde{\om}\\
&= \int_{\mathcal{N}^{-}(q)\cap \B_R}r^{-1}\tau \tilde{r}^2 (u_*^\ga+v_*^\ga)|\phi|^2d\tilde{\om}.
\end{align*}
To summarize, the above computations show that
\begin{equation}
\label{eq:EST:Nq:comp}
\begin{split}
&\int_{\mathcal{N}^{-}(q)}((1+\tau)v_*^\ga+(1-\tau)u_*^\ga ) \frac{\La^{3-p}}{p+1}|\phi|^{p+1} \tilde{r}^2d\tilde{u}d\tilde{\om}\\
&\leq -\int_{\mathcal{N}^{-}(q)}i_{J^{X, \chi}[\phi]}d\vol+ \int_{\mathcal{N}^{-}(q)\cap \B_R}r^{-1}\tau \tilde{r}^2 (u_*^\ga+v_*^\ga)|\phi|^2d\tilde{\om}.
\end{split}
\end{equation}
We next compute the boundary integral on $\B_R\cap \mathcal{J}^{-}(q)$ in the inequality \eqref{eq:positive:ga:comp}, which we carry out in the coordinate system $(t, x)$. As the initial hypersurface $\B_R$ has the volume form $dx$, the contraction reads
\begin{align*}
i_{J^{X, \chi}[\phi ]}d\vol&= T[\phi ]_{0 L}X^L+T[\phi ]_{\Lb 0}X^{\Lb}- \f12 \pa_t\chi |\phi|^2+ \f12\chi \pa_t|\phi|^2\\
&= \f12 v_*^\ga( |L\phi|^2+|\nabb\phi|^2+\frac{2\La^{3-p}}{p+1}|\phi|^{p+1})-\f12\pa_t\chi \cdot |\phi|^2+\f12\chi \pa_t|\phi|^2 \\
&\quad +\f12 u_*^\ga(|{\Lb}\phi|^2+|\nabb\phi|^2+\frac{2\La^{3-p}}{p+1}|\phi|^{p+1})\\
&=\f12(u_*^\ga+v_*^\ga)( |\nabb\phi|^2+\frac{2\La^{3-p}}{p+1}|\phi|^{p+1})+\f12 v_*^\ga r^{-2}|L(r\phi)|^2\\
&\quad +\f12 u_*^\ga r^{-2}|{\Lb}(r\phi)|^2-\div(\om r^{-1}|\phi|^2(u_*^\ga+v_*^\ga)).
\end{align*}
Here $\om=\frac{x}{|x|}$ can be viewed as a vector on $\mathbb{R}^{3}$ and the divergence is taken over the initial hypersurface $\B_{R}$. The integral of the divergence term can be computed by using integration by parts. Under the coordinates $\tilde{x}=x-x_0$ on the initial hypersurface, we have
\begin{align*}
\int_{\mathcal{J}^{-}(q)\cap \B_R} \div (\om r^{-1}|\phi|^2(u_*^\ga+v_*^\ga))dx
&=\int_{\mathcal{N}^{-}(q)\cap \B_R} \tilde{r}^2 \tilde{\om} \cdot\om r^{-1}|\phi|^2(u_*^\ga+v_*^\ga)d\tilde{\om}.
\end{align*}
This term cancels the one from the integral on $\mathcal{N}^{-}(q)$ in the estimate \eqref{eq:EST:Nq:comp}.
By our assumption on the initial data, we thus bound that
\begin{align*}
&\int_{\mathcal{J}^{-}(q)\cap \B_R}i_{J^{X, \chi}[\phi ]}d\vol+\int_{\mathcal{N}^{-}(q)\cap \B_R} \tilde{r}^2 \tau r^{-1}|\phi|^2(u_*^\ga+v_*^\ga)d\tilde{\om}\\
&=\f12 \int_{\mathcal{J}^{-}(q)\cap \B_R} (u_*^\ga+v_*^\ga)( |\nabb\phi|^2+\frac{2\La^{3-p}}{p+1}|\phi|^{p+1})+ v_*^\ga r^{-2}|L(r\phi)|^2+ u_*^\ga r^{-2}|{\Lb}(r\phi)|^2 dx\\
&\leq C_R\int_{\B_R}(R-|x|)^\ga | L\phi|^2 + |\nabb\phi|^2+|{\Lb}\phi|^2 +|\phi|^2+(R-|x|)^{p-3}|\phi|^{p+1}dx\\
&\leq C_R \tilde{\cE}_{0, \ga}
\end{align*}
for some constant $C_R$ depending only on $R$. In particular this constant is independent of the choice of the point $q$. Now combining estimates \eqref{eq:positive:ga:comp} and \eqref{eq:EST:Nq:comp}, we derive the estimate \eqref{eq:comp:v:EF} of the Proposition in view of the inequality
\[
(1+\tau)v_*^\ga+(1-\tau)u_*^\ga\geq \frac{1}{2}\left((1-\tau)u_*^{\ga}+v_*^{\ga}\right).
\]
\end{proof}
\iffalse
Like the case in the exterior region, we rely on the representation formula for the linear wave equation to obtain the pointwise decay estimates for the solution based on the weighted energy flux bound in the previous Proposition. The solution consists of the linear part which evolves from the initial data and the nonlinear part. However the initial data are not bounded even in the energy space. We instead use the energy method to control the linear evolution.
For this purpose, consider the following Cauchy problem
\begin{equation}
\label{eq:lineareq:com}
\Box w=0,\quad w(0, x)=0,\quad \pa_t w(0, x)=w_1,\quad |x|\leq R.
\end{equation}
For $0<\ga<1$ and any function $w(x)$ of $x$, denote
\begin{align*}
\mathcal{E}_{\ga}[w](\B_{R})=\int_{|x|\leq R}|w|^2(R-|x|)^\ga +\sum\limits_{i,j}|\Om_{ij}w|^2(R-|x|)^\ga+|\nabla w|^2(R-|x|)^{2+\ga}dx,
\end{align*}
where $\Om_{ij}=x_i\pa_j-x_j\pa_i$ are the angular momentum vector fields.
We have the following estimate for the linear solution.
\begin{Prop}
\label{prop:lineardecay:comp}
Consider the Cauchy problem to the linear wave equation \eqref{eq:lineareq:com}. Then there exists constant $C$ depending only on $\ga$ such that
\begin{equation}
\label{eq:lineardecay:comp}
|w(t, x)|^2\leq C (R-t)^{-1-\ga} \mathcal{E}_{ \ga}[w_1](\B_R),\quad \forall (t, x)\in \mathcal{J}^+(\B_{R}).
\end{equation}
In particular we note that the constant $C$ is independent of the radius $R$.
\end{Prop}
To prove this Proposition, we first establish a type of Hardy's inequality.
\begin{Lem}
\label{lem:Hardy:com}
For $0<\ga<1$ and $\ep>0$, we have the following type of Hardy's inequality
\begin{equation*}
\label{eq:Hardy:com}
\int_{\B_{R}}|\varphi|^2 (R-|x|)^{-1+\ep}dx\leq C R^{\ep+\ga-1} \int_{\B_{R}}(|\varphi|^2R^{-2}+|\nabla \varphi|^2)(R-|x|)^\ga dx
\end{equation*}
for some constant $C$ depending only on $\ep$ and $\ga$.
\end{Lem}
\begin{proof}
Note that the inequality is scaling invariant in terms of $R$ by the scaling transformation $\varphi(x)=\varphi_1(x/R)$. It hence suffices to prove the lemma for the case when $R=1$.
Moreover the inequality is homogeneous. Without loss of generality we assume that
\[
\int_{\B_{1}}(|\varphi|^2+|\nabla \varphi|^2)(1-|x|)^\ga dx=1.
\]
Here $\B_{1}$ denotes the ball with radius $1$ on the initial hypersurface.
We can estimate that
\begin{align*}
&\int_{\om}|\varphi|^2(r\om)(1-r)^{\ep}r^4 d\om +\ep\int_{0}^{r}\int_{\om}|\varphi|^2 (1-s)^{\ep-1}s^4 dsd\om \\
&=\int_{0}^{r}\int_{\om}|\varphi|^2 4(1-s)^{\ep}s^3+2 \varphi \cdot \pa_r\varphi (1-s)^{\ep}s^4 \quad dsd\om\\
&\leq \int_{0}^{r}\int_{\om} 64\ep^{-1}|\varphi|^2 (1-s)^{\ga}s^2+16\ep^{-1}|\nabla\varphi|^2(1-s)^\ga s^2+\f12 \ep |\varphi|^2(1-s)^{2\ep-\ga}s^4 \quad dsd\om.
\end{align*}
Since $\ga<1$ and $\ep>0$, take $r=1$. We derive from the above estimate that
\begin{align*}
\ep\int_{0}^{1}\int_{\om}|\varphi|^2 (1-s)^{\ep-1}s^4 dsd\om \leq 160\ep^{-1}.
\end{align*}
On the other hand, we have
\begin{align*}
\int_{|x|\leq \f12}|\varphi|^2(1-|x|)^{\ep-1}dx \leq 2^{1+\ga-\ep}\int_{|x|\leq \f12}|\varphi|^2(1-|x|)^{\ga} dx \leq 2^{1+\ga-\ep}.
\end{align*}
Combining the above two bounds, we conclude that
\begin{align*}
\int_{|x|\leq 1}|\varphi|^2(1-|x|)^{\ep-1}dx\leq 2^{1+\ga-\ep}+640\ep^{-2}.
\end{align*}
Hence the lemma holds.
\end{proof}
We also need the following weighted Sobolev embedding.
\begin{Lem}
\label{lem:weightedSob:com}
Let $\varphi$ be a smooth function defined on the ball $\B_{R}$. Then we have the weighted Sobolev embedding
\begin{equation}
\label{eq:weighteSob:com}
|\varphi|^2(x)\leq C R^{-1-\ga}\mathcal{E}_0 ,\quad \forall |x|\leq R
\end{equation}
for some constant $C$ depending only on $\ga$, where $\mathcal{E}_0=\int_{\B_{R}}|\varphi|^2(R-|x|)^{\ga}R^{-2}dx+\mathcal{E}_{ \ga}[\nabla\varphi](\B_{R})$.
\end{Lem}
\begin{proof}
Like the previous Lemma, the inequality is scaling invariant. Hence it is sufficient to prove this Lemma for the case when $R=1$ and
\[
\mathcal{E}_0=\int_{B_{1}}(|\varphi|^2+|\nabla\varphi|^2+|\nabla\Om_{ij}\varphi|^2)(1-|x|)^{\ga}+|\nabla\nabla\varphi|^2(1-|x|)^{2+\ga}dx=1
\]
For small $|x| \leq \frac{1}{2}$, we can use the standard Sobolev embedding to conclude that
\begin{align*}
|\varphi|^2(x)\leq C \|\varphi\|_{H^2(\B_{\f12})}^2\leq 2^{2+\ga}C.
\end{align*}
Here $C$ is the universal constant from the embedding inequality (however it may vary in different places). For large $|x|$ we make use of the better decay of the angular derivative. From the previous Lemma \ref{lem:Hardy:com}, we derive the better decay of $\varphi$ as well as $\pa_\om\varphi$
\begin{equation}
\label{eq:bd4varphiom}
\int_{|x|\leq 1}(|\varphi|^2+|\pa_\om\varphi|^2)(1-|x|)^{\ep-1}dx\leq C_{\ep, \ga}
\end{equation}
for some constant $C_{\ep, \ga}$ depending only on $\ep$ and $\ga$. Here $\pa_\om$ denotes derivatives of the angular momentum $\Om_{ij}$. By using the Poincar\'e inequality we can derive that
\begin{align*}
\int_{\om}|\varphi|^6d\om \leq C\int_{\om}|\varphi|^2+|\pa_\om\varphi|^2d\om \cdot \int_{\om}|\varphi|^4d\om
\end{align*}
for some universal constant $C$. For some fixed constant $r_0\in[\frac{1}{3}, \f12]$ and $r\geq r_0$, integrate from the sphere with radius $r_0$. We can derive that
\begin{align*}
\int_{\om}|\varphi|^4(r\om)d\om&=\int_{\om}|\varphi|^4(r_0\om)d\om+4\int_{r_0}^{r}\pa_r\varphi \cdot \varphi^3 d\om dr\\
&\leq \int_{\om}|\varphi|^4(r_0\om)d\om+C\left(\int_{r_0}^{r}\int_{\om}|\pa_r\varphi|^2(1-|x|)^\ga drd\om \right)^\f12\left(\int_{r_0}^{r}\int_{\om}|\varphi|^6(1-|x|)^{-\ga} drd\om \right)^\f12\\
&\leq \int_{\om}|\varphi|^4(r_0\om)d\om+C \sup\limits_{r_0\leq s\leq r}\left(\int_{\om}|\varphi|^4(s\om)d\om\right)^\f12 \left(\int_{r_0}^{r}\int_{\om}(|\varphi|^2+|\pa_\om \varphi|^2)(1-|x|)^{-\ga} drd\om \right)^\f12
\end{align*}
for some universal constant $C$ and all $r_0 \leq r\leq 1$. By choosing $\ep=1-\ga$ in estimate \eqref{eq:bd4varphiom} and $r_0=\f12$ in the above inequality, we derive from the previous estimate that
\[
\int_{\om}|\varphi|^4(r\om)d\om\leq C_\ga,\quad \forall \f12 \leq r\leq 1
\]
for some constant $C_\ga$ depending only on $\ga$. Here we used the bound of $\varphi$ on the sphere with radius $\f12$.
For $\pa_\om \varphi$, we are not able to derive the same bound due to the lack of good control on $\pa_\om \pa_\om \varphi$. By using Sobolev embedding, we first have
\[
\left(\int_{\frac{1}{3}}^{\frac{1}{2}}\int_{\om}|\pa_\om \varphi|^4 r^2 dr d\om\right)^\f12 \leq C\int_{\frac{1}{3}}^{\f12}\int_{\om} (|\pa_\om \varphi|^2+|\nabla\pa_\om \varphi|^2 r^2) drd\om\leq C
\]
for some universal constant $C$. In particular we can choose some $r_0\in[\frac{1}{3}, \frac{1}{2}]$ such that
\[
\int_{\om}|\pa_\om\varphi|^4(r_0\om)d\om \leq C.
\]
Then replace $\varphi$ with $\pa_\om \varphi$ in the previous $L^4$ estimate. We obtain that
\begin{align*}
\int_{\om}|\pa_\om\varphi|^4d\om &\leq C+\sup\limits_{r_0\leq s\leq r}\left(\int_{\om}|\pa_\om\varphi|^4(s\om)d\om\right)^\f12 \left(\int_{r_0}^{r}\int_{\om}(|\pa_\om\varphi|^2+|\nabla \pa_\om \varphi|^2)(1-|x|)^{-\ga} drd\om \right)^\f12\\
&\leq C+\sup\limits_{r_0\leq s\leq r}\left(\int_{\om}|\pa_\om\varphi|^4(s\om)d\om\right)^\f12 (1-r)^{-\ga},
\end{align*}
which implies that
\begin{align*}
\int_{\om}|\pa_\om \varphi|^4d\om \leq C (1-r)^{-2\ga},\quad \forall r_0\leq r\leq 1.
\end{align*}
Now for any $2\leq p_1\leq 4$, integrate from the sphere with the same $r_0$ such that the $L^4$ norm of $\pa_\om\varphi $ is bounded. We can show that
\begin{align*}
\int_{\om}|\pa_\om\varphi|^{p_1}(r\om) d\om\leq \int_{\om}|\pa_\om \varphi|^{p_1}(r_0\om) d\om+&C_{p_1}\left(\int_{r_0}^r \int_{\om}|\pa_r\pa_\om \varphi|^2(1-|x|)^{\ga} drd\om\right)^\f12\\
\cdot &\left(\int_{r_0}^{r}\int_{\om }|\pa_\om \varphi|^{2p_1-2}(1-|x|)^{-\ga} drd\om \right)^\f12.
\end{align*}
Now choosing $p_1=2+\ep$, $\ep=\frac{1-\ga}{3}$ and interpolation between $L^2$ and $L^4$, we derive that
\begin{align*}
\int_{\om}|\pa_\om\varphi|^{p_1}(r\om) d\om &\leq C+C\left(\int_{r_0}^{r}\int_{\om }|\pa_\om \varphi|^{2}(1-|x|)^{\ep-1} drd\om\right)^{\frac{1-\ep}{2}}\left(\int_{r_0}^{r}\int_{\om }|\pa_\om \varphi|^{4}(1-|x|)^{1+\ep} drd\om\right)^{\frac{\ep}{2}}\\
&\leq C+C_\ga \left(\int_{r_0}^{r}(1-s)^{1+\ep-2\ga}ds\right)^{\f12\ep}\leq C_\ga,\quad \forall r_0\leq r\leq 1.
\end{align*}
Here we have used the bound of the $L^4$ estimate for $\pa_\om \varphi$ and the assumption that $\ga<1$. Since $p_1=2+\ep>2$, this estimate together with the above uniform $L^4$ bound for $\varphi$ leads to the uniform pointwise estimate for $\varphi$.
\end{proof}
We now use these two lemmas to prove Proposition \ref{prop:lineardecay:comp}. Since the equation is linear, without loss of generality we may assume that the initial data verify
\[
\int_{|x|\leq R}(|w_1|^2 +\sum\limits_{i,j}|\Om_{ij}w_1|^2)(R-|x|)^\ga dx=1.
\]
Recall the notation that $\B_{q}(\tilde{r})$ denotes the spatial ball centered at the point $q$ with radius $\tilde{r}$. By using the energy estimates for the linear wave equation, we derive that
\begin{align*}
\int_{\B_{(t, 0)}(r)}|\pa w|^2 dx+\int_{r}^{t+r}\int_{\om}|(\pa_u, \nabb) w|^2(t+r-s, r\om) s^2 ds d \om\leq \int_{\B_{(0, 0)}(t+r)}|\pa w|^2 dx .
\end{align*}
Here $\pa$ is short for the full derivative $(\pa_t, \nabla)$.
Multiply both sides by $(R-t-r)^{\ga-1}$ and integrate $r$ from $0$ to $R-t$. We derive that
\begin{align*}
&\int_{B_{(t, 0)}(R-t)}|\pa w|^2(R-t-|x|)^\ga dx\\
&=\ga \int_{0}^{R-t}(R-t-r)^{\ga-1}\int_{B_{(t, 0)}(r)}|\pa w|^2 dx dr\\
&\leq \ga\int_{0}^{R-t}(R-t-r)^{\ga-1} \int_{s\leq r+t}\int_{\om }|w_1|^2 s^2 dsd\om \\
&=\int_{B_{R}}|w_1|^2(R-\max\{|x|, t\})^{\ga}dx\leq 1.
\end{align*}
By commuting the linear wave equation with the angular momentum vector fields $\Om_{ij}$, we also derive the bound
\[
\int_{\B_{(t, 0)}(R-t)}|\pa\Om_{ij} w|^2(R-t-|x|)^\ga dx\leq 1.
\]
Then by commuting the linear wave equation with $\nabla$, similarly we can derive that
\begin{align*}
&\int_{\B_{(t, 0)}(R-t)}|\pa \nabla w |^2(R-t-|x|)^{2+\ga}dx \leq \int_{\B_{R}}|\nabla w_1|^2(R-\max\{|x|, t\})^{\ga+2}dx\leq 1.
\end{align*}
To apply Lemma \ref{lem:weightedSob:com} to conclude the pointwise bound for $w(t, x)$, it remains to derive the weighted $L^2$ estimate for the solution $w$. For this we rely on the energy flux through the cone $\mathcal{N}^{-}(t+r, 0)$. Indeed since $w$ vanishes on the initial hypersurface, we can show that
\begin{align*}
\int_{\om}|w|^2(t, r\om) d\om &\leq \int_{\om}\left(\int_{r}^{r+t}|\pa_u w|(t+r-s, s\om)ds\right)^2d\om\\
&\leq \int_{\om} \int_{r}^{r+t}|\pa_u w|^2(t+r-s, s\om)s^2 \cdot r^{-1} ds d\om\\
&\leq r^{-1} \int_{\B_{(0, 0)}(t+r)}|w_1|^2dx.
\end{align*}
Therefore, multiplying both sides by $(R-t-r)^{\ga-1}$, we derive that
\begin{align*}
\int_{0}^{R-t}\int_{\om}|w|^2(t, r\om)(R-t-r)^{\ga-1} rd\om dr \leq \int_{0}^{R-t}(R-t-r)^{\ga-1}\int_{|x|\leq r+t}|w_1|^2 dx dr\leq \ga^{-1}.
\end{align*}
In particular we have
\begin{align*}
\int_{\B_{(t, 0)}(R-t)}|w|^2(R-t-r)^{\ga}dx\leq (R-t)^2 \int_{0}^{R-t}\int_{\om}|w|^2(R-t-r)^{\ga-1} rd\om dr\leq \ga^{-1}(R-t)^2.
\end{align*}
The linear decay estimate \eqref{eq:lineardecay:comp} then follows by using Lemma \ref{lem:weightedSob:com}. This proves Proposition \ref{prop:lineardecay:comp}.
\fi
Based on the above uniform weighted flux bound, we are able to derive the pointwise estimates for solutions of \eqref{eq:NLW:3D:conf}. Let $\phi^{lin}$ be the linear evolution on $\cJ^{+}(\B_R)$ with initial data $(\phi_0, \phi_1)$ on $\B_R$, that is,
\begin{align*}
\Box\phi^{lin}=0,\quad \phi^{lin}(0, x)=\phi_0,\quad \pa_t\phi^{lin}(0, x)=\phi_1,\quad t+|x|\leq R.
\end{align*}
To avoid too many constants, we make a convention in this section that $A\les B$ means that there is a constant $C$, depending only on $R$, $p$, $\ga$, $\tilde{\cE}_{0, \ga}+\mathcal{I}$ and a fixed small constant $0<\ep<10^{-2}(1-\ga)$, such that $A\leq CB$.
Before stating the main result of this section, we prove two integration lemmas.
\begin{Lem}
\label{lem:bd:vga}
Fix $(t_0, x_0)\in \mathcal{J}^+(\B_R)$. For all $0\leq \tilde{r}\leq t_0$, $t=t_0-\tilde{r}$ and $r=|x_0+\tilde{r}\tilde{\om}|$, we have the following uniform bound
\begin{equation*}
\int_{\S_{(t, x_0)}(\tilde{r})}(R-t-r)^{-\ga'}d\tilde{\om}\leq C (R-t)^{\ga'}(R-t_0)^{-\ga'} (v_0+\tilde{r})^{-\ga'}
\end{equation*}
for all $\ga'<1$ and for some constant $C$ depending only on $\ga'$ and $R$. Here $v_0=R-t_0-|x_0|$.
\end{Lem}
\begin{proof}
Denote $u_0=R-t_0+r_0$ and $v_0=R-t_0-r_0$ where $r_0=|x_0|$. By the assumption that $t_0+r_0\leq R$, we in particular have that $r\leq R-t$, which implies that
\[
(R-t-r)^{-\ga'}\leq 2 \left((R-t)^2-r^2\right)^{-\ga'}(R-t)^{\ga'}.
\]
Note that $r^2=r_0^2+\tilde{r}^2+2 \tilde{r} x_0\cdot \tilde{\om}$. We can compute that
\begin{align*}
\int_{|\tilde{\om}|=1}(R-t-r)^{-\ga'}d\tilde{\om}&\leq 4\pi (R-t)^{\ga'} \int_{-1}^{1}((R-t)^2-r_0^2-\tilde{r}^2-2\tilde{r}r_0\tau)^{-\ga'}d\tau\\
&\leq 4\pi(1-\ga')^{-1} (R-t)^{\ga'} (r_0 \tilde{r})^{-1} (u_0^{1-\ga'}(v_0+2\tilde{r})^{1-\ga'}-v_0^{1-\ga'}(u_0+2\tilde{r})^{1-\ga'}).
\end{align*}
By definition, since $u_0-v_0=2r_0$, we see that
\[
u_0(v_0+2\tilde{r})-v_0(u_0+2\tilde{r})=4\tilde{r}r_0.
\]
As $\gamma'<1$, we derive that
\[
u_0^{1-\ga'} (v_0+2\tilde{r})^{1-\ga'}-v_0^{1-\ga'}(u_0+2\tilde{r})^{1-\ga'}\leq C(R, \ga') \tilde{r}r_0 u_0^{-\gamma'}(v_0+\tilde{r})^{-\gamma'}
\]
for some constant $C(R, \ga')$ depending only on $\ga'$ and $R$.
The lemma then follows as $0\leq r_0\leq R-t_0$.
\end{proof}
The above integration lemma will be used for the larger $p$ case when $p>\frac{1+\sqrt{17}}{2}$. The following lemma will be used for small $p$.
\begin{Lem}
\label{lem:bd:vga:smallp:in}
Fix $(t_0, x_0)\in \mathcal{J}^+(\B_R)$. For all $0\leq \tilde{r}\leq t_0$, $t=t_0-\tilde{r}$, $r=|x_0+\tilde{r}\tilde{\om}|$ and $0<\ga<1$, $0\leq \a< 1$, we have the following uniform bound
\begin{equation*}
\int_{\S_{(t, x_0)}(\tilde{r})} ((1-\tau)u_*^{\ga}+v_*^{\ga})^{-\a} d\tilde{\om}\leq C (R-t_0)^{-\a \ga}
\end{equation*}
for some constant $C$ depending only on $\ga$, $\a$ and $R$.
\end{Lem}
\begin{proof}
Using the same notation as in the previous Lemma, denote $s=-\om_0\cdot \tilde{\om}$ with $\om_0=r_0^{-1}x_0$.
Recall that
\begin{align*}
&r^2=(\tilde{r}-r_0s)^2+(1-s^2)r_0^2,\quad \tau r =(\tilde{x}+x_0)\cdot \tilde{\om}=\tilde{r}-r_0s.
\end{align*}
We can write the integral as
\begin{align*}
\int_{\S_{(t, x_0)}(\tilde{r})} ((1-\tau)u_*^{\ga}+v_*^{\ga})^{-\a} d\tilde{\om}=2\pi\int_{-1}^{1} ((1-r^{-1}(\tilde{r}-r_0 s))u_*^{\ga}+v_*^{\ga})^{-\a}ds.
\end{align*}
Note that $R-t\leq u_*\leq 2(R-t)$. Hence
\begin{align*}
\int_{\S_{(t, x_0)}(\tilde{r})} ((1-\tau)u_*^{\ga}+v_*^{\ga})^{-\a} d\tilde{\om}\les \int_{-1}^{1} ((1-r^{-1}(\tilde{r}-r_0 s))(R-t)^{\ga}+v_*^{\ga})^{-\a} ds.
\end{align*}
Here and in the rest of the proof the implicit constants depend only on $R$, $\a$ and $\ga$.
For the case when $r_0\leq \frac{3}{4} (R-t_0)$, it holds that $$v_*\geq R-t_0-r_0\geq \frac{1}{4} (R-t_0),$$
from which we conclude that
\begin{align*}
\int_{\S_{(t, x_0)}(\tilde{r})} ((1-\tau)u_*^{\ga}+v_*^{\ga})^{-\a} d\tilde{\om}\les (R-t_0)^{-\a\ga}.
\end{align*}
Otherwise, when $r_0\geq \frac{3}{4}(R-t_0)$ and $\tilde{r}\leq 2 r_0$, note that
\begin{align*}
1-\tau=1-r^{-1}(\tilde{r}-r_0s) \geq \frac{1-s^2}{100}.
\end{align*}
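This lower bound follows from the formula for $r^2$ above: in this case
\[
1-\tau=\frac{r-(\tilde{r}-r_0s)}{r}=\frac{(1-s^2)r_0^2}{r(r+\tilde{r}-r_0s)},
\]
with $r\leq \tilde{r}+r_0\leq 3r_0$ and $r+\tilde{r}-r_0s\leq 6r_0$.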
Therefore we can show that
\begin{align*}
&\int_{-1}^{1}((1-\tau)(R-t)^{\ga}+v_*^{\ga})^{-\a} ds
\les \int_{-1}^{1} (R-t)^{-\a\ga}(1-s^2)^{-\a} ds \les (R-t_0)^{-\a\ga}.
\end{align*}
For the remaining case $\tilde{r}\geq 2r_0$, we instead have
\begin{align*}
v_*=R-t-r\geq v_0+10^{-2}(1+s) r_0.
\end{align*}
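This follows from the identity
\[
R-t-r=v_0+\tilde{r}+r_0-r=v_0+\frac{2r_0\tilde{r}(1+s)}{\tilde{r}+r_0+r},
\]
together with $\tilde{r}+r_0+r\leq 2(\tilde{r}+r_0)\leq 3\tilde{r}$ in the present case $\tilde{r}\geq 2r_0$.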
Therefore we derive that
\begin{align*}
\int_{-1}^{1}((1-\tau)(R-t)^{\ga}+v_*^{\ga})^{-\a} ds
&\les \int_{-1}^{1} (v_0+(1+s) r_0)^{-\a\ga} ds\\
&\les r_0^{-1}((v_0+2r_0)^{1-\a\ga}-v_0^{1-\a\ga})\\
&\les (R-t_0)^{-\a\ga}
\end{align*}
as $0\leq \a\ga<1$. This proves the lemma.
\end{proof}
\iffalse
\begin{Lem}
\label{lem:bd:vga:smallp:Grownall}
Fix $(t_0, x_0)\in \mathcal{J}^+(B_R)$. For all $0\leq \tilde{r}\leq \f12 (R-t_0)$, $t=t_0-\tilde{r}$ and $r=|x_0+\tilde{r}\tilde{\om}|$, we have the following uniform bound
\begin{equation*}
\int_{\S_{(t, x_0)}(\tilde{r})} ((1-\tau)u_*^{\ga}+v_*^{\ga})^{-\frac{p-1}{2}}v_*^{p-3}d\tilde{\om}\leq C (R-t_0)^{-\frac{p-1}{2}\ga}(R-t_0-r_0)^{p-3}
\end{equation*}
for some constant $C$ depending only on $p$, $\ga$ and $R$.
\end{Lem}
\begin{proof}
Denote $u_0=R-t_0+r_0$ and $v_0=R-t_0-r_0$ where $r_0=|x_0|$.
Denote $s=-\om_0\cdot \tilde{\om}$ with $\om_0=r_0^{-1}x_0$. Note that
\begin{align*}
&r^2=|x_0+\tilde{x}|^2=\tilde{r}^2+r_0^2+2r_0\tilde{r}\om_0\cdot \tilde{\om}=(\tilde{r}-r_0s)^2+(1-s^2)r_0^2,\\
& \tau r=r\om\cdot \tilde{\om}=(\tilde{x}+x_0)\cdot \tilde{\om}=\tilde{r}-r_0s.
\end{align*}
We can write the integral as
\begin{align*}
\int_{\S_{(t, x_0)}(\tilde{r})} ((1-\tau)u_*^{\ga}+v_*^{\ga})^{-\frac{p-1}{2}}v_*^{p-3}d\tilde{\om}=2\pi\int_{-1}^{1} ((1-r^{-1}(\tilde{r}-r_0 s))u_*^{\ga}+v_*^{\ga})^{-\frac{p-1}{2}}v_*^{p-3}ds
\end{align*}
Since $R-t\leq u_*\leq 2(R-t)$, hence
\begin{align*}
\int_{\S_{(t, x_0)}(\tilde{r})} ((1-\tau)u_*^{\ga}+v_*^{\ga})^{-\frac{p-1}{2}}v_*^{p-3}d\tilde{\om}\les \int_{-1}^{1} ((1-r^{-1}(\tilde{r}-r_0 s))(R-t)^{\ga}+v_*^{\ga})^{-\frac{p-1}{2}}v_*^{p-3}ds.
\end{align*}
For the case when $r_0\leq \frac{3}{4} (R-t_0)$, it holds that $v_*\geq R-t_0-r_0\geq \frac{1}{4} (R-t_0)$. In particular, we conclude that
\begin{align*}
\int_{\S_{(t, x_0)}(\tilde{r})} ((1-\tau)u_*^{\ga}+v_*^{\ga})^{-\frac{p-1}{2}}v_*^{p-3}d\tilde{\om}\les (R-t_0)^{p-3-\frac{p-1}{2}\ga}.
\end{align*}
Here we may notice that $1<p<3$, $\ga>0$. When $ \frac{3}{2}\tilde{r}\leq \frac{3}{4} (R-t_0)\leq r_0 $ and $s\geq r_0^{-1}\tilde{r}$, then
\begin{align*}
&\int_{r_0^{-1}\tilde{r}}^{1} ((1-r^{-1}(\tilde{r}-r_0 s))(R-t)^{\ga}+v_*^{\ga})^{-\frac{p-1}{2}}v_*^{p-3}ds \\
&\les \int_{r_0^{-1}\tilde{r}}^{1} (R-t)^{-\frac{p-1}{2}\ga} v_*^{p-3}ds\\
&\les (R-t)^{-\frac{p-1}{2}\ga+3-p} \int_{r_0^{-1}\tilde{r}}^{1} ((R-t)^2-r_0^2-\tilde{r}^2+2r_0\tilde{r}s)^{p-3}ds\\
&\les (R-t)^{-\frac{p-1}{2}\ga+3-p} (r_0\tilde{r})^{-1}\left(((R-t)^2-(r_0-\tilde{r})^2)^{p-2}-((R-t)^2-r_0^2+\tilde{r}^2)^{p-2}\right)\\
&\les (R-t)^{-\frac{p-1}{2}\ga+3-p} (r_0\tilde{r})^{-1}(r_0\tilde{r}-\tilde{r}^2)(u_0v_0+2u_0\tilde{r})^{p-3}\\
&\les (R-t)^{-\frac{p-1}{2}\ga+3-p} u_0^{p-3} (v_0+2\tilde{r})^{p-3}\\
&\les (R-t_0)^{-\frac{p-1}{2}\ga} (v_0+\tilde{r})^{p-3}.
\end{align*}
On the interval $[-1, r_0^{-1}\tilde{r}]$, first we have
\begin{align*}
1-r^{-1}(\tilde{r}-r_0s)=\frac{(1-s^2)r_0^2}{r(r+\tilde{r}-r_0s)}\geq \frac{(1+s)r_0^2}{2(\tilde{r}-r_0s)^2+2(1-s^2)r_0^2}\geq \frac{1+s}{100}
\end{align*}
\begin{align*}
R-t-r=v_0+\tilde{r}+r_0-\sqrt{\tilde{r}^2+r_0^2-2\tilde{r}r_0s}=v_0+\frac{2r_0\tilde{r}(1+s)}{\tilde{r}+r_0+r}\geq v_0+10^{-2}(1+s)\tilde{r}.
\end{align*}
Therefore we can show that
\begin{align*}
&\int^{r_0^{-1}\tilde{r}}_{-1} ((1-r^{-1}(\tilde{r}-r_0 s))(R-t)^{\ga}+v_*^{\ga})^{-\frac{p-1}{2}}v_*^{p-3}ds \\
&\les \int^{r_0^{-1}\tilde{r}}_{-1} ((R-t)^{\ga}(1+s)+(v_0+(1+s)\tilde{r})^{\ga})^{-\frac{p-1}{2}} (v_0+(1+s)\tilde{r})^{p-3}ds\\
&\les (R-t_0)^{-\frac{p-1}{2}\ga} v_0^{p-3}.
\end{align*}
\end{proof}
\begin{Lem}
\label{lem:bd:vga:smallp:direct}
Fix $(t_0, x_0)\in \mathcal{J}^+(B_R)$. For all $0\leq \tilde{r}\leq t_0$, $t=t_0-\tilde{r}$ and $r=|x_0+\tilde{r}\tilde{\om}|$, we have the following uniform bound
\begin{equation*}
\int_{\S_{(t, x_0)}(\tilde{r})} ((1-\tau)u_*^{\ga}+v_*^{\ga})^{-p}v_*^{p-3} d\tilde{\om}\leq C (R-t_0-r_0)^{p-3-(p-1)\ga}(R-t_0)^{-\ga}
\end{equation*}
for all $ \ga'<1$ for some constant $C$ depending only on $\ga'$ and $R$.
\end{Lem}
\begin{proof}
Again, we can write the integral as
\begin{align*}
\int_{\S_{(t, x_0)}(\tilde{r})} ((1-\tau)u_*^{\ga}+v_*^{\ga})^{-p}v_*^{p-3} d\tilde{\om}\les \int_{-1}^{1}((1-\tau)(R-t)^{\ga}+v_*^{\ga})^{-p}v_*^{p-3}ds.
\end{align*}
Denote $u_0=R-t_0+r_0$ and $v_0=R-t_0-r_0$ where $r_0=|x_0|$.
For the case when $\tilde{r}\leq 2r_0$, the argument is quite similar to the proof of previous lemma. One can first show that
\begin{align*}
1-\tau=1-r^{-1}(\tilde{r}-r_0s) \geq \frac{1-s^2}{100}, \quad v_*=R-t-r\geq v_0+10^{-2}(1+s)\tilde{r}.
\end{align*}
Hence
\begin{align*}
&\int_{-1}^{1}((1-\tau)(R-t)^{\ga}+v_*^{\ga})^{-p}v_*^{p-3}ds\\
&\les \int_{-1}^{1} ((R-t)^{\ga}(1-s^2)+(v_0+(1+s)\tilde{r})^{\ga})^{-p}(v_0+(1+s)\tilde{r})^{p-3}ds\\
&\les (R-t)^{-\ga}v_0^{\ga(1-p)+p-3}.
\end{align*}
Now for the case when $\tilde{r}\geq 2r_0$, we instead have
\begin{align*}
1-\tau=1-r^{-1}(\tilde{r}-r_0s) \geq \frac{1-s^2}{100}r_0^2\tilde{r}^{-2}, \quad v_*=R-t-r\geq v_0+10^{-2}(1+s) r_0.
\end{align*}
Hence
\begin{align*}
&\int_{-1}^{1}((1-\tau)(R-t)^{\ga}+v_*^{\ga})^{-p}v_*^{p-3}ds\\
&\les \int_{-1}^{1} ((R-t)^{\ga}(1-s^2)r_0^{2}\tilde{r}^{-2}+(v_0+(1+s) r_0)^{\ga})^{-p}(v_0+(1+s) r_0)^{p-3}ds\\
&\les \int_{-1}^{1} (v_0+(1+s) r_0)^{-\ga} v_0^{-\ga(p-1)+p-3} ds\\
&\les v_0^{-\ga(p-1)+p-3}(R-t_0)^{-\ga}.
\end{align*}
\end{proof}
\fi
Define
\[
\a_p=\frac{3+(p-2)^2}{(p+1)(5-p)},\quad 1<p<5.
\]
Now we are ready to prove the main result of this section.
\begin{Prop}
\label{Prop:pointwise:decay:EM}
The solution $\phi$ to the equation \eqref{eq:NLW:3D:conf} on $\mathcal{J}^{+}(\B_{R})$ verifies the following pointwise decay estimates:
\begin{itemize}
\item If
\begin{equation*}
\frac{1+\sqrt{17}}{2}<p<5,\quad 0<\ga<1,\quad (p-1)(3-\ga)>4,
\end{equation*}
then
\begin{equation}
\label{eq:phi:pt:Br:EM:largep}
|\phi(t_0, x_0)|\leq C \sup\limits_{|x|\leq R-t_0}|\phi^{lin}(t_0, x)|.
\end{equation}
\item
For the case when
\begin{align*}
2<p\leq \frac{1+\sqrt{17}}{2},\quad \ga=2-\ga_0+\ep,\quad 1<\ga_0-\ep<p-1,
\end{align*}
for all $\b\leq \frac{p-1}{p+1}\ga_0-\ep$, we have
\begin{equation}
\label{eq:phi:pt:Br:EM:smallp}
|\phi(t_0, x_0)|\leq C (1+\sup\limits_{t+|x|\leq R}|\phi^{lin}u_*^{1-\b} v_*^{1-\a_p \ga_0}|)u_0^{-1+\b} v_0^{-1+\a_p\ga_0}.
\end{equation}
The constant $C$ depends only on $\mathcal{I}+\tilde{\cE}_{0, \ga}$, $R$, $p$, $\ga$, $\b$ and $\ep>0$. Here $u_*=R-t+r$, $v_*=R-t-r$ and
$u_0=R-t_0$, $v_0=R-t_0-|x_0|$. The small positive constant $\ep$ may be different in different places.
\end{itemize}
\end{Prop}
\begin{proof}
To avoid too many constants, the implicit constant in $\les$ in the following proof depends only on $\mathcal{I}+\tilde{\cE}_{0, \ga}$, $R$, $p$, $\ga$, $\b$ and $\ep>0$.
The proof for this Proposition relies on the representation formula for linear wave equation. The nonlinearity will be controlled by using the weighted flux bound in Proposition \ref{prop:EF:cone:gamma}. Recall that for any $q=(t_0, x_0)\in \mathcal{J}^{+}(\B_{R})$, we have the representation formula for the solution
\begin{equation}
\label{eq:rep4phi:comp}
\begin{split}
4\pi\phi(t_0, x_0)&=
4\pi\phi^{lin}(t_0, x_0)
-\int_{\mathcal{N}^{-}(q)}\Box \phi \ \tilde{r} d\tilde{r}d\tilde{\om}.
\end{split}
\end{equation}
\iffalse
For the linear evolution part,
by the definition of $\tilde{\cE}_{1, \ga}$ and Lemma \ref{lem:weightedSob:com}, we derive that
\[
|\phi_0|\les \sqrt{\tilde{\cE}_{1, \ga}},\quad \forall |x|\leq R.
\]
In particular we have
\begin{align*}
\int_{\tilde{\om}}|\phi(0, x_0+t_0\tilde{\om})| d\tilde{\om}\les \sqrt{\tilde{\cE}_{1, \ga}}\les (R-t_0)^{-\frac{1+\ga}{2}}\sqrt{\tilde{\cE}_{1, \ga}}.
\end{align*}
For the estimate for $\pa\phi(0, x_0+t_0\tilde{\om})$, consider the linear wave equation
\[
\Box w=0,\quad w(0, x)=0, \quad \pa_t w(0, x)=|\pa\phi|(0, x)=|\phi_1|+|\nabla\phi_0|, \quad |x|\leq R.
\]
Then by the representation formula for linear wave equations we have
\begin{align*}
4\pi w(t_0, x_0)=t_0\int_{\tilde{\om}}\pa_t w(0, x_0+t_0\tilde{\om})d\om=t_0\int_{\tilde{\om}}|\pa\phi|(0, x_0+t_0\tilde{\om})d\tilde{\om}.
\end{align*}
By definition, we estimate that
\begin{align*}
\mathcal{E}_{\ga}[\pa_t w(0, x)](\B_R)&=\int_{\B_{R}}(|\pa \phi|^2+|\Om_{ij}|\pa\phi||^2)(R-|x|)^\ga+|\nabla |\pa\phi||^2 (R-|x|)^{2+\ga}dx\\
&\les \int_{\B_{R}}(|\pa \phi|^2+|{\Om_{ij}}\pa\phi|^2)(R-|x|)^\ga+|\pa\pa\phi|^2 (R-|x|)^{2+\ga}dx\\
&\les \tilde{\cE}_{1, \ga}.
\end{align*}
Thus by using Proposition \ref{prop:lineardecay:comp}, we conclude that
\begin{align*}
| w(t_0, x_0)|\les (R-t_0)^{-\frac{1+\ga}{2}}\sqrt{\tilde{\cE}_{1, \ga}}
\end{align*}
Therefore the linear evolution verifies the bound
\begin{align*}
|\int_{\tilde{\om}}t_0 \phi_1(x_0+t_0\tilde{\om})d\tilde{\om}+\pa_{t_0}\big(\int_{\tilde{\om}}t_0 \phi_0(x_0+t_0\tilde{\om})d\tilde{\om} \big)|
&\les \int_{\tilde{\om}} t_0 |\pa\phi| +|\phi_0| d\tilde{\om} \les (R-t_0)^{-\frac{1+\ga}{2}}\sqrt{\tilde{\cE}_{1, \ga}}.
\end{align*}
\fi
We mainly need to control the nonlinear part. From the equation \eqref{eq:NLW:3D:conf} as well as the flux bound \eqref{eq:comp:v:EF}, we can estimate that
\begin{align*}
&|\int_{\mathcal{N}^{-}(q)}\Box \phi \ \tilde{r} d\tilde{r}d\tilde{\om}|\\
&\leq \int_{\mathcal{N}^{-}(q)}\La^{3-p}|\phi|^{p}\ \tilde{r} d\tilde{r}d\tilde{\om}\\
&\leq \left(\int_{\mathcal{N}^{-}(q)} v_*^{\ga}\La^{3-p}|\phi|^{p+1}\ \tilde{r}^{2} d\tilde{r}d\tilde{\om}\right)^{\frac{p-1}{p+1}}\left(\int_{\mathcal{N}^{-}(q)}v_*^{-\frac{p-1}{2}\ga}\La^{3-p}|\phi|^{\frac{p+1}{2}}\ \tilde{r}^{\frac{3-p}{2}} d\tilde{r}d\tilde{\om}\right)^{\frac{2}{p+1}}\\
&\les \left(\int_{0}^{t_0}(\sup\limits_{x}|(R-t)^{\frac{1+\ga}{2}}\phi|^{\frac{p+1}{2}})\int_{\S_{(t_0-\tilde{r}, x_0)}(\tilde{r})}(R-t)^{-\frac{(p+1)(\ga+1)}{2}} \La^{3-p}v_*^{-\frac{p-1}{2}\ga}\tilde{r}^{\frac{3-p}{2}}d\tilde{\om} d\tilde{r} \right)^{\frac{2}{p+1}}.
\end{align*}
Here under the coordinates $(\tilde{t}, \tilde{x})$, $t=t_0-\tilde{r}$.
Let us first consider the decay estimate \eqref{eq:phi:pt:Br:EM:largep} for the larger $p$ case. By our assumption, we in particular have that
\[
p-3-\frac{p-1}{2}\ga=\frac{(p-1)(2-\ga)}{2}-2>\frac{(p-1)(3-\ga)}{4}-2>-1.
\]
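This chain of identities and inequalities is elementary; as a quick numerical sanity check (our own sketch, not part of the proof), one can sample parameters satisfying the hypotheses $\frac{1+\sqrt{17}}{2}<p<5$, $0<\ga<1$, $(p-1)(3-\ga)>4$:

```python
import math

# sample (p, ga) in the assumed range and verify
#   p - 3 - (p-1)ga/2 = (p-1)(2-ga)/2 - 2 > (p-1)(3-ga)/4 - 2 > -1
p_lo = (1 + math.sqrt(17)) / 2
checked = 0
for p in (p_lo + 0.01, 3.0, 4.5):
    for ga in (0.05, 0.5, 0.95):
        if (p - 1) * (3 - ga) <= 4:
            continue  # outside the assumed range
        lhs = p - 3 - (p - 1) * ga / 2
        # exact identity
        assert abs(lhs - ((p - 1) * (2 - ga) / 2 - 2)) < 1e-10
        # strict inequalities (use ga < 1 and (p-1)(3-ga) > 4)
        assert lhs > (p - 1) * (3 - ga) / 4 - 2 > -1
        checked += 1
assert checked > 0
```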
Since $v_*=R-t-r$, thus by using Lemma \ref{lem:bd:vga}, we can bound that
\begin{align*}
\int_{\S_{(t_0-\tilde{r}, x_0)}(\tilde{r})} v_*^{p-3-\frac{p-1}{2}\ga} d\tilde{\om}\les \left((R-t)^{-1}(R-t_0)\tilde{r}\right)^{p-3-\frac{p-1}{2}\ga}.
\end{align*}
Define
\begin{align*}
\tilde{\cM}(t)&=\sup\limits_{|x|\leq R-t}|(R-t)^{\frac{1+\ga}{2}}\phi|^{\frac{p+1}{2}}, \quad \forall 0\leq t\leq R\\
\tilde{f}(t_0, \tilde{r})&= (R-t_0)^{\frac{(p+1)(\ga+1)}{2}}(R-t_0+\tilde{r})^{\frac{p-1}{2}\ga-\frac{(p+1)(\ga+1)}{2}}(R-t_0)^{p-3-\frac{(p-1)\ga}{2}}\tilde{r}^{\frac{p-3}{2}-\frac{(p-1)\ga}{2}} .
\end{align*}
We then conclude that
\begin{align*}
|\phi(t_0, x_0)|^{\frac{p+1}{2}}&\les |\phi^{lin}(t_0, x_0)|^{\frac{p+1}{2}}+ \int_{0}^{t_0}\tilde{\cM}(t_0-\tilde{r} ) (R-t_0)^{-\frac{1+\ga}{2}\cdot \frac{p+1}{2}}\tilde{f}(t_0,\tilde{r}) d\tilde{r},
\end{align*}
which implies that
\begin{align}
\label{eq:cM:0}
\tilde{\cM}(t_0)&\les \sup\limits_{|x_0|\leq R-t_0}|\phi^{lin}(t_0, x_0)(R-t_0)^{\frac{1+\ga}{2}}|^{\frac{p+1}{2}} + \int_{0}^{t_0}\tilde{\cM}(t_0-\tilde{r} ) \tilde{f}(t_0,\tilde{r}) d\tilde{r}.
\end{align}
To apply Gronwall's inequality, we check that $\tilde{f}(t_0, \tilde{r})$ is uniformly integrable on $[0, t_0]$ with respect to $\tilde{r}$. Indeed, for the case when $p\geq 3$, we show that
\begin{align*}
\int_0^{t_0}\tilde{f}(t_0, \tilde{r})d\tilde{r}\leq \int_0^{t_0}(R-t_0)^{p-3}\tilde{r}^{\frac{p-3}{2}-\frac{(p-1)\ga}{2}}d\tilde{r}\les (R-t_0)^{p-3}t_0^{\frac{(p-1)(1-\ga)}{2}}\les 1.
\end{align*}
Here the implicit constant relies only on $p$, $\ga$ and $R$.
For the case when $p<3$, split the integral into two parts. For small $\tilde{r}$, we have a similar bound
\begin{align*}
\int_0^{\min\{R-t_0, t_0\}}\tilde{f}(t_0, \tilde{r})d\tilde{r}\leq \int_0^{R-t_0}(R-t_0)^{p-3}\tilde{r}^{\frac{p-3}{2}-\frac{(p-1)\ga}{2}}d\tilde{r}\les (R-t_0)^{p-3+\frac{(p-1)(1-\ga)}{2}} \les 1
\end{align*}
due to the relation
\[
p-3+\frac{(p-1)(1-\ga)}{2}=\frac{(p-1)(3-\ga)}{2}-2>0.
\]
For large $\tilde{r}\geq \min\{R-t_0, t_0\}$, note that for this remaining case ($p<3$),
\begin{align*}
\frac{p-3}{2}-\frac{(p-1)\ga}{2}<0,\quad \frac{p-1}{2}\ga-\frac{(p+1)(\ga+1)}{2}<-1
\end{align*}
as $p>1$, $\ga>0$. We thus can bound that
\begin{align*}
\int_{\min\{R-t_0, t_0\}}^{t_0}\tilde{f}(t_0, \tilde{r}) d\tilde{r}&\leq \int_{\min\{R-t_0, t_0\}}^{t_0} (R-t_0)^{\frac{(p+1)(\ga+1)}{2}+ \frac{3(p-3)}{2}-(p-1)\ga}(R-t_0+\tilde{r})^{\frac{p-1}{2}\ga-\frac{(p+1)(\ga+1)}{2}} d\tilde{r}\\
&\les (R-t_0)^{\frac{(p+1)(\ga+1)}{2}+ \frac{3(p-3)}{2}-(p-1)\ga+\frac{p-1}{2}\ga-\frac{(p+1)(\ga+1)}{2}+1} \\
&=(R-t_0)^{\frac{(p-1)(3-\ga)-4}{2}}\les 1.
\end{align*}
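The exponent identities used in this stretch are pure algebra; a quick numerical check (our own sketch, not part of the proof) over sample parameters:

```python
# verify the two exponent identities appearing above
max_dev = 0.0
for p in (2.2, 3.0, 4.8):
    for ga in (0.1, 0.5, 0.9):
        # p - 3 + (p-1)(1-ga)/2 = (p-1)(3-ga)/2 - 2
        d1 = (p - 3 + (p - 1) * (1 - ga) / 2) - ((p - 1) * (3 - ga) / 2 - 2)
        # (p+1)(ga+1)/2 + 3(p-3)/2 - (p-1)ga + (p-1)ga/2 - (p+1)(ga+1)/2 + 1
        #   = ((p-1)(3-ga) - 4)/2
        lhs = ((p + 1) * (ga + 1) / 2 + 3 * (p - 3) / 2 - (p - 1) * ga
               + (p - 1) * ga / 2 - (p + 1) * (ga + 1) / 2 + 1)
        d2 = lhs - ((p - 1) * (3 - ga) - 4) / 2
        max_dev = max(max_dev, abs(d1), abs(d2))
assert max_dev < 1e-10
```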
In view of \eqref{eq:cM:0}, Gronwall's inequality then implies that
\begin{align*}
|\phi(t_0, x_0)|\les \sup\limits_{|x|\leq R-t_0} |\phi^{lin}(t_0, x)|.
\end{align*}
This shows the bound \eqref{eq:phi:pt:Br:EM:largep}.
\bigskip
Next for estimate \eqref{eq:phi:pt:Br:EM:smallp} with small power $2<p\leq \frac{1+\sqrt{17}}{2}$, we control the nonlinearity directly by using the weighted flux for large $\tilde{r}$. We split the integral into two parts: specifically for the smaller $\tilde{r}$ on $[0, t_*]$ and larger $\tilde{r}$ on $[t_*, t_0]$, where we define
\[
t_*= u_0^{\frac{2(3-p)+(p-1)\ga}{5-p} } v_0^{\frac{2(3-p)}{5-p}},\quad u_0=R-t_0,\quad v_0=R-t_0-r_0.
\]
Without loss of generality, we may assume that $t_*<t_0$. Otherwise it suffices to evaluate the integral on the single interval $[0, t_0]$.
For the integral on $[t_*, t_0]$, from the weighted energy estimate \eqref{eq:comp:v:EF}, we can show that
\begin{align*}
&|\int_{\mathcal{N}^{-}(q)\cap\{\tilde{r}\geq t_*\}}\Box \phi \ \tilde{r} d\tilde{r}d\tilde{\om}|\\
&\leq \left(\int_{\mathcal{N}^{-}(q)} ((1-\tau)u_*^{\ga}+v_*^{\ga})\La^{3-p}|\phi|^{p+1}\ \tilde{r}^{2} d\tilde{r}d\tilde{\om}\right)^{\frac{p}{p+1}}\\
&\quad \cdot \left(\int_{\mathcal{N}^{-}(q)\cap\{\tilde{r}\geq t_*\}}((1-\tau)u_*^{\ga}+v_*^{\ga})^{-p}\La^{3-p} \tilde{r}^{1-p} d\tilde{r}d\tilde{\om}\right)^{\frac{1}{p+1}}\\
&\les \left(\int_{t_*}^{t_0} \int_{\S_{(t_0-\tilde{r}, x_0)}(\tilde{r})}v_*^{p-3} ((1-\tau)u_*^{\ga}+v_*^{\ga})^{-p} u_*^{p-3} \tilde{r}^{1-p} d\tilde{\om} d\tilde{r}\right)^{\frac{1}{p+1}}.
\end{align*}
Since $v_*\geq v_0$, by using Lemma \ref{lem:bd:vga:smallp:in}, we then can show that
\begin{align*}
&|\int_{\mathcal{N}^{-}(q)\cap\{\tilde{r}\geq t_*\}}\Box \phi \ \tilde{r} d\tilde{r}d\tilde{\om}|^{p+1}\\
&\les v_0^{p-3-(p-1)\ga-\ep}\int_{t_*}^{t_0}(R-t)^{p-3}\tilde{r}^{1-p} \int_{\S_{(t_0-\tilde{r}, x_0)}(\tilde{r})}((1-\tau)u_*^{\ga}+v_*^{\ga})^{-1+\ep \ga^{-1}} d\tilde{\om} d\tilde{r}\\
&\les v_0^{p-3-(p-1)\ga-\ep} (R-t_0)^{-\ga+\ep} \int_{t_*}^{t_0}(R-t_0)^{p-3}\tilde{r}^{1-p} d\tilde{r}\\
&\les v_0^{p-3-(p-1)\ga-\ep} (R-t_0)^{p-3-\ga+\ep} t_*^{2-p} \\
&\les u_0^{-\frac{(3-p)(p+1)+\ga(p^2-4p+7)}{5-p}+\ep}v_0^{-\frac{(3-p)(p+1)}{5-p}-(p-1)\ga-\ep}\\
&\les u_0^{(-1+\b_1\ga_0+(1-\a_p)\ep-\delta_1)(p+1) } v_0^{(-1+\a_p \ga_0-(1+\b_1)\ep+\delta_1)(p+1)}\\
&\les u_0^{(-1+\b_1\ga_0-(\a_p+\b_1)\ep)(p+1) } v_0^{(-1+\a_p \ga_0)(p+1)},
\end{align*}
in which
\begin{align*}
\ga=2-\ga_0+\ep,\quad \a_p=\frac{p^2-4p+7}{(5-p)(p+1)},\quad \b_1=\frac{p-1}{p+1},\quad \delta_1=\frac{2(p-2)(3-p)(\ga_0-1)}{(5-p)(p+1)}.
\end{align*}
Since $\ep$ is arbitrary, we may replace $(\a_p+\b_1)\ep$ by $\ep$ without any confusion, and therefore derive that
\begin{align*}
|\phi(t_0, x_0)|&\les |\phi^{lin}(t_0, x_0)|+|\int_{\mathcal{N}^{-}(q) }\Box \phi \ \tilde{r} d\tilde{r}d\tilde{\om}| \\
&\les |\phi^{lin}(t_0, x_0)|+u_0^{-1+\b_1\ga_0-\ep} v_0^{-1+\a_p \ga_0}+|\int_{\mathcal{N}^{-}(q)\cap\{\tilde{r}\leq t_*\}}\Box \phi \tilde{r} d\tilde{r}d\tilde{\om}|.
\end{align*}
Now for smaller $\tilde{r}$, we rely on Gronwall's inequality. Similarly to the above argument, we first have
\begin{align*}
&|\int_{\mathcal{N}^{-}(q)\cap\{\tilde{r}\leq t_*\}}\Box \phi \ \tilde{r} d\tilde{r}d\tilde{\om}| \les \left(\int_{\mathcal{N}^{-}(q)\cap\{\tilde{r}\leq t_*\}}((1-\tau)u_*^{\ga}+v_*^\ga)^{-\frac{p-1}{2}}\La^{3-p}|\phi|^{\frac{p+1}{2}}\ \tilde{r}^{\frac{3-p}{2}} d\tilde{r}d\tilde{\om}\right)^{\frac{2}{p+1}}.
\end{align*}
Now define
\begin{align*}
\mathcal{M}_2[\phi](t)=\sup\limits_{|x|\leq R-t} |\phi(t, x)u_*^{1-\b}v_*^{ 1-\a_p \ga_0}|^{\frac{p+1}{2}},\quad \b\leq \b_1\ga_0-\ep.
\end{align*}
As $\ga_0<p-1$, it can be checked that
\begin{align*}
\b_1\ga_0<1,\quad \a_p\ga_0\leq 1.
\end{align*}
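These two inequalities are straightforward to verify over the admissible range $2<p\leq \frac{1+\sqrt{17}}{2}$, $1<\ga_0<p-1$; a quick numerical check (our own sketch, not part of the proof):

```python
import math

# admissible range for the small-p case: 2 < p <= (1+sqrt(17))/2, 1 < ga0 < p-1
p_hi = (1 + math.sqrt(17)) / 2
for p in (2.05, 2.3, p_hi):
    for s in (0.1, 0.5, 0.99):
        ga0 = 1 + s * (p - 2)   # samples 1 < ga0 < p-1
        b1 = (p - 1) / (p + 1)
        ap = (3 + (p - 2) ** 2) / ((p + 1) * (5 - p))
        assert b1 * ga0 < 1
        assert ap * ga0 <= 1
```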
Notice that $v_*\geq v_0$, $u_*\geq u_0$. The previous inequality in particular leads to
\begin{align*}
\mathcal{M}_2[\phi](t_0)\les 1+\mathcal{M}_2[\phi^{lin}](t_0)+ \int^{t_*}_{0} \mathcal{M}_2[\phi](t_0-\tilde{r})\int_{\S_{(t_0-\tilde{r}, x_0)}(\tilde{r})} ((1-\tau)u_*^{\ga}+v_*^\ga)^{-\frac{p-1}{2}}\La^{3-p} \tilde{r}^{\frac{3-p}{2}}d\tilde{\om}d\tilde{r} .
\end{align*}
Now by using Lemma \ref{lem:bd:vga:smallp:in} as $p<3$ and the choice of $t_*$, we estimate that
\begin{align*}
& \int^{t_*}_{0}\int_{\S_{(t_0-\tilde{r}, x_0)}(\tilde{r})}((1-\tau)u_*^{\ga}+v_*^\ga)^{-\frac{p-1}{2}}\La^{3-p} \tilde{r}^{\frac{3-p}{2}} d\tilde{r}d\tilde{\om}\\
&\les \int^{t_*}_{0}\int_{\S_{(t_0-\tilde{r}, x_0)}(\tilde{r})} ((1-\tau)u_*^{\ga}+v_*^\ga)^{-\frac{p-1}{2}}v_0^{p-3}u_0^{p-3} \tilde{r}^{\frac{3-p}{2}} d\tilde{r}d\tilde{\om}\\
&\les (R-t_0)^{-\frac{p-1}{2}\ga}v_0^{p-3}(R-t_0)^{p-3} t_*^{\frac{5-p}{2}} \les 1.
\end{align*}
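The cancellation of the $u_0$- and $v_0$-exponents in the last line is exact with the above choice of $t_*$; as a quick check of the exponent bookkeeping (our own sketch, not part of the proof):

```python
# exponents of u0 = R - t0 and of v0 in the product
#   (R - t0)^{-(p-1)ga/2} * v0^{p-3} * (R - t0)^{p-3} * t_*^{(5-p)/2}
# cancel exactly with t_* = u0^{(2(3-p)+(p-1)ga)/(5-p)} v0^{2(3-p)/(5-p)}
max_dev = 0.0
for p in (2.1, 2.3, 2.56):
    for ga in (0.3, 0.7, 0.95):
        a = (2 * (3 - p) + (p - 1) * ga) / (5 - p)  # u0-exponent of t_*
        b = 2 * (3 - p) / (5 - p)                   # v0-exponent of t_*
        exp_u = -(p - 1) * ga / 2 + (p - 3) + a * (5 - p) / 2
        exp_v = (p - 3) + b * (5 - p) / 2
        max_dev = max(max_dev, abs(exp_u), abs(exp_v))
assert max_dev < 1e-12
```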
Hence by using Gronwall's inequality, we conclude that
\begin{align*}
|\phi(t_0, x_0)|\les (1+\mathcal{M}_2[\phi^{lin}](t_0))^{\frac{2}{p+1}}u_0^{-1+\b}v_0^{-1+\a_p\ga_0}.
\end{align*}
This finishes the proof of Proposition \ref{Prop:pointwise:decay:EM}.
\end{proof}
\section{The solution in the interior region and proof for the main theorem}
The aim of this section is to combine Proposition \ref{Prop:pointwise:decay:EM} from the previous section with the estimates of Propositions \ref{prop:energyflux:H:ex} and \ref{prop:NLW:def:3d:ex:ID:H} to derive the asymptotic decay properties of solutions to the nonlinear wave equation \eqref{eq:NLW:semi:3d} in the interior region $\{t+2\geq |x|\}$, which is contained inside $\mathbf{D}$, enclosed by the forward hyperboloid $\mathbb{H}$ defined in \eqref{eq:def4Hyperboloid}.
Define the conformal map
\begin{align*}
\mathbf{\Phi}:(t, x)\longmapsto (\tilde{t}, \tilde{x})=\left(-\frac{t^*}{(t^*)^2-|x|^2}+R^*,\quad \frac{x}{(t^*)^2-|x|^2}\right)
\end{align*}
from the region $\mathbf{D}$ to Minkowski space.
The image of $\mathbf{\Phi}(\mathbf{D})$ is a truncated backward light cone
\begin{align*}
\mathbf{\Phi}(\mathbf{D})=\left\{(\tilde{t}, \tilde{x})|\quad \tilde{t}+|\tilde{x}|<R^*,\quad \tilde{t}\geq 0\right\}.
\end{align*}
Denote
\begin{equation*}
\Lambda(t, x)=(t^*)^2-|x|^2.
\end{equation*}
Direct computation shows that $\tilde{\phi}=(\Lambda \phi)\circ \mathbf{\Phi}^{-1}$ (as a scalar field in $(\tilde{t}, \tilde{x})$ variables on $\mathbf{\Phi}(\mathbf{D})$) verifies the nonlinear wave equation \eqref{eq:NLW:3D:conf}. For simplicity we may identify $\Lambda \phi$ with $(\Lambda \phi)\circ \mathbf{\Phi}^{-1}$.
The initial hypersurface for the above backward light cone is a ball with radius $R^*$
$$ \mathbf{\Phi}(\mathbb{H}) =\{(0, \tilde{x})||\tilde{x}|\leq R^*\}.$$
By doing this conformal transformation, the Cauchy problem of equation \eqref{eq:NLW:semi:3d} with initial hypersurface $\Hy$ is then equivalent to the Cauchy problem of equation \eqref{eq:NLW:3D:conf} with initial hypersurface $\mathbf{\Phi}(\mathbb{H}) $.
To apply the result of Proposition \ref{Prop:pointwise:decay:EM}, we need first to control the weighted energy norm $\tilde{\mathcal{E}}_{0, \ga}$ and the weighted spacetime integral $\mathcal{I}$ (defined before Proposition \ref{prop:EF:cone:gamma}) in terms of $\mathcal{E}_{0, \ga_0}[\phi]$.
We set
\begin{equation*}
\ga=2-\ga_0+\ep
\end{equation*}
with
\begin{equation}
\label{eq:ga1}
0<\ep<10^{-1}\min\{\ga_0-1, 2-\ga_0, |\ga_0+1-\frac{4}{p-1}|\}.
\end{equation}
By our assumption on $\ga_0$, we in particular have $0<\ga<1$.
Let $$\tilde{u}=\frac{\tilde{t}-\tilde{r}}{2},\quad \tilde{v}=\frac{\tilde{t}+\tilde{r}}{2}, \quad \tilde{\om}=\frac{\tilde{x}}{|\tilde{x}|}$$ be the null coordinates system, with the associated null frame $\{\tilde{L} , \tilde{\Lb} , \La e_1, \La e_2\}$ on $\mathbf{\Phi}(\mathbf{D})$. Here we recall that $\{L, \Lb, e_1, e_2 \}$ is the null frame on $\mathbf{D}$ under the coordinates $(t, x)$. Recall the weighted energy norm $\tilde{\cE}_{0, \ga}$ associated to $\tilde{\phi}$ on $\mathbf{\Phi}(\mathbb{H})$
\begin{align*}
\tilde{\cE}_{0, \ga}& =\int_{\mathbf{\Phi}(\mathbb{H})}(R^*-|\tilde{x}|)^{\ga} |\tilde{L} \tilde\phi|^2
+|{\tilde{\Lb}}\tilde{\phi}|^2+|\tilde{\nabb}\tilde{\phi}|^2+|\tilde{\phi}|^2 +(R^*-|\tilde{x}|)^{p-3+\ga}|\tilde{\phi}|^{p+1}d\tilde{x},
\end{align*}
where
\[
\tilde{\Om}_{ij}=\tilde{x}_i\tilde{\pa}_{j}-\tilde{x}_j\tilde{\pa}_{i}=x_i\pa_j-x_j\pa_i=\Om_{ij}
\]
and $\tilde{\pa}$ is the full derivative on $\mathbf{\Phi}(\mathbf{D})$.
Direct computations imply the following change of null frames:
\begin{align*}
&L=(t^*+r)^{-2}\tilde{L},\quad \Lb=(t^*-r)^{-2}\tilde{\Lb},\quad
\pa_i-\om_i \pa_r=((t^*)^2-r^2)^{-1}(\pa_{\tilde{x}_i}-\tilde{\om}_i\pa_{\tilde{r}}).
\end{align*}
We first prove the following bound.
\begin{Prop}
\label{prop:bd:ID:comp}
Let $\ga=2-\ga_0+\ep$. The solution $\tilde{\phi}= \La\phi$ on $\mathbf{\Phi}(\mathbf{D})$ verifies the following bounds
\begin{align}
\label{eq:bd:IDSB:comp}
\tilde{\cE}_{0,\ga}+
\iint_{\mathbf{\Phi}(\mathbf{D})} \La^{3-p}|\tilde{\phi}|^{p+1}(R^*-\tilde{t}-|\tilde{x}|)^{\ga-1} d\tilde{x}d\tilde{t}
\leq C\mathcal{E}_{0, \ga_0}[\phi]
\end{align}
for some constant $C$ depending only on $\ga_0$, $p$ and $\ep$.
\end{Prop}
\begin{proof}
On the hyperboloid $\mathbb{H}$, the coordinate functions verify the following relation
\begin{align*}
(t^*-(2R^*)^{-1})dt=rdr,\quad (t^*)^2-r^2=(R^*)^{-1}t^*,
\end{align*}
which implies that
\begin{align*}
d\tilde{r}=\La^{-2}((t^*)^2+r^2)dr-2 \La^{-2}t^* r dt=\f12 (t^*)^{-1}(t^*-(2R^*)^{-1})^{-1}dr.
\end{align*}
Since $\tilde{\om}=\om$, $\tilde{r}=\Lambda^{-1}r$, the surface measure obeys
\begin{align*}
d\tilde{x}=\tilde{r}^2d\tilde{r}d\tilde{\om}=\f12 \tilde{r}^2 (t^*)^{-1}(t^*-(2R^*)^{-1})^{-1}drd\om=\f12 \Lambda^{-2}(t^*)^{-1}(t^*-(2R^*)^{-1})^{-1} dx.
\end{align*}
We also note that
on the hyperboloid $\mathbb{H}$
\[
0\leq t^*-r=\frac{(R^*)^{-1}t^*}{t^*+r}\leq (R^*)^{-1},\quad \La=(R^*)^{-1}t^*,\quad t^*\geq (R^*)^{-1},
\]
which leads to the following bounds
\begin{align*}
d\tilde{x}\les \La^{-4}dx,\quad t^*+r\les \La\les t^*+r.
\end{align*}
Here, and only in the proof of this proposition, the implicit constant depends only on $\ga_0$, $p$ and $\ep$.
For the zeroth order weighted energy $\tilde{\cE}_{0, \ga}$, by definition we can estimate that
\begin{align*}
&(R^*-|\tilde{x}|)^{\ga} |{\tilde{L}}\tilde{\phi}|^2 +|\tilde{\nabb} \tilde{\phi}|^2
+ |\tilde{ \phi}|^2+|{\tilde{\Lb}}\tilde{\phi}|^2 +(R^*-|\tilde{x}|)^{p-3+\ga}|\tilde{\phi}|^{p+1} \\
&=(t^*+r)^{4-\ga}|{L}(\La\phi)|^2+\Lambda^4|\nabb\phi|^2+(t^*-r)^{4}|{\Lb}(\La\phi)|^2+|\La \phi|^2+(t^*+r)^{-p+3-\ga}|\La\phi|^{p+1}\\
&\les \La^{2+\ga_0}(|{L}(r\phi)|^2+|{L}\phi|^2+|\phi|^{p+1})+\Lambda^4|\nabb \phi|^2+ \La^2(
|{\Lb}\phi|^2+ |\phi|^2).
\end{align*}
Here on the hyperboloid $R^*-|\tilde{x}|=\La^{-1}(R^* \La-r)=\La^{-1}(t^*-r)=(t^*+r)^{-1}$.
Note that the classical energy flux through the hyperboloid $\mathbb{H}$ verifies the bound
\begin{align*}
\int_{\mathbb{H}} \La^{-2}|{\Lb} \phi|^2+|L \phi|^2+|\nabb \phi|^2+\frac{2}{p+1}|\phi|^{p+1}
dx\les E[\phi](\mathbb{H}).
\end{align*}
By using the bound $d\tilde{x}\les \La^{-4}dx$, we therefore can estimate that
\begin{align*}
\tilde{\cE}_{0, \ga}&\les \int_{\mathbb{H}}\big(\La^{2+\ga_0}(|{L}(r\phi)|^2+|{L}\phi|^2+|\phi|^{p+1})+\Lambda^4|\nabb \phi|^2+ \La^2(
|{\Lb}\phi|^2+ |\phi|^2) \big)\Lambda^{-4}dx\\
&\les E[\phi](\mathbb{H})+ \int_{\mathbb{H}^+} r^{\ga_0-2} |{L}(r\phi)|^2 dx\\
&\les {\cE}_{0, \ga_0}[\phi].
\end{align*}
Here the integral of $|\phi|^2$ is estimated by using Hardy's inequality and the $r$-weighted energy estimate through the hyperboloid $\Hy$ follows from Proposition \ref{prop:energyflux:H:ex}.
\iffalse
For higher order weighted energy $\tilde{\cE}_{1, \ga_1}$, by definition, we need to estimate $|\tilde{\pa}^{l_1+1} {\tilde{\Om}_{ij}}^{l_2} \tilde{\phi}|$ in $(\tilde{t}, \tilde{x})$ coordinates in terms of the associated quantities in $(t, x)$ coordinates for all $l_1+l_2=1$. First notice that the region inclosed by $\Hy^{-}$ and the initial hypersurface is compact. The decay estimates of Proposition \ref{prop:NLW:def:3d:ex} hold on this region by shifting the origin of the Minkowski space. In particular, the solution $\phi$ is uniformly bounded in this compact region. Therefore by using the standard energy estimate, we conclude that
\begin{align*}
E[Z\phi](\Hy^{-})\les \mathcal{E}_{1, \ga_0}[\phi]^{p-1},\quad \forall Z\in \{\pa_\mu, \Om_{\mu\nu}\}.
\end{align*}
Now as $$\tilde{\Om}_{ij}=\Om_{ij}, \quad \Om_{ij}(\La)=\Om_{ij}(t^*+r)=\Om_{ij}(t^*-r)=0, $$
the above argument for $\tilde{\cE}_{0, \ga_1}$ also holds for the case when $l_1=0$, that is,
\begin{align*}
\int_{\mathbf{\Phi}(\mathbb{H})}|\tilde{\pa} {\tilde{\Om}_{ij}} \tilde{\phi}|^2(R^*-|\tilde{x}|)^{\ga_1}d\tilde{x} \les E[\Om_{ij}\phi](\mathbb{H})+ \int_{\mathbb{H}^+} r^{\ga_0-2} |{L}(r\Om_{ij}\phi)|^2 dx\les \mathcal{E}_{1, \ga_0}[\phi]^{p-1}.
\end{align*}
Here we used the weighted energy estimate \eqref{eq:EE:hyB} for $Z\phi$.
For the case when $l_1=1$, $l_2=0$, for any scalar field $f$ defined on $\bf{\Phi}(\mathbb{H})$, we can estimate that
\begin{equation*}
\begin{split}
|\tilde{\pa}f|^2&=|{\tilde{\Lb}}f|^2+|{\tilde{L}}f|^2+|\tilde{\nabb}f|^2\\
&=|(t^*+r)^2 L f(\mathbf{\Phi})|^2+|(t^*-r)^2 {\Lb} f(\mathbf{\Phi})|^2+|\La \nabb f(\mathbf{\Phi})|^2\\
&\les \sum\limits_{\mu, \nu}(t^*+r)^2(|\Om_{\mu\nu} f(\mathbf{\Phi})|^2+|\pa f(\mathbf{\Phi})|^2)\\
&\les \La^2 \sum\limits_{ Z\in \Ga}|Z f(\mathbf{\Phi})|^2.
\end{split}
\end{equation*}
We remark here that the above estimate holds only restricted to the hyperboloid $\Hy$ or $\mathbf{\Phi}(\Hy)$. Moreover
let $\mathcal{Z}$ be the push forward of the vector field $Z$ under the map $\bf{\Phi}$. For any function $f$ defined on $\bf{\Phi}(\mathbb{H})$, we have
\begin{align*}
|[\mathcal{Z}, \tilde{\pa}]f|\les \sum\limits_{Z\in \Ga}\La |Z f(\bf{\Phi})|.
\end{align*}
Therefore by repeating the above process, we can show that
\begin{align*}
|\tilde{\pa}^{2} \tilde{\phi}|^2
&\les \La^2\sum\limits_{Z\in \Ga}|Z (\tilde{\pa} \tilde{\phi})(\Phi)|^2\\
&\les \La^2 \sum\limits_{Z\in \Ga}|\tilde{\pa}\mathcal{Z} \tilde{\phi}|^2+|[\tilde{\pa},\mathcal{Z} ] \tilde{\phi}|^2\\
&\les \La^4 \sum\limits_{Z\in \Ga}|Z^2 (\La \phi)|^2+|Z (\La\phi)|^2\\
&\les \La^{2 } \sum\limits_{k\leq 1, Z\in \Ga}|(t^*+r)^2 L Z^{k} (\La\phi)|^2+|(t^*-r)^2 \Lb Z^{k} (\La\phi)|^2+|\La \nabb Z^{k} (\La\phi)|^2.
\end{align*}
Therefore we can estimate that
\begin{align*}
&\int_{ \mathbf{\Phi}(\Hy)}|\tilde{\pa}^{2} \tilde{\phi}|^2(R^*-|\tilde{x}|)^{\ga_1+2}d\tilde{x}\\
&\les \sum\limits_{l\leq 1, Z\in \Ga}\int_{\mathbb{H}} \La^{2} \big( |(t^*+r)^2 L Z^{l} (\La\phi)|^2+|(t^*-r)^2 \Lb Z^{l}(\La\phi)|^2+|\La \nabb Z^{l} (\La\phi)|^2\big)(t^*+r)^{-\ga_1-2}\La^{-4}dx\\
&\les \sum\limits_{l\leq 1, Z\in \Ga}\int_{\mathbb{H}} (1+r)^{\ga_0-2}| L {Z}^l(r\phi) |^2 +\La^{-2}(| {\Lb }{Z}^l \phi |^2+| {Z}^l \phi |^2)+ |L {Z}^l \phi |^2+ |\nabb {Z}^l \phi |^2dx\\
&\les \sum\limits_{l\leq 1} E[Z^l\phi](\Hy)+\int_{\Hy^+} (1+r)^{\ga_0-2}| L {Z}^l(r\phi) |^2\les \mathcal{E}_{1, \ga_0}[\phi]^{p-1}.
\end{align*}
We thus complete the proof for the bound \eqref{eq:bd:ID:comp}.
\fi
For the spacetime integral $\mathcal{I}$, Proposition \ref{prop:spacetime:bd} in particular implies that
\begin{align*}
\iint_{\mathbf{D}} |\phi|^{p+1}v_+^{\ga_0-1-\ep} dxdt\les \mathcal{E}_{0, \ga_0}[\phi]
\end{align*}
for the solution $\phi$ to \eqref{eq:NLW:semi:3d}.
Since the map $\mathbf{\Phi}$ is conformal and $\La$ is the conformal factor, we conclude that
\begin{align*}
d\tilde{x}d\tilde{t}=\La^{-4} dxdt,
\end{align*}
which can also be derived by direct computation. Thus the associated scalar field $\tilde{\phi}$ on $\mathbf{\Phi}(\mathbf{D})$ verifies the following weighted bound
\begin{align*}
\iint_{\mathbf{\Phi}(\mathbf{D})} \La^{3-p}|\tilde{\phi}|^{p+1}(R^*-\tilde{t}-|\tilde{x}|)^{-\ga_0+1+\ep} d\tilde{x}d\tilde{t}&\les\iint_{\mathbf{\Phi}(\mathbf{D})} \La^{-p-1}|\tilde{\phi}|^{p+1}v_+^{\ga_0-1-\ep}\La^4 d\tilde{x}d\tilde{t}\\
&=\iint_{\mathbf{D}} |\phi|^{p+1}v_+^{\ga_0-1-\ep} dxdt\les \mathcal{E}_{0, \ga_0}[\phi].
\end{align*}
This finishes the proof of the proposition.
\end{proof}
We now prove the main Theorem \ref{thm:main} by showing pointwise decay estimates for the solution $\phi$ to \eqref{eq:NLW:semi:3d} in the interior region. As indicated previously, for any solution $\phi$ to \eqref{eq:NLW:semi:3d}, $\tilde{\phi}=\La\phi$ solves equation \eqref{eq:NLW:3D:conf} on the compact region $\mathbf{\Phi}(\mathbf{D})$.
In view of the previous Proposition \ref{prop:bd:ID:comp}, we derive that
\begin{align*}
\tilde{\cE}_{0, \ga}+\mathcal{I}\les \mathcal{E}_{0, \ga_0}[\phi].
\end{align*}
By our assumption on $\ga_0$ and the choice of $\ep$, we always have $0<\ga<1$. For the case when
$$\frac{1+\sqrt{17}}{2}<p<5, \quad \max\{\frac{4}{p-1}-1, 1\}<\ga_0<\min\{p-1, 2\},$$
the choice of $\ep$ also implies that
\begin{align*}
(p-1)(3-\ga)=(p-1)(1+\ga_0-\ep)>4.
\end{align*}
Then from Proposition \ref{Prop:pointwise:decay:EM}, we conclude that
\begin{align*}
|\La\phi|(\tilde{t}, \tilde{x})&\les \sup\limits_{|\tilde{y}|\leq R^*-\tilde{t}}|\tilde{\phi}^{lin}(\tilde{t}, \tilde{y})|.
\end{align*}
Here $\tilde{\phi}^{lin}$ is the linear evolution with initial data $(\tilde{\phi}(0, \tilde{x}), \tilde{\pa}_{\tilde{t}}\tilde{\phi}(0, \tilde{x}))$.
By the conformal transformation, $\tilde{\phi}^{lin}$ can be identified with $\La \phi_H^{lin}$, in which $\phi_H^{lin}$ was defined before Proposition \ref{prop:NLW:def:3d:ex:ID:H}.
Recall that
\[
\tilde{t}=R^*-\La^{-1}(t+3),\quad |\tilde{x}|=\La^{-1}r, \quad \La =(t+3-r)(t+3+r).
\]
Inside the hyperboloid $\mathbb{H}$, we have $v_+\leq t+3$. Thus
\begin{align*}
\frac{1}{8}u_+^{-1}&\leq R^*-\tilde{t}=\La^{-1}(t+3)\leq u_+^{-1},\\
\frac{1}{4}v_+^{-1}&\leq R^*-\tilde{t}-|\tilde{x}|=\La^{-1}(t+3-r)=(t+3+r)^{-1}\leq v_+^{-1}.
\end{align*}
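The exact identities behind these bounds can be checked numerically (our own sketch, not part of the proof); here we take $t^*=t+3$ as in the display above, and the value of $R^*$ below is an arbitrary positive constant, since the identities do not depend on it:

```python
# with t* = t + 3 and La = (t*)^2 - r^2, the map gives
#   R* - t~ = t*/La  and  R* - t~ - |x~| = (t* - r)/La = 1/(t* + r)
R_star = 1 / 3.0   # placeholder value; the identities hold for any R* > 0
for (t, r) in [(0.0, 1.0), (2.0, 3.5), (10.0, 0.5)]:
    ts = t + 3
    La = ts ** 2 - r ** 2
    t_tilde = R_star - ts / La
    x_tilde = r / La
    assert abs((R_star - t_tilde - x_tilde) - 1 / (ts + r)) < 1e-12
    assert t_tilde + x_tilde < R_star   # image lies in the truncated cone
```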
Therefore by using the decay estimate \eqref{eq:phi:pt:Br:largep:lin} of Proposition \ref{prop:NLW:def:3d:ex:ID:H}, we then conclude that
\begin{align*}
|\tilde{\phi}^{lin}(\tilde{t}, \tilde{x})|&\les \La |\phi^{lin}_H|\les \mathcal{E}_{1, \ga_0}[\phi]^{\frac{p-1}{2}} (2+t+|x|)^{-1}(2+||x|-t|)^{-\frac{\ga_0-1}{2}}\La\\
&\les \mathcal{E}_{1, \ga_0}[\phi]^{\frac{p-1}{2}} (R^*-\tilde{t})^{-\frac{1+\ga_0}{2}},
\end{align*}
which leads to
\begin{align*}
|\phi(t, x)|\les \mathcal{E}_{1, \ga_0}[\phi]^{\frac{p-1}{2}} \La^{-1}(R^*-\tilde{t})^{-\frac{1+\ga_0}{2}} \les \mathcal{E}_{1, \ga_0}[\phi]^{\frac{p-1}{2}}v_+^{-1}u_+^{-\frac{\ga_0-1}{2}}.
\end{align*}
This proves the pointwise decay estimate for the solution in the interior region for the large $p$ case.
Finally for the small $p$ case, take
\[
\b=\frac{\ga_0}{p+1}.
\]
The small positive constant $\ep$ can be chosen so that $\b\leq \frac{p-1}{p+1}\ga_0-\ep$, as $\ga_0>1$ and $p>2$.
Then by using the linear decay estimate \eqref{eq:phi:pt:Br:smallp:lin} of Proposition \ref{prop:NLW:def:3d:ex:ID:H}, we can show that
\begin{align*}
|\tilde{\phi}^{lin}u_*^{1-\b} v_*^{1-\a_p \ga_0}|&\les |\La \phi_{H}^{lin} u_+^{\b-1} v_+^{-1+\a_p \ga_0}|\\
&\les \sqrt{\mathcal{E}_{1, \ga_0}[\phi] } u_+ v_+ v_+^{-\a_p \ga_0}u_+^{-\b}u_+^{\b-1} v_+^{-1+\a_p \ga_0}\\
&\les \sqrt{\mathcal{E}_{1, \ga_0}[\phi] }.
\end{align*}
Here recall that $u_*=R^*-\tilde{t}$, $v_*=R^*-\tilde{t}-|\tilde{x}|$. Hence from estimate \eqref{eq:phi:pt:Br:EM:smallp} of Proposition \ref{Prop:pointwise:decay:EM}, we conclude that
\begin{align*}
|\tilde{\phi}(\tilde{t}_0, \tilde{x}_0)|&\les (1+\sup\limits_{\tilde{t}+|\tilde{x}|\leq R^*}|\tilde{\phi}^{lin}u_*^{1-\b} v_*^{1-\a_p \ga_0}|)(R^*-\tilde{t}_0)^{-1+\b} (R^*-\tilde{t}_0-|\tilde{x}_0|)^{-1+\a_p\ga_0}\\
&\les (1+\sqrt{\mathcal{E}_{1, \ga_0}[\phi]}) (R^*-\tilde{t}_0)^{-1+\b} (R^*-\tilde{t}_0-|\tilde{x}_0|)^{-1+\a_p\ga_0},
\end{align*}
which implies that in the interior region for the case when $2<p\leq \frac{1+\sqrt{17}}{2}$, the solution $\phi$ of \eqref{eq:NLW:semi:3d} verifies the following decay estimate
\begin{align*}
|\phi(t, x)|\leq |\La^{-1}\tilde{\phi}(\tilde{t}, \tilde{x})|\les (1+\sqrt{\mathcal{E}_{1, \ga_0}[\phi]}) u_+^{-1}v_+^{-1}u_+^{1-\b} v_+^{1-\a_p\ga_0}\les (1+\sqrt{\mathcal{E}_{1, \ga_0}[\phi]}) u_+^{-\b} v_+^{-\a_p\ga_0}.
\end{align*}
Here the implicit constant depends on $\mathcal{E}_{0, \ga_0}[\phi]$, $\ga_0$ and $p$.
We thus complete the proof for the main Theorem \ref{thm:main}.
\bigskip
As for the scattering result of Corollary \ref{cor:scattering:3D}, by using the standard energy estimate, the solution scatters in energy space if the mixed norm $\|\phi\|_{L_t^p L_x^{2p}}$ of the solution is finite (see e.g. Lemma 4.4 in \cite{Strauss78:NLW}). Moreover, it has been shown in the author's companion paper that the solution scatters in $\dot{H}^{s}$ for all $\frac{3}{2}-\frac{2}{p-1}\leq s\leq 1$ in the case when $p>\frac{1+\sqrt{17}}{2}$. In particular, it suffices to consider the small $p$ case when $2<p\leq \frac{1+\sqrt{17}}{2}$. By using the pointwise decay estimate of the main Theorem \ref{thm:main}, we estimate that
\begin{align*}
\|\phi\|_{L_t^p L_x^{2p}}^p&=\int_{\mathbb{R}} \left(\int_{\mathbb{R}^3} |\phi|^{p+1}v_+^{\ga_0-1-\ep}|\phi|^{p-1}v_+^{-\ga_0+1+\ep}dx\right)^{\f12} dt \\
&\les \int_{\mathbb{R}} \left(\int_{\mathbb{R}^3} |\phi|^{p+1}v_+^{\ga_0-1-\ep}(1+t)^{-\ga_0+1+\ep-(p-1)\a_p \ga_0}dx\right)^{\f12} dt\\
&\les \left(\int_{\mathbb{R}}\int_{\mathbb{R}^3} |\phi|^{p+1}v_+^{\ga_0-1-\ep}dx dt\right)^{\f12}\left(\int (1+|t|)^{-\ga_0+1+\ep-(p-1)\a_p \ga_0}dt\right)^{\f12}.
\end{align*}
In view of the uniform spacetime bound of Proposition \ref{prop:spacetime:bd}, $\|\phi\|_{L_t^p L_x^{2p}}$ is finite if
\begin{align*}
\ga_0-1+(p-1)\a_p \ga_0>1
\end{align*}
by choosing $\ep$ sufficiently small. As $1<\ga_0<p-1$, by choosing $\ga_0$ sufficiently close to $p-1$, this is equivalent to requiring that
\begin{align*}
f(p)=p-2+(p-1)^2 \frac{3+(p-2)^2}{(5-p)(p+1)}-1>0.
\end{align*}
It can be checked that $f$ has a unique root $p_*$ on $[2, 3]$, and that $f(p)>0$ when $p>p_*$. Numerically, one can show that
\[
2.3541<p_*<2.3542.
\]
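This numerical claim can be verified with a simple bisection (our own sketch; the function below is the $f(p)$ of the display above):

```python
# f(p) = p - 2 + (p-1)^2 * (3 + (p-2)^2) / ((5-p)(p+1)) - 1
def f(p):
    return p - 2 + (p - 1) ** 2 * (3 + (p - 2) ** 2) / ((5 - p) * (p + 1)) - 1

lo, hi = 2.0, 3.0
assert f(lo) < 0 < f(hi)       # sign change on [2, 3]
for _ in range(60):            # bisection for the root p_*
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
p_star = 0.5 * (lo + hi)
assert 2.3541 < p_star < 2.3542
```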
This proves the scattering result of Corollary \ref{cor:scattering:3D}.
\section{Introduction}
Entanglement entropy (EE) of subregions is an ill-defined quantity in quantum field theory (QFT). This fact can be understood from various perspectives. From a lattice point of view, as we reduce the lattice spacing a growing amount of entanglement across the entangling surface adds up, producing the usual area-law divergence (and others) in the limit. From the continuum theory perspective, the underlying reason has to do with the fact that algebras of operators associated to spatial regions are von Neumann algebras of type-III, for which all traces are either vanishing or infinite ---see {\it e.g.,}\ \cite{haag,Witten:2018lha}.
The situation improves when one considers two (or more) disjoint regions: entanglement measures such as mutual information $I(A,B)$ do make sense in QFT. The whole issue with the type-III-ness of subregion algebras has to do with the sharp spatial cut introduced by the entangling surface $\partial A$. When instead of considering a region and its complement, we consider two disjoint regions $A,B$, the so-called ``split-property''\footnote{This property holds in general under very mild assumptions related to the growth of the number of degrees of freedom at high energies, \cite{Buchholz:1973bk,Buchholz:1986dy}.} guarantees the existence of a tensor product decomposition of the global Hilbert space as $\mathcal{H}=\mathcal{H}_{\mathcal{N}_{AB}}\otimes \mathcal{H}_{\mathcal{N}_{AB}'}$ where $\mathcal{N}_{AB}$ and its commutant $\mathcal{N}'_{AB}$
are type-I factors. The idea is that there always exists one such factor $\mathcal{N}_{AB}$ which contains the algebra of the first region, $\mathcal{A}_A$, while still commuting with the operators in the algebra of the second, $\mathcal{A}_B$. Namely, one has $\mathcal{A}_A \subseteq \mathcal{N}_{AB} \subseteq (\mathcal{A}_B)'$. Importantly, contrary to $\mathcal{A}_A$ or $\mathcal{A}_B$, $\mathcal{N}_{AB}$ cannot be sharply associated to any particular geometric region\footnote{See our previous paper \cite{Bueno:2020vnx} for a possible notion of spatial ``algebra density'' in the case of free fermions.}. There is no problem in defining traces for type-I von Neumann algebras and so given $\mathcal{N}_{AB}$, we can define the corresponding von Neumann entropy $S(\mathcal{N}_{AB})$ as the entropy of the reduced state in any of the factors of the tensor product.
Now, there are infinitely many possible splits associated to a pair of regions $A,B$, so which one to choose? Interestingly, given a state which is cyclic and separating for the various algebras ({\it e.g.,}\ the vacuum), there is a somewhat canonical choice. This is \cite{Doplicher:1982cv,Doplicher:1984zz,Doplicher:1983if}
\begin{equation}
\mathcal{N}_{AB}\equiv \mathcal{A}_A \vee J_{AB} \mathcal{A}_A J_{AB}\,,\quad \text{with the commutant given by} \quad
\mathcal{N}_{AB}'=\mathcal{A}_B \vee J_{AB} \mathcal{A}_B J_{AB}\, . \label{tomo}
\end{equation}
Here we used the standard notation $\mathcal{A} \vee \mathcal{B}$ to refer to the double commutant of the algebra of the union, namely, $\mathcal{A} \vee \mathcal{B}\equiv (\mathcal{A} \cup \mathcal{B})''$.
Also, $J_{AB}$ is the Tomita-Takesaki modular conjugation operator associated to the algebra of $AB$ and the corresponding state. The von Neumann entropy associated to this type-I factor defines the reflected entropy \cite{Longo:2019pjj}
\begin{equation}\label{rsss}
R(A,B) \equiv S(\mathcal{N}_{AB})\, .
\end{equation}
An alternative route to the same notion was presented by Dutta and Faulkner in \cite{Dutta:2019gen}. A given state $\rho_{AB} $ in a Hilbert space $\mathcal{H}_A \otimes \mathcal{H}_B$ can be canonically purified as $\ket{ \sqrt{\rho_{AB}}}\in (\mathcal{H}_A \otimes \mathcal{H}_A^*) \otimes (\mathcal{H}_B \otimes \mathcal{H}_B^*)$. Then, the von Neumann entropy associated to the reduced density matrix $\rho_{AA^*}$ obtained from tracing out over $\mathcal{H}_B \otimes \mathcal{H}_B^*$ is nothing but the reflected entropy. Indeed, the modular conjugation operator $J_{AB}$ precisely maps $\mathcal{A}_A$ into $\mathcal{A}_{A^*}$, and one has $\mathcal{N}_{AB}=\mathcal{A}_{AA^*}$.
While this construction is not directly suitable for QFTs, one can safely use it on the lattice and unambiguously recover the reflected entropy as defined in \req{rsss} in the continuum limit. A useful construction in terms of replica-manifold partition functions was also presented in that paper. They also showed that the reflected entropy bounds the mutual information from above in general. Namely,
\begin{equation}\label{ir}
R(A,B)\geq I(A,B)\, ,
\end{equation}
holds for general theories.
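In finite dimensions, the canonical purification described above and the bound \req{ir} can be checked explicitly. The following Python sketch is a toy check of ours (the two-qubit state is randomly generated and all variable names are illustrative): it builds $\ket{\sqrt{\rho_{AB}}}$, traces out $B B^*$, and compares the resulting entropy with the mutual information.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(rho):
    """von Neumann entropy -tr(rho log rho) from the eigenvalues."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)))

# Random full-rank two-qubit state rho_AB on H_A (x) H_B
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho /= np.trace(rho).real

# Canonical purification |sqrt(rho_AB)> in (H_A (x) H_A*) (x) (H_B (x) H_B*):
# the matrix sqrt(rho), viewed as a vector, is the purified state.
w, U = np.linalg.eigh(rho)
sqrho = U @ np.diag(np.sqrt(np.clip(w, 0, None))) @ U.conj().T
T = sqrho.reshape(2, 2, 2, 2)                    # indices (a, b; a*, b*)

# Reduced state on A A*: trace out b and b*
rho_AA = np.einsum('abed,cbfd->aecf', T, T.conj()).reshape(4, 4)

rho4 = rho.reshape(2, 2, 2, 2)
rho_A = np.einsum('abcb->ac', rho4)              # trace out B
rho_B = np.einsum('abad->bd', rho4)              # trace out A

R_AB = entropy(rho_AA)                           # reflected entropy R(A,B)
I_AB = entropy(rho_A) + entropy(rho_B) - entropy(rho)   # mutual information
print(R_AB, I_AB)                                # R(A,B) >= I(A,B)
```

The same check works for any finite dimensions of $\mathcal{H}_A$ and $\mathcal{H}_B$; the bound is satisfied for every state, as it must be.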
Much of the interest in reflected entropy so far has come from the observation, by the same authors, that for holographic theories dual to Einstein gravity this quantity is proportional to the minimal entanglement wedge cross section, $R_{\rm holo.}(A,B)= 2 E_{W}(A,B)$, at leading order in Newton's constant \cite{Dutta:2019gen}. Subsequent work studying aspects of reflected entropy building on the results of \cite{Dutta:2019gen} includes \cite{Jeong:2019xdr,Kusuki:2019rbk,Akers:2019gcv,Kusuki:2019evw,Moosa:2020vcs,Kudler-Flam:2020url,Boruch:2020wbe,Asrat:2020uib,Kudler-Flam:2020yml,BabaeiVelni:2020wfl,Nakata:2020fjg,Li:2020ceg,Chandrasekaran:2020qtn}. Candidates for multipartite versions of reflected entropy have also been proposed in \cite{Bao:2019zqc,Chu:2019etd,Marolf:2019zoo}. In passing, let us mention that $E_{W}$ has also been proposed to be related to the ``entanglement of purification'' \cite{Takayanagi:2017knl,Nguyen:2017yqw} and to the so-called ``odd entropy'' \cite{Tamaoka:2018ned}. Regarding the latter, a connection between reflected entropy and odd entropy has been observed in \cite{Berthiere:2020ihq} in the case of Chern-Simons theories in $(2+1)$ dimensions, although the two quantities are expected to differ in general \cite{Dutta:2019gen}.
So far, it has not been rigorously proven that reflected entropy should be finite in general\footnote{Except when $A,B$ stop being disjoint. In fact, reflected entropy can be used as a geometric regulator for entanglement entropy \cite{Dutta:2019gen}, similarly to mutual information \cite{Casini:2006ws,Casini:2015woa}.}, although it is believed to be so at least for most QFTs ---see \cite{Longo:2019pjj} and also \cite{Narnhofer:2002ic,Otani:2017pmn,Hollands:2017dov}. This was proven to be the case for free fermions in $(1+1)$ dimensions in \cite{Longo:2019pjj} and confirmed later in \cite{Bueno:2020vnx}, where we explicitly evaluated it for that theory as a function of the conformal cross ratio. The calculations in \cite{Berthiere:2020ihq} also yield finite answers.
The main purpose of this paper is to continue developing the general technology required for the evaluation of reflected entropy for Gaussian systems. As mentioned above, this was started in our previous paper \cite{Bueno:2020vnx}, where we obtained general formulas valid for free fermions in arbitrary dimensions. The focus here will be on free scalars, for which we provide analogous expressions. This is the subject of section \ref{refffsc}. Analogously to the fermion case, we show that reflected entropy can be computed in terms of correlators of the bosonic fields associated to the system $A$. Formulas valid in arbitrary dimensions are presented both in the case in which the system is described in terms of $N$ scalars and $N$ conjugate momenta as well as in the case corresponding to a unified description in terms of $2N$ Hermitian operators. The main formulas are eqs. (\ref{phiii}), (\ref{g}) and (\ref{entroo}) in the first case and eqs. (\ref{gammaa3}), (\ref{F113}), (\ref{vy}) and (\ref{svn}) in the second.
We apply these formulas to the case of a chiral scalar in $(1+1)$ dimensions in section \ref{chiralsca}. We compute reflected entropy for this model for a pair of intervals as a function of the conformal cross ratio, and compare the result (normalized by the central charge) with the holographic \cite{Dutta:2019gen} and fermionic \cite{Bueno:2020vnx} ones. The scalar curve turns out to be considerably lower than the other two, but still greater than the mutual information in the whole range, as expected from the general inequality \req{ir}. In this section we also study how the type-I character of the algebra $\mathcal{N}_{AB}$ manifests itself in the structure of eigenvalues of the matrix of correlators required for the evaluation of reflected entropy, as compared to the entanglement entropy one. As opposed to the latter, in the case of reflected entropy only a few eigenvalues make a relevant contribution to the result in the continuum. We also verify the conjectured monotonicity of reflected entropy under inclusions for scalars and fermions.
In section \ref{2plus1} we start the study of reflected entropy for $(2+1)$-dimensional free theories. In particular, we evaluate $R(A,B)$ for free scalars and fermions for regions $A,B$ corresponding to pairs of parallel squares of side $L$ separated by a distance $\ell$. In both cases we find a finite answer as a function of $x\equiv L/\ell$ and verify that \req{ir} holds. Also, we observe that reflected entropy behaves linearly with $x$ as this quotient grows, $R(A,B) \simeq \kappa^{(R)} x$, analogously to mutual information. We compute the coefficient $\kappa^{(R)}$ numerically for both theories as well as for holography (using the connection with $E_W$) and compare it to the respective mutual information answers. In the opposite regime, {\it i.e.,}\ for $ x \ll 1$, we observe that $R(x) \sim - I(x) \log x$ holds for both free theories. The same behavior is found to occur for the $(1+1)$-dimensional theories considered in section \ref{chiralsca}, which leads us to conjecture that this is a general relation valid for arbitrary regions far apart from each other in general $d$-dimensional CFTs.
We conclude with some future directions in section \ref{finnnn}. Appendix \ref{ape} contains a table with the numerical results found for the reflected entropy of $(1+1)$-dimensional free scalars and fermions for various values of the cross ratio.
\section{Reflected entropy for free scalars}\label{refffsc}
In this section we show how the reflected entropy for Gaussian scalar systems in general dimensions can be computed ---analogously to the entanglement entropy, and similarly to the fermion case explored in \cite{Bueno:2020vnx}--- from matrices of two-point functions of the scalar and conjugate-momentum fields. We also discuss how this formula gets modified when the usual description in terms of a set of scalars and momenta $\{\phi_i,\pi_j \}$, $i,j=1,\dots,N$, is replaced by one in terms of $2N$ Hermitian operators $f_i$, $i=1,\dots,2N$, more suitable in certain cases, such as the one corresponding to a chiral scalar in $d=2$.
\subsection{Purification and general formulas: take one}\label{tone}
Let us start with some general comments about purifications and Tomita-Takesaki theory ---see {\it e.g.,}\ \cite{Borchers:2000pv} for a review of the latter.
Consider a quantum mechanical system with Hilbert space ${\cal H}_1$ and an invertible density matrix $\rho$ written in its spectral decomposition
\begin{equation} \label{rori}
\rho=\sum_p \lambda_p \vert p\rangle\langle p\vert\, .
\end{equation}
Let us now consider a copy of ${\cal H}_1$, which we denote by ${\cal H}_2$. We can define a purification $\vert \Omega \rangle$ of $\rho$ in ${\cal H}_1\otimes{\cal H}_2$, so that $\rho=\textrm{tr}_{{\cal H}_2}\vert \Omega\rangle\langle \Omega\vert$. In the Schmidt basis, this can be written as
\begin{equation}
\vert \Omega \rangle = \sum_p \sqrt{\lambda_p} \vert p \,\tilde{p}\rangle\,.\label{laba}
\end{equation}
Observe that the orthonormal basis $\{\vert\tilde{p}\rangle\}$ for ${\cal H}_2$ in (\ref{laba}) is arbitrary, different choices corresponding to different purifications $\vert \Omega \rangle$ of $\rho$. As far as reflected entropy is concerned, all these choices are equivalent.
Modular conjugation $J$ is defined by the anti-unitary operator
\begin{equation}
J\equiv \sum_{p q} \vert p \,\tilde{q}\rangle \langle q\,\tilde{p}\vert \, *\,,\label{jota}
\end{equation}
where $*$ denotes complex conjugation in the basis $\{\vert p \tilde{q} \rangle\}$.
One has $J\vert \Omega\rangle=\vert \Omega\rangle$, $J^2=1$, $J^{\dagger}=J^{-1}=J$. Another important property is that the conjugation of an operator acting on the first factor produces an operator acting on the second,
\begin{equation}
J({\cal O}\otimes 1)J=1\otimes \bar{{\cal O}}\,.
\end{equation}
Now, defining
$
\Delta\equiv \rho\otimes \rho^{-1}\,,
$
the Tomita-Takesaki relations follow,
\begin{equation} \label{ttr}
J\, \Delta=\Delta^{-1} \,J\,,\hspace{1cm} J \Delta^{1/2} {\cal O}_1 | \Omega \rangle={\cal O}_1^{\dagger} | \Omega \rangle \,,
\end{equation}
where ${\cal O}_1$ is any operator acting on the first factor.
Let us now focus our discussion on free scalar fields.
Let $\phi_i$ and $\pi_j$, $i,j=1,...,N$, be a system of scalars and conjugate momenta acting on a Hilbert space ${\cal H}_1$. These are Hermitian operators which satisfy the canonical commutation relations
\begin{equation}\label{comuu}
[\phi_i,\pi_j]=i\delta_{ij}\, , \quad [\phi_i,\phi_j]=[\pi_i,\pi_j]=0\, .
\end{equation}
Given a density matrix $\rho$ acting on $\mathcal{H}_1$, we can purify it by considering a Hilbert space ${\cal H}$ of double dimension and extending the bosonic algebra with $2N$ additional operators $\tilde{\phi}_i,\tilde{\pi}_j$, so that \req{comuu} holds for the enlarged set of $2N$ scalars and momenta. This can be achieved by defining
\begin{equation}\label{till}
\tilde{\phi}_i\equiv J \phi_i J \, , \quad \tilde{\pi}_j\equiv -J\pi_j J\, .
\end{equation}
Then it follows that the set $\{(\phi_1,\pi_1),\dots, (\phi_N, \pi_N),(\tilde \phi_1,\tilde \pi_1),\dots, (\tilde \phi_N,\tilde \pi _N) \}$ forms a canonical algebra of Hermitian operators in the full space ---in particular, \req{comuu} holds for all variables.
With these definitions, scalar correlators depend only on the density matrix $\rho$ for the first $N$ scalars. In order to see this, let us define $\Psi_i^0\equiv \phi_i $, $\Psi_i^1\equiv \pi_i$, and the same for $\tilde \Psi^a_i$, $a=0,1$. We have, in the purified state $\ket{\Omega}$ in the full space,
\begin{align}\label{pspsp}
\braket{\Omega|\Psi_{i_1}^{a_1}\cdots \Psi_{i_k}^{a_k} \tilde \Psi_{j_1}^{b_1}\cdots \tilde \Psi_{j_l}^{b_l}|\Omega}&=(-1)^{\sum_l b_l}\braket{\Omega|\Psi_{i_1}^{a_1}\cdots \Psi_{i_k}^{a_k} J \Psi_{j_1}^{b_1}\cdots \Psi_{j_l}^{b_l}|\Omega}\\ \notag
&=(-1)^{\sum_l b_l}\braket{\Omega|\Psi_{i_1}^{a_1}\cdots \Psi_{i_k}^{a_k} \Delta^{1/2} \Psi_{j_l}^{b_l}\cdots \Psi_{j_1}^{b_1}|\Omega}\\ \notag &=(-1)^{\sum_l b_l} {\rm tr} \left(\rho^{1/2} \Psi_{i_1}^{a_1}\cdots \Psi_{i_k}^{a_k} \rho^{1/2} \Psi_{j_l}^{b_l}\cdots \Psi_{j_1}^{b_1} \right)\, .
\end{align}
The first equality follows from \req{till} and the properties of the modular conjugation; the second, from \req{ttr} and the Hermiticity of the fields. The third can be easily verified using \req{rori} and \req{laba} explicitly.
Now, consider a set of creation and annihilation operators $a_l,a_l^{\dagger}$, $l=1,\dots,N$, satisfying $[a_i,a_j^{\dagger}]=\delta_{ij}$, related to the $\phi_i$ and $\pi_j$ via linear combinations
\begin{equation}
\phi_i = \alpha_{ij}\left[ a^{\dagger}_j +a_j\right] \, , \quad \pi_i=i \beta_{ij} \left[ a_j-a_j^{\dagger}\right]\, ,
\end{equation}
where $\alpha$ and $\beta$ are real matrices \cite{Casini:2009sr}.
The commutation relations in \req{comuu} impose the constraint $\alpha=-\frac{1}{2} (\beta^{T})^{-1}$.
The idea is now to assume a density matrix $\rho$ of the form \cite{2003JPhA...36L.205P,Chung_2000}
\begin{equation}
\rho= \Pi_l (1-e^{-\epsilon_l}) e^{-\sum_l \epsilon_l a_l^{\dagger} a_l}\, ,
\end{equation}
which defines a Gaussian state.
The two-point correlators of the fields and momenta in the state $\rho$ will be denoted by
\begin{equation}
X_{ij}\equiv {\rm tr}(\rho\, \phi_i \phi_j)\, , \quad P_{ij} \equiv {\rm tr}(\rho\, \pi_i \pi_j)\, .
\end{equation}
On the other hand, for Gaussian states invariant under time reflection, we have \cite{Casini:2009sr}
\begin{equation}
{\rm tr}(\rho\, \phi_i \pi_j)=-{\rm tr}(\rho\, \phi_i \pi_j)^* = \frac{i}{2} \delta_{ij}\, .
\end{equation}
These matrices of correlators can be written in terms of the expectation value of the number operator $n_{kk}\equiv \braket{a_k^{\dagger}a_k}=(e^{\epsilon_k}-1)^{-1}$. The results read
\begin{equation}\label{XPn}
\alpha( 2 n+1) \alpha^T =X\, , \quad \frac{1}{4} (\alpha^{-1})^T (2n+1) (\alpha^{-1})=P \quad \Rightarrow \quad \frac{1}{4}\alpha (2n+1)^2 \alpha^{-1}= XP \, .
\end{equation}
Going back to our double Hilbert space, the purified state $\ket{\Omega}$ is also Gaussian for the full system of scalars.
Organizing the scalars in a single field $\Phi_i \equiv \phi_i$, $i=1,\dots,N$ and $\Phi_{i+N}\equiv \tilde \phi_i$, $i=1,\dots,N$, and proceeding similarly for the momenta, $\Pi_i \equiv \pi_i$, $i=1,\dots,N$ and $\Pi_{i+N}\equiv \tilde \pi_i$, $i=1,\dots,N$, we are interested in the following correlators
\begin{equation}
\Phi_{ij}\equiv \braket{ \Omega | \Phi_i \Phi_j | \Omega } \, , \quad \Pi_{ij} \equiv \braket{ \Omega | \Pi_i \Pi_j | \Omega } \, , \quad i=1,\dots,2N\, .
\end{equation}
Using \req{pspsp} we obtain the following block-matrix representation of these two objects
\begin{align} \label{phi1}
\Phi&=\left(
\begin{array}{cc}
\alpha (2n+1) \alpha ^T & 2 \alpha \sqrt{n(n+1)} \alpha^T\\
2 \alpha \sqrt{n(n+1)} \alpha^T & \alpha (2n+1) \alpha^T
\end{array}
\right)\, , \\ \Pi&=\left(
\begin{array}{cc}
\frac{1}{4} (\alpha^{-1})^T (2n+1)\alpha^{-1} &-\frac{1}{2} (\alpha^{-1})^T \sqrt{n(n+1)} \alpha^{-1}\\
-\frac{1}{2} (\alpha^{-1})^T \sqrt{n(n+1)} \alpha^{-1} & \frac{1}{4} (\alpha^{-1})^T (2n+1)\alpha^{-1}
\end{array}
\right)\, . \label{pi1}
\end{align}
These can be written in terms of $X$ and $P$ alone as
\begin{align} \label{phiii}
\Phi=\left(
\begin{array}{cc}
X & g(XP) X\\
g(XP) X & X
\end{array}
\right)\, , \quad \Pi=\left(
\begin{array}{cc}
P &- Pg(XP) \\
- Pg(XP) & P
\end{array}
\right)\, ,
\end{align}
where
\begin{equation}\label{g}
g(A)\equiv \sqrt{A-1/4}\sqrt{A}^{-1}\, .
\end{equation}
The purity of the global state imposes that these matrices satisfy $\Phi \Pi=1/4$, which can be easily verified.
Now, the von Neumann entropy corresponding to a region $Y$ can be obtained from the restriction of $\Phi$ and $\Pi$ to $Y$, {\it i.e.,}\ $(\Phi_Y)_{ij}=\Phi_{ij}$ and $(\Pi_Y)_{ij}=\Pi_{ij}$ for all $i,j\in Y$. Defining $C_{Y}\equiv \sqrt{\Phi_Y \Pi_Y}$, the entropy is given by
\begin{equation}\label{entroo}
S(Y)={\rm tr} \left[(C_Y+1/2) \log (C_Y+1/2) - (C_Y-1/2)\log (C_Y-1/2) \right]\, .
\end{equation}
In the continuum, the same expression can be used, where $C_Y$ is to be understood as a kernel, $C(x,y)$, $x,y\in Y$.
When computing reflected entropy for a pair of regions $A$, $B$, we need to evaluate the $X$, $P$ and $g(XP)$ matrices for all sites belonging to those regions, which allows us to build the $\Phi$ and $\Pi$ matrices, and then restrict the different blocks to the region $A$ sites ---see below for explicit examples. Formulas \req{entroo} and \req{phiii} can be thought of as generalizations of the well-known expressions required for the evaluation of the usual entanglement entropy ---see {\it e.g.,}\ \cite{Casini:2009sr}. In that case, \req{entroo} holds with $C_Y\equiv \sqrt{X_Y P_Y}$, built from the restrictions of $X$ and $P$ to the entangling region $Y$. In the reflected entropy case, \req{entroo} computes the entropy of $\rho_{AA^*}$ instead of $\rho_A$. The difference between both cases is encoded in the additional blocks appearing in $\Phi$ and $\Pi$ with respect to $X$ and $P$, respectively.
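As a concrete illustration of this recipe, consider our own toy setup: the ground state of a periodic harmonic chain, for which $X$ and $P$ are known in closed form (the chain size, mass and regions below are arbitrary illustrative choices). The following Python sketch builds $\Phi$ and $\Pi$ from \req{phiii} and \req{g}, restricts them to the $AA^*$ blocks, and evaluates \req{entroo}; it also allows one to check the purity condition $\Phi\Pi=1/4$ and the bound \req{ir}.

```python
import numpy as np
from scipy.linalg import sqrtm, inv

# Ground state of a periodic harmonic chain: X = K^{-1/2}/2, P = K^{1/2}/2
N, m = 60, 0.2                                   # illustrative size and mass
k = 2 * np.pi * np.arange(N) / N
w = np.sqrt(m**2 + 4 * np.sin(k / 2)**2)         # lattice dispersion relation
d = np.subtract.outer(np.arange(N), np.arange(N))
X = (np.cos(d[..., None] * k) / w).sum(-1) / (2 * N)
P = (np.cos(d[..., None] * k) * w).sum(-1) / (2 * N)

def entropy(C):
    """S(Y) as a sum over the eigenvalues nu >= 1/2 of C_Y."""
    nu = np.clip(np.linalg.eigvals(C).real, 0.5 + 1e-14, None)
    return float(np.sum((nu + .5) * np.log(nu + .5) - (nu - .5) * np.log(nu - .5)))

def S_region(idx):
    """Entanglement entropy of the sites idx, with C_Y = sqrt(X_Y P_Y)."""
    return entropy(sqrtm(X[np.ix_(idx, idx)] @ P[np.ix_(idx, idx)]))

A, B = np.arange(0, 10), np.arange(20, 30)       # two blocks of 10 sites
AB = np.concatenate([A, B])
XY, PY = X[np.ix_(AB, AB)], P[np.ix_(AB, AB)]

# Phi and Pi for the purified state, built from the restrictions to AB
g = (sqrtm(XY @ PY - np.eye(len(AB)) / 4) @ inv(sqrtm(XY @ PY))).real
Phi = np.block([[XY, g @ XY], [g @ XY, XY]])
Pi = np.block([[PY, -PY @ g], [-PY @ g, PY]])

# Reflected entropy: restrict every block to the A sites (first entries of AB)
aa = np.concatenate([np.arange(len(A)), np.arange(len(A)) + len(AB)])
R_AB = entropy(sqrtm(Phi[np.ix_(aa, aa)] @ Pi[np.ix_(aa, aa)]))
I_AB = S_region(A) + S_region(B) - S_region(AB)  # mutual information
print(R_AB, I_AB)                                # R(A,B) >= I(A,B)
```

The intermediate check `Phi @ Pi` $\approx \mathbb{1}/4$ numerically verifies the purity of the global state discussed above.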
\subsection{Purification and general formulas: take two}\label{take2}
The previous description in terms of scalar and conjugate-momentum fields can be generalized by considering instead a set of $2N$ Hermitian operators $f_i$ satisfying commutation relations of the form
\begin{equation}
[f_i,f_j]=i \left( \delta_{j i+1}-\delta_{j i-1}\right)\equiv i C_{ij}\, .
\end{equation}
This is a more suitable choice in some cases, such as the one corresponding to a $d=2$ chiral scalar, which we consider in the following section. This setup has been previously considered {\it e.g.,}\ in \cite{Sorkin:2012sn,Coser:2017dtb,2004PhRvA..70e2329B,Arias:2018tmw}.
Once again, we extend this bosonic algebra with $2N$ additional operators
\begin{equation}
\tilde f_i\equiv J f_i J\, .
\end{equation}
These satisfy the commutation relations $[\tilde f_i, \tilde f_j]= -i C_{ij} $. Again, with this definition, the scalar correlators depend only on the density matrix of the original Hilbert space. In the purified state $\ket{\Omega}$ in the full space, we have
\begin{align}
\braket{\Omega | f_{i_1}\cdots f_{i_k} \tilde f_{j_1} \cdots \tilde f_{j_l} |\Omega}&=\braket{\Omega | f_{i_1}\cdots f_{i_k} J f_{j_1} \cdots f_{j_l} |\Omega} \\ &=\braket{\Omega | f_{i_1}\cdots f_{i_k} \Delta^{1/2} f_{j_l} \cdots f_{j_1} | \Omega} \\ &={\tilde \rho} \left(\rho^{1/2} f_{i_1}\cdots f_{i_k}\rho^{1/2} f_{j_l} \cdots f_{j_1} \right)\, .
\end{align}
Let us denote
\begin{equation}\label{Fij}
F_{ij}\equiv \braket{f_i f_j} \, .
\end{equation}
Organizing the operators in a single field $\mathcal{F}_i\equiv f_i$, $i=1,\dots,2N$ and $\mathcal{F}_{i+2N}\equiv \tilde f_i$, $i=1,\dots,2N$, we can define the matrix of commutators
\begin{equation}\label{gammaa3}
\mathcal{C}_{ij}\equiv -i[\mathcal{F}_i,\mathcal{F}_j] \quad \Rightarrow \quad
\mathcal{C}= \left(
\begin{array}{cc}
C & 0 \\
0 &-C
\end{array}
\right)\, .
\end{equation}
Using the Hermiticity of the $\mathcal{F}_i$ it is easy to prove that\footnote{Note that when we write things like ${\rm Im} A_{ij}$, we literally refer to the matrix built from the imaginary parts of the components of the original matrix (and the same for the real parts).}
\begin{equation}\label{CF}
\mathcal{C}_{ij}=2\, {\rm Im}\, \mathcal{F}_{ij}\, ,
\end{equation}
where we defined the matrix of correlators
\begin{equation}
\mathcal{F}_{ij}\equiv \braket{\Omega| \mathcal{F}_i \mathcal{F}_j |\Omega}\, , \quad i,j=1,\dots,4N\, .
\end{equation}
The different blocks in this matrix turn out to be given by
\begin{align} \label{phiii2}
\mathcal{F}&=\left(
\begin{array}{cc}
F & i CV g(V^2 ) \\
i CV g(V^2 ) & F-iC
\end{array}
\right)\, , \quad \text{where} \quad V\equiv -i C^{-1} F -\frac{1}{2}\, ,
\end{align}
and $g(A)$ was defined in \req{g}. This matrix can also be written as
\begin{align} \label{F113}
\mathcal{F}&=\left(
\begin{array}{cc}
{\rm R}+ i {\rm I} & g\left(-\frac{1}{4} {\rm R} {\rm I}^{-1}{\rm R} {\rm I}^{-1} \right) {\rm R} \\
g\left(-\frac{1}{4} {\rm R} {\rm I}^{-1}{\rm R} {\rm I}^{-1} \right) {\rm R} & {\rm R}- i {\rm I}
\end{array}
\right)\, ,
\end{align}
where we defined $ {\rm R}_{ij}\equiv {\rm Re}(F_{ij})$, $ {\rm I}_{ij}\equiv {\rm Im}(F_{ij})$.
Note that the off-diagonal terms are manifestly real. In order to evaluate the entropy associated to some region $Y$, we define the matrix
\begin{equation}\label{vy}
\mathcal{V}_Y \equiv -i ( \mathcal{C}_Y)^{-1} \mathcal{F}_Y -\frac{1}{2}\, ,
\end{equation}
where $\mathcal{C}_Y$ and $\mathcal{F}_Y$ are the restrictions of $\mathcal{C}$ and $\mathcal{F}$ to $Y$. Then, the
corresponding von Neumann entropy can be obtained as
\begin{equation}\label{svn}
S(Y)= {\rm tr} \left[ (\mathcal{V}_Y+1/2)\log |\mathcal{V}_Y+1/2| \right]\, .
\end{equation}
When computing reflected entropies, we need to evaluate $\mathcal{C}$ and $\mathcal{F}$ for all sites belonging to $A$ and $B$, and then obtain the restrictions of their different blocks to the region $A$.
As a check of our results, we can observe that one should find $S(Y)=0$ when applied to the global state, which means that the unrestricted matrix $\mathcal{V}$ should be such that $\mathcal{V}^2=1/4$, which can be easily verified to be the case. Observe also that, once again, these expressions can be seen as generalizations of the analogous entanglement entropy formulas. For that quantity \req{svn} holds \cite{Sorkin:2012sn} with $ \mathcal{V}_Y$ replaced by $V_Y\equiv -i ( C_Y)^{-1} F_Y -\frac{1}{2}$.
The terms appearing in the diagonal of $\mathcal{F}$ in the expressions above follow straightforwardly, but the origin of the off-diagonal pieces requires some further explanation. In order to see where they come from, let us define vectors $\vec f\equiv (f_1, \dots,f_{2N})^T$ and $\vec \Phi\equiv (\phi_1,\dots,\phi_N,\pi_1,\dots,\pi_N)^T $ and $\vec {\tilde f}\equiv (\tilde f_1, \dots, \tilde f_{2N})^T$ and $\vec {\tilde \Phi}\equiv (\tilde \phi_1,\dots,\tilde \phi_N, \tilde \pi_1,\dots,\tilde \pi_N)^T $. As argued in \cite{Arias:2018tmw}, we can perform a change of basis to relate the $\vec f$ and $\vec \Phi$ representations as
$\vec \Phi=Q O \vec f$, where $Q={\rm diag}(D^{-1/2},D^{-1/2})$, with $D$ a diagonal matrix with positive elements, and $O$ an orthogonal matrix.
On the one hand, we have
\begin{align}
&C=O^T Q^{-1} \left(\begin{array}{cc} 0 & 1\\ -1 & 0 \end{array}\right) Q^{-1} O\, , \quad F=O^T Q^{-1}\left(\begin{array}{cc} X & i/2 \\ -i/2 & P \end{array}\right) Q^{-1} O\, , \\ & \Rightarrow V=O^TQ \left(\begin{array}{cc} 0 & i P \\ -iX & 0 \end{array}\right)Q^{-1}O\, , \quad F-iC= O^T Q^{-1}\left(\begin{array}{cc} X & -i/2 \\ i/2 & P \end{array}\right) Q^{-1} O\, .
\end{align}
Now, our goal is to evaluate $\braket{f_i \tilde f_j}$. In order to do that, we use the result obtained in \req{phiii} in the $\phi,\pi$ basis. We have
\begin{equation}\label{phiphi}
\braket{\Phi \tilde \Phi}= \left(\begin{array}{cc} \braket{\phi \tilde \phi} & 0\\ 0 & \braket{\pi \tilde \pi} \end{array}\right)= \left(\begin{array}{cc} g(XP) X & 0\\ 0 & g(PX) P \end{array}\right)\, .
\end{equation}
Then, we have
\begin{equation}
\braket{\Phi \tilde \Phi}= Q O \braket{f \tilde f} O^T Q \quad \Rightarrow \quad \braket{f \tilde f} = O^T Q^{-1} \braket{\Phi \tilde \Phi} Q^{-1} O\, .
\end{equation}
Now, in order to write the expression in \req{phiphi} in terms of correlators of $f_i$, we can use the above expressions for $C$ and $V$. We find
\begin{equation}
i CV g( V^2 ) =O^T Q^{-1} \left(\begin{array}{cc} g(XP) X & 0\\ 0 & g(PX) P \end{array}\right) Q^{-1} O \quad \Rightarrow \quad \braket{f \tilde f}=i CV g( V^2 ) \, ,
\end{equation}
which is the desired relation appearing in the off-diagonal blocks of $\mathcal{F}$.
\section{Reflected entropy for a $d=2$ chiral scalar}\label{chiralsca}
In this section we evaluate numerically the reflected entropy for two intervals for a chiral scalar field as a function of the conformal cross-ratio and compare the result to the ones corresponding to holographic Einstein gravity and a free fermion. We also study the eigenvalue spectrum of the matrix of correlators involved in the computation of the reflected entropy and comment on its differences with respect to the one required for the evaluation of the usual (type-III) entanglement entropy of a single interval. We also verify the monotonicity of reflected entropy under inclusions both for the scalar and the fermion.
\subsection{Reflected entropy for two intervals}
The lattice Hamiltonian for a chiral scalar in $1+1$ dimensions can be taken to be
\begin{equation}
H=\frac{1}{2}\sum_i f_i^2\, .
\end{equation}
In this case, the correlator defined in \req{Fij} was obtained in \cite{Arias:2018tmw}, the result being
\begin{equation}\label{fff}
F_{ij}= \begin{cases} -\frac{1+(-1)^{i-j}}{\pi ((i-j)^2-1)} \, , \quad & |i-j|\neq 1 \, , \\ +\frac{i}{2} \left( \delta_{j i+1}-\delta_{j i-1}\right) \, , \quad & |i-j|= 1 \, .
\end{cases}
\end{equation}
Given two regions $A$ and $B$, we can evaluate the reflected entropy as the von Neumann entropy of $\rho_{AA^*}$ using the expression for $F_{ij}$ above and the formulas obtained in the previous section. The indices $i,j$ in $F_{ij}$ take values in sites belonging to the region $A\cup B$. Namely, if we define the discretized intervals through $A\cup B = (a_1, a_{1}+1, \dots , b_{1}-1, b_1)\cup (a_2, a_2+1, \dots , b_2-1, b_2)$, then $i,j$ run over the values $a_1, a_1+1, \dots , b_1, a_2, a_2+1, \dots , b_2$. Given $(a_1, b_1)$ and $(a_2, b_2)$ as input, which determine the lengths and separation of the corresponding intervals, we can then evaluate the matrix $F_{ij}$. The real and imaginary parts of its components are easily obtained from \req{fff} and given by
\begin{equation}
{\rm Re}\, F_{ij}= \begin{cases} -\frac{1+(-1)^{i-j}}{\pi ((i-j)^2-1)} \, , \quad & |i-j|\neq 1 \, , \\ 0 \, , \quad & |i-j|= 1 \, ,
\end{cases} \quad
{\rm Im}\, F_{ij}= \frac{1}{2} \left( \delta_{j i+1}-\delta_{j i-1}\right) \, .
\end{equation}
With these matrices at hand, we can numerically compute the diagonal blocks appearing in $\mathcal{C}$ and $\mathcal{F}$ in \req{gammaa3} and \req{F113} respectively, as well as the combination $W\equiv -\tfrac{i}{2}{\rm R}\, {\rm I}^{-1}$, required for the off-diagonal blocks of $\mathcal{F}$. In order to obtain those, we first diagonalize $W$. Given its eigenvalues $\{ d_m\}$, we build the diagonal matrix $|d_m |^{-1} \sqrt{d_m^2-1/4}\, \delta_{mn}$ and transform it back to the original basis, which yields $g(-\tfrac{1}{4} {\rm R} {\rm I}^{-1} {\rm R} {\rm I}^{-1})_{ij}$. Multiplying by ${\rm R}$, we obtain the off-diagonal blocks of $\mathcal{F}$. Using these matrices we can obtain the von Neumann entropy associated to $\rho_{AA^*}$ from
the submatrices corresponding to the $A$ sites. These correspond to the first $(b_1 - a_1+1) \times (b_1 -a_1+1)$-dimensional blocks in each case. With those pieces we can finally build the matrices $\mathcal{C}|_{AA^*}$ and $\mathcal{F}|_{AA^*}$ as
\begin{align} \label{F1134}
\left. \mathcal{F} \right|_{AA^*}&=\left(
\begin{array}{cc}
\left. [{\rm R}+ i {\rm I}] \right|_A & \left.\left[ g\left(-\frac{1}{4} {\rm R} {\rm I}^{-1}{\rm R} {\rm I}^{-1} \right) {\rm R} \right]\right|_A\\
\left.\left[ g\left(-\frac{1}{4} {\rm R} {\rm I}^{-1}{\rm R} {\rm I}^{-1} \right) {\rm R} \right]\right|_A & \left. [{\rm R}- i {\rm I}]\right|_A
\end{array}
\right)\, , \quad \left. \mathcal{C} \right|_{AA^*}&=\left(
\begin{array}{cc}
\left. 2 {\rm I} \right|_A & \left.0\right|_A\\
\left.0\right|_A & -\left. 2 {\rm I}\right|_A
\end{array}
\right)\, .
\end{align}
The last step is to evaluate $\mathcal{V}_{AA^*} \equiv -i ( \mathcal{C}|_{AA^*})^{-1} \mathcal{F}|_{AA^*} -\frac{1}{2}$.
Denoting its eigenvalues as $\{\nu_m\}$, the reflected entropy can be finally obtained from \req{svn} as
\begin{equation}\label{eigenR}
R_{\rm scal.}=\sum_{m} (\nu_m+1/2)\log | \nu_m+1/2|\, .
\end{equation}
The lattice calculations give rise to a doubling of eigenvalues, so we divide the numerical results by 2 when presenting them. On the other hand, from now on we will normalize reflected entropies by the central charge $c$ of the corresponding theory, which in the case of the chiral scalar is $c=1/2$. Hence, the numerical results obtained following the above procedure automatically yield $ R_{\rm scal.}/c$.
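The procedure just described can be condensed into a short numerical routine. The following Python sketch is ours (the interval endpoints are illustrative, and we pick intervals with an even number of sites so that ${\rm I}$ is invertible on the lattice); it implements the steps above and compares the resulting reflected entropy with the mutual information computed from \req{svn}, which carries the same doubling factor.

```python
import numpy as np

def F_matrix(sites):
    """Chiral-scalar lattice correlator F_ij = <f_i f_j> on the given sites."""
    d = np.subtract.outer(sites, sites)
    with np.errstate(divide='ignore', invalid='ignore'):
        re = -(1.0 + (-1.0)**d) / (np.pi * (d**2 - 1))
    re[np.abs(d) == 1] = 0.0                     # Re F vanishes for |i-j| = 1
    im = 0.5 * ((d == -1).astype(float) - (d == 1).astype(float))
    return re + 1j * im

def entropy(F, C):
    """S = sum_m (nu_m + 1/2) log|nu_m + 1/2|, from V = -i C^{-1} F - 1/2."""
    V = -1j * np.linalg.inv(C) @ F - 0.5 * np.eye(len(F))
    x = np.linalg.eigvals(V).real + 0.5
    x = x[np.abs(x) > 1e-12]                     # drop the 0 log 0 contributions
    return float(np.sum(x * np.log(np.abs(x))))

def S_region(sites):
    Fy = F_matrix(sites)
    return entropy(Fy, 2 * Fy.imag)

# Two intervals with an even number of sites each
A, B = np.arange(0, 8), np.arange(14, 22)
AB = np.concatenate([A, B])
n = len(AB)
F = F_matrix(AB)
Rm, Im_ = F.real, F.imag

# Off-diagonal block: diagonalize W = -(i/2) R I^{-1}, build g(W^2), times R
dW, U = np.linalg.eig(-0.5j * Rm @ np.linalg.inv(Im_))
gW2 = (U @ np.diag(np.sqrt(dW**2 - 0.25 + 0j) / np.abs(dW)) @ np.linalg.inv(U)).real
off = gW2 @ Rm

# Doubled matrices for the purified state, then restriction to the A A* sites
Fdoub = np.block([[F, off], [off, F.conj()]])
Cdoub = np.block([[2 * Im_, 0 * Im_], [0 * Im_, -2 * Im_]])
a = np.arange(len(A))                            # A occupies the first sites of AB
aa = np.concatenate([a, a + n])
R_AB = entropy(Fdoub[np.ix_(aa, aa)], Cdoub[np.ix_(aa, aa)])
I_AB = S_region(A) + S_region(B) - S_region(AB)  # mutual information
print(R_AB, I_AB)                                # R(A,B) >= I(A,B)
```

Since the doubling factor is common to $R$ and $I$, the inequality \req{ir} can be checked directly on the raw lattice numbers; the unrestricted doubled matrices also satisfy $\mathcal{V}^2=1/4$, as discussed in the previous section.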
\begin{figure}[t] \centering
\includegraphics[scale=0.75]{rcHoloFerScal3.pdf}
\caption{We plot the reflected entropy normalized by the central charge, $R/c$, as a function of the cross-ratio $\eta$ for: a chiral scalar (blue line and dots), a free fermion (red line and dots) \cite{Bueno:2020vnx} and holographic Einstein gravity (black line) \cite{Dutta:2019gen}. The latter corresponds to the leading-order result in the Newton constant, which discontinuously drops to zero at $\eta =1/2$. The gray dashed line is the behavior expected for general theories as $\eta \rightarrow 1$. }
\label{refiss1}
\end{figure}
In the continuum, the reflected entropy for two intervals of lengths $L_A$, $L_B$ separated by a distance $\ell$ is a function of the conformal cross-ratio
\begin{equation}\label{cr7}
\eta \equiv \frac{(b_1-a_1)(b_2-a_2)}{(a_2-a_1)(b_2-b_1)}=\frac{L_A L_B}{(\ell+L_A)(\ell + L_B)} \, .
\end{equation}
In order to obtain $ R_{\rm scal.}(\eta)$ in that limit, we fix $\eta$ and consider an increasing number of points in the discretized intervals. The results for the reflected entropy asymptote to their continuum values, which we obtain through a polynomial fit in the inverse size of the intervals. We plot our results in Fig. \ref{refiss1}. In the same plot, we include the results corresponding to holographic Einstein gravity and a free fermion. The former was obtained in \cite{Dutta:2019gen} using replica-trick techniques, and reads
\begin{equation}
R_{\rm holo.}(\eta)= \begin{cases}
\frac{2c}{ 3} \log \left[\frac{1+\sqrt{\eta}}{\sqrt{1-\eta}} \right] + \mathcal{O}(c^0) & \text{ for } \quad \eta>1/2\, , \\
\mathcal{O}(c^{0}) & \text{ for } \quad \eta<1/2\, .
\end{cases}
\end{equation}
This agrees with previous $E_W$ calculations \cite{Takayanagi:2017knl,Nguyen:2017yqw}. On the other hand, the fermion results were obtained using numerical methods in \cite{Bueno:2020vnx}. In Fig. \ref{refiss1} we have also included the $\eta\rightarrow 1$ limiting behavior, which was argued to hold for general $d=2$ CFTs in \cite{Dutta:2019gen}. This reads
\begin{equation}\label{r11}
R(\eta\rightarrow 1)=-\frac{c}{3}\log (1-\eta)+\frac{c}{3}\log 4\, .
\end{equation}
While the fermion and holographic results clearly approach the limiting curve in the expected regime (doing so from below), the scalar takes considerably smaller values even for $\eta$ very close to one. In appendix \ref{ape} we present the numerical values of the data points shown in Fig.\,\ref{refiss1} both for the scalar and the fermion, which may be useful for future comparisons.
Although much smaller than the fermion and holographic results, $R_{\rm scal.}$ can be verified to be greater than the mutual information $I_{\rm scal.}$, as required by the general inequality in \req{ir}. For that check, we recall the results for the mutual information of the fermion and the scalar \cite{Arias:2018tmw}. These are given by
\begin{equation}\label{mutuss}
I_{\rm ferm.}/c=-\frac{1}{3}\log (1-\eta)\, , \quad I_{\rm scal.}/c=-\frac{1}{3}\log (1-\eta)+2U(\eta)\, ,
\end{equation}
where
\begin{equation}
U(\eta)\equiv -\frac{i\pi}{2} \int_0^{\infty} ds \frac{s}{\sinh^2(\pi s)} \log \left[\frac{_2F_1[1+is,-is; 1; \eta]}{_2F_1[1-is,+is; 1; \eta]} \right] \, ,
\end{equation}
which is a real and negative function for all values of $\eta$. We plot the corresponding reflected entropies and mutual informations for both models in Fig.\,\ref{refiss12222}. In both cases, the inequality is satisfied, as it should be, and the quotient $R/I$ monotonically decreases for growing values of $\eta$. In the limit $\eta\rightarrow 1$, both quotients tend to one. In the case of the scalar, values of $\eta$ extremely close to one are required to approach that limit ---see the blue diamond in the right plot. This is related to the behavior of the function $U(\eta)$, which goes as $U(\eta) \sim -\tfrac{1}{2} \log \left[ -\log [1-\eta]\right]$ for $\eta\rightarrow 1$ \cite{Arias:2018tmw}.
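As a cross-check, $U(\eta)$ can be evaluated with elementary numerics. For real $\eta\in(0,1)$ the two hypergeometric functions in the integrand are complex conjugates of each other, so the log-ratio reduces to $2i\,{\rm arg}\,{}_2F_1[1+is,-is;1;\eta]$. The sketch below (pure Python; the series truncation and midpoint-rule parameters are our own choices, not taken from the text) uses this to evaluate $U(\eta)$ and the scalar mutual information at $\eta=1/2$:

```python
import cmath
import math

def hyp2f1(a, b, c, z, nmax=400):
    """Gauss series for 2F1(a, b; c; z); converges for |z| < 1."""
    term = complex(1.0)
    total = complex(1.0)
    for n in range(nmax):
        term *= (a + n) * (b + n) * z / ((c + n) * (n + 1))
        total += term
        if abs(term) < 1e-15 * abs(total):
            break
    return total

def U(eta, smax=6.0, nsteps=3000):
    """U(eta): since the two 2F1's are complex conjugates for real eta in (0,1),
    -(i/2) log(F/conj F) = arg F, so
        U(eta) = pi * int_0^infty ds s/sinh^2(pi s) * arg 2F1(1+is, -is; 1; eta).
    The midpoint rule avoids the (removable) s = 0 endpoint."""
    h = smax / nsteps
    acc = 0.0
    for j in range(nsteps):
        s = (j + 0.5) * h
        F = hyp2f1(1 + 1j * s, -1j * s, 1.0, eta)
        acc += s / math.sinh(math.pi * s) ** 2 * cmath.phase(F) * h
    return math.pi * acc

eta = 0.5
u05 = U(eta)                        # real and negative
I_ferm = -math.log(1 - eta) / 3     # fermion mutual information, per unit c
I_scal = I_ferm + 2 * u05           # scalar mutual information, per unit c
```

One can then verify that $U(\eta)<0$ and that $0<I_{\rm scal.}<I_{\rm ferm.}$, consistently with the discussion above.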
\begin{figure}[t]\hspace{-0.3cm}
\includegraphics[scale=0.67]{ReflectedMutual2d.pdf}
\includegraphics[scale=0.645]{RentreI.pdf}
\caption{We plot the reflected entropy and mutual information for two intervals $A,B$, as a function of the cross-ratio for a free fermion (red) and a free scalar (blue). On the right, we plot the quotient between both quantities for each model. The black dot corresponds to the limit $\eta=1$, where both quotients should tend to one. The red and blue dots correspond to the greatest values of $\eta$ for which we numerically evaluated the reflected entropy for each model. The red dotted line has been computed using the general-CFT formulas \req{r11} and \req{mutuss} and is valid for $\eta\rightarrow 1$. In the case of the scalar, the curve becomes very steep near $\eta\rightarrow 1$ because of $U(\eta)$. For instance, the small blue diamond shown in the figure corresponds to the value $\eta=0.9999999999999999$, for which $R_{\rm scal.}/I_{\rm scal.}=1.470488$. }
\label{refiss12222}
\end{figure}
In the opposite limit, {\it i.e.,}\ for $\eta\rightarrow0$, the quotients seem to diverge logarithmically. In the case of the fermion, we found that the tentative function \cite{Bueno:2020vnx}
\begin{equation}
R_{\rm ferm.}(\eta\rightarrow 0)/c \sim -0.15\eta \log \eta+ 0.67 \eta +\dots
\end{equation}
fits the numerical data reasonably well for values of the cross-ratio $\eta\lesssim 0.1$. In the case of the scalar, a similar analysis suggests that the leading order term takes the form
\begin{equation}\label{rc0}
R_{\rm scal.}(\eta\rightarrow 0)/c \sim -0.04\eta^2 \log \eta+\dots
\end{equation}
The fit in this case deteriorates much faster than in the case of the fermion, and can only be trusted for values of the cross-ratio $\eta\lesssim 0.001$. In spite of this limited range of validity, we are rather confident that the functional dependence of the leading term is the one shown in \req{rc0}. In the case of the mutual informations, one finds instead \cite{Casini:2009vk,Casini:2005rm,Arias:2018tmw}
\begin{align}
I_{\rm ferm.}(\eta\rightarrow 0)/c \sim \frac{1}{3}\eta+ \dots \, , \quad
I_{\rm scal.}(\eta\rightarrow 0)/c \sim \frac{1}{30}\eta^2+ \dots \, .
\end{align}
These results reflect the different nature of both quantities. While mutual information admits a power-law expansion in that limit \cite{Calabrese:2009ez,Calabrese:2010he,Cardy:2013nua,Agon:2015ftl}, a consequence of the fact that it measures correlations between operators exclusively localized in $A,B$, the information captured by reflected entropy is in fact spread throughout the whole real line (except for the region corresponding to the interval $B$). The latter fact was shown very explicitly in the case of the fermion in \cite{Bueno:2020vnx}, where a notion of spatial density for the corresponding type-I algebra was introduced.
\subsection{Eigenvalue spectrum}
In \cite{Bueno:2020vnx}, we studied how the spectra of the correlator matrices entering the entanglement and reflected entropies differed from each other for a $(1+1)$-dimensional free fermion. The goal of this subsection is to perform an analogous analysis in the case of the chiral scalar. Just like for the fermion, the formulas required for the evaluation of reflected entropy in the case of free scalars are identical to the entanglement entropy ones ---namely, they have the same form in terms of certain two-point functions of the fields. The difference between both quantities is that in the entanglement entropy case the relevant matrices are $C_A$ and $F_A$, whereas for the reflected entropy we need $\mathcal{C}_{AA^*}$ and $\mathcal{F}_{AA^*}$. In this setup, this is what makes the difference between computing a von Neumann entropy for a type-III algebra associated to region $A$, and a von Neumann entropy for the canonical type-I algebra associated to regions $A$ and $B$, {\it i.e.,}\ a reflected entropy.
\begin{figure}[t]\hspace{-0.3cm}
\includegraphics[scale=0.68]{spectrumscalartypeIII.pdf}
\includegraphics[scale=0.68]{spectrumscalartypeI.pdf}
\caption{We plot the ``leading'' eigenvalues of the correlator matrices $V_A$ and $\mathcal{V}_{AA^*}$ involved in the evaluation of: the usual type-III entanglement entropy for a single interval (left); the reflected entropy $R(A, B)$ for two intervals $A$,$B$ with cross-ratio $\eta=1/4$ (right). For both plots, the horizontal axis corresponds to the number of points taken for the intervals ($A$ in the first case and both $A$ and $B$ in the second). In both cases, we use a logarithmic function of the eigenvalues which simplifies the presentation of several eigenvalues in the same plot ---see \req{arraa}. }
\label{refiss122}
\end{figure}
The eigenvalues of $\mathcal{V}_{AA^*}$ always appear doubled, as mentioned above. In the following discussion we just remove the repeated eigenvalues and multiply the result by 2. For each remaining eigenvalue $\nu_j$ there is always another one corresponding to $-\nu_j$. Hence, it is useful to arrange the eigenvalues as
\begin{equation}\label{arraa}
\nu_{2k}\equiv \tfrac{1}{2}+\varepsilon_k\, , \quad \nu_{2k-1}\equiv-\tfrac{1}{2}-\varepsilon_k\, , \quad \text{with} \quad k=1,2,\dots,\#_A\, ,
\end{equation}
where the $\varepsilon_k$ are positive numbers and $\#_A$ is the number of lattice points corresponding to the interval $A$. The continuum limit corresponds to $\#_A\rightarrow \infty$. The above expressions can be inverted as $\varepsilon_k=(\nu_{2k}- \nu_{2k-1}-1)/2=\nu_{2k}-1/2=-\nu_{2k-1}-1/2$.
Then, we can rewrite the reflected entropy \req{eigenR} as
\begin{equation}
R_{\rm scal.}=2\sum_{k=1}^{\#_A}\left[ (\varepsilon_k+1) \log (1+\varepsilon_k) - \varepsilon_k \log \varepsilon_k \right] \, .
\end{equation}
Except for values of $\eta$ very close to $1$, the $\varepsilon_k$ are all very small numbers, so $R_{\rm scal.}$ is approximately given by
\begin{equation}
R_{\rm scal.}=2\sum_{k=1}^{\#_A} \left[ \varepsilon_k [1-\log \varepsilon_k ] + \frac{\varepsilon_k^2}{2} + \mathcal{O}( \varepsilon_k^3) \right] \, .
\end{equation}
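The exact sum and its small-$\varepsilon$ expansion are easy to compare numerically. In the minimal sketch below the spectrum is hypothetical (chosen only to mimic the fast decay discussed next), not the actual scalar eigenvalues:

```python
import math

def R_exact(eps):
    """R = 2 sum_k [(1+e) log(1+e) - e log e] over the positive eps_k."""
    return 2.0 * sum((1 + e) * math.log(1 + e) - e * math.log(e) for e in eps)

def R_expanded(eps):
    """Small-eps expansion: 2 sum_k [e (1 - log e) + e^2/2]."""
    return 2.0 * sum(e * (1 - math.log(e)) + e * e / 2 for e in eps)

eps = [1e-2, 1e-4, 1e-6]    # hypothetical, rapidly decaying spectrum
```

For spectra of this kind the two expressions agree to a relative accuracy of order $\varepsilon_1^2$, and the $-\varepsilon_k\log\varepsilon_k$ term indeed dominates for $\varepsilon_k<1/e$.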
In this expression, both $ \varepsilon_k$ and $-\varepsilon_k \log \varepsilon_k$ make comparable contributions to $ R_{\rm scal.}$ for the most relevant eigenvalues, but $-\varepsilon_k \log \varepsilon_k$ always dominates whenever $\varepsilon_k <1/ e$, which again is the case for all values of $\eta$ except for those extremely close to $\eta=1$. In order to compare the behavior of the eigenvalues of $\mathcal{V}_{AA^*}$ with those of $V_A$ we choose to plot $-\log\varepsilon_k$ as a function of the number of points in the interval $A$. Note that the smaller the value of $-\log\varepsilon_k$ for a given pair of eigenvalues $\{\nu_{2k},\nu_{2k-1}\}$, the greater the contribution to $R_{\rm scal.}$, since the resulting function appears multiplied by $\varepsilon_k$ in the reflected entropy expression. Indeed, the closer to $0$ a given $\varepsilon_k$ is, the smaller its contribution, since then $\varepsilon_k \log \varepsilon_k \rightarrow 0$ and $(\varepsilon_k+1) \log (\varepsilon_k+1) \rightarrow \log 1=0$. As for the eigenvalues of $V_A$, in that case there is no doubling but, just like for the reflected entropy, each positive eigenvalue is always accompanied by its negative counterpart, so the arrangement \req{arraa} can be performed as well, where now the $\varepsilon_k$ are no longer small in general.
In Fig.\,\ref{refiss122} we plot the function $-\log\varepsilon_k$ as we approach the continuum for the eigenvalues of $\mathcal{V}_{AA^*}$ and $V_A$ which contribute the most to the reflected and entanglement entropies, respectively. The greatest contribution comes, in both cases, from the lowest curve, and so on. In a very similar fashion to the situation encountered for a free fermion in \cite{Bueno:2020vnx}, we observe that only a few eigenvalues make a significant contribution to $R(A,B)$. The eigenvalues quickly stabilize as we approach the continuum, as expected for a finite type-I algebra. On the other hand, in the entanglement entropy case, an increasing number of eigenvalues of $V_A$ become relevant, which produces the usual logarithmically divergent behavior.
It is natural to wonder how well the first eigenvalues manage to reproduce the full reflected entropy result. In order to test this, one can define ``partial'' reflected entropies as
\begin{equation}
R_{\rm scal.}^{(p)}=2\sum_{k=1}^{p}\left[ (\varepsilon_k+1) \log (1+\varepsilon_k) - \varepsilon_k \log \varepsilon_k \right] \, ,
\end{equation}
where again it is understood that we have arranged the $\varepsilon_k$ from greatest to smallest. For our working example of $\eta=1/4$, one finds,
\begin{align}
R_{\rm scal.}^{(1)}(1/4)&=0.0089725\, , \\
R_{\rm scal.}^{(2)}(1/4)&= 0.0098531 \, , \\
R_{\rm scal.}^{(3)}(1/4)&= 0.0100063 \, , \\
R_{\rm scal.}^{(4)}(1/4)&= 0.0100385 \, , \\
R_{\rm scal.}^{(\infty)}(1/4)&=0.0100512 \, .
\end{align}
As we can see, already with four eigenvalues we obtain a fairly accurate approximation to the full answer. A similar situation is encountered for intermediate values of $\eta$. On the other hand, as we approach the $\eta \rightarrow 1$ limit, a growing number of eigenvalues is required.
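The quick convergence of these truncations is easy to reproduce: for any rapidly decaying spectrum, the partial sums increase monotonically and saturate after a few terms. The $\varepsilon_k$ below are made up for illustration (they are not the actual $\eta=1/4$ eigenvalues):

```python
import math

def partial_R(eps, p):
    """R^(p): truncation of R = 2 sum_k [(1+e)log(1+e) - e log e] to the p
    largest eps_k (eps assumed sorted in decreasing order)."""
    return 2.0 * sum((1 + e) * math.log(1 + e) - e * math.log(e)
                     for e in eps[:p])

# hypothetical spectrum, decaying roughly geometrically
eps = [3e-3, 2e-4, 3e-5, 6e-6, 1e-6, 2e-7]
partials = [partial_R(eps, p) for p in range(1, len(eps) + 1)]
```

For this toy spectrum the four-term truncation already reproduces the full sum to better than one percent, mirroring the behavior of the actual numerics.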
\subsection{Monotonicity of reflected entropy}
\begin{figure}[t]\centering
\includegraphics[scale=0.68]{monotonicity.pdf}
\caption{For a free fermion and a chiral scalar, we plot the reflected entropy corresponding to a fixed interval $A$ and a region $\upsilon B$ consisting of two intervals obtained as follows: given a single interval $B$ identical to $A$ and with a fixed cross-ratio $\eta=1/9$, we remove a certain subset of $B$ symmetric around its center so that we keep a total fraction $\upsilon$ of $B$. For instance, $\upsilon=2/5$ means that we have divided $B$ in five identical pieces and we have computed reflected entropy for $A$ and the pair of intervals resulting from removing the three intermediate fifths of $B$. The result appears normalized by $R(A,B)$, {\it i.e.,}\ by the one obtained by considering the full interval $B$. The black dots correspond to the limit cases and are shared by the two models. }
\label{mono}
\end{figure}
The monotonicity of reflected entropy under inclusions (or its lack thereof) is an open problem. Namely, it is not known whether
\begin{equation}\label{monos}
R(A,BC) \overset{?}{ \geq} R(A,B)\, ,
\end{equation}
is a general property of reflected entropy. An analogous inequality was proven for integer-$n>1$ R\'enyi versions of the reflected entropy in \cite{Dutta:2019gen}, but the $n=1$ case remains open.
We have tested the validity of \req{monos} for the free scalar and the free fermion by computing reflected entropy of pairs of regions $A$ and $\upsilon B$ where only a fraction $\upsilon$ of the original interval $B$ is considered. In Fig.\,\ref{mono}, we have considered a particular case corresponding to intervals $A,B$ with cross-ratio $\eta=1/9$.
We find that \req{monos} always holds, {\it i.e.,}\ as we increase the fraction of $B$ which we consider, the reflected entropy grows. Hence, reflected entropy indeed satisfies the monotonicity property in these cases. We have repeated the experiment for other values of $\eta$, and \req{monos} is always respected both for the scalar and the fermion. While our analysis is only partial, our results suggest that \req{monos} indeed holds for all possible choices of $A,B,C$ in the case of free scalars and fermions in $d=2$.
\section{Reflected entropy in $d=3$}\label{2plus1}
In this section we move to $(2+1)$-dimensional theories. In particular, we compute the reflected entropy for free massless scalars and fermions. We choose simple regions $A,B$ corresponding to parallel squares of length $L $ separated a distance $\ell$ along their bases. We study the behavior of $R(A,B)$ both for small and large values of $L/\ell$. For the latter, we extract the coefficients controlling the linear growth and compare them to the mutual information ones for both theories as well as for holographic Einstein gravity. Regarding the former, we observe a pattern, shared by the $d=2$ theories considered in the previous section, which leads us to conjecture that reflected entropy and mutual information for pairs of regions characterized by scales $L_A\sim L_B\sim L$ and separated a distance $\ell$ are universally related in the large-separation regime ($x\equiv L/\ell \ll 1$) by $R(x)\sim - I(x)\log x$ in general dimensions.
\subsection{Free scalar correlators}
In the case of the scalar, the Hamiltonian we have considered reads
\begin{equation}
H=\frac{1}{2} \sum_{n,m=-\infty}^{\infty} \left[ \pi_{n,m}^2 + (\phi_{n+1,m}- \phi_{n,m})^2 + (\phi_{n,m+1} -\phi_{n,m})^2 \right]\, ,
\end{equation}
where the lattice spacing has been set to one. In this case, the formulation is in terms of bosonic fields and momenta, so the discussion in section \ref{tone} applies, and the relevant formulas for the reflected entropy are \req{phiii}, \req{entroo}. The relevant correlators read \cite{Casini:2009sr}
\begin{align}
X_{(0,0),(i,j)}\equiv \braket{\phi_{0,0}\phi_{i,j}} &=\frac{1}{8\pi^2} \int_{-\pi}^{\pi} \mathrm{d} x \int_{-\pi}^{\pi} \mathrm{d} y \frac{\cos (i x) \cos(j y)}{\sqrt{2(1-\cos(x))+2(1-\cos(y))}}\, , \\
P_{(0,0),(i,j)}\equiv\braket{\pi_{0,0}\pi_{i,j}} &=\frac{1}{8\pi^2} \int_{-\pi}^{\pi} \mathrm{d} x \int_{-\pi}^{\pi} \mathrm{d} y \cos(i x) \cos(jy) \sqrt{2(1-\cos x)+2(1-\cos y)}\, .
\end{align}
The subindices here refer to the coordinates of the corresponding two-dimensional lattice points. The correlators are invariant under translations, so that $\braket{\phi_{0,0}\phi_{i,j}} =\braket{\phi_{n,m}\phi_{i+n,j+m}}$, and the same holds for the momenta.
For computational purposes, it is useful to perform the integral over $y$ in both expressions. The result can be written in terms of the regularized hypergeometric function ${}_p \tilde F_q$ as
\begin{align}
X_{(0,0),(i,j)} &=\frac{1}{2^{5/2}\pi } \int_{-\pi}^{\pi} \mathrm{d} x \frac{ \cos (i x) }{\sqrt{3-\cos x}} \, {}_3\tilde F_{2} \left[\{\tfrac{1}{2},\tfrac{1}{2},1\} ; \{1-j, 1+j\}; \frac{2}{3-\cos x} \right] \, , \\
P_{(0,0),(i,j)}&=\frac{1}{2^{3/2}\pi} \int_{-\pi}^{\pi} \mathrm{d} x \cos (i x) \sqrt{3-\cos x} \, {}_3\tilde F_{2} \left[\{\tfrac{1}{2},\tfrac{1}{2},1\} ; \{1-j, 1+j\}; \frac{2}{3-\cos x} \right] \, .
\end{align}
These integrals can be easily evaluated numerically.
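For instance, one can simply apply a two-dimensional midpoint rule to the original double integrals (our choice here; the reduced one-dimensional expressions above are faster but require ${}_3\tilde F_2$). The midpoint grid never hits the integrable singularity of the $X$ integrand at $x=y=0$:

```python
import math

def lattice_corr(i, j, momentum=False, N=120):
    """<phi_{0,0} phi_{i,j}> (or <pi_{0,0} pi_{i,j}> if momentum=True) by a
    2d midpoint rule over the Brillouin zone [-pi, pi]^2."""
    h = 2 * math.pi / N
    acc = 0.0
    for a in range(N):
        x = -math.pi + (a + 0.5) * h
        ex = 2 * (1 - math.cos(x))
        for b in range(N):
            y = -math.pi + (b + 0.5) * h
            w = math.sqrt(ex + 2 * (1 - math.cos(y)))  # lattice dispersion
            acc += math.cos(i * x) * math.cos(j * y) * (w if momentum else 1 / w)
    return acc * h * h / (8 * math.pi ** 2)

X00 = lattice_corr(0, 0)
P00 = lattice_corr(0, 0, momentum=True)
```

A simple sanity check is the single-site uncertainty bound: the symplectic eigenvalue $\sqrt{X_{00}P_{00}}$ of one lattice site must exceed $1/2$ (strictly, since a single site of the vacuum is in a mixed state), i.e., $X_{00}P_{00}>1/4$; the correlators are also symmetric under $(i,j)\rightarrow(j,i)$.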
Regions $A,B$ in the lattice correspond to subsets of points $p=(p_x,p_y)$. For instance, for a square region of length $L$ and with the lower left vertex at $(0,0)$, we have $A \equiv \{(p_x,p_y)\in \mathbb{Z}^2 \, |\, p_x,p_y=0,\dots, L \}$.
Given a pair of two-dimensional regions $A$ and $B$ we can evaluate the reflected entropy as follows. First, we need to evaluate the matrices $X$ and $P$. These are composed of four blocks corresponding to the $AA$, $AB$, $BA$ and $BB$ components, respectively. For instance, $X_{AB}$ corresponds to the block of entries $X_{p,q}$ where $p=(p_x, p_y)$, $q=(q_x,q_y)$ are points in the lattice such that $p\in A$ and $q\in B$. Once we have $X$ and $P$, we need to evaluate $g(XP)$. In order to do that, we find the eigenvalues $\{ d_m\}$ of the matrix $XP$. Then, we build the diagonal matrix $\sqrt{d_m-1/4}\sqrt{d_m}^{-1}\delta_{mn}$ and transform it back to the original basis, which yields $g(XP)$. In order to obtain the off-diagonal blocks in $\Phi$ and $\Pi$, we multiply it by $X$ or $P$ as required. Finally, we restrict the matrices $\Phi$ and $\Pi$ to the $A$ region as
\begin{align} \label{phiii0}
\Phi|_{AA^*}=\left(
\begin{array}{cc}
X|_A & \left[g(XP) X\right]|_A\\
\left[ g(XP) X\right]|_A & X|_A
\end{array}
\right)\, , \quad \Pi|_{AA^*}=\left(
\begin{array}{cc}
P|_A & \left[-Pg(XP)\right]|_A \\
\left[- Pg(XP) \right]|_A & P|_A
\end{array}
\right)\, ,
\end{align}
where we used the notation $|_A$ to refer to the $AA$ block in each case. The final step is to evaluate $C_{AA^*}\equiv \sqrt{\Phi|_{AA^*} \Pi|_{AA^*}}$. Given the eigenvalues of this matrix, which we denote $\{\nu_m \}$, the reflected entropy finally reads
\begin{equation}
R_{\rm scal.}= \sum_m (\nu_m+1/2) \log (\nu_m+1/2)- (\nu_m-1/2)\log (\nu_m-1/2)\, .
\end{equation}
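The recipe above can be sketched compactly in \texttt{numpy}. For illustration we feed it the ground-state correlators of a periodic harmonic chain with a small regulator mass (a one-dimensional stand-in of our own choosing; in the two-dimensional case only the construction of $X$ and $P$ changes):

```python
import numpy as np

def chain_correlators(N, m):
    """Ground-state <phi phi> and <pi pi> of a periodic harmonic chain,
    with a small mass m as IR regulator."""
    k = 2 * np.pi * np.arange(N) / N
    w = np.sqrt(m**2 + 2 * (1 - np.cos(k)))
    d = np.subtract.outer(np.arange(N), np.arange(N))
    ph = np.cos(d[:, :, None] * k)
    return (ph / (2 * w)).sum(-1) / N, (ph * w / 2).sum(-1) / N

def gaussian_entropy(X, P):
    """S = sum_m (nu+1/2)log(nu+1/2) - (nu-1/2)log(nu-1/2), nu = sqrt(eig XP)."""
    nu = np.sqrt(np.abs(np.linalg.eigvals(X @ P).real))
    nu = np.clip(nu, 0.5 + 1e-12, None)       # guard roundoff below nu = 1/2
    return float(np.sum((nu + .5) * np.log(nu + .5) - (nu - .5) * np.log(nu - .5)))

def reflected_entropy(X, P, A, B):
    """The recipe in the text: g(XP) on A u B, then Phi|_{AA*} and Pi|_{AA*}."""
    AB = A + B
    Xr, Pr = X[np.ix_(AB, AB)], P[np.ix_(AB, AB)]
    dv, V = np.linalg.eig(Xr @ Pr)            # eigenvalues of XP are >= 1/4
    f = np.sqrt(np.clip(dv.real - 0.25, 0, None) / dv.real)
    g = (V @ np.diag(f) @ np.linalg.inv(V)).real
    a = np.arange(len(A))                     # A occupies the first block of AB
    XA, PA = Xr[np.ix_(a, a)], Pr[np.ix_(a, a)]
    gX, Pg = (g @ Xr)[np.ix_(a, a)], (Pr @ g)[np.ix_(a, a)]
    Phi = np.block([[XA, gX], [gX, XA]])
    Pi = np.block([[PA, -Pg], [-Pg, PA]])
    return gaussian_entropy(Phi, Pi)

N = 40
X, P = chain_correlators(N, m=1e-3)
A, B = list(range(8)), list(range(12, 20))
R = reflected_entropy(X, P, A, B)
ent = lambda r: gaussian_entropy(X[np.ix_(r, r)], P[np.ix_(r, r)])
I = ent(A) + ent(B) - ent(A + B)
```

Comparing with the mutual information computed from the same correlators provides a direct check of the inequality $R(A,B)\geq I(A,B)\geq 0$ of \req{ir}.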
\subsection{Free fermion correlators}
For the $(2+1)$-dimensional Dirac fermion, the lattice Hamiltonian reads
\begin{equation}
H=- \frac{i}{2} \sum_{n,m} \left[\left( \psi_{m,n}^{\dagger} \gamma^0 \gamma^1 (\psi_{m+1,n} - \psi_{m,n}) + \psi^{\dagger}_{m,n}\gamma^0\gamma^2 (\psi_{m,n+1}-\psi_{m,n}) \right)- h.c. \right]\, ,
\end{equation}
and the corresponding correlators \cite{Casini:2009sr}
\begin{equation}
\braket{\psi^{\dagger}_{n,k} \psi_{j,l}}=\frac{1}{2} \delta_{nj}\delta_{kl} + \int_{-\pi}^{\pi} \mathrm{d} x \int_{-\pi}^{\pi} \mathrm{d} y \frac{\sin( x) \gamma^0 \gamma^1 +\sin (y) \gamma^0\gamma^2}{8\pi^2\sqrt{\sin^2x+\sin^2y}}e^{i (x (n-j)+ y (k-l))}\, .
\end{equation}
Just like in the case of the scalar, the subindices in the fermionic fields above correspond to the coordinates of the corresponding lattice points.
The relevant formulas for the evaluation of the reflected entropy in the case of fermionic Gaussian systems were obtained in \cite{Bueno:2020vnx}. Let us quickly summarize the relevant results here. We start with $N$ fermions, $\psi_i$, $i=1,\dots,N$, satisfying canonical anticommutation relations $\{\psi_i,\psi^{\dagger}_j \}=\delta_{i,j}$ and a density matrix $\rho$ in the corresponding Hilbert space of dimension $2^N$. We can purify this state by doubling the Hilbert space and use the modular reflection operator $J$ associated to this state and the algebra of the first $N$ fermions to double the fermion algebra ---doing this properly involves a unitary constructed from the fermion number operator \cite{Doplicher:1969tk}--- in a way such that we are left with a canonical set of $2N$ operators. Denoting by $D_{ij}\equiv {\rm Tr} (\rho\, \psi_i \psi_j^{\dagger})$ the correlators of the original system, the matrix which turns out to be relevant for the evaluation of reflected entropy is given by
\begin{equation}
C=\left(
\begin{array}{cc}
D & \sqrt{D(1-D)} \\
\sqrt{D(1-D)} & (1- D)
\end{array}
\right)\, ,\label{cij}
\end{equation}
where the additional blocks correspond to the appearance of new correlators involving the new fermionic fields in the doubled system. Just like in the case of the scalars, the final answer can be fully written in terms of correlators of the original system, as is apparent in \req{cij}. Finally, the reflected entropy for a pair of regions $A$,$B$ is obtained from the restrictions of the corresponding block matrices to $A$
\begin{equation}
C_{AA^*}=\left(
\begin{array}{cc}
D|_A& \left.\sqrt{D(1-D)}\right|_A\\
\left.\sqrt{D(1-D)}\right|_A & (1- D)|_A
\end{array}
\right)\, .\label{cija}
\end{equation}
Denoting by $\{ \nu_m \}$ the eigenvalues of $C_{AA^*}$, we finally have
\begin{equation}
R_{\rm ferm.}=- \sum_m \left[ \nu_m \log(\nu_m)+ (1- \nu_m)\log(1-\nu_m)\right]\, .
\end{equation}
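A compact \texttt{numpy} sketch of this fermionic recipe, fed with the ground state of a half-filled periodic tight-binding chain (again a one-dimensional stand-in of our own choosing; only the construction of $D$ changes for the $(2+1)$-dimensional Dirac correlators, and doubling-related normalizations are ignored):

```python
import numpy as np

def fermion_D(N):
    """D_ij = <psi_i psi_j^dagger> = delta_ij - C_ij for the half-filled,
    periodic tight-binding chain ground state (C is real symmetric here)."""
    k = 2 * np.pi * np.arange(N) / N
    occ = np.cos(k) > 0                       # filled negative-energy modes
    d = np.subtract.outer(np.arange(N), np.arange(N))
    C = np.cos(d[:, :, None] * k[occ]).sum(-1) / N
    return np.eye(N) - C

def fermion_entropy(M):
    """S = -sum_m [nu log nu + (1-nu) log(1-nu)] over the eigenvalues of M."""
    nu = np.clip(np.linalg.eigvalsh(M), 1e-12, 1 - 1e-12)
    return float(-np.sum(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)))

def reflected_entropy(D, A, B):
    """Restrict D to A u B, build sqrt(D(1-D)), assemble C_{AA*}, take S."""
    AB = A + B
    Dr = D[np.ix_(AB, AB)]
    lam, V = np.linalg.eigh(Dr)               # eigenvalues of Dr lie in [0,1]
    K = V @ np.diag(np.sqrt(np.clip(lam * (1 - lam), 0, None))) @ V.T
    a = np.arange(len(A))                     # A occupies the first block of AB
    Da, Ka = Dr[np.ix_(a, a)], K[np.ix_(a, a)]
    C_AAs = np.block([[Da, Ka], [Ka, np.eye(len(A)) - Da]])
    return fermion_entropy(C_AAs)

N = 42                 # N = 2 (mod 4): no modes sitting exactly at the Fermi points
D = fermion_D(N)
A, B = list(range(8)), list(range(12, 20))
R = reflected_entropy(D, A, B)
ent = lambda r: fermion_entropy(D[np.ix_(r, r)])
I = ent(A) + ent(B) - ent(A + B)
```

As in the bosonic case, one can check $R(A,B)\geq I(A,B)\geq 0$ on this toy example.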
When taking the continuum limit, we have to take into account the doubling of the fermionic degrees of freedom on the lattice. In $(2+1)$ dimensions, this requires dividing the final result by $4$ in order to obtain the result corresponding to a Dirac fermion. When presenting the results, we will consider reflected entropy (or mutual information) per degree of freedom, which in this case requires dividing by an additional factor of $2$.
\subsection{Reflected entropy for two parallel squares}
Using the results of the previous two subsections, we are ready to evaluate the reflected entropy of scalar and fermionic systems in $(2+1)$ dimensions. We do this for regions $A$,$B$ corresponding to two squares of length $L$ aligned so that the second square can be obtained by moving the first a distance $L+\ell$ along the (positive) direction of its base. We then have two parallel squares separated by a distance $\ell$. The corresponding sets in the lattice correspond to $A \equiv \{(p_x,p_y)\in \mathbb{Z}^2 \, |\, p_x,p_y=0,\dots, L \}$ and $B \equiv \{(p_x,p_y)\in \mathbb{Z}^2 \, |\, p_x=L+\ell,\dots, 2L+\ell\, , \, p_y=0,\dots,L \}$.
Using the procedures explained in the previous two subsections, we obtain the results shown in Fig.\,\ref{refiss331} for the corresponding reflected entropies as a function of the quotient $L/\ell$. Just as happens for the mutual information ---also shown in the plots--- the scalar result is greater than the fermion one in the whole range of values. Also, in both cases, we find that the general inequality \req{ir} holds. In the case of the scalar, it is actually possible to obtain the reflected entropy using the formulas in subsection \ref{take2} instead of those in subsection \ref{tone}. We have done so and verified that the results agree, which is a good consistency check for our general formulas.
\begin{figure}[t] \centering
\includegraphics[scale=0.65]{RSF6.pdf}
\caption{We plot the reflected entropy (per degree of freedom) for regions $A$, $B$, corresponding to two squares of length $L$ separated by a distance $\ell$ as a function of $L/\ell$ for a free scalar (blue) and a free fermion (red). For both fields we also plot the mutual information $I(A,B)$ for the same pair of regions (dashed lines). The latter curves are obtained numerically using the usual definition $I(A,B)=S_{\ssc \rm EE}(A)+S_{\ssc \rm EE}(B)-S_{\ssc \rm EE}(AB)$, where the corresponding entanglement entropies are computed in the lattice using the same von Neumann entropy formulas as for the reflected entropies, but associated to $\rho_A$ instead of $\rho_{AA^*}$ in each case. }
\label{refiss331}
\end{figure}
For small values of $x\equiv L/\ell$ we do not have a priori a clear guess of what the behavior of $R_{\rm scal.}$ and $R_{\rm ferm.}$ should be. We have looked for trial functions involving simple combinations of powers and logarithms, requiring that they vanish at $x=0$, that they be positive and monotonically growing in the whole domain considered, and that the fit coefficients be neither too large nor too small. In the case of the scalar, we find that the following function does a good job in fitting the numerical data
\begin{align}\label{sepas}
R_{\rm scal.}(x \ll 1) &\sim - 0.133 x^2 \log x+ 0.0497 x^2 \, .
\end{align}
We plot this function alongside the numerical data points in Fig.\,\ref{refiss33d1}. As we can see, the fit is actually good up to values $x\lesssim 0.38$. In the case of the fermion, we find that the following fit approximates the data points well up to similar values of $x$
\begin{align}\label{sepaf}
R_{\rm ferm.}(x \ll 1) &\sim - 0.111 x^4 \log x-0.03144 x^4 \, .
\end{align}
This is shown in the right plot of Fig.\,\ref{refiss33d1}. It is interesting to compare these expressions with the corresponding mutual information behavior. For that, we note that, given two regions with characteristic scale $L$ separated by a much larger distance $\ell$, one finds for general $d$-dimensional CFTs \cite{Calabrese:2009ez,Calabrese:2010he,Cardy:2013nua,Agon:2015ftl}
\begin{equation}\label{ix}
I (x \ll 1)\sim x^{4\Delta}\, ,
\end{equation}
where $\Delta$ is the scaling dimension of the lowest-dimensional operator of the corresponding theory. Hence, for scalars and fermions we have
\begin{equation}
I_{\rm ferm.} (x \ll 1)\sim x^{2(d-1)}\, , \quad I_{\rm scal.} (x \ll 1)\sim x^{2(d-2)}\, ,
\end{equation}
respectively. Thus, we observe that $I_{\rm ferm.} (x \ll 1)\sim x^4$ and $I_{\rm scal.} (x \ll 1)\sim x^{2}$ in the three-dimensional case considered here. Comparing with \req{sepas} and \req{sepaf}, we observe that reflected entropy behaves with the same power as mutual information multiplied by a logarithm of $L/\ell$. Going back to section \ref{chiralsca}, we observe that exactly the same phenomenon is found both for the chiral scalar\footnote{Note that in the case of the chiral scalar considered in section \ref{chiralsca}, the lowest-dimensional operator is $\partial \phi$, for which $\Delta=1$.} and the fermion. These results are very suggestive and lead us to propose the following conjecture.
{\bf Conjecture:} The reflected entropy for two regions $A,B$ with characteristic scales $L_A\sim L_B\sim L$ separated a distance $\ell$ behaves as
\begin{equation}\label{conje}
R(x) \sim -I(x) \log x\sim - x^{4\Delta} \log x\, , \quad (x\equiv L/\ell)\, ,
\end{equation}
in the $x \ll 1$ regime for general CFTs in arbitrary dimensions.
It would be interesting to test the validity of this conjectural relation for additional models in various dimensions (as well as for higher-dimensional free-field theories) or to (dis)prove it in general. A natural setup where \req{conje} could be tested would be holography. In that case, the leading-order result of both reflected entropy and mutual information vanishes for sufficiently small values of $x$ ({\it e.g.,}\ for $\eta< 1/2$ in the intervals case in $d=2$). Accessing the first non-vanishing contribution in the mutual information case in that regime requires considering quantum corrections to the Ryu-Takayanagi formula \cite{Faulkner:2013ana}, and the result agrees with the general CFT behavior in \req{ix} \cite{Agon:2015ftl}. An analogous expression for the leading correction of holographic reflected entropy was presented in \cite{Dutta:2019gen}, so it should in principle be possible to check the validity of our conjecture in that case.
\begin{figure}[t] \centering
\includegraphics[scale=0.642]{nearzeroscalar.pdf}
\includegraphics[scale=0.645]{nearzerofermion2.pdf}
\caption{We plot the reflected entropy (per degree of freedom) for regions $A$, $B$, corresponding to two squares of length $L$ separated by a distance $\ell$ as a function of $L/\ell$ for a free scalar (blue dots) and a free fermion (red dots) in the small-$L/\ell$ region. We also show the trial functions explained in the text.}
\label{refiss33d1}
\end{figure}
For large values of $L/\ell$, the behavior of $R(A,B)$ becomes linear. The reason for this is that, as the length of the squares grows with respect to the separation, the setup becomes more and more similar to the case of two infinitely-extended parallel sets, for which the leading contribution is an ``area-law''-like term. The situation is analogous for the mutual information, and the corresponding linear growth is also apparent in the dashed lines in Fig. \ref{refiss331}. More generally, for any $d$-dimensional CFT, when $A,B$ are two sets with large parallel faces of area $\mathcal{A}$ separated by a comparatively small distance $\ell$ one finds
\begin{equation}
I(A,B)= \kappa^{(I)}_d \frac{\mathcal{A}}{\ell^{d-2}}+ \text{subleading}\, , \quad R(A,B) = \kappa^{(R)}_d \frac{\mathcal{A}}{\ell^{d-2}}+ \text{subleading}\, .
\end{equation}
As shown in \cite{Casini:2009sr,Casini:2005zv}, both for free scalars and fermions, the values for the mutual information coefficients $\kappa^{(I)}_d$ can be obtained from a dimensional reduction to $(1+1)$ dimensions. The results are given in terms of the functions appearing in the entropic version of the $c$-theorem\footnote{This is defined from the entanglement entropy of an interval of length $L$ as $c(L)\equiv L \tfrac{dS_{\ssc \rm EE} (L)}{dL}$.} \cite{Casini:2004bw} corresponding to the respective free theories in that number of dimensions. The explicit results in $d=3,4,5,6$ for both types of fields read
\begin{align}
\kappa^{(I)}_{3, {\rm \, scal.}}\simeq3.97\cdot 10^{-2} \, , \quad \kappa^{(I)}_{4, {\rm \, scal.}}\simeq 5.54 \cdot 10^{-3} \, , \quad \kappa^{(I)}_{5, {\rm \, scal.}}\simeq1.31\cdot 10^{-3} \, , \quad \kappa^{(I)}_{6, {\rm \, scal.}}\simeq4.08 \cdot 10^{-4} \, , \\
\kappa^{(I)}_{3, {\rm \, ferm.}}\simeq3.61\cdot 10^{-2} \, , \quad \kappa^{(I)}_{4, {\rm \, ferm.}}\simeq5.38 \cdot 10^{-3} \, , \quad \kappa^{(I)}_{5, {\rm \, ferm.}}\simeq1.30\cdot 10^{-3} \, , \quad \kappa^{(I)}_{6, {\rm \, ferm.}}\simeq4.06 \cdot 10^{-4} \, .
\end{align}
As $d\rightarrow \infty$, the scalar and fermion results tend to a common value, given by \cite{Casini:2009sr}
\begin{equation}
\kappa^{(I)}_{d \rightarrow \infty}=\frac{\Gamma\left[\tfrac{d-2}{2} \right]}{2^{d+2} \pi^{\tfrac{d-2}{2}}}\, .
\end{equation}
Naturally, the $d=3$ coefficients are the slopes of the leading contributions to the dashed curves shown in Fig. \ref{refiss331} as $L/\ell \gg 1$. In order to extract these values from the numerical results, we perform a fit with a linear, a logarithmic and a constant function to the data points obtained with $L/\ell >4$. The results obtained from numerical fits are in good agreement with the values of $\kappa^{(I)}_{3, {\rm \, scal.}}$ and $\kappa^{(I)}_{3, {\rm \, ferm.}}$ shown above. Proceeding similarly for the reflected entropy, we find
\begin{equation}
\kappa^{(R)}_{3, {\rm \, scal.}}\simeq 6.95\cdot 10^{-2} \, , \quad \kappa^{(R)}_{3, {\rm \, ferm.}}\simeq 6.16\cdot 10^{-2}\, .
\end{equation}
We can compare these results to holographic theories dual to Einstein gravity, for which the values of $\kappa^{(R)}_{d, {\rm \, holo.}}$ and $\kappa^{(I)}_{d, {\rm \, holo.}}$ can be obtained analytically in general dimensions. In the case of the mutual information, the coefficient can be extracted from the
universal term in the entanglement entropy corresponding to a strip of width $\ell$ much smaller than the rest of dimensions. The bulk action reads
\begin{equation}
I_g=\frac{1}{16\pi G} \int d^{d+1}x \sqrt{|g|} \left[\frac{d(d-1)}{L^2}+R \right]\, ,
\end{equation}
where $G$ is the Newton constant and where we parametrized the cosmological constant so that AdS$_{(d+1)}$ is a solution of the theory with radius $L$. Entanglement entropy can then be obtained from the Ryu-Takayanagi prescription \cite{Ryu:2006bv,Ryu:2006ef} and the relevant coefficient turns out to be given by \cite{Ryu:2006bv}
\begin{equation}
\kappa^{(I)}_{d, {\rm \, holo.}}= \frac{2^{d-3}\pi^{\tfrac{d-1}{2}} \Gamma\left[\tfrac{d}{2(d-1)} \right]^{d-1} }{(d-2) \Gamma\left[\tfrac{1}{2(d-1)} \right]^{d-1}} \frac{L^{d-1}}{G}\, .
\end{equation}
In the case of the reflected entropy, we can obtain the result assuming its relation to the minimal entanglement wedge cross section, $R_{\rm holo.}(A,B)=2E_W(A,B)$ proposed in \cite{Dutta:2019gen}. This calculation was performed in \cite{Jokela:2019ebz} in the more general case of two parallel strips of fixed width. Taking the large-width limit of the result we can extract $\kappa^{(R)}_{d, {\rm \, holo.}}$. The result reads
\begin{equation}
\kappa^{(R)}_{d, {\rm \, holo.}}= \frac{2^{d-3} \pi^{\tfrac{d-2}{2}}\Gamma\left[\tfrac{d}{2(d-1)} \right]^{d-2} }{(d-2) \Gamma\left[\tfrac{1}{2(d-1)} \right]^{d-2}} \frac{L^{d-1}}{G}\, .
\end{equation}
In order to compare with the free-field results, we can consider the quotient between both coefficients, which reads
\begin{equation}
\frac{\kappa^{(R)}_{d, {\rm \, holo.}}}{\kappa^{(I)}_{d, {\rm \, holo.}}}= \frac{\Gamma\left[\tfrac{1}{2(d-1)} \right]}{\sqrt{\pi} \Gamma\left[\tfrac{d}{2(d-1)} \right]}\, .
\end{equation}
This is always larger than $1$, as it should be in view of the inequality \req{ir}. In particular,
\begin{equation}
\frac{\kappa^{(R)}_{3, {\rm \, holo.}}}{\kappa^{(I)}_{3, {\rm \, holo.}}}\simeq 1.669 \, , \quad \frac{\kappa^{(R)}_{4, {\rm \, holo.}}}{\kappa^{(I)}_{4, {\rm \, holo.}}}\simeq 2.319 \, , \quad \frac{\kappa^{(R)}_{5, {\rm \, holo.}}}{\kappa^{(I)}_{5, {\rm \, holo.}}}\simeq 2.963\, , \quad \frac{\kappa^{(R)}_{6, {\rm \, holo.}}}{\kappa^{(I)}_{6, {\rm \, holo.}}}\simeq 3.604\, .
\end{equation}
As $d\rightarrow \infty$, one has
\begin{equation}
\frac{\kappa^{(R)}_{d\rightarrow \infty, {\rm \, holo.}}}{\kappa^{(I)}_{d\rightarrow \infty, {\rm \, holo.}}}= \frac{1}{\pi} \left[2d+(\log 4-2) + \mathcal{O} (1/d) \right]\, .
\end{equation}
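The ratios quoted above are straightforward to verify numerically. The following snippet is an illustrative check, not part of the derivation; it evaluates the exact expression together with the large-$d$ asymptote using only the Python standard library:

```python
import math

def kappa_ratio(d):
    """Exact holographic ratio kappa^(R)/kappa^(I) for boundary dimension d:
    Gamma[1/(2(d-1))] / (sqrt(pi) * Gamma[d/(2(d-1))])."""
    x = 1.0 / (2.0 * (d - 1.0))
    return math.gamma(x) / (math.sqrt(math.pi) * math.gamma(d * x))

def kappa_ratio_large_d(d):
    """Leading large-d behavior: (2d + log 4 - 2) / pi."""
    return (2.0 * d + math.log(4.0) - 2.0) / math.pi

# d = 3..6 reproduce the values 1.669, 2.319, 2.963, 3.604 quoted below
ratios = {d: round(kappa_ratio(d), 3) for d in (3, 4, 5, 6)}
```

The ratio exceeds $1$ for every $d\geq 3$, consistent with the inequality \req{ir}, and the asymptote is already accurate at the percent level for moderately large $d$.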
The $d=3$ result is not so different from the ones we find numerically for the free fields. For those, we obtain
\begin{equation}
\frac{\kappa^{(R)}_{3, {\rm \, scal.}}}{\kappa^{(I)}_{3, {\rm \, scal.}}}\simeq 1.75 \, , \quad \quad \frac{\kappa^{(R)}_{3, {\rm \, ferm.}}}{\kappa^{(I)}_{3, {\rm \, ferm.}}}\simeq 1.71 \, .
\end{equation}
Some degree of similarity between the free fermion and holography ---as far as entropic measures are concerned--- has been previously observed in other situations ---see {\it e.g.,}\ \cite{Bueno:2015qya}. Here we observe that the fermion result is indeed more similar to the holographic answer, but it is not extremely close to either of the two.
While the leading contribution for large values of $L/\ell$ is linear, there is a subleading logarithmic term. The presence of logarithmic contributions associated with corner regions is characteristic of this kind of measure. In the entanglement entropy case, the corresponding universal contribution has been the subject of intense study ---see {\it e.g.,}\ \cite{Bueno:2019mex} for an updated list of relevant references. While we have not attempted to evaluate with reasonable numerical precision the logarithmic terms appearing in the case of the two square regions considered here for the reflected entropy, we point out that we do not expect the corresponding pieces to be immediately related to the entanglement entropy corner terms of a single square region (as opposed to mutual information, for which they are). In order to extract such a term from a reflected entropy calculation, we would need to consider regions $A,B$ corresponding to a square and the complement of a larger square, respectively. On the other hand, this also means that reflected entropy contains new universal coefficients with no immediate entanglement entropy counterpart.
We leave a study of such kind of terms for future work.
\section{Outlook}\label{finnnn}
As we have illustrated, the formulas presented here and in \cite{Bueno:2020vnx} allow for simple numerical evaluations of reflected entropy for free scalars and fermions. While the expressions are valid in general dimensions, our analysis so far has been mostly focused on two-dimensional theories. In section \ref{2plus1} we made a first incursion into higher dimensions, but we restricted ourselves to parallel square-like regions. It would be interesting to continue exploring higher-dimensional theories and the various universal terms associated with different kinds of regions ---{\it e.g.,}\ the coefficients $\kappa^{(R)}_d$ for $d>3$. As mentioned above, these will include terms with and without entanglement entropy counterparts.
Another direction would entail considering massive theories. In most cases, the relevant correlators are simple (and known) modifications of the ones we have used here for the corresponding massless cases, so such generalizations are clearly accessible.
It would also be interesting to better clarify the connection and differences between reflected entropy and other entanglement measures. In particular, it would be nice to test the validity of our conjectural relation \req{conje} for additional theories. In fact, perhaps a general proof could be attempted using Replica-trick methods \cite{Dutta:2019gen}. Beyond mutual information, connections between reflected entropy and odd-entropy have also been reported \cite{Tamaoka:2018ned,Berthiere:2020ihq,Mollabashi:2020ifv}, which would be interesting to examine further.
Finally, in \cite{Bueno:2020vnx} we introduced a modification of reflected entropy ---``type-I entropy''--- which differed from the former in the case of theories obtained from quotients of theories by global symmetry groups. Operators implementing the corresponding symmetry operations on type-I algebras can be constructed and computationally amenable notions of entropy can be associated to their expectation values. Those connect in a simple fashion the reflected entropies of complete theories and the type-I entropies of subalgebras. This suggests possible interesting entropic connections between bosonic and fermionic theories related by quotients.
We plan to explore some of these directions in the near future.
\section*{Acknowledgments}
We thank Cl\'ement Berthiere, Robie A. Hennigar and Javier M. Mag\'an for useful discussions.
This work was supported by the Simons foundation through the It From Qubit Simons collaboration.
H.C. was also supported by CONICET, CNEA, and Universidad Nacional de Cuyo, Argentina.
\section{Introduction} \label{sec_intro}
Understanding the star formation and interstellar medium (ISM) at low metallicity is crucial to unveil physical and chemical processes in the past Galactic environment or those in high-redshift galaxies, where the metallicity was significantly lower compared to the present-day solar neighborhood.
Hot \added{molecular} cores are one of the early stages of star formation and they play a key role in the formation of chemical complexity of the ISM.
Physically, hot cores are defined as having small source size (\replaced{$\leq$}{$\lesssim$}0.1 pc), high density (\replaced{$\geq$}{$\gtrsim$}10$^6$ cm$^{-3}$), and warm gas/dust temperature (\replaced{$\geq$}{$\gtrsim$}100 K) \citep[e.g., ][]{vanD98, Kur00}.
Chemistry of hot cores is characterized by sublimation of ice mantles, which accumulated in the course of star formation.
In cold molecular clouds and prestellar cores, gaseous molecules and atoms are frozen onto dust grains.
With increasing dust temperatures due to star formation activities, chemical reactions among heavy species become active on grain surfaces and form larger complex molecules \citep[e.g., ][]{Gar06}.
In addition, sublimated molecules, such as CH$_3$OH and NH$_3$, are subject to further gas-phase reactions \citep[e.g., ][]{NM04,Taq16}.
As a result, the warm and dense gas around a protostar becomes chemically rich, and the embedded protostar is observed as one of the most powerful molecular line emitters, called a hot core.
They are important targets for astrochemical studies of star-forming regions, because a variety of molecular species, including complex organic molecules (COMs), are often detected in hot cores \citep[][and references therein]{Her09}.
Thus detailed studies on chemical properties of hot cores are important for understanding complex chemical processes triggered by star formation.
Recent ALMA (Atacama Large Millimeter/submillimeter Array) observations of hot molecular cores in a nearby low metallicity galaxy, the Large Magellanic Cloud (LMC), have suggested that the metallicity has a significant effect on their molecular compositions \citep{ST16b, ST20, Sew18}; cf., the metallicity of the LMC is $\sim$1/2-1/3 of the solar neighborhood.
A comparison of molecular abundances between LMC and Galactic hot cores suggests that organic molecules (e.g., CH$_3$OH, a classical hot core tracer) show a large abundance variation in low-metallicity hot cores \citep{ST20}.
There are organic-poor hot cores that are unique in the LMC \citep{ST16b}, while there are relatively organic-rich hot cores, where the abundances of organic molecules roughly scale with the metallicity \citep{Sew18}.
Astrochemical simulations for low-metallicity hot cores suggest that dust temperature during the initial ice-forming stage would play a key role for making the chemical diversity of organic molecules \citep{Ach18, ST20}.
On the other hand, sulfur-bearing molecules such as SO$_2$ and SO are commonly detected in known LMC hot cores and their molecular abundances \replaced{simply}{roughly} scale with the metallicity of the LMC.
Although the reason is still under debate, the results suggest that SO$_2$ can be an alternative molecular species to trace hot core chemistry in metal-poor environments.
The above results suggest that molecular abundances in hot cores do not always simply scale with the elemental abundances of their parent environments.
However, it is still unclear if the observed chemical characteristics of LMC hot cores are common in other low metallicity environments or they are uniquely seen only in the LMC.
Currently, known low-metallicity hot core samples are limited to those in the LMC.
It is thus vital to understand universal characteristics of interstellar chemistry by studying chemical compositions of star-forming cores in diverse metallicity environments.
Recent surveys \citep[e.g.,][]{And15,And18,Izu17,Wen21} have found a number ($\sim$10--20) of star-forming region candidates in the extreme outer Galaxy, which is defined as having galactocentric distance ($D_{GC}$) larger than 18 kpc \citep{Yas06,Kob08}.
The extreme outer Galaxy has a very different environment from those in the solar neighborhood, with lower metallicity \citep[less than -0.5 dex,][]{Fer17,Wen19}, lower gas density \citep[e.g.,][]{Nak16}, and small or no perturbation from spiral arms.
Such an environment is of great interest for studies of the star formation and ISM in the early phase of the Milky Way formation and those in dwarf galaxies \citep{Fer98,Kob08}.
The low metallicity environment is in common with the Magellanic Clouds, and thus the extreme outer Galaxy is an ideal laboratory to test the universality of the low metallicity molecular chemistry observed in the LMC and SMC.
Among star-forming regions in the extreme outer Galaxy, WB89-789 (IRAS 06145+1455; 06$^\mathrm{h}$17$^\mathrm{m}$24$\fs$2, 14$\arcdeg$54$\arcmin$42$\arcsec$, J2000) has a particularly young and active nature \citep{Bra94}.
It is located at the galactocentric distance of 19.0 kpc and the distance from Earth is 10.7 kpc \citep[based on optical spectroscopy of a K3 III star,][]{Bra07}.
The metallicity of WB89-789 is estimated to be a factor of four lower than the solar value according to the Galactic oxygen abundance gradient reported in the literature \citep{Fer17,Wen19,Bra19,Are20,Are21}.
The region is associated with dense clouds traced by CS and CO \citep{Bra07}.
The total mass of the cloud is estimated to be 6 $\times$ 10$^3$ M$_{\sun}$ for a $\sim$10 pc diameter area \citep{Bra94}.
An H$_2$O maser is detected towards the region \citep{Wou93}, but no centimeter radio continuum is found \citep{Bra07}.
Several class I protostar candidates are identified by previous infrared observations \citep{Bra07}.
We here report the first detection of a hot molecular core in the extreme outer Galaxy based on submillimeter observations towards WB89-789 with ALMA.
Section \ref{sec_tarobsred} describes the details of the target source, observations, and data reduction.
The observed molecular line spectra and images, as well as analyses of physical and chemical properties of the source, are presented in Section \ref{sec_res}.
Discussion about the properties of the hot core and comparisons of molecular abundances with known Galactic and LMC hot cores are given in Section \ref{sec_disc}.
This section also presents the detection of another embedded protostar with high-velocity outflows in the WB89-789 region.
The conclusions are given in Section \ref{sec_sum}.
\begin{deluxetable*}{ l c c c c c c c c c}
\tablecaption{Observation summary \label{tab_Obs}}
\tablewidth{0pt}
\tabletypesize{\footnotesize}
\tablehead{
\colhead{} & \colhead{Observation} & \colhead{On-source} & \colhead{Mean} & \colhead{Number} & \multicolumn{2}{c}{Baseline} & \colhead{} & \colhead{} & \colhead{Channel} \\
\cline{5-6}
\colhead{} & \colhead{Date} & \colhead{Time} & \colhead{PWV\tablenotemark{a}} & \colhead{of} & \colhead{Min} & \colhead{Max} & \colhead{Beam size\tablenotemark{b}} & \colhead{MRS\tablenotemark{c}} & \colhead{Spacing} \\
\colhead{} & \colhead{} & \colhead{(min)} & \colhead{(mm)} & \colhead{Antennas} & \colhead{(m)} & \colhead{(m)} & \colhead{($\arcsec$ $\times$ $\arcsec$)} & \colhead{($\arcsec$)} & \colhead{} }
\startdata
Band 6 & 2018 Dec 6 -- & 115.5 & 0.5--1.5 & 45--49 & 15.1 & 783.5 & 0.41 $\times$ 0.50 & 5.6 & 0.98 MHz \\
(250 GHz) & 2019 Apr 16 & & & & & & & & (1.2 km s$^{-1}$) \\
Band 7 & 2018 Apr 30 -- & 64.1 & 0.6--1.0 & 43--44 & 15.1 & 500.2 & 0.46 $\times$ 0.52 & 5.4 & 0.98 MHz \\
(350 GHz) & 2018 Aug 22 & & & & & & & & (0.85 km s$^{-1}$) \\
\enddata
\tablenotetext{a}{Precipitable water vapor.}
\tablenotetext{b}{The average beam size of continuum achieved by TCLEAN with the Briggs weighting and the robustness parameter of 0.5.
Note that we use a common circular restoring beam size of 0$\farcs$50 for Band 6 and 7 data to construct the final images.}
\tablenotetext{c}{Maximum Recoverable Scale.}
\end{deluxetable*}
\section{Target, observations, and data reduction} \label{sec_tarobsred}
\subsection{Target} \label{sec_tar}
The target star-forming region is WB89-789 \citep{Bra94}.
The region contains three Class I protostar candidates identified by near-infrared observations \citep{Bra07}, and one of them is a main target of the present ALMA observations.
The region observed with ALMA is indicated on a near-infrared two-color image shown in Figure \ref{IR_image}.
The observed position is notably reddened compared with other parts of WB89-789.
\begin{figure}[tp]
\begin{center}
\includegraphics[width=5.5cm]{f1.eps}
\caption{
Near-infrared two-color image of the WB89-789 star-forming region based on 2MASS data \citep{Skr06}.
Blue is $J$-band (1.25 $\mu$m) and red is $K_s$-band (2.16 $\mu$m).
The image size is 100\arcsec $\times$ 100\arcsec.
The green square indicates the field-of-view of the ALMA submillimeter images shown in Figures \ref{images1}--\ref{images2}.
}
\label{IR_image}
\end{center}
\end{figure}
\subsection{Observations} \label{sec_obs}
Observations were conducted with ALMA in 2018 and 2019 as a part of the Cycle 5 (2017.1.01002.S) and Cycle 6 (2018.1.00627.S) programs (PI: T. Shimonishi).
A summary of the present observations is shown in Table \ref{tab_Obs}.
The pointing center of antennas is RA = 06$^\mathrm{h}$17$^\mathrm{m}$23$^\mathrm{s}$ and Dec = 14$\arcdeg$54$\arcmin$41$\arcsec$ (ICRS).
The total on-source integration time is 115.5 minutes for Band 6 data and 64.1 minutes for Band 7.
Flux and bandpass calibrators are J0510+1800, J0854+2006, and J0725-0054 for Band 6, and J0854+2006 and J0510+1800 for Band 7.
Phase calibrators are J0631+2020 and J0613+1708 for Band 6 and J0643+0857 and J0359+1433 for Band 7.
Four spectral windows are used to cover the sky frequencies of 241.40--243.31, 243.76--245.66, 256.90--258.81, and 258.76--260.66 GHz for Band 6, while 337.22--339.15, 339.03--340.96, 349.12--351.05, and 350.92--352.85 GHz for Band 7.
The channel spacing is 0.98 MHz, which corresponds to 1.2 km s$^{-1}$ for Band 6 and 0.85 km s$^{-1}$ for Band 7.
The total number of antennas is 45--49 for Band 6 and 43--44 for Band 7.
The minimum--maximum baseline lengths are 15.1--783.5 m for Band 6 and 15.1--500.2 m for Band 7.
A full-width at half-maximum (FWHM) of the primary beam is about 25$\arcsec$ for Band 6 and 18$\arcsec$ for Band 7.
\subsection{Data reduction} \label{sec_red}
Raw data is processed with the \textit{Common Astronomy Software Applications} (CASA) package.
We use CASA 5.4.0 (Band 6) and 5.1.1 (Band 7) for the calibration and CASA 5.5.0 for the imaging.
The synthesized beam sizes of 0$\farcs$39--0$\farcs$42 $\times$ 0$\farcs$49--0$\farcs$52 with a position angle of -36 degrees for Band 6 and 0$\farcs$45--0$\farcs$46 $\times$ 0$\farcs$51--0$\farcs$52 with a position angle of -54 degrees for Band 7 are achieved with the Briggs weighting and the robustness parameter of 0.5.
In this paper, we use a common circular restoring beam size of 0$\farcs$50, which corresponds to 0.026 pc (5350 au) at the distance of WB89-789.
The synthesized images are corrected for the primary beam pattern using the impbcor task in CASA.
The continuum image is constructed by selecting line-free channels.
Before the spectral extraction, the continuum emission is subtracted from the spectral data using the CASA's uvcontsub task.
The spectra and continuum flux are extracted from the 0$\farcs$50 diameter circular region centered at RA = 06$^\mathrm{h}$17$^\mathrm{m}$24$\fs$073 and Dec = 14$\arcdeg$54$\arcmin$42$\farcs$27 (ICRS), which corresponds to the submillimeter continuum center of the target and is equivalent to the hot core position.
Hereafter, the source is referred to as WB89-789 SMM1.
\deleted{The extracted spectra are shown in Figures \ref{spec_B6}--\ref{spec_B7}. }
\begin{deluxetable*}{ l l l l l l l l }[tbp!]
\tablecaption{Summary of detected molecular species \label{tab_line_summary}}
\tablewidth{0pt}
\tabletypesize{\small}
\tablehead{
\colhead{2 atoms} & \colhead{3 atoms} & \colhead{4 atoms} & \colhead{5 atoms}& \colhead{6 atoms} & \colhead{7 atoms} & \colhead{8 atoms} & \colhead{9 atoms} \\
}
\startdata
CN & HDO & H$_2$CO & c-C$_3$H$_2$ & CH$_3$OH & CH$_3$CHO & HCOOCH$_3$ & CH$_3$OCH$_3$ \\
NO & H$^{13}$CO$^+$ & HDCO & HC$_3$N & $^{13}$CH$_3$OH & c-C$_2$H$_4$O & & C$_2$H$_5$OH \\
CS & HC$^{18}$O$^+$ & D$_2$CO & H$_2$CCO & CH$_2$DOH & & & C$_2$H$_5$CN \\
C$^{34}$S & H$^{13}$CN & HNCO & HCOOH & CH$_3$CN & & & \\
C$^{33}$S & HC$^{15}$N & H$_2$CS & & NH$_2$CO & & & \\
SO & CCH & & & & & & \\
$^{34}$SO & SO$_2$ & & & & & & \\
$^{33}$SO & $^{34}$SO$_2$ & & & & & & \\
SiO & OCS & & & & & & \\
& $^{13}$OCS & & & & & & \\
\enddata
\end{deluxetable*}
\section{Results and analysis} \label{sec_res}
\subsection{Spectra} \label{sec_spc}
Figures \ref{spec_B6}--\ref{spec_B7} show submillimeter spectra extracted from the continuum center of WB89-789 SMM1.
Spectral lines are identified with the aid of the Cologne Database for Molecular Spectroscopy\footnote{https://www.astro.uni-koeln.de/cdms} \citep[CDMS,][]{Mul01,Mul05} and the molecular database of the Jet Propulsion Laboratory\footnote{http://spec.jpl.nasa.gov} \citep[JPL,][]{Pic98}.
\added{The detection criteria adopted here are a 3$\sigma$ significance level and velocity coincidence with the systemic velocity ($V_{sys}$) of WB89-789 SMM1 (34.5 km s$^{-1}$).
Lines with a significance level between 2.5$\sigma$ and 3$\sigma$ are indicated as tentative detections in the tables in Appendix A.
More than 85 $\%$ of the lines are detected above the 5$\sigma$ level. }
Line parameters are measured by fitting a Gaussian profile to detected lines.
We estimate the peak brightness temperature, the FWHM, the LSR velocity, and the integrated intensity for each line based on the fitting.
For spectral lines for which a Gaussian profile does not fit well, their integrated intensities are calculated by directly integrating the spectrum over the frequency region of emission.
Full details of the line fitting can be found in Appendix A (Tables of measured line parameters) and Appendix B (Figures of fitted spectra).
The tables also contain the estimated upper limits on important non-detected lines.
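The Gaussian fitting of line profiles described above can be sketched as follows. This is an illustrative outline, not the actual reduction code; the function names and the synthetic spectrum are hypothetical, and SciPy is assumed to be available:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(v, T_peak, v_lsr, fwhm):
    """Gaussian line profile: peak brightness temperature T_peak (K),
    LSR velocity v_lsr (km/s), and full width at half maximum (km/s)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return T_peak * np.exp(-0.5 * ((v - v_lsr) / sigma) ** 2)

def fit_line(velocity, T_b):
    """Fit a single Gaussian and return (T_peak, V_lsr, FWHM, integrated
    intensity), i.e., the four quantities quoted for each line."""
    p0 = (T_b.max(), velocity[np.argmax(T_b)], 4.0)  # FWHM guess ~4 km/s
    popt, _ = curve_fit(gaussian, velocity, T_b, p0=p0)
    T_peak, v_lsr, fwhm = popt
    # Analytic area of a Gaussian: T_peak * FWHM * sqrt(pi / (4 ln 2))
    integrated = T_peak * abs(fwhm) * np.sqrt(np.pi / (4.0 * np.log(2.0)))
    return T_peak, v_lsr, fwhm, integrated

# Demonstration on a synthetic, noiseless line at the systemic velocity,
# sampled at the Band 7 channel spacing of 0.85 km/s
v_axis = np.arange(20.0, 50.0, 0.85)
spectrum = gaussian(v_axis, 5.0, 34.5, 4.0)
T_peak, v_lsr, fwhm, W = fit_line(v_axis, spectrum)
```

For lines that are not well described by a single Gaussian, the integrated intensity is instead obtained by direct summation over the emitting channels, as stated in the text.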
A variety of carbon-, oxygen-, nitrogen-, sulfur-, and silicon-bearing species, including COMs containing up to nine atoms, are detected from WB89-789 SMM1 (see Table \ref{tab_line_summary}).
Multiple high excitation lines (upper state energy $>$100 K) are detected for many species.
Measured line widths are typically 3--6 km s$^{-1}$.
Most of the lines consist of a single velocity component, but SiO has Doppler-shifted components at $V_{sys}$ $\pm$ 5 km s$^{-1}$ as indicated in Figure \ref{line_others} in Appendix B.
\begin{figure*}[tp!]
\begin{center}
\includegraphics[width=17cm]{f2.eps}
\caption{
ALMA Band 6 spectra extracted from the 0$\farcs$50 (0.026 pc) diameter region centered at the present hot molecular core in the extreme outer Galaxy, WB89-789 SMM1.
Detected emission lines are labeled.
Unidentified lines are indicated by ``?".
The source velocity of 34.5 km s$^{-1}$ is assumed.
}
\label{spec_B6}
\end{center}
\end{figure*}
\begin{figure*}[tp!]
\begin{center}
\includegraphics[width=17cm]{f3.eps}
\caption{
Same as in Figure \ref{spec_B6}, but for ALMA Band 7.
}
\label{spec_B7}
\end{center}
\end{figure*}
\subsection{Images} \label{sec_img}
Figures \ref{images1}--\ref{images2} show synthesized images of continuum and molecular emission lines observed toward the target region.
The images are constructed by integrating spectral data in the velocity range where the emission is detected.
Most molecular lines, except for those of molecular radicals CN, CCH, and NO, have their intensity peak at the continuum center, which corresponds to the position of a hot core.
Simple molecules such as H$^{13}$CO$^+$, H$^{13}$CN, CS, and SO are extended compared to the beam size.
Secondary intensity peaks are also seen in those species.
Complex molecules and HDO are concentrated at the hot core position.
A characteristic symmetric distribution is seen in SiO.
Further discussion about the distribution of the observed emission is presented in Section \ref{sec_disc_dist}.
\begin{figure*}[tp!]
\begin{center}
\includegraphics[width=17.5cm]{f4.eps}
\caption{
Integrated intensity distributions of molecular emission lines.
Gray contours represent the 1.2 mm continuum distribution and the contour levels are 5$\sigma$, 10$\sigma$, 20$\sigma$, 40$\sigma$, 100$\sigma$ of the rms noise (0.044 mJy/beam).
Low signal-to-noise ratio regions (S/N $<$2) are masked.
The spectra discussed in the text are extracted from the region indicated by the black open circle.
The blue cross represents the 1.2 mm continuum center.
The synthesized beam size is shown by the gray filled circle in each panel.
North is up, and east is to the left.
}
\label{images1}
\end{center}
\end{figure*}
\begin{figure*}[tp!]
\begin{center}
\includegraphics[width=17.5cm]{f5.eps}
\caption{
Same as in Figure \ref{images1}.
}
\label{images2}
\end{center}
\end{figure*}
\subsection{Derivation of column densities, gas temperatures, and molecular abundances} \label{sec_ana}
\subsubsection{Rotation diagram analysis} \label{sec_rd}
Column densities and rotation temperatures are estimated based on the rotation diagram analysis for the molecular species where multiple transitions with different excitation energies are detected (Figure \ref{rd1}).
We here assume an optically thin condition and the local thermodynamic equilibrium (LTE).
We use the following formulae based on the standard treatment of the rotation diagram analysis \citep[e.g., ][]{Sut95, Gol99}:
\begin{equation}
\log \left(\frac{ N_{u} }{ g_{u} } \right) = - \left(\frac {\log e}{T_{\mathrm{rot}}} \right) \left(\frac{E_{u}}{k} \right) + \log \left(\frac{N}{Q(T_{\mathrm{rot}})} \right), \label{Eq_rd1}
\end{equation}
where
\begin{equation}
\frac{ N_{u} }{ g_{u} } = \frac{ 3 k \int T_{\mathrm{b}} dV }{ 8 \pi^{3} \nu S \mu^{2} }, \label{Eq_rd2} \\
\end{equation}
and $N_{u}$ is a column density of molecules in the upper energy level, $g_{u}$ is the degeneracy of the upper level, $k$ is the Boltzmann constant, $\int T_{\mathrm{b}} dV$ is the integrated intensity estimated from the observations, $\nu$ is the transition frequency, $S$ is the line strength, $\mu$ is the dipole moment, $T_{\mathrm{rot}}$ is the rotational temperature, $E_{u}$ is the upper state energy, $N$ is the total column density, and $Q(T_{\mathrm{rot}})$ is the partition function at $T_{\mathrm{rot}}$.
All the spectroscopic parameters required in the analysis are extracted from the CDMS or JPL database.
Derived column densities and rotation temperatures are summarized in Table \ref{tab_N}.
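A minimal numerical sketch of the straight-line fit in Equation \ref{Eq_rd1} is given below; the synthetic level populations are hypothetical and chosen only to illustrate how $T_{\mathrm{rot}}$ and $N$ are recovered:

```python
import numpy as np

k_B = 1.380649e-16  # Boltzmann constant, erg/K

def upper_state_column(W, nu, S_mu2):
    """Eq. (2): N_u/g_u = 3 k W / (8 pi^3 nu S mu^2), all in CGS units
    (W = integrated intensity in K cm s^-1, S mu^2 in esu^2 cm^2)."""
    return 3.0 * k_B * W / (8.0 * np.pi ** 3 * nu * S_mu2)

def rotation_diagram(E_u, log10_Nu_gu):
    """Eq. (1): the slope of log10(N_u/g_u) vs E_u/k gives T_rot and the
    intercept gives log10[N/Q(T_rot)].  E_u is in Kelvin."""
    slope, intercept = np.polyfit(E_u, log10_Nu_gu, 1)
    T_rot = -np.log10(np.e) / slope
    return T_rot, intercept

# Demonstration on synthetic populations with T_rot = 150 K and
# log10(N/Q) = 12 (illustrative values only)
E_u = np.array([45.0, 110.0, 210.0, 340.0])
y = -(np.log10(np.e) / 150.0) * E_u + 12.0
T_rot, log_NQ = rotation_diagram(E_u, y)
# The total column density then follows as N = Q(T_rot) * 10**log_NQ,
# with the partition function Q taken from the CDMS/JPL entry.
```

In practice the ordinates are built from the measured integrated intensities via Equation \ref{Eq_rd2}, and optically thick transitions are excluded from the fit as described above.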
Most molecular species are well fitted by a single temperature component.
Data points in diagrams of CH$_3$CN and C$_2$H$_5$CN are relatively scattered.
For CH$_3$OH, CH$_3$CN, HNCO, SO$_2$, and HCOOCH$_3$, transitions with relatively large $S\mu^2$ values at low $E_{u}$ ($<$300 K) are excluded from the fit in order to avoid possible effect of optical thickness (see gray points in Fig. \replaced{\ref{sec_rd}}{\ref{rd1}}).
\added{Adopted threshold values are log $S\mu^2$ $>$1.1 for CH$_3$OH, log $S\mu^2$ $>$2.4 for CH$_3$CN, log $S\mu^2$ $>$1.6 for HNCO, log $S\mu^2$ $>$1.2 for SO$_2$, and log $S\mu^2$ $>$1.8 for HCOOCH$_3$. }
Complex organic molecules, HDO, and SO$_2$ show high rotation temperatures ($>$130 K).
This suggests that they originate from a warm region associated with a protostar.
On the other hand, C$^{33}$S, D$_2$CO, and H$_2$CS show lower temperatures, suggesting that they arise from a colder region in the outer part of the protostellar envelope.
\added{SO also shows a low rotation temperature.
Its $T_{\mathrm{rot}}$ is close to that of C$^{33}$S.
However, SO lines are often optically thick in dense cores, particularly for low-$E_{u}$ lines, thus the derived rotation temperature would be an upper limit. }
\begin{figure*}[tp!]
\begin{center}
\includegraphics[width=16cm]{f6.eps}
\caption{
Results of rotation diagram analyses.
Upper limit points are shown by the downward arrows.
The solid lines represent the fitted straight line.
Derived column densities and rotation temperatures are shown in each panel.
The open squares are excluded in the fit because they significantly deviate from other data points.
The gray squares are also excluded in the fit because of their large $S\mu^2$ values.
CH$_3$OH is fitted by using only E-type transitions, which are shown in blue.
For HCOOH, trans- (square) and cis- (circle) species are plotted together.
See Section \ref{sec_rd} for details.
}
\label{rd1}
\end{center}
\end{figure*}
\subsubsection{Column densities of other molecules} \label{sec_n}
Column densities of molecular species for which rotation diagram analysis is not applicable are estimated from Equation \ref{Eq_rd1} after solving it for $N$.
Their rotation temperatures are estimated as follows, by taking into account that the sight-line of WB89-789 SMM1 contains both cold and warm gas components as described in Section \ref{sec_rd}.
The rotation temperature of C$^{33}$S is applied to those of CS and C$^{34}$S, considering a similar distribution of isotopologues.
Similarly, the rotation temperature of D$_2$CO is applied to H$_2$CO and HDCO, and that of SO$_2$ to $^{34}$SO$_2$.
For other species, we assume that molecules with an extended spatial distribution trace a relatively low-temperature region rather than a high-temperature gas associated with a hot core.
CN, CCH, H$^{13}$CO$^+$, HC$^{18}$O$^+$, H$^{13}$CN, HC$^{15}$N, NO, SiO\deleted{, SO}, $^{34}$SO, $^{33}$SO, and c-C$_3$H$_2$ correspond to this case.
We assume a rotation temperature of 35 K for those species, which is roughly equivalent to that of C$^{33}$S.
High gas temperatures are observed for COMs, SO$_2$, and HDO, which are associated with a compact hot core region.
The average temperature of those species is $\sim$200 K.
We assume this temperature for column density estimates (including upper limit) of c-C$_2$H$_4$O, HC$_3$N, $^{13}$CH$_3$CN, $^{13}$OCS, and CH$_3$SH.
Estimated column densities are summarized in Table \ref{tab_N}.
We have also estimated column densities of selected species based on non-LTE calculations with RADEX \citep{vdT07}.
For input parameters, we use the H$_2$ gas density of 2.1 $\times$ 10$^7$ cm$^{-3}$ according to our estimate in Section \ref{sec_h2} and the background temperature of 2.73 K.
Kinetic temperatures are assumed to be the same as temperatures tabulated in Table \ref{tab_N}.
The line intensities and widths are taken from the tables in Appendix A \footnote{The following lines are used for non-LTE calculation with RADEX; H$^{13}$CO$^+$(3--2), HC$^{18}$O$^+$(4--3), H$_2$CO(5$_{1,5}$--4$_{1,4}$), c-C$_3$H$_2$(3$_{2,1}$--2$_{1,2}$), CN(N = 3--2, J = $\frac{5}{2}$--$\frac{3}{2}$, F = $\frac{5}{2}$--$\frac{5}{2}$), H$^{13}$CN(3--2), HC$^{15}$N(3--2), HC$_3$N(27--26), NO(J = $\frac{7}{2}$--$\frac{5}{2}$, $\Omega$ = $\frac{1}{2}$, F = $\frac{9}{2}$$^+$--$\frac{7}{2}$$^-$), CH$_3$CN(14$_{0}$--13$_{0}$), SiO(6--5), CS(5--4), OCS(20--19), H$_2$CS(7$_{1,6}$--6$_{1,5}$), SO($N_J$ = 6$_{6}$--5$_{5}$), and CH$_3$OH(7$_{5}$ E--6$_{5}$ E). }.
We assume an empirical 10$\%$ uncertainty for input line intensities.
The resultant column densities are summarized in Table \ref{tab_N}.
The calculated non-LTE column densities are reasonably consistent with the LTE estimates.
\subsubsection{Column density of H$_2$, dust extinction, and gas mass} \label{sec_h2}
A column density of molecular hydrogen ($N_{\mathrm{H_2}}$) is estimated from the dust continuum data.
We use the following equation to calculate $N_{\mathrm{H_2}}$ based on the standard treatment of optically thin dust emission:
\begin{equation}
N_{\mathrm{H_2}} = \frac{F_{\nu} / \Omega}{2 \kappa_{\nu} B_{\nu}(T_{d}) Z \mu m_{\mathrm{H}}} \label{Eq_h2},
\end{equation}
where $F_{\nu}/\Omega$ is the continuum flux density per beam solid angle as estimated from the observations, $\kappa_{\nu}$ is the mass absorption coefficient of dust grains coated by thin ice mantles at 1200/870 $\mu$m taken from \citet{Oss94} (we use 1.07 cm$^2$ g$^{-1}$ at 1200 $\mu$m and 1.90 cm$^2$ g$^{-1}$ at 870 $\mu$m), $T_{d}$ is the dust temperature and $B_{\nu}(T_{d})$ is the Planck function, $Z$ is the dust-to-gas mass ratio, $\mu$ is the mean atomic mass per hydrogen \citep[1.41, according to][]{Cox00}, and $m_{\mathrm{H}}$ is the hydrogen mass.
We use the dust-to-gas mass ratio of 0.002, which is obtained by scaling the Galactic value of 0.008 by the metallicity of the WB89-789 region.
A line of sight towards a hot core contains dust grains with different temperatures because of the temperature gradient in a protostellar envelope.
The representative dust temperature (i.e., the mass-weighted average temperature) would fall somewhere between that of a warm inner region and a cold outer region.
\citet{ST20} presented a detailed analysis of effective dust temperature in the sight-line of a low-metallicity hot core in the LMC, based on a comparison of $N_{\mathrm{H_2}}$ derived by submillimeter dust continuum with the above method, model fitting of spectral energy distributions (SEDs), and the 9.7 $\mu$m silicate dust absorption depth.
The paper concluded that $T_{d}$ = 60 K for the dust continuum analysis yields the $N_{\mathrm{H_2}}$ value which is consistent with those obtained by other different methods.
This temperature corresponds to an intermediate value between a cold gas component ($\sim$50 K) represented by SO and a warm component ($\sim$150 K) represented by CH$_3$OH and SO$_2$ in this LMC hot core.
The present hot core, WB89-789 SMM1, harbors similar temperature components as discussed in Sections \ref{sec_rd} and \ref{sec_n}.
We thus applied $T_{d}$ = 60 K for the present source.
The continuum brightness of SMM1 is measured to be 11.33 $\pm$ 0.05 mJy/beam for 1200 $\mu$m and 28.0 $\pm$ 0.2 mJy/beam for 870 $\mu$m (3$\sigma$ uncertainty).
Based on the above assumptions, we obtain $N_{\mathrm{H_2}}$ = 1.6 $\times$ 10$^{24}$ cm$^{-2}$ for 1200 $\mu$m and $N_{\mathrm{H_2}}$ = 1.2 $\times$ 10$^{24}$ cm$^{-2}$ for 870 $\mu$m.
The $N_{\mathrm{H_2}}$ value changes by a factor of up to 1.6 when the assumed $T_{d}$ is varied between 40 K and 90 K.
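As a consistency check on Equation \ref{Eq_h2}, the quoted column densities can be reproduced from the measured fluxes. The sketch below assumes a Gaussian beam of 0$\farcs$50 FWHM and representative band frequencies of 250 and 345 GHz; both are our approximations, not values stated explicitly above:

```python
import math

# CGS constants
h = 6.62607015e-27   # Planck constant, erg s
k_B = 1.380649e-16   # Boltzmann constant, erg/K
c = 2.99792458e10    # speed of light, cm/s
m_H = 1.6726e-24     # hydrogen mass, g

def planck(nu, T):
    """Planck function B_nu(T) in erg s^-1 cm^-2 Hz^-1 sr^-1."""
    return 2.0 * h * nu ** 3 / c ** 2 / math.expm1(h * nu / (k_B * T))

def N_H2_from_dust(F_mJy_beam, beam_fwhm_arcsec, nu, kappa, T_d,
                   Z=0.002, mu=1.41):
    """Eq. (3) with a Gaussian beam solid angle; returns N_H2 in cm^-2."""
    theta = beam_fwhm_arcsec / 206265.0                   # radian
    omega = math.pi * theta ** 2 / (4.0 * math.log(2.0))  # beam solid angle, sr
    I_nu = F_mJy_beam * 1.0e-26 / omega                   # erg s^-1 cm^-2 Hz^-1 sr^-1
    return I_nu / (2.0 * kappa * planck(nu, T_d) * Z * mu * m_H)

# Measured brightnesses in the 0.50" beam, with T_d = 60 K
N_1200 = N_H2_from_dust(11.33, 0.50, 250e9, 1.07, 60.0)  # ~1.6e24 cm^-2
N_870 = N_H2_from_dust(28.0, 0.50, 345e9, 1.90, 60.0)    # ~1.2e24 cm^-2
```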
Alternatively, a column density of molecular hydrogen can be determined by model fitting of the observed SED.
The best-fit SED discussed in Section \ref{sec_disc_star} yields $A_V$ = 184 mag.
We here use a standard value of $N_{\mathrm{H}}$/$E(B-V)$ = 5.8 $\times$ 10$^{21}$ cm$^{-2}$ mag$^{-1}$ \citep{Dra03} and a slightly high $A_{V}$/$E(B-V)$ ratio of 4 for dense clouds \citep{Whi01b}.
Taking into account a factor of four lower metallicity, we obtain $N_{\mathrm{H_2}}$/$A_{V}$ = 2.9 $\times$ 10$^{21}$ cm$^{-2}$ mag$^{-1}$, where we assume that all the hydrogen atoms are in the form of H$_2$.
Using this conversion factor, we obtain $N_{\mathrm{H_2}}$ = 5.3 $\times$ 10$^{23}$ cm$^{-2}$.
This $N_{\mathrm{H_2}}$ is similar to that derived from the dust continuum method when $T_{d}$ = 150 K is assumed.
Such a $T_{d}$ may be somewhat high as a typical line-of-sight dust temperature, but it is not an unrealistic value given the observed temperature range of molecular gas towards WB89-789 SMM1.
In this paper, we use $N_{\mathrm{H_2}}$ = 1.1 $\times$ 10$^{24}$ cm$^{-2}$ as a representative value, which corresponds to the average of $N_{\mathrm{H_2}}$ derived by the dust continuum data and the SED fitting.
This $N_{\mathrm{H_2}}$ corresponds to $A_V$ = 380 mag using the above conversion factor.
Assuming a source diameter of 0.026 pc and a uniform spherical distribution of gas around a protostar, we estimate the gas number density to be $n_{\mathrm{H_2}}$ = 2.1 $\times$ 10$^7$ cm$^{-3}$, where a total gas mass of 13 M$_{\sun}$ is enclosed.
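The number density estimate follows from dividing the enclosed mass by the volume of the uniform sphere; the mean molecular weight of 2.8 per H$_2$ (including helium) is an assumption on our part.

```python
import math

M_sun = 1.989e33   # g
pc = 3.0857e18     # cm
m_H = 1.6735e-24   # g
mu = 2.8           # mean mass per H2 molecule in units of m_H (incl. He); assumed

M_gas = 13.0 * M_sun
r = 0.5 * 0.026 * pc               # radius of the uniform sphere [cm]
V = 4.0 / 3.0 * math.pi * r**3     # volume [cm^3]
n_H2 = M_gas / (mu * m_H * V)      # H2 number density [cm^-3]
print(f"n(H2) ~ {n_H2:.1e} cm^-3")  # ~2e7, matching the quoted 2.1e7
```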
\added{
Similarly, the mass for a 0.1 pc diameter region (i.e., a canonical size of dense cores) is estimated to be 75 M$_{\sun}$ with $T_{d}$ = 60 K, where Band 6 and Band 7 estimates are averaged.
For the whole field shown in Figures \ref{images1}--\ref{images2}, which roughly corresponds to a 0.5 pc diameter region, the total mass is estimated to be 800--2500 M$_{\sun}$, where we assume $T_{d}$ = 20--10 K for extended dust emission.
Note that this is a lower limit because the maximum recoverable scale of the present observations is 5$\farcs$4 (0.28 pc).
}
\subsubsection{Fractional abundances and isotope abundance ratios} \label{sec_x}
Fractional abundances with respect to H$_2$ are shown in Table \ref{tab_X}, which are calculated based on column densities estimated in Sections \ref{sec_rd}--\ref{sec_h2}.
The fractional abundances normalized by the CH$_3$OH column density are also discussed in Sections \ref{sec_disc_molab}--\ref{sec_disc_molab2}, because of the non-negligible uncertainty associated with $N_{\mathrm{H_2}}$ (see Section \ref{sec_h2}).
Abundances of HCO$^{+}$, HCN, SO, CS, OCS, and CH$_3$OH are estimated from their isotopologues, H$^{13}$CO$^{+}$, H$^{13}$CN, $^{34}$SO, C$^{34}$S, O$^{13}$CS, and $^{13}$CH$_3$OH.
Detections of isotopologue species for SO, CS, OCS, and CH$_3$OH imply that the main species would be optically thick.
Isotope abundance ratios of $^{12}$C/$^{13}$C = 150 and $^{32}$S/$^{33}$S = 35 are assumed, which are obtained by extrapolating the relationship between isotope ratios and galactocentric distances reported in \citet{Wil94} and \citet{Hum20} to $D_{GC}$ = 19 kpc.
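The extrapolation for $^{12}$C/$^{13}$C can be sketched with the linear galactocentric gradient commonly quoted from \citet{Wil94}; the coefficients below are as usually cited and should be treated as an assumption here (the $^{32}$S/$^{33}$S extrapolation from \citet{Hum20} proceeds analogously).

```python
def c12_c13(D_GC_kpc):
    """12C/13C gradient commonly quoted from Wilson & Rood (1994):
    12C/13C = 7.5 * D_GC + 7.6, with D_GC in kpc."""
    return 7.5 * D_GC_kpc + 7.6

# At D_GC = 19 kpc the fit gives ~150, the value adopted in the text.
print(c12_c13(19.0))
```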
Abundance ratios are derived for several rare isotopologues; we obtain CH$_2$DOH/CH$_3$OH = 0.011 $\pm$ 0.002, D$_2$CO/HDCO = 0.45 $\pm$ 0.10, $^{34}$SO/$^{33}$SO = 5 $\pm$ 1, C$^{34}$S/C$^{33}$S = 2 $\pm$ 1, and $^{32}$SO$_2$/$^{34}$SO$_2$ = 20 $\pm$ 4.
The $^{32}$SO$_2$/$^{34}$SO$_2$ ratio in WB89-789 SMM1 is similar to the solar $^{32}$S/$^{34}$S ratio \citep[22,][]{Wil94}, although we expect a slightly higher value in the outer Galaxy due to the $^{32}$S/$^{34}$S gradient in the Galaxy \citep{Chi96b, Hum20}.
Astrophysical implications of the deuterated species are discussed in Section \ref{sec_disc_molab2}.
The rotation diagram of CH$_3$CN is rather scattered.
Although its isotopologue line is not detected, optical thickness might affect the column density estimate, as CH$_3$CN is often optically thick in hot cores \citep[e.g., ][]{Fue14}.
To obtain a possible range of its column density, we use the rotation diagram of $^{12}$CH$_3$CN data to estimate a lower limit and the non-detection of the $^{13}$CH$_3$CN(19$_{0}$--18$_{0}$) line at 339.36630 GHz ($E_{u}$ = 163 K) for an upper limit.
We have also repeated the analysis for the spectra extracted from a 0.1 pc (1$\farcs$93) diameter region at the hot core position, for the sake of comparison with LMC hot cores (see Section \ref{sec_disc_molab2}).
Those abundances are also summarized in Table \ref{tab_X}.
The abundances for a 0.1 pc area do not drastically vary from those for a 0.026 pc area.
Molecules with compact spatial distributions (e.g., COMs) tend to show abundances lower by a factor of $\sim$2--3 in the 0.1 pc data due to the beam dilution effect.
In contrast, those with extended spatial distributions and intensity peaks outside the hot core region (H$^{13}$CO$^+$, CCH, CN, and NO) increase by a factor of $\sim$2 in the 0.1 pc data.
\begin{deluxetable}{ l c c c c}
\tablecaption{Estimated rotation temperatures, column densities, and source sizes \label{tab_N}}
\tabletypesize{\footnotesize}
\tablehead{
\colhead{Molecule} & \colhead{$T$$_{rot}$} & \colhead{$N$(X)} & \colhead{$N$(X) non-LTE} & \colhead{Size} \\
\colhead{ } & \colhead{(K)} & \colhead{(cm$^{-2}$)} & \colhead{(cm$^{-2}$)} & \colhead{($\arcsec$)}
}
\startdata
H$_2$ & \nodata & 1.1 $\times$ 10$^{24}$ & \nodata & 0.85\tablenotemark{c} \\
\tableline
H$^{13}$CO$^+$ & 35 & (7.0 $\pm$ 0.1) $\times$ 10$^{12}$ & (7.6 $\pm$ 0.9) $\times$ 10$^{12}$ & $>$1.5\tablenotemark{d} \\
HC$^{18}$O$^+$ & 35 & (5.8 $\pm$ 0.9) $\times$ 10$^{11}$ & (5.7 $\pm$ 0.6) $\times$ 10$^{11}$ & 1.18\tablenotemark{d} \\
CCH & 35 & (2.7 $\pm$ 0.1) $\times$ 10$^{14}$ & \nodata & $>$2\tablenotemark{d} \\
c-C$_3$H$_2$ & 35 & (9.5 $\pm$ 2.2) $\times$ 10$^{13}$ & (8.2 $\pm$ 0.9) $\times$ 10$^{13}$\tablenotemark{a} & $>$1\tablenotemark{d} \\
H$_2$CO & 39 & (1.1 $\pm$ 0.1) $\times$ 10$^{14}$ & (1.3 $\pm$ 0.1) $\times$ 10$^{14}$\tablenotemark{a} & $>$1.5\tablenotemark{d} \\
HDCO & 39 & (5.1 $\pm$ 0.3) $\times$ 10$^{13}$ & \nodata & $>$1\tablenotemark{d} \\
D$_2$CO & \textit{39$^{+6}_{-5}$} & \textit{(2.3 $\pm$ 0.5) $\times$ 10$^{13}$} & \nodata & $>$1\tablenotemark{d} \\
CN & 35 & (3.3 $\pm$ 0.2) $\times$ 10$^{14}$ & (2.5 $\pm$ 0.3) $\times$ 10$^{14}$ & $>$2\tablenotemark{d} \\
H$^{13}$CN & 35 & (1.2 $\pm$ 0.1) $\times$ 10$^{13}$ & (1.1 $\pm$ 0.1) $\times$ 10$^{13}$ & 0.92\tablenotemark{d} \\
HC$^{15}$N & 35 & (6.3 $\pm$ 0.2) $\times$ 10$^{12}$ & (5.8 $\pm$ 0.6) $\times$ 10$^{12}$ & 0.75\tablenotemark{d} \\
HC$_3$N & 200 & (2.7 $\pm$ 0.3) $\times$ 10$^{13}$ & (2.1 $\pm$ 0.2) $\times$ 10$^{13}$ & 0.65 \\
NO & 35 & (9.0 $\pm$ 2.5) $\times$ 10$^{14}$ & (8.9 $\pm$ 0.9) $\times$ 10$^{14}$ & $>$1.5\tablenotemark{d} \\
HNCO & \textit{237$^{+17}_{-15}$} & \textit{(3.0 $\pm$ 0.2) $\times$ 10$^{14}$} & \nodata & 0.54 \\
CH$_3$CN & \textit{279$^{+12}_{-11}$} & \textit{(1.8 $\pm$ 0.1) $\times$ 10$^{14}$} & (8.6 $\pm$ 0.8) $\times$ 10$^{13}$ & 0.51 \\
$^{13}$CH$_3$CN & 200 & $<$5 $\times$ 10$^{12}$ & \nodata & \nodata \\
C$_2$H$_5$CN & \textit{130$^{+20}_{-15}$} & \textit{(6.3 $\pm$ 1.7) $\times$ 10$^{13}$} & \nodata & 0.52 \\
NH$_2$CHO & \textit{140$^{+8}_{-7}$} & \textit{(4.2 $\pm$ 0.7) $\times$ 10$^{13}$} & \nodata & 0.56 \\
SiO & 35 & (2.5 $\pm$ 0.2) $\times$ 10$^{12}$ & (2.5 $\pm$ 0.3) $\times$ 10$^{12}$ & 0.65 \\
CS & 36 & (1.5 $\pm$ 0.2) $\times$ 10$^{14}$ & (2.0 $\pm$ 0.3) $\times$ 10$^{14}$ & $>$1.5 \\
C$^{34}$S & 36 & (3.1 $\pm$ 0.1) $\times$ 10$^{13}$ & \nodata & 0.70 \\
C$^{33}$S & \textit{36$^{+4}_{-3}$} & \textit{(1.5 $\pm$ 0.2) $\times$ 10$^{13}$} & \nodata & 0.61 \\
OCS & \textit{106$^{+6}_{-5}$} & \textit{(6.5 $\pm$ 0.5) $\times$ 10$^{14}$} & (6.4 $\pm$ 0.7) $\times$ 10$^{14}$ & 0.55 \\
$^{13}$OCS & 200 & (8.7 $\pm$ 2.4) $\times$ 10$^{13}$ & \nodata & 0.45 \\
H$_2$CS & \textit{43$^{+3}_{-2}$} & \textit{(1.5 $\pm$ 0.1) $\times$ 10$^{14}$} & (1.4 $\pm$ 0.2) $\times$ 10$^{14}$\tablenotemark{a} & 0.62 \\
SO & \textit{35$^{+1}_{-1}$} & \textit{(4.0 $\pm$ 0.3) $\times$ 10$^{14}$} & (4.5 $\pm$ 0.5) $\times$ 10$^{14}$ & 0.70\tablenotemark{d} \\
$^{34}$SO & 35 & (5.9 $\pm$ 0.1) $\times$ 10$^{13}$ & \nodata & 0.66 \\
$^{33}$SO & 35 & (1.1 $\pm$ 0.1) $\times$ 10$^{13}$ & \nodata & 0.53 \\
SO$_2$ & \textit{166$^{+5}_{-5}$} & \textit{(1.2 $\pm$ 0.1) $\times$ 10$^{15}$} & \nodata & 0.53 \\
$^{34}$SO$_2$ & 166 & (5.9 $\pm$ 0.9) $\times$ 10$^{13}$ & \nodata & 0.51 \\
CH$_3$SH & 200 & $<$3 $\times$ 10$^{14}$ & \nodata & \nodata \\
HDO & \textit{217$^{+14}_{-12}$} & \textit{(2.2 $\pm$ 0.2) $\times$ 10$^{15}$} & \nodata & 0.52 \\
CH$_3$OH & \textit{245$^{+4}_{-4}$} & \textit{(1.9 $\pm$ 0.1) $\times$ 10$^{16}$} & (2.6 $\pm$ 0.1) $\times$ 10$^{16}$\tablenotemark{b} & 0.51 \\
$^{13}$CH$_3$OH & \textit{181$^{+10}_{-9}$} & \textit{(2.8 $\pm$ 0.2) $\times$ 10$^{15}$} & \nodata & 0.46 \\
CH$_2$DOH & \textit{155$^{+18}_{-15}$} & \textit{(4.6 $\pm$ 0.3) $\times$ 10$^{15}$} & \nodata & 0.52 \\
HCOOCH$_3$ & \textit{181$^{+6}_{-5}$} & \textit{(8.6 $\pm$ 0.4) $\times$ 10$^{15}$} & \nodata & 0.51 \\
CH$_3$OCH$_3$ & \textit{137$^{+5}_{-4}$} & \textit{(2.6 $\pm$ 0.1) $\times$ 10$^{15}$} & \nodata & 0.52 \\
C$_2$H$_5$OH & \textit{136$^{+14}_{-12}$} & \textit{(9.6 $\pm$ 1.3) $\times$ 10$^{14}$} & \nodata & 0.50 \\
CH$_3$CHO & \textit{192$^{+52}_{-34}$} & \textit{(6.4 $\pm$ 0.8) $\times$ 10$^{14}$} & \nodata & 0.49 \\
\textit{trans}-HCOOH & \textit{71$^{+11}_{-9}$} & \textit{(2.7 $\pm$ 0.6) $\times$ 10$^{14}$} & \nodata & 0.58 \\
\textit{cis}-HCOOH & \textit{69$^{+50}_{-21}$} & \textit{(2.4 $\pm$ 1.2) $\times$ 10$^{13}$} & \nodata & 0.49 \\
H$_2$CCO & \textit{92$^{+14}_{-11}$} & \textit{(1.0 $\pm$ 0.2) $\times$ 10$^{14}$} & \nodata & 0.55 \\
c-C$_2$H$_4$O & 200 & (8.9 $\pm$ 2.0) $\times$ 10$^{13}$ & \nodata & 0.47 \\
\enddata
\tablecomments{
\added{For $T$$_{rot}$ and $N$(X), those derived by rotation diagrams are shown in italics. }
Uncertainties and upper limits are of the 2 $\sigma$ level and do not include systematic errors due to adopted spectroscopic constants.
See Sections \ref{sec_rd}-\ref{sec_h2} and \ref{sec_disc_dist} for details.
}
\tablenotetext{a}{Assuming ortho/para ratio of three. }
\tablenotetext{b}{Assuming E-CH$_3$OH/A-CH$_3$OH ratio of unity \citep{Wir11}. }
\tablenotetext{c}{Size of continuum emission. }
\tablenotetext{d}{Associated with extended component. }
\end{deluxetable}
\begin{deluxetable}{ l c c }
\tablecaption{Estimated fractional abundances \label{tab_X}}
\tabletypesize{\small}
\tablehead{
\colhead{Molecule} & \multicolumn{2}{c}{$N$(X)/$N_{\mathrm{H_2}}$} \\
\colhead{} & \colhead{0.026 pc area} & \colhead{0.1 pc area}
}
\startdata
HCO$^+$\tablenotemark{a} & (9.5 $\pm$ 3.2) $\times$ 10$^{-10}$ & (1.5 $\pm$ 0.3) $\times$ 10$^{-9}$ \\
H$_2$CO & (1.0 $\pm$ 0.3) $\times$ 10$^{-10}$ & (1.2 $\pm$ 0.1) $\times$ 10$^{-10}$ \\
HDCO & (4.7 $\pm$ 1.3) $\times$ 10$^{-11}$ & (3.9 $\pm$ 0.2) $\times$ 10$^{-11}$ \\
D$_2$CO & (2.1 $\pm$ 0.7) $\times$ 10$^{-11}$ & (2.0 $\pm$ 0.3) $\times$ 10$^{-11}$ \\
C$_2$H & (2.5 $\pm$ 0.7) $\times$ 10$^{-10}$ & (5.8 $\pm$ 1.2) $\times$ 10$^{-10}$ \\
c-C$_3$H$_2$ & (8.6 $\pm$ 3.1) $\times$ 10$^{-11}$ & (5.9 $\pm$ 1.2) $\times$ 10$^{-11}$ \\
CN & (3.0 $\pm$ 0.8) $\times$ 10$^{-10}$ & (6.6 $\pm$ 1.3) $\times$ 10$^{-10}$ \\
HCN\tablenotemark{a} & (1.7 $\pm$ 0.6) $\times$ 10$^{-9}$ & (1.2 $\pm$ 0.3) $\times$ 10$^{-9}$ \\
HC$_3$N & (2.5 $\pm$ 0.7) $\times$ 10$^{-11}$ & (1.4 $\pm$ 0.1) $\times$ 10$^{-11}$ \\
NO & (8.1 $\pm$ 3.2) $\times$ 10$^{-10}$ & (1.6 $\pm$ 0.1) $\times$ 10$^{-9}$ \\
HNCO & (2.7 $\pm$ 0.8) $\times$ 10$^{-10}$ & (7.1 $\pm$ 0.6) $\times$ 10$^{-11}$ \\
CH$_3$CN\tablenotemark{b} & (4.2 $\pm$ 2.7) $\times$ 10$^{-10}$ & (3.7 $\pm$ 2.8) $\times$ 10$^{-10}$ \\
C$_2$H$_5$CN & (5.8 $\pm$ 2.2) $\times$ 10$^{-11}$ & (2.4 $\pm$ 0.9) $\times$ 10$^{-11}$ \\
NH$_2$CHO & (3.8 $\pm$ 1.2) $\times$ 10$^{-11}$ & (1.8 $\pm$ 0.1) $\times$ 10$^{-11}$ \\
SiO & (2.2 $\pm$ 0.6) $\times$ 10$^{-12}$ & (1.2 $\pm$ 0.1) $\times$ 10$^{-12}$ \\
CS\tablenotemark{c} & (9.7 $\pm$ 3.3) $\times$ 10$^{-10}$ & (6.4 $\pm$ 1.3) $\times$ 10$^{-10}$ \\
SO\tablenotemark{c} & (1.9 $\pm$ 0.5) $\times$ 10$^{-9}$ & (1.3 $\pm$ 0.3) $\times$ 10$^{-9}$ \\
OCS\tablenotemark{a} & (1.2 $\pm$ 0.5) $\times$ 10$^{-8}$ & (4.1 $\pm$ 1.4) $\times$ 10$^{-9}$ \\
H$_2$CS & (1.4 $\pm$ 0.4) $\times$ 10$^{-10}$ & (9.0 $\pm$ 1.0) $\times$ 10$^{-11}$ \\
SO$_2$ & (1.1 $\pm$ 0.3) $\times$ 10$^{-9}$ & (2.9 $\pm$ 0.1) $\times$ 10$^{-10}$ \\
CH$_3$SH & $<$3 $\times$ 10$^{-10}$ & $<$2 $\times$ 10$^{-10}$ \\
HDO & (2.0 $\pm$ 0.6) $\times$ 10$^{-9}$ & (7.7 $\pm$ 0.9) $\times$ 10$^{-10}$ \\
CH$_3$OH\tablenotemark{a} & (3.8 $\pm$ 1.3) $\times$ 10$^{-7}$ & (1.7 $\pm$ 0.3) $\times$ 10$^{-7}$ \\
CH$_2$DOH & (4.2 $\pm$ 1.2) $\times$ 10$^{-9}$ & (1.5 $\pm$ 0.2) $\times$ 10$^{-9}$ \\
HCOOCH$_3$ & (7.8 $\pm$ 2.2) $\times$ 10$^{-9}$ & (3.0 $\pm$ 0.2) $\times$ 10$^{-9}$ \\
CH$_3$OCH$_3$ & (2.3 $\pm$ 0.6) $\times$ 10$^{-9}$ & (1.0 $\pm$ 0.1) $\times$ 10$^{-9}$ \\
C$_2$H$_5$OH & (8.7 $\pm$ 2.7) $\times$ 10$^{-10}$ & (3.3 $\pm$ 0.8) $\times$ 10$^{-10}$ \\
CH$_3$CHO & (5.8 $\pm$ 1.8) $\times$ 10$^{-10}$ & (2.1 $\pm$ 0.4) $\times$ 10$^{-10}$ \\
HCOOH\tablenotemark{d} & (2.7 $\pm$ 1.0) $\times$ 10$^{-10}$ & (1.2 $\pm$ 0.4) $\times$ 10$^{-10}$ \\
H$_2$CCO & (9.2 $\pm$ 3.0) $\times$ 10$^{-11}$ & (3.7 $\pm$ 0.9) $\times$ 10$^{-11}$ \\
c-C$_2$H$_4$O & (8.1 $\pm$ 2.8) $\times$ 10$^{-11}$ & (5.9 $\pm$ 1.2) $\times$ 10$^{-11}$ \\
\enddata
\tablecomments{
Uncertainties and upper limits are of the 2$\sigma$ level.
Column densities of molecules for a 0.026 pc area are summarized in Table \ref{tab_N}.
An empirical uncertainty of 30 $\%$ is assumed for $N_{\mathrm{H_2}}$.
}
\tablenotetext{a}{Estimated from $^{13}$C isotopologue with $^{12}$C/$^{13}$C = 150. }
\tablenotetext{b}{Rotation diagram analysis of CH$_3$CN is used to derive a lower limit and the non-detection of $^{13}$CH$_3$CN for an upper limit. }
\tablenotetext{c}{Estimated from $^{34}$S isotopologue with $^{32}$S/$^{34}$S = 35. }
\tablenotetext{d}{Sum of \textit{trans}- and \textit{cis}-species. }
\end{deluxetable}
\begin{figure}[tpbh!]
\begin{center}
\includegraphics[width=8.5cm]{f7.eps}
\caption{
The SED of WB89-789 SMM1.
The plotted data are obtained by the ESO 2.2 m telescope \citep[pluses, black; ][]{Bra07}, the WISE all-sky survey \citep[open diamonds, light green; ][]{Wri10}, \textit{AKARI} FIS all-sky survey \citep[open diamonds, blue; ][]{Yam10}, and ALMA (filled star, red, this work).
The angular resolution of each data set is indicated in brackets.
The gray dashed line indicates the best-fit SED with the model of \citet{Rob07}.
}
\label{sed}
\end{center}
\end{figure}
\section{Discussion} \label{sec_disc}
\subsection{Hot molecular core and protostar associated with WB89-789 SMM1} \label{sec_disc_star}
The nature of WB89-789 SMM1 is characterized as
(i) the compact distribution of warm gas ($\sim$0.03 pc, see Section \ref{sec_disc_dist}),
(ii) the high gas temperature that can trigger ice sublimation ($\geq$100 K, Section \ref{sec_rd}),
(iii) the high density (2 $\times$ 10$^7$ cm$^{-3}$, Section \ref{sec_h2}),
(iv) the association with a luminous protostar (see below),
and (v) the presence of chemically rich molecular gas.
Those properties suggest that the source is associated with a hot molecular core.
Figure \ref{sed} shows the SED of SMM1, where the data are collected from available databases and the literature \citep{Bra07, Wri10, Yam10}.
The bolometric luminosity of the source is estimated to be 8.4 $\times$ 10$^3$ L$_{\sun}$ based on the SED fitting with the model of \citet{Rob07}.
This luminosity is equivalent to a stellar mass of about 10 M$_{\sun}$ according to the mass-luminosity relationship of zero age main sequence (ZAMS) stars \citep{ZY07}.
Note that far-infrared data, which are important for the luminosity determination of embedded sources, are insufficient for SMM1.
Only upper limits are available due to the low angular resolution of the \textit{AKARI} FIS all-sky survey data.
Thus the actual luminosity (and therefore mass) may be lower than the current estimate.
Future high-spatial-resolution infrared observations at these missing wavelengths are highly desirable.
Alternatively, we can estimate the luminosity of SMM1 by scaling the luminosity of a low-metallicity LMC hot core, ST16, whose SED is well determined based on a comprehensive infrared \replaced{data set}{dataset} from 1 to 1200 $\mu$m \citep{ST20}.
This LMC hot core has a total luminosity of 3.1 $\times$ 10$^5$ L$_{\sun}$ and a $K_s$-band magnitude ([$K_s$]) of 13.4 mag at 50 kpc, while SMM1 has [$K_s$] = 14.7 mag at 10.7 kpc.
Scaling the luminosity of ST16 with the distance and $K_s$-band magnitude, we obtain 4.3 $\times$ 10$^3$ L$_{\sun}$ for SMM1, which is a factor of two lower than the estimate by the SED fitting.
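The scaling above combines the $K_s$-band flux ratio (from the apparent magnitudes) with the inverse-square distance ratio; all inputs are from the text.

```python
# Scale the ST16 luminosity by the Ks flux ratio and the distance ratio.
L_ST16 = 3.1e5                 # L_sun, LMC hot core ST16 at 50 kpc
Ks_ST16, Ks_SMM1 = 13.4, 14.7  # apparent Ks magnitudes
d_ST16, d_SMM1 = 50.0, 10.7    # distances [kpc]

flux_ratio = 10.0 ** (-0.4 * (Ks_SMM1 - Ks_ST16))  # SMM1 is 1.3 mag fainter
L_SMM1 = L_ST16 * flux_ratio * (d_SMM1 / d_ST16) ** 2
print(f"L(SMM1) ~ {L_SMM1:.1e} L_sun")  # ~4.3e3
```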
In either case, the present estimates suggest that the luminosity of SMM1 corresponds to the lower end of high-mass ZAMS stars or the upper end of intermediate-mass ZAMS stars.
\begin{figure}[tp!]
\begin{center}
\includegraphics[width=8.7cm]{f8.eps}
\caption{
Schematic illustration of the molecular gas distribution and the temperature structure in WB89-789 SMM1.
}
\label{schematic}
\end{center}
\end{figure}
\subsection{Distribution of molecular line emission and dust continuum} \label{sec_disc_dist}
The observed emission lines and continuum show different spatial distributions depending on the species.
These distributions provide important clues for understanding their origins.
A schematic illustration of the temperature structure and molecular gas distribution in WB89-789 SMM1 is shown in Figure \ref{schematic}, based on the discussion in this section.
We have estimated the spatial extent of observed emission by fitting a two-dimensional Gaussian to the continuum center (Table \ref{tab_N}).
Compact distributions (FWHM = 0$\farcs$5--0$\farcs$6, 0.026--0.031 pc), comparable to the beam size, are seen in HDO, COMs, CH$_3$CN, HNCO, OCS, and high-excitation SO$_2$ lines.
HC$_3$N is slightly extended (FWHM = 0$\farcs$65).
They are concentrated at the hot core position, suggesting that they originate from a warm region where ice mantles are sublimated.
SO, $^{34}$SO, $^{33}$SO, and low-excitation SO$_2$ show relatively compact distributions (FWHM = 0$\farcs$5--0$\farcs$7, 0.026--0.036 pc) at the hot core position, but also show a secondary peak at the south side of the hot core.
This secondary peak coincides with the peak of the NO emission.
Other sulfur-bearing species such as C$^{34}$S, C$^{33}$S, and H$_2$CS show compact distributions \replaced{(FWHM = 0$\farcs$6--0$\farcs$0.7 (0.031--0.052 pc)}{(FWHM = 0$\farcs$6--0$\farcs$7, 0.031--0.036 pc)} centered at the hot core.
A characteristic distribution symmetric about the hot core position is seen in SiO.
It shows compact emission (FWHM = 0$\farcs$65) at the hot core center, but also shows additional peaks on the north-east and south-west sides of the hot core.
Those secondary peaks are slightly elongated.
SiO is a well-known shock tracer.
The observed structure would originate from shocked gas produced by bipolar protostellar outflows.
The driving source of the outflows would be a protostar embedded in the hot core, since the distribution of SiO is symmetric about the hot core position.
More extended distributions (FWHM $>$ 1$\farcs$0) are seen in CN, CCH, H$^{13}$CO$^{+}$, HC$^{18}$O$^{+}$, H$^{13}$CN, HC$^{15}$N, NO, CS, H$_2$CO, HDCO, D$_2$CO, and low-excitation CH$_3$OH.
Gas-phase reactions and non-thermal desorption of icy species would make a non-negligible contribution to the formation of these species, because they are widely distributed beyond the hot core.
We note that the dust continuum, H$^{13}$CN, and HC$^{15}$N have a moderately sharp peak (FWHM $<$ 1$\farcs$0) at the hot core position in addition to the extended component.
c-C$_3$H$_2$ shows a patchy distribution, whose secondary peak at the south-west of the hot core does not \replaced{coincides}{coincide} with those of other species.
Molecular radicals (CN, CCH, and NO) do not have their emission peak at the hot core position.
This would suggest that the chemistry outside the hot core region largely contributes to their production.
CN and CCH are known to be abundant in photodissociation regions (PDRs), because atomic carbon is efficiently provided by photodissociation of CO under moderate UV fields \citep[e.g., ][]{Fue93, Ste95, Jan95, Rod98, Pet17}.
In the present source, their emission shows a similar spatial distribution.
A similar distribution of CN and CCH has also been observed in an LMC hot core \citep{ST20}; the authors argue that CN and CCH would trace PDR-like outflow cavity structures irradiated by the UV light from the protostar associated with the hot core.
We speculate that this is also the case for WB89-789 SMM1.
Figure \ref{Mom1} shows velocity maps (moment 1) of CN and CCH lines.
CN and CCH emission are elongated in the south-west direction from the hot core (see also Figure \ref{images1}).
The figure also shows a possible direction of protostellar outflows expected from the spatial distribution of SiO.
The elongated directions of CN and CCH coincide with the inferred direction of outflows.
In addition, the elongated south-west parts of CN and CCH are blue-shifted by $\sim$1--2 km s$^{-1}$ compared to the hot core position.
This may be due to outflow gas motion, although CN and CCH would trace an outflow cavity wall rather than outflow gas itself.
Indeed, the observed velocity shift is smaller than typical values of high-velocity wing components in massive protostellar outflows \citep[$\geq$ 5 km s$^{-1}$, e.g., ][]{Beu02, Mau15}.
\added{We note that a clear velocity structure is not seen in the SiO velocity map, except for the position of another embedded protostar discussed in Section \ref{sec_disc_SiO}. }
Future observations of optically-thick outflow tracers such as CO are necessary to confirm the presence of high-velocity gas associated with protostellar outflows.
\begin{figure}[tp!]
\begin{center}
\includegraphics[width=8.5cm]{f9.eps}
\caption{
Velocity maps (moment 1) of CN and CCH lines.
The color scale indicates the offset velocity relative to the systemic velocity of 34.5 km s$^{-1}$.
A possible direction of outflows expected from the distributions of SiO is shown by the red arrows.
Contours represent the integrated intensity distribution and the contour levels are 8$\%$, 20$\%$, 40$\%$, and 60$\%$ of the peak value.
Low signal-to-noise regions (S/N $<$5) are masked.
The blue cross represents the 1.2 mm continuum center. }
\label{Mom1}
\end{center}
\end{figure}
\subsection{Molecular abundances: Comparison with Galactic hot cores} \label{sec_disc_molab}
Figure \ref{abu1} shows a comparison of molecular abundances between WB89-789 SMM1 and other known Galactic hot cores.
The data for an intermediate-mass hot core, NGC7192 FIRS2, are adopted from \citet{Fue14}.
The abundances are based on 220 GHz region observations of a 0.009 pc diameter area centered at the hot core.
The luminosity of NGC7192 FIRS2 ($\sim$500 L$_{\sun}$) corresponds to that of a 5 M$_{\sun}$ ZAMS star.
The data for a high-mass source, the Orion hot core, are adopted from \citet{Sut95}, based on 340 GHz region observations of a 0.027 pc diameter area at the hot core.
The abundance of HNCO is taken from \citet{Sch97}.
The molecular abundances in WB89-789 SMM1 are generally lower than those of its inner Galactic counterparts.
The degree of the abundance decrease is roughly consistent with the lower metallicity of the WB89-789 region as indicated by the scale bar in Figure \ref{abu1}.
In particular, SMM1 and the intermediate-mass hot core NGC7192 FIRS2 show similar molecular abundances after taking into account the four times lower metallicity of the former source.
In the comparison with Orion, HC$_3$N, C$_2$H$_5$CN, and SO$_2$ appear significantly less abundant in SMM1 even after accounting for the lower metallicity, while CH$_3$OH is overabundant in SMM1 despite the low metallicity.
\begin{figure*}[tpb!]
\begin{center}
\includegraphics[width=17.0cm]{f10.eps}
\caption{
Comparison of molecular abundances between an outer Galactic hot core (black, WB89-789 SMM1), an intermediate-mass hot core (green, NGC7192 FIRS2), and a high-mass hot core (cyan, Orion).
An abundance difference by a factor of four is indicated by the black solid line with hats.
The areas with thin vertical lines indicate the error bars.
No data is available for HDO in NGC7192 FIRS2.
See Section \ref{sec_disc_molab} for details.
}
\label{abu1}
\end{center}
\end{figure*}
\begin{figure*}[tpb!]
\begin{center}
\includegraphics[width=18.0cm]{f11.eps}
\caption{
Comparison of molecular abundances normalized by the CH$_3$OH column density for (a) WB89-789 SMM1 vs. NGC7192 FIRS2 and (b) WB89-789 SMM1 vs. ST16 (LMC).
Carbon- and oxygen-bearing species are shown by the blue squares, nitrogen-bearing species in green, and sulfur-bearing species in red.
The dotted lines in panel (a) represent abundance ratios of 2:1 and 1:2 for WB89-789 SMM1 : NGC7192 FIRS2, while the solid line represents 1:1.
Similarly, the dotted lines in panel (b) represent ratios of 100:1, 10:1, 1:10, and 1:100 for WB89-789 SMM1 : ST16, while the solid line represents 1:1.
The leftward triangles in panel (b) indicate upper limits for ST16.
See Section \ref{sec_disc_molab} for details.
}
\label{abu2}
\end{center}
\end{figure*}
To further focus on chemical complexity at low metallicity, Figure \ref{abu2} shows a comparison of fractional abundances of COMs normalized by the CH$_3$OH column density for WB89-789 SMM1 and NGC7192 FIRS2.
Such a comparison is useful for investigating chemistry of organic molecules in warm and dense gas around protostars \citep{Her09,Dro19}, because CH$_3$OH is believed to be a parental molecule for the formation of even larger COMs \citep[e.g.,][]{NM04,Gar06}.
In addition, CH$_3$OH is a product of grain-surface reactions; thus warm CH$_3$OH gas mainly arises from a high-temperature region, where ices are sublimated and the characteristic hot core chemistry proceeds.
Furthermore, the normalization by CH$_3$OH can cancel the metallicity effect in the abundance comparison.
The $N$(X)/$N$(CH$_3$OH) ratios are remarkably similar between WB89-789 SMM1 and NGC7192 FIRS2 as shown in Figure \ref{abu2} (a).
The ratios of SMM1 coincide with those of NGC7192 FIRS2 within a factor of 2 for most molecular species.
The correlation coefficient is calculated to be 0.94, or 0.96 if CH$_3$CN is excluded.
CH$_3$CN seems to deviate from the overall trend, although its uncertainty is large due to the opacity effect (see Section \ref{sec_x}).
C$_2$H$_5$OH also slightly deviates from the trend.
The reason for their behavior is still unclear, but it may be related to the formation pathway of those molecules.
The above two comparisons suggest that chemical compositions of the hot core in the extreme outer Galaxy scale with the metallicity.
In the WB89-789 region, the metallicity is expected to be four times lower than in the solar neighborhood.
The observed abundances of COMs in the SMM1 hot core are lower than in other Galactic hot cores, but the decrease is proportional to this metallicity difference.
Furthermore, similar $N$(COMs)/$N$(CH$_3$OH) ratios suggest that CH$_3$OH is an important parental species for the formation of larger COMs in a hot core, as suggested by aforementioned theoretical studies.
CH$_3$OH ice is believed to form on grain surfaces, and several formation processes have been proposed based on laboratory experiments, e.g., hydrogenation of CO and ultraviolet photolysis or radiolysis of ice mixtures \citep[e.g.,][]{Hud99,Wat07}.
It is known that CH$_3$OH is already formed in quiescent prestellar cores before star formation occurs \citep{Boo11}.
Solid CH$_3$OH will chemically evolve to larger COMs by a combination of photolysis, radiolysis, and grain heating during the warm-up phase that leads to the formation of a hot core \citep{Gar06}.
High-temperature gas-phase chemistry of sublimated CH$_3$OH would also contribute to the COMs formation \citep{NM04,Taq16}.
The present results suggest that various COMs can form even in a low-metallicity environment, if their parental molecule, CH$_3$OH, is efficiently produced in a star-forming core.
\added{The detection of a chemically-rich star-forming core in the extreme outer Galaxy has an impact on the understanding of the occurrence of the chemical complexity in a primordial environment of the early phase of the Galaxy formation. }
We note that observations of ice mantle compositions have not been reported for the outer Galaxy so far.
Future infrared observations of ice absorption bands towards embedded sources in the outer Galaxy are important.
\begin{figure*}[tbp!]
\begin{center}
\includegraphics[width=16.5cm]{f12.eps}
\caption{
Comparison of molecular abundances between an outer Galactic hot core, WB89-789 SMM1 (black), and three LMC hot cores, ST11 (red), ST16 (orange), and N113 A1 (light yellow).
Abundances of SMM1 are calculated for a 0.1 pc diameter region.
The areas with thin vertical lines indicate the error bars.
Bars with a color gradient indicate upper limits.
The absence of a bar indicates the lack of available data.
See Section \ref{sec_disc_molab2} for details.
}
\label{abu3}
\end{center}
\end{figure*}
\subsection{Molecular abundances: Comparison with LMC hot cores} \label{sec_disc_molab2}
It is still unknown whether the metallicity-scaled COM chemistry observed in the WB89-789 SMM1 hot core is common to other hot core sources in the outer Galaxy.
A comparison of the present data with those of hot cores in the LMC would provide a hint for understanding the universality of low-metallicity hot core chemistry.
The metallicity of the LMC is reported to be lower than the solar value by a factor of two to three \citep[e.g.,][]{Duf82, Wes90, Rus92, Cho16}, comparable to that of the outer Galaxy.
Figure \ref{abu3} shows a comparison of molecular abundances between WB89-789 SMM1 and three LMC hot cores.
The plotted molecular column densities for the LMC hot cores are adopted from \citet{ST16} for ST11, \citet{ST20} for ST16, and \citet{Sew18} for N113 A1.
Another LMC hot core in \citet{Sew18}, N113 B3, has molecular abundances similar to those of N113 A1.
The $N_{\mathrm{H_2}}$ values of ST11 and N113 A1 are re-estimated using the same dust opacity data and dust temperature ($T_{d}$ = 60 K) as in this work;
we obtain $N_{\mathrm{H_2}}$ = 1.2 $\times$ 10$^{24}$ cm$^{-2}$ for ST11 and $N_{\mathrm{H_2}}$ = 9.2 $\times$ 10$^{23}$ cm$^{-2}$ for N113 A1.
The dust temperature assumed in ST16 is 60 K as described in Section \ref{sec_h2}.
Molecular column densities are estimated for circular/elliptical regions of 0.12 $\times$ 0.12 pc, 0.10 $\times$ 0.10 pc, and 0.21 $\times$ 0.13 pc for ST11, ST16, and N113 A1, respectively.
For a fair comparison, we have re-calculated $N_{\mathrm{H_2}}$ and molecular column densities of SMM1 for a 0.1 pc (1$\farcs$93) diameter region.
Those abundances are plotted in Figure \ref{abu3} and summarized in Table \ref{tab_X}.
The chemical composition of the outer Galaxy hot core does not resemble those of LMC hot cores as seen in Figure \ref{abu3}.
The dissimilarity is also seen in the $N$(X)/$N$(CH$_3$OH) comparison between SMM1 and ST16 as shown in Figure \ref{abu2} (b), where the correlation coefficient is calculated to be 0.69.
\citet{ST20} argue that SO$_2$ is a good tracer of low-metallicity hot core chemistry, because (i) it is commonly detected in LMC hot cores with similar abundances, and (ii) it originates from a compact hot core region.
SO also shows similar abundances within LMC hot cores.
In WB89-789 SMM1, however, the abundances of SO$_2$ and SO relative to H$_2$ are lower by factors of 28 and 5, respectively, compared with LMC hot cores.
The measured rotation temperatures of SO$_2$ are similar between those hot cores, i.e., 166 K (SO$_2$) for SMM1, 232 K (SO$_2$) and 86 K ($^{34}$SO$_2$) for ST16, 190 K (SO$_2$) and 95 K ($^{34}$SO$_2$) for ST11.
The SO$_2$ column densities for ST16 and ST11 are estimated from $^{34}$SO$_2$, while that for SMM1 is from SO$_2$.
However, the SO$_2$ column density of SMM1 increases only by a factor of up to three when it is estimated from $^{34}$SO$_2$ (see Section \ref{sec_x}).
Thus the low SO$_2$ abundance in the outer Galactic hot core would not be due to the optical thickness.
In contrast to the S-O bond-bearing species, the C-S bond-bearing species such as CS, H$_2$CS, and OCS do not show a significant abundance decrease in WB89-789 SMM1.
Thus it is not straightforward to attribute the low abundance of SO$_2$ (and perhaps SO) to the low elemental abundance ratio of sulfur in the outer Galaxy.
Hot core chemistry models suggest that SO$_2$ is mainly produced in high-temperature gas-phase reactions in warm gas, using H$_2$S sublimated from ice mantles \citep{Cha97, NM04}.
This also applies to the SO$_2$ formation in low-metallicity sources as shown in astrochemical simulations for LMC hot cores \citep{ST20}.
We speculate that the different behavior of SO$_2$ in outer Galaxy and LMC hot cores may be related to differences in the evolutionary stage of hot cores.
A different luminosity of host protostars may also contribute to the different sulfur chemistry; i.e., $\sim$8 $\times$ 10$^3$ L$_{\sun}$ for WB89-789-SMM1, while several $\times$ 10$^5$ L$_{\sun}$ for LMC hot cores.
A different cosmic-ray ionization rate between the outer Galaxy and the LMC may also affect the chemical evolution, although the rate is not known for the outer Galaxy.
Among nitrogen-bearing molecules, NO shows interesting behavior in LMC hot cores.
After correction for metallicity, NO is overabundant in LMC hot cores compared with Galactic counterparts despite the low elemental abundance of nitrogen in the LMC \citep{ST20}.
Only NO shows such behavior among the nitrogen-bearing molecules observed in LMC hot cores.
In WB89-789 SMM1, however, such an overabundance of NO is not observed.
The NO abundance of SMM1 is 1.6 $\times$ 10$^{-9}$, measured for a 0.1 pc region.
This is a factor of five lower than a typical NO abundance in Galactic high-mass hot cores \citep[8 $\times$ 10$^{-9}$,][]{Ziu91}, which is consistent with a factor of four lower metallicity in WB89-789.
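The comparison above is simple to verify with a back-of-envelope check in Python (our own illustration, not part of the paper's analysis; the variable names are ours and the numbers are those quoted in the text):

```python
# NO abundances relative to H2, as quoted in the text
x_no_smm1 = 1.6e-9       # WB89-789 SMM1, 0.1 pc region
x_no_galactic = 8e-9     # typical Galactic high-mass hot core (Ziurys et al. 1991)
metallicity_factor = 4   # metallicity deficit of WB89-789 relative to solar

deficit = x_no_galactic / x_no_smm1
print(f"NO deficit: factor {deficit:.0f}; metallicity deficit: factor {metallicity_factor}")
```

The NO deficit (a factor of five) is close to the metallicity deficit (a factor of four), i.e., consistent with simple metallicity scaling.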
The present high-spatial resolution data have revealed that NO does not mainly arise from a hot core region, as shown in Figure \ref{images1}.
It has an intensity peak at the south part of the hot core, where low-excitation lines of SO and SO$_2$ also have a secondary peak (Section \ref{sec_disc_dist}).
Thus, shock chemistry or photochemistry, rather than high-temperature chemistry, would contribute to the production of NO in low-metallicity protostellar cores.
In that case, a lower luminosity of SMM1 than those of LMC hot cores may contribute to the different behavior of NO.
For other nitrogen-bearing molecules, HNCO and CH$_3$CN, a clear difference is not identified between outer Galactic and LMC hot cores, although the number of data points is limited and the abundance uncertainty is large.
The reason for the unusually low abundance of SiO in SMM1 is unknown.
It may be related to different shock conditions or grain compositions, because dust sputtering by shocks is mainly responsible for the production of SiO gas.
Formation of COMs is one of the important aspects of low-metallicity hot core chemistry.
It is reported that CH$_3$OH shows a large abundance variation in LMC hot cores \citep{ST20}.
There are organic-poor hot cores such as ST11 and ST16, while N113 A1 and B3 are organic-rich.
The CH$_3$OH abundance of WB89-789 SMM1 is higher than those of any known LMC hot cores.
The abundances of HCOOCH$_3$ and CH$_3$OCH$_3$ in SMM1 are comparable with those of an organic-rich LMC hot core, N113 A1.
The detection of many other COMs in SMM1 suggests that the source has experienced rich organic chemistry despite its low-metallicity nature.
Astrochemical simulations for LMC hot cores suggest that the dust temperature at the initial ice-forming stage has a major effect on the abundance of CH$_3$OH gas in the subsequent hot core stage \citep{Ach18,ST20}.
Simulations of grain surface chemistry dedicated to the LMC environment also suggest that dust temperature is one of the key parameters for the formation of CH$_3$OH in dense cores \citep{Ach15,Pau18}.
This is because (i) CH$_3$OH is mainly formed by the grain surface reaction, and (ii) the hydrogenation reaction of CO, which is a dominant pathway for the CH$_3$OH formation, is sensitive to the dust temperature due to the high volatility of atomic hydrogen.
For this reason, it is inferred that organic-rich hot cores had experienced a cold stage ($\lesssim$10 K) that is sufficient for the CH$_3$OH formation before the hot core stage, while organic-poor ones might have missed such a condition for some reason.
Alternatively, the slight difference in the hot core's evolutionary stage may contribute to the CH$_3$OH abundance variation, because the high-temperature gas-phase chemistry is rapid and it can decrease CH$_3$OH gas at a late stage \citep[e.g.,][]{NM04,Gar06,Vas13,Bal15}.
Low-metallicity hot core chemistry simulations in \citet{ST20} argue that the maximum achievable abundances of CH$_3$OH gas in a hot core stage significantly decrease as the visual extinction of the initial ice-forming stage decreases.
On the other hand, the simulations show that the CH$_3$OH gas abundance is simply metallicity-scaled if the initial ice-forming stage is sufficiently shielded.
In a well-shielded initial condition, the grain surface is cold enough to trigger the CO hydrogenation, and the resultant CH$_3$OH abundance is roughly regulated by the elemental abundances.
The observed metallicity-scaled chemistry of COMs in WB89-789 SMM1 implies that the source had experienced such an initial condition before the hot core stage.
Deuterium chemistry is widely used in interpreting chemical and physical history of interstellar molecules \citep[e.g.,][]{Cas12,Cec14}.
The measured CH$_2$DOH/CH$_3$OH ratio in WB89-789 SMM1 is 1.1 $\pm$ 0.2 $\%$, which is comparable to the higher end of those ratios observed in high-mass protostars and the lower end of those in low-mass protostars \citep[e.g., see Fig. 2 in][]{Dro21}.
The ratio is orders of magnitude higher than the deuterium-to-hydrogen ratio in the solar neighborhood \citep[2 $\times$ 10$^{-5}$;][]{Lin06,Pro10} and that from big-bang nucleosynthesis \citep[3 $\times$ 10$^{-5}$;][and references therein]{Bur02}.
This suggests that efficient deuterium fractionation occurred upon the formation of CH$_3$OH in SMM1.
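The magnitude of this fractionation can be illustrated with a one-line estimate (our own sketch, using only the numbers quoted above and ignoring the statistical factor of three from the three equivalent methyl hydrogens of CH$_3$OH):

```python
ratio_ch2doh = 1.1e-2    # CH2DOH/CH3OH measured in SMM1
d_to_h_local = 2e-5      # D/H in the solar neighborhood (Linsky et al. 2006)

enhancement = ratio_ch2doh / d_to_h_local
print(f"enhancement over the local D/H ratio: ~{enhancement:.0f}x")
```

The measured ratio exceeds the elemental D/H ratio by a factor of several hundred, which is what "orders of magnitude higher" quantifies.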
The D$_2$CO/HDCO ratio is 45 $\pm$ 10 $\%$, which is comparable to those observed in low-mass and high-mass protostars \citep[e.g.,][]{Zah21}.
This would suggest that physical conditions for deuterium fractionation could be similar between WB89-789 SMM1 and inner Galactic protostars.
Note that higher spatial resolution observations and detailed multiline analyses would affect the measured abundance of deuterated species as \replaced{shown}{reported} in \citet{Per18} for the case of a nearby low-mass protostar.
The H$_2$CO column density derived in this work may be a lower limit because the line is often optically thick, thus we do not discuss the abundance ratio relative to H$_2$CO.
It is known that the deuterium fractionation efficiently proceeds at low temperature \citep[e.g.,][]{Rob03,Cas12,Taq14,Fur16}.
This is because the key reaction that triggers deuterium fractionation, H$^+_3$ + HD $\rightarrow$ H$_2$D$^+$ + H$_2$ + 232 K, is exothermic and its backward reaction cannot \added{efficiently} proceed below 20 K.
In addition, gaseous neutral species such as CO and O efficiently destroy H$_2$D$^+$; thus their depletion at low temperature further enhances the deuterium fractionation \citep[e.g.,][]{Cas12}.
A sign of high deuterium fractionation observed in WB89-789 SMM1 suggests that the source had experienced such a cold environment during its formation.
This picture is consistent with the implication obtained from the metallicity-scaled chemistry of COMs, which also suggests the occurrence of a cold and well-shielded initial condition as discussed above.
Although the low metallicity is common between the outer Galaxy and the LMC, their star-forming environments would be different; the LMC has harsher environments, as inferred from active massive star formation over the whole galaxy, while the outer Galaxy might be quiescent due to its low star formation activity.
Those environmental differences need to be taken into account for further understanding of the chemical evolution of star-forming regions at low metallicity.
Future extensive surveys of protostellar objects towards the outer Galaxy are thus vitally important for further discussion.
Astrochemical simulations dedicated to the environment of the outer Galaxy, and the application to lower-mass protostars, are also important.
\subsection{Another embedded protostar traced by high-velocity SiO gas} \label{sec_disc_SiO}
We have also detected a compact source associated with high-velocity SiO gas at the east side of WB89-789 SMM1.
Hereafter, we refer to this source as WB89-789-SMM2.
According to the SiO emission, the source is located at RA = 06$^\mathrm{h}$17$^\mathrm{m}$24$\fs$246 and Dec = 14$^\circ$54$\arcmin$43$\farcs$25 (ICRS), which is 2$\farcs$7 (0.14 pc) away from SMM1.
Figure \ref{moment1_SiO}(a) shows the SiO(6-5) spectrum extracted from a 0$\farcs$6 diameter region centered at the above position.
The SiO line is largely shifted to the blue and red sides relative to the systemic velocity in a symmetric fashion.
The peaks of the shifted emission are located at $V_{sys}$ $\pm$ 25 km s$^{-1}$.
\begin{figure}[tpb!]
\begin{center}
\includegraphics[width=8.5cm]{f13.eps}
\caption{
(a) SiO(6-5) spectrum of WB89-789-SMM2.
The dotted line indicates a systemic velocity of 34.5 km s$^{-1}$.
High-velocity ($V_{sys}$ $\pm$25 km s$^{-1}$) SiO components are seen at the blue-/red-shifted sides of the systemic velocity.
(b) Velocity map (moment 1) of the SiO(6-5) line.
The color scale indicates the offset velocity relative to the systemic velocity.
\added{Low signal-to-noise ratio regions (S/N $<$5) are masked. }
Gray contours represent the intensity distribution of SiO(6-5) integrated from 0 to 60 km s$^{-1}$, and the contour levels are 1.5$\sigma$, 4$\sigma$, and 12$\sigma$ of the rms level.
The yellow star indicates the SiO center of SMM2, while the blue cross indicates the hot core position (SMM1).
The subset panel shows the 1200 $\mu$m continuum image for a 1$\farcs$2 $\times$ 1$\farcs$2 region centered at SMM2.
See Section \ref{sec_disc_SiO} for details.
}
\label{moment1_SiO}
\end{center}
\end{figure}
Figure \ref{moment1_SiO}(b) shows a velocity map and integrated intensity distribution of SiO(6-5).
In the figure, to focus on SiO in WB89-789-SMM2, the intensity is integrated over a much wider velocity range (0--60 km s$^{-1}$) compared with that adopted in Figure \ref{images1} (31--38 km s$^{-1}$).
The velocity map clearly indicates that the velocity structure of SiO in SMM2 is spatially symmetric about the SiO center.
At this position, a local peak is seen in 1200 $\mu$m continuum as shown in the figure, suggesting the presence of an embedded source.
SMM2 does not show any emission lines of COMs, and no alternative molecular lines are identified at the \replaced{positions}{frequencies} of doppler-shifted SiO emission.
Also taking into account the clear spectral and spatial symmetry, the observed lines must be attributed to high-velocity SiO gas.
The spectral characteristics of the observed high-velocity SiO resemble those of extremely high velocity (EHV) outflows observed in Class 0 protostars \citep{Bac91,Taf10,Taf15,Tyc19}.
The EHV flows are known to appear as a discrete high-velocity ($V$ $\gtrsim$30 km s$^{-1}$) peak, and are observed in the youngest stage of star formation \citep[][references therein]{Bac96,Mat19}.
The EHV flows extend up to several thousand au from the central protostar in SiO, and usually have collimated bipolar structures \citep[e.g.,][]{Bac91,Hir10,Tyc19,Mat19}.
The beam size of the present data is about 5000 au, thus such structures will not be \added{fully} spatially resolved.
Actually, the symmetric spatial distribution of blue-/red-shifted SiO is only marginally resolved into two beam-sized regions (Fig. \ref{moment1_SiO}(b)).
The spatial extent of the SiO emission is about 1$\arcsec$ (0.052 pc).
Assuming an outflow velocity of 25 km s$^{-1}$, we estimate a dynamical timescale of EHV flows to be at least 2000 years.
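The quoted timescale can be reproduced with a short Python estimate (our own back-of-envelope check, not the paper's analysis code); since neither the spatial extent nor the velocity is corrected for projection, the result is a lower limit, as stated above:

```python
PC_IN_KM = 3.0857e13     # kilometers per parsec
YEAR_IN_S = 3.156e7      # seconds per year

extent_pc = 0.052        # observed SiO spatial extent (~1 arcsec)
v_outflow = 25.0         # assumed outflow velocity in km/s

t_dyn = extent_pc * PC_IN_KM / v_outflow / YEAR_IN_S
print(f"dynamical timescale ~ {t_dyn:.0f} yr")
```

The crossing time of the observed extent at the assumed velocity comes out at roughly two thousand years.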
This is roughly consistent with dynamical timescales of other EHV sources, which range from a few hundred to a few thousand years \citep[][references therein]{Bac96}.
A 1200 $\mu$m continuum flux in a 0$\farcs$6 diameter region centered at SMM2 is 0.60 $\pm$ 0.05 mJy/beam.
Assuming $T_{d}$ = 20 K, we obtain $N_{\mathrm{H_2}}$ = 3.2 $\times$ 10$^{23}$ cm$^{-2}$.
This is equivalent to a gas number density of $n_{\mathrm{H_2}}$ = 4.9 $\times$ 10$^6$ cm$^{-3}$.
If we assume a higher $T_{d}$, i.e. 40 K, then the derived column density is 2.5 times lower than the \replaced{above estimate}{20 K case}.
In either case, the continuum data suggests the presence of high-density gas at this position.
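The sensitivity of the derived column density to the assumed dust temperature follows from the Planck function at the observing frequency; a minimal sketch (our own, assuming optically thin dust emission so that $N_{\mathrm{H_2}} \propto 1/B_\nu(T_d)$):

```python
import math

H = 6.62607e-34   # Planck constant, J s
KB = 1.38065e-23  # Boltzmann constant, J/K
C = 2.99792e8     # speed of light, m/s

def planck(nu_hz, t_k):
    """Planck function B_nu(T) in SI units."""
    return 2 * H * nu_hz**3 / C**2 / math.expm1(H * nu_hz / (KB * t_k))

nu = C / 1.2e-3   # 1200 micron corresponds to ~250 GHz
# For optically thin dust emission, N(H2) scales as 1/B_nu(T_d)
ratio = planck(nu, 40.0) / planck(nu, 20.0)
print(f"N(H2) derived at 20 K is {ratio:.1f} times that derived at 40 K")
```

This pure Planck scaling gives a factor of about 2.3, in line with the factor of 2.5 quoted above; the exact value depends on the details of the adopted dust emissivity.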
\added{
The column density and fractional abundance of SiO gas at the above position are estimated to be $N(\mathrm{SiO})$ $\sim$ 2 $\times$ 10$^{13}$ cm$^{-2}$ and $N(\mathrm{SiO})$/$N_{\mathrm{H_2}}$ $\sim$ 6 $\times$ 10$^{-11}$, where we assume optically thin emission in the LTE and a gas/dust temperature of 20 K.
The fractional abundance will be two times higher if we assume a gas/dust temperature of 10 K or 40 K.
The SiO abundance in SMM2 is at least 30 times higher than that observed in SMM1.
The observed enhancement of SiO in SMM2 would be related to shock chemistry triggered by EHV outflows.
}
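The quoted fractional abundance follows directly from the two column densities above (a trivial consistency check in Python, using only the numbers quoted in the text):

```python
n_sio = 2e13    # cm^-2, SiO column density toward SMM2 (LTE, 20 K)
n_h2 = 3.2e23   # cm^-2, from the 1200 micron continuum at T_d = 20 K

x_sio = n_sio / n_h2
print(f"X(SiO) = {x_sio:.1e}")  # ~6e-11, as quoted in the text
```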
Previous single-dish observations of CO detected extended ($\sim$20$\arcsec$) molecular outflows in the WB89-789 region \citep{Bra94,Bra07}.
The center of the outflow gas coincides with the position of the IRAS source (IRAS 06145+1455; 06$^\mathrm{h}$17$^\mathrm{m}$24$\fs$2, 14$\arcdeg$54$\arcmin$42$\arcsec$, J2000).
This position is consistent with those of SMM1 or SMM2, given the large beam size of CO(3-2) observations (14$\arcsec$) in \citet{Bra07}.
The observed CO outflow gas has an extended blue-shifted component (20 $<$ $V_{LSR}$ $<$ 31 km s$^{-1}$) towards the south-east direction from the center, while a red-shifted component (37 $<$ $V_{LSR}$ $<$ 55 km s$^{-1}$) is extended towards the north-west direction \citep[see Figure 9 in][]{Bra07}.
This outflow direction coincides with that of the high-velocity SiO outflows observed in this work.
The SiO outflows from SMM2 may have a common origin with the large-scale CO outflows.
In summary, it is likely that a compact, high-density, and embedded object is located at the position of WB89-789-SMM2.
Presumably, a protostar associated with SMM2 is driving the observed high-velocity SiO gas flows.
Its short dynamical timescale and similarity with EHV flows suggest that the object is at the youngest stage of star formation (Class 0/I).
Non-detection of warm gas emission also supports its young nature.
We note that the detailed structure of high-velocity SiO gas is not \added{fully} spatially resolved, and CO lines, which often trace high-velocity outflows, are not covered in the present data.
Future high-spatial resolution observations of CO and other outflow tracers are key to further clarify the nature of WB89-789-SMM2.
\section{Summary} \label{sec_sum}
The extreme outer Galaxy is an excellent laboratory to study star formation and interstellar medium in \replaced{a low-metallicity and primitive Galactic environment. }{a Galactic low-metallicity environment. }
The following conclusions are obtained in this work.
\begin{enumerate}
\item
A hot molecular core is for the first time detected in the extreme outer Galaxy (WB89-789-SMM1), based on submillimeter observations with ALMA towards the WB89-789 star-forming region located at the galactocentric distance of 19 kpc.
\item
A variety of carbon-, oxygen-, nitrogen-, sulfur-, and silicon-bearing species, including complex organic molecules containing up to nine atoms and larger than CH$_3$OH, are detected towards a warm ($>$100 K) and compact ($<$ 0.03 pc) region associated with a protostar ($\sim$8 $\times$ 10$^3$ L$_{\sun}$).
The results suggest that a great molecular complexity exists even in a \replaced{primitive}{lower metallicity} environment of the extreme outer Galaxy.
\item
\added{For deuterated species, we have detected HDO, HDCO, D$_2$CO, and CH$_2$DOH.
HDO and CH$_2$DOH arise from a compact and high-temperature ($T_{rot}$ = 155--220 K) region, while HDCO and D$_2$CO arise from a lower-temperature ($T_{rot}$ $\sim$ 40 K) and slightly more extended region.
The measured ratios of CH$_2$DOH/CH$_3$OH and D$_2$CO/HDCO are 1.1 $\pm$ 0.2 $\%$ and 45 $\pm$ 10 $\%$, respectively.
}
\item
Fractional abundances of CH$_3$OH and other COMs relative to H$_2$ generally scale with the metallicity of WB89-789, which is a factor of four lower than the solar value.
\item
A comparison of fractional abundances of COMs relative to the CH$_3$OH column density between the outer Galactic hot core and a Galactic intermediate-mass hot core shows a remarkable similarity.
The results suggest the metallicity-scaled chemistry for the formation of COMs in this source.
CH$_3$OH is an important parental molecule for the COMs formation even in a \replaced{low-metallicity}{lower metallicity} environment.
\item
On the other hand, the molecular abundances of the present hot core do not resemble those of LMC hot cores.
We speculate that different luminosities or star-forming environments between outer Galactic and LMC hot cores may contribute to this.
\item
According to astrochemical simulations of low-metallicity hot cores, the observed metallicity-scaled chemistry of COMs in WB89-789-SMM1 implies that the source had experienced a well-shielded and cold ice-forming stage before the hot core stage.
\item
We have also detected another compact source (WB89-789-SMM2) associated with high-velocity SiO gas ($V_{sys}$ $\pm$ 25 km s$^{-1}$) in the same region.
The characteristics of the source resemble those of EHV outflows observed in Class 0 protostars.
Physical properties and dynamical timescale of this outflow source are discussed.
\end{enumerate}
This paper makes use of the following ALMA data: ADS/JAO.ALMA$\#$2017.1.01002.S \added{and 2018.1.00627.S. }
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile.
The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ.
This work has made extensive use of the Cologne Database for Molecular Spectroscopy and the molecular database of the Jet Propulsion Laboratory.
This work makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
This work was supported by JSPS KAKENHI Grant Number \replaced{19H05067 and 21H00037}{19H05067, 21H00037, and 21H01145}.
\added{Finally, we would like to thank an anonymous referee for insightful comments, which substantially improved this paper. }
\software{CASA \citep{McM07}}
\section{Introduction}
Granular matter, when externally excited, shows a series of peculiar
phenomena. One of them is the mixing or segregation that takes place when
two or more species of different grains are put
together. Depending on the control parameters and the energy injection
mechanisms, grains of different size, shape, mass or mechanical properties can
mix or segregate.
Consider the particular case of a mixture of small grains and one large
intruder. Here the intruder can go up \cite{Rosato} or down \cite{Breu} --the direct and
reverse Brazil nut effects, respectively--, a phenomenon that has been studied in many papers (see, e.g.,
Ref. \cite{Kudrolli} and references therein).
When both species have similar (but possibly different) sizes, one can select a few cases
where a variety of
segregation mechanisms and scenarios appear \cite{RuizSuarez,Schroter,Khakhar}.
For instance, in Ref. \cite{Mullin}
particles of different masses, radii and restitution coefficients are placed in a dish
which is horizontally vibrated, and complete segregation is found. Segregation is also found in the same geometry
when the grains have different friction coefficients with the base \cite{Ciamarra}.
Under horizontal swirling, radial
segregation of particles of different sizes has been observed
\cite{Schnautz}. In avalanches, grains of different shape segregate in
stripes \cite{Makse}; in partially filled rotating drums, axial size
segregation develops \cite{Hill}. In two dimensional systems under gravity, sinusoidally
vibrated, clustering has been observed \cite{King}.
This segregation effect can be modulated by using non-sinusoidal vibration \cite{King2}.
In some of the cases mentioned above the grain species differ in the friction or
restitution coefficient. However, few papers have studied segregation when
this is the only difference between grains. One of these cases is
Ref. \cite{Kondic}, where a mixture of spheres
that only differ in friction coefficients (static, dynamic and
rolling) is horizontally vibrated. They find complete mixing --that is, no segregation-- for a flat plate,
while segregation is only observed when the plate is slightly inclined.
Therefore, these results contradict the previously mentioned ones.
In a theoretical approach, Ref. \cite{Serero} constructs the hydrodynamic
equations from the Boltzmann equation, finding segregation induced by
inelasticity. The authors explain the phenomenon as a consequence of the
temperature
gradient in the system induced by inelastic collisions, and relate the
concentration gradient with the temperature gradient.
In the same spirit, Ref. \cite{Brey} studies the low-density hydrodynamics of a mixture in the so-called
tracer limit, i.e. where the concentration of one of the components tends to
zero. Among other results, they find that the temperature ratio of both species
must be a constant. This constant value was already measured by two
experimental groups \cite{Menon,Wildman} in 2D and 3D respectively and by
computer simulations \cite{Paolotti}. Generalization to high density has been done by Garz\'o \cite{Garzo}.
The goal of this paper is to confirm or refute the existence of segregation
induced by an inelasticity difference and to characterize this phenomenon.
The main tool will be Molecular Dynamics computer simulations of two dimensional
systems of a binary mixture kept fluidized by a vibrating base.
The structure of this paper is as follows. In Sect. II we describe the system under
consideration. Section III is devoted to the macroscopic study of the system, in particular
density and temperature profiles.
Section IV presents a microscopic study via the pair distribution functions. Section V proposes
a model that possibly explains the segregation. We conclude with Section VI, summarizing the results of the paper.
\section{Description of the system}
We study the effect of the difference on restitution coefficients in
the segregation phenomenon, by means of Molecular Dynamics simulations of a
bidimensional granular mixture of two types of particles, named
A and B. Grains are modeled as smooth Inelastic Hard Disks both having the same
diameter $\sigma$ and mass $m$, but differing in the normal restitution
coefficient that characterizes their inelastic collisions.
The restitution coefficient for A-A collisions is $\alpha_A$, for B-B
collisions is $\alpha_B$. For the interparticle collisions A-B we have taken
$\alpha_{AB}=(\alpha_A+\alpha_B)/2$. In what follows we will consider
that B are the most inelastic particles ($\alpha_B<\alpha_A$).
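The collision rule for smooth inelastic hard disks used here changes only the normal component of the relative velocity, scaling it by $-\alpha$. A minimal sketch in Python (our own illustration, not the authors' simulation code; equal masses are assumed, as in the text):

```python
import numpy as np

def collide(v1, v2, r1, r2, alpha):
    """Post-collision velocities of two equal-mass smooth inelastic
    hard disks with normal restitution coefficient alpha."""
    n = (r2 - r1) / np.linalg.norm(r2 - r1)  # unit vector joining the centers
    g_n = np.dot(v1 - v2, n)                 # normal relative velocity
    if g_n <= 0:
        return v1.copy(), v2.copy()          # disks are not approaching
    dv = 0.5 * (1 + alpha) * g_n * n         # impulse per unit mass
    return v1 - dv, v2 + dv                  # total momentum is conserved

# An A-B collision would use alpha_AB = (alpha_A + alpha_B) / 2, as in the text.
```

After the collision the normal relative velocity is $-\alpha$ times its pre-collision value, so $\alpha = 1$ recovers the elastic case and $\alpha < 1$ dissipates kinetic energy.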
We have taken a fixed total number of particles $N_T=N_A+N_B$, changing the
concentration of the B particles.
For the simulations reported in this paper, we have fixed $N_T=680$ disks
and varied $N_B$ from 10 (that can be considered as a tracer limit)
until 160. The disks are placed under the action of a
gravitational acceleration $g$ pointing downward in a rectangular
box of width $L_x=50\sigma$, infinite height, and with the bottom wall
oscillating periodically at high frequency $\omega$ and small amplitude
$A$, with a bi-parabolic waveform \cite{Soto}. Periodic boundary conditions are
used in the horizontal direction, to avoid the appearance of convective rolls
induced by the lateral walls. Under these conditions, the system reaches a stationary
state with gradients in the vertical direction \cite{Grossman}.
Units are chosen such that $\sigma=1$, $m=1$, and we fix the energy scale
by the wall oscillation, $m(A\omega)^2=1$. Simulations are
performed with $g=0.15$, and $A=0.01\sigma$.
\section{Macroscopic segregation} \label{sec.macroseg}
In order to illustrate the main observed features, we report results of a simulation
having a small fraction of inelastic particles $N_B=10$, and $\alpha_B=0.7$ and
the rest nearly elastic: $N_A=670$, $\alpha_A=0.98$.
The density profiles of the two species are shown in
Fig. \ref{fig.densprofiles}a, where we plot
the number density of particles of type A and B: $n_A(z)$ and $n_B(z)$. The normalization
of these quantities is such that $\int_0^\infty dz \int_0^{Lx}dx \, n_A(z) = N_A$ (resp. B).
For plotting purposes only, $n_B$ is rescaled by a factor $N_A/N_B$, so in the case of no segregation both profiles would be identical.
Both densities have the characteristic shape of
vibrofluidized systems subject to gravity: there is an initial density
increase due to the abrupt temperature drop caused by dissipation, and at
higher positions, density decreases again due to gravity
\cite{Grossman}. Density exhibits a maximum at $z\simeq 15\sigma$
where the density $n\simeq 0.5$, so the system cannot be considered as dilute.
The density profile of the more inelastic particles, B, is plotted as a dotted line in Fig.\ \ref{fig.densprofiles}a.
Its maximum is located at smaller $z$, indicating that they are closer to the
bottom of the container as compared to the more elastic ones. Therefore, particles segregate although the segregation is not complete.
The temperature profiles are also highly inhomogeneous, as shown in Fig.
\ref{fig.tempprofiles}a.
For both
species, temperature presents an initial abrupt drop, but later (after
$z\simeq 20\sigma$) both profiles present a linear increase with height.
This phenomenon was already observed in one-component systems, and it is
attributed to the energy transport term $-\mu\nabla n$, proportional to density
gradients, that appears in granular fluids \cite{RamirezSoto}. Let us note that the
maximum density does not coincide with the temperature minimum: there is a shift
between these two quantities which is qualitatively described by a hydrostatic
balance in the presence of gravity \cite{RamirezSoto,Serero}.
\begin{figure}[htb]
\includegraphics[angle=0,clip=true,width=0.95\columnwidth,
keepaspectratio]{Fig1.eps}
\caption{Density profiles for two species, density of A, $n_A$, (solid
line) and the rescaled density of B, $n_B N_A/N_B$, (dotted line). (a):
both species are inelastic with $N_A=670$, $N_B=10$, $\alpha_A=0.98$,
$\alpha_B=0.7$. (b): A is elastic while B is inelastic and $N_A=640$,
$N_B=40$, and $\alpha_B=0.5$}
\label{fig.densprofiles}
\end{figure}
\begin{figure}[htb]
\includegraphics[angle=0,clip=true,width=0.95\columnwidth,
keepaspectratio]{Fig2.eps}
\caption{Temperature profiles for two species, $T_A$ (solid
line) and $T_B$ (dotted line). (a) and (b) graphics have the same
parameters as in Fig. \ref{fig.densprofiles}}
\label{fig.tempprofiles}
\end{figure}
The described segregation of species A and B is produced by their
different restitution coefficients as all the other properties are the same.
To study in more detail the effect of the difference of inelasticities and
to understand the origin of this particular segregation, we proceed to
study the limiting case in which the A particles are elastic
($\alpha_A=1$) and only the B particles are
inelastic (and consequently, collisions A-B are also inelastic).
In this way we also limit the parameter space, allowing
a more detailed quantitative study.
Figures \ref{fig.densprofiles}b and
\ref{fig.tempprofiles}b show the density and temperature profiles for such a case,
where particles of type A are elastic ($\alpha_A=1$) and particles B are inelastic
($\alpha_B=0.5$), with $N_B=40$. It is observed
that the main properties of the profiles are preserved, even the positive
slope of $T_A$ even though the A-A collisions are elastic. Partial segregation is again observed,
where inelastic particles, B, sink to the bottom of the container while elastic ones, A,
are majority at upper layers of the fluid.
These results are not surprising in view of the predictions
of Ref.~\cite{Serero}, where it is argued that the segregation is
produced when the particles with different restitution coefficients are immersed
in a temperature gradient. The gradient is
induced by the inelastic collisions, so such a gradient can be created by vibrating a
mixture of elastic and inelastic particles. The latter ones dissipate some of the energy
injected by vibration creating a stationary state. The hydrodynamic description
of the mixture also contains the dissipative flux $ -\mu \nabla n $, and
therefore it is expected that
the hydrodynamic profiles of density and temperature will be equivalent to a
full inelastic system.
Figure \ref{fig.Tempratio} shows the temperature ratios $T_B(z)/T_A(z)$ for the simulations
described in Fig. \ref{fig.tempprofiles}. At low
densities it was found
experimentally \cite{Wildman,Menon} and by employing kinetic theory \cite{Brey}
that such a temperature ratio must be constant. However, we find a non-constant ratio
in the $z$ direction that only agrees with the result of \cite{Brey} at high
$z$.
Their prediction is valid at low densities in the tracer limit,
conditions that are only reached in our case for large values of $z$.
In the second case, where A particles are elastic, equivalent predictions were given in
\cite{Martin,Trizac}.
\begin{figure}[htb]
\includegraphics[clip=true,width=0.9\columnwidth,
keepaspectratio]{Fig3.eps}
\caption{Temperature ratios $T_B(z)/T_A(z)$ for the simulations
described in Fig. \ref{fig.tempprofiles}. Simulation results (solid circles) and
theoretical prediction in \cite{Brey} (solid line) for $N_A=670$, $N_B=10$,
$\alpha_A=0.98$, $\alpha_B=0.7$. Simulation results (open squares)
and
theoretical prediction in \cite{Brey} (dashed line) for A elastic,
$N_A=640$, $N_B=40$, and $\alpha_B=0.5$}
\label{fig.Tempratio}
\end{figure}
In order to quantify the segregation, a series of simulations is performed with
$N_B$ ranging from 10 to 160, and $\alpha_B$ between 0.2 and 0.9. Larger values of $N_B$ or
smaller restitution coefficients lead to clustering as described in \cite{Meerson}.
For each simulation we compute the segregation parameter, defined as:
\begin{equation}
\delta =1-
\int dz\, n_A(z)n_B(z)\bigg/ \sqrt{\textstyle\int dz\, n_A^2(z)\int dz\, n_B^2(z)}
\end{equation}
where the $n_A(z)$ and $n_B(z)$ are the local density, as plotted in Fig.
\ref{fig.densprofiles}.
The segregation parameter is bounded between 0 and 1. The value $\delta=1$
corresponds to complete segregation, as $\delta$ equals 1 only if $n_A(z)$ and $n_B(z)$
do not overlap. On the contrary, $\delta=0$ means complete mixing, as this value
can only be obtained if $n_B(z)$ is
proportional to $n_A(z)$.
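The parameter $\delta$ is one minus a cosine-type similarity between the two profiles, which is what bounds it between 0 and 1. A short Python sketch of its evaluation (our own, with toy Gaussian profiles standing in for the measured $n_A(z)$ and $n_B(z)$):

```python
import numpy as np

def segregation_parameter(n_a, n_b, dz=1.0):
    """delta = 1 - int(n_A n_B) dz / sqrt(int(n_A^2) dz * int(n_B^2) dz),
    evaluated on discretized density profiles."""
    overlap = np.sum(n_a * n_b) * dz
    norm = np.sqrt(np.sum(n_a**2) * dz * np.sum(n_b**2) * dz)
    return 1.0 - overlap / norm

z = np.linspace(0.0, 40.0, 400)
n_a = np.exp(-0.5 * ((z - 15.0) / 5.0) ** 2)  # elastic species peaks higher
n_b = np.exp(-0.5 * ((z - 10.0) / 5.0) ** 2)  # inelastic species sinks
print(segregation_parameter(n_a, n_b))         # partial segregation: 0 < delta < 1
```

Identical profiles give $\delta = 0$, non-overlapping ones give $\delta = 1$, and the shifted toy Gaussians above give an intermediate value, as in the partial segregation reported here.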
The results for $\delta$ are collected in Fig. \ref{fig.delta} where the quantity
$\delta$ is plotted versus the coefficient $\alpha_{B}$ for different values
of $N_B$.
The fact that $\delta$ is always nonvanishing
confirms that the segregation exists whenever the restitution coefficients are
different. Only in the case when $\alpha_B \to 1$ the quantity $\delta$
approaches 0, limit in which there is no segregation. Note that, for each
$\alpha_B$, $\delta$ increases with $N_B$. The results confirm that
segregation is not complete as $\delta$ never gets close to 1.
In addition, for each simulation, the center of mass of the A and B
species are computed, $Z_{A/B}$, finding that $Z_A>Z_B$ in all cases.
\begin{figure}[htb]
\includegraphics[angle=0,clip=true,width=0.9\columnwidth,
keepaspectratio]{Fig4.eps}
\caption{Segregation parameter $\delta$ as a function of
$\alpha_B$. Different curves correspond to various
concentrations of $B$ particles. In the elastic limit $\alpha\to 1$ all curves
coincide at $\delta=0$.}
\label{fig.delta}
\end{figure}
It can be asked whether the observed segregation could compensate the
buoyancy force experienced by lighter $B$ particles. To test this idea,
a series of simulations is performed keeping $\alpha_A=1$, $\alpha_B=0.5$,
$N_A=640$, $N_B=40$, and $m_A(A\omega)^2=1$ fixed, while varying $m_B/m_A$.
In each simulation, the positions of the centers of mass of the $A$ and $B$
species, $Z_A$ and $Z_B$, are computed.
The results are presented in Fig. \ref{fig.varyingmB}. The inelastic particles
have a lower center of mass if $m_B/m_A> 0.37$, and therefore sinking due to dissipation wins over the buoyancy force in this range.
On the contrary, the buoyancy force
dominates if $m_B/m_A < 0.37$, and inverse segregation is
obtained when $B$ particles are lighter than this threshold. The segregation parameter $\delta$ does not vanish for any value of $m_B/m_A$, indicating that there is no complete mixing even at $m_B/m_A = 0.37$, where both centers of mass coincide. The value of $\delta$ is, however, minimum at this precise mass ratio.
\begin{figure}[htb]
\includegraphics[angle=0,clip=true,width=0.9\columnwidth,
keepaspectratio]{Fig5.eps}
\caption{Position of the center of mass of $A$ (open squares) and $B$
particles (solid circles) as a function of the relative mass of $B$ particles
$m_B/m_A$.
Simulation parameters are fixed to $\alpha_A=1$, $\alpha_B=0.5$, $N_A=640$,
$N_B=40$, and $m_A(A\omega)^2=1$.}
\label{fig.varyingmB}
\end{figure}
\section{Microstructure}
A snapshot of the previously studied case, $\alpha_A=1$, $\alpha_B=0.5$, and $N_B=40$,
is shown in Fig. \ref{fig.snapshot}.
The particles of type A are plotted as open circles,
and B particles as black symbols.
This snapshot suggests that, besides the macroscopic partial segregation
characterized by $\delta$, there is also a
micro-segregation, where B particles tend to be close to other B
particles, differently from what would occur if the particles were labeled
A or B at random. To decide whether this observation is indeed true, we
analyze the system quantitatively by computing the pair correlation functions.
\begin{figure}[htb]
\includegraphics[angle=0,clip=true,width=0.9\columnwidth,
keepaspectratio]{Fig6.eps}
\caption{Snapshot of the simulated system for $N_B=40$, $\alpha_B=0.5$.
A particles are white and B particles black. The borders of some particles
exceed the lateral system size due to the periodic boundary conditions in this
direction.}
\label{fig.snapshot}
\end{figure}
As the system is inhomogeneous in the $z$ direction, the pair correlation
functions become $z$-dependent. We chose to divide the
system into horizontal slabs of height $10\sigma$. For every pair of
particles, the center of mass is obtained and, according to its position,
the relative distance of the pair is added to the histogram of distances
associated with that slab. Furthermore, the pairs are classified according
to the type of particles involved. Finally, the histograms are normalized as
usual with the local density of each species in the slab, such that
at long distances the obtained pair correlation function approaches unity.
This results in functions $g_{\mu\nu}^s(r)$, for the species $\mu,\nu
=\{A,B\}$, and the slab $s$.
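The slab construction just described reduces, for each slab and species pair, to a normalized distance histogram. The following single-pair-type sketch is our own code, with the slab and species bookkeeping omitted for brevity and edge effects in $z$ ignored:

```python
import numpy as np

def pair_correlation_2d(pos, Lx, area, r_max, nbins):
    """Radial distribution function g(r) for one species pair in 2D (sketch).

    pos  : (N, 2) positions (x, z); x is periodic with period Lx, as in the
           simulated system, while edge effects in z are ignored here.
    area : area occupied by the particles, used for the mean density.
    Each distance-histogram bin is divided by the pair count an ideal gas of
    the same density would put in that annulus, so g(r) -> 1 at large r.
    """
    N = len(pos)
    dx = pos[:, None, 0] - pos[None, :, 0]
    dx -= Lx * np.round(dx / Lx)                  # minimum image along x
    dz = pos[:, None, 1] - pos[None, :, 1]
    r = np.sqrt(dx**2 + dz**2)[np.triu_indices(N, k=1)]
    hist, edges = np.histogram(r, bins=nbins, range=(0.0, r_max))
    shell = np.pi * (edges[1:]**2 - edges[:-1]**2)
    ideal = 0.5 * N * (N / area) * shell          # ideal-gas pair count per bin
    return edges[:-1] + 0.5 * np.diff(edges), hist / ideal

# sanity check: uniform random points give g(r) close to 1
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 60.0, size=(1500, 2))
rc, g = pair_correlation_2d(pts, Lx=60.0, area=3600.0, r_max=4.0, nbins=20)
```

For the binary system, one such histogram is accumulated per slab and per species pair $(\mu,\nu)$, with the ideal-gas count built from the local slab densities.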
In most of the studied cases, the region of density closest to homogeneous
corresponds to the second slab ($10\sigma\leq z<20\sigma$). In what
follows we will present results only for this slab and will accordingly
suppress the superscript in the pair correlation functions. The qualitative
properties of the other slabs are similar, although the
effects are reduced because at higher slabs the densities are smaller.
Figure \ref{fig.gr} shows the density correlation functions for the pairs AA and
BB for $N_A=580$, $N_B=100$, $\alpha_B=0.8$, conditions in which the system
develops an average density of $n\simeq 0.56$. Correlations of distinct particles,
AB, have properties intermediate between those of AA and BB. The main noticeable
feature of these correlation functions is that the first and second peaks of the
BB function are much larger than those of the AA one. In other
words, the large value of $g_{BB}$ at contact means that there are more B-B pairs
in the system than in a configuration where A and B particles are labeled at random.
This excess of B-B pairs can be guessed from Fig. \ref{fig.snapshot}, where one can
easily locate 4 B-B pairs.
The structure of the correlation functions in our inelastic system
could resemble the radial distribution of an elastic fluid at a {\em selected}
higher density. We tried to exploit this idea by comparing the
granular distribution function with an elastic one at an appropriately chosen density.
The density is selected by adjusting the value of the pair distribution
function {\em at contact}. We take this criterion inspired by the kinetic theory of dense
fluids, where the pair distribution function at contact is used to improve the Boltzmann
equation by including certain correlations. With this procedure we find different fitting
densities for the AA and BB pairs, namely $n_{AA}=0.636$ and $n_{BB}=0.823$.
Surprisingly, neither of them (not even the elastic one) agrees with the average density in the
system, $n\simeq 0.56$. This means that the dissipative particles, type B, are
able to modify the structural properties of the A particles, which we have chosen
to be elastic. The fitted elastic pair correlation functions are plotted as dashed lines in
Fig. \ref{fig.gr}. Despite the fitting procedure, large discrepancies are observed between our correlation
functions and the elastic ones. Other fitting procedures could have been selected, but none of
them produces agreement of the correlation functions over their whole range.
The inelastic
correlation function shows a strong enhancement of the first peak, followed by a second,
and at most a third peak, reflecting a short-range structure of a few particle diameters.
However, the location of the peaks differs from that of the elastic
correlation function, indicating changes in the microstructure.
The fast decay of the correlation function in inelastic systems has been observed in
homogeneously heated inelastic gases as well \cite{Ignacio}. This analysis
confirms the existence of a microsegregation of particles of type B.
\begin{figure}[htb]
\includegraphics[clip=true,width=\columnwidth,keepaspectratio]{Fig7.eps}
\caption{Solid lines: radial distribution functions $g_{AA}(r)$, and
$g_{BB}(r)$. $N_B=100$, $\alpha_B=0.8$. Dashed lines: equilibrium radial
distribution function given in Ref.~\cite{Kolafa}, with the density fitted to
give the same value of $g_{AA}$ or $g_{BB}$ at contact (left $n_{AA}=0.636$,
right $n_{BB}=0.823$) }
\label{fig.gr}
\end{figure}
In addition, the pair correlation function at contact, $\chi=g(\sigma^+)$, is computed
for the pairs AA, AB, and BB.
The values of $\chi$ grow when increasing $N_B$ and decreasing
$\alpha_B$, obeying $\chi_{BB}>\chi_{AB}> \chi_{AA}$.
Finally, let us remark that $\chi_{BB}$ can reach very high values. For instance, at $N_B=160$ and $\alpha_B=0.5$,
$\chi_{BB}\simeq 60$, even though the average total density is only $n\simeq 0.5$.
It has been proposed \cite{Lutsko}, based on an
Enskog-like kinetic model, that, due to the smaller outgoing velocity
after a collision, particles stay longer in close vicinity when
the restitution coefficient is less than one. These arguments lead to an enhancement
of the pair correlation function in terms of the restitution coefficient $\alpha$, as
\begin{equation}
\chi(\alpha) = \frac{1+\alpha}{2\alpha}\chi_0. \label{chi}
\end{equation}
Here $\chi_0$ is the pair correlation function of elastic particles at
the same density, given by the Verlet-Levesque \cite{LV} and Carnahan-Starling
\cite{CS} factors in 2 and 3 dimensions, respectively.
To test whether this prediction is valid, we have plotted in Fig. \ref{fig.cuocientechi} the ratios
$\chi_{BB}/\chi_{AA}$ against $\alpha_B$ and $\chi_{AB}/\chi_{AA}$ against
$\alpha_{AB}$, and compared them with the factors $(1+\alpha_B)/(2\alpha_B)$
and $(1+\alpha_{AB})/(2\alpha_{AB})$, respectively. Dividing by $\chi_{AA}$ we get rid of
the $\chi_0$ factor in (\ref{chi}).
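The bare Enskog factor entering these ratios is elementary to tabulate. A short sketch follows; the 3D Carnahan-Starling contact value is included only for reference, and both function names are ours:

```python
def enskog_ratio(alpha):
    """Enskog enhancement chi(alpha)/chi_0 = (1 + alpha)/(2 alpha) from the
    equation above; alpha is the restitution coefficient."""
    return (1.0 + alpha) / (2.0 * alpha)

def chi0_carnahan_starling(eta):
    """Elastic contact value in 3D (Carnahan-Starling), for reference:
    g(sigma) = (1 - eta/2) / (1 - eta)^3, with eta the packing fraction."""
    return (1.0 - 0.5 * eta) / (1.0 - eta) ** 3

# dividing chi_BB by chi_AA (elastic A particles) leaves the bare Enskog factor:
print(enskog_ratio(1.0))  # 1.0: elastic limit, no enhancement
print(enskog_ratio(0.5))  # 1.5
```

The monotonic growth of the ratio as $\alpha$ decreases is what the solid and dotted lines of Fig.~\ref{fig.cuocientechi} encode.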
The figure
indicates that the Enskog model is not enough to describe the high values of
the pair correlation when one inelastic particle is present, as was
already noted by its author \cite{Lutsko}. A possible
origin of this effect is recollisions, which are not taken into account in
Enskog's theory, a phenomenon that is pronounced in granular systems.
Inelastic particles, after a collision, separate at a slower speed,
increasing the possibility of a collision with a third particle that pushes
the original pair back together. In some way, inelasticity
increases the so-called {\em cage effect} of liquids.
\begin{figure}[htb]
\includegraphics[angle=0,clip=true,width=0.9\columnwidth,
keepaspectratio]{Fig8.eps}
\caption{Contact pair correlation function quotients as a
function of the restitution coefficient. $\chi_{BB}/\chi_{AA}$ results
from simulations (filled circles) and the Enskog theoretical value
(solid line); and $\chi_{AB}/\chi_{AA}$ results
from simulations (open triangles) and the Enskog theoretical value
(dotted line). Here $N_B=120$.}
\label{fig.cuocientechi}
\end{figure}
Besides inducing density correlations, B particles also create a local decrease
of temperature because their collisions are inelastic. We have computed the radial temperature
of particles of type $\mu$ located at a distance $r$ from a $\nu$ particle:
$T_{\mu\nu}(r)$. Figure \ref{fig.gTr} shows these radial temperature functions.
Their asymptotic values are the average temperatures of A and B particles in the slab $10\sigma < z < 20\sigma$.
Besides being colder than A particles, B particles produce a local decrease
of temperature for both A and B particles. This effect is more pronounced when
increasing the inelasticity but is almost independent of the concentration of B particles,
whose main effect is to reduce the temperature globally. As before, this is a local effect
that extends for about $2\sigma$.
\begin{figure}[htb]
\includegraphics[angle=0,clip=true,width=0.9\columnwidth,
keepaspectratio]{Fig9.eps}
\caption{Radial temperatures $T_{\mu\nu}$ of a particle of type
$\mu$ around a particle of type $\nu$. $N_B=40$, $\alpha_B=0.5$.}
\label{fig.gTr}
\end{figure}
\section{Origin of the global segregation}
So far we have argued that B particles locally modify the structure
of other B particles (inelastic) and also of A particles (elastic).
For instance, Fig. \ref{fig.gr} shows that
the density around B particles is higher than around A
particles. The excess mass around B particles with respect to A particles is defined as
\begin{equation}\label{excessmass}
\delta m = \int_\sigma^{\infty} dr\, 2\pi r(\rho_B(r)-\rho_A(r)),
\end{equation}
where $\rho_\mu(r)$ is the total density around a particle of type $\mu$. The excess mass is positive
for all values of $N_A$, $N_B$, and restitution coefficient $\alpha_B$.
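Given tabulated radial profiles, Eq.~(\ref{excessmass}) is a one-line quadrature. A sketch with a manual trapezoidal rule follows; the helper is ours and is tested on an artificial profile:

```python
import numpy as np

def excess_mass(r, rho_B, rho_A, sigma):
    """Excess mass around B particles, Eq. above:
    delta_m = int_sigma^infty dr 2 pi r (rho_B(r) - rho_A(r)),
    evaluated by the trapezoidal rule on tabulated profiles and truncated
    at the largest tabulated r."""
    mask = r >= sigma
    rr = r[mask]
    f = 2.0 * np.pi * rr * (rho_B[mask] - rho_A[mask])
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(rr)))

# artificial profile rho_B - rho_A = exp(-r), sigma = 1:
# the exact result is 2*pi * int_1^inf r exp(-r) dr = 4*pi/e
r = np.linspace(1.0, 30.0, 20001)
dm = excess_mass(r, np.exp(-r), np.zeros_like(r), sigma=1.0)
```

The sign of the returned value directly encodes whether the environment of a B particle is denser than that of an A particle.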
Similarly, the local temperature around a B particle is lower than that
around an A particle (see Fig. \ref{fig.gTr}).
These two results imply that around each B particle a dense and cold region
develops, which we call a {\em cold droplet}. The characteristic size of the droplet
is microscopic; in agreement with the plots of Sect. IV, it must be about 2-3 $\sigma$.
The development of a cold
droplet around each B particle increases its effective mass and also, due to the lower
temperature, decreases the pressure. This mechanism could be similar
(at the local level) to the clustering instability described in
\cite{Goldhirsch}. In our case the unstable process does not continue, as the
energy injection due to the vibration breaks the droplet, which must be seen as a
dynamical object that forms and evanesces continuously.
Once the cold droplet is formed around a B particle, the buoyancy force is weaker
than its effective weight and therefore there is a net force that
tends to sink the droplet and, consequently, the B particle.
This continuous sinking of the B particles is responsible for the macroscopic
segregation described in Sect. III.
\section{Summary and Conclusions}
The main conclusion of the present paper is that different restitution coefficients alone
create segregation in a vertically vibrated binary mixture. The restitution coefficients
must therefore be considered, in addition to the usual material properties (mass ratio and
diameter ratio), in order to describe segregation accurately.
The effect of the inelasticity is such that the most inelastic particles sink to
the bottom of the
container while the less inelastic ones rise to the top. Segregation is not
complete, however, but
only partial. The density profile of each species shows a maximum located at a
different position depending on the inelasticities. Concerning the temperatures, the most
dissipative particles have
a lower temperature than the most elastic ones, both having the characteristic
shape of a vibrofluidized
system (fast cooling away from the moving boundary followed by a heating that
grows linearly with the distance).
The segregation effects presented here also appear when vibrating a
mixture of elastic and inelastic particles. Again, inelastic particles migrate to
the bottom of the container and elastic ones prefer the upper part. The
temperature distribution also
looks similar to that of the fully inelastic mixture.
Besides the macroscopic segregation there is also segregation at the microscopic scale.
By microscopic we refer to properties at distances of a few particle diameters.
In our case we find a
notable increase of the probability of finding two inelastic particles together, as compared
with the less inelastic or elastic ones. This enhancement cannot be described by considering
only kinematic properties.
Our guess is that dynamic correlations are required to properly describe these correlations.
Finally, we propose a mesoscopic explanation to the segregation: the most inelastic particles induce
locally a region of high density and low temperature, resembling a cold droplet that falls in a
gravitational field. The droplet is created by the dissipation in a way that resembles the clustering
instability of the granular gases.
The effect of the different restitution coefficients may act in the opposite direction to the usual Brazil
nut effect. For instance, consider a vibrated granular fluid and insert a large (or light) intruder that tends to
move upward. If the intruder is very dissipative, it will move downward, as we have described in this paper.
What is the final position? Which force finally wins? A partial answer is presented in Fig.~\ref{fig.varyingmB}, where we show the competition between the buoyancy force and the sinking effect due to dissipation.
Moreover, could two large and inelastic intruders come
together by the effect of inelasticity alone, as shown in Sect. IV of the present paper? Further research is needed in order to answer these questions.
\begin{acknowledgments}
We want to thank J.M.R. Parrondo for very useful comments.
R.B. is supported by the Spanish Projects MOSAICO,
FIS2004-271 and UCM/PR34/07-15859. The research is supported by {\em Fondecyt} grants
1061112, 1070958, and 7070301 and {\em Fondap} grant 11980002.
\end{acknowledgments}
\section{Introduction}
Two-dimensional materials based on honeycomb lattices have become a subject of intense
investigation in the past few years, due to their interesting band structure and associated
topological properties. The low-energy dynamics of such systems are typically dominated
by states near the $K$ and $K'$ points in the Brillouin zone. The paradigm of this
is realized in graphene, a pure carbon honeycomb lattice, which hosts a gapless spectrum with
Dirac points at these locations \cite{Castro_Neto_RMP} due to a combination
of inversion and time-reversal symmetry, as well as the very weak spin-orbit coupling (SOC)
typical of light elements. More recently, transition metal dichalcogenide (TMD) monolayers,
where a transition metal $M$ (e.g., Mo or W) resides on one sublattice and a dimer of
chalcogen $X$ atoms (e.g., S, Se) on the other,
have emerged as important materials in this class \cite{Mak_2010,Splendiana_2010}.
These systems are gapped at the $K$ and $K'=-K$ points,
and the strong SOC associated with $M$ atoms leads to very interesting
spin-valley coupling near these points \cite{Xiao_2012,Liu_2013}.
In particular, one finds spin up and down
components of the valence band well-separated in energy, with their ordering interchanged
for the two valleys. This allows for an effective valley polarization to be induced when the
system spin polarizes via pumping with circularly polarized light \cite{Cao_2012,Zeng_2012,Mak_2012}.
The coupling of spin and valley in this way has been dramatically demonstrated via the
observation of a valley Hall effect in this circumstance \cite{Mak_2014}.
The locking of spin and valley degrees of freedom in TMD monolayers is a unique feature of these materials.
When hole-doped, it leads to a non-zero expectation value of $\sigma_z\tau_z$,
where $\sigma_z$ is a Pauli matrix for spin and $\tau_z$ is the analogous operator for the valley index.
This occurs without any interaction present in the Hamiltonian, yet is reminiscent of ferromagnetic
ordering, albeit without time-reversal symmetry-breaking since this reverses both spin
and valley. Recently, it has been argued that for strong enough interactions, TMD systems
develop a spontaneous imbalance of spin/valley populations \cite{Scrace_2015,Braz_2017},
which leads to actual ferromagnetic spin order in the groundstate. It thus becomes
interesting to consider how one might probe and distinguish these orderings. One possible
strategy is to investigate the spin response of the system, both to search for sharp
collective modes that are a hallmark of ferromagnets, and to understand broader features of
the response that demonstrate the ordering present in these materials. This is
the subject of our study.
\begin{center}
\begin{figure}[t]
\includegraphics[width=0.44\textwidth]{full_chi_log}
\caption{Absorptive part of spin response function
Im $\chi_\tau({\bf q},\omega)$
for $\textbf{q}=0$, chemical potential $\mu_0=-0.49\Delta$ and $U_0=0.2$eV with $\tau=+1$. Model parameters for band structure in Table I.
A sharp collective mode near $\omega \approx -0.0845\Delta$ is prominent above a particle-hole continuum
in the interval $-0.092 \lesssim \omega/\Delta \lesssim -0.087$, where $\Delta$ = 1.66 eV.
}
\label{fig:phys_response}
\end{figure}
\end{center}
We focus on the basic qualitative physics of this system by employing a simple two-band model
for $MX_2$ compounds \cite{Xiao_2012} with a short-range repulsive interaction, and compute the spin response
using the time-dependent Hartree-Fock approximation (TDHFA) \cite{Giuliani_book}. For concreteness
quantitative results are computed using parameters appropriate for MoS$_2$,
and we examine results for several representative hole-dopings
and interaction strengths. A typical result is illustrated in Fig. \ref{fig:phys_response} for a
system with low hole doping, such that only a single spin species of the valence band is partially
unoccupied in each of the valleys.
For small wavevectors
$q$, a sharp collective mode is visible below a continuum of particle-hole spin-flip excitations
which are present even in the absence of interactions (although the frequency interval where
they reside is renormalized by them). An interesting feature of the
collective mode is that, for low hole doping, it is present for arbitrarily weak interaction strength,
even if
the system is not spin-ferromagnetic. Its presence may be understood as arising from the
effective $\sigma_z\tau_z$ polarization that is induced when the system is hole-doped.
Interestingly, this is a direct analog of ``Silin-Leggett'' modes \cite{silin_1958,leggett_1970}
that appear when fermions become spin-polarized by an external magnetic field. In that system, the non-interacting Hamiltonian induces a spin polarization in the groundstate which is not present spontaneously. Nevertheless, the combination of different Fermi surfaces for different spins, together with exchange interactions which energetically favor ferromagnetism locally, leads to sharp, collective excited states of low energy. These modes have been detected in spin-polarized $^3$He \cite{Tastevin_1985}.
In the TMD system,
an analogous sharp response appears when the system absorbs angular momentum, typically
from a photon, and is dominated by excitations around one of the two valleys. The spin response
from the other valley is negligible around these frequencies, but can be seen at negative
frequencies, which is equivalent to absorption of photons with the opposite helicity.
This effect is well-known in the context of undoped TMD systems \cite{Zeng_2012, Mak_2012,Cao_2012}
where the particle-hole excitations involve electrons excited from the valence to the conduction
band. In the present situation one finds this behavior from excitations within the valence band,
from occupied spin states to unoccupied ones available due to the doping, of opposing spin.
The resulting sharp modes are much lower in energy than comparable exciton modes of an
undoped system \cite{Ross_2013,Ugeda_2014,Wu_2015,Trushin_2016}.
True ferromagnetism in this system has been argued to arise when interactions are sufficiently strong that unequal populations of the two valleys become energetically
favorable \cite{Scrace_2015,Braz_2017}; for a hole-doped, short-range interaction model, it occurs as a
first-order transition at a critical interaction strength $U_c$ \cite{Braz_2017}. Within our model this results in an effective shift of the bands relative to one another, so that a system sufficiently clean and cold to allow observation of resonances associated with collective spin modes
would present them at different frequencies for different helicities.
\begin{center}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{tau_p_full.pdf}
\caption{The top panel, for $\mu_0=-0.49\Delta$, in which there is only a single
Fermi surface in the valley (demonstrated in Fig.~\ref{fig:band}), has a continuum of particle-hole
excitations (shown in green) below some minimum frequency.
The lower panel has $\mu_0=-0.57\Delta$ for which there are two Fermi surfaces in the valley, giving rise to the continuum modes with vanishingly small energies for $q_x>0.4k_0$
with $k_0=\Delta/2ta$. For both panels, $U_0=0.2$eV and $\tau=+1$. Other parameters are listed in Table I.
Blue lines illustrate the collective spin wave mode dispersion.}
\label{fig:loww}
\end{figure}
\end{center}
At higher dopings the valence bands will support two Fermi
surfaces in each valley, indicating that they contain holes of both spins.
Because of the opening of the second Fermi surface the system now supports
gapless spin-flip excitations, albeit at finite wavevector.
Regions in
frequency and wavevector where these exist are illustrated in Fig.
\ref{fig:loww}, along with the spin wave dispersion for these parameters.
Observation of such a continuum of gapless modes would allow a direct
demonstration of the spin-split Fermi surfaces in this system. In practice,
because these modes appear only above wavevectors of order
$q \sim 1/a$, with $a$ the lattice constant, their
presence may be difficult to observe by direct electromagnetic absorption
because of momentum conservation. In real systems,
disorder relaxes this constraint and may make their detection feasible \cite{Pinczuk_1997}.
Our analysis also shows that the system in principle supports a {\it second}
collective spin wave mode, one associated
with inter-orbital spin flips. This mode exists extremely close to the edge of the
continuum of particle-hole spin excitations and in practice might be difficult to discern in the spin-response
function. Its presence would presumably be more easily detected in response functions
that combine inter-orbital excitations with spin flips.
This article is organized as follows. In Section II we describe both the single particle
Hamiltonian and the interaction model we adopt for this system. Section III describes
a static Hartree-Fock analysis of the system, demonstrating that the effective single-particle
Hamiltonian is rather similar to the non-interacting one, with renormalized parameters.
In Section IV we carry out a time-dependent Hartree-Fock analysis of the spin response
function, and show how one can identify poles that signal allowed spin-flip excitations of
the system. In Section V we carry out an analytic analysis of the equations generated in
the previous section, appropriate for low hole doping. Section VI provides results one
finds from numerical solutions for the spin response functions. We conclude with a summary
in Section VII.
\section{Model of the system}
Our starting point is a simple two-band Hamiltonian for the monolayer MX$_2$, such as MoS$_2$, developed through several numerical, symmetry-based analyses~\cite{Xiao_2012} which capture the electronic properties near the $K,-K$ valleys. In the absence of interactions this has the form
\begin{align}
H_0^\tau(\textbf{k}) =\left[
\begin{array}{cc}
\Delta/2 & at(\tau k_x-ik_y) \\
at(\tau k_x+ik_y) & -\Delta/2 +s\tau\lambda\\
\end{array}\right],\label{eq:ham}
\end{align}
which is written in the basis $|\psi^{\tau}_c\rangle = |d_{z^2}\rangle$ and $|\psi^{\tau}_v\rangle = \frac{1}{\sqrt{2}}(|d_{x^2-y^2}\rangle +i\tau|d_{xy}\rangle)$, where $\tau=\pm$ is the valley index, $t$ is the hopping matrix element and $d_{z^2}$, $d_{x^2-y^2}$, $d_{xy}$ are orbitals of the $M$ atoms.
(Here and throughout this paper we take $\hbar = 1$.)
Spin is a good quantum number, denoted by $s=1$ for $\uparrow$ and $s=-1$ for $\downarrow$. The strength of spin-orbit coupling is encoded
in the parameter
$\lambda$. In the ground state of this Hamiltonian, states up to the chemical potential $\mu_0$,
which is tunable in principle via gating,
are filled. Estimates
\cite{Xiao_2012} for the parameters relevant to MoS$_2$ are listed in Table I.
\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{spectrum_tau_p}
\caption{The band dispersion of Hamiltonian (\ref{eq:ham}), showing a direct band gap $E_g$ between the valence and the conduction band and the spin splitting of the valence bands. Positions of two values of $\mu_0$ are marked on the right margin. $k_0=\Delta/2ta$ sets the momentum scale. The parameters used are listed in Table I, and $\tau=+1$.}\label{fig:band}
\end{figure}
\begin{center}
\begin{table}[h]
\begin{tabular}{ | c | c | c | c |}
\hline
$a$ & $t$ & $\Delta$ & $\lambda$ \\ \hline
3.190 \r{A} & 1.059 eV & 1.66 eV & 0.075 eV \\
\hline
\end{tabular}
\caption{Values of various parameters for MoS$_2$ from Ref. \onlinecite{Xiao_2012}.}
\end{table}
\end{center}
The energy eigenstates of the full Hamiltonian with momentum $\textbf{k}$ and spin $s$ will be denoted by $\phi_{l,s}(\textbf{k})$, with $l=\{\tau,\alpha\}$ ($\alpha = \pm$ for conduction/valence bands),
and have the form
\begin{align}\label{freephi}
\phi_{l,s}(\textbf{k}) = \frac{1}{\sqrt{2}}\left(\begin{array}{c} \tau e^{-i\tau\phi}\sqrt{1+\frac{\alpha m_{s\tau}}{\sqrt{m_{s\tau}^2+a^2t^2k^2}}} \\ \alpha\sqrt{1-\frac{\alpha m_{s\tau}}{\sqrt{m_{s\tau}^2+a^2t^2k^2}}} \end{array}\right),
\end{align}
with corresponding eigenvalues
\begin{align}\label{eq:freeen}
&\epsilon_{l,s}^{\alpha}(\textbf{k}) =\frac{\tau s\lambda}{2} + \alpha \sqrt{m_{s\tau}^2+(atk)^2},
\end{align}
where $m_{s\tau}=\frac{\Delta-\tau s\lambda}{2}$ and $k=\sqrt{k_x^2+k_y^2}$. The bands near the $K$
($\tau=1$) valley, shown in Fig.~\ref{fig:band}, illustrate the distinct spin structure of the system.
The valence and conduction band are separated by a relatively large
gap $E_g=(\Delta -\lambda)$ at $k=0$, whereas the two spin valence bands are further separated by a
smaller gap of magnitude $E_{\lambda}=2\lambda$. This gap between the spin-split valence bands remains almost constant for a range of $k$ until $atk\gg \Delta$. Note that the two conduction bands
of the model are nearly degenerate. The $K$ and $-K$ valleys of the system are related by time-reversal,
so that the spins of the two bands are reversed in going from one to the other.
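The closed-form eigenvalues, and the gaps $E_g=\Delta-\lambda$ and $E_\lambda=2\lambda$ quoted above, can be checked numerically with the Table I parameters. The short verification script below is our own illustration, not part of the original analysis:

```python
import numpy as np

# MoS2 parameters from Table I (Ref. [Xiao_2012]); energies in eV, alat in Angstrom
alat, t, Delta, lam = 3.190, 1.059, 1.66, 0.075

def H0(kx, ky, s, tau):
    """Two-band Hamiltonian of Eq. (1), spin s = +/-1, valley tau = +/-1."""
    return np.array([[0.5 * Delta, alat * t * (tau * kx - 1j * ky)],
                     [alat * t * (tau * kx + 1j * ky), -0.5 * Delta + s * tau * lam]])

def bands(k, s, tau):
    """Closed-form eigenvalues of Eq. (3); k = |k| in 1/Angstrom.
    Returns (valence, conduction)."""
    m = 0.5 * (Delta - tau * s * lam)
    e = np.sqrt(m * m + (alat * t * k) ** 2)
    return 0.5 * tau * s * lam - e, 0.5 * tau * s * lam + e

# gaps at k = 0, as quoted in the text:
Eg = bands(0.0, +1, +1)[1] - bands(0.0, +1, +1)[0]    # E_g = Delta - lam
Elam = bands(0.0, +1, +1)[0] - bands(0.0, -1, +1)[0]  # E_lambda = 2*lam

# the closed form agrees with direct diagonalization at a generic k point:
kx, ky = 0.11, -0.07
assert np.allclose(np.linalg.eigvalsh(H0(kx, ky, -1, +1)),
                   bands(np.hypot(kx, ky), -1, +1))
```

The final assertion confirms, term by term, that Eq.~(\ref{eq:freeen}) diagonalizes Eq.~(\ref{eq:ham}).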
To write down an effective interaction, it is convenient to define field operators of
spin $s$ projected into the set of states defined in our model,
\begin{align}
\Psi_s(\textbf{r}) = \frac{1}{\sqrt{L_xL_y}}\sum_{\textbf{k},l}
e^{i(\textbf{k}+\textbf{K}_{\tau_l})\cdot \textbf{r}}\phi_{l,s}(\textbf{k})c_{l,s}(\textbf{k}),
\end{align}
where $c_{l,s}(\textbf{k})$ is the annihilation operator for the $l,s$ state at momentum $\mathbf{k}$
relative to the valley minima/maxima at $\textbf{K}_{\tau_l} = \tau_l \textbf{K}$, with the sign determined
by the $\tau$ index implicit in $l$, and $L_xL_y$ is the area of the system.
A repulsive interaction among the band-electrons can then be represented in the form
\begin{widetext}
\begin{align}
H_{\text{int}} = {1 \over 2} \sum_{s,s'} \int d^2\textbf{r} d^2\textbf{r}' V(\textbf{r}-\textbf{r}'):\Psi^\dagger_{s}(\textbf{r})\Psi_{s}(\textbf{r})
\Psi^\dagger_{s'}(\textbf{r}')\Psi_{s'}(\textbf{r}'):,
\end{align}
where $V$ represents a finite-range repulsive interaction.
Physically this arises from Coulomb interactions among the band electrons;
the finite range can be provided by a screening gate or by carriers in the
layer itself (although we will not treat the screening dynamically in what follows).
We assume the screening length is large on the scale of the lattice constant
so that inter-valley contributions to the density $\Psi^\dagger_{s}(\textbf{r})\Psi_{s}(\textbf{r})$
oscillate rapidly, and can be ignored when integrated over ${\bf r}$. This leads to the
replacement
\begin{align}
H_{\text{int}} \rightarrow {1 \over 2} \sum_{s,s'} \sum_{\tau,\tau'}
\int d^2\textbf{r} d^2\textbf{r}' V(\textbf{r}-\textbf{r}')
:\Psi^\dagger_{s\tau}(\textbf{r})\Psi_{s\tau}(\textbf{r})
\Psi^\dagger_{s'\tau'}(\textbf{r}')\Psi_{s'\tau'}(\textbf{r}'):,
\end{align}
with
\begin{align}
\Psi_{s\tau}(\textbf{r}) = \frac{1}{\sqrt{L_xL_y}}\sum_{\textbf{k},l}
e^{i\textbf{k}\cdot \textbf{r}}\phi_{l,s}(\textbf{k})c_{l,s}(\textbf{k})\delta_{\tau,\tau_l},
\end{align}
where $\tau_l$ is the valley content of the composite $l$ index. At this point we can
make the approximation
$V(\mathbf{r}-\mathbf{r}') = 2U_0 \delta^2(\mathbf{r}-\mathbf{r}')$, and
arrive at an interaction form
\begin{align}
H_{\text{int}} = &U \sum_{\{l_i \textbf{k}_i \textbf{q}'\}}\sum_{s,s'}
\phi^\dagger_{l_1 s}(\textbf{k}_1)\phi^\dagger_{l_2 s'}(\textbf{k}_2)
\phi_{l_3 s'}(\textbf{k}_2+\textbf{q}')\phi_{l_4 s}(\textbf{k}_1-\textbf{q}')
\delta_{\tau_{l_1},\tau_{l_4}} \delta_{\tau_{l_2},\tau_{l_3}}
c^\dagger_{l_1 s}(\textbf{k}_1) c^\dagger_{l_2 s'}(\textbf{k}_2)
c_{l_3 s'}(\textbf{k}_2+\textbf{q}')c_{l_4 s}(\textbf{k}_1-\textbf{q}'),
\label{Hint}
\end{align}
where $U=\frac{U_0}{L_xL_y}$. This is the interaction Hamiltonian that we use
in the Hartree-Fock analyses that follow.
\vskip 4mm
\end{widetext}
\begin{center}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{chi_1.pdf}
\caption{Plot of a typical $\chi(\mathbf{q},\omega)$, Eq.~(\ref{eq:finalchi}), showing the particle-hole excitations of the spin-split valence bands below an energy $\omega_c$. At $\omega_1$, there is a single collective mode visible for which the real part of the denominator of Eq.~(\ref{eq:finalchi}) is zero. Here we have used $\mathbf{q} = \mathbf{0}$, $\mu_0=-0.49\Delta$, $\tau=+1$ and $U_0=0.2$eV.}\label{fig:chi}
\end{figure}
\end{center}
\section{Hartree-Fock Approximation}
In order to carry out an analysis of the spin response in this system within
the time-dependent Hartree-Fock approximation, it is first necessary to find
the density matrix of the system within the static Hartree-Fock (HF) approximation.
This has the form
\begin{equation}
\langle c^\dagger_{ls}(\textbf{k})c_{l's'}(\textbf{k}') \rangle = n_{ls}(\textbf{k})\delta_{ll'}\delta_{ss'}\delta_{\textbf{k},\textbf{k}'}.
\end{equation}
Note that in writing this, we have assumed that neither interband nor intervalley coherence
has formed spontaneously in the system. Performing a HF decomposition on Eq.~(\ref{Hint}) gives a potential for an effective single-body Hamiltonian,
\begin{align}
H^{\text{HF}}_{\text{int}} =& -2U\sum_{ll',ss',\textbf{k}}\delta_{ss'}\sum_{a,b=A/B}c^{\dagger}_{ls}\phi^{a*}_{ls}({\textbf{k}})\times\nonumber\\
&\times\left(\sum_{l''}\phi_{l''s}^a(\textbf{k})n_{l''s}(\textbf{k})\phi_{l''s}^{b*}(\textbf{k})\right)\phi^{b}_{l's}({\textbf{k}})c_{l's},\label{eq:hfint}
\end{align}
where, for notational simplicity, we have used the $a,b$ indices to denote the orbital degree of freedom ($A \equiv |d_{z^2}\rangle$ and $B \equiv \frac{1}{\sqrt{2}}(|d_{x^2-y^2}\rangle +i\tau|d_{xy}\rangle) $). The full HF Hamiltonian
for electrons with wavevector ${\bf k}$ then becomes
\begin{equation}
H^{0,\text{HF}}_{ls,l's}(\textbf{k})=H^{0}_{ls,l's}(\textbf{k}) - 2U\sum_{ab}\phi^{a*}_{ls}({\textbf{k}})n^{ab}_{s\tau_l}\phi^{b}_{l's}({\textbf{k}}),
\end{equation}
with $n^{ab}_{s \tau_l} = \sum_{\textbf{k}l}\phi_{ls}^a(\textbf{k})n_{ls}(\textbf{k})\phi_{ls}^{b*}(\textbf{k})$.
The quantities $n_{ls}$ need to be determined self-consistently.
Note in writing $H^{0,\text{HF}}_{ls,l's}(\textbf{k})$, we have dropped
a term proportional to the total fermion number which is a constant. In the orbital basis ($l,l'$) one may write
\begin{equation}
H^{0,\text{HF}}(\textbf{k}) =\left[
\begin{array}{cc}
\tilde{m}_{s\tau} & at\tau k e^{-i\tau\phi} \\
at\tau k e^{i\tau\phi} & -\tilde{m}_{s\tau}\\
\end{array}\right]+\tau s \lambda/2 - U(n^{AA}_{s\tau} + n^{BB}_{s\tau}),\label{eq:hf0}
\end{equation}
with renormalized mass
$\tilde{m}_{s \tau} = \frac{\Delta-\tau s\lambda}{2} - U(n^{AA}_{s \tau} - n^{BB}_{s \tau})$. For a fixed density (obtained by fixing $\mu_0$), the value of $\tilde{m}_{s\tau}$ is found
numerically using the requirement that the values $n_{ls}(k)$ used to generate Eq.~(\ref{eq:hf0})
yield wavefunctions that produce the very same values -- i.e., the density matrix
used to generate the HF Hamiltonian is the same as what one finds from its
eigenvectors and eigenvalues. In the present case, the wavefunctions have
a functional form that is the same as that of
the free wavefunctions, Eq.~(\ref{freephi}),
with modified parameters:
\begin{align}\label{hfphi}
\phi_{l,s}(\textbf{k}) = \frac{1}{\sqrt{2}}\left(\begin{array}{c} \tau e^{-i\tau\phi}\sqrt{1+\frac{\alpha \tilde{m}_{s\tau}}{\sqrt{\tilde{m}_{s\tau}^2+a^2t^2k^2}}} \\ \alpha\sqrt{1-\frac{\alpha \tilde{m}_{s\tau}}{\sqrt{\tilde{m}_{s\tau}^2+a^2t^2k^2}}} \end{array}\right).
\end{align}
The energy eigenvalues then become
\begin{align}
&\tilde{\epsilon}_{l,s}(\textbf{k}) =\frac{\tau s\lambda}{2} + \alpha \sqrt{\tilde{m}_{s\tau}^2+(atk)^2} - U(n^{AA}_{s\tau} + n^{BB}_{s\tau}),\label{eq:HFE}
\end{align}
which is similar but not identical to the non-interacting energy eigenvalues, Eq.~(\ref{eq:freeen}). Here, in analogy with the previous section, the index $l=\{\tau,\alpha\}$
implicitly contains the valley index $\tau$ as well as the
conduction/valence band index $\alpha=\pm1$. In the remainder of this paper,
we will use these as the basis states for our analysis.
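The self-consistency loop described above can be sketched schematically. The occupation function in the snippet below is a hypothetical smooth stand-in for the actual momentum sums over the filled bands, so it illustrates only the damped fixed-point structure of such a calculation, not the band-structure integrals themselves.

```python
import numpy as np

# Toy sketch of a self-consistent mass determination,
#   m = m0 - U * (nAA(m) - nBB(m)),
# analogous to the condition fixing the renormalized mass.  The
# occupation difference here is a hypothetical smooth function of m,
# standing in for the actual momentum sums over the filled bands.
def occupation_difference(m):
    return np.tanh(m)          # toy stand-in for n^AA - n^BB

def solve_mass(m0, U, mix=0.5, tol=1e-12, max_iter=10000):
    m = m0
    for _ in range(max_iter):
        m_new = m0 - U * occupation_difference(m)
        if abs(m_new - m) < tol:
            return m_new
        m = mix * m_new + (1.0 - mix) * m   # linear mixing for stability
    raise RuntimeError("self-consistency loop did not converge")

m_star = solve_mass(m0=1.0, U=0.3)
# at the fixed point the input and output masses coincide
print(abs(m_star - (1.0 - 0.3 * occupation_difference(m_star))))
```

The damped (mixed) update is the standard way to stabilize such iterations when the bare map is not contractive.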
\section{Time dependent Hartree-Fock Approximation}
Our focus in this study is the spin-spin response function
\begin{align}
\chi_{\tau}(\textbf{r}-\textbf{r}',t) = -i\Theta(t)\langle[\rho^{+-}_{\tau} (\textbf{r},t), \rho^{-+}_{\tau} (\textbf{r}',0)] \rangle,\label{eq:corr}
\end{align}
where $\rho^{\sigma\sigma'}_{\tau}(\mathbf{r},t) = \Psi^{\text{HF}\dagger}_{\sigma\tau}(\textbf{r},t)\Psi^{\text{HF}}_{\sigma'\tau}(\textbf{r},t)$, and the field operators are
\begin{align}
\Psi_{s\tau}^{\text{HF}}(\textbf{r}) = \frac{1}{\sqrt{L_xL_y}}\sum_{\textbf{k},l}
e^{i\textbf{k}\cdot \textbf{r}}\phi_{l,s}(\textbf{k})c_{l,s}(\textbf{k})\delta_{\tau,\tau_l}.
\end{align}
The single-particle states appearing in this expression are the HF wavefunctions, Eq.~(\ref{hfphi}). We do not consider intervalley particle-hole operators as these would involve a large momentum transfer to the system. Assuming translational invariance, in momentum space the response function has the form
\begin{widetext}
\begin{align}
\chi_{\tau}(\textbf{q},t)
=-&\frac{i\Theta(t)}{L_xL_y}\sum_{\{\textbf{k}_i,\textbf{q}_i,l_i\}}f_{l_1l_2,\uparrow\downarrow}(\textbf{k}_1+\textbf{q},\textbf{k}_1)f_{l_3l_4,\downarrow\uparrow}(\textbf{k}_2-\textbf{q},\textbf{k}_2) \langle[e^{iHt}c^\dagger_{l_1\uparrow}(\textbf{k}_1+\textbf{q})c_{l_2\downarrow}(\textbf{k}_1)e^{-iHt},c^\dagger_{l_3\downarrow}(\textbf{k}_2-\textbf{q})c_{l_4\uparrow}(\textbf{k}_2)]\rangle \nonumber \\
\equiv & \frac{1}{L_xL_y}\sum_{\{\textbf{k}_i,\textbf{q}_i,l_i\}}f_{l_1l_2,\uparrow\downarrow}(\textbf{k}_1+\textbf{q},\textbf{k}_1)f_{l_3l_4,\downarrow\uparrow}(\textbf{k}_2-\textbf{q},\textbf{k}_2)
\tilde{\chi}_{l_1l_2l_3l_4}(\textbf{k}_1,\textbf{k}_2,\textbf{q},t), \label{eq:physresp}
\end{align}
with
\begin{align}
\tilde{\chi}_{l_1l_2l_3l_4}(\textbf{k}_1,\textbf{k}_2,\textbf{q},t) =-i\Theta(t) \langle[e^{iHt}c^\dagger_{l_1\uparrow}(\textbf{k}_1+\textbf{q})c_{l_2\downarrow}(\textbf{k}_1)e^{-iHt},c^\dagger_{l_3\downarrow}(\textbf{k}_2-\textbf{q})c_{l_4\uparrow}(\textbf{k}_2)]\rangle. \label{eq:chitilde}
\end{align}
It is implicit that the $\tau_{l}$ content of each $l$ index on the right hand side
of this equation is a single value of $\tau$, and the Hamiltonian
appearing in the $e^{\pm iHt}$ factors
is $H = H_0+ H_{\text{int}}$, using Eqs.~(\ref{eq:ham}) and (\ref{Hint}). The
weights $f_{l_il_j,\sigma\sigma'}(\mathbf{k}_1,\mathbf{k}_2) \equiv
\phi_{l_i\sigma}^{\dagger}(\mathbf{k}_1)\phi_{l_j\sigma'}(\mathbf{k}_2)$ are
wavefunction overlap factors, and the indices $l_i$ have allowed values
$\tau_l=\pm1$ and $\alpha_l = \pm1$.
To obtain an explicit expression for $\tilde{\chi}$,
we take a time derivative of its definition implicit in Eq.~(\ref{eq:physresp}), which
generates expectation values involving 2, 4, and 6 fermion operators. We approximate
the last of these using a HF decomposition \cite{Giuliani_book}, leading to a closed
expression for the response function that involves elements of the static density
matrix described in the previous section. This is the form in which we carry out
the time-dependent Hartree-Fock approximation. The resulting equation may be
expressed as
\begin{align}
i\partial_t\tilde{\chi}_{l_1l_2l_3l_4}(\textbf{k}_1,\textbf{k}_2,\textbf{q},t) =& \{n_{l_1\uparrow}(\textbf{k}_1+\textbf{q})-n_{l_2\downarrow}(\textbf{k}_1)\}\delta_{l_1l_4}\delta_{l_2l_3}\delta_{\textbf{k}_1,\textbf{k}_2-\textbf{q}} - \Big[\tilde{\epsilon}_{l_1,\uparrow}(\textbf{k}_1+\textbf{q}) - \tilde{\epsilon}_{l_2,\downarrow}(\textbf{k}_1)\Big] \tilde{\chi}_{l_1l_2l_3l_4}(\textbf{k}_1,\textbf{k}_2,\textbf{q},t)\nonumber \\
& + 2U\sum_{ab}\Big[\phi^a_{l_1\uparrow}(\textbf{k}_1+\textbf{q})\Big(n_{l_2\downarrow}(\textbf{k}_1)-n_{l_1\uparrow}(\textbf{k}_1+\textbf{q})\Big)\phi^{b*}_{l_2\downarrow}(\textbf{k}_1)\Big]\tilde{\chi}^{ab}_{\uparrow\downarrow l_3l_4}(\textbf{k}_2,\textbf{q},t),
\label{deriv}
\end{align}
where
$$ \tilde{\chi}^{ab}_{s_1s_2 l_3l_4}(\textbf{k}_2, \textbf{q}, t) \equiv \sum_{l_1l_2\textbf{k}_1}\phi^{a*}_{l_1s_1}(\textbf{k}_1+\textbf{q})\phi^{b}_{l_2s_2}(\textbf{k}_1)\tilde{\chi}_{l_1l_2l_3l_4}(\textbf{k}_1,\textbf{k}_2,\textbf{q},t) $$
defines $\tilde{\chi}^{ab}_{\uparrow \downarrow l_3 l_4}$; here $\phi^{a}_{l,s}$ is the amplitude of the $a$th orbital (see Eq.~(\ref{hfphi})). Some details leading up to Eq.~(\ref{deriv}) are provided in Appendix A.
Fourier transforming Eq.~(\ref{deriv}) with respect to time and carrying out some further algebra, one may cast it in the form
\begin{align}
-\chi_0^{cd,c'd'}(\textbf{q},\omega) = \chi^{cd,c'd'}(\textbf{q},\omega) - 2U_0\sum_{ab}\chi_0^{cd,ab}(\textbf{q},\omega)\chi^{ab,c'd'}(\textbf{q},\omega).\label{eq:chi}
\end{align}
Here $U_0 = L_xL_y U$, $\chi^{cd,c'd'}(\textbf{q},\omega) \equiv \frac{1}{L_xL_y}\sum_{l_3,l_4,\textbf{k}}\tilde{\chi}^{cd}_{\uparrow\downarrow l_3l_4}(\textbf{k},\textbf{q},\omega)
\phi^{c'}_{l_4\uparrow}(\textbf{k})\phi^{d'*}_{l_3\downarrow}(\textbf{k}-\textbf{q})$, and
\begin{equation}
\chi_0^{ab,cd}(\textbf{q}, \omega) = - \frac{1}{L_xL_y}\sum_{l_3,l_4,\textbf{k}_2} \frac{n_{l_4\uparrow}(\textbf{k}_2)-n_{l_3\downarrow}(\textbf{k}_2-\textbf{q})}{\omega + i\delta + \tilde{\epsilon}_{l_4,\uparrow}(\textbf{k}_2) - \tilde{\epsilon}_{l_3,\downarrow}(\textbf{k}_2-\textbf{q}) } \phi^{a*}_{l_4\uparrow}(\textbf{k}_2)\phi^{b}_{l_3\downarrow}(\textbf{k}_2-\textbf{q})\phi^{c}_{l_4\uparrow}(\textbf{k}_2)\phi^{d*}_{l_3\downarrow}(\textbf{k}_2-\textbf{q})\label{eq:chi0}
\end{equation}
\end{widetext}
is the susceptibility associated with the single-particle Hamiltonian $H^{0,\text{HF}}$, and may be viewed as a $4\times4$ matrix written in the basis $AA,BB,AB,BA$.
Finally, we write Eq.~(\ref{eq:chi}) in matrix form and relate it to the physical response function in Eq.~(\ref{eq:physresp}), yielding
\begin{equation}
\chi_{\tau}(\textbf{q},\omega) = -\text{Tr}'\left[ \Big(1-2U_0\chi_0(\textbf{q},\omega)\Big)^{-1}\chi_0(\textbf{q},\omega)\right].\label{eq:finalchi}
\end{equation}
In this equation, all the matrices are $4\times4$, but the $\text{Tr}'$ is taken only over the ``diagonal'' elements, $\text{Tr}'\chi^{ab,cd}=\sum_{a,c=A,B}\chi^{aa,cc}$. Eq.~(\ref{eq:finalchi}) is one of our main results.
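As a concrete illustration of evaluating Eq.~(\ref{eq:finalchi}), the following sketch performs the matrix inversion and the restricted trace $\text{Tr}'$ in the basis ordering $AA,BB,AB,BA$. The input $\chi_0$ here is a hypothetical numerical matrix, not one computed from the band structure.

```python
import numpy as np

# Sketch of Eq. (finalchi): chi = -Tr'[(1 - 2*U0*chi0)^{-1} chi0], with
# chi0 a 4x4 matrix in the basis ordering (AA, BB, AB, BA).  The chi0
# used below is a hypothetical numerical input, not a band-structure result.
def physical_chi(chi0, U0):
    M = np.linalg.solve(np.eye(4) - 2.0 * U0 * chi0, chi0)
    # Tr' runs only over the "diagonal" orbital pairs aa, cc with
    # a, c in {A, B}: the upper-left 2x2 block in this ordering
    return -np.sum(M[:2, :2])

# sanity check: for chi0 proportional to the identity the inversion
# reduces to a scalar geometric series, chi = -2x/(1 - 2*U0*x)
x, U0 = 0.1 + 0.0j, 1.0
chi0 = x * np.eye(4)
print(physical_chi(chi0, U0))   # approximately -0.25
```

Using a linear solve rather than an explicit inverse is the numerically preferred way to form $(1-2U_0\chi_0)^{-1}\chi_0$.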
When $\text{Im}\chi(\textbf{q},\omega) \ne 0$ the system may absorb energy from a perturbation
that flips an electron spin,
so that the system has spin excitations with energy $\omega$ at momentum $\textbf{q}$; as a function
of $\omega$ for fixed $\textbf{q}$, this absorption occurs either over a range of frequencies, where there
is a continuum of excitations, or at sharp poles where there is a collective mode \cite{Giuliani_book}.
The latter case is characterized by $\text{Det}(1-2U_0\chi_0(\mathbf{q},\omega))=0$.
An example of $\chi(\mathbf{q},\omega)$ is illustrated in Fig. \ref{fig:phys_response},
where both a continuum and a sharp collective mode are evident. Fig.~\ref{fig:chi}
shows the same example on a linear scale. In this case a sharp collective mode is expected at the point where the relevant determinant vanishes.
This mode is separated from the ``incoherent'' particle-hole excitations whose edge is denoted by $\omega_c$.
In addition to the collective mode that is evident in Fig. \ref{fig:chi}, a second mode arises
very close to the particle-hole continuum edge, which is rather difficult to discern in the
response function due to its close proximity to the continuum excitations. The presence of
such a mode can be demonstrated explicitly by examining the low hole-doping limit.
We now turn to this discussion.
\begin{figure}
\includegraphics[width=0.45\textwidth]{pole.pdf}
\caption{Schematic representation of the left and right hand sides of Eq.~(\ref{eq:SW12}) as functions of $\omega$, shown in red and blue respectively. For low enough $k_F$, an isolated spin wave mode is always present.}\label{fig:pole}
\end{figure}
\section{Spin-wave modes for small hole-doping}
For small densities of holes, it is possible to make analytical progress on finding zeros
of $\text{Det}(1-2U_0\chi_0(\mathbf{q},\omega))$ in the limit $q \rightarrow 0$,
indicating the location of sharp, collective spin-wave modes.
Specifying $\tau=1$ as the valley we will focus upon,
the valence bands are indexed by $\alpha=-1$ in Eq.~(\ref{eq:HFE}).
The dominant contributions to $\chi_0$ in Eq.~(\ref{eq:chi0})
come from $l_3 = l_4 = \{\tau=1,\alpha=-1\}$. This leads to the approximate expression
\begin{align}
\tilde{\chi}_0^{ab,cd}(\textbf{q}=0,\omega) &= - \frac{1}{L_xL_y}\sum_{\textbf{k}} M^{ab,cd}(\textbf{k}) \frac{\Delta n(k)}{\omega + i\delta + \Delta\tilde{\epsilon}(k) },
\end{align}
where $\Delta n(k) = n_{\uparrow}(k) - n_{\downarrow}(k)$ and $\Delta\tilde{\epsilon}(k) = \lambda - (\tilde{m}_\uparrow-\tilde{m}_\downarrow) - U(n_\uparrow(k)-n_\downarrow(k)) - \frac{1}{2}\left(\frac{1}{\tilde{m}_\uparrow} - \frac{1}{\tilde{m}_\downarrow}\right)(atk)^2 \equiv E_0 - \frac{1}{2}\gamma k^2$. Here $E_0 = \lambda -(\tilde{m}_{\uparrow} - \tilde{m}_{\downarrow}) - U (n_{\uparrow}(k)-n_{\downarrow}(k))$ and $\gamma = \left(\frac{1}{\tilde{m}_\uparrow} - \frac{1}{\tilde{m}_\downarrow}\right)(at)^2$.
Notice we have employed a small $k$ expansion of $\tilde\epsilon(k)$, which works well because $\Delta n(k)$
differs from zero only at small $k$ in the low hole doping limit.
The particle-hole continuum is identified by the interval of $\omega$ for which $\omega + \Delta \tilde{\epsilon}(k)$
vanishes for some $k$ where $\Delta n(k) \ne 0$. This range is given in the present
approximation by $-E_0<\omega<-E_0 + \frac12\gamma k_F^2 \equiv \omega_c$,
where $k_F$ is the Fermi wavevector for the pocket of holes in the valence band.
The matrix elements $M^{ab,cd}(\mathbf{k}) = \phi^{a*}_{\uparrow}(\textbf{k})\phi^{b}_{\downarrow}(\textbf{k})\phi^{c}_{\uparrow}(\textbf{k})\phi^{d*}_{\downarrow}(\textbf{k})$ (cf.~Eq.~(\ref{eq:chi0})) can be obtained by similarly expanding the Hartree-Fock wave functions for small $k$,
\begin{equation}
\tilde{\phi}_s(\textbf{k})
\approx \left[
\begin{array}{c}
e^{-i\phi} \frac{atk}{2\tilde{m}_s} \\
-[1 - \frac{(atk)^2}{8\tilde{m}_s^2} ]\\
\end{array}\right],
\end{equation}
where only terms up to second order in $k$ are kept.
To this order the only relevant non-vanishing elements of the $M$ matrix are
\begin{align}
M^{AA,BB} &= M^{BB,AA}= \frac{(atk)^2}{4\tilde{m}_\uparrow\tilde{m}_\downarrow}, \nonumber \\
M^{BB,BB} &= 1 - \frac{(atk)^2}{4\tilde{m}_\uparrow^2}- \frac{(atk)^2}{4\tilde{m}_\downarrow^2}, \nonumber \\
M^{AB,BA} &= M^{BA,AB}=\frac{(atk)^2}{4\tilde{m}_\uparrow^2}.\nonumber
\end{align}
Except for
$M^{AA,AA}$, which vanishes to $\mathcal{O}(k^2)$, all the other entries of $M$ contain phases of the form $e^{-i\phi}$,
with $\phi$ the angle of {\bf k} with respect to the $k_x$-axis, and therefore vanish
upon integration over momentum; thus they do not contribute to $\tilde{\chi}_0$. At $\textbf{q}=0$, $\tilde{\chi}_0$ has a block-diagonal form and $\text{Det}(1-2U_0\chi_0(\mathbf{q},\omega))$ can be written as the product of two subdeterminants, $D_1$ and $D_2$, given by
\begin{align}
D_1 =& (1-2U_0\tilde{\chi}_0^{AA,AA})(1-2U_0\tilde{\chi}_0^{BB,BB}) \nonumber\\
&~~~~~~~~~~~~- 4U_0^2\tilde{\chi}_0^{AA,BB}\tilde{\chi}_0^{BB,AA}, \\
D_2 =& 1 - 4U_0^2\tilde{\chi}_0^{AB,BA}\tilde{\chi}_0^{BA,AB}.
\end{align}
If either of these vanishes at an $\omega$ outside the particle-hole continuum frequency interval, there
is a sharp collective mode at that frequency. Note that the particular response functions appearing in
$D_1$ and $D_2$ indicate that the former is associated with spin flips in which electrons remain
in the same orbital, while the latter arises due to electrons which both flip spin and change
orbital.
Using the integrals
\begin{align}
I_0 &= \frac{1}{L_xL_y}\sum_{|\textbf{k}| < k_F}\frac{1}{\omega+E_0-\frac{1}{2}\gamma k^2} \nonumber\\
&= \int_0^{k_F} \frac{kdk}{2\pi}\frac{1}{\omega+E_0-\frac{1}{2}\gamma k^2}\nonumber \\
&= -\frac{1}{2\pi \gamma}\ln\left(\frac{\omega+E_0-\frac{1}{2}\gamma k_F^2}{\omega+E_0}\right)
\end{align}
and
\begin{align}
I_1 &= \frac{1}{L_xL_y}\sum_{|\textbf{k}| < k_F}\frac{k^2}{\omega+E_0-\frac{1}{2}\gamma k^2}\nonumber \\
&= \frac{1}{2\pi\gamma}\left[-\frac{\omega+E_0}{\gamma}\ln\left(\frac{\omega+E_0-\frac{1}{2}\gamma k_F^2}{\omega+E_0}\right)-k_F^2\right],
\end{align}
the condition $D_1 = 0$ reduces to
\begin{equation}
1-2U_0\left(I_0 - \frac{(at)^2}{4}\left(\frac{1}{\tilde{m}_\uparrow^2} + \frac{1}{\tilde{m}_\downarrow^2}\right)I_1\right) = \frac{U_0^2 (at)^4}{4\tilde{m}_\uparrow^2\tilde{m}_\downarrow^2}.\label{eq:SW1}
\end{equation}
Similarly, $D_2 = 0$ can be simplified to
\begin{equation}
I_1 = \pm \frac{2\tilde{m}_\uparrow\tilde{m}_\downarrow}{U_0(at)^2}.\label{eq:SW2}
\end{equation}
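The closed form of $I_0$ above can be checked against direct numerical quadrature. In the sketch below the parameters are arbitrary illustrative values chosen so that $\omega+E_0<0$, i.e., below the particle-hole continuum, so the integrand has no pole.

```python
import numpy as np

# Numerical check of the closed form for I0.  The parameters are
# arbitrary illustrative values with A = omega + E0 < 0, i.e. below
# the particle-hole continuum, so the integrand has no pole.
A, gamma, kF = -1.0, 2.0, 1.0        # A stands for omega + E0
n = 200000
k = (np.arange(n) + 0.5) * (kF / n)  # midpoint grid on [0, kF]
dk = kF / n
I0_numeric = np.sum(k / (A - 0.5 * gamma * k**2)) * dk / (2.0 * np.pi)
I0_closed = -np.log((A - 0.5 * gamma * kF**2) / A) / (2.0 * np.pi * gamma)
print(I0_numeric, I0_closed)         # the two agree to high accuracy
```

For these particular values the closed form reduces to $-\ln 2/(4\pi)$, which the midpoint-rule sum reproduces to many digits.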
\begin{center}
\begin{figure*}[t]
\includegraphics[width=0.95\textwidth]{varying_mu.pdf}
\caption{Spin wave excitations and the particle-hole continuum as a function of the chemical potential ($\mu_0$) shown for three different values of the interaction strength $U_0$ when $\mathbf{q}=\mathbf{0}$. The green band corresponds to the particle-hole continuum as is shown in Fig.~\ref{fig:chi}. The blue dashed line corresponds to the isolated mode at frequency $\omega_1$ described in Fig.~\ref{fig:chi} and Fig.~\ref{fig:pole}. The mode corresponding to Eq.~(\ref{eq:2ndmode}) is barely visible as a red line. The vertical lines indicate the boundary beyond which the stability condition is violated (see main text for details).}\label{fig:withmu}
\end{figure*}
\end{center}
For small interaction strength $U_0$, the condition Eq.~(\ref{eq:SW1}) will be met for some value of $\omega$ outside
the particle-hole continuum.
This can be understood as follows.
For small $U_0$, we approximate the equation as
\begin{equation}
\frac{(at)^2}{4}\left(\frac{1}{\tilde{m}_\uparrow^2} + \frac{1}{\tilde{m}_\downarrow^2}\right)I_1 \approx I_0 -\frac{1}{2U_0}.
\end{equation}
Using the fact that
$$
I_1=\frac{\omega+E_0}{\gamma}I_0-\frac{k_F^2}{2\pi\gamma}
$$
this equation can be recast as
\begin{align}
I_0 = \frac{(at/2)^2\left(\frac{1}{\tilde{m}_\uparrow^2} + \frac{1}{\tilde{m}_\downarrow^2}\right)\frac{k_F^2}{2\pi\gamma}-\frac{1}{2U_0}}{(at/2)^2\left(\frac{1}{\tilde{m}_\uparrow^2} + \frac{1}{\tilde{m}_\downarrow^2}\right)\frac{E_0+\omega}{\gamma}-1}.\label{eq:SW12}
\end{align}
The numerator of the right hand side of this equation is negative for
small $U_0$. As $\omega$ increases from large negative values, the right hand side is
positive and increases in magnitude, diverging at
\begin{align}
\omega=\omega_{\text{div}} \equiv -E_0 + \frac{4\gamma}{a^2t^2} \left(\frac{1}{\tilde{m}_\uparrow^2} + \frac{1}{\tilde{m}_\downarrow^2}\right)^{-1}.
\end{align}
Importantly, $\omega_{\text{div}} > \omega_c$ in the low doping limit, so the divergence
is above the particle-hole continuum.
Above $\omega_{\text{div}}$ the right hand side increases uniformly from arbitrarily large negative values,
eventually vanishing at large positive $\omega$.
By contrast, $I_0$ diverges to large negative values as $\omega \rightarrow -E_0$ from below, and
comes down from arbitrarily large positive values starting at the particle-hole continuum edge
$\omega_c$. This guarantees there will be a crossing of the left and right
hand sides of Eq.~(\ref{eq:SW12}) between this edge and $\omega_{\text{div}}$, and a collective mode with frequency $\omega_1$ in this interval. This is qualitatively shown in Fig.~\ref{fig:pole}.
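This crossing argument can be made concrete numerically. With illustrative (hypothetical) parameter values, the sketch below bisects $I_0(\omega)-\text{RHS}(\omega)$ of Eq.~(\ref{eq:SW12}) between the continuum edge $\omega_c$ and the divergence $\omega_{\text{div}}$, locating a mode frequency $\omega_1$ in that window.

```python
import numpy as np

# Numerical illustration of the crossing argument for Eq. (SW12), with
# illustrative (hypothetical) parameters.  Bisect I0(w) - RHS(w) between
# the continuum edge omega_c and the divergence omega_div.
at, m_up, m_dn, kF, U0, E0 = 1.0, 1.0, 2.0, 0.2, 0.2, 0.3
gamma = (1.0 / m_up - 1.0 / m_dn) * at**2
c = (at / 2.0)**2 * (1.0 / m_up**2 + 1.0 / m_dn**2)

def I0(w):
    return -np.log((w + E0 - 0.5 * gamma * kF**2) / (w + E0)) / (2.0 * np.pi * gamma)

def rhs(w):
    return (c * kF**2 / (2.0 * np.pi * gamma) - 1.0 / (2.0 * U0)) \
        / (c * (E0 + w) / gamma - 1.0)

def f(w):
    return I0(w) - rhs(w)

omega_c = -E0 + 0.5 * gamma * kF**2
omega_div = -E0 + gamma / c
lo, hi = omega_c + 1e-9, omega_div - 1e-9   # f > 0 at lo, f < 0 at hi
for _ in range(200):                         # bisection
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) > 0:
        lo = mid
    else:
        hi = mid
omega_1 = 0.5 * (lo + hi)
print(omega_c < omega_1 < omega_div)         # mode sits in the predicted window
```

Because $I_0$ decreases monotonically from $+\infty$ at $\omega_c$ while the right hand side increases toward $+\infty$ at $\omega_{\text{div}}$, the bracketed root is unique.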
Note that for decreasing $U_0$ this solution moves closer to the particle-hole continuum, which we indeed find numerically, as illustrated in Fig.~\ref{fig:withmu}. As shown in Appendix B, for small $U_0$ and small hole doping, at $\textbf{q}=0$ one finds
\begin{align}
\omega_1 \approx -E_0+ \frac12\gamma k_F^2\left(1+e^{-\pi\gamma/U_0}\right) \label{eq:1stmode}.
\end{align}
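Eq.~(\ref{eq:1stmode}) can be checked directly against the leading small-$U_0$ pole condition $1-2U_0 I_0(\omega)\approx 0$ (the dominant balance in Eq.~(\ref{eq:SW1}) for small $U_0$), using the closed form of $I_0$; the parameter values below are illustrative.

```python
import numpy as np

# Check that the approximate mode frequency omega_1 of Eq. (1stmode)
# satisfies the leading small-U0 pole condition 1 - 2*U0*I0(omega) ~ 0,
# using the closed form of I0.  All parameter values are illustrative.
E0, gamma, kF, U0 = 0.3, 1.0, 0.2, 0.2

def I0(omega):
    return -np.log((omega + E0 - 0.5 * gamma * kF**2) / (omega + E0)) \
        / (2.0 * np.pi * gamma)

omega1 = -E0 + 0.5 * gamma * kF**2 * (1.0 + np.exp(-np.pi * gamma / U0))
residual = 1.0 - 2.0 * U0 * I0(omega1)
print(residual)   # exponentially small in gamma/U0
```

The residual is of order $e^{-\pi\gamma/U_0}$, reflecting how close the mode sits to the continuum edge for weak interactions.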
The second condition, Eq.~(\ref{eq:SW2}), for small $U_0$ can only be satisfied for the negative sign of the right hand side. The position of the spin-wave mode at $\mathbf{q}=0$ can be approximately evaluated to be
\begin{align}
\omega_2 \approx -E_0+ \frac12\gamma k_F^2\left(1+e^{-\epsilon_0/k_F^2U_0}\right), \label{eq:2ndmode}
\end{align}
where $\epsilon_0 = 8\pi \tilde{m}_{\uparrow}\tilde{m}_{\downarrow}/a^2t^2$.
It is clear from Eqs.~(\ref{eq:1stmode}) and (\ref{eq:2ndmode}) that the separation of $\omega_2$ from the particle-hole continuum is very small when compared to that of $\omega_1$ for small hole doping and for the relevant parameter range. This result is again consistent with our numerical solutions, as illustrated in Fig.~\ref{fig:withmu}.
We conclude this section with two comments on these results. First, the appearance
of a sharp collective mode with arbitrarily small $U_0$ supports the interpretation
of the non-interacting groundstate as being effectively polarized in a ``pseudospin''
variable, $\sigma_z\tau_z$, as discussed in the Introduction. When interactions
are introduced, incoherent particle-hole excitations are pushed up in energy
via a loss of exchange energy which, for repulsive interactions, generically lowers
the groundstate energy for a polarized state. However, an appropriate linear combination
of particle-hole pair states can minimize this loss of exchange energy, leading to
the sharp collective mode.
Secondly, although we have demonstrated the existence of two discrete modes, the second of these
(at $\omega=\omega_2$) lies exceedingly close to the particle-hole continuum edge. This means that
small perturbations can easily admix these different kinds of modes together, making the
detection of the second mode challenging. Indeed, in our own numerics the broadening
introduced in our discrete wavevector sum to simulate the thermodynamic limit
typically mixes this mode with the continuum. In this situation the mode does not show
up sharply in the response function we focus upon. We note that our analysis shows the
mode to be associated with simultaneous spin flip {\it and} a change of orbital, $A \leftrightarrow B$,
so that we expect this second mode should show up more prominently in more complicated
response functions that simultaneously probe both of these.
\section{Numerical Results and Discussion}
In general,
to compute $\chi_\tau$ we need to know $\chi_0$, which we obtain numerically by approximating the integral in Eq.~(\ref{eq:chi0}) as a discrete sum.
For our calculations we discretize
momenta onto a $100\times 100$ two dimensional grid,
with each momentum component running from $-k_0$ to $+k_0$. We have checked that the contribution to
$\chi_0$ dies off quickly within the range of momentum integration. We also discretize $\omega$ to a set
of 5000 points, within which we compute physical response functions. A small but non-vanishing imaginary part
$\eta$ is retained, of the order of the spacing of the $\omega$ values, to produce the continuity
expected in the thermodynamic limit (where the momentum grid over which we sum becomes
arbitrarily fine). Figs. \ref{fig:phys_response} and \ref{fig:chi} depict typical results.
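The discretization just described can be sketched as follows. The dispersion here is an illustrative quadratic band rather than the HF dispersion, and only a single band pair is kept, so the snippet shows the numerical structure (momentum grid, frequency grid, and broadening $\eta$), not the actual $\chi_0$ of Eq.~(\ref{eq:chi0}).

```python
import numpy as np

# Toy discretized evaluation of a chi0-like sum,
#   chi0(omega) = (1/N_k) sum_k 1/(omega - eps(k) + i*eta),
# on a 100x100 momentum grid, with 5000 omega points and a broadening
# eta of the order of the omega spacing.  eps(k) is an illustrative
# quadratic band, not the Hartree-Fock dispersion of the text.
N = 100
kx, ky = np.meshgrid(np.linspace(-1.0, 1.0, N), np.linspace(-1.0, 1.0, N))
eps = 0.5 * (kx**2 + ky**2)                  # toy dispersion, 0 <= eps <= 1

omegas = np.linspace(-0.5, 2.0, 5000)
eta = 2.0 * (omegas[1] - omegas[0])          # broadening ~ omega spacing

chi0 = np.array([np.mean(1.0 / (w - eps + 1j * eta)) for w in omegas])

# -Im chi0 is a broadened density of transitions: strictly positive,
# and largest inside the band 0 <= omega <= 1
print((-chi0.imag > 0).all(), omegas[np.argmax(-chi0.imag)])
```

Choosing $\eta$ comparable to the frequency spacing smooths the discrete poles of the finite grid into the continuous spectrum expected in the thermodynamic limit.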
\begin{figure}
\includegraphics[width=0.45\textwidth]{withq.pdf}
\caption{The blue line depicts the dispersion of the isolated spin wave excitation, i.e., the $(\omega,q_x)$ points for which the real part of the denominator of the spin susceptibility given by Eq.~(\ref{eq:finalchi}) vanishes. The green continuum represents the particle-hole excitations for which the denominator of Eq.~(\ref{eq:finalchi}) has a nonvanishing imaginary component, as is shown in Fig.~\ref{fig:chi}. Here we have taken $U_0 = 0.2$eV.}\label{fig:U1}
\end{figure}
The response function Eq.~(\ref{eq:corr}) qualitatively describes the dynamics of an electron-hole pair
between bands of opposite spins. The lowest energy excitations necessarily
involve the bands nearest the chemical potential $\mu$.
When $\mu$ is within the gap so that the system is insulating,
such an excitation will have energy comparable to the band gap
$E_g \sim 1$eV \cite{Ross_2013,Ugeda_2014,Wu_2015}.
On the other hand,
when the system is hole-doped so that the
chemical potential falls below the top of the valence band, electron-hole pairs
from the two spin species in the valence band become available (see Fig.~\ref{fig:band}).
The resulting excitations can have energy of order $\lambda \sim 0.1$eV, a considerably
lower energy scale.
Discrete poles
in $\chi$ have
infinite lifetime and represent the collective spin-wave modes of the system; these can
arise only when interactions are included in the model. A set of representative plots
illustrating both the
spin-wave dispersion and the particle-hole continuum
are shown in Fig.~\ref{fig:U1} for both valleys.
Note the clear symmetry apparent between the two valley responses when $\omega \rightarrow -\omega$.
This is a manifestation of time-reversal symmetry, and indicates that strong absorption
from a perturbation with one helicity in one of the two valleys implies equally strong
absorption in the other valley when the helicity is reversed.
It is interesting to consider the possible consequences of this if the system develops true
ferromagnetism, which is thought to occur above some critical interaction strength $U_c$
\cite{Scrace_2015,Braz_2017}. In the simplest description, this leads to different
self-consistent exchange fields and different hole populations for each valley \cite{Braz_2017}.
The computation of spin-response in this situation
is essentially the same as carried out in our study, but the effective chemical potential would
be different for each valley. In this case we expect the spin response to be different for
the two possible perturbations, reflecting the broken time-reversal symmetry in the groundstate.
Such behavior has indeed been observed for {\it electron}-doped
TMD's \cite{Scrace_2015}.
Another feature apparent in Fig.~\ref{fig:withmu} is
a cusp in the continuum spectrum, which appears at
$\mu_0=\mu_c\approx -0.55\Delta$. This is the point at which the chemical potential touches the top of the
lower valence band (Fig.~\ref{fig:band}). For $\mu_0>\mu_c$,
a particle-hole
continuum is only present at non-vanishing
frequencies determined by the difference in energy between the highest occupied and the lowest unoccupied bands of opposite spins. However, for $\mu_0 \leq \mu_c$, low energy particle-hole
excitations set in for processes in which (for one of the valleys) a spin-down valence band
electron is excited to the spin-up valence band at finite wave vector, but vanishing frequencies. This is further illustrated in Fig.~\ref{fig:loww}, in which one finds the continuum excitations
reaching down to zero energy, at a finite $q_x$, only when the chemical potential is below this critical value.
As is apparent from Fig.~\ref{fig:chi} the first spin-wave mode from the condition Eq.~(\ref{eq:SW12}) appears above the continuum. Further, for a given $U_0$, the separation from the continuum increases linearly with increasing hole doping, as illustrated in Fig.~\ref{fig:U1},
until the chemical potential touches the top of the lower valence band, at which point a cusp similar to that of the continuum appears in the spin wave dispersion. The linear increase of the separation between the spin wave mode
and the top of the particle-hole continuum at small hole doping can be understood in the following way.
As shown in Appendix B, Eq.~(\ref{eq:SW12}) can be approximated for small hole doping and small $U_0$ by
\begin{align}
\frac{\delta_0}{\delta_0 + c_0\delta \mu} \approx e^{-\pi\gamma/U_0}, \label{eq:2ndmodeexpansion}
\end{align}
where $\delta_0$ is the separation of the spin wave from the continuum, $\delta\mu$
is the change in chemical potential due to hole doping, and the constant $c_0 = \gamma/\tilde{m}_\uparrow$.
As the right hand side of the equation is independent of $\delta\mu$, the solution
$\delta_0$ must be proportional to $\delta\mu$.
As discussed in the previous section, the second spin wave solution of Eq.~(\ref{eq:2ndmode}) lies extremely close to the continuum, and so is almost invisible in our numerical solutions for the range of the parameters we
consider. One expects this mode to be visible for larger $U_0$ and larger hole doping. However, in our calculations
we find that the stability condition~\cite{Giuliani_book} $\omega (-\text{Im}\chi_\tau)>0$ fails for some range of $\omega$
for $U_0$ large enough that we
are able to numerically resolve the mode from the continuum. An example of this is shown in Fig.~\ref{fig:2ndsw}. The point beyond which this stability condition is not satisfied is indicated by vertical lines in Fig.~\ref{fig:withmu}.
Note that, physically,
the instability we find in the response functions indicates that the symmetry of the
ground state we are assuming is spontaneously broken, very likely into a state with inter-orbital coherence.
Whether such a state exists at large $U$, or is preempted by a first-order transition into
a state with different hole populations in the valleys, requires a more general Hartree-Fock
study than we have presented in this work, and is left for future study.
\begin{center}
\begin{figure}[t]
\includegraphics[width=0.45\textwidth]{mu_0p57_U_0p5_new}
\caption{Spin susceptibility for $U_0 = 0.5$eV, $\tau=+1$ and $\mu_0 = -0.57\Delta$. Two discrete spin wave modes (indicated by arrows) are visible near $\omega = -0.055\Delta$ and $\omega=-0.092\Delta$, with the second mode very close to the continuum. However, the positivity condition $\omega (-\text{Im}\chi)>0$ does not hold for all $\omega$, implying that our assumed Hartree-Fock state is not the true ground state.}\label{fig:2ndsw}
\end{figure}
\end{center}
\section{Summary}
In this paper, we have studied collective excitations of a simple TMD model, showing that even
without the formation of spontaneous magnetic order, interactions induce sharp collective
modes that are commonly associated with such order. The presence of these modes can be
understood as a consequence of intrinsic order induced by the strong spin-orbit interaction
that yields different energetic orderings of spins in different valleys, and arises when
the system is doped. The presence of these modes is a direct analog of ``Silin-Leggett modes''
present in a simple Fermi liquid subject to a magnetic field, such that the Fermi wavevector
becomes spin-dependent. Our analysis is developed using the time-dependent Hartree-Fock
approximation of a physical spin response function,
and reveals two sharp modes in addition to a continuum of particle-hole excitations.
While one of these modes (associated with spin flips for electrons maintaining their orbital
index) breaks out from the continuum in a clear way, the other (associated with electrons
changing both spin and orbital) remains very close to the continuum edge and is difficult
to distinguish independently. Signatures of how the subbands are populated can be seen in
properties of the spin response functions when the chemical potential is modified, which
in principle can be accomplished by gating the system.
Our calculations indicate that with strong enough interaction the system becomes unstable. Within our model this would likely be to a state with inter-orbital coherence, but first-order instabilities in which the system spontaneously forms unequal valley and spin populations are also possible, which may preempt any instability indicated in linear response.

The validity of the simple model that we use, Eq.~(\ref{eq:ham}), is also limited by the positions of other bands in the system, notably at the $\Gamma$ point~\cite{Xiao_2012}. For MoS$_2$, this separation is small, as bands near the $\Gamma$ point lie 0.1--0.2 eV below the tops of the bands at the $K,K'$ points. The separation in energy is larger for certain dichalcogenides, such as WS$_2$, MoSe$_2$, WSe$_2$, MoTe$_2$, and WTe$_2$, among others. Our results, which are based on a simple two-band model near the $K,K'$ points, will change qualitatively when the Fermi energy is low enough that bands at the $\Gamma$ point contain holes.

Whatever the true groundstate of the system, our formalism in principle allows a calculation of the density matrix associated with it, and of collective modes around it. Moreover, the approach we present can be extended to more general response functions (for example, involving spin and orbital simultaneously), which could reveal further and perhaps clearer signatures of the two collective modes we find in our analysis. Exploration of these represents an interesting direction for future work.
{\it Acknowledgements} -- HAF acknowledges
the support of the US National Science Foundation through grant nos. DMR-1506263 and DMR-1506460,
and the US-Israel Binational Science Foundation through grant no. 2016130. HAF also
thanks the Aspen Center for Physics (NSF grant PHY-1607611), where part of this
work was performed. The research of DKM was supported in part by the INFOSYS scholarship for senior students. A.~K. acknowledges support from the Indian Institute of Technology Kanpur.
\section{Introduction} \label{sec:intro}
Simultaneous Localization and Planning (SLAP) refers to the ability of autonomous robots to (re)plan dynamically every time the localization module updates the probability distribution on the robot's state. For autonomous robots to reliably operate in real-world environments, online (re)planning under uncertainty is a key requisite to enable SLAP and cope with uncertainties and changes in the environment. For example, consider a low-cost mobile robot working in an office environment, where its goals (tasks) are assigned online based on user requests, while coping with changes in the obstacle map (e.g., office doors switch state between open and closed). Such changes in the obstacle map or in the goal location often call for global replanning, as they switch the optimal homotopy class of solutions from one to another. Therefore, the robot needs to change its motion plan in real-time as it encounters changes in its goal or in the environment. What makes such online replanning challenging is that low-cost robots are not able to follow the control commands exactly due to motion noise, and they do not have perfect measurements due to sensor noise. Therefore, this problem calls for online planning algorithms in uncertain, partially observable environments.
In a broader sense, this problem is an instance of the problem of decision making and control under uncertainty.
\begin{figure}[ht]
\centering
\subfigure[\axx{Belief tree: forward construction} ] {\includegraphics[width=0.8\linewidth]{POMDP_tree.png}}
\subfigure[\axx{Belief graph: backward construction} ] {\includegraphics[width=0.7\linewidth]{POMDP_graph.png}}
\caption{\axx{(a) This figure depicts a typical search tree in belief space, corresponding to a small problem with 3 actions $ \mathbb{U}=\{u_{1},u_{2},u_{3}\} $ and two observations $ \mathbb{Z}=\{z_{1},z_{2}\} $. Each posterior distribution/belief branches into $ |\mathbb{U}| $ priors and each prior belief branches into $ |\mathbb{Z}| $ posteriors, and thus the tree grows exponentially in the search depth. (b) This figure depicts the idea of using funnels (local feedback controllers) in belief space that can break this exponential growth by funneling a large set of posteriors into pre-computed beliefs (in red circles). Thus a graph is formed in belief space with funnels as edges and the pre-computed beliefs as nodes. The graph grows linearly with the search depth.}}
\label{fig:tree-with-funnels}
\end{figure}
In general, decision making and control under uncertainty is a ubiquitous challenge in many robotic applications. For an autonomous robot to operate reliably, it is crucial to be able to perceive sensory measurements, infer its situation (state) in the environment, and plan and take actions accordingly. However, in partially-observable environments the state of the system cannot be determined exactly due to imperfect and noisy measurements. Based on the system's model, a filtering module (e.g., Kalman filter) can provide an estimation of the state, i.e., a probability distribution function (pdf) over all possible system states. This pdf describing the localization uncertainty is also referred to as \textit{belief} or \textit{information-state}. Based on the belief at every time step actions are chosen. To formalize this problem of finding the optimal mapping between perceived observations and the taken action, we rely on the most general formulation, i.e., Partially-Observable Markov Decision Processes (POMDP) \cite{Smallwood73,Kaelbling98}.
There are a number of challenges in dealing with POMDPs, including the \textit{curse of dimensionality} and \textit{curse of history}. Curse of dimensionality refers to the high dimensions of the belief space. If the underlying robotic system evolves in a discrete grid world with $ n $ cells, the corresponding belief space is an $ n $-dimensional continuous space. Moreover, if the underlying state space is continuous (which is the case for most real robotic applications), then the belief space is infinite dimensional. Methods such as \cite{Pineau03, Smith05-HSVI, Spaan05, Kurniawati08-SARSOP, kurniawati11-Migs, Grady2013AMA, littlefield2015importance, bai2015intention, patil2015scaling} alleviate these issues and take POMDPs to more challenging and realistic problems. In this paper, we consider a class of POMDPs that commonly arise in modeling the SLAP problem. \axx{The settings are similar to the ones used in KF-based localization literature \cite{Thrun2005,dissanayake2001solution}, such as} \textit{(i)} the system model is given as closed-form nonlinear equations, \textit{(ii)} the state/action/observation spaces are continuous, and \textit{(iii)} belief is unimodal, thus it is well-approximated by a Gaussian.
In addition to the above-mentioned challenges associated with POMDPs, when dealing with real-world physical systems, another important challenge is the discrepancy between the real models and the models used for computation, such as discrepancies in the environment map, the system and sensor models, and the noise models.
Such discrepancies can lead to deviations of the system from the desired plan. A plausible solution for this problem is the ability to carry out planning simultaneously with localization, i.e., the ability to replan dynamically to cope with changes in the environment and deviations resulting from model discrepancies and intermittent sensing failures.
To enable an online replanning scheme for SLAP, we rely on multi-query methods in belief space and specifically the Feedback-based Information Roadmap (FIRM) method, as discussed below. The main body of POMDP literature, in particular sampling-based methods, propose single-query solvers, i.e., the computed solution depends on the initial belief \cite{Prentice09,Berg11-IJRR, kurniawati11-Migs}. Therefore, in replanning (planning from a new initial belief) almost all the computations need to be reproduced, which limits their usage in solving SLAP where dynamic replanning is required.
\axx{However, multi-query methods such as FIRM provide a construction mechanism, independent of the initial belief of the system (Fig. \ref{fig:tree-with-funnels} and \ref{fig:funnel-FIRM}), making them suitable methods for SLAP.
}
The original FIRM framework provided a reliable methodology for solving the problem of motion planning under uncertainty by reducing the intractable dynamic programming (DP) to a tractable DP over the nodes of the FIRM graph. In this paper, we extend our previous work on FIRM by proposing a dynamic replanning scheme in belief space that enables online replanning for real-world applications in mobile robotics. This extension leads to intelligent behaviors that provably take the solution closer to the optimal one and guarantee that the success probability can only increase. In addition to theoretical results on the online generalization of FIRM, the main emphasis of this paper is on the implementation of the proposed SLAP solution on a physical robot. We investigate how dynamic online replanning can generate a feedback plan that leads to higher performance and success probability. We also demonstrate its unique ability to cope with discrepancies between real models and computational models, such as changes in the environment and large deviations that can globally change the plan by switching the optimal homotopy class of solutions. We believe these results lay the groundwork for advancing the theoretical POMDP framework toward practical SLAP applications and achieving long-term autonomy in robotic systems.
\subsection{Related Work}\label{subsec:RelatedWork}
Online replanning in belief space is a vital capability to solve the SLAP problem for two main reasons: First, belief dynamics are usually more random than state dynamics because the belief is directly affected by the measurement noise. Therefore, a noisy measurement or spurious data association
can cause large changes in the belief. Second, in practical applications, discrepancies between real and computational models are a significant source of error and occasionally cause the belief to behave differently from the expected nominal behavior. Thus, replanning simultaneously with localization gives the system the ability to recover from such deviations. In addition, enabling SLAP can help the robot cope with changes in the environment as well as recover from large unexpected deviations in its pose.
\axx{
\ph{Active localization}
}
Solving the planning problem alongside localization and mapping has been the topic of several recent works (e.g., \cite{hcarrillo-icra14-optim-active-slam}, \cite{hcarrillo-icra12-active-slam}, \cite{lcarlone-2014-jirs}, \cite{kim-eustice-pdn}). The method in \cite{GBS-IJRR-2015} presents an approach to uncertainty-constrained simultaneous planning, localization and mapping for unknown environments, and in \cite{lcarlone-icra14}, the authors propose an approach to actively explore unknown maps while imposing a hard constraint on the localization uncertainty at the end of the planning horizon. Our work assumes the environment map is known, formulates the problem in its most general form (POMDP), and focuses on online replanning in the presence of obstacles (possibly changing).
\axx{
\ph{General-purpose POMDP solvers}
There is a strong body of literature on general-purpose POMDP solvers (e.g., \cite{kurniawati2012global}, \cite{Bai10-contstatePOMDP}, \cite{chaudhari-ACC13}, \cite{smith2004heuristic}, \cite{ong2010planning}). We divide the literature on general-purpose POMDPs into two main categories. The first category is offline solvers, whose goal is to compute a policy (e.g., \cite{Pineau03,Spaan05,Smith05-HSVI,Kurniawati08-SARSOP}); a survey of the class of these methods relying on point-based solvers is \cite{shani2013survey}. The second category is online search algorithms. In these methods, instead of a policy (an action for all possible beliefs), the goal is to find the best action for the current belief using a forward search tree rooted in the current belief. \cite{Ross_2008online_survey} surveys recent methods in this category. In recent years, general-purpose online solvers have become faster and more efficient. AEMS \cite{Ross_2007_AEMS}, DESPOT \cite{somani2013despot}, ABT \cite{kurniawati-isrr13}, and POMCP \cite{silver2010monte} are among the most successful methods in this category. Direct application of these methods to SLAP-like problems is a challenge due to \textit{(i)} expensive simulation steps, \textit{(ii)} continuous, high-dimensional spaces, and \textit{(iii)} difficult tree pruning steps. We discuss these three challenges in the following paragraphs.
}
\axx{
The majority of the above-mentioned methods rely on simulating the POMDP model forward in time to create a tree of possible future scenarios. At each simulation step $ (x',z,c)\sim\mathcal{G}(x,u) $, the simulator $ \mathcal{G} $ simulates taking action $ u $ at state $ x $ and computes the next state $ x' $, the observation $ z $, and the cost and constraints of this transition.
When dealing with games (e.g., Go) or traditional POMDP problems (e.g., RockSample \cite{Ross_2008online_survey}), the forward simulation step and cost computation are computationally inexpensive. However, in SLAP-like problems computing costs is typically much more expensive. An important example cost is a boolean value that determines whether the system has collided with obstacles during this transition. This is a very expensive computation in robotics applications, which restricts the application of tree-based methods in settings where collision probabilities must be computed.
}
\axx{
The second challenge in applying tree-based methods to SLAP is the ``low chance of revisiting the same belief''. Tree-based methods require the simulator to revisit the same belief many times to learn its value. However, in SLAP-like problems with continuous state/action/observation spaces, the chances of visiting the same belief are almost zero.
Even the discretized version of the problem is extremely large. For a full tree of depth $ d $, the number of simulation steps along the tree is of the order $ n_{cost} = O((|\mathbb{U}||\mathbb{Z}|)^{d}) $.
Representing the belief by a minimum of $ n_{b} $ particles (to ensure accurate computation of collision probabilities), one needs to perform $ n_{coll} = O(n_{b}(|\mathbb{U}||\mathbb{Z}|)^{d}) $ collision checks ($ (x',z,c)\sim\mathcal{G}(x,u) $) to construct the full tree.
To provide some intuition on the size and type of the SLAP problem in this paper, where the robot works in an office-like environment, localizing itself using visual landmarks, we report typical values in our experiments:
We execute around 100 macro-actions, each of which has around 100 primitive actions, so the scenario depth is on the order of $ d = 10^4 $ steps. The typical number of particles is on the order of $ n_{b}=100 $. Our action, observation, and state spaces are continuous, $ |\mathbb{X}| = |\mathbb{U}| = |\mathbb{Z}|= \infty$, but a reasonable discretization could be of the order of $|\mathbb{U}| = 50^2$, where the speed of each wheel is discretized into 50 regions. Assuming the robot can see 5 landmarks at any given time and each landmark observation is 2-dimensional (range and bearing), where each dimension is discretized into 100 steps, we have $ |\mathbb{Z}|= 100^{10}$. Thus, the chance of revisiting the same belief in the discretized version of the problem is extremely low.
}
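As a sanity check on the counts above, the expressions $ n_{cost}=O((|\mathbb{U}||\mathbb{Z}|)^{d}) $ and $ n_{coll}=O(n_{b}(|\mathbb{U}||\mathbb{Z}|)^{d}) $ can be evaluated numerically. The sketch below uses the small action/observation counts of Fig.~\ref{fig:tree-with-funnels}, not the full SLAP numbers (which would be astronomically large at $ d=10^4 $):

```python
# Sanity check of the tree-size expressions in the text:
# n_cost = (|U| * |Z|)**d simulation steps for a full tree of depth d,
# and n_coll = n_b * n_cost collision checks with n_b particles per belief.

def tree_simulation_steps(n_actions: int, n_obs: int, depth: int) -> int:
    """Number of (x', z, c) ~ G(x, u) simulation calls in a full tree."""
    return (n_actions * n_obs) ** depth

def tree_collision_checks(n_particles: int, n_actions: int,
                          n_obs: int, depth: int) -> int:
    """Collision checks when each belief carries n_particles particles."""
    return n_particles * tree_simulation_steps(n_actions, n_obs, depth)

# Toy instance with 3 actions and 2 observations (as in Fig. 1), 100 particles:
# already at depth 8 the count exceeds 10^8 collision checks.
counts = {d: tree_collision_checks(100, 3, 2, d) for d in (2, 4, 8)}
```

Even this toy branching factor of $ |\mathbb{U}||\mathbb{Z}|=6 $ makes the full tree intractable at realistic depths, which motivates the funnel-based graph construction described next.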
\axx{
Finally, unlike many traditional POMDP domains where the domain structure (e.g., game rules) prunes a lot of tree branches, pruning the search tree is much more challenging in SLAP-like problems. For example, consider a motion planning problem where a grid of homotopy classes exists amid obstacles, so there is an exponential number of paths from one corner of the grid to the opposite corner. Now, imagine that at the end of this grid there is a narrow passage that all these paths have to go through to reach the goal. Then none of these paths (tree branches) can be pruned, because to determine which path is better, the belief has to be propagated along all of them to see which one leads to a smaller collision probability at the final narrow passage. Note that the problem has a terminal belief and there is no discount factor.
}
\axx{
Thus, clearly applying exact general-purpose POMDP solvers to this type of problem is a challenge, mainly because one needs to compute accurate costs (e.g., collision probabilities) very deep in the search tree. To tackle this problem, we exploit the additional structure existing in the SLAP problem, including knowledge of the closed-form dynamics and sensor models, to design closed-loop controllers. We further restrict the scope of the problem to Gaussian belief space. Finally, we add additional structure (funnels) to the problem that leads to suboptimal solutions but allows us to solve the problem and provide guarantees on the safety constraints (collision probabilities). Further, using rollout methods, we take this suboptimal solution closer to the optimal solution by bypassing unnecessary funnels in the online phase.
It is worth noting that most of the above-mentioned tree-based methods are complementary to the proposed method and can be combined with the ideas presented in this paper. We discuss some potentially useful combinations in the future work section.
}
\axx{
\ph{Continuous Gaussian POMDP solvers}
A different class of methods restricts the form of uncertainty to Gaussian and extends the traditional deterministic motion planning methods (e.g., PRM and RRT) to belief space. Examples include \cite{Prentice09}, \cite{Berg11-IJRR}, and \cite{Bry11}. More recently, \cite{Berg12AAAI} proposes an efficient approximate value iteration method. Starting from an initial solution (trajectory in belief space), it converges to the closest local minimum to the initial solution. These methods are single-query (the solution is valid for a given initial belief), thus in case of replanning from a new belief most of the computation needs to be redone. Replanning becomes more challenging when the planner has to switch the plan from one homotopy class to another.
}
\axx{
\ph{RHC with direct optimization}
Another class of planners relies on direct optimization methods in a Receding Horizon Control (RHC) manner. The optimization variable is typically an open-loop sequence of actions. The replanning algorithm in RHC can be recapped as follows: at every step, a sequence of optimal actions is computed within a limited horizon of $ T $ steps. Then, only the first action is executed and the rest is discarded. The executed action takes the system to a new point, and from this new point a new sequence of optimal actions is recomputed within horizon $ T $; this process is repeated until the system reaches the goal region.} The RHC framework was originally designed for deterministic systems and its extension to stochastic systems and belief space planning is still an open problem. A direct approach is to replace the uncertain quantities (such as future observations) with their nominal values (e.g., most likely observations), and then treat the stochastic system as a deterministic one and use it in an RHC framework (e.g., \cite{Erez2010}, \cite{Platt10}, \cite{Chakrav11-IRHC}, \cite{He11JAIR}, \cite{Toit10}).
However, in such an approach the optimization is carried out only within a limited horizon, and therefore the system may locally choose good actions but after a while may find itself in a high-cost region.
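The RHC replanning loop described above can be sketched schematically. All problem-specific functions below are hypothetical placeholders for illustration, not part of any planner discussed in this paper:

```python
# Schematic receding-horizon control (RHC) replanning loop, as described in
# the text: optimize an open-loop action sequence over a limited horizon T,
# execute only the first action, discard the rest, and repeat.

def rhc_loop(b0, T, plan, execute, in_goal, max_steps=1000):
    """plan(b, T) -> list of T actions; execute(b, u) -> next state/belief."""
    b = b0
    for _ in range(max_steps):
        if in_goal(b):
            break
        u_seq = plan(b, T)          # open-loop plan over limited horizon T
        b = execute(b, u_seq[0])    # apply only the first action
    return b

# Toy 1-D example: the state is a scalar and each action moves one unit.
b_final = rhc_loop(0, T=3,
                   plan=lambda b, T: [1] * T,       # placeholder optimizer
                   execute=lambda b, u: b + u,      # placeholder dynamics
                   in_goal=lambda b: b >= 5)
```

The limited-horizon myopia discussed next is visible in this structure: `plan` only scores the next `T` steps, so costs beyond the horizon cannot influence the chosen action.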
\axx{
\ph{POMDP applied to physical robots}}
From an experimental point of view, a few recent works have focused on applying belief space planning to real-world robots. \cite{arne-brock-icra2014} implements a belief planner on a mobile manipulator with time of traversal as a cost metric. \cite{kaelbling2012integrated} is an integrated task and motion planner, utilizing symbolic abstraction, whose performance is demonstrated on a PR2 robot tasked with picking and placing household objects. In \cite{alterovitz-iros14}, the authors develop a stochastic motion planner and show its performance on a physical manipulation task where unknown obstacles are placed in the robot's operating space and the task-relevant objects are disturbed by external agents. \axx{\cite{bai2015intention} extends the application of POMDP methods to autonomous driving in a crowd, where an autonomous golf cart drives amid pedestrians and avoids them by predicting their intentions.}
\axx{Authors in \cite{Marthi_RSS12_PR2} apply a POMDP-based planner to navigate a PR2 robot in an office-like environment. The paper proposes an elegant way of incorporating environment changes into the planning framework and can cope with changes in the homotopy class. The main difference with our method is that the authors are concerned about the uncertainty in obstacles rather than the robot and assume that the robot position in the map is perfectly known.}
\subsection{Highlights and Contributions}\label{subsec:contributions}
This paper extends our previous work in \cite{Ali-FIRM-ICRA14}. Compared to \cite{Ali-FIRM-ICRA14}, we discuss in detail the concepts of rollout-based belief space planning and policy execution, and present extensive simulation and experimental results to demonstrate the performance improvements made by using the proposed method. We also present analyses that guarantee a lower execution cost and failure probability as compared to the nominal FIRM policy. The main contributions, highlights, and impact of this work can be summarized as follows.
\ph{Online belief planner to enable SLAP}
We propose an online planning method in belief space. The method belongs to the class of rollout-based methods \cite{Bertsekas07} and extends them to the belief space. Compared to belief-space RHC methods, this method is not limited to a horizon, does not get stuck in local minima, and does not assume deterministic future observations.
\ph{Online switching between homotopy classes}
In motion planning, homotopy classes of paths refer to sets of paths that can be deformed
into each other by continuous transformation (bending and stretching) without
passing through obstacles \cite{Bhattacharya11}. A unique feature of the presented method is that it is capable of updating the plan globally online, even when the homotopy class of optimal solution has changed. This feature allows the proposed planner to work in changing environments and cope with large deviations.
\newcommand{1.8in}{1.8in}
\begin{figure*}[ht!]
\centering
\subfigure[A simple scenario with a FIRM roadmap, robot and environment as depicted.]{\includegraphics[width=1.8in]{rolloutcartoon1.png}}
\hspace{0.1in}
\subfigure[The rollout policy is computed periodically. Four candidate edges are compared (three of them are shown in dashed line and the last one is the rest of the current edge.)]{\includegraphics[width=1.8in]{rolloutcartoon2.png}}
\hspace{0.1in}
\subfigure[In clutter-free regions, rollout takes a shorter route (edge 3), increasing performance and speed while losing certainty (i.e., skipping node stabilization).]{\includegraphics[width=1.8in]{rolloutcartoon3.png}}
\hspace{0.1in}
\subfigure[While completing edge 3, the new rollout further cuts down task execution time by taking shorter route through a newly computed rollout edge 2.]{\includegraphics[width=1.8in]{rolloutcartoon4.png}}
\hspace{0.1in}
\subfigure[The robot is approaching the cluttered region. As needed the planner will slow the robot down to trade performance with certainty.]{\includegraphics[width=1.8in]{rolloutcartoon5.png}}
\hspace{0.1in}
\subfigure[Stabilization reduces localization uncertainty (covariance shrinks), thus guaranteeing high success probability.]{\includegraphics[width=1.8in]{rolloutcartoon6.png}}
\hspace{0.1in}
\subfigure[Stabilization occurs again as robot is still passing through the narrow passage.]{\includegraphics[width=1.8in]{rolloutcartoon7.png}}
\hspace{0.1in}
\subfigure[New rollout connections allow bypassing stabilization.]{\includegraphics[width=1.8in]{rolloutcartoon8.png}}
\hspace{0.1in}
\subfigure[The robot approaching the goal.]{\includegraphics[width=1.8in]{rolloutcartoon9.png}}
\caption{A representational scenario depicting how rollout-based planner achieves higher performance compared to the standard FIRM algorithm while guaranteeing robustness. The 9 scenes depict different stages of task execution as the robot moves from the start to goal location.}
\label{fig:rollout-cartoon}
\end{figure*}
\axx{\ph{Smart stabilization policy}}
The proposed method supersedes a state-of-the-art method, FIRM \cite{Ali13-IJRR}, in performance, success probability, and ability to cope with changing environments. It builds upon FIRM and inherits the desired features of the FIRM framework such as robustness, scalability, and the feedback nature of the solution. \axx{But, it also significantly reduces the need for belief node stabilization in the original FIRM method to cases where it is absolutely necessary. Thus the proposed method can be viewed as a FIRM with a smart selective stabilization policy.} In the original FIRM framework, at every node the system needs to steer its belief to reach the belief node (each graph node is a belief, i.e., a particular localization uncertainty). But, in this paper, by embedding an online local planning module in the FIRM framework, we achieve a locally optimal tradeoff between stabilization to a node (i.e., exploring the information space to reach the exact belief node) and moving forward towards the goal (exploiting the gradient of the local cost function), while the global optimality on the graph is still guaranteed by solving dynamic programming. As a result of this optimal tradeoff, interesting behaviors emerge out of the algorithm without encoding any heuristic. These behaviors trade off information and energy. For example, consider a case where the desired cost is to ``reach a goal while minimizing the probability of colliding with obstacles''. In that case, in the open areas where there are no narrow passages, the system bypasses the belief node stabilizations. It speeds up and does not waste any time gathering information and reducing its uncertainty, as there is not much benefit in doing so in obstacle-free regions. However, once it is faced with obstacles or narrow passages, it automatically decides to perform stabilization (partially) until the uncertainty is shrunk enough to safely traverse the narrow passage. Fig.~\ref{fig:rollout-cartoon} shows this phenomenon pictorially.
\ph{Performance guarantees}
\axx{We provide lower bounds on the performance of the method and show that in stationary environments, the performance and success probability of the proposed method always exceeds (or in the worst case is equivalent to) those of the FIRM method. }
\ph{Applications to physical systems} \axx{Among the set of methods that cope with continuous state/action/observation POMDPs, only a very small number (e.g., \cite{arne-brock-icra2014},\cite{kaelbling2012integrated}, \cite{bai2015intention},\cite{Marthi_RSS12_PR2}) have been applied to physical systems due to the computational complexity of these methods in real-world robotics problems.} An important contribution of this work is to implement a continuous state/action/observation POMDP solver on a physical robot in a real-world office-like environment. We explain this procedure in detail and discuss the theoretical tools and methods designed during this process to help with the real-world implementation, including the lazy feedback approach.
\section{Belief Space Planning for Mobile Robots} \label{sec:FIRM-for-phsyical systems}
In this section, we briefly describe the abstract framework of Feedback-based Information RoadMap (FIRM) followed by a description of its concrete implementation in our experiments. We refer the reader to \cite{Ali13-IJRR,Ali11-FIRM-IROS} for a more in-depth description of the abstract FIRM method.
\subsection{Planning Under Uncertainty} \label{sec:planning-under-incertainty}
In motion planning under uncertainty, we are looking for a policy $\pi_k$ at time-step $k$ that generates a control action $u_k$ based on available information about the robot. We start by defining some key terms. Consider a system whose state is denoted by $x_k$ at the $k$-th time step, and let $u_k$ and $w_k$ be the control action and the motion noise, respectively, at time $k$. Let us denote the state evolution model by $x_{k+1} = f(x_k,u_k,w_k)$. In a partially observable system, we do not have access to the true state of the system. Rather, we get some noisy observations of the robot state. Let us denote the sensor measurement (or observation) vector by $z_k$ at the $k$-th time step and the measurement model by $z_k = h(x_k,v_k)$, where $v_k$ denotes the sensing noise. The only data that is available for decision making at the $ k $-th time step (i.e., generating $ u_k $) is the history of observations and controls: $\mathcal{H}_{k}=\{z_{0:k},u_{0:k-1}\}=\{z_{0},z_{1},\cdots,z_{k},u_{0},\cdots,u_{k-1}\} $.
The conditional probability distribution over all possible system states, $b_{k}=p(x_{k}|\mathcal{H}_{k}) $, which is referred to as the \textit{belief} or \textit{information-state}, compresses the data $ \mathcal{H}_{k} $. It is well-known that in Bayesian filtering, the belief can be computed recursively based on the last action and the current observation $b_{k+1}=\tau(b_k,u_k,z_{k+1})$ \cite{Bertsekas07},\cite{Thrun2005}:
\begin{align}
\nonumber
b_{k+1}&=\alpha{p(z_{k+1}|x_{k+1})\int_{\mathbb{X}}p(x_{k+1}|x_{k},u_{k})b_{k}dx_{k}}=:\tau(b_k,u_k,z_{k+1}).
\end{align}
where, $ \alpha={p(z_{k+1}|\mathcal{H}_{k},u_{k})}^{-1} $ is the normalization constant. As a result of filtering, the action $ u_{k} $ can be taken based on the belief $ b_{k} $ using a policy (planner) $ \pi_{k} $, i.e., $ u_{k}=\pi_{k}(b_{k}) $. It is well-known that $ \pi_{k} $ is the solution of a POMDP, which is intractable over continuous state/action/observation spaces.
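For intuition, the recursion $ b_{k+1}=\tau(b_k,u_k,z_{k+1}) $ can be illustrated on a tiny discrete state space, where the integral becomes a sum. This is a sketch only: the setting in this paper is continuous with Gaussian beliefs, and all model numbers below are made up.

```python
def bayes_update(b, u, z, trans, obs):
    """One step of the belief recursion b_{k+1} = tau(b_k, u_k, z_{k+1}):
    predict with p(x'|x,u), correct with p(z|x'), then normalize."""
    n = len(b)
    # Prediction: sum over x of p(x'|x,u) * b(x)  (the integral in the text).
    prior = [sum(trans[u][x][xp] * b[x] for x in range(n)) for xp in range(n)]
    # Correction: multiply by the observation likelihood p(z|x').
    posterior = [obs[xp][z] * prior[xp] for xp in range(n)]
    alpha = 1.0 / sum(posterior)   # normalization constant 1 / p(z | H, u)
    return [alpha * p for p in posterior]

# Toy 2-state example: one action (u=0), two observations (z in {0, 1}).
trans = {0: [[0.9, 0.1], [0.2, 0.8]]}   # trans[u][x][x'] = p(x'|x,u)
obs = [[0.8, 0.2], [0.3, 0.7]]          # obs[x'][z]      = p(z|x')
b = bayes_update([0.5, 0.5], 0, 1, trans, obs)
```

Observing $ z=1 $, which is more likely in state 1, shifts the belief mass toward state 1, exactly as the correction term $ p(z_{k+1}|x_{k+1}) $ dictates.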
\subsection{A Brief Overview of FIRM}\label{subsec:FIRM-overview}
FIRM is a framework designed to reduce the mentioned intractable POMDP problem to a tractable problem, by generating a representative graph (PRM: Probabilistic Roadmap Method) in the belief space.
Similar to PRMs where the \textit{solution path} is a concatenation of local paths, in FIRM the \textit{solution policy} is a concatenation of local policies. Every node in an FIRM is a small region $B=\{b : \|b-\grave{b}\|\leq \epsilon\}$ around a sampled belief $ \grave{b} $. We denote the $ i $-th node by $ B^{i} $ and the set of nodes by $ \mathbb{V}=\{B^{i} \} $. Each edge in an FIRM is a closed-loop local controller whose goal is to steer the belief into the target node of the edge. An edge controller connecting nodes $ i $ and $ j $ is denoted by $ \mu^{ij} $ and the set of edges by $ \mathbb{M}=\{\mu^{ij} \} $. \axx{A metaphor for each local controller is a ``funnel in belief space''. As depicted in Fig. \ref{fig:funnel-FIRM}, each funnel steers the set of beliefs to a milestone belief. Further, using the slide-funnel composition, we can create sparse graphs of funnels as shown in Fig. \ref{fig:funnel-FIRM}. A basic instantiation of a funnel in belief space is the stationary Linear Quadratic Gaussian (SLQG) controller (see Appendix C in \cite{Ali13-IJRR}) and a basic instantiation of a slide in belief space is the Time-Varying Linear Quadratic Gaussian (TV-LQG) controller (see Appendix B in \cite{Ali13-IJRR}).}
\begin{figure}[ht]
\centering
\subfigure[\axx{Belief funnel}]{\includegraphics[width=1.1in]{funnelBelief.png}\label{fig:belief-funnel}}
\subfigure[\axx{Funnel chain}]{\includegraphics[width=0.9in]{funnelChaining.png}}
\subfigure[\axx{Funnel graph}]{\includegraphics[width=1.3in]{funnelGraphing.png}}
\subfigure[\axx{Funnel-slide-funnel}]{\includegraphics[width=1.4in]{funnelSlideFunnel.png}}
\subfigure[\axx{FIRM: graph of funnel-slide-funnel}]{\includegraphics[width=1.9in]{funnelSlideGraph.png}\label{fig:FIRM-slide-funnel-graph}}
\caption{\axx{An extension of sequential composition methods \cite{burridge1999sequential} to belief space. (a) A funnel in belief space that collapses a set of Gaussian distribution to a particular Gaussian distribution, referred to as the graph node or milestone. The 2D projection denotes the belief space, where each point represents a full Gaussian distribution. The projection of the mouth of funnel is a metaphor for its region of attraction in belief space. (b) A chain of funnels to guide the belief towards a goal. (c) A graph of funnels, where the tip of multiple funnels can fall into the region of attraction of a single funnel. (d) For a sparse set of funnels, one can use tracking controllers (slide) to create the funnel-slide-funnel structure. (e) Graph of funnel-slide-funnel. FIRM graph is of this type.}}
\label{fig:funnel-FIRM}
\end{figure}
\axx{Given a graph of these local controllers (Fig. \ref{fig:FIRM-slide-funnel-graph}), we can define policy $ \pi^{g} $ on the graph as a mapping from graph nodes to edges; i.e., $ \pi^{g}:\mathbb{V}\rightarrow\mathbb{M} $. $ \Pi^{g} $ denotes the set of all such graph policies.} Having such a graph in belief space, we can form a tractable POMDP on the FIRM graph (so-called FIRM MDP):
\vspace{-2pt}
\begin{align}\label{eq:FIRM-MDP}
\pi^{g^{*}}=\arg\min_{\Pi^{g}} \mathbb{E}\sum_{n=0}^{\infty}C^{g}(B_{n},\pi^{g}(B_{n}))
\vspace*{-3pt}
\end{align}
where $ B_{n} $ is the $ n $-th visited node and $ \mu_{n} $ is the edge taken at $ B_{n} $. $ C^{g}(B,\mu):=\sum_{k=0}^{\mathcal{T}}c(b_{k},\mu(b_{k})) $ is the generalized cost of taking local controller $ \mu $ at node $ B $ centered at $ b_{0} $.
We incorporate the failure set in planning by adding a hypothetical FIRM node $ B^{0}=F $ to the list of FIRM nodes. Since the FIRM MDP in Eq.\eqref{eq:FIRM-MDP} is defined over a finite set of nodes, it can be solved by computing the graph node cost-to-go's through the following dynamic programming problem:
\begin{align}\label{eq:FIRM-DP}
\!\!\!\!J^{g}(B^{i}) \!= \!\min_{\mu} \{C^{g}(B^{i},\mu)+\!\sum\limits_{\gamma=0}^{N}\mathbb{P}^{g}(B^{\gamma}|B^{i},\mu)J^{g}(B^{\gamma})\}
\end{align}
and $ \pi^{g}(B^{i}) =\mu^{*}$, where $ \mu^{*} $ is the minimizing argument of the above problem. $ \mathbb{P}^{g}(B^{\gamma}|B^{i},\mu) $ is the probability of reaching $ B^{\gamma} $ from $ B^{i} $ under $ \mu $. The failure and goal cost-to-go's (i.e., $ J^{g}(B^{0}) $ and $ J^{g}(B^{goal}) $) are set to a suitably high positive value and zero, respectively.
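Since the FIRM MDP is defined over a finite node set, Eq. \eqref{eq:FIRM-DP} can be solved by standard value iteration over tabulated edge costs and transition probabilities. The following sketch is illustrative, not the paper's implementation; the node layout (node 0 as failure, last node as goal) and the failure cost value are assumptions of the sketch:

```python
import numpy as np

def solve_firm_dp(C, P, J_fail=1e6, n_iters=1000, tol=1e-9):
    """Value iteration for the FIRM DP (Eq. FIRM-DP, sketch).

    C[i, m]    : generalized cost C^g of taking local controller m at node i
                 (np.inf marks a non-existent edge).
    P[i, m, g] : transition probability P^g(B^g | B^i, mu^m).
    Convention (illustrative): node 0 is the failure node B^0, the last
    node is the goal; their cost-to-go's stay fixed during the backups.
    Returns the cost-to-go J and the greedy graph policy pi (edge per node).
    """
    n_nodes = C.shape[0]
    J = np.zeros(n_nodes)
    J[0] = J_fail                          # suitably high failure cost-to-go
    for _ in range(n_iters):
        Q = C + P @ J                      # Bellman backup for every (node, edge)
        J_new = Q.min(axis=1)
        J_new[0], J_new[-1] = J_fail, 0.0  # terminal nodes remain fixed
        if np.max(np.abs(J_new - J)) < tol:
            J = J_new
            break
        J = J_new
    pi = (C + P @ J).argmin(axis=1)        # greedy graph policy pi^g
    return J, pi
```

On a toy 3-node graph, the backup correctly prefers a more expensive but safer edge once the failure cost-to-go dominates.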
\axx{
The collision (failure) probability of FIRM starting from a given node $ B^{i} $ can be computed \cite{Ali12-ProbComp-ICRA} as:
\begin{align}
\mathbb{P}(Fail|B^i,\pi^{g}) = 1 - \Gamma_{i}^{T}(I-Q)^{-1}R_{goal},
\end{align}
where $ \Gamma_{i} $ is a column vector of zeros with only the $ i $-th element set to one. $ Q $ is a matrix whose $ (i,j) $-th element is $ Q[i,j]=\mathbb{P}(B^{i}|B^{j},\pi^{g}(B^{j})) $ and $ R_{goal} $ is a column vector whose $ j $-th entry is $ R_{goal}[j]= \mathbb{P}(B^{goal}|B^{j},\pi^{g}(B^{j}))$. It can be shown that FIRM is an anytime algorithm \cite{Ali12-ProbComp-ICRA}, i.e., in a given static environment, as the number of nodes increases, the cost (e.g., the failure probability) decreases. As will be discussed in the next section, this failure probability will be an upper bound for the failure probability of the FIRM-based rollout planner.
}
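The absorbing-chain formula above is a direct linear solve; the sketch below (with a hand-built single-transient-node chain as the test case, an assumption of the sketch) evaluates $1-\Gamma_{i}^{T}(I-Q)^{-1}R_{goal}$:

```python
import numpy as np

def firm_failure_prob(i, Q, R_goal):
    """P(Fail | B^i, pi^g) = 1 - Gamma_i^T (I - Q)^{-1} R_goal.

    Q[a, b] = P(B^a | B^b, pi^g(B^b)) over the transient nodes and
    R_goal[b] = P(B^goal | B^b, pi^g(B^b)), as defined in the text.
    """
    n = Q.shape[0]
    gamma_i = np.zeros(n)
    gamma_i[i] = 1.0                 # indicator of the start node B^i
    # (I - Q)^{-1} R_goal via a linear solve rather than explicit inversion
    return 1.0 - gamma_i @ np.linalg.solve(np.eye(n) - Q, R_goal)
```

For a single transient node that self-loops with probability 0.5 and reaches the goal with probability 0.4 per attempt, the success probability is $0.4/(1-0.5)=0.8$, so the failure probability is 0.2.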
\subsection{Concrete FIRM instance in our implementation} \label{subsec:FIRM-elements}
Here we discuss the concrete realization of the FIRM graph constructed for conducting the experiments.
\ph{One-step-cost}
Although the objective function can be general, the cost function we use in our experiments includes the localization uncertainty, control effort, and elapsed time:
\begin{align}\label{eq:one-step-cost}
c(b_{k},u_{k})=\zeta_{p}\text{tr}(P_{k})+\zeta_{u}\|u_{k}\|+\zeta_{T}.
\end{align}
where $ \text{tr}(P_{k}) $ is the trace of the estimation covariance as a measure of localization uncertainty.
The norm of the control signal $ \|u_{k}\| $ denotes the control effort, and $\zeta_{T}$ is present in the cost to penalize each time lapse. Coefficients $ \zeta_{p} $, $ \zeta_{u} $, and $ \zeta_{T} $ are user-defined, task-dependent scalars that combine these costs to achieve a desirable behavior. In the presence of constraints (such as obstacles in the environment), we assume the task fails if the robot violates these constraints (e.g., collides with obstacles). \axx{Therefore, the collision and goal beliefs are terminal states such that $ J^{g}(B^{goal})=0 $ and $ J^{g}(B^{0})=J^{g}(F)$ is set to a suitably high cost-to-go. Note that typically the one-step cost in Eq. \eqref{eq:one-step-cost} is defined in the state space (i.e., the cost of taking action $ u $ at state $ s $). While our cost can be written as a state space cost, writing it directly in belief space better demonstrates the ``active'' localization aspect of the work (in the sense of minimizing the uncertainty in the localization belief) along the plan.
}
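The one-step cost in Eq. \eqref{eq:one-step-cost} is straightforward to evaluate; a minimal sketch follows, with unit weights as placeholders (the $\zeta$ coefficients are task-dependent and not specified numerically in the text):

```python
import numpy as np

def one_step_cost(P_k, u_k, zeta_p=1.0, zeta_u=1.0, zeta_T=1.0):
    """c(b_k, u_k) = zeta_p * tr(P_k) + zeta_u * ||u_k|| + zeta_T.

    P_k: estimation covariance (localization uncertainty term),
    u_k: control vector (control-effort term),
    zeta_T: constant penalty per elapsed time step.
    The default weights are placeholders, not the experimental values.
    """
    return zeta_p * np.trace(P_k) + zeta_u * np.linalg.norm(u_k) + zeta_T
```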
\ph{Steering localization covariance} To construct a FIRM graph, we first need to sample a set of stabilizers (belief steering functions). Each stabilizer is a closed-loop controller whose role is to drive the localization uncertainty (or belief) to a FIRM node. A stabilizer consists of a filter and a separated controller \cite{Kumar-book-86}. The filter governs the belief evolution and the separated-controller generates control signals based on the available belief at each time step \cite{Kumar-book-86}. To design these steering laws, we first sample a set of points $ \mathcal{V}=\{\mathbf{v}^{i}\} $ in the robot's state space and then, associated with each point, we construct a stabilizer \cite{Ali13-IJRR}. In the vicinity of each node $ \mathbf{v}^{j} $, we rely on the Stationary Kalman Filter (SKF), constructed by linearizing the system about the target point $ \mathbf{v}^{j} $, as the stabilizer's filter.
It can be shown that for an observable system, the covariance under the $ j $-th SKF approaches the covariance $ P^{+^{j}}_{s} $, which can be efficiently computed by solving a corresponding Discrete Algebraic Riccati Equation \cite{Arnold84DARE}.
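The stationary covariance $ P^{+^{j}}_{s} $ can be obtained from the DARE by filter/control duality. A sketch using SciPy's DARE solver (our choice of routine, not necessarily the paper's; any DARE solver would do), for Jacobians $A,H$ linearized about $\mathbf{v}^{j}$ with noise covariances $Q,R$:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def skf_stationary_covariance(A, H, Q, R):
    """Steady-state posterior covariance P+_s of the Stationary Kalman Filter.

    By duality, the steady-state *prior* covariance P- solves the DARE with
    arguments (A^T, H^T, Q, R); the posterior then follows from a single
    measurement-update step with the stationary Kalman gain.
    """
    P_prior = solve_discrete_are(A.T, H.T, Q, R)   # prediction covariance P-
    S = H @ P_prior @ H.T + R                      # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)           # stationary Kalman gain
    return P_prior - K @ H @ P_prior               # posterior P+_s
```

For the scalar system $A=H=Q=R=1$, the prior covariance is the golden ratio $(1+\sqrt{5})/2$ and the posterior is $(\sqrt{5}-1)/2$, which the sketch reproduces.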
\ph{Steering localization mean} While steering the belief toward node $ B^{j} $, the separated-controller $ \mu^{ij} $ is responsible for generating the control signals based on the available belief, i.e., $ u_{k}=\mu^{ij}(b_{k}) $. The iRobot Create is a nonholonomic robot and is modeled as a unicycle (see Section \ref{subsec:robotModel}); thus, to steer the estimation mean toward the target node $ \mathbf{v}^{j} $, one needs to use a controller designed for stabilizing nonholonomic systems (e.g., \cite{Oriolo02-DFL}, \cite{murray1993nonholonomic}, \cite{samson1991feedback}). However, the randomness of the estimation mean (resulting from the randomness of observations) calls for a controller that can perform such stabilization under uncertainty. To this end, we implemented different controllers including a polar coordinate-based controller \cite{deLuca2001control} and a Dynamic Feedback Linearization-based controller \cite{Ali12-DFL-IROS}. Observing the behavior of the different controllers, we adopted a variant of the Open-Loop Feedback Control (OLFC) scheme \cite{Bertsekas07} for stabilization purposes. In this variant of OLFC, for a given $ \mathbf{v}^{j} $, we compute an open-loop control sequence starting from the current estimation mean and ending at $ \mathbf{v}^{j} $. Then, we apply a truncated sequence of the first $ l $ controls ($ l=5 $ in our experiments)\footnote{Only one control (i.e., $ l=1 $) is not enough due to the nonholonomicity of the system.}. This process repeats every $ l $ steps until we reach the vicinity of $ \mathbf{v}^{j} $.
\ph{FIRM graph} Associated with each sample $ \mathbf{v}^{j} $, we can define the belief node $\grave{b}^{j}\equiv(\mathbf{v}^{j},P^{+^{j}}_{s})$. Defining the FIRM node as a ball around this point, $ B^{j}=\{b : \|b-\grave{b}^{j}\|\leq \epsilon\} $, we can steer the Gaussian localization uncertainty to this ball with the combination of OLFC and SKF. Accordingly, we sample $ N $ FIRM nodes $ \{B^{j}\}_{j=1}^{N} $.
The SKF/OLFC combination between nodes $ i $ and $ j $ forms the FIRM edge (local controller) and is denoted by $ \mu^{ij} $. We connect each node to $ k $-nearest neighbors and the set of constructed edges is denoted by $ \mathbb{M}=\{\mu^{ij} \} $.
Then, we compute and store costs and transition probabilities associated with each edge by offline simulations. Finally, we solve the DP in Eq. \eqref{eq:FIRM-DP} to get the optimal graph cost-to-go's $ J^{g}(B^{i}) $ and policy $ \pi^{g}(B^{i}) $ for all $ i $.
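The node-sampling and $k$-nearest-neighbor connection step can be sketched as follows; Euclidean distance is a placeholder for whatever metric is actually used to pick neighbors, and the controller construction for each edge is abstracted away:

```python
import numpy as np

def build_firm_edges(nodes, k=4):
    """Connect each sampled state-space point v^j to its k nearest neighbors.

    Returns the edge set as a list of (i, j) index pairs; each pair stands
    for a local controller mu^{ij} (SKF/OLFC combination in the text).
    The Euclidean metric here is an illustrative assumption.
    """
    V = np.asarray(nodes, dtype=float)
    D = np.linalg.norm(V[:, None, :] - V[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)            # no self edges
    edges = []
    for i in range(len(V)):
        for j in np.argsort(D[i])[:k]:     # k nearest neighbors of node i
            edges.append((i, int(j)))
    return edges
```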
\axx{
\ph{FIRM requirements} It is worth noting that FIRM is not a general-purpose POMDP solver. It provides a solution for a class of POMDP problems (including SLAP) where one can design closed-loop controllers with a funneling behavior in belief space. In the current instantiation of FIRM, designing funnels requires knowledge of the closed-form dynamics and sensor model. Also, the system needs to be locally linearizable at belief nodes, and the noise is assumed to be Gaussian.
}
\section{SLAP via Rollout-based Dynamic Replanning in Belief Space}\label{sec:Rollout-policy-for-replanning}
\axx{
As discussed earlier, SLAP in this paper refers to the problem of (re)planning dynamically every time the localization module updates the probability distribution on the robot's state. A principled solution to this problem calls for re-solving (or modifying) a POMDP in real-time. The SLAP problem in this paper corresponds to a restricted class of POMDPs, where the assumptions follow those typical of the Kalman filter-based SLAM literature in robotics: discussions are limited to POMDPs where the state transition model and the observation model are given in the form of locally linearizable (locally differentiable) explicit functions, the belief is Gaussian, the problem has a terminal belief, and there is no discount factor.
}
To enable SLAP we resort to dynamic replanning in belief space which will handle changes in the environment and goal location, large deviations in the robot's location, and discrepancies between real and computational models. In this section, we discuss the extension of the RHC and Rollout policy \cite{Bertsekas07} to the belief space to design a principled scheme for online replanning and increase the performance of FIRM by smart selective stabilization.
To make the connection with the rollout policy, we re-state the POMDP problem in the more general setting of time-varying policies.
\begin{align}\label{eq:belief-MDP-timeVarying}
\nonumber &\pi_{0:\infty}(\cdot) = \arg\min_{\Pi_{0:\infty}}\sum\limits_{k=0}^{\infty}\mathbb{E}\left[c(b_k,\pi_{k}(b_k))\right]\\
\nonumber &~s.t.~~~b_{k+1}=\tau(b_k,\pi_{k}(b_{k}),z_{k}),~~~z_{k}\sim p(z_{k}|x_{k})\\
&~~~~~~~~x_{k+1}=f(x_k,\pi_{k}(b_{k}),w_{k}),~~~w_{k}\sim p(w_{k}|x_{k},\pi_{k}(b_{k}))
\end{align}
In the above problem, we seek a sequence of policies $\pi_{0:\infty}=\{\pi_{0}(\cdot),\pi_{1}(\cdot),\pi_{2}(\cdot),\cdots \} $, where $ \pi_{k} $ maps any given $ b_{k} $ to the optimal action $ u_{k} $, and $ \Pi_{k} $ is the space of all possible policies at time step $ k $, i.e., $ \pi_{k}\in\Pi_{k} $. In the infinite horizon case, it can be shown that the solution is a stationary policy $ \pi_{s} $, i.e., $\pi_{1}=\pi_{2}=\cdots=\pi_{s} $, and the problem reduces to the one introduced earlier in this paper. However, we keep the time-varying format for reasons that will become clear below.
As discussed earlier, solving the above POMDP problem is computationally intractable over continuous state, action, and observation spaces. An even more difficult problem is SLAP, which requires re-solving the above POMDP ``online'' every time the localization pdf is updated. We handle this problem by reusing computations in an efficient way, as explained in the next subsection. We first start with RHC, which is a natural way of thinking about such repeated online solutions.
\ph{RHC in belief space} Receding horizon control (often referred to as rolling horizon or model predictive control) was originally designed for deterministic systems \cite{garcia1989model} to cope with model discrepancies. For stochastic systems, where the closed-loop (feedback) control law is needed, formulation of the RHC scheme is up for debate \cite{Li02,Hessem03,Shah12,Chakrav11-IRHC}.
In the most common form of RHC \cite{Bertsekas07}, the stochastic system is approximated with a deterministic one by replacing the uncertain quantities with their typical values (e.g., the maximum likelihood value). In belief space planning, the quantity that injects randomness into the belief dynamics is the observation. Thus, one can replace the random observations $ z_{k} $ with their deterministic maximum likelihood value $ z^{ml} $, where $ z_{k}^{ml}:=\arg\max_{z} p(z_{k}|x^{d}_{k}) $, in which $ x^{d} $ is the nominal deterministic value for the state that results from replacing the motion noise $ w $ by zero, i.e., $ x^{d}_{k+1}=f(x^{d}_{k},\pi_{k}(b^{d}_{k}),0) $. The deterministic belief $ b^{d} $ is then used for planning in the receding horizon window. At every time step, the RHC scheme performs a two-stage computation. In the first stage, the RHC scheme for deterministic systems solves an open-loop control problem (i.e., returns a sequence of actions $ u_{0:T} $) over a fixed finite horizon $ T $ as follows:
\vspace{-13pt}
\begin{align}\label{eq:RHC-BeliefSpace}
\nonumber &u_{0:T} = \arg\min_{\mathbb{U}_{0:T}}\sum\limits_{k=0}^{T}c(b^{d}_k,u_{k})\\
\nonumber &~~s.t.~~~~b^{d}_{k+1}=\tau(b^{d}_k,u_{k},z^{ml}_{k+1})\\
\nonumber &~~~~~~~~~~z^{ml}_{k+1}=\arg\max_{z} p(z|x^{d}_{k+1}) \\
&~~~~~~~~~~x^{d}_{k+1}=f(x^{d}_{k},u_{k},0)
\end{align}
In the second stage, it executes only the first action $ u_{0} $ and discards the remaining actions in the sequence $ u_{0:T} $. However, since the actual observation is noisy and is not equal to $ z^{ml} $, the belief $ b_{k+1} $ will be different from $ b^{d}_{k+1} $. Subsequently, RHC performs these two stages from the new belief $ b_{k+1} $. In other words, RHC computes an open-loop sequence $ u_{0:T} $ from this new belief, and this process continues until the belief reaches the desired belief location. Algorithm \ref{alg:RHC-beliefSpace} recaps this procedure. State-of-the-art methods such as \cite{platt-wafr12-RHC} and \cite{Toit10} utilize RHC in belief space. \cite{Toit10} refers to the method as partially-closed loop RHC (PCLRHC) as it exploits partial information about future observations (i.e., $ z^{ml} $) and does not ignore them.
\begin{algorithm}[h!]
\caption{RHC with most likely observations for partially-observable stochastic systems}\label{alg:RHC-beliefSpace}
\textbf{input} : Initial belief $ b_{current}\in\mathbb{B} $, $ B_{goal}\subset\mathbb{B} $\\
{
\While{$ b_{current}\notin B_{goal} $}{
$ u_{0:T} = $ Solve the optimization in Eq.\eqref{eq:RHC-BeliefSpace} starting from $ b^{d}_{0}=b_{current} $;\\
Apply the action $ u_{0} $ to the system;\\
Observe the actual $ z $;\\
Compute the belief $ b_{current} \leftarrow \tau(b_{current},u_{0},z) $;\\
}
}
\end{algorithm}
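One pass of the loop in Algorithm \ref{alg:RHC-beliefSpace} can be sketched with the planner, sensing, and belief update abstracted as callables; all names below are placeholders, and the open-loop solver stands in for the optimization in Eq. \eqref{eq:RHC-BeliefSpace}:

```python
def rhc_step(b_current, solve_open_loop, apply_and_sense, tau):
    """One iteration of RHC with most likely observations (sketch).

    solve_open_loop(b) : returns u_{0:T} planned with z^{ml} substituted
                         for the random observations (deterministic surrogate)
    apply_and_sense(u) : applies u to the real system, returns the actual z
    tau(b, u, z)       : belief (filter) update
    """
    u_seq = solve_open_loop(b_current)   # open-loop plan over horizon T
    u0 = u_seq[0]                        # execute only the first action
    z = apply_and_sense(u0)              # the real observation is noisy
    return tau(b_current, u0, z)         # discard u_{1:T}; replan next step
```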
A known shortcoming of the stated RHC formulation is its limited horizon
which might lead the system to local minima by choosing actions that guide the robot toward ``favorable'' states (with low cost) in the near future followed by a set of ``unfavorable'' states (with a high cost) in the long run.
To improve the basic RHC, different variants have been proposed including the ``rollout policy'' \cite{Bertsekas07}. Here, we discuss how they can be extended and realized in belief space.
\ph{Rollout policy in belief space} Another class of methods that aims to reduce the complexity of the stochastic planning problem in Eq.$\,$\eqref{eq:belief-MDP-timeVarying} is the class of rollout policies \cite{Bertsekas07}, which are more powerful than the described version of RHC in the following sense: First, they
do not approximate the system with a deterministic one. Second, they avoid local minima using a suboptimal policy that approximates the true cost-to-go beyond the horizon. This function is referred to as the ``base policy'' and denoted by $ \widetilde{J} $. Formally, at each step of the rollout policy scheme, the following closed-loop optimization is solved:
\begin{align}\label{eq:Rollout-BeliefSpace}
\nonumber &\pi_{0:T}(\cdot) = \arg\min_{\Pi_{0:T}}\mathbb{E}\left[\sum\limits_{k=0}^{T}c(b_k,\pi_{k}(b_k))+\widetilde{J}(b_{T+1})\right]\\
\nonumber &~~s.t.~~b_{k+1}=\tau(b_k,\pi_{k}(b_{k}),z_{k}),~~~z_{k}\sim p(z_{k}|x_{k})\\
&~~~~~~~~x_{k+1}=f(x_k,\pi_{k}(b_{k}),w_{k}),~~~w_{k}\sim p(w_{k}|x_{k},\pi_{k}(b_{k}))
\end{align}
Then, only the first control law $ \pi_{0} $ is used to generate the control signal $ u_{0} $ and the remaining policies are discarded. Similar to the RHC, after applying the first control, a new sequence of policies is computed from the new point. The rollout algorithm is described in Algorithm \ref{alg:Rollout-BeliefSpace}.
\begin{algorithm}[h!]
\caption{Rollout algorithm in belief space}\label{alg:Rollout-BeliefSpace}
\textbf{input} : Initial belief $ b_{current}\in\mathbb{B} $, $ B_{goal}\subset\mathbb{B} $\\
{
\While{$ b_{current}\notin B_{goal} $}{
$ \pi_{0:T} = $ Solve optimization in Eq.\eqref{eq:Rollout-BeliefSpace} starting from $ b_{0} = b_{current} $;\\
Apply the action $ u_{0} = \pi(b_{0}) $ to the system;\\
Observe the actual $ z $;\\
Compute the belief $ b_{current} \leftarrow \tau(b_{current},u_{0},z) $;\\
}
}
\end{algorithm}
Although the rollout policy in belief space efficiently reduces the computational cost compared to the original POMDP problem, it is still formidable to solve since the optimization is carried out over the policy space. Moreover, there must be a base policy that provides a reasonable cost-to-go $ \widetilde{J} $. In the following, we propose a rollout policy in the belief space based on the FIRM-based cost-to-go.
\subsection{Enabling SLAP via FIRM-based rollout in belief space}\label{subsec:firm-based-rollout}
In this section, we discuss how a rollout policy in belief space (and hence SLAP) can be realized using the FIRM framework. As explained briefly, in FIRM the system transitions between two nodes\footnote{\axx{In the cartoon in Fig. \ref{fig:funnel-FIRM}, it looks like $ B^{j} $ is the sole destination for $ \mu^{ij} $. However, in dense graphs the belief under $ \mu^{ij} $ might be absorbed by a different funnel before reaching $ B^j $. The summation over $ \gamma $ in the following equations takes that into account.}} $B^i$ and $B^j$ at sampled beliefs $b^i$ and $b^j$ using a controller $\mu^{ij}$. The global-level decision-making is limited to when the system is in the region $B^i$; the rest of the time, the local controls are executed according to $\mu^{ij}$. In FIRM-based rollout, we lift this limitation by forcing the system to globally replan at every step to enable SLAP. In particular, suppose that at time $t$, the belief state of the system is $b_t$. Then we solve the following problem online for $b_t$:
\begin{enumerate}[leftmargin=0cm,itemindent=.5cm,labelwidth=0.4cm,labelsep=0cm,align=left]
\item We connect $b_t$ to all its FIRM neighbors within some radius $R$ using suitable controllers $\mu^{tj}$, designed in a similar way to the ones used as FIRM edges.
\item We evaluate the transition costs $C(b_t, \mu^{tj})$ and the probability of landing in nodes $ B^{\gamma} $ under the influence of the controller $ \mu^{tj} $ at $ b_t $, i.e., $\mathbb{P}(B^{\gamma} | b_t, \mu^{tj})$.
\item We evaluate the best edge outgoing from $ b_{t} $ by solving:
\begin{align}\label{eq:rollout-minimization}
\!\!\!\!j^{*} \!= \!\arg\min_{j} \{C(b_t,\mu^{tj})+\!\sum\limits_{\gamma=0}^{N}\mathbb{P}(B^{\gamma}|b_t,\mu^{tj})J^{g}(B^{\gamma})\}
\end{align}
where $J^{g}(B^{\gamma})$ is the nominal cost-to-go under the FIRM policy from node $B^{\gamma}$ and $J^g(B^0)$ is the failure cost-to-go as discussed in Section \ref{subsec:FIRM-overview}.
\item We choose $\mu^{tj^{*}}$ as the local controller at $ b_t $ if the expected success probability exceeds the current one. In other words, if $ \mu^{ij} $ is the current local controller at time $ t $, we only switch to $ \mu^{tj^{*}} $ if the following condition holds:
\begin{align}\label{eq:check-success-prob}
\mathbb{E}[success|b_t,\mu^{tj^{*}}]>\mathbb{E}[success|b_t,\mu^{tj}]
\end{align}
where expected success probability is
\begin{align}\label{eq:define-expected-success-prob}
\mathbb{E}[success|b_t,\mu^{t\alpha}]=\sum_{\gamma=1}^{N}\mathbb{P}(B^{\gamma}|b_t,\mu^{t\alpha})P^{success}(B^{\gamma})
\end{align}
and $P^{success}(B^{\gamma})$ is the probability of success for reaching the goal from FIRM node $B^{\gamma}$ under the nominal FIRM policy.
\end{enumerate}
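The per-step decision in items 2--4 reduces to a small vectorized computation over the candidate edges. A sketch follows, with hand-built cost and probability arrays as assumptions; the success-probability gating of Eq. \eqref{eq:check-success-prob} against the current controller is omitted here for brevity:

```python
import numpy as np

def rollout_best_edge(C_t, P_t, J_g, P_success):
    """One FIRM-rollout decision at belief b_t (Eq. rollout-minimization).

    C_t[j]       : cost of candidate local controller mu^{tj}
    P_t[j, g]    : P(B^g | b_t, mu^{tj}); column 0 is the failure node B^0
    J_g[g]       : FIRM cost-to-go of node B^g (entry 0 is the failure cost)
    P_success[g] : success probability from node B^g (entry 0 is 0)
    Returns the best edge index j* and its expected success probability
    (Eq. define-expected-success-prob, summing over gamma >= 1).
    """
    J_cand = C_t + P_t @ J_g             # candidate cost-to-go per edge
    succ = P_t[:, 1:] @ P_success[1:]    # expected success probability
    j_star = int(np.argmin(J_cand))
    return j_star, float(succ[j_star])
```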
Algorithm \ref{alg:Rollout-FIRM} describes planning with the proposed rollout process. We split the computation to offline and online phases. In the offline phase, we carry out the expensive computation of graph edge costs and transition probabilities. Then, we handle pose deviations and the changes in start/goal location by repeated online replanning, while reusing offline computations.
\begin{algorithm}[h!]
\caption{Rollout algorithm with FIRM as base policy}\label{alg:Rollout-FIRM}
\textbf{input} : Initial belief $ b_{t} $ and goal belief region $ B_{goal} $\\
Construct a FIRM graph and store nodes $ \mathbb{V}=\{B^{i} \} $, edges $ \mathbb{M}=\{\mu^{ij}\} $, cost-to-go $ J^{g}(\cdot) $, and success probabilities $ P^{success}(\cdot)$;$ ~~~ $\tcp{offline phase}
{
\While{$ b_t\notin B_{goal} ~~~~~~~~~~~~~~~~$\tcp{online phase}}{
Find $ r $ neighboring nodes $\mathbb{V}_{r}=\{B^{i}\}_{i=1}^{r} $ to $b_{t}$;\\
Set $ B_{t}=\{b_{t}\} $, $ J(B_{t}) = \infty $, and $ S = 0 $;\\
\ForAll{$B^j \in \mathbb{V}_{r}$}
{
$\mu^{tj}$ = Generate controller from $b_t$ to $B^j$ ; \\
$C(b_t, \mu^{tj}), \mathbb{P}(B^\gamma | b_t, \mu^{tj})$ = Simulate $\mu^{tj}$ to compute transition probability and expected cost; \\
Compute the expected success $ \mathbb{E}[success|b_t,\mu^{tj}] $;\\
\If{$ \mathbb{E}[success|b_t,\mu^{tj}]\geq S $}
{
Compute the candidate cost-to-go as $J^{cand} = C(b_t,\mu^{tj})+\!\sum_{\gamma=0}^{N}\mathbb{P}(B^{\gamma}|b_t,\mu^{tj})J^{g}(B^{\gamma})$;\\
\If{$ J^{cand} < J(B_{t})$ \label{line:fromAlg:condition}}
{$ J(B_{t}) = J^{cand}$ and $ S = \mathbb{E}[success|b_t,\mu^{tj}] $;\label{line:J-update}\\
$ \mu^{tj^{*}}=\mu^{tj} $;\label{line:mu-update}\\}
}
}
Apply the action $ u_{t} = \mu^{tj^{*}}(b_{t}) $ to the system;\\
Get the actual measurement $ z_{t+1} $;\\
Compute the next belief $ b_t \leftarrow \tau(b_{t},u_{t},z_{t+1}) $;\\
\If{user submits a new goal state $ \mathbf{v}^{goal} $}
{
$ B^{goal} \leftarrow$ Sample the corresponding FIRM node;\\
Add $ B^{goal}$ to the FIRM graph; $ \mathbb{V}\leftarrow\mathbb{V}\cup\{B^{goal} \} $;\\
Connect $ B^{goal} $ to its $ r $ nearest neighbors using edges $ \{\mu^{(i,goal)} \} $. Also, $ \mathbb{M}\leftarrow\mathbb{M}\cup\{\mu^{(i,goal)} \} $;\\
$ [J^{g}(\cdot),P^{success}(\cdot)] $ = DynamicProgramming($ \mathbb{V},\mathbb{M} $);\\
}
}
}
\end{algorithm}
Below, we discuss how Algorithm \ref{alg:Rollout-FIRM} realizes a variant of Eq. \eqref{eq:rollout-minimization} and extends the rollout policy methods \cite{Bertsekas07} to belief space. Following the concepts and terminology in \cite{Bertsekas07}, the nominal FIRM policy plays the role of the base policy. Accordingly, the cost-to-go of the FIRM policy is used to approximate the cost-to-go beyond the horizon. Given a dense FIRM graph, where nodes partition the belief space, i.e., $ \cup_{i} B^{i}=\mathbb{B} $, at the end of the time horizon $ T $ the belief $ b_{T+1} $ belongs to some FIRM node $ B $ with known cost-to-go. With a sparse FIRM graph, where nodes do not cover the entire belief space, we design local policies that drive the belief into a FIRM node at the end of the horizon. However, since the belief evolution is random, reaching a FIRM node at a deterministic time horizon $ T $ cannot be guaranteed. Therefore, we propose a new variant of rollout that defines the horizon based on the belief (instead of time).
\begin{align}\label{eq:IRM-rollout}
\nonumber \pi_{0:\infty}(\cdot)
&= \arg\min_{\widetilde{\Pi}}\mathbb{E}\left[\sum\limits_{k=0}^{\mathcal{T}}c(b_k,\pi_{k}(b_k))+\widetilde{J}(b_{\mathcal{T}+1})\right]\\
\nonumber &~~s.t.~~~~b_{k+1}=\tau(b_k,\pi_{k}(b_{k}),z_{k}),~~~z_{k}\sim p(z_{k}|x_{k})\\
\nonumber &~~~~~~~~~~x_{k+1}=f(x_k,\pi_{k}(b_{k}),w_{k}),~~~w_{k}\sim p(w_{k}|x_{k},\pi_{k}(b_{k}))\\
&~~~~~~~~~~b_{\mathcal{T}+1}\in \cup_{j}B^{j},
\end{align}
where for $ b_{\mathcal{T}+1}\in B^{j} $ we have
\begin{align}\label{eq:base-CostToGo}
\widetilde{J}(b_{\mathcal{T}+1})=J^{g}(B^{j})
\end{align}
and $ \widetilde{\Pi} $ is a restricted set of policies under which the belief will reach a FIRM node $ B^{j} $ in finite time. In other words, if $ \pi\in\widetilde{\Pi} $ and $ \pi=\{\pi_{1},\pi_{2},\cdots \} $, then for finite $ \mathcal{T} $, we have $ \mathbb{P}(b_{\mathcal{T}+1}\in\cup_{j} B^{j}|\pi) = 1$.
Thus, the last constraint in Eq. \eqref{eq:IRM-rollout} is redundant and automatically satisfied for suitably constructed $\widetilde{\Pi}$.
Also, the FIRM-based cost-to-go $ J^{g}(\cdot) $ plays the role of the cost-to-go beyond the horizon $ \widetilde{J}(\cdot) $ (Eq. \eqref{eq:base-CostToGo}).
Note that based on Algorithm \ref{alg:Rollout-FIRM}, we can provide guarantees on the performance of the proposed method. Before formally stating the results, recall that at each instance of rollout computation, the current belief $b_t$ is added as a virtual node $B^{virtual}_t$ to the FIRM graph to generate the augmented FIRM graph $ G^{a}_t $. A virtual node is a temporary node with no incoming edges; it is removed from the graph as soon as the system departs its vicinity.
\begin{proposition}
The performance and success probability of the FIRM-Rollout policy are lower bounded by those of the nominal FIRM policy at any belief state during execution of the planner.
\end{proposition}
\begin{proof}
As discussed, to compute the rollout at time $t$, belief $b_t$ is added to the FIRM graph as a virtual node,
with no incoming edges that drive the system into it. Therefore, the dynamic programming solution remains unchanged. Thus, the optimal cost-to-go from the virtual node $B^{virtual}_t$ is given by the minimum of the sum of the rollout edge cost and the cost-to-go from the target of rollout edge, i.e.,
\begin{align}
\nonumber J(B^{virtual}_{t}) = \min_{j} \{C(b_t,\mu^{tj})+\!\sum\limits_{\gamma=0}^{N}\mathbb{P}(B^{\gamma}|b_t,\mu^{tj})J^{g}(B^{\gamma})\}
\end{align}
Since the current FIRM edge is one of the edges over which the above minimization is carried out, the cost-to-go (performance) with rollout is upper (lower) bounded by the nominal FIRM policy cost (performance). Furthermore, due to the check in Eq. \eqref{eq:check-success-prob}, it can be further assured that the probability of success of the rollout policy is no less than that of the FIRM policy \axx{in static environments}.
Once the rollout is computed and the target node is chosen, the robot starts executing the controller $\mu^{tj^*}$ and leaves the vicinity of node $B^{virtual}_{t}$. This node is then removed from the graph; hence it is called a virtual node.
Further, it should be noted that as the robot moves on the virtual edge (the edge from node $B^{virtual}_{t}$ to $B^{j^{*}}$), the rollout process is repeated, which allows the robot to skip the belief stabilization as needed. Consequently, as the robot moves, due to rollout, it chooses actions that are never worse than the nominal FIRM policy. We refer the reader to Fig. \ref{fig:rollout-cartoon} for a visual explanation of the process.
$ \blacksquare $ \end{proof}
\textit{Remark:} If the desired criterion were merely the success probability, one could ignore the cost-to-go comparison condition in Algorithm \ref{alg:Rollout-FIRM} and only maximize the success probability.
In addition to improving the performance without compromising safety, the rollout procedure is particularly helpful in handling changes in the environment map. We discuss this aspect in the following section.
\axx{
\subsection{Complexity Analysis}
In this section, we analyze the computational complexity of the offline and online phase of the proposed algorithm.
}
\axx{
\ph{Offline phase}
We assume the environment is a hypercube $ [0,w]^{d} $. For constructing the offline policy on a $ k $-regular graph with $ N $ nodes, we have to simulate $ kN $ edges offline. Let us denote the number of particles describing the belief by $ n^{off}_b $. Assuming a fixed velocity $ V=1 $ on edges and simulation steps occurring every $ \Delta t $ seconds, the number of simulation calls (including collision checks) is $ n_{coll} = \sum_{s = 1}^{kN} n_b^{off}\Delta t^{-1} l_{s} $, where $ l_s $ is the length of the $ s $-th edge.
}
\axx{
Assuming a uniform distribution of the sampled points (in the sense of infinity norm) in the configuration space, the density of points is $ \rho = Nw^{-d} $. Accordingly, the dispersion \cite{Lavalle04Grid,Hsu06_IJRR_sampling} of the sampled points is $ \delta = wN^{-d^{-1}} $. Assuming all edges have equal length (in the $ l^{\infty}$-norm sense), the edge length of the underlying PRM (over which FIRM has been built) is $ l_{s}=\delta=w\sqrt[d]{N}^{-1} $.
\begin{align}
n_{coll} &= (n_b^{off}\Delta t^{-1})wkN^{1-d^{-1}}
\end{align}
}
\axx{
\ph{Online phase}
In the online phase, we connect each node to all nodes in the neighborhood of radius $ R $ (in the infinity norm). Thus, the size of the neighboring area for connection is $ R^{d} $, which encompasses $ R^{d}\rho $ neighboring points. For $ R = r\delta $, it will encompass $ r^{d} $ points. Thus, we have $ r^{d} $ new edges in the online phase. It can be shown that $ (i+1)^{d}-i^{d} $ of these edges have lengths in the range $i\delta< l_{s} <(i+1)\delta$.
}
\axx{
For all edges with lengths $i\delta< l_{s} <(i+1)\delta$, let us approximate $l_{s} $ by $ i^{+}\delta $, where $ i\leq i^{+}\leq i+1 $.
Then, the sum of the length of all new edges is:
\begin{align}
\nonumber
L_{s} &= \sum_{s=1}^{r^{d}}l_{s} = \sum_{i=1}^{r}\sum_{s = (i-1)^{d}+1}^{i^{d}}l_{s} =\delta\sum_{i=1}^{r}((i)^{d}-(i-1)^{d}-1)i^{+}
\end{align}
}
\axx{
Let us denote the number of particles describing belief by $ n_b $. The number of simulation calls (including collision checks) is:
\begin{align}
\nonumber
n_{coll} &= n_b\Delta t^{-1}L_{s}= n_b\Delta t^{-1}\sqrt[d]{N^{-1}}
\sum_{i=1}^{R\sqrt[d]{N}w^{-1}}\!\!\!\!\!((i)^{d}-(i-1)^{d}-1)i^{+}
\end{align}
}
\axx{
Upper/lower bounds on the number of collision checks can be obtained by setting $ i^{+} $ to its upper and lower bounds, i.e., $ i+1 $ and $ i $. To gain further insight into the complexity, let us assume $ i^{+} $ is a constant (i.e., all edge lengths are the same) and set it to its maximum value $ i^{+}=R\sqrt[d]{N}w^{-1} $. Then, the upper bound on collision checks $ n^{+}_{coll} $ is:
\begin{align}
\nonumber
n^{+}_{coll} &= (n_b\Delta t^{-1}wN^{-d^{-1}})
(R\sqrt[d]{N}w^{-1})
[(R\sqrt[d]{N}w^{-1})^{d}-R\sqrt[d]{N}w^{-1}]\\
&=n_b\Delta t^{-1}w^{-d}R^{d+1}N
-n_b\Delta t^{-1}w^{-1}R^{2}\sqrt[d]{N}
\end{align}
}
\axx{
Given this upper bound on the computation time and a uniform grid sampling strategy, the online computation time grows sub-linearly with the number of underlying FIRM nodes $ N $ in the worst case. Also, for a given dimension, the online computation time is polynomial in the connection radius $ R $. To remove the dimension from the equation and extend the results to random sampling, we can write the first term of the above equation as:
\begin{align}
\nonumber
n^{+}_{coll} &= (n_b\Delta t^{-1})R V\rho
\end{align}
where $ \rho $ is the density of samples in the environment and $ V $ is the volume of the connection neighborhood and $ R $ is the radius of the connection neighborhood.
}
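For concreteness, the closed-form upper bound $n^{+}_{coll}$ above can be evaluated directly; the parameter values in the sketch's check are arbitrary, chosen only to exercise the formula:

```python
def online_collision_check_bound(n_b, dt, w, d, R, N):
    """Upper bound on online collision checks:
    n+_coll = (n_b / dt) * (w^{-d} R^{d+1} N - w^{-1} R^2 N^{1/d}),
    assuming all new online edges take their maximal length.

    n_b: particles per belief, dt: simulation step, w: environment side,
    d: dimension, R: connection radius, N: number of FIRM nodes.
    """
    return (n_b / dt) * (R ** (d + 1) * N / w ** d
                         - R ** 2 * N ** (1.0 / d) / w)
```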
\subsection{Enabling SLAP in changing environments}
In this section, we discuss the ability of the proposed planner to handle changes in the obstacle map. We focus on a challenging case, where changes in the obstacle map are persistent and can possibly eliminate a homotopy class of solutions. Doors are an important example of this class. If the robot observes a door is closed (which was expected to be open), it might have to globally change the plan to get to the goal from a different passage. This poses a challenge to the state-of-the-art methods in the belief space planning literature.
To handle such changes in the obstacle map and replan accordingly, inspired by the lazy evaluation methods for PRM frameworks \cite{bohlin00lazyPRM}, we propose a method for lazy evaluation of the generated feedback tree, referred to as ``lazy feedback evaluation'' algorithm. The basic idea is that at every node the robot re-evaluates \textit{only} the next edge (or the next few edges up to a fixed horizon) that the robot will most likely take. By re-evaluation, we mean it needs to re-compute the collision probabilities along these edges. If there is a significant change (measured by $ \alpha $ in Alg. \ref{alg:lazy-feedback-eval}) in the collision probabilities, the dynamic programming problem is re-solved and new cost-to-go's are computed. Otherwise, the cost-to-go's remain unchanged and the robot keeps following its rollout-policy. Such lazy evaluation (computing the collision probabilities for a single edge or a small number of edges) can be performed online. The method is detailed in Algorithm \ref{alg:lazy-feedback-eval}.
\begin{algorithm}[h!]
\caption{Lazy Feedback Evaluation (Lazy Replanning)}\label{alg:lazy-feedback-eval}
\textbf{input} : FIRM graph\\
\textbf{output} : Updated cost-to-go, $J^{g}(\cdot)$ and success probabilities $ P^{success}(\cdot) $\\
{
Perceive the obstacles map; \\
\If{there is a change in map}
{
$ \mathcal{F}\leftarrow $ Retrieve the sequence of nominal edges returned by feedback up to horizon $ l $; Set $ ctr = 0 $;\\
\ForAll{edges $ \mu \in \mathcal{F}$}
{Re-compute collision probabilities $ \mathbb{P}_{new}(B,\mu) $ \axx{from start node $ B $ of edge $ \mu $};\\
\If{$ |\mathbb{P}_{new}(B,\mu) - \mathbb{P}(B,\mu)| > \alpha $}
{$\mathbb{P}(B,\mu)\leftarrow\mathbb{P}_{new}(B,\mu)$;\\
$ ctr++ $}
}
Update edge set $ \mathbb{M} $ based on new transition probabilities;\\
\If{$ ctr>0 $}
{$ [J^{g}(\cdot),P^{success}(\cdot)] $ = DynamicProgramming($ \mathbb{V},\mathbb{M} $);}
}
\Return $J^{g}(\cdot)$ and $P^{success}(\cdot)$;
}
\end{algorithm}
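As a minimal sketch, the lazy evaluation loop of Algorithm \ref{alg:lazy-feedback-eval} can be written as follows; here `recompute_coll_prob` and `dynamic_programming` are hypothetical stand-ins for the Monte Carlo edge re-evaluation and the DP solve, and the feedback is represented as a simple node-to-edge lookup table:

```python
def lazy_feedback_evaluation(feedback, start_node, horizon, coll_prob,
                             recompute_coll_prob, dynamic_programming, alpha):
    """Sketch of lazy replanning: re-check only the next few nominal edges.

    feedback:  dict mapping node -> next edge, with edge = (node, next_node)
    coll_prob: dict mapping edge -> cached collision probability
    recompute_coll_prob / dynamic_programming: placeholder hooks for the
    Monte Carlo edge evaluation and the DP solve (hypothetical names).
    """
    # Retrieve the nominal edge sequence up to the horizon l.
    edges, node = [], start_node
    for _ in range(horizon):
        if node not in feedback:
            break
        edge = feedback[node]
        edges.append(edge)
        node = edge[1]
    # Re-evaluate only those edges; count significant changes.
    ctr = 0
    for edge in edges:
        p_new = recompute_coll_prob(edge)
        if abs(p_new - coll_prob[edge]) > alpha:
            coll_prob[edge] = p_new
            ctr += 1
    # Re-solve the DP only if some collision probability changed noticeably.
    if ctr > 0:
        return dynamic_programming()
    return None  # cost-to-go's unchanged; keep following the rollout policy

# Tiny example: the first edge's collision probability jumps, so we replan.
fb = {'A': ('A', 'B'), 'B': ('B', 'G')}
probs = {('A', 'B'): 0.1, ('B', 'G'): 0.2}
result = lazy_feedback_evaluation(
    fb, 'A', horizon=2, coll_prob=probs,
    recompute_coll_prob=lambda e: 0.5 if e == ('A', 'B') else probs[e],
    dynamic_programming=lambda: 'replanned', alpha=0.05)
```

Only the edges within the horizon are touched, which is what keeps the re-evaluation cheap enough to run online.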
\textit{Remark}: Another challenge with these persistent changes is that they persist in memory. Imagine a case where the robot is in a room with two doors. Suppose that after checking both doors, the robot realizes they are closed. In those cases where no homotopy class of solutions to the goal remains, the door state is reset to ``open'' after a specified amount of time to encourage the robot to recheck the state of the doors.
It is important to note that it is the particular structure of the proposed planner that makes such replanning feasible online. The graph structure of the underlying FIRM allows us to \textit{locally} change the collision probabilities in the environment without affecting the collision probability of the rest of the graph \axx{(i.e., properties of different edges on the graph are independent of each other; see Fig. \ref{fig:funnel-FIRM} and \ref{fig:tree-with-funnels}).} Such a property is not present in the state-of-the-art sampling-based belief space planners (e.g., \cite{Prentice09},\cite{Berg11-IJRR}), where the collision probabilities and costs on \textit{all} edges are dependent on each other and hence need to be re-computed.
\section{Simulation Results}\label{sec:rollout-simulation}
In this section, we present simulation results for a comparison of the standard FIRM algorithm versus FIRM with Rollout in a 2D navigation problem. The simulation represents a motion planning scenario wherein the robot is tasked to go from a start location to multiple goal locations sequentially in an environment with clutter, narrow passages and asymmetrically placed landmarks/beacons. We conduct two simulations, the first with the standard FIRM algorithm and the second with the Rollout based policy framework presented in this work. All simulations were carried out on a Dell Precision desktop with a quad-core Intel(R) Xeon(R) E5-2609 CPU running at 2.40GHz and 16GB of RAM, running Ubuntu 14.04.
\subsubsection{Motion Model}
We represent the motion of the robot by a simple planar kinematic model in which each component of the state can be independently controlled. The state of the robot at time
$k$ is denoted by $ x_k = (\mathsf{x}_k, \mathsf{y}_k, \mathsf{\theta}_k)^T $ (2D position and the heading angle). We denote the control as $ u_k = (v_{x,k}, v_{y,k},\omega_k)^T $ and the process noise by $ w_k=(n_{v_x}, n_{v_y},n_{\omega})^T\sim\mathcal{N}(0,\mathbf{Q}_k) $.
Let $f$ denote the kinematics of the robot such that $x_{k+1} = f(x_k, u_k, w_k)=x_{k}+u_k\delta t+w_k\sqrt{\delta t}$.
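This kinematic model can be sketched directly in code; the noise standard deviations passed in below are illustrative placeholders, not values from the paper:

```python
import math
import random

def propagate(x, u, dt, Q_std):
    """Planar holonomic model: x_{k+1} = x_k + u_k*dt + w_k*sqrt(dt).

    x = (px, py, theta), u = (vx, vy, omega); Q_std holds the per-axis
    process-noise standard deviations (hypothetical values for illustration).
    """
    w = [random.gauss(0.0, s) for s in Q_std]
    return tuple(xi + ui * dt + wi * math.sqrt(dt)
                 for xi, ui, wi in zip(x, u, w))

# With zero noise the model reduces to a pure Euler step:
px, py, th = propagate((0.0, 0.0, 0.0), (1.0, 0.5, 0.5), dt=0.1,
                       Q_std=(0.0, 0.0, 0.0))
assert math.isclose(px, 0.1) and math.isclose(py, 0.05) and math.isclose(th, 0.05)
```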
\subsubsection{Observation Model}
We use a range-bearing landmark-based observation model in which the landmarks are assumed to be passive beacons placed at various locations in the environment. Each landmark has a unique ID associated with it, and this ID can be read by the sensor along with the range and bearing relative to the robot. Let ${^i}\mathbf{L}$ be the location of the $i$-th landmark; then the displacement vector ${^i}\mathbf{d}$ from the robot to ${^i}\mathbf{L}$ is given by ${^i}\mathbf{d}=[{^i}d_{x}, {^i}d_{y}]^T:={^i}\mathbf{L}-\mathbf{p}$, where $\mathbf{p}=[\mathsf{x},\mathsf{y}]^T$ is the position of the robot. Therefore, the observation ${^i}z$ of the $i$-th landmark can be described as ${^i}z={^i}h(x,{^i}v)=[\|{^i}\mathbf{d}\|,\text{atan2}({^i}d_{y},{^i}d_{x})-\theta]^T+{^i}v$.
The observation noise is assumed to be zero-mean Gaussian such that $ {^i}v\sim\mathcal {N}(\mathbf{0},{^i}\mathbf{R}) $ where ${^i}\mathbf{R}=\text{diag}((\eta_r\|{^i}\mathbf{d}\|+\sigma^r_b)^2,(\eta_{\theta}\|{^i}\mathbf{d}\|+\sigma^{\theta}_b)^2)$.
The measurement quality decreases as the robot gets farther from the landmarks; the parameters $\eta_r$ and $\eta_{\theta}$ determine this dependency, and $\sigma_b^r$ and $\sigma_b^{\theta}$ are the bias standard deviations.
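The noiseless part of this range-bearing model is easy to verify with a small sketch (the landmark position and robot pose below are arbitrary examples):

```python
import math

def observe(x, landmark, noise=(0.0, 0.0)):
    """Range-bearing observation of a single landmark:
    z = [||d||, atan2(d_y, d_x) - theta] + v, with d = L - p."""
    px, py, theta = x
    dx, dy = landmark[0] - px, landmark[1] - py
    return (math.hypot(dx, dy) + noise[0],
            math.atan2(dy, dx) - theta + noise[1])

# Robot at the origin facing +x, landmark at (3, 4): range 5, bearing atan2(4, 3).
r, b = observe((0.0, 0.0, 0.0), (3.0, 4.0))
assert math.isclose(r, 5.0) and math.isclose(b, math.atan2(4.0, 3.0))
```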
\subsubsection{Environment and Scenario}
The environment in Fig. \ref{fig:environment} represents a 2D office space measuring $21$m $\times$ $21$m with obstacles and beacons placed in it. The robot is represented by a circular disk of radius $1$m. There are two narrow passages \textit{P1} and \textit{P2}, which represent paths or edges of low transition probability/high collision probability. The narrow passages are $1.25$m wide, thus offering a very small clearance for the robot to pass through. The robot is placed at starting location `A' and tasked to visit 4 different locations in a sequential manner; these are marked as B, C, D, and E in the environment. Thus, we split the task into 4 sequential segments: 1) $ A\rightarrow B $, 2) $ B\rightarrow C $, 3) $ C\rightarrow D $, and 4) $ D\rightarrow E $. The simulation is carried out twice, once with the standard FIRM algorithm and once with the proposed rollout-based method in belief space. In the following sections, we explain each simulation and then present a comparative analysis.
\begin{figure}
\centering
\subfigure[]
{\includegraphics[height=1.7in]{Environment.png}\label{fig:environment}}
\subfigure[]
{\includegraphics[height=1.7in]{Roadmap.png}\label{fig:prm}}
\caption{(a) The simulation environment. The black diamonds depict the landmarks, the grey polygons are the obstacles, and the white space represents the free space. The locations of interest that the robot is tasked to visit are marked by red crosses. The two narrow passages P1 and P2 are marked; these represent regions of high collision probability (risky regions) due to the small clearance. (b) The underlying FIRM roadmap; the grey lines depict edges, the cyan disks are the nodes, and the dashed ellipses represent the stationary covariances of the FIRM nodes.}
\end{figure}
\subsubsection{Planning with Standard FIRM Algorithm}
Here we give a brief explanation of how the FIRM algorithm works, introduce some key terminology, and then explain the individual steps of the simulation itself. \\
\textit{Offline-phase}: First, we construct the underlying FIRM roadmap as depicted in Fig. \ref{fig:prm}. \axx{This roadmap is constructed by uniformly sampling configurations in the free space and then building the belief nodes of our FIRM roadmap on these configurations. To create each belief node, we follow the procedure in Section \ref{subsec:FIRM-elements}. In short, we linearize the system dynamics and sensor model around the sampled configuration point, create the stationary Kalman filter corresponding to this local linear system, and find its reachable belief by solving the corresponding Riccati equation.} At each node, there exists a stabilization controller (stabilizer) which drives beliefs from a region around the node to the stationary belief. The edges of the FIRM roadmap are generated by first finding valid (collision-free) straight-line connections between neighboring nodes (nodes within a neighborhood of fixed radius $R$) and then generating edge controllers which drive the belief from the start belief node to the vicinity of the goal belief node. For each edge in the graph, we run Monte Carlo simulations to compute the expected execution cost and transition probability. Once the underlying FIRM roadmap is constructed, we store it for use in the online phase.
\textit{Online-phase}: In the online phase, the planner has access to the stored roadmap that is computed offline and receives a start and a goal configuration. These configurations are added to the existing roadmap by computing the appropriate stationary belief, stabilizer and edge controllers. Now, using the fact that FIRM preserves the optimal sub-structure property \axx{(edges are independent of each other; see Fig. \ref{fig:funnel-FIRM} and \ref{fig:tree-with-funnels})}, we solve the Dynamic Programming on the graph for the given goal location.
Before we proceed, we define a few terms that will be used frequently in the subsequent text:
\begin{itemize}[leftmargin=0cm,itemindent=.5cm,labelwidth=0.4cm,labelsep=0cm,align=left]
\item \textbf{FIRM Feedback Tree}: The solution of the dynamic programming problem, i.e., $ \pi^{g} $, is visualized with a \textit{feedback tree}. Recall that $ \pi^{g} $ is a mapping (look-up table) that returns the next best edge for any given graph node. Therefore, for each node, the feedback tree contains only one outgoing edge ($ \mu=\pi^{g}(B^{i}) $) that pulls the robot towards the goal. The feedback tree is rooted at the goal node.
\item \textbf{Most-Likely Path (MLP)}: The most-likely path is generated by concatenating the sequence of edges chosen by the feedback for the given start/goal combination, i.e., beginning at the start node and adding the subsequent edges as given by the FIRM feedback. It depicts the solution as a path, which is instructive. Note that the robot does not follow the most-likely path exactly, due to noise.
\end{itemize}
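Extracting the MLP from the feedback look-up table amounts to repeatedly querying the table until the goal is reached. A minimal sketch, where for simplicity the feedback maps each node directly to the next node (the node names are illustrative):

```python
def most_likely_path(feedback, start, goal):
    """Concatenate the edges returned by the FIRM feedback pi^g from the
    start node until the goal is reached (feedback: node -> next node)."""
    path, node = [start], start
    while node != goal:
        node = feedback[node]  # mu = pi^g(B^i): the single outgoing edge
        path.append(node)
    return path

# Toy feedback tree rooted at the goal 'E' (names are made up):
feedback = {'A': 'B', 'B': 'D', 'C': 'D', 'D': 'E'}
mlp = most_likely_path(feedback, 'A', 'E')
assert mlp == ['A', 'B', 'D', 'E']
```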
In segment 1, the planner is given A and B as the start and goal locations, respectively, and computes the FIRM feedback; Fig. \ref{fig:firm-segment-1-a} shows the feedback on the graph. Once the feedback is computed, the robot starts executing the feedback policy. We see in Fig. \ref{fig:firm-segment-1-b} that the robot stays close to the most-likely path. Note that the robot does not exactly follow the most-likely path due to noise; however, in this case the noise is not high enough to change the homotopy class. Figure \ref{fig:firm-segment-1-b} shows this path passing through the narrow passage P2. On its way to the goal, the robot follows the edge controllers returned by the feedback policy and stabilizes to the FIRM nodes at the ends of the edges. Once the robot reaches the first goal location B, we begin segment 2 and the planner is given the new goal location C. A new feedback policy is computed online (Fig. \ref{fig:firm-segment-2-a}) and the robot moves to C along the MLP (Fig. \ref{fig:firm-segment-2-b}), again passing through the narrow passage P2. Similarly, a new feedback policy is computed for each subsequent goal (Fig. \ref{fig:firm-segment-3-a} and Fig. \ref{fig:firm-segment-4-a}) and the robot eventually reaches its final destination E successfully.
\begin{figure}
\centering
\subfigure[The FIRM feedback is shown by the green edges for goal location B. The robot is depicted by the blue disk and the red disk depicts the current belief.]{\includegraphics[height=1.5in]{firm-feedback-segment1.png}\label{fig:firm-segment-1-a}}
\hspace{0.1in}
\subfigure[The most likely path under FIRM marked by yellow.]{\includegraphics[height=1.5in]{firm-mlp-segment1.png}\label{fig:firm-segment-1-b}}
\caption{Phase 1 of policy execution with FIRM, starting at A and going to B. The locations A and B are marked in Fig. \ref{fig:environment}.}
\end{figure}
\begin{figure}
\centering
\subfigure[The FIRM feedback for goal location C.]{\includegraphics[height=1.5in]{firm-feedback-segment2.png}\label{fig:firm-segment-2-a}}
\hspace{0.1in}
\subfigure[The most likely path under FIRM from B to C marked by yellow.]{\includegraphics[height=1.5in]{firm-mlp-segment2.png}\label{fig:firm-segment-2-b}}
\caption{Phase 2 of policy execution with FIRM, starting at B and going to C. The locations B and C are marked in Fig. \ref{fig:environment}.}
\end{figure}
\begin{figure}
\centering
\subfigure[The FIRM feedback for goal location D.]{\includegraphics[height=1.5in]{firm-feedback-segment3.png}\label{fig:firm-segment-3-a}}
\hspace{0.1in}
\subfigure[The most likely path under FIRM from C to D (yellow).]{\includegraphics[height=1.5in]{firm-mlp-segment3.png}\label{fig:firm-segment-3-b}}
\caption{Phase 3 of policy execution with FIRM, starting at C and going to D.}
\end{figure}
\begin{figure}
\centering
\subfigure[The FIRM feedback for goal location E.]{\includegraphics[height=1.5in]{firm-feedback-segment4.png}\label{fig:firm-segment-4-a}}
\hspace{0.1in}
\subfigure[The most likely path under FIRM from D to E (yellow).]{\includegraphics[height=1.5in]{firm-mlp-segment4.png}\label{fig:firm-segment-4-b}}
\caption{Phase 4 of FIRM policy execution, starting at D and going to E.}
\end{figure}
\subsubsection{Planning with rollout-based planner}
Here again, we begin with the underlying FIRM roadmap constructed offline as explained in the previous section, and find the FIRM feedback in segment 1 for start and goal locations A and B, respectively. Once the feedback policy is computed (same as that in Fig. \ref{fig:firm-segment-1-a}), the rollout-based planner starts by following the feedback tree. From here on, however, the rollout behavior emerges. Let the rollout update interval be denoted by $T_{rollout}$. Every $T_{rollout}$ seconds, the planner locally computes connections to existing FIRM nodes in a neighborhood of radius $R$ centered at the robot's belief, i.e., the planner locally generates edge controllers with their associated cost-to-connect and transition probability. Since FIRM gives us the cost-to-go to the goal from every FIRM node, by finding the local connection costs, the planner checks which connection provides the lowest sum of the cost to connect to a neighboring node and the cost-to-go from that node (Eq. \ref{eq:rollout-minimization}). The connection with the lowest sum is chosen as the next edge to follow.
Fig. \ref{fig:rollout-segment-1-a} shows the planner checking connections (red edges) locally to neighboring FIRM nodes. An important behavior emerges in segment 1: as the robot proceeds, the rollout is able to find a shorter path through the relatively open area by skipping unnecessary stabilizations (as shown in Fig. \ref{fig:rollout-segment-1-b} and \ref{fig:rollout-segment-1-c}). As the robot traverses the narrow passage, the rollout changes the behavior by forcing the robot to ``stabilize'' to the node, as it concludes that further reducing uncertainty is optimal while proceeding through the narrow passage (shown in Fig. \ref{fig:rollout-segment-1-d}). Eventually, the robot reaches location B through the path marked in green in Fig. \ref{fig:rollout-segment-1-f}. It is clear that the rollout gives the robot a distinct advantage over the nominal FIRM plan, as it guides the robot through a direct, much shorter route. Further, although the end segments of the two paths (i.e., after exiting the narrow passage) look similar, they differ significantly in their velocity profiles. Along the yellow path, the robot stabilizes to each and every FIRM node along the way, while along the green (rollout) path it skips stabilizations and only stabilizes when necessary. This helps the robot maintain a higher average velocity while executing the plan.
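The periodic rollout decision described above reduces to a one-line minimization over the neighborhood. A minimal sketch, where `cost_to_connect` and `cost_to_go` are hypothetical stand-ins for the locally generated edge-controller costs and the stored DP solution:

```python
def rollout_next_edge(belief, neighbors, cost_to_connect, cost_to_go):
    """Pick the neighboring FIRM node B^j minimizing the sum of the local
    cost-to-connect C(b, B^j) and the offline cost-to-go J^g(B^j)."""
    return min(neighbors,
               key=lambda n: cost_to_connect(belief, n) + cost_to_go[n])

# A nearer node can lose if its cost-to-go is large (e.g., it points away
# from the goal); the node names and costs below are made up:
J = {'B1': 10.0, 'B2': 2.0}
connect = lambda b, n: {'B1': 1.0, 'B2': 3.0}[n]
chosen = rollout_next_edge('b', ['B1', 'B2'], connect, J)
assert chosen == 'B2'
```

Because the cost-to-go values are already available from the offline DP solve, each rollout update only needs the local connection costs, which is what keeps it cheap enough to run every $T_{rollout}$ seconds.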
\begin{figure}
\centering
\subfigure[The robot checks for a new rollout policy by locally checking connections with neighbors. The rollout connections are shown in red. The yellow path depicts the most-likely path under the nominal FIRM feedback.]{\includegraphics[height=1.5in]{rollout-segment1_1.png}\label{fig:rollout-segment-1-a}}
\hspace{0.1in}
\subfigure[Rollout guides the robot away from the MLP through a shorter path through the relatively open area.]{\includegraphics[height=1.5in]{rollout-segment1_2.png}\label{fig:rollout-segment-1-b}}\quad
\subfigure[Robot approaches narrow passage 2 through a more direct path as compared to the MLP.]{\includegraphics[height=1.5in]{rollout-segment1_3.png}\label{fig:rollout-segment-1-c}}
\hspace{0.1in}
\subfigure[The robot stabilizes at a FIRM node while passing through the narrow passage.]{\includegraphics[height=1.5in]{rollout-segment1_4.png}\label{fig:rollout-segment-1-d}}\quad
\subfigure[The robot approaches goal location B.]{\includegraphics[height=1.5in]{rollout-segment1_5.png}\label{fig:rollout-segment-1-e}}
\hspace{0.1in}
\subfigure[The path taken under the rollout (green) is geometrically shorter than the path under FIRM. Further, by skipping unnecessary stabilizations, the robot moves faster.] {\includegraphics[height=1.5in]{rollout-path-segment1.png}\label{fig:rollout-segment-1-f}}
\caption{ Phase 1 with rollout: Starting at A and going to B.
}
\end{figure}
At point B, a new FIRM feedback is computed online for the next goal location C (segment 2). Prior to approaching the narrow passage, rollout drives the robot to stabilize to a FIRM node as seen in Fig. \ref{fig:rollout-segment-2-a}. This stabilization helps the robot become more certain of its state before passing through a region with low transition probability. The robot then passes through the narrow passage P2 and reaches C through the path (green) shown in Fig. \ref{fig:rollout-segment-2-b}. In segment 3, (goal location D), the plan again takes the robot through the narrow passage P2, and during this segment, the planner drives the robot to stabilize to three FIRM nodes on the way to D. (Fig. \ref{fig:rollout-segment-3} shows segment 3).
Finally, in segment 4, the robot is given goal location E. Here, we notice that while returning to the narrow passage from D, the rollout causes the robot to divert slightly from the MLP, as marked in Fig. \ref{fig:rollout-segment-4-a}. The same was not observed when the robot was moving towards D. This happens because the cost to reach D through the MLP was lower than the computed cost through a shorter but riskier path; on the way to E, however, while moving away from D, the MLP has a higher cost than the more direct path given by rollout. Finally, the mission is completed and the robot arrives at E as shown in Fig. \ref{fig:rollout-segment-4-b}.
\begin{figure}
\centering
\subfigure[The robot stabilizes to a FIRM node before approaching the narrow passage P2.]{\includegraphics[height=1.5in]{rollout-segment2.png}\label{fig:rollout-segment-2-a}}
\hspace{0.1in}
\subfigure[The robot has reached goal location C; its path is shown in green.]{\includegraphics[height=1.5in]{rollout-path-segment2.png}\label{fig:rollout-segment-2-b}}
\caption{ Phase 2: Starting at B and going to C.}
\end{figure}
\begin{figure}
\centering
\subfigure[Rollout connections during execution while approaching narrow passage P2.]{\includegraphics[height=1.5in]{rollout-segment3.png}\label{fig:rollout-segment-3-a}}
\hspace{0.1in}
\subfigure[Robot has reached the goal location D.]{\includegraphics[height=1.5in]{rollout-path-segment3.png}\label{fig:rollout-segment-3-b}}
\caption{ Phase 3: Starting at C and going to D.}
\label{fig:rollout-segment-3}
\end{figure}
\begin{figure}
\centering
\subfigure[The robot passing through narrow passage P2 while moving to goal E.]{\includegraphics[height=1.5in]{rollout-segment4.png}\label{fig:rollout-segment-4-a}}
\hspace{0.1in}
\subfigure[Robot has completed its mission.]{\includegraphics[height=1.5in]{rollout-path-segment4.png}\label{fig:rollout-segment-4-b}}
\caption{ Phase 4: Starting at D and going to E.}
\label{fig:rollout-segment-4}
\end{figure}
\axx{
\subsubsection{Analysis of Simulation Results}\label{subsec:rollout-sim-analysis}
In this section, we present an analysis of the simulation results from the previous section, obtained by running the planner many times. The key finding is that rollout-based FIRM policy execution significantly increases the performance of the standard FIRM implementation while preserving its robustness and scalability.
}
\axx{
\ph{Cost of Execution} We have recorded the amount of uncertainty (trace of covariance) along the robot's path. Figure \ref{fig:cost-vs-time} shows the cumulative version of this cost over 50 runs of the same task with the rollout-based planner and with standard FIRM. We can see that the cost for the rollout-based policy rises more slowly than the cost for FIRM, and as the planning horizon increases, rollout offers increasing returns in performance.
}
\ph{Stabilization to FIRM Nodes} One of the key performance issues for the standard FIRM algorithm is also one of its major strengths, i.e., the node stabilization process of reaching nodes in belief space. Node stabilization makes FIRM robust and scalable while maintaining the optimal sub-structure of the graph \axx{(all the edges are independent of each other; see Fig. \ref{fig:funnel-FIRM})}. Thus, although stabilization allows FIRM to provide certain guarantees, it can lead to slower robot motion in general, since the robot needs to reach each belief node along the way, increasing the time to complete the task and adding cost during the stabilization process at each node. The rollout-based planner brings a higher level of intelligence to this process: it performs stabilization as and when required and bypasses it when possible. By bypassing stabilization when it is not required, rollout allows the robot to complete the task faster and with less execution cost. \axx{Fig. \ref{fig:nodes-reached} shows the number of nodes the robot has stabilized to over time in 50 runs. In this example, the robot stabilizes to $ \sim45 $ nodes under FIRM compared to $ \sim10 $ nodes under the rollout-based planner (a $ \sim75 $\% reduction), and the difference grows as the task becomes longer.
}
\ph{Time of Execution} Another indicator of performance is the time a planner takes to complete the task while guaranteeing a high likelihood of success.
\axx{From Fig. \ref{fig:cost-vs-time} and \ref{fig:nodes-reached}, the time taken to complete the task with rollout is around $2500$ timesteps ($250$ seconds) compared to $3000$ timesteps ($300$ seconds) for FIRM. There is around a $15$\% reduction in the time to complete the task compared to the standard FIRM algorithm. The improvement in execution time makes the rollout-based planner a better candidate than FIRM for time-sensitive applications.}
\begin{figure}
\centering
\subfigure[]{\includegraphics[height=1.2in]{CostComparison.pdf}\label{fig:cost-vs-time}}
\hspace{0.1in}
\subfigure[]{\includegraphics[height=1.2in]{NodesReachedComparison.pdf}\label{fig:nodes-reached}}
\caption{\axx{Performance comparison of the original FIRM algorithm and rollout-based planner on 50 runs. (a) Cost of execution: The execution cost for FIRM rises faster than the cost of rollout based policy. (b) The number of belief nodes that the robot stabilizes to, during plan execution, which is consistently lower for the rollout-based planner.}}
\end{figure}
\section{Experimental Results for a Physical System} \label{sec:experiments}
This section includes the main contribution of this paper: demonstrating an online POMDP-based solution for the simultaneous localization and planning problem on a physical robot in a real-world setting. We discuss the details of implementation and report the important lessons learned while investigating the application of FIRM-based SLAP on physical systems.
\axx{
A video demonstrating the system is available in \cite{youtube-video-RHC-FIRM}.
}
\subsection{Target Application and System Set Up} \label{subsec:systemSetUp}
Consider a scenario where the robot needs to operate and reach a goal in an office environment. Each time the robot reaches a goal, a new goal is submitted by a higher-level application (e.g., manually by a user or different users). During a set of long runs, we investigate the performance of online replanning in SLAP and the robustness of the method to (i) changing obstacles, such as doors, and moving people, (ii) changes in the goal location, (iii) deviations due to intermittent sensory failures, and (iv) kidnap situations (significant sudden deviations in the robot's location). During the run, there are many situations where the robot needs to replan: it needs to replan each time a new goal is submitted, to move toward the new goal. The robot also encounters changes in the obstacle map; for example, the robot may expect a door to a room to be open while planning a path, but sense that it is closed upon reaching it. Similarly, it encounters moving people and must avoid bumping into them. Observing these changes, the robot updates its map and replans online to update its policy. Moreover, the robot might deviate from its nominal path. As an extreme case of this situation, we look into the ``kidnapped'' situation, where a person moves the robot to an unknown location during the run; the robot then needs to recover from this catastrophic situation. Finally, the robot may deviate from its nominal location due to temporary failures in the sensing system. In all these cases, an online replanning scheme can help the robot recover from the aforementioned situations and accomplish its goal. It is worth noting that the main focus of all the different experiments in this section is to demonstrate the performance and feasibility of SLAP by enabling online belief space planning on physical robots.
\subsubsection{Environment} \label{subsec:environment}
The specific environment for conducting our experiments is the fourth floor of the Harvey Bum Bright (HRBB) building on the Texas A\&M University campus in College Station, TX. The floor-plan is shown in Fig. \ref{fig:floorPlan}. The floor spans almost 40 meters of hallways whose average width is approximately 2 meters, distinguished in yellow and blue in Fig. \ref{fig:floorPlan}. The particular set of experiments reported in this paper is conducted in the region highlighted in blue in Fig. \ref{fig:floorPlan}, part of which contains a large cluttered office area (the 407 area). This area has interesting properties that make planning more challenging: (i) the 407 area is highly cluttered inside and includes many chairs and more than 15 workstations. (ii) As seen in Fig. \ref{fig:floorPlan}, there are several doors in this area which may be opened or closed; two of these doors (front-door and back-door) are labeled in Fig. \ref{fig:floorPlan}. (iii) There are objects such as chairs and trash-cans in this environment which frequently get displaced. (iv) There are moving people, who are avoided using a reactive behavior, which may displace the robot from its planned path. This further introduces challenges for the high-level planner.
\begin{figure}[ht!]
\centering
\includegraphics[width=3in]{floorPlan.pdf}
\caption{Floor-plan of the environment, in which experiments are conducted.}
\label{fig:floorPlan}
\end{figure}
\subsubsection{Robot Model} \label{subsec:robotModel}
The physical platform utilized in our experiments is an iRobot Create mobile robot (see Fig. \ref{fig:robot-landmark}). The robot can be modeled as a unicycle whose kinematics are as follows:
\begin{align}\label{eq:unicycle-motion-model}
\!\!\!\!x_{k+1} \!=\! f(x_k,u_k,w_k) \!=\!
\left(\!\!
\begin{array}{c}
\mathsf{x}_{k}+(V_k\delta t + n_v\sqrt{\delta t})\cos\theta_k \\
\mathsf{y}_{k}+(V_k\delta t + n_v\sqrt{\delta t})\sin\theta_k \\
\mathsf{\theta}_{k}+\omega_k\delta t + n_{\omega}\sqrt{\delta t}
\end{array}\!\right),
\end{align}
where
$ x_k = (\mathsf{x}_k, \mathsf{y}_k, \mathsf{\theta}_k)^T $ describes the robot state, in which $(\mathsf{x}_k,\mathsf{y}_k)^T$ is the 2D position of the robot and $ \theta_k $ is the heading angle of the robot, at time step $ k $. Control commands are the linear and angular velocities $ u_k = (V_k,\omega_k)^T $. We use the Player robot interface \cite{gerkey2003player} to send these control commands to the robot.
\ph{Motion noise} The motion noise vector is denoted by $w_k=(n_v,n_{\omega})^T\sim\mathcal{N}(0,\mathbf{Q}_k)$ and mostly arises from uneven tiles on the floor, wheel slippage, and inaccuracy in the duration for which control signals are applied. Experimentally, we found that in addition to the fixed uncertainty associated with the control commands, there exists a portion of the noise that is proportional to the signal strength. Thus, we model the variance of the process noise at the $ k $-th time step as $ \mathbf{Q}_{k} = \operatorname{diag}((\eta V_{k}+\sigma_b^{V})^{2}, (\eta \omega_{k}+\sigma_b^{\omega})^{2}) $, where in our implementation we have $ \eta = 0.03 $, $\sigma_b^{V} = 0.01\text{m/s}$, and $ \sigma_b^{\omega} = 0.001 \text{rad/s} $.
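The signal-dependent noise model above can be sketched as a small helper; the default parameters mirror the values reported in the text, and the zero-velocity case illustrates that only the bias terms remain:

```python
def process_noise_cov(V, omega, eta=0.03, sigma_V=0.01, sigma_w=0.001):
    """Signal-dependent process-noise variances for the unicycle:
    Q_k = diag((eta*V + sigma_b^V)^2, (eta*omega + sigma_b^omega)^2).
    Defaults are the parameter values reported in the text."""
    return ((eta * V + sigma_V) ** 2, (eta * omega + sigma_w) ** 2)

# Standing still leaves only the bias terms on the diagonal:
q_v, q_w = process_noise_cov(V=0.0, omega=0.0)
assert q_v == 0.01 ** 2 and q_w == 0.001 ** 2
```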
\begin{figure}[ht!]
\vspace{-10pt}
\centering
\includegraphics[width=2.3in]{robot.pdf}
\caption{A picture of the robot (an iRobot Create) in the operating environment. Landmarks can be seen on the walls.}
\label{fig:robot-landmark}
\end{figure}
\subsubsection{Sensing Model} \label{subsec:sensorModel}
For sensing, we use the laptop's on-board camera. We perform vision-based landmark detection based on ArUco (a minimal library for Augmented Reality applications) \cite{Aruco}. Each landmark is a black-and-white pattern printed on letter-size paper. The pattern on each landmark follows a slight modification of the Hamming code and has a unique id, so that it can be detected robustly and uniquely. Landmarks are placed on the walls in the environment (see Fig. \ref{fig:robot-landmark}).
The absolute position and orientation of each landmark in the environment is known. The ArUco library performs the detection process and presents the relative range and bearing to each visible landmark along with its id. Therefore, if we denote the $ j $-th landmark position in the 2D global coordinates as $ ^{j}L $, we can model the observation as a range-bearing sensing system:
\begin{align}
\nonumber {^j}z_{k}&=[\|{^j}\mathbf{d}_{k}\|,\text{atan2}({^j}d_{2_{k}},{^j}d_{1_{k}})-\theta]^T+{}^jv,~~{^j}v\sim\mathcal {N}(\mathbf{0},{^j}\mathbf{R}),
\end{align}
where ${^j}\mathbf{d}_{k}=[{^j}d_{1_{k}},{^j}d_{2_{k}}]^T:={^j}L-[\mathsf{x}_{k},\mathsf{y}_{k}]^T$.
\ph{Measurement noise} A random vector $ {^j}v $ models the measurement noise associated with the measurement of the $ j $-th landmark.
Experimentally, we found that the intensity of the measurement noise increases with the distance from the landmark and with the incident angle. The incident angle refers to the angle between the wall on which the landmark is mounted and the line connecting the camera to the landmark. Denoting the incident angle by $ \phi\in[-\pi/2,\pi/2] $, we model the sensing noise associated with the $ j $-th landmark as a zero-mean Gaussian, whose covariance is
\begin{align}
\nonumber
{^j}\mathbf{R}_{k} = \operatorname{diag}\big(&(\eta_{r_{d}}\|{^j}\mathbf{d}_{k}\|+\eta_{r_{\phi}}|\phi_{k}|+\sigma^r_b)^2,\\
\nonumber
&(\eta_{\theta_{d}}\|{^j}\mathbf{d}_{k}\|+\eta_{\theta_{\phi}}|\phi_{k}|+\sigma^{\theta}_b)^2\big)
\end{align}
where, in our implementations we have $ \eta_{r_{d}} = 0.1 $, $\eta_{r_{\phi}} = 0.01$, $ \sigma^r_b = 0.05\text{m} $, $ \eta_{\theta_{d}} = 0.001 $, $ \eta_{\theta_{\phi}} = 0.01 $, and $ \sigma^{\theta}_b = 2.0\text{deg} $.
\ph{Full vector of measurements} At every step, the robot observes a subset of the landmarks that fall into its field of view. Suppose at a particular step the robot can see $ r $ landmarks $ \{L_{i_1},\cdots,L_{i_r} \} $. The concatenation of the visible landmark observations forms the total measurement vector, denoted by $z=[{^{i_{1}}}z^T,\cdots,{^{i_{r}}}z^T]^T$. Due to the independence of the measurements of different landmarks, the observation model for all landmarks can be written as $z=h(x)+v$, where $ v = [{^{i_{1}}}v^T,\cdots,{^{i_{r}}}v^T]^T \sim\mathcal{N}(\mathbf{0},\mathbf{R})$ and $\mathbf{R}=\text{diag}({^{i_{1}}}\mathbf{R},\cdots,{^{i_{r}}}\mathbf{R})$.
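Stacking the per-landmark observations into the full vector $z$ and block-diagonal $\mathbf{R}$ is mechanical; a minimal sketch (the numeric values below are illustrative), where each per-landmark block is represented by its diagonal entries since the individual covariances are diagonal:

```python
def stack_measurements(per_landmark):
    """Concatenate per-landmark observations into the full vector z;
    independence makes the joint covariance R block-diagonal.

    per_landmark: list of (z_j, R_j), with z_j = (range, bearing) and R_j
    the 2x2 diagonal entries (sigma_r^2, sigma_theta^2)."""
    z = [component for z_j, _ in per_landmark for component in z_j]
    R_diag = [entry for _, R_j in per_landmark for entry in R_j]
    return z, R_diag  # R = diag(R_diag), since cross-terms vanish

z, R = stack_measurements([((5.0, 0.1), (0.01, 0.001)),
                           ((2.0, -0.3), (0.04, 0.002))])
assert z == [5.0, 0.1, 2.0, -0.3]
assert R == [0.01, 0.001, 0.04, 0.002]
```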
\subsection{SLAP versus Regular Localization and Planning} \label{subsec:closed-PRM}
In this section, we contrast the results of regular localization and planning with the proposed SLAP solution. Regular localization and planning here refers to a method where the planner first generates a path for the robot to follow, ignoring the localizer. Then, in the execution phase, we run the localization algorithm to compute the mean and follow the pre-computed trajectory using a closed-loop controller. In the proposed SLAP solution, by contrast, the planner takes the localizer into account in the planning phase and replans simultaneously as the localizer updates its estimate.
The environment is shown in Fig. \ref{fig:prm-env}. Blue regions are obstacles and black regions are free space. Landmarks are shown by small white diamonds. The start and goal locations for the motion planning problem are marked in Fig. \ref{fig:prm-env}. The goal location is inside room 407 (see Fig. \ref{fig:floorPlan}) and the start is close to the front door.
\ph{Regular planning and localization} To select a suitable planner, we tried a variety of traditional planners such as PRM, RRT, and their variants. We observed that due to the motion noise of this low-cost robot and the sparsity of information in certain parts of this environment, many of these variants lead to collisions with obstacles and cannot reach the goal point. \axx{The best results we obtained were with the MAPRM (Medial-Axis PRM) method \cite{wilmarth1999maprm}.} This planner is computationally more expensive than the other variants but is very powerful in dealing with collisions, since it samples points and constructs a PRM with maximal clearance from obstacles; thus it leads to plans with minimal collisions in our experiments. An MAPRM graph (in white) for this environment is shown in Fig.~\ref{fig:prm-env}.
As can be seen, there exist two homotopy classes of paths between the start and goal nodes: Through the front door of room 407 and through the back door of the room. From Fig.~\ref{fig:prm-env}, it is obvious that the path through the front door is shorter. Moreover, the path through the front door has a larger obstacle clearance (larger minimum distance from obstacles along the path) compared to the path through the back door (since the back door is half open). Therefore, based on conventional metrics in deterministic settings, such as shortest path or maximum clearance, MAPRM chooses the path through the front door over the path through the back door. The feedback tree that results from solving the DP in this case is shown in Fig. \ref{fig:prm-feedback}. As expected, the DP guides the robot to go through the front door.
To execute the plan generated from PRM, we use time-varying LQG controllers to keep the robot close to the nominal path returned as the solution of the PRM-based planning. However, due to the lack of sufficient information along the nominal path, the success rate of this plan is low, and the robot frequently collides with obstacles along the path as it is prone to drift. The success probability along the nominal path is computed over multiple (100) runs and is equal to 27\% (27 out of 100 runs were collision-free).
\ph{FIRM-based SLAP}
As can be seen in Fig. \ref{fig:prm-env}, the distribution of information is not uniform in the environment. The density of landmarks (information sources) along the path through the back door is higher than along the path through the front door. In FIRM-based SLAP, though, the landmark map is automatically incorporated in the planning phase in a principled way. As a result, it leads to a better judgment of how narrow the passages are. For example, in this experiment, although the path through the front door is shorter than the path through the back door, considering the information sources, the success probability of going through the back door is much higher than that of going through the front door. Such knowledge about the environment is reflected in the FIRM cost-to-go and success probability in a principled framework. As a result, it generates a policy that suits the application, taking into account the uncertainty and the available information in the environment. Solving DP on the FIRM graph gives the feedback shown in Fig.~\ref{fig:firm-feedback}, which results in an 88\% success probability.
\begin{figure}[h!]
\centering
\includegraphics[width=3in]{PRM.pdf}
\caption{The environment including obstacles (blue regions), free space (black region), and landmarks (white diamonds). An MAPRM graph approximating the connectivity of free space is also shown (white graph).
}
\label{fig:prm-env}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=3in]{PRM_feedback.pdf}
\caption{The feedback tree generated by solving DP on MAPRM is shown in yellow. From each node there is only one outgoing edge, guiding the robot toward the goal defined in Fig. \ref{fig:prm-env}. Arrows in pink coarsely represent the direction along which the feedback guides the robot.}
\label{fig:prm-feedback}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=3in]{FIRM_feedback.pdf}
\caption{The feedback tree generated by solving DP on FIRM is shown. As can be seen, the computed feedback guides the robot through the more informative regions, which leads to more accurate localization and lower collision probabilities. Arrows in pink coarsely represent the direction along which the feedback guides the robot.}
\label{fig:firm-feedback}
\end{figure}
\subsection{Online Replanning aspect of SLAP} \label{subsec:Robustness-experiments}
In this section, we focus on the ``simultaneous" part in SLAP, which emphasizes the ability of the robot to replan after every localization update. In other words, in SLAP, the robot dynamically replans based on the new information coming from its sensors.
We look into two important cases to illustrate the effect of online replanning. We first look into a challenging case where the obstacle map changes and possibly eliminates a homotopy class of solutions. This means a slight deviation in the plan is not sufficient and the planner has to switch to a different homotopy class in real-time, which is not feasible with the state-of-the-art methods in the belief space planning literature. Second, we look into deviations from the path. There, we focus on the kidnapped robot problem as the most severe case of deviation, the solution to which can be applied to smaller deviations as well. Finally, we design a complex scenario that includes changes in the obstacles, small and large deviations, online changes in goal location, etc., and demonstrate the performance of the proposed method on it.
\subsubsection{Changes in the obstacle map} \label{subsec:changingObstacles}
$ ~ $\\
Here, we show how enabling simultaneous planning and localization in an online manner can handle changes in the obstacle map.
\axx{
In this paper, we assume no prior knowledge about the environment dynamics. As a result, we use a simple model for obstacle dynamics: all new obstacles are added to the map with a large forgetting time of 10 minutes (i.e., almost permanent at the time scale of a single run). The only exception in this model is moving people: if a moving person is detected, a new obstacle will not be added to the map. Instead, we assume there exists a lower-level reactive behavior (e.g., stopping or dodging) in a subsumption-like architecture \cite{Brooks86} that suppresses the belief space planner in the vicinity of the moving person. Once control is back with the SLAP layer, the robot might have deviated from its nominal plan, and thus the SLAP layer has to replan to recover from such deviations.
Therefore, the method is very efficient in dealing with persistent/slow changes in the map (e.g., doors that are opened or closed). An important aspect of the method is that it can deal with severe changes that might eliminate or create homotopy classes of solutions. Doors are an important example of this class. If the robot observes that a door is closed (which was expected to be open), it might have to \emph{globally} change the plan to get to the goal through a different passage. This is a very challenging problem for today's belief space planners.
}
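The forgetting-time model above can be sketched as follows (a minimal illustration, not the authors' implementation; the class interface is an assumption, and moving people would be handled by the reactive layer rather than added here):

```python
import time

# Detected obstacles stay in the map for a forgetting time of 10 minutes.
FORGET_TIME_SEC = 10 * 60

class ObstacleMap:
    def __init__(self):
        self._obstacles = {}  # obstacle id -> insertion timestamp (seconds)

    def add(self, obstacle_id, now=None):
        """Record a newly detected (almost-permanent) obstacle."""
        self._obstacles[obstacle_id] = time.time() if now is None else now

    def active(self, now=None):
        """Return currently active obstacles, forgetting stale ones."""
        now = time.time() if now is None else now
        self._obstacles = {k: t for k, t in self._obstacles.items()
                           if now - t < FORGET_TIME_SEC}
        return set(self._obstacles)
```

With this model, a closed door added to the map is automatically "reopened" after 10 minutes, which matches the behavior described in the longer experiment below.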
As the first experiment, we consider the environment shown in Fig. \ref{fig:floorPlan}. The start and goal locations are shown in Fig. \ref{fig:openDoor}. We construct a PRM in the environment ignoring the changing obstacles (assuming all doors are open and there are no people in the environment). Then we construct a corresponding FIRM and solve dynamic programming on it. As a result, we get the feedback tree shown in Fig. \ref{fig:openDoor} that guides the robot toward goal through the back door of room 407. However, the challenge is that the door may be closed when the robot reaches it, and there may be people moving in the environment. Moreover, for different reasons (such as motion blur in the image or blocked landmarks by people or different objects), the robot might misdetect landmarks temporarily during the run.
\footnote{
Designing perception mechanisms for obstacle detection is not a concern of this research; thus,
we circumvent the need for this module by sticking small markers with specific IDs on moving objects (doors or people's shoes).
}
To handle such a change in the obstacle map and replan accordingly, we use the ``lazy feedback evaluation'' method outlined in Algorithm \ref{alg:lazy-feedback-eval}.
\ph{Results on physical robots} Figure \ref{fig:closedDoor} shows a snapshot of our run when the robot detects the change signal, i.e., detects the door is in a different state than its recorded situation in the map. As a result, the robot updates the obstacle map as can be seen in Fig. \ref{fig:closedDoor} (door is closed). Accordingly, the robot replans; Figure \ref{fig:closedDoor} shows the feedback tree resulting from the replanning. The new feedback guides the robot through the front door since it detects the back door is closed. \axx{The full video of this run provides much more detail and is available in \cite{youtube-video-RHC-FIRM}}.
\ph{Comparison with the state-of-the-art} It is important to note that it is the particular structure of the proposed SLAP framework that makes such replanning feasible online. The graph structure of the underlying FIRM allows us to \textit{locally} change the collision probabilities in the environment without affecting the collision probability of the rest of the graph \axx{(i.e., properties of different edges on the graph are independent of each other; see Fig. \ref{fig:funnel-FIRM})}. Such a property is not present in any other state-of-the-art belief space planner, such as BRM (Belief Roadmap Method) \cite{Prentice09} or LQG-MP \cite{Berg11-IJRR}. In those methods, collision probabilities and costs on \textit{all} edges (the number of possible edges is exponential in the size of the underlying PRM) need to be re-computed. \axx{The general-purpose planner ABT \cite{kurniawati-isrr13} is also not applicable to this setting due to the size of the problem and the need to recompute collision probabilities. If an obstacle in the vicinity of the robot changes its position, it will change the pdf evolution on a tree branch near the tree root. Accordingly, the whole subtree (including collision probabilities) under that branch needs to be updated.}
\begin{figure}[h!]
\centering
\subfigure[]{\includegraphics[width=3.3in]{open_door.pdf}\label{fig:openDoor}}
\subfigure[]{\includegraphics[width=3.3in]{closed_door.pdf}\label{fig:closedDoor}}
\caption{(a) The back door is open at this snapshot. The feedback guides the robot toward the goal through the back door. (b) The back door is closed at this snapshot. The robot detects that the door is closed and updates the obstacle map (adds the door). Accordingly, the robot replans and computes the new feedback. The new feedback guides the robot through the front door.}
\label{fig:open-closed-door}
\end{figure}
\subsubsection{Deviations in robot's location} \label{subsec:KidnappedRobot}$ ~ $\\
In this subsection, we demonstrate how online replanning enables SLAP in the presence of large deviations in the robot's position. As the most severe form of this problem, we consider the \textit{kidnapped robot problem}. In the following, we discuss this problem and the challenges it introduces.
\ph{Kidnapped robot problem} An autonomous robot is said to be in the kidnapped situation if it is carried to an unknown location while it is in operation. The problem of recovering from this situation is referred to as the kidnapped robot problem \cite{Choset05}. This problem is commonly used to test a robot's ability to recover from catastrophic localization failures. This problem introduces different challenges such as (i) how to detect kidnapping, (ii) how to re-localize the robot, and (iii) how to control the robot to accomplish its goal. Our main focus, here, is on the third part, i.e., how to replan in belief space from the new point in the belief space after recovering from being kidnapped. This is in particular challenging because large deviations in the robot's pose can globally change the plan and optimal homotopy class of solutions. Therefore, the planner should be able to change the global plan online.
\axx{
\ph{Detecting the kidnapped situation} To embed the kidnapped situation into the framework in a principled way, we add a boolean observation $ z^{lost} $ to the observation space. Let us denote the innovation signal as $ \widetilde{z}_{k} = z_{k}-z^{-}_{k}$ (the difference between the actual observations and predicted observation). Recall that in our implementation, the observation at time step $ k $ from the $ j $-th landmark is the relative range and bearing of the robot to the $ j $-th landmark, i.e., $ ^{j}z_{k}=({^j}r_{k},{^j}\theta_{k}) $. The predicted version of this measurement is denoted by $ ^{j}z^{-}_{k}=({^j}r^{-}_{k},{^j}\theta^{-}_{k}) $. We monitor the following measures of the innovation signal:
\begin{align}
\widetilde{r}_{k} = \max_{j}(|{^j}r_{k}-{^j}r_{k}^{-}|),~~~\widetilde{\theta}_{k} = \max_{j}(d^{\theta}({^j}\theta_{k},{^j}\theta_{k}^{-}))
\end{align}
where $ d^{\theta}(\theta,\theta') $ returns the absolute value of the smallest angle that maps $ \theta $ onto $ \theta' $. Passing these signals through a low-pass filter, we filter out the outliers (temporary failures in the sensory reading). Denoting the filtered signals by $ \overline{r}_{k} $ and $ \overline{\theta}_{k} $, if both conditions $ \overline{r}_{k}<r_{max} $ and $ \overline{\theta}_{k}<\theta_{max} $ are satisfied, then $ z^{lost}=0 $, otherwise $ z^{lost}=1 $. When $ z^{lost}=0 $, we follow the current rollout planner.
However, $ z^{lost}=1 $ means that the robot is constantly observing high innovations, and thus it is not at the location where it believes itself to be (i.e., it is kidnapped). Once it is detected that the robot is kidnapped, we first replace the estimation covariance with a large covariance (to get an approximately uniform distribution over the state space).
}
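The kidnap-detection logic above can be sketched as follows (an illustrative sketch, not the authors' code; the low-pass filter gain is an assumed value, while the thresholds $r_{max}=1$ m and $\theta_{max}=50$ deg are the values used in our experiments):

```python
import numpy as np

R_MAX = 1.0                    # meters (value used in the experiments)
THETA_MAX = np.deg2rad(50.0)   # radians (value used in the experiments)
ALPHA = 0.9                    # low-pass filter coefficient (assumed value)

def angle_dist(a, b):
    """Absolute value of the smallest angle that maps a onto b."""
    return abs((a - b + np.pi) % (2 * np.pi) - np.pi)

class KidnapDetector:
    def __init__(self):
        self.r_bar = 0.0
        self.th_bar = 0.0

    def update(self, z, z_pred):
        """z, z_pred: sequences of (range, bearing) pairs, one per landmark.

        Returns z_lost (0 or 1)."""
        r_tilde = max(abs(zi[0] - zpi[0]) for zi, zpi in zip(z, z_pred))
        th_tilde = max(angle_dist(zi[1], zpi[1]) for zi, zpi in zip(z, z_pred))
        # First-order low-pass filter rejects transient sensor failures.
        self.r_bar = ALPHA * self.r_bar + (1 - ALPHA) * r_tilde
        self.th_bar = ALPHA * self.th_bar + (1 - ALPHA) * th_tilde
        return int(self.r_bar >= R_MAX or self.th_bar >= THETA_MAX)
```

A single outlier measurement barely moves the filtered signals, whereas a persistent mismatch between predicted and actual observations drives them past the thresholds and raises $z^{lost}$.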
\axx{
\ph{Replanning from kidnapped situation}
The rollout-FIRM algorithm can inherently handle such replanning. In other words, the kidnapped situation, i.e., a deviated mean and a very large covariance, will simply be treated as a new initial belief and a new query, and FIRM rollout will create the best macro-action (funnel) on the fly and execute it. Note that the belief deviation might change the optimal homotopy class, so the plan should be updated globally, which makes this challenging for many POMDP planners. Using the proposed rollout planner, the robot just needs to go to a neighboring node from this deviated point. Since the underlying FIRM graph is spread over the belief space, the only required computation is to evaluate the cost of edges that connect the new start point to the neighboring FIRM nodes.
}
\axx{
To get safer plans when replanning from $ z^{lost}=1 $ situation, we update the rollout planning mechanism slightly: In addition to the new initial belief, we add one more belief node to the graph, as described below.
Consider a new kidnapped initial belief $ b_{0}\equiv(\widehat{x}^{+}_{0},P_{0}) $. Let $ \delta $ denote the distance between the mean of this new belief $ \widehat{x}^{+}_{0} $ and the closest mean among the graph nodes. If $ z^{lost}=1 $ and $ \delta $ is not small, the mean belief is far from the actual robot position, and moving the robot $ \delta $ meters based on a wrong belief might lead to collision. To ensure that the proposed rollout-based planner can take this case into account, we add a FIRM node $ b' $ to the graph at (or very close to) the configuration point $ v = \widehat{x}^{+}_{0} $.
In such a case, starting from a deviated belief $ b_{0} $ with large covariance, the rollout planner will take the robot to $ b' $ first, which is a belief with the same mean but smaller covariance (i.e., turning in place or taking very small velocities). The reason is that moving the mean of a distribution with large covariance would lead to a high collision probability.
}
\ph{Results on physical robots} Figure \ref{fig:kidnapping} shows a snapshot of a run that contains two kidnappings and illustrates the robustness of the planning algorithm to the kidnapping situation. The start and goal positions are shown in Fig. \ref{fig:kidnapping}. The feedback tree (shown in yellow) guides the robot toward the goal through the front door. However, before reaching the goal point, the robot is kidnapped in the hallway (see Fig. \ref{fig:kidnapping}) and placed in an unknown location within room 407 (see Fig. \ref{fig:kidnapping}). In our implementations, we consider $ r_{max}=1 $ (meters) and $\theta_{max}=50 $ (degrees). The first jump in Fig. \ref{fig:innovation} shows this deviation. Once the robot recovers from being kidnapped (i.e., when both innovation signals in Fig. \ref{fig:innovation} fall below their corresponding thresholds), replanning from the new point is performed. This time, the feedback guides the robot toward the goal point from within room 407. However, again before the robot reaches the goal point, it is kidnapped and placed in an unknown location (see Fig. \ref{fig:kidnapping}). The second jump in the innovation signals in Fig. \ref{fig:innovation} corresponds to this kidnapping.
\begin{figure}[h!]
\centering
\includegraphics[width=3.2in]{Kidnapping.pdf}
\caption{This figure shows the setup for the experiment containing two kidnappings.}
\label{fig:kidnapping}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{Innovation_curve.pdf}
\caption{This figure shows the innovation signals $ \bar{r}_{k} $ and $ \bar{\theta}_{k} $ during this run, along with the thresholds $ r_{max} $ and $ \theta_{max} $ (dashed red lines).}
\label{fig:innovation}
\end{figure}
\subsection{Longer and more complex experiments: Robustness to changing goals, obstacles, and to large deviations} \label{subsec:longer experiment}
In this section, we emphasize the ability of the system to perform long-term SLAP, which consists of visiting several goals. The replanning ability allows us to change the plan online as the goal location changes. In this experiment, we consider a scenario in which the user(s) submit a new goal for the robot to reach after it reaches its currently assigned goal. While the robot needs to change the plan each time a new goal is submitted, it frequently encounters changes in the obstacle map \axx{(open/closed doors and moving people)} as well as intermittent sensor failures and kidnapping situations. Thus, the ability to simultaneously replan online while localizing is necessary to cope with these changes. \axx{The video in \cite{youtube-video-RHC-FIRM} shows the robot's performance in this long and complex scenario.}
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{ComplexMap.pdf}
\caption{This figure shows the setup for the longer experiment with a sequence of goals as well as intermediate events and changes in the environment map.}
\label{fig:complex-map}
\end{figure}
In the following, we provide an itemized description of the specific steps involved in this run based on Fig. \ref{fig:complex-map}. Also, we discuss different changes in the environment with which the robot needs to cope along the way to accomplish its goals. \axx{All of the following steps can be seen more clearly in the accompanying video \cite{youtube-video-RHC-FIRM}.}
\textbf{1)} The robot begins at the starting point shown in Fig. \ref{fig:complex-map} and aims to reach goal 1 as shown in Fig. \ref{fig:complex-map}. Goal 1 is inside room 407. FIRM returns a feedback tree that guides the robot through the back door of 407.
\textbf{2)} The robot goes through the narrow passage introduced by the back door (it is half-open). However, before reaching the goal it gets kidnapped (the first kidnap point as shown in Fig. \ref{fig:complex-map}). The robot is placed in an unknown location (shown in Fig. \ref{fig:complex-map} by the \textit{first placement point}.)
\textbf{3)} Observing new landmarks, the robot detects that it has been kidnapped. Accordingly it adds a new node to the graph and replans online. As a result, the feedback guides the robot toward the goal point through the back door again.
\textbf{4)} However, in the meantime the back door has been closed. When the robot reaches the vicinity of the back door, it detects that it is closed. Therefore, it updates its map by adding an obstacle at the doorway. Note that the robot will open the door (remove the obstacle) in its map after the forgetting time of 10 minutes. Accordingly, the robot replans a feedback tree that guides it through the front door toward the goal point.
\textbf{5)} Along the way, people are moving in the hallway and inside the 407 area. Thus, the robot replans accordingly as it encounters the people. Moving people are reactively avoided and the standing people and static obstacles such as a trash-can (see Fig. \ref{fig:complex-map}) temporarily get added to the map as obstacles. Replanning several times to cope with such changes, the robot goes through the front and inner doors and reaches the goal point inside the 407 area.
\textbf{6)} After reaching the goal point, another goal (second goal in Fig. \ref{fig:complex-map}) is assigned to the robot.
\textbf{7)} Replanning for reaching this goal leads to a feedback tree that guides the robot through the inner door, and front door, toward goal 2.
\textbf{8)} However, as the robot reaches the vicinity of the inner door, it detects that the door has been closed. Therefore, it updates its map and replans accordingly. The replanning leads to a feedback tree that guides the robot toward goal 2 through the back door. Again, along the way the robot encounters moving people in office 407 and in the hallway.
\textbf{9)} However, before reaching the goal point, the robot gets kidnapped at the ``second kidnap point'' as shown in Fig. \ref{fig:complex-map}. The robot is placed at a far-away point (the ``second placement point''). Once the robot detects that it is kidnapped, it replans and moves more slowly to gather information. Detecting landmarks, it reduces its uncertainty and continues toward the goal point.
\textbf{10)} After reaching the goal point, the next goal (i.e., third goal) is assigned to the robot (see Fig. \ref{fig:complex-map}). Replanning for this goal, leads to a feedback that guides the robot through the front door.
\textbf{11)} However, when the robot reaches the front door, it encounters a person standing in the doorway. Accordingly, it replans and decides to go through the back door.
\textbf{12)} On the way to the back door, it is again displaced at the ``third kidnap point'' and placed at the ``third placement point''.
\textbf{13)} This time, due to the forgetting time, the replanning leads to a path through the front door (the person is not there any more).
\textbf{14)} Again, the robot follows the feedback and achieves its goal.
This long and complicated scenario demonstrates how simultaneous planning can lead to a robust behavior in the presence of intermittent model discrepancies, changes in the environment, and large deviations in the robot's location. It is worth noting that online replanning in belief space is a challenge for the state-of-the-art methods in belief space as they mainly rely on structures that depend on the system's initial belief. Hence, when the system's localization pdf encounters a significant deviation, replanning from the new localization pdf requires the structure to be re-built, which is not a feasible operation online. However, constructing a query-independent graph (the underlying FIRM) allows us to embed it in a replanning scheme such as the proposed rollout policy technique and perform online replanning dynamically to enable SLAP.
\axx{
\section{Method limitations and Future work} \label{sec:futureWork}
In this section, we recap the method assumptions and limitations mentioned in previous sections.
\ph{Restricted class of POMDPs}
As discussed in the previous sections, it is worth noting that the proposed method is not a general-purpose POMDP solver. It provides a solution for a class of POMDP problems (including SLAP) where one can design closed-loop controllers with a funneling behavior in belief space. In the instantiation of FIRM proposed in this paper, designing funnels requires knowledge of the closed-form dynamics and sensor model. Also, the system needs to be locally linearizable at belief nodes, and the noise is assumed to be Gaussian. Further, designing a funnel/controller in belief space requires the uncertainty to be over the part of the state space that is controllable (e.g., the ego-vehicle). For example, the proposed SLAP solution is not applicable to two-player games, where there is no direct control over the opponent's motion or sensing.
\ph{Combining FIRM with general-purpose online solvers}
Most general-purpose tree-based POMDP solvers can be combined with FIRM, where the online planner creates and searches the tree and uses FIRM as the approximate policy (and cost-to-go) beyond the tree horizon. In particular, when the problem at hand does not satisfy the above-mentioned assumptions, one can approximate it with a problem that satisfies those assumptions, create a FIRM graph, and use it as the base policy. In the vicinity of the current belief, however, one can use general-purpose online POMDP solvers, such as DESPOT \cite{somani2013despot}, ABT \cite{kurniawati-isrr13}, POMCP \cite{silver2010monte}, or AEMS \cite{Ross_2007_AEMS}, that act on the exact problem.
}
\axx{
\ph{Dealing with dynamic environments}
In this paper, we assume no prior knowledge about the environment dynamics. As a result, we use a simple model for new obstacles: they enter the map with a large forgetting time of 10 minutes (with the exception of moving people, which do not enter the map and are avoided reactively). A more sophisticated and efficient solution can be obtained by learning and modeling changes over time \cite{Marthi_RSS12_PR2} or by using some prior on the motion of moving objects. Incorporating such knowledge into the proposed planning framework is a subject of future work.
}
\section{Conclusions} \label{sec:conclusions}
In this paper, we proposed a rollout-policy based algorithm for online replanning in belief space to enable SLAP. The proposed algorithm is able to switch between different homotopy classes of trajectories in real-time. It also bypasses the belief stabilization process of the state-of-the-art FIRM framework. A detailed analysis was presented, which shows that the method can recover the performance and success probability that was traded-off in the stabilization phase of FIRM. Further, by re-using the costs and transition probabilities computed in the offline construction phase, the method is able to enable SLAP, via online replanning, in the presence of changes in the environment and large deviations in the robot's pose. Via extensive simulations we demonstrated performance gains when using the rollout-based belief planner. As a key focus of the work, we also demonstrated the results of the proposed belief space planner on a physical robot in a real-world indoor scenario. Our experiments show that the proposed method is able to perform SLAP and guide the robot to its target locations while dynamically replanning in real-time and reacting to changes in the obstacles and deviations in the robot's state. Such replanning is an important ability for practical systems where stochasticity in the system's dynamics and measurements can often result in failure. Hence, we believe that the proposed SLAP solution takes an important step towards bringing belief space planners to physical applications and advances long-term safe autonomy for mobile robots.
\bibliographystyle{plainnat}
\section{Introduction}
Information extraction addresses the inference of formal knowledge (typically, entities and relations) from text. The field has recently experienced a significant boost due to the development of neural approaches~\cite{zeng:2014:rel_cnn, zhang:2015:rel_pos, kumar:2017:rel_survey}. This has led to two shifts in research: First, while earlier work has focused on sentence level relation extraction~\cite{hendrickx:2010:semeval,han:2018:fewrel,zhang:2017:tacred}, more recent models extract facts from longer text passages (document-level). This enables the detection of inter-sentence relations that may only be implicitly expressed and require reasoning across sentence boundaries. Current models in this area do not rely on mention-level annotations and aggregate signals from multiple mentions of the same entity.
The second shift has been towards multi-task learning: While earlier approaches tackle entity mention detection and relation extraction with separate models, recent joint models address these tasks at once~\cite{bekoulis:2018:multi_head,nguyen:2019:biaffine_attention,wadden:2019:dygie++}. This does not only improve simplicity and efficiency, but is also commonly motivated by the fact that tasks can benefit from each other: For example, knowledge of two entities' types (such as {\it person}+{\it organization}) can boost certain relations between them (such as {\it ceo\_of}).
\begin{figure}
\centering
\framebox{
\begin{tabular}{ p{7cm} }
The \bluebf{Portland Golf Club} is a private golf club in the northwest \redbf{United States}, in suburban Portland, Oregon. The \bluebf{PGC} is located in the unincorporated \greenbf{Raleigh Hills} area of eastern Washington County, southwest of downtown Portland and east of Beaverton. \bluebf{PGC} was established in the winter of \textbf{1914}, when a group of nine businessmen assembled to form a new club after leaving their respective clubs. The \bluebf{golf club} hosted the Ryder Cup matches of 1947, the first renewal in a decade, due to World War II. The \redbf{U.S.} team defeated Great Britain 11 to 1 in wet conditions in early November.
\end{tabular}
}
\caption{Our goal is to perform end-to-end entity-level relation extraction on whole documents. We extract entity mentions (\enquote{PGC}), entity clusters (\{Portland Golf Club, PGC, golf club\}), their types ($ORG$) and relations to other entities in the document, such as (\{Portland Golf Club, PGC, golf club\}$_{ORG}$, \emph{inception}, \{1914\}$_{TIME}$), with a single, joint model. Note that document-level relation extraction requires the aggregation of relevant information from multiple sentences, such as in (\{Raleigh Hills\}$_{LOC}$, \emph{country}, \{United States, U.S.\}$_{LOC}$). Other entities in the example document are omitted for clarity.}
\label{fig:doc_example}
\end{figure}
We follow this line of research, and present \name{}\footnote{The code for reproducing our results is available at \href{https://github.com/lavis-nlp/jerex}{https://github.com/lavis-nlp/jerex}.} (\enquote{\textbf{J}oint \textbf{E}ntity-Level \textbf{R}elation \textbf{Ex}tractor}), a novel approach for joint information extraction.
\name{} is to our knowledge the first approach that combines a multi-task model with entity-level relation extraction: In contrast to previous work, our model jointly learns relations and entities without annotations on mention level, but extracts document-level entity clusters and predicts relations between those clusters using a \emph{multi-instance learning} (MIL)~\cite{dietterich:1997:mil, riedel:2010:nyt, surdeanu:2012:mil_multi_label} approach.
The model is trained jointly on mention detection, coreference resolution, entity classification and relation extraction (Figure~\ref{fig:doc_example}).
While we follow best practices for the first three tasks, we propose a novel representation for relation extraction, which combines global entity-level representations with localized mention-level ones.
We present experiments on the DocRED~\cite{yao:2019:docred} dataset for entity-level relation extraction. Though arguably simpler than recent graph propagation models~\cite{nan:2020:bert_lsr} or approaches relying on specialized pre-training~\cite{ye:2020:coref_bert}, our approach achieves state-of-the-art results.
We also report the first results for end-to-end relation extraction on DocRED as a reference for future work. In ablation studies we show (1) that combining global and local representations is beneficial, and (2) that joint training appears to be on par with separate per-task models.
\section{Related Work}
Relation extraction is one of the most studied natural language processing (NLP) problems to date. Most approaches focus on classifying the relation between a given entity mention pair. Here, various neural network based models, such as RNNs~\cite{zhang:2015:rel_pos}, CNNs~\cite{zeng:2014:rel_cnn}, recursive neural networks~\cite{socher:2012:mv_rnn} or Transformer-type architectures~\cite{wu:2019:semeval_bert}, have been investigated. However, these approaches are usually limited to local, intra-sentence, relations and are not suited for document-level, inter-sentence, classification. Since complex relations require the aggregation of information distributed over multiple sentences, document-level relation extraction has recently drawn attention (e.g.~\citealt{quirk:2017:distant_intra_sentence_re,verga:2018:multi_instance,gupta:2019:intra_inter_re,yao:2019:docred}). Still, these models assume entity mentions to be given. While progress in the joint detection of entity mentions and intra-sentence relations has been made~\cite{gupta:2016:table_filling, bekoulis:2018:multi_head, luan:2018:scierc}, the combination of coreference resolution with relation extraction for entity-level reasoning in a single, jointly-trained, model is largely unexplored.
\paragraph{Document-level Relation Extraction} Recent work on document-level relation extraction directly learns relations between entities (i.e. clusters of mentions referring to the same entity) within a document, requiring no relation annotations on mention level. To gather relevant information across sentence boundaries, multi-instance learning has successfully been applied to this task. In multi-instance learning, the goal is to assign labels to bags (here, entity pairs), each containing multiple instances (here, specific mention pairs). \citet{verga:2018:multi_instance} apply multi-instance learning to detect domain-specific relations in biological text. They compute relation scores for each mention pair of two entity clusters and aggregate these scores using a smooth max-pooling operation. \citet{christopoulou:2019:dots} and~\citet{sahu:2019:multi_instance_graph} improve upon \citet{verga:2018:multi_instance} by constructing document-level graphs to model global interactions.
While the aforementioned models tackle very specific domains with few relation types, the recently released DocRED dataset~\cite{yao:2019:docred} enables general-domain research on a rich relation type set (96 types). \citet{yao:2019:docred} provide several baseline architectures, such as CNN-, LSTM- or Transformer-based models, that operate on global, mention averaged, entity representations. \citet{wang:2019:two-step-bert} use a two-step process by identifying related entities in a first step and classifying them in a second step. \citet{Tang:2020:hin} employ a hierarchical inference network, combining entity representations with attention over individual sentences to form the final decision.
\citet{nan:2020:bert_lsr} apply a graph neural network~\cite{kipf:2017:gcn} to construct a document-level graph of mention, entity and meta-dependency nodes.
The current state of the art is the CorefRoBERTa model proposed by \citet{ye:2020:coref_bert}, a RoBERTa~\cite{liu:2019:roberta} variant that is pre-trained on detecting co-referring phrases. They show that replacing RoBERTa with CorefRoBERTa improves performance on DocRED.
All these models have in common that entities and their mentions are both assumed to be given. In contrast, our approach extracts mentions, clusters them to entities, and classifies relations jointly.
\newpage
\paragraph{Joint Entity Mention and Relation Extraction}
Prior joint models focus on the extraction of mention-level relations in sentences. Here, most approaches detect mentions by BIO (or BILOU) tagging and pair detected mentions for relation classification, e.g.~\cite{gupta:2016:table_filling, zhou:2017:joint_hybrid, zheng:2017:joint_novel_tagging, bekoulis:2018:multi_head, nguyen:2019:biaffine_attention, miwa:2016:stacked_rnn}. However, these models are not able to detect relations between overlapping entity mentions. Recently, so-called span-based approaches~\cite{lee:2017:span_coreference} were successfully applied to this task~\cite{luan:2018:scierc, eberts:2020:spert}: By enumerating each token span of a sentence, these models handle overlapping mentions by design. \citet{wolf:2018:hierarch_multi_task} train a multi-task model on named entity recognition, coreference resolution and relation extraction. By adding coreference resolution as an auxiliary task, \citet{luan:2019:span_graphs} propagate information through coreference chains. Still, these models rely on mention-level annotations and only detect intra-sentence relations between mentions, whereas our model explicitly constructs clusters of co-referring mentions and uses these clusters to detect complex entity-level relations in long documents using multi-instance reasoning.
\section{Approach} \label{sec:approach}
\begin{figure*}[ht]
\centering
\includegraphics[width=1.\textwidth]{approach.pdf}
\caption{Our approach combines entity mention localization (a), coreference resolution (b), entity classification (c) and relation classification (d) within a joint multi-task model, which is trained jointly on entity-level relation extraction. The sub-components share a single BERT encoder for document encoding. Each input document is only encoded once (\emph{single-pass}) to speed-up training/inference, with sub-components operating on the contextualized embeddings. Both entity classification and relation classification use multi-instance learning to synthesize relevant signals scattered throughout the input document.}
\label{fig:approach}
\end{figure*}
\name{} processes documents containing multiple sentences and extracts entity mentions, clusters them to entities, and outputs types and relations on entity level. \name{} consists of four task-specific components, which are based on the same encoder and mention representations, and are trained in a joint manner. An input document is first tokenized, yielding a sequence of $n$ byte-pair encoded (BPE) \cite{sennrich:2016:bpe} tokens. We then use the pre-trained Transformer-type network BERT \cite{devlin:2018:bert} to obtain a contextualized embedding sequence $(\mathbf{e}_1, \mathbf{e}_2, ... \mathbf{e}_n)$ of the document. Since our goal is to perform end-to-end relation extraction, neither entities nor their corresponding mentions in the document are known during inference.
\subsection{Model Architecture}
We suggest a multi-level model: First, we localize all entity mentions in the document (a) by a \emph{span-based} approach~\cite{lee:2017:span_coreference}. After this,
detected mentions are clustered into entities by \emph{coreference resolution} (b). We then classify the type (such as \emph{person} or \emph{company}) of each entity cluster by a fusion over local mention representations (\emph{entity classification}) (c). Finally, relations between entities are extracted by a reasoning over mention pairs (d). The full model architecture is illustrated in Figure~\ref{fig:approach}.
\paragraph{(a) Entity Mention Localization} Here our model performs a search over all document token subsequences (or \emph{spans}). In contrast to BIO/BILOU-based approaches for entity mention localization, span-based approaches are able to detect overlapping mentions. Let $s := (\mathbf{e}_i, \mathbf{e}_{i+1},$ $ ..., \mathbf{e}_{i+k})$ denote an arbitrary candidate span. Following~\citet{eberts:2020:spert}, we first obtain a span representation by max-pooling the span's token embeddings:
\begin{equation}
\label{eq:spanrepr}
\mathbf{e}(s) := \text{max-pool}(\mathbf{e}_i, \mathbf{e}_{i+1}, ..., \mathbf{e}_{i+k})
\end{equation}
Our \emph{mention classifier} takes the span representation $\mathbf{e}(s)$ as well as a span size embedding $\mathbf{w}_{k+1}^s$ \cite{lee:2017:span_coreference} as meta information. We perform binary classification and use a sigmoid activation to obtain a probability for $s$ to constitute an entity mention:
\begin{equation}
\label{eq:spanclassifier}
\hat{y}^s = \sigma \Big(\text{FFNN}^s(\mathbf{e}(s) \circ \mathbf{w}_{k+1}^s) \Big)
\end{equation}
where $\circ$ denotes concatenation and $\text{FFNN}^s$ is a two-layer feedforward network with an inner ReLU activation.
Span classification is carried out on all token spans up to a fixed length $L$. We apply a filter threshold $\alpha^s$ to the confidence scores, retaining all spans with $\hat{y}^s \geq \alpha^s$ and obtaining a set $\mathcal{S}$ of spans supposedly constituting entity mentions.
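This enumeration-and-filtering step can be sketched as follows. The snippet is a minimal NumPy illustration, not the trained model: \texttt{score\_fn} is a hypothetical stand-in for the sigmoid-activated $\text{FFNN}^s$ of Equation~\eqref{eq:spanclassifier}, and the span size embedding is omitted for brevity.

```python
import numpy as np

def enumerate_spans(n_tokens, max_len):
    """All candidate spans (i, j), with inclusive end j and length <= max_len."""
    return [(i, j) for i in range(n_tokens)
            for j in range(i, min(i + max_len, n_tokens))]

def span_representation(embeddings, span):
    """Max-pool the token embeddings of a span."""
    i, j = span
    return embeddings[i:j + 1].max(axis=0)

def filter_mentions(embeddings, max_len, score_fn, alpha_s):
    """Retain spans whose mention score reaches the filter threshold."""
    spans = enumerate_spans(len(embeddings), max_len)
    return [s for s in spans
            if score_fn(span_representation(embeddings, s)) >= alpha_s]
```

Enumerating only spans up to a fixed length $L$ keeps the candidate set linear in the document length, which is what makes the exhaustive search tractable.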
\paragraph{(b) Coreference Resolution} Entity mentions referring to the same entity (e.g. \enquote{Elizabeth II.} and \enquote{the Queen}) can be scattered throughout the input document. To later extract relations on entity level, local mentions need to be grouped to document-level entity clusters by coreference resolution. We use a simple mention-pair~\cite{soon:2001:coref_mention_pair} model: Our component classifies pairs $(s_1,s_2) \in \mathcal{S} {\times} \mathcal{S}$ of detected entity mentions as coreferent or not, by combining
the span representations $\mathbf{e}(s_1)$ and $\mathbf{e}(s_2)$ with an edit distance embedding $\mathbf{w}_{d}^c$: We compute the Levenshtein distance~\cite{levenshtein:1966:levenshtein} between spans $d := D(s_1,s_2)$ and use a learned embedding $\mathbf{w}_{d}^c$.
A mention pair representation $\mathbf{x}^c$ is constructed by concatenation:
\begin{equation}
\label{eq:corefrepr}
\mathbf{x}^c := \mathbf{e}(s_1) \circ \mathbf{e}(s_2) \circ \mathbf{w}_{d}^c
\end{equation}
Similar to span classification, we conduct binary classification using a sigmoid activation, obtaining a similarity score between the two mentions:
\begin{equation}
\label{eq:corefscore}
\hat{y}^c := \sigma \Big( \text{FFNN}^c(\mathbf{x}^c) \Big)
\end{equation}
where $\text{FFNN}^c$ follows the same architecture as $\text{FFNN}^s$.
We construct a similarity matrix $C \in \mathbb{R}^{m \times m}$ (with $m$ referring to the document's overall number of mentions) containing the similarity scores between every mention pair. By applying a filter threshold $\alpha^c$, we cluster mentions using complete linkage~\cite{muellner:2011:clustering}, yielding a set $\mathcal{E}$ containing clusters of entity mentions. We refer to these clusters as \emph{entities} or \emph{entity clusters} in the following.
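The clustering step can be illustrated with the following plain-Python sketch. It is a simplified greedy variant of complete linkage (the actual implementation follows~\citealt{muellner:2011:clustering}): two clusters are merged only if \emph{every} cross-pair similarity reaches the threshold $\alpha^c$.

```python
def complete_linkage_clusters(sim, alpha_c):
    """Cluster mention indices from a pairwise similarity matrix.
    Two clusters are merged only if *every* cross-pair similarity
    reaches the threshold alpha_c (complete linkage, greedy variant)."""
    clusters = [{i} for i in range(len(sim))]
    merged = True
    while merged:
        merged = False
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                if all(sim[i][j] >= alpha_c
                       for i in clusters[a] for j in clusters[b]):
                    clusters[a] |= clusters.pop(b)
                    merged = True
                    break
            if merged:
                break
    return clusters
```

Complete linkage is a conservative choice here: a single dissimilar mention pair is enough to keep two clusters apart.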
\paragraph{(c) Entity Classification} Next, we map each entity to a type such as $location$ or $person$: We first fuse the mention representations of an entity cluster $\{s_1, s_2, ..., s_t\} \in \mathcal{E}$ by max-pooling:
\begin{equation}
\label{eq:entityrepr}
\mathbf{x}^e := \text{max-pool}(\mathbf{e}(s_1), \mathbf{e}(s_2), ..., \mathbf{e}(s_t))
\end{equation}
Entity classification is then carried out on the entity representation $\mathbf{x}^e$, allowing the model to draw information from mentions spread across different parts of the document. $\mathbf{x}^e$ is fed into a softmax classifier, yielding a probability distribution over the entity types:
\begin{equation}
\label{eq:corefclassifier}
\hat{y}^e := \text{softmax} \Big( \text{FFNN}^e(\mathbf{x}^e) \Big)
\end{equation}
We assign the highest scored type to the entity.
\paragraph{(d) Relation Classification}
Our final component assigns relation types to pairs of entities.
Note that the directionality, i.e. which entity constitutes the head/tail of the relation, needs to be inferred, and that the input document can express multiple relations between different mentions of the same entity pair. Let $\mathcal{R}$ denote a set of pre-defined relation types. The relation classifier processes each entity pair $(e_1, e_2) \in \mathcal{E} {\times} \mathcal{E}$, estimating which, if any, relations from $\mathcal{R}$ are expressed between these entities. To do so, we score every candidate triple ($e_1{,} r_i{,} e_2$),
expressing that $e_1$ (as head) is in relation $r_i$ with $e_2$ (as tail). We design two types of relation classifiers: A \emph{global relation classifier}, serving as a baseline, which consumes the entity cluster representations $\mathbf{x}^e$, and a \emph{multi-instance classifier}, which assumes that certain entity mention pairs support specific relations and synthesizes this information into an entity-pair level representation.
\paragraph{Global Relation Classifier (GRC)} The global classifier builds upon the max-pooled entity cluster representations $\mathbf{x}_1^e$ and $\mathbf{x}_2^e$ of an entity pair $(e_1, e_2)$. We further embed the corresponding entity types ($\mathbf{w}_1^e$ / $\mathbf{w}_2^e$), which was shown to be beneficial in prior work~\cite{yao:2019:docred}, and compute an entity-pair representation by concatenation:
\begin{equation} \label{eq:entitypairrepr}
\mathbf{x}^p := \Big( \mathbf{x}_1^e \circ \mathbf{w}_1^e \Big) \circ \Big( \mathbf{x}_2^e \circ \mathbf{w}_2^e \Big)
\end{equation}
This representation is fed into a 2-layer FFNN (similar to FFNN$^s$), mapping it to the number of relation types $\#\mathcal{R}$. The final layer features sigmoid activations for multi-label classification and assigns any relation type exceeding a threshold $\alpha^r$:
\begin{equation}
\label{eq:globalrelclassifier} \hat{y}^r := \sigma \Big( \text{FFNN}^p(\mathbf{x}^p) \Big)
\end{equation}
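The GRC's multi-label decision rule can be sketched as follows; \texttt{ffnn} is a hypothetical callable standing in for the trained $\text{FFNN}^p$, returning one logit per relation type.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grc_predict(x_e1, w_e1, x_e2, w_e2, ffnn, alpha_r, relation_types):
    """Global relation classifier: concatenate the entity-pair
    representation and assign every relation type whose sigmoid
    score reaches the threshold alpha_r (multi-label)."""
    x_p = np.concatenate([x_e1, w_e1, x_e2, w_e2])
    scores = sigmoid(ffnn(x_p))  # one score per relation type
    return [r for r, s in zip(relation_types, scores) if s >= alpha_r]
```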
\paragraph{Multi-instance Relation Classifier (MRC)} In contrast to the global classifier (GRC), the multi-instance relation classifier operates on mention level: Since only entity-level labels are available, we treat entity mention pairs as latent variables and estimate relations by a fusion over these mention pairs. For any pair of entity clusters $e_1{=}\{s_1^1, s_2^1, ..., s_{t_1}^1\}$ and $e_2{=}\{s_1^2, s_2^2, ..., s_{t_2}^2\}$, we compute a mention-pair representation for any $(s_1, s_2) {\in} e_1 {\times} e_2$. This representation is obtained by concatenating the global entity embeddings (Equation \eqref{eq:entityrepr}) with the mentions' local span representations (Equation \eqref{eq:spanrepr})
\begin{equation}
\label{eq:mentioninput1}
\mathbf{u}(s_1,s_2) := \Big( \mathbf{e}(s_1) \circ \mathbf{x}^e_1 \Big) \circ \Big( \mathbf{e}(s_2) \circ \mathbf{x}^e_2 \Big)
\end{equation}
Further, as we expect close-by mentions to be stronger indicators of relations, we add meta embeddings for the {\it distances} $d_s$,$d_t$ between the two mentions, both in sentences ($d_s$) and in tokens ($d_t$). In addition, following \citet{eberts:2020:spert}, the max-pooled context between the two mentions ($\mathbf{c}(s_1, s_2)$) is added. This \emph{localized context} provides a more focused view on the document and was found to be especially beneficial for long, and therefore noisy, inputs:
\begin{equation}
\label{eq:mentioninput2}
\mathbf{u'}(s_1{,}s_2) {:=} \mathbf{u}(s_1{,}s_2) \circ \mathbf{c}(s_1{,} s_2) \circ \mathbf{w}_{d_s}^r \circ
\mathbf{w}_{d_t}^{r'}
\end{equation}
This mention-pair representation is mapped by a single feed-forward layer to the original token embedding size ($768$):
\begin{equation}
\label{eq:mentionpairrepr}
\mathbf{u''}(s_1, s_2) := \text{FFNN}^p(\mathbf{u'}(s_1, s_2))
\end{equation}
These focused representations are then combined by max-pooling:
\begin{equation} \label{eq:fusedrepr}
\mathbf{x}^r {=} \text{max-pool}(\{\mathbf{u''}(s_1, s_2) | s_1{\in}e_1{,} s_2{\in} e_2\})
\end{equation}
Akin to GRC, we concatenate $\mathbf{x}^r$ with entity type embeddings $\mathbf{w}_1^e/\mathbf{w}_2^e$ and apply a two-layer FFNN (again, similar to FFNN$^s$).
Note that for both classifiers (GRC/MRC), we need to score both ($e_1$, $r_i$, $e_2$) and ($e_2$, $r_i$, $e_1$) to infer the direction of asymmetric relations.
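The multi-instance fusion at the heart of the MRC can be sketched as follows; \texttt{pair\_repr} and \texttt{ffnn\_p} are hypothetical stand-ins for the mention-pair representation $\mathbf{u'}$ and the projection layer of Equation~\eqref{eq:mentionpairrepr}.

```python
import numpy as np

def mrc_fuse(e1_spans, e2_spans, pair_repr, ffnn_p):
    """Multi-instance fusion: compute a representation for every
    mention pair of the two entity clusters and max-pool them
    into a single entity-pair representation x^r."""
    reps = [ffnn_p(pair_repr(s1, s2))
            for s1 in e1_spans for s2 in e2_spans]
    return np.max(np.stack(reps), axis=0)
```

Since max-pooling is permutation-invariant, the fused representation does not depend on the order in which mention pairs are enumerated.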
\subsection{Training} \label{sec:training}
\npdecimalsign{.}
\nprounddigits{2}
\begin{table*}
\sisetup{round-mode=places,detect-weight}
\centering
\begin{tabular}{c l S S S S S S}
\toprule
& & \multicolumn{3}{c}{{\textbf{Joint Model}$^*$}} & \multicolumn{3}{c}{{\textbf{Pipeline}}} \\ \cmidrule(lr){3-5} \cmidrule(lr){6-8}
\textbf{Level} & \textbf{Task} & {Precision} & {Recall} & {F1} & {Precision}&{Recall}&{F1} \\ \midrule
(a) & Mention Localization & 93.28913423754554 & 92.70222023875934 & 92.99438038342063 & 92.87034924830034 & 92.45788240544462 & 92.66362618180385 \\
(b) & Coreference Resolution & 82.51984790001059 & 83.05892738909733 & 82.78786712808218 & 82.11101710467351 & 82.66166409181197 & 82.38517453455762 \\
(c) & Entity Classification & 79.8436351712385 & 80.35898190378107 & 80.09985709880569 & 78.9993913237506 & 79.52331911137267 & 79.2602527755901 \\
\multirow{2}{*}{(d)} & Relation Classification & 42.75927494795886 & 38.247410947991355 & 40.375261254251676 & 43.606748210084604 & 37.50312962330716 & 40.32059813379855 \\
& Relation Classification (GRC) & 38.68889837573745 & 37.31876635939456 & 37.9813724906379 & 39.06633073375847 & 36.43564356435643 & 37.69679991072489 \\
\bottomrule
\end{tabular}
\caption{Test set evaluation results of our multi-level end-to-end system \name{} on DocRED (using the end-to-end split). We either train the model jointly on all four sub-components (left) or arrange separately trained models in a pipeline (right)
($^*$ joint results are for MRC except for the last row).}
\label{table:joint_results}
\end{table*}
We perform supervised multi-task training, where each training document features ground truth for all four subtasks (mention localization, coreference resolution, as well as entity and relation classification). We optimize the joint loss of all four components:
\begin{equation}
\mathcal{L} := \beta_s \cdot \mathcal{L}^s + \beta_c \cdot \mathcal{L}^c + \beta_e \cdot \mathcal{L}^e + \beta_r \cdot \mathcal{L}^r
\end{equation}
$\mathcal{L}^s$, $\mathcal{L}^c$ and $\mathcal{L}^r$ denote the binary cross entropy losses of the span, coreference and relation classifiers. We use a cross entropy loss ($\mathcal{L}^e$) for the entity classifier. A batch is formed by drawing positive and negative samples from a single document for all components. We found such a {\it single-pass approach} to offer significant speed-ups both in learning and inference:
\begin{itemize}
\item Entity mention localization: We utilize all ground truth entity mentions $\mathcal{S}^{gt}$ of a document as positive training samples, and sample a fixed number $N_s$ of random non-mention spans up to a pre-defined length $L_s$ as negative samples. Note that we only train and evaluate on the full tokens according to the dataset's tokenization, i.e. not on byte-pair encoded tokens, to limit computational complexity. Also, we only sample intra-sentence spans as negative samples. Since we found intra-mention spans to be especially challenging (\enquote{New York} versus \enquote{New York City}), we sample up to $\frac{N_s}{2}$ intra-mention spans as negative samples.
\item Coreference resolution: The coreference classifier is trained on all span pairs drawn from ground truth entity clusters $\mathcal{E}^{gt}$ as positive samples. We further sample a fixed number $N_c$ of pairs of random ground truth entity mentions that do not belong to the same cluster as negative samples.
\item Entity classification: Since the entity classifier only receives clusters that supposedly constitute an entity during inference, it is trained on all ground truth entity clusters of a document.
\item Relation classification: Here we use ground truth relations between entity clusters as positive samples and $N_r$ negative samples drawn from $\mathcal{E}^{gt} {\times} \mathcal{E}^{gt}$ that are unrelated according to the ground truth.
\end{itemize}
Each component's loss is obtained by averaging over all samples. We learn the weights and biases of sub-component specific layers as well as the meta embeddings during training. BERT is fine-tuned in the process.
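The negative sampling for mention localization can be sketched as follows. This is a simplified illustration, not the training code: sentences are given as token index ranges with exclusive ends, spans as inclusive index pairs, and up to $\frac{N_s}{2}$ negatives are drawn from spans lying strictly inside a ground truth mention.

```python
import random

def sample_negative_spans(sent_bounds, gt_mentions, n_s, max_len, seed=0):
    """Sample up to n_s intra-sentence non-mention spans; up to n_s // 2
    come from spans strictly inside a ground-truth mention (hard negatives
    such as "New York" inside "New York City")."""
    rng = random.Random(seed)
    gt = set(gt_mentions)
    candidates = [(i, j) for start, end in sent_bounds
                  for i in range(start, end)
                  for j in range(i, min(i + max_len, end))
                  if (i, j) not in gt]
    intra = [s for s in candidates
             if any(a <= s[0] and s[1] <= b and s != (a, b)
                    for a, b in gt)]
    negatives = rng.sample(intra, min(len(intra), n_s // 2))
    rest = [s for s in candidates if s not in negatives]
    negatives += rng.sample(rest, min(len(rest), n_s - len(negatives)))
    return negatives
```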
\section{Experiments} \label{sec:experiments}
We evaluate \name{} on the DocRED dataset~\cite{yao:2019:docred}. DocRED is the most diverse relation extraction dataset to date (6 entity and 96 relation types). It includes over 5,000 documents, each consisting of multiple sentences. According to~\citet{yao:2019:docred}, DocRED requires multiple types of reasoning, such as logical or common-sense reasoning, to infer relations.
Note that previous work only uses DocRED for relation extraction (which equals our relation classifier component) and assumes entities to be given (e.g.~\citealt{wang:2019:two-step-bert, nan:2020:bert_lsr}). On the other hand, DocRED is exhaustively annotated with mentions, entities and entity-level relations, making it suitable for end-to-end systems.
Therefore, we evaluate \name{} both as a relation classifier (to compare it with the state-of-the-art) and as a joint model (as reference for future work on joint entity-level relation extraction).
While prior joint models focus on mention-level relations (e.g.~\citealt{gupta:2016:table_filling, bekoulis:2018:multi_head, chi:2019:hierarch_attention}), we extend the strict evaluation setting to entity level: A mention is counted as correct if its span matches a ground truth mention span. An entity cluster is considered correct if it matches the ground truth cluster exactly and the corresponding mention spans are correct. Likewise, an entity is considered correct if the cluster as well as the entity type matches a ground truth entity. Lastly, we count a relation as correct if its argument entities as well as the relation type are correct. We measure precision, recall and micro-F1 for each sub-task and report micro-averaged scores.
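The strict entity-level relation metric can be made concrete with a small sketch that represents each entity cluster as a frozenset of mention spans, so that a predicted relation only counts as correct if both argument clusters and the relation type match exactly.

```python
def strict_relation_f1(pred, gold):
    """Strict entity-level evaluation: relations are
    (head_cluster, type, tail_cluster) triples, with clusters given as
    frozensets of (start, end) mention spans; a prediction is correct
    only under an exact match of both clusters and the type."""
    correct = len(set(pred) & set(gold))
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0, 0.0, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)
```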
\paragraph{Dataset split} The original DocRED dataset is split into a train (3,053 documents), dev (1,000) and test (1,000) set. However, test relation labels are hidden and evaluation requires the submission of results via CodaLab. To evaluate end-to-end systems, we form a new split by merging train and dev. We randomly sample a train (3,008 documents), dev (300 documents) and test set (700 documents). Note that we removed 45 documents since they contained wrongly annotated entities with mentions of different types. Table~\ref{table:joint_split} contains statistics of our end-to-end split\footnote{Note that DocRED contains some duplicate annotations. These are included in the statistics, but are filtered for evaluation in the end-to-end setting.}. We release the split as a reference for future work.
\npdecimalsign{.}
\nprounddigits{2}
\begin{table}
\sisetup{round-mode=places,detect-weight}
\centering
\begin{tabular}{l c c c c }
\toprule
\textbf{Split} & {\#Doc.} & {\#Men.} & {\#Ent.} & {\#Rel.} \\ \midrule
Train & 3,008 & 78,677 & 58,708 & 37,486 \\
Dev & 300 & 7,702 & 5,805 & 3,678 \\
Test & 700 & 17,988 & 13,594 & 8,787 \\
Total & 4,008 & 104,367 & 78,107 & 49,951 \\
\bottomrule
\end{tabular}
\caption{DocRED dataset split used for end-to-end relation extraction.}
\label{table:joint_split}
\end{table}
\paragraph{Hyperparameters} We use $\text{BERT}_{\text{BASE}}$ (cased)\footnote{We use the implementation from~\cite{wolf:2019:hugging_face}.} for document encoding, an attention-based language model pre-trained on English text~\cite{devlin:2018:bert}. Hyperparameters were tuned on the end-to-end dev set: We adopt several settings from~\cite{devlin:2018:bert}, including the usage of the Adam Optimizer with a linear warmup and linear decay learning rate schedule, a peak learning rate of 5e-5\footnote{We performed a grid search over [5e-6, 1e-5, 5e-5, 1e-4, 5e-4].} and application of dropout with a rate of $0.1$ throughout the model. We set the size of meta embeddings ($\mathbf{w}^s$, $\mathbf{w}^c$, $\mathbf{w}^e$, $\mathbf{w}_{d_s}^r$, $\mathbf{w}_{d_t}^{r'}$) to $25$ and the number of epochs to $20$. Performance is measured once per epoch on the dev set, out of which the best performing model is used for the final evaluation on the test set. A grid search is performed for the mention, coreference and relation filter threshold ($\alpha^s{=}0.85$, $\alpha^c{=}0.85$, $\alpha^r (\text{GRC}){=}0.55$, $\alpha^r (\text{MRC}){=}0.6$) with a step size of 0.05. The number of negative samples ($N_s{=}N_c{=}N_r{=}200$) and sub-task loss weights ($\beta_s{=}\beta_c{=}\beta_r{=}1$, $\beta_e{=}0.25$) are manually tuned. Note that some documents in DocRED exceed the maximum context size of BERT ($512$ BPE tokens). In this case we train the remaining position embeddings from scratch.
\subsection{End-to-End Relation Extraction}
\npdecimalsign{.}
\nprounddigits{2}
\begin{table}
\sisetup{round-mode=places,detect-weight}
\centering
\begin{tabular}{l S S }
\toprule
\textbf{Model} & {Ign F1} & {F1} \\ \midrule
CNN~\cite{yao:2019:docred} & 40.33 & 42.26 \\
LSTM~\cite{yao:2019:docred} & 47.71 & 50.07 \\
Ctx-Aware~\cite{yao:2019:docred}$^*$ & 48.40 & 50.70 \\
BiLSTM~\cite{yao:2019:docred} & 48.78 & 51.06 \\
Two-Step~\cite{wang:2019:two-step-bert}$^*$ & {{-}} & 53.92 \\
HIN~\cite{Tang:2020:hin}$^*$ & 53.70 & 55.60 \\
\B{\name{} (GRC)}$^*$ & 53.76 & 55.91 \\
LSR~\cite{nan:2020:bert_lsr}$^*$ & 56.97 & 59.05 \\
CorefRo~\cite{ye:2020:coref_bert}$^*$ & 57.90 & 60.25 \\
\B{\name{} (MRC)}$^*$ & \B{58.44} & \B{60.40} \\
\bottomrule
\end{tabular}
\caption{Comparison of our relation classification component (GRC/MRC) with the state-of-the-art on the DocRED relation extraction task. We report test set results on the original DocRED split. Ign F1 ignores relational facts also present in the train set. Models marked with $*$ use a Transformer-type model for document encoding.}
\label{table:state_art}
\end{table}
\name{} is trained and evaluated on the end-to-end dataset split (see Table~\ref{table:joint_split}). We perform 5 runs for each experiment and report the averaged results. To study the effects of joint training, we experiment with two approaches: (a) All four sub-components are trained jointly in a single model as described in Section~\ref{sec:training} and (b) we construct a pipeline system by training each task separately and not sharing the document encoder.
Table~\ref{table:joint_results} illustrates the results for the joint (left) and pipeline (right) approach. As described in Section~\ref{sec:approach}, each sub-task builds on the results of the previous component during inference. We observe the biggest performance drop for the relation classification task, underlining the difficulty of detecting document-level relations. Furthermore, the multi-instance based relation classifier (MRC) outperforms the global relation classifier (GRC) by about 2.4\% F1 score. We reason that the fusion of local evidence by multi-instance learning helps the model to focus on appropriate document sections and alleviates the impact of noise in long documents. Moreover, we found the multi-instance selection to offer good interpretability, usually selecting the most relevant instances (see Figure~\ref{fig:multi_instance_example} for examples). Overall, we observe a comparable performance by joint training versus using the pipeline system.
\npdecimalsign{.}
\nprounddigits{2}
\begin{table}
\sisetup{round-mode=places,detect-weight}
\centering
\begin{tabular}{l S S }
\toprule
& \multicolumn{1}{c}{{\textbf{JM}$^*$}} & \multicolumn{1}{c}{{\textbf{SM}}} \\ \cmidrule(lr){2-2} \cmidrule(lr){3-3}
\textbf{Task} & {F1} & {F1} \\ \midrule
Mention Localization & 92.99438038342063 & 92.66362618180385 \\
Coreference Resolution & 90.53607011458206 & 90.45661398542912 \\
Entity Classification & 95.6613211711049 & 95.29056936883919 \\
Relation Classification & 59.45818471053916 & 59.76466698664397 \\
Relation Classification (GRC) & 56.447481187905815 & 56.5456223978627 \\
\bottomrule
\end{tabular}
\caption{Single-task performance of the joint model (left) and separate models (right) on the end-to-end split ($^*$ joint results are for MRC except for the last row).}
\label{table:classify_results}
\end{table}
This is also confirmed by the results reported in Table~\ref{table:classify_results}, where we evaluate the four components independently, i.e. each component receives ground truth samples from the previous step in the hierarchy (e.g. ground truth mentions for coreference resolution). Again, we observe the performance difference between the joint and pipeline model to be negligible.
This shows that it is not necessary to build separate models for each task, which would result in training and inference overhead due to multiple expensive BERT passes. Instead, a single neural model is able to jointly learn all tasks necessary for document-level relation extraction, therefore easing training, inference and maintenance.
\begin{figure*}
\centering
\framebox{
\begin{tabular}{ p{15cm} }
\small
\fcolorbox{darkblue}{docgrey}{\textcolor{darkblue}{\textbf{Queequeg}}} is a fictional character in the 1851 novel Moby-Dick by American author {}\fcolorbox{darkblue}{docgrey}{\textcolor{darkblue}{\textbf{Herman Melville}}}. The son of a South Sea chieftain who left home to explore the world, \textcolor{darkblue}{\textbf{Queequeg}} is the first principal character encountered by the narrator, Ishmael. The quick friendship and relationship of equality between the tattooed cannibal and the white sailor shows \textcolor{darkblue}{\textbf{Melville}}'s basic theme of shipboard democracy and racial diversity...
\vspace{0.1cm}
\hrule
\vspace{0.1cm}
Shadowrun:Hong Kong is a turn-based tactical role-playing video game set in the Shadowrun universe. It was developed and published by \fcolorbox{darkblue}{docgrey}{\textcolor{darkblue}{\textbf{Harebrained Schemes}}}, who previously developed \fcolorbox{darkblue}{docgrey}{\textcolor{darkblue}{\textbf{Shadowrun Returns}}} and its standalone expansion. It includes a new single - player campaign and also shipped with a level editor that lets players create their own Shadowrun campaigns and share them with other players. In January 2015, \textcolor{darkblue}{\textbf{Harebrained Schemes}} launched a Kickstarter campaign in order to fund additional features and content they wanted to add to the game, but determined would not have been possible with their current budget. The initial funding goal of US \$ 100,000 was met in only a few hours. The campaign ended the following month, receiving over \$ 1.2 million. The game was developed with an improved version of the engine used with \textcolor{darkblue}{\textbf{Shadowrun Returns}} and Dragonfall. \textcolor{darkblue}{\textbf{Harebrained Schemes}} decided to develop the game only for Microsoft Windows, OS X, and Linux, ...
\end{tabular}
}
\caption{Two example documents of the DocRED dataset. Highlighted are relations \enquote{creator} between \enquote{Queequeg} and \enquote{Herman Melville} (top) and \enquote{developer} between \enquote{Shadowrun Returns} and \enquote{Harebrained Schemes} (bottom). Bordered pairs are the top selections of the multi-instance relation classifier.}
\label{fig:multi_instance_example}
\end{figure*}
\subsection{Relation Extraction}
We also compare our model with the state-of-the-art on DocRED's relation extraction task. Here, entity clusters are assumed to be given. We train and test our relation classification component on the original DocRED dataset split. Since test set labels are hidden, we submit the best out of 5 runs on the development set via CodaLab to retrieve the test set results. Table~\ref{table:state_art} includes previously reported results from current state-of-the-art models. Note that our global classifier (GRC) is similar to the baseline by \cite{yao:2019:docred}. However, we replace mention span averaging with max-pooling and also choose max-pooling to aggregate mentions into an entity representation, yielding considerable improvement over the baseline. Using the multi-instance classifier (MRC) instead further improves performance by about 4.5\%. Here our model also outperforms complex methods based on graph attention networks~\cite{nan:2020:bert_lsr} or specialized pre-training~\cite{ye:2020:coref_bert}, achieving a new state-of-the-art result on DocRED's relation extraction task.
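As an illustration, the two-stage max-pooling described above (tokens within a mention span, then mention vectors into an entity representation) can be sketched in a few lines of NumPy; names and shapes are illustrative, not the authors' implementation:

```python
import numpy as np

def entity_representation(token_embs, mention_spans):
    """Aggregate mention spans into one entity vector via max-pooling.

    token_embs: (seq_len, dim) array of contextual token embeddings.
    mention_spans: list of (start, end) token index pairs, end exclusive.
    """
    # Max-pool the tokens inside each mention span (replacing span averaging).
    mentions = [token_embs[s:e].max(axis=0) for s, e in mention_spans]
    # Max-pool the mention vectors into a single entity representation.
    return np.stack(mentions).max(axis=0)
```

In a full model the same pooled entity vectors would then feed the pairwise relation classifier.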
\subsection{Ablation Studies}
We perform several ablation studies to evaluate the contributions of our proposed multi-instance relation classifier enhancements: we remove either the global entity representations $\mathbf{x}_1^e,\mathbf{x}_2^e$ (Equation \ref{eq:entityrepr}) (a) or the localized context representation $\mathbf{c}(s_1,s_2)$ (Equation \ref{eq:mentioninput2}) (b). The performance drops by about $0.66\%$ F1 score when global entity representations are omitted, indicating that multi-instance reasoning benefits from the incorporation of entity-level context. When the localized context representation is omitted, performance is reduced by about $0.90\%$, confirming the importance of guiding the model to relevant input sections. Finally, we limit the model to fusing only intra-sentence mention pairs (c). If no such instance exists for an entity pair, the closest (in token distance) mention pair is selected. This modification obviously reduces computational complexity and memory consumption, especially for large documents. Nevertheless, while we observe intra-sentence pairs to cover most relevant signals, exhaustively pairing all mentions of an entity pair yields an improvement of $0.67\%$.
\npdecimalsign{.}
\nprounddigits{2}
\begin{table}
\sisetup{round-mode=places,detect-weight}
\centering
\begin{tabular}{l S}
\toprule
\textbf{Model} & {F1} \\ \midrule
Relation Classification (MRC) & 59.76466698664397 \\
- (a) Entity Representations & 59.095287041999505 \\
- (b) Localized Context & 58.853425842356955 \\
- (c) Exhaustive Pairing & 59.08573665533463 \\
\bottomrule
\end{tabular}
\caption{Ablation studies for the multi-instance relation classifier (MRC) using the end-to-end split. We either remove the global entity representations (a) or the localized context (b), or use only intra-sentence mention pairs (c). The results are averaged over 5 runs.}
\label{table:ablations}
\end{table}
\section{Conclusions}
We have introduced \name{}, a novel multi-task model for end-to-end relation extraction. In contrast to prior systems, \name{} combines entity mention localization with coreference resolution to extract entity types and relations on an entity level. We report first results for entity-level, end-to-end relation extraction as a reference for future work. Furthermore, we achieve state-of-the-art results on the DocRED relation extraction task by enhancing multi-instance reasoning with global entity representations and a localized context, outperforming several more complex solutions. We showed that training a single model jointly on all sub-tasks performs roughly on par with a pipeline approach, eliminating the need to train separate models and accelerating inference. One remaining shortcoming lies in the detection of false positive relations, which are plausible given the entities' types but are not actually expressed in the document. Exploring options to reduce these false positive predictions is an interesting challenge for future work.
\section*{Acknowledgments}
This work was funded by the German Federal Ministry of Education and Research (Program FHprofUnt, Project DeepCA (13FH011PX6)).
\section{Introduction}
One of the main objectives of modern astrophysics is understanding the process
of galaxy formation and evolution. The best way to tackle this issue is
studying the properties of galaxies observed at the epoch of their formation
and early evolution, such as their stellar population, history of mass assembly,
morphology, metallicity and interplay with the intergalactic medium.
However, disentangling these processes in nearby systems
is already extremely difficult, and the challenge is even greater at higher
redshift, where sources are compact in size ($\sim 0.1\arcsec - 0.3\arcsec$)
and larger galaxies are rare (e.g., Bouwens et
al. \cite{bouwens04}). To resolve and study the details of high-redshift
galaxies using ground based telescopes, which can provide larger samples and
deeper observations than space-based observations, it is
necessary to overcome the blurring effects of the atmosphere through
the use of adaptive optics (AO) systems. These can allow ground-based
telescopes to operate at or near the diffraction limit in the near-infrared
($\sim 0.07\arcsec$ in $K$ band for an 8\,m telescope), resulting in a high
angular resolution and a low background in each pixel.
Besides the technical advantages afforded by AO, near-infrared surveys provide
one of the best opportunities to investigate the cosmic evolution of galaxies
and their mass assembly. In particular, $K$-band (2.2\,${\rm \mu m}$)
selected samples are ideally suited for addressing the problems of galaxy
formation and evolution. First, since the rest frame near-IR luminosity is a
good tracer of the galaxy stellar mass (e.g., Brinchmann \& Ellis
\cite{brinchmann}; Bell \& de Jong \cite{bell01}; Mannucci et al.
\cite{mannucci05}), $K$-band surveys allow us to select galaxies according
to their mass up to $z \sim 1.5$ ($\lambda_{\rm rest} \sim 0.9-1.0\,{\rm \mu
m}$), rather than suffering the strong biases towards star-forming and peculiar
galaxies that affect optical surveys (e.g. Drory et al. \cite{drory04}, Fontana
et al. \cite{fontana04}). Another strong argument for selecting galaxies
in the near infrared is that, due to the similarity of the spectral shapes of
different galaxy types and stellar population ages in the rest frame near-IR
over a wide redshift range (e.g., Mannucci et al. \cite{mannucci01}), the
selection of galaxies in the $K$ band is not affected by strong
$k$-correction effects (e.g., Cowie et al. \cite{cowie94}). In contrast,
selection in the $I$ band becomes very type sensitive beyond $z=1$, and the
situation is even more extreme in the $B$ band, where the fading of early-type
galaxies is substantial even at modest redshifts. Thus, near-IR samples do
not depend as strongly on galaxy type as optically selected ones, which are
more sensitive to recent and ongoing star formation activity (as they
sample the rest-frame UV light) and are biased against old and passive or weakly
star-forming galaxies.
Finally, near-IR surveys are less affected by dust extinction than optical
ones, making it possible to select highly extinguished star-forming galaxies.
The observation of the obscured dusty star formation rate is crucial for
measuring the global star formation history. Calculations based on the
observed rest frame UV flux (e.g., Madau et al. \cite{madau96}; Connolly et
al. \cite{connolly97}) might be significantly underestimated if a large
fraction of the overall star formation at high redshift takes place in highly
obscured starburst galaxies (e.g., Steidel et al. \cite{steidel99}; Blain et
al. \cite{blain02}).
Morphology is one of the most appropriate ways to characterize the properties
of galaxies, and we will only reach a complete understanding of galaxies by
identifying the mechanisms responsible for their morphologies.
In this context, the study of galaxy size, and of the evolution of other galaxy
properties according to morphological type, have made use mainly
of the classification derived from deep optical HST imaging (e.g.,
Simard et al.
\cite{simard99}; Labb\'e et al. \cite{labbe03};
Trujillo \& Aguerri \cite{trujillo04};
Pannella et al. \cite{pannella06}), due to the higher angular resolution
achievable at optical wavelengths with HST.
However, near-infrared morphology is a better tracer of the underlying mass
distribution, as it is not biased towards recent star formation and
is less affected by dust obscuration.
By using adaptive optics, it is now possible to push the analysis of source
properties (surface density, magnitude, color, morphology, etc.) as a function
of source size in the near-IR to an entirely new regime, and study sources that are both
faint \textit{and} compact. Ample evidence already indicates
that such source populations do exist -- e.g., a large fraction of the $H_{AB}<21$ sources
detected by Yan et al. (\cite{yan98}) are still unresolved at the
$\sim 0.35\arcsec$ resolution provided by HST/NICMOS in the near-IR. The
AO-corrected, diffraction-limited, near-IR PSF of an 8\,m telescope is a
powerful tool to study this kind of object, since the angular resolution
it yields is even higher than can be obtained by HST at this wavelength.
Although the advantages of near-IR AO observations for studying how galaxies
form and evolve in the early universe are clear, until now there have been
only a few attempts using natural guide stars (NGS; see
e.g., Larkin et al. \cite{larkin}; Glassman, Larkin \& Lafreni\'ere
\cite{glassman}; Steinbring et al. \cite{steinbring04}; Minowa et al.
\cite{minowa}), due to the very small number of known extragalactic sources
lying at distances $\Delta \theta \la 30\arcsec$ from bright ($V \la 13$)
stars needed to correct the wavefront for AO guiding, and to the problems
arising from the anisoplanaticism of the PSF in AO observations. The
prospects for AO cosmology will undoubtedly improve with the widespread
adoption of laser guide star (LGS) systems, since these impose less stringent
requirements on the brightness of stars used for tip-tilt correction (e.g.,
Melbourne et al. \cite{melbourne}). However, to overcome the present
shortage of targets for AO cosmology, it is necessary to identify and
characterize extragalactic sources in the vicinity of bright guide stars (see
e.g., Larkin et al. \cite{larkin99}; Davies et al. \cite{davies01};
Christopher \& Smail \cite{cresm}).
We therefore undertook a campaign of seeing-limited near-IR imaging of fields
selected around stars bright enough for AO guiding ($10.3 \le R \le 12.4$),
blue ($B-R \le 1.1$, in order to maximize the amount of light on the wavefront
sensor), lying at high galactic latitude ($|b| \ge 15\deg$, to minimize
extinction and contamination by foreground stars), and with a declination
suitable for observations with the ESO Very Large Telescope at low air mass
($-44\deg \le \delta \le -13\deg$). A total of 42 southern bright star fields
(SBSFs) were selected and observed at seeing-limited resolution in
$K_\mathrm{s}$ band with SOFI at the ESO {\em New Technology Telescope}. More
details about the target selection and data can be found in Baker et al.
(\cite{baker}). The same fields have been followed up at optical wavelengths
(Davies et al. \cite{davies05}), and are now targets for VIMOS integral field
optical spectroscopy at the ESO Very Large Telescope (VLT).
In this paper we present the results of our $K_s$-band AO imaging survey of the first
21 fields in the framework of SWAN (Survey of a Wide Area with NACO), which is
the AO-assisted result of these seeing-limited preliminaries. The survey will
be introduced in the following section, and the observations will be briefly
described in section \ref{obs}. The data reduction approach will be presented
in section \ref{reduc}, while the detection criteria and technique will be
discussed in section \ref{detect}. The extraction of the morphological
parameters of the detected galaxies is analyzed in section \ref{morph},
and the method used to distinguish between stars and galaxies is described
in section \ref{stargal}. In
section \ref{counts} we take into account the selection effects
present in our data, discuss the completeness of the survey, and show
the corrected number counts. The number counts and size-magnitude relation of the
full sample of galaxies and for late and early-type systems separately are
compared with the predictions of two different galaxy evolution models in
section \ref{compare}; our conclusions follow in section \ref{concl}.
All the magnitudes are Vega relative unless otherwise specified.
\section{The Survey of a Wide Area with NACO} \label{SWAN}
Having already characterized large samples of objects in bright star
fields, as described in the previous section, we targeted them with NACO on
the VLT in order to exploit the present generation of AO technology for galaxy
evolution studies. NACO comprises the NAOS Shack-Hartmann AO module (Rousset
et al. \cite{rousset}) mated with CONICA near-infrared camera (Lenzen et
al. \cite{lenzen}). Our choice of NACO observing mode was dictated by our
desire to complement previous HST/NICMOS surveys. First, we
chose to image in $K_\mathrm{s}$, where NICMOS is less sensitive than in $J$
and $H$, thus making SWAN preferentially sensitive to red objects. Second, we
chose to prioritize survey area over depth, in order to optimize the study
of the galaxies over the last half of the Hubble time and improve SWAN's
sensitivity to rare objects and its robustness against cosmic variance (the
latter already enhanced by the survey's peculiarity of patching together small
fields at different locations on the sky). Use of NACO's $0.054\arcsec$ pixel
scale (to maximize the field of view) and the Strehl ratios of 30--60\% typically
achieved in $K_\mathrm{s}$ result in images that are slightly undersampled. As
the AO PSF is quickly changing both in time and position on the frame, in
order to extract full information from our wide-field observations we have
developed a new approach to account for the anisoplanatic PSF. The method was
presented in Cresci et al. (\cite{cresci}), hereafter Paper I, along with some
examples of galaxy morphology fitting using the derived model PSF.
Each NACO pointing provides a usable $\sim 0.75\,\mathrm{arcmin}^2$ of the
full $55.5\arcsec \times 55.5\arcsec$ detector area, due to losses from
dithering and the central star (see, e.g., Fig.~\ref{swanfield}).
Nevertheless, the anticipated survey area that will result from assembling 42
such images will be -- at $\sim 30\,\mathrm{arcmin}^2$ -- some six times
larger than the NICMOS survey of the HDF and flanking fields in $J$ and $H$
(Dickinson \cite{dickinson99}; Dickinson et al. \cite{dickinson00}).
SWAN aims to combine the
high angular resolution of a space-based survey with the shallower depth and
wider area of a ground-based survey, thereby probing sources that are compact,
faint, red, and rare more effectively than any other survey to date.
\begin{figure}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics[clip]{5504fig1.ps}}
\caption{Example of a $55\arcsec \times 55\arcsec$ SWAN field:
SBSF\,24. The bright source in the center is the guide star, and
the circles are the extended
objects detected by SExtractor (SExtractor stellarity index
$\mathrm{SSI} < 0.9$); the squares are point sources ($\mathrm{SSI}
\geq 0.9$). A ghost of the bright guide star is marked with a cross.
North is up and east is right.}
\label{swanfield}
\end{center}
\end{figure}
\begin{table*}
\begin{center}
\begin{tabular}{cccccccccc}
\noalign{\smallskip} \hline \noalign{\smallskip}
\noalign{\smallskip}
Name & R.A. & DEC. & Obs. & $\Delta t$ & $\sigma_{\rm image}$$^b$ & mean & Strehl & N. of & $\theta_0$$^c$ \\
& (J2000.0) & (J2000.0) & Date$^a$ & (min) & (e$^-$\,s$^{-1}$) & airmass & \% & Stars & (\arcsec) \\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) \\
\noalign{\smallskip} \hline \noalign{\smallskip}
SBSF\,02 & 00 44 31.88 & $-$29 52 30.3 & 07.09.04 & 54 & 0.29 & 1.08 & --- & 2 & 5.90 \\
SBSF\,03 & 00 45 20.62 & $-$29 56 46.0 & 08.09.04 & 54 & 0.29 & 1.24 & --- & 2 & 11.20\\
SBSF\,04 & 00 45 28.05 & $-$29 31 40.1 & 09.12.04 & 68 & 0.28 & 1.22 & 33 & 2 & 10.65 \\
SBSF\,06 & 00 50 34.70 & $-$29 26 32.0 & 09.09.04 & 54 & 0.33 & 1.17 & 24 & 0 & [12.70] \\
SBSF\,08 & 00 52 18.88 & $-$29 27 17.8 & 12.09.04 & 56 & 0.32 & 1.09 & 42 & 1 & 12.00 \\
SBSF\,14 & 06 07 06.34 & $-$13 13 37.1 & 15.12.02 & 60 & 0.22 & 1.20 & --- & 4 & 11.95\\
SBSF\,15 & 08 44 00.22 & $-$16 34 01.1 & 17.12.02 & 60 & 0.25 & 1.12 & 46 & 10 & 21.78\\
SBSF\,16 & 09 14 52.77 & $-$19 26 17.0 & 18.12.02 & 50 & 0.25 & 1.07 & --- & 1 & 12.84\\
SBSF\,17 & 09 47 44.79 & $-$21 37 12.7 & 20.03.03 & 44 & 0.28 & 1.05 & 35 & 2 & 10.61\\
SBSF\,18 & 09 49 46.99 & $-$21 45 13.3 & 17.12.02 & 40 & 0.30 & 1.06 & --- & 7 & 21.18\\
SBSF\,24 & 10 40 26.20 & $-$30 00 36.5 & 21.03.03 & 60 & 0.24 & 1.02 & 30 & 8 & 11.77\\
SBSF\,27 & 12 55 37.48 & $-$31 42 41.3 & 11.04.04 & 44 & 0.30 & 1.07 & 35 & 12 & 17.85\\
SBSF\,28 & 12 56 14.14 & $-$42 09 10.9 & 11.04.04 & 44 & 0.27 & 1.26 & 42 & 2 & 9.94\\
SBSF\,34 & 13 46 25.24 & $-$31 45 47.5 & 11.04.04 & 44 & 0.28 & 1.36 & 28 & 2 & 4.89\\
SBSF\,36 & 22 14 36.74 & $-$28 25 31.6 & 05.09.03 & 60 & 0.23 & 1.09 & 35 & 2 & 13.25\\
SBSF\,37 & 22 43 04.43 & $-$39 49 29.3 & 06.09.03 & 60 & 0.23 & 1.10 & 34 & 0 & [12.70]\\
SBSF\,38 & 22 47 06.77 & $-$40 10 01.3 & 05.09.03 & 60 & 0.22 & 1.04 & 31 & 2 & 10.68\\
SBSF\,39 & 22 49 34.23 & $-$39 33 05.3 & 15.06.03 & 60 & 0.24 & 1.04 & 22 & 1 & 10.00\\
SBSF\,40 & 22 49 49.32 & $-$39 53 15.0 & 12.06.03 & 48 & 0.34 & 1.04 & 10 & 0 & [12.70]\\
SBSF\,41 & 22 50 21.28 & $-$40 07 38.6 & 14.06.03 & 60 & 0.26 & 1.04 & 33 & 3 & 19.72 \\
SBSF\,42 & 23 29 55.77 & $-$18 35 54.1 & 16.06.03 & 60 & 0.24 & 1.03 & 35 & 0 & [12.70] \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}
\end{center}
\caption{ Observational parameters and AO performances for NACO observations of SWAN fields.
See the text for a full description of the entries.
$^a$ CONICA was fitted with a new detector in June 2004.
$^b$ The noise is that measured in the resulting co-added image, scaled to
a 1\,sec integration. Its statistical properties closely follow a Gaussian
distribution with additional weak wings.
$^c$ `$\theta_0$' refers to the isoplanatic angle in $K_\mathrm{s}$ band
as measured fitting the variation of the Strehl ratio of the point sources
in the fields as described in the text.
}
\label{obspar}
\end{table*}
\section{Observations} \label{obs}
The first 21 SWAN fields were observed in $K_\mathrm{s}$ band with the NACO AO
system at the VLT, using the visible Wave Front Sensor (WFS).
An example of a SWAN image is given in
Fig.~\ref{swanfield}. Table~\ref{obspar} summarizes other observational
parameters and the AO system performance during the observations. The SBSF
name is given in column [1], and the coordinates of the guide star in the
center of each field are given in [2] and [3], accurate to $\pm 0.2\arcsec$.
Column [4] reports the date(s) on which each field was observed.
The total integration time on each field is given in [5], and
the noise measured in the resulting coadded image rescaled to 1\,sec
integration time in [6]. The mean airmass is reported in [7].
The Strehl ratio, estimated from a series of short exposures through a
narrow band filter taken before and/or after the science exposures in
order to monitor the on-axis PSF, is given in column [8].
This is calculated from the ratio of the maximum pixel to the total
flux, and includes a correction for the offset of the PSF's centroid
from the center of a pixel; this can be considerable (typically
adding 5--10\% to the Strehl ratio for the data here) due to the large
0.054\arcsec\ size of the pixels.
The number of bright point sources in each field used to evaluate
the isoplanatic angle at 2.2\,${\rm \mu m}$, fitting the
variation of their Strehl ratio in our $K_\mathrm{s}$ images
(see section \ref{morph}), is reported in [9], and the
resulting isoplanatic angle in column [10].
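As a rough sketch of this Strehl estimate, the peak-to-total flux ratio of the observed PSF can be normalised by the same ratio for a diffraction-limited Airy pattern sampled on the same pixel grid; the centroid-offset correction described above is omitted here, and all names are illustrative:

```python
import numpy as np
from scipy.special import j1

def airy_image(npix, pixscale_rad, wavelength, diameter):
    # Diffraction-limited PSF of a circular aperture, sampled on the detector grid.
    y, x = np.mgrid[:npix, :npix] - (npix - 1) / 2.0
    r = np.hypot(x, y) * pixscale_rad          # radius in radians
    v = np.pi * diameter * r / wavelength
    v = np.where(v == 0, 1e-12, v)             # avoid 0/0 at the center
    return (2.0 * j1(v) / v) ** 2

def strehl_ratio(observed, reference):
    # Ratio of (max pixel / total flux) for the observed PSF to the same
    # quantity for the diffraction-limited reference on the same grid.
    return (observed.max() / observed.sum()) / (reference.max() / reference.sum())
```

For NACO's $0.054\arcsec$ pixels on an 8\,m aperture at 2.2\,${\rm \mu m}$, an undegraded Airy image returns a Strehl of 1 by construction, and any broader PSF returns a lower value.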
\section{Data reduction} \label{reduc}
The data obtained were reduced using
PC-IRAF (version~2.11.3) together with some scripts in IDL (version~6.0).
The presence of a bright star in the center of a field less than
1\arcmin\ across made the data reduction more complex than usual,
requiring extra steps to compensate.
An initial estimate of the sky background was made from the target
frames after masking all bright objects in the fields.
Each target frame then had the sky subtracted, was flat fielded, had any
residual constant background removed, and hot pixels corrected.
A mask that included dead pixels and bad regions
was then applied to each frame.
In order to correct for over-subtraction from very extended faint
scattering (and/or emission) around bright objects, a surface was fit
to each frame (ignoring regions in the object mask) and subtracted.
The frames were then aligned with sub-pixel accuracy using up to
several conspicuous isolated objects in the field, and averaged after
rejecting high and low pixels at each point according to an estimated
variance.
This initial combined frame was used to generate a new object mask, and
the entire data reduction process was repeated,
yielding a new combined frame with much less over-subtraction.
In a final step, the objects were once more masked out and a surface
fitted to the background, and this was subtracted to produce the final
image.
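Schematically, the reduction loop can be condensed into the NumPy sketch below; integer-pixel shifts and a constant background stand in for the sub-pixel alignment and surface fits of the actual pipeline, and all names are illustrative:

```python
import numpy as np

def reduce_frames(frames, flat, object_masks, offsets, nsigma=3.0):
    """Sky-subtract, flat-field, align and combine dithered frames."""
    # Sky: median over dithered frames with bright objects masked out.
    cube = np.array([np.where(m, np.nan, f) for f, m in zip(frames, object_masks)])
    sky = np.nanmedian(cube, axis=0)
    aligned = []
    for f, (dy, dx) in zip(frames, offsets):
        r = (f - sky) / flat                   # sky subtraction and flat field
        r -= np.median(r)                      # residual constant background
        aligned.append(np.roll(r, (-dy, -dx), axis=(0, 1)))  # integer-pixel shift
    stack = np.array(aligned)
    # Reject high and low outliers at each pixel before averaging.
    med = np.median(stack, axis=0)
    dev = np.std(stack, axis=0)
    clipped = np.where(np.abs(stack - med) > nsigma * dev + 1e-12, np.nan, stack)
    return np.nanmean(clipped, axis=0)
```

In the real pipeline this whole loop is run twice, with the object mask regenerated from the first combined frame.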
The sky is estimated by dithering, i.e., slightly moving the telescope
between different frames so that different pixels sample different parts
of the sky. In SWAN the offsets were chosen semi-randomly within a $7\arcsec$ box, due
to the limited NACO field of view. Therefore, even if great care is taken
to produce the sky frames, the sky around objects larger than $\sim 3\arcsec$,
i.e., for the very bright guide star and for galaxies with effective radius
$R_e \gtrsim 1\arcsec$, may be overestimated, producing a
self-subtraction of some galaxy flux. This effect can produce fainter
magnitudes and smaller dimensions for such bright and large objects,
although these constitute less than 2\% of the total sample in our fields.
However, in section \ref{compare} we will see that this effect introduces
some systematic uncertainties in the size-magnitude relation for large galaxies.
\section{Source detection} \label{detect}
In each reduced SWAN field, sources were detected using SExtractor
(Bertin \& Arnouts \cite{bertin}), with the
appropriate parameters optimized for compact sources,
set to provide a positive detection for objects
brighter than $1.5\,\sigma$ per pixel over an area of more than
3~pixels. To improve the detection of faint sources we used a Gaussian
filter ($\sigma=1.5$~pixels) to smooth the image.
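As an illustration of this detection criterion (a minimal scipy stand-in for the actual SExtractor run, with a rough MAD noise estimate):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def detect(image, nsigma=1.5, minarea=3, smooth_sigma=1.5):
    # Robust global noise estimate via the median absolute deviation.
    sky = np.median(image)
    noise = 1.4826 * np.median(np.abs(image - sky))
    # Gaussian smoothing (sigma = 1.5 pixels) to help faint sources,
    # then a per-pixel threshold at nsigma above the background.
    smoothed = gaussian_filter(image - sky, smooth_sigma)
    segmap, nlab = label(smoothed > nsigma * noise)
    # Keep only detections covering more than `minarea` pixels.
    sizes = np.bincount(segmap.ravel())
    return [i for i in range(1, nlab + 1) if sizes[i] > minarea]
```

The thresholds map directly onto SExtractor's detection parameters; SExtractor additionally deblends overlapping sources, which this sketch does not attempt.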
False detections at the noisy borders of the mosaic and on the spikes
and the ghost of the bright guide star were removed. For the former, a mask
that indicated the fraction of the total integration time spent on each pixel
was used; objects detected in pixels below a specified threshold were rejected.
For the latter, appropriate object masks were created. Our algorithm
deliberately does not push the detection to the faintest possible limit, as we
are more interested in the high resolution AO morphologies of the brighter
sources than in the deepest possible number counts. For this reason, our
counts (see Section \ref{counts}) are not significantly contaminated by
spurious detections due to noise. The total coverage above the detection
thresholds of the 21 fields is $15.3\,\mathrm{arcmin}^2$, within
which a total of 495 sources are detected down to a magnitude of
$K_\mathrm{s} \sim 23.5$ ($K_{\mathrm{AB}}\sim 25.3$, see section \ref{counts}).
\section{Morphological fitting} \label{morph}
The morphological parameters of the detected galaxies were derived using
GALFIT (Peng et al. \cite{peng}), a widely used software package that fits
a two-dimensional image of a galaxy and/or a point source with one or
more analytic functions that have been convolved with a model of the PSF.
To fit the galaxies in our SWAN fields we used
a single S\'ersic (\cite{sersic}) profile,
\begin{equation}
I(R) = I(R_e) \times \exp ({-b_n \times [(R/R_e)^{1/n}-1]})
\end{equation}
where $R_e$ is the effective radius that encloses half of the light,
$n$ is the S\'ersic index and $b_n$ is a constant that varies with $n$, chosen
so that $R_e$ corresponds to the half-light radius.
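The constant $b_n$ has no closed form: it is fixed by requiring $\Gamma(2n) = 2\,\gamma(2n, b_n)$, where $\gamma$ is the lower incomplete gamma function. A short numerical sketch:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammainc

def sersic_bn(n):
    # scipy's gammainc is the *regularized* lower incomplete gamma function,
    # so the half-light condition Gamma(2n) = 2*gamma(2n, b_n) reads P(2n, b_n) = 0.5.
    return brentq(lambda b: gammainc(2.0 * n, b) - 0.5, 1e-6, 100.0)

def sersic_profile(R, Re, n, Ie=1.0):
    # Surface-brightness profile, with R_e enclosing half of the total light.
    return Ie * np.exp(-sersic_bn(n) * ((np.asarray(R) / Re) ** (1.0 / n) - 1.0))
```

For an exponential disk ($n=1$) this gives $b_1 \approx 1.678$, and for a de Vaucouleurs profile ($n=4$) $b_4 \approx 7.669$, consistent with the common approximation $b_n \approx 2n - 1/3$.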
GALFIT needs as an input a PSF to convolve the S\'ersic profile
model. We used the off-axis AO PSF model presented in Paper I, which is
optimized for wide-field and high Galactic latitude observations.
The off-axis PSF is determined by convolving the on-axis PSF
in each of the fields with an elliptical Gaussian kernel elongated
towards the guide star. The FWHM of the kernel depends on the distance from
the guide star and on the isoplanatic angle of the field.
We therefore derived the isoplanatic angle for each field by
fitting the variation of the Strehl ratio of the point sources across the
field as described in Paper I. The obtained isoplanatic angle along with the
number of point sources used in the fit are reported in Table~\ref{obspar}.
The derived isoplanatic angles for the 21 fields range from
$4.9\arcsec$ to $21.8\arcsec$. In four of the fields no bright point source
was available except the guide star, and therefore the average isoplanatic
angle for the other fields ($12.7\arcsec$) was assumed.
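As an illustration of this convolution step, the sketch below broadens an on-axis PSF with an elongated Gaussian kernel; the linear FWHM scaling with distance and the fixed axis ratio are simplified placeholders for the Paper I prescription:

```python
import numpy as np
from scipy.signal import fftconvolve

def offaxis_psf(onaxis, distance, theta0, angle, k=0.5, axis_ratio=0.5):
    """Broaden an on-axis PSF with an elongated Gaussian kernel.

    distance, theta0: off-axis distance and isoplanatic angle (arcsec);
    angle: position angle towards the guide star (radians);
    k, axis_ratio: illustrative kernel scalings, not the Paper I values.
    """
    fwhm = k * distance / theta0                 # kernel FWHM in pixels (assumed linear)
    sig_major = max(fwhm / 2.3548, 1e-3)
    sig_minor = max(axis_ratio * sig_major, 1e-3)
    n = onaxis.shape[0]
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    # Rotate coordinates so the major axis points towards the guide star.
    u = x * np.cos(angle) + y * np.sin(angle)
    v = -x * np.sin(angle) + y * np.cos(angle)
    kernel = np.exp(-0.5 * ((u / sig_major) ** 2 + (v / sig_minor) ** 2))
    kernel /= kernel.sum()                       # conserve flux
    return fftconvolve(onaxis, kernel, mode='same')
```

The model PSF produced this way is what GALFIT receives for each source position.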
Initial guesses for GALFIT model parameters were
obtained from the SExtractor source catalogs. Lacking an estimate of the
S\'ersic index $n$ in the SExtractor catalogs, we used $n=2$ for all the
galaxies in the first iteration. Each galaxy was fitted twice,
using as first guesses for the second iteration the output parameters of the
first iteration. Roughly 16\% of the detected galaxies could not be fitted
satisfactorily with a single component, but required simultaneous fits with
very close companions or multiple-component fits. These can be divided into two
categories: 9\% of the total are
interacting galaxies or very close pairs, where the overlap of the isophotes
from different objects required a simultaneous fit. A further 7\% of the
total are galaxies for which a single-component S\'ersic profile was not
sufficient to fit the light profile, leaving significant residuals in the
subtraction. Half of these two-component galaxies were re-fit using a
disk component and an elliptical bulge, while the other half were re-fit by
adding a central point source to the S\'ersic component.
As we have shown by the detailed simulations in Paper I, the morphological
parameters of the galaxies detected at the depths of our images
can be derived with low uncertainties up to $K_\mathrm{s} \sim 20.5$,
while for fainter objects the
uncertainties grow as a function of the magnitude. In addition, we recall that
it is possible to set a threshold of $n = 2$ on the S\'ersic index that can
discriminate between late-type galaxies ($n<2$) and early-type galaxies
($n>2$). The results of our simulation are confirmed e.g. by Ravindranath et
al. (\cite{ravindra04}), who used GALFIT to fit single S\'ersic profiles to a
sample of nearby galaxies of known morphology from the Frei et al.
(\cite{frei96}) sample, after artificially redshifting them to $z=0.5$ and
$z=1.0$. They found that $n=2$ is the appropriate threshold to separate
disk-dominated galaxies from bulge-dominated ones, even in the presence of
morphological complexities such as dust, star-forming regions, etc.
(Ravindranath et al. \cite{ravindra04}).
Of the 383 galaxies detected to $K_\mathrm{s} \sim 23.5$ (see section
\ref{stargal} for a discussion of the 112 stars), 214 were classified
as late-type and 169 as early-type. The sources fitted with multiple
components are classified according to the S\'ersic component providing the
higher flux contribution. The galaxies are divided into
these two subclasses for the following analysis, with an average contamination
between the two subclasses of less than 10\% up to $K_\mathrm{s}=21$ (Paper I).
\begin{figure}
\begin{center}
\resizebox{0.4\textwidth}{!}{\includegraphics{5504fig2.ps}}
\caption{\label{ellitt} The distribution of the axis ratios
$b/a$ of the SWAN galaxies fitted by GALFIT with $\chi^2_{\nu} \leq2$
as a function of their S\'ersic index $n$.
As expected, while late-type galaxies ($n<2$) are observed at random
inclinations with respect to the plane of the sky,
and therefore at every $b/a$, early-type galaxies ($n>2$)
are not observed with $b/a \lesssim 0.4$.}
\end{center}
\end{figure}
In order to quantify the morphological fit quality, we used
the $\chi^2_{\nu}$ calculated by GALFIT. We classified as well-fit the
315 galaxies with $\chi^2_{\nu} \leq2$ (167 late-type and 148 early-type),
while the remaining fits were deemed less reliable and are excluded when computing
the size-magnitude relation of the galaxies in the SWAN fields
(although they are included in the number counts). As an
additional check of our late/early-type separation, we show in
Fig.~\ref{ellitt} the distribution of the axis ratios $b/a$ of the
galaxies with $\chi^2_{\nu} \leq2$ as a function of S\'ersic
index $n$. As expected, while the late-type galaxies are observed at random
inclinations with respect to the plane of the sky, and therefore at
every $b/a$, early-type galaxies are not observed with $b/a \lesssim
0.4$ (e.g. Lambas et al. \cite{lambas92}). This confirms that our
morphological classification of early and late type galaxies based on
the S\'ersic index $n$ produces reliable results.
While the redshifts of these objects are presently unknown,
the magnitude-redshift relation of Cowie et al. (\cite{cowie})
and the K20 survey (Cimatti et al. \cite{cimatti})
indicate that at $K = 20$ the median redshift is $z \sim 0.8-1$.
At this redshift, our spatial resolution of $0.1\arcsec$, which also
corresponds to the smallest effective radius bin, is equivalent to
only 500\,pc for typical cosmologies, hinting at the exciting
potential of this work.
\section{Star-galaxy separation} \label{stargal}
The separation of Galactic foreground stars from the field galaxies
is a critical step for avoiding star contamination in our galaxy catalogue.
We classified as stars all 58 sources detected in the NACO images
with SExtractor stellarity index $\mathrm{SSI} \geq 0.9$.
The SExtractor classification should be treated with caution
since it assumes a constant PSF across each field, and elongated sources
are more likely classified as galaxies. However,
all the objects classified as stars by SExtractor lie on an upper
envelope in a Strehl versus radial distance plot, i.e., they have the
highest Strehl ratio among the sources at the same distance from
the guide star, supporting their classification as point sources.
It remains possible that some stars are not classified
as point sources by SExtractor, due to the limited isoplanatic AO
patch and their resulting elongated shape. We therefore also
classified as stars all the very compact sources fitted
by GALFIT with $R_e<0.01\arcsec$.
This is supported by simulations in which we fitted true
fiducial point sources in the SWAN fields, rescaled to several
magnitudes, with GALFIT S\'ersic profiles and obtained
very compact effective radii $R_e<0.01\arcsec$ and high S\'ersic indexes.
For very bright and elongated PSFs, the fitted $R_e$ can still
be as large as $\sim 0.2\arcsec$, due to the higher
signal in the halos of the PSF that may not be perfectly
reproduced by the PSF model. We therefore include in the star
catalogue all the sources with $K_\mathrm{s} \leq 18.5$ (in order
to have sufficiently high S/N) that are classified as
stars by SExtractor in our seeing-limited SOFI images of the
SWAN fields. All the objects classified as stars in the seeing-limited images
proved to be compact in the AO-corrected ones as well, even if elongated,
with all having $R_e < 0.16\arcsec$ as fitted by GALFIT
using the appropriate local PSFs for convolution.
We have a total of 112 stars in the 21 SWAN fields analyzed.
To assess the robustness of the star/galaxy classification,
the star counts were compared with the predictions of the Bahcall et al.
(\cite{bachall80}) Galaxy model, which provides the
star counts as a function of the field's Galactic longitude and latitude.
As the model provides the number of stars brighter than a certain limiting
magnitude in the $V$ band, we convert the $V$ magnitude into a $K_\mathrm{s}$
magnitude using an average color derived from the $K$-band counts at
the Galactic pole provided by Hutchings et al.
(\cite{hutchings02}). In Fig.~\ref{starsswan} we show the number of stars
in the SWAN fields as a function of Galactic latitude $b$ up to $K_\mathrm{s}
= 22$, which
corresponds to the limit where we are 100\% complete for point sources
(see Fig.~\ref{corrps}). It can be seen that the observed and predicted stellar number counts
are in very good agreement for all latitudes except the lowest latitude bin
($|b|<20^\circ$), where the Galactic model is less accurate due to the high
variability between adjacent lines of sight. However, the total excess of
selected stars with respect to the model predictions is only 18 sources, i.e.,
small compared to the total catalogue of 383 galaxies. Therefore, even if some
compact galaxies in these fields were erroneously classified as stars,
they represent less than 5\% of the sample.
\begin{figure}
\begin{center}
\resizebox{0.4\textwidth}{!}{\includegraphics{5504fig3.ps}}
\caption{\label{starsswan} Number of stars in the SWAN fields as a function
of Galactic latitude $b$ up to $K_\mathrm{s} = 22$, compared with the
predictions of
the Galaxy model of Bahcall et al. (1980).
The observed and predicted stellar number counts are
in very good agreement for all latitudes except the lowest latitude
bin ($|b|<20^\circ$),
where the Galactic model is less accurate due to the high
variability between adjacent lines of sight.}
\end{center}
\end{figure}
\section{Completeness correction and number counts} \label{counts}
The probability of detecting a source in one of our images depends
on five different parameters:
\begin{figure*}
\centering
\begin{minipage}[c]{1.0\textwidth}
\centerline{\hbox{
\psfig{file=5504fig4a.eps,width=9.0cm}
\hspace{0.0cm}
\psfig{file=5504fig4b.eps,width=9.0cm}
}}
\caption{The \textit{left panel} shows the variation of the detection
probability for a late-type galaxy in SWAN as a function of the
magnitude and the effective radius
$R_e$. The probability is the average for all 21 observed
fields, and assumes a distance from the guide star
$1 < \theta/\theta_0 \le 2$. The \textit{right panel} shows
the same for early-type galaxies.}
\label{compl}
\end{minipage}
\end{figure*}
\begin{enumerate}
\item The total integrated magnitude
\item The S\'ersic index $n$: for a given magnitude, more
concentrated objects (i.e., early-type-like galaxies with $n > 2$)
are more easily detected than less concentrated, exponential-like
galaxies ($n < 2$).
\item The effective radius $R_e$. As before,
more compact sources are more easily detected.
\item The SWAN field in which the source was observed. The
integration time, and therefore the signal-to-noise ratio, is different in
different fields. In addition, the overall AO correction is different in each
observation, as is indicated by the different on-axis Strehl ratios (see
Table~\ref{obspar}).
\item The distance from the guide star $\theta/\theta_0$, as the
degree of correction of the AO system depends strongly on this
parameter (see, e.g., Paper I).
\end{enumerate}
These last two parameters arise from distinctive attributes of our survey,
namely its use of several different fields (item 4) and of AO (items 4 and 5).
\begin{figure}
\begin{center}
\resizebox{0.4\textwidth}{!}{\includegraphics{5504fig5.ps}}
\caption{Comparison between the completeness for point sources and for
extended sources for the SWAN fields. The completeness for point sources
(triangles, dashed line)
was evaluated adding 100 true NACO point sources ($\theta/\theta_0=1.5$)
for each magnitude to each field and then averaging over all fields. The
completeness for extended sources (circles, solid line)
is the average over all the fields for both
late and early-type with $R_e=0.3\arcsec$, i.e., the mean for all the
detected sources in SWAN, at the same distance from the guide star used
for the point sources. The errorbars show the variance for the 21
different fields.}
\label{corrps}
\end{center}
\end{figure}
In order to derive the detection probability for each combination of
these five parameters, we ran several simulations, adding a total of
65,000 simulated galaxy profiles with known parameters -- matched to
the ones of the observed galaxies -- to the original SWAN fields at random
locations and tried to recover them running SExtractor again.
We used extended sources to evaluate the
completeness correction, as this produces results that are quite different from
those inferred using point sources alone, especially at this resolution.
SExtractor was used with the same parameters used in the science
source detection. We used the S\'ersic index $n=1$ for late-type galaxies and
$n=4$ for early-type galaxies. For both types the effective radius $R_e$
ranged from 0.1\arcsec to 1.0\arcsec. The galaxy profiles were
convolved with real NACO PSFs extracted from point sources in our data
lying at different distances from the guide star, in order to simulate
the effect of the AO correction.
The simulated galaxies have magnitudes ranging from $K_\mathrm{s}=19$, where
we are 100\% complete for every combination of the other four parameters,
to $K_\mathrm{s}=23.5$.
We consider three different regimes for the detection probability
as a function of the distance from the guide star: $\theta/\theta_0 \le 1$,
$1<\theta/\theta_0 \le 2$ and $\theta/\theta_0 >2$.
We used point sources at $\theta/\theta_0=$ 0.9, 1.5, and 2.8 respectively
for the three regimes as references for the PSF in the simulated galaxies.
Using this approach, we can derive the detection probability for a
galaxy of known magnitude, $R_e$, S\'ersic index, and distance from
the guide star in a particular field. By way of example, the
histograms of the detection probability averaged over all 21 fields as
a function of magnitude and of effective radius $R_e$ for late and early-type
galaxies with $1 < \theta/\theta_0 \le 2$ are shown in
Fig.~\ref{compl}. It is clear from comparing the panels how much more
sensitive high resolution images are to more core-concentrated sources
like the elliptical galaxies.
The detection probability can be used to correct the number counts
using the observed galaxy population as a starting point. From our simulations
we derived the detection probability $P_{\rm detect}$ for
each detected galaxy in the survey, using the measured $R_e$,
magnitude, and S\'ersic index from GALFIT fitting (see section
\ref{morph}), along with the position of the galaxy in the field.
Each galaxy is then regarded in the completeness-corrected number counts
as $1/P_{\rm detect}$ sources at its magnitude, so that, for example,
a galaxy with $P_{\rm detect}=0.80$ counts as 1.25 galaxies once the
correction is applied.
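Written compactly, the weighting scheme above gives the
completeness-corrected differential counts in a magnitude bin as
\begin{displaymath}
N_{\rm corr}(m) = \frac{1}{\Omega\,\Delta m}
\sum_{i\,:\,m_i \in [m,\,m+\Delta m)} \frac{1}{P_{{\rm detect},i}},
\end{displaymath}
where the sum runs over the galaxies detected in the bin, and the
normalization $\Omega\,\Delta m$ (our notation) denotes the total surveyed
area and the bin width; this is the quantity reported in
Table~\ref{tabcounts}.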
In Fig.~\ref{corrps} we show the average completeness over all the fields
for both point sources and extended objects as a function of
magnitude. The completeness for point sources was
derived by adding (100 times) to each field a true NACO point source with rescaled
flux. The point source was at a distance
$\theta/\theta_0 = 1.5$ from the guide star.
The completeness for extended objects is the average between that of late-type
and of early-type galaxies
over all the fields, using the same NACO point source PSF to convolve the
simulated galaxy profiles. The effective radius was fixed at the
average for the detected SWAN sources ($R_e=0.3\arcsec$). Obviously
the correction derived using only point sources is much smaller than the
one derived as described above, with the number of sources in the
range $20.5 \le K_\mathrm{s} \le 22$ (where no correction would be
applied in the point-source case) being particularly underestimated.
The corrected number counts, obtained using the detection probabilities
of the observed SWAN sources as weights, are shown in Fig.~\ref{corrcounts}.
\begin{table}
\begin{center}
\begin{tabular}{ccccc}
\noalign{\smallskip} \hline \noalign{\smallskip}
Mag($K_s$) & $n_\mathrm{raw}$ & $N_\mathrm{tot}$ & Late-type
& Early-type \\
(1) & (2) & (3) & (4) & (5) \\
\noalign{\smallskip} \hline \noalign{\smallskip}
14 & 3 & 7e+02 & 2e+02 & 5e+02 \\
15 & 8 & 1.9e+03 & 5e+02 & 1.4e+03 \\
16 & 8 & 1.8e+03 & 1.4e+03 & 4e+02 \\
17 & 19 & 4e+03 & 1.4e+03 & 3.0e+03 \\
18 & 62 & 1.5e+04 & 4.9e+03 & 9e+03 \\
19 & 69 & 1.6e+04 & 9e+03 & 7.8e+03 \\
20 & 96 & 2.8e+04 & 2.0e+04 & 8.7e+03 \\
21 & 67 & 4.8e+04 & 4.1e+04 & 6.8e+03 \\
22 & 35 & 9.8e+04 & 9.3e+04 & 4.4e+03 \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}
\caption{\label{tabcounts} Differential number counts in the
SWAN fields.
The raw number of detected galaxies is reported in (2). In (3)
the corrected number counts
($\textrm{N}\ \textrm{mag}^{-1}\ \textrm{deg}^{-2}$)
for the whole sample are shown, while (4) and (5) separate
late-type and early-type counts, respectively.}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\resizebox{\hsize}{!}{\includegraphics{5504fig6.ps}}
\caption{$K_\mathrm{s}$ corrected (solid circles) and uncorrected
(solid triangles) number counts from SWAN, compared with other
$K$-band surveys. The points
at $K_\mathrm{s} \ge 23$ are not reliable as the completeness
correction is adding more than 90\% of the sources.
The excess at bright magnitudes is due to a selection bias, as
some of the fields were selected to contain more bright galaxies
(Baker et al. 2003). The solid line is the best-fitting
power law in the range $16 \le K_\mathrm{s} \le 22$,
with a slope $d\,\log(N)/dM=0.26$. The error bars show the
Poissonian errors.}
\label{corrcounts}
\end{center}
\end{figure}
At brighter magnitudes ($K_\mathrm{s} \le 16$), an excess is present
in our number counts with respect to others in the literature. This is
due to a selection bias, as some of the fields were chosen to have an
excess of bright galaxies (see Baker et al. \cite{baker}). At fainter
magnitudes, the points at $K_\mathrm{s} \ge 23$ are no longer
reliable, as the completeness correction adds more than 90\% of
the sources. We derive a slope $d\,\log(N)/dM=0.26 \pm 0.01$ in
the range $16 \le K_\mathrm{s} \le 22$. Our corrected number counts
are in good agreement with those found by other authors in the
literature (see Fig.~\ref{corrcounts}).
This result, together with what was found in Paper I, shows once more
that we are able to account for the biases introduced by the variable
AO PSF and that the data can therefore be reliably used for further analysis.
\section{Comparison with galaxy evolution models} \label{compare}
Most theoretical models of galaxy formation and evolution can be
roughly divided into two broad categories: the so-called ``backwards''
approach and the ``{\it ab initio}'' approach. In the former approach
(Tinsley \cite{tinsley80}; Yoshii \& Takahara \cite{yoshii88};
Fukugita et al. \cite{fukagita90}; Rocca-Volmerange \& Guiderdoni
\cite{rocca90}; Yoshii \& Peterson \cite{yoshii95}; Pozzetti et al.
\cite{pozzetti96}, \cite{pozzetti98}; Jimenez \& Kashlinsky
\cite{jimenez99}; Totani \& Yoshii \cite{totani00}; Totani et al.
\cite{totani01}), the evolution is probed backwards into the past to
predict observables such as galaxy counts and redshift distributions.
The local properties of galaxies, like multi-band colors and chemical
properties, are used to construct a reasonable model of the star formation
history and luminosity evolution of galaxies based on the stellar
population synthesis method. The formation epoch and merging history
of galaxies, however, cannot be predicted in this framework, as they
are introduced as phenomenological parameters that can be inferred
from comparison with observational data.
In the latter approach (Kauffmann et al. \cite{kauff93}; Cole et al.
\cite{cole94}, \cite{cole00}; Somerville \& Primack \cite{som99};
Nagashima et al. \cite{naga05}), on the other hand,
the formation epoch and merging history of galaxies are predicted
by the standard theory of structure formation in the CDM universe.
In these models the local and high redshift properties of the galaxies
such as the luminosity function, mass and size distribution,
are outputs of the model that can be compared with observations.
However, although the formation and evolution of dark matter halos are
considered to be rather well understood, our knowledge about
baryonic processes such as star formation, supernova feedback, and galaxy
merging is still very poor, and a number of phenomenological parameters must
be taken into account, making the comparison of the modeled and observed data
more complex. Here we compare the galaxy counts in SWAN and
the size-magnitude relation of the detected galaxies with
representative results of these two radically different approaches.
\subsection{The galaxy evolution models}
The first model used is a ``backwards'' pure luminosity evolution (PLE) model
developed by Totani \& Yoshii (\cite{totani00}) and Totani et al.
(\cite{totani01}), based on the present-day properties of galaxies and
their observed luminosity function. It evolves a system's luminosity
and spectral energy distribution evolution backward in time using a
standard galaxy evolution model in which star formation is tuned to
reproduce galaxies' present-day colors and chemical properties
(Arimoto \& Yoshii \cite{arimoto87}; Arimoto et al. \cite{arimoto92}).
The model includes the effects of both internal dust obscuration and
intergalactic H\,I absorption, and it does not incorporate galaxy mergers;
therefore the galaxy sizes and comoving number density do not evolve.
The number density of galaxies is normalized at $z=0$ using the local
$B$-band
luminosity function of galaxies, while the relation between the present $B$
luminosity and effective radius $R_e$ is determined from power-law fits
to the empirical relation observed for local galaxies of different types
(Bender et al. \cite{bender}; Impey et al. \cite{impey}).
Galaxies are in fact classified into six morphological types:
three of them (Sab, Sbc, and Scd) are assigned to spiral galaxies,
while an Sdm model is used for irregular galaxies. Following Totani et al.
(\cite{totani01}), we divided the E/S0 galaxies into distinct
populations of giant ellipticals (gE, $M_B \lesssim -17$) and dwarf
ellipticals (dE, $M_B \gtrsim -17$). It is known that these are
two distinct populations, showing different luminosity profiles
(the $r^{1/4}$ law for giants and exponential for dwarfs; see
Barazza et al. \cite{barazza05}), different luminosity-size relations
and luminosity functions,
and different physical processes that govern the evolution of each type
(see, e.g., Ferguson \& Binggeli \cite{ferguson94} and references therein;
Ilbert et al. \cite{ilbert06} for evidence of two different populations
up to $z\sim1.2$).
In the
$K_\mathrm{s}$ band, the critical separation magnitude ($M_B=-17$)
corresponds to $M_{K_\mathrm{s}} \sim -21$ for
the typical color of elliptical galaxies. Since the
contribution of early-type galaxies is more significant in the
near-infrared than in the optical, it is important to take into account
such distinct populations of elliptical galaxies in predicting the
$K_\mathrm{s}$ counts.
In addition, we compare the derived properties of the SWAN galaxies with the
predictions of the ``\textit{ab initio}'' Numerical Galaxy Catalog (NuGC)
of Nagashima et al. (\cite{naga05}), which is based on a semi-analytic (SA) model of galaxy
formation combined with high-resolution $N$-body simulations in a
$\Lambda$CDM cosmological framework. The model includes several
essential ingredients for galaxy formation, such as the merging histories
of dark halos directly derived from $N$-body simulations, radiative
gas cooling, star formation, supernova feedback, mergers of galaxies,
population synthesis, and extinction by internal dust and intervening
H\,I clouds. The high resolution used for the simulations, with a
minimum mass for dark halos of $3 \times 10^9\ M_{\odot}$, is
sufficient to resolve their effective Jeans mass.
It was shown by Nagashima et al. (\cite{naga05}) that this model is in
reasonable agreement with several observational results, like the
luminosity functions in $B$ and $K$ bands, the H\,I mass function,
the size-magnitude relations for local spirals and elliptical
galaxies, the Tully-Fisher and Faber-Jackson relations at $z=0$, faint
galaxy number counts in $BVRi'z'$ bands, and the cosmic star formation
history at high redshift. In addition, the model is able to reproduce
the distribution of $(R-K)_{AB}$ colors with redshift observed in
GOODS (Somerville et al. \cite{some04}), including extremely red
($(R-K)_{AB}\sim 5$) galaxies that other semi-analytic treatments
have trouble accounting for. In summary, the model is able to
reproduce several observational results for local and high-redshift
galaxies, not just those that were used to tune the model parameters.
\subsection{Addressing cosmic variance} \label{cosmvar}
The uncertainties in galaxy number counts include contributions from
Poisson errors and from the so-called ``cosmic variance'', due to the
fact that galaxies are strongly clustered and thus distributed in
overdensities and large voids on the sky.
Therefore, we have to take into account the corresponding effects
on the relative normalizations of the predicted and
observed counts in order to make a fair comparison.
\begin{figure*}
\centering
\begin{minipage}[c]{1.0\textwidth}
\centerline{\hbox{
\psfig{file=5504fig7a.ps,width=7.5cm}
\hspace{0.0cm}
\psfig{file=5504fig7b.ps,width=7.5cm}
}}
\caption{\label{tot} Comparison of total number
counts for all galaxies (\textit{left panel}), and of the mean
$R_e$ as a function of magnitude for those sources
with $\chi^2_{\nu} \leq2$ in the GALFIT fit (\textit{right panel}),
with the PLE model (solid line) and the hierarchical model (dashed line)
predictions.
In the number counts, the points at $K_\mathrm{s} \leq 15$
show an excess due to a selection
bias, as some fields were selected to contain more bright galaxies.
Counts from other surveys in the literature for $K_\mathrm{s} \leq16$
were therefore plotted for comparison to the model in this range
(see Fig.~\ref{corrcounts} for the references of the data points).
The error bars on the number counts are the Poisson error,
while in the size-magnitude relation they represent the
standard error on the mean.}
\end{minipage}
\end{figure*}
According to Daddi et al. (\cite{daddi00}), the RMS fluctuation
$\sigma_{tot}$ of the number counts around their mean value, taking
into account both the Poisson error and the cosmic variance, is given by
\begin{equation} \label{clust}
\sigma^2_{tot}=X \cdot (1+ X\ AC)
\end{equation}
where $X$ is the total number of counts, $A$ is the clustering amplitude
($A \sim 1.6 \times 10^{-3}$
for the Daddi et al. (\cite{daddi00}) $K$-selected sample),
and $C$ is a factor that depends on the
field area and can be approximated by $C=58\,\mathrm{area}^{-0.4}$ for
an area expressed in arcmin$^2$.
In our case, as we observe galaxies in $N = 21$ different fields
of the same area, thus reducing the clustering uncertainty,
equation \ref{clust} can be written as
\begin{equation}
\sigma^2_{tot} \simeq X_{tot} \cdot \left(1+ \frac{X_{tot}}{N} AC\right)
\end{equation}
where $N$ is the number of fields,
$X_{tot}$ is the total number of sources detected
in all the fields, and $C$ corresponds to the area of a single field.
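This expression follows from treating the $N$ fields as independent:
applying equation~\ref{clust} to each field, with expected counts
$X_{tot}/N$ per field, and summing the $N$ single-field variances gives
\begin{displaymath}
\sigma^2_{tot} = \sum_{i=1}^{N} \frac{X_{tot}}{N}
\left(1+\frac{X_{tot}}{N}\,AC\right)
= X_{tot}\left(1+\frac{X_{tot}}{N}\,AC\right).
\end{displaymath}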
Therefore we derive that the relative uncertainty on the number counts is given by
\begin{equation}
\frac{\sigma_{tot}}{X_{tot}}=\sqrt{\frac{1}{X_{tot}}+\frac{AC}{N}}=0.08
\end{equation}
for the SWAN fields, while the contribution from Poisson statistics would only
be of the order of 5\%.
We expect that the predictions of any successful model should fit the
observed SWAN counts within a discrepancy of this order of magnitude.
In good agreement with these expectations, we find that a correction
of 4\% produces the best match with the total number counts in the PLE model,
while a correction of 1\% provides the best match to the hierarchical model.
The model predictions discussed in the following sections have been renormalized
accordingly, i.e., PLE and hierarchical model normalizations
are multiplied by 1.04 and 1.01 respectively.
\subsection{Comparison of the SWAN data with the model predictions}
The predictions of the PLE model for the galaxy counts and
galaxy size as a function of the $K_\mathrm{s}$ magnitude were compared
with the number counts and effective radii $R_e$ observed in the SWAN
images. We are assuming for the PLE model
$H_0=70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$,
$\Omega_m=0.27$, $\Omega_{\Lambda}=0.73$, and a formation redshift
$z_F=5$ for all galaxy types (Totani et al. \cite{totani01}).
The comparison between the PLE model
and the total completeness-corrected number
counts is shown in Fig.~\ref{tot}a.
As explained in section~\ref{counts},
the points at $K_\mathrm{s} \leq 15$ show an excess due to a selection
bias, as some fields were selected to contain more bright galaxies.
Counts from other surveys in the literature for $K_\mathrm{s} \leq 16$ were
therefore plotted for
comparison to the model in this range, and good agreement is found between
the SWAN galaxy counts and the PLE predictions.
Figure~\ref{tot}b shows the average effective radius
$\left<\log(R_e)\right>$ of the
galaxies with the most reliable morphological fitting, i.e., those fit
by GALFIT with $\chi^2_{\nu} \leq 2$. The average must take
into account the selection effects due to the different detection
probabilities of the galaxies. We therefore weighted each galaxy
using its detection probability as derived in section
\ref{counts}. The error bars show the standard error on the mean. The
observed data points are compared with the predictions of the PLE
model, which manifests a slight overprediction of the galaxies'
$R_e$. This effect is mainly due to the late-type galaxy
population (see Fig.~\ref{sized}), for which there might
be a hint of an increase in size; it will be discussed in section~\ref{typecomp}.
Our results confirm the finding of Totani et al. (\cite{totani01})
and Minowa et al. (\cite{minowa}), who found in the Subaru Deep Field and
Subaru Super Deep Field that a PLE model with no number or size
evolution gives the best fit to their $K$-selected sample's number
counts and isophotal area distribution.
\begin{figure*}
\centering
\begin{minipage}[c]{1.0\textwidth}
\centerline{\hbox{
\psfig{file=5504fig8a.ps,width=7.5cm}
\hspace{0.0cm}
\psfig{file=5504fig8b.ps,width=7.5cm}
}}
\caption{\label{countsd} Comparison of the SWAN
counts for exponential profile galaxies ($n < 2$, \textit{left panel})
and early-type galaxies ($n>2$, \textit{right panel})
with the PLE model (solid line) and hierarchical model (dashed line) predictions.
The open circles are not reliable points, as explained in the legend of
Fig.~\ref{tot}.}
\end{minipage}
\end{figure*}
The NuGC hierarchical model is also compared with the total
galaxy counts and $R_e$ distribution for SWAN in Fig.~\ref{tot}.
Within the observational uncertainties of the shape of the observed luminosity
functions, Nagashima et al. (\cite{naga05}) adopted two different
models, characterized by two different supernova feedback regimes:
a strong supernova feedback (SSFB) model and a weak supernova
feedback (WSFB) model. Comparisons with the total faint number counts
and isophotal area distribution for $K$-band selected galaxies in the
Subaru Deep Field, and with redshift distributions for faint galaxies,
showed that the SSFB model is in better agreement with the properties
of the $K$-selected
sample (Nagashima et al. \cite{naga02}; Nagashima et al. \cite{naga05}),
although the predicted counts were overestimated for $K \gtrsim
23$. We therefore compare the properties of the SWAN galaxies with the SSFB
model predictions only in the following.
A $\Lambda$CDM cosmology is used for the hierarchical model, with $\Omega_m=0.3$,
$\Omega_{\Lambda}=0.7$ and $H_0=70\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$. As found by
Nagashima et al. (\cite{naga05}), the model is in good agreement with
the total number counts data, and matches the $R_e$ distribution in
SWAN as well as the PLE model.
\subsection{Morphological type dependent comparison} \label{typecomp}
In addition to the total number counts and size-magnitude relation, our
high-resolution AO morphological classification of the SWAN galaxies
allows us to assess the predictions of the models for the counts and
sizes of the late type and early type galaxies separately. This is
one of the first times such a comparison has been
done in the near-IR, as both AO
observations and accurate PSF modeling are needed to obtain
reliable morphological classification of faint field galaxies at these
wavelengths. We are therefore able to compare our observational data
with untested predictions of the two models.
In Fig. \ref{countsd}a, the PLE model predictions for the number counts of
all galaxies with exponential profiles (spirals, irregulars and dwarf ellipticals)
are compared with the observations
for galaxies with S\'ersic index $n<2$ according to GALFIT. We find
good agreement between model and data for the number counts, as before.
A very similar result is found using the hierarchical model prediction for
late-type galaxies.
\begin{figure}
\centering
\psfig{file=5504fig9.ps,width=7.5cm}
\caption{\label{pletutti} Comparison of the total SWAN number
counts with the PLE model predictions.
The contributions of different galaxy types are shown as different
thin lines, while the prediction for the total population of
galaxies is shown as a solid line. The large open circles are not
reliable, as explained in the caption of Fig.~\ref{tot}.}
\end{figure}
Fig.~\ref{countsd}b shows the predictions of both models for early-type galaxies,
compared with the observations for $n>2$ galaxies in SWAN. In the SWAN data,
the elliptical counts are much flatter than the late-type counts,
showing a plateau for $K_\mathrm{s} >20$.
A much flatter slope for the early-type number counts relative to
the late types was also found by Teplitz et al. (\cite{teplitz}), using
HST NICMOS observations in the $H$ band, and in deep optical
observations (e.g., Abraham et al. \cite{abraham96}).
The adopted PLE model is able to convincingly reproduce the plateau
observed in the counts for $K_\mathrm{s}>20$; a similar behavior
is predicted by other PLE models (see, e.g., Pozzetti et al.
\cite{pozzetti96}). This plateau is produced in the model by a
combination of two effects. First, the luminosity function of the gE
population is bell-like (see, e.g., Totani et al. \cite{totani01}),
and the number of faint gEs decreases rapidly with decreasing luminosity.
Second, in the PLE model beyond $z > 1.5$
giant ellipticals are very faint, due to heavy extinction in the model
($\tau \gtrsim 10$), as described in Totani \& Yoshii (\cite{totani00}).
As a result, for $K_\mathrm{s} \geq 20$ (corresponding to an $L^*$ galaxy at $z=1.5$),
going to fainter magnitudes does not correspond to an increase of the sampled
volume. Instead, beyond $K_\mathrm{s} = 20$, the model predicts we
should begin to miss the dusty high-redshift progenitors of
today's ellipticals, consistent with the plateau observed in the SWAN data.
A scenario in which massive $z \gtrsim 1.5$ ellipticals are highly
obscured by dust during their starburst phase, and therefore produce
the plateau observed in our $K$-band number counts, is consistent with
the detection of very luminous, highly obscured submillimeter galaxies
at high redshift (Blain et al. \cite{blain02}, and references
therein). In addition, galaxies with unusually red IR colors measured
in deep near-IR observations can be explained as primordial elliptical
galaxies that are reddened by dust and still in the starburst phase of
their formation at $z \gtrsim 3$ (e.g., Totani et al. \cite{totani01b}).
In contrast to the success of the PLE model, the hierarchical model
considerably underpredicts the observed number counts at bright
magnitudes ($K_\mathrm{s} \lesssim 20$) and overpredicts them at
fainter magnitudes. In particular, the observed plateau in the
early-type counts, which was very well reproduced in the PLE model,
is not expected at all in the hierarchical model predictions.
This disagreement implies that the processes
that produce an elliptical galaxy, in at least this particular
hierarchical model, are not adequate to describe reality.
In the model, an elliptical galaxy
is formed through a major merger, i.e.,
a merger with mass ratio $f=m_1/m_2 \geq 0.3$, in which
it is assumed that all the cold gas turns into stars and
hot gas, and all the stars populate the bulge of a new galaxy.
Although it may be possible to increase the number of bright ellipticals by
changing the model parameter that regulates the mass ratio
distinguishing major from minor mergers,
it seems harder to decrease the model's number of $K_\mathrm{s} > 20$
ellipticals to the level observed in SWAN
(Nagashima, private communication). In this model, moreover, no separation
is possible between the gE and dE populations,
as all major mergers produce galaxies with a
de Vaucouleurs profile.
We note that misclassification of galaxies between the two
categories of early and late-type is not expected to strongly affect
these conclusions. We expect only $\sim 10\%$ of late-type to be
misclassified as early-type and $\sim 10\%$ of early-type
to be misclassified as late-type at $K_\mathrm{s} = 21$ (see Paper I). Since
at faint magnitudes the number counts of spirals are $\sim 10$
times higher than those of ellipticals, it is more likely that
we are overestimating the number of faint ellipticals in our sample.
Correcting for this bias would make the discrepancy with
the hierarchical model even larger.
\begin{figure*}
\centering
\begin{minipage}[c]{1.0\textwidth}
\centerline{\hbox{
\psfig{file=5504fig10a.ps,width=7.5cm}
\hspace{0.0cm}
\psfig{file=5504fig10b.ps,width=7.5cm}
}}
\caption{\label{sized} Comparison of the
$R_e$ distribution as a function of magnitude of late-type (\textit{left panel})
and early-type galaxies (\textit{right panel})
with $\chi^2_{\nu} \leq2$ in the GALFIT fit
with the PLE model (solid line) and hierarchical model (dashed line)
predictions. The error bars are the standard error on the mean; no
error bar is drawn for a bin with one galaxy.}
\end{minipage}
\end{figure*}
We conclude that the PLE model better reproduces the observed number
counts of the SWAN galaxies. In Fig.~\ref{pletutti} the contributions
of the different galaxy types to the total observed number counts
according to the PLE model are shown. The separation between the two
populations of dE and gE galaxies
proves to be a key element in reproducing the number counts separated by
morphological type on the basis of their light profiles. In fact,
had the dE galaxies been included in the early-type morphological bin
along with the gE galaxies, the number counts for exponential-like
profiles would have been underestimated for
$K_\mathrm{s} \gtrsim 20$, and the $r^{1/4}$-profile number counts
overestimated.
This result confirms the suggestion
of Totani et al. (\cite{totani01}) that such a separation reproduces
the observed number counts in the Subaru Deep Field for $K\gtrsim20$
better than
a model with a single elliptical population. \\
In Fig.~\ref{sized}, the observed size-magnitude distribution discriminating between late-type
and early-type galaxies is compared with the model predictions. At bright magnitudes,
the observed galaxies are smaller than the model predictions for both early-type and
late-type samples. This discrepancy is likely due to an uncorrected systematic effect:
large ($R_e \gtrsim 1\arcsec$) bright galaxies can be self-subtracted in the reduction process, as
explained in section~\ref{reduc}. This phenomenon will reduce the apparent sizes of sources,
although their integrated magnitudes (and thus their contributions to the number
counts) will be only minimally affected.
At faint magnitudes, the models are able to reproduce the observed distributions for
early-type galaxies. However, the PLE model better reproduces the observed distribution
at fainter magnitudes, where the hierarchical model predicts more compact objects
than are observed. In contrast, an offset may exist
between both models and the data for late-type galaxies, which appear $\sim 30\%$ smaller
than predicted at faint magnitudes. This offset is what would be expected
for modest growth in the sizes of late-type galaxies. In both of the models considered
here, the sizes of disks for a given mass are almost independent of the formation redshift.
This property is built into any PLE model, but even the hierarchical model by Nagashima
et al. (\cite{naga05}) assumes that there is almost no evolution in the stellar mass-size relation
for disk galaxies, as suggested by the observations of Barden et al. (\cite{barden05}) up to $z \sim 1$.
If our observed offset is really due to increases in size in the late-type population, the result
would be qualitatively consistent e.g. with the predictions of Mo et al. (\cite{mo98}), who estimated
that disk galaxies forming at $z = 1$ are 50\% smaller than disks forming at $z = 0$.
However, the inclusion of galaxies with many different redshifts, masses,
and $M/L$ ratios in the faint bins of Fig.~\ref{sized}b prevents any robust quantitative
conclusion. \\
Our finding that pure luminosity evolution of galaxies is favored for a
$K_\mathrm{s}$-selected sample up to $K_\mathrm{s} \sim 22$,
without evidence of significant number evolution even when separating between late-
and early-type galaxies, is consistent with other results. For example,
Trujillo et al. (\cite{trujillo05}) used deep near-infrared images
from the FIRES survey, combined with GEMS and SDSS data, to confirm that
the observed size-magnitude relation evolution out to $z\sim1.7$ for late-type objects
matches very well the expected evolution for Milky-Way type
objects from infall models, while
for spheroid-like objects the evolution of the
luminosity-size relation was found to be consistent with pure luminosity
evolution of a fading galaxy population.
McIntosh et al. (\cite{mcintosh05}) studied a large sample of early-type
galaxies from the GEMS survey, finding that the luminosity-size and stellar mass-size
relations evolve in a manner that is consistent with the passive aging of ancient
stellar populations.
Papovich et al. (\cite{papovich05}) suggest that passive evolution
can account for the observed luminosity-size relation
at $z\sim 1$, with merging becoming important at higher redshifts.
\section{Conclusions} \label{concl}
In this paper we have presented new results from a high resolution
adaptive optics assisted morphological study of 21 fields from SWAN,
the Survey of a Wide Area with NACO. The PSF model derived in Paper I
was used in combination with GALFIT to classify the SWAN galaxies into
the two classes of early and late type, and to derive effective radii
$R_e$ of 383 galaxies. A detailed study of the detection probability
as a function of the magnitude, S\'ersic index, effective radius,
field and distance from the guide star was performed in order
to take careful account of the selection effects affecting our sample.
The results were used to compute the completeness-corrected number counts
and to derive the average $R_e$ as a function of magnitude.
The number counts and size-magnitude relation for the total galaxy population,
and for early and late-type separately, were compared with both
a modified version of the pure luminosity evolution model of
Totani \& Yoshii (\cite{totani00}) and with the {\it a priori}
hierarchical model developed by Nagashima et al. (\cite{naga05}).
We have shown in section~\ref{compare} that while the hierarchical model
can convincingly reproduce the counts of late-type galaxies,
it is not consistent with the observed number counts of elliptical galaxies
selected in the $K_\mathrm{s}$ band. On the other hand, the PLE model
can reasonably reproduce both the late and early type count
distributions for the SWAN galaxies.
We have compared the size-magnitude distribution of the galaxies
with the predictions of the models, finding that there might be some hint
of increased size for the late-type galaxy population. Both models are consistent
with the observed distribution for early-type galaxies, although the PLE model
seems to better reproduce the observed distribution at fainter magnitudes.
Our work therefore favors pure luminosity evolution of early-type galaxies for a
$K_\mathrm{s}$-selected sample up to $K_\mathrm{s} \sim 22$. In contrast, our results show
that a representative example of currently available models based on the hierarchical galaxy
formation theory is not able to reproduce the observed properties of faint
$K_\mathrm{s}$-selected early-type galaxies in the near-IR.
These results illustrate the importance of obtaining reliable
morphological classifications for better constraining the details of
galaxy formation and evolution models, and demonstrate the unique
power of AO observations to extend such work to faint galaxies in the near-IR.
\begin{acknowledgements}
We thank the anonymous referee for useful comments and suggestions.
The authors are grateful to the staff at Paranal Observatory for their
hospitality and support during the observations. We thank
Masahiro Nagashima for
providing the $K$-band simulated data from the Numerical
Galaxy Catalog and useful discussion of our results;
Reinhard Genzel, Reiner Hofmann, Sebastian Rabien,
Niranjan Thatte, and W. Jimmy Viehhauser, for their help and
discussion of SWAN strategy and results; and Rainer
Sch\"odel for the observations of SBSF\,41.
Some of the data included in this paper were obtained as part of the MPE
guaranteed time programme. GC and AJB acknowledge MPE for support;
AJB acknowledges support from
the National Radio Astronomy Observatory, which is operated by Associated
Universities, Inc., under cooperative agreement with the National
Science Foundation.
\end{acknowledgements}
\section{Introduction}
Massive young clusters are rare objects that nevertheless exert a major influence on their galactic environments. Their modes of formation remain an important topic of research, echoing continuing significant uncertainty in the modes of evolution of the most massive stars that are their distinctive constituents. Clues to the early internal dynamics of massive young clusters can come from the O stars they eject \citep{Fujii2011}. With the arrival of Gaia DR2 proper motions and the availability of wide field photometric surveys, the opportunity now exists to locate ejected O stars in the environs of their birth clusters. In a previous paper \citep{Drew2018} we studied the example of Westerlund 2 and discovered a surprisingly ordered 'twin-exhaust' pattern of ejections that may favour the creation of Westerlund 2 via the merger of distinct sub-clusters. Here we move on to the example of NGC 3603, the even {\bf brighter and more compact} clustering in the region.
Like Westerlund 2, NGC 3603 is in the Carina region of the Galactic Plane. It is one of a small number of very dense and massive clusters in the Milky Way. Indeed, the extreme stellar density in the core has prompted comparisons with the extragalactic starburst phenomenon \citep[e.g.][]{Eisenhauer1998,Moffat2002,Stolte2006}. The mass of the cluster is among the highest measured in the Milky Way: \cite{Harayama2008} placed it in the range from 10\,000 up to 16\,000 M$_{\odot}$, while \cite{Rochau2010} obtained $\sim$18\,000 M$_{\odot}$. The age often cited for NGC 3603, based on the stellar content of its inner core of diameter $\sim20$~arcsec, is 1 to 2 Myr \citep[e.g.][]{Sung2004,Melena2008,Kudryavtseva2012}. This core is sometimes referred to as the HD 97950 cluster. Measurements over a wider sky area, out to a radius of an arcminute or so, have indicated that an older, lower density population may be present as well \citep{Melena2008,Beccari2010}.
In earlier work \citep[][hereafter MS-I and MS-II]{MMS2015, MMS2017}, we presented blue selections of OB stars from the VST
Photometric H$\alpha$ Survey of the Southern Galactic Plane and Bulge \citep[VPHAS+,][]{Drew2014} across the Carina region, and showed that their conversion to spectroscopically-confirmed OB stars is very high. The MS-II list of 5915 O-B2 candidates is accompanied by high-quality measures of extinction, along with estimates of effective temperatures that are good enough to broadly classify as early-O, later-O and early-B stars. These are derived from fitting each object's spectral energy distribution (SED) as represented by its $u/g/r/i/J/H/K$ magnitudes. Here, we reuse MS-II in order to focus on the hinterland of NGC 3603.
This paper is organised as follows. First, we select from MS-II a set of high-confidence O stars within a region of 1.5$\times$1.5~deg$^2$ centred on NGC 3603, and crossmatch it with the Gaia DR2 database \citep{Gaiadr2} with the aim of utilising the proper motion (PM) data (Section~\ref{sec:sample}). We then compute the mean proper motion of the cluster using stars located within 1 arcminute of the cluster centre, which becomes the basis for relative proper motions (rPM, see sections~\ref{sec:core-pm} and \ref{sec:rel-pms}). This enables the identification of probable escapers from the cluster that turn out, intriguingly, to be located in a half ring (section~\ref{sec:ejections}). After some sample decontamination, we consider the radial distribution of the O star candidates and find it largely follows the King model adopted by \cite{Harayama2008}. The paper ends with a discussion of the significance of the results and some conclusions (sections~\ref{sec:discussion} and \ref{sec:conclusions}).
\section{Construction of the sample}
\label{sec:sample}
We adopt as the central reference position in the core of NGC 3603 the Galactic coordinates $\ell = 291^{\circ}.617$, $b = -0^{\circ}.523$. The selection box on the sky around this position is a square occupying 1.5$\times$1.5~deg$^2$, with sides oriented along the directions of constant Galactic longitude and latitude. Within this region, the MS-II database provides a list of 1663 blue-selected OB candidate stars. Of these we retain as a long list those objects for which the reported quality of fits to their optical-NIR SEDs is $\chi^2 < 25$. MS-II recommended a tighter $\chi^2$ limit than this for the selection of ``good OB stars'': it is relaxed here so as not to exclude detected objects close to the centre of NGC 3603 that are known already to be early-type emission line stars. This reduces the list to 1537 objects. How these stars are distributed in terms of best-fit effective temperature (or $\log(T_{\rm eff} K)$ as plotted), extinction, $A_0$, and 2MASS $K$ magnitude is shown in Figure~\ref{fig:properties}. We note that $A_0$ represents the monochromatic extinction at a wavelength of 5495~\AA.
To focus the long list down onto the likely O star content of the region at a distance $D \lesssim 8$ kpc, and extinction $A_0 \lesssim 10$ magnitudes, we cut both on effective temperature such that $\log(T_{\rm eff} K) > 4.44$ and on $K$ magnitude such that $K < 13.0$. The first cut admits objects with estimated effective temperatures greater than 27500~K: since O stars are associated with effective temperatures exceeding $\sim$30000~K, this builds in some margin for fit error. The reasoning behind the second cut is that, since $M_K \simeq -3$ for an O9.5 main sequence star \citep{Martins2005}, such a star at a maximal distance of 8 kpc, with $A_0 = 10$, would suffer no more than $\sim 1$ magnitude of $K$ extinction, resulting in an apparent magnitude of $K \sim 12.5$. Setting the fainter limit of $K = 13.0$ again makes some allowance for error. How the reduced sample compares with the full list from MS-II is illustrated in Figure~\ref{fig:properties}.
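The arithmetic behind the $K < 13.0$ cut can be checked directly. The following sketch (in Python) uses the worst-case inputs assumed above, namely $M_K \simeq -3$ for an O9.5V star, $D = 8$ kpc and $\sim1$ magnitude of $K$-band extinction:

```python
import math

# Sanity check of the K-magnitude cut (a sketch; the inputs M_K ~ -3,
# D = 8 kpc and A_K ~ 1 mag are the worst-case values assumed in the text).
def apparent_mag(abs_mag, distance_pc, extinction_mag=0.0):
    """Apparent magnitude from the distance modulus plus extinction."""
    return abs_mag + 5.0 * math.log10(distance_pc / 10.0) + extinction_mag

print(round(apparent_mag(-3.0, 8000.0, 1.0), 1))  # ~12.5, so K < 13.0 keeps such a star
```

The half-magnitude headroom between 12.5 and the adopted limit of 13.0 is the allowance for error mentioned in the text.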
\begin{figure}
\begin{center}
\includegraphics[width=0.85\columnwidth]{ngc3603box-teff}
\includegraphics[width=0.85\columnwidth]{ngc3603box-a0}
\includegraphics[width=0.85\columnwidth]{ngc3603box-K}
\caption{
These histograms show how the MS-II objects inside the 1.5$\times$1.5 deg$^2$ box centred on NGC 3603, satisfying $\chi^2<25$, are distributed in effective temperature (top panel), extinction, $A_0$ (middle) and $K$ magnitude (bottom). The full sample of 1537 stars is shaded in blue. The green bars pick out the selection with logT$_{\rm eff}$ $> 4.44$ and $K < 13.0$ mag. These are the focus of this study.
}
\label{fig:properties}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=1.4\columnwidth]{ngc3603-PM1}
\caption{
The on-sky positions of the 288 selected objects in the region around NGC 3603. The background image is constructed from VPHAS+ $H\alpha$ data.
}
\label{fig:crossmatch}
\end{center}
\end{figure*}
Next, a cross-match of the reduced list of 325 stars with the Gaia DR2 catalogue was undertaken. We found that 292 objects with good astrometry were successfully matched up at position offsets ranging from $\sim$0.03 to $\sim$0.2 arcsec (with a mean of $\sim0.1$ arcsec). The astrometry is ``good'' in that all of them pass the mission-recommended test of astrometric quality: that the renormalised unit weight error, $u/u_0(G,BP-RP) < 1.4$, where $u = \sqrt{\chi^2/(N-5)}$ and $u_0(G,BP-RP)$ is the magnitude- and colour-dependent reference value.\footnote{See the Gaia mission document, GAIA-C3-TN-LU-LL-124-01 by L. Lindegren} The minimum number of Gaia satellite visibility periods per source is 13, while the median value is 18. As might be expected, given the centering of the Gaia $G$ transmission in the red part of the optical spectrum, the typical difference between MS-II $r$ and Gaia $G$ magnitudes per source is modest, being in the region of 0.1--0.2 mag. The apparent magnitudes of the selection fall mainly in the range $12.0 < r, G < 17.5$.
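The astrometric quality test can be written schematically as follows. This is only a sketch: the normalisation $u_0(G, BP-RP)$ is a function tabulated by the Gaia consortium, and is treated here as a plain input value:

```python
import math

# Schematic form of the renormalised unit weight error (RUWE) cut used above.
# u0 is the magnitude- and colour-dependent normalisation tabulated by the
# Gaia consortium; passing it as a number is an assumption for illustration.
def passes_astrometric_cut(chi2, n_good_obs_al, u0, limit=1.4):
    u = math.sqrt(chi2 / (n_good_obs_al - 5))  # unit weight error
    return (u / u0) < limit                    # renormalised test
```

For a well-behaved solution with, say, $\chi^2 = 100$ over 105 good along-scan observations and $u_0 = 1$, the test passes; quadrupling $\chi^2$ fails it.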
Finally, we removed 4 objects that are likely to be well into the foreground of NGC 3603 -- specifically, stars with $>5\sigma$ Gaia DR2 parallaxes for which distances of under 4 kpc are indicated. This leaves 288 candidate O stars of which 15 are already in the literature. In the online appendix, a table of names, positions and other important quantities is provided. This is our sample. How it is distributed on the sky is shown in Figure~\ref{fig:crossmatch}.
The absence of candidate O stars towards higher Galactic longitudes and more negative latitudes apparent in Figure~\ref{fig:crossmatch} is real, in the sense that the MS-II source catalogue only found cooler B stars in this corner (along with 4 indeterminate objects returning unacceptably high $\chi^2$ SED fits). Another prominent feature of the emerging distribution is the dense, extended core of objects around the cluster position. It is a reasonable first conjecture that most of the objects are associated with NGC 3603: within 2 arcminutes of our fiducial position there are 56 O-star candidates, rising to 115 inside a radius of 8 arcminutes.
Within the field under consideration there could be a concern that there will be 'contamination' due to objects located in the first crossing of the Carina Arm at a distance of 2--3 kpc (with NGC 3603 in the second crossing at $>6$ kpc, see section~\ref{sec:distance}). A sign of this in Figure~\ref{fig:crossmatch} is the presence of the bright nebula, NGC 3576, around $\ell \simeq 291^{\circ}.2$, $b \simeq -0^{\circ}.65$. This is known to be associated with the near Carina Arm \citep[][give a distance of 3.0$\pm$0.3 kpc]{Depree1999}. Accordingly we could anticipate some of the scatter of objects in the vicinity of this nebula to be near-arm stars.
However, the visual extinctions of the exciting stars of NGC 3576 itself are high at over 10 magnitudes \citep{Figueredo2002}, while the known OB stars in its vicinity are too bright to appear in our sample (e.g. EM Car, an O8V+O8V binary, $V=9.54$). Furthermore, only 15 of the 288 stars have $A_0 < 4$ -- for which a distance of under $\sim$4 kpc would be implied if $A_0$ were to rise with distance at a rate of 1 mag kpc$^{-1}$ (see the discussion in section 5.3 of MS-II). We conclude that the amount of near-arm contamination -- which would most likely be dominated by B stars -- is well under 10 percent.
\section{The distance to NGC 3603}
\label{sec:distance}
It is widely accepted that NGC 3603 is located in the far, rather than the near, Carina Arm. \cite{Melnick1989} presented and analysed UBV photometry of 74 stars over a region of $\sim$9 square arcminutes, determining a distance modulus of 14.3 (or $D = 7.24$ kpc). \cite{Sung2004} focused mainly on the inner region within 1 arcminute of the cluster centre and used dereddened VI photometry of stars to deduce a distance modulus of $14.2 \pm 0.2$ (or $D = 6.9 \pm 0.6$ kpc). Since then \cite{Melena2008} have argued for $D = 7.6$ kpc, based on a spectrophotometric analysis of mainly cluster O stars. Usefully, these authors also provided two tables summarising stellar and kinematic distances in the literature prior to their work (see their Tables 4 and 5). A kinematic measure not included in this compilation was that by \cite{Nurnberger2002} based on CS line observations: they obtained $D=7.7\pm0.2$ kpc, from CS velocities of $14.2\pm 1.6$ km s$^{-1}$ (LSR).
The emergent picture from the literature is that since the mid 1980s most estimates have ranged from 6 kpc up to 8 kpc. Here, we will work with $D = 7\pm 1$ kpc.
A distance of 7 kpc to NGC 3603 implies that an astrometric parallax of 0.143 mas would be measured in the ideal case of negligible error. The Gaia DR2 results are known to exhibit an offset of $-0.03$ mas, and a position-dependent systematic error of up to 0.1 mas \citep{Lindegren2018}. In this situation, we should not expect Gaia DR2 parallax data to do more than perhaps statistically corroborate existing measures of distance. Figure~\ref{fig:edsd-distance} shows the distribution of distances for our sample of 288 stars inferred on applying the EDSD (exponentially-decreasing space density) prior described by \cite{Luri2018}: a scale length of $L = 1.5$~kpc was adopted and 0.03 mas was added to each parallax to correct for the global Gaia DR2 offset.
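The inference can be sketched as a grid evaluation of the unnormalised EDSD posterior, returning its mode. This is an illustrative simplification (not necessarily the exact estimator used here), assuming the $L = 1.5$ kpc scale length and the $+0.03$ mas zero-point correction adopted above:

```python
import numpy as np

# Illustrative grid evaluation of the EDSD distance posterior (an assumed
# simplification of the Luri et al. approach; returns the posterior mode).
def edsd_distance_mode(parallax_mas, sigma_mas, L_kpc=1.5, offset_mas=0.03):
    w = parallax_mas + offset_mas            # zero-point-corrected parallax
    r = np.linspace(0.05, 30.0, 20000)       # trial distances in kpc
    # unnormalised log-posterior: r^2 exp(-r/L) prior times a Gaussian
    # likelihood in parallax (1 kpc <-> 1 mas)
    ln_post = 2.0 * np.log(r) - r / L_kpc - 0.5 * ((w - 1.0 / r) / sigma_mas) ** 2
    return float(r[np.argmax(ln_post)])
```

For a star with a DR2 parallax of 0.14 mas and a typical error of 0.04 mas, the mode lands in the 5--6 kpc region, illustrating how strongly the prior shapes individual distances at such low parallax signal-to-noise.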
\begin{figure}
\begin{center}
\includegraphics[width=0.8\columnwidth]{ngc3603-parallax-distance}
\caption{
The histogram of distances inferred from Gaia DR2 parallaxes for the 288 selected O-star candidates, using an EDSD prior with scale length of 1.5 kpc. Every parallax was increased by 0.03 mas to allow for the known Gaia DR2 global offset. The blue histogram bars represent the entire sample. The grey bars are the distribution obtained on limiting the selection to stars within 5 arcmin of the cluster fiducial position, while the green colour picks out stars within 1 arcmin.
}
\label{fig:edsd-distance}
\end{center}
\end{figure}
Evidently the distribution in Figure~\ref{fig:edsd-distance} is very broad. This remains true when the selection is limited to those close to cluster centre. This is unsurprising in itself. What is of more interest is the trend in the mean and median measures as the sample is reduced from the full set to, first, projected separations from the centre of $<5$ arcmin, and then to $<1$ arcmin. The changes are in the sense of increased mean, or median, distance as the sky area is restricted: the means and formal errors from the distributions are
\begin{itemize}
\item full sample: $D=7.2\pm0.1$ kpc (288 stars),
\item within 5 arcmin of centre: $D=7.6\pm0.2$ kpc (93 stars),
\item within 1 arcmin of centre: $D=8.2\pm0.4$ kpc (30 stars).
\end{itemize}
The medians for these three samples are 7.2, 7.5 and 8.1 kpc respectively -- i.e. scarcely different from, if a little lower than, the means.
This pattern is plausible even if the numerical values of the distances are subject to a presently undetermined systematic error (capable of shifting the means by more than 1 kpc either way). We would expect the MS-II catalogue to provide more candidates in the foreground of NGC 3603 than in the increasingly reddened background, resulting in a bias toward a shorter distance in the largest sample. This bias is present, but it is not strong, as it only amounts to a difference in the mean of around 1 kpc ($\sim 15$ percent) between the stars in the wider environment and those in the cluster centre.
Qualitatively, the outcome of this exercise is not sensitive to the EDSD scale length prior: for example, if it is doubled to 3 kpc there is again a gradual rise in the mean distance estimate as the sky area sampled is focused more onto NGC 3603.
\section{Results}
\subsection{The proper motion of the core of NGC 3603}
\label{sec:core-pm}
\begin{figure}
\includegraphics[width=1.15\columnwidth]{ngc3603-accepted-core}
\centering
\caption{A zoom into a $5\times5$~arcmin$^2$ box around the centre of the cluster (it contains 69 stars). Because of the extreme confusion in the stellar core as imaged from the ground, no object lies within 12 arcsec of the centre, which is marked with a plus symbol. The 29 stars within a radius of 1 arcmin that were used to determine the mean cluster proper motion are encircled. All objects are coloured according to extinction.
}
\label{fig:zoom}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.95\columnwidth]{ngc3603-relpms}
\includegraphics[width=0.95\columnwidth]{ngc3603-pmcomp}
\caption{The proper motions of the retained sample of objects relative to the core of NGC 3603 in mas yr$^{-1}$. In the upper panel, the 29 stars within 1 arcmin of the cluster centre used to compute the cluster PM are shown in green. Stars in grey have relative proper motions of magnitude less than 0.3 mas yr$^{-1}$ -- they can be seen as commensurate with those typifying the core region. 62\% of the sample (180 stars) fall in this group. The 9 objects in pink are candidate ejections with relative PMs exceeding 0.6 mas yr$^{-1}$ and trajectories passing within 1 arcmin of the centre of NGC 3603. All other stars are in blue. The lower panel is the histogram of the longitude component of the relative PM for all objects (in blue). Notice the double-peaked character of the longitude distribution and the negative shoulder showing between $-0.4$ and $-0.8$ mas yr$^{-1}$. For comparison the distributions obtained on limiting the selection to within 5 arcmin (grey) and 1 arcmin of cluster centre (green) are superposed: the shoulder and double-peaking weaken and disappear.
}
\label{fig:relpms}
\end{center}
\end{figure}
There are 30 cross-matched stars with astrometry, accepted into the sample, that lie within 1 arcminute of our fiducial position (see Figure~\ref{fig:zoom}). But none are closer to the nominal centre than 0.2 arcmin, as a consequence of the severe source confusion in the brilliant cluster core typically seen in ground-based images. After one evident outlying object with a high relative proper motion is removed from the subset, the remaining 29 stars yield a mean PM in Galactic co-ordinates of $\mu_{\ell,*} = -5.881 \pm 0.151$ mas yr$^{-1}$, and $\mu_b = -0.209 \pm 0.151$ mas yr$^{-1}$. A check against the 6 Gaia DR2 objects with no excess astrometric noise that lie closer to the cluster centre shows that they are consistent with this measure.
At a distance of $\sim$ 7~kpc, the implied mean transverse motion in the $b$ coordinate is equivalent to $\sim$7 km s$^{-1}$, directed below the plane. This fits with what we would expect if this massive young cluster has negligible vertical motion: the observed $\mu_b$ should then be equal to and opposite in sense to the Sun's motion -- which has indeed been found to be $+$7 km s$^{-1}$ to within one significant figure \citep[see e.g.][]{Schoenrich2010}.
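The conversion used here is the standard one, $v_t = 4.74\,\mu\,D$ with $\mu$ in mas yr$^{-1}$ and $D$ in kpc, where the factor 4.74 km s$^{-1}$ is one astronomical unit per year:

```python
# Tangential speed from a proper motion: v_t = 4.74047 * mu[mas/yr] * D[kpc],
# where 4.74047 km/s is one astronomical unit per year.
def tangential_speed_kms(mu_mas_yr, distance_kpc):
    return 4.74047 * mu_mas_yr * distance_kpc

# The core |mu_b| of 0.209 mas/yr at 7 kpc:
print(round(tangential_speed_kms(0.209, 7.0), 1))  # ~6.9 km/s, the ~7 km/s quoted
```

The same relation gives $\sim$20 km s$^{-1}$ for a proper motion of 0.6 mas yr$^{-1}$ at 7 kpc, the ejection threshold used later in the paper.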
It is also encouraging that the dispersion around the mean we obtain in both coordinates, of
0.151$\pm$0.020\footnote{The uncertainty is obtained by dividing the standard deviation by $\sqrt{2N-2}$ where $N=29$.} mas yr$^{-1}$, appears consistent with the {\sl Hubble Space Telescope} (HST) results of \cite{Rochau2010} and \cite{Pang2013} based on somewhat fainter intermediate mass stars. \cite{Rochau2010} obtained an intrinsic 1-D dispersion from 234 stars inside a radius of $\sim0.25$ arcmin of 0.141$\pm$0.027 mas yr$^{-1}$. \cite{Pang2013} considered the same region and a similar scale of sample and measured 0.146$\pm$0.016 and 0.198$\pm$0.016 mas yr$^{-1}$ in two orthogonal directions.
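The quoted uncertainty on the dispersion follows the Gaussian approximation for the standard error of a sample standard deviation, as stated in the footnote; as a quick check:

```python
import math

# Standard error on a sample standard deviation (Gaussian approximation),
# as used in the footnote: sigma_err = s / sqrt(2N - 2).
def dispersion_error(sample_std, n):
    return sample_std / math.sqrt(2 * n - 2)

print(round(dispersion_error(0.151, 29), 3))  # ~0.020 mas/yr for the 29 core stars
```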
\subsection{Proper motions of the wider sample}
\label{sec:rel-pms}
Proper motions relative to the mean cluster value, $PM_r$, have been computed and are shown in the upper panel of Figure~\ref{fig:relpms}. There is evidently a tight clustering around the origin that is similar in dispersion to the 'core' objects. Setting an upper bound on the magnitude of the $PM_r$ of 0.3 mas yr$^{-1}$, there would be around 93 qualifying objects that lie within 10 arcmin of cluster centre -- with another 87 with similarly low $PM_r$ scattered across the wider field.
An important consideration here is whether a small relative proper motion implies proximity and/or dynamical association with NGC 3603. For some, especially those at modest angular displacements from the cluster, this is likely and credible.
But we should not forget the other option that some candidate objects are foreground or background and merely tracing Galactic rotation: a distance change of $\sim$1 kpc induces a longitudinal proper motion change of just $\sim$0.2 mas yr$^{-1}$ (depending somewhat on choice of rotation law). There is evidence this could be happening, as shown in the lower panel of Figure~\ref{fig:relpms}: the distribution in the relative PM longitude component verges on double-peaked and shows a negative shoulder that may signal the mixing in of a higher-PM foreground population.
When the sample is limited to those within a few arcminutes of the core, the skew and tendency towards double peaking reduces and the shoulder disappears.
\begin{figure}
\begin{center}
\includegraphics[width=0.95\columnwidth]{ngc3603-imppar}
\caption{
The distribution of computed impact parameters (projected nearest approach to cluster centre) up to 15 arcmin. The bars are overlaid to pick out different ranges in magnitude of proper motion as follows: blue admits all values; grey signifies objects with $|PM_r| > 0.2$ mas yr$^{-1}$; magenta, $|PM_r| > 0.4$ mas yr$^{-1}$, pink, $|PM_r| > 0.6$ mas yr$^{-1}$.
}
\label{fig:imppar}
\end{center}
\end{figure}
\subsection{Candidate ejections}
\label{sec:ejections}
\begin{table*}
\caption{Proper motions and related quantities for stars meeting the first two criteria for ejection from the centre of NGC 3603. Column 1 specifies the MS-II catalogue number. The Gaia DR2 proper motions appear in columns 2 and 3. Column 4 is the angular distance in arcminutes from the fiducial position $\ell = 291^{\circ}.617$, $b = -0^{\circ}.523$. Columns 5 and 6 give the proper motion relative to the core-region mean, $PM_r$, in Galactic coordinates. Column 7 is magnitude of the relative proper motion, while column 8 gives the trajectory impact parameter (in arcminutes). The (distance-independent) travel time from the fiducial position is in column 9.}
{\centering
\begin{tabular}{lccrrrcccl}
\hline
MS-II & \multicolumn{2}{c}{Proper motion} & Radius & \multicolumn{2}{c}{Relative PM, $PM_r$} & $|PM_r|$ & $IP$ & Travel time & Note \\
\# & \multicolumn{2}{c}{$\mu_{\alpha,*}$, $\mu_{\delta}$ mas/yr} & arcmin & \multicolumn{2}{c}{$\Delta\mu_{\ell,*}, \Delta\mu_{b}$ mas/yr} & mas/yr & arcmin & Myr & \\
\hline
13362 & $-$6.400$\pm$0.031 & 1.880$\pm$0.030 & 9.68 & $-$0.766$\pm$0.043 & $-$0.362$\pm$0.040 & 0.847$\pm$0.021 & 0.27$\pm$0.62 & 0.69 & \\
13436 & $-$6.167$\pm$0.053 & 1.315$\pm$0.044 & 12.09 & $-$0.343$\pm$0.061 & $-$0.803$\pm$0.052 & 0.874$\pm$0.027 & 0.28$\pm$1.05 & 0.83 & \\
13519 & $-$6.072$\pm$0.133 & 2.660$\pm$0.116 & 0.73 & $-$0.743$\pm$0.145 & 0.484$\pm$0.108 & 0.886$\pm$0.068 & 0.13$\pm$0.14 & 0.05 & \\
13804 & $-$5.444$\pm$0.132 & 0.871$\pm$0.114 & 17.03 & 0.492$\pm$0.139 & $-$0.955$\pm$0.113 & 1.074$\pm$0.059 & 0.57$\pm$2.77 & 0.95 & \\
13860 & $-$4.677$\pm$0.080 & 2.105$\pm$0.075 & 12.67 & 0.759$\pm$0.083 & 0.473$\pm$0.083 & 0.894$\pm$0.042 & 0.02$\pm$1.61 & 0.85 & \\
13908 & $-$4.712$\pm$0.062 & 0.990$\pm$0.056 & 15.58 & 1.130$\pm$0.069 & $-$0.578$\pm$0.062 & 1.270$\pm$0.034 & 0.79$\pm$1.06 & 0.74 & \\
13918 & $-$4.584$\pm$0.060 & 0.646$\pm$0.053 & 17.56 & 1.375$\pm$0.067 & $-$0.852$\pm$0.059 & 1.618$\pm$0.032 & 0.30$\pm$0.92 & 0.65 & \\
13931 & $-$4.050$\pm$0.060 & 1.459$\pm$0.054 & 15.81 & 1.577$\pm$0.067 & 0.099$\pm$0.061 & 1.580$\pm$0.033 & 0.25$\pm$0.65 & 0.60 & a \\
14172 & $-$4.924$\pm$0.035 & 1.925$\pm$0.036 & 30.59 & 0.593$\pm$0.043 & 0.216$\pm$0.047 & 0.631$\pm$0.022 & 0.02$\pm$2.87 & 2.91 & \\
\hline
\end{tabular}
a: this is 2MASS~J11171292-6120085 \citep{Gvaramadze2013}
}
\label{tab:ejections}
\end{table*}
\begin{table*}
\caption{Proper motions and related quantities for stars with impact parameters between 1 and 6 arcmin, and high relative PM ($> 0.6$ mas yr$^{-1}$). The columns are as in Table~\ref{tab:ejections}. Note that a timescale is only given in the final column if the uncertainty on the impact parameter leaves open the possibility that the star may have originated inside a radius of 1 arcmin around the cluster centre.}
{\centering
\begin{tabular}{lccrrrcccl}
\hline
MS-II & \multicolumn{2}{c}{Proper motion} & Radius & \multicolumn{2}{c}{Relative PM, $PM_r$} & $|PM_r|$ & $IP$ & Travel time & Note \\
\# & \multicolumn{2}{c}{$\mu_{\alpha,*}$, $\mu_{\delta}$ mas/yr} & arcmin & \multicolumn{2}{c}{$\Delta\mu_{\ell,*}, \Delta\mu_{b}$ mas/yr} & mas/yr & arcmin & Myr & \\
\hline
13280 & $-$6.309$\pm$0.072 & 3.156$\pm$0.071 & 16.09 & $-$1.144$\pm$0.076 & 0.860$\pm$0.078 & 1.431$\pm$0.038 & 2.06$\pm$1.20 & 0.67 & \\
13377 & $-$5.636$\pm$0.042 & 2.736$\pm$0.036 & 24.36 & $-$0.364$\pm$0.050 & 0.713$\pm$0.046 & 0.801$\pm$0.023 & 3.48$\pm$1.97 & & \\
13452 & $-$5.359$\pm$0.036 & 2.870$\pm$0.033 & 43.88 & $-$0.155$\pm$0.045 & 0.939$\pm$0.044 & 0.951$\pm$0.022 & 3.25$\pm$2.38 & 2.77 & \\
13573 & $-$5.384$\pm$0.119 & 1.324$\pm$0.095 & 2.60 & 0.383$\pm$0.126 & $-$0.511$\pm$0.094 & 0.639$\pm$0.053 & 1.81$\pm$0.45 & & \\
13708 & $-$6.518$\pm$0.130 & 1.093$\pm$0.120 & 2.70 & $-$0.590$\pm$0.129 & $-$1.138$\pm$0.127 & 1.282$\pm$0.064 & 2.59$\pm$0.10 & & b \\
13766 & $-$6.460$\pm$0.068 & 1.986$\pm$0.065 & 5.52 & $-$0.860$\pm$0.072 & $-$0.285$\pm$0.072 & 0.906$\pm$0.036 & 3.12$\pm$0.46 & & \\
14359 & $-$6.377$\pm$0.044 & 1.890$\pm$0.040 & 40.68 & $-$0.748$\pm$0.051 & $-$0.344$\pm$0.050 & 0.823$\pm$0.026 & 3.69$\pm$3.28 & & c \\
\hline
\end{tabular}
b: this star narrowly missed exclusion on account of large parallax -- see text\\
c: the relative proper motion of this star is directed {\em towards} the cluster
}
\label{tab:near-fast}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[width=1.35\columnwidth]{ngc3603-PM2}
\caption{
The on-sky distribution of the O-star candidates, with relative proper motion magnitude exceeding 0.6 mas/yr. Objects coloured in red are those with impact parameters that are $<$ 1 arcmin (Table~\ref{tab:ejections}). The two objects in blue are from Table~\ref{tab:near-fast}: they have impact parameter errors that allow a trajectory consistent with ejection at reduced probability.
}
\label{fig:pms-onsky}
\end{center}
\end{figure*}
For the present purpose of initial classification, we note that at a distance of 7 kpc, a proper motion magnitude of 0.6 mas yr$^{-1}$ corresponds to an on-sky, or tangential, speed of 20 km~s$^{-1}$. We will regard any object that exceeds this threshold as meeting the first of two criteria required of candidate ejections from NGC 3603. A total of 38 objects qualify in this regard. The second criterion to be satisfied is that the trajectory of motion needs to have an impact parameter (closest distance of approach to NGC 3603 centre) under 1 arcmin, and the sense of motion needs to be {\em away} from the cluster.
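For reference, this conversion follows from the standard relation between proper motion and tangential velocity:
\[
v_{t} \simeq 4.74 \left(\frac{\mu}{\mathrm{mas\,yr^{-1}}}\right) \left(\frac{d}{\mathrm{kpc}}\right)\,\mathrm{km\,s^{-1}} \simeq 4.74 \times 0.6 \times 7 \approx 20\;\mathrm{km\,s^{-1}}.
\]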
Figure~\ref{fig:imppar} shows how the stars with impact parameter less than 15 arcmin break down according to magnitude of proper motion. Among the 199 stars plotted there is a clear peaking of the impact parameter distribution towards smaller values (irrespective of relative PM magnitude). This pattern persists as the total list is reduced by cutting on an increasing minimum relative PM magnitude. In the most extreme group, with $|PM_r| > 0.6$ mas yr$^{-1}$, 9 out of the 38 stars have an impact parameter of under 1 arcmin. These are our candidate ejections meeting the first two criteria. They are coloured lighter pink in Figure~\ref{fig:relpms}, while their on-sky distribution is shown in Figure~\ref{fig:pms-onsky}. Important properties for this set of objects are given in Table~\ref{tab:ejections}.
Of the 9 candidates, 1 object (VPHAS-OB1-13519, or \#13519 in Table~\ref{tab:ejections}) is a straightforward inclusion: its $|PM_r| = 0.886\pm0.068$ mas~yr$^{-1}$ is well above the threshold for the first criterion, while the object is located within 1 arcmin of cluster centre, and so must have an impact parameter $<1$~arcmin. The direction of its $PM_r$ is almost opposite to that which it would have if merely a foreground contaminant. \#13519 could be a very early-stage ejection, exiting the central region at a tangential speed of $\sim30$ km s$^{-1}$ (at 7 kpc).
More interest attaches to the group of 7 separated from the cluster centre by between 9.68 and 17.56 arcmin. Representative values for the impact parameter and error in this group are respectively $\sim$0.3 and $\sim$1~arcmin (see Table~\ref{tab:ejections}) -- implying that our second criterion may actually scoop up stars originating from within a radius of $\sim1.3$ arcmin. At the shortest radius in this group, the probability of a star having the right direction of travel to satisfy the second criterion (if all directions of travel are equally likely) is 0.043. The probability that it also has $|PM_r| > 0.6$ is empirically $38/288 \simeq 0.13$. Hence the total combined chance is $\sim$0.006. For the most far-flung member of the group, at almost twice the radius, the probability essentially halves. This general level of individual probability hints that perhaps one of the seven objects could be a false positive (given the total population drawn from).
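The arithmetic behind these numbers can be sketched as follows (angles in degrees; this assumes the favourable directions of travel are those whose trajectories pass within the effective $\sim$1.3 arcmin limit of a star at radius $r$, in arcmin):
\[
p_{\rm dir}(r) \simeq \frac{2\arcsin(1.3/r)}{360^{\circ}},\qquad
p \simeq p_{\rm dir}(9.68) \times \frac{38}{288} \simeq 0.043 \times 0.13 \simeq 0.006.
\]
In the small-angle regime $p_{\rm dir} \propto 1/r$, which is why the probability essentially halves at twice the radius.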
One of the group of 7 has already been identified as a likely ejection by \cite{Gvaramadze2013}: it is 2MASS~J11171292-6120085, an O6V star with an associated suitably-offset bow shock. In MS-II it is VPHAS-OB1-13931 (\#13931 in Table~\ref{tab:ejections}). It is located nearly 16 arcmin from the centre of NGC 3603 and its relative PM is 1.58 mas yr$^{-1}$ (or $\sim$52 km s$^{-1}$ at 7 kpc). It is the object almost directly to the left of cluster centre in Figure~\ref{fig:pms-onsky}. However, its proposed partner object, WR 42e, displaced to the opposite side of the core of NGC 3603, is not supported as an ejection: its $|PM_r|$ is small at 0.125$\pm$0.034 mas/yr. Similarly, none of the three additional ejection candidates, RFS 1, 2 and 8, put forward by \cite{RomanLopes2016}, has a significant relative proper motion (the largest is $0.178\pm 0.020$ mas yr$^{-1}$, obtained for RFS 1). In the case of RFS 8, some 29 arcmin away from the cluster centre, runaway status is entirely ruled out as the implied timescale since ejection would have to exceed 10 Myr. RFS 1 and 2 are respectively only 0.7 and 1.0 arcmin away from cluster centre, leaving open the possibility of ejection should spectroscopic observations reveal significant relative radial velocity.
The on-sky pattern traced by the group of 7 is strikingly a half ring to the south of the core region. The one object, \#14172, sitting outside this zone also distinguishes itself in Table~\ref{tab:ejections} as the only candidate with an estimated time of flight appreciably larger than 1 Myr. Otherwise, the clear norm is a flight time of under 1 Myr. In section~\ref{sec:discussion} we will consider what this tidy pattern of ejections may signify.
\subsection{Other high relative proper motion objects in the region}
\label{sec:other-highpm}
Figure~\ref{fig:imppar} shows that there are high relative PM stars in the sample with trajectories that do not pass close to the cluster centre. These will be the stars shown in light pink and magenta, at impact parameters exceeding 1 arcmin. In particular there is a group of fast ($|PM_r| > 0.6$ mas yr$^{-1}$) 'near misses' with impact parameters up to 4 arcmin, which are worth brief consideration (their properties are in Table~\ref{tab:near-fast}). The uncertainties on the impact parameters of 2 of them are large enough that it cannot be ruled out that they may have been ejected from within 1 arcmin of the cluster centre. For this reason they have been included in Figure~\ref{fig:pms-onsky} and coloured in blue. One of the two, \#13280, may continue the ring of ejections discussed above in section~\ref{sec:ejections}. The other, \#13452, represents a contrast in that it is much further away from NGC 3603 and would have to have been ejected about 3 million years ago if indeed it was ejected.
The status of the remaining 5 objects in Table~\ref{tab:near-fast}, which appear never to have been in the cluster core, is less certain. There is a case to be made that one of them, \#13708, is in fact a foreground star in that its Gaia DR2 parallax is $0.3297\pm0.0731$ mas, a $4.5\sigma$ measurement. It has only remained in the sample because the $5\sigma$ limit is not breached -- indeed it is the object with the highest measured parallax retained and it is responsible for the lowest occupied histogram bin in Figure~\ref{fig:edsd-distance}.
We can try a combination of high relative PM {\em and} high impact parameter as the means to identify 'contaminant' objects. We view it as improbable that higher relative PM stars ($|PM_r| > 0.42$ mas yr$^{-1}$, or $> 2\sigma$ in the 2-D dispersion), with impact parameters larger than some minimum value, are associated with the cluster. Anticipating the result of the next section, we look for impact parameters exceeding 6 arcmin. Such a cut pulls out 30 objects. It is interesting to note that 20 in this group have a significantly negative longitudinal relative PM ($\Delta\mu_{\ell,*} < -0.4$ mas yr$^{-1}$) -- a property consistent with being in the foreground to the cluster. Indeed, these objects dominate the negative 'shoulder' seen in Figure~\ref{fig:relpms} (lower panel), and all are at an angular separation of at least 20 arcmin from the centre of NGC 3603. The remaining 10 stars are -- with one exception only -- over $\sim$30 arcmin distant. The exception is \#13390, which is at a radius of 9.1 arcmin, with an estimated impact parameter of 6.5 arcmin: it is an IR-bright object ($K = 9.71$) that also happens to be highly obscured ($A_0 = 9.84$ mag). Potentially, it is in the background to the cluster.
\section{The O star hinterland}
\label{sec:size}
The size of the region on the sky that should be associated with NGC 3603 is challenging to define. Based on $K_s$ photometry, \cite{Nurnberger2002} constructed a stellar surface density map for a range of limiting magnitudes and concluded that at a radius of 2.5$\pm$0.25 arcmin a mean field takes over from a declining cluster density profile. A King model was fit to the data, in which the core and tidal radii were respectively 23 and 1300 arcsec. This has been revisited by \cite{Harayama2008}, using NIR adaptive-optics data better able to resolve the core. They revised the core radius down to 4.8 arcsec, and proposed a tidal radius of 1260 arcsec (21 arcmin) on general dynamical grounds. The latter is much the same as the \cite{Nurnberger2002} estimate. Importantly, \cite{Harayama2008} elaborated the evidence in favour of significant mass segregation such that the bright core contains a relative concentration of the most massive (O) stars -- a result that was later reinforced by \cite{Pang2013}.
\begin{figure*}
\begin{center}
\includegraphics[width=0.95\columnwidth]{ngc3603-Orho-err}
\includegraphics[width=0.95\columnwidth]{ngc3603-inner-trend}
\caption{On a log-log scale, the upper panel shows the density of selected O stars as a function of radius out to 20 arcmin (red circles). The error bars are Poissonian. The O star count inside 0.2 arcmin is equated to the number visible in Figure 2 of Melena et al. (2008): it is likely to be a lower limit. The blue data points trace the King profile obtained by Harayama et al. (2008), while the cyan points show the same profile computed for a very large tidal radius. Both model curves have been rescaled to match the O star density in the 1.0--2.0 arcmin bin. The lower panel shows the radial extinction distribution for comparison, with the data points coloured according to the MS-II effective temperature estimate.
}
\label{fig:density}
\end{center}
\end{figure*}
Because we have a honed selection of O stars across a wide field, we have the opportunity to review the radial density profile in the context of the most massive stars, for which the enveloping 'field' density would most likely be low.
We have determined the density profile out to a maximum radius of 20 arcmin, supplementing the Gaia DR2 cross-matched list with 17 objects in our initial selection that so far lack matches -- this adds, as a crude average, one object per bin. We leave out \#13390 and \#13708 (see section~\ref{sec:other-highpm}). The resultant profile is shown in the upper panel of Figure~\ref{fig:density}.
As a comparison, the King model proposed by \cite{Harayama2008} is shown rescaled to coincide with the measured density in the radius range 1.0--2.0 arcmin.
Two features stand out. First, the model rolls off a little quickly relative to the observed O stars, possibly indicating too short a tidal radius. If instead the tidal radius is set to be very large (cyan points in the plot), the match is better -- though perhaps deceptively so, given that some contaminating objects almost certainly remain present. Second, over the radius range 0.2 to 1 arcmin, the King-model trend is relatively underpopulated. Given that the count of O stars inside 0.2 arcmin is most likely an underestimate \citep[see][]{Melena2008}, this could be a reflection of the mass segregation described by both \cite{Harayama2008} and \cite{Pang2013} that is most evident inside a radius of 0.5--1 arcmin.
The King model superposed in Figure~\ref{fig:density} works quite well with minimal contamination out to a radius of $\sim$5 arcmin. At larger radii it becomes necessary to consider a sliding scale of options, ranging from a lot of presumed field contamination combined with the fall-off predicted by Harayama et al.'s (2008) King model, through to a little contamination on top of a distribution subject to a larger tidal limit. If the tidal radius is $\sim 21$ arcmin, then the O star count predicted by the rescaled King model, outside a radius of 0.2 arcmin, is $\sim$138 -- we count 100 to 5 arcmin, 132 to 10 arcmin, and 166 to 20 arcmin.
The lower panel of Figure~\ref{fig:density} provides some corroborating insight into what is going on -- essentially, the dependence of O-star extinction on radius is orderly and in keeping with the earlier mapping to $\sim$4 arcmin of colour excess by \cite{Sung2004}. But from around 6 arcmin, outwards, the order begins to break down and a number of O stars begin to present with lower extinction relative to the trend seen at shorter radii. This could well be foreground contamination revealing itself.
We conclude that the 'halo' of NGC 3603 may very well extend beyond 5 arcmin, and that essentially all the O stars drawn from the MS-II catalogue inside this radius are associated objects. Out of the 100 objects inside 5 arcmin, just 15 were mentioned in the literature prior to MS-II (their names are matched to the MS-II and Gaia DR2 identifiers in Table~\ref{tab:names}). The lower panel of Figure~\ref{fig:density} demonstrates that most are expected to have effective temperatures of $\sim$35 kK or more -- implying masses exceeding $\sim$20~M$_{\odot}$ \citep[see][]{Ekstrom2012}.
\section{Discussion}
\label{sec:discussion}
\subsection{The pattern of O-star ejections}
Our selection and survey of O-star proper motions relative to NGC 3603 has revealed 9 credible ejections (Table~\ref{tab:ejections}) and 2 further, less certain, examples (Table~\ref{tab:near-fast}). The scale of the measured proper motions for all but two of the candidates indicates a time since ejection of under a million years. This is entirely congruent with findings that the bright compact centre of the cluster, contained inside a radius of 1 arcmin, is no older than 1--2 Myr \citep[e.g.][]{Sung2004,Kudryavtseva2012}. Indeed our result endorses this young age, and carries no dependence on the still uncertain distance to NGC 3603.
The two objects with trace-back times closer to 3 Myr (\#14172 and, with less precision, \#13452) may be evidence of an earlier phase of star-forming activity. Or they might not survive further scrutiny. We note that their extinctions are relatively low at respectively $A_0 = 4.31$ and 4.40 -- to be compared with $5 < A_0 < 8$ for all the ejections in the last 1 Myr. To pursue this question further requires expanding the search area to pick up more fast-moving ejections (\#13452 is right on the edge of our $1.5\times1.5$ sq.deg region), and/or follow-up spectroscopy to better characterize these objects.
The most remarkable feature of the on-sky arrangement of the candidate ejections is that the 7 most convincing are arranged in a semi-oval emphasising the south side of the cluster core (Figure~\ref{fig:pms-onsky}). At most, one of these might be a false positive. An eighth (\#13280) helps to spread the pattern a bit more into the north but it is more marginal for inclusion since its impact parameter is a factor of a few larger. Statistically there are no grounds to suspect that a bias in the sample as a whole, favouring the south over the north, shapes this -- the main difference is that north of cluster centre the density of objects drops away more quickly than towards the south. There is no reason to suspect a north-south extinction bias either \citep[see the extinction maps of][]{Marshall2006,PlanckXI}. The simplest option is to accept, provisionally, that the pattern reflects a reality beyond mere coincidence.
The pattern seen is in striking contrast to the also-orderly pattern of ejections from Westerlund 2 \citep{Drew2018}: in that case a distinctly linear pattern was present with ejections located on either side of the cluster. It was argued that sub-cluster merging to produce present-day Westerlund 2 could be responsible for the preferred axis of the ejections. Could something similar be involved here?
\cite{Fukui2014} have presented detailed CO observations in the vicinity of NGC 3603, and have deduced from them evidence of a cloud-cloud collision that took place around a million years ago. And they made a link between NGC 3603 and Westerlund 2 as two examples of 'super star clusters' -- suggesting that this status owes something to cloud-cloud collisions. In this context we note that the most massive molecular cloud component \citep[reported by][]{Fukui2014}, implicated in the case of NGC 3603, is centred on $\ell = 291^{\circ}.58$, $b = -0^{\circ}.42$. This is 5--6 arcmin north of the core of NGC 3603 and, in the plane of the sky, on the opposite side to the semi-circle of ejections. If this cloud carried most of the momentum in the putative collision, we might anticipate that most of the ejections would appear on the same side (contrary to what is seen).
From a theoretical perspective, the formation of NGC 3603 has been discussed in terms of a monolithic process -- either as a single intense star-forming event \citep{Banerjee2014} or via the prompt assembly of a compact group of sub-clusters \citep{Banerjee2015}.
The former is favoured by \cite{Banerjee2015} and resembles the early assembly concept discussed by \cite{Fujii2013}. Dynamical stellar ejection is the only relevant ejection process here, for the reason that the youth of NGC 3603 should mean no supernovae have exploded yet. In this context, there need be no particular expectation of a pattern of ejections: random ejection directions, on a timescale comparable with the formation event, would seem plausible. The times since ejection for all objects making up the ring fall within a quite narrow range, from 0.60 to 0.95 Myr. Expressed as a mean and standard deviation, the ring could be linked to an event 0.75$\pm$0.11 Myr ago. However, the propagated errors on the individual timescales do not exceed $\sim$0.05 Myr (if we assume the point of origin within the cluster for each ejection is unknown to within 0.2 arcmin). Hence, a modest spread in time of ejection of up to $\sim$200,000 years appears more likely. To make progress, radial velocities for the candidate ejections would be a good next step as this would build a better view of the full three-dimensional geometry. It is possible that the projection into two dimensions exaggerates the degree of coherent spatial organisation present.
A last point of interest is that Figure~\ref{fig:pms-onsky} and the data in Tables~\ref{tab:ejections} and \ref{tab:near-fast} indicate that there are potentially two 'pairings' of ejections: \#13280 may pair with either \#13908 or \#13918, while \#13362 and \#13860 are moving in close to opposite directions. This leaves 3 (or 4) of the stars in the ring of ejections without evident partners -- one of these is \#13931, for which a partner has been claimed \citep{RomanLopes2016}, but the proposed partner has only a small proper motion.
\subsection{The significance of the O star halo}
We noted in the introduction that much of the work to date on NGC 3603 has focused on the inner $\sim$arcminute around the brilliant cluster core. In this study, encouraged by the initial findings reported by MS-II, we have expanded the area examined to over a square degree. With the support of Gaia DR2 proper motions, the case has been built for an extensive hinterland of associated O stars that persists to at least a radius of 5 arcmin. The distribution is consistent with Harayama et al.'s (2008) preferred King model, for which a tidal radius of 21 arcmin was adopted. \cite{Harayama2008} pointed out that the tidal radius is hardly constrained at all and could be even larger. The clear implication is that a number of the O stars in our sample between $\sim$5 and $\sim$20 arcmin could be halo members too. It will take full space motions to clarify this.
It has been mooted before that NGC 3603 might be a Galactic counterpart to R136 in the 30 Doradus region of the LMC. In pursuit of this point, \cite{Melena2008} argued that R 136, the central cluster of 30 Doradus, contains between 1.1 and 2.4 times as many very high mass stars ($M_{{\rm bol}} < -10$) as found in the core of NGC 3603 -- that is, the scale factor is not so large.
We now make essentially the same comparison, basing it on the wider environment rather than the core region. \cite{Evans2011} list 100 stars with $K_S < 15.5$ ($M_K < -3$ or $M_{{\rm bol}} < -6.8$) in the annulus between 0.2 to 1 arcmin, reaching out into R 136's 'halo'. This can be compared with the count here to the same $M_K$ limit, in the equivalent angular range, after rescaling for the much shorter distance to NGC 3603 of $7\pm 1$ kpc (with R136/30 Dor at a distance of 50~kpc, the rescale is 7$\times$, giving an angular radius range of 1.4 to 7 arcmin). The analogous count is $\sim$40, implying a scale-down from R136/30 Dor by about a factor of $\sim$2.5. This is at the upper end of the \cite{Melena2008} range, and is just about compatible with it. But we have to differ with their conclusion \citep[and that of][]{Moffat2002} that there is "no surrounding massive halo of cluster stars". There evidently is. The difference between then and now is the availability of calibrated wide field multi-colour photometry.
\section{Conclusions}
\label{sec:conclusions}
The crossmatch we have carried out between 288 high-purity O-star candidates from MS-II, located in the hinterland of the massive young cluster NGC 3603, and the Gaia DR2 release has had two main outcomes.
1. Our appraisal of the relative proper motions has revealed up to 11 candidate O star ejections. Nine of these have been ejected within the last one million years. Indeed the timescale spread is limited to 0.6--0.95 Myr for eight of them. This lends clear support and an interesting datum to earlier photometric studies that have argued the central cluster is no more than 1--2 Myr old \citep{Sung2004, Melena2008, Kudryavtseva2012}. The on-sky pattern of these ejections comes as a surprise in that 7 are arranged in a partial ring of radii spanning 9--18 arcmin, favouring the south. Radial velocities are needed to begin to add the third dimension. It is hard to see how this pattern would arise from a cloud-cloud collision, given what we know about the placement of molecular clouds in the area \citep{Fukui2014}. In this respect NGC 3603 is different from Westerlund 2, where the O-star ejections are aligned with the axis of a putative prior cloud-cloud collision \citep{Drew2018}. It seems more likely that the ejections from NGC 3603 have arisen from a first cluster core collapse \citep[see e.g.][]{Fujii2013, Banerjee2014}.
2. We have put forward evidence of a notable halo of O stars around the central cluster of NGC 3603, reaching out to a radius of at least 5 arcmin. At larger radii, some non-member contamination is very likely. We have counted of order 100 O stars in the halo based on a comparison with the King model obtained by \cite{Harayama2008}. Earlier work doubted that such a halo exists. Now that we have detected it, we can estimate that the HD~97950 cluster at the heart of NGC 3603 is part of a larger entity that parallels the core-halo structure of the R136/30 Dor region in the Large Magellanic Cloud, albeit scaled down to around 40\% of the total O-star population.
Appendix B (online only) provides the derived relative proper motion data forming the basis of this study, as Table~\ref{tab:crossref}.
\section*{Acknowledgements}
Use has been made of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This work has also used data from the European Space Agency mission Gaia (https://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC, https://www.cosmos.esa.int/web/gaia/dpac/consortium). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.
Much of the analysis presented has been carried out via TopCat \citep{Taylor2005}.
JED and MM acknowledge the support of a research grant funded by the Science, Technology and Facilities Council of the UK (STFC, ref. ST/M001008/1). NJW acknowledges receipt of an STFC Ernest Rutherford Fellowship (ref. ST/M005569/1). We thank the referee of this paper for helpful comments.
\bibliographystyle{mnras}
\section{Introduction}
\label{sec:intro}
As robots are deployed to new environments with greater levels of
autonomy, inevitable software defects may lead to unintentional and
potentially catastrophic outcomes.
It is now more important than ever to systematically test robotic
systems as extensively as possible to identify and eliminate defects
before those systems are deployed to the field.
Prior studies have suggested simulation-based testing as a promising
technique for revealing defects that is vastly cheaper, safer, and
more scalable than field testing~\cite{SotiropoulosNavigationBugs2017,TimperleyArdu2018,Robert2020,Gladisch2019,AfzalICST20}.
While simulation suffers from certain limitations and only provides
an abstraction of the physical world~\cite{AfzalICST21},
it allows systems to be systematically tested under a wide array of
environments, conditions, and scenarios that would otherwise be
difficult or expensive to replicate in the field.
A crucial aspect of simulation-based testing
is the generation of interesting, potentially fault-revealing scenarios
that expose the system to corner cases and undertested inputs.
We define a scenario as the description of a scene (i.e., the environment)
and an accompanying mission that the system under test (SUT)
should perform in the specified scene.
Manually generating such scenes and missions can be time-consuming and difficult~\cite{AfzalICST21}.
\begin{figure}
\centering
\begin{subfigure}{0.9\columnwidth}
\small
\begin{lstlisting}[language=scenic]
ego = Car
spot = OrientedPoint on visible curb
badAngle = Uniform(1.0, -1.0) * Range(10, 20) deg
parkedCar = Car left of (spot offset by -0.5 @ 0), facing badAngle relative to roadDirection
\end{lstlisting}
\caption{\small A scenario description, written in the Scenic language, detailing a scene that contains a badly parked car.}
\label{fig:scenic-scenario}
\vspace{2mm}
\end{subfigure}
\begin{subfigure}{0.9\columnwidth}
\centering
\includegraphics[width=0.9\columnwidth]{figures/badlyParked.jpg}
\caption{\small A scene that was generated by Scenic according to the scenario above using
the GTA V engine~\cite{scenic}.}
\label{fig:scenic-scene}
\end{subfigure}
\caption{\small An exemplary Scenic scenario, and the generated
simulation scene.}
\label{fig:scenic}
\end{figure}
In recent years, researchers have proposed tools and domain-specific
languages (DSLs) to facilitate the construction of testing
scenarios~\cite{scenic,paracosm,kluck2018}.
One of the most prominent such DSLs is Scenic~\cite{scenic},
a language designed for creating simulation scenarios for autonomous vehicles.
Using Scenic, users can describe a scenario
of interest for the SUT, which is automatically parsed by the Scenic
tool to generate a plausible scene and mission that satisfy the user-specified
constraints of that scenario. The generated scene
and mission are then run in a supported simulator to execute
the test. \Cref{fig:scenic} shows an
example scenario that is realized in the GTA V~\cite{gta} simulator.
Although Scenic provides a powerful language and tool that
simplifies the process of creating and running simulated test scenarios,
it only supports domain-specific simulators in the autonomous vehicle
sector, and is not compatible with Gazebo, the most popular general-purpose
robotics simulator~\cite{gazebo}.
Gazebo is commonly used for simulation of systems developed using the
popular Robot Operating System (ROS) framework~\cite{ros}, and has been applied
to robots that span a wide variety of sectors such as unmanned aerial and ground vehicles,
agriculture robots, and industrial robots.
In this work, we introduce GzScenic, a tool that automatically
generates simulation scenes in Gazebo from a scenario provided in
Scenic's DSL.
Using GzScenic, developers can specify their desired
testing scenarios in Scenic's DSL without the need to manually
pre-define their models in Scenic, and automatically generate
complex scenes that satisfy the constraints of their scenario.
GzScenic automatically transfers the generated scenes to Gazebo
without the need for manual translation.
Furthermore, to support test automation for mission-based robots,
GzScenic can synthesize mission items (e.g., waypoints, action
locations, the initial position of a robot) as part of a test scenario.
These mission items can be combined with a generated scene via a
developer-provided test harness to allow automated end-to-end
testing (e.g., by spawning the robot at a given initial location,
sending it a generated set of waypoints, and monitoring its progress).
The contributions of this paper are as follows:
\begin{itemize}
\item We introduce GzScenic, a tool that allows users of the popular
Gazebo simulator to describe
test scenarios in the high-level Scenic DSL and automatically generate
the scenes and missions for the test.
\item We provide an example of using GzScenic for the Fetch robot~\cite{fetch}.
\item We publicly release GzScenic's source code and the example
scenarios at \url{https://github.com/squaresLab/GzScenic}.
\end{itemize}
\section{Background}
\label{sec:background}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.9\textwidth]{figures/gzscenic-overview}
\caption{
\small An overview of GzScenic's internal process. Orange ovals represent input provided by the user, purple ovals are internal products, and green ovals are the produced outputs. Rectangles represent the three steps taken by GzScenic.}
\label{fig:overview}
\end{figure*}
\subsection{Scenic}
\label{sec:scenic}
In this section, we provide a high-level overview of the structure and important
features of Scenic~\cite{scenic}. We refer the reader to the original Scenic paper
for further details~\cite{scenic}.
Scenic is a domain-specific probabilistic programming language for
modeling the environments of robotic and cyberphysical systems such as
autonomous cars. A Scenic program (i.e., scenario)
defines a distribution over scenes, configurations of physical objects
and agents; sampling from this distribution yields concrete scenes which
can be simulated by the supported simulators. \Cref{fig:scenic}
presents an example Scenic scenario and a concrete scene produced from
that scenario using the GTA V engine.
Overall, Scenic accepts a pre-defined set of models\footnote{Referred
to as \texttt{Classes} in Scenic's documentation.} that define everything
specific to a particular simulator and SUT. For example, in \Cref{fig:scenic},
two instances of the \texttt{Car} model are created.
A portion of the pre-defined \texttt{Car} model is as follows:
\begin{lstlisting}[language=scenic]
class Car:
position: Point on road
heading: roadDirection at self.position
viewAngle: 80 deg
\end{lstlisting}
which specifies that the position of a \texttt{Car} is a point on
a region called \texttt{road} that is defined separately and represents the
roads in the GTA map. The car's heading is the same as the
\texttt{roadDirection} that is the nominal traffic direction at
a point on the road, and its \texttt{viewAngle} is 80 degrees.
To allow Scenic to parse scenarios and generate concrete scenes, a model
must be defined for each entity that can be represented within a given scene.
A concrete scene consists of a set of instantiated models, known as
objects, with concrete values as their properties.
Scenic automatically determines the spatial relationships between
objects in the scene such that they conform to the
specifications of the scenario and do not collide with each
other.\footnote{Users may override this behavior to allow collisions between objects.}
Scenic arranges objects in the scene by treating each object as a bounding rectangle
on a two-dimensional plane.
At the time of writing, Scenic is unable to arrange objects in three dimensions.
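For example, a pair of specifiers such as the following (the \texttt{Table} and \texttt{Chair} model classes here are illustrative user-defined models, not part of Scenic itself) is resolved by sampling positions and rejecting any sample in which the two bounding rectangles overlap:
\begin{lstlisting}[language=scenic]
table = Table at 2 @ 3
chair = Chair left of table by 0.5
\end{lstlisting}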
In addition to specifying spatial relationships between objects within a scene,
Scenic can model temporal aspects of scenarios.
For
example, in \Cref{fig:scenic},
we can not only specify where the badly parked car is located, but also how
it should behave over time (e.g., \enquote{pulls into the road as the ego car approaches}).
However, modeling dynamic scenarios requires a direct connection
between Scenic and the simulator, and more complex modeling of
active agents and their behaviors. Since defining these connections
and models is system- and domain-specific, we focus only on generating
static scenes for the rest of the paper.
At the time of writing,
Scenic is compatible with the
GTA V~\cite{gta}, CARLA~\cite{CARLA},
Webots~\cite{webots}, and LGSVL~\cite{lgsvl} simulators, which are used
primarily in the
autonomous vehicle sector,\footnote{A simple set of pre-defined models
for a Mars rover in Webots is also included in Scenic~\cite{scenic}.}
but it does not support Gazebo, the most popular
general-purpose simulator. Pre-defining Scenic models for a
general-purpose simulator that is commonly used in a wide range of
domains is nearly impossible since each domain requires its own
set of models. GzScenic allows the user to automatically generate
these models by providing a high-level description.
\subsection{Gazebo}
\label{sec:gazebo}
Gazebo is a popular, general-purpose robotics simulator~\cite{gazebo,IgnitionGazebo},
maintained by Open Robotics,
that has been used in a wide variety of domains and is the de facto simulation platform
used by ROS.
Running a Gazebo simulation requires several components~\cite{gazebo-components}.
First, a world description file
must be provided that describes all of the elements in a simulation,
including its objects, robots, sensors, and light sources. This file
typically has a \texttt{.world} extension
and uses the XML-based Simulation Description Format (SDFormat)~\cite{sdf}
to describe those elements.
Included within the world file are model instances, given by
\texttt{<model>} elements, which may be defined directly in the world
file, or, more commonly, included separately by external model files via
the \texttt{<include>} tag.
Defining separate model files allows a model to be easily reused among many
worlds. Gazebo model files also follow the SDFormat, and define
all of the components related to modeling an entity such
as joints, collisions, visuals, and plugins.
Included within the components of a model are its collision geometries,
given by \texttt{<collision>} tags, which are used by Gazebo for collision
checking.
These geometries can take on simple shapes such as a box, cylinder, or sphere, or they
can include more complex shapes specified by 3D mesh files, which
may be given in one of three supported formats (STL, Collada, or OBJ),
with Collada and OBJ being the preferred formats.
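As an illustration, the geometry type of each \texttt{<collision>} element can be recovered with a few lines of XML processing. The element names below follow SDFormat, while the helper function itself is only a sketch of ours, not part of Gazebo:

\begin{lstlisting}[language=python]
import xml.etree.ElementTree as ET

def collision_geometries(sdf_text):
    """Return the geometry type of every <collision> element in an SDF model."""
    kinds = []
    for collision in ET.fromstring(sdf_text).iter("collision"):
        geometry = collision.find("geometry")
        if geometry is None or len(geometry) == 0:
            kinds.append("empty")
        else:
            kinds.append(geometry[0].tag)  # e.g. "box", "cylinder", "mesh"
    return kinds
\end{lstlisting}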
\section{GzScenic}
\label{sec:gzscenic}
The goal of GzScenic is to convert a test scenario,
written in the Scenic language, to a set of files and models that can be
used by Gazebo. \Cref{fig:overview}
provides an overview of GzScenic's inputs, outputs, and internal
steps.
GzScenic takes a high-level model descriptor YAML file, a set of
custom models, and a Scenic scenario as inputs, and performs
three steps to achieve its goal.
Firstly, it automatically generates Scenic models from the model
descriptor YAML and the custom models (\Cref{sec:model-generation}).
It then passes the generated Scenic models and the input scenario
to the Scenic tool, and generates a concrete scene (\Cref{sec:scene-generation}).
Finally, it translates the generated scene into a format that is
suitable for Gazebo (\Cref{sec:gazebo-translation}).
We provide a running example of generating a scene for the popular
open-source Fetch robot~\cite{fetch}. More examples of GzScenic
inputs and scenarios can
be found in the tool's repository at \url{https://github.com/squaresLab/GzScenic}.
In this example, our goal is to create a scene for Fetch that resembles
a pick and place playground.\footnote{Pick and place is the act
of picking up an object, moving it to another location, and placing it at
the destination.}
Throughout the rest of this section, we explain how GzScenic achieves this goal.
\subsection{Model Generation}
\label{sec:model-generation}
\begin{figure}[h]
\centering
\begin{lstlisting}[language=yaml]
models:
- name: fetch
type: MISSION_ONLY
width: 0.57
length: 0.53
heading: -1.57
- name: waypoint
type: MISSION_ONLY
- name: cafe_table
type: GAZEBO_MODEL
- name: bookshelf
type: GAZEBO_MODEL
- name: LampAndStand
type: GAZEBO_MODEL
- name: demo_cube
type: CUSTOM_MODEL
dynamic_size: False
models_dir: models/
world: empty_world.world
\end{lstlisting}
\caption{
\small An example model descriptor YAML file for
Fetch.}
\label{fig:yml-example}
\end{figure}
As discussed in \Cref{sec:scenic}, parsing a scenario
written in the Scenic language
and generating a valid concrete scene
requires a set of model definitions to be provided to Scenic.
These models should describe all entities that can be included in a scenario.
For example, in self-driving applications these models may include
entities such as cars, roads, and pedestrians. Scenic provides
some of the models out of the box for self-driving applications,
which are its primary domain.
Since Gazebo is a general-purpose
simulator that is used in a wide range of domains,
it is nearly impossible to pre-define Scenic models that describe
the entities required to simulate all systems across different sectors. For example, an agricultural robot
requires modeling of entities such as plants and tractors,
whereas a warehouse robot requires modeling of the shelves, boxes,
and rooms.
In comparison, defining these models for a domain-specific simulator such as
GTA V or CARLA requires a one-time investment since most of the
entities that can be simulated and included in the scenarios are
shared among all systems that use these simulators.
For example,
if models are produced for CARLA in order to test a given system,
those same models may be reused in another system with minimal effort.
GzScenic allows Gazebo users to easily create Scenic models by automatically
generating them from a set of Gazebo models, provided as \texttt{.sdf}
and 3D mesh (e.g., \texttt{.dae}, \texttt{.obj}, \texttt{.stl}) files,
as described in \Cref{sec:gazebo}.
To perform this conversion, GzScenic requires the user to provide a
list of the models that may be used in generated scenes via the YAML
model descriptor file, illustrated in \Cref{fig:yml-example}.
The description of each model in the file must specify its name
and its type.
The three model types, described below, inform GzScenic of how it should access the
Gazebo models (if required).
\begin{itemize}
\item \texttt{GAZEBO\_MODEL}:
By default, Gazebo comes prepackaged with a common database of
models.\footnote{\url{https://github.com/osrf/gazebo_models}}
In addition to this database, the Ignition Fuel web application hosts
thousands of Gazebo models publicly released by users.\footnote{\url{https://app.ignitionrobotics.org/fuel/models}}
Models of type \texttt{GAZEBO\_MODEL} refer to these models.
GzScenic automatically downloads
all the files related to models of this type from the model
distribution according to the provided name.
\item \texttt{CUSTOM\_MODEL}: Models of this type are not standard Gazebo
models. They are either made by the user for their own use, or should be downloaded from a custom source.
In the former case,
GzScenic looks for the Gazebo model files in the \texttt{models\_dir} directory
specified by the model descriptor file, and in the latter case, GzScenic downloads
the files from the URL provided by the tag \texttt{url} in the YAML file.
\item \texttt{MISSION\_ONLY}: Models of this type include any entities
in the scenarios that
do not map to a simulated object that should be included in the
Gazebo \texttt{.world} file, but
are particularly important in generating interesting missions in the
scenarios. For example, mission waypoints are entities
that do not represent objects in the environment, but may be
specified in a scenario to allow missions to be generated and executed by a
test harness.
Another example is the robot itself. Robots are not typically included in the
\texttt{.world} file, and are spawned separately by \texttt{roslaunch}.
Therefore, these robots should not be included in the generated \texttt{.world}
file, but a suitable initial position should be emitted by GzScenic to allow
users to test the robot in different, valid starting positions.
We discuss the use of GzScenic to generate missions further in \Cref{sec:gazebo-translation}.
\end{itemize}
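The three acquisition paths can be summarized as a small dispatch over the descriptor entries. This is a simplified sketch of ours; the \texttt{name}, \texttt{type}, and \texttt{url} fields mirror the YAML descriptor, and the actual download logic is elided:

\begin{lstlisting}[language=python]
import os

def resolve_model_source(entry, models_dir):
    """Where to obtain a model's Gazebo files, based on its descriptor type."""
    kind = entry["type"]
    if kind == "MISSION_ONLY":
        return None  # no simulated object, hence no files to fetch
    if kind == "GAZEBO_MODEL":
        return ("download", entry["name"])  # fetched from the model database
    if kind == "CUSTOM_MODEL":
        if "url" in entry:
            return ("download", entry["url"])
        return ("local", os.path.join(models_dir, entry["name"]))
    raise ValueError("unknown model type: " + kind)
\end{lstlisting}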
\Cref{fig:yml-example} presents an example model descriptor file for
Fetch, describing the models that may be used
in a pick and place scenario.
In this list, we have included models for \texttt{cafe\_table} and \texttt{bookshelf},
provided out of the box by the official Gazebo model database,
and \texttt{LampAndStand}, released on the Ignition Fuel web application.
The Gazebo model
for \texttt{demo\_cube} is custom-made and provided in the \texttt{models/} directory.
Finally, we include model descriptions for the robot and waypoints,
\texttt{fetch} and \texttt{waypoint}, as \texttt{MISSION\_ONLY}.
Note that this is only
an example of the set of models that can be included in the scenarios.
Users can select their preferred models from thousands
of available models, or include their own custom-made models.
To generate corresponding Scenic models for the models in the model descriptor file,
GzScenic automatically determines a number of features for each model.
These features include a 2D bounding box,\footnote{Recall that Scenic treats all models as 2D rectangles.}
given by a \texttt{length} and \texttt{width},
and a flag, \texttt{dynamic\_size}, indicating whether or not the model can be dynamically
resized.
To determine the \texttt{length} and \texttt{width} of a model,
GzScenic computes a bounding box for each individual collision geometry specified
in the \texttt{.sdf} file, before determining a bounding box for the entire model.
Note that GzScenic supports 5 of the 9 types of collision geometry that
can be represented using SDF~\cite{sdf}: \texttt{empty}, \texttt{box},
\texttt{cylinder}, \texttt{sphere}, and \texttt{mesh}. Of the 283
models that are prepackaged with Gazebo, only 12 use a
geometry that is not supported by GzScenic.
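A hedged sketch of this computation: each supported primitive contributes a width/length pair in the model's ground plane, and the model's box is the union of the per-geometry boxes (geometry poses and mesh vertices, which GzScenic also accounts for, are omitted here for brevity):

\begin{lstlisting}[language=python]
def geometry_footprint(geom):
    """2D (width, length) footprint of one collision geometry."""
    kind = geom["type"]
    if kind == "empty":
        return (0.0, 0.0)
    if kind == "box":
        return (geom["size"][0], geom["size"][1])
    if kind in ("cylinder", "sphere"):
        diameter = 2 * geom["radius"]
        return (diameter, diameter)
    raise ValueError("unsupported geometry: " + kind)

def model_footprint(geometries):
    """Bounding box of the whole model: the union of its geometry boxes."""
    boxes = [geometry_footprint(g) for g in geometries]
    return (max(w for w, _ in boxes), max(l for _, l in boxes))
\end{lstlisting}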
After calculating the bounding box of the model, GzScenic
determines whether the model can be resized.
For example, a simple box, a tree, or a wall should be allowed
to be resized based on what the scenario requires, but a table that
consists of multiple parts (e.g., surface and legs), a robot model,
or a stop light should not be resized since they are not scalable in
the real world. As a rule of thumb, GzScenic allows dynamic resizing
down to half or up to twice the original size of the models
if they only consist of a single simple collision geometry (i.e.,
\texttt{empty}, \texttt{box}, \texttt{cylinder}, or \texttt{sphere}).
We made this decision
based on the observation that complex models that include multiple
collision geometries or meshes are more likely to be of standard size and
should not be resized. However, there are exceptions to this rule.
As a result, GzScenic allows the user to override the \texttt{dynamic\_size}
feature of a model in the model descriptor file.
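The rule of thumb and its override can be expressed compactly (a sketch; the field, type, and limit values mirror the description above):

\begin{lstlisting}[language=python]
SIMPLE_GEOMETRIES = {"empty", "box", "cylinder", "sphere"}

def allows_dynamic_size(geometry_types, override=None):
    """Resizable only if the model has a single simple collision geometry,
    unless the user overrides the decision in the model descriptor."""
    if override is not None:
        return override
    return len(geometry_types) == 1 and geometry_types[0] in SIMPLE_GEOMETRIES

def clamp_scale(scale):
    # Dynamic resizing is limited to between half and twice the original size.
    return min(max(scale, 0.5), 2.0)
\end{lstlisting}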
Once all of the models specified in the model descriptor file are transformed
to Scenic models, Scenic can interpret the input scenario.
Note that this model generation step need only occur once
for each system unless the models change.
GzScenic stores generated models for future use.
\subsection{Scene Generation}
\label{sec:scene-generation}
In this step, GzScenic runs the Scenic interpreter on the provided scenario description
using the models generated in the previous step.
If possible, Scenic generates a concrete scene
that satisfies the scenario description and shows the user a
plot of the object arrangement on a 2D plane (\Cref{fig:example-plot}).
\begin{figure}
\centering
\small
\begin{lstlisting}[language=scenic]
width = 8
length = 8
heading = 0
workspace = Workspace(RectangularRegion(0 @ 0, heading, width, length))
create_room(length, width, x=0, y=0, sides='NSWE')
ego = Fetch at 0 @ 0
table1 = CafeTable offset by 0 @ 1, facing 0 deg
create_room(3, 2.5, x=-2, y=2, sides='NSE')
table2 = CafeTable at -2 @ 2
Bookshelf at Range(-4,4) @ -3.5, facing 180 deg
back_right_region = RectangularRegion(-2 @ -2, 0, 3.5, 3.5)
Lampandstand in back_right_region
\end{lstlisting}
\caption{An example scenario for Fetch, written in Scenic.}
\label{fig:example-scenario}
\end{figure}
\Cref{fig:example-scenario} presents a simple example
scenario for the Fetch robot. In this scenario, we first
create a workspace\footnote{A workspace in Scenic language specifies
the region the objects must lie within.} with length and width of 8 meters.
Then, using a function provided by GzScenic, \texttt{create\_room},
we create walls surrounding the workspace on all four sides.
The rest of the scenario describes the objects and
their positioning in the scene, and creates a smaller room that
has walls on three sides. The resulting scene plot generated by Scenic
is presented in \Cref{fig:example-plot}.
The instance in the center
of the plot is the ego, which is the Fetch robot in this scenario.
All other instances are represented as red rectangles. Note that the
position and orientation of three of the instances in this scenario are
randomly determined and can take other concrete values in other
scenes.
The output of this step is a concrete scene
that includes all of the
objects generated from the models, together with their arrangement.
Next, we translate this concrete scene into a format that can be
used by Gazebo.
\subsection{Gazebo Translation}
\label{sec:gazebo-translation}
This final step of GzScenic accepts a concrete scene as an input,
and translates this scene into (a) a Gazebo world and models that
allow us to simulate the scene in Gazebo, and (b) a YAML file
listing the position and orientation of \texttt{MISSION\_ONLY} objects,
which can facilitate automated creation of test missions.
\begin{figure*}
\centering
\begin{subfigure}{0.72\columnwidth}
\includegraphics[width=\columnwidth]{figures/fetch_plot.png}
\caption{\small 2D plot generated for the concrete scene.}
\label{fig:example-plot}
\end{subfigure}
\hspace{10mm}
\begin{subfigure}{\columnwidth}
\centering
\includegraphics[width=1.1\columnwidth]{figures/fetch_gazebo.png}
\caption{\small Gazebo simulation of the generated scene.}
\label{fig:example-gazebo}
\end{subfigure}
\caption{\small An example of the scene generated for the
Fetch robot reflecting the scenario
of \Cref{fig:example-scenario}.}
\label{fig:example}
\end{figure*}
\paragraph{Gazebo world and models}
As mentioned in \Cref{sec:gazebo}, Gazebo components
include
a world file (usually with \texttt{.world} extension) and a set of
models in the form of \texttt{.sdf} files, mesh files, and configuration files~\cite{gazebo-components}.
In this step, GzScenic translates a concrete scene description
to corresponding Gazebo \texttt{.world} and model files.
To do so, GzScenic starts from an empty world environment, which,
by default, includes only a ground plane, and adds every object
in the concrete scene to this world one by one.
The user can provide a customized empty world to GzScenic where
they can configure different aspects of the world such as its lighting, shadowing, and physics engine.
For every object in a concrete scene, GzScenic determines the
elements that must be added to the world file, and the files that
need to accompany the generated world.
For all objects generated from a \texttt{GAZEBO\_MODEL} or \texttt{CUSTOM\_MODEL},
GzScenic adds those objects to the world file via the \texttt{<include>} tag.
Additionally,
GzScenic generates the necessary Gazebo model files for
each individual object, updating the collision geometries to
reflect the dynamically determined size of the object.
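For instance, placing one object in the world file amounts to emitting an SDFormat \texttt{<include>} element with the object's pose (x, y, z, roll, pitch, yaw); the helper below is our own minimal sketch:

\begin{lstlisting}[language=python]
def include_element(obj):
    """SDFormat <include> snippet placing one object in the world file."""
    x, y, z = obj["position"]
    return ("<include>\n"
            "  <uri>model://{}</uri>\n"
            "  <name>{}</name>\n"
            "  <pose>{} {} {} 0 0 {}</pose>\n"
            "</include>").format(obj["model"], obj["name"], x, y, z,
                                 obj["heading"])
\end{lstlisting}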
At the end of the process, the user ends up with a world file and a set of models.
If GzScenic is running on the same system as Gazebo,
the user can specify the output directory in such a way that
Gazebo can immediately find
GzScenic's outputs.
Note that the path to the models directory can be passed
to Gazebo via the \texttt{GAZEBO\_MODEL\_PATH} environment variable.
If GzScenic is not running on the same system as Gazebo,
the user must transfer the output of GzScenic to the system that hosts
Gazebo.
In our example scenario (\Cref{fig:example-scenario}),
GzScenic automatically translates
the concrete scene plotted in \Cref{fig:example-plot}
to a Gazebo simulation presented in
\Cref{fig:example-gazebo}.
As shown, the position and orientation of the objects
in this simulation are aligned with the generated plot, and the
description provided in the scenario of \Cref{fig:example-scenario}.
\paragraph{Mission-only objects YAML}
Let us refer back to the example
scenario of \Cref{fig:example-scenario} for the Fetch robot.
Our ultimate goal in this case is to test
Fetch in a scene that is generated from our example scenario.
Simply launching Fetch in the automatically-generated Gazebo simulation
(\Cref{fig:example-gazebo}) will not test the system as it is
not performing any operations. The system should receive a mission
(i.e., a set of instructions to perform actions) to be tested in this
environment. For example, a mission for Fetch can instruct the
robot to pick an object from the table, move to another room, and place
the object on the other table.
A scenario may include information about the mission that should be
performed in the generated scene. For example, in the scenario of
\Cref{fig:example-scenario}, we can add instances of
\texttt{waypoint}s that reflect where the robot should move to:
\begin{lstlisting}[language=scenic]
Waypoint in back_right_region
Waypoint ahead of table2 by 1
\end{lstlisting}
GzScenic automatically generates position and orientation for these
instances. However, since their model type is \texttt{MISSION\_ONLY},
it does not include these objects in the Gazebo simulation. Instead,
it outputs a YAML file that lists the position and
orientation of each one of these \texttt{MISSION\_ONLY} objects,
grouped by their type:
\begin{lstlisting}[language=yaml]
fetch:
- heading: -1.57
x: 0
y: 0
z: 0.0
waypoint:
- heading: 4.521433387130848
x: -0.6426244725245782
y: -0.7777737656890915
z: 0.0
- heading: 2.2663887353720784
x: -2.7676780355847423
y: 1.3591564670641116
z: 0.0
\end{lstlisting}
Since the definition of the missions and how they are executed are
system-specific, there is no generic way to convert this list of
coordinates to a running mission that will work on all systems.
However, we believe that users can easily read this
output file to automatically generate their intended missions using a
custom test harness.
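As a sketch of such a harness, the coordinates can be turned into a spawn pose and an ordered list of navigation goals. We inline the parsed YAML as a dictionary (with rounded values); a real harness would load the file with a YAML parser and forward the goals to the robot (e.g., via ROS actions):

\begin{lstlisting}[language=python]
# Parsed contents of the MISSION_ONLY YAML above, inlined for illustration.
mission_objects = {
    "fetch": [{"heading": -1.57, "x": 0.0, "y": 0.0, "z": 0.0}],
    "waypoint": [
        {"heading": 4.52, "x": -0.64, "y": -0.78, "z": 0.0},
        {"heading": 2.27, "x": -2.77, "y": 1.36, "z": 0.0},
    ],
}

def navigation_goals(objects):
    """Spawn pose for the robot plus an ordered list of waypoint goals."""
    spawn = objects["fetch"][0]
    goals = [(w["x"], w["y"], w["heading"]) for w in objects["waypoint"]]
    return spawn, goals
\end{lstlisting}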
\section{Limitations}
As mentioned in \Cref{sec:scenic}, Scenic is only capable of
generating 2D scenes and cannot arrange objects in the scene in
a 3D environment. This limits GzScenic as well, for example in
scenarios that require multiple objects to be stacked on
top of each other.
While resolving this limitation is out of the scope
for GzScenic, there is a workaround that will allow users to
stack objects on top of each other in GzScenic. GzScenic by default
keeps track of the height and z coordinate of the objects. This
information has no impact on the scene that is generated by Scenic but
is used during the Gazebo translation step to create Gazebo models.
To stack two objects on top of each other, the user needs to properly
set the \texttt{z} value of the objects, and allow them to collide by
setting \texttt{allowCollisions} to \texttt{True}. In the example
scenario of Figure~\ref{fig:example-scenario}, we can place a cube on the
table by adding the following lines to the scenario:
\begin{lstlisting}[language=scenic]
table.allowCollisions = True
cube = Cube at table.position, with allowCollisions (True)
cube.z = table.height + cube.height
\end{lstlisting}
Note that this is only a temporary workaround until handling 3D
positions is added to Scenic.
Another limitation of GzScenic, inherited from Scenic, is that
all objects are treated as rectangles in 2D space.
As a result, GzScenic computes the bounding box of models as described
in \Cref{sec:model-generation}. However, the bounding box of a
model is not always truly representative of the space that the model
will occupy. For example, imagine a hoop that is hollow inside.
Its bounding box is a square surrounding
its circumference, so Scenic treats the hoop the same as a solid box.
However, we may want to allow an object to be placed in the
center of the hoop, which is currently not possible.
To partially mitigate this issue,
we plan to improve GzScenic to break large models
(e.g., model of a house) into smaller ones instead of creating a
bounding box for the whole model.
\section{Conclusion}
In this work we present GzScenic, a tool that automatically generates
scenes for Gazebo simulations from scenarios provided in the
Scenic language. GzScenic allows users to simply specify a list
of models they intend to use in the simulation, and automatically
turns these models into models that are interpretable by Scenic. Using
these models, Scenic generates a scene from the scenario, which is
later automatically converted to Gazebo models by GzScenic.
\section*{Acknowledgement}
This research was partly funded by AFRL (\#OSR-4066).
The authors are grateful for their support.
Any opinions or findings expressed are those of the authors
and do not necessarily reflect those of the US Government.
\bibliographystyle{IEEEtran}
\section{Introduction}
Understanding how students learn over time holds the key to unlocking the full potential of adaptive learning. Indeed, personalizing the learning experience, so that educational content is recommended based on individual need in real time, promises to continuously stimulate motivation and the learning process \citep{bauman}. Accurate detection of students' knowledge gaps is a fundamental building block of personalized learning systems \citep{knowledge_gaps,lindsey2014improving}. A number of approaches exist for modeling student knowledge and predicting student performance on future exercises, including IRT \citep{irt}, BKT \citep{sequencing} and DKT \citep{dkt}. Here we propose an ensemble approach to predicting student knowledge gaps which achieved the highest score on both evaluation metrics for all three datasets in the 2018 Shared Task on Second Language Acquisition Modeling (SLAM) \cite{slam18}. We analyze in what cases our models' predictions could be improved and discuss the relevance of the task setup for real-time delivery of personalized content within an educational setting.
\section{Data and Evaluation Setup}
The 2018 Shared Task on SLAM provides student trace data from users on the online educational platform Duolingo \citep{slam18}. Three different datasets are given, representing users' responses to exercises completed over the first 30 days of learning English, French and Spanish as a second language.
Common to all exercises is that the user responds with a sentence in the language learnt. Importantly, the raw input sentence from the user is not available; instead, only the best matching sentence among a set of correct answer sentences is given. The prediction task is to predict the word-level mistakes made by the user, given the best matching sentence and a number of additional features provided. The matching between the user response and the correct sentence was derived by the finite-state transducer method \citep{fst}.
All datasets were pre-partitioned into training, development and test subsets, where approximately the last 10\,\% of the events for each user are used for testing and the last 10\,\% of the remaining events for development. Target labels for token-level mistakes are provided for the training and development sets but not for the test set. Aggregated metrics for the test set were obtained by submitting predictions to an evaluation server provided by Duolingo. The performance for this binary classification task is measured by area under the ROC curve (AUC) and F1-score.
Although the dataset provided represents real user interactions on the Duolingo platform, the model evaluation setup does not represent a realistic scenario in which the predictive modelling would be used to personalize the content presented to a user. The reason for this is threefold. Firstly, predictions are made given the best matching correct sentence, which would not be known prior to the user answering a question that has multiple correct answers. Secondly, a number of variables available at each point in time represent information from the future, creating a form of data leakage. Finally, the fact that interactions from each student span all data partitions means that we can always train on the same users that the model is evaluated on, and hence there are never first-time users whose mistakes must be inferred solely from sequential behaviour. To estimate prediction performance in an educational production setting, where next-step recommendations must be inferred from past observations, the evaluation procedure would have to be adjusted accordingly.
\section{Method}
To predict word-level mistakes we build an ensemble model which combines the predictions from a Gradient Boosted Decision Tree (GBDT) and a recurrent neural network model (RNN). Our reasoning behind this approach lies in the observation that RNNs have been shown to achieve good results for sequential prediction tasks \citep{dkt} whereas GBDTs have consistently achieved state of the art results on various benchmarks for tabular data \cite{gbdt_benchmark}.
Even though the data in this case is fundamentally sequential, the number of features and the fact that interactions for each user are available during training make us expect that both models will generate accurate predictions.
Details of our model implementations are given below.
\subsection{The Recurrent Neural Network}
The recurrent neural network model that we use is a generalisation of the model introduced by Piech \shortcite{dkt}, based on the popular LSTM architecture, with the following key modifications:
\begin{itemize}
\item All available categorical and numerical features are fed as input to the network, at multiple input points in the graph of the network (see \ref{sec:supp_model_details})
\item The network operates on a word level, where words from different sentences are concatenated to form a single sequence
\item Information is propagated backward (as well as forward) in time, making it possible to predict the correctness of a word given all the surrounding words within the sentence
\item Multiple ordinary as well as recurrent layers are stacked, with the information from each level cascaded through skip-connections \cite{bishop} to form the final prediction
\end{itemize}
In model training, subsequences of up to 256 interactions are sampled from each user history in the train dataset, and only the second half of each subsequence is included in the loss function. The binary target variable representing word-level mistakes is expanded to a categorical variable and set to \textit{unknown} for the second half of each subsequence in order to match the evaluation setup.
Log loss of predictions for each subsequence is minimised using adaptive moment estimation \citep{adam} with a batch size of 32. Regularisation with dropout \citep{dropout} and L2 regularisation \citep{Schmidhuber14} is used for embeddings, recurrent and feed forward layers. Data points are used once over each of 80 epochs, and performance continuously evaluated on 70 \% of the dev data after each epoch. The model with highest performance over all epochs is then selected after training has finished. Finally, Gaussian Process Bandit Optimization \citep{GPBandits} is used to tune the hyperparameters learning rate, number of units in each layer, dropout probability and L2 coefficients.
\subsection{The Gradient Boosted Decision Tree}
\label{method:gbdt}
The decision tree model is built using the LightGBM framework \citep{lgbm}, which implements a way of optimally partitioning categorical features, leaf-wise tree growth, as well as histogram binning for continuous variables \citep{lgbm:features}. In addition to the variables provided in the student trace data, we engineer a number of features which we anticipate should have relevance for predicting the word-level mistakes:
\begin{itemize}
\item How many times the current token has been practiced
\item Time since token was last seen
\item Position index of token within the best matching sentence
\item The total number of tokens in the best matching sentence
\item Position index of exercise within session
\item Preceding token
\item A unique identifier of the best matching sentence as a proxy for exercise id
\end{itemize}
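The first two of these features can be computed in a single pass over a user's chronologically ordered token stream. The sketch below uses plain dictionaries for clarity, whereas our implementation operates on the full interaction table:

\begin{lstlisting}[language=python]
def practice_features(events):
    """events: chronological (token, timestamp) pairs for one user.

    Returns, per event, how many times the token was practiced before
    and the time elapsed since it was last seen (None on first sight).
    """
    counts, last_seen, features = {}, {}, []
    for token, ts in events:
        n = counts.get(token, 0)
        delta = ts - last_seen[token] if token in last_seen else None
        features.append((n, delta))
        counts[token] = n + 1
        last_seen[token] = ts
    return features
\end{lstlisting}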
Optimal model parameters are learned through a grid search by training the model on the training set and evaluating on the development set to optimize AUC. The optimal GBDT parameter settings for each dataset can be found in the Supplementary Material \ref{sec:hyperparams}.
\subsection{Ensemble Approach}
The predictions generated by the recurrent neural network model and the GBDT model are combined through a weighted average. We train each model using its optimal hyperparameter setting on the train dataset and generate predictions on the dev set. The optimal ensemble weights are then found by varying the proportion of each model prediction and choosing the weight combination which yields optimal AUC score (Figure \ref{fig:ensamble:en_es}).\\
Finally, the RNN and GBDT were trained using their respective optimal hyperparameter settings on the training and development datasets to generate predictions on the test sets. The individual model test set predictions were then combined using the optimal ensemble weights to generate the final test set predictions for task submission.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{en_es_ensamble}
\caption{Ensemble model performance as a function of the GBDT ensemble weight parameter for the en\_es dataset. 0.0 is equivalent to using only the neural network model while 1.0 is equivalent to using only GBDT.
}
\label{fig:ensamble:en_es}
\end{figure}
\section{Discussion}
Our ensemble approach yielded superior prediction performance on the test set compared to the individual performances of the ensemble components (Table \ref{tab_perf_auc}). The F1 scores of our ensemble are reported in Table \ref{tab_perf_f1}. We note that although the within-ensemble prediction correlations are high (Table \ref{tab_corr}), the prediction diversity evidently suffices for the ensemble combination to outperform the underlying models. This suggests that the RNN and the GBDT differ in performance on different word mistakes. Most likely, the temporal dynamics modelled by the neural network model complement the GBDT predictions enabling the ensemble to generalise better to unseen user events than its component parts. Notably, none of our individual models would have yielded first place in the Shared Task.
\begin{table}[h]
\centering
\small
\begin{tabular}{c}
\begin{tabular}{|l|l|l|l|}
\hline
Model &\verb|fr_en|& \verb|es_en| & \verb|en_es|\\\hline
{RNN} & 0.841 & 0.830 & 0.851 \\\hline
{GBDT} & 0.853 & 0.836 & 0.856\\\hline
{Ensemble} & 0.857 & 0.838 & 0.861\\
\hline
\end{tabular}
\end{tabular}
\caption{\label{tab_perf_auc} Model AUC scores on the test partition for all datasets.}
\end{table}
\begin{table}[h]
\centering
\small
\begin{tabular}{c}
\begin{tabular}{|l|l|l|l|}
\hline
&\verb|fr_en|& \verb|es_en| & \verb|en_es|\\\hline
{Ensemble} & 0.573 & 0.530 & 0.561\\
\hline
\end{tabular}
\end{tabular}
\caption{\label{tab_perf_f1} Model F1 scores on the test partition for all datasets.}
\end{table}
\begin{table}[h]
\centering
\small
\begin{tabular}{c}
\begin{tabular}{|l|l|l|l|}
\hline
Data partition &\verb|fr_en|& \verb|es_en| & \verb|en_es|\\\hline
{dev} & 0.881 & 0.901& 0.896 \\\hline
{test} & 0.884 & 0.894 & 0.898\\\hline
\end{tabular}
\end{tabular}
\caption{\label{tab_corr} Pearson correlation coefficients between the GBDT and RNN predictions on the dev and test sets for all datasets.}
\end{table}
\subsection{Feature Importance}
Given the predictive power of our model we can use the model components to gain insight into which features are most valuable when inferring student mistake patterns. When ranking GBDT features by information gain, we note that 4 out of 5 features overlap between the three datasets (Table \ref{tab_feature_importance}). The unique user identifier is ranked second on all datasets, suggesting that very often a separate subtree can be built for each user. This implies that generalisation to new users would degrade the GBDT model's performance.
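As an illustration of gain-based ranking (a self-contained sketch of single-split information gain, not the GBDT library's actual computation):

```python
import numpy as np

def entropy(y):
    """Shannon entropy (bits) of a binary label vector."""
    p = np.bincount(y, minlength=2) / len(y)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def split_gain(x, y, threshold):
    """Information gain of splitting on x <= threshold."""
    left = x <= threshold
    gain = entropy(y)
    for mask in (left, ~left):
        if mask.any():
            gain -= mask.mean() * entropy(y[mask])
    return gain

def rank_features_by_gain(X, y):
    """Rank features by their best single-split information gain."""
    gains = []
    for j in range(X.shape[1]):
        values = np.unique(X[:, j])
        gains.append(max((split_gain(X[:, j], y, t) for t in values[:-1]),
                         default=0.0))
    return list(np.argsort(gains)[::-1]), gains
```

A real GBDT accumulates such gains over all splits of all trees, but the ranking principle is the same.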
\begin{table}[h]
\centering
\begin{tabular}{c}
\begin{tabular}{|l|l|l|}
\hline
\verb|fr_en|& \verb|es_en| & \verb|en_es|\\ \hline
\textit{token} & \textit{token} & \textit{token} \\
\textit{user} & \textit{user} & \textit{user} \\
\textit{format} & \textit{format} & \textit{format} \\
\textit{exercise id} & \textit{exercise id} & \textit{exercise id} \\
\textit{time} & \textit{token attempt} & \textit{time} \\ \hline
\end{tabular}
\end{tabular}
\caption{\label{tab_feature_importance} The top 5 GBDT model features by information gain.
}
\end{table}
\subsection{Relevance for Real-Time Prediction Delivery}
In the setup at hand, we have a unique identifier and most of the data available for each user during model training.
This means that the GBDT, for example, can naturally build a subtree representing each individual user. For the model evaluation setup, where there is no need to generalise to new users, this is not an issue.
In a production setting, however, the model has to serve new users, which would then have to be handled separately.
Frequent retraining of the model would also be necessary to prevent performance degradation.
For these reasons, the unique user identifier is typically replaced by engineered features that represent the user history. An alternative would be to apply state-based models such as Recurrent Neural Networks, which encode user history by default, without computational overhead or extra engineering effort.
\subsection{Error Analysis}
Although the predictive power of our model is high, there are mistake patterns that our model is not able to capture. The following sections cover two ways of characterizing subsets of the data on which the model performs worse than average. These observations could potentially be used to improve the overall model performance.
\subsubsection{Performance Decay over Time}
Due to the sequential partitioning of the training, development and test subsets, the model does not have information about each user's mistakes for the most recent events. In Figure \ref{fig:decay} we note that this lack of information results in a degradation in performance as the predictions get further away from the horizon of labeled data points. Effects which drive this phenomenon include:
\begin{enumerate}
\item The data is non-stationary, i.e. the distribution it comes from varies over time
\item The model has seen less relevant information about each user when the prediction is far away from the label horizon
\item \label{overconfident} The model is overconfident far away from the label horizon since it has never experienced missing information on a user level during training
\end{enumerate}
We note that \ref{overconfident} would not be an issue if the model setup did not include a unique user identifier, which would be desirable in a production setting. For models that do include a unique user identifier as a feature, one way to potentially overcome this performance degradation would be to systematically sample subsequences of the training dataset on a user level, train models separately for each sample and then combine the models. In this way each submodel should be less reliant on the most recent exercise answers at any point in time and thus generalise better to the evaluation setup. This is in effect bagging with a sampling strategy taking consecutive time steps into account \citep{bagging}. We did not attempt to apply this error correction here but leave it for future work.
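One way to implement the proposed time-aware sampling (a hypothetical sketch; the paper leaves the exact scheme to future work):

```python
import random

def sample_user_windows(events_by_user, fraction=0.8, seed=0):
    """For each user, keep one contiguous window of that user's event sequence.
    Training a submodel per sampled window and averaging the submodels is
    bagging with a sampling strategy that respects consecutive time steps."""
    rng = random.Random(seed)
    sample = {}
    for user, events in events_by_user.items():
        n = max(1, int(fraction * len(events)))
        start = rng.randrange(0, len(events) - n + 1)
        sample[user] = events[start:start + n]
    return sample
```

Because each submodel sees a different label horizon per user, no single submodel can rely on always having the most recent answers available.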
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{performance_decay_en_es}
\caption{
Performance decays as instances further away from the label horizon are considered. Log Loss is computed considering only instances before a given fraction of time, where time is normalized by the maximum time for each user. Shown is the performance decay for the en\_es dataset.
}
\label{fig:decay}
\end{figure}
\subsubsection{The Influence of Rare Words}
We note that the 4\% of instances containing the least common words contribute 10\% of the prediction error measured in Log Loss (Figure \ref{fig:rare_words}).
This insight presents an opportunity to increase prediction performance. Although not attempted here, future work includes building another ensemble component specialized in predicting mistake patterns for words not previously encountered.
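The rare-word contribution can be quantified in a few lines (a sketch; the variable names are ours):

```python
import math
from collections import Counter

def rare_token_loss_share(tokens, labels, probs, rare_fraction=0.04):
    """Fraction of the total Log Loss contributed by the instances with the
    rarest tokens (rarity measured by corpus count)."""
    counts = Counter(tokens)
    by_rarity = sorted(range(len(tokens)), key=lambda i: counts[tokens[i]])
    n_rare = max(1, int(rare_fraction * len(tokens)))
    rare = set(by_rarity[:n_rare])
    losses = [-(y * math.log(p) + (1 - y) * math.log(1 - p))
              for y, p in zip(labels, probs)]
    return sum(losses[i] for i in rare) / sum(losses)
```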
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{rare_words_en_es}
\caption{
Log loss is high when considering only the $x$ rarest tokens and low when considering all tokens, on the en\_es dev partition.
}
\label{fig:rare_words}
\end{figure}
In conclusion, we have developed an ensemble approach to modeling knowledge gaps, applied here within a second language acquisition setting. Albeit not evaluated in a realistic production environment, our ensemble model achieves high predictive performance and yields insights into student mistake patterns. Our approach thus provides a foundation for further research on knowledge acquisition modeling applicable to any educational domain.
\section{Introduction}
\label{sec:intro}
In spatially-extended turbulent flows one observes similar patterns at
different spatial positions and at different times. How `similar?' If the
flow is equivariant under a group of continuous symmetries, one way of
answering this question is by measuring distances between different
states in the symmetry-reduced state space\ $\pS/\Group$, a space in which
each group orbit (class of physically equivalent states) is represented
by a single point. This distance depends on the choice of norm and on
the symmetry-reduction method.
In 1980 Phil Morrison\rf{MorrGree80} showed how to derive the Hamiltonian
description of ideal fluid (plasma) dynamics from the Low
Lagrangian\rf{Low58} by a Lie symmetry reduction, which in this context
amounts to the transformation from Lagrangian to Eulerian variables: the
state space\ of position-labeled Lagrangian trajectories of `fluid parcels'
is reduced to a much smaller state space\ of Eulerian velocity
fields. It is a
difficult example of reduction; the reduction steps have to be
executed judiciously, new variables cleverly chosen, and ``one should do
the Legendre transformations slowly and carefully when there are
degeneracies\rf{CHHM98}.'' Our goal here is different. Rather than to
reduce a particular set of dynamical equations, we seek to formulate a
computationally straightforward general method of reducing continuous
symmetries, applicable to any high-dimensional chaotic/turbulent flow,
such as the fluid flows bounded by pipes or planes. The
symmetry-reduction literature is very extensive (see
\refrefs{CBcontinuous,SiCvi10} for a review), but it basically offers two
approaches (a) invariant polynomial bases, and (b) methods which pick a
representative point by slicing group orbits, generalizing the way in
which {\PoincSec}s cut time-evolving trajectories. For high-dimensional
flows the method of slices\ studied in \refrefs{%
rowley_reduction_2003,%
CBcontinuous,SiminosThesis,SiCvi10,Wilczak09}
appears to be the only computationally feasible approach. Here
the method is rederived as a distance minimization problem in the space
of patterns.
The new results reported in this paper are:
(a) A generic slice cuts across group orbits of {\em all}
states in the state space\ (\refsect{sec:frame}).
(b) Every slice carries along with it an inflection hyperplane. We show how to
compute the jump of the reduced state space\ trajectory
(\refsect{sec:mslices}) whenever it crosses
through such a singularity (\refsect{sec:singul}).
(c) We propose to avoid these singularities (artifacts of the symmetry
reduction by the method of slices) by tiling the state space\ with an atlas
constructed from a set of local slices (\refsect{sec:chart}).
Pertinent facts about symmetries of dynamical systems are summarized in
\refappe{sec:SymmDyn}. In \refappe{sec:singulProd} we show that for
continuous symmetries with product structure (such as $\SOn{2} \times
\SOn{2}$ symmetries of pipe and plane fluid flows), each symmetry induces
its own inflection hyperplane.
In what follows we denote by `method of moving frames' the post-processing of the full
state space\ flow (\refsect{exam:CLErotAngle}), and by `method of slices' the
integration of flow confined to the reduced state space\ (\refsect{sec:mslices}).
In practice, symmetry reduction is best carried out as post-processing,
after the numerical trajectory is obtained by integrating the full
state space\ flow. In particular, the symmetry-reduction induced
singularities (\refsect{sec:singul}) are more tractable numerically
if given the full state space\ trajectory.
\begin{figure}
\begin{center}
(a)\includegraphics[width=0.33\textwidth]{Fullspace}%
~~~~~~~~
(b)\includegraphics[width=0.32\textwidth]{RedTrajNoPlane1}%
\end{center}
\caption{\label{fig:Fullspace}
(a) Complex Lorenz equations\ \refeq{eq:CLeR} exhibit a strange attractor for parameter
values \refeq{SiminosPrmts}, here projected on the $\{y_1,y_2,z\}$ axes.
{(thin line)}
A segment of generic finite time trajectory.
{(thick line)}
$\REQV{}{1}$, the only rela\-ti\-ve equilib\-rium.
(b) The same strange attractor plotted in the symmetry-reduced state space\
slice \refeq{PCsectQ}, defined by the group tangent $\sliceTan{}$ whose
choice is explained in \refeq{exmplTempl}. In the reduced state space\ rela\-ti\-ve equilib\-rium\
$\REQV{}{1}$ is reduced to equilib\-rium\ $\EQV{1}$. Note, however, the
semicircular jumps in the reduced flow. These are analyzed in
\refsect{sec:singul}. For a blow-up of the jump indicated by the small
rectangle (red), see \reffig{fig:singpass}\,(a).
}%
\end{figure}
We shall illustrate symmetry reduction by applying it to the
5-dimensional complex Lorenz equations\rf{GibMcCLE82}
\begin{eqnarray}
\dot{x}_1 &=& -\sigma x_1 + \sigma y_1
\,,\qquad\qquad\qquad
\dot{x}_2 \,=\, -\sigma x_2 + \sigma y_2
\nonumber \\
\dot{y}_1 &=& (r_1-z)\, x_1 - ~y_1 - e y_2
\,,\qquad\;
\dot{y}_2 \,=\, (r_1-z)\, x_2 + e y_1 - ~y_2
\label{eq:CLeR}\\
\dot{z}~ &=& -b z + x_1 y_1 + x_2 y_2
\,.
\nonumber
\end{eqnarray}
In all numerical calculations that follow we shall set the
parameters to \refref{SiCvi10} values,
\begin{equation}
r_1=28,\; b={8}/{3},\;
\sigma=10,\quad \mbox{and} \quad e={1}/{10}
\,,
\ee{SiminosPrmts}
for which the flow exhibits a strange attractor,
\reffig{fig:Fullspace}\,(a).
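For concreteness, the flow \refeq{eq:CLeR} at parameter values \refeq{SiminosPrmts} can be integrated with a few lines (an illustrative sketch, not the code used to produce the figures):

```python
import numpy as np

SIGMA, R1, B, E = 10.0, 28.0, 8.0 / 3.0, 0.1   # parameter values of the text

def velocity(x):
    """Right-hand side of the complex Lorenz equations, x = (x1, x2, y1, y2, z)."""
    x1, x2, y1, y2, z = x
    return np.array([
        -SIGMA * x1 + SIGMA * y1,
        -SIGMA * x2 + SIGMA * y2,
        (R1 - z) * x1 - y1 - E * y2,
        (R1 - z) * x2 + E * y1 - y2,
        -B * z + x1 * y1 + x2 * y2,
    ])

def rk4_step(x, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = velocity(x)
    k2 = velocity(x + dt / 2 * k1)
    k3 = velocity(x + dt / 2 * k2)
    k4 = velocity(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

A short integration started near the origin lands on the strange attractor of \reffig{fig:Fullspace}\,(a).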
Our goal is to understand in detail this flow in the symmetry-reduced state space,
\reffig{fig:Fullspace}\,(b), in particular the singularities induced by
the symmetry reduction.
{
A flow $\dot{x}= \vel(x)$ is \emph{equivariant} under a coordinate
transformation $\LieEl$ if the form of the equations of motion is
preserved by the transformation,
\begin{equation}
\vel(x)=\LieEl^{-1}\vel(\LieEl \, x)
\,.
\ee{eq:FiniteRot1}
The totality of elements
$\LieEl$ forms \Group, the {\em symmetry group} of the flow.
}
The complex Lorenz equations\ are a simple example of a dynamical
system with a continuous (but no discrete) symmetry. They are equivariant
\refeq{eq:FiniteRot1} under \SOn{2} rotations by
\ifarticle
\begin{eqnarray}
\LieEl(\gSpace)
&=&
\exp{({\gSpace} \cdot \Lg)}
\,=\,
\left(\begin{array}{ccccc}
\cos \gSpace & \sin \gSpace & 0 & 0 & 0 \\
-\sin \gSpace & \cos \gSpace & 0 & 0 & 0 \\
0 & 0 & \cos \gSpace & \sin \gSpace & 0 \\
0 & 0 & -\sin \gSpace & \cos \gSpace & 0 \\
0 & 0 & 0 & 0 & 1
\end{array}\right)
\nonumber \\
\Lg &=&
\left(\begin{array}{ccccc}
0 & 1 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 &-1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{array}\right)
\label{CLfRots}
\end{eqnarray}
\else
\begin{eqnarray}
\LieEl(\gSpace)
&=&
\exp{({\gSpace} \cdot \Lg)}
\,=\,
\left(\begin{array}{ccccc}
\cos \gSpace & \sin \gSpace & 0 & 0 & 0 \\
-\sin \gSpace & \cos \gSpace & 0 & 0 & 0 \\
0 & 0 & \cos \gSpace & \sin \gSpace & 0 \\
0 & 0 & -\sin \gSpace & \cos \gSpace & 0 \\
0 & 0 & 0 & 0 & 1
\end{array}\right)
\,,\qquad \Lg \,=\,
\left(\begin{array}{ccccc}
0 & 1 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 &-1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0
\end{array}\right)
\label{CLfRots}
\end{eqnarray}
\fi
(for group-theoretical notation, see \refappe{sec:SymmDyn}). The group is
1\dmn\ and compact, its elements parameterized by $\gSpace \mbox{ mod }
2\pi$. The fixed-point subspace \refeq{dscr:InvPoints} is the $z$-axis.
The velocity \refeq{eq:CLeR} at a point on the $z$-axis points only in
the $z$-direction and so the trajectory remains on the $z$-axis for all
times. The action of \SOn{2}\ thus decomposes the state space\ into $m=0$
invariant subspace ($z$-axis) and $m=1$ subspace of multiplicity
2. Locally, at state space\ point $\ssp$, the infinitesimal action of the
group is given by the group tangent field $\groupTan(\ssp) = \Lg \ssp =
(x_2,-x_1,y_2,-y_1,0)$, with the flow induced by the group action normal
to the radial direction in the $(x_1,x_2)$ and $(y_1,y_2)$ planes, while
the $z$-axis is left invariant.
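These statements are easy to verify numerically (a sketch; the matrix and function names are our own):

```python
import numpy as np

# SO(2) generator T of the text, acting on (x1, x2, y1, y2, z)
T = np.zeros((5, 5))
T[0, 1], T[1, 0], T[2, 3], T[3, 2] = 1.0, -1.0, 1.0, -1.0

def rotation(theta):
    """Finite rotation g(theta) = exp(theta * T) in closed form."""
    c, s = np.cos(theta), np.sin(theta)
    blk = np.array([[c, s], [-s, c]])
    g = np.eye(5)
    g[:2, :2] = blk
    g[2:4, 2:4] = blk
    return g

def group_tangent(x):
    """t(x) = T x = (x2, -x1, y2, -y1, 0)."""
    return T @ x
```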
\section{Method of moving frames}
\label{sec:frame}
Suppose you are observing turbulence in a pipe flow, or your
defibrillator has a mesh of sensors measuring electrical currents that
cross your heart, or you have a precomputed pattern, and are sifting
through the data set of observed patterns for something like it. Here you
see a pattern, and there you see a pattern that seems much like the first
one. How `much like the first one?' Think of the first pattern
(represented by a point {\slicep} in the state space\ \pS) as a
`template'%
\rf{rowley_reconstruction_2000,%
rowley_reduction_2003,%
ahuja_template-based_2007}
or a
`reference state' and use the symmetries of the flow to slide and rotate
the `template' until it overlays the second pattern (a point $\ssp$ in
the state space), \ie, act with elements of the symmetry group \Group\ on
the template $\slicep \to \LieEl(\gSpace)\,\slicep$ until the
distance between the two patterns
\begin{equation}
|\ssp - \LieEl(\gSpace)\,\slicep|
= |\sspRed - \slicep|
\label{minDistance}
\end{equation}
is minimized. Here $\sspRed$ is the point on the group orbit of $\ssp$
(the set of all points that $\ssp$ is mapped to under the group
actions),
\begin{equation}
\ssp=\LieEl(\gSpace)\,\sspRed
\,,\qquad
\LieEl \in \Group
\,,
\ee{sspOrbit}
closest to the template {\slicep}, the Lie group element
$\LieEl=\LieEl(\gSpace)\propto\exp{({\gSpace} \cdot \Lg)}$ is
parameterized by angles $\gSpace =
(\gSpace_1,\gSpace_2,\cdots\gSpace_N)$, and the distance is an invariant
of the symmetry group, $|\LieEl\ssp|^2=|\ssp|^2$. We assume that \Group\
is a subgroup of the group of orthogonal transformations
$\On{d}$, and measure
distance $|\ssp|^2=\braket{\ssp}{\ssp}$ in terms of the Euclidean inner
product
\(
\braket{x}{y} = \sum_i^d {x}_i y_i
\,.
\)
Its Lie algebra {generators} $\Lg_a$ \refeq{FiniteRot} are $N$
linearly independent $[d\!\times\!d]$ antisymmetric matrices acting
linearly on the {state space} vectors $\ssp \in \pS \subset \mathbb{R}^d$.
If the state space\ is a normed function space,
\(
\braket{h}{f} = \int dx \, h(x) f(x)
\,,
\)
one customarily measures distance between two patterns in the $L^2$ norm,
$|f|^2 = \braket{f}{f}$. In computations, spatially-extended functions are
represented by discrete meshes or finite basis sets, within a (possibly
large) finite-dimensional state space\ $\pS \subset \mathbb{R}^d$. An example
is representation of a dissipative PDE by truncating the Fourier basis
\refeq{FourierExp} to a finite number of modes.
The minimal distance is a solution of the extremum conditions
\[
\frac{\partial ~~}{\partial \gSpace_a} |\ssp - \LieEl(\gSpace)\,\slicep|^2
=
{
2\, \braket{\sspRed - \slicep}{\sliceTan{a}}
}
= 0
\,,\qquad
\sliceTan{a} = \Lg_a \slicep
\,.
\]
By the antisymmetry of the Lie algebra generators we have
$\braket{\slicep}{\sliceTan{a}} = \braket{\slicep}{\Lg_{a}\slicep}=0$, so
we can replace
{
$\sspRed - \slicep \to \sspRed$,
}
and the `moving frame' transformation
parameters $\gSpace$ which map the state $\ssp$ to $\sspRed$, the group
orbit point closest to the template $\slicep$, satisfy
\begin{equation}
\braket{\sspRed}{\sliceTan{a}} =0
\,,\qquad
\LieEl(\gSpace)\,\sspRed = \ssp
\,.
\ee{PCsectQ}
{
Thus the set of \emph{extremal} group orbit points
}
lies in a $(d\!-\!N)$\dmn\ hyperplane, the set of vectors
orthogonal to the template tangent space spanned by tangent vectors
$\{\sliceTan{1},\cdots,\sliceTan{N}\}$
\begin{equation}
\sspRed_1\sliceTan{a,1} + \sspRed_2\sliceTan{a,2}
+ \cdots + \sspRed_d\sliceTan{a,d} = 0
\,,
\ee{hyperpl}
see \reffig{fig:slice}\,(a).
{
This hyperplane contains different types of extremal points. For example,
the point \emph{furthest} away from the template \slicep\ also
satisfies the extremal conditions. While group orbits are embedded into
the high-dimensional full state space\ in a highly convoluted manner, this
hyperplane is a linear section through them, a global extension of the
tangent space of \slicep, which can be a good description of the
`similarity' to a template only in a local neighborhood of \slicep. Our
goal is to reduce the symmetry of the flow by slicing the totality of
group orbits by a small set of such neighborhoods, one for each distinct
template, with each group orbit sliced only once. Group orbits close to
\slicep\ cross the hyperplane transversely; the border of the
neighborhood is defined by group orbits that reach the hyperplane
tangentially. In case of a local Poincar\'e section, determination of
such border is a nontrivial task, but as we shall see in
\refsect{sec:singul}, for group orbits this border is easy to determine.
The set of the group orbit points \emph{closest} to the template \slicep\
form an open connected neighborhood
of \slicep, a neighborhood in which each group orbit intersects the
hyperplane \emph{only once}.
As we shall show in \refsect{sec:singul}, this neighborhood is contained in
a half-hyperplane, bounded on one side by the intersection of \refeq{hyperpl}
with its inflection hyperplane.
In what follows we shall refer to this connected open neighborhood
of \slicep\ as a \emph{slice} $\pSRed_{\slicep} \supset \pS/\Group$,
and
to \refeq{PCsectQ} as the \emph{slice conditions}.
}
A slice so defined is
a particular case of symmetry reduction by transverse sections of group
orbits\rf{FelsOlver98,FelsOlver99,OlverInv} that can be traced back to
Cartan's method of moving frames\rf{CartanMF}. \emph{Moving frame} refers to the action
$\LieEl(\gSpace)$ that brings a state space\ point \ssp\ into the slice.
We denote the full state space\ points and velocities by $\ssp$,
$\vel(\ssp)$, and the reduced state space\ points and velocities by $\sspRed$,
$\velRed(\sspRed)$.
In the choice of the template one should avoid solutions
that belong to the invariant or partially symmetric subspaces; for such
choices $\Lg_{a}\slicep=0$, and some or all $\sliceTan{a}$ vanish identically
and impose no slice conditions. The template $\slicep$ should be a
generic state space\ point in the sense that its group orbit has the full
$N$ dimensions of the group \Group. In particular, even though the
simplest solutions (laminar, \etc) often capture important physical
features of a flow, most equilib\-ria\ and short periodic orbit s have nontrivial
symmetries and thus are not suited as choices of symmetry-reducing
templates.
It should also be emphasized that in general a template is \emph{not}
a spatially-localized structure. We are not using translations /
rotations to superimpose a localized, `solitonic' solution over a
localized template. In a strongly nonlinear, turbulent flow a good
template is typically a nontrivial global solution.
In summary: given the minimum Euclidean distance condition, the
point $\sspRed$ in the group orbit of $\ssp$ closest
to the template $\slicep$ lies in a \emph{slice}, a {\em hyperplane}
normal to the group action tangent space $\sliceTan{}$, for any state space\
point $\ssp \in \pS$. {\em Symmetry reduction} by the method of moving frames\ is a
precise rule for how to pick a unique point \sspRed\ for each such
symmetry equivalence class, and compute the \emph{moving frame}
transformation $\ssp =\LieEl(\gSpace)\, \sspRed$ that relates the full
state space\ point $\ssp \in \pS$ to its symmetry reduced representative
$\sspRed \in \pSRed$.
\begin{figure}
\begin{center}
\setlength{\unitlength}{0.40\textwidth}
(a)
\begin{picture}(1,0.87085079)%
\put(0,0){\includegraphics[width=\unitlength]{slice.pdf}}%
\put(0.82835155,0.19007659){\color[rgb]{0,0,0}\rotatebox{-14.84025424}{\makebox(0,0)[lb]{\smash{$\pSRed$}}}}%
\put(0.07077338,0.28688228){\color[rgb]{0,0,0}\rotatebox{0.0313674}{\makebox(0,0)[lb]{\smash{$\LieEl\,\slicep$}}}}%
\put(0.53023327,0.26593335){\color[rgb]{0,0,0}\rotatebox{0.0313674}{\makebox(0,0)[lb]{\smash{$\slicep$}}}}%
\put(0.4284954,0.179285){\color[rgb]{0,0,0}\rotatebox{0.0313674}{\makebox(0,0)[lb]{\smash{$\sliceTan{}$}}}}%
\put(0.00798985,0.42305068){\color[rgb]{0,0,0}\rotatebox{0.0313674}{\makebox(0,0)[lb]{\smash{$\ssp(\tau)$}}}}%
\put(0.65766235,0.45412105){\color[rgb]{0,0,0}\rotatebox{0.0313674}{\makebox(0,0)[lb]{\smash{$\sspRed(\tau)$}}}}%
\put(0.06916446,0.74280851){\color[rgb]{0,0,0}\rotatebox{0.0313674}{\makebox(0,0)[lb]{\smash{$\LieEl(\tau)$}}}}%
\end{picture}%
~~~
(b)
\begin{picture}(1,0.8708158)%
\put(0,0){\includegraphics[width=\unitlength]{inflectHype.pdf}}%
\put(0.7987485,0.62071555){\color[rgb]{0,0,0}\rotatebox{-9.7016004}{\makebox(0,0)[lb]{\smash{$\pSRed$}}}}%
\put(0.38209831,0.39016168){\color[rgb]{0,0,0}\rotatebox{0.0313674}{\makebox(0,0)[lb]{\smash{$\ssp = \LieEl\,\sspRed$}}}}%
\put(0.14952406,0.52632455){\color[rgb]{0,0,0}\rotatebox{0.0313674}{\makebox(0,0)[lb]{\smash{$\sspSing$}}}}%
\put(0.29583374,0.67396856){\color[rgb]{0,0,0}\rotatebox{18.56777832}{\makebox(0,0)[lb]{\smash{$S$}}}}%
\put(0.21967016,0.09601742){\color[rgb]{0,0,0}\rotatebox{0.0313674}{\makebox(0,0)[lb]{\smash{$\Lg^2\slicep$}}}}%
\put(0.70853504,0.20366515){\color[rgb]{0,0,0}\rotatebox{0.0313674}{\makebox(0,0)[lb]{\smash{$\slicep$}}}}%
\put(0.60680126,0.11702028){\color[rgb]{0,0,0}\rotatebox{0.0313674}{\makebox(0,0)[lb]{\smash{$\sliceTan{}$}}}}%
\put(0.60809768,0.50392771){\color[rgb]{0,0,0}\rotatebox{0.0313674}{\makebox(0,0)[lb]{\smash{$\sspRed$}}}}%
\end{picture}%
\end{center}
\caption{\label{fig:slice}
The method of moving frames.
(a)
Slice\ {$\pSRed$ $(\,\,\supset \pS/\Group)$}
{lies in the $(d\!-\!N)$\dmn\ half-}hyperplane
\refeq{PCsectQ} normal to $\sliceTan{}$, where $\sliceTan{}$ is
the $N$\dmn\ tangent to the group orbit $\LieEl\,\slicep$ (dotted line) of the
template point $\slicep$, evaluated at $\slicep$. This is a highly idealized
sketch: A group orbit is an $N$\dmn\ manifold, and even for $\SOn{2}$ it
is usually only topologically a circle, and can intersect a {hyperplane}
any number of times. {Such hyperplane} intersects {\em all} full
state space\ group orbits (indicated by dotted lines here). The full
state space\ trajectory $\ssp(\tau)$ and the reduced state space\ trajectory
$\sspRed(\tau)$ are equivalent up to a `moving frame' rotation
$\ssp(\tau)=\LieEl(\tau)\,\sspRed(\tau)$, where $\LieEl(\tau)$ is
a shorthand for $\LieEl(\gSpace(\tau))$.
(b)
For $\SOn{2}$ two hyperplanes are associated with a given template
\slicep; the slice $\pSRed$, and the hyperplane of points $\sspSing$
normal to the quadratic Casimir-weighted vector $\Lg^2\slicep$, for which
the curvature \refeq{SO2inflPoint} of the distance function
\refeq{minDistance} changes sign. This defines a group-theoretic
boundary of the template neighborhood: For rotation angles $\gSpace$
beyond this boundary the group orbit $\LieEl(\gSpace)\,\ssp$ has left the
neighborhood. The intersection of the two hyperplanes is the {\em inflection hyperplane}
$\sspRSing \in S$, within which all group tangents $\groupTan(\sspRSing)$
point into the slice and are thus normal to $\sliceTan{}$. For the
lack of dimensions, the intersection $S$ is drawn here as a `line,'
the $z$ axis in this 3\dmn\ sketch. $S$ is actually
a $(d\!-\!2)$\dmn\ hyperplane, but that is not easy to visualize.
}%
\end{figure}
\subsection{Computing the moving frame rotation angle}
\label{exam:CLErotAngle}
The idea of reducing a flow with Lie-group structure to a system of a
smaller dimension dates back to Sophus Lie.
Time-evolution and symmetry group actions foliate the state space\ into
$(N\!+\!1)$\dmn\ submanifolds: Given a state (a state space\ point $\ssp(0)$
at time $\tau=0$), we can trace its 1\dmn\ trajectory $\ssp(\tau)$ by
integrating its equations of motion, and its $N$\dmn\ group orbit by
acting on it with the symmetry group \Group. Locally, a continuous time
flow can be reduced by a \PoincSec; a slice does the same for local
neighborhoods of group orbits.
To show how the rotation into the slice\ is computed, consider first the
complex Lorenz equations. Substituting the \SOn{2}\ Lie algebra
generator and a finite angle \SOn{2} rotation \refeq{CLfRots} acting on a
5\dmn\ state space\ into the slice condition \refeq{PCsectQ}
yields
{
\(\braket{\ssp}{\sliceTan{}}\cos\gSpace
-\braket{\groupTan_{}(\ssp)}{\sliceTan{}} \sin\gSpace
= 0
\,,
\)}
and the explicit formula for frame angle $\gSpace$:
\begin{eqnarray}
\tan\gSpace &=&
{\braket{\ssp}{\sliceTan{}}}/
{\braket{\groupTan_{}(\ssp)}{\sliceTan{}}}
\,.
\label{SL:CLEsliceRot}
\end{eqnarray}
The dot product of two tangent fields in \refeq{SL:CLEsliceRot} is a
sum of inner products weighted by Casimirs \refeq{QuadCasimir},
{
\begin{equation}
\braket{\groupTan(\ssp)}{\groupTan(\slicep)}
= \sum_m C_2^{(m)} {\ssp}_i\, \delta_{ij}^{(m)} \slicep_j
\,.
\ee{braket}
}
For the complex Lorenz equations\
$\ssp = (x_1,x_2,y_1,y_2,z)$,
{
$\slicep = (\bar{x}_1',\bar{x}_2',\bar{y}_1',\bar{y}_2',\bar{z}')$,
}
and applying the moving frame condition \refeq{SL:CLEsliceRot} yields
{
\begin{equation}
\tan\gSpace =
\frac{x_1 \bar{x}_2'-x_2 \bar{x}_1'+y_1 \bar{y}_2' -y_2 \bar{y}_1'}
{x_1 \bar{x}'_1+x_2 \bar{x}'_2+y_1 \bar{y}'_1+y_2 \bar{y}'_2}
\,.
\ee{braketCL}
}
This formula is particularly simple, as in the complex Lorenz equations\
example the group acts only through $m=0$ and $m=1$ representations
(in the Fourier mode labeling of \refeq{SO2irrepAlg-Lg}).
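A minimal numerical sketch of the rotation \refeq{SL:CLEsliceRot} into the slice (our own code; \texttt{atan2} is used to select the distance minimum among the two roots):

```python
import numpy as np

# SO(2) generator acting on (x1, x2, y1, y2, z)
T = np.zeros((5, 5))
T[0, 1], T[1, 0], T[2, 3], T[3, 2] = 1.0, -1.0, 1.0, -1.0

def rotation(theta):
    """Finite rotation g(theta) = exp(theta * T)."""
    c, s = np.cos(theta), np.sin(theta)
    blk = np.array([[c, s], [-s, c]])
    g = np.eye(5)
    g[:2, :2] = blk
    g[2:4, 2:4] = blk
    return g

def to_slice(x, xp):
    """Return the slice representative x_red and the moving frame angle theta,
    so that x = g(theta) x_red and <x_red, T xp> = 0."""
    tp = T @ xp
    # atan2 picks the root with positive curvature, i.e. the closest orbit point
    theta = np.arctan2(x @ tp, (T @ x) @ tp)
    return rotation(-theta) @ x, theta
```

Post-processing a trajectory then amounts to calling `to_slice` at every sampled time step.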
Consider next the general form \refeq{SO2irrepAlg-m} of action of an
$\SOn{2}$ symmetry on arbitrary Fourier coefficients of a spatially
periodic function \refeq{FourierExp}. Substituting this into the slice
condition \refeq{PCsectQ}
{
and using $g^{(m)}(\gSpace)=\cos(m\gSpace){\ \hbox{{\rm 1}\kern-.6em\hbox{\rm 1}}}^{(m)} +\sin(m\gSpace)
\frac{1}{m}\Lg^{(m)}$, see \refeq{SO2irrepAlg-m}}, we find that
\begin{eqnarray}
\braket{e^{-\gSpace \Lg}\ssp}{\groupTan(\slicep)}
=\braket{\ssp}{\sum\limits_m \left(\cos(m\gSpace) {\ \hbox{{\rm 1}\kern-.6em\hbox{\rm 1}}}^{(m)}
+\sin(m\gSpace) \frac{1}{m}\Lg^{(m)}\right) \sliceTan{}}
\nonumber \\
{=\sum\limits_m
\left(
\braket{\ssp}{\Lg^{(m)} \slicep} \cos(m\gSpace)
- m\braket{\ssp}{{\ \hbox{{\rm 1}\kern-.6em\hbox{\rm 1}}}^{(m)} \slicep} \sin(m\gSpace)
\right)}
=0
\,.
\label{eq:so2sing}
\end{eqnarray}
This is a polynomial equation, with coefficients determined by
$\braket{\ssp}{\Lg^{(m)} \slicep}$ and $\braket{\ssp}{{\ \hbox{{\rm 1}\kern-.6em\hbox{\rm 1}}}^{(m)}\slicep}$,
as we can see by rewriting $\cos(m\gSpace)$, $\sin(m\gSpace)$ as
polynomials of degree $m$ in $\sin(\gSpace)$ and $\cos(\gSpace)$. Each
phase $\gSpace$ that rotates $\ssp$ into any of the group-orbit
traversals of the slice hyperplane corresponds to a real root of this
polynomial.
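For a general $\SOn{2}$ action the real roots of \refeq{eq:so2sing} can be located numerically, for example by a sign-change scan followed by bisection (a sketch with our own conventions):

```python
import numpy as np

def slice_angles(coeffs, n_grid=720):
    """Real roots on [0, 2*pi) of
        F(t) = sum_m ( a_m cos(m t) - m b_m sin(m t) ),
    with coeffs a list of (m, a_m, b_m) triples, a_m = <x, L^(m) x'> and
    b_m = <x, 1^(m) x'>. Roots are bracketed on a uniform grid and refined
    by bisection."""
    F = lambda t: sum(a * np.cos(m * t) - m * b * np.sin(m * t)
                      for m, a, b in coeffs)
    ts = np.linspace(0.0, 2.0 * np.pi, n_grid + 1)
    roots = []
    for t0, t1 in zip(ts[:-1], ts[1:]):
        f0, f1 = F(t0), F(t1)
        if f0 == 0.0:
            roots.append(t0)
        elif f0 * f1 < 0.0:
            for _ in range(60):              # bisection refinement
                tm = 0.5 * (t0 + t1)
                fm = F(tm)
                if f0 * fm <= 0.0:
                    t1 = tm
                else:
                    t0, f0 = tm, fm
            roots.append(0.5 * (t0 + t1))
    return roots
```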
As a generic group orbit is a smooth $N$\dmn\ manifold embedded in the
$d$\dmn\ state space, several values of $\gSpace$ might be local extrema of
the distance function \refeq{minDistance}.
Our prescription is to pick the closest reduced state space\ point as the unique
representative of the entire group orbit, \ie, determine the global
minimum (infimum) of the distance \refeq{minDistance}.
For example, group orbits of
\SOn{2}\ are topologically circles, and the distance function
has maxima, minima and inflection points as {critical points}:
if \gSpace\ is a solution of the slice condition \refeq{SL:CLEsliceRot}
for complex Lorenz equations,
so is $\gSpace+\pi$. We can pick the closest by noting that
the local minima have positive curvature,
\begin{equation}
\frac{\partial^2}
{\partial \gSpace^2}
|\sspRed - \slicep|^2
=
{- 2 \, \braket{\sspRed}{\Lg^2\slicep}}
\,.
\ee{SO2inflPoint}
For the complex Lorenz equations, this determines which moving frame angle will be used since
the distance function \refeq{minDistance} has
only a minimum and a maximum.
It does not matter
whether the group is compact, for example $\SOn{n}$, or noncompact, for
example the Euclidean group $E_2$ that underlies the generation of spiral
patterns\rf{Barkley94}; in either case any group orbit has one or several
locally closest passages to the template state, and generically only
one that is the closest one.
(Here we focus only on continuous symmetries; discrete symmetries exhibited
by flows such as the Kuramoto-Siva\-shin\-sky\ and {plane Couette flow} will also have to be taken into
account\rf{SCD07,HGC08,DasBuchMirror}.)
However, `picking the closest' point in a group orbit of a pattern very unlike
the template is not necessarily a sensible thing to do; as such a state
evolves in time, distant points along its orbit can come closer to the
template, causing discontinuous jumps in the moving frame angle. We
shall show in \refsect{sec:singul} that this is a generic phenomenon for a
single-slice symmetry reduction, and propose a cure in \refsect{sec:chart}.
In summary, we do not have to compute all zeros of the slice condition
\refeq{PCsectQ}; all we care about is the zero that
corresponds to the shortest distance \refeq{minDistance}.
While post-processing of a full state space\ trajectory $\ssp(\tau_j)$
requires a numerical (Newton method) determination of the
moving frame rotation
$\gSpace(\tau_j)$ at each time step $\tau_j$, the computation is not
as onerous as it might seem, as the knowledge of $\gSpace(\tau_j)$ and
$\groupTan(\sspRed(\tau_j))$
gives us a very good guess for $\gSpace(\tau_{j+1})$. We
can go a step further, and write the equations for the flow restricted to
the reduced state space\ \pSRed.
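The warm-started Newton search described above can be sketched in a few lines of code. The following Python fragment is an illustrative sketch, not code from this work: the 5-dimensional state layout $(x_1,x_2,y_1,y_2,z)$, the sign convention for the $\SOn{2}$ generator, and all function names are assumptions.

```python
import numpy as np

# Illustrative sketch (not from the paper): SO(2) generator acting on the
# complex Lorenz state (x1, x2, y1, y2, z); the z coordinate is invariant.
L = np.zeros((5, 5))
L[0, 1], L[1, 0] = 1.0, -1.0   # rotates the (x1, x2) pair
L[2, 3], L[3, 2] = 1.0, -1.0   # rotates the (y1, y2) pair

def rotate(theta, x):
    """Finite rotation g(theta) x = exp(theta L) x."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(5)
    R[0:2, 0:2] = [[c, s], [-s, c]]
    R[2:4, 2:4] = [[c, s], [-s, c]]
    return R @ x

def moving_frame_angle(x, template, theta0=0.0, tol=1e-12):
    """Newton solve of the slice condition <g(-theta) x, t'> = 0,
    warm-started from the previous time step's angle theta0."""
    tprime = L @ template            # slice tangent t' = L x'
    theta = theta0
    for _ in range(50):
        xhat = rotate(-theta, x)
        f = xhat @ tprime            # slice-condition residual
        df = -(L @ xhat) @ tprime    # d f / d theta
        step = f / df
        theta -= step
        if abs(step) < tol:
            break
    # keep the branch with positive curvature, -2 <xhat, L^2 x'> > 0,
    # equivalently <t(xhat), t'> > 0; the other extremum sits at theta + pi
    if (L @ rotate(-theta, x)) @ tprime < 0.0:
        theta += np.pi
    return theta
```

Because the previous angle seeds the next solve, the Newton iteration typically converges in one or two steps along a smooth trajectory segment.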
\subsection{Dynamics within a slice}
\label{sec:mslices}
Any state space\ trajectory can be written in a factorized
form $\ssp(\tau)=\LieEl(\tau)\,\sspRed(\tau)$
(here $\LieEl(\tau)$ is a shorthand for $\LieEl(\gSpace(\tau))$,
or perhaps even $\LieEl(\gSpace(\ssp(\tau)))$).
Differentiating both sides with respect to time and
setting $\velRed={d\sspRed}/{d\tau}$ we find
\(
\vel(\ssp)=\dot{\LieEl} \, \sspRed+\LieEl \, \velRed(\sspRed)
\,.
\)
By the equivariance \refeq{eq:FiniteRot}
{\[
\vel(\sspRed)=\velRed(\sspRed) + \LieEl^{-1} \, \dot{\LieEl} \, \sspRed
\,.
\]}
Noting that $\LieEl^{-1}\dot{\LieEl}=e^{-\gSpace \cdot \Lg} \,
\frac{d}{d\tau} e^{\gSpace \cdot \Lg}=\dot{\gSpace}\cdot \Lg$,
we obtain the equation for the velocity of the reduced flow:
\begin{equation}
\velRed(\sspRed)=\vel(\sspRed)-\dot{\gSpace}(\sspRed)\cdot \groupTan(\sspRed)
\,.
\ee{eq:redVel}
The velocity $\vel$ in the full state space\ is thus the sum of the
`angular' velocity \refeq{PC:groupTan1} along the group orbit,
{
$\dot{\gSpace} \cdot \groupTan(\sspRed)$,
}
and the remainder $\velRed$.
Eq. \refeq{eq:redVel} is true for any factorization
$\ssp=\LieEl \sspRed$, and by itself provides no
information on how to calculate $\dot{\gSpace}$. That is attained by
demanding that the reduced trajectory stays within a slice, by imposing
the slice conditions \refeq{PCsectQ}:
\begin{equation}
\braket{\vel(\sspRed)}{\sliceTan{a}}
-\braket{\dot{\gSpace}\cdot \groupTan(\sspRed)}{\sliceTan{a}}=0
\,.
\label{eq:slicecondition}
\end{equation}
This is a matrix equation in
$\braket{\groupTan_b(\sspRed)}{\sliceTan{a}}$ that
{
the authors of \refrefs{ahuja_template-based_2007,FiSaScWu96} claim one
can in principle solve for any Lie group. We consider here only the
$\SOn{2}$ case, which has a single group tangent:
}
\begin{eqnarray}
\velRed(\sspRed) &=& \vel(\sspRed)
-\dot{\gSpace}(\sspRed) \, \groupTan(\sspRed)
\nonumber \\
\dot{\gSpace}(\sspRed) &=& {\braket{\vel(\sspRed)}{\sliceTan{}}}/
{\braket{\groupTan(\sspRed)}{\sliceTan{}}}
\,.
\label{eq:so2reduced}
\end{eqnarray}
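A minimal numerical sketch of \refeq{eq:so2reduced} for an $\SOn{2}$-equivariant flow follows; the state layout, generator convention, and the toy velocity field in the check below are illustrative assumptions, not the paper's code. By construction the reduced velocity has no component along the slice tangent, so the reduced trajectory never leaves the slice.

```python
import numpy as np

# Illustrative SO(2) generator for a state (x1, x2, y1, y2, z); z invariant.
L = np.zeros((5, 5))
L[0, 1], L[1, 0] = 1.0, -1.0
L[2, 3], L[3, 2] = 1.0, -1.0

def reduced_velocity(v, xhat, template):
    """SO(2)-reduced flow:
        vhat(xhat)     = v(xhat) - thetadot * t(xhat)
        thetadot(xhat) = <v(xhat), t'> / <t(xhat), t'>
    where t(xhat) = L xhat is the group tangent and t' = L template the
    slice tangent.  Integrating thetadot alongside vhat (the
    reconstruction equation) recovers the full state space trajectory."""
    tprime = L @ template
    t = L @ xhat
    vx = v(xhat)
    thetadot = (vx @ tprime) / (t @ tprime)
    return vx - thetadot * t, thetadot
```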
One way to think about this reduction of a flow to a slice is in terms of
Lagrange multipliers (see Stone and Goldbart\rf{StGo09}, Sect.~1.5 for an
intuitive, geometrical interpretation of Lagrange multipliers). The first
equation defines the flow confined to the slice,
the `shape', `template' or `slice' dynamics\rf{rowley_reduction_2003} (see
\reffig{fig:Fullspace}\,(b), \reffig{fig:slice}\,(a)),
and integration of the second,
`reconstruction' equation\rf{Marsd92,MarsdRat94} enables us to track the
corresponding trajectory in the full state space. For invariant subspaces
$\dot{\gSpace}=0$, so they are always included within the slice. No
information is lost about the physical flow: if we know one point on a
trajectory, we can hop at will back and forth between the reduced
and the full state space\ trajectories, just as we can reconstruct a
continuous trajectory from its \PoincSec s.
At this point it is worth noting that imposing the global and fixed slice
\refeq{PCsectQ} is not the only way to separate equivariant dynamics into
`group dynamics' and `shape' dynamics\rf{BeTh04}. In modern mechanics
and even field theory (where elimination of group-directions is called
`gauge-fixing') it is natural to separate the flow {\em locally} into group dynamics
and a transverse, `horizontal' flow\rf{Smale70I,AbrMars78}, by the
`method of connections'\rf{rowley_reduction_2003}. From our point of
view, such approaches are not useful, as they do not reduce the dynamics
to a lower-dimensional reduced state space\ $\pS/\Group$.
\section{Inflection hyperplane}
\label{sec:singul}
If two patterns are close, their group orbits are nearly parallel, and
$\braket{\groupTan(\ssp)}{\sliceTan{}} \neq 0$. Hence a slice is
transverse to all group orbits in an open neighborhood of the template
\slicep, but not so {\em globally}. As we go away from
the template point, the angles of the group orbit traversals
can decrease all the way to zero, until their group tangents lie in the slice.
This set of points defines a purely group-theoretic boundary of
the template's neighborhood (every point has a group orbit; the dynamics
plays no role here, only the notion of distance).
Furthermore, whenever the group tangent of the reduced state space\ trajectory
points into the slice, the denominator in \refeq{eq:so2reduced} vanishes
and the angular velocity $\dot{\gSpace}$ is not defined. We now show that these
singularities (a) also lie in a hyperplane, determined by the symmetry
group alone, and (b) induce computable jumps in the reduced state space\
trajectory.
Two hyperplanes sketched in \reffig{fig:slice}\,(b) are associated with
any given template \slicep: the slice \refeq{PCsectQ}, and the
hyperplane of points \sspSing\ defined by being normal to the quadratic
Casimir-weighted vector $\Lg^2\slicep$, such that from the template
vantage point their group orbits are not transverse, but locally
`horizontal,'
\begin{equation}
\braket{\groupTan(\sspSing)}{\sliceTan{}}
=
{-\braket{\sspSing}{\Lg^2\slicep}}
=0
\ee{sliceSingl0}
(for simplicity, in this section we specialize to the $\SOn{2}$ case).
We shall refer to the
intersection of the two as the
{\em inflection hyperplane} $S$, \ie, the set of all points $\sspRSing$ which are both
{(a)} in the {slice}, and {(b)} whose group tangent $\groupTan(\sspRSing)$
is also in the {slice}:
\begin{eqnarray}
\braket{\sspRSing}{\sliceTan{}}&=&0 \nonumber \\
\braket{\groupTan(\sspRSing)}{\sliceTan{}}
&=&
{-\braket{\sspRSing}{\Lg^2\slicep}}
=0
\label{sliceSingl}
\end{eqnarray}
(this is called `singular set' in \refref{SiCvi10}).
Looking back at \refeq{SO2inflPoint}, we see that $S$ is the locus of
inflection points, a hyperplane through which the curvature of the
distance function changes sign, and a local minimum turns into a local
maximum. For example, for the complex Lorenz equations\ $\sspRSing =
(x_1^*,x_2^*,y_1^*,y_2^*,z^*)$, $\slicep = (x_1',x_2',y_1',y_2',z')$, and
the 3\dmn\ inflection hyperplane $\sspRSing \in S \subset \pSRed$ is given by the vanishing
denominator in \refeq{braketCL}:
\[
0 = {x_1^* x'_1+x_2^* x'_2+y_1^* y'_1+y_2^* y'_2}
\,.
\]
The inflection hyperplane $S$ is purely an artifact of the choice of a template,
and has nothing to do with the dynamics; whenever the full state space\
trajectory crosses an inflection hyperplane, it pays it no heed whatsoever.
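For the $\SOn{2}$ case the two conditions \refeq{sliceSingl} defining the inflection hyperplane are easy to evaluate numerically. The following sketch is illustrative only; the state layout and generator convention are assumptions, not the paper's code.

```python
import numpy as np

# Illustrative SO(2) generator for a state (x1, x2, y1, y2, z); z invariant.
L = np.zeros((5, 5))
L[0, 1], L[1, 0] = 1.0, -1.0
L[2, 3], L[3, 2] = 1.0, -1.0

def inflection_residuals(xhat, template):
    """Residuals of the two conditions defining the inflection hyperplane S:
       (a) slice membership:       <xhat, t'> = 0
       (b) horizontal group orbit: <t(xhat), t'> = -<xhat, L^2 x'>
                                   = x1 x1' + x2 x2' + y1 y1' + y2 y2' = 0.
    xhat lies on S when both residuals vanish."""
    tprime = L @ template
    slice_res = xhat @ tprime
    horiz_res = (L @ xhat) @ tprime   # equals xhat[:4] @ template[:4] here
    return slice_res, horiz_res
```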
We next show that the singularity in
the formula \refeq{SL:CLEsliceRot} causes a discontinuous jump in the
moving frame angle $\gSpace$. Consider a full state space\ trajectory
$\ssp(\tau)$ that passes through the inflection hyperplane \refeq{sliceSingl} at time $\tau^*$,
$\sspRSing =\ssp(\tau^*)$, which we shall set to $\tau^*=0$. At that
instant the moving frame angle $\gSpace$ is formally undefined: the
numerator $\braket{\sspRSing}{\sliceTan{}}$ in \refeq{SL:CLEsliceRot}
vanishes ($\sspRSing$ is in the slice and satisfies the slice condition
\refeq{PCsectQ}), and the denominator
${\braket{\groupTan_{}(\sspRSing)}{\sliceTan{}}}$ vanishes by
\refeq{sliceSingl}. Nevertheless, the trajectory going through the singularity
is well defined, as in the linear approximation the numerator and the
denominator in the moving frame formula \refeq{SL:CLEsliceRot} are given
by
\begin{eqnarray}
\braket{\ssp}{\sliceTan{}}
&=&
\braket{(\sspRSing+\vel(\sspRSing) \, \tau )}
{\sliceTan{}}
\,=\, \braket{\vel(\sspRSing)}
{\sliceTan{}} \, \tau
\label{singSetVelo}\\
\braket{\groupTan(\ssp)}{\sliceTan{}}
&=&
\braket{\groupTan(\sspRSing+\vel(\sspRSing) \, \tau )}
{\sliceTan{}}
\,=\, \braket{\groupTan(\vel(\sspRSing))}
{\sliceTan{}} \, \tau
\,.
\label{singSetSign}
\end{eqnarray}
In other words, the moving frame rotates the state space\ point $\ssp$ and
the velocity $\vel(\ssp)$ by the same angle, so either can be used to
compute it. The shortest distance condition demands that we pick the
solution with positive curvature \refeq{SO2inflPoint}, so
$\braket{\groupTan(\ssp)}{\sliceTan{}}\geq 0$
for all $\tau$. As the trajectory traverses $\sspRSing$, the time $\tau$
changes the sign from negative to positive; hence we must switch from the
solution $\sspRSing$ to another extremum for which
$\braket{\groupTan(\vel(\sspRSing))}{\sliceTan{}}$ is positive. In the
complex Lorenz flow\ example there are only two extrema $\{\gSpace,\gSpace+\pi\}$,
so the moving frame angle $\gSpace$ jumps discontinuously by $\pi$.
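In a numerical implementation this branch switch reduces to a sign test on the denominator: of the two extrema $\{\gSpace,\gSpace+\pi\}$ one keeps the one with $\braket{\groupTan(\sspRed)}{\sliceTan{}} \geq 0$. A sketch, with the $\SOn{2}$ generator convention and all names as illustrative assumptions:

```python
import numpy as np

# Illustrative SO(2) generator for a state (x1, x2, y1, y2, z); z invariant.
L = np.zeros((5, 5))
L[0, 1], L[1, 0] = 1.0, -1.0
L[2, 3], L[3, 2] = 1.0, -1.0

def rotate(theta, x):
    """Finite rotation g(theta) x = exp(theta L) x."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(5)
    R[0:2, 0:2] = [[c, s], [-s, c]]
    R[2:4, 2:4] = [[c, s], [-s, c]]
    return R @ x

def select_branch(theta, x, template):
    """Of the two extrema {theta, theta + pi} of the distance function,
    keep the one with positive curvature, i.e. <t(g(-theta) x), t'> >= 0.
    As the trajectory crosses the inflection hyperplane this sign flips,
    so the selected moving frame angle jumps discontinuously by pi."""
    tprime = L @ template
    if (L @ rotate(-theta, x)) @ tprime < 0.0:
        theta += np.pi
    return theta
```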
Within the reduced state space\ an inflection hyperplane crossing has a dramatic effect: the
reduced state space\ flow $\sspRed(\tau)$ \emph{jumps} whenever it crosses the
inflection hyperplane $\sspRSing$, by an amount that we now compute.
Suppose that a reduced state space\ trajectory passes through a singularity
$\sspRSing$ at time $\tau=\tau^*$. At
that instant the numerator $\braket{\vel(\sspRSing)}{\sliceTan{a}}$ is
(generically) finite, but as the group tangent of the point $\sspRSing$
lies in the slice, ${\braket{\groupTan_{}(\sspRSing)}{\sliceTan{}}}=0$
by \refeq{sliceSingl}, the denominator in \refeq{eq:so2reduced} vanishes,
and the {angular velocity} $\dot{\gSpace}$ shoots off to infinity.
\begin{figure}
\begin{center}
(a) \includegraphics[width=0.35\textwidth]{dthetasing}%
~~
(b) \includegraphics[width=0.35\textwidth]{dthetanearsing}%
\end{center}
\caption{\label{fig:dthetasing}
The {angular velocity} $\dot{\gSpace}$ for two complex Lorenz flow\
reduced state space\ trajectories in a slice defined by the
template $\slicep$ given in \refeq{exmplTempl}:
(a) As the trajectory $\sspRed(\tau)$ passes through the
singular point $\sspRSing$ given in \refeq{exmplTempl},
the {angular velocity} diverges
$\dot{\gSpace} \to \infty$ as a Dirac delta function.
(b) The {angular velocity} for a nearby trajectory going
through $\sspRSing+\delta \ssp$,
$\delta\ssp=(0.01,0,0,0,0)$ exhibits a large
but finite excursion close to the singularity.
}%
\end{figure}
As an illustration of such a jump, consider a blow-up of the small rectangle indicated
in the reduced state space\ flow of \reffig{fig:Fullspace}\,(b). Here the template
\begin{eqnarray}
\slicep &=& (0.887846,-0.150461,0.4,-0.12,0)
\nonumber \\
\sliceTan{} &=& (0.150461,0.887846,0.12,0.4,0)
\label{exmplTempl} \\
\sspRSing &=& (-0.889135, -0.0401956, 1.91332, -0.150327, 24.4436)
\nonumber
\end{eqnarray}
was reverse-engineered, by picking a point $\sspRSing = \ssp(0)$ from a
segment of the full state space\ ergodic trajectory $\ssp(\tau)$ and then
computing \slicep\ such that $\sspRSing$ lies in the inflection hyperplane.
{
As the trajectory $\sspRed(\tau)$ passes through $\sspRSing$, the moving
frame $\gSpace$ jumps by $\pi$. Any trajectory nearby $\ssp(\tau)$ in the
full space (for example, the red/dashed trajectory in \reffig{fig:singpass}\,(a)) is
assigned a nearby $\gSpace$ both before and after $\sspRed(\tau)$ passes
through $\sspRSing$. The closer the trajectory is to $\ssp(\tau)$ in the
full space, the shorter the time interval where its moving frame differs
significantly from $\ssp(\tau)$'s. If its symmetry-\-reduced trajectory does not
pass through the inflection hyperplane $S$, then $\gSpace$ is continuous and
must change by $\pi$ in a short interval of time that shrinks the closer
the trajectory is to $\ssp(\tau)$. Hence,
}
as the trajectory $\sspRed(\tau)$ passes through $\sspRSing$, the angular velocity
diverges $\dot{\gSpace} \to \infty$ as a Dirac delta function,
\reffig{fig:dthetasing}\,(a), and the reduced state space\ trajectory goes through
the inflection \refeq{sliceSingl} and jumps to the $\pi$-rotated extremum
of the distance function, \reffig{fig:singpass}\,(a).
The inflection hyperplane $S$ is the intersection of two hyperplanes: (1) the slice
(the shortest distance from the group orbit to the template), and (2)
the closest inflection in the distance function \refeq{sliceSingl}. While
all group orbits of a generic trajectory cross the slice, the trajectory
has vanishing probability to cross the lower-dimensional inflection hyperplane; that
is why we had to `engineer' the slice \refeq{exmplTempl}. However, an
ergodic trajectory might come arbitrarily close to $S$ arbitrarily
often. Such nearby reduced state space\ trajectories exhibit large angular velocities
$\dot{\gSpace}$, \reffig{fig:dthetasing}\,(b), and very fast, nearly
semi-circular excursions close to the singularity,
\reffig{fig:singpass}\,(a). Which segment of the group orbit they follow
depends on the side from which the trajectory approached the inflection hyperplane.
\begin{figure}
\begin{center}
(a) \includegraphics[width=0.42\textwidth]{singpass1}%
~~~~~~~~
(b)~ \includegraphics[width=0.40\textwidth]{f_1_08_1}
\end{center}
\caption{\label{fig:singpass}
{(color online).}
(a)
Blow-up of a jump in \reffig{fig:Fullspace}\,(b), indicated by a small
rectangle.
{(blue/full line)}
A trajectory that passes through the singular point
$\sspRSing$ given in \refeq{exmplTempl}. Note the instantaneous jump in
the trajectory, caused by the divergence in velocity
(\reffig{fig:dthetasing}\,(a)) as the trajectory traverses the inflection hyperplane.
The neighboring red/dashed trajectory going through $\sspRSing+\delta
\ssp$, $\delta \ssp =(0,0.025,0,0,0)$, makes a rapid transit around the
singularity. The {green/dotted} trajectory is the group orbit of $\sspRSing$
between the two $\gSpace$ that rotate $v(\sspRSing)$ in the slice. Note
also how the red/dashed trajectory begins near the {blue/full}
trajectory, closely follows the {green/dotted} trajectory after the singularity
point, reaches the other side of the {green/dotted} arc and then resumes
closely following the {blue/full} trajectory.
(b)
Smooth dynamics (left frame) tessellated by the skeleton of periodic
points, together with their linearized neighborhoods (right frame).
Indicated are segments of two 1-cycles and a 2-cycle that alternates
between the neighborhoods of the two 1-cycles, shadowing first one, and
then the other
(from \wwwcb{}).
}%
\end{figure}
In summary, whenever a reduced state space\ trajectory crosses the inflection hyperplane, it
jumps instantaneously and discontinuously to the new group orbit point
with the shortest distance to the template. We
have shown that these jumps are harmless and theoretically under control.
Nearby trajectories are numerically under control if sufficient care is
taken to deal with large angular velocities. But they are an artifact of the
method of slices, of no dynamical significance, and an uncalled-for numerical
nuisance. We now outline a strategy for avoiding them altogether by a
clever choice of a {\em set} of templates.
\section{Charting the reduced state space}
\label{sec:chart}
So far, the good news is that for a generic template $\slicep$ (\ie,
any $\slicep$ whose group orbit has the full $N$ dimensions of the
symmetry group \Group), the slice hyperplane \refeq{PCsectQ} cuts across
the group orbit of {\em every} point in the full state space\ \pS. But is
this a useful symmetry reduction of the full state space? A distant pattern
that is a bad match to a given template will have any number of
locally `minimal' distances, each yet another bad match. Physically it
makes no sense to use a single slice (the set of all group orbit points
that are closest to one given template) globally.
Work on Kuramoto-Siva\-shin\-sky\ and the work of Rowley and
Marsden\rf{rowley_reconstruction_2000} suggest how to proceed: it was
shown in \refrefs{lanCvit07,SCD07} that for turbulent/chaotic systems a
set of Poincar\'e sections is needed to capture the dynamics. The choice
of sections should reflect the dynamically dominant patterns seen in the
solutions of nonlinear PDEs. We propose to construct a global atlas of
the symmetry reduced state space\ $\pS/\Group$ by deploying both slices and
linear Poincar\'e sections across neighborhoods of the qualitatively most
important patterns, taking care that the templates chosen have no
symmetry. Each slice $\pSRed{}^{(j)}$, tangent to one of a finite
number of templates $\slicep{}^{(j)}$, provides a local chart for a
neighborhood of an important, qualitatively distinct class of solutions
(2-roll states, 3-roll states, \etc); together they `Voronoi'
tessellate the curved manifold in which the reduced strange attractor is
embedded by a finite set of hyperplane
tiles\rf{rowley_reconstruction_2000,RoSa00}. This is the symmetry-reduced
generalization of the idea of {state space\ tessellation} by a set of
periodic-orbits, so dear to a professional cyclist,
\reffig{fig:singpass}\,(b).
So how do we propose to implement this tessellation?
The physical task is, for a given dynamical flow, to pick a set of
qualitatively distinct templates whose slices are locally tangent to
the strange attractor. A `slice' is a purely group-theoretic, linear
construct, with no reference to dynamics; a given template
$\slicep{}^{(1)}$ defines the associated slice $\pSRed$, a
($d\!-\!1$)\dmn\ tangent hyperplane (for simplicity, in this section we
specialize to the $\SOn{2}$ case). Within it, there is a ($d\!-\!2$)\dmn\
inflection hyperplane \refeq{sliceSingl}. If we pick another template point
$\slicep{}^{(2)}$, it comes along with its own slice and inflection hyperplane. Any
neighboring pair of $(d\!-\!1)$\dmn\ slices intersects in a `ridge'
(`boundary,' `edge'), a $(d\!-\!2)$\dmn\ hyperplane, easy to compute.
A global atlas so constructed should be sufficiently
fine-grained that we never hit any inflection hyperplane singularities. The inflection hyperplanes
should be eliminated by requiring that they lie either on the far sides
of the slice-slice intersections, or elsewhere where the strange
attractor does not tread. Each `chart' or `tile,' bounded by ridges to
neighboring slices, should be sufficiently small that the inflection hyperplane is
nowhere within the part of the slice explored by the strange attractor.
Follow an ant as it traces out a symmetry-reduced trajectory
$\sspRed{}^{(1)}(\tau)$, confined to the slice $\pSRed{}^{(1)}$. The
moment $\braket{\sspRed{}^{(1)}(\tau)}{\sliceTan{}{}^{(2)}}$ changes
sign, the ant has crossed the ridge, we symmetry-reduce with respect to
the second slice, and the ant continues its merry stroll within the
$\pSRed{}^{(2)}$ slice. Or, if you prefer to track the given full
state space\ trajectory $\ssp(\tau)$, you compute the moving-frame angle
with respect to each (global) slice, and check to which tile the
given group orbit belongs.
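The chart-switching bookkeeping amounts to one sign test per ridge. A sketch, with the $\SOn{2}$ generator convention, the templates, and the ridge test as illustrative assumptions:

```python
import numpy as np

# Illustrative SO(2) generator for a state (x1, x2, y1, y2, z); z invariant.
L = np.zeros((5, 5))
L[0, 1], L[1, 0] = 1.0, -1.0
L[2, 3], L[3, 2] = 1.0, -1.0

def crossed_ridge(xhat_prev, xhat_now, other_template):
    """The reduced trajectory in chart (1) has crossed the ridge into the
    neighboring chart (2) when <xhat^{(1)}, t'^{(2)}> changes sign between
    successive samples; the integration then continues in slice (2)."""
    tprime2 = L @ other_template
    return bool((xhat_prev @ tprime2) * (xhat_now @ tprime2) < 0.0)
```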
What about the fixed-point subspace $\pS_\Group$ (see \refeq{dscr:InvPoints})?
Because of it, the action of \Group\ is globally neither free nor proper,
\etc. All intersections of slices, ridges and inflection hyperplanes contain the
fixed-point subspace $\pS_\Group$. Should we worry?
{
There are spurious singularities that are artifacts of a linear slice,
described by the associated inflection hyperplane, and there are genuine, symmetry-induced
singularities, such as the embedding of an invariant subspace in the full
state space\ (here the $z$-axis). An inflection hyperplane includes the invariant subspace and
cannot `cure' those. Indeed, we have tried to construct an example of a
two-slice chart, but for complex Lorenz equations\ we have not been able to find a good one.
As the trajectory approaches the $z$-axis from various directions, we
have not found a way to choose two slices such that the group orbit
is tangent to one but not to the other. This is not a serious problem, as
the Poincar\'e section and the associated return map\rf{SiCvi10}
can be chosen to lie away from either inflection hyperplane.
}
The objective of the method of slices\ is to freeze\rf{BeTh04} all equivariant
coordinates; once frozen, they together with the $\pS_\Group$
coordinates span the symmetry-reduced state space.
There is a rub, though: you need to know how to pick the phases of
neighboring templates. This is a reflection of the flaw inherent in use
of a slice hyperplane globally: a slice is derived from the Euclidean
notion of distance, but for nonlinear flows the distance has to be
measured curvilinearly, along unstable
manifolds\rf{Christiansen97,DasBuch}. We nevertheless have to stick with
tessellation by linearized tangent spaces, as curvilinear charts appear
computationally prohibitive. Perhaps a glance at
\reffig{fig:singpass}\,(b) helps visualize the problem; imagine that the
trajectories drawn are group orbits, and that the tiles belong to the
slices through template points on these orbits. One could slide
templates along their group orbits until the lengths of the straight line
segments connecting neighboring template points are minimized, but
that is not physical: one would like the dynamical trajectories to cross
ridges as continuously as possible. So how is one to pick the phases of
the templates? The phase of the first template is free, but
the moving frame transformation \refeq{sspOrbit} is global, and can be
applied only once. The choice of the first template thus fixes all {\em
relative phases} of the succeeding templates, as was demonstrated in
\refref{SCD07}: the universe of all other solutions is rigidly fixed
through a web of heteroclinic connections between them. This insight,
garnered from study of a 1-dimensional Kuramoto-Siva\-shin\-sky\ PDE, is more remarkable still
when applied to the plane Couette flow\rf{GHCW07}, with 3-$d$ velocity
fields and two translational symmetries. The {\em relative phase} between
two templates is thus fixed by the shortest heteroclinic connection,
a rigid bridge from one neighborhood to the next. Once the relative phase
between the templates is fixed, so are their slices,
\ie, their tangent hyperplanes, and their intersection, \ie, the ridge
joining them.
\section{What lies ahead}
\label{sec:concl}
Many physically important spatially-extended and fluid dynamics systems
exhibit continuous symmetries. For example, excitable
media\rf{ZaZha70,Winfree73,Winfree1980,BaKnTu90,Barkley94}, Kuramoto-Siva\-shin\-sky\
flow\rf{ku,siv,SCD07}, {plane Couette flow}\rf{Visw07b,GHCW07,HGC08,GibsonMovies}, and
pipe flow\rf{Wk04,Kerswell05} are invariant (equivariant) under
combinations of translational (Euclidean), rotational and discrete
symmetries. If a physical problem has a symmetry, one should use it: one
does not want to compute the same solution over and over; all one needs
is to pick one representative solution per symmetry-related
equivalence class. Such a procedure is called symmetry reduction. In this
paper we have investigated symmetry reduction by the method of slices, a linear
procedure particularly simple and practical to implement, and answered
affirmatively the two main questions about the method:
(1) does a slice cut the group orbit of \emph{every} point in the
dynamical state space?
(2) can one deal with the inflection hyperplanes that the method necessarily
introduces?
We have shown here that a symmetry-reduced trajectory passes through such
singularities through computable jumps, a numerical nuisance, but one that
causes no conceptual difficulty. However, while a slice intersects each group
orbit in a neighborhood of a template only once, extended globally any
slice intersects every group orbit multiple times. So even though every
slice cuts all group orbits, it makes no sense physically to use one
slice (a set of \emph{all} group orbit points that are closest to a given
template) globally. We propose instead to construct a global atlas by
deploying sets of slices and linear Poincar\'e sections as charts of
neighborhoods of the most important (relative) equilibria and/or
(relative) periodic orbits.
Such a global atlas should be sufficiently fine-grained that an
unstable, ergodic trajectory never gets too close to any of the inflection
hyperplanes. Why does this proposal have none of the elegance of, let's say,
the Killing-Cartan classification of simple Lie algebras? Why is this
symmetry reduction purely a numerical procedure, rather than an analytic
change of equivariant coordinates to invariant ones? The theory of
\emph{linear} representations of compact Lie groups is a well developed
subject, but the role of symmetries in \emph{nonlinear} settings is
altogether another story. It is natural to express a dynamical system
with a symmetry in the symmetry's linear eigenfunction basis (let us
say, Fourier modes), but for a nonlinear flow different modes are
strongly coupled, and group orbits embedded in such coordinate bases can
be highly convoluted, in ways that no single global slice
hyperplane can handle intelligibly.
It should be emphasized that the atlas so constructed retains the
dimensionality of the original problem. The full dynamics is faithfully
retained, we are \emph{not} constructing a lower-dimensional model of the
dynamics. Neighborhoods of unstable equilib\-ria\ and periodic orbits are dominated by
their unstable and least contracting stable eigenvalues and are, for all
practical purposes, low-dimensional. Traversals of the ridges are,
however, higher dimensional. For example, crossing from the neighborhood
of a two-rolls state into the neighborhood of a three-rolls state entails
going through a pattern `defect,' a rapid transient whose precise
description requires many Fourier modes. Nevertheless, the recent
progress on separation of `physical' and `hyperbolically isolated'
covariant Lyapunov
vectors\rf{PoGiYaMa06,ginelli-2007-99,YaTaGiChRa08,TaGiCh09} gives us
hope that the proposed atlas could provide a systematic and controllable
framework for construction of lower-dimensional models of `turbulent'
dynamics of dissipative PDEs.
While it has been demonstrated in \refref{SiCvi10} that the method of moving frames,
with a judicious choice of the template and {\PoincSec}, works for a
system as simple as the complex Lorenz flow, one still has to show that the method can
be implemented for a truly high-dimensional flow.
Siminos\rf{SiminosThesis} has used a modified method of moving frames\ to compute
analytically a 128\dmn\ invariant basis for the reduced state space\ $\pSRed
= \pS/\SOn{2}$, and shown that the unstable manifolds of rela\-ti\-ve equilib\-ria\ play
a surprisingly important role in organizing the geometry of Kuramoto-Siva\-shin\-sky.
In \refref{SCD07} it
was found that the coexistence of four equilib\-ria, two rela\-ti\-ve equilib\-ria\
(traveling waves) and a
nested fixed-point subspace\ structure in an effectively $8$-dimensional Kuramoto-Siva\-shin\-sky\ system
complicates matters sufficiently that no symmetry reduction by the method of slices\ has been
attempted so far.
More importantly, a symmetry reduction of pipe flows, which
due to the translational symmetry have only relative (traveling)
solutions, remains an outstanding challenge\rf{ACHKW11}.
\medskip
\noindent{\bf Acknowledgments}
We sought in vain Phil Morrison's sage counsel on how to reduce
symmetries, but none was forthcoming; hence this article. We are
grateful to
D.~Barkley,
W.-J.~Beyn,
C.~Chandre,
K.A.~Mitchell,
B.~Sandstede,
R.~Wilczak,
and in particular E.~Siminos and R.L.~Davidchack
for spirited exchanges.
S.F.'s work was supported by the National Science Foundation grant
DMR~0820054 and a Georgia Tech President's Undergraduate Research Award.
P.C. thanks Glen Robinson Jr. for support.
\newcommand{\leftexp}[2]{{\vphantom{#2}}^{#1}{#2}}
\newcommand{\leftsub}[2]{{\vphantom{#2}}_{#1}{#2}}
\newcommand{\lj}[2]{{#1}_{#2}}
\newcommand{\lJ}[2]{{#1}^{#2}}
\newcommand{\rj}[2]{\leftsub{#2}{#1}}
\newcommand{\rJ}[2]{\leftexp{#2}{#1}}
\newcommand{\text{\rm Res}}{\text{\rm Res}}
\renewcommand{\S}{\ensuremath{\mathcal{S}}}
\newcommand{\GA}{\ensuremath{\ensuremath{\mathbb{Q}}(\S_n)}}
\newcommand{\ensuremath{\otimes}}{\ensuremath{\otimes}}
\newcommand{\ensuremath{\mathscr{C}}}{\ensuremath{C'}}
\newcommand{\ensuremath{{\tilde{C'}}\negmedspace}}{\ensuremath{{\tilde{C'}}\negmedspace}}
\newcommand{\ensuremath{\tilde{P}}}{\ensuremath{\tilde{P}}}
\newcommand{\ensuremath{\widetilde{E}}}{\ensuremath{\widetilde{E}}}
\newcommand{\ensuremath{\widetilde{F}}}{\ensuremath{\widetilde{F}}}
\newcommand{\ensuremath{\widetilde{T}}}{\ensuremath{\widetilde{T}}}
\newcommand{\ensuremath{\widehat{E}}}{\ensuremath{\widehat{E}}}
\newcommand{\ensuremath{\widehat{F}}}{\ensuremath{\widehat{F}}}
\newcommand{\small \ensuremath{\boxtimes}}{\small \ensuremath{\boxtimes}}
\newcommand{\HJ}{\ensuremath{{\H_{J}}}}
\newcommand{\br}[1]{\ensuremath{\overline{#1}}}
\renewcommand{\u}{\ensuremath{u}}
\newcommand{\ensuremath{u^{-1}}} %q^{-1/2}{\ensuremath{u^{-1}}}
\newcommand{\ensuremath{\tilde{T}}}{\ensuremath{\tilde{T}}}
\newcommand{\coprodsub}[1]{{\coprod\vphantom{#1}}_{#1}}
\newcommand{\ensuremath{\mathfrak{C}}}{\ensuremath{\mathfrak{C}}}
\newcommand{\ensuremath{\mathcal{T}}}{\ensuremath{\mathcal{T}}}
\newcommand{\ensuremath{\mathcal{IC}}}{\ensuremath{\mathcal{IC}}}
\newcommand{\ensuremath{{W_e}}}{\ensuremath{{W_e}}}
\newcommand{\ensuremath{{W_e^+}}}{\ensuremath{{W_e^+}}}
\newcommand{\eH}{\ensuremath{\widehat{\H}}}
\newcommand{\pH}{\ensuremath{\widehat{\H}^{+}}}
\newcommand{\ensuremath{Y_{\geq 0}}}{\ensuremath{Y_{\geq 0}}}
\newcommand{\ensuremath{\widetilde{\H}}}{\ensuremath{\widetilde{\H}}}
\newcommand{\widehat{\S}}{\widehat{\S}}
\newcommand{\widehat{\S}^+}{\widehat{\S}^+}
\newcommand{\ensuremath{{W_a}}}{\ensuremath{{W_a}}}
\newcommand{\text{\rm jdt}}{\text{\rm jdt}}
\newcommand{\text{\rm reading}}{\text{\rm reading}}
\newcommand{\text{\rm sh}}{\text{\rm sh}}
\newcommand{\text{\rm evac}}{\text{\rm evac}}
\newcommand{\ensuremath{tS^{2}V}}{\ensuremath{tS^{2}V}}
\newcommand{\ensuremath{Z^2}}{\ensuremath{Z^2}}
\newcommand{\ensuremath{\widehat{tS^{2}}_{\text{red}}V}}{\ensuremath{\widehat{tS^{2}}_{\text{red}}V}}
\newcommand{\ensuremath{S^2_{\text{red}}V}}{\ensuremath{S^2_{\text{red}}V}}
\newcommand{\ensuremath{t\Lambda^2 V}}{\ensuremath{t\Lambda^2 V}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\renewcommand{\ng}{\text{-}}
\newcommand{\ensuremath{\text{\ng 1}}}{\ensuremath{\text{\ng 1}}}
\newcommand{\ensuremath{\text{\ng 4}}}{\ensuremath{\text{\ng 4}}}
\newcommand{\alpha}{\alpha}
\newcommand{\beta}{\beta}
\newcommand{\ensuremath{\leftexp{J}{W}_{2}}}{\ensuremath{\leftexp{J}{W}_{2}}}
\newcommand{\ensuremath{W_{1}\stackrel{J}{\times} {W}_{2}}}{\ensuremath{W_{1}\stackrel{J}{\times} {W}_{2}}}
\newcommand{\ensuremath{\H_1\tsr_J \H_{2}}}{\ensuremath{\H_1\ensuremath{\otimes}_J \H_{2}}}
\newcommand{\ensuremath{W_{1}\stackrel{J_1}{\times} \ldots\stackrel{J_{d-1}}{\times}W_{d}}}{\ensuremath{W_{1}\stackrel{J_1}{\times} \ldots\stackrel{J_{d-1}}{\times}W_{d}}}
\newcommand{\ensuremath{\H_{1}\tsr_{J_1}\ldots\tsr_{J_{d-1}}\H_{d}}}{\ensuremath{\H_{1}\ensuremath{\otimes}_{J_1}\ldots\ensuremath{\otimes}_{J_{d-1}}\H_{d}}}
\begin{document}
\author{Jonah Blasiak}
\title{$W$-graph versions of tensoring with the $\S_n$ defining representation}
\begin{abstract}
We further develop the theory of inducing $W$-graphs worked out by Howlett and Yin in \cite{HY1}, \cite{HY2}, focusing on the case $W = \S_n$. Our main application is to give two $W$-graph versions of tensoring with the $\S_n$ defining representation $V$, one being $\H \ensuremath{\otimes}_{\H_J} -$ for $\H, \H_J$ the Hecke algebras of $\S_n, \S_{n-1}$ and the other $(\pH \ensuremath{\otimes}_{\H} -)_1$, where $\pH$ is a subalgebra of the extended affine Hecke algebra and the subscript signifies taking the degree 1 part. We look at the corresponding $W$-graph versions of the projection $V \ensuremath{\otimes} V \ensuremath{\otimes} - \to S^2 V \ensuremath{\otimes} -$. This does not send canonical basis elements to canonical basis elements, but we show that it approximates doing so as the Hecke algebra parameter $\u \to 0$. We make this approximation combinatorially explicit by determining it on cells. Also of interest is a combinatorial conjecture stating that the restriction of $\H$ to $\H_J$ is ``weakly multiplicity-free'' for $|J| = n-1$, and a partial determination of the map $\H \ensuremath{\otimes}_{\H_J} \H \xrightarrow{\beta} \H$ on canonical basis elements, where $\beta$ is the counit of adjunction.
\end{abstract}
\maketitle
\section{Introduction}
The polynomial ring $R := \ensuremath{\mathbb{C}}[x_1,\ldots,x_n]$ is well understood as a $\ensuremath{\mathbb{C}}\S_n$-module, but how this $\ensuremath{\mathbb{C}}\S_n$-module structure is compatible with the structure of $R$ as a module over itself is not.
This work came about from an attempt to construct a combinatorial model for $R$ as a $\ensuremath{\mathbb{C}}\S_n$-submodule that takes into account multiplication by the $x_i$. The hope is that such a model would lead to a better understanding of the Garsia-Procesi modules, particularly the combinatorics of cyclage and catabolism. We might also hope to find modules corresponding to the $k$-atoms of Lascoux, Lapointe, and Morse and uncover combinatorics that governs them.
Such a model might look something like this: decompose the tensor algebra $TV$ into canonically chosen
irreducible $\ensuremath{\mathbb{C}} \S_n$-submodules, where $V$ is the degree 1 part of $R$.
Define a poset in which an irreducible $E'$ is less than an irreducible $E$ if $E' \subseteq V\ensuremath{\otimes} E$. Somehow project this picture onto a canonical decomposition of $R$ into $\ensuremath{\mathbb{C}}\S_n$-irreducibles. Lower order ideals of the projected poset would correspond to $\ensuremath{\mathbb{C}}\S_n$-modules that are also $R$-modules. Edges would be controlled by a local rule saying that a path of length two $(E,E'), (E',E'')$ must satisfy $E'' \subseteq S^2V \ensuremath{\otimes} E$.
The main results of this paper are a first step towards this approach; further work will appear in \cite{B}. To obtain a nice decomposition of $TV$ and $R$ into irreducibles, we replace $\ensuremath{\mathbb{C}} \S_n$ with the Hecke algebra $\H$ of $W = \S_n$ and apply the theory of canonical bases. The functor $V \ensuremath{\otimes} -$ is replaced by $\H \ensuremath{\otimes}_{\H_J} -$, $J = \{ s_2, \ldots, s_{n-1} \} \subseteq S$, $S$ the simple reflections of $W$. We are naturally led to a construction that takes an $\H_J$-module $E$ coming from a $W_J$-graph and produces a $W$-graph structure on $\H \ensuremath{\otimes}_{\H_J} E$. This construction of inducing $W$-graphs, found independently by the author, is due to Howlett and Yin \cite{HY1}. We spend a good deal of this paper (\textsection\ref{s Hecke algebra} -- \textsection\ref{s tableau combinatorics}) developing this theory, proving some basic results of interest for their own sake as well as for this application.
Once this groundwork is laid, we can form a $W$-graph version of $TV \ensuremath{\otimes} E$, $TV$ being the tensor algebra of $V$, for any $\H$-module $E$ coming from a $W$-graph. We can then try to project this onto a $W$-graph version of $SV \ensuremath{\otimes} E = R \ensuremath{\otimes} E$. This is even interesting for $T^2 V$ and $S^2 V$ and is what we focus on in this paper. Define $T^2_\text{red} V := \ensuremath{\mathbb{Z}} \{ x_i \ensuremath{\otimes} x_j : i \neq j\}$ and $S^2_\text{red} V := \ensuremath{\mathbb{Z}} \{ x_i \ensuremath{\otimes} x_j + x_j \ensuremath{\otimes} x_i: i \neq j\}$. We show in Proposition \ref{p red notred} that our $W$-graph version of $T^2 V \ensuremath{\otimes} E$ has a cellular decomposition into $\ensuremath{\widetilde{F}}^2 := \H \ensuremath{\otimes}_{J \backslash s_2} E$ and $\H \ensuremath{\otimes}_J E$, which at $\u=1$ become $T^2_\text{red} V \ensuremath{\otimes} E$ and $V \ensuremath{\otimes} E$. There is a canonical map (\ref{e reduced v tsr v to reduced s^2})
$$ \ensuremath{\widetilde{F}}^2 \xrightarrow{\tilde{\beta}} \H \ensuremath{\otimes}_{S \backslash s_2} E,$$
specializing at $\u =1$ to the projection $T^2_\text{red} V \ensuremath{\otimes} E \to S^2_\text{red} V \ensuremath{\otimes} E$. The map $\tilde{\beta}$ does not send canonical basis elements to canonical basis elements, but it approximates doing so as the Hecke algebra parameter $\u \to 0$ (Corollary \ref{c approx at u=1}). This partitions the canonical basis of $\ensuremath{\widetilde{F}}^2$ into two parts--the approximate kernel, which we refer to as combinatorial wedge, and the approximate inverse image of the canonical basis of $\H \ensuremath{\otimes}_{S \backslash s_2} E$, which we refer to as combinatorial reduced sym. Theorem \ref{t combinatorial sym} determines this partition in terms of cells.
We also consider a $W$-graph version of tensoring with $V$ coming from the extended affine Hecke algebra. This mostly parallels the version just described, but there are some interesting differences. Most notably, the combinatorics of this $W$-graph version of the inclusion $T^2_\text{red} V \ensuremath{\otimes} E \to V \ensuremath{\otimes} V \ensuremath{\otimes} E$ is transpose to that of the other; compare Theorems \ref{t combinatorial sym} and \ref{t combinatorial sym affine}.
This paper is organized mainly in order of decreasing generality. We begin in \textsection\ref{s Hecke algebra} by introducing the Hecke algebra, $W$-graphs, and the inducing $W$-graph construction. We reformulate some of this theory using the formalism of IC bases as presented in \cite{Du}. This has the advantage of avoiding explicit calculations involving Kazhdan-Lusztig polynomials, or rather, hides these calculations in the citations of \cite{HY1}, \cite{HY2}. This allows us to focus more on cells and cellular subquotients. In \textsection\ref{s H1H2} we specialize to
the case where $W$-graphs come from iterated induction from the regular representation. In this case we prove that all left cells are isomorphic to those occurring in the regular representation of $W$ (Theorem \ref{t HJHcells}). Next, in \textsection\ref{s tableau combinatorics}, we review the combinatorics of cells in the case $W = \S_n$. As was first observed in \cite{R}, there is a beautiful connection between the Littlewood-Richardson rule and the cells
of an induced module $\H \ensuremath{\otimes}_{\H_J} E$ (Proposition \ref{p cells of induce}). The combinatorics of the cells
of the restriction $\text{\rm Res}_{\H_J} \H$ is less familiar; see Conjecture \ref{c weak mult free}. Sections \ref{s computation c_w} and \ref{s canonical maps} give a nice result about how canonical basis elements behave under the projection $\H \ensuremath{\otimes}_J \H \to \H$. The remaining sections \ref{s tensor V}, \ref{s decomposing VVE}, and \ref{s combinatorial approximation S2} contain our main results just discussed.
\section{IC bases and inducing $W$-graphs}
\label{s Hecke algebra}
\subsection{}
We will use the following notational conventions in this paper. If $A$ is a ring and $S$ is a set, then $A S$ is a free $A$-module with basis $S$ (possibly endowed with some additional structure, depending on context). Elements of induced modules $\H \ensuremath{\otimes}_{\H_J} E$ will be denoted $h \small \ensuremath{\boxtimes} e$ to distinguish them from elements of a tensor product over $\ensuremath{\mathbb{Z}}$, $F \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} E$, whose elements will be denoted $f \ensuremath{\otimes} e$. The symbol $[n]$ is used for the set $\{1,\ldots,n\}$ and also for the $\u$-integer (defined below), but there should be no confusion between the two.
\subsection{}
\label{ss coxeter group}
Let $W$ be a Coxeter group and $S$ its set of simple reflections. The \emph{length} $\ell(w)$ of $w \in W$ is the minimal $l$ such that $w=s_1\ldots s_l$ for some $s_i\in S$. If $\ell(uv)=\ell(u)+\ell(v)$, then $uv = u\cdot v$ is a \emph{reduced factorization}. The notation $L(w) = \{s\in S : sw < w\}, R(w) = \{s\in S : ws < w\}$ will be used for the left and right descent sets of $w$.
For any $J\subseteq S$, the \emph{parabolic subgroup} $W_J$ is the subgroup of $W$ generated by $J$. Each left (resp. right) coset $wW_J$ (resp. $W_Jw$) contains a unique element of minimal length, called a minimal coset representative. The set of all such elements is denoted $W^J$ (resp. $\leftexp{J}W$). For any $w \in W$, define $\lJ{w}{J}$, $\rj{w}{J}$ by
\begin{equation} w=\lJ{w}{J} \cdot \rj{w}{J},\ \lJ{w}{J} \in W^J,\ \rj{w}{J} \in W_J.\end{equation}
Similarly, define $\lj{w}{J}$, $\rJ{w}{J}$ by
\begin{equation} w= \lj{w}{J} \cdot \rJ{w}{J},\ \lj{w}{J} \in W_J,\ \rJ{w}{J} \in \leftexp{J}W.\end{equation}
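For example, take $W = \S_3$ with $S = \{s_1, s_2\}$ and $J = \{s_1\}$. Then $W_J = \{\text{id}, s_1\}$, $W^J = \{\text{id}, s_2, s_1s_2\}$, and $\leftexp{J}W = \{\text{id}, s_2, s_2s_1\}$, and for $w = s_1s_2s_1$ the two decompositions read
\begin{equation} w = \lJ{w}{J} \cdot \rj{w}{J} = (s_1s_2)\cdot s_1, \qquad w = \lj{w}{J} \cdot \rJ{w}{J} = s_1\cdot (s_2s_1),\end{equation}
both factorizations being reduced since $\ell(w) = 3 = 2 + 1 = 1 + 2$.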
\subsection{}
Let $A = \ensuremath{\mathbb{Z}}[\u,\ensuremath{u^{-1}}]$ be the ring of Laurent polynomials in the indeterminate $\u$, $A^{-}$ (resp. $A^{+}$) be the subring $\ensuremath{\mathbb{Z}}[\ensuremath{u^{-1}}]$ (resp. $\ensuremath{\mathbb{Z}}[\u]$), and $\br{\cdot}:A\to A$ be the involution given by $\br{u} = \ensuremath{u^{-1}}$. The \emph{Hecke algebra} $\H$ of $W$ is the free $A$-module with basis $\{T_w :\ w\in W\}$ and relations generated by
\begin{equation} \label{e Hecke algebra def} \begin{array}{ll}T_uT_v = T_{uv} & \text{if } uv = u\cdot v\ \text{is a reduced factorization},\\
(T_s - \u)(T_s + \ensuremath{u^{-1}}) = 0 & \text{if } s\in S.\end{array}\end{equation}
For each $J\subseteq S$, $\H_J$ denotes the subalgebra of $\H$ with $A$-basis $\{T_w:\ w\in W_J\}$, which is also the Hecke algebra of $W_J$.
The involution, $\br{\cdot}$, of $\H$ is the additive map from $\H$ to itself extending the involution $\br{\cdot}$ on $A$ and satisfying $\br{T_w} = T_{w^{-1}}^{-1}$. Observe that $\br{T_{s}} = T_s^{-1} = T_s + \ensuremath{u^{-1}} - u$ for $s \in S$. Some simple $\br{\cdot}$-invariant elements of $\H$ are $\ensuremath{\mathscr{C}}_\text{id} := T_\text{id}$ and $\ensuremath{\mathscr{C}}_s := T_s + \ensuremath{u^{-1}} = T_s^{-1} + u$, $s\in S$. The $\br{\cdot}$-invariant $\u$-integers are $[k] := \frac{\u^k - \u^{-k}}{\u - \ensuremath{u^{-1}}} \in A$.
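For example, the quadratic relation in (\ref{e Hecke algebra def}) can be rewritten as $T_s^2 = (\u - \ensuremath{u^{-1}})T_s + 1$, from which a direct computation gives
\begin{equation} \ensuremath{\mathscr{C}}_s^2 = (T_s + \ensuremath{u^{-1}})^2 = (\u + \ensuremath{u^{-1}})T_s + (1 + \u^{-2}) = [2]\, \ensuremath{\mathscr{C}}_s, \end{equation}
since $[2] = \u + \ensuremath{u^{-1}}$.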
\subsection{}
\label{ss IC basis}
\newcommand{\ensuremath{\prec}}{\ensuremath{\prec}}
Before introducing $W$-graphs and the Kazhdan-Lusztig basis, we will discuss a slightly more general setup for canonical bases.
The presentation here follows Du \cite{Du}. This formalism originated in \cite{KL} and was further developed by Lusztig and Kashiwara (see the references in \cite{Du}).
Given any $A$-module $E$ (no Hecke algebra involved), we can try to construct a \emph{canonical basis} or \emph{IC basis} from a \emph{standard basis} and involution $\br{\cdot}:E\to E$. Let $\{t_i:\ i\in I\}$ be an $A$-basis of $E$ (the standard basis) for some index set $I$ and assume the involution $\br{\cdot}$ \emph{intertwines} the involution $\br{\cdot}$ on $A$: $\br{at}=\br{a}\br{t}$ for any $a\in A$, $t\in E$. Define the lattice $\L$ to be $A^{-}\{t_i:\ i\in I\}$. If there exists a unique $\br{\cdot}$-invariant basis $\left\{c_i : i\in I\right\}$ of the free $A^{-}$-module $\L$ such that $c_i \equiv t_i \mod \ensuremath{u^{-1}}\L$, then $\left\{c_i : i\in I\right\}$ is an IC basis of $E$, denoted
\begin{equation} \ensuremath{\mathcal{IC}}_E(\{t_i: i \in I\},\br{\cdot}). \end{equation}
\begin{theorem}[Du \cite{Du}]\label{t IC basis}
With the notation above, if $(I, \ensuremath{\prec})$ is a poset such that for all $j \in I$, $\{i \in I:i \ensuremath{\prec} j\}$ is finite and $\br{t_j} \equiv t_j \mod A \{t_i: i\ensuremath{\prec} j\}$, then the IC basis $\ensuremath{\mathcal{IC}}_E(\{t_i: i \in I\},\br{\cdot})$ exists.
\end{theorem}
In the remainder of this paper, $\br{\cdot}$ will be clear from context so will be omitted from the $\ensuremath{\mathcal{IC}}()$ notation.
An observation that will be used in \textsection \ref{s cells} and \textsection \ref{s H1H2} is that this construction behaves well with respect to taking lower order ideals.
\begin{proposition}\label{p ic lower ideal}
With the notation of Theorem \ref{t IC basis}, if $I'$ is a lower order ideal of $I$ and $E' := A\{t_i:\ i\in I'\}$, then $$ \ensuremath{\mathcal{IC}}_{E'}(\{t_i:\ i\in I'\}) = \{c_i: i \in I'\} \subseteq \ensuremath{\mathcal{IC}}_E(\{t_i:\ i\in I\}).$$
\end{proposition}
\begin{proof}
The poset $I'$ and the involution $\br{\cdot}$ restricted to $E'$ satisfy the necessary hypotheses so that Theorem \ref{t IC basis} applies. Label the resulting IC basis by $d_i$, $i\in I'$ and put $\L' = A^{-}\{t_i:\ i\in I'\}$. Then $d_i \equiv t_i \mod \ensuremath{u^{-1}}\L'$ for $i\in I'$ certainly implies $d_i \equiv t_i \mod \ensuremath{u^{-1}}\L$. Uniqueness of the IC basis then implies $d_i = c_i \ (i \in I')$.
\end{proof}
We now come to the main construction studied in this paper. Let $E$ be an $\H_J$-module with an involution $\br{\cdot} : E\to E$ intertwining $\br{\cdot}$ on $\H_J$ ($\br{h e} = \br{h} \br{e}$ for all $h \in \H_J$ and $e \in E$). Suppose $\Gamma$ is a $\br{\cdot}$-invariant $A$-basis of $E$. Put $\ensuremath{\widetilde{E}} = \H \ensuremath{\otimes}_{\H_J} E$. We will apply Theorem \ref{t IC basis} to $\ensuremath{\widetilde{E}}$ with standard basis $\ensuremath{\widetilde{T}} := \{ \ensuremath{\tilde{T}}_\mathbf{z} : \mathbf{z} \in W^J \times \Gamma\}$, where $\ensuremath{\tilde{T}}_{w,\gamma} := T_w \small \ensuremath{\boxtimes} \gamma$. The lattice $\L$ is then $A^{-}\ensuremath{\widetilde{T}}$. Define the involution $\br{\cdot}$ on $\ensuremath{\widetilde{E}}$ from the involutions on $E$ and $\H$:
\begin{equation} \br{h\small \ensuremath{\boxtimes} e} = \br{h}\small \ensuremath{\boxtimes} \br{e}, \text{ for every $h\in\H, e\in E$}. \end{equation}
It is easy to check (and is done in \cite{HY1}) that the definition of $\br{\cdot} : \ensuremath{\widetilde{E}} \to\ensuremath{\widetilde{E}}$ is sound, that it is an involution, and that it intertwines the involution $\br{\cdot}$ on $\H$.
Let $\ensuremath{\prec}$ be the partial order on $W^J\times \Gamma$ generated by the rule: $(w',\gamma')\ensuremath{\prec}(w,\gamma)$ if $\ensuremath{\tilde{T}}_{w',\gamma'} $ appears with non-zero coefficient in $(\br{T_w}-T_w) \small \ensuremath{\boxtimes} \gamma$ expanded in the basis $\ensuremath{\widetilde{T}}$. Since $\br{T_w} - T_w$ is an $A$-linear combination of $T_x$ for $x < w$, it is easy to see that $\br{\ensuremath{\tilde{T}}_{w,\gamma}} - \ensuremath{\tilde{T}}_{w,\gamma}$ ($w\in W^J, \ \gamma \in \Gamma$) is an $A$-linear combination of $\{ \ensuremath{\tilde{T}}_{x , \delta} : x<w, \delta \in \Gamma \}$, so the definition of $\ensuremath{\prec}$ is sound. To see that $D_{w,\gamma} := \{(w',\gamma'):(w',\gamma')\preceq(w,\gamma)\}$ is finite, induct on $\ell(w)$. The set $D_{w,\gamma}$ is the union of $\{ (w, \gamma) \}$ and $D_{w',\gamma'}$ over those $(w',\gamma')$ such that $\ensuremath{\tilde{T}}_{w',\gamma'}$ appears with non-zero coefficient in $(\br{T_w}-T_w) \small \ensuremath{\boxtimes} \gamma$, each of which is finite by induction.
Thus Theorem \ref{t IC basis} applies and we obtain a canonical basis $\Lambda = \ensuremath{\mathcal{IC}}_{\ensuremath{\widetilde{E}}}(\ensuremath{\widetilde{T}}) = \{\ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} : w \in W^J, \gamma \in \Gamma\}$ of $\ensuremath{\widetilde{E}}$. This is one way of proving the following theorem, which is Theorem 5.1 in \cite{HY1} (there they use the basis $C_{w,\gamma}$ that is $\equiv\ensuremath{\tilde{T}}_{w,\gamma}\mod \u A^+\ensuremath{\widetilde{T}}$).
\begin{theorem}[Howlett, Yin \cite{HY1}]
\label{t HY canbas exists}
There exists a unique $\br{\cdot}$-invariant basis $\Lambda=\{\ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} : w \in W^J, \gamma \in \Gamma\}$ of $\ensuremath{\widetilde{E}}$ such that $\ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} \equiv \ensuremath{\tilde{T}}_{w,\gamma}\mod \ensuremath{u^{-1}} \L$.\end{theorem}
Applied to $J=\emptyset$ and $\Gamma$ a basis of the free $A$-module of rank one, this yields the usual Kazhdan-Lusztig basis $\Gamma_W:=\{\ensuremath{\mathscr{C}}_w:w\in W\}$ of $\H$.
\subsection{}
In \cite{KL}, Kazhdan and Lusztig introduce $W$-graphs as a combinatorial structure for describing an $\H$-module with a special basis. A $W$-graph consists of a vertex set $\Gamma$, an edge weight $\mu(\delta,\gamma)\in \ensuremath{\mathbb{Z}}$ for each ordered pair $(\delta,\gamma)\in\Gamma\times\Gamma$, and a descent set $L(\gamma) \subseteq S$ for each $\gamma\in\Gamma$. These are subject to the condition that $A\Gamma$ has a left $\H$-module structure given by
\begin{equation}\label{Wgrapheq}\ensuremath{\mathscr{C}}_{s}\gamma = \left\{\begin{array}{ll} [2] \gamma & \text{if}\ s \in L(\gamma),\\
\sum_{\substack{\{\delta \in \Gamma : s \in L(\delta)\}}} \mu(\delta,\gamma)\delta & \text{if}\ s \notin L(\gamma). \end{array}\right.\end{equation}
We will use the same name for a $W$-graph and its vertex set. If an $\H$-module $E$ has an $A$-basis $\Gamma$ that satisfies (\ref{Wgrapheq}) for some choice of descent sets, then we say that $\Gamma$ gives $E$ a \emph{$W$-graph structure}, or $\Gamma$ is a $W$-graph on $E$.
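For example, for $W = \S_2$ with $S = \{s\}$, the Kazhdan-Lusztig basis $\{\ensuremath{\mathscr{C}}_\text{id} = T_\text{id},\ \ensuremath{\mathscr{C}}_s = T_s + \ensuremath{u^{-1}}\}$ gives the regular representation of $\H$ a $W$-graph structure with vertex set $\{\text{id}, s\}$, descent sets $L(\text{id}) = \emptyset$ and $L(s) = \{s\}$, and edge weights $\mu(\text{id},s) = \mu(s,\text{id}) = 1$: one checks directly that $\ensuremath{\mathscr{C}}_s \ensuremath{\mathscr{C}}_\text{id} = \ensuremath{\mathscr{C}}_s$ and $\ensuremath{\mathscr{C}}_s \ensuremath{\mathscr{C}}_s = [2]\ensuremath{\mathscr{C}}_s$, in agreement with (\ref{Wgrapheq}).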
It is convenient to define two $W$-graphs $\Gamma,\Gamma'$ to be isomorphic if they give rise to isomorphic $\H$-modules with basis. That is, $\Gamma\cong\Gamma'$ if there is a bijection $\alpha:\Gamma\to\Gamma'$ of vertex sets such that $L(\alpha(\gamma)) = L(\gamma)$ and $\mu(\alpha(\delta),\alpha(\gamma)) = \mu(\delta,\gamma)$ whenever $L(\delta)\not\subseteq L(\gamma)$.
Given a $W$-graph $\Gamma$, we always have an involution
\begin{equation} \label{e W-graph inv}
\br{\cdot}: A \Gamma \to A \Gamma, \text{ with } \br{\gamma} = \gamma \text{ for every } \gamma \in \Gamma,\end{equation}
and extended $A$-semilinearly using the involution on $A$. It is quite clear from (\ref{Wgrapheq}) (and checked in \cite{HY1}) that this involution intertwines $\br{\cdot}$ on $\H$.
\subsection{}\label{ss induced W graph}
Now let $\Gamma$ be a $W_J$-graph, $E = A \Gamma$, and $\br{\cdot} : E \to E$ be as in (\ref{e W-graph inv}). Then we are in the setup of \textsection \ref{ss IC basis}, except that $\Gamma$ is a $W_J$-graph rather than an arbitrary $\br{\cdot}$-invariant basis of $E$. Maintaining the notation of \textsection \ref{ss IC basis}, let $\Lambda = \ensuremath{\mathcal{IC}}_{\ensuremath{\widetilde{E}}}(\ensuremath{\widetilde{T}}) = \left\{\ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} :\ w\in W^J, \gamma\in\Gamma\right\}$. As would be hoped, $\Lambda$ gives $\ensuremath{\widetilde{E}}$ a $W$-graph structure.
Define $\ensuremath{\tilde{P}}_{x,\delta,w,\gamma}$ by the formula
\begin{equation}\ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} = \sum\limits_{(x,\delta)\in W^J\times\Gamma} \ensuremath{\tilde{P}}_{x,\delta,w,\gamma}\ \ensuremath{\tilde{T}}_{x,\delta}.\end{equation}
For every $(x,\delta),(w,\gamma)\in W^J \times \Gamma$ define
\begin{equation}\label{e mudef}
\mu(x,\delta,w,\gamma) = \left\{\begin{array}{ll}
\text{coefficient of}\ \ensuremath{u^{-1}} \ \text{in}\ \ensuremath{\tilde{P}}_{x,\delta,w,\gamma} & \text{if } x < w,\\
\mu(\delta,\gamma) & \text{if } x = w, \\
1 & \text{if } x = sw,\ x > w,\ s\in S,\ \delta = \gamma,\\
0 & \text{otherwise}. \end{array}\right.\end{equation}%
Also define $L(w,\gamma)=L(w)\cup \{s\in S : sw = wt,\ t\in L(\gamma)\}$.
Now we can state the main result of Howlett and Yin.
\begin{theorem}[{\cite[Theorem 5.3]{HY1}}] \label{t HY} With $\mu$ and $L$ as defined above, $\Lambda$ gives $\ensuremath{\widetilde{E}} = \H\ensuremath{\otimes}_{\H_J} A\Gamma$ a $W$-graph structure.
\end{theorem}
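To illustrate the descent sets $L(w,\gamma)$, take $W = \S_3$, $J = \{s_1\}$, and let $\Gamma = \{\gamma\}$ be the one-vertex $W_J$-graph with $L(\gamma) = \{s_1\}$. For $w = \text{id}$ the condition $sw = wt$ forces $s = t$, so $L(\text{id},\gamma) = \{s_1\}$; for $w = s_2 \in W^J$ we have $L(s_2) = \{s_2\}$ and no $s \in S$ satisfies $s s_2 = s_2 s_1$ (as $s_2 s_1 s_2 \notin S$), so $L(s_2,\gamma) = \{s_2\}$.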
We will often abuse notation and refer to a module when we really mean the $W$-graph on that module, but there should be no confusion as there will never be more than one $W$-graph structure on a given module. We will use the notation $\H \ensuremath{\otimes}_{\H_J} \Gamma$ to mean the $\Lambda$ in this theorem, in case we want to refer to its vertex set or to emphasize the $W$-graph rather than the module.
\begin{remark}A $W$-graph is \emph{symmetric} if it is isomorphic to a $W$-graph with $\mu(x,w) = \mu(w,x)$ for all vertices $x,w$. The $W$-graph $\Gamma_W$ on the regular representation of $\H$ is symmetric. The $W$-graph $\Lambda$ defined above is symmetric if and only if $\Gamma$ is symmetric, although this is not obvious from the definition of $\mu$ (\ref{e mudef}). In \cite{HY1}, the $W$-graph for $\Lambda$ is defined so that it is clearly symmetric, and then it is proved later that it is isomorphic to the $W$-graph $\Lambda$ defined here.
\end{remark}
\subsection{}
\label{s cells}
Let $\Gamma$ be a $W$-graph and put $E=A\Gamma$. The preorder $\leq_{\Gamma}$ on the vertex set $\Gamma$ is generated by
\begin{equation} \delta\leq_{\Gamma}\gamma \begin{array}{c}\text{if there is an $h\in\H$ such that $\delta$ appears with non-zero}\\ \text{coefficient in the expansion of $h\gamma$
in the basis $\Gamma$}. \end{array}
\end{equation}
Equivalence classes of $\leq_{\Gamma}$ are the \emph{left cells} of $\Gamma$, or just \emph{cells} since we will almost exclusively work with left cells. Sometimes we will speak of the cells of $E$ or the preorder on $E$ to mean that of $\Gamma$, when the $W$-graph $\Gamma$ is clear from context.
A \emph{cellular submodule, quotient, or subquotient} of $E$ is a submodule, quotient, or subquotient of $E$ that is spanned by a subset of $\Gamma$ (and is necessarily a union of cells). We will abuse notation and sometimes refer to a cellular subquotient by its corresponding union of cells.
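For example, for $W = \S_3$ the $W$-graph $\Gamma_W$ on the regular representation has four left cells: $\{\text{id}\}$, $\{s_1, s_2s_1\}$, $\{s_2, s_1s_2\}$, and $\{s_1s_2s_1\}$. The span of $\ensuremath{\mathscr{C}}_{s_1s_2s_1}$ is a one-dimensional cellular submodule, on which each $\ensuremath{\mathscr{C}}_s$, $s \in S$, acts by $[2]$ since $L(s_1s_2s_1) = S$.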
We will give one result about cells in the full generality of \textsection\ref{ss induced W graph} before specializing $W$ and the $W_J$-graph $\Gamma$. Let $D$ be a cellular submodule of $E$ spanned by a subset $\Gamma_D$ of $\Gamma$ and $p : E\to E/D$ the projection. Put $\Gamma_{E/D} = p(\Gamma\backslash \Gamma_D)$. The $W_J$-graph $\Gamma$ yields $W_J$-graphs $\Gamma_D$ on $D$ and $\Gamma_{E/D}$ on $E/D$. The involution $\br{\cdot}$ on $E$ restricts to one on $D$ and projects to one on $E/D$; elements of $\Gamma_D$ (resp. $\Gamma_{E/D}$) are fixed by the involution $\br{\cdot}$ on $D$ (resp. $E/D$). Since $\H$ is a free right $\H_J$-module, we have the exact sequence
\begin{equation}
\xymatrix{0 \ar[r] & \H\ensuremath{\otimes}_JD \ar[r] & \H\ensuremath{\otimes}_JE \ar[r]_<<<<{\tilde{p}}& \H\ensuremath{\otimes}_JE / D \ar[r]& 0,}
\end{equation}
where the shorthand $\H\ensuremath{\otimes}_{J}E := \H\ensuremath{\otimes}_{\H_J}E$ will be used here and from now on. In other words, inducing commutes with taking subquotients. It is also true that inducing and taking canonical bases commutes with taking cellular subquotients:
\begin{proposition}\label{CBSubquotientprop} With the notation above and that of \textsection\ref{ss induced W graph}, let $\ensuremath{\widetilde{T}}_D = \left\{\ensuremath{\tilde{T}}_{w,\gamma} : w\in W^J, \gamma\in \Gamma_D\right\}$ and $\ensuremath{\widetilde{T}}_{E/D} = \left\{T_w\small \ensuremath{\boxtimes} \gamma: w\in W^J, \gamma\in \Gamma_{E/D}\right\}$. Then
\begin{list}{(\roman{ctr})}{\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item $\ensuremath{\mathcal{IC}}_{\H\ensuremath{\otimes}_J D}\left(\ensuremath{\widetilde{T}}_D \right) = \left\{\ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} : w\in W^J, \gamma\in \Gamma_D \right\}\subseteq \ensuremath{\mathcal{IC}}_{\ensuremath{\widetilde{E}}}(\ensuremath{\widetilde{T}})$,
\item $\ensuremath{\mathcal{IC}}_{\H\ensuremath{\otimes}_J E/D}\left(\ensuremath{\widetilde{T}}_{E/D}\right) = \left\{\tilde{p}(\ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma}) : w\in W^J, \gamma\in \Gamma\backslash \Gamma_D \right\}\subseteq \tilde{p}\left(\ensuremath{\mathcal{IC}}_{\ensuremath{\widetilde{E}}}(\ensuremath{\widetilde{T}})\right)$.
\end{list}
\end{proposition}
\begin{proof}
Statement (i) is actually a special case of Proposition \ref{p ic lower ideal}. From the definition of $\ensuremath{\prec}$ in \textsection \ref{ss IC basis} we can see that $W^J \times \Gamma_D$ is a lower order ideal of $W^J \times \Gamma$.
We prove (ii) directly. The lattice $\L_{E/D} := A^{-}\ensuremath{\widetilde{T}}_{E/D}$ is the quotient $\L / \L_D = \tilde{p}(\L)$. Therefore, given $w\in W^J$ and $\gamma\in \Gamma\backslash \Gamma_D$, we have
\begin{equation}\tilde{p}(\ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma}) = \tilde{p}(T_w \small \ensuremath{\boxtimes} \gamma + \ensuremath{u^{-1}} x) \equiv \tilde{p}(T_w \small \ensuremath{\boxtimes} \gamma) = T_w \small \ensuremath{\boxtimes} p(\gamma),\end{equation}
where $x$ is some element of $\L$ and the congruence is $\mod \ensuremath{u^{-1}} \L_{E/D}$. By definition, $p(\gamma) \in \Gamma_{E/D}$ so $\tilde{p}(\ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma})$ is the element of $\ensuremath{\mathcal{IC}}_{\H\ensuremath{\otimes}_J E/D}\left(\ensuremath{\widetilde{T}}_{E/D}\right)$ congruent to $T_w \small \ensuremath{\boxtimes} p(\gamma)\mod\ensuremath{u^{-1}}\L_{E/D}$.
\end{proof}
This proposition is essentially \cite[Theorem 4.3]{HY2}, though the proof here is different. It also appears in \cite[Theorem 1]{G1} in the case that $\Gamma = \Gamma_{W_J}$ (the usual $W_J$-graph on $\H_J$) but in the generality of unequal parameters.
\section{Iterated induction from the regular representation}
\label{s H1H2}
In this paper we will primarily be interested in the case where $E$ is obtained by some sequence of inductions and restrictions of the regular representation of a Hecke algebra, or subquotients of such modules. In this section, let $\ensuremath{\widetilde{E}}$ denote $\H_1 \ensuremath{\otimes}_J E$, where $E = A \Gamma$ and $\Gamma = \Gamma_{W_2}$, unless specified otherwise.
\subsection{}
Suppose we are given Coxeter groups $W_1$, $W_2$ with simple reflections $S_1,S_2$ and a set $J$ with inclusions $i_k: J\to S_k, k=1,2$ such that ${(W_1)}_{i_1(J)} \cong {(W_2)}_{i_2(J)}$ as Coxeter groups. Define the set
\begin{equation}\ensuremath{W_{1}\stackrel{J}{\times} {W}_{2}}:=
\left\{(w_1,w_2) : w_1\in W_1, w_2\in W_2\right\} \big/ \langle (w_1w,w_2)\sim(w_1,ww_2) : w\in W_J\rangle,\end{equation}
where $W_J := {W_1}_J \cong {W_2}_J$. The set $\ensuremath{W_{1}\stackrel{J}{\times} {W}_{2}}$ can also be identified with any of $W_1\times \ensuremath{\leftexp{J}{W}_{2}}$, ${W_1}^J\times W_2$, or $W_1^{J}\times W_J\times \ensuremath{\leftexp{J}{W}_{2}}$. These sets label canonical basis elements of Hecke algebra modules obtained by inducing from the regular representation just as a Coxeter group labels the canonical basis elements of its regular representation.
The material that follows in this subsection is somewhat tangential to our main theme, but we include it for completeness. We omit the details of proofs.
The set $\ensuremath{W_{1}\stackrel{J}{\times} {W}_{2}}$ comes with a left action by $W_1$, a length function, and a partial order generalizing the Bruhat order, as described in the following proposition.
\begin{proposition}\label{W-set prop}Let $(w_1,w_2)\in \ensuremath{W_{1}\stackrel{J}{\times} {W}_{2}}$. The set $\ensuremath{W_{1}\stackrel{J}{\times} {W}_{2}}$ comes equipped with \begin{list}{(\roman{ctr})} {\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item A left action by $W_1 :\ x\cdot (w_1, w_2) = (xw_1, w_2)$,
\item a length function: $\ell(w_1,w_2) = \ell(w_1)+\ell(w_2)$ whenever $w_1\in {W_1}^J$,
\item a partial order: $(w_1',w_2')\leq (w_1,w_2)$, whenever there exists $(w_1'',w_2'')\sim (w_1',w_2')$ such that $w_i''\leq w_i,\ w_i',w_i'' \in {W}_i, i = 1,2,$ and $w_1\in {W_1}^J$.
\end{list}
\end{proposition}
\begin{proposition}
The $W_1$-graph $\ensuremath{\widetilde{E}}$ is bipartite in the sense of \cite[Definition 3.1]{HY2}. Moreover, if $\mathbf{z}, \mathbf{z'} \in \ensuremath{W_{1}\stackrel{J}{\times} {W}_{2}}$, and $\ell(\mathbf{z})-\ell(\mathbf{z'})$ is even (resp. odd), then $\ensuremath{\tilde{P}}_{\mathbf{z'},\mathbf{z}}$ involves only even (resp. odd) powers of $\u$.
\end{proposition}
\begin{proof}
This follows from \cite[Proposition 3.2]{HY2}.
\end{proof}
\begin{proposition}
The $W_1$-graph $\ensuremath{\widetilde{E}}$ is ordered in the sense of \cite[Definition 1.1]{HY2}. More strongly, $\ensuremath{W_{1}\stackrel{J}{\times} {W}_{2}}$ has a partial order from Proposition 2.2 of \cite{HY2} using the Bruhat order on $W_2$, and this agrees with the order $\leq$ of Proposition \ref{W-set prop}. Therefore if $\mathbf{z, z'} \in \ensuremath{W_{1}\stackrel{J}{\times} {W}_{2}}$ and $\ensuremath{\tilde{P}}_{\mathbf{z', z}} \neq 0$, then $\mathbf{z'} \leq \mathbf{z}$.
\end{proposition}
\begin{proof}
Showing the partial orders from \cite{HY2} and Proposition \ref{W-set prop} are equal takes some work. The rest is a citation of results in \cite{HY2}.
\end{proof}
\subsection{}
A similar definition to that in the previous subsection can be given for $\ensuremath{W_{1}\stackrel{J_1}{\times} \ldots\stackrel{J_{d-1}}{\times}W_{d}}$. To work with these sets, introduce the following notation.
A representative $(w_1,\ldots,w_d)$ of an element of $\ensuremath{W_{1}\stackrel{J_1}{\times} \ldots\stackrel{J_{d-1}}{\times}W_{d}}$ is \emph{$i$-stuffed} if
\begin{equation} w_1\in W_1^{J_1},\ldots,w_{i-1}\in W_{i-1}^{J_{i-1}},\ w_i\in W_i,\ w_{i+1}\in \leftexp{J_i}{W_{i+1}},\ \ldots,\ w_d\in \leftexp{J_{d-1}}{W_d}.\end{equation}
It is convenient to represent the element $\mathbf{z}\in \ensuremath{W_{1}\stackrel{J_1}{\times} \ldots\stackrel{J_{d-1}}{\times}W_{d}}$, somewhat redundantly, in \emph{stuffed} notation: $\mathbf{z} = (z_1, z_2, \ldots,z_d)$, where $z_i$ is the $i$-th component of the $i$-stuffed expression for $\mathbf{z}$.
\subsection{}
\label{ss TTTC}
The main ideas in this subsection also appear in \cite[\textsection 4]{G2} where they are used to adapt Lusztig's $a$-invariant to give results about the partial order on the cells of $\text{\rm Res}_J \Gamma_W$.
For any $X \subseteq W_1 \times W_2$, define the shorthands
\begin{equation}
\begin{array}{c}
TT(X) := \left\{T_{w_1}\small \ensuremath{\boxtimes} T_{w_2} : (w_1, w_2) \in X \right\},\\
TC(X) := \left\{T_{w_1}\small \ensuremath{\boxtimes} \ensuremath{\mathscr{C}}_{w_2} : (w_1, w_2) \in X \right\},\\
CT(X) := \left\{\ensuremath{\mathscr{C}}_{w_1}\small \ensuremath{\boxtimes} T_{w_2} : (w_1, w_2) \in X \right\}.
\end{array}
\end{equation}
The construction from \textsection \ref{ss IC basis} applied to $\Gamma_{W_2}$ gives the IC basis $\ensuremath{\mathcal{IC}}_{\ensuremath{\widetilde{E}}}(TC({W_1^J} \times W_2))$ of $\ensuremath{\widetilde{E}}$. The next proposition shows that the same canonical basis can be constructed from two other standard bases, and this will be used implicitly in what follows.
\begin{proposition} \label{TCTTprop}
The standard bases
$$ TC({W_1^J} \times W_2),\quad TT({W_1^J} \times W_2) = TT(W_1 \times \ensuremath{\leftexp{J}{W}_{2}}),\quad CT(W_1 \times \ensuremath{\leftexp{J}{W}_{2}})$$
of $\ensuremath{\widetilde{E}}=\ensuremath{\H_1\tsr_J \H_{2}}$ have the same $A^{-}$-span, denoted $\L$. Moreover,
$$ T_{w_1}\small \ensuremath{\boxtimes} \ensuremath{\mathscr{C}}_{vw_2} \equiv T_{w_1}\small \ensuremath{\boxtimes} T_{vw_2} = T_{w_1v}\small \ensuremath{\boxtimes} T_{w_2} \equiv \ensuremath{\mathscr{C}}_{w_1v}\small \ensuremath{\boxtimes} T_{w_2} \mod \ensuremath{u^{-1}} \L$$
for every $w_1\in {W_1}^J, v\in W_J, w_2\in \ensuremath{\leftexp{J}{W}_{2}}$. Therefore, the corresponding IC bases are identical:
$$ \ensuremath{\mathcal{IC}}_{\ensuremath{\widetilde{E}}}(TC({W_1^J} \times W_2))=\ensuremath{\mathcal{IC}}_{\ensuremath{\widetilde{E}}}(TT({W_1^J} \times W_2))=\ensuremath{\mathcal{IC}}_{\ensuremath{\widetilde{E}}}(CT(W_1 \times \ensuremath{\leftexp{J}{W}_{2}})) $$
(and these will be denoted $\Lambda=\{\ensuremath{\mathscr{C}}_{w_1,w_2} : (w_1,w_2)\in \ensuremath{W_{1}\stackrel{J}{\times} {W}_{2}} \}$).
\end{proposition}
\begin{proof}
The lattices $A^{-}\{T_{w_2}: w_2 \in W_2\}$ and $A^{-}\{\ensuremath{\mathscr{C}}_{w_2}: w_2 \in W_2\}$ are equal by the definition of an IC basis (\textsection \ref{ss IC basis}). Thus $A^{-} TC(W_1^J \times W_2) = A^{-} \ TT(W_1^J \times W_2)$ and similarly $A^{-} \ TT(W_1 \times \ensuremath{\leftexp{J}{W}_{2}} ) = A^{-} \ CT(W_1 \times \ensuremath{\leftexp{J}{W}_{2}}).$ The remaining statements are clear.
\end{proof}
Now given any lower order ideal $I$ in $\ensuremath{\leftexp{J}{W}_{2}}$, define $D_{I} = A \ CT(W_1 \times I)$, thought of as an $\H_1$-submodule of $\ensuremath{\widetilde{E}}$. Applying Proposition \ref{p ic lower ideal} to $D_{I} \subseteq \ensuremath{\widetilde{E}}$ with poset $W_1 \times \ensuremath{\leftexp{J}{W}_{2}}$ and lower order ideal $W_1 \times I$ shows that $D_I$ has canonical basis $\left\{\ensuremath{\mathscr{C}}_{w_1,w_2} :\ w_1\in W_1, w_2\in I\right\}$ (Proposition \ref{TCTTprop} is used implicitly). The next theorem now comes easily.
Let $D_{\leq x} = D_{\{w \in \ensuremath{\leftexp{J}{W}_{2}}: w \leq x\}}$ and $D_{< x} = D_{\{w \in \ensuremath{\leftexp{J}{W}_{2}}: w < x\}}$. Recall that $\Gamma_{W_1}$ is the usual $W_1$-graph of the regular representation of $\H_1$.
\begin{theorem}
\label{t HJHcells}
The module $\ensuremath{\widetilde{E}}$ (with $W_1$-graph structure $\Lambda$) has a filtration with cellular subquotients that are isomorphic as $W_1$-graphs to $\Gamma_{W_1}$. In particular, the left cells of $\Lambda$ are isomorphic to those occurring in $\Gamma_{W_1}$.
\end{theorem}
\begin{proof}
For any $x\in \ensuremath{\leftexp{J}{W}_{2}}$, the map $\pi:D_{\leq x}\to \H_1$ given by $\pi(D_{< x}) = 0$ and $\ensuremath{\mathscr{C}}_{w}\small \ensuremath{\boxtimes} T_x\mapsto\ensuremath{\mathscr{C}}_{w}$
is an $\H_1$-module homomorphism. Hence we have the exact sequence
\begin{equation} \xymatrix{0 \ar[r] & D_{< x} \ar[r] & D_{\leq x} \ar[r]_{\pi}& \H_1 \ar[r]& 0}.\end{equation}
Moreover, $\pi(\ensuremath{{\tilde{C'}}\negmedspace}_{w,x}) = \ensuremath{\mathscr{C}}_{w}$, which is clear when viewing the $\ensuremath{{\tilde{C'}}\negmedspace}_{w,x}$ as being constructed from the standard basis $CT(W_1\times\ensuremath{\leftexp{J}{W}_{2}})$. This gives an isomorphism of $W_1$-graphs $D_{\leq x} / D_{<x} \cong \H_1$.
\end{proof}
Letting $\H$ be the Hecke algebra of $(W, S)$, $J\subseteq S$, and setting $\H_1= \H_J$, $\H_2=\H$, we obtain
\begin{corollary}
The left cells of $\text{\rm Res}_J\H$ are isomorphic as $W_J$-graphs to the left cells of the regular representation of $\H_J$.
\end{corollary}
This corollary is implied by results from \cite[\textsection 5]{HY2}, but the method of proof is different. It is also a consequence of \cite[Theorem 5.2]{R}.
By the same methods we can check that the canonical basis construction for induced modules is well-behaved for nested parabolic subgroups.
\begin{proposition}\label{p nested parabolic}
Let $\H$ be the Hecke algebra of $(W,S)$, $J_2 \subseteq J_1 \subseteq S$, $E$ a left $\H_{J_2}$-module with involution $\br{\cdot}$ intertwining that of $\H_{J_2}$, and $\Gamma$ a $\br{\cdot}$-invariant basis of $E$ (like the setup in \textsection \ref{ss IC basis}). Let $\Lambda_{J_1}=\ensuremath{\mathcal{IC}}_{\H_{J_1}\ensuremath{\otimes} E}(\{\ensuremath{\tilde{T}}_{w,\gamma}:w\in W_{J_1}^{J_2},\gamma\in\Gamma\})$. Then, putting $\ensuremath{\widetilde{E}} = \H_{J_2} \ensuremath{\otimes} E$, we have
\begin{equation} \label{e nested parab}\ensuremath{\mathcal{IC}}_{\ensuremath{\widetilde{E}}}(\{T_w\small \ensuremath{\boxtimes} \gamma:w\in W^{J_2},\gamma\in\Gamma\})=\ensuremath{\mathcal{IC}}_{\ensuremath{\widetilde{E}}}(\{T_w\small \ensuremath{\boxtimes} \delta:w\in W^{J_1},\delta\in\Lambda_{J_1}\}). \end{equation}
\end{proposition}
\begin{proof}
By the same argument as in Proposition \ref{TCTTprop}, the right-hand side of (\ref{e nested parab}) can also be constructed from the standard basis
$\{T_{v_1}\small \ensuremath{\boxtimes} T_{v_2}\small \ensuremath{\boxtimes} \gamma:v_1\in W^{J_1},v_2\in W_{J_1}^{J_2},\gamma\in\Gamma\}$. It remains to check that the map $W^{J_1} \times W_{J_1}^{J_2} \to W^{J_2}$, $(v_1,v_2) \mapsto v_1 v_2$, is a bijection. As $v_1$ ranges over left coset representatives of $W_{J_1}$ and $v_2$ over left coset representatives of $W_{J_2}$ inside $W_{J_1}$, the product $v_1 v_2$ ranges over left coset representatives of $W_{J_2}$ in $W$ (this holds for any pair of nested subgroups in a group). To see that $v_1 v_2$ is a minimal coset representative, let $x \in W_{J_2}$; then $v_2 \cdot x$ is a reduced factorization, and since $v_2 x \in W_{J_1}$ and $v_1$ is minimal in $v_1 W_{J_1}$, the factorization $v_1 \cdot v_2 x$ is reduced, and thus so is $v_1 \cdot v_2 \cdot x$.
\end{proof}
\subsection{}
\label{ss locallabels}
\newcommand{\ensuremath{\Upsilon}}{\ensuremath{\Upsilon}}
The set of cells of a $W$-graph $\Gamma$ is denoted $\ensuremath{\mathfrak{C}}(\Gamma)$.
We will describe the cells of $\ensuremath{\H_{1}\tsr_{J_1}\ldots\tsr_{J_{d-1}}\H_{d}}$ using the results of the previous subsection \textsection \ref{ss TTTC}.
Let $\ensuremath{\Upsilon}$ be a cell of $\ensuremath{\H_1\tsr_J \H_{2}}$. By Theorem \ref{t HJHcells} and its proof, $\ensuremath{\Upsilon} = \{\ensuremath{\mathscr{C}}_{w_1,x_2}: w_1 \in \ensuremath{\Upsilon}'\}$ for some cell $\ensuremath{\Upsilon}'$ of $\Gamma_{W_1}$ and some $x_2 \in \ensuremath{\leftexp{J}{W}_{2}}$. We say that $\ensuremath{\Upsilon}'$ is the \emph{local label} of $\ensuremath{\Upsilon}$. By Theorem \ref{t HJHcells}, the cells $\ensuremath{\Upsilon}$ and $\ensuremath{\Upsilon}'$ are isomorphic as $W_1$-graphs, so the isomorphism type of a cell is determined by its local label. Thus $\ensuremath{\mathfrak{C}}(\ensuremath{\H_1\tsr_J \H_{2}})$ has a description via the bijection $\ensuremath{\mathfrak{C}}(\ensuremath{\H_1\tsr_J \H_{2}}) \cong \ensuremath{\mathfrak{C}}(\H_1)\times \ensuremath{\leftexp{J}{W}_{2}}$, $\ensuremath{\Upsilon} \mapsto (\ensuremath{\Upsilon}',x_2)$, taking a cell to its local label and an element of $\ensuremath{\leftexp{J}{W}_{2}}$. Unfortunately, from this description it is difficult to determine the cells of a cellular subquotient $\H_1\ensuremath{\otimes}_{J} A\Gamma$ of $\ensuremath{\H_1\tsr_J \H_{2}}$ for some $\Gamma \in \ensuremath{\mathfrak{C}}(\H_2)$ (this is a cellular subquotient of $\ensuremath{\H_1\tsr_J \H_{2}}$ by Proposition \ref{CBSubquotientprop}).
Essentially the same argument used in Theorem \ref{t HJHcells} yields a similar expression for the general case:
\begin{equation} \label{e bad cell label} \ensuremath{\mathfrak{C}}(\ensuremath{\H_{1}\tsr_{J_1}\ldots\tsr_{J_{d-1}}\H_{d}}) \cong \ensuremath{\mathfrak{C}}(\H_d)\times \leftexp{J_{1}}{W_2}\times\ \ldots \times\ \leftexp{J_{d-1}}{W_d},\end{equation} taking a cell to its local label and a tuple of right coset representatives. This of course has the same drawback: it is difficult to identify the subset of cells obtained by taking a cellular subquotient of $\H_d$. We now address this deficiency.
Put $\ensuremath{\widetilde{E}}^k = \H_{d-k}\ensuremath{\otimes}_{J_{d-k}}\ldots\ensuremath{\otimes}_{J_{d-1}}\H_{d}$. The collection of cells $\coprod_{k=0}^{d-1} \ensuremath{\mathfrak{C}}(\ensuremath{\widetilde{E}}^k)$ can be pictured as vertices of an acyclic graph $G$ (see Figure \ref{f VVe+} of \textsection \ref{ss reduced non-reduced}). The subset $\ensuremath{\mathfrak{C}}(\ensuremath{\widetilde{E}}^k)$ of vertices is the \emph{kth level} of $G$. There is an edge between $\ensuremath{\Upsilon}^k$ of level $k$ and $\ensuremath{\Upsilon}^{k+1}$ of level $k+1$ if $\ensuremath{\Upsilon}^{k+1}\in \ensuremath{\mathfrak{C}}(\H_{d-(k+1)}\ensuremath{\otimes}_{J_{d-(k+1)}}\ensuremath{\Upsilon}^k)$. Here we are using Proposition \ref{CBSubquotientprop} to identify $\H_{d-(k+1)}\ensuremath{\otimes}_{J_{d-(k+1)}}\ensuremath{\Upsilon}^k$ with a cellular subquotient of $\ensuremath{\widetilde{E}}^{k+1}$. Note that from a vertex of level $k+1$ there is a unique edge to a vertex of level $k$ since the cells of a module $\ensuremath{\widetilde{E}}^k$ are the composition factors of a composition series for $\ensuremath{\widetilde{E}}^k$, thereby yielding composition factors for the induced module of $\ensuremath{\widetilde{E}}^{k+1} = \H_{d-(k+1)}\ensuremath{\otimes}_{J_{d-(k+1)}}\ensuremath{\widetilde{E}}^k$.
A vertex $\ensuremath{\Upsilon}^k$ in the $k$-th level of $G$ has a unique path to a vertex $\ensuremath{\Upsilon}^0$ in the $0$-th level. The local labels $(\Gamma^k,\ldots,\Gamma^0)$ of the vertices in this path form the \emph{local sequence} of $\ensuremath{\Upsilon}^k$ (where $\Gamma^i$ is the local label of the vertex in the $i$-th level).
The cell of $\ensuremath{\widetilde{E}}^{d-1}$ containing $\ensuremath{{\tilde{C'}}\negmedspace}_{\mathbf{z}}, \ \mathbf{z}\in\ensuremath{W_{1}\stackrel{J_1}{\times} \ldots\stackrel{J_{d-1}}{\times}W_{d}}$ is the end of a path with local labels $(\Gamma_{1},\ldots,\Gamma_d)$, where $\Gamma_i\in \ensuremath{\mathfrak{C}}(\Gamma_{W_{i}})$ is the cell containing $\ensuremath{\mathscr{C}}_{z_i}$ and $(z_1,\ldots,z_d)$ is stuffed notation for $\mathbf{z}$.
A local sequence $(\Gamma^{d-1},\ldots,\Gamma^0)$ does not in general determine a cell of $\ensuremath{\widetilde{E}}^{d-1}$ uniquely. For instance, the cells of $\H_{J} \ensuremath{\otimes}_{J} \H$ with $J = \emptyset$ are just single canonical basis elements of $\H$, so a local sequence does not determine a cell unless the cells of $\H$ are of size $1$. We say that the tuple $(\ensuremath{\widetilde{E}}^{d-1}, \ldots, \ensuremath{\widetilde{E}}^0)$ is \emph{weakly multiplicity-free} if there is at most one cell of $\ensuremath{\widetilde{E}}^{d-1}$ with local sequence $(\Gamma^{d-1},\ldots,\Gamma^0)$ for all $\Gamma^i \in \ensuremath{\mathfrak{C}}(\Gamma_{W_{d-i}})$. Pure induction $(\H \ensuremath{\otimes}_J \H_J, \H_J)$ is trivially weakly multiplicity-free since the local label of a cell in $\H \ensuremath{\otimes}_J \H_J = \H$ is the same thing as the cell itself. It is not hard to see that $(\ensuremath{\widetilde{E}}^{d-1}, \ldots, \ensuremath{\widetilde{E}}^0)$ is weakly multiplicity-free if and only if the restriction $(\H_{J_i} \ensuremath{\otimes}_{J_i} \H_{i+1}, \H_{i+1})$ is for all $i$.
We have seen that the restriction $(\H_J \ensuremath{\otimes}_J \H, \H)$ is not always weakly multiplicity-free, but a natural question is whether it always is for $J$ of size $|S|-1$. This fails for $W$ of type $B_2$ and $B_3$ for all choices of $J$ (and presumably for $B_n$, $n > 3$). This failure may only be because cells in type $B$ do not always correspond to irreducible modules, so this question should be investigated in the unequal parameter setting. We conjecture the following for type $A$.
\begin{conjecture}\label{c weak mult free}
If $\H$ is the Hecke algebra of $(W,S) = (\S_n, \{s_1,\ldots,s_{n-1}\})$ and $|J| = |S|-1$, then the restriction $(\H_J \ensuremath{\otimes}_J \H, \H)$ is weakly multiplicity-free.
\end{conjecture}
This conjecture was verified for $n=10$, $J=S\backslash \{s_5\}$ using Magma; for $n=16$ and a few arbitrary choices of a cell $\Gamma$, we also checked that $(\H_J \ensuremath{\otimes}_J \Gamma, \Gamma)$ is weakly multiplicity-free. Strangely, it does not seem to be amenable to typical RSK or jeu de taquin style combinatorics. See \textsection\ref{ss restrict cells} for more about the combinatorics involved here.
\section{Tableau combinatorics}
\label{s tableau combinatorics}
\subsection{}
We will make the description of cells from the previous section combinatorially explicit in the case $W = \S_n$. In this section fix $S = \{s_1,\ldots,s_{n-1}\}$ and let $\H$ be the Hecke algebra of type $A_{n-1}$. As is customary, we will think of an element of $\S_n$ as a word of length $n$ in the numbers $1,\ldots,n$. We want to maintain the convention used thus far of looking only at left $\H$-modules; however, tableau combinatorics is a little nicer if a right action is used. To get around this, define the word associated to an element $w = s_{i_1} s_{i_2} \dots s_{i_k} \in W$ to be $w^{-1}(1) \ w^{-1}(2) \dots w^{-1}(n)$, where (to be completely explicit) $w^{-1}(i) = s_{i_k} s_{i_{k-1}} \dots s_{i_1}(i)$ and $s_j$ transposes $j$ and $j+1$. The left descent set of $w\in \S_n$ is $\{s_i :\ w^{-1}(i) > w^{-1}(i+1) \}$.
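To make these conventions concrete, here is a minimal Python sketch (the names \texttt{word\_of} and \texttt{left\_descents} are ours, not notation from the paper) computing the word and the left descent set of $w = s_{i_1}\cdots s_{i_k}$:

```python
def word_of(refl_indices, n):
    """Word of w = s_{i_1} s_{i_2} ... s_{i_k} in S_n, i.e. the sequence
    w^{-1}(1) ... w^{-1}(n), where s_j transposes j and j+1."""
    vec = list(range(1, n + 1))        # vec[i-1] is the current image of i
    for j in refl_indices:             # w^{-1} = s_{i_k} ... s_{i_1}: apply s_{i_1} first
        vec = [j + 1 if x == j else j if x == j + 1 else x for x in vec]
    return vec

def left_descents(word):
    """Left descent set {s_i : w^{-1}(i) > w^{-1}(i+1)}, as a set of indices i."""
    return {i for i in range(1, len(word)) if word[i - 1] > word[i]}
```

For example, `word_of([1, 2], 3)` returns `[3, 1, 2]`, and the only left descent of that word is $s_1$.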
The RSK algorithm gives a bijection between $\S_n$ and pairs of standard Young tableaux (SYT) of the same shape, sending $w\in \S_n$ to the pair $(P(w),Q(w))$, written $w\xrightarrow{RSK} (P(w),Q(w))$, where $P(w)$ and $Q(w)$ are the insertion and recording tableaux of the word of $w$ (equal to $w^{-1}(1)\ w^{-1}(2)\dots w^{-1}(n)$ by our convention). As was shown in \cite{KL}, the left cells of $\H$ are in bijection with the set of SYT, and the cell containing $\ensuremath{\mathscr{C}}_w$ corresponds to the insertion tableau of $w$ under this bijection. The cell containing those $\ensuremath{\mathscr{C}}_w$ such that $w$ has insertion tableau $P$ is the cell \emph{labeled by} $P$. Note that the shape of the tableau labeling a cell is the transpose of the usual convention for Specht modules, i.e. the trivial representation is labeled by the tableau of shape $1^n$, and the sign representation by the tableau of shape $n$.
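The insertion algorithm itself can be sketched in a few lines. The following illustrative Python (tableaux represented as lists of rows; this is Schensted row insertion, not code from any particular library) computes the pair $(P, Q)$:

```python
def rsk(word):
    """Schensted row insertion of the word, left to right.
    Returns the pair (P, Q) of insertion and recording tableaux."""
    P, Q = [], []
    for t, x in enumerate(word, start=1):
        row = 0
        while True:
            if row == len(P):                   # start a new row at the bottom
                P.append([x])
                Q.append([t])
                break
            pos = next((i for i, y in enumerate(P[row]) if y > x), None)
            if pos is None:                     # x fits at the end of this row
                P[row].append(x)
                Q[row].append(t)
                break
            P[row][pos], x = x, P[row][pos]     # bump the smallest entry > x
            row += 1
    return P, Q
```

For instance, `rsk([4, 3, 6, 1, 2, 5])` gives the insertion tableau with rows $125$, $36$, $4$.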
For the remainder of this paper let $r \in \{1,\ldots,n-1\}$, $J_r = \{s_1,\ldots,s_{r-1}\}$, $J'_{n-r} = \{s_{r+1},\ldots,s_{n-1}\}$, and $J = J_r \cup J'_{n-r}$.
\subsection{}
\label{ss induced cells}
Let $\Gamma$ be a cell of $W_J$ labeled by a pair of insertion tableaux $(T,T') \in \ensuremath{\mathcal{T}}_{1^r0^{n-r}}\times\ensuremath{\mathcal{T}}_{0^r1^{n-r}}$, where $\ensuremath{\mathcal{T}}_\alpha$ is the set of tableaux with $\alpha_i$ entries equal to $i$. Here we are using the easy fact, proven carefully in \cite{R}, that a cell of $\Gamma_{W_1\times W_2}$ is the same as a cell of $\Gamma_{W_1}$ together with one of $\Gamma_{W_2}$. We will describe the cells of $\H \ensuremath{\otimes}_J A\Gamma$.
For any $w\in W$, in the notation of \textsection \ref{ss coxeter group}, $\rj{w}{J} = (a, b)\in W_{J_r}\times W_{J'_{n-r}}$, where $a$ (resp. $b$) is the permutation of numbers $1,\ldots,r$ (resp. $r+1, \ldots,n$) obtained by taking the subsequence of the word of $w$ consisting of those numbers. For example, if $n=6$, $w = 436125$, and $r = 3$, then $a = 312$ and $b = 465$.
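This splitting of the word into its $\leq r$ and $> r$ subsequences is a one-liner; an illustrative Python sketch (the name is ours):

```python
def split_word(word, r):
    """Subsequences of a word of S_n consisting of the entries <= r
    (a permutation of 1..r) and the entries > r (of r+1..n)."""
    return [x for x in word if x <= r], [x for x in word if x > r]
```

On the example above, `split_word([4, 3, 6, 1, 2, 5], 3)` returns `([3, 1, 2], [4, 6, 5])`, i.e. $a = 312$ and $b = 465$.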
The induced module $\H\ensuremath{\otimes}_{J}A\Gamma$ has canonical basis $\{\ensuremath{\mathscr{C}}_w: P(\rj{w}{J}) = (T,T')\}$, where we define $P(a, b)$ for $(a,b)\in W_{J_r}\times W_{J'_{n-r}}$ to be $(P(a),P(b))$.
For any tableau $P$, let $\text{\rm jdt}(P)$ denote the unique straight-shape tableau in the jeu de taquin equivalence class of $P$. From the most basic properties of insertion and jeu de taquin it follows that if $\rj{w}{J} = (a, b)$, then $P(w)_{\leq r} = P(a),\ P(w)_{> r} = \text{\rm jdt}(P(b))$, where $P_{\leq r}$ (resp. $P_{> r}$) is the (skew) subtableau of $P$ with entries $1,\ldots,r$ (resp. $r+1,\ldots,n$). See, for instance, \cite[A1.2]{St} for more on this combinatorics. We now have the following description of cells.
\begin{proposition}
\label{p cells of induce}
With $\Gamma$ labeled by $T,T'$ as above, the cells of $\H\ensuremath{\otimes}_{J}A\Gamma \subseteq \H$ are those labeled by $P$ such that $P_{\leq r} = T$, $\text{\rm jdt}(P_{> r}) = T'$.
\end{proposition}
\begin{example}
Let $n=6$, $r=3$, and $T,T'=\left({\tiny\young(12,3),\young(46,5)}\right)$. Then the cells of $\H\ensuremath{\otimes}_{J}A\Gamma$ are labeled by
$$ {\tiny\young(1246,35)},\ {\tiny\young(124,356)},\ {\tiny\young(1246,3,5)},\ {\tiny\young(126,34,5)},\ {\tiny\young(124,36,5)},\ {\tiny\young(12,34,56)},\ {\tiny\young(126,3,4,5)},\ {\tiny\young(12,36,4,5)}.$$
\end{example}
This is, of course, the Littlewood-Richardson rule. The combinatorics of the Littlewood-Richardson rule matches beautifully with the machinery of canonical bases. This version of the Littlewood-Richardson rule is due to Sch\"utzenberger, and its connection with canonical bases was also shown in \cite{R}.
Let $V_{\lambda}$ be the Specht module corresponding to the partition $\lambda$, and put $\mu = \text{\rm sh}(T),\ \nu=\text{\rm sh}(T')$.
It was established in \cite{KL} that all left cells of $\H$ isomorphic at $\u=1$ to $V_{\lambda}$ are isomorphic as $W$-graphs. This, together with the fact that the $W$-graph of Theorem \ref{t HY} depends only on the isomorphism type of the $W_J$-graph $\Gamma$, shows that the multiplicity of $V_{\lambda}$ in $\text{\rm Ind}^W_{W_J}
(V_{\mu}\boxtimes V_{\nu})$ is given by the combinatorics above and is independent of the chosen insertion tableaux $T, T'$.
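The identity $P(w)_{\leq r} = P(a)$ used above is easy to test mechanically. The following illustrative Python sketch (our names; tableaux as lists of rows) computes insertion tableaux and restricts to the entries $\leq r$, which form a prefix of each row since rows are increasing:

```python
def insertion_tableau(word):
    """Insertion tableau P(word) via Schensted row insertion."""
    P = []
    for x in word:
        row = 0
        while True:
            if row == len(P):
                P.append([x])
                break
            pos = next((i for i, y in enumerate(P[row]) if y > x), None)
            if pos is None:
                P[row].append(x)
                break
            P[row][pos], x = x, P[row][pos]
            row += 1
    return P

def restrict_leq(P, r):
    """Subtableau P_{<= r}: keep the entries <= r (a prefix of each row)."""
    rows = [[y for y in row if y <= r] for row in P]
    return [row for row in rows if row]

# For w = 436125 and r = 3, the subword a is 312 and P(w)_{<=3} = P(a).
w, r = [4, 3, 6, 1, 2, 5], 3
a = [x for x in w if x <= r]
assert restrict_leq(insertion_tableau(w), r) == insertion_tableau(a)
```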
\subsection{}
\label{ss restrict cells}
Let $\Gamma$ be a cell of $\H$ labeled by $P$ with $\text{\rm sh}(P) = \lambda$. We will describe the cells of $\text{\rm Res}_J A \Gamma$.
For any $w\in W$, $\lj{w}{J} = (a, b)\in W_{J_r}\times W_{J'_{n-r}}$, where $a$ (resp. $b$) is the permutation of numbers $1,\ldots,r$ (resp. $r+1, \ldots,n$) with the same relative order as $w^{-1}(1) \ w^{-1}(2) \dots w^{-1}(r)$ (resp. $w^{-1}(r+1) \dots w^{-1}(n)$). For example, if $n=6$, $w = 436125$, and $r = 3$, then $a = 213$ and $b = 456$.
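An illustrative sketch of these relative-order flattenings (the name is ours; `word[:r]` is the first $r$ letters of the word):

```python
def flatten(word, r):
    """Relative-order flattenings: the first r letters as a permutation of
    1..r and the remaining letters as a permutation of r+1..n."""
    pre, suf = word[:r], word[r:]
    a = [sorted(pre).index(x) + 1 for x in pre]
    b = [sorted(suf).index(x) + r + 1 for x in suf]
    return a, b
```

On the example above, `flatten([4, 3, 6, 1, 2, 5], 3)` returns `([2, 1, 3], [4, 5, 6])`, i.e. $a = 213$ and $b = 456$.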
Specifying a cell $\ensuremath{\Upsilon}$ of $\text{\rm Res}_J \H$ is equivalent to giving $x \in \leftexp{J}{W}$ and $(T,T') \in \ensuremath{\mathcal{T}}_{1^r0^{n-r}}\times\ensuremath{\mathcal{T}}_{0^r1^{n-r}}$. Under this correspondence, $\ensuremath{\Upsilon} = \{\ensuremath{\mathscr{C}}_w: P(\lj{w}{J}) = (T,T'), \rJ{w}{J} = x\}$.
Given $\mu\vdash r,\nu\vdash n-r$, define
\begin{equation} \mu\sqcup\nu =(\nu_1+\mu_1,\nu_2+\mu_1,\ldots,\nu_{\ell(\nu)}+\mu_1,\mu_1,\mu_2,\ldots,\mu_{\ell(\mu)}), \end{equation}
where $\ell(\mu)$ is the number of parts of $\mu$.
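A sketch of the operation $\mu \sqcup \nu$ (illustrative Python; partitions represented as weakly decreasing lists):

```python
def sqcup(mu, nu):
    """The partition mu ⊔ nu: the parts of nu shifted by mu_1, followed by mu."""
    return [p + mu[0] for p in nu] + list(mu)
```

For example, `sqcup([2, 1], [2, 1])` returns `[4, 3, 2, 1]`; the result is always a partition since $\nu_{\ell(\nu)} + \mu_1 \geq \mu_1$.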
Expressing the tableaux on $1,\ldots,r$ and $r+1,\ldots,n$ that label the cells of $\text{\rm Res}_JA\Gamma$ in terms of $P$ is tricky: first define the set
\begin{equation} X := \{(T,T') :\ |T| = r, |T'| = n-r, \text{\rm jdt}(T T') = P\},\end{equation}
where $T T'$ is the tableau of shape $\mu \sqcup\nu / \rho$ ($\text{\rm sh}(T)=\mu$, $\text{\rm sh}(T')=\nu$, $\rho = {\mu_1}^{\ell(\nu)}$) obtained by adding $T'$ to the top right of $T$.
The multiset of local labels of the cells of $\text{\rm Res}_JA\Gamma$ (Conjecture \ref{c weak mult free} says this is actually a set) is obtained by projecting each element of $X$ onto the set $\ensuremath{\mathcal{T}}_{1^r0^{n-r}}\times\ensuremath{\mathcal{T}}_{0^r1^{n-r}}$ by replacing the entries of $T$ (resp. $T'$) by $1,\ldots,r$ (resp. $r+1,\ldots,n$) so that relative order is preserved.
\begin{example}
If $n=6$, $r=3$, and $P={\tiny\young(125,36,4)}$, then $X$ is
$$\left\{ \left({\tiny\young(36,4),\young(125)}\right), \left({\tiny\young(1,3,4),\young(25,6)}\right), \left({\tiny\young(16,4),\young(25,3)}\right), \left({\tiny\young(146),\young(25,3)}\right), \left({\tiny\young(13,4),\young(25,6)}\right), \left({\tiny\young(13,4),\young(2,5,6)}\right) \right\}.$$
Hence the cells of $\text{\rm Res}_JA\Gamma$ have local labels
$$\left({\tiny\young(13,2),\young(456)}\right), \left({\tiny\young(1,2,3),\young(45,6)}\right), \left({\tiny\young(13,2),\young(46,5)}\right), \left({\tiny\young(123),\young(46,5)}\right), \left({\tiny\young(12,3),\young(45,6)}\right), \left({\tiny\young(12,3),\young(4,5,6)}\right).$$
\end{example}
A slightly better description of the cells of $\text{\rm Res}_J A\Gamma$ is as follows. Fix $\mu \vdash r$, $\nu \vdash n-r$ such that $\lambda \subseteq \mu \sqcup \nu$, and $B$ a tableau of the rectangle shape $\rho := {\mu_1}^{\ell(\nu)}$. Now consider the jeu de taquin growth diagrams with lower left row corresponding to $P$, lower right row corresponding to $B$, and the partition at the top equal to $\mu \sqcup \nu$ (see, e.g., \cite[A1.2]{St}). The upper right row of such a growth diagram necessarily corresponds to some $T T'$ such that $\text{\rm jdt}(TT') = P$, and the upper left row corresponds to some $A$ such that $\text{\rm jdt}(A) = B$. Since a growth diagram is constructed uniquely from either of its sides, we obtain the bijection
\begin{equation} \left\{(T,T') :\ \text{\rm sh}(T)=\mu,\text{\rm sh}(T')=\nu,\text{\rm jdt}(T T') = P\right\} \cong \{A :\ \text{\rm sh}(A) = \mu\sqcup\nu/\lambda,\text{\rm jdt}(A) = B\}.\end{equation}
From an $A$ in the set above, one obtains the corresponding $(T,T')$ as follows: perform jeu de taquin on $P$ in the order specified by the entries of $A$ to obtain a tableau of shape $\mu\sqcup\nu /\rho$; split this into a tableau of shape $\mu$ and one of shape $\nu$. This can be used to give another description of the set $X$. This description has the advantage that the same choice of $B$ can be used for all tableaux $P$ of shape $\lambda$.
\subsection{}
\label{ss sb}
If $r=1$ or $r = n-1$, then restricting and inducing are multiplicity-free. Therefore, in order to determine a cell of $\ensuremath{\H_{1}\tsr_{J_1}\ldots\tsr_{J_{d-1}}\H_{d}}$, we only need to keep track of the shapes of the tableaux rather than the tableaux themselves, except at the first step $\ensuremath{\mathfrak{C}}(\H_d)$. However, it is often convenient for working out concrete examples to keep track of all tableaux.
If $r=1$ or $r=n-1$, then
the cells of $\text{\rm Res}_JA\Gamma$, with $\Gamma$ labeled by $P$, can be described explicitly. If $r=1$ (resp. $r=n-1$), they are labeled by the tableaux obtained from $P$ by column-uninserting (resp. row-uninserting) an outer corner
and replacing the entries of the result with $2,\ldots,n$ (resp. $1,\ldots,n-1$) so that relative order is preserved.
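Reverse row insertion (the $r = n-1$ case) can be sketched as follows. This is illustrative Python with our names; tableaux are lists of rows, and `corner_row` selects which outer corner to uninsert:

```python
def row_uninsert(P, corner_row):
    """Reverse row insertion from the outer corner ending row corner_row.
    Returns the smaller tableau and the ejected letter."""
    P = [row[:] for row in P]
    x = P[corner_row].pop()
    if not P[corner_row]:
        P.pop(corner_row)
    for row in range(corner_row - 1, -1, -1):
        # The rightmost entry < x exists since columns strictly increase.
        pos = max(i for i, y in enumerate(P[row]) if y < x)
        P[row][pos], x = x, P[row][pos]
    return P, x

def relabel(P):
    """Replace the entries by 1, 2, ... preserving relative order."""
    rank = {v: i + 1 for i, v in enumerate(sorted(v for row in P for v in row))}
    return [[rank[v] for v in row] for row in P]
```

For the tableau with rows $125$, $36$, $4$, uninserting the corner of the bottom row gives rows $135$, $46$ with the letter $2$ ejected; relabelling then gives rows $124$, $35$.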
We will work with both $r=1$ and $r=n-1$ in this paper because tableau combinatorics is easier with $r=n-1$, but $r =1$ is preferable for our work in \textsection\ref{s tensor V} and beyond. It is therefore convenient to be able to go back and forth between these two conventions.
On the level of algebras, this is done by replacing any $\H_K$-module by the $\H_{w_0 K w_0}$-module obtained by twisting by the isomorphism $\H_{w_0 K w_0} \cong \H_K, T_{s_i} \mapsto T_{s_{n-i}}$, where $w_0$ is the longest element of $W$.
Combinatorially, this corresponds to replacing a word $x_1 x_2 \dots x_n$ with $x^\sharp := n+1-x_1\ n+1-x_2 \dots n+1-x_n$. The local label of a cell changes from $T$ to $\text{\rm evac}(T)$, where $T \mapsto \text{\rm evac}(T)$ is the Sch\"utzenberger involution (see, e.g., \cite[A1.2]{St}). More precisely, the local label $(T,T') \in \ensuremath{\mathcal{T}}_{1^j0^{n-j}}\times\ensuremath{\mathcal{T}}_{0^j1^{n-j}}$ of a cell of an $\H_{S\backslash s_j}$-module becomes $(\text{\rm evac}(T')^*,\text{\rm evac}(T)^*)$, where $\text{\rm evac}(T')^*$ (resp. $\text{\rm evac}(T)^*$) is obtained from $\text{\rm evac}(T')$ by adding a constant to all entries so that $\text{\rm evac}(T')^* \in \ensuremath{\mathcal{T}}_{1^{n-j}0^{j}}$ (resp. $\text{\rm evac}(T)^* \in \ensuremath{\mathcal{T}}_{0^{n-j}1^{j}}$).
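The involution on words is immediate to sketch (illustrative Python, our name):

```python
def sharp(word):
    """The word of the twisted element: replace each letter x by n + 1 - x."""
    n = len(word)
    return [n + 1 - x for x in word]
```

For example, `sharp([3, 1, 2])` returns `[1, 3, 2]`; applying `sharp` twice recovers the original word, and a left descent at position $i$ becomes one at position $n-i$.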
\subsection{}
\label{ss affinecells}
In this subsection we give a combinatorial description of cells of a certain submodule of $\text{\rm Res}_\H\eH$, where $\eH$ is the extended affine Hecke algebra of type $A$. We digress to introduce this object. See \cite{X}, \cite{H} for a more thorough introduction.
First of all, everything we have done so far for Coxeter groups also holds for extended Coxeter groups. An extended Coxeter group, defined from a Coxeter group $(W, S)$ and an abelian group $\Pi$ acting by automorphisms on $(W, S)$, is the semi-direct product $\Pi \ltimes W$, denoted $\ensuremath{{W_e}}$. The length function and partial order on $W$ extend to $\ensuremath{{W_e}}$: $\ell(\pi v) = \ell(v)$, and $\pi v \leq \pi' v'$ if and only if $\pi = \pi'$ and $v \leq v'$, where $\pi, \pi' \in \Pi$, $v, v' \in W$. The definitions of left and right descent sets, reduced factorizations, the $\br{\cdot}$-involution, and the definition of the Hecke algebra (\ref{e Hecke algebra def}) from \textsection\ref{s Hecke algebra} carry over identically. The Hecke algebra elements $T_\pi$ for $\pi \in \Pi$ will be denoted simply by $\pi$; note that these are $\br{\cdot}$-invariant.
Although it is possible to allow parabolic subgroups to be extended Coxeter groups, we define a parabolic subgroup of $\ensuremath{{W_e}}$ to be an ordinary parabolic subgroup of $W$ to simplify the discussion (this is the only case we will need later in the paper). With this convention, each coset of a parabolic subgroup $\ensuremath{{W_e}}_J$ contains a unique element of minimal length.
In the generality of extended Coxeter groups, a $\ensuremath{{W_e}}$-graph $\Gamma$ must satisfy $\pi\gamma \in \Gamma$ for all $\pi \in \Pi$, $\gamma \in \Gamma$, in addition to (\ref{Wgrapheq}). The machinery of IC bases carries over without change. Everything we have done so far holds in this setting; the only thing that needs some comment is Theorem \ref{t HY}. Presumably the proof carries over without change; however, it is also easy to deduce this from Theorem \ref{t HY} for ordinary Coxeter groups: use the fact that $\ensuremath{\tilde{P}}_{\pi x, \delta, \pi v, \gamma} = \ensuremath{\tilde{P}}_{x, \delta, v, \gamma}$ to deduce that, with the definition (\ref{e mudef}) for $\mu$, $\mu(\pi x, \delta, \pi v, \gamma) = \mu(x, \delta, v, \gamma)$ ($x, v \in W, \pi \in \Pi$); the identity $\ensuremath{{\tilde{C'}}\negmedspace}_{\pi v, \gamma} = \pi \ensuremath{{\tilde{C'}}\negmedspace}_{v, \gamma}$ together with the theorem for ordinary Coxeter groups then gives it for extended Coxeter groups.
Let $W, \ensuremath{{W_a}}$ be the Weyl groups of type $A_{n-1}, \tilde{A}_{n-1}$ respectively. Put $K_j = \{s_0, s_1, \ldots,\hat{s}_j,\ldots, s_{n-1}\}$. Let $Y\cong \ensuremath{\mathbb{Z}}^n,\ Q\cong \ensuremath{\mathbb{Z}}^{n-1}$ be the weight lattice, root lattice of $GL_n$. The extended affine Weyl group $\ensuremath{{W_e}}$ is both $Y \rtimes W$ and $\Pi \ltimes \ensuremath{{W_a}}$ where $\Pi \cong Y / Q \cong \ensuremath{\mathbb{Z}}$. For $\lambda \in Y$, let $y^\lambda$ be the corresponding element of $\ensuremath{{W_e}}$ and let $y_i = y^{\epsilon_i}$, where $\epsilon_1, \ldots, \epsilon_n$ is the standard basis of $Y$. Also let $\pi$ be the generator of $\Pi$ such that $s_i\pi = \pi s_{i-1}$, where subscripts are taken mod $n$. The isomorphism $Y \rtimes W \cong \Pi \ltimes \ensuremath{{W_a}}$ is determined by
\begin{equation} \label{e Y W=Pi Wa} y_i \to s_{i-1} \ldots s_1 \pi s_{n-1} \ldots s_i, \end{equation}
and the condition that $W \hookrightarrow Y \rtimes W \cong \Pi \ltimes \ensuremath{{W_a}}$ identifies $W$ with $\ensuremath{{W_a}}_{K_0}$ via $s_i \mapsto s_i$, $i \in [n]$.
Another description of $\ensuremath{{W_e}}$, due to Lusztig, is as follows. The group $\ensuremath{{W_e}}$ can be identified with the group of permutations $w: \ensuremath{\mathbb{Z}} \to \ensuremath{\mathbb{Z}}$ satisfying $w(i+n) = w(i)+n$ and $\sum_{i = 1}^n (w(i) - i) \equiv 0 \mod n$. The identification takes $s_i$ to the permutation transposing $i+kn$ and $i+1+kn$ for all $k \in \ensuremath{\mathbb{Z}}$, and takes $\pi$ to the permutation $k \mapsto k+1$ for all $k \in \ensuremath{\mathbb{Z}}$. We can then express an element $w$ of $\ensuremath{{W_e}}$ in \emph{window notation} as the sequence of numbers $w^{-1}(1) \dots w^{-1}(n)$, also referred to as just the word of $w$. For example, if $n = 4$ and $w = \pi^2 s_2 s_0 s_1$, then the word of $w$ is $\text{-}3203$.
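Lusztig's permutation model is easy to experiment with on a computer. The following is a minimal Python sketch (not part of the paper's formal development; the function names and the convention that a product acts with its rightmost factor applied first are our assumptions), reproducing the window notation $\text{-}3203$ of $w = \pi^2 s_2 s_0 s_1$ for $n = 4$ computed above:

```python
# Lusztig's permutation model for the extended affine Weyl group of
# type A~_{n-1}: s_i swaps i+kn and i+1+kn for all k, and pi is k -> k+1.

def s(i, n):
    """The generator s_i as a permutation of the integers."""
    def f(x):
        r = (x - i) % n
        if r == 0:
            return x + 1
        if r == 1:
            return x - 1
        return x
    return f

def window(word, n):
    """Window notation w^{-1}(1), ..., w^{-1}(n), where w is the product
    of the listed generators, each ('s', i) or ('pi', d).  Since each s_i
    is an involution and (pi^d)^{-1} = pi^{-d}, applying the inverses of
    the factors to x in left-to-right order computes w^{-1}(x)."""
    def w_inv(x):
        for kind, j in word:
            x = s(j, n)(x) if kind == 's' else x - j
        return x
    return [w_inv(x) for x in range(1, n + 1)]

# The example from the text: n = 4 and w = pi^2 s_2 s_0 s_1.
print(window([('pi', 2), ('s', 2), ('s', 0), ('s', 1)], 4))  # [-3, 2, 0, 3]
```

One can also read off the defining periodicity $w(i+n) = w(i)+n$ directly from the window.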
Let $\ensuremath{Y_{\geq 0}} = \ensuremath{\mathbb{Z}}^n_{\geq 0}$ and $\ensuremath{{W_e^+}} = \ensuremath{Y_{\geq 0}} \rtimes W$. There is a corresponding subalgebra $\pH$ of $\eH$, equal to both $A \{ T_w : w \in \ensuremath{{W_e^+}} \}$ and $A \{ \ensuremath{\mathscr{C}}_w : w \in \ensuremath{{W_e^+}} \}$ \cite{B}. Let $\Gamma$ be a $W$-graph and put $E = A\Gamma$. The \emph{positive, degree $d$} part of $\text{\rm Res}_\H \eH \ensuremath{\otimes}_\H E$ is
\begin{equation} (\pH \ensuremath{\otimes}_\H E)_d := A\{\ensuremath{{\tilde{C'}}\negmedspace}_{y^\lambda v,\gamma}:\lambda \in \ensuremath{Y_{\geq 0}}, |\lambda| = d, v \in W \text{ such that } y^\lambda v \in \ensuremath{{W_e}}^{K_0}, \gamma\in\Gamma\}. \end{equation}
\begin{proposition}
$(\pH \ensuremath{\otimes}_\H E)_d$ is a cellular submodule of $\text{\rm Res}_\H \eH \ensuremath{\otimes}_\H E$.
\end{proposition}
\begin{proof}
The $A$-basis above can be rewritten as $\{ \pi^d \ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} : \pi^d w \in \ensuremath{{W_e^+}}, w \in \ensuremath{{W_a}}, \gamma \in \Gamma \}$. It is easy to see that the $A$-span of this basis is stable under the left action of $\H$, given that $\pH$ is a subalgebra of $\eH$ containing $\H$.
\end{proof}
Let $\Gamma$ be a cell of $\H$ labeled by $T$ and $\ensuremath{\widehat{E}}^1= (\pH \ensuremath{\otimes}_{\H} A \Gamma)_1$. We now return to give a combinatorial description of the cells of $\ensuremath{\widehat{E}}^1$. The restriction $(\text{\rm Res}_\H \eH \ensuremath{\otimes}_\H E, \eH \ensuremath{\otimes}_\H E)$ is not weakly multiplicity-free, so we have to use the description (\ref{e bad cell label}). In this case, we have found it most convenient to use a hybrid of the description in (\ref{e bad cell label}) and local labels, which we now describe.
Given $x \in \ensuremath{{W_e}}$, define $P(x)$ to be the insertion tableau of the word of $x$. Since $\lj{x}{K_0}$ is the permutation of $1, \ldots, n$ with the same relative order as the word of $x$, $P(\lj{x}{K_0})$ is obtained from $P(x)$ by replacing the entries with $1,\ldots,n$ and keeping relative order the same.
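For readers who wish to experiment, here is a short Python sketch of the insertion tableau $P(x)$ via Schensted row insertion, together with the relabeling used above (function names are ours; entries of the word are assumed distinct):

```python
from bisect import bisect_right

def insertion_tableau(word):
    """Insertion tableau of a word with distinct entries (row insertion)."""
    P = []
    for a in word:
        for row in P:
            j = bisect_right(row, a)      # leftmost entry strictly greater than a
            if j == len(row):
                row.append(a)
                break
            a, row[j] = row[j], a         # bump the entry into the next row
        else:
            P.append([a])                 # a fell off the bottom: new row
    return P

def standardize(word):
    """Relabel the entries by 1, ..., n keeping the same relative order."""
    rank = {v: i + 1 for i, v in enumerate(sorted(word))}
    return [rank[v] for v in word]

print(insertion_tableau([3, 4, 6, 5, 1, 2]))          # [[1, 2, 5], [3, 4], [6]]
print(insertion_tableau(standardize([-3, 2, 0, 3])))  # [[1, 2, 4], [3]]
```

The two prints illustrate the claim: the insertion tableau of a standardized word is the standardization of the insertion tableau of the original word.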
Let $a_k = s_{k-1}\ldots s_{1}$ for $k\in \{2,\ldots,n\}$, $a_1 = 1$ be the minimal left coset representatives of $W_{J'_{n-1}}$. Then $\ensuremath{\widehat{E}}^1 = A \{\ensuremath{{\tilde{C'}}\negmedspace}_{a_k \pi, w}: k \in [n], P(w) = T \}$. In this case, define the local label of the cell containing $\ensuremath{{\tilde{C'}}\negmedspace}_{a_k \pi, w}$ to be $P(a_k \pi w)$. One caveat: if we then form an induced module $\H_1 \ensuremath{\otimes}_{J_1} \ensuremath{\widehat{E}}^1$, the local labels of $\ensuremath{\widehat{E}}^1$ should first be converted to the tableaux $P(\lj{(a_k \pi w)}{K_0})$ before computing local labels of $\H_1 \ensuremath{\otimes}_{J_1} \ensuremath{\widehat{E}}^1$ (see Figure \ref{f VVe+ affine} of \textsection\ref{ss reduced non-reduced}).
Combinatorially, the cells of $\ensuremath{\widehat{E}}^1$ may be described as follows. Let $w \in W$ with $P(w) = T$ and define $Q = Q(w)$. Let $\lj{w}{J_{n-1}}^*$ be the word obtained from $w$ by deleting its last number (see Example \ref{ex affine insert}). Then $\lj{w}{J_{n-1}}^*\xrightarrow{RSK}(T^-,Q_{\leq n-1})$, where $T^-$ is obtained from $T$ by uninserting the square $Q\backslash Q_{\leq n-1}$; let $c$ be the number uninserted. Write $a_k\pi w$ in window notation, which is $\lj{w}{J_{n-1}}^*$ with a $c-n$ inserted in the $k$-th spot. Let $Q^+$ be the tableau obtained by column-inserting $k$ into the tableau obtained from $Q_{\leq n-1}$ by replacing entries with $\{1,\ldots,k-1,k+1,\ldots,n\}$ and keeping the same relative order. We have $a_k\pi w\xrightarrow{RSK}(T^+,Q^+)$, where $T^+$ is $\text{\rm jdt}(T^-,Q^+\backslash Q_{\leq n-1})$ with the number $c-n$ added to the top left corner (so that the resulting tableau has a straight shape).
This implies the following result about the cells of $\ensuremath{\widehat{E}}^1$.
\begin{proposition}
The local labels of the cells of $\ensuremath{\widehat{E}}^1$ are those tableaux obtained from $T$ by uninserting some outer corner, then performing jeu de taquin into some inner corner, and finally filling the resulting empty box in the top left corner with the entry $c-n$, where $c$ is the entry bumped out in the uninsertion.
\end{proposition}
\begin{example}
\label{ex affine insert}
For the element $(a_3 \pi, 346512) \in \ensuremath{{W_e}}^{K_0} \times W$, the insertion and recording tableaux discussed above are
\hoogte=10pt
\breedte=9pt
\dikte=0.3pt
\begin{equation} \begin{array}{cccc}
& a_3\pi w & \lj{w}{J_{n-1}}^* & w\\
& 34\ng 4651 & 34651 & 346512\\
P &
{\tiny\young(\ensuremath{\text{\ng 4}} 15,34,6)}
& {\tiny\young(145,3,6)} & {\tiny\young(125,34,6)}\\
\ \\
Q & {\tiny\young(124,35,6)} & {\tiny\young(123,4,5)} & {\tiny\young(123,46,5)}
\end{array} \end{equation}
\end{example}
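The uninsertion step in this example can be checked mechanically. Below is a minimal Python sketch of reverse row insertion (our function names; we assume the standard reverse-bumping rule), recovering $T^-$ and $c = 2$ from $P(w)$:

```python
from bisect import bisect_left

def uninsert(P, r):
    """Reverse row insertion: remove the last box of row r (0-indexed) of a
    tableau with distinct entries and reverse-bump upward.  Returns the new
    tableau and the entry c ejected from the first row."""
    P = [row[:] for row in P]             # work on a copy
    a = P[r].pop()
    if not P[r]:
        del P[r]
    for row in reversed(P[:r]):
        j = bisect_left(row, a) - 1       # rightmost entry strictly smaller than a
        a, row[j] = row[j], a             # swap, then carry the bumped entry up
    return P, a

# Uninserting the box Q \ Q_{<= n-1} (the end of the second row) from
# P(w) = [[1,2,5],[3,4],[6]] of the example:
T_minus, c = uninsert([[1, 2, 5], [3, 4], [6]], 1)
print(T_minus, c)                         # [[1, 4, 5], [3], [6]] 2
```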
\section{Computations of some $\ensuremath{\mathscr{C}}_w$}
\label{s computation c_w}
Suppose in what follows that $r=n-1$. Let $b_k := s_k\ldots s_{n-1}$ for $k\in [n-1]$ and $b_n = 1$ be the elements of $W^J$. It is possible to write down explicitly an element from each cell of $\H\ensuremath{\otimes}_{J}\H$ in terms of the canonical basis of $\H$. This is the content of the following theorem, which we include mainly for its application in the next section. It is also quite interesting in its own right, however, given that it does not seem to be known how to write down an element from each cell of $\H$ in terms of the $T_w$'s.
\begin{proposition}\label{p easyCw}
Let $\Gamma$ be a $W_J$-graph, and $\gamma\in\Gamma$ satisfying $K:=\{s_k,\ldots,s_{n-2}\}\subseteq L(\gamma)$. Then
$$ \ensuremath{{\tilde{C'}}\negmedspace}_{b_k,\gamma}=\frac{1}{[n-k]!}\ensuremath{\mathscr{C}}_{b_k\lj{w_0}{K}}\small \ensuremath{\boxtimes}\gamma = \left(T_{b_{k}} + \ensuremath{u^{-1}} T_{b_{k+1}} + \ldots + \u^{k-n}T_{b_n}\right)\small \ensuremath{\boxtimes}\gamma. $$
\end{proposition}
\begin{proof}
The right hand equality follows from $K\subseteq L(\gamma)$ and the well-known identity $\ensuremath{\mathscr{C}}_{b_k\lj{w_0}{K}} = \ensuremath{\mathscr{C}}_{\lj{w_0}{K \cup s_{n-1}}} = \left(T_{b_k} + \ensuremath{u^{-1}} T_{b_{k+1}} + \ldots + \u^{k-n}T_{b_n}\right)\ensuremath{\mathscr{C}}_{\lj{w_0}{K}}$ ($\lj{w_0}{K}$ is the longest element of $W_K$). Once this is known, we have produced an element that is both $\br{\cdot}$-invariant (being equal to $\frac{1}{[n-k]!}\ensuremath{\mathscr{C}}_{b_k\lj{w_0}{K}}\small \ensuremath{\boxtimes}\gamma$) and congruent to $T_{b_k}\small \ensuremath{\boxtimes}\gamma\mod\ensuremath{u^{-1}}\L$ (being equal to $\left(T_{b_{k}} + \ensuremath{u^{-1}} T_{b_{k+1}} + \ldots + \u^{k-n}T_{b_n}\right)\small \ensuremath{\boxtimes}\gamma$).
\end{proof}
\begin{theorem}\label{specialelementstheorem}
Let $\ensuremath{\Upsilon}$ be the cell of $\H\ensuremath{\otimes}_{J}\H$ determined by $\lambda^{(1)},\mu,P$, where $\lambda^{(1)},\mu,\text{\rm sh}(P)$ are partitions of $n,n-1,$ and $n$ respectively satisfying $\mu \subseteq \lambda^{(1)},\text{\rm sh}(P)$. Then $\ensuremath{\Upsilon}$ contains an element
$$ \ensuremath{{\tilde{C'}}\negmedspace}_{b_{k'},w} = \left(T_{b_{k'}} + \ensuremath{u^{-1}} T_{b_{k'+1}} + \ldots + \u^{-k+1}T_{b_n}\right)\small \ensuremath{\boxtimes} \ensuremath{\mathscr{C}}_w,$$
where the $k$-th row of $\lambda^{(1)}$ contains the square $\lambda^{(1)}/\mu$, $k' = n + 1 - k$, and $w$ satisfies $\{s_{k'},\ldots,s_{n-2}\}\subseteq L(w)$.
\end{theorem}
\begin{proof}
To construct a desired $w$, let $Q$ be any tableau of shape $\lambda^{(2)} = \text{\rm sh}(P)$ such that $Q_{< n}$ has a $k'-1+r$ in the last box of the $r$-th row for $r \in \{1,\ldots,k-1\}$ (see Example \ref{e special element}) and $Q_{\geq n}$ is the square $\lambda^{(2)}/\mu$. Define $w$ by $w\xrightarrow{RSK}(P,Q)$.
Consider the element $(b_{k'},w) = \mathbf{z}\in W \stackrel{J}{\times} W$, which is $(b_{k'}\lj{w}{J},w)$ in stuffed notation. Now $Q(b_{k'}\lj{w}{J}) = P(\lj{w}{J}^{-1}s_{n-1}\dots s_{k'}) = Q_{<n}^* \leftarrow k'$, where $Q_{<n}^*$ is $Q_{<n}$ with $1$ added to all numbers $\geq k'$, and $T \leftarrow a$ denotes the row-insertion of $a$ into $T$. By construction of $Q_{<n}$, the bumping path of inserting $k'$ into $Q_{<n}^*$ consists of the last square in rows $1,\ldots,k$, the last square in the $k$-th row being the newly added square. Therefore, $\ensuremath{{\tilde{C'}}\negmedspace}_{\mathbf{z}}$ is contained in $\ensuremath{\Upsilon}$ because the shape of $Q_{<n}^* \leftarrow k'$ is $\lambda^{(1)}$.
Remembering our convention for the word of $w$, the left descent set $L(w)$ can be read off from $Q$: it is the set of $s_i$ such that $i+1$ occurs in a row below the row containing $i$. In particular, $K := \{s_{k'},\ldots,s_{n-2}\}\subseteq L(w)$. The theorem follows from Proposition \ref{p easyCw}.
\end{proof}
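The rule used in the proof for reading $L(w)$ off $Q$ is the standard correspondence between descents of a word and descents of its recording tableau. The following Python sketch (our names, assuming the word convention of the text) checks this on the word $521634$:

```python
from bisect import bisect_right

def rsk(word):
    """RSK row insertion: returns the insertion and recording tableaux."""
    P, Q = [], []
    for step, a in enumerate(word, 1):
        r = 0
        while True:
            if r == len(P):               # a fell off the bottom: new row
                P.append([a]); Q.append([step]); break
            row = P[r]
            j = bisect_right(row, a)
            if j == len(row):             # a goes at the end of this row
                row.append(a); Q[r].append(step); break
            a, row[j] = row[j], a         # bump into the next row
            r += 1
    return P, Q

def row_of(Q, v):
    return next(r for r, row in enumerate(Q) if v in row)

def descents_from_Q(Q, n):
    """{i : i+1 lies in a row strictly below the row of i in Q}."""
    return {i for i in range(1, n) if row_of(Q, i + 1) > row_of(Q, i)}

def word_descents(word):
    """Positions i where the word descends; this is L(w) in our convention."""
    return {i + 1 for i in range(len(word) - 1) if word[i] > word[i + 1]}

w = [5, 2, 1, 6, 3, 4]
P, Q = rsk(w)
print(Q, descents_from_Q(Q, 6), word_descents(w))  # both descent sets are {1, 2, 4}
```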
\begin{example}\label{e special element}
\newcommand{\bfsix}{\ensuremath{\mathbf{6}}}
\newcommand{\bfseven}{\ensuremath{\mathbf{7}}}
\newcommand{\bfeight}{\ensuremath{\mathbf{8}}}
If $n=9$, $k=4$, $\mu = (3,2,2,1)$, and $P = {\tiny\young(158,269,37,4)}$, we could choose
$Q = {\tiny \young(12\ensuremath{\mathbf{6}},3\bfseven9,4\ensuremath{\mathbf{8}},5)}$ or any tableau with the given numbers in bold. Then,
$$ b_{k'}\lj{w}{J},\lj{w}{J},w=473219865,47321865,473219658, $$
and $Q_{<n} = {\tiny \young(126,37,48,5)}$, $Q(b_{k'}\lj{w}{J}) = {\tiny \young(126,37,48,59)}$.
\end{example}
\section{Canonical maps from restricting and inducing}
\label{s canonical maps}
\subsection{}
The functor $\H\ensuremath{\otimes}_J-:\H_J$-$\ensuremath{\mathbf{Mod}} \to \H$-$\ensuremath{\mathbf{Mod}}$ is left adjoint to $\text{\rm Res}_J:\H$-$\ensuremath{\mathbf{Mod}}\to \H_J$-$\ensuremath{\mathbf{Mod}}$. Let $\alpha$ (resp. $\beta$) denote the unit (resp. counit) of the adjunction so that $\alpha(F)\in \hom_{\H_J\text{-}\ensuremath{\mathbf{Mod}}}(F,\text{\rm Res}_J\H\ensuremath{\otimes}_JF)$ corresponds to $\text{\rm Id}_{\H\ensuremath{\otimes}_JF}$ (resp. $\beta(E)\in \hom_{\H\text{-}\ensuremath{\mathbf{Mod}}}(\H\ensuremath{\otimes}_J\text{\rm Res}_JE,E)$ corresponds to $\text{\rm Id}_{\text{\rm Res}_JE}$). The unit (resp. counit) is a natural transformation from the identity functor on $\H_J$-$\ensuremath{\mathbf{Mod}}$ to the functor $\text{\rm Res}_J \H \ensuremath{\otimes}_J-$ (resp. from the functor $\H \ensuremath{\otimes}_J \text{\rm Res}_J$ to the identity functor on $\H$-$\ensuremath{\mathbf{Mod}}$). We will omit the argument $F$ or $E$ in the notation for the unit and counit when there is no confusion. Explicitly, $\alpha:F\to\text{\rm Res}_J\H\ensuremath{\otimes}_JF$ is given by $f\mapsto 1\small \ensuremath{\boxtimes} f$, and $\beta:\H\ensuremath{\otimes}_JE\to E$ is given by $h\small \ensuremath{\boxtimes} e\mapsto he$. It is clear from these formulas that the unit and counit intertwine the involution $\br{\cdot}$.
\subsection{}
The unit behaves in a simple way on canonical basis elements.
\begin{proposition}\label{p res ind}
Let $F=A\Gamma$ be any $\H_J$-module coming from a $W_J$-graph $\Gamma$. The map $\alpha:F \to \text{\rm Res}_J \H\ensuremath{\otimes}_J F$ takes canonical basis elements to canonical basis elements. Therefore $\text{\rm im\,}(\alpha)$ is a cellular submodule whose $W_J$-graph is isomorphic to $\Gamma$.
\end{proposition}
\begin{proof}
The elements $\ensuremath{{\tilde{C'}}\negmedspace}_{1,\gamma} = \alpha(\gamma)$ ($\gamma\in \Gamma$) are canonical basis elements and are an $A$-basis for the image of $\alpha$.
\end{proof}
\subsection{}
Again, restrict to the case where $W$ and $\H$ are of type $A_{n-1}$, $S = \{s_1,\ldots,s_{n-1}\}$, and $J = S\backslash s_{n-1}$.
We are not able to give a good description of where the counit $\beta$ takes canonical basis elements in general, but we have a partial result along these lines, assuming the following conjecture.
\begin{conjecture}\label{c dominance}
Let $\Lambda$ be the $W_1$-graph on $\ensuremath{\H_{1}\tsr_{J_1}\ldots\tsr_{J_{d-1}}\H_{d}}$ with $\H_1$ of type $A$. If $\mathbf{y}, \mathbf{z} \in \Lambda$ satisfy $\mathbf{y} \leq_\Lambda \mathbf{z}$ and lie in cells with local labels of shape $\lambda$ and $\mu$ respectively, then $\lambda < \mu$ in dominance order.
\end{conjecture}
Let $\Gamma$ be a cell of $\H$ and $\tau: A\Gamma \to A\Gamma$ an $\H$-module homomorphism. We want to conclude that $\tau$ is multiplication by some constant $c \in A$. This can be seen, for instance, by tensoring with $\ensuremath{\mathbb{C}}$ over $A$ using any map $A \to \ensuremath{\mathbb{C}}$ that does not send $\u$ to a root of unity; Schur's Lemma applies as $\H \ensuremath{\otimes}_A \ensuremath{\mathbb{C}} \cong \ensuremath{\mathbb{C}} \S_n$ and $A \Gamma \ensuremath{\otimes}_A \ensuremath{\mathbb{C}}$ is irreducible. Thus $\tau \ensuremath{\otimes}_A \ensuremath{\mathbb{C}} = a \, \text{\rm Id}$ ($a \in \ensuremath{\mathbb{C}}$) for infinitely many specializations of $\u$, which implies $\tau = c \, \text{\rm Id}$ for some $c \in A$.
Let $X_{\lambda}$ be the two-sided cell of $\H$ consisting of the cells labeled by tableaux of shape $\lambda \vdash n$. Let $\Gamma$ be a cell of $\H\ensuremath{\otimes}_{J}X_{\lambda}$ with local sequence $P_1,P_2$ both of shape $\lambda$. Conjecture \ref{c dominance} implies $A \Gamma$ is a submodule of $(\H\ensuremath{\otimes}_{J}X_{\lambda})/ X$, where $X$ is the cellular submodule consisting of those cells whose labels have shape $< \lambda$ in dominance order. By a similar argument to the one above, $\beta(X) = 0$. Therefore the map $\H\ensuremath{\otimes}_{J}X_{\lambda} \xrightarrow{\beta} X_\lambda$ gives rise to a map $A \Gamma \xrightarrow{\beta} X_\lambda$.
Letting
$\ensuremath{\Upsilon}$ be a cell of $X_\lambda$, we have
\begin{equation} \label{e [k]}
A\Gamma \xrightarrow{\beta} X_{\lambda} \xrightarrow{p} A\ensuremath{\Upsilon} \cong A\Gamma. \end{equation}
The map $p$ is a cellular quotient map by \cite[Corollary 1.9]{L} and the rightmost isomorphism of $W$-graphs comes from the fact that any two cells with the same local label are isomorphic as $W$-graphs (\textsection\ref{ss locallabels}). We now can state the main application of Theorem \ref{specialelementstheorem}.
\begin{corollary}
\label{c [k]}
Assuming Conjecture \ref{c dominance} and with the notation above, if the square $P_1 \backslash (P_1)_{<n}$ lies in the $k$-th row, then the composition of the maps in (\ref{e [k]}) is $[k]\ \text{\rm Id}$ if $\ensuremath{\Upsilon}$ is labeled by $P_2$ and 0 otherwise.
\end{corollary}
\begin{proof}
By the discussion above, this composition must be $c \ \text{\rm Id}$ for some $c \in A$.
Apply Theorem \ref{specialelementstheorem} with $\lambda^{(1)} = \lambda$, $\mu = \text{\rm sh}((P_1)_{<n})$, $P = P_2$, noting that from the construction of $w$ in the proof, $\{s_{k'},\ldots,s_{n-1}\}\subseteq L(w)$ in this case. Therefore \begin{equation} \beta(\ensuremath{{\tilde{C'}}\negmedspace}_{b_{k'},w}) = \left(T_{b_{k'}} + \ensuremath{u^{-1}} T_{b_{k'+1}} + \ldots + \u^{-k+1}T_{b_n}\right)\ensuremath{\mathscr{C}}_w = [k]\ensuremath{\mathscr{C}}_w.\end{equation}
\end{proof}
It is tempting to conjecture that $\beta(\ensuremath{{\tilde{C'}}\negmedspace}_{b_{k'}, w})$ is a constant times a canonical basis element of $\H$, where $(b_{k'},w)$ is as constructed in Theorem \ref{specialelementstheorem}, but this is false in general. The following counterexample was found using Magma.
\begin{example}Let $n=6$, $k=2$, $k'=4$, $w=521634\xrightarrow{\tiny{RSK}}\left({\tiny\young(134,26,5), \ \young(146,25,3)}\right)$. Then,
\begin{equation} \ensuremath{{\tilde{C'}}\negmedspace}_{b_{k'}, w} = \left(T_{b_4} + \ensuremath{u^{-1}} T_{b_5} + \u^{-2}\right)\small \ensuremath{\boxtimes} \ensuremath{\mathscr{C}}_{521634} \stackrel{\vphantom{}^\beta}{\longmapsto} [2]\ensuremath{\mathscr{C}}_{521643} + [2]\ensuremath{\mathscr{C}}_{321654}.\end{equation}
The element $(b_{k'},w) \in W \stackrel{J}{\times} {W}$ is $(421653,521634)$ in stuffed notation and the cell containing it has local label ${\tiny\young(13,25,46)}$. The labels of the cells containing $521643, 321654$ are ${\tiny\young(13,24,56),\young(14,25,36)}$ respectively.
\end{example}
\section{Some $W$-graph versions of tensoring with the defining representation}
\label{s tensor V}
Let $V$ denote the $n$-dimensional defining representation of $\S_n$: $V = \ensuremath{\mathbb{Z}}\{ x_1,\ldots,x_n\}$, $s_i(x_j) = x_{s_i(j)}$. In this section, we will explore three $W$-graph versions of tensoring with $V$. We then look at $W$-graphs corresponding to tensoring twice with $V$ and show that these decompose into a reduced and non-reduced part. We make a habit of checking what our $W$-graph constructions become at $\u = 1$ in order to keep contact with our intuition for this more familiar case.
\subsection{}
In what follows, $E$ denotes an $\H$-module or $\ensuremath{\mathbb{Z}} \S_n$-module, depending on context. A useful observation, and indeed what motivated us to study induction of $W$-graphs, is that $V\ensuremath{\otimes} E\cong \ensuremath{\mathbb{Z}}\S_n\ensuremath{\otimes}_{\ensuremath{\mathbb{Z}}\S_{n-1}}E$ for any $\ensuremath{\mathbb{Z}}\S_n$-module $E$. This is well-known, but the proof is instructive.
\begin{proposition}
\label{p gk}
Given a finite group $G$, a subgroup $K$, and a $\ensuremath{\mathbb{Z}} G$-module $E$, there is a ($\ensuremath{\mathbb{Z}} G$-module) isomorphism, natural in $E$,
\begin{equation}
\label{e gk}
\begin{array}{ccc}
\ensuremath{\mathbb{Z}} G\ensuremath{\otimes}_{\ensuremath{\mathbb{Z}} K}E & \cong & \left(\ensuremath{\mathbb{Z}} G\ensuremath{\otimes}_{\ensuremath{\mathbb{Z}} K}\ensuremath{\mathbb{Z}}\right) \ensuremath{\otimes}_{\ensuremath{\mathbb{Z}}} E,\\
g\small \ensuremath{\boxtimes} e & \mapsto & (g\small \ensuremath{\boxtimes} 1)\ensuremath{\otimes} ge,
\end{array}
\end{equation}
where $\ensuremath{\mathbb{Z}}$ denotes the trivial representation of $K$.
\end{proposition}
\begin{proof}
The expressions $g k\small \ensuremath{\boxtimes} k^{-1} e$ and $g \small \ensuremath{\boxtimes} e$ ($k \in K$) are sent to the same element so this map is well-defined. Similarly, its inverse $(g \small \ensuremath{\boxtimes} 1) \ensuremath{\otimes} e \mapsto g\small \ensuremath{\boxtimes} g^{-1} e$ is well-defined. These maps clearly intertwine the action of $G$.
\end{proof}
Maintain the notation $W = \S_n$, $J_{n-1} = \{s_1,\ldots,s_{n-2}\}$, $J'_{n-1} = \{s_2,\ldots,s_{n-1}\}$ of the previous sections. Recall that $b_k = s_{k} \ldots s_{n-1}$ for $k \in [n-1]$, $b_n = 1$ are the minimal left coset representatives of $W_{J_{n-1}}$, and $a_k = s_{k-1}\ldots s_{1}$ for $k\in \{2,\ldots,n\}$, $a_1 = 1$ the minimal left coset representatives of $W_{J'_{n-1}}$.
\begin{corollary}\label{c indresiso}
For the inclusions $\S_{n-1}=W_{J'_{n-1}} \hookrightarrow W = \S_n$ and $\S_{n-1}=W_{J_{n-1}} \hookrightarrow W = \S_n$, we have $ \ensuremath{\mathbb{Z}}\S_n\ensuremath{\otimes}_{\ensuremath{\mathbb{Z}}\S_{n-1}}E \cong V\ensuremath{\otimes}_\ensuremath{\mathbb{Z}} E$ for any $\ensuremath{\mathbb{Z}}\S_n$-module $E$.
\end{corollary}
\begin{proof}
Put $G=\S_n$. If $K=W_{J'_{n-1}}$, then $\ensuremath{\mathbb{Z}} G\ensuremath{\otimes}_{\ensuremath{\mathbb{Z}} K} \ensuremath{\mathbb{Z}}\cong V$ by $a_i\small \ensuremath{\boxtimes} 1\mapsto x_i$. If $K=W_{J_{n-1}}$, then $\ensuremath{\mathbb{Z}} G\ensuremath{\otimes}_{\ensuremath{\mathbb{Z}} K} \ensuremath{\mathbb{Z}}\cong V$ by $b_i\small \ensuremath{\boxtimes} 1\mapsto x_i$.
\end{proof}
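The coset combinatorics in this proof are easy to verify by brute force for small $n$. The following Python sketch (our encoding; permutations act as functions composed right to left) checks that $a_k$ and $b_k$ are the minimal-length representatives of their left cosets and that $a_k(1) = k$ and $b_k(n) = k$, which is one way to see why the assignments $a_i\small \ensuremath{\boxtimes} 1\mapsto x_i$ and $b_i\small \ensuremath{\boxtimes} 1\mapsto x_i$ define the isomorphisms:

```python
from itertools import permutations

n = 4

def compose(p, q):
    """(p o q)(x) = p(q(x)); permutations are tuples on {0, ..., n-1}."""
    return tuple(p[q[x]] for x in range(n))

def length(p):
    """Coxeter length = number of inversions."""
    return sum(1 for i in range(n) for j in range(i + 1, n) if p[i] > p[j])

def s(i):
    """Simple transposition s_i (i = 1, ..., n-1)."""
    p = list(range(n)); p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def prod(gens):
    p = tuple(range(n))
    for g in gens:
        p = compose(p, g)
    return p

b = {k: prod([s(i) for i in range(k, n)]) for k in range(1, n + 1)}          # b_k = s_k ... s_{n-1}
a = {k: prod([s(i) for i in range(k - 1, 0, -1)]) for k in range(1, n + 1)}  # a_k = s_{k-1} ... s_1

K  = [p for p in permutations(range(n)) if p[n - 1] == n - 1]  # W_{J_{n-1}}: fixes n
Kp = [p for p in permutations(range(n)) if p[0] == 0]          # W_{J'_{n-1}}: fixes 1
for k in range(1, n + 1):
    assert min(length(compose(b[k], x)) for x in K)  == length(b[k])  # b_k minimal in b_k W_{J_{n-1}}
    assert min(length(compose(a[k], x)) for x in Kp) == length(a[k])  # a_k minimal in a_k W_{J'_{n-1}}
    assert b[k][n - 1] == k - 1    # b_k(n) = k, so b_k corresponds to x_k
    assert a[k][0] == k - 1        # a_k(1) = k, so a_k corresponds to x_k
print("coset checks pass for n =", n)
```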
The Hecke algebra is not a Hopf algebra in any natural way, so it is not clear what a Hecke algebra analogue of $F\ensuremath{\otimes} E$ should be for $\ensuremath{\mathbb{Z}}\S_n$-modules $F$ and $E$. If $F= V$, however, then $\H \ensuremath{\otimes}_J E$ is a $\u$-analogue of $\ensuremath{\mathbb{Z}}\S_n\ensuremath{\otimes}_{\ensuremath{\mathbb{Z}}\S_{n-1}}E\cong V\ensuremath{\otimes} E$, where $r$ is either $n-1$ or $1$ (and $J = J_r \cup J'_{n-r}$). These choices for $r$ give isomorphic representations at $\u=1$, but do not give isomorphic $W$-graphs in general.
\begin{example} \label{ex 2wgraph}
Let $e^{+}$ be the trivial representation of $\H$. Then compare $\H\ensuremath{\otimes}_{{J'_{n-1}}}e^{+}$ (first row) with $\H\ensuremath{\otimes}_{{J_{n-1}}}e^{+}$ (second row) for $n=4$:
$$
\xymatrix{
{\ensuremath{{\tilde{C'}}\negmedspace}_{a_4,e^{+}}}^{1,2,3} & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_3,e^{+}}}^{1,2} \ar[l] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_2,e^{+}}}^{1,3} \ar@{-}[l] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_1,e^{+}}}^{2,3} \ar@{-}[l]
}$$
$$\xymatrix{
{\ensuremath{{\tilde{C'}}\negmedspace}_{b_1,e^+}}^{1,2,3} & {\ensuremath{{\tilde{C'}}\negmedspace}_{b_2,e^+}}^{2,3} \ar[l] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{b_3,e^+}}^{1,3} \ar@{-}[l] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{b_4,e^+}}^{1,2} \ar@{-}[l]
}$$
Evidently, these are not isomorphic as $W$-graphs.
In this paper $W$-graphs are drawn with the following conventions: vertices are labeled by canonical basis elements and descent sets appear as superscripts; an edge with no arrow indicates that $\mu =1$ and neither descent set contains the other; an edge with an arrow indicates that $\mu =1$ and the descent set of the arrow head strictly contains that of the arrow tail; no edge indicates that $\mu = 0$ or the descent sets are the same.
\end{example}
For the remainder of this paper, let $J = J'_{n-1}$ ($r=1$) since this is
preferable for comparing $\H \ensuremath{\otimes}_J E$ with $(\pH \ensuremath{\otimes}_\H E)_1$ (see \textsection\ref{ss affine tensor V}, below).
See \textsection\ref{ss sb} for how to go back and forth between the $J'_{n-1}$ and $J_{n-1}$ pictures.
\subsection{}
\label{ss affine tensor V}
There is another $\u$-analogue of tensoring with $V$ that comes from the extended affine Hecke algebra $\eH$. See \textsection \ref{ss affinecells} for a brief introduction to this algebra.
The module $\pH \ensuremath{\otimes}_\H E$ is a $\u$-analogue of $\ensuremath{\mathbb{Z}} \ensuremath{{W_e^+}} \ensuremath{\otimes}_{\ensuremath{\mathbb{Z}} \ensuremath{{W_e^+}}_{K_0}} E$, which, together with the following proposition, shows that $(\pH \ensuremath{\otimes}_\H E)_1$ is a $\u$-analogue of $V \ensuremath{\otimes} E$.
\begin{proposition}
\label{p affine u=1}
The correspondence
\begin{equation} \begin{array}{ccc}
\text{\rm Res}_{\ensuremath{\mathbb{Z}}\S_n} \ensuremath{\mathbb{Z}} \ensuremath{{W_e^+}} \ensuremath{\otimes}_{\ensuremath{\mathbb{Z}} \ensuremath{{W_e^+}}_{K_0}} E & \cong &\ensuremath{\mathbb{Z}}[{x_1},\ldots, {x_n}] \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} E,\\
y^\lambda \small \ensuremath{\boxtimes} e & \longleftrightarrow & x^\lambda \ensuremath{\otimes} e,
\end{array}
\end{equation} is a degree-preserving isomorphism of $\ensuremath{\mathbb{Z}} \S_n$-modules, natural in $E$, where $\S_n$ acts on the polynomial ring by permuting the variables.
\end{proposition}
\begin{proof}
Recalling that $\ensuremath{{W_e}} = Y \rtimes W$ with $W$ acting on $Y$ by permuting the coordinates, we have $s_i (y^\lambda \small \ensuremath{\boxtimes} e) = s_i y^\lambda s_i \small \ensuremath{\boxtimes} s_i e = y^{s_i(\lambda)} \small \ensuremath{\boxtimes} s_i e$ and $s_i(x^\lambda \ensuremath{\otimes} e) = x^{s_i( \lambda)} \ensuremath{\otimes} s_i e$.
\end{proof}
\begin{example}
To compare with the $W$-graphs in Example \ref{ex 2wgraph}, here is the $W$-graph on $(\pH \ensuremath{\otimes}_\H e^{+})_1$. In this case it is isomorphic to the $W$-graph on $\H\ensuremath{\otimes}_{{J'_{n-1}}}e^{+}$, but this is not true in general as can be seen by comparing Figures \ref{f VVe+} and \ref{f VVe+ affine}.
$$\xymatrix{
{\ensuremath{{\tilde{C'}}\negmedspace}_{a_4\pi,e^{+}}}^{1,2,3} & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_3\pi,e^{+}}}^{1,2} \ar[l] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_2\pi,e^{+}}}^{1,3} \ar@{-}[l] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_1\pi,e^{+}}}^{2,3} \ar@{-}[l]}.$$
\end{example}
The general relationship between $(\pH \ensuremath{\otimes}_\H E)_1$ and $\H\ensuremath{\otimes}_{{J'_{n-1}}}E $ can be explained as a special case of a $W$-graph version of Mackey's formula due to Howlett and Yin \cite[\textsection 5]{HY2}, which we now recall.
Let $\Gamma$ be a $W_I$-graph, and $K, I \subseteq S$. Put $F = A \Gamma$.
Let $\leftexp{K}{W}^I$ be the set of minimal double coset representatives, i.e.\ the set of $d \in W$ of minimal length in their double coset $W_KdW_I$. For each $d\in \leftexp{K}{W}^I$, the \emph{$d$-subgraph} of (the $W_K$-graph on) $\text{\rm Res}_K\H \ensuremath{\otimes}_I F$ is $\{\ensuremath{{\tilde{C'}}\negmedspace}_{wd,\gamma} :\ w\in W_K^L, L = {K\cap dId^{-1}}, \gamma\in\Gamma\}$.
For any $d \in \leftexp{K}{W}^I$, let $L = K \cap dId^{-1}$. Then $d^{-1} L d = d^{-1} Kd \cap I \subseteq I$ so the restriction $\text{\rm Res}_{d^{-1}Ld} F$ makes sense. This $W_{d^{-1}Ld}$-graph naturally gives rise to a $W_L$-graph, denoted $d\Gamma$, obtained by conjugating descent sets by $d$. Explicitly, the descent set of a vertex $d\gamma$ of $d\Gamma$ is
\begin{equation} L(d\gamma) = \{dsd^{-1}: s \in L(\gamma) \subseteq I \text{ and } dsd^{-1} \in K\} \subseteq L. \end{equation}
The edge weights of $d\Gamma$ are the same as those of $\Gamma:\ \mu(d\delta,d\gamma) = \mu(\delta,\gamma)$ for all $\delta, \gamma \in \Gamma$.
\begin{theorem}[Howlett, Yin \cite{HY2}]\label{t mackey}
The $d$-subgraphs of $\text{\rm Res}_K\H \ensuremath{\otimes}_I F$ partition its canonical basis. Each $d$-subgraph is a union of cells and is isomorphic to $\H_K \ensuremath{\otimes}_L dF$ ($L = K \cap dId^{-1}$) as a $W_K$-graph via the correspondence $\ensuremath{{\tilde{C'}}\negmedspace}_{wd,\gamma}\leftrightarrow\ensuremath{{\tilde{C'}}\negmedspace}_{w,d\gamma}, w \in W^L_K$.
\end{theorem}
\begin{remark}
It is probably the case that each $d$-subgraph is a cellular subquotient rather than just a union of cells; however, this is not proven in \cite{HY2}. This issue does not arise here, because in the applications in this paper we can easily show that the $d$-subgraph is a cellular subquotient, and sometimes the stronger statement that it is a cellular quotient or submodule.
\end{remark}
In the present application, put $K = I = \{s_1,\ldots,s_{n-1}\}$. Then $\pi \in \leftexp{K}{W}^I$ and the $\pi$-subgraph of $\text{\rm Res}_\H \eH \ensuremath{\otimes}_\H E$ is $\{\ensuremath{{\tilde{C'}}\negmedspace}_{a_k \pi,\gamma} : k\in [n],\gamma\in\Gamma\}$ since $K \cap \pi I \pi^{-1} = J'_{n-1}$. This is isomorphic as a $W$-graph to $\H \ensuremath{\otimes}_{J'_{n-1}} \pi E$. The $W_{J'_{n-1}}$-graph $\pi E$ is just $\text{\rm Res}_{J_{n-1}} E$, with each element of its descent sets shifted up by one. We have proved the following.
\begin{proposition}\label{p affine equals resind}
The $W$-graphs $(\pH \ensuremath{\otimes}_\H E)_1$ and $\H \ensuremath{\otimes}_{J'_{n-1}} \pi E$ are isomorphic.
\end{proposition}
\begin{remark}
Though this suggests that the $W$-graph versions of $V \ensuremath{\otimes} E$, $(\pH \ensuremath{\otimes}_\H E)_1$ and $\H \ensuremath{\otimes}_{J'_{n-1}} E$, behave in essentially the same way, some care must be taken. At $\u =1$, $\H \ensuremath{\otimes}_{J'_{n-1}} \pi E$ is not isomorphic to $V \ensuremath{\otimes}_\ensuremath{\mathbb{Z}} \pi E$ using Proposition \ref{p gk} since $\pi E$ is only a $\ensuremath{\mathbb{Z}} W_{J'_{n-1}}$-module, not a $\ensuremath{\mathbb{Z}} W$-module. Thus $(\pH \ensuremath{\otimes}_\H E)_1|_{\u=1}$ and $(\H \ensuremath{\otimes}_{J'_{n-1}} E)|_{\u=1}$ are only isomorphic to $V \ensuremath{\otimes} E$ via the rather different-looking routes of Proposition \ref{p affine u=1} and Corollary \ref{c indresiso}.
\end{remark}
\subsection{}
\label{ss reduced non-reduced}
\begin{figure}
\begin{center}
$$
\xymatrix{
{\ensuremath{{\tilde{C'}}\negmedspace}_{a_4,a_4,e^{+}}}^{1,2,3} & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_3,a_4,e^{+}}}^{1,2} \ar[l] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_2,a_4,e^{+}}}^{1,3} \ar@{-}[l] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_1,a_4,e^{+}}}^{2,3} \ar@{-}[l] \\
{\ensuremath{{\tilde{C'}}\negmedspace}_{a_4,a_3,e^{+}}}^{1,3} \ar[u] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_3,a_3,e^{+}}}^{1,2} \ar@{-}[l] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_2,a_3,e^{+}}}^{1} \ar[l] \ar@{-}[r] \ar[u] \ar[d] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_1,a_3,e^{+}}}^{2} \ar@{-}[l] \ar[u] \ar@{-}[d]\\
{\ensuremath{{\tilde{C'}}\negmedspace}_{a_4,a_2,e^{+}}}^{2,3} \ar[d] \ar@{-}[u] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_3,a_2,e^{+}}}^{2} \ar[l] \ar[u] \ar@{-}[r] \ar[drr] \ar[d] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_2,a_2,e^{+}}}^{1,3} & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_1,a_2,e^{+}}}^{3} \ar[l] \ar@{-}[u] \ar[d]\\
{\ensuremath{{\tilde{C'}}\negmedspace}_{a_4,a_1,e^{+}}}^{1,2,3} & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_3,a_1,e^{+}}}^{1,2} \ar[l] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_2,a_1,e^{+}}}^{1,3} \ar@{-}[l] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_1,a_1,e^{+}}}^{2,3} \ar@{-}[l]\\
}$$
\end{center}
\begin{pspicture}(10,6.7){
\psset{unit=1cm}
\tiny
\newdimen\hcent
\hcent=5cm
\newdimen\ycor
\ycor=5.7cm
\advance\ycor by 0pt
\newdimen\hspacedim
\hspacedim=100pt
\newdimen\xcor
\xcor=\hcent
\newdimen\temp
\temp=140pt
\multiply \temp by 0 \divide \temp by 2
\advance\xcor by -\temp
\hoogte=9pt
\breedte=9pt
\dikte=0.2pt
\rput(\xcor,\ycor){\rnode{v1h1}
{
\begin{Young}
1\cr2\cr3\cr4\cr
\end{Young}
}
}
\advance\xcor by \hspacedim
\advance\ycor by -70pt
\hspacedim=180pt
\newdimen\xcor
\xcor=\hcent
\newdimen\temp
\temp=180pt
\multiply \temp by 1 \divide \temp by 2
\advance\xcor by -\temp
\rput(\xcor,\ycor){\rnode{v2h1}
{
\begin{Young}
1\cr2\cr3\cr4\cr
\end{Young}
}
}
\advance\xcor by \hspacedim
\rput(\xcor,\ycor){\rnode{v2h2}
{
\begin{Young}
1&2\cr3\cr4\cr
\end{Young}
}
}
\advance\xcor by \hspacedim
\advance\ycor by -70pt
\newdimen\hspacedim
\hspacedim=60pt
\newdimen\xcor
\xcor=\hcent
\newdimen\temp
\temp=60pt
\multiply \temp by 6 \divide \temp by 2
\advance\xcor by -\temp
\newdimen\ytmp
\ytmp=-0.1cm
\rput(\xcor,\ycor){\rnode{v3h1}
{
\begin{Young}
1\cr2\cr3\cr4\cr
\end{Young}
}
}
\rput(\xcor,\ytmp){sym}
\advance\xcor by \hspacedim
\rput(\xcor,\ycor){\rnode{v3h2}
{
\begin{Young}
1&2\cr3\cr4\cr
\end{Young}
}
}
\rput(\xcor,\ytmp){wedge}
\advance\xcor by \hspacedim
\rput(\xcor,\ycor){\rnode{v3h3}
{
\begin{Young}
1\cr2\cr3\cr4\cr
\end{Young}
}
}
\rput(\xcor,\ytmp){non-red}
\advance\xcor by \hspacedim
\rput(\xcor,\ycor){\rnode{v3h4}
{
\begin{Young}
1&2\cr3\cr4\cr
\end{Young}
}
}
\rput(\xcor,\ytmp){non-red}
\advance\xcor by \hspacedim
\rput(\xcor,\ycor){\rnode{v3h5}
{
\begin{Young}
1&3\cr2\cr4\cr
\end{Young}
}
}
\rput(\xcor,\ytmp){sym}
\advance\xcor by \hspacedim
\rput(\xcor,\ycor){\rnode{v3h6}
{
\begin{Young}
1&3\cr2&4\cr
\end{Young}
}
}
\rput(\xcor,\ytmp){sym}
\advance\xcor by \hspacedim
\rput(\xcor,\ycor){\rnode{v3h7}
{
\begin{Young}
1&2&3\cr4\cr
\end{Young}
}
}
\rput(\xcor,\ytmp){wedge}
\ncline{-}{v1h1}{v2h1}
\ncline{-}{v1h1}{v2h2}
\ncline{-}{v2h1}{v3h1}
\ncline{-}{v2h1}{v3h2}
\ncline{-}{v2h2}{v3h3}
\ncline{-}{v2h2}{v3h4}
\ncline{-}{v2h2}{v3h5}
\ncline{-}{v2h2}{v3h6}
\ncline{-}{v2h2}{v3h7}
}
\end{pspicture}
\caption{The $W$-graph on $\H \ensuremath{\otimes}_{J'_{n-1}} \H \ensuremath{\otimes}_{J'_{n-1}} e^+$ and the graph $G$ of \textsection \ref{ss locallabels}. The vertices of the tree $G$ are marked by local labels. Each cell in the $W$-graph corresponds to the path from a leaf to the root that is its local sequence.}
\label{f VVe+}
\end{figure}
\begin{figure}
$$
\xymatrix{
{\ensuremath{{\tilde{C'}}\negmedspace}_{a_4\pi,a_4\pi,e^{+}}}^{1,2,3} & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_3\pi,a_4\pi,e^{+}}}^{1,2} \ar[l] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_2\pi,a_4\pi,e^{+}}}^{1,3} \ar@{-}[l] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_1\pi,a_4\pi,e^{+}}}^{2,3}\\
{\ensuremath{{\tilde{C'}}\negmedspace}_{a_4\pi,a_3\pi,e^{+}}}^{1,2,3} & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_3\pi,a_3\pi,e^{+}}}^{1,2} \ar[l] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_2\pi,a_3\pi,e^{+}}}^{1,3} \ar@{-}[l] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_1\pi,a_3\pi,e^{+}}}^{2,3}\\
{\ensuremath{{\tilde{C'}}\negmedspace}_{a_4\pi,a_2\pi,e^{+}}}^{1,3} \ar@{-}[d] \ar@{-}[r] \ar[u] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_3\pi,a_2\pi,e^{+}}}^{1,2} & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_2\pi,a_2\pi,e^{+}}}^{1} \ar[l] \ar[d] \ar[u] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_1\pi,a_2\pi,e^{+}}}^{2} \ar@{-}[l] \ar[u] \ar@{-}[d]\\
{\ensuremath{{\tilde{C'}}\negmedspace}_{a_4\pi,a_1\pi,e^{+}}}^{2,3} & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_3\pi,a_1\pi,e^{+}}}^{2} \ar[l] \ar[u] \ar@{-}[r] & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_2\pi,a_1\pi,e^{+}}}^{1,3} & {\ensuremath{{\tilde{C'}}\negmedspace}_{a_1\pi,a_1\pi,e^{+}}}^{3} \ar[l]\\
}$$
\begin{pspicture}(10,6.7){\tiny
\newdimen\hcent
\hcent=5cm
\newdimen\ycor
\ycor=5.8cm
\advance\ycor by 0pt
\newdimen\hspacedim
\hspacedim=160pt
\newdimen\xcor
\xcor=\hcent
\newdimen\temp
\temp=140pt
\multiply \temp by 0 \divide \temp by 2
\advance\xcor by -\temp
\hoogte=10pt
\breedte=10pt
\dikte=0.2pt
\rput(\xcor,\ycor){\rnode{v1h1}{
\begin{Young}
1\cr2\cr3\cr4\cr
\end{Young}
}
}
\advance\xcor by \hspacedim
\advance\ycor by -70pt
\newdimen\hspacedim
\hspacedim=180pt
\newdimen\xcor
\xcor=\hcent
\newdimen\temp
\temp=180pt
\multiply \temp by 1 \divide \temp by 2
\advance\xcor by -\temp
\rput(\xcor,\ycor){\rnode{v2h1}{
\begin{Young}
-3\cr2\cr3\cr4\cr
\end{Young}
}}
\advance\xcor by \hspacedim
\rput(\xcor,\ycor){\rnode{v2h2}{
\begin{Young}
-3&2\cr3\cr4\cr
\end{Young}
}}
\advance\xcor by \hspacedim
\advance\ycor by -70pt
\newdimen\hspacedim
\hspacedim=60pt
\newdimen\xcor
\xcor=\hcent
\newdimen\temp
\temp=60pt
\multiply \temp by 6 \divide \temp by 2
\advance\xcor by -\temp
\newdimen\ytmp
\ytmp=\ycor
\advance \ytmp by -27pt
\rput(\xcor,\ycor){\rnode{v3h1}{
\begin{Young}
\ng3\cr2\cr3\cr4\cr
\end{Young}
}}
\rput(\xcor,\ytmp){non-red}
\advance\xcor by \hspacedim
\rput(\xcor,\ycor){\rnode{v3h2}{
\begin{Young}
\ng3&2\cr3\cr4\cr
\end{Young}
}}
\rput(\xcor,\ytmp){non-red}
\advance\xcor by \hspacedim
\rput(\xcor,\ycor){\rnode{v3h3}{
\begin{Young}
-2\cr1\cr3\cr4\cr
\end{Young}
}}
\rput(\xcor,\ytmp){sym}
\advance\xcor by \hspacedim
\rput(\xcor,\ycor){\rnode{v3h4}{\begin{Young}
-2&1\cr3\cr4\cr
\end{Young}}}
\rput(\xcor,\ytmp){wedge}
\advance\xcor by \hspacedim
\rput(\xcor,\ycor){\rnode{v3h5}{\begin{Young}
-2&3\cr1\cr4\cr
\end{Young}}}
\rput(\xcor,\ytmp){sym}
\advance\xcor by \hspacedim
\rput(\xcor,\ycor){\rnode{v3h6}{
\begin{Young}
-2&3\cr1&4\cr
\end{Young}
}}
\rput(\xcor,\ytmp){sym}
\advance\xcor by \hspacedim
\rput(\xcor,\ycor){\rnode{v3h7}{
\begin{Young}
-2&1&3\cr4\cr
\end{Young}
}}
\rput(\xcor,\ytmp){wedge}
\ncline{-}{v1h1}{v2h1}
\ncline{-}{v1h1}{v2h2}
\ncline{-}{v2h1}{v3h1}
\ncline{-}{v2h1}{v3h2}
\ncline{-}{v2h2}{v3h3}
\ncline{-}{v2h2}{v3h4}
\ncline{-}{v2h2}{v3h5}
\ncline{-}{v2h2}{v3h6}
\ncline{-}{v2h2}{v3h7}
}
\end{pspicture}
\caption{The $W$-graph on $(\pH\ensuremath{\otimes}_\H(\pH\ensuremath{\otimes}_\H e^+)_1)_1$ and the graph $G$ of \textsection \ref{ss locallabels}, with the labeling conventions of \textsection\ref{ss affinecells}.}
\label{f VVe+ affine}
\end{figure}
Let $\Gamma$ be a $W$-graph, and put $E = A \Gamma$, $F = \text{\rm Res}_J A \Gamma$, $\ensuremath{\widetilde{E}}^2:= \H\ensuremath{\otimes}_{J}\H\ensuremath{\otimes}_{J} A\Gamma$.
We will show that $\ensuremath{\widetilde{E}}^2$ decomposes into what we call a reduced part and a non-reduced part. Towards this end, consider the exact sequence
\begin{equation} \label{e mackey}\xymatrix@R=.2cm{0 \ar[r]& F \ar[r]_<<<<{{\alpha}} & \text{\rm Res}_J \H \ensuremath{\otimes}_J F \ar[r]_<<<<{\tau} & \H_{J}\ensuremath{\otimes}_{J'_{n-2}}\text{\rm Res}_{J'_{n-2}} F \ar[r]& 0. \\
& \gamma \ar@{|->}[r] & 1 \small \ensuremath{\boxtimes} \gamma \ar@{|->}[r]& 0 &\\
&& T_{a_k}\small \ensuremath{\boxtimes}\gamma \ar@{|->}[r]& T_{s_{k-1} \ldots s_{2}} \small \ensuremath{\boxtimes} \gamma &}\end{equation}
By Proposition \ref{p res ind} the image of $\alpha$ is a cellular submodule. The map $\tau$ induces an isomorphism of $W_J$-graphs $\text{\rm Res}_J \H \ensuremath{\otimes}_J F / \text{\rm im\,}(\alpha) \cong \H_{J}\ensuremath{\otimes}_{J'_{n-2}}\text{\rm Res}_{J'_{n-2}} F$; since the sequence is exact, this is equivalent to saying that $\tau$ takes canonical basis elements to canonical basis elements or to 0. That $\tau$ induces an isomorphism can be seen directly by observing that it takes standard basis elements of $\H\ensuremath{\otimes}_JF$ to standard basis elements of $\H_J \ensuremath{\otimes}_{J'_{n-2}} F$ or to 0, takes the lattice $A^- \H \ensuremath{\otimes}_J \Gamma$ to the lattice $A^- \H_J \ensuremath{\otimes}_{J'_{n-2}} \Gamma$, and intertwines the involutions $\br{\cdot}$.
This decomposition also comes from another application of the $W$-graph version of Mackey's formula (Theorem \ref{t mackey}). For this application, put $K = I = J (= \{s_2,\ldots,s_{n-1}\})$. Then $\leftexp{K}{W}^I = \{1,s_{1}\}$. The $1$-subgraph of $\text{\rm Res}_J \H \ensuremath{\otimes}_J F$ is $\{\ensuremath{{\tilde{C'}}\negmedspace}_{1,\gamma} : \gamma\in\Gamma\}$ and the $s_{1}$-subgraph is $\{\ensuremath{{\tilde{C'}}\negmedspace}_{w s_{1},\gamma} :\ w\in W_J^{J'_{n-2}}, \gamma\in\Gamma\}$. These are isomorphic as $W_J$-graphs to $\H_J \ensuremath{\otimes}_J F = F$ and $\H_{J}\ensuremath{\otimes}_{J'_{n-2}}\text{\rm Res}_{J'_{n-2}} F$ respectively (since we have $d^{-1} Ld = L$ for all $d$, $d F$ and $F$ are identical).
Next, tensor (\ref{e mackey}) with $\H$ to obtain
\begin{equation} \label{e mackey2}\xymatrix{0 \ar[r]& \H \ensuremath{\otimes}_J A \Gamma \ar[r]_<<<<{\H \ensuremath{\otimes}_J{\alpha}} & \H \ensuremath{\otimes}_J \H \ensuremath{\otimes}_J A\Gamma \ar[r]_<<<<{\H \ensuremath{\otimes}_J \tau} & \H\ensuremath{\otimes}_{J'_{n-2}} A \Gamma \ar[r]& 0.}\end{equation}
Put $\ensuremath{\widetilde{F}}^2 = \H\ensuremath{\otimes}_{J'_{n-2}} A \Gamma$. The quotient $\ensuremath{\widetilde{F}}^2$ (resp. the submodule $\H \ensuremath{\otimes}_J A \Gamma$) is the \emph{reduced} (resp. \emph{non-reduced}) part of $\ensuremath{\widetilde{E}}^2$.
\begin{proposition}
\label{p red notred}
The submodule and quotient of $\ensuremath{\widetilde{E}}^2$ given by (\ref{e mackey2}) are cellular and the maps in (\ref{e mackey2}) take canonical basis elements to canonical basis elements or to 0.
\end{proposition}
\begin{proof}
This follows from the application of Theorem \ref{t mackey} described above and Proposition \ref{CBSubquotientprop}.
\end{proof}
\begin{example}
The non-reduced part of $\ensuremath{\widetilde{E}}^2$ for $E = e^+$ is the bottom row of the $W$-graph in Figure \ref{f VVe+}. The cells comprising it are labeled ``non-red'' below the tree.
\end{example}
\subsection{}
\label{ss red notred u=1}
Let us determine what the decomposition of $\ensuremath{\widetilde{E}}^2$ into reduced and non-reduced parts becomes at $\u = 1$.
\begin{proposition}
\label{p mackey2 u=1}
At $\u=1$, (\ref{e mackey2}) becomes
\begin{equation} \label{e mackey2 u=1}
\xymatrix@R=.2cm{0 \ar[r]& V \ensuremath{\otimes} E \ar[r] & V \ensuremath{\otimes} V \ensuremath{\otimes} E \ar[r] & T^2_\text{red} V \ensuremath{\otimes} E \ar[r]& 0. \\
& x_k \ensuremath{\otimes} \gamma \ar@{|->}[r] & x_k \ensuremath{\otimes} x_k \ensuremath{\otimes} \gamma \ar@{|->}[r]& 0 &\\
&& x_i \ensuremath{\otimes} x_j \ensuremath{\otimes} \gamma \ar@{|->}[r]& x_i \ensuremath{\otimes} x_j \ensuremath{\otimes} \gamma, &}
\end{equation}
where $i \neq j$ and $T^2_\text{red} V := \ensuremath{\mathbb{Z}} \{ x_i \ensuremath{\otimes} x_j : i \neq j, i,j \in [n]\} \subseteq V \ensuremath{\otimes} V$.
\end{proposition}
To see this, first define $a_{k,l} = s_{k-1} \ldots s_{1}s_{l-1}\ldots s_{2}$ for $k\in [n],\ l \in \{2,\ldots,n\}$; then
\begin{equation} \label{e aklcoset}
\begin{array}{rcl}
a_{k,l} \cdot s_1 &=& a_{l,k+1} \text{ if } k < l, \\
W^{J'_{n-2}} &=& \{a_{k,l} : k \in [n],\ l \in \{2, \ldots,n\}\}, \\
W^{S\backslash {s_{2}}} s_{1} &=& \{a_{k,l} : k \geq l>1\}, \text{ and}\\
W^{S \backslash {s_{2}}} &=& \{a_{k,l} : k < l\}.
\end{array}
\end{equation}
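The identities (\ref{e aklcoset}) involve only finitely many permutations for any fixed $n$, so they can be verified mechanically. The following Python sketch (an illustrative check, not part of the argument; the helper names are ours) does this for $n=4$, representing permutations as $0$-indexed tuples in one-line notation:

```python
def compose(p, q):
    # (p * q)(i) = p(q(i)); permutations as 0-indexed tuples
    return tuple(p[q[i]] for i in range(len(p)))

def s(i, n):
    # simple transposition s_i, swapping i and i+1 (1-indexed)
    p = list(range(n))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

n = 4

def word(gens):
    # product of the listed generators, leftmost factor applied last
    p = tuple(range(n))
    for i in gens:
        p = compose(p, s(i, n))
    return p

def a(k, l):
    # a_{k,l} = s_{k-1} ... s_1 s_{l-1} ... s_2
    return word(list(range(k - 1, 0, -1)) + list(range(l - 1, 1, -1)))

reps = {(k, l): a(k, l) for k in range(1, n + 1) for l in range(2, n + 1)}

# the a_{k,l} are n(n-1) distinct elements ...
assert len(set(reps.values())) == n * (n - 1)
# ... each minimal in its coset w W_{J'_{n-2}}; for n = 4, J'_{n-2} = {s_3},
# and minimality amounts to w(3) < w(4)
assert all(p[2] < p[3] for p in reps.values())
# a_{k,l} s_1 = a_{l,k+1} whenever k < l
assert all(compose(a(k, l), s(1, n)) == a(l, k + 1)
           for k in range(1, n + 1) for l in range(2, n + 1) if k < l)
```

Since $|W^{J'_{n-2}}| = n!/(n-2)! = n(n-1)$, distinctness together with minimality already forces $W^{J'_{n-2}} = \{a_{k,l}\}$.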
Apply Corollary \ref{c indresiso} twice to obtain
\begin{equation}
\label{e vve at u=1}
\begin{array}{rcccl}
\ensuremath{\widetilde{E}}^2|_{\u=1} & \cong & \ensuremath{\mathbb{Z}}\S_n\ensuremath{\otimes}_{\ensuremath{\mathbb{Z}}\S_{n-1}}V \ensuremath{\otimes} E & \cong & V \ensuremath{\otimes} V\ensuremath{\otimes} E\\
a_k \small \ensuremath{\boxtimes} a_l \small \ensuremath{\boxtimes} \gamma & \leftrightarrow & a_k \small \ensuremath{\boxtimes} (x_l\ensuremath{\otimes} a_l(\gamma)) & \leftrightarrow & \left\{
\begin{array}{ll}
x_k \ensuremath{\otimes} x_k \ensuremath{\otimes} a_k a_l(\gamma) & \text{ if } l = 1,\\
x_k \ensuremath{\otimes} x_l \ensuremath{\otimes} a_k a_l(\gamma) & \text{ if } k < l, \\
x_k \ensuremath{\otimes} x_{l-1} \ensuremath{\otimes} a_k a_l(\gamma) & \text { if } k \geq l > 1. \end{array}\right.
\end{array} \end{equation}
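The case analysis in the last column of (\ref{e vve at u=1}) is simply the statement that the index bookkeeping $(k,l) \mapsto (k,k)$, $(k,l)$, or $(k,l-1)$ defines a bijection of $[n] \times [n]$, matching dimensions on both sides. A quick illustrative check in Python (the function name is ours):

```python
n = 4

def pair(k, l):
    # index bookkeeping from the last column of the display above
    if l == 1:
        return (k, k)
    if k < l:
        return (k, l)
    return (k, l - 1)  # the case k >= l > 1

dom = [(k, l) for k in range(1, n + 1) for l in range(1, n + 1)]
img = [pair(k, l) for (k, l) in dom]
# the bookkeeping permutes [n] x [n], consistent with
# dim(V (x) V (x) E) = n^2 dim(E) on both sides
assert sorted(img) == sorted(dom)
```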
\begin{proof}[Proof of Proposition \ref{p mackey2 u=1}]
The interesting part of the calculation is the following diagram
\begin{equation}
\label{e red notred diagram}
\xymatrix{
\ensuremath{\widetilde{E}}^2|_{\u=1} \ar@{<->}[d]_<<<<{\cong}\ar@{->>}[r]& \ensuremath{\widetilde{F}}^2|_{\u=1} \ar@{<->}[d]_<<<<{\cong} & a_k \small \ensuremath{\boxtimes} a_l \small \ensuremath{\boxtimes} \gamma \ar@{|->}[r]\ar@{<->}[d] & \ar@{<->}[d] a_{k,l} \small \ensuremath{\boxtimes} \gamma \\
V \ensuremath{\otimes} V \ensuremath{\otimes} E \ar@{->>}[r]& T^2_\text{red} V \ensuremath{\otimes} E & x_k \ensuremath{\otimes} a_k (x_l) \ensuremath{\otimes} a_k a_l(\gamma) \ar@{|->}[r] & x_k \ensuremath{\otimes} a_k(x_l) \ensuremath{\otimes} a_{k,l} s_1 (\gamma)
}
\end{equation}
where $k \in [n], l \in \{2,\ldots,n\}$.
There is a slightly tricky point here: the left-hand isomorphism of (\ref{e red notred diagram}) comes from (\ref{e vve at u=1}), but the right-hand isomorphism does not come from a similar application of Proposition \ref{p gk}. However, Proposition \ref{p gk} also holds with the isomorphism $g \small \ensuremath{\boxtimes} e \mapsto (g \small \ensuremath{\boxtimes} 1) \ensuremath{\otimes} g ce$ replacing (\ref{e gk}), where $c \in G$ commutes with all of $K$. In this case we must choose $c = s_1$ (which commutes with $K = J'_{n-2}$) to make the diagram (\ref{e red notred diagram}) commute.
\end{proof}
\subsection{}
There is a similar decomposition of $\ensuremath{\widehat{E}}^2 := (\pH\ensuremath{\otimes}_\H(\pH\ensuremath{\otimes}_\H A\Gamma)_1)_1$ into a reduced and non-reduced part. Two applications of Proposition \ref{p affine equals resind} yield $\ensuremath{\widehat{E}}^2 = \H \ensuremath{\otimes}_J \pi(\H \ensuremath{\otimes}_J \pi E)$.
First, let us apply Theorem \ref{t mackey} to $\text{\rm Res}_{J_{n-1}} \H \ensuremath{\otimes}_{J'_{n-1}} \pi E $ analogously to the application in the previous subsection. In this case $I = J'_{n-1}, \ K = J_{n-1}$, and therefore $\leftexp{K}{W}^I = \{1, a_n\}$.
The $1$-subgraph is $\{\ensuremath{{\tilde{C'}}\negmedspace}_{a_k, \pi \gamma} : k < n, \pi \gamma \in \pi \Gamma\}$ and spans a cellular submodule of $\text{\rm Res}_{J_{n-1}} \H \ensuremath{\otimes}_{J'_{n-1}} \pi E $. This can be seen, for instance, by applying Proposition \ref{p ic lower ideal} with the order $\ensuremath{\prec}$ of \textsection \ref{ss IC basis} to obtain
\begin{equation} A \{\ensuremath{{\tilde{C'}}\negmedspace}_{a_k ,\pi \gamma}: k < n, \pi \gamma \in \pi \Gamma\} = A \{\ensuremath{\tilde{T}}_{a_k ,\pi \gamma}: k < n, \pi \gamma \in \pi \Gamma\}; \end{equation}
it is clear that this $A$-span of $\ensuremath{\tilde{T}}$'s is stable under the left action of $\H_{J_{n-1}}$. Now this submodule is isomorphic to $\H_{J_{n-1}} \ensuremath{\otimes}_{J_{n-1}\backslash s_1} \pi E $ (as a $W_{J_{n-1}}$-graph) by Theorem \ref{t mackey}.
The $a_n$-subgraph is $\{\ensuremath{{\tilde{C'}}\negmedspace}_{a_n, \pi \gamma} : \pi \gamma \in \pi \Gamma\}$ and spans a cellular quotient since the only other $d$-subgraph spans a submodule. This quotient is isomorphic to $a_n \pi E$ as a $W_{J_{n-1}}$-graph. Moreover, $a_n \pi E$ is exactly $\text{\rm Res}_{J_{n-1}} E$ as $L = K \cap a_n I a_n^{-1} = K =
{J_{n-1}}$. The following exact sequence summarizes what we have so far.
\begin{equation} \label{e mackey affine}\xymatrix@R=.2cm{
0 \ar[r]& \H_{J_{n-1}} \ensuremath{\otimes}_{J_{n-1}\backslash s_1} \pi E \ar[r] & \text{\rm Res}_{J_{n-1}} \H \ensuremath{\otimes}_{J'_{n-1}} \pi E \ar[r] & \text{\rm Res}_{J_{n-1}} E \ar[r]& 0.
}\end{equation}
Applying $\pi$ to the $W_{J_{n-1}}$-graphs in this sequence to obtain $W_{J'_{n-1}}$-graphs (as explained before Theorem \ref{t mackey}) and then tensoring with $\H$ yields
\begin{equation} \label{e mackey2 affine}\xymatrix@C=.24cm@R=.47cm{0 \ar[r]& \H \ensuremath{\otimes}_{J'_{n-1}} \pi (\H_{J_{n-1}} \ensuremath{\otimes}_{J_{n-1}\backslash s_1} \pi E) \ar[r] \ar@{<->}[d]^{\cong} & \H \ensuremath{\otimes}_{J'_{n-1}} \pi (\H \ensuremath{\otimes}_{J'_{n-1}} \pi E) \ar[r]\ar@{<->}[d]^{\cong} & \H \ensuremath{\otimes}_{J'_{n-1}} \pi E \ar[r]\ar@{<->}[d]^{\cong}& 0\\
0 \ar[r] & \H \ensuremath{\otimes}_{J'_{n-2}} \text{\rm Res}_{J'_{n-2}}\pi^2 E \ar[r] & (\pH\ensuremath{\otimes}_\H(\pH\ensuremath{\otimes}_\H E)_1)_1 \ar[r] & (\pH \ensuremath{\otimes}_\H E)_1 \ar[r]& 0, }\end{equation}
where $\text{\rm Res}_{J'_{n-2}}\pi^2 E$ is the $W_{J'_{n-2}}$-graph obtained from $\text{\rm Res}_{J_{n-2}} E$ by increasing descent set indices by 2. The leftmost isomorphism comes from the isomorphism of Coxeter group pairs $(W_{J_{n-1}\backslash s_1},W_{J_{n-1}}) \cong (W_{J'_{n-2}},W_{J'_{n-1}})$ given by conjugation by $\pi$. The other two isomorphisms are applications of Proposition \ref{p affine equals resind}.
The submodule $\ensuremath{\widehat{F}}^2 :=\H \ensuremath{\otimes}_{J'_{n-2}} \text{\rm Res}_{J'_{n-2}}\pi^2 E$ (resp. the quotient $(\pH \ensuremath{\otimes}_\H E)_1$) is the \emph{reduced} (resp. \emph{non-reduced}) part of $\ensuremath{\widehat{E}}^2$. We have proved the following analogue of Proposition \ref{p red notred}.
\begin{proposition}\label{p red notred affine}
The submodule and quotient of $\ensuremath{\widehat{E}}^2$ given by (\ref{e mackey2 affine}) are cellular
and the maps in (\ref{e mackey2 affine}) take canonical basis elements to canonical basis elements or to 0.
\end{proposition}
\begin{example}
The non-reduced part of $\ensuremath{\widehat{E}}^2$ for $E = e^+$ is the top row of the $W$-graph in Figure \ref{f VVe+ affine}. The cells comprising it are labeled ``non-red'' below the tree.
\end{example}
At $\u=1$, the decomposition (\ref{e mackey2 affine}) becomes
\begin{equation} \ensuremath{\widehat{E}}^2|_{\u=1} \cong V \ensuremath{\otimes} V \ensuremath{\otimes} E \cong T^2_\text{red} V \ensuremath{\otimes} E \oplus V \ensuremath{\otimes} E\end{equation}
(with the left-hand isomorphism from Proposition \ref{p affine u=1}), but the computation is different from that of \textsection\ref{ss red notred u=1}. We omit the details.
\section{Decomposing $V \ensuremath{\otimes} V \ensuremath{\otimes} E$ and the functor $\ensuremath{Z^2}$}
\label{s decomposing VVE}
In this section we study a $W$-graph version of the decomposition $V \ensuremath{\otimes} V \ensuremath{\otimes} E \cong S^2V\ensuremath{\otimes} E \oplus \Lambda^2V\ensuremath{\otimes} E$. Along the way, we come across a mysterious object, the sym-wedge functor $\ensuremath{Z^2}$. At $\u = 1$, this is some kind of mixture of the functors $S^2_\text{red} V \ensuremath{\otimes}-$ and $\Lambda^2 V \ensuremath{\otimes}-$, where $S^2_\text{red} V = \ensuremath{\mathbb{Z}} \{ x_i \ensuremath{\otimes} x_j + x_j \ensuremath{\otimes} x_i : i \neq j \} \subseteq S^2 V \subseteq V \ensuremath{\otimes} V$.
\subsection{}
Let $\Lambda$ be the $W_{S \backslash s_2}$-graph on $\H_{S \backslash s_2} \ensuremath{\otimes}_{J'_{n-2}} A \Gamma$ obtained from Theorem \ref{t HY canbas exists}.
For any $W$-graph $\ensuremath{\Upsilon}$ and $s\in S$, define $\ensuremath{\Upsilon}^-_s = \{\gamma \in \ensuremath{\Upsilon}: s \in L(\gamma)\}$ and $\ensuremath{\Upsilon}^+_s = \{\gamma \in \ensuremath{\Upsilon}: s \notin L(\gamma)\}$. In this case, $\Lambda^-_{s_1} = \{\ensuremath{{\tilde{C'}}\negmedspace}_{s_1,\gamma}:\gamma \in \Gamma\}$, and $\Lambda^+_{s_1} = \{\ensuremath{{\tilde{C'}}\negmedspace}_{1,\gamma}:\gamma \in \Gamma\}$ as $L(w, \gamma) = L(w) \cup L(\gamma)$. Also note that $\ensuremath{{\tilde{C'}}\negmedspace}_{1,\gamma} = \ensuremath{\mathscr{C}}_{1} \small \ensuremath{\boxtimes} \gamma$ and $\ensuremath{{\tilde{C'}}\negmedspace}_{s_1,\gamma} = \ensuremath{\mathscr{C}}_{s_1} \small \ensuremath{\boxtimes} \gamma$.
It is clear that in the case $\Gamma = \Gamma_{W_{J'_{n-2}}}$, $A \Lambda^-_{s_1}$ is a cellular submodule of $A \Lambda$. This is actually true in full generality as we will see shortly (Lemma \ref{l s1 descent submodule}). Now define the \emph{sym-wedge} functor $\ensuremath{Z^2}$ by $\ensuremath{Z^2} A\Gamma = \H\ensuremath{\otimes}_{{S \backslash s_2}} A\Lambda^-_{s_1}$, with a $W$-graph structure coming from Theorem \ref{t HY canbas exists}.
\begin{theorem}\label{t twisted S2}
The $\H$-module $\ensuremath{Z^2} A\Gamma$ is a cellular submodule of $\ensuremath{\widetilde{F}}^2 := \H\ensuremath{\otimes}_{J'_{n-2}} A \Gamma$.
\end{theorem}
\begin{proof}
By Lemma \ref{l s1 descent submodule} (below), $A \Lambda^-_{s_1}$ is a cellular submodule of $A \Lambda$. Proposition \ref{CBSubquotientprop} shows that $\H \ensuremath{\otimes}_{S \backslash s_2} A\Lambda^-_{s_1}$ is a cellular submodule of $\H \ensuremath{\otimes}_{S \backslash s_2} A\Lambda$, and $\H \ensuremath{\otimes}_{S \backslash s_2} \Lambda$ and $\H\ensuremath{\otimes}_{J'_{n-2}} \Gamma$ give the same
$W$-graph structure on $\ensuremath{\widetilde{F}}^2$ by Proposition \ref{p nested parabolic}.
\end{proof}
The sym-wedge functor was discovered by looking at examples. The preceding proof partly explains why such a cellular submodule should exist, but it is still somewhat surprising that it does not agree with $S^2_\text{red} V\ensuremath{\otimes}-$ at $\u = 1$. We will determine what $\ensuremath{Z^2} A\Gamma$ is at $\u=1$ in \textsection \ref{ss u=1 tS2} and address its relation with $S^2_\text{red} V \ensuremath{\otimes}-$ and $\Lambda^2 V \ensuremath{\otimes}-$ in \textsection \ref{ss S2 vs twisted S2}. It will also be useful later to know the following additional structure possessed by $\ensuremath{Z^2}$.
\begin{proposition} \label{p cbsubquotientprop tsymred}
The rule $E \mapsto \ensuremath{Z^2} E$ is a functor $\ensuremath{Z^2} : \H$-$\ensuremath{\mathbf{Mod}} \to \H$-$\ensuremath{\mathbf{Mod}}$. Moreover, if $E = A\Gamma$ for some $W$-graph $\Gamma$, then taking cellular submodules or quotients of $E$ gives rise to cellular submodules and quotients of $\ensuremath{Z^2} E$ in the same way induction does in Proposition \ref{CBSubquotientprop}.
\end{proposition}
\begin{proof}
As explained above, the proposed functor $\ensuremath{Z^2}$ is the composition
\begin{equation} \H\text{-}\ensuremath{\mathbf{Mod}} \xrightarrow{\text{\rm Res}_{J'_{n-2}}} \H_{J'_{n-2}}\text{-}\ensuremath{\mathbf{Mod}} \xrightarrow{\H_{S \backslash s_2}\ensuremath{\otimes}-} \H_{S \backslash s_2}\text{-}\ensuremath{\mathbf{Mod}} \xrightarrow{\zeta} \H_{S \backslash s_2}\text{-}\ensuremath{\mathbf{Mod}} \xrightarrow{\H\ensuremath{\otimes}-} \H\text{-}\ensuremath{\mathbf{Mod}}, \end{equation}
where $\zeta(F)$ is the kernel of $F \xrightarrow{m_{\ensuremath{\mathscr{C}}_{s_1}-[2]}} F$ and $m_{h}$ is left multiplication by $h$ (by Lemma \ref{l s1 descent submodule}, $m_{\ensuremath{\mathscr{C}}_{s_1}-[2]}$ is an $\H_{S \backslash s_2}$-module homomorphism and its kernel equals $A\Lambda^-_{s_1}$ in the case $F = A\Lambda$). Thus it suffices to show that $\zeta$ is a functor and respects cellular subquotients as claimed.
Let $F$ and $F^*$ be $W_{S \backslash s_2}$-graphs and $f: F \to F^*$ be an $\H_{S \backslash s_2}$-module homomorphism. As $f\ m_{\ensuremath{\mathscr{C}}_{s_1}-[2]} = m_{\ensuremath{\mathscr{C}}_{s_1}-[2]} f$, $f(\text{\rm ker\,}(m_{\ensuremath{\mathscr{C}}_{s_1}-[2]})) \subseteq \text{\rm ker\,}(m_{\ensuremath{\mathscr{C}}_{s_1}-[2]})$. Thus $f \mapsto f|_{\text{\rm ker\,}(m_{\ensuremath{\mathscr{C}}_{s_1}-[2]})}$ defines $\zeta$ on morphisms and this certainly respects composition of morphisms.
For the second statement, just observe that if $\Lambda$ is a $W_{S \backslash s_2}$-graph and $\ensuremath{\Upsilon} \subseteq \Lambda$ spans a cellular submodule, then $\zeta(A\ensuremath{\Upsilon})$ is the intersection of the cellular submodules $A\ensuremath{\Upsilon}$ and $A\Lambda^-_{s_1}$, which is a cellular submodule of $\zeta(A\Lambda) = A\Lambda^-_{s_1}$. Similarly, if $\ensuremath{\Upsilon}^*$ is the vertex set $\Lambda \backslash \ensuremath{\Upsilon}$, then $\zeta(A\ensuremath{\Upsilon}^*) = A(\ensuremath{\Upsilon}^* \cap \Lambda^-_{s_1})$, which is the cellular quotient $A\Lambda^-_{s_1} / \zeta(A\ensuremath{\Upsilon})$ of $A\Lambda^-_{s_1}$.
\end{proof}
\begin{lemma} \label{l s1 descent submodule}
For any $W$-graph $\Lambda$ and $s \in S$, the kernel of the map of abelian groups $m_{\ensuremath{\mathscr{C}}_s-[2]} :A\Lambda \to A\Lambda$ (where $m_h$ is left multiplication by $h$) is equal to $A\Lambda^-_s$. If $s$ commutes with $t$ for all $t \in S$ and $F$ is any $\H$-module, then $m_{\ensuremath{\mathscr{C}}_s-[2]}:F \to F$ is an $\H$-module homomorphism. Therefore, if $\Lambda$ is a $W_{S\backslash s_2}$-graph, then $A \Lambda^-_{s_1}$ is a cellular submodule of $A \Lambda$.
\end{lemma}
\begin{proof}
Certainly any $h \in A\Lambda^-_s$ is in the kernel of $m_{\ensuremath{\mathscr{C}}_s - [2]}$. To see that the kernel is no bigger, let $h = \sum_{\lambda \in \Lambda} c_\lambda \lambda$ ($c_\lambda \in A$) be an element of $A\Lambda$ satisfying $(\ensuremath{\mathscr{C}}_s-[2]) h = 0$. We may assume that $c_\lambda = 0$ for $\lambda \in \Lambda^-_s$. Also, by multiplying the $c$'s by some power of $\u$, we may assume that $c_\lambda \in A^-$ for all $\lambda$ and $c_\lambda \notin \ensuremath{u^{-1}} A^-\Lambda$ for at least one $\lambda$. Then computing mod $A^-\Lambda$, we have
\begin{equation} 0 = \sum\limits_{\lambda \in \Lambda} c_\lambda \left(\sum\limits_{\{\delta : s \in L(\delta)\}} \mu(\delta, \lambda)\delta\right) - [2] \sum\limits_{\lambda \in \Lambda} c_\lambda \lambda \equiv -\u \sum\limits_{\lambda \in \Lambda} c_\lambda \lambda. \end{equation}
Therefore $c_\lambda \in \ensuremath{u^{-1}} A^- \Lambda$ for all $\lambda$, contradicting the earlier assumption.
The second statement is a special case of the fact that $m_h$ is an $\H$-module homomorphism whenever $h$ is in the center of $\H$.
\end{proof}
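Lemma \ref{l s1 descent submodule} can be illustrated on the smallest interesting example: the two-vertex $W$-graph of type $A_2$ (vertices $y, z$ with $L(y) = \{s\}$, $L(z) = \{t\}$ and $\mu(y,z) = \mu(z,y) = 1$), which affords the two-dimensional representation at $\u = 1$. The Python sketch below (illustrative only) assumes the normalization in which $\ensuremath{\mathscr{C}}_s \lambda = [2]\lambda$ for $s \in L(\lambda)$ and $\ensuremath{\mathscr{C}}_s \lambda = \sum_{s \in L(\delta)} \mu(\delta,\lambda)\delta$ otherwise, and verifies that $\text{\rm ker\,}(m_{\ensuremath{\mathscr{C}}_s-[2]})$ is spanned by the vertices with $s$ in their descent set:

```python
import sympy as sp

u = sp.symbols('u')
two = u + 1/u  # the quantum integer [2]

# matrices of C_s and C_t on the basis (y, z); columns are images
Cs = sp.Matrix([[two, 1],   # C_s y = [2] y,  C_s z = y
                [0,   0]])
Ct = sp.Matrix([[0,   0],   # C_t y = z,      C_t z = [2] z
                [1, two]])

# the quadratic relation C_s^2 = [2] C_s holds
assert (Cs * Cs - two * Cs).applyfunc(sp.expand) == sp.zeros(2, 2)

# ker m_{C_s - [2]} = A{y} and ker m_{C_t - [2]} = A{z}
ker_s = (Cs - two * sp.eye(2)).nullspace()
ker_t = (Ct - two * sp.eye(2)).nullspace()
assert ker_s == [sp.Matrix([1, 0])]
assert ker_t == [sp.Matrix([0, 1])]
```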
\subsection{}
\label{ss u=1 tS2}
To better understand the functor $\ensuremath{Z^2}$, let us determine what it becomes at $\u = 1$.
\begin{proposition}
The image of $\ensuremath{Z^2} E |_{\u=1}$ under the isomorphism $\ensuremath{\widetilde{F}}^2 |_{\u=1} \cong T^2_\text{red} V\ensuremath{\otimes} E$ of (\ref{e red notred diagram}) is $S^2_\text{red} V \ensuremath{\otimes} E$ (resp. $\Lambda^2 V \ensuremath{\otimes} E$) if $\text{\rm Res}_{W_{\{s_1\}}} E$ is a sum of copies of the trivial (resp. sign) representation.
\end{proposition}
\begin{proof}
Under the isomorphism $\ensuremath{\widetilde{F}}^2 |_{\u=1} \cong T^2_\text{red} V\ensuremath{\otimes} E$, the standard basis for $\ensuremath{Z^2} E$ coming from realizing it as $\H \ensuremath{\otimes}_{S \backslash s_2} A \Lambda^-_{s_1}$ (see the discussion before Theorem \ref{t twisted S2}) satisfies
\begin{equation} \label{e Z2 u=1}
(T_{a_{k,l}} \small \ensuremath{\boxtimes}_{S \backslash s_2} \ensuremath{\mathscr{C}}_{s_1} \small \ensuremath{\boxtimes}_{J'_{n-2}} \gamma)|_{\u=1} = (a_{k,l} + a_{l,k+1} )\small \ensuremath{\boxtimes} \gamma \longleftrightarrow x_k \ensuremath{\otimes} x_l \ensuremath{\otimes} a_{k,l} s_1 \gamma + x_l \ensuremath{\otimes} x_k\ensuremath{\otimes} a_{k,l} \gamma \quad (k < l),
\end{equation}
where (\ref{e aklcoset}) has been used freely.
Therefore if $s_{1}$ acts trivially on $E$, then the rightmost expression in (\ref{e Z2 u=1}) becomes $(x_k \ensuremath{\otimes} x_l + x_l \ensuremath{\otimes} x_k) \ensuremath{\otimes} a_{k,l} \gamma$. If $s_{1}$ acts by $-1$ on $E$, then it becomes $(-x_k \ensuremath{\otimes} x_l + x_l \ensuremath{\otimes} x_k) \ensuremath{\otimes} a_{k,l} \gamma$. The proposition then follows, as $\ensuremath{\mathbb{Z}} \{a_{k,l} \gamma: \gamma \in \Gamma\} = E|_{\u=1}$.
\end{proof}
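At $\u=1$ the dichotomy in the proposition can be checked concretely: the vectors $x_k \ensuremath{\otimes} x_l + x_l \ensuremath{\otimes} x_k$ ($k < l$) span $S^2_\text{red} V$, the vectors $x_k \ensuremath{\otimes} x_l - x_l \ensuremath{\otimes} x_k$ span $\Lambda^2 V$, and together they span $T^2_\text{red} V$. An illustrative rank computation for $n = 4$ (helper names are ours):

```python
import numpy as np

n = 4

def e(k):
    # standard basis vector x_k of V (0-indexed)
    v = np.zeros(n)
    v[k] = 1.0
    return v

def tensor(a, b):
    # x_k (x) x_l as a vector in V (x) V
    return np.outer(a, b).reshape(-1)

pairs = [(k, l) for k in range(n) for l in range(n) if k < l]
sym   = np.array([tensor(e(k), e(l)) + tensor(e(l), e(k)) for k, l in pairs])
wedge = np.array([tensor(e(k), e(l)) - tensor(e(l), e(k)) for k, l in pairs])

assert np.linalg.matrix_rank(sym) == n * (n - 1) // 2     # dim S^2_red V
assert np.linalg.matrix_rank(wedge) == n * (n - 1) // 2   # dim Lambda^2 V
# together they span T^2_red V, of dimension n(n-1)
assert np.linalg.matrix_rank(np.vstack([sym, wedge])) == n * (n - 1)
```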
\subsection{}
A correct $W$-graph version of tensoring $S_{\text{red}}^2 V$ with $E$ is $\H \ensuremath{\otimes}_{S\backslash s_2} E$, and the projection $T^2_\text{red} V \ensuremath{\otimes} E \twoheadrightarrow S^2_\text{red} V \ensuremath{\otimes} E$ corresponds to
\begin{equation} \label{e reduced v tsr v to reduced s^2} \ensuremath{\widetilde{F}}^2 = \H\ensuremath{\otimes}_{J'_{n-2}} E = \H\ensuremath{\otimes}_{S\backslash s_2} \H_{S\backslash s_2}\ensuremath{\otimes}_{J'_{n-2}} E \xrightarrow{\tilde{\beta}(E)} \H\ensuremath{\otimes}_{S\backslash s_2} E, \end{equation} where $\tilde{\beta} (E) = \H\ensuremath{\otimes}_{S\backslash s_2}\beta (E)$. This is justified by the following calculation at $\u = 1$.
\begin{proposition}
The module $\H \ensuremath{\otimes}_{S \backslash s_2} E$ is a $\u$-analogue of $S^2_\text{red} V \ensuremath{\otimes} E$ (via the right vertical map of the following diagram, to be defined below) in such a way that the diagram commutes.
\begin{equation}
\xymatrix@C=1cm{
(\H \ensuremath{\otimes}_{J'_{n-2}} E) |_{\u = 1} \ar@{<->}[d]^\cong\ar@{->>}[r]_<<<<<{\tilde{\beta}(E)}& (\H \ensuremath{\otimes}_{S \backslash s_2} E)|_{\u = 1} \ar@{<->}[d]^\cong\\
T^2_\text{red} V \ensuremath{\otimes} E \ar@{->>}[r] & S^2_\text{red} V \ensuremath{\otimes} E
}
\end{equation}
\end{proposition}
\begin{proof}
Here we will think of $S^2_\text{red} V$ as the subspace $\ensuremath{\mathbb{Z}} \{x_k x_l : k \neq l\}$ of $(\ensuremath{\mathbb{Z}}[x_1,\ldots,x_n])_2$, and the map $T^2_\text{red} V \ensuremath{\otimes} E \to S^2_\text{red} V \ensuremath{\otimes} E$ as the one sending $x_k \ensuremath{\otimes} x_l$ to $x_k x_l$. The right vertical map comes from an application of the modified Proposition \ref{p gk} (in which $g \small \ensuremath{\boxtimes} e \mapsto (g \small \ensuremath{\boxtimes} 1) \ensuremath{\otimes} g ce$ replaces (\ref{e gk}), where $c \in G$ commutes with all of $K$). In this application, use $G = W, K = W_{S \backslash s_2}$, $c = s_1$. We have $\ensuremath{\mathbb{Z}} G \ensuremath{\otimes}_{\ensuremath{\mathbb{Z}} K} \ensuremath{\mathbb{Z}} \cong S^2_\text{red} V$ by $a_{k, l} \small \ensuremath{\boxtimes} 1 \mapsto x_k x_l$ for $k < l$. It is straightforward to check that this is a $\ensuremath{\mathbb{Z}} G$-module homomorphism; the most interesting case is $s_k a_{k, k+1} \small \ensuremath{\boxtimes} 1 = a_{k+1, k+1} \small \ensuremath{\boxtimes} 1 = a_{k, k+1} s_1 \small \ensuremath{\boxtimes} 1 = a_{k, k+1} \small \ensuremath{\boxtimes} 1$, which matches $s_k (x_k x_{k+1}) = x_k x_{k+1}$. It can be checked directly on the basis $\{a_{k,l} \small \ensuremath{\boxtimes} \gamma: k \in [n], l \in \{2,\ldots,n\}, \gamma \in \Gamma\}$ of $(\H \ensuremath{\otimes}_{J'_{n-2}} E) |_{\u = 1}$ that the diagram commutes.
\end{proof}
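The module isomorphism $a_{k, l} \small \ensuremath{\boxtimes} 1 \mapsto x_k x_l$ can also be checked by machine. The Python sketch below (illustrative only; helper names are ours) verifies for $n = 4$ that $a_{k,l}$ carries the monomial $x_1 x_2$ to $x_k x_l$ for $k < l$, and that $s_k$ fixes $x_k x_{k+1}$, the case singled out in the proof:

```python
n = 4

def s(i):
    # simple transposition s_i in one-line notation (1-indexed values)
    p = list(range(1, n + 1))
    p[i - 1], p[i] = p[i], p[i - 1]
    return tuple(p)

def mul(p, q):
    # group product (p q)(i) = p(q(i))
    return tuple(p[q[i - 1] - 1] for i in range(1, n + 1))

def word(gens):
    p = tuple(range(1, n + 1))
    for i in gens:
        p = mul(p, s(i))
    return p

def a(k, l):
    # a_{k,l} = s_{k-1} ... s_1 s_{l-1} ... s_2
    return word(list(range(k - 1, 0, -1)) + list(range(l - 1, 1, -1)))

def act(p, mono):
    # p sends the squarefree monomial x_i x_j to x_{p(i)} x_{p(j)}
    return frozenset(p[i - 1] for i in mono)

# a_{k,l} corresponds to x_k x_l on the level of monomials
for k in range(1, n + 1):
    for l in range(k + 1, n + 1):
        assert act(a(k, l), frozenset({1, 2})) == frozenset({k, l})

# the most interesting case of equivariance: s_k fixes x_k x_{k+1}
for k in range(1, n):
    assert act(s(k), frozenset({k, k + 1})) == frozenset({k, k + 1})
```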
\subsection{}
\label{ss diagram commutes}
It is immediate from Proposition \ref{p affine u=1} that the right-hand vertical map in the diagram below is a $\u$-analogue of the surjection $V \ensuremath{\otimes} V \ensuremath{\otimes} E \to S^2 V \ensuremath{\otimes} E$. Let us check that this is compatible with the projection $\tilde{\beta}(\pi^2 E)$ -- the $\u$-analogue of the projection $T^2_\text{red} V \ensuremath{\otimes} E \to S^2_\text{red} V \ensuremath{\otimes} E$. This amounts to checking that the following diagram commutes, where the top horizontal map is from (\ref{e mackey2 affine}) and the bottom horizontal map we take to be the inclusion of the $\pi^2$-subgraph of $\text{\rm Res}_\H \eH \ensuremath{\otimes}_\H E$.
\begin{equation}
\xymatrix{
\H \ensuremath{\otimes}_{J'_{n-2}} \pi^2 E \ar[r]\ar@{->>}[d]_<<<<{\tilde{\beta}(\pi^2 E)} & (\pH \ensuremath{\otimes}_{\H} (\pH \ensuremath{\otimes}_{\H} E)_1)_1 \ar@{->>}[d]^<<<<{\beta((\pH \ensuremath{\otimes}_{\H} E)_1)} \\
\H \ensuremath{\otimes}_{S \backslash s_2} \pi^2 E \ar[r]& (\pH \ensuremath{\otimes}_{\H} E)_2
}
\end{equation}
It is straightforward to check, given Theorem \ref{t mackey} and the derivation of (\ref{e mackey2 affine}), that standard basis elements behave as shown under the horizontal maps. This proves that the diagram commutes.
\begin{equation}
\xymatrix{
\ensuremath{\tilde{T}}_{a_{k,l}, \pi^2 \gamma} \ar@{|->}[r]\ar@{|->}[d] & \ensuremath{\tilde{T}}_{a_k \pi, a_{l-1} \pi, \gamma}\ar@{|->}[d] & \ensuremath{\tilde{T}}_{a_{k,l}, \pi^2 \gamma} \ar@{|->}[r]\ar@{|->}[d]& \ar@{|->}[d] \ensuremath{\tilde{T}}_{a_k \pi, a_{l-1} \pi, \gamma} \\
\ensuremath{\tilde{T}}_{a_{k,l}, \pi^2\gamma} \ar@{|->}[r] & \ensuremath{\tilde{T}}_{a_{k,l} \pi^2, \gamma} & T_{a_{l-1,k}} \small \ensuremath{\boxtimes} T_{s_1} (\pi^2\gamma) \ar@{|->}[r] & T_{a_{l-1,k}} \pi^2 \small \ensuremath{\boxtimes} T_{s_{n-1}} \gamma}
\end{equation}
The left-hand diagram is for $k < l$ and the right for $k \geq l > 1$.
This calculation will be used to show that the work we do in the next subsection for the $\H \ensuremath{\otimes}_J -$ version of tensoring with $V$ is also useful for the $(\pH \ensuremath{\otimes}_\H -)_1$ version.
\subsection{}
\label{ss S2 vs twisted S2}
In this subsection we will partially determine the projection $\tilde{\beta}(E)$ on canonical basis elements. Despite the fact that $\H \ensuremath{\otimes}_{S \backslash s_2} E$ is a $\u$-analogue of $S^2_\text{red}V \ensuremath{\otimes} E$ and $\ensuremath{Z^2} E$ is not, our study of $\ensuremath{Z^2}$ was not wasted. It will be helpful for determining what $\tilde{\beta}(E)$ does to canonical basis elements. This is not so easy to see directly, as it does not simply send canonical basis elements to canonical basis elements.
By Lemma \ref{l s1 descent submodule}, $A\Gamma^-_{s_1}$ is a cellular submodule of $\text{\rm Res}_{S \backslash s_2} A\Gamma$ with corresponding quotient $A\Gamma^+_{s_1}$, hence the exact sequence
\begin{equation} 0 \to A\Gamma^-_{s_1} \to \text{\rm Res}_{S\backslash s_2} A\Gamma \to A\Gamma^+_{s_1} \to 0. \end{equation}
Since $\ensuremath{\widetilde{F}}^2, \ensuremath{Z^2} A\Gamma, \ensuremath{S^2_{\text{red}}V} A\Gamma$ only depend on $\text{\rm Res}_{S\backslash s_2} A\Gamma,$ this sequence yields the three columns in the diagram below. The left column is exact by Proposition \ref{p cbsubquotientprop tsymred} and the other two are exact by exactness of induction. The left two squares commute because $\zeta$ (of the proof of Proposition \ref{p cbsubquotientprop tsymred}) of a morphism just restricts its domain, and the right two squares commute because $\beta$ is a natural transformation.
\begin{equation}\label{e 5x3 diagram}
\xymatrix@C=1.5cm{0 \ar[d] & 0 \ar[d] & 0 \ar[d]\\
\ensuremath{Z^2} A\Gamma^-_{s_1} \ar[r]\ar[d] & \H\ensuremath{\otimes}_{J'_{n-2}} A\Gamma^-_{s_1} \ar[d]\ar[r]_<<<<<<<{\tilde{\beta}(A\Gamma^-_{s_1})} & \H\ensuremath{\otimes}_{S\backslash s_2} A\Gamma^-_{s_1} \ar[d]\\
\ensuremath{Z^2} A\Gamma \ar[r]\ar[d] & \H\ensuremath{\otimes}_{J'_{n-2}} A\Gamma \ar[d]\ar[r]_<<<<<<<{\tilde{\beta}(A\Gamma)} & \H\ensuremath{\otimes}_{S\backslash s_2} A\Gamma \ar[d]\\
\ensuremath{Z^2} A\Gamma^+_{s_1} \ar[r]\ar[d] & \H\ensuremath{\otimes}_{J'_{n-2}} A\Gamma^+_{s_1} \ar[d]\ar[r]_<<<<<<<{\tilde{\beta}(A\Gamma^+_{s_1})} & \H\ensuremath{\otimes}_{S\backslash s_2} A\Gamma^+_{s_1} \ar[d]\\
0 & 0 & 0
}\end{equation}
\begin{lemma}
\label{l L to L*}
Given $w \in W^{J'_{n-2}}$, $\gamma \in \Gamma$, suppose that either $s_1 \notin R(w)$ or $s_1 \notin L(\gamma)$. Then the image $\tilde{\beta}(A\Gamma)(\ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma})$ of $\ensuremath{{\tilde{C'}}\negmedspace}_{w, \gamma} \in \H \ensuremath{\otimes}_{J'_{n-2}} A \Gamma$
lies in the lattice $\ensuremath{\L'} := A^- \H \ensuremath{\otimes}_{S \backslash s_2} \Gamma$.
\end{lemma}
\begin{proof}
First note that the standard basis for $\H \ensuremath{\otimes}_{J'_{n-2}} A \Gamma$ coming from realizing $\H \ensuremath{\otimes}_{J'_{n-2}} A \Gamma$ as $\H \ensuremath{\otimes}_{S \backslash s_2} \H_{S \backslash s_2} \ensuremath{\otimes}_{J'_{n-2}} A \Gamma$ satisfies
\begin{equation}
\label{e TvC mapsto}
\begin{array}{cccclr}
\ensuremath{\tilde{T}}_{v, \ensuremath{{\tilde{C'}}\negmedspace}_{1,\gamma}}& = & T_v \small \ensuremath{\boxtimes}_{S \backslash s_2} 1 \small \ensuremath{\boxtimes}_{J'_{n-2}} \gamma & \stackrel{\tilde{\beta}(A\Gamma)}{\longmapsto} &
T_v \small \ensuremath{\boxtimes}_{S \backslash s_2} \gamma, \text{ and} \\
\ensuremath{\tilde{T}}_{v, \ensuremath{{\tilde{C'}}\negmedspace}_{s_1,\gamma}} &=&T_v \small \ensuremath{\boxtimes}_{S \backslash s_2} \ensuremath{\mathscr{C}}_{s_1} \small \ensuremath{\boxtimes}_{J'_{n-2}} \gamma &
\stackrel{\tilde{\beta}(A\Gamma)}{\longmapsto}&
\left\{
\begin{array}{lc}
[2] T_v \small \ensuremath{\boxtimes}_{S \backslash s_2} \gamma & \text{if } s_1 \in L(\gamma),\\
\sum\limits_{\{\delta:s_1 \in L(\delta)\}} \mu(\delta, \gamma )T_v \small \ensuremath{\boxtimes}_{S \backslash s_2} \delta & \text{if } s_1 \notin L(\gamma),
\end{array}
\right.
\end{array}
\end{equation}
for $v \in W^{S \backslash s_2}$.
Then since the elements $T_v \small \ensuremath{\boxtimes}_{S \backslash s_2} \gamma$ are a standard basis for $ \H\ensuremath{\otimes}_{S \backslash s_2}A\Gamma$, the lattice $\L = A^- \H \ensuremath{\otimes}_{J'_{n-2}} \Gamma$ is sent to $\u \ensuremath{\L'}$ by $\tilde{\beta}(A\Gamma)$. Now for $w \in W^{{J'_{n-2}}}$, $s_1 \notin R(w)$ implies $w \in W^{S \backslash s_2}$. In this case,
\begin{equation} \ensuremath{{\tilde{C'}}\negmedspace}_{w, \gamma} \in \ensuremath{\tilde{T}}_{w, \ensuremath{{\tilde{C'}}\negmedspace}_{1,\gamma}} + \ensuremath{u^{-1}} \L \xrightarrow{\tilde{\beta}(A\Gamma)} T_w \small \ensuremath{\boxtimes}_{S \backslash s_2} \gamma + \ensuremath{\L'} = \ensuremath{\L'}.\end{equation}
On the other hand if $s_1 \in R(w)$, then $w = v s_1$ for $v \in W^{S \backslash s_2}$, and in this case we are assuming $s_1 \notin L(\gamma)$. Hence
\begin{equation} \label{e Cvs1 mapsto}
\ensuremath{{\tilde{C'}}\negmedspace}_{v s_1, \gamma} \in \ensuremath{\tilde{T}}_{v, \ensuremath{{\tilde{C'}}\negmedspace}_{s_1,\gamma}} + \ensuremath{u^{-1}} \L \xrightarrow{\tilde{\beta}(A\Gamma)} T_v \small \ensuremath{\boxtimes}_{S \backslash s_2} A^- \Gamma + \ensuremath{\L'} = \ensuremath{\L'}. \end{equation}
\end{proof}
For the remainder of the subsection set $\L^* = A^- \H \ensuremath{\otimes}_{S \backslash s_2} \Gamma^-_{s_1}$.
\begin{theorem}\label{t bigdiagram}
The arrows in (\ref{e 5x3 diagram}) are compatible with the $W$-graph structures in the following sense.
\begin{list}{(\roman{ctr})}{\usecounter{ctr} \setlength{\itemsep}{1pt} \setlength{\topsep}{2pt}}
\item Vertical arrows take canonical basis elements to canonical basis elements or to 0.
\item The top non-zero row, on canonical basis elements, satisfies
\begin{equation}\label{e mtrx1}\xymatrix@R=.16cm@C=1cm{
\ensuremath{{\tilde{C'}}\negmedspace}_{w,\ensuremath{\mathscr{C}}_{s_1, \gamma}} \ar@{|->}[r] & \ensuremath{{\tilde{C'}}\negmedspace}_{ws_1,\gamma} \ar@{|->}[r]_<<<<<<{\tilde{\beta}} & [2]\ensuremath{{\tilde{C'}}\negmedspace}_{w, \gamma}, \text{ and} & (w \in W^{S \backslash s_2}) \\
\ & \ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} \ar@{|->}[r]_<<<<<<<{\tilde{\beta}} &0\mod \L^* &\ }\end{equation}
\item The bottom non-zero row, on canonical basis elements, satisfies
\begin{equation}\label{e mtrx2}
\xymatrix@R=.16cm@C=1.4cm{
\ensuremath{{\tilde{C'}}\negmedspace}_{w,\ensuremath{\mathscr{C}}_{s_1, \gamma}} \ar@{|->}[r] & \ensuremath{{\tilde{C'}}\negmedspace}_{ws_1,\gamma} \ar@{|->}[r]_<<<<<<{\tilde{\beta}} & 0, \text{ and} & **[l] (w \in W^{S \backslash s_2}) \\
& \ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} \ar@{|->}[r]_<<<<<<<{\tilde{\beta}} & \ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} &}
\end{equation}
\end{list}
\end{theorem}
\begin{proof}
Statement (i) follows from Proposition \ref{CBSubquotientprop} and Proposition \ref{p cbsubquotientprop tsymred}.
The horizontal arrows on the left side of (\ref{e 5x3 diagram}) are understood from Theorem \ref{t twisted S2}; each is the inclusion of a cellular submodule.
To see (ii), first observe that $\text{\rm Res}_{S \backslash s_2} \Gamma^-_{s_1}$ and $\Lambda^-_{s_1} \subseteq \H_{S \backslash s_2} \ensuremath{\otimes}_{J'_{n-2}} A\Gamma^-_{s_1}$ (as in Theorem \ref{t twisted S2}) are isomorphic as $W_{S \backslash s_2}$-graphs. This is clear from the remarks preceding Theorem \ref{t twisted S2} and from (\ref{e mudef}). An isomorphism, up to a global constant, between these two objects is given by
\begin{equation} A\Lambda^-_{s_1} \xrightarrow{\beta(A\Gamma^-_{s_1})}A\Gamma^-_{s_1},\ \ensuremath{{\tilde{C'}}\negmedspace}_{s_1,\gamma} \mapsto [2]\gamma.\end{equation} Therefore, tensoring $\beta(A\Gamma^-_{s_1})$ with $\H$ and applying the construction of Theorem \ref{t HY canbas exists} yields a map taking each canonical basis element to $[2]$ times a canonical basis element. This map is the composite of the maps in the top non-zero row of (\ref{e 5x3 diagram}).
The second line of (\ref{e mtrx1}) follows from Lemma \ref{l L to L*}.
The proof of (iii) is similar to that of (ii). The $W_{S \backslash s_2}$-graphs $\text{\rm Res}_{S \backslash s_2} \Gamma^+_{s_1}$ and $\Lambda^+_{s_1} \subseteq \H_{S \backslash s_2} \ensuremath{\otimes}_{J'_{n-2}} A\Gamma^+_{s_1}$ are isomorphic via
\begin{equation} A\Lambda^+_{s_1} \xrightarrow{\beta(A\Gamma^+_{s_1})}A\Gamma^+_{s_1},\ \ensuremath{{\tilde{C'}}\negmedspace}_{1,\gamma} \mapsto \gamma. \end{equation}
Tensoring with $\H$ yields a map taking canonical basis elements to canonical basis elements, and this map is the bottom right horizontal map of (\ref{e 5x3 diagram}).
To see the first line of (\ref{e mtrx2}), first observe that $\ensuremath{{\tilde{C'}}\negmedspace}_{s_1,\gamma} = \ensuremath{\mathscr{C}}_{s_1} \small \ensuremath{\boxtimes} \gamma
\stackrel{\beta(A\Gamma^+_{s_1})}{\longmapsto}
\ensuremath{\mathscr{C}}_{s_1}\gamma = 0$, with the equality by definition of the quotient $A\Gamma^+_{s_1}$. Then use the fact that any $\ensuremath{{\tilde{C'}}\negmedspace}_{w,\ensuremath{{\tilde{C'}}\negmedspace}_{s_1,\gamma}}$ is in $A \{ T_x \small \ensuremath{\boxtimes} \ensuremath{{\tilde{C'}}\negmedspace}_{s_1,\gamma} : x \in W^{S \backslash s_2}, \gamma \in \Gamma^+_{s_1} \}$ (see Theorem \ref{t twisted S2} and the preceding discussion).
\end{proof}
\begin{theorem}
\label{t main theorem}
The map $\tilde{\beta}(A\Gamma)$ (the middle right horizontal map of (\ref{e 5x3 diagram})), on canonical basis elements, satisfies
\begin{equation}
\label{e tilde counit map}
\begin{array}{rclc}
\ensuremath{{\tilde{C'}}\negmedspace}_{ws_1,\gamma} & \longmapsto & [2]\ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} & \text{if } s_1 \in L(\gamma), \\
\ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} & \longmapsto & 0 \mod \L^* & \text{if } s_1 \in L(\gamma), \\
\ensuremath{{\tilde{C'}}\negmedspace}_{ws_1,\gamma} & \longmapsto & 0 \mod \L^* & \text{if } s_1 \notin L(\gamma), \\
\ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} & \longmapsto & \ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} \mod \L^* & \text{if } s_1 \notin L(\gamma), \\
\end{array}
\end{equation}
where $w$ is any element of $W^{S \backslash s_2}$ (and $\L^* = A^- \H \ensuremath{\otimes}_{S \backslash s_2} \Gamma^-_{s_1}$).
\end{theorem}
\begin{proof}
The first and second line of (\ref{e tilde counit map}) follow from Theorem \ref{t bigdiagram} (ii) and the top right square of (\ref{e 5x3 diagram}), as each vertical map in this square is the inclusion of a cellular submodule.
For the third line, apply Theorem \ref{t bigdiagram} (iii) to show that $\ensuremath{{\tilde{C'}}\negmedspace}_{ws_1,\gamma} \in \H \ensuremath{\otimes}_{J'_{n-2}} A\Gamma$, going down and then right, maps to $0 \in \H\ensuremath{\otimes}_{S\backslash s_2} A\Gamma^+_{s_1}$. Therefore (going right) $\tilde{\beta}(A\Gamma)(\ensuremath{{\tilde{C'}}\negmedspace}_{ws_1,\gamma}) \in \H\ensuremath{\otimes}_{S \backslash s_2} A\Gamma^-_{s_1} \subseteq \H\ensuremath{\otimes}_{S \backslash s_2} A\Gamma$. Combining this with Lemma \ref{l L to L*} yields the desired result. A similar argument proves the fourth line.
\end{proof}
In a way made precise by the corollary below, the sets
\begin{equation} \ensuremath{Z^2} \Gamma^-_{s_1} \cup (\H \ensuremath{\otimes}_{J'_{n-2}} \Gamma^+_{s_1} \backslash \ensuremath{Z^2} \Gamma^+_{s_1})
\text{ and } \ensuremath{Z^2} \Gamma^+_{s_1} \cup (\H \ensuremath{\otimes}_{J'_{n-2}} \Gamma^-_{s_1} \backslash \ensuremath{Z^2} \Gamma^-_{s_1})\end{equation}
are canonical bases for $S^2_\text{red} V \ensuremath{\otimes} A\Gamma$ and $\Lambda^2V \ensuremath{\otimes} A\Gamma$, respectively, as $\u \to 0$. We therefore call these subsets of $\H\ensuremath{\otimes}_{J'_{n-2}} \Gamma$ \emph{combinatorial reduced sym} and \emph{combinatorial wedge} respectively.
\begin{corollary} \label{c approx at u=1}
After adjoining $\frac{1}{[2]}$ to $A$, there exists a $\br{\cdot}$-invariant basis $\{ c_{x, \gamma}: x \in W^{J'_{n-2}}, \gamma \in \Gamma \}$ of $\H \ensuremath{\otimes}_{J'_{n-2}} A \Gamma$ so that the transition matrix to the basis $\H \ensuremath{\otimes}_{J'_{n-2}} \Gamma$ tends to the identity matrix as $\u \to 0$, and so that under the map $\tilde{\beta}(A\Gamma)$
\begin{equation}
\begin{array}{cclc}
c_{ws_1,\gamma} & \longmapsto & [2] \ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} & \text{if } s_1 \in L(\gamma), \\
c_{w,\gamma} & \longmapsto & 0 & \text{if } s_1 \in L(\gamma), \\
c_{ws_1,\gamma} & \longmapsto & 0 & \text{if } s_1 \notin L(\gamma), \\
c_{w,\gamma} & \longmapsto & \ensuremath{{\tilde{C'}}\negmedspace}_{w,\gamma} & \text{if } s_1 \notin L(\gamma), \\
\end{array}
\end{equation}
where $w$ is any element of $W^{S \backslash s_2}$.
\end{corollary}
Theorem \ref{t main theorem} and Corollary \ref{c approx at u=1} also apply with $\pi^2 A\Gamma$ replacing $A\Gamma$. There is a potential pitfall here as $\pi^2 A\Gamma$ is not the restriction of an $\H$-module to $\H_{J'_{n-2}}$. However, it is an $\H_{S \backslash s_2}$-module, since $K_0 \cap \pi^2 K_0 \pi^{-2} = S \backslash s_2$, which is all that is needed to apply the results in this subsection. Also, by \textsection\ref{ss diagram commutes} the projection $\tilde{\beta}(\pi^2 E)$ specializes to the projection $T^2_\text{red} V\ensuremath{\otimes} E \to S^2_\text{red} V \ensuremath{\otimes} E$ at $\u =1$. Thus we can write $\H \ensuremath{\otimes}_{J'_{n-2}} \pi^2 \Gamma$ as the disjoint union of
\begin{equation}
\ensuremath{Z^2} \pi^2 \Gamma^-_{s_{n-1}} \cup (\H \ensuremath{\otimes}_{J'_{n-2}} \pi^2 \Gamma^+_{s_{n-1}} \backslash \ensuremath{Z^2} \pi^2 \Gamma^+_{s_{n-1}} ) \text{ and } \ensuremath{Z^2} \pi^2 \Gamma^+_{s_{n-1}} \cup (\H \ensuremath{\otimes}_{J'_{n-2}} \pi^2 \Gamma^-_{s_{n-1}} \backslash \ensuremath{Z^2} \pi^2 \Gamma^-_{s_{n-1}} ),
\end{equation}
which will also be called combinatorial reduced sym and combinatorial wedge.
\begin{example}
In the $W$-graph in Figure \ref{f VVe+}, combinatorial reduced sym is the lower triangular region consisting of the first $i$ entries of row $i$ for $i = 1, 2, 3$; combinatorial wedge is the upper triangular region consisting of the last $4-i$ entries of row $i$ for $i = 1, 2, 3$. For general $\Gamma$, the picture would be similar: the $W$-graph could be drawn in $n$ by $n$ chunks and combinatorial reduced sym would consist of lower triangular regions for $\gamma \in \Gamma^-_{s_1}$ and upper triangular regions for $\gamma \in \Gamma^+_{s_1}$.
In the $W$-graph in Figure \ref{f VVe+ affine}, combinatorial reduced sym is the lower triangular region consisting of the first $i-1$ entries of row $i$ for $i = 2, 3, 4$; combinatorial wedge is the upper triangular region consisting of the last $5-i$ entries of row $i$ for $i = 2, 3, 4$.
The labels ``sym'' and ``wedge'' below the trees mark the cells in combinatorial reduced sym and combinatorial wedge.
\end{example}
\section{Combinatorial approximation of $V \ensuremath{\otimes} V \ensuremath{\otimes} E \twoheadrightarrow S^2 V \ensuremath{\otimes} E$}
\label{s combinatorial approximation S2}
For this section, let $\Gamma$ be a cell of $\Gamma_W$ labeled by a tableau $T^0$. We will describe the results of \textsection\ref{s tensor V} and \textsection\ref{s decomposing VVE} in terms of cells and their tableau labels.
\subsection{}
For a tableau $P$, let $P_{r, c}$ be the square of $P$ in the $r$-th row and $c$-th column. Suppose that $P_{r_1, c_1}, \ldots, P_{r_l,c_l}$ are squares of $P$ such that $P_{r_i, c_i}$ is an outer corner of $P^{i-1} := P \backslash \{ P_{r_1, c_1}, \ldots, P_{r_{i-1}, c_{i-1}} \}$. Then referring to the sequence of tableaux $P, P^1, \ldots, P^l$, we say that $P_{r_1, c_1}, \ldots, P_{r_l, c_l}$ are \emph{removed from $P$ as a horizontal strip} (resp. \emph{removed from $P$ as a vertical strip}) if $c_1 > c_2 > \dots > c_l$ (resp. $r_1 > r_2 > \dots > r_l$). Equivalently, if $P^*$ is the skew tableau of squares $\{ P_{r_1, c_1}, \ldots, P_{r_l,c_l} \}$ with $l+1-i$ in $P_{r_i,c_i}$, then $\text{\rm jdt} (P^*)$ is a single row (resp. column). Similarly, referring to the sequence of tableaux $P^l, \ldots, P^1, P$, we say that $P_{r_l,c_l}, \ldots, P_{r_1,c_1}$ are \emph{added to $P^l$ as a horizontal strip} (resp. \emph{added to $P^l$ as a vertical strip}) if $c_1 > c_2 > \dots > c_l$ (resp. $r_1 > r_2 > \dots > r_l$).
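The strip conditions above depend only on the coordinates of the removed squares, so they can be checked directly. The following Python sketch (function names are ours, not from the text; squares are listed in removal order as $(r, c)$ pairs) tests the column and row conditions:

```python
def removed_as_horizontal_strip(squares):
    """squares = [(r_1, c_1), ..., (r_l, c_l)], in the order they are
    removed; a horizontal strip iff c_1 > c_2 > ... > c_l."""
    cols = [c for _, c in squares]
    return all(a > b for a, b in zip(cols, cols[1:]))

def removed_as_vertical_strip(squares):
    """A vertical strip iff r_1 > r_2 > ... > r_l."""
    rows = [r for r, _ in squares]
    return all(a > b for a, b in zip(rows, rows[1:]))
```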
Recall the local rules for the RSK growth diagram (see, e.g., \cite[7.13]{St}). Letting $\lambda, \mu, \nu$ be partitions with $\mu \subseteq \lambda, \nu$, we notate these local rules by
\begin{equation}
\begin{array}{ll}
\ensuremath{\mathscr{G}}(0; \lambda, \mu, \nu) &= \left\{ \begin{array}{ll}
\lambda & \text{if } \lambda = \mu = \nu, \\
(\lambda_1, \ldots, \lambda_i, \lambda_{i+1} + 1, \lambda_{i+2}, \ldots) & \text{if } \lambda = \nu = (\mu_1, \ldots, \mu_i + 1, \mu_{i+1}, \ldots), \\
\lambda \cup \nu & \text{if } \lambda \neq \nu,
\end{array}
\right. \\
\ensuremath{\mathscr{G}}(1; \lambda, \mu, \nu) &= (\lambda_1 + 1, \lambda_2, \ldots) \quad \quad\quad\quad\quad\quad\quad\quad \text{ if } \lambda = \mu = \nu.
\end{array}
\end{equation}
Here $\lambda \cup \nu$ denotes the partition whose $i$-th part is $\max(\lambda_i,\nu_i)$.
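Since the local rules are stated case by case, they transcribe directly into code. The following Python sketch (our own naming, not from the paper; partitions as tuples of parts) implements $\ensuremath{\mathscr{G}}$ exactly as displayed, with $\lambda \cup \nu$ the partwise maximum:

```python
def union(lam, nu):
    """Partition whose i-th part is max(lam_i, nu_i)."""
    n = max(len(lam), len(nu))
    lam = list(lam) + [0] * (n - len(lam))
    nu = list(nu) + [0] * (n - len(nu))
    out = [max(a, b) for a, b in zip(lam, nu)]
    while out and out[-1] == 0:
        out.pop()
    return tuple(out)

def G(a, lam, mu, nu):
    """RSK growth-diagram local rule G(a; lam, mu, nu), with mu
    contained in both lam and nu, transcribed case by case."""
    lam, mu, nu = tuple(lam), tuple(mu), tuple(nu)
    if a == 1:
        assert lam == mu == nu
        return (lam[0] + 1,) + lam[1:] if lam else (1,)
    if lam == mu == nu:
        return lam
    if lam == nu:
        # lam = nu = mu with one part increased, say in row i;
        # the rule adds a box in row i+1
        i = next(j for j, p in enumerate(lam)
                 if p != (mu[j] if j < len(mu) else 0))
        out = list(lam) + [0]
        out[i + 1] += 1
        while out and out[-1] == 0:
            out.pop()
        return tuple(out)
    return union(lam, nu)
```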
Let $a\rightarrow P$ (resp. $a \leftarrow P$) denote the column (resp. row) insertion of $a$ into $P$. For the next theorem we will use freely the descriptions of cells given in \textsection\ref{s tableau combinatorics}. The shorthand $P_>$ will be used for $P_{>c} = \text{\rm jdt} (P^*)$, where $P^*$ is the skew subtableau of $P$ with entries $> c$ and $c$ is the smallest entry of $P$. In what follows we will use the somewhat redundant local sequences for cells that come from writing $\ensuremath{\widetilde{E}}^1, \ensuremath{\widetilde{E}}^2, \ensuremath{\widetilde{F}}^2, \ensuremath{\widetilde{F}}^2$ as $\H \ensuremath{\otimes}_J \H_J \ensuremath{\otimes}_J A \Gamma$, $\H \ensuremath{\otimes}_J \H_J \ensuremath{\otimes}_J \H \ensuremath{\otimes}_J \H_J \ensuremath{\otimes}_J A \Gamma$, $\H \ensuremath{\otimes}_J \H_J \ensuremath{\otimes}_{J'_{n-2}} \H_{J'_{n-2}} \ensuremath{\otimes}_{J'_{n-2}} \H_J \ensuremath{\otimes}_J A \Gamma$, $\H \ensuremath{\otimes}_{S \backslash s_2} \H_{S \backslash s_2} \ensuremath{\otimes}_{J'_{n-2}} \H_{J'_{n-2}} \ensuremath{\otimes}_{J'_{n-2}} \H_{S \backslash s_2} \ensuremath{\otimes}_{S \backslash s_2} A \Gamma$ respectively; these last two will be referred to as $\ensuremath{\widetilde{F}}^2_J$ and $\ensuremath{\widetilde{F}}^2_{S \backslash s_2}$ respectively.
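As a reminder of the notation $a \leftarrow P$, row insertion can be sketched as standard Schensted bumping (a Python sketch of ours, not code from the paper; $P$ is a list of rows, and column insertion $a \rightarrow P$ is the transpose analogue):

```python
def row_insert(P, a):
    """Row insertion a <- P: bump a along successive rows of P."""
    P = [list(row) for row in P]  # work on a copy
    for row in P:
        for j, x in enumerate(row):
            if x > a:               # leftmost entry strictly greater than a
                row[j], a = a, x    # a displaces x, which bumps to the next row
                break
        else:
            row.append(a)           # a fits at the end of this row; done
            return P
    P.append([a])                   # a bumps through every row: new row
    return P
```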
\begin{theorem}\
\label{t combinatorial sym}
\begin{list}{(\roman{ctr})}{\usecounter{ctr} \setlength{\itemsep}{1pt}
\setlength{\topsep}{2pt}
\setlength{\leftmargin}{0pt}
}
\item The map $\H \ensuremath{\otimes}_J \alpha : \ensuremath{\widetilde{E}}^1 \to \ensuremath{\widetilde{E}}^2$ of (\ref{e mackey2}) is given on cells
by $(T^1, T^1_>, T^0) \mapsto (T^1, T^1_>, P, T^1_>, T^0)$, where $P = 1
\to T^1_>$. In particular, \[\text{\rm sh}(P) = \ensuremath{\mathscr{G}}(1; \text{\rm sh}(T^1_>), \text{\rm sh}(T^1_>), \text{\rm sh}(T^1_>)).\]
\item The inverse of the map $\ensuremath{\widetilde{E}}^2 \to \ensuremath{\widetilde{F}}^2_J$ of (\ref{e mackey2}) is given
on cells by $(T^2, T^2_>,T^2_{>2}, T^1, T^0) \mapsto (T^2, T^2_>, P, T^1, T^0)$, where
$$\text{\rm sh}(P) = \ensuremath{\mathscr{G}}(0; \text{\rm sh}(T^2_>), \text{\rm sh}(T^2_{>2}), \text{\rm sh}(T^1));$$
$P$ is determined by its shape and $P_> = T^1$.
\item The isomorphism of $W$-graphs $\ensuremath{\widetilde{F}}^2_J \to \ensuremath{\widetilde{F}}^2_{S \backslash s_2}$ of Proposition \ref{p nested
parabolic} is given on cells by $(T^2, T^2_>, T^2_{>2}, T^1, T^0)
\mapsto (T^2, (P^2, T^2_{>2}), T^2_{>2}, (P^1, T^2_{>2}), T^0)$, where
$P^1$ is the tableau ${\tiny\young(12)}$ (resp. ${\tiny\young(1,2)}$) if
$T^0 \backslash T^1, T^1 \backslash T^2_{>2}$ are removed from $T^0$
as a horizontal strip (resp. vertical strip), and $P^2$ is the tableau
{\tiny\young(12)} (resp. {\tiny\young(1,2)}) if $T^2_> \backslash
T^2_{>2}, T^2 \backslash T^2_>$ are added to $T^2_{>2}$ as a horizontal
strip (resp. vertical strip).
\item The cells of $\ensuremath{\widetilde{F}}^2_{S \backslash s_2}$ in combinatorial reduced sym
(resp. combinatorial wedge) are those with local sequences
$(T^2, (P^2, T^2_{>2}), T^2_{>2}, (P^1, T^2_{>2}), T^0)$ such that $P^2$
and $P^1$ have the same shape (resp. different shape).
\end{list}
\end{theorem}
\begin{proof}
For (i)--(iii), we will use $J = J_{n-1}$ instead of $J = J'_{n-1}$ and the comments in \textsection\ref{ss sb} to go back and forth between these conventions.
The map $\H \ensuremath{\otimes}_J
\alpha$ on canonical basis elements is given in stuffed notation by
\begin{equation}
(z_1, \rj{z_1}{J}, z_0) \mapsto (z_1, \rj{z_1}{J}, \rj{z_1}{J} n, \rj{z_1}{J}, z_0).
\end{equation}
Here the $z_i$ are thought of as words so that $\rj{z_1}{J} n$ is just the word
$\rj{z_1}{J}$ with $n$ appended at the end. The map on cells is then
$(T^1, T^1_{<n}, T^0) \mapsto (T^1, T^1_{<n}, P, T^1_{<n}, T^0)$, where $P =
T^1_{<n} \leftarrow n$. Statement (i) then follows by applying the
Sch\"utzenberger involution.
For (ii), observe that the inverse of $\H \ensuremath{\otimes}_J \tau$ of (\ref{e mackey2}) is given in stuffed notation by
\begin{equation} (z_2, \rj{z_2}{J},\lj{z_0}{J_{n-2}}, \lj{z_0}{J}, z_0) \mapsto (z_2, \rj{z_2}{J}, z_1, \lj{z_0}{J}, z_0),
\end{equation}
where $z_1 = \rj{z_2}{J}^* k$ ($\rj{z_2}{J}^*$ is obtained from $\rj{z_2}{J}$ by increasing all numbers $\geq k$ by 1)
and $k$ is such that $\lj{z_0}{J} = \lj{z_0}{J_{n-2}}^* k$ ($\lj{z_0}{J_{n-2}}^*$ is obtained from $\lj{z_0}{J_{n-2}}$ by increasing all numbers $\geq k$ by 1). Thus if $(T^2_{<n})^* := P(\rj{z_2}{J}^*)$ and $(T^2_{<n-1})^* := P(\rj{z_0}{J_{n-2}}^*)$, then
\begin{equation} \label{e main ii}
P:=P(z_1) = (T^2_{<n})^* \leftarrow k, \text{ and } T^1:=P(\lj{z_0}{J}) = (T^2_{<n-1})^* \leftarrow k.
\end{equation}
Note that $k \neq n$ and $\lj{z_0}{J_{n-2}} = \rj{z_2}{J_{n-2}}$ imply that $(T^2_{<n})^* \backslash (T^2_{<n-1})^*$ is a square containing an $n$. The element $k$ inserts into these tableaux exactly the same way, except that the final step of $(T^2_{<n})^* \leftarrow k$ may bump the $n$ down one row; this case corresponds exactly to the case $\text{\rm sh}((T^2_{<n})^*) = \text{\rm sh}(T^1)$.
Statement (iii) is really two separate statements, one for a bijection
of local sequences corresponding to $\text{\rm Res}_{J'_{n-2}} \text{\rm Res}_J A \Gamma
\cong \text{\rm Res}_{J'_{n-2}} \text{\rm Res}_{S \backslash s_2} A \Gamma$, and one for a
bijection of local sequences corresponding to $\H \ensuremath{\otimes} \H_J \ensuremath{\otimes} \ensuremath{\Upsilon}
\cong \H \ensuremath{\otimes} \H_{S \backslash s_2} \ensuremath{\otimes} \ensuremath{\Upsilon} \ (\ensuremath{\Upsilon} \text{ some cell of }
\Gamma_{W_{J'_{n-2}}})$. The first bijection follows from \cite[Lemma
7.11.2]{St}. (This includes the statement that if $P$ is a tableau and $j
\leq k$, then the square $(P \leftarrow j) \backslash P$ lies strictly
to the left of $((P \leftarrow j) \leftarrow k) \backslash (P \leftarrow
j)$; we also need that if $j > k$, then the square $(P \leftarrow j)
\backslash P$ lies weakly to the right of $((P \leftarrow j) \leftarrow
k) \backslash (P \leftarrow j)$, which is similar.) The second bijection
is the definition of adding as a horizontal or vertical strip in the
case that $J = J_{n-1}$.
To see (iv), observe that the local labels of the cells of $\text{\rm Res}_{S
\backslash s_2} \Gamma^+_{s_1}$ (resp. $\text{\rm Res}_{S \backslash s_2}
\Gamma^-_{s_1}$) are of the form $({\tiny\young(12)}, T^2_{>2})$ (resp.
$({\tiny\young(1,2)}, T^2_{>2})$); the local labels of the cells of
$\Lambda^+_{s_1}$ (resp. $\Lambda^-_{s_1}$) are of the form
$({\tiny\young(12)}, T^2_{>2})$ (resp. $({\tiny\young(1,2)},
T^2_{>2})$), where $\Lambda = \H_{S \backslash s_2} \ensuremath{\otimes}_{J'_{n-2}} \Gamma$.
\end{proof}
\begin{example}
Suppose $T^0 = {\tiny\young(123,46,5)}$. On the left is the local sequence of a cell of $\ensuremath{\widetilde{E}}^2$ (reading from left to right, ignoring the bottom middle tableau) and the local sequence of the corresponding cell of $\ensuremath{\widetilde{F}}^2_J$ (reading from left to right, ignoring the top middle tableau). The tableaux are arranged this way to match an RSK growth diagram picture. Above the tableaux are the coordinates of the stuffed notation for a canonical basis element in this cell.
On the right is the local sequence of the corresponding cell of $\ensuremath{\widetilde{F}}^2_{S \backslash s_2}$.
\begin{pspicture}(10,3.9){
\psset{unit=1cm}
\rput(2,3){
$\tiny\begin{array}{ccccc}
362145 && 526134 && 541623 \\
& 36245 && 52634 & \\
&& 3645 &&
\end{array}$
}
\rput(2,1){
$\begin{array}{ccccc}
\tiny\young(145,26,3) & & \tiny\young(134,26,5) & & \tiny\young(123,46,5)\\
& \tiny\young(245,36) && \tiny\young(234,56) & \\
&& \tiny\young(345,6) &&
\end{array}$}
\rput(9.5,3){
$\tiny\begin{array}{lcccr}
362145 && && 541623 \\
& 21,3645 && 21,3645 & \\
&& 3645 && \\
\end{array}$}
\rput(9.5,1){
$\begin{array}{ccccc}
\tiny\young(145,26,3) & & & & \tiny\young(123,46,5)\\
& \tiny\young(1,2),\young(345,6) && \tiny\young(1,2),\young(345,6) & \\
&& \tiny\young(345,6) &&
\end{array}$}
}
\end{pspicture}
\end{example}
\subsection{}
For the $W$-graph version of tensoring with $V$ coming from the affine Hecke algebra, we have a similar theorem. Let
$\ensuremath{\mathscr{G}}'(a;\lambda, \mu, \nu) = (\ensuremath{\mathscr{G}}(a;\lambda',\mu',\nu'))'$, where $'$ of a partition denotes its transpose. Let $\ensuremath{\widehat{E}}^1, \ensuremath{\widehat{E}}^2, \ensuremath{\widehat{F}}^2_J, \ensuremath{\widehat{F}}^2_{S \backslash s_2}$ be defined analogously to $\ensuremath{\widetilde{E}}^1, \ensuremath{\widetilde{E}}^2, \ensuremath{\widetilde{F}}^2_J, \ensuremath{\widetilde{F}}^2_{S \backslash s_2}$. More precisely, $\ensuremath{\widehat{E}}^1$ and $\ensuremath{\widehat{E}}^2$ will use three- and five-term local sequences as in Examples \ref{ex affine insert} and \ref{ex hE2 cell}; $\ensuremath{\widehat{F}}^2_J$ refers to $\H \ensuremath{\otimes}_J \H_J \ensuremath{\otimes}_{J'_{n-2}} \pi^2(\H_{J_{n-2}} \ensuremath{\otimes}_{J_{n-2}} \H_{J_{n-1}} \ensuremath{\otimes}_{J_{n-1}} A \Gamma)$ with a six-term local sequence as in Example \ref{ex hE2 cell}, and $\ensuremath{\widehat{F}}^2_{S \backslash s_2}$ refers to $\H \ensuremath{\otimes}_{S \backslash s_2} \H_{S \backslash s_2} \ensuremath{\otimes}_{J'_{n-2}} \pi^2(\H_{J_{n-2}} \ensuremath{\otimes}_{J_{n-2}} \H_{S \backslash s_{n-2}} \ensuremath{\otimes}_{S \backslash s_{n-2}} A \Gamma)$ also with a six-term local sequence.
\begin{theorem}\
\label{t combinatorial sym affine}
\begin{list}{(\roman{ctr})}{\usecounter{ctr} \setlength{\itemsep}{1pt}
\setlength{\topsep}{2pt}
\setlength{\leftmargin}{0pt}
}
\item The inverse of the map $\ensuremath{\widehat{E}}^2 \to \ensuremath{\widehat{E}}^1$ of (\ref{e mackey2 affine}) is given on cells by $(T^1, T^1_>, T^0) \mapsto (P^2, P^2_>, P^1, T^1_>, T^0)$, where $P^1$ is determined by $\text{\rm sh}(P^1) = \ensuremath{\mathscr{G}}'(1; \text{\rm sh}(T^1_>), \text{\rm sh}(T^1_>), \text{\rm sh}(T^1_>))$ and the entries in $P^2, P^2_>$ have the same relative order as those in $T^1, T^1_>$.
\item The map $\ensuremath{\widehat{F}}^2_J \to \ensuremath{\widehat{E}}^2$ of (\ref{e mackey2 affine}) is given on cells by $(T^2, T^2_>, T^2_{>2}, \pi^{-2} T^2_{>2}, T^1, T^0) \mapsto (P^2, P^2_>, P^1, P^1_>, T^0)$, where $P^1$ is determined by $\text{\rm sh}(P^1) = \ensuremath{\mathscr{G}}'(0; \text{\rm sh}(T^2_>), \text{\rm sh}(T^2_{>2}), \text{\rm sh}(T^1))$ and the entries of $P^2, P^2_>, P^1_>$ have the same relative order as those in $T^2, T^2_>, T^1$.
\dikte=0.2pt
\item The isomorphism of $W$-graphs $\ensuremath{\widehat{F}}^2_J \cong \ensuremath{\widehat{F}}^2_{S \backslash s_2}$ of Proposition \ref{p nested parabolic} is given on cells by \[ (T^2, T^2_>, T^2_{>2}, \pi^{-2} T^2_{>2}, T^1, T^0) \mapsto (T^2, (P^2, T^2_{>2}), T^2_{>2}, \pi^{-2} T^2_{>2}, (\pi^{-2} T^2_{>2}, P^1), T^0), \] where $P^1$ is the tableau ${\tiny\begin{Young}{n-1} & n\cr \end{Young}}$ (resp. ${\tiny\begin{Young}{n-1} \cr n\cr \end{Young}}$)
if $T^0 \backslash T^1, T^1 \backslash \pi^{-2} T^2_{>2}$ are removed from $T^0$ as a horizontal strip (resp. vertical strip), and $P^2$ is the tableau {\tiny\young(12)} (resp. {\tiny\young(1,2)}) if $T^2_> \backslash T^2_{>2}, T^2 \backslash T^2_>$ are added to $T^2_{>2}$ as a horizontal strip (resp. vertical strip).
\item The cells of $\ensuremath{\widehat{F}}^2_{S \backslash s_2}$ in combinatorial reduced sym (resp. combinatorial wedge) are those with local sequences \[ (T^2, (P^2, T^2_{>2}), T^2_{>2}, \pi^{-2} T^2_{>2}, (\pi^{-2} T^2_{>2}, P^1), T^0) \] such that $P^2$ and $P^1$ have the same shape (resp. different shape).
\end{list}
\end{theorem}
\begin{proof}
Similar to that of Theorem \ref{t combinatorial sym}, the main difference being for (ii): after applying the Sch\"utzenberger involution, the analogous statement to (\ref{e main ii}) is with column insertions instead of row insertions.
\end{proof}
\begin{example}
\label{ex hE2 cell}
The local sequence for a cell of $\ensuremath{\widehat{E}}^2$ (top) and the local sequence for the corresponding cell of $\ensuremath{\widehat{F}}^2_J$ (bottom).
\begin{center}
$
\begin{array}{ccccc}
\tiny\young(\ensuremath{\text{\ng 1}} 12,36,4) & & \tiny\young(\ensuremath{\text{\ng 1}} 14,26,3) & & \tiny\young(145,26,3)\\
& \tiny\young(12,36,4) && \tiny\young(14,26,3) & \\
&& &&
\end{array}$
$\begin{array}{cccccc}
\tiny\young(123,46,5) & & & & & \tiny\young(145,26,3)\\
& \tiny\young(23,46,5) &&& \tiny\young(14,25,3) & \\
&& \tiny\young(36,4,5)& \tiny\young(14,2,3)&&
\end{array}
$
\end{center}
\end{example}
\begin{corollary}
\label{c main}
Theorem \ref{t combinatorial sym} gives a partition of the cells of $\ensuremath{\widetilde{E}}^2$ into three parts: the non-reduced part corresponding to the image of (i), the inverse image of combinatorial reduced sym under (ii), and the inverse image of combinatorial wedge under (ii). Similarly, Theorem \ref{t combinatorial sym affine} gives a partition of the cells of $\ensuremath{\widehat{E}}^2$ into three parts: the non-reduced part corresponding to the inverse image of (i), the image of combinatorial reduced sym under (ii), and the image of combinatorial wedge under (ii).
\end{corollary}
\begin{remark}
There is an obvious bijection between the cells of $\ensuremath{\widetilde{E}}^2$ and $\ensuremath{\widehat{E}}^2$ obtained by taking a local sequence ${\bold \ensuremath{\Upsilon}}$ of a cell of $\ensuremath{\widetilde{E}}^2$ to the cell of $\ensuremath{\widehat{E}}^2$ with the same sequence of shapes as those of ${\bold \ensuremath{\Upsilon}}$. Under this bijection, the cells of any of the three parts of $\ensuremath{\widetilde{E}}^2$ coming from Corollary \ref{c main} do not match the corresponding parts of $\ensuremath{\widehat{E}}^2$.
\end{remark}
\section{Future work}
There are some natural questions to ask about the inducing $W$-graphs construction that, as far as we know, remain unanswered. One question is whether the edge weights $\mu$ of the $W_J$-graph $\Gamma$ being nonnegative implies the same for the coefficients $\ensuremath{\tilde{P}}_{x,\delta,w,\gamma}$ of (\ref{e mudef}) or for the structure constants $h_{x,\mathbf{y},\mathbf{z}}$, defined by $\ensuremath{\mathscr{C}}_{x} \ensuremath{{\tilde{C'}}\negmedspace}_{\mathbf{y}} = \sum_{\mathbf{z}} h_{x,\mathbf{y},\mathbf{z}} \ensuremath{{\tilde{C'}}\negmedspace}_{\mathbf{z}}$, $x \in W,\ \mathbf{y},\mathbf{z} \in W^J \times \Gamma$. Our computations in the case $W = \S_n$ are consistent with these positivity conjectures, but we have not investigated the inducing $W$-graphs construction outside this case. Presumably these should be provable in the special case that $(W,S)$ is crystallographic, and $\Gamma$ is the iterated induction of Hecke algebras of crystallographic Coxeter groups, by the same methods used to show the non-negativity of the usual Kazhdan-Lusztig polynomials for such $W$.
Another question concerns the partial order on the cells of $\ensuremath{\widetilde{E}}^{d-1} = \ensuremath{\H_{1}\tsr_{J_1}\ldots\tsr_{J_{d-1}}\H_{d}}$. For type $A$, we have stated Conjecture \ref{c dominance}. For general type, we might hope to extend Lusztig's $a$-invariant
to the induced $W$-graph setting. In particular, each cell of $\ensuremath{\widetilde{E}}^{d-1}$ is contained in a cellular subquotient isomorphic to $\Gamma_{W_1}$ (Theorem \ref{t HJHcells}), so inherits an $a$-invariant from this isomorphism; a natural question is whether $\mathbf{z} \leq_\Lambda \mathbf{z'}$ and $\mathbf{z}$, $\mathbf{z'}$ in different cells implies $a(\mathbf{z}) > a(\mathbf{z'})$, where $\Lambda$ is the $W_1$-graph structure on $\ensuremath{\widetilde{E}}^{d-1}$. In \cite{G2}, Geck shows a slightly weaker version of this statement in the case $\ensuremath{\widetilde{E}}^{d-1} = \text{\rm Res}_{J_1} \H_2$, $d=2$ and $W_2$ crystallographic and bounded in the sense of \cite[1.1 (d)]{L}. It seems likely that a similar proof will work for the general case, with all Coxeter groups crystallographic and bounded.
In the forthcoming paper \cite{B}, we look at the partial order on the cells of $\text{\rm Res}_\H \pH \ensuremath{\otimes}_{\H} e^+$. It appears that there are other important invariants besides the $a$-invariant and dominance order that put restrictions on this partial order.
We have put much effort into extending the results of \textsection\ref{s tensor V}--\textsection\ref{s combinatorial approximation S2} to higher symmetric powers of $V$, with only partial success. In a way, this is the subject of the forthcoming paper \cite{B}; however, that paper focuses more on the extended affine Hecke algebra and less on iterated restriction and induction.
\section*{Acknowledgments}
This paper would not have been possible without the generous advice from and many detailed discussions with Mark Haiman. I am also grateful to John Stembridge for pointing out references \cite{HY1}, \cite{HY2}, and to Michael Phillips, Ryo Masuda, and Kristofer Henriksson for help typing and typesetting figures.
\section{Introduction}
The vast majority of galaxies can be adequately described as
consisting of a compact smooth spheroidal component containing a
predominantly pressure-supported old [$\alpha$/Fe]-enhanced stellar
population, and/or an extended flattened star-forming disc component
containing intermediate and young stars with a wide range in
metallicities, rotating smoothly and embedded in an extensive cold
gas disc. Exceptions exist, most notably the dwarf populations
which, while dominant in terms of number density, actually contribute
only a modest amount to the baryon budget at the present time ($<16$
per cent; Driver 1999; Geller et al.~2012). This dichotomy of galaxies
into spheroids and discs has been known for over a hundred years
stretching back to even before the confirmation that galaxies are
external systems (e.g., Hubble~1926, 1936; Zwicky~1957 and references
therein). To some extent this dichotomy has been recently
``rediscovered'', through the statistical studies of large populations
as a galaxy bimodality (Strateva et al.~2001; Baldry et al.~2004).
Driver et al.~(2006) argued that this bimodality is better
interpreted in terms of the earlier bulge-disc dichotomy and advocated
routine structural decomposition as vital (e.g., Allen et al.~2006;
Simard et al.~2011; Lackner \& Gunn 2012) to directly trace the
independent evolutionary histories of the spheroidal and disc
components.
Numerical models of galaxy formation struggle to produce realistic
galaxy systems with a tendency to form overly cuspy cores and
difficulty in maintaining extended disc structures with a high axis
ratio (White \& Navarro~1993; Navarro \& Steinmetz~2000; Abadi et
al.~2003; House et al.~2011). Both problems are likely to be connected
to the different fundamental properties of the dark matter and the
baryons, and in particular their ability to experience pressure and
their ability to dissipate energy. In the core regions the
gravitational coupling of the baryons with the dark matter may allow
the dark matter to exhibit a pseudo-pressure, whereas in the outer regions the
ability of the baryons to dissipate energy on a timescale which is
faster than the free-fall timescale may allow for the formation
of a thin rotating baryonic disc. This picture, while simple to
articulate, has proven extremely hard to simulate, with the need to
partition and redistribute the angular momentum in a quite specific
manner to result in galaxies with realistic appearances. In particular
merger events are extremely disruptive to this process, imparting both
energy and angular momentum to the baryonic disc, which is easily
disrupted or ``plumped up'' (see Barnes \& Hernquist 1992 for extensive
discussion and early references on this topic, also Hopkins et
al.~2009 for updated simulations on the survivability of discs during
merger events). In general the greater the merger-rate the more
bulge-dominated the final galaxy population appears.
More contemporary hydrodynamical simulations (e.g., Governato et
al.~2010; Agertz, Teyssier \& Moore~2011; Scannapieco et al.~2011;
Domenech-Moral et al.~2012) are now starting to show significant
success at producing realistic ``looking'' bulge-disc systems by
incorporating a greater level of cold gas infall than previously
assumed, as argued earlier by Keres et al.~(2005) and Dekel et
al.~(2009). These focused hydrodynamical studies, however, are
inevitably extracted from numerical simulations with particularly
quiescent merger histories, suggesting such systems should be the
exception rather than the norm. Hence, while numerical simulations of
the development of the dark matter haloes find a continual process of
halo merging, it appears that the baryons and what we identify as
galaxies (baryonic condensates) might not develop in the same
way. Martig \& Bournaud (2009) argued that feedback from low and
intermediate mass stars can contribute significantly to the
redistribution of mass from the bulge to the disc through extensive
(or even excessive) mass-loss. This baryonic outflow could help
alleviate the problem of excessive bulge-formation by allowing some
fraction of the collapsed stellar mass (up to $\sim 50$ per cent)
to return to the halo and contribute to the later growth of a disc ---
thereby coupling bulge and disc growth. However this mechanism also
relies on a fairly quiescent merger history during later times, and
does not easily explain the broad diversity of bulge-disc ratios
seen. As an aside, simulations have also demonstrated that baryonic
outflows from the core regions can help provide a plausible
explanation to the core-cusp problem (Governato et al.~2012; Zolotov
et al.~2012). Clearly feedback and infall are both crucial processes,
whose motivation is as much driven by the requirement to produce
realistic looking images as by fundamental physics, and which are both
more effective if the merger rate is either low or at least confined
to earlier epochs.
As argued in the opening paragraph, we advocate a more heuristic
approach where we put aside the issue of dark-matter assembly and
start by asking whether the dichotomy of galaxy structure is best
explained by two distinct formation mechanisms. Following the earlier
discussion and lessons learnt from the simulations, the obvious two
mechanisms can loosely be termed as a hot and cold mode. In the hot
mode spheroids are formed early and rapidly via dynamically hot
(turbulent) processes (collapse, fragmentation, and merging). In the
cold mode discs are formed more slowly, from an extended quiescent
phase of cold gas infall regulated by internal feedback (i.e.,
supernova). This basic concept is of course not new (e.g., Larson
1976; Tinsley \& Larson 1978) but has lain dormant for some time,
overshadowed by the dominance of merger-driven evolution. However the
revival is also being championed via a series of semi-analytic studies
by Cook et al.~(2009; 2010a; 2010b, see also Dekel et al.~2009)
inspired by behaviour seen in numerical simulations in which an
initial rapid hot merger phase is typically followed by a more
quiescent phase of accretion (see also L'Huillier, Combes \& Semelin
2012).
The two-phase model is both obvious (given the bulge-disc nature of
galaxies) and controversial, as it marginalises the merger rate
required for dark matter assembly to earlier epochs than simulations
typically suggest. This low merger rate is arguably corroborated by
the local studies of dynamically close pairs (in particular see Patton
et al.~2002; De Propris et al.~2005, 2007, 2010) --- although a
correct derivation of the merger rates requires a robust understanding
of the merger timescales, which are currently poorly
constrained. Perhaps more compelling, however, is the result that only
40 per cent of the present day stellar mass resides in spheroidal systems
(Driver et al.~2008; Gadotti~2009; Tasca \& White~2011). A key
inference is then: {\it If discs are destroyed/thickened during
mergers, yet the majority of stellar mass resides in discs, the
dominant formation mechanism cannot be merger-driven, but presumably
the more quiescent process of cold gas accretion.} This statement
becomes more profound when one realises that the stellar mass in discs
today only measures that unaffected by mergers, and that some of the
stars currently in spheroidal systems may have originally formed
within discs via cold accretion prior to a merger event. In some bulge
formation scenarios, star-formation via merging is dispensed with
altogether and replaced by the migration of massive star-formation
clumps formed within deeply turbulent discs (e.g., Elmegreen, Bournaud
\& Elmegreen 2008). This potentially relegates the stellar-mass
build-up driven by mergers to the 0--40 per cent range by
mass. Clearly mergers do occur at all redshifts and similarly discs
may form, be disrupted, and reform at any redshift. Recently
L'Huillier, Combes \& Semelin (2012) reported, from simulations, that
77 per cent of a galaxy's mass is formed via gas accretion and 23 per
cent via direct merging. Other empirical studies also seem to suggest
that the bulk ($\sim 70$ per cent) of the stellar mass is mostly
assembled by $z \sim 1$, again marginalising the role of late time
major mergers (e.g., Bundy et al.~2004; Brown et al.~2007, 2008).
Focused studies of nearby galaxies are also unveiling significant
levels of gas accretion in some nearby systems (Sancisi et al.~2008) and
studies of the very rapid evolution of galaxy sizes have argued
(e.g., Graham et al.~2011) that the compact elliptical systems seen at
intermediate redshift ($1.4 < z < 2.5$) by Daddi et al.~(2005) and
Trujillo et al.~(2006) (see also Bruce et al.~2012) might represent
the naked bulges of present day spiral systems.
In essence the two-phase model is an attempt to highlight,
conceptually, the possibility of a distinct change in the primary galaxy
formation mechanism occurring at some transition redshift from an era
where the {\it dominant} mode is major mergers leading to spheroid
formation, to an era where the {\it dominant} mode is accretion
leading to disc formation.
At early cosmic epochs we see a prevalence of distinct
phenomena, in particular highly asymmetrical morphology in
massive/luminous systems (Driver et al.~1998; Conselice, Blackburn \&
Papovich.~2005; Ravindranath et al.~2006) and significantly increased
AGN activity (Fan et al.~2001, 2003; Croom et al.~2004; Richards et
al.~2006). AGN activity is directly linked to the formation and growth
of the associated super-massive black holes (SMBH; Hopkins et
al.~2008a) which in turn is linked to spheroid formation via the well
established SMBH-bulge relations (see, for example, the review by Ferrarese
\& Ford 2005 or the recent near-IR SMBH-bulge luminosity relation by
Vika et al.~2012). Recent studies also argue, from more direct
empirical evidence, that AGN activity is almost always coincident with
massive star-formation and that the two-processes do indeed appear to
occur hand-in-hand (e.g., Rafferty et al.~2011). This AGN-SMBH-bulge
connection therefore implies a clear timescale for the formation of
the spheroid systems (see also Pereira \& Miranda~2011 for a similar
argument, albeit applied in the opposite direction).
In Section 2 we describe the $z=0$ empirical data describing the
cosmic spectral energy distribution (CSED) of spheroids and discs as
recently reported by the Galaxy And Mass Assembly team (Driver et
al.~2011, 2012). In Section 3 we take the above arguments to their
natural conclusion and use the AGN-SMBH-Bulge connection to define the
independent star-formation history of spheroids and assign the
residual star-formation, implied by the cosmic star-formation history,
to describe that of the discs. In Section 4 we use our star-formation
histories to produce predictions of the CSED of spheroids and discs,
and in Section 5 compare the predictions to the data.
Throughout we use $H_0 = 70\,h_{0.7}$\,km\,s$^{-1}$\,Mpc$^{-1}$ and adopt
$\Omega_M=0.27$ and $\Omega_{\Lambda}=0.73$ (Komatsu et al.~2011).
\section{The $z=0$ cosmic spectral energy distribution}
In Driver et al.~(2012) we reported the empirical measurement of the
cosmic spectral energy distribution (CSED) in the nearby Universe,
corrected for dust attenuation, and spanning the wavelength range from
0.1 to 2.1 $\mu$m, i.e., the regime over which direct stellar-light
dominates. These data were derived from the combination of the GAMA
spectroscopic survey, currently underway on the Anglo-Australian
Telescope (Driver et al.~2011), coupled with reprocessed and aperture
matched data from GALEX, SDSS, and UKIRT LAS (see Seibert et al., in
prep. and Hill et al.~2010). Driver et al.~(2012) also provided the
CSED subdivided according to spheroid-dominated and disc-dominated
systems. The division into spheroid and disc dominated was achieved
via visual classification, as neither a simple colour nor S\'ersic
index division appears to cleanly separate the two populations (see
also Kelvin et al.,~2012, figure~20).
The sample originated from a common volume of $2.8 \times 10^5$
(Mpc/h)$^{3}$ over the redshift range $0.013 < z < 0.1$. Although the
GAMA survey currently contains about 180,000 galaxies with known
redshifts, the adopted redshift range significantly reduces the sample
size to around 10,000. It also simplifies and removes any luminosity
bias arising from large scale clustering as the sample is
pseudo-volume limited around the $L^*$ region --- i.e., those galaxies
which dominate the luminosity density measurements.
As the GAMA survey lies entirely within the Sloan Digital Sky Survey,
the GAMA CSED was renormalised to the full SDSS survey area. This
reduces the cosmic/sample variance from around 15 per cent to 5 per
cent (using the formula for estimating cosmic variance given by eqn.~4
of Driver \& Robotham 2010).
\subsection{The CSED of spheroids and discs}
The final CSED values we use here are the spheroid-dominated and {\it
attenuation corrected} disc-dominated values taken directly from
Table~7 of Driver et al.~(2012).
As dust attenuation is such a crucial issue it is worth mentioning the
genesis of the corrections used by the GAMA team. First, the dust
correction is only applied to the disc-dominated data and the spheroid
population is assumed dust free (e.g., Rowlands et al.~2012). Second,
the corrections are based on the radiative transfer models of Tuffs et
al.~(2002) and Popescu et al.~(2011) which have been fine-tuned to the
multiwavelength (FUV--far-IR) data of NGC891, and incorporate three
distinct dust components: an extended low opacity disc, a compact high
opacity disc and dust clumps. This fiducial model is calibrated to the
galaxy population at large by modifying the $B$-band central face on
opacity until the predicted variation of flux with inclination matches
the trend of $M^*$ with inclination seen in the Millennium Galaxy
Catalogue data (Driver et al.~2007). This calibrated model was then
used to derive the combined face-on and inclination dependent
correction for a population of galaxies averaged over all viewing
angles and over a wavelength range of $0.1-2.1\,\mu$m (Driver et
al.~2008). This photon escape fraction (varying from 24 per cent in
the FUV to 89 per cent in the $K$-band) was then used to correct the
CSED of disc-dominated systems to the intrinsic CSED, which we use
here.
The CSED of spheroid-dominated and disc-dominated galaxies, however,
is not precisely what we require, as some proportion of the CSED flux
in the disc-dominated class may be coming from the central bulges.
Likewise, some proportion of the flux in the spheroid-dominated class
may be due to faint discs. In order to assess how much of a problem
this might be, we can compare the ratio of the $K$-band luminosity
densities of the spheroid-dominated to non-spheroid dominated samples,
to the ratio of the stellar mass densities of bulge+elliptical systems
to disc systems from Driver et al.~(2007)\footnote{The Driver et
al.~(2007) study is based on bulge-disc decompositions of the
Millennium Galaxy Catalogue data (Liske et al.~2003; Driver et
al.~2005) described in full in Allen et al.~(2006)}. This test
assumes that the $K$-band luminosity is a suitable single-band proxy
for stellar mass. We find reasonable agreement (within $12$ per cent),
suggesting that a comparable amount of flux needs to be redistributed
in either direction. In detail, the $K$-band luminosity densities are
1.2 and 2.2 ($\times 10^{34}$ h W Mpc$^{-3}$) for the
spheroid-dominated and non-spheroid-dominated populations respectively
(taken from Table~7 of Driver et al.~2012). Meanwhile the stellar mass
densities for {{\bf spheroids}} (bulge+ellipticals) and discs are 2.9
and 4.7 ($\times 10^8$h M$_{\odot}$ Mpc$^{-3}$) respectively (taken
from Table~1 of Driver et al.~2007). This gives agreement to $\sim 10$
per cent and suggests that for the moment we can adopt the following
approximation:
~
elliptical+bulge CSED $\approx$ Spheroid-dominated CSED
~
disc CSED $\approx$ non-Spheroid dominated CSED
~
In due course all galaxies at $z<0.1$ in the GAMA survey will be
decomposed into bulge and disc components to enable a direct
derivation of the true spheroid and disc CSEDs.
\section{The star-formation history of spheroids and discs}
The local CSED should be a predictable quantity if the cosmic
star-formation history (CSFH) is known, the initial mass function
(IMF) is universal and known, and a plausible stellar evolution code
applied. Of course this is not quite so simple and in particular the
metallicity enrichment adopted will significantly modify the
predicted CSED shape. Upcoming papers will explore these issues in
more detail, but here we wish to construct a basic first-look model
and focus on the viability of the hypothesis that galaxy formation
progressed in two fairly distinct phases: rapid spheroid formation
followed by more quiescent disc growth.
In order to construct a model of the present day spheroid and disc
CSEDs we need not just the CSFH, but the CSFH sub-divided into
spheroids and discs. These CSED predictions can then be compared to
the data from Section 2.
The existence of various super-massive black-hole bulge relations
(e.g., Ferrarese \& Ford 2005), provides the obvious smoking gun, as
it couples SMBH growth to bulge growth. This is because SMBHs are
believed to grow via mergers, resulting in an active-galactic nucleus
phase (Hopkins et al.~2006). The growth of spheroids is therefore,
arguably, mirrored via the more readily observable AGN activity
history. This logical connection, from a correlation to causality, is
the key assumption underpinning our model and forms the first of our
two axioms. In the recent study by Richards et al.~(2006), the
integrated AGN activity versus redshift was reported and, ignoring any
significant lag (in either spheroid formation or AGN activity), can be
used as a proxy in shape for the spheroid cosmic star-formation
history.
The amplitude of the spheroid SFH can be set from comparison of the
AGN activity shape to the global CSFH. For our second axiom we elect
to maximise the spheroid CSFH by setting the amplitude as high as
possible without exceeding the global CSFH (i.e., a maximal spheroid
formation scenario). Conceptually then, the heart of the two-phase
model can be defined empirically from two axioms:
\begin{figure*}
\centerline{\psfig{file=figs/csfh.ps,width=\textwidth}}
\caption{\label{fig:csfh} ({\it upper panel}) the cosmic
star-formation history from Hopkins \& Beacom~(2006), Table~2 (black
line) and the actual data (grey points) calibrated to a
modified-Salpeter IMF (see Table~\ref{tab:mult}). Overlain are the
QSO luminosity density data from Richards et al.~(2006), Figure
20. The QSO luminosity density is scaled until the peak aligns with
the peak of the CSFH. ({\it lower panel}) the same data but now
shown with the abscissa in units of age. Overlain are our
parametric fits to these data which provide our inferred CSFHs for
the Spheroid and Disc systems. On both panels we also show the SFR
derived from the SDSS/GALEX FUV luminosity function given in
Robotham \& Driver (2006) converted to a modified-Salpeter A IMF
(see Table~\ref{tab:mult} or Hopkins \& Beacom 2006).}
\end{figure*}
~
{\bf 1) AGN activity traces spheroid growth}
~
{\bf 2) Spheroid formation dominates at high-z}
~
As the above two axioms are already constrained by empirical data,
this provides a zero-parameter starting point for the two-phase model
--- bypassing the need for any initial conditions or detailed
numerical simulations. Fig.~\ref{fig:csfh} (upper) shows the fit (solid
curve) to the cosmic star-formation history data (grey data points)
taken from Hopkins \& Beacom~(2006; see their figure~2a and table~1
column 2). This adopts the parametric form defined by Cole et
al.~(2001), with the UV data calibrated to a
modified-Salpeter~(1955) IMF. The data describing the AGN luminosity
density are taken from Richards et al.~(2006) and rescaled such that
the peak of the AGN luminosity density lies on the CSFH curve
(requiring an arbitrary multiplication by a factor of $3.51 \times
10^{6}$\,M$_{\odot}$\,yr$^{-1}$/L$^{i}_{\odot}$). Immediately noticeable is
the apparent discrepancy/uncertainty at very high redshift,
particularly as the axioms above require that at the very highest
redshifts the CSFH and AGN activity curves should have the same
form. To some extent the evident discrepancy simply reflects data
uncertainty, as the dust corrections on the CSFH at high-z are poorly
constrained with significant ongoing debate as to the true shape of
the CSFH at the highest redshifts. For example, measurements of the
star-formation history based on gamma-ray bursts (Yuksel et al.~2008;
Kistler et al.~2009) often find higher star-formation rates. Likewise
the incidence of dust-obscured AGN is an equally hotly debated topic
(e.g., Polletta et al.~2006; Treister et al.~2010). {{\bf Most
recently Behroozi, Wechsler \& Conroy (2012) argue that the
compendium by Hopkins \& Beacom (2006) potentially leads to an
over-estimate of the cosmic star-formation history at very high
redshifts as the pre-2006 UV luminosity densities may have been
over-estimated and find a modified CSFH which agrees very well
with the high-z AGN data (see their figure 2).}}
When presented as a function of time (Fig.~\ref{fig:csfh}, lower), it
is clear that this discrepancy is not actually that significant, and
the accumulated stellar mass able to be formed during this high-z
interval is small compared to subsequent mass growth. We can now
trivially fit the modified-Richards data to derive the star-formation
history of spheroids. This fit can then be subtracted from the global
star-formation history, and the residual fitted in turn to recover the
implied disc formation history. The resulting expressions are given below and
represent the star-formation rate ($\dot{\rho}$) versus time in Gyrs
($t_{\rm Gyrs}$) since the Big Bang for the spheroidal ($S$) and disc
($D$) populations:
\begin{eqnarray}
\dot{\rho}_{S}=\xi 1.03 \times 10^{-5} h_{0.7}^{3} (\frac{21.86}{t_{\rm Gyrs} h_{0.7}})^{8.57}\exp(-\frac{21.86}{t_{\rm Gyrs} h_{0.7}}) \\
\dot{\rho}_{D}=\xi 1.80 \times 10^{-3} h_{0.7}^{3}(\frac{29.39}{t_{\rm Gyrs} h_{0.7}})^{5.50}\exp(-\frac{29.39}{t_{\rm Gyrs} h_{0.7}})
\end{eqnarray}
where $\xi$ is the IMF multiplier as given in
Table~\ref{tab:mult}. The guideline error at any particular time
should be taken as $\sim \pm 25$ per cent based on the scatter of the
original data shown in Fig.~\ref{fig:csfh} (lower), i.e., $\sim 70$
per cent of the grey data points lie within the grey shaded
regions. At this point it is also worth highlighting that, while the
AGN data (mauve points) place a strong constraint on the shape of the
spheroid star-formation history, the normalisation is very uncertain,
being based on the relatively limited high-z cosmic star-formation
history data, and as we shall see in Section~4 will require a modest
downward correction of 25 per cent to match the observed CSED (i.e.,
within the grey shaded error bounds).
~
As an alternative for the spheroid population, one could instead use
the total CSFH for $t<3$\,Gyrs combined with the AGN luminosity density
data for $t>3$\,Gyrs, which is given by:
\begin{equation}
\dot{\rho}_{S2}=\xi 2.72 \times 10^{-4} h_{0.7}^{3} (\frac{16.82}{t_{\rm Gyrs} h_{0.7}})^{6.97}\exp(-\frac{16.82}{t_{\rm Gyrs} h_{0.7}})
\end{equation}
As stated these CSFHs are calibrated, via the UV data, to the
modified-Salpeter IMF used by Hopkins \& Beacom (2006) which was first
laid down as Sal A by Baldry \& Glazebrook (2003). To convert to other
IMFs one needs to {\it multiply} by the factor ($\xi$) shown in
Table~\ref{tab:mult}.
\begin{table}
\caption{CSFH multiplication factors ($\xi$) for various
IMFs. \label{tab:mult}}
\begin{tabular}{ll} \hline
IMF & Multiplier \\
& $\xi$ \\ \hline
Salpeter~(1955) & $\times 1.3$ \\
modified-Salpeter (Hopkins \& Beacom 2006) & $\times 1.0$ \\
Baldry \& Glazebrook~(2003) & $\times 0.7$ \\
Kroupa~(1993) & $\times 1.7$ \\
Kroupa~(2001) & $\times 0.85$ \\
Chabrier (2003) & $\times 0.85$ \\ \hline
\end{tabular}
\end{table}
The simple expressions above are shown in Fig.~\ref{fig:csfh} (lower)
in red (spheroid), blue (disc), and green (spheroid + disc) and
provide a good fit to the data given the accuracy to which the data
are known. These equations now provide a blueprint for the formation
of the present day spheroids and discs over the full age of the
Universe, leading to a clear prediction of the stellar energy output
and stellar mass growth.
Of particular interest should be the transition point around 4.2\,Gyrs
($z \approx 1.6$) from which point star-formation resulting in disc
growth dominates over star-formation resulting in spheroid
growth. This suggests a key epoch at which the Universe switches from
merger dominated evolution to accretion dominated evolution and meshes
very well with the evident change in the morphological appearance of
galaxies from highly disturbed to more ordered systems at $z \sim 1.5$
(see Driver et al.~1998; and also van den Bergh 2002).
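This transition epoch can be recovered numerically from Eqns~1 \& 2. The sketch below (with $\xi=1$ and $h_{0.7}=1$, purely illustrative choices) bisects for the time at which the disc star-formation rate overtakes that of the spheroids:

```python
import math

# Eqns 1 & 2: SFR density versus time since the Big Bang in Gyr,
# evaluated here with xi = 1 and h_0.7 = 1 (illustrative assumptions).
def sfr_spheroid(t_gyr):
    return 1.03e-5 * (21.86 / t_gyr) ** 8.57 * math.exp(-21.86 / t_gyr)

def sfr_disc(t_gyr):
    return 1.80e-3 * (29.39 / t_gyr) ** 5.50 * math.exp(-29.39 / t_gyr)

def transition_epoch(lo=1.0, hi=13.0, tol=1e-6):
    """Bisect for the time at which disc growth overtakes spheroid growth;
    the spheroid term dominates at lo and the disc term at hi."""
    diff = lambda t: sfr_spheroid(t) - sfr_disc(t)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if diff(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)
```

Bisection lands near $4.2$\,Gyrs, consistent with the transition point quoted above; the precise value shifts slightly with the adopted fit parameters.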
\section{Constructing the model}
With the CSFH of spheroids and discs defined, we now build our
empirical two-phase model adopting ``vanilla'' choices at every
opportunity. In summary the key inputs and assumptions to the model are:
~
\noindent
1) The star-formation history shown in Fig.~1 as discussed in section~3.
\noindent
2) The adoption of a Universal IMF, in this case Baldry \& Glazebrook~(2003, henceforth BG03).
\noindent
3) The adoption of {\sc pegase.2} (Fioc \& Rocca-Volmerange~1997; 1999) to
model the spectral output of the evolving stellar population (using default options throughout).
\noindent
4) The assumption that {{\bf the gas-phase}} metallicity increases
linearly with star-formation from $Z=0.0$ to $Z=0.030$ for spheroids
and to $Z=0.010$ for discs, with no time lag (i.e., instantaneous
enrichment).
~
\begin{figure}
\centerline{\psfig{file=figs/metals.ps,width=\columnwidth}}
\caption{\label{fig:metal} The adopted {{\bf (gas-phase)}} metallicity
for spheroids (red solid line) and discs (cyan solid line) as a
function of time. Solid data points show the accepted mean $z=0$
values taken from Tremonti et al.~(2004). Also shown are approximate
data values from Erb et al.~(2006) and Zahid et al.~(2011). Note
that all data have an arbitrary error of $\pm 10$ \%. The {{\bf
gas-phase}} metallicity assumes no lag between star-formation
and enrichment. The dotted lines show the implied metallicity of the
stellar populations for the spheroids and discs. The grey line shows
the mean by mass of the integrated stellar metallicity and the
instantaneous integrated gas-phase metallicity.}
\end{figure}
\begin{figure*}[h]
\centerline{\psfig{file=figs/set1.ps,width=\textwidth}}
\caption{\label{fig:model1} The zero-parameter output, assuming a BG03
IMF for various metallicity histories as indicated and adopting the
star-formation histories given by Eqns.~1 \& 2. Note that the
star-formation rates have been multiplied by a factor of $0.55$ to
convert from a Salpeter (1955) IMF to that of BG03. The data points are
transcribed directly from the CSED reported in Driver et al.~(2012)
where the red points represent spheroid-dominated, the blue
disc-dominated and the black the sum of the two. The model lines are
for spheroids (orange), discs (cyan) and the sum (black).}
\end{figure*}
\subsection{Metal/chemical enrichment history}
Perhaps the most uncertain of the above list is the appropriate
metallicity history to adopt. Here we have been guided by the {{\bf
study of Tremonti et al.~(2004) to define our gas-phase
metallicity at redshift zero to be $Z=0.030$ and $Z=0.010$ for the
spheroids and discs. These values were determined by noting the
metallicity at $10^{11}$M$_{\odot}$ (predominantly spheroids) and
at $10^{9}$M$_{\odot}$ (predominantly discs). To convert the
given $12+\log_{10}(O/H)$ values to those shown on
Fig.\ref{fig:metal} we adopt a solar metallicity of
$Z_{\odot}=0.019$ with $12+\log_{10}(O/H)_{\odot}=8.9$.}} We then
argue that in the absence of other factors the mean metallicity will
rise approximately linearly with the cumulative cosmic star-formation
history normalised to the present day values. This ignores the
prospect of either pre-enrichment via, for example, Population III
stars, or any lag between the star-formation and the increase in
metallicity (i.e., instantaneous enrichment).
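The linear-with-cumulative-SFH assumption is straightforward to realise numerically. The sketch below (again with $\xi=1$, $h_{0.7}=1$, and a simple Riemann sum, all illustrative choices) scales the running integral of Eqns~1 \& 2 to the adopted present-day values:

```python
import math

# Eqns 1 & 2 with xi = 1 and h_0.7 = 1 (illustrative assumptions).
def sfr_spheroid(t_gyr):
    return 1.03e-5 * (21.86 / t_gyr) ** 8.57 * math.exp(-21.86 / t_gyr)

def sfr_disc(t_gyr):
    return 1.80e-3 * (29.39 / t_gyr) ** 5.50 * math.exp(-29.39 / t_gyr)

def metallicity_history(sfr, z_today, t_today=13.7, n=2000):
    """Z(t) rising linearly with the cumulative SFH, normalised so that
    Z(t_today) = z_today (instantaneous enrichment, no pre-enrichment)."""
    dt = t_today / n
    times = [dt * (i + 1) for i in range(n)]
    cumulative, history = 0.0, []
    for t in times:
        cumulative += sfr(t) * dt  # simple Riemann sum; adequate here
        history.append(cumulative)
    return times, [z_today * c / history[-1] for c in history]

times, z_spheroid = metallicity_history(sfr_spheroid, 0.030)
_,     z_disc     = metallicity_history(sfr_disc,     0.010)
```

By construction the spheroid enrichment runs well ahead of the disc enrichment at early times, as in Fig.~\ref{fig:metal}.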
Conceptually these are loosely consistent with a closed-box model for
spheroids (i.e., rising to a metallicity close to typical yields), and
an infall model for discs or one in which the disc is gradually
growing from a large gas ``reservoir'' (e.g., perhaps analogous to the
``equilibrium model'' put forward by Dav{\'e}, Finlator \& Oppenheimer
2012). To explore the bounds and importance of this metal enrichment,
however, we also show the CSED predictions using our simple evolving
metallicity history and for constant metallicity at the highest and
lowest values. Fig.~\ref{fig:metal} shows the implied metallicity
histories derived from Eqns.~1\&2, for the two populations (as
indicated by the red and cyan solid lines). {{\bf Note that the grey
lines on Fig.~\ref{fig:metal} show the combined metallicity of all
stars formed (grey dotted line) and the integrated gas-phase
metallicity (grey solid line). This shows interesting behaviour in
that the average gas-phase metallicity (of the gas about to form
stars) peaked at $z \sim 2$.}}
Constraints from the literature on the mean metallicity at
intermediate and high redshift are minimal, however we note that
Zahid, Kewley \& Bresolin (2011) find from a sample of 1350 galaxies
drawn from DEEP2 that massive systems have comparable {{\bf
gas-phase}} metallicity at $z=0.8$ to local systems, while low
mass systems have a {{\bf gas-phase}} metallicity reduced by
0.15\,dex. However Erb et al.~(2006) find that the implied {{\bf
gas-phase}} metallicity, {{\bf for massive, i.e.,
spheroidal-like systems,}} at $z \sim 2$ is approximately half
that at $z=0$. Both of these results are crudely consistent with our
inferred metallicity history if one equates (as we explicitly do),
the massive systems to spheroids and low mass systems to discs.
{{\bf Note that one natural byproduct of this is that as
intermediate mass systems have both a spheroid and a disc
component their systemic metallicity will lie somewhere between
the two extremes and exhibit strong radial gradients as one
moves from the central spheroid component to the outer disc
component. As the mean bulge-to-total ratio increases fairly
smoothly with stellar mass this naturally gives rise to the
mass-metallicity relation (Tremonti et al.~2004).}} Note that the
$\pm 10$\% error ranges shown on Fig.~\ref{fig:metal} are purely
indicative as the actual ranges are poorly constrained.
\subsection{Stellar population synthesis}
To construct the redshift zero CSED the {\sc pegase.2} code (Fioc \&
Rocca-Volmerange~1997; 1999) was used to produce a series of
single-stellar population (SSP) templates with an appropriate range of
metallicities (Z = 0.000 to 0.025 in 0.001 intervals) and with the {\sc
pegase.2} default steps in ages (i.e., roughly logarithmic from
0--20\,Gyrs). For all SSP templates the star formation is set to a short
continuous burst over a 1Myr period with constant metallicity (leading
to the formation of $2.0\times 10^{-3}$ M$_{\odot}$ in {\sc
pegase.2}-{\it normalised} stellar mass units). These SSP spectra
were then combined to create a library of 1\,Gyr time averaged spectra
from 0-1\,Gyr to 13-14\,Gyr in 1\,Gyr intervals, and for each
metallicity class. Note that the 0-1\,Gyr bin which dominates the FUV
and NUV region is extremely hard to model correctly because of the
rapidly changing UV flux and requires more care. Here we take the
rather simplistic approach of combining all the spectra provided by
{\sc pegase.2} in the 0-1\,Gyr range in the following manner:
~
\noindent
$0-1\,{\rm Gyr}=\frac{\frac{\frac{1+2+3+\ldots+10}{10}+20+30+\ldots+100}{10}+200+300+\ldots+1000}{10}$\,Myr
~
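The nested average above can be made explicit. The following sketch is illustrative only: \texttt{spec(t)} is a hypothetical accessor standing in for the {\sc pegase.2} template at age $t$\,Myr, returning a scalar here, whereas the real templates are arrays over wavelength to which the same arithmetic applies per bin.

```python
# Illustrative sketch of the nested 0-1 Gyr averaging described above.
# `spec(t)` is a hypothetical accessor for the PEGASE.2 template at age
# t Myr; here it returns a scalar, while the real templates are arrays
# over wavelength to which the same arithmetic applies bin by bin.
def average_first_gyr(spec):
    # mean of the ten 1-10 Myr outputs
    s = sum(spec(t) for t in range(1, 11)) / 10.0
    # that mean counts as one of ten entries alongside the 20-100 Myr outputs
    s = (s + sum(spec(t) for t in range(20, 101, 10))) / 10.0
    # and likewise alongside the 200-1000 Myr outputs
    s = (s + sum(spec(t) for t in range(200, 1001, 100))) / 10.0
    return s
```

With this weighting the youngest templates, which dominate the FUV/NUV, contribute progressively more per Myr than the older ones, mimicking the roughly logarithmic {\sc pegase.2} age steps.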
To create the CSED at any redshift we then sum all previously formed
populations, aged appropriately, drawn from the appropriate
metallicity class, and scaled by the required star-formation rate. The
modelling approach we adopt is therefore relatively simplistic and
effectively assumes all values (star-formation rate, metallicity, etc.)
are held constant over a 1\,Gyr time period. At this stage we feel
this is sufficient time resolution given the inherent uncertainties in
the initial assumptions (i.e., the input CSFH and AGN activity data).
CSEDs were then derived at all 13 time steps and combined to produce
simple evolution movies available from:
~
\noindent
http://star-www.st-and.ac.uk/$\sim$spd3/model1.gif --- evolving metallicity
~
\noindent
http://star-www.st-and.ac.uk/$\sim$spd3/model2.gif --- constant high metallicity
~
\noindent
http://star-www.st-and.ac.uk/$\sim$spd3/model3.gif --- constant zero metallicity
~
\subsection{Normalisation of the CSEDs}
In order to determine the correct normalisation we need to multiply
the output {\sc pegase.2} SSP spectra, which are in units of
erg\,s$^{-1}$\,\AA$^{-1}$, by $\frac{10^9\,\lambda}{10^7 \times 0.002}$. Here
the factor $10^9$ scales to 1\,Gyr bins, the factor $10^7$ converts
erg\,s$^{-1}$ to W, the wavelength $\lambda$ is in Angstroms, and the factor
0.002 scales the spectra to 1 solar mass. In applying Eqns.~1-3 we set
$\xi$ to 0.7 to correct the CSFHs to the BG03 IMF.
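As a sketch of this conversion (the function name and call signature are our own, not part of {\sc pegase.2}):

```python
# Sketch of the normalisation above: convert a PEGASE.2 SSP spectrum
# from erg/s/Angstrom (in PEGASE-normalised units, i.e. 2e-3 Msun of
# stars formed) into W per 1 Gyr bin per solar mass formed.
def normalise_ssp(f_lambda, wavelength_A):
    # 1e9  : scale to 1 Gyr bins
    # 1e7  : convert erg/s to W
    # 0.002: rescale the PEGASE-normalised mass to 1 solar mass
    return f_lambda * 1e9 * wavelength_A / (1e7 * 0.002)
```

For example, a template value of 1\,erg\,s$^{-1}$\,\AA$^{-1}$ at 5000\,\AA\ maps to $2.5\times 10^{8}$ in these units; the IMF correction $\xi=0.7$ of Eqns.~1-3 is applied separately to the star-formation histories.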
Finally we allow the normalisations to float by $\pm 25$\% to account
for the uncertainty highlighted in Fig.~\ref{fig:csfh} by the grey
shading, along with uncertainties in the multiplication factors in
Table~\ref{tab:mult}, the cosmic variance in the GAMA CSED data, and
the impact of metallicity on star-formation rates. These $1\sigma$
values are shown in brackets in Figs.~\ref{fig:model1} \&
\ref{fig:model2}.
\begin{figure}
\centerline{\psfig{file=figs/set2.ps,width=\columnwidth}}
\caption{\label{fig:model2} The zero-parameter output for alternative
IMFs using the evolving metallicity shown in Fig.~\ref{fig:metal} and
adopting the star-formation histories given by Eqns.~1 \& 2. Note
that in generating the models we modify the input star-formation by
factors of $\times 1.0$, $\times 1.3$ and $\times 0.7$
respectively.}
\end{figure}
\begin{figure}
\centerline{\psfig{file=figs/smass.ps,width=\columnwidth}}
\caption{\label{fig:mass} The implied build-up of stellar mass in
spheroids and discs versus recent measurements from the Millennium
Galaxy Catalogue. Shown in grey are the compendium of data from
Wilkins, Trentham \& Hopkins~(2008a).}
\end{figure}
\section{Models v data}
\subsection{The CSED and adopted metallicity}
Fig.~\ref{fig:model1} shows the direct comparison of our $z=0$ CSED
models against the recent GAMA data (Driver et al.~2012), for the
three assumed metallicity histories (as indicated). The top panel,
which adopts the evolving metallicity, shows a remarkable agreement
across the full wavelength range and for both the spheroid and disc
systems. Note that in achieving these fits the spheroid data have been
renormalised downwards by 25\% which is within the specified range of
uncertainty. The central and lower panels of Fig.~\ref{fig:model1}
show the same models except for a constant high or low metallicity.
{{\bf This has a negligible impact on the disc-CSED, suggesting very
little dependency on the assumed metallicity evolution for discs
(perhaps in part due to the low value adopted)}}. Conversely the
impact on the spheroid CSED is quite marked with the CSED tilting
either redward or blueward for constant high or constant low
metallicity respectively. This perhaps lends some argument against any
very strong pre-enrichment phase as intermediate and low metallicity
stars in spheroids are required to produce a plausible CSED. The
obvious caveat is whether the shape of the currently adopted spheroid
CSED is significantly modified/contaminated by young disc light.
\subsection{Dependency on assumed IMF}
We now briefly explore the impact of the adopted
IMF. Fig.~\ref{fig:model2} shows the CSED predictions for the evolving
metallicity scenario using either a (top) Salpeter~(1955), (centre)
Kroupa et al.~(1993), or (bottom) Kroupa~(2001) IMF. Note that the
Kroupa~(2001) IMF is extremely close in form to the Chabrier~(2003)
IMF and therefore the Kroupa (2001) prediction can be taken as
representative of both.
Essentially all IMFs provide an equally good fit to the CSED except in
the UV regime. At wavelengths longer than $u$-band ($>0.4\mu$m) the
resulting shape is not particularly sensitive to the detailed shape of
the IMF. This is mainly because at $z=0$ stars close to solar
luminosity, where the IMF is least contentious, dominate most of
the CSED. Systems at the very high-mass end which formed at high
redshift will no longer be contributing to the CSED whereas very
low-mass stars are yet to dominate the near-IR flux. The CSED is
therefore unable to constrain the IMF, other than to note that the
normalisations required for these IMFs are generally higher than for
the models based on BG03.
\subsection{Stellar mass history}
Fig.~\ref{fig:mass} shows the implied build-up of stellar mass in
spheroids and discs (and combined), as indicated. Note that we show
both the total cumulative stellar mass formed (dashed lines), along
with that remaining based on default {\sc pegase.2} assumptions as to
mass-loss (solid lines). Also shown are the direct empirical stellar
mass measurements from the Millennium Galaxy Catalogue (Liske et
al.~2003; Driver et al.~2005), which includes corrections for dust
attenuation (Driver et al.~2008). The agreement is reasonable with the
discs agreeing with the MGC data to within the error and the spheroid
mass over-predicting the MGC value by a modest amount. It is worth
noting from Fig.~\ref{fig:mass} that the stellar mass of spheroids is
actually declining, with mass-loss having exceeded mass-gain for the past 9
billion years. For the discs, the two almost exactly balance, such
that the overall stellar mass density appears to asymptote to a
constant value around the present day.
Also shown in Fig.~\ref{fig:mass} as grey shaded data are the
compendium of total stellar mass estimates given in Wilkins, Trentham
\& Hopkins (2008a). These data clearly fall significantly below the
black shaded line, highlighting a significant discrepancy between the
total stellar mass inferred from the cosmic star-formation history and
that derived from direct empirical constraints. This offset is well
known and discussed in detail in Wilkins, Trentham \& Hopkins
(2008a); here we make two additional comments: (1) the shape of the
data and the black curve do broadly agree with a $\times 2$ offset at
almost all ages, (2) the $z=0$ data from the MGC includes detailed
dust corrections for both the optically thin {\it and} optically thick
regions and is typically a factor of $\times 2$ higher than most local
measurements. It is possible then that the values from the literature
are missing mass embedded in optically thick regions. Perhaps a more
likely explanation, also put forward by Wilkins et al.~(2008), is that
the IMF was simply lighter at earlier times. This would reconcile
quite nicely, as the low-z CSED is fairly impervious to the very low
mass-end of the IMF.
\subsection{Discussion}
At this point we have a simple heuristic model which adopts two simple
axioms motivated by the physically distinct appearance of spheroids
and discs in the nearby Universe. These axioms combined with the
empirical compendium of AGN and cosmic star-formation activity/history
are able to reproduce the $z=0$ CSEDs of spheroids and predict the
mean mass and metallicity evolution of present day discs and
spheroids. The model also provides a complete description of the
energy output from stars within those systems which will eventually
make up the local spheroid population (progenitors) as a function of
redshift, the metallicity build-up, and suggests a key cross-over epoch
at $z \sim 1.6$ between the hot and cold mode evolution. This latter
transition redshift is consistent with the obvious change in
morphologies seen in HST images at this redshift (e.g., Driver et al.,
1998, figure 3).
However, an obvious weakness is that the model provides no clear
prediction of the morphological, size-, and shape-evolution, merger
rates, or the clustering of the galaxy population. Furthermore, the
model does not stipulate the actual mechanism by which star-formation
is occurring and for the hot mode could be some combination of
monolithic collapse, major merging, and/or clump migration. Within the
recent literature the exact status of the $z \sim 2$ population is
also unclear. Deep IFU studies (e.g., Forster Schreiber et
al.~2009, 2011) find that the majority of star-formation at $z \sim 2$
appears to be taking place in rotating clumpy disc structures with no
obvious central bulge component. Similarly Chevance et al.~(2012; see
also Weinzirl et al.~2011), from a study of 31 high-z galaxies, find
that the S\'ersic indices are significantly flatter than one would
expect for low-z spheroids and obvious disc structures are present in
many cases. From our Fig.~1 we can see that at $z\sim 2$ we are still
within the epoch where spheroid formation should be
dominating. However spheroid formation does not necessarily imply
spheroid morphologies until after some unspecified time-lag in which
the system settles. In fact violently star-forming systems will
inevitably appear blue, asymmetrical, gas-rich, and dusty, i.e., quite
disc-like in many aspects. Other studies, e.g., van Dokkum (2008),
find that 45\% of massive galaxies at $z\sim 2.3$ do indeed have
evolved stellar populations, little or no ongoing star-formation, and
compact early-type morphologies. Hence a picture of a spheroid
population emerging from a highly turbulent progenitor phase around $z
\sim 2$ appears to be qualitatively consistent with our
model. Alternatively our model may need to be adjusted to allow for
gas infall and disc-formation from the outset with some fraction of
the disc-formed stars merging into bulges, i.e., a relaxation of the
maximal spheroid formation axiom. This would have the net effect of
also increasing the cross-over redshift to $>1.6$.
A further intriguing observation is that high-z spheroids are
significantly more compact than nearby ellipticals by factors of
$\times 3-4$ at fixed stellar mass (e.g., Daddi et al.~2005; van
Dokkum et al.~2008). Within our scenario this could be consistent
with the high-z sample being ``naked''-bulges yet to grow discs or yet
to be ``puffed-up'' through successive minor merger interactions or
adiabatic expansion. These two pathways, disc-growth versus
``puffing'', are likely to be strongly environmentally dependent, with
minor mergers more frequent in high density environments, and gas
infall more prevalent in low-density environments. A particularly
interesting comparison might therefore be the mass-size relation of
high-z spheroids to low-z bulges.
With the caveat that the morphology and size evolution within our
model is unspecified we nevertheless appear to have a prediction of
the energy output of spheroid and disc progenitors over all epochs
(Fig.~\ref{fig:model1} top panel), the mean {{\bf gas-phase}}
metallicity history for each population (Fig.~\ref{fig:metal}),
the build-up of stellar mass (Fig.~\ref{fig:mass}). Whether one can
readily distinguish these populations observationally, however, is an
open question.
Finally it is worthwhile reiterating that this model contains {\it no
tunable parameters nor any dependency on initial conditions beyond
the underlying cosmology}. The model is built entirely from
empirical data and provides a fully consistent empirical scaffolding
upon which more physically motivated models can be built. Our
conclusion is that the initial axioms on which the model is based are
viable and the star-formation histories defined are tenable.
Further studies of the variation of the $z=0$ CSED and its dependency
on environment should enable an investigation into dependencies on
clustering, and to assess whether star-formation proceeds more rapidly
or whether it is merely the relative mix of spheroid versus disc formation
which is changing. Similarly, using observations at intermediate
redshift, it should be possible to compare data from high-z studies to
the predictions of our two-phase model. Both of these avenues will be
explored in future papers.
\begin{figure}
\centerline{\psfig{file=figs/csfhadj.ps,width=\columnwidth}}
\caption{As for Fig.~\ref{fig:csfh} (lower) except with the spheroid
star-formation history down-weighted by 25\% by incorporating the
CSED constraints from Fig.~\ref{fig:model1}
(upper). \label{fig:csfhadj}}
\end{figure}
\section{Conclusions}
From two very simple axioms: (1) that AGN activity traces spheroid
formation, and (2) that the CSFH is dominated by spheroid formation at
high redshift, we are able to derive simple expressions for the cosmic
star-formation histories of spheroids and discs. Following comparisons
to the $z=0$ CSED for spheroids and discs, we find a modest downward
adjustment of 25\% provides the optimal fit, resulting in our final
recommended star-formation histories of:
\begin{eqnarray}
\dot{\rho}_{S}=\xi\, 0.77 \times 10^{-5}\, h_{0.7}^{3} \left(\frac{21.86}{t_{\rm Gyrs} h_{0.7}}\right)^{8.57}\exp\left(-\frac{21.86}{t_{\rm Gyrs} h_{0.7}}\right) \\
\dot{\rho}_{D}=\xi\, 1.80 \times 10^{-3}\, h_{0.7}^{3}\left(\frac{29.39}{t_{\rm Gyrs} h_{0.7}}\right)^{5.50}\exp\left(-\frac{29.39}{t_{\rm Gyrs} h_{0.7}}\right)
\end{eqnarray}
where $\xi$ is the IMF multiplier as given in Table~\ref{tab:mult}.
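These two expressions can be evaluated directly; the sketch below is ours (the function names and the defaults $\xi=1$, $h_{0.7}=1$ are illustrative choices, with $\xi$ the IMF multiplier of Table~\ref{tab:mult}).

```python
import math

# Sketch evaluating the recommended star-formation histories above:
# t_gyr is cosmic time in Gyr, h is the Hubble parameter in units of
# 0.7 (h_{0.7}), and xi is the IMF multiplier.
def sfr_spheroid(t_gyr, xi=1.0, h=1.0):
    x = 21.86 / (t_gyr * h)
    return xi * 0.77e-5 * h**3 * x**8.57 * math.exp(-x)

def sfr_disc(t_gyr, xi=1.0, h=1.0):
    x = 29.39 / (t_gyr * h)
    return xi * 1.80e-3 * h**3 * x**5.50 * math.exp(-x)
```

The spheroid history peaks at $t=21.86/8.57\approx 2.6$\,Gyr and the disc history at $t=29.39/5.50\approx 5.3$\,Gyr (for $h_{0.7}=1$), recovering the early dominance of spheroid formation.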
Fig.~\ref{fig:csfhadj} shows these final relations compared to the
compendium of data provided by Hopkins \& Beacom (2006); despite
the renormalisation of the spheroid star-formation history they still
provide a perfectly satisfactory description of the global CSFH.
Adopting a Baldry \& Glazebrook~(2003) IMF and using these
expressions to predict the $z=0$ CSED, we are able to provide a
satisfactory explanation of the observed CSEDs of spheroids and discs
from the FUV to the $K$-band.
The corollary of the simplicity of the two-phase model, however, is
that it lacks any prediction of the clustering signature,
environmental dependencies, or the merger histories, although these can
be built in at a later stage. Perhaps the key gain, in an era of
hidden tunable parameters, is that with the adoption of a universal
IMF and a stellar evolution code there are essentially no free
parameters. Strictly speaking, this is not precisely true, as the
detailed modelling of stellar evolution typically comes with options
and there is arguably a choice of IMFs and also whether it is
Universal or varies over cosmological time (see for example Wilkins et
al.~2008a, 2008b; Gunawardhana et al.~2011). On this last subject of
the IMF it is worth reiterating that longward of $0.4\mu$m the $z=0$
CSED is not sensitive to the high-mass shape of the IMF
(unless taken to the extreme). This is because at almost all wavelengths,
in the declining star-formation era today, the CSED is dominated by either
the tip of the main sequence, which lies just below a solar mass and well
above the mass range of contention (mid-optical to NIR), or the most
recently formed stars (FUV to mid-optical).
As a byproduct, the two-phase model also provides the CSED of
spheroids and discs at every epoch in the Universe, along with the
prediction of a clear-cut transition redshift at around $z \approx
1.7$ where galaxy evolution switches from evolution being dominated by
major mergers to evolution being dominated by cold gas infall. Future
work will include a broader wavelength baseline, bulge-disc
decompositions, inclusion of the AGB energy output, and development of
the model via comparisons to selected external data.
\section*{Acknowledgments}
We thank the referee for insightful comments during the refereeing
process which has helped improve the paper.
\section*{References}
\reference Allen P.D., Driver S.P., Graham A.W., Cameron E., Liske J., de Propris R., 2006, MNRAS, 371, 2
\reference Abadi M.G., Navarro J.F., Steinmetz M., Eke V.R., 2003, ApJ, 591, 499
\reference Agertz O., Romain T., Moore B., 2009, MNRAS, 397, 64
\reference Baldry I.K., Glazebrook K., 2003, ApJ, 593, 258
\reference Baldry I.K., 2004, ApJ, 600, 681
\reference Barnes J.E., Hernquist L., 1996, ARA\&A, 30, 705
\reference Behroozi P.S., Wechsler R.H., Conroy C., 2012, ApJ, submitted (arXiv:1207.6105)
\reference Brown M.J.I., Dey A., Jannuzi B.T., Brand K., Benson A.J.,
Brodwin M., Croton D., Eisenhardt P.R., 2007, ApJ, 654, 858
\reference Brown M.J.I., et al., 2008, ApJ, 682, 937
\reference Bundy K., Fukugita M., Ellis R.S., Kodama T., Conselice C., 2004, ApJ, 601, 123
\reference Chabrier G., 2003, PASP, 115, 763
\reference Chevance M., et al.,~2012, ApJ, 754, 24
\reference Cole S. et al., 2001, MNRAS, 326, 255
\reference Conselice C.J., Blackburne J.A., Papovich C., 2005, ApJ, 620, 564
\reference Cook M., Lapi A., Granato G.L., 2009, MNRAS, 397, 534
\reference Cook M., Evoli C., Barausse E., Granato G.L., Lapi A., 2010a, MNRAS, 402, 941
\reference Cook M., Barausse E., Evoli C., Lapi A., Granato G.L., 2010b, MNRAS, 402, 2113
\reference Daddi E., et al., 2005, ApJ, 626, 680
\reference Dav\'e R., Finlator K., Oppenheimer B.D., 2012, MNRAS, 421, 98
\reference Dekel A., et al., 2009, Nature, 457, 451
\reference De Propris R., Liske J., Driver S.P., Allen P.D., Cross N.J.G., 2005, AJ, 130, 1516
\reference De Propris R., Conselice C., Liske J., Driver S.P., Patton D.R., Graham A.W., Allen P.D., 2007, ApJ, 666, 212
\reference De Propris R., et al., 2010, AJ, 139, 794
\reference Domenech-Moral M., Martinez-Serrano F.J., Dominguez-Tenreiro R., Serna A., 2012, MNRAS, 412, 2510
\reference Driver S.P., Fernandez-Soto A., Couch W.J., Odewahn S.C.,
Windhorst R.A., Phillipps S., Lanzetta K., Yahil A., 1998, ApJ, 496,
93
\reference Driver S.P., 1999, ApJ, 526, 69
\reference Driver S.P., Liske J., Cross N.J.G., De Propris R., Allen P.D., 2005, MNRAS, 360, 81
\reference Driver S.P., et al., 2006, MNRAS, 368, 414
\reference Driver S.P., Popescu C.C., Tuffs R.J., Liske J., Graham A.W., Allen P.D., de Propris R., 2007, MNRAS, 379, 1022
\reference Driver S.P., Popescu C.C., Tuffs R.J., Graham A.W., Liske J., Baldry I., 2008, ApJ, 678, 101
\reference Driver S.P., Robotham A.S.G., 2010, MNRAS, 407, 2131
\reference Driver S.P., et al., 2011, MNRAS, 413, 971
\reference Driver S.P., et al., 2012, MNRAS, in press (arXiv:1209.0259)
\reference Elmegreen B.G., Bournaud F., Elmegreen D.M., 2007, ApJ, 658, 67
\reference Erb, D., Shapley A., Pettini M., Steidel C.C., Reddy N.A., Adelberger K.L., 2006, ApJ, 644, 813
\reference Ferrarese L., Ford H., 2005, SSRv, 116, 523
\reference Fioc M., Rocca-Volmerange B., 1997, A\&A, 326, 950
\reference Fioc M., Rocca-Volmerange B., 1999, preprint (arXiv:astro-ph/9912179)
\reference Forster Schreiber N.M., et al.~2009, ApJ, 706, 1364
\reference Forster Schreiber N.M., et al.~2011, ApJ, 739, 45
\reference Gadotti D., 2009, MNRAS, 393, 1531
\reference Geller M.J., Diaferio A., Kurtz M.J., Dell'Antonio I.P., Fabricant D.G., 2012, AJ, 143, 102
\reference Graham A.W., 2011, in Planets, Stars and Stellar Systems, (Publ: Springer)
\reference Governato F., et al., 2010, Nature, 463, 203
\reference Governato F., et al., 2012, MNRAS, 422, 1231
\reference Hill, D., et al., 2011, MNRAS, 412, 765
\reference House E.L., et al., 2011, MNRAS, 415, 2652
\reference Hopkins A.M., Beacom J.F., 2006, ApJ, 651, 142
\reference Hopkins P.F., Hernquist L., Cox T.J., Di Matteo T., Robertson B., Springel V., 2006, ApJS, 163, 1
\reference Hopkins P.F., Hernquist L., Cox T.J., Keres D., 2008a, ApJS, 175, 356
\reference Hopkins A.M., McClure-Griffiths N.M., Gaensler B.M., 2008b, ApJ, 682, L13
\reference Hopkins P.F., Cox T.J., Younger J.D., Hernquist L., 2009, ApJ, 691, 1168
\reference Hubble E., 1926, ApJ, 64, 321
\reference Hubble E., 1936 in Realm of the Nebulae (Pub: Yale Uni. Press)
\reference Keres D., Katz N., Weinberg D.H., Dave R., 2005, MNRAS, 363, 2
\reference Kistler M.D., Yuksel H., Beacom J.F., Hopkins A.M., Wyithe J.S.B., 2009, ApJ, 705, 104
\reference Koda J., Milosavljevic M., Shapiro P.R., 2009, ApJ, 696, 254
\reference Komatsu E., et al., 2011, ApJS, 192, 18
\reference Kroupa P., Tout C.A., Gilmore G., 1993, MNRAS, 262, 545
\reference Kroupa P., 2001, MNRAS, 322, 231
\reference Larson R.B., 1976, MNRAS, 176, 31
\reference L'Huillier B., Combes F., Semelin B., 2012, A\&A, 544, 68
\reference Liske J., Lemon D.J., Driver S.P., Cross N.J.G., Couch W.J., 2003, MNRAS, 344, 307
\reference Lackner C.N., Gunn J.E., 2012, MNRAS, 421, 2277
\reference Martig M., Bournaud F., 2010, ApJ, 714, 275
\reference Navarro J.F., Steinmetz M., 2000, ApJ, 538, 477
\reference Patton D.N., et al., 2002, ApJ, 565, 208
\reference Pereira, E.S., Miranda O.D., 2011, MNRAS, 418, 30
\reference Polletta M., et al., 2006, ApJ, 642, 673
\reference Popescu C.C., Tuffs R.J., Dopita M.A., Fischera J., Kylafis N.D., Madore B.F., 2011, A\&A, 527, 109
\reference Rafferty D.A., Brandt W.N., Alexander D.M., Xue Y.Q., Bauer F.E., Lehmer B.D., Luo B., Papovich C., 2011, ApJ, 742, 3
\reference Ravindranath S., et al., 2006, ApJ, 652, 963
\reference Richards G., et al., 2006, AJ, 131, 2766
\reference Robotham A.S.G., Driver S.P., 2011, MNRAS, 413, 2570
\reference Salpeter E.E., 1955, ApJ, 121, 161
\reference Sancisi R., Fraternali F., Oosterloo T., van der Hulst J.M., 2008, A\&A Rv, 15, 189
\reference Scannapieco C., White S.D.M., Springel V., Tissera P.B., 2011, MNRAS, 417, 154
\reference Simard L., Mendel J.T., Patton D.R., Ellison S.L., McConnachie A.W., 2011, ApJS, 196, 11
\reference Strateva I., et al., 2001, AJ, 122, 1861
\reference Tasca L.A.M., White S.D., 2011, A\&A, 530, 106
\reference Tinsley B.M., Larson R.B., 1978, ApJ, 221, 554
\reference Tremonti C.A., et al., 2004, ApJ, 613, 898
\reference Treister E., Urry M.C., Schawinski K., Cardamone C.N., Sanders D.B., 2010, ApJ, 722, 238
\reference Tuffs R., Popescu C.C., Volk H.J., Kylafis N.D., Dopita M.A., 2004, A\&A, 419, 821
\reference van den Bergh S., 2002, PASP, 114, 797
\reference van Dokkum P., et al.,~2008, ApJ, 677, 5
\reference Vika, M., Driver S.P., Cameron E., Kelvin L., Robotham A.S.G., 2012, MNRAS, 419, 2264
\reference Weinzirl T., et al., 2011, ApJ, 743, 87
\reference White S.D.M., Navarro J.F., 1993, Nature, 366, 429
\reference Wilkins S., Trentham N., Hopkins A.M., 2008a, MNRAS, 385, 687
\reference Wilkins S., Hopkins A.M., Trentham N., Tojeiro R., 2008b, MNRAS, 391, 363
\reference Yuksel H., Kistler M.D., Beacom J.F., Hopkins A.M., 2008, ApJ, 683, 5
\reference Zahid H.J., Kewley L.J., Bresolin F., 2011, ApJ, 730, 137
\reference Zolotov A., et al., 2012, ApJ, submitted (arXiv:1207.0007)
\reference Zwicky F., 1957, in Morphological Astronomy (Pub: Springer)
\end{document}
\section{Introduction}
\label{sec:intro}
Radio relics are steep-spectrum radio sources that are usually detected in the
outer parts of galaxy clusters, at distances of $\sim 0.5-3$ Mpc from their centres. They are very often found in clusters with a perturbed dynamical
state. There is good evidence for their association with powerful merger shocks, as suggested early on by \citet[][]{1998A&A...332..395E}.
Among them are {\it giant double-relics} that show two large sources on opposite sides of the host cluster's centre \citep[e.g., ][]{fe12,2012MNRAS.426...40B,fdg14}. These relics are associated with shocks in the intracluster medium that occur in the course of cluster mergers. Only a tiny fraction of the kinetic power dissipated by typical cluster merger shocks ($\ll 10^{-3}$) is necessary to power the relics, and diffusive shock acceleration (DSA, \citealt[e.g.][for modern reviews]{2012JCAP...07..038C,kr13}) has so far been singled out as the most likely mechanism to produce the relativistic electrons and to produce the observed power-law radio spectra \citep[e.g.][]{hb07}.
However, if the standard DSA model is correct, the same process should also lead to the acceleration of cosmic-ray (CR) protons. Indeed, the process should be much more efficient for protons, owing to their larger Larmor radius{\footnote{Instead, since the Larmor radius of thermal electrons is much smaller than the typical shock thickness, thermal electrons {\it cannot} be easily accelerated to relativistic energies by DSA. This so-called injection problem for electrons is still largely unresolved \citep[e.g.][]{2014IJMPD..2330007B}.}}. \\
To date, high-energy observations of nearby galaxy clusters have not revealed any diffuse $\gamma$-ray emission resulting from the interaction between relativistic protons and thermal particles of the intracluster medium (ICM) \citep[][]{re03,aha09,al10,alek12,arl12,2014MNRAS.440..663Z}. Recently, the non-detection of diffuse $\gamma$-ray emission from clusters by \emph{Fermi} has put the lowest upper limits on the density of CRs in the ICM, $\leq$ a few percent of the thermal gas energy within the cluster virial radius \citep{ack10,fermi14}.
Moreover, the stacking of subsets of cluster observations leads to even lower upper limits \citep[][]{2013A&A...560A..64H,2014ApJ...795L..21G}.
These low limits on the energy content of CRs can be used to constrain shock-acceleration models.
Recently, \citet{va14relics} suggested that the present statistics of radio observations, combined with the available upper limits from \emph{Fermi}, place constraints on DSA as the source of giant radio relics. In \citet{va14relics}, we assumed that the population of clusters with radio relics was similar to the population of (non-cool-core) clusters for which the stacking of \emph{Fermi} clusters was available. In the present paper, we repeat a similar analysis by comparing to a more realistic stacking of the
\emph{Fermi} data. Our method is outlined in Sec.~\ref{subsec:algo}, while our results are given in Sec.~\ref{sec:results}. In the latter Section, we also discuss the role
played by the several open parameters in our modeling. We find (Sec.~\ref{sec:results}) that the present upper limits from \emph{Fermi} imply energy densities of CR-protons that are too low to be explained by standard DSA: if
DSA produces the electrons in relics, then we should have already detected hadronic $\gamma$-ray emission in some clusters, or in stacked samples. In our conclusions (Sec.~\ref{sec:conclusions}), we discuss possible solutions to this problem, as suggested by recent hybrid and particle-in-cell simulations of weak, collisionless shocks.
\section{Methods}
\label{sec:methods}
\subsection{Semi-analytical cluster mergers}
\label{subsec:algo}
Our aim is to test DSA by making quantitative predictions of the hadronic $\gamma$-ray flux assuming that the CR-protons come from the same shocks that produce radio relics (Sec.~\ref{subsubsec:radio}). Semi-analytical methods with initial conditions tuned to match observable parameters of radio halos have been widely used in the literature \citep[e.g.][]{2003ApJ...583..695G,2005MNRAS.357.1313C,hb07}. They have obvious limitations owing to the lack
of 3-dimensional detail and the crude geometrical assumption on the merger scenario (i.e. spherical symmetry and simple radial distribution for the gas). Still, for this specific problem they are helpful, as the energy density of CR-protons should be a simple function of the shock parameters and of the volume crossed by each shock wave.\\
Our approach is similar to the method used in \citet[][]{va14relics}: for each cluster in our sample, we model the shock trajectories and the associated CR-acceleration. We use simple 1D models of shock propagation in stratified atmospheres and use observables from radio and X-ray data, such as the spectral index, the distance from the centre, the radio power, and the largest linear scale of each relic \citep[][see also \citealt{fdg14}]{2012MNRAS.426...40B}.
Below we summarise the most important steps from \citet[][]{va14relics} (see also Fig.~\ref{fig:sketch}), while in Sec.~\ref{subsec:uncert} we discuss the most relevant uncertainties in our model and assess their impact on our
results:
\begin{enumerate}
\item We infer the Mach number, $M$, from the spectral index of the radio spectrum, $s$, at the injection region via $M=\sqrt{\frac{\delta+2}{\delta-2}}$ ($\delta$ is the slope of the particle energy spectra and $\delta=2 s$). This assumes that the radio spectrum is dominated by the freshly injected CR-electrons at the shock, an assumption that might be poor for spectra integrated over large downstream volumes (in which case the radio spectrum is steeper by $0.5$). Bootstrapping with random deviates from the Mach number thus determined can quantify the errors (see Sec.~\ref{sec:results}).
\item The upstream (i.e. pre-shock) gas density, $n_{\rm u}$, at the relic is computed using a $\beta$-model profile for each host cluster, with $\beta=0.75$ and the core radii scaling as $r_{\rm c}=r_{\rm c, Coma}(T_{\rm Coma}/T)^{1/2}$ (which follows from the self-similar scaling), where $r_{\rm c, Coma}=290$ kpc. This is the only way to regularise our dataset, as a more detailed reconstruction of the gas density from the literature is missing for most of these objects. Both are clearly simplifications, but in reality we probably find higher gas densities and temperatures along the merger axis, due to clumping and enhanced gas compression, which will only increase the $\gamma$-ray emission. Moreover, the propagation of each merger shock is determined by the temperature {\it in front of it}, i.e. in the upstream region, which present X-ray observations can hardly constrain for any of these objects. Therefore we always consider an upstream gas temperature, $T_{\rm u}$, based on the $L_{\rm X} - T_{\rm 500}$ relation for each host cluster \citep[][]{2009A&A...498..361P}.
\item We compute the kinetic power for each shock as $\Phi_{\rm kin}=n_{\rm u} v_{\rm s}^3 S/2 $, where $v_{\rm s}=M c_{\rm s}$ ($c_s \propto \sqrt{T_{\rm u}}$).
\item We assume that a fraction of the kinetic power goes into CR-protons, $\Phi_{\rm CRp}=\eta(M)\Phi_{\rm kin}$, where the efficiency, $\eta(M)$, is a non-linear function of the Mach number. It has been derived for several DSA scenarios \citep[e.g.][]{kj07,kr13,2014ApJ...785..133H}. Here we use the $\eta(M)$ given by \citet{kr13}, which was estimated based on simulations of nonlinear DSA,
considering an upstream $\beta=100$ plasma and including a phenomenological model for the magnetic field amplification and Alfvenic drift
in the shock precursor, due to accelerated CRs. This function predicts an acceleration efficiency of $\approx 1$ percent for $M=3$, steeply rising to $\sim 10$ percent for $M=5$ \citep[see Fig.~4 of ][]{kr13}. The corresponding power into CR-electrons is set by assuming a fixed electron-to-proton ratio, $K_{\rm e/p}=0.01$: $\Phi_{\rm CRe}=K_{\rm e/p}\eta(M)\Phi_{\rm kin}$. This ratio is already conservative, as recent models of particle acceleration in supernovae suggest an even lower value, $K_{\rm e/p} \sim 10^{-3}$ \citep[][]{2014arXiv1412.0672P}, which would result in a $10$ times larger hadronic $\gamma$-ray flux than for our value of $K_{\rm e/p}$.
\item The magnetic field at the relic, $B$, is derived from the radio power via the equations given by \citet{hb07}. In this model
the monochromatic radio power at frequency $\nu$, $P_{\nu}$, depends on the shock surface area, $S$ (which we derive from the projected size of the relic, assuming that the relic has a circular shape), the upstream electron density, $n_{\rm e}$ (computed from $n_{\rm u}$ by assuming a mean molecular mass of $0.6$), the upstream electron temperature, $T_{\rm e} \approx T_{\rm u}$, the spectral index of the radio emission, $s$, and the relic magnetic field, $B_{\rm relic}$, in the following way:
\begin{equation}
\frac{dP}{d\nu}=\frac{6.4 \cdot 10^{34} \rm erg}{\rm s \cdot Hz} \cdot S \cdot n_e \cdot \eta(M) K_{\rm e/p} \frac{T_{\rm e}^{3/2}}{\nu^{s/2}} \cdot \frac {B_{\rm relic}^{1+s/2}}{B_{\rm CMB}^2+B_{\rm relic}^2} ,
\label{eq:hb}
\end{equation}
where $B_{\rm CMB}$ is the equivalent field of the cosmic microwave background.{\footnote {We notice that, compared to the original equation by \citet{hb07}, we use here upstream values for density and temperature, as the function $\eta(M)$ also accounts for the shock compression factors.}} In our simplest model ("basic model") we leave the magnetic field
as a free parameter without upper limits, while in a more realistic scenario ("Bcap" scenario) we impose a maximum magnetic field of $B_{\rm relic,max}=10 ~\mu G$ for all relics (Sec.~\ref{subsubsec:Bfield}).
\item Explaining the observed radio emission from $M \leq 3$ shocks is a problem, as the required electron acceleration efficiency at these weak shocks can become unrealistically large \citep[][]{2011ApJ...728...82M}. It has been suggested that the contribution from shock re-accelerated electrons can alleviate this problem, as it would mimic the effect
of having a higher acceleration efficiency \citep[][]{ka12,pinzke13}. To model re-acceleration, as in \citet[][]{va14relics},
we used an increased CR-proton acceleration efficiency as a function of Mach number, following \citet[][]{ka07} and \citet{kr13} who showed that the net effect of re-accelerated and freshly injected cosmic rays can be modeled by a rescaled efficiency $\eta(M)$. This depends on the
energy ratio between pre-existing cosmic rays ($E_{\rm CR}$) and the thermal gas ($E_{\rm g}$). In the following we parametrize this ratio as $\epsilon=E_{\rm CR}/E_{\rm g}$, explore the case $\epsilon=0$ (single injection, no re-accelerated electrons), and bracket the trend of re-acceleration using $\epsilon=0.01$ and $\epsilon=0.05$. Notice that the latest limits from \emph{Fermi} only allow $\epsilon \leq $ a few percent, for flat radial distributions of CRs \citep[][]{fermi14}.
\item In order to compute the spectrum of electrons in the case of a re-accelerated pool of CRs, we follow \citet[][]{ka12} and
assume that the blend of several populations of pre-existing CR-electrons is characterised by a power law with index $\delta_e = (4\delta+1)/3$, where $\delta$ is the spectral index of the energy spectrum derived at the relic as before. As in \citet[][]{ka12}, we fix the spectrum of re-accelerated CRs to the particle spectrum at the relic in those cases where the latter is flatter than the (steep) spectrum associated with the $M=2$ shock that is supposed to re-accelerate them.
\item Once the shock parameters are fixed, we estimate the energy injected into CR-protons, which are assumed to remain where they were injected. We assume that each shock surface scales with the cluster-centric distance, $r$, as $S(r)=S_0 (r/R_{\rm relic})^2$ (i.e. the lateral extent of the shock surface is set to the largest linear size of the relic, and decreases as $\propto r$ moving inwards). The cumulative energy dissipated into CR-protons is given by
\begin{equation}
E_{\rm CR}= \int_{r_c}^{R_{\rm relic}}{\frac{\Phi_{\rm CRp}(r)}{v_{\rm s}} ~ dr } = \int_{r_c}^{R_{\rm relic}} \frac{\eta(M) n_{\rm u}(r) v_{\rm s}^2 S(r)}{2} ~ dr.
\end{equation}
The shock surface and strength will vary with radius, and so will the CR-proton acceleration efficiency, $\eta(M)$. Our fiducial model assumes that the Mach number of shocks released by the merger is constant across the whole volume of interest, while in
Sec. \ref{subsubsec:Mrad} we also test a scenario in which the Mach number scales with radius. The lower integration limit in the equation for $E_{\rm CR}$ is the core radius, $r_c$, meaning that the shocks are assumed to be launched only outside of the cluster core, which is supported by simulations \citep[e.g.][]{va12relic,sk13}.
\item We compute the hadronic $\gamma$-ray emission, $I_{\rm \gamma}$, following \citet{pe04} and \citet{donn10}, with the only difference that for the hadronic cross-section we use the parametrisation of the proton-proton cross-section given by \citet{2006PhRvD..74c4018K}, as in \citet{2013A&A...560A..64H}.
In detail, we compute for each radius the source function of $\gamma$-rays as:
\begin{eqnarray}
q_{\gamma}(E_{\gamma})&=&{{ 2^{4-\delta_{\gamma}} }\over{
3\delta_{\gamma}}}
{{\sigma_{\mathrm{pp}}(E)\, c\, n_{\mathrm{e}}(r)\, K_p}\over{\delta -1}}
(E_{\mathrm{min}})^{-\delta} \frac{E_{\mathrm{min}}}{\mathrm{GeV}}
\left(\frac{m_{\pi^{0}}c^{2}}{\mathrm{GeV}}\right)^{-\delta_{\gamma}} \nonumber \\
&&\times\left[\left(\frac{2E_{\gamma}}{m_{\pi^{0}}c^{2}}\right)^{\delta_{\gamma}}+\left(\frac{2E_{\gamma}}{m_{\pi^{0}}c^{2}}\right)^{-\delta_{\gamma}}\right]^{-1},
\end{eqnarray}
where $n_{\rm e}$ is the upstream electron density, computed from the gas density by assuming a molecular mean weight $\mu=0.6$. The spectrum of the $\gamma$-ray emission depends on the assumed Mach number across the cluster, and is therefore either a function of $M(r)$ in the single acceleration model, or a constant in the re-acceleration case. Once the spectral index, $\delta$, of the particle spectrum is fixed, the spectrum of the $\gamma$-ray emission is given by $\delta_{\gamma}=4(\delta-1/2)/3$ \citep[][]{pe04}. Here, we consider hadronic emission in the energy range [0.2-300] GeV, which is compared to the stacked emission from \emph{Fermi} described in the following section (Sec.~\ref{subsubsec:gamma_obs}). $m_{\pi}$ and $m_p$ are the masses of the $\pi^0$ and the proton, respectively. The threshold proton energy is taken as $E_{\rm min}=780 ~\rm MeV$ and the maximum as $E_{\rm max}=10^{5}$ MeV (the exact value is irrelevant, given the steep spectra of our objects).
The effective cross-section we used is
\begin{equation}\label{eq:sigma}
\sigma_{pp}(E)=(34.3+1.88L+0.25L^2)\left[1-\left(\frac{E_{th}}{E}\right)^4\right]^2 \mbox{mb},
\end{equation}
with $L=\ln(E/\mbox{1 TeV})$ and $E_{th}=m_p+2m_{\pi}+m_{\pi}^2/(2m_p)\approx 1.22$ GeV \citep[Eq.~79 of][]{2006PhRvD..74c4018K}. The normalisation factor $K_p$ is:
\begin{equation}
K_p=\frac{(2-\delta)E_{\rm CR}(r)}{(E_{\rm max}^{-\delta+2}-E_{\rm min}^{-\delta+2})}.
\end{equation}
The emission per unit of volume, $\lambda_{\gamma}$, is
obtained by integrating the source function over the energy range:
\begin{eqnarray}
\label{eq:gamma}
\lambda_{\gamma}(r) &=& \int\limits_{E_{1}}^{E_{2}}
\mathrm{d}E_{\gamma}\, q_{\gamma}(E_{\gamma}) \nonumber \\ &=&
\frac{\sigma_{\mathrm{pp}} m_{\pi}c^{3}}{3
\delta\delta_{\gamma}}
\frac{n_{\mathrm{u}}(r) K_p}{\delta -1}
\frac{ (E_{\mathrm{min}})^{-\delta}}{2^{\delta_{\gamma}-1}}
\frac{E_{\mathrm{min}}}{\mathrm{GeV}} \left(
\frac{m_{\pi_{0}}c^{2}}{\mathrm{GeV}}
\right)^{-\delta_{\gamma}} \nonumber \\ &&\times
\left[\mathcal{B}_{\mathrm{x}}\left(
\frac{\delta_{\gamma}+1}{2\delta_{\gamma}},
\frac{\delta_{\gamma}-1}{2 \delta_{\gamma}} \right)
\right]^{x_{1}}_{x_{2}}
\end{eqnarray}
where $\mathcal{B}_{\mathrm{x}}(a,b)$ denotes the incomplete
$\beta$-function and $[f(x)]^{a}_{b} = f(a) - f(b)$. Integrations over radii are performed by summing over radial shells of thickness 10 kpc.
Finally, the hadronic $\gamma$-ray emission for each radial shell along the trajectory of the shock is given by $\lambda_{\gamma}(r) S(r) ~dr$ and the total hadronic emission in the downstream region of each relic is given by the integral, $I_{\rm \gamma}= \int_{r_c}^{R_{\rm relic}} \lambda_{\gamma}(r) S(r) ~dr$.
\end{enumerate}
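As an illustration, the first steps of the procedure above (upstream density, shock speed, efficiency, cumulative CR-proton energy) can be sketched numerically. The snippet below is a minimal sketch with assumed round-number parameters; in particular, the function eta_dsa is a crude toy interpolation of the Kang \& Ryu (2013) efficiencies, not the actual tabulated values.

```python
import numpy as np

# Minimal numerical sketch of the CR-energy budget described above.
# All parameter values are assumed round numbers for illustration.
kpc = 3.086e21             # cm
mu, m_p = 0.6, 1.6726e-24  # mean molecular weight, proton mass [g]
k_B = 1.3807e-16           # Boltzmann constant [erg/K]
gamma_ad = 5.0 / 3.0       # adiabatic index

def n_upstream(r, n0=3e-3, r_c=290.0 * kpc, beta=0.75):
    """Beta-model gas number density profile [cm^-3]."""
    return n0 * (1.0 + (r / r_c) ** 2) ** (-1.5 * beta)

def eta_dsa(M):
    """Toy stand-in for the DSA efficiency: ~1 per cent at M=3,
    rising steeply to ~10 per cent at M=5 (then held constant)."""
    return min(0.1, 0.01 * 10.0 ** ((M - 3.0) / 2.0))

def cr_proton_energy(M, T_u, R_relic, S0, r_c=290.0 * kpc, dr=10.0 * kpc):
    """Cumulative CR-proton energy: the kinetic power dissipated into CRs,
    eta(M) * rho_u * v_s^3 * S(r) / 2, accumulated over the crossing time
    dr / v_s of each 10 kpc radial shell, with S(r) = S0 * (r / R_relic)^2."""
    c_s = np.sqrt(gamma_ad * k_B * T_u / (mu * m_p))  # upstream sound speed
    v_s = M * c_s                                     # shock speed
    r = np.arange(r_c, R_relic, dr)
    rho_u = mu * m_p * n_upstream(r)                  # mass density [g/cm^3]
    S = S0 * (r / R_relic) ** 2
    return np.sum(0.5 * eta_dsa(M) * rho_u * v_s ** 2 * S * dr)

# e.g. an M=3.8 shock in a T_u ~ 5e7 K cluster, reaching R_relic = 1.4 Mpc
# with S0 = pi * (700 kpc)^2, dissipates of order 1e61 erg into CR-protons.
E_CR = cr_proton_energy(M=3.8, T_u=5e7, R_relic=1400.0 * kpc,
                        S0=np.pi * (700.0 * kpc) ** 2)
```

The numbers returned by such a sketch only indicate orders of magnitude; the actual pipeline uses the tabulated efficiencies and the radio-calibrated shock parameters of each relic.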
The above set of approximations minimises the hadronic $\gamma$-ray emission from clusters: the presence of gas clumping and substructure is neglected, the injection of CRs is limited to regions outside of the dense
cluster cores, and additional acceleration of CRs by earlier shocks, turbulence, supernovae and AGN is not taken into account. A number of assumptions have to be made in the previous steps. We discuss the most important ones in Sec. \ref{subsec:uncert}, where we explore the effects of uncertainties in the model parameters.
\begin{figure}
\includegraphics[width=0.49\textwidth]{images/sketch.ps}
\caption{Schematic view of our method for computing $\gamma$-ray emission from accelerated CR-protons downstream of the double relics.}
\label{fig:sketch}
\end{figure}
\subsection{Observations}
\label{subsec:obs}
\subsubsection{Radio data}
\label{subsubsec:radio}
We restrict our analysis to double radio relics, as these systems are caused most clearly by major merger events \citep[e.g.][]{1999ApJ...518..594R,vw12sim,sk13}, and they should be less affected by projection effects because of their large cluster-centric distances \citep[][]{va12relic}. We select double-relic sources from the collection of \citet{2012MNRAS.426...40B} and further restrict the analysis to sources at high Galactic latitude ($|b|>15^{\circ}$), to avoid strong contamination by the bright diffuse $\gamma$-ray emission from the Galactic plane (see below). The final sample is made of 20 relics from 10 clusters: MACSJ1752, A3667, A3376, A1240, A2345, A3365, MACSJ1149, PLCKG287, ZwClJ2341 and RXCJ1314. The values of the radio parameters for these objects (e.g. total power, radio spectral slope, $s$, largest linear scale
of each object and distance from the centre of the host cluster) are given in Table 1.
\subsubsection{$\gamma$-ray data}
\label{subsubsec:gamma_obs}
We analysed $\gamma$-ray data collected by the Large Area Telescope (LAT) on board \emph{Fermi} (hereafter \emph{Fermi}-LAT). \emph{Fermi}-LAT \citep{atwood09} is a pair-conversion $\gamma$-ray telescope operating in the $20$ MeV - $300$ GeV band. We collected all the data obtained during the period 2008-08-04 to 2013-11-01 and analysed them using the \emph{Fermi} Science Tools software package \texttt{v9r32p5} and the \texttt{P7SOURCE\_V6} instrument response files. For each source listed in Table 1, we extracted a rectangular region of interest (ROI) of $\sim 20^{\circ}\times 20^{\circ}$ centred on the source. We then constructed for each source a model including the Galactic emission \footnote{http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/aux/gal\_2yearp7v6\_v0.fits}, the isotropic diffuse background \footnote{http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/aux/iso\_p7v6source.txt}
and all sources listed in the 2FGL catalog in a radius of 20$^\circ$ around our source, with spectral models and parameters fixed to the values given in 2FGL. We then performed a binned likelihood analysis \citep{mattox96} on each individual ROI to determine the parameters of the $\gamma$-ray emission model. The normalization of each component (diffuse background components and point sources) was left free to vary during the fitting procedure. \\
As an alternative hypothesis, we then added to the model a pointlike test source fixed to the cluster position. The test source had a spectrum computed from the simulated CR proton population, accelerated at shocks (Sec. \ref{subsec:algo}), which should be injected by $M \sim 3$ shocks in most cases. This spectrum is the best guess from our cosmological numerical simulations (\citealt{scienzo14}) and is close to the "universal CR-spectrum" suggested by \citet[][]{2010MNRAS.409..449P}.
The resulting photon spectrum was discretised and used as a template for the expected emission of the shock-accelerated CR protons. This spectral model closely resembles a power law with a photon index of $\sim2.6$ at energies $>1$ GeV and is significantly flatter at lower energies because of the strong dependence of the $p-p$ interaction cross section near the threshold energy for pion production \citep[see Appendix A.1 of ][]{2013A&A...560A..64H}. We then estimated the test statistic (TS) defined as
\begin{equation}
\mathrm{TS} = -2\,\mathrm{ln}\, \frac{\mathcal{L}^{max}_{0}}{\mathcal{L}^{max}_{1}},
\label{eq:TS}
\end{equation}
where $\mathcal{L}^{max}_{0}$ and $\mathcal{L}^{max}_{1}$ are the maximum likelihood values for the null and alternative hypotheses, respectively. No significant signal ($TS>25$) was found for any of the clusters from Table 1. Therefore, we computed upper limits at the 95\% confidence level (CL) for all sources as in \citet{fermi14}, which we report in Table 1 for two energy ranges ($0.2-300$ and $1-300$ GeV). The choice of a pointlike source at the cluster positions is not critical: the mean spectra assumed for the relics are steep, and most of the photons should come from the low-energy end of the spectrum, where the resolution of \emph{Fermi} is too coarse to distinguish between pointlike and extended sources in our objects ($\sim 4^{\circ}$ at $200$ MeV). \\
In order to improve these limits, we performed a stacking analysis of the sources in our sample, following the method presented in \citet{huber12} and \citet{2013A&A...560A..64H}. The stacking is performed by co-adding the individual ROIs one by one, after having simulated and subtracted the surrounding point sources. The resulting co-added map for our stacked sample is given in Fig.~\ref{fig:stack_fermi}. The co-added data are then fit using the same source+background model as described above. Again, no significant emission is measured within the stacked volume of these clusters ($TS_{\rm stacked}=0.21$), which results in an upper limit of $F_{\rm mean}<1.90\cdot 10^{-10}$ \rm ph/cm$^{2}$/s (95\% CL, $0.2-300$ GeV band) on the average flux per cluster. For more details on the stacking procedure and a thorough validation of the method using simulated data, we refer the reader to \citet{huber12}.
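The co-addition step can be sketched as follows; this is a schematic of the procedure, not the actual \emph{Fermi} Science Tools implementation, and the input maps and point-source models are toy arrays.

```python
import numpy as np

def stack_rois(count_maps, point_source_models):
    """Co-add individual ROI count maps after subtracting the simulated
    contribution of the surrounding point sources from each ROI."""
    stacked = np.zeros_like(count_maps[0], dtype=float)
    for counts, ps_model in zip(count_maps, point_source_models):
        stacked += np.asarray(counts, dtype=float) - ps_model
    return stacked

# Toy example with two 2x2 "count maps" and their point-source models:
maps = [np.array([[3, 1], [0, 2]]), np.array([[2, 2], [1, 1]])]
models = [np.array([[1.0, 0.5], [0.0, 0.5]]),
          np.array([[0.5, 1.0], [0.5, 0.0]])]
co_added = stack_rois(maps, models)  # -> [[3.5, 1.5], [0.5, 2.5]]
```

The co-added map is then fit with the combined source+background model, exactly as for a single ROI.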
\begin{figure}
\includegraphics[width=0.45\textwidth]{images/stackedmap.ps}
\caption{Co-added \emph{Fermi}-LAT count map in the 200 MeV - 300 GeV energy range for all the clusters listed in Table \ref{tab:tab1}. The circles indicate apertures of 2.5$^\circ$ and 5$^\circ$ around the stacked cluster position to highlight the source and background regions, respectively.}
\label{fig:stack_fermi}
\end{figure}
\begin{table*}
\caption{Main observational parameters for the radio relics and clusters considered in this paper: redshift (2nd column), X-ray luminosity (3rd), distance from the cluster centre for each relic in the pair (4th and 5th), largest linear scale (6th-7th), radio power (8th-9th), radio spectral index (10th-11th) of the relics, upper limits on the $\gamma$-ray emission at $0.2-300$ and $1-300$ GeV for each host cluster (12th-13th) and value of the test statistic (Eq. \ref{eq:TS}, 14th). The radio data are taken from \citet{fe12} and \citet{2012MNRAS.426...40B}, while the $\gamma$-ray data have been derived from the \emph{Fermi} catalogue in this work.}
\label{tab:tab1}
\centering \tabcolsep 5pt
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c}
object & z & $L_x$ & $r_1$ & $r_2$ & $R_1$ & $R_2$ & $\log_{\rm 10}(P_{\rm R,1})$ & $\log_{\rm 10}(P_{\rm R,2})$ & $s_1$ & $s_2$ & $\log_{\rm 10}(UL_{\rm 0.2-300})$ & $\log_{\rm 10}(UL_{\rm 1-300})$ & TS \\
& & $10^{44}$ erg/s & Mpc & Mpc & Mpc & Mpc & [erg/s] & [erg/s] & & & $[\rm ph/(s ~cm^2)]$ & $[\rm ph/(s ~cm^2)]$ & \\ \hline
A3376 & 0.047 & 1.04 & 0.80 & 0.95 & 1.43 & 0.52 & 40.026 & 39.936 & 1.20 & 1.20 & -8.65 & -9.71 & 1.36 \\
A3365 & 0.093 & 0.41 & 0.56 & 0.23 & 1.00 & 0.70 & 40.086 & 39.146 & 1.55 & 1.93& -8.61 & -9.66 & 6.17\\
A1240 & 0.159 & 0.48 & 0.64 & 1.25 & 0.70 & 1.10 & 39.748 & 39.991 & 1.20 & 1.30 & -9.21 & -9.89 & 0\\
A2345 & 0.176 & 2.56 & 1.50 & 1.15 & 0.89 & 1.00 & 40.544 & 40.577 & 1.30 & 1.50 & -8.55 & -9.55 & 6.19\\
RXCJ1314 & 0.244 & 5.26 & 0.91 & 0.91 & 0.57 & 0.94 & 40.350 & 40.714 & 1.40 & 1.40 & -9.06 & -9.94 & 0\\
MACSJ1149 & 0.540 & 6.75 & 0.82 & 0.76 & 1.39 & 1.14 & 40.879 & 41.100 & 1.20 & 1.42 & -9.53 & -10.11 & 0\\
MACSJ1752 & 0.366 & 3.95 & 1.34 & 0.86 & 1.13 & 0.91 & 41.651 & 41.292 & 1.21 & 1.13 & -8.92 & -9.98 & 0.04\\
A3667 & 0.046 & 1.02 & 1.42 & 1.83 & 1.30 & 1.30 & 40.044 & 39.945 & 1.39 & 1.31 & -8.59 & -9.64 & 3.16\\
ZwCLJ2341 & 0.270 & 2.00 & 0.25 & 1.20 & 1.18 & 0.76 & 40.726 & 40.377 & 1.90 & 1.22 & -9.41 & -10.35 & 0\\
PLCKG287 & 0.390 & 8.29 & 1.93 & 1.62 & 1.58 & 3.00 & 41.401 & 41.322 & 1.26 & 1.54 & -8.81 & -9.68 & 0.49\\
\end{tabular}
\end{table*}
\begin{figure}
\includegraphics[width=0.45\textwidth]{images/1D_plotA3367_prof.eps}
\caption{1D profiles of various quantities along the projected propagation
radius of the two relics in A3667, as assumed or predicted by the basic model (Sec. \ref{subsec:algo}). First panel: gas density profile. Second panel: profile of the cumulative shock energy and CRp-energy dissipated in the downstream. The additional numbers in colours give the magnetic field necessary to reproduce the observed radio power using Eq. \ref{eq:hb}. Third panel: predicted $\gamma$-ray emission. The vertical lines delimit the regions where the shocks have been launched, while the horizontal line gives the single-object limit from \emph{Fermi}.}
\label{fig:example}
\end{figure}
\section{Results}
\label{sec:results}
\subsection{Basic model}
A typical set of predictions from our baseline model is given in Fig.~\ref{fig:example} for the two relics in A3667.
The second panel shows the kinetic energy crossing the shock (upper lines) and the dissipated CR-proton energy (lower lines), both integrated along the shock trajectory. In the single injection model ($\epsilon=0$) the predicted acceleration of CRs is very different for the two relics, and follows from the
different assumed Mach numbers: $M=3.8$ for the northern relic and $M=1.7$ for the southern relic. In the first case, the radio power is matched with the modest field strength of $B_{\rm relic} \approx 0.7 ~\rm \mu G$, while for the weaker southern shock the required magnetic field is $\approx 219 ~\rm \mu G$. Re-accelerated electrons
can explain the observed radio power in both cases using fields in the range $B_{\rm relic} \sim 1-2 ~\rm \mu G$. The $\gamma$-ray emission (lower panel) is the integrated hadronic emission in the downstream. Hence, the last bins on the left and on the right give the total emission from the two downstream regions and should be compared with the \emph{Fermi} limit for A3667. When the contribution from both relics is summed up, the predicted hadronic emission is very close to the {\it single-object} limit from \emph{Fermi} for this cluster. We find similar results for the relics in ZwCLJ2341 (see Tab. 2).
In the next sections we will discuss the predicted magnetic fields and the $\gamma$-ray emission from the full dataset and under different assumptions in our model. \\
The full range of estimates from our fiducial model (Sec. \ref{subsec:algo}) is shown in Figure \ref{fig:first}, where we plot: a) the distribution of predicted $\gamma$-ray emission for the full sample (histograms with different colours for each model), comparing the mean emission of each run (thin lines) with the limits we derive from our stacking of \emph{Fermi} exposures on these objects (Sec. \ref{subsubsec:gamma_obs}); and b) the magnetic field at each relic required by our modelling of the radio power, using Eq.\ref{eq:hb}. The stacking of our own sample gives the most robust test of DSA, since it comes from the same set of objects simulated with our semi-analytical method. In the same figure, we also show for completeness the result of stacking only non-cool-core (NCC) clusters in a larger sample of objects observed by \emph{Fermi} \citep{2013A&A...560A..64H}, which we converted into the $[0.2-300]$ GeV energy range by assuming a $\gamma$-ray spectral index of $-2.6$. This limit is $\sim 5$ times lower than our stacking limit (see also \citealt{2014ApJ...795L..21G} for a slightly lower limit), given the larger sample of objects (32) and the fact that this sample contains more nearby objects than our list of radio relics. Although this limit comes from a different population, comparing it to our simulated one is useful under the hypothesis that the two parent populations of objects are dynamically similar. This is likely, because to the best of our knowledge all objects of our sample are NCC and all show evidence of very perturbed dynamical states in X-ray \citep[e.g.][see also http://www.mergingclustercollaboration.org/merging-clusters.html]{2003MNRAS.339..913E,cav09,2011ApJ...736L...8B,2012MNRAS.426...40B}.
We also show for comparison the upper limits derived on the magnetic field at the location of relics in the Coma cluster \citep[][based on the analysis of Faraday Rotation]{2013MNRAS.433.3208B} and in the cluster CIZA 2242.2+5301 \citep[][based on the analysis of the brightness profile across the relic]{vw10}, as well as the range of values inferred by \citet{2010ApJ...715.1143F} for the relic north of A3667, based on the lack of Inverse Compton emission. \\
In our model, both the $\epsilon=0$ and $\epsilon=5$\% runs predict a very high mean level of hadronic emission, well above the stacking of this dataset in the single injection case, and just below it in the case of $\epsilon=5$\% (but larger than the stacking of the full \emph{Fermi} catalog). In both cases the mean emission
is kept at a high value by $1/5$ of the objects in the sample (the bright A3667 and ZwCLJ2341); however, the bulk of the remaining objects is also characterised by emission of the order of the full stacking of non-cool-core clusters.
On the other hand, the $\epsilon=1$\% model predicts a mean emission below the stacking of this dataset. However, the magnetic fields required in this case are very large: 6 out of 20 relics require $B_{\rm relic} \geq 10 ~\rm \mu G$ at the location where the radio emission is detected, which is at odds with the few available estimates of magnetic fields from observations (gray arrows).
In a general sense, explaining fields larger than a few $\sim \rm \mu G$ at the large radii where these relics are found is difficult based on both observational and theoretical facts (see Sec.~\ref{subsubsec:Bfield} for a detailed discussion).\\
To summarise, none of the scenarios we investigated with our semi-analytical method is able to predict a level of $\gamma$-ray emission compatible with \emph{Fermi} while assuming reasonable magnetic field strengths in all objects. The re-acceleration model assuming $1$ \% of CR energy to be re-accelerated by merger shocks survives the comparison with \emph{Fermi}, but requires very large magnetic fields in $\sim 1/3$ of our objects. The models with single injection or significant re-acceleration instead predict a mean emission for the sample which is in tension with the stacking of \emph{Fermi} observations for this sample, or for the larger sample of non-cool-core clusters, which very likely has the same characteristics as the double-relic sample.
The following sections will show how this problem is worsened as soon as all relevant assumptions made in our modeling are relaxed.
\begin{table*}
\caption{Forecast of hadronic $\gamma$-ray emission from the downstream region of our simulated clusters in the $[0.2-300]$ GeV range. The first three columns of predictions show the results for the three assumed ratios $\epsilon$, without imposing a maximum magnetic field at the relics. The next three columns show our predictions when imposing a $B \leq 10 ~\mu G$ cap on the magnetic field (see Sec. \ref{subsubsec:Bfield} for details). For comparison, we show the upper limits from \emph{Fermi} given in Table 1 for the same energy range. We mark with stars the objects for which the single-object comparison with \emph{Fermi} data is problematic in several models.}
\label{tab:tab_gamma}
\centering \tabcolsep 2pt
\begin{tabular}{c|c|c|c|c|c|c|c}
model & $\epsilon=0$ & $\epsilon=0.01$ & $\epsilon=0.05$& $\epsilon=0, B_{\rm cap}$ & $\epsilon=0.01, B_{\rm cap}$ & $\epsilon=0.05, B_{\rm cap}$ & observed \\
& $\log_{\rm 10}(e_{\gamma})$ & $\log_{\rm 10}(e_{\gamma})$ & $\log_{\rm 10}(e_{\gamma})$ & $\log_{\rm 10}(e_{\gamma})$ & $\log_{\rm 10}(e_{\gamma})$ & $\log_{\rm 10}(e_{\gamma})$ & $\log_{\rm 10}(UL_{\rm Fermi})$\\
object &$[\rm ph/(s ~cm^2)]$ & $[\rm ph/(s ~cm^2)]$ & $[\rm ph/(s ~cm^2)]$ & $[\rm ph/(s ~cm^2)]$ & $[\rm ph/(s ~cm^2)]$ & $[\rm ph/(s ~cm^2)]$ &$[\rm ph/(s ~cm^2)]$ \\ \hline
A3376 & -10.07 & -10.54 & -9.83 & -9.78 & -9.77 & -9.77 & -8.65 \\
A3365 & -12.52 & -12.12 & -11.41 & -11.94 & -12.02 & -11.31 & -8.61 \\
A1240 & -10.67 & -11.60 & -10.90 & -10.84 & -10.83 & -10.84 &-9.21 \\
A2345 & -10.24 & -10.75 & -10.05 & -10.00 & -9.98 & -9.98 & -8.55\\
RXCJ1314 & -10.69 & -10.94 & -10.24 & -10.18 & -10.16 & -10.16 & -9.06 \\
MACSJ1149 & -11.31 & -12.24 & -11.53 & -11.47 & -11.46 & -11.46 & -9.53\\
MACSJ1752 & -10.14 & -10.45 & -10.86 & -10.80 & -10.75 & -10.79 & -8.92 \\
*A3667* & -8.53 & -9.28 & -9.28 & -9.23 & -9.21 & -9.20 & -8.59 \\
*ZwCLJ2341* & -8.80 & -9.81 & -9.11 & -9.05 & -9.04 & -9.04 & -9.41 \\
PLCKG287 & -10.75 & -11.44 & -10.74 & -10.68 & -10.67 & -10.67 & -8.81 \\
\end{tabular}
\end{table*}
\begin{figure*}
\includegraphics[width=0.45\textwidth]{images/sausage_tabulated_models_kr13_MC1_Bfree.dat_gamma_Bfree.eps}
\includegraphics[width=0.45\textwidth]{images/sausage_tabulated_models_kr13_MC1_Bfree.dat_Bfield_Bfree.eps}
\caption{Left: distribution of predicted $\gamma$-ray emission from our cluster sample, for different fiducial models (colored histograms). The thin vertical lines show the mean emission for the sample according to each model, and should be compared with the upper limits from the stacking of all non-cool-core clusters observed by \emph{Fermi}, or by the stacking limited to our sample of clusters with double relics (gray arrows). Right: distribution of magnetic fields (colored histograms) required for each relic to match the observed radio power using Eq. \ref{eq:hb}. The additional gray arrows give the range of values
inferred for the few observations of magnetic fields in real relics (see text for details), while the hatching marks the values of magnetic fields that we regard as physically unrealistic (see Sec.\ref{subsubsec:Bfield}).}
\label{fig:first}
\end{figure*}
\subsection{Model uncertainties}
\label{subsec:uncert}
\subsubsection{Magnetic field}
\label{subsubsec:Bfield}
First, we investigate the role played by the magnetic field in relics. Explaining magnetic fields larger than a few $\rm \mu G$ outside of cluster cores is very difficult for several reasons.\\
In the only case in which a good volume coverage of the ICM is obtained through Faraday Rotation \citep[i.e. in the Coma cluster,][]{bo10,2013MNRAS.433.3208B}, the inferred trend of the magnetic field is $B \propto n^{\alpha_{\rm B}}$, with $\alpha_{\rm B} \sim 0.5-0.9$, which implies that on average the field drops below $1 ~\mu G$ at half of the virial radius. This scaling is supported by simulations \citep[][]{do99,bo10,va14mhd} and it implies that the magnetic field in
the ICM is not dynamically important, i.e. $\beta_{\rm pl} \sim 100$ everywhere (where $\beta_{\rm pl}=n k_{\rm B} T / P_{\rm B}$ and $P_{\rm B}$ is the magnetic pressure). Instead, a field of the order of $\geq 10 ~\mu G$ at half of the virial radius or beyond implies $\beta_{\rm pl} \leq 1$, which is
hard to justify theoretically. Indeed, the turbulence around the relic should be modest and dominated
by compressive modes, and can only raise the magnetic field by a small factor \citep[][]{2012MNRAS.423.2781I,sk13}. It has been suggested that CRs can cause magnetic field amplification via CR-driven turbulent amplification \citep[][]{2013MNRAS.tmp.2295B}, but not to the extreme level required by our modelling. Moreover, all cosmological simulations predict some local amplification at shocks, but this is always smaller than the steep
increase of the gas thermal pressure, due to Rankine-Hugoniot jump conditions \citep[][]{do99,br05,xu09,va14mhd}. Based on simulations, the observed mass--radio luminosity relation for double relics is better explained by assuming magnetic fields of the order of $\sim 2 ~\rm \mu G$ in most objects \citep[][]{fdg14}.\\
Evidence of {\it polarised} radio emission from a few radio relics (including a few contained in our sample) excludes the presence of large $\gg 10 ~\mu G$ fields distributed on scales below the radio beam (a few $\rm ~kpc$), which would otherwise totally depolarise the emission \citep[][]{vw10,bonafede11,2012MNRAS.426...40B}.
The highest (indirect) indication of magnetic fields in a giant radio relic so far is of the order of $\sim 7 ~\rm \mu G$ \citep[][]{vw10}. Moreover, all observations of Faraday Rotation outside of cluster cores are consistent with magnetic fields of a few $\mu \rm G$ at most \citep[][]{mu04,gu08,bo10,vacca10}.
Finally, we notice that the acceleration efficiencies usually assumed within DSA \citep[e.g.][]{kj07,hb07,kr13} are based on the assumption of shocks running in a high-$\beta$ plasma. In the case of highly magnetised relics, $\beta \ll 1$, the physics of the intracluster plasma changes dramatically and the DSA efficiencies are not applicable anymore.
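The plasma-beta argument can be checked with a short numerical estimate, using assumed round-number values for the gas density and temperature at about half the virial radius (not fitted cluster values).

```python
import math

k_B = 1.3807e-16  # Boltzmann constant [erg/K]

def plasma_beta(n_cm3, T_K, B_gauss):
    """beta_pl = n k_B T / P_B, with magnetic pressure P_B = B^2 / (8 pi)."""
    return (n_cm3 * k_B * T_K) / (B_gauss ** 2 / (8.0 * math.pi))

# Assumed round numbers: n ~ 1e-4 cm^-3 and T ~ 5e7 K.
# A ~1 muG field gives beta_pl ~ 17 (thermally dominated plasma),
# while the >= 10 muG required by some relics pushes beta_pl below
# unity, i.e. a magnetically dominated ICM.
beta_1muG = plasma_beta(1e-4, 5e7, 1e-6)
beta_10muG = plasma_beta(1e-4, 5e7, 1e-5)
```

Since $\beta_{\rm pl}$ scales as $B^{-2}$, a factor 10 in field strength changes the pressure ratio by a factor 100, which is why $\geq 10 ~\mu G$ fields at these radii are so hard to accommodate.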
Hence, we test a scenario in which we
cap the magnetic field at $B_{\rm relic, max}=10 ~\rm \mu G$. In the many cases where the magnetic field inferred from Eq.\ref{eq:hb} would be larger than the $10 ~\rm \mu G$ upper limit, we allow for the presence of reaccelerated electrons and iteratively increase $\epsilon$ (by $0.1$ \% at each iteration) so that the acceleration efficiency is increased. We stop the iterations when the radio emission from Eq.\ref{eq:hb} matches the radio power (within a $10$ \% tolerance). We then assume this ratio to be constant in the downstream region {\footnote {In this case, the values of $\epsilon=0$, $\epsilon=1$\% and $\epsilon=5$\% quoted in the labels actually refer to the initial assumed value for each cluster, while the final value depends on the iterations described here. However, in all cases the iterations stopped after reaching $\epsilon \sim$ a few percent.}}.
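The iteration described above can be sketched as follows; predict_power is a hypothetical callback standing in for Eq. \ref{eq:hb} evaluated with the capped field and the re-acceleration-boosted efficiency, assumed to increase monotonically with $\epsilon$.

```python
def calibrate_epsilon(P_radio_obs, predict_power, eps0=0.0, d_eps=1e-3,
                      tol=0.10, eps_max=0.10):
    """Raise the CR energy ratio epsilon in steps of 0.1 per cent until
    the predicted radio power matches the observed one within a 10 per
    cent tolerance; bail out if no match is found below eps_max."""
    eps = eps0
    while abs(predict_power(eps) - P_radio_obs) / P_radio_obs > tol:
        eps += d_eps
        if eps > eps_max:
            raise RuntimeError("no epsilon below eps_max matches P_radio")
    return eps

# Toy monotonic stand-in for the boosted-efficiency radio power [erg/s/Hz]:
eps_best = calibrate_epsilon(2e40, lambda eps: 1e40 * (1.0 + 130.0 * eps))
# -> 0.007 (i.e. 0.7 per cent) for this toy model
```

In the actual runs the iterations always stopped at $\epsilon$ of at most a few percent, as noted in the footnote above.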
Fig. \ref{fig:Bcap} shows our results (see also Table 2). All magnetic fields are now more in line with the range of uncertainties given by the (scarce) observational data.
In this case, the difference between the models is reduced, given that the energy ratio $\epsilon$ had to be increased in several objects even when the starting model is the single-injection one. The model producing
the largest emission is the one with $\epsilon=5$\%, but its difference from the other two is now limited to a few percent in the $\gamma$-ray flux. All models are now at the level of the observed stacking for this dataset, and a factor
$\sim 2$ above the full stacking of non-cool-core clusters in \emph{Fermi} \citep[][]{2013A&A...560A..64H}.
We think that this set of runs gives the most stringent test of the DSA model, because it includes the effect of CR re-acceleration to
explain radio relics with modest magnetic fields \citep[][]{ka12,pinzke13}; yet this idea fails when the hadronic emission from the accelerated CRs is also taken into account.
In the following we will discuss the remaining model uncertainties based on the "Bcap" model.
\begin{figure*}
\includegraphics[width=0.45\textwidth]{images/sausage_tabulated_models_kr13_MC1_Bcap.dat_gamma_Bcap.eps}
\includegraphics[width=0.45\textwidth]{images/sausage_tabulated_models_kr13_MC1_Bcap.dat_Bfield_Bcap.eps}
\caption{Similar to Fig.\ref{fig:first}, but here we impose a maximum magnetic field of $10 ~\rm \mu G$ at the location of relics, and allow CR re-acceleration in all models (See Sec.~\ref{subsubsec:Bfield} for details).}
\label{fig:Bcap}
\end{figure*}
\subsubsection{Upstream gas density and temperature}
Our upstream gas density follows from the simplistic assumption of a $\beta$-model profile along the direction of propagation of the merger shocks. However, clusters hosting double relics are known to be perturbed. Observations provide evidence that radio relics are aligned with the merger axis of clusters \citep[][]{2011A&A...533A..35V} and cosmological simulations show that the gas density along
the major axis of merging clusters is significantly higher than the average profile, up to $\sim 20-30$ \% close to the virial radius \citep[][]{va11scatter, 2013MNRAS.431..954K,va13clumping}. X-ray observations also suggest departures of this level in the outer parts of clusters \citep[][]{2012A&A...541A..57E,2013A&A...551A..22E,2014MNRAS.437.3939U,2015arXiv150104095M}. \\
We tested the impact of a systematically $20$ \% higher upstream gas density in all our clusters, producing the results given in Fig. \ref{fig:second} (left). The enhanced density exacerbates the problem with the $\gamma$-ray emission because the hadronic emission scales as $\propto n^2$, so a $20$ \% density increase boosts it by $\sim 44$ \%. The localised presence of denser clumps along the major axis of relics can only make this problem worse. We conclude that the hadronic emission predicted by our baseline model probably underestimates the level of $\gamma$-ray emission that DSA should produce in these objects.
Moreover, our assumption on the upstream gas density can be relaxed by considering that, very likely, before the heating by the crossing merger shocks the medium
in front of the relic had a temperature $\leq T_{\rm 500}$. As a very conservative case, we considered a pre-shock temperature lower by a factor $\sim 2$ compared to $T_{\rm 500}$, corresponding to a $M \approx 2$ shock. The right panel of Fig. \ref{fig:second} shows the result of this test. The predicted $\gamma$-ray emission is somewhat reduced compared to the ``Bcap'' model, most notably in the single-injection case, because there the propagation history of the single shock is the only crucial parameter. In this case the mean emission is a factor $\sim 2$ below the \emph{Fermi} limits. However, even more relics in all models now require $\sim 10 ~\rm \mu G$ fields, because a decrease in the upstream temperature in Eq.~\ref{eq:hb} must be balanced by an increased magnetic field to match the radio power.
In a more realistic case, we expect the two above effects to combine, since the large-scale infall pattern along the major axis of merging clusters pushes cold, dense, un-virialized material further into the virial radius of the main halo; therefore, the problems of our DSA modeling of radio relics should become even worse.
\begin{figure*}
\includegraphics[width=0.45\textwidth]{images/sausage_tabulated_models_kr13_MC1_dens.dat_gamma_dens.eps}
\includegraphics[width=0.45\textwidth]{images/sausage_tabulated_models_kr13_MC1_temp.dat_gamma_temp.eps}
\caption{Left panel: distribution of predicted $\gamma$-ray emission from our cluster sample, similar to Fig. \ref{fig:first} but here assuming a $20$ \% higher
upstream gas density compared to the basic model. Right panel: as in the left panel, but assuming a $50$ \% lower upstream gas temperature compared to the basic model (see text for details). In both cases we assume a capping of the magnetic field at $10 ~\rm \mu G$.}
\label{fig:second}
\end{figure*}
\subsubsection{Radial dependence of the Mach number}
\label{subsubsec:Mrad}
Our assumption of a constant Mach number within the volume of interest follows from the assumption that the kinetic energy flux across shocks is conserved during their propagation. This implies that the dissipation of kinetic energy by these shocks is negligible, which is appropriate for the weak shocks considered here. The upstream medium is also assumed to be isothermal (i.e. $n_{\rm u}(r) (M c_s)^3 S(r)/2={\rm constant}$, which gives $M \propto [n_{\rm u}(r) \cdot S(r)]^{-1/3} \approx \rm constant$ because $n_{\rm u}(r) \propto r^{-2}$ outside of the cluster core in the $\beta$-model). We also tested the possibility of a shallow radial dependence of the Mach number, $M(r) =M_{\rm 0} (r/R_{\rm relic})^{1/2}$ (where $M_{\rm 0}$ is the Mach number estimated at the location of the relic), as in \citet{va14relics}. This was derived from the observed radial dependence of the radio spectral index of relics \citep[][]{vw09}. However, when only giant radio relics are considered, this trend is not significant
\citep[][]{2012MNRAS.426...40B,fdg14}. To the best of our knowledge the average radial trend of Mach number in clusters was discussed only by \citet[][]{va09shocks,va10kp} and more recently by \citet{2014ApJ...785..133H}. All these works confirm a very shallow functional dependence with radius of the average Mach number of shocks, typically going from $M \sim 1.5-2$ in the centre to $M \sim 3$ in the cluster
periphery. This trend is consistent with $M \approx M_{\rm 0} (r/r_{\rm c})^{1/2}$ (with $M_{\rm 0} \sim 2$). In Fig. \ref{fig:third} we show the results obtained if we impose a $M \propto r^{1/2}$ scaling instead of a constant Mach number.
The predicted $\gamma$-ray emission downstream of relics is significantly reduced only in the single-injection case ($\epsilon=0$), but the average emission of the sample still remains larger than both stacking limits. We conclude that the radial trend of the Mach number, as long as it is shallow as suggested by simulations, is not a crucial point.
In the shock re-acceleration case, we also tested a scenario in which the re-acceleration is done by a $M=3$ shock instead of the $M=2$ shock of our baseline model (Fig. \ref{fig:third}, right). In this case the requirements on the magnetic field are lowered, and the number of relics for which we require $B_{\rm relic} \geq 10 ~\rm \mu G$ is limited to one object (not shown). However, the predicted level of hadronic emission is now much increased: the $\epsilon=1$\% run also hits the \emph{Fermi} stacking limits for this sample of objects, while the $\epsilon=5$\% run now predicts an average emission $\sim 4-5$ times larger than this.
\begin{figure*}
\includegraphics[width=0.45\textwidth]{images/sausage_tabulated_models_kr13_MC1_Mrad.dat_gamma_Mrad.eps}
\includegraphics[width=0.45\textwidth]{images/sausage_tabulated_models_kr13_MC1_M3.dat_gamma_M3.eps}
\caption{Left: distribution of predicted $\gamma$-ray emission from our cluster sample, similar to Fig. \ref{fig:first} but here considering a radial scaling of the Mach number downstream of relics, $M(r) =M_{\rm 0} (r/R_{\rm relic})^{1/2}$. Right: same as in Fig. \ref{fig:first}, but here assuming that the re-acceleration in the $\epsilon=1$\% and $\epsilon=5$\% cases is done by a $M=3$ shock.}
\label{fig:third}
\end{figure*}
\subsubsection{Mach number from the radio spectrum}
\label{subsubsec:Mach_MC}
The estimate of the Mach number at the position of relics is crucial in our modeling, as it determines the level of CR acceleration in the DSA scenario we are testing. However, several effects can make the Mach number we derive from the radio spectrum in Sec.~\ref{subsec:algo} uncertain.
In several observations the Mach number derived from the radio spectrum is found to be higher than the one estimated through X-ray analysis \citep[][]{2013MNRAS.433..812O,2013PASJ...65...16A}. The surface of complex shocks is described by a range of Mach numbers rather than by a single value \citep{sk13}, and the radio emission will be dominated by electrons probing larger Mach numbers compared to the mean \citep{2014ApJ...785..133H}. Additionally, radio observations with only a few beams across the relic can only produce integrated radio spectra, which are expected to be steeper than the injection spectrum by $\approx 0.5$ in spectral index. A blend of several populations of electrons seen in projection can yield spectra with time-dependent biases, as recently discussed by \citet{2014arXiv1411.7513K}.
To assess this effect, we ran a set of Monte Carlo extractions, drawing uniform random deviates within $M \pm \Delta M$, where $\Delta M \leq 0.5 M$. For each relic we randomly extracted 200 values of $\Delta M$ and computed the downstream $\gamma$-ray emission in all cases.
Figure \ref{fig:Mach_MC} shows the results for the single-injection and re-acceleration models (coloured histograms). In this case the simulated mean emission in the plot is the average of each set of 200 realisations (i.e. we first compute the mean emission for the cluster sample for one random combination of extractions of $\Delta M$, and then compute the average emission and dispersion within the full dataset of 200 random realisations). The number of 200 realisations was chosen because the errors in the mean emission do not change significantly for larger numbers of realisations.
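The stacking-and-averaging procedure above can be sketched as follows. This is a minimal illustration only: \texttt{gamma\_flux} is a hypothetical placeholder for the model-dependent downstream hadronic emission, not the actual pipeline used in this work.

```python
import numpy as np

rng = np.random.default_rng(42)

def gamma_flux(mach):
    # Hypothetical stand-in for the model-dependent downstream
    # hadronic gamma-ray flux as a function of Mach number.
    return 1e-10 * mach**4 / (1.0 + mach**2)

def stacked_mean_flux(mach_radio, n_real=200):
    """For each realisation, draw one uniform deviate in
    [0.5 M, 1.5 M] per relic, stack (average) the sample, then
    average the stacked values over all realisations."""
    mach_radio = np.asarray(mach_radio, dtype=float)
    stacked = np.empty(n_real)
    for i in range(n_real):
        m = rng.uniform(0.5 * mach_radio, 1.5 * mach_radio)
        stacked[i] = gamma_flux(m).mean()
    return stacked.mean(), stacked.std()

# Illustrative radio-derived Mach numbers for a 5-relic sample:
mean_flux, scatter = stacked_mean_flux([3.3, 2.1, 2.8, 2.4, 3.2])
```

The dispersion of the 200 stacked values plays the role of the error bars on the mean emission quoted in the text.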
Compared to our fiducial model, a random variation in the assumed Mach number overall increases the level of hadronic $\gamma$-ray emission, suggesting that on average our baseline model underestimates the total emission. The underestimate is obviously more significant in the single-shock model ($\epsilon=0$), where the mean emission is $\sim 5$ times smaller than the average we obtain from the 200 random extractions. In the re-acceleration cases the effect is smaller, and the fiducial model probably underestimates the hadronic emission by a factor $\sim 1.5-2$. For the same set of random extractions, the problem of requiring too large magnetic fields for a significant fraction of objects ($\sim 1/3$ in the $\epsilon=0$ and $\epsilon=1$\% cases) remains (not shown).
We conclude that realistic uncertainties in the Mach number do not change the robustness of our results.
\begin{figure}
\includegraphics[width=0.45\textwidth]{images/sausage_tabulated_models_kr13_MC1_MachMC.dat_gamma_MachMC.eps}
\caption{Distribution of predicted $\gamma$-ray emission from our cluster sample including a random deviate from the Mach number derived from the radio, in the range $M=M_{\rm radio} \pm 0.5 M_{\rm radio}$ (see Sec.~\ref{subsubsec:Mach_MC} for details). We extracted 200 random deviates for each object and computed the downstream hadronic emission for the three re-acceleration models. Unlike in the previous figures, here the vertical lines show the mean emission from the 200 stacked samples (i.e. we computed the mean emission within the sample once for each random extraction, and then computed the mean emission over the 200 realisations).}
\label{fig:Mach_MC}
\end{figure}
\subsubsection{Viewing angle}
So far we have assumed that all relics trace shocks which propagate exactly in a plane perpendicular to the line of sight. Numerical simulations
of relics support this scenario and limit the inclination of the propagation plane along the line of sight to $\leq 10-20$ degrees \citep[][]{vw12sim,ka12}. In general, simulated radio relics assuming DSA resemble the observational properties of most relics only when they lie close to the plane of the sky \citep[][]{va12relic,sk13}.
However, small inclinations can be present, and we checked whether the inclusion of small inclinations ($|\Delta \omega| \leq 30$ degrees) along the line of sight can alter the picture in any significant way. Similar to the previous test, we randomly extracted 200 values of $\Delta \omega$ from a uniform distribution in the ``Bcap'' model, accordingly recalculated the total volume spanned by shocks (which can only become {\it bigger} compared to the $\Delta \omega=0$ case), and computed the average value of the 200 realisations of cluster stackings.
Fig. \ref{fig:angle_MC} shows the results of this test. The average $\gamma$-ray emission from all realisations is of the order of the fiducial model ($\Delta \omega=0$), at the level of the \emph{Fermi} stacking for this sample, and larger than the stacking by \citet{fermi14}. The outcome in the distribution of magnetic fields at relics is even worse than in the fiducial case, because for large angles along the line of sight the relics are located further out, where the gas density is lower than in the $\Delta \omega=0$ case, and the magnetic field
must increase dramatically to match the radio power. As for all previous tests, we conclude that the presence of small but unavoidable projection effects changes little. However, on average these projection effects should yield an even larger hadronic emission from our dataset if DSA is at work.
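The geometric effect of a line-of-sight inclination can be sketched as below. For illustration only, we assume the true relic distance is the projected one divided by $\cos \Delta \omega$; the full volume recalculation in our procedure is more detailed.

```python
import numpy as np

rng = np.random.default_rng(0)

def deprojected_radii(r_proj_mpc, n_real=200, max_deg=30.0):
    """Draw uniform inclinations |dw| <= max_deg along the line of
    sight and deproject the relic distance, r = r_proj / cos(dw)."""
    dw = np.deg2rad(rng.uniform(-max_deg, max_deg, n_real))
    return r_proj_mpc / np.cos(dw)

# A relic projected at 1.2 Mpc: any non-zero inclination moves it
# outward, into lower-density gas, as discussed in the text.
radii = deprojected_radii(1.2)
```

Since $\cos \Delta \omega \leq 1$, the deprojected radius (and hence the shocked volume) can only increase with respect to the $\Delta \omega = 0$ case.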
\begin{figure}
\includegraphics[width=0.45\textwidth]{images/sausage_tabulated_models_kr13_MC1_angleMC.dat_gamma_angleMC.eps}
\caption{Similar to Fig.~\ref{fig:Mach_MC}, but here extracting 200 random values of the inclination along the line of sight for each
relic, $|\Delta \omega| \leq 30$ degrees.}
\label{fig:angle_MC}
\end{figure}
\subsubsection{Uncertainties in Cosmic Ray physics}
The efficiencies that we have tested produce the lowest amount of CR-acceleration \citep[][]{kr13}. However, previously suggested functions for the $\eta(M)$ acceleration efficiency \citep[e.g.][]{kj07} give a larger injection of CRs from $M \leq 5$ shocks, and can only make the problem with \emph{Fermi} limits worse.
Other uncertainties in the physics of CRs after their injection are briefly discussed here.
Outside of cluster cores, CRs are only weakly subject to hadronic and Coulomb losses, owing to the low gas density. For the sake of our analysis, it does not make any difference whether they diffuse in the cluster volume at constant radius (since their contribution to the $\gamma$-ray emission only depends on the gas density and not on the exact location in the cluster atmosphere). Only diffusion in the radial direction can change our estimate. However, this cannot be a big effect, since CRs are thought to be frozen into tangled magnetic fields, and in this case their spatial diffusion is slow (i.e. $\tau \sim 2 \cdot 10^8 \rm yr ~(R/Mpc)^2 ~(E/GeV)^{-1/3}$ for a constant $B=1 ~ \rm \mu G$ magnetic field and assuming Bohm diffusion, e.g. \citealt{bbp97}).
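As a quick check of the numbers, the confinement time implied by this scaling can be evaluated directly. This is a sketch that simply follows the scaling quoted above; the normalisation is taken from the text, not derived independently.

```python
def diffusion_time_yr(r_mpc, e_gev):
    """CR diffusion time, tau ~ 2e8 yr (R/Mpc)^2 (E/GeV)^(-1/3),
    for a constant B = 1 microGauss field (scaling from the text)."""
    return 2.0e8 * r_mpc**2 * e_gev**(-1.0 / 3.0)

# A ~GeV proton (responsible for Fermi-band gamma rays) takes
# ~2e8 yr to diffuse across 1 Mpc, supporting the picture of CRs
# being effectively frozen into the ICM on the relevant timescales.
tau = diffusion_time_yr(1.0, 1.0)
```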
More recently, it has been suggested that the fast ($v_{\rm streaming} \geq v_{\rm A}$, where $v_{\rm A}$ is the Alfv\'{e}n velocity) streaming of CRs can progressively deplete the downstream region of the shock and reduce the radio and
hadronic emission \citep[e.g.][]{2011A&A...527A..99E}, offering a way to reconcile hadronic models for radio halos with the observed bimodality in the distribution of diffuse radio emission in clusters \citep[][]{gb07}. However, the validity of this scenario is controversial \citep[e.g.][]{2013arXiv1306.3101D,2013MNRAS.434.2209W}, and this mechanism has been suggested to be maximally efficient in relaxed clusters, whereas our sample of clusters with double relics consists entirely of objects with a very unrelaxed X-ray morphology. For a turbulent intracluster medium, the detailed calculation by \citet{2013MNRAS.434.2209W} shows that even fast streaming rapidly diminishes the $\gamma$-ray luminosity only
at the $E=300-1000$ GeV energies probed by imaging air Cherenkov telescopes (\emph{MAGIC, HESS, VERITAS}), but not at the lower energies probed by \emph{Fermi}. Therefore, the latter is a more robust probe of the CR injection history, and our modeling here is well justified.
Finally, our work neglects the (re)acceleration of CRs by other mechanisms, such as turbulent re-acceleration in the downstream of relics \citep[e.g.][]{2004MNRAS.350.1174B}, which should re-energise radio-emitting electrons as well as CR-protons. Including this effect in our simple modeling is beyond the goal of this work; however, its net effect can only be that of further increasing the mean emission we predict here.
\subsection{Other observational proxies: inverse Compton emission}
Relativistic electrons accelerated by shocks can also emit in the hard-X ray band through Inverse Compton (IC) emission \citep{1979ApJ...227..364R,1998ApJ...494L.177S}. In principle, this can offer a complementary way of testing our models without having to make assumptions for the magnetic field.
Hence, we have computed the IC emission from each object in our sample, under the assumption of stationary shock acceleration following \citet{1999ApJ...520..529S}:
\begin{equation}
\epsilon_{\rm IC} \approx 0.17 \frac{E_{\rm CR,e}}{\Delta t} \quad (\gamma \leq 5 \cdot 10^3),
\end{equation}
where $\Delta t$ is given by the shock
crossing time for each radial shell, $E_{\rm CR,e}$ is the energy of CR-electrons injected by the shock, and our integration is limited to the cooling region close to each shock. This is computed using Eq. (12) in \citet[][]{ka12}:
\begin{equation}
l_{\rm cool} \approx 890 \rm ~kpc ~ \frac{v_{\rm s}}{10^3 \rm ~km/s} \cdot \frac{B^{1/2}}{B^{2}+B_{\rm CMB}^{2}} \cdot \left(\frac{\nu}{1 \rm ~GHz}\right)^{-1/2}(1+z)^{-1/2} ,
\end{equation}
where $z$ is the redshift.
Here we show our predictions for the last investigated case, where we cap the magnetic field at $B=10 ~\rm \mu G$ and allow for the inclusion of CR re-acceleration when necessary to match the observed radio power (Sec.~\ref{subsubsec:Bfield}).
Figure \ref{fig:IC} gives our predicted emission for the single injection case (blue) and for the extreme re-acceleration case ($\epsilon=0.05$), to emphasize
the weak dependence of the predicted IC emission on the assumed re-acceleration model. Table 3 gives the predicted flux in IC emission from the downstream of all double relics in the $[20-100] \rm keV$ range, and the assumed radiative lengths in the downstream region.
In all cases the predicted emission lies below the detection threshold of the hard X-ray satellite \emph{NuSTAR}\footnote{http://www.nustar.caltech.edu/}, which has been estimated to be of the order of a few $\sim 10^{-12} \rm erg/(s ~cm^2)$ in the case of the recent observations of the Coma cluster \citep{2014arXiv1411.1573G} and of the Bullet cluster \citep{2014ApJ...792...48W}. However, a significantly lower detection threshold might be reached in the case of peripheral relics, given that in the latter clusters the contamination from the hot thermal gas in the hard X-ray range hampers the detection of the IC signal.
This might be the case for the most powerful targets in our sample, represented by A3376 (both relics) and by the most powerful relic in ZwCLJ2341. In these cases, the large distance from the centre of the host clusters ($\sim 1.2-1.3$ Mpc) might indeed offer a better chance of detecting the IC signal. In the coming years, the \emph{Astro-H} satellite should be able to probe the inverse Compton emission in the same clusters \citep[][]{2014SPIE.9144E..26A,2015arXiv150106940B}.
\begin{table*}
\caption{Forecast of IC emission from the downstream region of relics in our simulated clusters, for $[20-100]$ keV. The predictions are here only given for the $\epsilon=0$ case, using our model with a capping of the magnetic field at $10 ~\rm \mu G$ (see Sec. \ref{subsubsec:Bfield} for details), as all others yield extremely similar results.}
\label{tab:tab_IC}
\centering \tabcolsep 2pt
\begin{tabular}{c|c|c|c|c|c|c|c|c}
object & $M_1$ & $M_2$ & $\log_{\rm 10}(e_{\rm IC,1})$ & $\log_{\rm 10}(e_{\rm IC,2})$ & $l_{\rm cool,1}$ & $l_{\rm cool,2}$ & $B_1$ & $B_2$\\
& & & $[{\rm erg/(s ~cm^2)}]$ & $[{\rm erg/(s~cm^2)}]$ & [kpc] & [kpc] & [$\mu$G] & [$\mu$G] \\ \hline
A3376 & 3.3 & 3.3 & -13.98 & -13.19& 273.6 & 191.3 & 2.25 & 0.54\\
A3365 & 2.1 & 1.8 & -16.78 & -18.78& 87.6 & 48.6 & 5.41 & 8.69\\
A1240 & 3.3 & 2.8 & -15.29 & -15.24& 100.7 & 96.5 & 0.99 & 0.89\\
A2345 & 2.8 & 2.2 & -14.41 & -15.40& 107.2 & 131.1 & 0.60 & 0.97\\
RXCJ1314 & 2.4 & 2.4 & -15.11 & -15.28& 86.4 & 111.2 & 0.38 & 0.65\\
MACSJ1149 & 3.3 & 2.4 & -15.54 & -16.29& 101.7 & 96.4 & 1.75 & 1.46\\
MACSJ1752 & 3.2 & 4.0 & -14.88 & -14.92& 122.2 & 89.1 & 3.21 & 5.92\\
A3667 & 1.7 & 3.8 & -14.53 & -14.12& 164.4 & 159.7 & 1.56 & 1.32\\
ZwCLJ2341 & 1.8 & 3.2 & -16.96 & -13.16& 185.7 & 97.7 & 2.15 & 0.36\\
PLCKG287 & 2.9 & 2.2 & -14.64 & -16.05& 125.4 & 128.4 & 1.40 & 3.89\\
\end{tabular}
\end{table*}
\begin{figure}
\includegraphics[width=0.45\textwidth]{images/sausage_tabulated_models_kr13.datIC_all.eps}
\caption{Forecast of Inverse Compton emission from our simulated relics, in the extreme cases of $\epsilon=0$ (blue) and $\epsilon=0.05$ (red).}
\label{fig:IC}
\end{figure}
\section{Discussion and conclusions}
\label{sec:conclusions}
Most of the evidence from X-ray and radio observations suggests a link between radio relics and merger shocks: merger axes of clusters and relic orientations correlate \citep[][]{2011A&A...533A..35V}, the power of radio relics scales with X-ray luminosities \citep[][]{2012MNRAS.426...40B,fe12} and mass \citep[][]{fdg14} of the host cluster.
Cosmological simulations produce emission patterns consistent with observed radio relics just using a tiny fraction of the kinetic
energy flux across shock waves \citep[e.g.][]{ho08,2008MNRAS.385.1242P,2009MNRAS.393.1073B,sk11,va12relic,2012MNRAS.420.2006N,sk13}.\\
Still, a number of recent observations have revealed some open issues, including uncertain merger scenarios \citep[e.g.][]{2013MNRAS.433..812O}, departures from power-law spectra \citep[e.g.][]{2014MNRAS.445.1213S,2014arXiv1411.1113T}, missing associations between radio emission and X-ray maps \citep[e.g.][]{2011MNRAS.417L...1R,2014MNRAS.443.2463O}, efficiency problems \citep[e.g.][]{2011ApJ...728...82M,2012MNRAS.426...40B}, inconsistencies between Mach numbers derived from X-ray and radio observations \citep[e.g.][]{2012MNRAS.426.1204K,2013PASJ...65...16A,2013MNRAS.433.1701O} and apparent connections to radio galaxies \citep[][]{2014ApJ...785....1B}.\\
In this work, we used a simple semi-analytical model of expanding merger shocks in clusters to reconstruct the propagation history of the shocks leading to observed double relics. We used a spherically symmetric model and assumed that cosmic ray protons are trapped in the intracluster medium on all relevant timescales. A range of realistic scenarios for the acceleration of relativistic electrons and protons via DSA, varying the upstream gas conditions, the shock parameters and the budget of pre-existing cosmic rays, gives very similar results. In all realistic scenarios, a significant fraction of our objects ($\sim 1/3-1/2$) has difficulties in matching at the same time the observed radio emission and the constraints imposed by the \emph{Fermi} limits, unless the magnetic field in all problematic objects is much larger than what is usually considered realistic ($\gg 10 ~\rm \mu G$).
The scenario in which radio-emitting electrons come from the re-acceleration of pre-existing electrons \citep[][]{ka12,pinzke13} can alleviate the tension with \emph{Fermi} if the pre-existing electrons are not the result of previous injection by shocks, as investigated here, but are instead released by mechanisms that mostly inject leptons (e.g. leptonic-dominated jets from AGN), as already discussed in \citet{va14relics}.
Based on our semi-analytical model, the standard DSA scenario with thermal leakage that predicts that $E_{\rm CRe} \ll E_{\rm CRp}$ cannot simultaneously explain radio relics and produce less $\gamma$-radiation than the upper limits from \emph{Fermi}, unless unrealistically large magnetic fields are assumed at the position of relics (e.g. $B_{\rm relic} \ge 10-100 ~\rm \mu G$).
This result is very robust, at least in the statistical sense, against all investigated variations of our fiducial parameters for the modeling of the shock acceleration of CRs. Additional effects that go beyond our idealized modeling of cluster mergers, e.g. a clumpy ICM, a succession of mergers and the additional acceleration of CRs by AGN, supernovae, turbulence or reconnection exacerbate this discrepancy. \\
Despite its obvious degree of simplification, a semi-analytical method is useful to tackle the case of double relic systems. In these systems it is reasonable to assume that most of the energetics is related to the observed pair of giant merger shocks. Their shape and location are rather regular and symmetric with respect to the cluster centre, suggesting that one can make reasonable estimates for their propagation history. This setup allows us to run very fast tests of different possible acceleration scenarios, and as we showed in our various tests it generally gives a {\it lower limit} on the expected $\gamma$-ray emission. This method is meant to be complementary to fully cosmological numerical simulations, where the effects of multiple shocks, particle advection and cooling, as well as inputs from galaxy formation and other mechanisms can be taken into account at run-time \citep[e.g.][]{pf07,scienzo}. However, a thorough exploration of models is computationally demanding because of the required high resolution and the complexity of the numerics. Also, the agreement between different numerical techniques on this topic is still unsatisfactory \citep[see discussion in][]{va11comparison}.
Another way of illustrating our result is found by rescaling the efficiency for proton acceleration, $\eta(M)$, such that the upper limits from \emph{Fermi} are not violated. In the case without pre-existing CRs, this is a simple exercise as we only need to rescale $\eta(M)$ for each relic separately, and compute the average of the efficiencies for each bin of the Mach number (here we chose a bin size of $\Delta M=0.6$ to achieve a reasonable sampling of the sparse distribution of Mach numbers in the dataset). Here, we keep the magnetic field fixed at $B=2 \mu$G as suggested by recent observations \citep[][]{fdg14}. The result is shown in Fig.~\ref{fig:last}, where we show the maximally allowed acceleration efficiency for CR-electrons, protons, as well as $K_{\rm e/p}$ as a function of $M$. This relation results from a somewhat coarse simplification of the problem but it is a rough estimate of the acceleration efficiencies in weak ICM shocks.
For shocks with $M \leq 2$, the flux ratio of injected electrons to protons is large, $K_{\rm e/p} \sim 1-100$, at odds with standard DSA (even including re-accelerated electrons). For $M \geq 2.5$ the acceleration efficiency of protons can become significant ($\sim 10^{-3}-10^{-2}$), while the acceleration efficiency of electrons flattens and $K_{\rm e/p} \sim 10^{-2}$. The functional shape of the acceleration efficiency for protons is consistent with the \citet[][]{kr13} model, but the absolute normalisation is lower by a factor $\sim 10-100$.
\begin{figure}
\includegraphics[width=0.495\textwidth]{images/sausage_tabulated_models_kr13_Bfix.dat_efficiency_SDA.eps}
\caption{Acceleration efficiency of CR-protons (green, rescaled by a factor $\times 10$ down) and CR-electrons (blue), and electron to proton acceleration ratio (red) allowed by our combined radio and $\gamma$-ray comparison with observations. In this case, we assumed a fixed magnetic field of $B=2 \mu G$ for all relics.}
\label{fig:last}
\end{figure}
A possible solution has been suggested by \citet{2014ApJ...788..142K}, who assumed that electrons and protons follow a $\kappa$-distribution near the shock transition. A $\kappa$-distribution is characterized by a power-law rather than by an exponential cutoff at high energies, thus ensuring a more efficient
injection of high-energy particles into the DSA cycle. This distribution is motivated by spacecraft measurements of the solar wind as well as by observations of HII and planetary nebulae \citep[e.g.][and references therein]{2012ASSP...33...97L}. \citet{2014ApJ...788..142K} explored the application of the $\kappa$-distribution to $M \leq 2$ shocks in the ICM, and concluded that the distribution can have a different high-energy tail as a function of the shock obliquity and of the plasma parameters. In the ICM, the distribution might be more extended towards high energies for electrons than for protons thus justifying a higher acceleration efficiency for electrons than for protons. However, in order to explain the origin of these wider distributions, one must resort to detailed micro-physical simulations of collisionless shocks.\\
The most promising explanation for the non-observation of $\gamma$-rays has been suggested by \citet[][]{2014arXiv1409.7393G}, who studied the acceleration of electrons with particle-in-cell (PIC) simulations under conditions relevant to merger shocks. They showed that $M \leq 3$ shocks can be efficient accelerators of electrons in a Fermi-like process, where electrons gain energy via shock drift acceleration (SDA). The electrons gain energy from the motional electric field and scatter off oblique magnetic waves that are self-generated via the firehose instability. They found that this mechanism can work for high plasma betas and for nearly all magnetic field obliquities. However, these simulations were performed in 2D and could not follow the acceleration of electrons beyond supra-thermal energies because of computing limitations. At the same time, hybrid simulations of proton acceleration by \citet{2014ApJ...783...91C} have shown that the acceleration efficiency is a strong function of the obliquity
angle. If indeed the magnetic field in radio relics is predominantly perpendicular to the shock normal, as found e.g. in the relic in the cluster CIZA 2242.2+5301, then the prediction is that the acceleration efficiency of protons is strongly suppressed, thus explaining the non-detection of hadronic emission. It remains to be seen if the results of these simulations hold in 3D, with realistic mass ratios between electrons and protons and coupled to a large scale MHD flow. It is also not clear whether the magnetic field is quasi-perpendicular in all the relics of this sample and how the alignment of the magnetic fields with the shock surface observed on large scales can be scaled down to scales of the ion gyro radius.
\section*{Acknowledgements}
FV and MB acknowledge support through grant FOR1254 from the Deutsche Forschungsgemeinschaft. DE and BH thank Andrea Tramacere and Christian Farnier for their help with the development of the Fermi tools. We acknowledge fruitful scientific discussions with F. Zandanel, A. Bonafede, F. Gastaldello and T. Jones related to this work.
\bibliographystyle{mnras}
\section{Introduction}
Double perovskite oxides A$_{2}$BB$'$O$_{6}$ (A is a
rare-earth/alkaline-earth cation; B and B$'$ are transition metal
cations), discovered in 1960s~\cite{Sleight}, have attracted
enormous attention in the past decades. They have been found to
show diverse properties, such as large room-temperature
magnetoresistance~\cite{Kobayashi98},
multiferroicity~\cite{Azuma05},
half-metallicity~\cite{Kobayashi98,Jeng03,Philipp003}, and
magneto-optic (MO) effects~\cite{vidya2004huge,Das08}, depending
on the compositions of the A, B, and B$'$ cations. More recently,
monolayers and multilayers of these double perovskites containing
heavy cations were predicted to host various topological
insulating phases such as quantum anomalous Hall
phase~\cite{Cook14,Cook16,Zhang14,Dong16}. Therefore, double
perovskite oxides offer ample possibilities for exploration of
spin-related physics and also for magnetic, magneto-electric and
MO device applications.
Recently, Feng {\it et al.}~\cite{Feng16} synthesized new double
perovskite Ba$_{2}$NiOsO$_{6}$ and found it to be a rare
ferromagnetic (FM) semiconductor with Curie temperature $T_{C}$ =
100 K~\cite{Feng16}. It crystallizes in a cubic $Fm\bar{3}m$
structure with lattice constant $a = 8.0428$ \AA, where the
Ni$^{2+}$ and Os$^{6+}$ ions are perfectly ordered on the B and
B$'$ sites, respectively~\cite{Feng16}. This is interesting
because FM semiconductors are rare and also useful for the
development of spintronic devices. Surprisingly, we note that Ni
and Os ions are ferromagnetically coupled, which is very rare
between the B and B$'$ cations in double perovskite
oxides\cite{Wang06,Wang09}. Furthermore, first-principles
electronic structure calculations~\cite{Feng16} showed that the
spin-orbit coupling (SOC) of Os$^{6+}$ plays a crucial role in
opening the semiconducting gap, and thus double perovskite
Ba$_{2}$NiOsO$_{6}$ is called a Dirac-Mott insulator. However, it
was inferred from the x-ray absorption spectra (XAS)~\cite{Feng16}
that the formal electronic configurations for Ni and Os in
Ba$_{2}$NiOsO$_{6}$ are Ni$^{2+}$ 3$d^{8}$ ($t_{2g}^{6}e_{g}^{2}$;
$S=1$; 2 $\mu_{B}$) and Os$^{6+}$ 5$d^{2}$ ($t_{2g}^{2}e_{g}^{0}$;
$S=1$; 2 $\mu_{B}$), respectively. Consequently, the total moment
should be 4 $\mu_{B}$/f.u. for the FM ground state. However, the
measured saturation magnetization for Ba$_{2}$NiOsO$_{6}$ is
approximately 3.46 $\mu$$_{B}$/f.u. at 5 K and 50 kOe, much
smaller than the spin only magnetic moment estimated from the
simplified ionic model. Therefore, it would be interesting to
study the origin of the abnormal ferromagnetism as well as the
spin and orbital magnetic moments of Ba$_{2}$NiOsO$_{6}$.
When a linearly polarized light beam hits a magnetic material, the
polarization vector of the reflected and transmitted light beams
rotates. The former and latter are known as Kerr and Faraday
effects, respectively. Discovered in the 19th Century, they are
two well-known MO effects~\cite{Oppeneer2001,Antonov2004}.
Currently, MO Kerr effect (MOKE) is widely used as a powerful
probe of the electronic and magnetic properties of materials, such
as two-dimensional ferromagnetic
order~\cite{Gong2017,Huang2017,Fang2018}, spin Hall
effect~\cite{Stamm17}, skyrmion Hall effect~\cite{Jiang17},
magnetic anisotropy~\cite{Lehnert10,HeLY15}, and topological
insulators~\cite{MacDonald10,Armitage12}. Furthermore, because of
its applications in modern high-density data-storage
technology~\cite{Mansuripur95}, an enormous amount of effort has
been devoted to the search for materials with large MO signals.
Band exchange splitting caused by magnetization together with
relativistic SOC has been recognized as the origin of
MOKE~\cite{Oppeneer2001,Antonov2004,Argyres55,Erskine73a,Erskine73b}.
Localized 3$d$ orbitals tend to have large band exchange
splittings, but their SOC is weak. In contrast, 4$d$ and 5$d$
transition metal atoms have a strong SOC, yet their intra-atomic
exchange interaction is rather small because their more extended
$d$ orbitals result in small band exchange splittings. Therefore,
an effective way to enhance the MOKE is to make alloys or
multilayers of 3$d$ transition metals with 4$d$ or 5$d$
transition metals~\cite{Guo96}. Consequently, the
magneto-optical properties of various 3$d$ FM transition metal
alloys and multilayers with heavier 4$d$ or 5$d$ transition metal
atoms have been investigated extensively. For example,
PtMnSb~\cite{van Engen83} has been found to be an excellent MO
metal with a maximum Kerr rotation of 2.5$^{\circ}$. Double
perovskites A$_{2}$BB$'$O$_{6}$ (B = 3$d$ and B$'$ = 4$d$ or 5$d$
transition metal elements), in which an unusual renormalization of
the intra-atomic exchange strength can enhance the band exchange
splitting at the 4$d$ or 5$d$ sites via the so-called
hybridization-driven mechanism~\cite{Sarma000,Kanamori001}, are
also expected to have a large MOKE. However, the B and B$'$ atoms
in most double perovskite materials~\cite{Jeng03,Wang06,Wang09}
prefer an antiferromagnetic coupling. This may reduce the net
magnetization and thus result in a small MO
effect~\cite{Das08}. As mentioned
above, simultaneous occurrence of ferromagnetism and
semiconducting gap in double perovskites is very rare. Combination
of the strong SOC of Os atoms and ferromagnetism thus would make
Ba$_{2}$NiOsO$_{6}$ an excellent semiconductor for not only
semiconductor-based spintronics but also magneto-optical devices.
The recent development in synthesizing artificial
atomic-scale transition metal oxide heterostructures provides
great tunability over fundamental physical parameters to realize
novel properties and functionalities~\cite{Mannhart010,Hwang12},
such as the conductive interface between two insulating
oxides~\cite{Ohtomo04,Lee2016,Lu15}. This also stimulates extensive
investigations on the topology of the electronic band structure of
transition metal oxide heterostructures~\cite{Xiao11,Cook14,Zhang14,Dong16,Chandra17,Hslu2018}.
Indeed, the quantum anomalous Hall phase was predicted to occur in (001)
double-perovskite La$_{2}$MnIrO$_{6}$ monolayer~\cite{Zhang14} and
also (111) double-perovskite La$_{2}$FeMoO$_{6}$ and
Ba$_{2}$FeReO$_{6}$ monolayers~\cite{Cook14,Baidya16}.
Therefore, it would also be interesting to study the topological properties of
Dirac-Mott semiconductor Ba$_{2}$NiOsO$_{6}$ and its heterostructures.
In this paper, we present a systematic first-principles study of
magnetism, electronic structure, magneto-optical effects and
topological property of cubic double perovskite
Ba$_{2}$NiOsO$_{6}$ and its (111)
(Ba$_{2}$NiOsO$_{6}$)$_{1}$/(BaTiO$_{3}$)$_{10}$ monolayer
superlattice. First, we find that both structures are narrow band
gap FM semiconductors. The ferromagnetism is driven by the rare
nearest-neighbor FM coupling between Ni and Os atoms, which is
attributed to the FM superexchange mechanism caused by the
abnormally strong hybridization between the half-filled Ni $e_{g}$
and unoccupied Os $e_{g}$ orbitals. Second, the strong SOC on the
Os atom is found to not only open the semiconducting gap but also
give rise to a large negative orbital magnetic moment on the Os
atom, thus resulting in a measured total magnetic moment of less
than 4 $\mu_B$/f.u.~\cite{Feng16}. Third, we also find that
because of the enhanced intra-atomic exchange interaction on the
Os atoms caused by the Ni 3$d$ - Os 5$d$ hybridization and the
strong SOC on the Os site, the MO effects are large in these two
structures. Our theoretical findings thus suggest that double
perovskite Ba$_{2}$NiOsO$_{6}$ and its (111) superlattice are
valuable ferromagnetic semiconductors for not only
semiconductor-based spintronics but also magneto-optical devices.
Finally, our calculated anomalous Hall conductivity reveals that
the band gap just below the Fermi level in the superlattice is
topologically nontrivial with the gap Chern number of 2. This
indicates that the (111) Ba$_{2}$NiOsO$_{6}$ monolayer
superlattice and related 5$d$ double-perovskite monolayers may
provide an interesting material platform for exploring magnetic
topological phases and phase transitions.
\section{THEORY AND COMPUTATIONAL DETAILS}
We consider cubic double perovskite Ba$_{2}$NiOsO$_{6}$ [Fig.
1(a)] and its (Ba$_{2}$NiOsO$_{6}$)$_{1}$/(BaTiO$_{3}$)$_{10}$
monolayer superlattice grown along the [111] direction [Figs. 1(b)
and 1(c)]. The Brillouin zones (BZs) of bulk Ba$_2$NiOsO$_6$ and
the (111) (Ba$_2$NiOsO$_6$)$_{1}$/(BaTiO$_3$)$_{10}$
superlattice are shown in Figs. 1(e) and 1(f), respectively.
Clearly, the latter is the folded BZ of the former along the
$\Gamma$ - L direction. In the present calculations of the
electronic structure and magneto-optical properties of bulk
Ba$_{2}$NiOsO$_{6}$, the experimental lattice constant of 8.0428
\AA\ is adopted. Since the BaTiO$_{3}$ slab in the
(Ba$_{2}$NiOsO$_{6}$)$_{1}$/(BaTiO$_{3}$)$_{10}$ superlattice is
much thicker than the Ba$_2$NiOsO$_6$ layer, the BaTiO$_{3}$ slab
could be regarded as the substrate. Therefore, the in-plane
lattice constant is fixed at $\sqrt{2}a_{0}$ = 5.6962 \AA,
where $a_{0}$ is the theoretically determined lattice constant
of cubic perovskite BaTiO$_3$. In our structural optimization, the
in-plane hexagonal symmetry is fixed but the lattice constant $c$ and
the internal coordinates of all the atoms in the superlattice are
optimized theoretically. The lattice parameters and atom positions
for bulk Ba$_{2}$NiOsO$_{6}$ and its (111) monolayer superlattice
are given, respectively, in Tables S1 and S2 in the Supplementary
Materials (SM)~\cite{See Supplemental}. The electronic structure
and magnetic structure are calculated based on the density
functional theory (DFT) with the generalized gradient
approximation (GGA)~\cite{Perdew96}. The accurate
projector-augmented wave (PAW) method~\cite{PEB}, as implemented
in the Vienna {\it ab initio} simulation package
(VASP)~\cite{Kresse93}, is used. The relativistic PAW potentials
are adopted to include the SOC. The valence configurations of Ba,
Ni, Os, Ti and O atoms adopted in the present calculations are
5\emph{s}$^{2}$5\emph{p}$^{6}$6\emph{s}$^{2}$,
3\emph{p}$^{6}$3\emph{d}$^{8}$4\emph{s}$^{2}$,
5\emph{p}$^{6}$5\emph{d}$^{6}$6\emph{s}$^{2}$,
3\emph{s}$^{2}$3\emph{p}$^{6}$3\emph{d}$^{2}$4\emph{s}$^{2}$ and
2\emph{s}$^{2}$2\emph{p}$^{4}$, respectively. To better account
for the on-site electron correlation on the \emph{d} shells of Os
and Ni atoms, the GGA + U method~\cite{dudarev98} is adopted with
the effective Coulomb repulsion energy $U_{Os}$ = 2.0 eV and
$U_{Ni}$ = 5.0 eV, respectively. The results of test calculations
using different $U$ values of 3 $\sim$ 5 eV for Ni and 1 $\sim$ 3
eV for Os, are given in Appendix A below. A large plane-wave
cutoff of 450 eV and the small total energy convergence criterion
of 10$^{-5}$ eV are used throughout. Fine Monkhorst-Pack
\emph{k}-meshes of 30$\times$30$\times$30 and
10$\times$10$\times$2 are used for the bulk and superlattice
calculations, respectively.
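For orientation, the settings above can be collected into a VASP INCAR fragment. This is an illustrative sketch only: the tags are standard VASP tags, but the species order assumed in the LDAUL/LDAUU arrays (Ba Ni Os Ti O) is hypothetical and must match the actual POSCAR.

```
# Illustrative INCAR fragment (hypothetical; matches the settings quoted in the text)
ENCUT   = 450          # plane-wave cutoff (eV)
EDIFF   = 1E-5         # total-energy convergence criterion (eV)
ISPIN   = 2            # spin-polarized
LSORBIT = .TRUE.       # include spin-orbit coupling
# GGA+U, Dudarev scheme: effective U on Ni 3d (5.0 eV) and Os 5d (2.0 eV)
LDAU     = .TRUE.
LDAUTYPE = 2
LDAUL    = -1   2   2  -1  -1   # assumes POSCAR species order Ba Ni Os Ti O
LDAUU    =  0  5.0 2.0  0   0
LDAUJ    =  0   0   0   0   0
```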
To find the ground state magnetic configuration and to understand
the magnetic interactions in both systems, we consider four
possible magnetic structures, as labeled FM, AFM, FI1, and FI2 in
Figs. 2 and 3. One can then evaluate the nearest-neighbor Ni-Os
(\emph{J$_{1}$}), Os-Os (\emph{J$_{2}$}) and Ni-Ni
(\emph{J$_{3}$}) magnetic coupling parameters by mapping the
calculated total energies of the FM, AFM, FI1, and FI2 magnetic
configurations to the classical Heisenberg model
$H = E_{0} - \sum_{i>j}J_{ij}\,\hat{e}_{i}\cdot\hat{e}_{j}$,
where $J_{ij}$ is the exchange coupling parameter between
sites $i$ and $j$, and $\hat{e}_{i}$ denotes the
direction of the spin on site $i$.
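The mapping from total energies to exchange constants amounts to solving a small linear system. A minimal sketch follows; the signed bond sums in the matrix are illustrative placeholders, not the actual coefficients for the configurations of Figs. 2 and 3.

```python
import numpy as np

# Sketch of extracting J1 (Ni-Os), J2 (Os-Os), J3 (Ni-Ni) by mapping
# total energies of four collinear configurations onto the Heisenberg
# model H = E0 - sum_{i>j} J_ij (e_i . e_j).
# Each row is [1, n1, n2, n3] for one configuration, where n is the sum
# of e_i.e_j (= +/-1 per bond) over that bond type. The counts below are
# HYPOTHETICAL placeholders; the real ones depend on the cell used.
A = np.array([
    [1.0,  6.0,  3.0,  3.0],   # FM   (hypothetical bond sums)
    [1.0, -6.0,  3.0,  3.0],   # AFM
    [1.0,  0.0, -1.0,  3.0],   # FI1
    [1.0,  0.0,  3.0, -1.0],   # FI2
])

def extract_couplings(energies):
    """Solve E = E0 - A[:, 1:] @ (J1, J2, J3) for (E0, J1, J2, J3)."""
    M = A.copy()
    M[:, 1:] *= -1.0            # absorb the minus sign of the model
    return np.linalg.solve(M, energies)

# Self-consistency check with synthetic energies built from known J's
# (the bulk values of Table I, in meV).
E0, J = 0.0, np.array([6.48, 2.87, -0.09])
E_synth = E0 - A[:, 1:] @ J
sol = extract_couplings(E_synth)
```

The solve recovers the couplings used to build the synthetic energies, confirming the mapping is well posed for any four linearly independent configurations.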
\begin{figure}
\includegraphics[width=8cm]{BaNiOsO6Fig1.eps}
\caption{(a) Cubic crystal cell of bulk Ba$_2$NiOsO$_6$. Black
lines indicate the fcc primitive unit cell. (b) Side view and (c)
top view of the crystal structure of the (111)
(Ba$_2$NiOsO$_6$)$_{1}$/(BaTiO$_3$)$_{10}$ superlattice. In
(c), two different colors denote the X atoms on the two different
planes in the (111) Ba$_2$NiOsO$_6$ monolayer forming a buckled
honeycomb lattice. (e) The Brillouin zone of bulk Ba$_2$NiOsO$_6$
and (f) the BZ of its (111)
(Ba$_2$NiOsO$_6$)$_{1}$/(BaTiO$_3$)$_{10}$ superlattice, with
the basis vectors $\vec{b_{1}}$, $\vec{b_{2}}$ and $\vec{b_{3}}$
of the reciprocal lattices as well as some high-symmetry points
labeled.}
\end{figure}
\begin{figure}
\includegraphics[width=6cm]{BaNiOsO6Fig2.eps}
\caption{(a) Ferromagnetic (FM), (b)
antiferromagnetic (AFM), (c) type I ferrimagnetic (FI1)
and (d) type II ferrimagnetic (FI2) configurations in bulk Ba$_2$NiOsO$_6$.}
\end{figure}
\begin{figure}
\includegraphics[width=6cm]{BaNiOsO6Fig3.eps}
\caption{(a) Ferromagnetic (FM), (b) antiferromagnetic (AFM), (c)
type I ferrimagnetic (FI1) and (d) type II ferrimagnetic (FI2)
configurations in (111) monolayer Ba$_{2}$NiOsO$_{6}$.}
\end{figure}
For a ferromagnetic solid with at least threefold rotational
symmetry (i.e., tetragonal, trigonal, hexagonal, or cubic) and the
magnetization along the rotational $z$ axis, the optical
conductivity tensor can be written as~\cite{Wooten72}
\begin{equation}
\bm{\sigma}= \begin{pmatrix} \sigma_{xx} & \sigma_{xy} & 0 \\
-\sigma_{xy} & \sigma_{xx} & 0 \\
0 & 0 & \sigma_{zz}
\end{pmatrix}.
\end{equation}
Within the linear-response Kubo formalism~\cite{Wang74},
the absorptive parts of the conductivity tensor elements due to interband transitions are given by
\begin{equation}
\sigma_{1aa} (\omega) = \frac{\alpha}{\omega}
\sum_{i,j}\int_{BZ}\frac{d{\bf k}}{(2\pi)^3}|p_{ij}^{a}|^{2}
\delta(\epsilon_{{\bf k}j}-\epsilon_{{\bf k}i}-\hbar\omega),
\end{equation}
\begin{equation}
\sigma_{2xy} (\omega) = \frac{\alpha}{\omega}
\sum_{i,j}\int_{BZ}\frac{d{\bf k}}{(2\pi)^3}\text{Im}[p_{ij}^{x}p_{ji}^{y}]
\delta(\epsilon_{{\bf k}j}-\epsilon_{{\bf k}i}-\hbar\omega),
\end{equation}
where the summations over $i$ and $j$ run over the valence and
conduction bands, respectively, $\alpha=\pi e^{2}/\hbar m^{2}$
is a material-specific constant, $\hbar\omega$ is the photon
energy, and $\epsilon_{{\bf k}i}$ is the $i$th band energy at
the ${\bf k}$ point. The dipole matrix elements $p_{ij}^{a} =
\langle{\bf k}j|\hat{p}_{a}|{\bf k}i\rangle$, where
$\hat{p}_a$ denotes the Cartesian component $a$ of the dipole
operator, are obtained from the band structures within the PAW
formalism~\cite{Adolph01}, as implemented in the VASP package. The
integration over the BZ is carried out by using the linear
tetrahedron method (see Ref. [\onlinecite{Temmerman89}] and
references therein). The dispersive parts of the conductivity
tensor elements can be obtained from the corresponding absorptive
parts by use of the Kramers-Kronig transformation~\cite{Bennett65},
\begin{equation}
\sigma_{2aa}(\omega) = -
\frac{2\omega}{\pi}P\int_{0}^{\infty}\frac{\sigma_{1aa}(\omega')}{\omega'^{2}-\omega^{2}}d\omega',
\end{equation}
\begin{equation}
\sigma_{1xy}(\omega) =
\frac{2}{\pi}P\int_{0}^{\infty}\frac{\omega'\sigma_{2xy}(\omega')}{\omega'^{2}-\omega^{2}}d\omega',
\end{equation}
where $P$ denotes the principal integral.
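The Kramers-Kronig transform of Eq. (4) can be evaluated numerically by a straightforward principal-value quadrature. A minimal sketch, using a model Gaussian absorption peak rather than a computed spectrum:

```python
import numpy as np

# Numerical sketch of Eq. (4):
# sigma_2aa(w) = -(2w/pi) P int_0^inf sigma_1aa(w') / (w'^2 - w^2) dw'.
# The principal value is approximated by evaluating at frequencies that
# lie midway between the integration grid points, so the pole is never hit.
wp = np.linspace(0.0, 20.0, 20001)   # integration grid w' (arbitrary units)
dw = wp[1] - wp[0]

def kk_sigma2(sigma1, w_eval):
    """Dispersive part sigma_2aa(w) from the absorptive part sigma_1aa(w')."""
    return np.array([-(2.0 * w / np.pi) * np.sum(sigma1 / (wp**2 - w**2)) * dw
                     for w in w_eval])

# Model absorptive spectrum: a single narrow peak at w0 = 5.
w0 = 5.0
sigma1 = np.exp(-0.5 * ((wp - w0) / 0.2) ** 2)
w_eval = np.array([2.5, 7.5]) + 0.5 * dw   # one point below, one above w0
sigma2 = kk_sigma2(sigma1, w_eval)
# The dispersive part changes sign across the absorption peak.
```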
In the polar geometry, the complex Kerr angle can then be
calculated from the optical conductivity tensor
via~\cite{Guo94,Guo95},
\begin{equation}
\theta_{K} + i\varepsilon_{K} =
\frac{-{\sigma}_{xy}}{{\sigma}_{xx}\sqrt{1+i(4{\pi}/\omega){\sigma}_{xx}}},
\end{equation}
where $\theta_{K}$ is the Kerr rotation angle and
$\varepsilon_{K}$ the Kerr ellipticity. For a magnetic thin
film, the complex Faraday rotation angle can be written
as~\cite{ravindran1999},
\begin{equation}
\theta _{F}+i\epsilon _{F}=\frac{\omega d}{2c}(n_{+}-n_{-}),
\end{equation}
where $n_+$ and $n_-$ represent the refractive indices for left- and right-circularly polarized light, respectively,
and are related to the corresponding dielectric function (or optical conductivity) via
$n_{\pm }^{2}=\varepsilon_{\pm}=1+{\frac{4\pi i}{\omega}}\sigma _{\pm}= 1+{\frac{4\pi i}{\omega}}(\sigma _{xx}\pm i \sigma _{xy})$.
For many magnetic materials, $\sigma_{xx}$ is generally much larger than the corresponding $\sigma_{xy}$.
Therefore,
$n_{\pm }=[1+{\frac{4\pi i}{\omega}}(\sigma _{xx}\pm i \sigma _{xy})]^{1/2}$
$\approx [1+{\frac{4\pi i}{\omega}}\sigma _{xx}]^{1/2} \mp {\frac{2\pi}{\omega}}(\sigma _{xy}/\sqrt{1+\frac{4\pi i}{\omega}\sigma _{xx}})$.
Consequently,
\begin{equation}
\theta _{F}+i\epsilon _{F}\approx -\frac{2\pi d}{c}\frac{\sigma _{xy}}{\sqrt{1+\frac{4\pi i}{\omega}\sigma _{xx}}}.
\end{equation}
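Equations (6)-(8) can be evaluated directly once $\sigma_{xx}$ and $\sigma_{xy}$ are known. The sketch below uses hypothetical complex conductivity values (not computed values for Ba$_2$NiOsO$_6$) to evaluate the Kerr angle of Eq. (6) and to check numerically the first-order expansion of $n_{\pm}$ behind Eq. (8).

```python
import numpy as np

# Gaussian units; sigma is given in units of omega, so omega = 1 below.
# The conductivities are ILLUSTRATIVE placeholders.
omega = 1.0
sxx = 2.0 + 0.8j        # hypothetical sigma_xx(omega)
sxy = 0.05 - 0.02j      # hypothetical sigma_xy(omega), |sxy| << |sxx|

# Complex Kerr angle, Eq. (6): theta_K + i*eps_K
kerr = -sxy / (sxx * np.sqrt(1 + 4j * np.pi * sxx / omega))
theta_K, eps_K = kerr.real, kerr.imag

# Exact refractive indices for the two circular polarizations
n_plus = np.sqrt(1 + 4j * np.pi / omega * (sxx + 1j * sxy))
n_minus = np.sqrt(1 + 4j * np.pi / omega * (sxx - 1j * sxy))
exact = n_plus - n_minus
# First-order expansion in sigma_xy used to obtain Eq. (8)
approx = -(4 * np.pi / omega) * sxy / np.sqrt(1 + 4j * np.pi * sxx / omega)
```

For $|\sigma_{xy}| \ll |\sigma_{xx}|$ the expansion reproduces the exact difference $n_+ - n_-$ to well below a percent, which is why Eq. (8) is a reliable shortcut for the Faraday angle.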
The anomalous Hall conductivity (AHC) is calculated based on the
Berry-phase formalism~\cite{XiaoD10}. Within this Berry-phase
formalism, the AHC ($\sigma_{ij}^{A} = J^c_i/E_j$) is given as a
BZ integration of the Berry curvature for all the occupied
(valence) bands,
\begin{eqnarray}
\sigma_{xy}^{A} = -\frac{e^2}{\hbar}\sum_{n \in VB} \int_{BZ}\frac{d{\bf k}}{(2\pi)^3}\Omega_{xy}^n({\bf k}),\nonumber \\
\Omega_{xy}^n({\bf k}) = -\sum_{n'\neq n}
\frac{2{\rm Im}[p_{nn'}^{x}p_{n'n}^{y}]}
{(\epsilon_{{\bf k}n}-\epsilon_{{\bf k}n'})^2},
\end{eqnarray}
where ${\Omega_{ij}^n({\bf k})}$ is the Berry curvature for the
$n$th band at ${\bf k}$. $J^c_i$ is the $i$ component of the
charge current density ${\bf J}^c$ and $E_j$ is the $j$-component
of the electric field ${\bf E}$. Note that the AHC is nothing but
$\sigma_{1xy}(\omega)$ in the dc limit, i.e., $\sigma_{xy}^{A} =
\sigma_{1xy}(\omega = 0)$. From the Kramers-Kronig relations, we
can obtain a sum rule for $\sigma_{1xy}(\omega = 0)$,
\begin{equation}
\sigma_{1xy}(\omega = 0) =
\frac{2}{\pi}P\int_{0}^{\infty}\frac{\sigma_{2xy}(\omega')}{\omega'}d\omega'.
\end{equation}
Putting Eq. (3) into this sum rule results in Eq. (9). Since a
large number of $k$ points is needed to obtain accurate AHCs, we use
the efficient Wannier interpolation method~\cite{WangX06, LopezMG}
based on maximally localized Wannier functions
(MLWFs)~\cite{MarzariN}. Since the energy bands around the Fermi
level are dominated by Os $t_{2g}$ orbitals, 6 MLWFs per unit cell
of Os $t_{2g}$ orbitals are constructed by fitting to the
GGA+U+SOC band structure in the energy window from -0.69 eV to
2.51 eV for the bulk, and from -0.54 eV to 1.66 eV for the
monolayer. The band structure obtained by the Wannier
interpolation agrees well with that from the GGA+U+SOC
calculation. The AHC ($\sigma_{xy}^A$) was then evaluated by
taking very dense $k$-point meshes of 200 $\times$ 200 $\times$ 200
and 200 $\times$ 200 $\times$ 1 in the BZ for bulk Ba$_2$NiOsO$_6$
and its (111) monolayer, respectively.
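The BZ integration of the Berry curvature, Eq. (9), yields an integer gap Chern number for a gapped band manifold. The sketch below carries out the same type of integration for a minimal two-band Chern insulator (the standard Qi-Wu-Zhang model), used here only as a stand-in for the Wannier-interpolated $t_{2g}$ Hamiltonian; the gauge-invariant Fukui-Hatsugai link discretization replaces the direct evaluation of Eq. (9).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def occ_state(kx, ky, m):
    """Occupied (lower-band) eigenvector of the two-band model at k."""
    H = np.sin(kx) * sx + np.sin(ky) * sy + (m + np.cos(kx) + np.cos(ky)) * sz
    _, v = np.linalg.eigh(H)          # eigenvalues in ascending order
    return v[:, 0]

def chern(m, N=40):
    """Chern number of the occupied band (Fukui-Hatsugai link method)."""
    ks = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    u = np.array([[occ_state(kx, ky, m) for ky in ks] for kx in ks])
    F = 0.0
    for i in range(N):
        for j in range(N):
            ip, jp = (i + 1) % N, (j + 1) % N
            # gauge-invariant plaquette flux from four link variables
            U = (np.vdot(u[i, j], u[ip, j]) *
                 np.vdot(u[ip, j], u[ip, jp]) *
                 np.vdot(u[ip, jp], u[i, jp]) *
                 np.vdot(u[i, jp], u[i, j]))
            F += np.angle(U)
    return int(round(F / (2.0 * np.pi)))
```

The topological ($|C| = 1$ for $|m| < 2$) and trivial ($C = 0$ for $|m| > 2$) phases of this toy model come out exactly even on a coarse mesh, which is the practical appeal of the link method over a direct sum of Eq. (9).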
\section{Results and discussion}
\subsection{Magnetic properties}
We study four magnetic configurations in both bulk Ba$_2$NiOsO$_6$
and its (111) monolayer, as illustrated in Figs. 2 and 3,
respectively. The calculated total energies of these magnetic
configurations are listed in Table I. It is clear that in both
structures the FM configuration is the ground state. Therefore, we
list in Table II the calculated magnetic moments and band gap of
only the FM state. Table II shows that in bulk Ba$_2$NiOsO$_6$,
both Ni and Os atoms have large spin magnetic moments, being 1.78
$\mu_{B}$ and 1.22 $\mu_{B}$, respectively. Nevertheless,
because of the hybridization among O $p$, Os $d$, and Ni $d$
orbitals, these spin magnetic moments fall short of the 2.0
$\mu_{B}$ expected from Ni$^{2+}$ 3$d^8$ ($t_{2g}^{6}e_{g}^{2}$;
$S=1$) and Os$^{6+}$ 5$d^{2}$ ($t_{2g}^{2}e_{g}^{0}$; $S=1$) ions.
Interestingly, both Ni and especially Os atoms have significant
orbital magnetic moments, being 0.21 $\mu_{B}$ and -0.55
$\mu_{B}$, respectively. Hund's third rule states that the spin
and orbital moments are antiparallel if the $d$ shell is less
than half-filled, and parallel otherwise. Consistent
with Hund's third rule, the Ni orbital moment is
parallel to the Ni spin moment while the Os orbital moment is
antiparallel to the Os spin moment. Consequently, because of the
large negative Os orbital moment, the total magnetic moment is
3.37 $\mu_{B}$/f.u. in bulk Ba$_2$NiOsO$_6$. This theoretical
value agrees rather well with the total magnetic moment of 3.46
$\mu_{B}$/f.u. deduced from the magnetic susceptibility
experiment~\cite{Feng16}. The calculated magnetic moments for the
(111) Ba$_2$NiOsO$_6$ monolayer are similar to those of bulk
Ba$_2$NiOsO$_6$ (see Table II).
\begin{table}\footnotesize
\caption{\label{tab:table1} The properties of bulk and (111)
monolayer Ba$_{2}$NiOsO$_{6}$ from the GGA+U calculations.
\emph{E$_{FM}$}, \emph{E$_{AFM}$}, \emph{E$_{FI1}$}, and
\emph{E$_{FI2}$} denote the total energies of the FM, AFM, FI1,
and FI2 configurations, respectively (see Figs. 2 and 3).
\emph{J$_{1}$} (\emph{d$_{Ni-Os}$}), \emph{J$_{2}$}
(\emph{d$_{Os-Os}$}), and \emph{J$_{3}$} (\emph{d$_{Ni-Ni}$})
represent the nearest Ni-Os, Os-Os, and Ni-Ni exchange coupling
parameters (interatomic distances), respectively. \emph{T$_{c}$}
is the magnetic ordering temperature.}
\begin{ruledtabular}
\begin{tabular}{ccc}
& Bulk & (111) monolayer \\ \hline
\emph{E$_{FM}$} (meV/f.u.) & 0 & 0 \\
\emph{E$_{AFM}$} (meV/f.u.) & 77.80 & 35.57 \\
\emph{E$_{FI1}$} (meV/f.u.) & 61.89 & 6.635 \\
\emph{E$_{FI2}$} (meV/f.u.) & 38.20 & 17.57 \\
\emph{d$_{Ni-Os}$} (\AA) & 4.021 & 4.103 \\
\emph{d$_{Os-Os}$} (\AA) & 5.687 & 5.696 \\
\emph{d$_{Ni-Ni}$} (\AA) & 5.687 & 5.696 \\
\emph{J$_{1}$} (meV) & 6.48 & 5.928 \\
\emph{J$_{2}$} (meV) & 2.87 & -2.787 \\
\emph{J$_{3}$} (meV) & -0.09 & -0.054 \\
\emph{T$_{c}$} (K) &$\sim$150 ($\sim$ 100\footnotemark[1])& $\sim$69\\
\end{tabular}
\end{ruledtabular}
\footnotemark[1]{Experimental value from
Ref.~[\onlinecite{Feng16}].}
\end{table}
As mentioned before, using the calculated total energies for the
four magnetic configurations, we evaluate the exchange coupling
parameters between magnetic atoms. The obtained nearest neighbor
Ni-Os ($J_1$), Os-Os ($J_2$), and Ni-Ni ($J_3$) exchange coupling
parameters together with their distances are listed in Table I.
Interestingly, in both systems the magnetic interaction between B
(Ni) and B$'$ (Os) is ferromagnetic and is rather strong. This FM
coupling between B and B$'$ cations is very rare in double
perovskite oxides~\cite{Jeng03,Wang09}. This explains why the FM
state is the ground state, quite unlike many other double
perovskite oxides in which the AFM is often
favored~\cite{Jeng03,Wang09}. The second near neighbor Os-Os
exchange coupling ($J_2$) is, however, AFM in the monolayer,
although it is still FM in the bulk (Table I). Furthermore, the
second near neighbor Ni-Ni magnetic coupling ($J_3$) is much
smaller than $J_1$. The smallness of $J_3$ could be attributed to
the much localized Ni 3$d$ orbitals in comparison with that of Os
5$d$ orbitals. Note that we also perform the total energy
calculations for the (111) Ba$_2$NiOsO$_6$ monolayer superlattice
in the zigzag-AFM and stripy-AFM configurations (see Figs. 2(c)
and 2(d) in Ref.~\cite{Hslu2018}) to estimate the next-nearest
neighbor Ni-Os coupling parameter ($J^{NN}_{Ni-Os}$). We obtain a
small $J^{NN}_{Ni-Os}$ value of 0.12 meV. This is much smaller
than the nearest-neighbor Ni-Os exchange coupling parameter ($J_1$
in Table I), and thus the $J^{NN}_{Ni-Os}$ values are not listed
in Table I.
Based on the calculated $J_1$ values, we can estimate the magnetic
ordering temperature ($T_c$) within the mean-field approximation
$k_BT_c=\frac{1}{3}zJ_1$, where $z$ is the number of nearest
Ni-Os pairs around either a Ni or an Os atom. Table I shows that the
estimated $T_c$ of 150 K for the bulk agrees quite well with the
experimental value of 100 K~\cite{Feng16}. The estimated $T_c$ of
69 K for the monolayer is smaller than that of the bulk. This
could be expected because the number of nearest Ni-Os exchange
couplings decreases from six in the bulk to three in the
monolayer.
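With the values of Table I, this mean-field estimate can be reproduced directly:

```python
# Mean-field estimate k_B T_c = (1/3) z J_1 with the J_1 values of
# Table I: z = 6 nearest Ni-Os pairs in the bulk, z = 3 in the monolayer.
k_B = 0.08617  # Boltzmann constant in meV/K

def tc_mean_field(z, J1_meV):
    """Mean-field ordering temperature in kelvin."""
    return z * J1_meV / (3.0 * k_B)

tc_bulk = tc_mean_field(6, 6.48)      # ~150 K
tc_mono = tc_mean_field(3, 5.928)     # ~69 K
```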
\begin{table}
\caption{Spin ($m_s$) and orbital ($m_o$) magnetic moments as well
as band gap ($E_g$) of ferromagnetic Ba$_2$NiOsO$_6$ and its (111)
monolayer from the GGA+U+SOC calculations. The magnetization is
along the $c$-axis.}
\begin{ruledtabular}
\begin{tabular}{ccc}
& Bulk & (111) monolayer \\ \hline
\emph{m$_{s}^{Os}$} ($\mu_B$/atom) & 1.22 & 1.22 \\
\emph{m$_{o}^{Os}$} ($\mu_B$/atom) & -0.55 & -0.50 \\
\emph{m$_{s}^{Ni}$} ($\mu_B$/atom) & 1.78 & 1.75 \\
\emph{m$_{o}^{Ni}$} ($\mu_B$/atom) & 0.21 & 0.21 \\
\emph{m$_{s}^{O}$} ($\mu_B$/f.u.) & 0.61 & 0.61 \\
\emph{m$_{o}^{O}$} ($\mu_B$/f.u.) & -0.08 &-0.14 \\
\emph{m$_{t}^{Os}$} ($\mu_B$/atom) & 0.67 (0.97\footnotemark[1]) & 0.72 \\
\emph{m$_{t}^{Ni}$} ($\mu_B$/atom) & 1.99 (2.13\footnotemark[1]) & 1.96 \\
\emph{m$_{t}^{tot}$} ($\mu_B$/f.u.) & 3.37 (3.46\footnotemark[1]) & 3.78 \\
$E_g$ (eV) & 0.22 (0.31\footnotemark[1]) & 0.37 \\
\end{tabular}
\end{ruledtabular}
\footnotemark[1]{Experimental values from
Ref.~[\onlinecite{Feng16}].} \label{MI}
\end{table}
\subsection{Electronic structure}
\begin{figure}
\includegraphics[width=8cm]{BaNiOsO6Fig4.eps}
\caption{(a, c) Relativistic band structure and (b, d) anomalous
Hall conductivity ($\sigma^A_{xy}$) of bulk Ba$_{2}$NiOsO$_{6}$
(upper panels) and its (111) monolayer (lower panels). The FM
magnetization is along the $c$ axis. The zero of energy is placed
at the top of valence bands. In (c), only the Ba$_2$NiOsO$_6$
monolayer-dominated bands are displayed (see the main text). The
symmetry of band states at the $\Gamma$ point are labeled
according to the irreducible representations listed in Tables IV
and V in Appendix C. The principal inter-band transitions and the
corresponding peaks in the $\sigma_{xy}$ in Figs. 8(c) and 9(c)
are indicated by pink and green arrows.}
\end{figure}
Now let us examine the FM electronic structure of bulk
Ba$_2$NiOsO$_6$ and its (111)
(Ba$_{2}$NiOsO$_{6}$)$_{1}$/(BaTiO$_{3}$)$_{10}$ monolayer
superlattice, which is needed for the following discussion of the
optical conductivity tensor and the magneto-optical effects. The
calculated fully relativistic and scalar-relativistic band
structures are plotted in Figs. 4 and 5, respectively. For clarity
and ease of comparison with bulk, we only show the main
contributions from monolayer Ba$_2$NiOsO$_6$ for (111)
superlattice. Furthermore, the calculated atom- and
orbital-decomposed densities of states (DOSs) for both structures
are displayed in Fig. 6. Figure 4 shows that both structures are
semiconductors with a small indirect band gap (Table II). Figure 6
indicates that the band gap falls within the spin-up Os 5$d$
$t_{2g}$ dominant bands. In bulk Ba$_2$NiOsO$_6$, the calculated
band gap of 0.22 eV is comparable to the experimental one of
$\sim$ 0.31 eV~\cite{Feng16}. Interestingly, the
scalar-relativistic band structures of bulk Ba$_2$NiOsO$_6$ and
its (111) monolayer are metallic and semi-metallic (Fig. 5),
respectively. When the SOC is included, the $t_{2g}$ states
(effective $l=1$) split into doubly degenerate occupied ($j = 3/2$)
and nondegenerate unoccupied ($j = 1/2$) states (Fig. 6).
This confirms that the SOC plays an essential role in
opening the semiconducting gap, and thus Ba$_2$NiOsO$_6$ is a
very rare FM Dirac-Mott insulator~\cite{Feng16}.
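The SOC splitting of the $t_{2g}$ manifold can be illustrated with a minimal atomic model that treats the $t_{2g}$ triplet as an effective $l = 1$ shell (a standard approximation; this toy model ignores the crystal field, exchange field, and bandwidth). With the effective sign reversal of the orbital moment in a $t_{2g}$ shell, the six states split into a fourfold $j = 3/2$ manifold below a twofold $j = 1/2$ manifold.

```python
import numpy as np

# l = 1 angular momentum matrices in the basis |m=+1>, |0>, |-1>
Lz = np.diag([1.0, 0.0, -1.0]).astype(complex)
Lp = np.zeros((3, 3), dtype=complex)
Lp[0, 1] = Lp[1, 2] = np.sqrt(2.0)          # raising operator L+
Lx = (Lp + Lp.conj().T) / 2.0
Ly = (Lp - Lp.conj().T) / 2.0j
# spin-1/2 matrices
Sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2.0
Sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2.0
Sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2.0

lam = 1.0
# Effective SOC for a t2g shell: H = -lam * L.S (the sign reversal of
# the effective orbital moment puts the j = 3/2 manifold BELOW j = 1/2).
H = -lam * (np.kron(Lx, Sx) + np.kron(Ly, Sy) + np.kron(Lz, Sz))
E = np.linalg.eigvalsh(H)
# Fourfold level at -lam/2 (j = 3/2) and twofold level at +lam (j = 1/2).
```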
\begin{figure}
\includegraphics[width=8cm]{BaNiOsO6Fig5.eps}
\caption{Scalar-relativistic band structures of bulk
Ba$_{2}$NiOsO$_{6}$ (a) and its (111) monolayer (b). The zero of
energy is placed at the top of valence bands. In panel (b), only
the Ba$_2$NiOsO$_6$ monolayer-dominated bands are displayed (see
the main text). The symmetry of band states at the $\Gamma$ point
are labeled according to the irreducible representations listed in
Tables IV and V in Appendix C. The principal inter-band
transitions and the corresponding peaks in the $\sigma_{1xx(zz)}$
in Figs. 7(a) and 7(c) are indicated by pink arrows.}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{BaNiOsO6Fig6.eps}
\caption{Os 5$d$, O, and Ni 3$d$ partial densities of states (DOS)
of bulk Ba$_{2}$NiOsO$_{6}$ (a) and its (111) monolayer (b). The
zero of energy is placed at the top of valence bands.}
\end{figure}
Generally speaking, the DOS of the bulk and the (111) monolayer
are rather similar (see Fig. 6). The energy bands near the Fermi
level are predominantly of Os 5$d$ character. The Os 5$d$ and Ni 3$d$
bands appear in the energy regions of -7.8$\sim$-3.6 eV and
-0.8$\sim$2.4 eV with significant inter-orbital mixing as well as
some small admixture of O 2$p$ states, while the O 2$p$ bands are
mainly located between them. Consequently, because of the strong
Ni 3$d$ - Os 5$d$ hybridization through the O 2$p$ orbital, the Os
5$d$ states are split into bonding and anti-bonding states. The
bonding bands occur in the energy range from -6.8 eV to -4.8 eV,
while the anti-bonding ones are located in the region from -0.8 to
1.6 eV in the vicinity of the Fermi level. However, compared with
the bulk band structure, although the bandwidths of the 6 Os 5$d$
$t_{2g}$-dominated bands near the Fermi level in the monolayer are
only slightly reduced, the bandwidths of the lower valence bands
and upper conduction bands further away from the Fermi level
become noticeably narrowed (see Figs. 4 and 5), due to the reduced
number of neighboring Os atoms in the monolayer. Significantly,
there are band crossings at the K points, forming so-called Dirac
nodal points in the (111) monolayer. In contrast, these band crossings
do not occur at the corresponding W point in bulk Ba$_2$NiOsO$_6$
(Fig. 5). This difference results in contrasting topological
properties of the two systems, as will be discussed in Sec. III E
below.
In the cubic double perovskite structure, the crystal field at the
transition metal atoms, which sit at the centers of the oxygen
octahedra and occupy the perovskite B sites alternately,
should split the $d$ states into two upper energy levels,
$e_{g}$ ($3z^{2}-r^{2}$, $x^{2}-y^{2}$), and three lower energy levels,
$t_{2g}$ ($xy$, $yz$, and $xz$). However, the electronic structure
of Ni 3$d$ in cubic Ba$_2$NiOsO$_6$ does not follow this
crystal-field picture. Figure 5(a) clearly shows that the Ni $e_{g}$
states lie lower than the Ni $t_{2g}$ states in the up-spin channel.
Additionally, the exchange splitting between the up-spin
and down-spin 3$d$ $e_{g}$ electrons on the Ni atom is about 9.7
eV, much larger than that of the Ni 3$d$ $t_{2g}$ bands of $\sim$ 1.0
eV. This is because the $e_{g}$ orbitals, whose wave functions
point directly toward the O 2$p$ orbitals, hybridize much more
strongly with the O 2$p$ states than the $t_{2g}$ orbitals do,
which shifts the bonding occupied $e_{g}$ states to lower energy
and the anti-bonding unoccupied $e_{g}$ states to higher energy.
interesting to find that the exchange splitting of Os $5d$ (up to
$\sim$ 1.2 eV) states is large, being comparable to the Ni $3d$
band exchange splitting, which is due to the unusual
renormalization of the intra-atomic exchange strength at the Os
sites arising from the Os-Ni interaction, similar to the case of
Sr$_2$FeMoO$_6$~\cite{Sarma000}.
Two different mechanisms of magnetism in double perovskite oxides
have been proposed in the earlier literature. One is the
hybridization-driven mechanism~\cite{Kanamori001}, which leads to a
negative spin polarization at the 4$d$ or 5$d$ site, that is, an
intrinsic spin splitting at the 3$d$ site and an induced, oppositely
aligned spin splitting at the 4$d$ or 5$d$ site.
However, in Ba$_2$NiOsO$_6$, the coupling between the Os 5$d$ and
Ni 3$d$ states is ferromagnetic rather than antiferromagnetic. Thus, the
hybridization-driven mechanism is not the origin of the magnetic
coupling between the Ni 3$d$ and Os 5$d$ ions in Ba$_2$NiOsO$_6$.
The other is the well-known superexchange mechanism based on
Goodenough-Kanamori (G-K) rules~\cite{Goodenough-Kanamori}. In
Ba$_2$NiOsO$_6$, Ni $t_{2g}$ orbitals are completely filled, and
this rules out the Os $t_{2g}$-Ni $t_{2g}$ interaction. Although Ni
$e_{g}$ orbitals are half-filled and Os $t_{2g}$ orbitals are
partially filled, they are orthogonal and thus do not contribute
to the magnetic exchange. The remaining superexchange interaction
is between half-filled Ni $e_{g}$ and empty Os $e_{g}$ orbitals,
which should lead to the ferromagnetic coupling. Generally
speaking, being a 5$d$ transition metal, Os has a large
$t_{2g}$-$e_{g}$ crystal-field splitting, thus driving the $e_{g}$
states out of the FM coupling picture in such systems as
Sr$_2$NiOsO$_6$ and Ca$_2$NiOsO$_6$. Nevertheless, Ni $e_{g}$ and
Os $e_{g}$ orbitals could hybridize strongly, as shown clearly in
Fig. 6. It is this interaction between the Ni $e_{g}$ and Os
$e_{g}$ orbitals that leads to the strong ferromagnetic coupling
between the Ni and Os ions in Ba$_2$NiOsO$_6$.
\begin{figure}
\includegraphics[width=8cm]{BaNiOsO6Fig7.eps}
\caption{(a, c) Real part ($\sigma$$_{1}$) and (b, d) imaginary
part ($\sigma$$_{2}$) of the diagonal elements of the optical
conductivity tensor of bulk Ba$_{2}$NiOsO$_{6}$ (a, b) and its
(111)(Ba$_{2}$NiOsO$_{6}$)$_{1}$/(BaTiO$_{3}$)$_{10}$ monolayer
superlattice (c, d).}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{BaNiOsO6Fig8.eps}
\caption{(a) Real part ($\sigma$$_{1xy}$) and (c) imaginary part ($\sigma$$_{2xy}$)
of the off-diagonal element of the optical conductivity tensor
as well as (b) Kerr rotation angle ($\theta_K$) and (d) Kerr ellipticity
($\varepsilon_K$) of bulk Ba$_{2}$NiOsO$_{6}$.
}
\end{figure}
\subsection{Optical conductivity}
We calculate the optical conductivity tensors of bulk
Ba$_2$NiOsO$_6$ and its (111) monolayer. The diagonal elements
$\sigma_{xx}$ (for in-plane electric field polarization ${\bf E} \perp c$)
and $\sigma_{zz}$ (for out-of-plane electric field
polarization ${\bf E} \parallel c$) of the optical conductivity are displayed
in Fig. 7 for both systems. Overall, the calculated spectra of the
diagonal elements for the different electric field polarizations
in bulk Ba$_2$NiOsO$_6$ are very similar, i.e., this material is
optically isotropic. In particular, they have several identical
peaks. Taking the $\sigma_{1xx}$ and $\sigma_{1zz}$ spectra as
an example, there are a small peak around 0.7 eV, a prominent twin
peak centered at 3.0 and 3.5 eV, and a broad valley from 5.6 to
8.4 eV. This optical isotropy could be expected from such highly
symmetric crystals as cubic double perovskites. However, for the
(111) superlattic, surprisingly, $\sigma$$_{1xx}$ and
$\sigma$$_{1zz}$ are also similar. The reduced symmetry in the
superlattice causes only small differences. For example, the
prominant B$_3$ peak at $\sim$4.2 eV in the $\sigma$$_{1xx}$
spectrum is only slightly higher than that in the $\sigma$$_{1zz}$
spectrum [see Fig. 7(c)] due to the reduced crystal symmetry.
Nevertheless, compared with bulk case, although the spectral lines
of (111) superlattice are similar at low frequency region, the
peaks in high energy, such as $B_{3}$ peak, are noticeably higher
and narrower. This can be attributed to the noticeably narrowed
bandwidths of the energy bands further away from the Fermi level
[see Fig. 5(b)], as mentioned in the preceding subsection.
The real ($\sigma_{1xy}$) and imaginary ($\sigma_{2xy}$) parts
of the off-diagonal element of the optical conductivity for bulk
Ba$_2$NiOsO$_6$ are shown in Figs. 8(a) and 8(c), respectively.
These spectra exhibit pronounced oscillatory peaks. Notably, a
large positive peak appears at $\sim$3.2 eV in $\sigma_{1xy}$,
and positive peaks in $\sigma_{2xy}$ emerge near 2.0 and 3.5 eV.
There are also pronounced negative peaks at $\sim$2.5 eV in
$\sigma_{1xy}$ and $\sim$2.9 eV in $\sigma_{2xy}$. Positive
(negative) $\sigma_{2xy}$ suggests that the inter-band
transitions are dominated by excitations due to left-circularly
(right-circularly) polarized light. For example, the negative
value of $\sigma_{2xy}$ around 2.9 eV suggests that inter-band
transitions induced by right-circularly polarized light are
stronger there, whereas the peaks near 2.0 and 3.5 eV indicate
the dominance of inter-band transitions due to left-circularly
polarized light.
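The sign argument above can be made concrete with a short numerical sketch. In the usual circular decomposition $\sigma_\pm = \sigma_{xx} \pm i\sigma_{xy}$, the absorptive part of $\sigma_\pm$ is $\sigma_{1xx} \mp \sigma_{2xy}$, so the sign of $\sigma_{2xy}$ directly selects which circular channel absorbs more strongly. The numbers below are purely illustrative (not the calculated spectra), and the left/right labeling depends on the sign convention:

```python
# Illustrative check of how the sign of sigma_2xy picks out the dominant
# circular-polarization channel. All numbers are hypothetical.
s1xx = 4.0e3              # absorptive diagonal conductivity (arbitrary units)
s2xy = -0.8e3             # imaginary off-diagonal part; negative, as near 2.9 eV

# With sigma_pm = sigma_xx +/- i*sigma_xy, the absorptive parts are
# Re(sigma_pm) = sigma_1xx -/+ sigma_2xy:
abs_plus  = s1xx - s2xy   # "+" circular channel
abs_minus = s1xx + s2xy   # "-" circular channel

# A negative sigma_2xy makes the "+" channel the stronger absorber.
assert abs_plus > abs_minus
```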
Broadly speaking, $\sigma_{1xy}$ and $\sigma_{2xy}$ for
the (111) Ba$_2$NiOsO$_6$ monolayer, shown in Figs. 9(a) and 9(c),
respectively, are similar to the bulk spectra, and the
positive peak positions, such as 3.2 eV in $\sigma_{1xy}$ and
$\sim$2.0 and 3.5 eV in $\sigma_{2xy}$, remain unchanged.
Nevertheless, the negative peak positions differ: negative peaks
appear at $\sim$2.2 and $\sim$4.1 eV in $\sigma_{1xy}$, and at
$\sim$2.7 and $\sim$4.3 eV in $\sigma_{2xy}$.
\begin{figure}
\includegraphics[width=8cm]{BaNiOsO6Fig9.eps}
\caption{(a) Real part ($\sigma_{1xy}$) and (c) imaginary part
($\sigma_{2xy}$) of the off-diagonal element of the optical
conductivity tensor as well as (b) Kerr rotation angle
($\theta_K$) and (d) Kerr ellipticity ($\varepsilon_K$) of the
(111) (Ba$_{2}$NiOsO$_{6}$)$_{1}$/(BaTiO$_{3}$)$_{10}$ monolayer
superlattice. }
\end{figure}
As Eqs. (2) and (3) suggest, the absorptive parts of the optical
conductivity elements, i.e., $\sigma_{1xx}$ and
$\sigma_{2xy}$, are directly related to the dipole-allowed
inter-band transitions. This allows us to understand the
origins of the main peaks in the $\sigma_{1xx}$ and
$\sigma_{2xy}$ spectra by determining the symmetries of the
calculated band states and the dipole selection rules. The
symmetries of the band states at the $\Gamma$ point of the
scalar-relativistic and relativistic band structures of bulk
Ba$_{2}$NiOsO$_{6}$ and its (111) monolayer are displayed in Figs.
4 and 5. Using the dipole selection rules (see Tables VI and VII
in Appendix C), we can assign the main peaks in
$\sigma_{1xx}$ in Figs. 7(a) and 7(c) and in $\sigma_{2xy}$ in
Figs. 8(c) and 9(c) to the inter-band transitions at the
$\Gamma$ point displayed in Figs. 4 and 5 for the two systems.
Taking bulk Ba$_{2}$NiOsO$_{6}$ as an example, we can relate the
A$_{3}$ peak at $\sim$3 eV in $\sigma_{1xx}$ [see Fig.
7(a)] to the inter-band transition mainly from the
$\Gamma_{4}^{-}$ or $\Gamma_{5}^{-}$ state of the down-spin
valence band to the conduction band state $\Gamma_{5}^{+}$ or
$\Gamma_{3}^{+}$. Of course, there may also be contributions
from inter-band transitions at other $k$ points. Note that
without SOC, these band states are doubly degenerate. When the
SOC is included, these band states split [see Fig. 4(a)], and
this results in the magnetic circular dichroism. Therefore, we
can assign the main peaks in $\sigma_{2xy}$ to the principal
inter-band transitions at the $\Gamma$ point only in the
relativistic band structure, e.g., displayed in Fig. 4(a).
In particular, we can attribute the pronounced peak N$_{3}$ at
$\sim$3.0 eV in $\sigma_{2xy}$ in Fig. 8(c) to the
inter-band transition from the $\Gamma_{5}^{-}$,
$\Gamma_{6}^{-}$, $\Gamma_{7}^{-}$, or $\Gamma_{8}^{-}$
states of the valence band to the bottom conduction band state
$\Gamma_{5}^{+}$ or $\Gamma_{8}^{+}$ shown in Fig. 4(a).
\begin{figure}
\includegraphics[width=8cm]{BaNiOsO6Fig10.eps}
\caption{(a, c) Faraday rotations ($\theta_F$) and (b, d) Faraday
ellipticities ($\varepsilon_F$) of bulk Ba$_{2}$NiOsO$_{6}$ (upper
two panels) and its (111) monolayer (lower two panels).}
\end{figure}
\subsection{Magneto-optical Kerr and Faraday effects}
After examining the electronic, magnetic, and optical properties
of bulk Ba$_2$NiOsO$_6$ and its (111) monolayer, let us now turn
our attention to their magneto-optical Kerr and Faraday effects.
The calculated complex Kerr rotation angles of bulk and (111)
monolayer Ba$_{2}$NiOsO$_{6}$ are displayed in Figs. 8 and 9,
respectively. For bulk Ba$_{2}$NiOsO$_{6}$, the Kerr rotation
angle is remarkably large, reaching up to -1.5{${}^{\circ}$} at $\sim$
0.8 eV, 1.5{${}^{\circ}$} at $\sim$ 2.3 eV and -6{${}^{\circ}$} at $\sim$
3.2 eV. These large values imply that a large MOKE exists in
bulk Ba$_{2}$NiOsO$_{6}$. As discussed already in Sec. I, this
large MOKE stems from the combined effect of the enhanced band
exchange splitting of the Os 5$d$ $t_{2g}$ orbitals caused by the
significant Ni 3$d$ - Os 5$d$ hybridization and the strong SOC of
the Os atoms~\cite{Guo96}. The shape of the Kerr rotation spectrum
for the (111) superlattice, shown in Fig. 9, is similar to that of
bulk Ba$_{2}$NiOsO$_{6}$. The notable difference between the two
systems is the amplitude of the prominent peaks. The Kerr rotation
angles are reduced in the (111) superlattice, mainly because the
(111) Ba$_{2}$NiOsO$_{6}$ monolayer has a smaller density of
magneto-optically active atoms, especially Os atoms, than the
bulk.
Now let us compare the MOKE of the two systems
with that of well-known MO materials. The Kerr rotation angles of
most 3$d$ transition metals and their compounds seldom exceed
0.5{${}^{\circ}$} except, e.g., FePt, Co$_2$Pt~\cite{Guo96}, and
PtMnSb~\cite{van Engen83}. Manganese pnictides generally have
excellent MO properties. In particular, MnBi films possess a large
Kerr rotation angle of 2.3{${}^{\circ}$} at 1.84 eV at low
temperatures~\cite{Ravindran999,di1996optical}. The famous MO
material Y$_3$Fe$_5$O$_{12}$ harbors a Kerr rotation of
0.23{${}^{\circ}$} at 2.95 eV. Owing to the strong SOC of 4$d$ and 5$d$
transition metal elements, the large MOKE has also been observed
in half-metallic double perovskites containing 4$d$ and 5$d$
elements. Among these double perovskites, Sr$_2$FeWO$_6$ exhibits
a maximum Kerr rotation of 3.87{${}^{\circ}$}~\cite{vidya2004huge}. On
the whole, the Kerr rotation angles of bulk Ba$_2$NiOsO$_6$ and
its (111) monolayer are at least comparable to those of these
well-known MO materials.
Figures 8 and 9 show that the Kerr rotation ($\theta_K$) and Kerr
ellipticity ($\varepsilon_K$) spectra in both structures resemble,
respectively, the real part ($\sigma_{1xy}$) and imaginary part
($\sigma_{2xy}$) of the off-diagonal conductivity element except
for a reversal of sign. This is not surprising because the Kerr
effect and the off-diagonal conductivity element are connected via
Eq. (6). Indeed, Eq. (6) indicates that the complex Kerr rotation
angle would be linearly related to $\sigma_{xy}$ if the
longitudinal conductivity ($\sigma_{xx}$) varies smoothly.
For photon energies below 1.0 eV, the complex Kerr rotation
angles become unphysically large because $\sigma_{xx}$, which
appears in the denominator of Eq. (6), becomes very small.
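Although Eq. (6) is not reproduced here, the divergence at small $\sigma_{xx}$ can be illustrated with the Gaussian-unit polar Kerr expression commonly used in the literature, which we assume Eq. (6) takes, $\theta_K + i\varepsilon_K = -\sigma_{xy}/(\sigma_{xx}\sqrt{1 + 4\pi i\sigma_{xx}/\omega})$; the conductivity values below are hypothetical:

```python
import cmath
import math

def kerr_angle(sigma_xx, sigma_xy, omega):
    """Complex polar Kerr angle (radians) from the standard Gaussian-unit
    expression assumed for Eq. (6). All inputs are dimensionless here; only
    the relative sizes matter for this sketch."""
    return -sigma_xy / (sigma_xx * cmath.sqrt(1 + 4j * math.pi * sigma_xx / omega))

# Hypothetical values: a small sigma_xx in the denominator blows up the Kerr
# angle, as noted for photon energies below 1.0 eV.
omega = 1.0
big   = abs(kerr_angle(1e-3 + 1e-3j, 1e-4 + 1e-4j, omega))
small = abs(kerr_angle(1.0 + 1.0j, 1e-4 + 1e-4j, omega))
assert big > small
```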
The calculated complex Faraday rotation angles for both bulk and
(111) monolayer Ba$_2$NiOsO$_6$ are displayed in Fig. 10. The
Faraday rotation spectra are rather similar to the corresponding
Kerr rotation spectra as well as the $\sigma_{xy}$ (see Figs. 8
and 9). Figures 7-9 show that the $\sigma_{xx}$ is generally much
larger than the corresponding $\sigma_{xy}$. Therefore, $n_{\pm
}=[1+{\frac{4\pi i}{\omega}}(\sigma _{xx}\pm i \sigma
_{xy})]^{1/2}$ $\approx [1+{\frac{4\pi i}{\omega}}\sigma
_{xx}]^{1/2} \mp {\frac{2\pi}{\omega}}(\sigma
_{xy}/\sqrt{1+\frac{4\pi i}{\omega}\sigma _{xx}})$. Consequently,
$\theta _{F}+i\epsilon _{F}\approx -\frac{2\pi d}{c}(\sigma
_{xy}/\sqrt{1+\frac{4\pi i}{\omega}\sigma _{xx}})$, and this
explains why the complex Faraday rotation more or less follows
$\sigma _{xy}$ (see Figs. 8, 9, and 10).
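The expansion of $n_\pm$ used above can be checked numerically; the sketch below verifies that the first-order form agrees with the exact square root whenever $|\sigma_{xy}| \ll |\sigma_{xx}|$ (all values are illustrative placeholders):

```python
import cmath
import math

def n_exact(s_xx, s_xy, omega, sign):
    # n_pm = sqrt(1 + (4*pi*i/omega) * (sigma_xx +/- i*sigma_xy))
    return cmath.sqrt(1 + (4j * math.pi / omega) * (s_xx + sign * 1j * s_xy))

def n_approx(s_xx, s_xy, omega, sign):
    # First-order expansion: sqrt(1 + 4*pi*i*sigma_xx/omega)
    #   -/+ (2*pi/omega) * sigma_xy / sqrt(1 + 4*pi*i*sigma_xx/omega)
    root = cmath.sqrt(1 + (4j * math.pi / omega) * s_xx)
    return root - sign * (2 * math.pi / omega) * s_xy / root

omega = 3.0
s_xx = 5.0 + 2.0j        # hypothetical, with |sigma_xy| << |sigma_xx|
s_xy = 0.05 + 0.02j

for sign in (+1, -1):
    exact = n_exact(s_xx, s_xy, omega, sign)
    assert abs(exact - n_approx(s_xx, s_xy, omega, sign)) / abs(exact) < 1e-3
```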
Remarkably, the maximum Faraday rotation angles are as large as
$\sim$$-250$ deg/$\mu$m at $\sim$ 3.3 eV in bulk Ba$_2$NiOsO$_6$
[see Fig. 10(a)] and $\sim$$160$ deg/$\mu$m at $\sim$ 4.1 eV in
the monolayer [see Fig. 10(c)]. As mentioned above, manganese
pnictides usually have excellent MO properties, and among them
MnBi films possess the largest Faraday rotation of $\sim 80.0$
deg/$\mu$m at 1.77 eV at low
temperatures~\cite{Ravindran999,di1996optical}. In contrast, the
famous MO material Y$_3$Fe$_5$O$_{12}$ possesses only a moderate
Faraday rotation of $0.19$ deg/$\mu$m at 2.07 eV. By substituting
Y with Bi, Vertruyen {\it et al.} obtained an enhanced Faraday
rotation of $\sim35.0$ deg/$\mu$m at 2.76 eV in
Bi$_3$Fe$_5$O$_{12}$~\cite{vertruyen08}. Also as mentioned above,
large magneto-optical effects are observed in some half-metallic
double perovskites containing 4$d$ and 5$d$ transition metals. For
example, Sr$_2$FeWO$_6$ possesses a large Faraday rotation of $45.0$
deg/$\mu$m~\cite{vidya2004huge}. Clearly, the Faraday rotation
angles for both bulk Ba$_2$NiOsO$_6$ and its (111) monolayer are
larger than those of these well-known MO materials. Therefore,
because of their excellent MO properties, these Ba$_2$NiOsO$_6$
materials could find promising applications in, e.g., MO sensors
and high-density MO data-storage devices.
\begin{figure}
\includegraphics[width=8cm]{BaNiOsO6Fig11.eps}
\caption{(a) Scalar-relativistic and (c) relativistic band
structures as well as (d) anomalous Hall conductivity
($\sigma_{xy}^A$) of the (111) Ba$_{2}$NiReO$_{6}$ monolayer. The part
in the box in (a) is enlarged and displayed in (b). The FM magnetization
is along the $c$-axis and the Fermi level is at 0 eV.}
\end{figure}
\begin{figure}
\includegraphics[width=8cm]{BaNiOsO6Fig12.eps}
\caption{(a) Fully-relativistic and (b) scalar-relativistic band
structures of the (111) Ba$_{2}$NiOsO$_{6}$ monolayer. The Fermi
level is at 0 eV. In panel (b), red and blue curves represent
up-spin and down-spin bands, respectively.}
\end{figure}
\subsection{Anomalous Hall conductivity and topological phases}
As mentioned before, bulk Ba$_2$NiOsO$_6$ and its (111)
superlattice are found to be FM semiconductors when the SOC is
included. We thus could expect that the band gaps would be
topologically nontrivial and that these systems could be Chern
insulators. To verify the topological nature of the insulating
gap, we calculate the anomalous Hall conductivity (AHC)
($\sigma_{xy}^A$) for the two structures. For a three-dimensional
(3D) quantum Hall insulator, $\sigma_{xy}^A$ = $n$ $e^{2}/hc$,
where $c$ is the lattice constant along the $c$ axis normal to
the plane of the longitudinal and Hall currents and $n$ is an
integer known as the Chern number
($n_{C}$)~\cite{Halperin87,Zhou2016}. For a normal FM
insulator, however, $\sigma_{xy}^A$ = $0$. The calculated AHC of
bulk Ba$_2$NiOsO$_6$ and its (111) superlattice are displayed in
Figs. 4(b) and 4(d), respectively. Unfortunately, $\sigma_{xy}^A$
is zero within the band gap in both systems, that is, the gaps are
topologically trivial and both systems are just normal FM insulators.
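For orientation, the magnitude of a quantized 3D AHC can be estimated directly from $\sigma_{xy}^A = n\,e^{2}/hc$; the lattice constant below is a hypothetical placeholder, not the actual superlattice period:

```python
# Size of the quantized 3D anomalous Hall conductivity
# sigma_xy^A = n * e^2 / (h * c) for Chern number n, expressed in S/cm.
# The out-of-plane lattice constant is illustrative only.
e = 1.602176634e-19       # elementary charge, C
h = 6.62607015e-34        # Planck constant, J s
c_lat = 8.0e-10           # hypothetical out-of-plane lattice constant, m

def ahc_quantized(n_chern):
    """Quantized 3D AHC in S/m for Chern number n_chern."""
    return n_chern * e**2 / (h * c_lat)

sigma = ahc_quantized(2) / 100.0   # convert S/m -> S/cm
assert 900 < sigma < 1000          # roughly 1e3 S/cm for an 8 Angstrom period
```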
Here our design principle for engineering topological insulators
is to find a material with its scalar-relativistic band structure
that possesses Dirac points in the BZ, and then examine whether an
energy gap would be opened at these Dirac points when the SOC is
turned on. For bulk Ba$_2$NiOsO$_6$, an ideal cubic perovskite
structure, the Ni and Os ions sit on a simple cubic lattice with
the Ni 3$d$ or Os 5$d$ orbitals being split into twofold degenerate $e_{g}$ and threefold degenerate
$t_{2g}$ levels by the octahedral crystal-field. However, such a
lattice geometry usually does not support Dirac points, as one can
see from the calculated scalar-relativistic band structure in Fig.
5(a). This explains why bulk Ba$_2$NiOsO$_6$ remains
topologically trivial when the SOC is switched on. Recently, Xiao
et al. discovered~\cite{Xiao11} that in a metallic (111) ABO$_{3}$
perovskite bilayer, in which TM B ions form a buckled honeycomb
lattice [see, e.g., Fig. 1(c)], the TM B $e_{g}$ and $t_{2g}$
bands would form Dirac points at the K point in the BZ. As a
result, when the SOC is switched on, a topologically nontrivial
energy gap would be opened at the Dirac points and hence the
system would be a topological insulator. Indeed, it has been
subsequently demonstrated by many researchers (see, e.g., Refs.
~\cite{Chandra17,Hslu2018,Baidya16} and references therein) that
the topological phase can be achieved in either a (111) simple
perovskite bilayer or a (111) double-perovskite monolayer. In the
(111) Ba$_2$NiOsO$_6$ monolayer, Ni and Os ions form a honeycomb
lattice [see, e.g., Fig. 1(c)], and thus Dirac points appear at
the K point below the Fermi level [Fig. 5(b)]. Therefore, when the
SOC is turned on, the energy gap opened at the Dirac point is
topologically nontrivial [see Fig. 4(b)]. For comparison, we have
also calculated the relativistic band structure for the (001)
Ba$_2$NiOsO$_6$ monolayer. As expected, the (001) monolayer is a
topologically trivial metal.
Interestingly, the calculated $\sigma_{xy}^A$ in the (111)
Ba$_2$NiOsO$_6$ superlattice is 2.0 $e^{2}/hc$ within the band gap
opened at these Dirac points with the SOC turned on [Fig. 4(d)],
i.e., the band gap is topologically nontrivial. Therefore, within
the rigid band model, one may speculate that the quantum anomalous
Hall phase would appear in the (111) Ba$_2$NiOsO$_6$ superlattice
when doped with one hole. There are several ways of hole doping
such as chemical substitution~\cite{Richter2017} and electrostatic
gating~\cite{Huang2018,Jiang2018}. Here we explore both the
chemical substitution and electrostatic gating.
Specifically, we first consider three kinds of chemical
substitutions, namely, replacing one Ba$^{2+}$ ion with an alkali
metal (X = Li, Na, K) atom as BaXNiOsO$_6$, replacing the Ni$^{2+}$
ion with a transition metal (Y = Sc, Mn, Co, Cu, Ru) atom as
Ba$_2$YOsO$_6$, and substituting Re$^{6+}$ for Os$^{6+}$.
Unfortunately, the resultant band structures for the second kind
of substitutions are all metallic. For the first and third kinds
of substitutions, the resultant compounds are semiconductors.
Nonetheless, the semiconducting gaps are all topologically
trivial, i.e., $\sigma_{xy}^A$ is zero within the band gap. As an
example, we display the calculated scalar-relativistic and
relativistic band structures as well as AHC ($\sigma_{xy}^A$) of
the (111) Ba$_2$NiReO$_6$ superlattice in Fig. 11.
Figure 11(d) shows clearly that the $\sigma_{xy}^A$ of the
Ba$_2$NiReO$_6$ superlattice is zero within the semiconducting
gap. We note that the intersection at the K point near the Fermi
level does not exist even without the SOC [Fig. 11 (b)]. This may
explain why the (111) Ba$_2$NiReO$_6$ superlattice is
topologically trivial. We also simulate the hole doping by
electrostatic gating. Here, we perform self-consistent electronic
structure calculations with one less valence electron per f.u.
Unfortunately, the resultant band structure becomes metallic,
indicating that the rigid band model is inapplicable here because
the perturbation due to hole doping is too strong.
\begin{table*}\footnotesize
\caption{\label{tab:table2} Calculated total and atomic spin (\emph{m$_{s}$}) and orbital (\emph{m$_{o}$}) magnetic moments
as well as band gap (\emph{E$_{g}$}) of bulk Ba$_2$NiOsO$_6$ as a
function of effective on-site Coulomb repulsions $U_{Os}$ and
$U_{Ni}$ in the GGA + U + SOC method.}
\begin{ruledtabular}
\begin{tabular}{ccccccccccccc}
$U_{Os}$&$U_{Ni}$&\emph{m$_{s}^{Os}$}&\emph{m$_{s}^{Ni}$}&\emph{m$_{s}^{O}$}
&\emph{m$_{o}^{Os}$}&\emph{m$_{o}^{Ni}$}&\emph{m$_{o}^{O}$}&\emph{m$_{s}^{tot}$}&\emph{m$_{o}^{tot}$}&\emph{E$_{g}$}\\
(eV) & (eV) & ($\mu_B$/atom)&($\mu_B$/atom)&($\mu_B$/f.u.)&($\mu_B$/atom)&($\mu_B$/atom)&($\mu_B$/f.u.)&($\mu_B$/f.u.)&($\mu_B$/f.u.)&(eV)\\
\hline
1 & 3 &1.155 & 1.712 & 0.681 & -0.505 & 0.195 &-0.079 & 3.738 & -0.368 & 0.0 \\
1 & 4 &1.150 & 1.746 & 0.650 & -0.505 & 0.205 &-0.079 & 3.732 & -0.359 & 0.0\\
1 & 5 &1.142 & 1.779 & 0.621 & -0.505 & 0.217 &-0.078 & 3.725 & -0.346 & 0.0\\
2 & 3 &1.232 & 1.710 & 0.672 & -0.547 & 0.196 &-0.085 & 3.771 & -0.413 & 0.2\\
2 & 4 &1.229 & 1.744 & 0.642 & -0.549 & 0.205 &-0.084 & 3.768 & -0.405 & 0.2\\
2 & 5 &1.224 & 1.777 & 0.614 & -0.550 & 0.211 &-0.083 & 3.764 & -0.399 & 0.2\\
3 & 3 &1.314 & 1.708 & 0.655 & -0.570 & 0.197 &-0.086 & 3.793 & -0.435 & 0.4\\
3 & 4 &1.312 & 1.743 & 0.624 & -0.571 & 0.206 &-0.085 & 3.791 & -0.427 & 0.4\\
3 & 5 &1.308 & 1.776 & 0.597 & -0.572 & 0.216 &-0.084 & 3.789 & -0.416 & 0.4\\
\end{tabular}
\end{ruledtabular}
\end{table*}
\section{Conclusions}
In conclusion, by performing systematic first-principles density
functional calculations, we have investigated magnetism,
electronic structure, magneto-optical effects and topological
property of cubic double perovskite Ba$_{2}$NiOsO$_{6}$ and its
(111) (Ba$_{2}$NiOsO$_{6}$)$_{1}$/(BaTiO$_{3}$)$_{10}$ monolayer
superlattice. Interestingly, we find that both structures are rare
FM semiconductors, and the ferromagnetism is driven by strong FM
coupling between neighboring Ni and Os atoms, which in turn arises
from the FM superexchange mechanism due to the abnormally strong
hybridization between half-filled Ni $e_{g}$ and unoccupied Os
$e_{g}$ orbitals. The strong SOC on the Os atom not only opens the
semiconducting gap but also results in a large negative orbital
magnetic moment on the Os atom, thus leading to a total magnetic
moment (3.37 $\mu_B$/f.u.) of less than 4.0
$\mu_B$/f.u.~\cite{Feng16}, expected from the Ni$^{2+}$ 3$d^8$
($\emph{t}_{2g}^{6}$ $\emph{e}_{g}^{2}$; $S=1$) and Os$^{6+}$
5$d^{2}$($\emph{t}_{2g}^{2}$ $\emph{e}_{g}^{0}$; $S=1$) ions. We
also find that because of the enhanced effective intra-atomic
exchange splitting of the Os atoms caused by the Ni 3$d$ - Os 5$d$
hybridization and the strong SOC on the Os sites,
Ba$_{2}$NiOsO$_{6}$ exhibits large MO effects. In particular, the
Kerr and Faraday rotations can be as large as 6$^{\circ}$ and
$250$ deg/$\mu$m, respectively, which are much larger than those
of the best-known MO materials. For the (111)
(Ba$_{2}$NiOsO$_{6}$)$_{1}$/(BaTiO$_{3}$)$_{10}$ superlattice, a
large Kerr rotation of $\sim2^{\circ}$ and a large Faraday
rotation of about $160$ deg/$\mu$m are also predicted, although
they are smaller than those of bulk Ba$_{2}$NiOsO$_{6}$, mainly
due to the reduced density of magneto-optically active atoms,
especially Os atoms, in the superlattice. These theoretical
findings therefore
suggest that cubic double perovskite Ba$_{2}$NiOsO$_{6}$ and its
(111) superlattice are excellent materials for not only
semiconductor-based spintronics but also magneto-optical devices.
Finally, the calculated AHC reveals that the band gap just below
the Fermi level in the monolayer superlattice is topologically
nontrivial with the gap Chern number of 2 although both structures
are ordinary FM semiconductors. This indicates that the (111)
Ba$_{2}$NiOsO$_{6}$ and related 5$d$ double-perovskite monolayers
may provide an interesting material platform for exploring
magnetic topological phases and phase transitions. This work is
thus expected to stimulate further experimental and theoretical
investigations on these interesting materials.
\begin{acknowledgments}
The authors acknowledge support from the Ministry of Science and
Technology, National Center for Theoretical Sciences, and Academia
Sinica of the Republic of China. H.-S.L. is also supported by the
National Natural Science Foundation of China under Grant No.
11704046.
\end{acknowledgments}
\section*{Appendix A: GGA+U+SOC calculations}
The calculated spin, orbital, and total magnetic moments for Ni
and Os atoms as well as band gap for bulk Ba$_{2}$NiOsO$_{6}$ from
GGA + U + SOC are listed in Table III. Clearly, the Coulomb
repulsion U on both atoms has little effect on the calculated
magnetic moments. However, the Coulomb repulsion U at Os site
(U$_{Os}$) plays an essential role in opening the band gap, and
the band gap increases with U$_{Os}$. The obtained band gap agrees
well with the published experimental value of 0.31 eV when
U$_{Os}$ ranges from 2.0 to 3.0 eV. Therefore, the effective Coulomb
repulsions U$_{Os}$ = 2.0 eV and U$_{Ni}$ = 5.0 eV are adopted in
this paper.
\section*{Appendix B: Band structure of (111) Ba$_{2}$NiOsO$_{6}$ monolayer superlattice}
The full band structure of the (111) Ba$_{2}$NiOsO$_{6}$ monolayer
superlattice is displayed in Fig. 12(a) (with SOC) and Fig. 12(b)
(no SOC). The corresponding band structure with only the
monolayer-dominated bands being displayed, is given in Figs. 4(b)
and 5(b), respectively.
\section*{Appendix C: Symmetry analysis}
\begin{table}
\caption{\label{tab:table3} Symmetry adapted Ni and Os basis functions of the $O_{h}$ point group at
$\Gamma$ for bulk Ba$_2$NiOsO$_6$. $E$ denotes the degeneracy of the band states.}
\begin{ruledtabular}
\begin{tabular}{ccc}
Symmetry & $E$ & Basis functions \\
\hline
$\Gamma_{1+}$ ($\Gamma_{1}$, A$_{1g}$) & 1 &$s$ \\
$\Gamma_{3+}$ ($\Gamma_{12}$, E$_{g}$) & 2 &$x^{2}-y^{2}, 2z^{2}-x^{2}-y^{2}$ \\
$\Gamma_{4+}$ ($\Gamma_{15}^{'}$, T$_{1g}$) & 3 &$xy(x^{2}-y^{2}), yz(y^{2}-z^{2}), zx(z^{2}-x^{2})$\\
$\Gamma_{5+}$ ($\Gamma_{25}^{'}$, T$_{2g}$) & 3 &$xy, yz, zx$\\
$\Gamma_{2-}$ ($\Gamma_{2}^{'}$, A$_{2u}$) & 1 &$xyz$ \\
$\Gamma_{4-}$ ($\Gamma_{15}$, T$_{1u}$) & 3 &$x, y, z$ \\
$\Gamma_{5-}$ ($\Gamma_{25}$, T$_{2u}$) & 3 &$z(x^{2}-y^{2}), x(y^{2}-z^{2}), y(z^{2}-x^{2})$
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}
\caption{\label{tab:table4} Symmetry adapted Ni and Os basis functions of the $C_{3v}$ point group at
$\Gamma$ for the (Ba$_2$NiOsO$_6$)$_1$/(BaTiO$_3$)$_{10}$ superlattice. $E$ denotes the degeneracy of the band states.}
\begin{ruledtabular}
\begin{tabular}{ccccccccccccc}
Symmetry & $E$ & Basis functions \\
\hline
$\Gamma_{{1}}$ (A$_{1}$) & 1 &$z$; $x^{2}+y^{2}$; $z^{2}$\\
$\Gamma_{{2}}$ (A$_{2}$) & 1 & $R_{z}$ \\
$\Gamma_{{3}}$ (E) & 2 &$x, y$; $x^{2}-y^{2}, xy$; $xz, yz$; $R_{x}, R_{y}$
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{table}
\caption{\label{tab:table5} Dipole selection rules between the band states of point group $O_{h}$ at $\Gamma$.
Relationships between the single group and double group representations are:
$\Gamma_1^+\rightarrow\Gamma_6^+$; $\Gamma_3^+\rightarrow \Gamma_5^+ , \Gamma_8^+$;
$\Gamma_4^+\rightarrow\Gamma_5^+ , \Gamma_6^+ , \Gamma_7^+ , \Gamma_8^+$;
$\Gamma_5^+\rightarrow\Gamma_6^+ , \Gamma_7^+ , \Gamma_8^+$;
$\Gamma_2^-\rightarrow\Gamma_7^-$;
$\Gamma_4^-\rightarrow\Gamma_5^- , \Gamma_6^- , \Gamma_7^- , \Gamma_8^-$;
$\Gamma_5^-\rightarrow\Gamma_6^- , \Gamma_7^- , \Gamma_8^-$.
\cite{Eberhardt}}
\begin{ruledtabular}
\begin{tabular}{cc}
& $E\perp c$ $\&$ $E\parallel c$ \\ \hline
Single group& $\Gamma_{1+} \longleftrightarrow \Gamma_{4-} $ \\
& $\Gamma_{3+} \longleftrightarrow \Gamma_{4-}, \Gamma_{5-}$ \\
& $\Gamma_{4+} \longleftrightarrow \Gamma_{1-}, \Gamma_{3-}, \Gamma_{4-}, \Gamma_{5-}$ \\
& $\Gamma_{5+} \longleftrightarrow \Gamma_{2-}, \Gamma_{3-}, \Gamma_{4-}, \Gamma_{5-}$ \\
& $\Gamma_{2-} \longleftrightarrow \Gamma_{5+} $ \\
& $\Gamma_{4-} \longleftrightarrow \Gamma_{1+}, \Gamma_{3+},\Gamma_{4+}, \Gamma_{5+}$ \\
& $\Gamma_{5-} \longleftrightarrow \Gamma_{2+}, \Gamma_{3+},\Gamma_{4+}, \Gamma_{5+}$ \\
Double group& $\Gamma_{1+} \longleftrightarrow \Gamma_{4-} $ \\
& $\Gamma_{3+} \longleftrightarrow \Gamma_{4-}, \Gamma_{5-}$ \\
& $\Gamma_{4+} \longleftrightarrow \Gamma_{1-}, \Gamma_{3-}, \Gamma_{4-}, \Gamma_{5-}$ \\
& $\Gamma_{5+} \longleftrightarrow \Gamma_{2-}, \Gamma_{3-}, \Gamma_{4-}, \Gamma_{5-}$ \\
& $\Gamma_{2-} \longleftrightarrow \Gamma_{5+} $ \\
& $\Gamma_{4-} \longleftrightarrow \Gamma_{1+}, \Gamma_{3+},\Gamma_{4+}, \Gamma_{5+}$ \\
& $\Gamma_{5-} \longleftrightarrow \Gamma_{2+}, \Gamma_{3+},\Gamma_{4+}, \Gamma_{5+}$
\end{tabular}
\end{ruledtabular}
\end{table}
Bulk Ba$_2$NiOsO$_6$ has the space group $Fm\bar{3}m$ with its
point group $O_{h}$ having 48 symmetry operations. The
site-symmetry point group for Ni and Os atoms is also $O_{h}$.
The (111) (Ba$_2$NiOsO$_6$)$_1$/(BaTiO$_3$)$_{10}$ monolayer
superlattice, however, has the space group $P3m1$ with its point
group $C_{3v}$ including six symmetry operations. The
site-symmetry point group for Ni and Os atoms is $C_{3v}$. To
determine the symmetry of the band states at the center
($\Gamma$-point) of the Brillouin zone (BZ), the symmetry adapted
basis functions, formed from the atomic orbitals localized at the
Ni and Os sites, are derived using the projection method of group
theory~\cite{Dresselhaus09}, as listed in Tables V and VI for the
$O_{h}$ and $C_{3v}$ point groups, respectively. By comparing the
calculated orbital characters of the band states at the
$\Gamma$-point with the symmetry-adapted basis functions (Tables V
and VI), one can determine the symmetries of the $\Gamma$-point
band states for bulk Ba$_2$NiOsO$_6$ and its
(Ba$_2$NiOsO$_6$)$_1$/(BaTiO$_3$)$_{10}$ superlattice, as shown in
Figs. 4 and 5.
\begin{table}
\caption{\label{tab:table6} Dipole selection rules between the band states of point group $C_{3v}$ at $\Gamma$.
Relationships between the single group and double group representations are:
$\Gamma_1\rightarrow\Gamma_5, \Gamma_6$; $\Gamma_2\rightarrow\Gamma_5, \Gamma_6$;
$ \Gamma_3\rightarrow \Gamma_4, \Gamma_5, \Gamma_6$.\cite{Eberhardt}}
\begin{ruledtabular}
\begin{tabular}{ccc}
& $E\perp c$ & $E\parallel c$ \\ \hline
Single group & $\Gamma_{1} \longleftrightarrow \Gamma_{3}$ & $\Gamma_{1} \longleftrightarrow \Gamma_{1}$ \\
& $\Gamma_{2} \longleftrightarrow \Gamma_{3}$ & $\Gamma_{2} \longleftrightarrow \Gamma_{2}$ \\
& $\Gamma_{3} \longleftrightarrow \Gamma_{1}, \Gamma_{2}, \Gamma_{3} $ & $\Gamma_{3} \longleftrightarrow \Gamma_{3}$ \\
Double group & $\Gamma_{1} \longleftrightarrow \Gamma_{3}$ & $\Gamma_{1} \longleftrightarrow \Gamma_{1}$ \\
& $\Gamma_{2} \longleftrightarrow \Gamma_{3}$ & $\Gamma_{2} \longleftrightarrow \Gamma_{2}$ \\
& $\Gamma_{3} \longleftrightarrow \Gamma_{1}, \Gamma_{2}, \Gamma_{3} $ & $\Gamma_{3} \longleftrightarrow \Gamma_{3}$
\end{tabular}
\end{ruledtabular}
\end{table}
Given the known symmetries of the band states at a $k$-point in
the BZ, the possible direct inter-band transitions can be worked
out using the dipole selection rules. The dipole selection rules
for the $O_{h}$ and $C_{3v}$ point groups\cite{Eberhardt} are
listed in Tables VI and VII, respectively.
Using these selection rules, we assign the prominent peaks in the
optical conductivity (Figs. 7, 8 and 9) of bulk Ba$_2$NiOsO$_6$
and its (Ba$_2$NiOsO$_6$)$_1$/(BaTiO$_3$)$_{10}$ superlattice to
the principal inter-band transitions at the $\Gamma$ point, as
shown in Figs. 4 and 5.
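For reference, the single-group $C_{3v}$ selection rules of Table VII can be encoded as a small lookup, which makes the peak-assignment procedure mechanical (the sketch below simply transcribes Table VII, using integer irrep labels for $\Gamma_{1}$-$\Gamma_{3}$):

```python
# C_3v single-group dipole selection rules (Table VII), encoded as a lookup
# from an initial-state irrep to the set of dipole-allowed final-state irreps,
# for the two polarizations E perp c and E parallel c.
ALLOWED = {
    "E_perp_c": {1: {3}, 2: {3}, 3: {1, 2, 3}},
    "E_par_c":  {1: {1}, 2: {2}, 3: {3}},
}

def allowed(initial, final, polarization):
    """True if a dipole transition initial -> final is allowed for the
    given polarization ('E_perp_c' or 'E_par_c')."""
    return final in ALLOWED[polarization].get(initial, set())

# Gamma_1 -> Gamma_3 is dipole-allowed only for in-plane polarization:
assert allowed(1, 3, "E_perp_c") and not allowed(1, 3, "E_par_c")
```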
\section{Introduction}
\label{sect:intro}
The earliest stages of the process leading to the formation of a low-mass, Sun-like star are expected to be associated with supersonic (around 100 km\,s$^{-1}$) collimated jets. This ejection phenomenon is invoked to remove angular momentum from the protostar-disk system and allow the material to accrete from the disk onto the central object \citep[see, e.g., ][ and references therein]{frank14}.
Jets, in turn, accelerate the dense material of the cloud surrounding the protostar creating slower ($\sim 10$ km\,s$^{-1}$) molecular outflows, which are observable on a large scale (typically 0.1 pc) mainly through CO low-J rotational lines \citep[see, e.g., ][]{lada85}.
Although these jets are thought
to be driven by a magneto-centrifugal process which extracts and accelerates the material from the rotating star-disk system, the launching mechanism is still far from being clear.
As a consequence, what is also unclear is the region the jet is launched from: whether it comes from inside the dust sublimation radius at fractions of au from the star or from a larger region of the accretion disk.
In the first case, the molecular jet may be driven by the stellar surface \citep["stellar wind", ][]{glassgold91} or the inner disk region, either by an X-wind \citep{shang07} or by a dust-free magneto-hydrodynamical (MHD) disk wind as was shown in the recent modeling carried out by \citet{tabone20}. In the second case, rather, the molecular jet originates from an extended disk region as in the models of dusty magnetized disk winds \citep{pudritz07,panoglou12}.
In the last 15 years there have been a number of surveys in the (sub-)millimeter range aimed at addressing different aspects of the star-formation process at its earliest stages, that is, the protostellar multiplicity, the magnetic field topology in star-forming regions, and the presence of disks and outflows. These surveys mainly targeted continuum emission and CO low-J transitions at intermediate resolution ($3-4\arcsec$) \citep[e.g., ][]{jorgensen07,hull14,lee-k16,tobin16a,tobin18} and have shown that large-scale outflow emission is commonly associated with protostellar objects. However, the lack of angular resolution did not always allow us to investigate whether all protostellar sources in multiple systems are associated with outflows and to reveal the outflow-accelerating engine, that is, the high-velocity collimated jet, which is directly ejected from the star-disk system and, therefore, is key in regulating the accretion and ejection process and the angular momentum removal.
To date, there have only been a few detailed studies at sub-arcsecond resolution that have targeted the primary molecular jet in a few prototypical protostellar objects, such as in HH 211, HH 212, L1448-C, L1157, or B335, by using selective jet tracers, such as SiO \citep{cabrit07b,cabrit12,codella07,codella14b,hirano06,hirano10,podio15,podio16,lee07a,lee07b,lee09b,lee17b,bjerkeli19}.
These studies revealed that protostellar jets may be as collimated as atomic jets from Class II sources observed at optical/NIR wavelengths \citep{dougados00,bacciotti00,woitas02,cabrit07a,agraamboage11} and they show a similar correlation between the mass ejection and mass accretion rates. In a few cases, they also show signatures of rotation \citep[e.g., ][and references therein]{lee17b,bjerkeli19,lee20}.
Recently, a survey of five molecular outflows in the Serpens star forming region was performed with ALMA \citep{tychoniec19}. However, a statistical study at sub-arcsecond angular resolution of a large sample of protostellar jets is still lacking.
It is now time to enlarge the sample of observed protostellar jets and to perform a statistical study of jet and outflow occurrence and properties using selected molecular line tracers in the mm spectral window.
To this aim, here we perform a survey of 21 protostars in the most active star-forming regions visible in the northern hemisphere, conducted with the IRAM PdB array in the context of the CALYPSO large program\footnote{\url{http://irfu.cea.fr/Projects/Calypso}\\
\url{https://www.iram-institute.org/EN/content-page-317-7-158-240-317-0.html}}, targeting three lines which are typical outflow and jet tracers, that is, CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$).
The first goal of this effort is to answer a simple but crucial question: whether jets and, more generally, mass ejection phenomena are commonly observed in Class 0 protostars. The second goal is to derive the jet properties, that is, the jet velocity and width, and the molecular abundances, which are crucial for understanding from what region of the disk-protostar system jets are launched and for constraining models of jet launching, as well as for reconstructing the mass ejection and mass accretion history from the protostellar to the pre-main sequence stage.
Finally, a third goal is to obtain a large database for follow-up observations at extremely high spatial resolution, such as those performed for the HH 212 jet \citep[e.g., ][]{codella14b,lee17b}.
The paper is structured as follows. The sample of protostars covered by the CALYPSO Large Program and analyzed in the context of this paper is presented in Sect. \ref{sect:sample}. Then in Sect. \ref{sect:obs}, we describe the acquired observations and the data reduction process. The methodology we applied to establish the occurrence of outflows and jets and to derive their properties and the obtained results are presented in Sect. \ref{sect:results}. Then in Sect. \ref{sect:discussion}, we discuss the occurrence and the properties of the detected protostellar jets and we compare them with those of jets from pre-main sequence sources. Finally, we summarize our conclusions in Sect. \ref{sect:conclusions}.
\section{The sample}
\label{sect:sample}
The CALYPSO survey was carried out with the IRAM-PdB interferometer towards 16 fields centered on known Class 0 protostars (i.e., 10$^4$--10$^5$~yr old solar analogue protostars, \citealt{andre00,andre10}), observed at 94 GHz, 219 GHz, and 231 GHz.
The targeted sources are all located in the Gould Belt clouds at $d< 450$ pc.
Seven of the targeted fields are located in the most active sites of star formation in the Perseus cloud: that is, L1448 (2A, NB, and C objects) and NGC1333 (IRAS2A, SVS13, IRAS4A, and IRAS4B).
In addition, we observed four sources located in different portions of the Serpens Main and South regions: SerpM-S68N, SerpM-SMM4, SerpS-MM18, and SerpS-MM22.
The selected sample also contains:
(i) three Class 0 sources in Taurus (IRAM04191, L1521-F, and L1527),
(ii) L1157, located in the Cepheus constellation, and driving the prototypical chemically rich outflow, and
(iii) the GF9--2 protostar, located in the east-west filamentary dark cloud GF9.
Several of these fields are associated with more than one protostar (e.g., L1448-NA and L1448-NB are in the same field) or with multiple systems (e.g., L1448-NB1 and L1448-NB2) identified from the analysis of the millimeter continuum emission in the CALYPSO maps by \citet{maury19}. Therefore, the CALYPSO sample consists of 25 Class 0 protostars, four Class I protostars, and one continuum source whose nature is still unknown (VLA3), as summarized in Table \ref{tab:sample}.
In the table, we report for each source the position of the continuum peak at 1.3~mm and the systemic velocity ($V_{\rm sys}$) extracted from the CALYPSO dataset \citep{maury19,maret20}, the distance (d), the internal luminosity ($L_{\rm int}$), the mass of the envelope ($M_{\rm env}$), and the Class (0 or I).
Following \citet{dunham08}, the internal luminosity is derived by Ladjelate et al. (in prep.) using the 70 $\mu$m measurements provided by the {\it Herschel} Gould Belt survey \citep{andre10} and is a more reliable probe of the accretion luminosity than the bolometric luminosity, $L_{\rm bol}$, because it is not affected by external heating of the envelope by the interstellar radiation field, $L_{\rm ext}$ ($L_{\rm bol} = L_{\rm int} + L_{\rm ext}$). The latter adds on average a few tenths of a solar luminosity and significantly contributes to $L_{\rm bol}$\, in sources with low accretion luminosity.
Protostars belonging to multiple systems are grouped together in Table \ref{tab:sample} and the same systemic velocity and distance as the primary protostar (marked in boldface) are assumed.
For the binaries with no mass estimate of their individual envelopes, the value corresponds to the fraction of the total envelope mass in proportion to the peak flux densities at 1.3~mm (see \citealt{maury19,belloche20}).
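The flux-ratio splitting of the total envelope mass described above can be sketched as follows. This is a minimal illustration; the helper name and the 2:1 flux ratio are hypothetical, not values taken from \citet{maury19}:

```python
def split_envelope_mass(m_env_total, peak_fluxes):
    """Divide a total envelope mass among the components of a multiple
    system in proportion to their 1.3 mm continuum peak flux densities."""
    total_flux = sum(peak_fluxes)
    return [m_env_total * flux / total_flux for flux in peak_fluxes]

# Hypothetical example: a 1.8 Msun envelope split between two components
# with a 2:1 peak-flux ratio yields 1.2 and 0.6 Msun
masses = split_envelope_mass(1.8, [100.0, 50.0])
```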
As shown in Table \ref{tab:sample} the CALYPSO sample covers a wide range of internal luminosities ($L_{\rm int}$\, from 0.035 to 47 L$_{\odot}$), and envelope masses ($M_{\rm env}$ from 0.5 to 9.9 M$_{\odot}$), which makes it a unique laboratory for the study of the occurrence and properties of collimated jets as a function of the properties of the driving protostars.
Of the 30 sources identified by \citet{maury19} in the CALYPSO observations, 7 are tentative protostellar candidates, reported in parentheses in Table \ref{tab:sample}.
Among these, 2 are part of a close binary system, that is, they are located less than one beam away from the primary (L1448-2Ab and L1448-NB2). In these cases, we consider the binary system as a single source for the assessment of the outflow or jet occurrence, as it is not possible at the resolution of our dataset to establish whether only one or both components of the close binary systems are driving an outflow or jet.
Moreover, two Class 0 protostars and one Class I protostar, indicated by $^{**}$ in Table \ref{tab:sample}, lie outside of the primary beam at 231 GHz (L1448-NW, SVS13C, and SerpM-S68Nc). This causes a strong attenuation of the emission, possibly leading to the non-detection of faint line emission from jets. In fact, previous observations at lower resolution covering a larger field of view report slowly outflowing extended gas in CO ($2-1$) and CO ($1-0$) for two of these sources (the Class 0 protostars L1448-NW and SVS13C, \citealt{lee15,plunkett13}), which is not detected in our CALYPSO CO maps due to attenuation.
As we want to assess the jet occurrence in a homogeneous sample where all sources are observed down to roughly the same sensitivity threshold, we excluded these three sources from our statistics.
Also, VLA3 is not included in the sample for the outflow or jet survey because its protostellar nature and its internal luminosity cannot be derived as it cannot be separated from the much brighter source SVS13A in the Herschel maps. No outflow associated with this continuum source was detected in the CALYPSO maps presented by \citet{lefevre17}.
Based on the above criteria, the sample analyzed to investigate the jet occurrence consists of 21 Class 0 and 3 Class I protostars, which are those reported in Table \ref{tab:jet-occurrence}.
Due to their small number, for the Class I sources the analysis is limited to establishing the detection rate of outflows and jets and deriving their position angle (PA) (see Sect. \ref{sect:detection-rate}); they are not further discussed in this paper, which focuses on the properties of jets from Class 0 protostars.
\begin{table*}
\caption{Properties of the protostars identified from the analysis of the CALYPSO continuum maps (see \citealt{maury19}).}
\begin{center}
\begin{tabular}{lccccccc}
\hline \hline
Source & $\alpha({\rm J2000})$$^a$ & $\delta({\rm J2000})$$^a$ & $V_{\rm sys}$$^b$ & $d^c$
& $L_{\rm int}$$^d$ & $M_{\rm env}$$^e$ & Class$^f$ \\
& ($^h$ $^m$ $^s$) & ($\degr$ $\arcmin$ $\arcsec$) & (km s$^{-1}$) & (pc)
& (L$_{\odot}$) & (M$_{\odot}$) & \\
\hline
{\bf L1448-2A} & 03 25 22.405 & +30 45 13.26 & +4.2 & 293 & 4.7 (0.5) & 1.2 & 0 \\
\vspace{0.3cm}
(L1448-2Ab) & 03 25 22.355 & +30 45 13.16 & & & $<4.7$ & 0.6 & 0 \\
{\bf L1448-NB1} & 03 25 36.378 & +30 45 14.77 & +4.9 & 293 & $<3.9$ & 3.3 & 0 \\
(L1448-NB2) & 03 25 36.315 & +30 45 15.15 & & & $3.9$ & 1.6 & 0 \\
L1448-NA & 03 25 36.498 & +30 45 21.85 & & & 6.4 (0.6) & 0.8 & I \\
\vspace{0.3cm}
L1448-NW$^{**}$ & 03 25 36.680 & +30 45 33.86 & & & -- & -- & 0 \\
{\bf L1448-C} & 03 25 38.875 & +30 44 05.33 & +5.1 & 293 & 11 (1) & 1.9 & 0 \\
\vspace{0.3cm}
L1448-CS & 03 25 39.132 & +30 43 58.04 & & & 3.6 & 0.16& I \\
\vspace{0.3cm}
{\bf IRAS2A1} & 03 28 55.570 & +31 14 37.07 & +7.5 & 293 & 47 (5) & 7.9 & 0 \\
{\bf SVS13B} & 03 29 03.078 & +31 15 51.74 & +8.5 & 293 & 3.1 (1.6) & 2.8 & 0 \\
SVS13A & 03 29 03.756 & +31 16 03.80 & & & 44 (5) & 0.8 & I \\
SVS13C$^{**}$ & 03 29 01.980 & +31 15 38.14 & & & -- & -- & 0 \\
\vspace{0.3cm}
(VLA3) & 03 29 03.378 & +31 16 03.33 & & & -- & -- & unknown\\
{\bf IRAS4A1} & 03 29 10.537 & +31 13 30.98 & +6.3 & 293 & $<4.7$ & 9.9 & 0 \\
\vspace{0.3cm}
IRAS4A2 & 03 29 10.432 & +31 13 32.12 & & & 4.7 (0.5) & 2.3 & 0 \\
{\bf IRAS4B1} & 03 29 12.016 & +31 13 08.02 & +6.8 & 293 & 2.3 (0.3) & 3.3 & 0 \\
\vspace{0.3cm}
(IRAS4B2) & 03 29 12.841 & +31 13 06.84 & & & $<0.16$ & 1.4 & 0 \\
\vspace{0.3cm}
{\bf IRAM04191} & 04 21 56.899 & +15 29 46.11 & +6.7 & 140 & 0.05 (0.01)& 0.5 & 0 \\
\vspace{0.3cm}
{\bf L1521-F} & 04 28 38.941 & +26 51 35.14 & +6.6 & 140 & 0.035 (0.01)& 0.7 & 0 \\%Vsys from Codella+97
\vspace{0.3cm}
{\bf L1527} & 04 39 53.875 & +26 03 09.66 & +5.7 & 140 & 0.9 (0.1) & 1.2 & 0 \\
{\bf SerpM-S68N}& 18 29 48.091 & +01 16 43.41 & +9.2 & 436 & 11 (2) & 11 & 0 \\
$^*$SerpM-S68Nb & 18 29 48.707 & +01 16 55.53 & & & 1.8 (0.2) & -- & 0 \\
\vspace{0.3cm}
$^*$(SerpM-S68Nc)$^{**}$& 18 29 48.811& +01 17 04.24& & & 1.4 (0.2) & -- & I \\
{\bf SerpM-SMM4a}& 18 29 56.716 & +01 13 15.65 & +8.8 & 436 & 2.2 (0.2)& 6.7 & 0 \\%Vsys from CO 3-2 Dionatos+ 2010
\vspace{0.3cm}
(SerpM-SMM4b) & 18 29 56.525 & +01 13 11.58 & & & $<2.6$ & 1.0 & 0 \\
{\bf SerpS-MM18a}& 18 30 04.118 & --02 03 02.55& +8.1 & 350 & 13 (4) & 4.5 & 0 \\
\vspace{0.3cm}
(SerpS-MM18b) & 18 30 03.541 & --02 03 08.33& & & 16 (4) & 0.9 & 0 \\
\vspace{0.3cm}
{\bf SerpS-MM22} & 18 30 12.310 & --02 06 53.56& +6.2 & 350 & 0.4 (0.2) & 0.9 & 0 \\
\vspace{0.3cm}
{\bf L1157} & 20 39 06.269 & +68 02 15.70 & +2.6 & 352 & 4.0 (0.4) & 3.0 & 0 \\
{\bf GF9-2} & 20 51 29.823 & +60 18 38.44 & -3.0 & 474 & 1.7 & 2.8 & 0 \\%Vsys from 13CO 2-1
\hline
\end{tabular}
\end{center}
$^a$ Positions of the 1.3 mm continuum peak emission are extracted from the CALYPSO dataset \citep{maury19}.
$^b$ Systemic velocities, $V_{\rm sys}$, correspond to the mean velocity of C$^{18}$O ($2-1$) emission on the source continuum peak position as fit by \citet{maret20}, except for GF9-2 (from $^{13}$CO ($2-1$)), IRAM04191 \citep{belloche02}, L1521-F (from NH$_3$ (1,1), \citealt{codella97}), and SerpM-SMM4 (from CO ($3-2$), \citealt{dionatos10b}). For multiple systems the systemic velocity of the primary is reported.
$^c$ Distances: \citet{ortiz-leon18a} for the Perseus sources; \citet{zucker19} for the Taurus sources and L1157; \citet{ortiz-leon17,ortiz-leon18b} for SerpM; Palmeirim et al., in prep. for SerpS; C. Zucker, priv. comm. for GF9-2. For multiple systems the distance of the primary is reported.
$^d$ Internal luminosities are derived by Ladjelate et al., in prep., from the 70 $\mu$m flux from the {\it Herschel} Gould Belt survey observations at 8$\arcsec$ spatial resolution \citep{andre10}, except for GF9-2 for which we use the value by \citet{wiesemeyer97} rescaled to the distance given in the fifth column. The uncertainty is in parentheses when available, and is larger for SVS13B because of the proximity to SVS13A. For SerpM-SMM4b the upper limit is given by the bolometric luminosity \citep{aso19}.
$^e$ Envelope mass from \citet{maury19} and references therein. For the binaries with no mass estimate of their individual envelopes, the value corresponds to the fraction of the total envelope mass in proportion of the peak flux densities at 1.3 mm given by \citet{maury19}. The masses have been re-scaled to the distances given in column 5.
$^f$ The classification as Class 0, I, or candidate protostellar object (in parentheses) is based on \citet{maury19}.
$^*$ For SerpM-S68Nb and SerpM-S68Nc we follow the same naming as \citet{maury19}. The names of the two sources are inverted with respect to the classification by \citet{williams00}, also followed by \citet{dionatos10b}.
$^{**}$ Protostellar companions that lie outside of the primary beam at 231 GHz ($\sim 21\arcsec$).
\label{tab:sample}
\end{table*}
\section{Observations and data reduction}
\label{sect:obs}
The CALYPSO sources were observed with the IRAM-PdBI during several tracks between September 2010 and March 2013 using the six-antenna array in the most extended (A) and intermediate (C) configurations. The shortest and longest baselines are 16 m and 762 m, respectively, allowing us to recover emission at scales from $\sim$ 8$\arcsec$ down to $\sim$ 0$\farcs$4.
WideX\footnote{\url{https://www.iram-institute.org/EN/content-page-120-4-35-47-118-120.html}}
backends were used to cover the full 3.8 GHz spectral window at the spectral resolution of 2 MHz ($\sim$ 2.6 km s$^{-1}$ at 1.3 mm) for three spectral setups centered at 231, 219, and 94 GHz (corresponding to 1.30, 1.37, and 3.19~mm, respectively).
In addition, higher resolution backends ($\sim$ 0.1 km s$^{-1}$, subsequently smoothed to 1 km s$^{-1}$) were employed to observe the CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$) lines (see Table \ref{tab:lines}).
The phase root mean square (rms) was typically $\le$ 50$\degr$ and 80$\degr$ for the A and C tracks, respectively, the precipitable water vapor (pwv) was 0.5--1 mm (A) and $\sim$ 1--2 mm (C), and the system temperatures were usually $\sim$ 100--160 K (A) and 150--250 K (C).
Calibration was carried out following standard procedures, using GILDAS-CLIC\footnote{\url{https://www.iram.fr/IRAMFR/GILDAS/}}. Strong quasars such as 3C273 and 3C454.3 were used to calibrate the correlator bandpass, while the absolute flux density scale was mainly estimated by observing MWC349 and 3C84, with a final uncertainty less than 15\%.
The continuum emission was removed from the visibility tables to produce continuum-free line tables.
Self-calibration was performed on the source continuum emission for all sources with bright dust continuum emission, that is, with continuum peak flux $>80$ mJy beam$^{-1}$ at 231 GHz (see Table 3 of \citealt{maury19}) and the self-calibrated phase solutions were applied to the continuum-subtracted line tables.
We carried out imaging with the GILDAS/MAPPING software, adopting a robust weighting parameter of 1\footnote{For more information on weighting schemes performed by the GILDAS/MAPPING software, see
\url{https://www.iram.fr/IRAMFR/GILDAS/doc/html/map-html/node32.html}
}
and obtaining typical synthesized beams of $0\farcs4-0\farcs9$, except for the sources with the weakest continuum emission in the sample (i.e., IRAM04191, L1521F, and GF9-2), for which we used natural weighting to maximize the sensitivity to point sources, obtaining synthesized beams of $0\farcs7-1\farcs0$ \citep{maury19}. These beam sizes correspond to an angular resolution of $\sim 130-220$ au for the sources in Perseus ($d=293$ pc, \citealt{ortiz-leon18a}) and Taurus (two out of the three sources in Taurus have weak continuum emission, hence the larger beam sizes are compensated by the smaller distance, $d=140$ pc, \citealt{zucker19}, with the exception of L1527, for which an angular resolution of $\sim 60-130$ au is reached), and of $\sim 220-350$ au for the sources located in Serpens ($d=436$ pc for Serpens M, \citealt{ortiz-leon17,ortiz-leon18b}, $d=350$ pc for Serpens S, Palmeirim et al., in prep.), Cepheus ($d=352$ pc, \citealt{zucker19}), and GF9 ($d=474$ pc, C. Zucker, priv. comm.). The synthesized beam and corresponding angular resolution for the 16 targeted fields are summarized in Tabs. \ref{tab:beam_CO}, \ref{tab:beam_SO}, and \ref{tab:beam_SiO} and shown in Fig. \ref{fig:jets1}.
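The conversion from synthesized beam to linear scale used above follows the small-angle relation, by which an angle of $1\arcsec$ subtends 1 au at a distance of 1 pc. A minimal sketch, with representative (not per-field) beam values:

```python
def beam_to_au(beam_arcsec, distance_pc):
    """Linear scale (au) subtended by an angle (arcsec) at a given
    distance (pc), using the small-angle relation 1'' x 1 pc = 1 au."""
    return beam_arcsec * distance_pc

# Illustrative values: beams of ~0.45''-0.75'' at the Perseus distance
# of 293 pc correspond to roughly 130-220 au
scale_lo = beam_to_au(0.45, 293)
scale_hi = beam_to_au(0.75, 293)
```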
The present paper is based on the analysis of the 1.3 mm and 1.4 mm observations of CO ($2-1$), SiO ($5-4$), and SO ($5_6-4_5$), which are three standard tracers of molecular jets and outflows, hence they are ideal when tackling the array of open questions on protostellar jets presented in Sect. \ref{sect:intro}. As the jet and outflow emission is faint and spread over a wide range of blue-shifted and red-shifted velocities, we preferentially analyzed the WideX datacubes, which have a lower spectral resolution, in order to increase the signal-to-noise ratio of the line emission. The WideX datacubes were resampled at a resolution of 3.25 km\,s$^{-1}$ (CO ($2-1$)) and 3.4 km\,s$^{-1}$ (SO ($5_6-4_5$) and SiO ($5-4$)), reaching a typical rms noise per channel for the final continuum-subtracted datacubes of $\sim$ 2--10 mJy beam$^{-1}$. In some cases, we analyzed the higher resolution datacubes (1 km\,s$^{-1}$). The spectral resolution and the rms noise per channel of the CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$) datacubes for the 16 targeted fields are listed in Tables \ref{tab:beam_CO}, \ref{tab:beam_SO}, and \ref{tab:beam_SiO}.
\begin{table}
\caption{Properties of the lines targeted by the CALYPSO survey.}
\begin{tabular}[h]{ccccc}
\hline
\hline
Line & Frequency$^{a}$ & $E_{\rm up}$$^{a}$ & log$_{10}$($A_{\rm ij}$)$^{a}$ & $n_{\rm cr}$$^{b}$ \\
& (MHz) & (K) & (s$^{-1}$) & (cm$^{-3}$) \\
\hline
CO ($2-1$) & 230538.000 & 16.6 & $-6.2$ & $7.3 \times 10^3$ \\
SO ($5_6-4_5$) & 219949.442 & 35.0 & $-3.9$ & $7.7 \times 10^5$ \\
SiO ($5-4$) & 217104.980 & 31.3 & $-3.3$ & $1.6 \times 10^6$ \\
\hline
\end{tabular}\\
\small
$^{a}$ Molecular parameters from the CDMS database \citep{muller01}. \\
$^{b}$ The critical densities are computed at T=100 K, using collisional rate coefficients from \citet{yang10} (CO), \citet{lique06} (SO), and \citet{balanca18} (SiO). \\
\label{tab:lines}
\end{table}
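In the simple two-level approximation, the critical densities in Table \ref{tab:lines} follow from the ratio of the Einstein coefficient to the collisional de-excitation rate coefficient. A minimal sketch, where the rate coefficient is an assumed round number rather than a value from the collisional data cited in note $^{b}$:

```python
def critical_density(a_ij, k_ij):
    """Two-level critical density n_cr = A_ij / k_ij (cm^-3), with A_ij
    in s^-1 and the collisional rate coefficient k_ij in cm^3 s^-1."""
    return a_ij / k_ij

# SiO (5-4): log10(A_ij) = -3.3 (see the table above); with an assumed
# k_ij ~ 3e-10 cm^3 s^-1 at 100 K this gives n_cr ~ 1.7e6 cm^-3,
# close to the tabulated 1.6e6 cm^-3
n_cr_sio = critical_density(10**-3.3, 3e-10)
```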
\section{Methodology and results}
\label{sect:results}
In this section, we present the methodology applied in analyzing the CALYPSO CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$) line cubes and the obtained results (discussed in Sect. \ref{sect:discussion}).
The integrated line emission maps for all the Class 0 sources in our sample are presented in Sect. \ref{sect:detection-rate}, Figure \ref{fig:jets1}. From the maps, we establish the detection rate of outflows and jets (Fig. \ref{fig:jet-occurrence}), estimate the position angles of the detected flows (Table \ref{tab:jet-occurrence}), and extract position-velocity (PV) diagrams of the emission along them (Fig. \ref{fig:PV-block1}).
For the sources where an SiO jet is detected at $>10\sigma$ in the integrated maps\footnote{\label{note1} The contours of the integrated maps shown in Fig. \ref{fig:jets1} start at $5\sigma$ with steps of $5\sigma$; therefore, a $5\sigma$ detection corresponds to one contour and a $10\sigma$ detection to two contours. The $5\sigma$ level of the integrated maps is different for each source and tracer, as it depends on the rms noise per channel of the corresponding datacube (reported in Tabs. \ref{tab:beam_CO}, \ref{tab:beam_SO}, and \ref{tab:beam_SiO}) and on the velocity interval over which the emission is integrated. The velocity interval of integration and the corresponding $5\sigma$ level are labeled in each map in Fig. \ref{fig:jets1}; the latter ranges from $10-50$ mJy\,km\,s$^{-1}$\,beam$^{-1}$, when line emission is detected in only one channel, to $\sim100-1700$ mJy\,km\,s$^{-1}$\,beam$^{-1}$, when the emission is integrated over several velocity channels (up to 21 channels, i.e., $\sim 70$ km\,s$^{-1}$, for L1448-C). The $5\sigma$ levels reported in Fig. \ref{fig:jets1} are rounded.}, we estimate the jet properties, that is, their velocity and spatio-kinematical structure (Sect. \ref{sect:kinematics}, Fig. \ref{fig:distri-vrad}), the width and opening angle (Sect. \ref{sect:jet-width}, Figs. \ref{fig:jet-width_all}, \ref{fig:jet-width-main}, \ref{fig:jet-width-hv}), the molecular column densities and abundances (Sect. \ref{sect:jet-abundances}, Table \ref{tab:jets-energetics}, Fig. \ref{fig:jet-abundances}), and the jet energetics, that is, the jet mass-loss and momentum rates and mechanical luminosities (Sect. \ref{sect:jet-energetics}, Table \ref{tab:jets-energetics}, Fig. \ref{fig:jet-energetics}).
\begin{figure*}
\begin{centering}
\includegraphics[width=13.5cm]{l1521f_map-eps-converted-to.pdf}
\vspace{0.2cm}\\
\includegraphics[width=13.5cm]{iram04191_map-eps-converted-to.pdf}
\vspace{0.2cm}\\
\includegraphics[width=13.5cm]{n1333-irs4b2_map-sc-eps-converted-to.pdf}
\vspace{0.2cm}\\
\includegraphics[width=13.5cm]{aqu-mms2_map-eps-converted-to.pdf}
\vspace{0.2cm}\\
\includegraphics[width=13.5cm]{l1527_map-sc-eps-converted-to.pdf}
\caption{Integrated maps of the CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$) emission for the Class 0 protostars of the CALYPSO sample ({\it left, center, and right panels}). Continuum at 1.3~mm (grey scale) and integrated line emission at systemic, blue-, and red-shifted velocities (green, blue, and red contours) are shown. The systemic velocity (one channel) and the velocity of the first and last channels over which blue- and red-shifted emission was integrated (in km\,s$^{-1}$) are labeled in the top-left corner (in green, blue, and red, respectively). When the emission is detected in a single channel, its central velocity is labeled. The 5$\sigma$ intensity of the corresponding integrated emission (in mJy km\,s$^{-1}$\, beam$^{-1}$) is labeled in the top-right corners with the same colour coding. The 5$\sigma$ intensity of the continuum (in mJy beam$^{-1}$) is also labeled in black. The contours are from 5$\sigma$ with steps of 5$\sigma$. When the emission is faint the contours are from 3$\sigma$ with steps of 3$\sigma$ and the corresponding values are indicated in parentheses. The black stars (or triangles) indicate the positions of the protostars (or candidate protostars) identified by \citet{maury19}, and the black solid line shows the jet or outflow PA. The beam size is shown in the bottom-left corner.}
\label{fig:jets1}
\end{centering}
\end{figure*}
\begin{figure*}
\setcounter{figure}{0}
\begin{centering}
\includegraphics[width=13.5cm]{gf9-2_map-eps-converted-to.pdf}
\vspace{0.2cm}\\
\hspace{0.4cm}
\includegraphics[width=13.1cm]{serp-68nb_map-eps-converted-to.pdf}
\vspace{0.2cm}\\
\includegraphics[width=13.3cm]{n1333-irs4b1_map-sc-eps-converted-to.pdf}
\vspace{0.2cm}\\
\includegraphics[width=13.5cm]{serp-smm4_map-sc-eps-converted-to.pdf}
\vspace{0.2cm}\\
\includegraphics[width=13.5cm]{svs13-b_map-sc-eps-converted-to.pdf}
\caption{{\it Continued}}
\label{fig:jets2}
\end{centering}
\end{figure*}
\begin{figure*}
\setcounter{figure}{0}
\begin{centering}
\includegraphics[width=13.5cm]{l1448-n_map-sc-eps-converted-to.pdf}
\vspace{0.2cm}\\
\includegraphics[width=13.5cm]{l1157_map-sc-eps-converted-to.pdf}
\vspace{0.2cm}\\
\includegraphics[width=13.5cm]{n1333-irs4a_map-sc-eps-converted-to.pdf}
\vspace{0.2cm}\\
\includegraphics[width=13.5cm]{l1448-2a_map-sc-eps-converted-to.pdf}
\vspace{0.2cm}\\
\includegraphics[width=13.5cm]{l1448-c_map-sc-eps-converted-to.pdf}
\caption{{\it Continued}}
\label{fig:jets3}
\end{centering}
\end{figure*}
\begin{figure*}
\setcounter{figure}{0}
\begin{centering}
\includegraphics[width=13.5cm]{serp-68n_map-eps-converted-to.pdf}
\vspace{0.2cm}\\
\includegraphics[width=13.5cm]{aqu-mms1_map-sc-eps-converted-to.pdf}
\vspace{0.2cm}\\
\includegraphics[width=13.5cm]{n1333-ir2a_map-sc-eps-converted-to.pdf}
\caption{{\it Continued}}
\label{fig:jets4}
\end{centering}
\end{figure*}
\subsection{Detection rate of outflows and jets}
\label{sect:detection-rate}
\begin{figure*}
\centering
\includegraphics[width=.95\textwidth]{jet-occurence-frac.pdf}
\caption{Number of sources associated with outflows and jets as traced by CO ($2-1$) (black, {\it left}), SO ($5_6 - 4_5$) (red, {\it middle-left}), and SiO ($5-4$) (blue, {\it middle-right}) as a function of the internal luminosity, $L_{\rm int}$, of the 21 Class 0 protostars in the CALYPSO sample. The detection rate for each $L_{\rm int}$ bin (in logarithmic scale) is reported above the histogram, while the detection rate for the whole sample is labeled on top of each panel.
The detection rate of jets and outflows for $L > L_{\rm int}$ is shown in the right panel, with the same colour coding. The grey histogram shows the fraction of sources for $L > L_{\rm int}$.}
\label{fig:jet-occurrence}
\end{figure*}
\begin{table*}
\caption{Detection of outflows and jets as traced by CO ($2-1$), SO ($5_6 - 4_5$), and SiO ($5-4$) lines in the CALYPSO sample of Class 0 and I protostars.}
\begin{tabular}[h]{cccccccc}
\hline
\hline
Source & $L_{\rm int}$ & COMs$^{a}$ & Disk$^{b}$ & \multicolumn{3}{c}{Outflow/Jet$^{c}$} & PA$_{\rm jet/outflow}$$^{d}$ \\
& (L$_{\odot}$) & & & CO & SO & SiO & ($\degr$)\\
\hline
\multicolumn{8}{c}{{\bf Class 0}}\\
L1521-F & $0.035 (0.01)$ & & & Y & & & $+260$(B)$^{e}$ \\
IRAM04191 & $0.05 (0.01)$ & & & Y & & & $+20$(R) \\
(IRAS4B2) & $<0.16$ & & & Y* & Y* & Y* & $-99$ \\
SerpS-MM22 & $0.4 (0.2)$ & & & Y & & & $+230$ \\
L1527 & $0.9 (0.1)$ & & Y & Y & D & & $-90$$^{e}$\\
GF9-2 & $1.7$ & & & Y &D/E?& & $50$ \\
SerpM-S68Nb & $1.8 (0.2)$ & & & Y* & & Y & $+108$ \\
SerpM-SMM4a & $2.2 (0.2)$ & & & Y & & & $0$(B)$^{e}$ \\
IRAS4B1 & $2.3 (0.3)$ & Y & y & Y & Y & Y & $+165$ \\
(SerpM-SMM4b) & $<2.6$ & & & Y & Y & Y & $+10$(B)/$+135$(R)\\
SVS13B & $3.1 (1.6)$ & & y & Y* & Y* & Y & $+167$ \\
L1448-NB1 (+NB2) & $3.9$ & & y & Y & Y & Y & $-80$ \\
L1157 & $4.0 (0.4)$ & y & & Y &D/E?& Y & $+163$ \\
IRAS4A1 & $<4.7$ & & & Y & Y & Y & $+180$ \\
IRAS4A2 & $4.7 (0.5)$ & Y & & Y & Y & Y & $+182$ \\
L1448-2A (+2Ab) & $4.7 (0.5)$ & & & Y &D/E?& Y* & $+118^{f}$ \\
L1448-C & $11 (1)$ & Y & Y & Y & Y & Y & $-17$ \\
SerpM-S68N & $11 (2)$ & y & & Y & Y & Y & $-45$ \\
SerpS-MM18a & $13 (4)$ & Y & y & Y & Y & Y & $+188$ \\
(SerpS-MM18b) & $16 (4)$ & & & Y &D/E?& & $+205$(R) \\
IRAS2A1 & $47 (5)$ & Y & y & Y & Y & Y & $+205$ \\
\hline
\multicolumn{8}{c}{{\bf Class I}}\\
L1448-CS & $3.6$ & & & Y & Y & Y & $+60$(R) \\
L1448-NA & $6.4 (0.6)$ & & & Y$^{e}$ & & & $+40$$^{e}$\\
SVS13A (+VLA3) & $44 (5)$ & Y & & Y & Y & Y & $+155$ \\
\hline
\end{tabular}\\
\small
$^{a}$ emission from three ("y") or more than three ("Y") complex organic molecules (COMs) based on the CALYPSO survey \citep{belloche20}.\\
$^{b}$ "y" indicates disk candidates, i.e., sources which show a velocity gradient perpendicular to the jet axis (within $\pm 45\degr$) at a few hundred au scales in one of the disk tracers ($^{13}$CO ($2-1$), C$^{18}$O ($2-1$), and SO ($5_6-4_5$)). "Y" indicates the detection of a Keplerian rotating disk \citep[based on the CALYPSO survey, see Table 3 by ][]{maret20}.\\
$^{c}$ "Y" indicates outflow/jet blue- and/or red-shifted emission detected in CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$) at $> 10\sigma$ in the integrated maps. The asterix ("Y*") indicates emission at $5-10\sigma$. For SO ($5_6-4_5$): "D" indicates emission from the disk, "D/E?" indicates compact emission at low blue- and red-shifted velocities, and with a velocity gradient perpendicular to the jet (except for L1448-2A), originating from the inner envelope or the disk. \\
$^{d}$ The jet or outflow position angles are given for the blue-shifted lobe from North to East. If the blue- and red-shifted lobes have different PA or if only one of the two lobes is detected, this is indicated by the label (B) or (R).\\
$^{e}$ For L1521-F, L1527, and SerpM-SMM4a, we do not detect SiO and SO, and the CO emission does not show a clear structure. The reported PAs are taken from \citet{tokuda16,hogerheijde98,tobin12,aso18}. For L1448-NA we do not detect emission in any of the tracers, and the PA is from CO ($2-1$) observations by \citet{lee15}.\\
$^{f}$ The jet PA is taken to be in the middle of the CO cavities.\\
\label{tab:jet-occurrence}
\end{table*}
To assess the presence of outflows and jets associated with the protostars identified in the CALYPSO survey (\citealt{maury19}, see Table \ref{tab:sample}), we analyzed the CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$) datacubes and searched for emission at blue- and red-shifted velocities with respect to the source systemic velocity, $V_{\rm sys}$, reported in Table \ref{tab:sample}.
Maps of the CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$) emission are produced by integrating the datacubes over the blue- and red-shifted velocity channels where line emission is detected at $\ge5\sigma$.
Figure \ref{fig:jets1} presents the integrated line emission maps for all the sources in the CALYPSO sample. The $V_{\rm LSR}$ velocity of the first and last channels over which the line emission is integrated is labeled (only one velocity value is given when emission is detected only in one channel), along with the 5$\sigma$ intensity of the integrated emission (on the top-left and top-right corners of each panel, respectively).
We consider an outflow to be detected if the CO ($2-1$) map shows resolved blue- and/or red-shifted emission at $\ge5\sigma$ associated with the source in the integrated maps\textsuperscript{\ref{note1}}, while blue- or red-shifted SiO ($5-4$) emission is used as a probe of collimated jets. If SO ($5_6-4_5$) is detected at $\ge5\sigma$ along the same direction and over a similar velocity range as SiO, the jet is considered to be detected in SO as well.
If SiO is detected, the jet position angle (PA$_{\rm jet}$) is determined from the SiO peaks located closest to the source; when SiO is not detected, the outflow PA (PA$_{\rm outflow}$) is instead inferred from the distribution of the CO ($2-1$) emission.
Then, position-velocity diagrams of the CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$) emission are extracted along the outflow or jet PA and shown in Fig. \ref{fig:PV-block1}.
The detection of outflows and jets, that is, of blue- and red-shifted emission in the CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$) lines, in the CALYPSO sample of Class 0 and I protostars and the estimated PAs are summarized in Table \ref{tab:jet-occurrence}.
The sources are listed by increasing internal luminosity, as $L_{\rm int}$ is a probe of the accretion luminosity, which, in turn, is expected to correlate with the mass ejection rate and hence with the jet brightness \citep[see, e.g., ][]{hartigan95}.
In Cols. 3 and 4, we report whether the source is associated with a hot corino or a disk identified by the analysis of the source spectra and of the $^{13}$CO ($2-1$), C$^{18}$O ($2-1$), and SO ($5_6-4_5$) line maps, respectively \citep{belloche20,maret20}.
Cols. 5, 6, and 7 report the detection of blue- and red-shifted emission in the three jet tracers (CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$)), while Col. 8 reports the position angle of the detected emission (PA$_{\rm jet/outflow}$). For L1521-F, L1527, and SerpM-SMM4a, we do not detect emission in the SiO and SO lines, and the CO emission does not show a clear structure. Therefore, the position angle reported in Table \ref{tab:jet-occurrence} is taken from previous studies \citep{tokuda16,hogerheijde98,tobin12,aso18} (see Sect. \ref{app:notes-on-sources} for the notes on these sources).
Figure \ref{fig:jet-occurrence} shows the number of detections and the detection rate of jets and outflows in each of the three tracers (CO ($2-1$), SO ($5_6-4_5$), SiO ($5-4$)) as a function of the source internal luminosity, $L_{\rm int}$, for the Class 0 protostars in our sample. The figure indicates that outflow emission in the CO ($2-1$) line maps is detected in 21 Class 0 protostars out of 21.
Emission in SiO ($5-4$) is detected in 14 sources, which means that at least 67\% of the Class 0 protostars drive an SiO jet.
Finally, 11 of the 14 protostars with an SiO jet show emission along the jet PA in SO ($5_6-4_5$), indicating that about 79\% of the SiO jets are also detected in the SO line (with a detection rate of SO jets of 52\% over the whole sample of 21 Class 0 protostars).
Five more sources out of the 21 Class 0 protostars show blue- and red-shifted emission in SO ($5_6-4_5$) (L1527, GF9-2, L1157, L1448-2A, and SerpS-MM18b). In these sources, however, the spatial and velocity distribution of the SO emission does not agree with that of CO ($2-1$) (and of SiO ($5-4$) in the case of L1157), which probes the outflow (see the integrated maps in Fig. \ref{fig:jets1} and the position-velocity diagrams in Fig. \ref{fig:PV-block1}). The possible origin of the SO ($5_6-4_5$) emission in these five sources is discussed in Sect. \ref{sect:discussion}.
Concerning the three Class I protostars in the CALYPSO sample, blue- or red-shifted CO ($2-1$) emission (or both) is detected in two sources out of three. The exception is the Class I source L1448-NA, for which no line emission is detected in our CALYPSO line maps. However, previous lower resolution observations of CO ($2-1$) show evidence of slow outflowing gas \citep{lee15}. Taking this previous detection into account, CO outflows are detected in three Class I protostars out of three; hence, all Class I sources are associated with outflows. SiO and SO jet emission is detected in two of them (L1448-CS and SVS13A) but not in L1448-NA, hence the detection rate of SiO jets is 67\%, the same as for Class 0 protostars.
We note, however, that the emission from the L1448-CS jet overlaps with the bright emission of the jet driven by L1448-C, preventing us from deriving the jet properties (see Appendix \ref{app:notes-on-sources}). On the other hand, the morphology and properties of the jet associated with SVS13A based on the CALYPSO data are already presented in \citet{lefevre17}.
Therefore, the properties of the jets from the three Class I sources in our sample are not presented in the following sections.
\subsection{Jet spatio-kinematical structure}
\label{sect:kinematics}
Based on the integrated maps in Fig. \ref{fig:jets1} and the PV diagrams in Fig. \ref{fig:PV-block1} we investigate the spatio-kinematical properties of the flows for the 12 Class 0 sources that exhibit an SiO jet detected at $>10\sigma$ in the integrated maps (see Table \ref{tab:jet-occurrence}).
For these jets, the CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$) line spectra are extracted at the position of the blue- and red-shifted SiO emission peaks (knots) located closest to the driving protostar, denoted as B and R (see Fig. \ref{fig:spec1} in Appendix \ref{app:spectra}). The RA and Dec offsets of the innermost SiO knots, B and R, and their distance from the driving source are given in Table \ref{tab:fluxes}.
The line spectra in Fig. \ref{fig:spec1} show several emission components. SiO ($5-4$), which is commonly used as a selective probe of the jet, emits mainly at high velocity (HV), that is, at velocities of $15-80$ km\,s$^{-1}$\, with respect to the source systemic velocity; for a median jet inclination of $60\degr$ to the line of sight, these correspond to deprojected velocities of $30-160$ km\,s$^{-1}$. The bulk of the CO ($2-1$) emission, instead, is at low velocity (LV: $<15$ km\,s$^{-1}$\, with respect to $V_{\rm sys}$).
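The deprojection quoted above follows from simple geometry (a one-line check, using the median inclination assumed in the text): for a jet inclined by $i$ to the line of sight, the observed radial velocity is $V_{\rm rad} = V_{\rm jet} \cos i$, so that

```latex
V_{\rm jet} = \frac{V_{\rm rad}}{\cos i}
\quad\Rightarrow\quad
V_{\rm jet} = 2\, V_{\rm rad} \;\;{\rm for}\;\; i = 60\degr ,
```

which maps the observed $15-80$ km\,s$^{-1}$\, range onto deprojected velocities of $30-160$ km\,s$^{-1}$.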
This kinematical difference between SiO and CO is also reflected in a different spatial distribution of the two molecules. The line maps in Fig. \ref{fig:jets1} and the PV diagrams in Fig. \ref{fig:PV-block1} show that the SiO emission is more collimated and concentrated on smaller areas of the spatial-velocity space than CO in most of the sources (IRAS4B1, SerpM-SMM4b, L1448NB, L1157, IRAS4A1 and A2, L1448-C, and IRAS2A1). In these sources, SiO probes the collimated jet and, in some cases, it also probes bright terminal shocks with (IRAS4B1, L1448NB) or without (IRAS4A1 and A2) a CO counterpart.
The CO emission is transversally more open and, in some cases, extends out to distances almost twice as large as those reached by SiO (SerpM-SMM4b, L1157, L1448-2A, IRAS2A1). This suggests that the CO emission is dominated by emission from the outflow, that is, from the surrounding material that is put into motion or entrained by the jet.
However, in most of the jet sources, CO ($2-1$) has a secondary peak at high velocity, and the CO HV component appears co-spatial with SiO in the PV diagrams, suggesting that in the HV range CO probes the jet similarly to SiO. There are a few exceptions: sources whose line spectra show no clear separation between the low- and high-velocity components in any of the three tracers, likely due to the low inclination of the flow (IRAS4A2, SerpM-S68N, and SerpS-MM18a), and sources where the high-velocity jet emission seen in SiO has no counterpart in CO (SerpM-S68Nb blue lobe, SVS13B both lobes, L1448-NB blue lobe, and SerpM-S68N red lobe). We refer to the latter objects as "CO-poor" jets.
Furthermore, SO ($5_6-4_5$), when detected in the 12 SiO jet sources, may have different origins: in the jet, where it exhibits a spatio-kinematical distribution very similar to that of SiO in the maps and in the PV diagrams (SerpM-SMM4b, SVS13B, L1448NB, IRAS4A1 and A2, L1448-C, SerpM-S68N, SerpS-MM18a, IRAS2A1); in the terminal shocks (IRAS4B1); or in a compact region around the source, where it likely probes the disk or the inner envelope (L1157, L1448-2A).
A more robust comparison of the spatio-kinematical distribution of the SiO emission with that of the CO and SO LV and HV components requires a spatial resolution of $\sim 10$ au, as shown, for example, by the study of the prototypical protostellar jet from HH 212 \citep{lee18b}.
Based on the spectra we define for each jet lobe the high-velocity ranges where CO ($2-1$) and SO ($5_6-4_5$) are likely to trace the same jet component as SiO ($5-4$). The HV ranges (in $V_{\rm LSR}$) are summarized in Table \ref{tab:fluxes}.
The identification of the high-velocity ranges is further supported by the inspection of the position-velocity diagrams (see Fig. \ref{fig:PV-block1} in Appendix \ref{app:jet-pv}). As explained above, the CO emission is mostly dominated by low- or intermediate-velocity material that corresponds to the wider flow surrounding the jet. However, the high-velocity SiO emission, which probes the jet, also has counterparts in CO and SO for many of the sources, and they appear co-spatial in the PV diagrams, which suggests that they all trace the same gas component in the jet with little or no contamination by the outflow or by entrained material.
We then estimate the jet radial velocity in the two innermost knots B and R along the blue- and red-shifted lobes as the velocity of the SiO emission peak in the B and R spectra with respect to the systemic velocity ($V_{\rm rad}= V_{\rm peak} - V_{\rm sys}$) and report the values in Table \ref{tab:fluxes}. The estimated jet radial velocities are affected by an uncertainty of $\pm 1.7$ km\,s$^{-1}$\, (half the spectral resolution of 3.4 km\,s$^{-1}$).
The distribution of radial velocities of the jet lobes, $V_{\rm rad}$, inferred from the SiO spectra is shown in Fig. \ref{fig:distri-vrad}. The distribution is flat within the statistical uncertainty, in agreement with a randomly oriented jet distribution.
This indicates that the jets for which the SiO emission peak is detected at low velocity (IRAS4A2, SerpM-S68N, and SerpS-MM18a) may actually be high-velocity jets seen close to the plane of the sky. The derived median radial velocity of the high-velocity jet component is $30 \pm 10$ km\,s$^{-1}$.
Estimates of the jet inclination and of the deprojected jet velocity are available only for a few jets in our sample, as detailed in the notes on the individual sources (Appendix \ref{app:notes-on-sources}) and discussed in Sect. \ref{sect:jet-energetics}.
Many of the jets in our sample show asymmetries between the two lobes, either in their morphology or in their velocity.
First of all, one out of 12 SiO jets is monopolar, the jet from IRAS2A1 \citep{codella14a}. High-velocity emission in the three tracers is detected only in the blue lobe, while the low-velocity emission associated with the outflow is detected in both lobes.
In the sub-sample of the 11 SiO bipolar jets, morphological asymmetries between the two lobes are also observed (see the integrated maps in Fig. \ref{fig:jets1}).
The jet from SerpM-SMM4b shows a different PA (and inclination, \citealt{aso18}) for the blue and red lobes, while 5 out of the 12 jets show an S-shaped morphology, which suggests precession of the jet axis (namely IRAS4B1, L1448-NB, L1157, IRAS4A1, and IRAS4A2). The precession patterns of the jets from L1157 and IRAS4A2 based on the CALYPSO data are discussed in two previous papers \citep{podio16,santangelo15}.
The jet from SerpS-MM18a is wiggling, which is also suggestive of a slight precession, as was previously discussed by \citet{plunkett15b}.
Hence, 50\% of the SiO jets show detectable precession or wiggling at our resolution. This is the first time that the statistical occurrence of this property can be estimated.
Finally, some of the 11 SiO bipolar jets show an asymmetry in velocity between the two lobes.
The degree of the velocity asymmetry is quantified by the ratio between the radial velocities of the blue- and red-shifted innermost SiO knots, B and R. In particular, it is computed as the ratio of the velocity of the faster knot, $V_{\rm rad, f}$, to that of the slower knot, $V_{\rm rad, s}$, as reported in Table \ref{tab:fluxes}. The error on this ratio depends on the uncertainty on the radial velocity of the two jet lobes ($\pm 1.7$ km\,s$^{-1}$) due to the resolution of our spectra. For the jets of IRAS4A2, SerpM-S68N, and SerpS-MM18a, which show very low radial velocities (possibly due to low inclination), the radial velocity ratio, $V_{\rm rad, f}/V_{\rm rad, s}$, is affected by a large uncertainty, which prevents us from assessing whether the jet is asymmetric in velocity.
Based on the derived $V_{\rm rad, f}/V_{\rm rad, s}$ ratios and their uncertainties, the radial velocities of the blue-shifted and red-shifted lobes differ by a factor of $1.4-2.1$ for three jets (namely SVS13B, L1157, and IRAS4A1), and by a factor of $7.8$ for SerpM-S68Nb. As the detection of velocity asymmetries for jets at low inclination is hindered by the low spectral resolution of our data (3.4 km\,s$^{-1}$), the number of detected velocity-asymmetric jets should be considered a lower limit. We conclude that at least 33\% of the observed jets show a velocity asymmetry of $1.3-7.8$.
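The asymmetry ratio and its quoted error bar can be reproduced with a short calculation (a minimal sketch, not the authors' pipeline; propagating the $\pm 1.7$ km\,s$^{-1}$\, channel uncertainty in quadrature is our assumption, chosen to be consistent with the tabulated errors):

```python
import math

def asymmetry_ratio(v_blue, v_red, dv=1.7):
    """Ratio of faster to slower lobe radial velocity, V_f/V_s,
    with the +/-1.7 km/s channel uncertainty on each lobe
    propagated in quadrature."""
    v_f = max(abs(v_blue), abs(v_red))
    v_s = min(abs(v_blue), abs(v_red))
    r = v_f / v_s
    sigma = r * math.hypot(dv / v_f, dv / v_s)
    return r, sigma

# L1157 inner knots: V_rad = -34 (blue) and +54 (red) km/s
r, s = asymmetry_ratio(-34.0, 54.0)
print(round(r, 1), round(s, 1))  # 1.6 0.1
```

Applied to the L1157 knots, this recovers the tabulated $1.6 \pm 0.1$.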
\begin{figure}
\centering
\includegraphics[width=.49\textwidth]{distrib-eps-converted-to.pdf}
\caption{Distribution of jet radial velocities in the CALYPSO sample. The values correspond to the velocity of the SiO emission peak in the spectra extracted at the position of the blue- and red-shifted SiO knots close to the driving protostar, B and R (see Fig. \ref{fig:spec1} in Appendix \ref{app:spectra}).}
\label{fig:distri-vrad}
\end{figure}
\subsection{Jet width and opening angle}
\label{sect:jet-width}
We investigate the collimation properties of the flows for the 12 Class 0 sources that exhibit an SiO jet detected at $>10\sigma$ in the integrated maps (see Table \ref{tab:jet-occurrence}). The width of the red and blue jet and outflow lobes in each of the three tracers (CO, SO, and SiO) is measured from the emission maps integrated on the red-shifted and blue-shifted velocity ranges, respectively, as indicated in Fig. \ref{fig:jets2}. The widths are measured by fitting the spatial profile of the line emission perpendicular to the jet axis (PA given in Table \ref{tab:jet-occurrence}) with a Gaussian profile at each position along the jet. Deconvolved widths, corresponding to twice the jet radius, 2$R_{\rm jet}$, are then derived by correcting the full width at half maximum (FWHM) of the observed emission, FWHM$_{\rm obs}$, for the size of the beam transverse to the jet axis (see Appendix \ref{app:jet-width}). The deconvolution method allows us to infer jet widths smaller than the beam if the signal-to-noise ratio (S/N) is high enough to measure a FWHM$_{\rm obs}$ larger than the transverse beam size. Therefore, this procedure is applied to each source and each molecular line detected at $> 10 \sigma$ in the integrated maps. Spatially resolved emission showing asymmetric cavity walls or complex bow-shock structures that strongly deviate from a Gaussian distribution has been discarded (e.g., SiO red-shifted emission in IRAS4A, see Fig. \ref{fig:jets2}). Figure \ref{fig:jet-width-main} collects the jet widths, 2$R_{\rm jet}$, in the three tracers (CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$)) for the 12 sources driving an SiO jet as a function of the distance from the source. The estimates of the flow width for each source separately are presented in Figure \ref{fig:jet-width_all} in Appendix \ref{app:jet-width}.
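The beam correction described above amounts to the standard quadrature deconvolution for Gaussian profiles (a sketch; the full procedure is given in Appendix \ref{app:jet-width}). Writing $b_t$ for the beam size transverse to the jet axis,

```latex
2R_{\rm jet} = \sqrt{{\rm FWHM}_{\rm obs}^{2} - b_{t}^{2}} ,
```

which is defined only when ${\rm FWHM}_{\rm obs} > b_t$, consistent with the S/N requirement stated in the text.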
After deconvolution by the beam, jet widths appear to be always larger than $\simeq 50$~au, even for SiO jets observed very close to the driving source. For example, the beam-corrected width of the SiO jet driven by L1157 is about 80 au at a distance from the source of 100 au. In contrast, ALMA observations of SiO at very high angular resolution \citep[e.g., ][0.02$''$ or 8~au resolution]{lee17b} and VLBI observations \citep[e.g., ][]{claussen98} towards Class 0 jets suggest that jet widths are $<20$~au within 100~au distance. Our beam deconvolution method neglects the effect of residual phase noise and cleaning artifacts which result in an effective resolution that is lower than the resolution implied by the clean beam. Consequently, we choose to consider the widths that are smaller than the size of the transverse beam to be only (inclusive) upper limits on the true width of the jet (cyan points in Fig. \ref{fig:jet-width-main}).
Measurements taken within a beam from the driving source are also discarded since the true flow width may vary a lot within a beam.
Figure \ref{fig:jet-width-main} shows that the width of the flow globally increases with the distance from the source. However, the complex variation of the jet width with distance prevents us from defining an opening angle of the flow for each source.
For comparison with opening angles measured in atomic Class II and Class I jets on similar scales, we follow a statistical approach and derive the average collimation properties of the sample by fitting the measured flow widths in Fig. \ref{fig:jet-width-main} with straight lines according to the following equation:
\begin{equation}
2 R_{\text{jet}} = 2 R_{0} + 2 \tan{(\alpha/2)} z,
\label{eq:stright-line}
\end{equation}
where $z$ is the distance from the source, $\alpha$ is the apparent full opening angle of the flow\footnote{Due to projection effects, the true opening angle is smaller: $\tan(\alpha_{\rm true}) = \tan(\alpha) \times \sin(i)$, where $i$ is the inclination to the line of sight.}, and $R_{0}$ is a constant offset. The last quantity is introduced following studies of Class II jets \citep{agraamboage11,dougados00,hartigan04,maurri14}. Since Eq. (\ref{eq:stright-line}) is simply a parametric formula to fit the jet width at distances of $\sim 200-1500$~au from the driving source, $R_{0}$ should not be interpreted as the launching point of the flow.
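Recovering $\alpha$ and $2R_0$ from the measured widths is a linear fit of Eq. (\ref{eq:stright-line}); the sketch below illustrates it on synthetic data (the width values are invented for illustration and are not taken from our measurements):

```python
import math

# Hypothetical jet-width measurements (2R_jet, in au) at distances z (in au),
# generated only to illustrate the fit of 2R_jet = 2R0 + 2 tan(alpha/2) z,
# with alpha = 10 deg and 2R0 = 80 au assumed.
z = [200.0, 400.0, 600.0, 800.0, 1000.0]
width = [80.0 + 2 * math.tan(math.radians(5.0)) * zi for zi in z]

# Ordinary least-squares fit of width = intercept + slope * z
n = len(z)
zm = sum(z) / n
wm = sum(width) / n
slope = (sum((zi - zm) * (wi - wm) for zi, wi in zip(z, width))
         / sum((zi - zm) ** 2 for zi in z))
intercept = wm - slope * zm

# Full opening angle from the fitted slope: slope = 2 tan(alpha/2)
alpha = math.degrees(2 * math.atan(slope / 2))
print(round(alpha, 1), round(intercept, 1))  # 10.0 80.0
```

For a full opening angle of $10\degr$ the slope is $2\tan(5\degr) \simeq 0.17$, so the width grows by roughly 170 au over 1000 au of distance.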
Figure \ref{fig:jet-width-main} shows that the collimation properties of the velocity-integrated emission depend markedly on the molecular tracer. Opening angles of the CO ($2-1$) emission span a wide range of values between $8^{\circ}$ and $35^{\circ}$ with a median value of $\sim 25^{\circ}$. SO ($5_6-4_5$) emission is more collimated and shows typical opening angles of $\sim 15^{\circ}$. SiO ($5-4$) emission is even more collimated, with a typical opening angle of about $10^{\circ}$ and minimum values as low as $4^{\circ}$.
Thus, the bulk emission of the three lines highlights different regions of the flow, with SiO tracing the most collimated jet, CO the broader component, and SO a flow of intermediate collimation.
We also measure the width of the high-velocity CO emission and compare the results with the width of the SiO emission in Fig. \ref{fig:jet-width-hv}.
As explained in Sect. \ref{sect:kinematics} the high-velocity ranges where CO ($2-1$) and SO ($5_6-4_5$) trace the same jet component as SiO ($5-4$) are defined based on the line spectra extracted at the position of the innermost SiO knots (see Table \ref{tab:fluxes} and Fig. \ref{fig:spec1} in Appendix \ref{app:spectra}) and on the inspection of the PV diagrams (see Fig. \ref{fig:PV-block1}).
When selecting only the high-velocity component, the CO emission is as collimated as the SiO emission, which is a further indication that high-velocity CO traces the same component probed by SiO, namely, the jet. The same conclusion applies to SO.
\begin{table*}
\caption{Properties of the blue- and red-shifted SiO knots closest to the driving sources, B and R, as derived from the spectra in Fig. \ref{fig:spec1}.
}
\begin{tabular}{cccccccccc}
\hline
\hline
Source & & RA$_{\rm off}$, Dec$_{\rm off}$$^{a}$ & Distance$^{b}$ & HV$^{c}$ & $V_{\rm rad}^{d}$ & $V_{\rm rad, f}/V_{\rm rad, s}$$^{e}$ &$F_{\rm CO}$$^{f}$ & $F_{\rm SO}$$^{f}$ & $F_{\rm SiO}$$^{f}$ \\
& & (\arcsec, \arcsec) & (\arcsec) & (km\,s$^{-1}$) & (km\,s$^{-1}$) & &(K km\,s$^{-1}$) & (K km\,s$^{-1}$) & (K km\,s$^{-1}$) \\
\hline
SerpM-S68Nb & B & (+5.99, -2.05) & 6.33 & -50/-20 & $-45$ & $7.8\pm2.3$ & $<0.8$ & -- & 25.0 \\
& R & (-6.91, +2.05) & 7.21 & +17/+40 & $+6$ & & 6.5 & -- & 20.3 \\
IRAS4B1 & B & (+0.57, -1.27) & 1.39 & -30/-5 & $-17$ & $1.0\pm0.1$ & 27.8 & -- & 72.1 \\
& R & (-0.53, +2.53) & 2.58 & +16/+50 & $+17$ & & 55.4 & 5.6 & 169.3 \\
SerpM-SMM4b & B & (-0.28, +0.91) & 0.95 & -38/-8 & $-39$ & $1.1\pm0.1$ & 47.0 & 27.8 & 68.3 \\
& R & (+0.23, -0.49) & 0.54 & +30/+60 & $+36$ & & 74.4 & 8.8 & 69.2 \\
SVS13B & B & (+0.37, -1.61) & 1.65 & -37/+8.5 & $-24$ & $1.4\pm0.2$ & $<3.0$ & -- & 50.2 \\
& R & (-0.03, +0.39) & 0.39 & +8.5/+58 & $+16$ & & $<3.0$ & -- & 77.6 \\
L1448NB & B & (-1.98, +0.53) & 2.05 & -34/-20 & $-34$ & $1.1\pm0.1$ & $<1.7$ & -- & 41.7 \\
& R & (+0.82, +0.03) & 0.82 & +40/+54 & $+37$ & & 27.8 & 1.1 & 17.6 \\
L1157 & B & (+0.00, -0.22) & 0.22 & -60/-20 & $-34$ & $1.6\pm0.1$ & 22.0 & -- & 143.9 \\
& R & (-0.10, +0.58) & 0.58 & +30/+70 & $+54$ & & 45.2 & -- & 125.4 \\
IRAS4A1 & B & (+0.31, -1.47) & 1.50 & -30/-10 & $-16$ & $2.1\pm0.2$ & 77.5 & 9.4 & 16.9 \\
& R & (+0.21, +1.33) & 1.35 & +30/+70 & $+34$ & & 276.6 & 37.6 & 28.5 \\
IRAS4A2 & B & (+0.11, -2.31) & 2.31 & -10/+6.3 & $-6$ & $1.1\pm0.4$ & 177.5 & 47.2 & 25.2 \\
& R & (+0.71, +2.29) & 3.40 & +6.3/+40 & $+7$ & & 145.1 & 58.1 & 55.2 \\
L1448-C & B & (-0.30, +1.10) & 1.14 & -70/-22 & $-51$ & $1.1\pm0.1$ & 160.5 & 64.7 & 501.1 \\
& R & (+0.30, -0.80) & 0.85 & +25/+85 & $+48$ & & 222.5 & 75.8 & 869.7 \\
SerpM-S68N & B & (-5.46, +5.39) & 7.67 & -7/+5 & $-4$ & $1.5\pm1.1$ & 30.0 & 7.6 & 32.6 \\
& R & (+2.54, -2.91) & 3.86 & +12/+21 & $+3$ & & $<0.7$ & -- & 8.5 \\
SerpS-MM18a & B & (-0.10, -0.90) & 0.91 & -17/-2 & $-10$ & $1.3\pm0.3$ & 191.1 & 63.7 & 159.2 \\
& R & (+0.30, +2.90) & 2.92 & +21/+32 & $+13$ & & 98.9 & 34.3 & 71.2 \\
IRAS2A1 & B & (-0.58, -1.07) & 1.22 & -32/-9 & $-24$ & -- & 150.3 & 113.2 & 181.6 \\
\hline
\end{tabular}\\
\small
$^{a}$ RA and Dec offset of the innermost SiO knots B and R.\\
$^{b}$ Distance of the innermost SiO knots B and R.\\
$^{c}$ High-velocity range (HV) where CO ($2-1$) and SO ($5_6-4_5$) trace jet emission as SiO ($5-4$).\\
$^{d}$ Jet radial velocity in the innermost SiO knots B and R estimated as the velocity of the SiO emission peak in the B and R spectra with respect to the source systemic velocity given in Table \ref{tab:sample}. The estimated jet radial velocities are affected by an uncertainty of $\pm 1.7$ km\,s$^{-1}$\, due to the resolution of our spectra.\\
$^{e}$ Ratio between the radial velocities, $V_{\rm rad}$, of the blue- and red-shifted innermost knots, B and R. The ratio is computed between the velocity of the faster knot, $V_{\rm rad, f}$, over the velocity of the slower knot, $V_{\rm rad, s}$.\\
$^{f}$ CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$) line intensity integrated on the high-velocity range (HV) ($F_{\rm CO}$, $F_{\rm SO}$, $F_{\rm SiO}$).\\
\label{tab:fluxes}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=.32\textwidth]{width-CO-v4.pdf}
\includegraphics[width=.32\textwidth]{width-SO-v4.pdf}
\includegraphics[width=.32\textwidth]{width-SiO-v4.pdf}
\caption{Deconvolved widths (2$R_{\rm jet}$) of CO (a), SO (b), and SiO (c) emission along the jet axis within 1500 au from the protostars. Only flows detected above 10$\sigma$ are measured. Non-axisymmetric structures that are well spatially resolved are discarded. Cyan lines indicate widths that are smaller than the transverse beam and are consequently considered as inclusive upper limits.
Green dashed lines correspond to straight lines with full opening angles of $\alpha = 3^\circ, 8^\circ, 12^\circ$, and $35^\circ$ and initial width of $30, 50, 80,$ and $120$~au outlining the collimation properties of the flow in different tracers.}
\label{fig:jet-width-main}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=.4\textwidth]{width-CO-HV.pdf}
\caption{Deconvolved widths (2$R_{\rm jet}$) of SiO (triangles) and high-velocity CO (dots) emission (high-velocity intervals defined in Table \ref{tab:fluxes}). Only the jet lobes where both SiO and high-velocity CO are detected above 10 $\sigma$ at the same positions along the jet are plotted. Colors code the sources. At high velocity, the CO emission appears as collimated as the SiO jet emission. Green dashed lines correspond to straight lines with full opening angles of $\alpha = 3^\circ, 8^\circ, 12^\circ,$ and $35^\circ$ and initial width of $30, 50, 80,$ and $120$~au outlining the collimation properties of the flow.}
\label{fig:jet-width-hv}
\end{figure}
\subsection{Molecular column densities and abundances}
\label{sect:jet-abundances}
For the 12 Class 0 protostars driving an SiO jet detected at $>10\sigma$ in the integrated maps (see Table \ref{tab:jet-occurrence}), we estimate the beam-averaged column density and the abundance of CO, SO, and SiO in the high-velocity jet component for the blue- and red-shifted lobes.
To this aim,
the CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$) line intensities extracted at the position of the innermost SiO knots, B and R, are integrated on the high-velocity range shown in Fig. \ref{fig:spec1} and summarized in Table \ref{tab:fluxes}.
As shown by the position-velocity diagrams in Fig. \ref{fig:PV-block1} and by the measurements of the flow width in Fig. \ref{fig:jet-width-hv}, in the defined high-velocity ranges, the emission in the CO ($2-1$) and SO ($5_6-4_5$) lines is co-spatial with that of SiO ($5-4$) and shares the same width.
Therefore, we assume that in the HV range the emission in the three tracers originates from the same gas component. Moreover, based on the high radial velocity of the HV component ($15-80$ km\,s$^{-1}$) and on its collimation (see Fig. \ref{fig:jet-width-hv}), we assume that the bulk of the HV emission originates primarily from the jet and that the contribution from entrained material along the jet is negligible.
We further assume local thermodynamic equilibrium (LTE) at a kinetic temperature, $T_{\rm K} = 100$ K, and optically thin emission. Possible deviations from these assumptions are discussed below.
Based on the above assumptions we derive the molecules' beam-averaged column densities in the jet, $N_{\rm CO}$, $N_{\rm SO}$, and $N_{\rm SiO}$, from the line intensities integrated on HV.
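For reference, under these assumptions (optically thin emission, LTE at $T_{\rm K}$) the beam-averaged column density of each species follows from the standard relation (a sketch in the Rayleigh-Jeans limit; the symbols are the usual spectroscopic quantities, not values specific to this work):

```latex
N_{\rm tot} = \frac{8\pi k \nu^{2}}{h c^{3} A_{\rm ul}}\,
\frac{Q(T_{\rm K})}{g_{\rm u}}\, e^{E_{\rm u}/k T_{\rm K}}
\int T_{\rm B}\, {\rm d}V ,
```

where $A_{\rm ul}$, $g_{\rm u}$, and $E_{\rm u}$ are the Einstein coefficient, degeneracy, and energy of the upper level, $Q(T_{\rm K})$ is the partition function, and $\int T_{\rm B}\,{\rm d}V$ is the line intensity integrated over the HV range (Table \ref{tab:fluxes}).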
The non-LTE analysis of multiple SiO and CO transitions in the large velocity gradient (LVG) approximation in a few prototypical Class 0 jets has shown that the typical gas density and temperature in the high-velocity range are of $10^5-10^7$ cm$^{-3}$ and $30-500$ K \citep[e.g., ][]{gibb04,cabrit07b,podio15,spezzano20}.
Therefore, the assumption of LTE is well justified for the CO ($2-1$) line, whose critical density is well below the typical gas density in jets ($\sim 7.3 \times 10^3$ cm$^{-3}$, see Table \ref{tab:lines}), while the SO and SiO lines may be sub-thermally excited if the gas density is lower than their critical densities ($\sim 7.7 \times 10^5$ cm$^{-3}$\, and $\sim 1.6 \times 10^6$ cm$^{-3}$, see Table \ref{tab:lines}). To estimate the uncertainty affecting the column density estimates in the non-LTE case, we use the statistical equilibrium, one-dimensional radiative transfer code RADEX, adopting a plane-parallel slab geometry \citep{vandertak07}. We find that the SO and SiO column densities may be underestimated by a factor of up to 5 if $n_{\rm H_2}=10^5$ cm$^{-3}$.
Moreover, the column densities of CO, SO, and SiO are overestimated by a factor of $\sim 1.5$ if the gas temperature is lower than assumed ($20$ K), and are underestimated by factors of $\sim1.6$ and $\sim3.5$ if the temperature is higher (200 K and 500 K, respectively). These uncertainties are illustrated in Fig. \ref{fig:conv-fac} in the Appendix.
If lines are optically thick, the beam averaged column density derived in the optically thin limit is only a lower limit on the true column density. Previous studies have shown that jet emission may be optically thick even at high velocities \citep[e.g., ][]{cabrit12,podio15}. Therefore, in Appendix \ref{app:uncertainties}, we propose a criterion to flag lines that are (or may be) optically thick by using the ratio of the brightness temperature of the CO ($2-1$), SO ($5_6 - 4_5$), and SiO ($5-4$) lines in the HV range ($T^{\rm CO}_{\rm B}$, $T^{\rm SO}_{\rm B}$, $T^{\rm SiO}_{\rm B}$, respectively).
Based on the discussion in Appendix \ref{app:uncertainties}, the opacity of the CO and SiO lines is constrained using the following criteria.
1) If $0.7 \times T^{\text{CO}}_{\text{B}} \le T^{\text{SiO}}_{\text{B}} \le T^{\text{CO}}_{\text{B}}$, both lines are very likely optically thick and the beam averaged $N_{\rm CO}$ and $N_{\rm SiO}$ derived in the optically thin limit are both considered as strict lower limits. No constraint on the SiO/CO abundance ratio can be obtained in this case.
2) If $T^{\text{SiO}}_{\text{B}} > T^{\text{CO}}_{\text{B}}$, CO ($2-1$) is optically thin and SiO ($5-4$) may be optically thick. We then consider the derived $N_{\rm SiO}$ as an inclusive lower limit and we can derive a lower limit on the SiO/CO abundance ratio.
3) If $T^{\text{SiO}}_{\text{B}} < 0.7 \times T^{\text{CO}}_{\text{B}}$, the SiO ($5-4$) line is considered to be optically thin whereas the CO ($2-1$) line may be optically thick. We then consider the derived $N_{\rm CO}$ as an inclusive lower limit and we obtain an upper limit on the SiO/CO abundance ratio.
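The three criteria above amount to a simple decision rule on the HV brightness temperatures; a minimal sketch (the function name and return labels are ours, for illustration only):

```python
def opacity_flag(t_co, t_sio):
    """Classify the CO (2-1) / SiO (5-4) opacity regime from the
    brightness temperatures in the HV range, following the three
    criteria in the text (the 0.7 threshold is taken from the text)."""
    if 0.7 * t_co <= t_sio <= t_co:
        # 1) both lines very likely optically thick:
        #    N_CO and N_SiO are strict lower limits
        return "both thick"
    if t_sio > t_co:
        # 2) CO thin, SiO may be thick: N_SiO inclusive lower limit
        return "SiO may be thick"
    # 3) t_sio < 0.7 * t_co: SiO thin, CO may be thick
    return "CO may be thick"

print(opacity_flag(10.0, 8.0))   # both thick
print(opacity_flag(10.0, 12.0))  # SiO may be thick
print(opacity_flag(10.0, 5.0))   # CO may be thick
```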
The brightness temperature of the SO ($5_6-4_5$) line is always much smaller than the brightness temperature of CO ($2-1$), suggesting that the SO emission is optically thin.
Based on the above criteria, the brightness temperature ratios $T^{\text{SiO}}_{\text{B}}/ T^{\text{CO}}_{\text{B}}$ measured at the SiO emission peak in the B and R spectra in Fig. \ref{fig:spec1} indicate that for all but two jets in our sample (IRAS4A1 and IRAS4A2), that is, for $\sim 82\%$ of the SiO knots, the SiO ($5-4$) emission is (or may be) optically thick.
Therefore, the estimated SiO column densities are strict (for L1448-NB knot R, SerpM-SMM4b knot R, SerpM-S68N knot B, SerpS-MM18a, and IRAS2A1 knot B) or inclusive (for all the other jet lobes) lower limits. For CO, we also obtain strict lower limits for the five sources above, while we estimate an upper limit on the CO abundance for the jet lobes where no CO is detected in the B and R knots (the "CO-poor" jets: SerpM-S68Nb knot B, SVS13B knots B and R, L1448-NB knot B, and SerpM-S68N knot R).
Based on the estimated column densities and on the assumption that the emission in the three tracers originates from the same jet component, the abundances of SO and SiO are derived as $X_{\rm SO}$ = $X_{\rm CO}$ $\times$ $N_{\rm SO}$/$N_{\rm CO}$\, and $X_{\rm SiO}$ = $X_{\rm CO}$ $\times$ $N_{\rm SiO}$/$N_{\rm CO}$,\, where $X_{\rm CO}$ $=10^{-4}$ is the assumed CO abundance in the jet with respect to H$_2$\footnote{\label{note:CO-abu} The adopted value of [CO]/[H] $= 5 \times 10^{-5}$ is based on the correlation of the total CO abundance (gas+ice) and the visual extinction $A_{\rm V}$ in the well shielded ISM \citep{whittet10}
using a standard $N_{\rm H} / A_{\rm V} = 2 \times 10^{21}$ cm$^{-2}$ mag$^{-1}$. Since high-velocity protostellar jets are launched well inside the CO ice-line, all CO is in the gas phase. If the jet is launched from inside the dust sublimation radius,
the CO abundance could be larger or smaller by a factor of $\ge 3$ \citep[see, e.g., ][]{glassgold91,tabone20}}.
The estimated molecular column densities and abundances averaged over the beam and the HV range for the blue- and red-shifted inner knots of the high-velocity jets are summarized in Table \ref{tab:jets-energetics} and in Figure \ref{fig:jet-abundances}, where the values are shown as a function of the source internal luminosity, $L_{\rm int}$. As the CO abundance in jets is uncertain (see footnote \ref{note:CO-abu}), in Figure \ref{fig:jet-abundances}, we also plot the SO and SiO abundance with respect to CO ($X_{\rm SO}$/$X_{\rm CO}$\,=\,$N_{\rm SO}$/$N_{\rm CO}$, and $X_{\rm SiO}$/$X_{\rm CO}$\,=\,$N_{\rm SiO}$/$N_{\rm CO}$).
We find that the beam-averaged column density of SO is $N_{\rm SO}$$\sim 10^{13} - 3 \times 10^{15}$ cm$^{-2}$, and $N_{\rm SiO}$\, goes from a minimum of $4 \times 10^{13}$ cm$^{-2}$\, to $> 2 \times 10^{15}$ cm$^{-2}$\, for the jets where the SiO emission is (or could be) optically thick.
The CO column density ranges from $\sim 10^{16}$ cm$^{-2}$\, up to $> 3 \times 10^{17}$ cm$^{-2}$, with the exception of the "CO-poor" jets for which we derive an upper limit of a few $10^{15}$ cm$^{-2}$. The abundance of SO with respect to H$_2$ goes from values $< 10^{-7}$ to $10^{-6}$. For SiO, the inferred abundances, $X_{\rm SiO}$, are lower limits for the jets where the SiO emission is (or could be) optically thick ($X_{\rm SiO}$\, ranges from values larger than a few $10^{-7}$ to values larger than a few $10^{-6}$ for the "CO-poor" jets), whereas for the two jets where CO and SiO are optically thick in both lobes (SerpS-MM18a, IRAS2A1), it was not possible to derive an estimate of $X_{\rm SiO}$. Low SiO abundances are found only for IRAS4A1 and IRAS4A2 ($X_{\rm SiO}$$\le2-6 \times 10^{-8}$).
\subsection{Jet energetics}
\label{sect:jet-energetics}
For the 12 Class 0 protostars driving an SiO jet detected at $>10\sigma$ (see Table \ref{tab:jet-occurrence}), we estimate the jet mass-loss and momentum rates, and the jet mechanical luminosity from the CO beam-averaged column density inferred from the high-velocity CO emission at the position of the innermost blue- and red-shifted SiO knots, B and R.
As discussed in Sects. \ref{sect:kinematics} and \ref{sect:jet-abundances}, while CO emission at low velocity probes the outflowing entrained material, the emission at high velocity is more collimated and has the same width and displacement as the SiO emission (see the width of the HV emission in Fig. \ref{fig:jet-width-hv} and the PV of SiO and CO emission in Fig. \ref{fig:PV-block1}). Hence, we assume that the bulk of the CO HV emission originates from the jet and that the contribution from entrained gas is negligible.
To further minimize the possible contribution from entrained material, the CO column densities and mass-loss rates are inferred close to the driving source, that is, at the position of the B and R inner knots. In fact, at larger distances, the jet could be more affected by the interaction with the surrounding medium, hence, by gas entrainment, and part of the emission could be filtered out by the interferometer.
This approach also guarantees that the jet properties are derived via the same methodology with regard to all the jets in our sample, including those where only a single red-shifted and blue-shifted knot is detected.
The mass-loss rate of the molecular jet is typically estimated assuming that the mass in the jet flows at constant density and speed along the jet axis over the beam length. However, the gas in the SiO knots where the column density is estimated is highly compressed by shocks; therefore, the mass-loss rate is corrected by a factor of $1/\sqrt{C} \sim1/3$, where $C$ is the compression factor \citep[e.g., ][]{hartigan94}. Taking shock compression into account, we infer the mass-loss rate as $\dot{M}_{\rm jet} = 1/\sqrt{C} \times m_{H_2} \times (N_{\rm CO}/X_{\rm CO}) \times b_t \times V_{\rm tan}$ \citep[e.g., ][]{lee07b,podio15}, where $m_{H_2}$ is the mass of molecular hydrogen, $N_{\rm CO}$ the CO beam-averaged column density over the HV range, $X_{\rm CO} = 10^{-4}$ the assumed CO abundance (see footnote \ref{note:CO-abu}), $b_t$ the linear size of the transverse beam (see Appendix \ref{app:jet-width}), and $V_{\rm tan}$ the tangential component of the jet velocity, $V_{\rm jet}$, that is, $V_{\rm tan} = \sqrt{V_{\rm jet}^2 - V_{\rm rad}^2}$.
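An order-of-magnitude check of this formula can be sketched as follows (the fiducial input values below are assumptions chosen for illustration, not values taken from our tables):

```python
# Sketch of Mdot_jet = (1/sqrt(C)) * m_H2 * (N_CO / X_CO) * b_t * V_tan
M_SUN = 1.989e33          # g
YEAR = 3.156e7            # s
AU = 1.496e13             # cm
m_H2 = 3.34e-24           # g, mass of molecular hydrogen

C = 9.0                   # shock compression factor (1/sqrt(C) ~ 1/3)
N_CO = 1e17               # cm^-2, fiducial beam-averaged HV CO column density
X_CO = 1e-4               # assumed CO abundance with respect to H2
b_t = 150 * AU            # cm, fiducial transverse beam (~0.5" at ~300 pc)
V_tan = 1e7               # cm/s, fiducial tangential jet velocity (100 km/s)

mdot = (1 / C**0.5) * m_H2 * (N_CO / X_CO) * b_t * V_tan   # g/s
mdot_msun_yr = mdot * YEAR / M_SUN
print(f"{mdot_msun_yr:.1e}")  # 4.0e-07
```

These fiducial inputs give a few $10^{-7}$ $M_\odot$\,yr$^{-1}$, in the range expected for Class 0 jets.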
The true jet velocity, $V_{\rm jet}$, as well as its tangential component, $V_{\rm tan}$, can be recovered by correcting the observed radial velocity, $V_{\rm rad}$, for the jet inclination. However, estimates of the inclination are available only for a few jets in our sample
and they are obtained using different methods:
for SerpM-SMM4, \citet{aso18} suggest that the two lobes have different inclination based on the assumption that they have the same intrinsic momentum ($i\sim36\degr$ and $i\sim70\degr$ from the line of sight for the blue and red lobe, respectively), which gives deprojected velocities of $\sim105$ km\,s$^{-1}$\, and $\sim 50$ km\,s$^{-1}$, respectively;
for SVS13B, \citet{seguracox16} estimate a disk inclination of $\sim 71\degr$, which implies $V_{\rm jet} = 74$ km\,s$^{-1}$\, for the blue lobe, and $49$ km\,s$^{-1}$\, for the red lobe if we deproject the $V_{\rm rad}$ at the emission peak (see Table \ref{tab:fluxes}), and velocities up to $150$ km\,s$^{-1}$\, if we consider the maximum $V_{\rm rad}$ in our spectra (in agreement with \citealt{bachiller98b});
for L1157 the model by \citet{podio16} indicates precession on a cone inclined by $73\degr$ to the line of sight, which implies $V_{\rm jet} \sim 87$ km\,s$^{-1}$\, and $\sim 137$ km\,s$^{-1}$\, in the blue and red lobe, respectively;
finally, for L1448-C the jet velocity derived from proper motion studies is $V_{\rm jet} \sim 98 \pm4$ km\,s$^{-1}$\, and $\sim 78\pm1$ km\,s$^{-1}$\, for the blue- and red-shifted lobes, respectively ($i=34\pm4\degr$ and $i=46\pm5\degr$ from the plane of sky, \citealt{yoshida20}).
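The deprojection applied in the estimates above follows directly from the geometry; the only subtlety is the inclination convention (measured from the line of sight for the disk-based estimates, from the plane of the sky for the proper-motion study). A minimal sketch, with a hypothetical function name:

```python
import math

def deproject(v_rad_kms, incl_deg, from_los=True):
    """Recover V_jet and V_tan from the observed radial velocity.

    incl_deg is the jet inclination, measured from the line of sight
    if from_los=True, or from the plane of the sky otherwise.
    """
    i = math.radians(incl_deg if from_los else 90.0 - incl_deg)
    v_jet = v_rad_kms / math.cos(i)     # true jet speed
    v_tan = v_jet * math.sin(i)         # equals sqrt(v_jet**2 - v_rad**2)
    return v_jet, v_tan

# e.g., a radial velocity of 50 km/s at i = 60 deg from the line of sight
# deprojects to V_jet = 100 km/s and V_tan ~ 87 km/s.
```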
In conclusion, even if the estimated jet inclinations are obtained with different methods and are affected by large uncertainties, the derived $V_{\rm jet}$\, values are always consistent with $V_{\rm jet} = 100 \pm 50$ km\,s$^{-1}$.
Moreover, this value is also consistent with observations of other prototypical Class 0 jets where SiO proper motions have been measured (e.g.,
$V_{\rm jet}$\,$\sim 115\pm50$ km\,s$^{-1}$\, for HH 212, \citealt{lee-cf15}, and $\sim 114 \pm 50$ km\,s$^{-1}$\, for HH 211, \citealt{jhan16}).
Therefore, we assume that the jet velocity is $100$ km\,s$^{-1}$\, for all the jets in our sample. This ensures that the same method to derive $\dot{M}_{\rm jet}$ is applied to all the jets in the sample, without making assumptions about the jet inclination when estimates are not available or are very uncertain\footnote{Rough estimates of the inclination are available for a few other jets, e.g., IRAS4A and IRAS4B1, but are affected by even larger uncertainties. For example, for IRAS4B1, early studies suggest that the jet is almost perpendicular to the plane of the sky based on its morphology ($i\sim0\degr$, \citealt{maret09}; $i\sim15-30\degr$, \citealt{yildiz12}) or on VLBI H$_2$O water maser observations ($i\sim10-35\degr$, \citealt{desmurs09}), while \citet{marvel08} found the maser outflows to be nearly in the plane of the sky ($i \sim77\degr$). Similarly, for IRAS4A, \citet{yildiz12} suggest an inclination of $\sim45-60\degr$ to the line of sight, \citet{koumpia16} of $\sim70\degr$, and \citet{marvel08} of $\sim88\degr$.}.
The determination of the mass-loss rate allows us to estimate the momentum rate, $\dot{P}_{\rm jet} = \dot{M}_{\rm jet} \times V_{\rm jet}$, and the jet mechanical luminosity $L_{\rm jet} = 1/2 \times \dot{M}_{\rm jet} \times V_{\rm jet}^2$ at the position of the innermost blue- and red-shifted SiO knots, B and R.
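The momentum rate and mechanical luminosity follow directly from $\dot{M}_{\rm jet}$ and the assumed $V_{\rm jet}=100$ km\,s$^{-1}$; the short sketch below (function name ours) checks the unit conversions against a tabulated case.

```python
M_SUN = 1.989e33   # g
YR = 3.156e7       # s
L_SUN = 3.828e33   # erg/s

def jet_energetics(mdot_msun_yr, v_jet_kms=100.0):
    """Return (Pdot in Msun/yr km/s, Ljet in Lsun) for a given mass-loss rate."""
    p_dot = mdot_msun_yr * v_jet_kms                   # momentum rate
    mdot_cgs = mdot_msun_yr * M_SUN / YR               # g/s
    l_jet = 0.5 * mdot_cgs * (v_jet_kms * 1e5) ** 2 / L_SUN
    return p_dot, l_jet

# Blue lobe of L1448-C: Mdot = 9.7e-7 Msun/yr gives
# Pdot = 9.7e-5 Msun/yr km/s and Ljet ~ 0.8 Lsun, matching the table.
```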
As the inferred mass-loss and momentum rates, and mechanical luminosities depend linearly on the estimated beam-averaged CO column density, $N_{\rm CO}$, they are affected by the uncertainties on $N_{\rm CO}$\, due to the assumed temperature and LTE conditions. Moreover, as we assume optically thin emission to derive $N_{\rm CO}$, the estimated $\dot{M}_{\rm jet}$, $\dot{P}_{\rm jet}$, and $L_{\rm jet}$\, values should be regarded as lower limits for all jets where the comparison of CO and SiO spectra indicates that the CO emission is (or could be) optically thick (i.e., when $T_{\rm B}^{\rm SiO}$/$T_{\rm B}^{\rm CO} \le1$, see Sect. \ref{sect:jet-abundances}).
Finally, the $\dot{M}_{\rm jet}$, $\dot{P}_{\rm jet}$, and $L_{\rm jet}$\, values are uncertain by a factor of 3-10 due to the assumption on the CO abundance, on $V_{\rm jet}$, and on the compression factor, which depends on the unknown shock parameters (magnetic field and shock speed, \citealt{hartigan94}).
This, however, does not affect the general trends and the comparison with the mass accretion rates and the mass-loss rates estimated for Class II sources, which are discussed in Sect. \ref{sect:disc-ejec-accr}.
The estimated mass-loss and momentum rates, and jet mechanical luminosities for the blue and red lobes of the jets are summarized in Table \ref{tab:jets-energetics}. Figure \ref{fig:jet-energetics} shows the jet mass-loss rate, $\dot{M}_{\rm jet}$, and the jet mechanical luminosity, $L_{\rm jet}$, summed over the innermost knots of the two jet lobes, B and R. For the jets with one "CO-poor" lobe, the other lobe shows CO emission in the HV range (sometimes possibly thick), hence the total $\dot{M}_{\rm jet}$\, and $L_{\rm jet}$\, show up as a firm value (SerpM-S68Nb) or a lower limit (L1448-NB, SerpM-S68N) in Fig. \ref{fig:jet-energetics}. The exception is SVS13B, for which no CO is detected in either lobe, and we could only infer upper limits on the total mass-loss rate and mechanical luminosity. We find that the two-sided jet mass-loss rates (sum over the blue and red inner knots) span from $\sim 7 \times 10^{-8}$ M$_{\odot}$\,yr$^{-1}$\, up to $\sim 3 \times 10^{-6}$ M$_{\odot}$\,yr$^{-1}$. Consequently, the total jet momentum rates vary from $\sim 7 \times 10^{-6}$ M$_{\odot}$\,yr$^{-1}$ km\,s$^{-1}$\, up to $\sim 3 \times 10^{-4}$ M$_{\odot}$\,yr$^{-1}$ km\,s$^{-1}$, while the jet mechanical luminosity summed over the two lobes (i.e., the total jet power) is between 0.06 L$_{\odot}$\, and $2$ L$_{\odot}$. For the "CO-poor" jet SVS13B we obtain $\dot{M}_{\rm jet}$\,$<2 \times 10^{-8}$ M$_{\odot}$\,yr$^{-1}$, $\dot{P}_{\rm jet}$\,$<2 \times 10^{-6}$ M$_{\odot}$\,yr$^{-1}$ km\,s$^{-1}$, and $L_{\rm jet}$\,$< 0.02$ L$_{\odot}$. The obtained values are discussed in Sect. \ref{sect:disc-ejec-accr}.
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{abundances.pdf}
\caption{Beam-averaged column densities ($N_{\rm X}$ in cm$^{-2}$, {\it left}), and SO and SiO abundances ({\it right}) versus the source internal luminosity ($L_{\rm int}$ in L$_{\odot}$).
Molecular abundances are with respect to H$_2$ on the left axis (assuming [CO]/[H$_2$] $= 10^{-4}$) and with respect to CO on the right axis. The values are inferred for the 12 Class 0 protostars driving an SiO jet detected at $> 10\sigma$ in the integrated maps (see Table \ref{tab:jet-occurrence}) at the position of the closest blue- and red-shifted SiO knots, B and R (filled and empty small diamonds, respectively; see Tables \ref{tab:fluxes} and \ref{tab:jets-energetics}). Black, red, and blue symbols are for CO, SO, and SiO, respectively. Lower and upper limits are indicated by upward and downward arrows. The jet knots where no CO is detected (the "CO-poor" jets) are indicated by larger empty diamonds. For these knots we derive upper limits on $N_{\rm CO}$, and lower limits on $X_{\rm SiO}$.}
\label{fig:jet-abundances}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.7\textwidth]{mjet-ljet_new.pdf}
\caption{Two-sided jet mass-loss rates ($\dot{M}_{\rm jet}$\, in M$_{\odot}$\,yr$^{-1}$, {\it left}), and jet mechanical luminosities ($L_{\rm jet}$\, in L$_{\odot}$, {\it right}) versus the source internal luminosity ($L_{\rm int}$ in L$_{\odot}$). The values are inferred for the 12 Class 0 protostars driving an SiO jet detected at $> 10\sigma$ in the integrated maps (see Table \ref{tab:jet-occurrence}). $\dot{M}_{\rm jet}$ and $L_{\rm jet}$\, are the sum over the closest blue- and red-shifted SiO knots, B and R (see Tables \ref{tab:fluxes} and \ref{tab:jets-energetics}). Lower and upper limits are indicated by upward and downward arrows. The jet knots where no CO is detected (the "CO-poor" jets) are indicated by larger empty diamonds. For the jets with only one "CO-poor" lobe, the total $\dot{M}_{\rm jet}$\, and $L_{\rm jet}$\, is dominated by the estimate obtained for the other lobe; hence, they show up as a firm value or a lower limit (depending on whether the CO emission in the other lobe is optically thin or thick). The exception is SVS13B, for which no CO is detected in either lobe; hence, only an upper limit is derived for the total $\dot{M}_{\rm jet}$\, and $L_{\rm jet}$.
The black lines in the left panel indicate $\dot{M}_{\rm jet} = [0.01, 0.1, 1] \times \dot{M}_{\rm acc}$, with $\dot{M}_{\rm acc}$ estimated from $L_{\rm int}$, assuming a protostellar mass of $0.05$ M$_{\odot}$\, or $0.25$ M$_{\odot}$\, (solid and dotted lines, respectively). The $\dot{M}_{\rm acc}$ values labeled on the upper x-axis correspond to $M_{*} = 0.05$ M$_{\odot}$. The black solid lines in the right panel indicate $L_{\rm jet} = [0.01, 0.1, 0.5, 1] \times L_{\rm int}$.}
\label{fig:jet-energetics}
\end{figure*}
\begin{table*}
\caption[]{\label{tab:jets-energetics} Beam-averaged molecular column densities and abundances, mass-loss and momentum rates, and jet mechanical luminosities for the 12 sources driving an SiO jet detected at $>10\sigma$ in the integrated maps (see Table \ref{tab:jet-occurrence}).}
\small{
\begin{tabular}[h]{ccc|ccc|cc|ccc}
\hline
\hline
Source & & $L_{\rm int}$ & $N_{\rm CO}$$^a$ & $N_{\rm SO}$$^a$ & $N_{\rm SiO}$$^a$ & $X_{\rm SO}$$^b$ & $X_{\rm SiO}$$^b$ & $\dot{M}_{\rm jet}$$^c$ & $\dot{P}_{\rm jet}$$^c$ & $L_{\rm jet}$$^c$ \\
& & (L$_{\odot}$) & (10$^{17}$ cm$^{-2}$) & (10$^{14}$ cm$^{-2}$) & (10$^{14}$ cm$^{-2}$) & ($10^{-7}$) & ($10^{-7}$) & (10$^{-7}$ M$_{\odot}$\,yr$^{-1}$) & (10$^{-5}$ M$_{\odot}$\,yr$^{-1}$ km\,s$^{-1}$) & (L$_{\odot}$)\\
\hline
SerpM-S68Nb & B & 1.8 & $<0.01$ & -- & $\ge0.5$ & -- & $>51.5$ & $<0.08$ & $<0.08$ & $<0.006$ \\
& R & & 0.08 & -- & $\ge0.4$ & -- & $\ge5.1$ & 0.7 & 0.7 & 0.06 \\
IRAS4B1 & B & 2.3 & 0.4 & -- & $\ge1.5$ & -- & $\ge4.3$ & 1.2 & 1.2 & 0.1 \\
& R & & 0.7 & 1.2 & $\ge3.6$ & 1.7 & $\ge5.0$ & 2.3 & 2.3 & 0.2 \\
SerpM-SMM4b & B &$<2.6$& 0.6 & 6.1 & $\ge1.4$ & 10.2 & $\ge2.4$ & 3.4 & 3.4 & 0.3 \\
& R & & $>1.0$ & 1.9 & $>1.5$ & $<2.0$ & -- & $>8.7$ & $>8.7$ & $>0.7$ \\
SVS13B & B & 3.1 & $<0.04$ & -- & $\ge1.1$ & -- & $>27.6$ & $<0.1$ & $<0.1$ & $<0.01$ \\
& R & & $<0.04$ & -- & $\ge1.6$ & -- & $>42.7$ & $<0.1$ & $<0.1$ & $<0.01$ \\
L1448NB & B & 3.9 & $<0.02$ & -- & $\ge0.9$ & -- & $>40.4$ & $<0.07$ & $<0.07$ & $<0.006$ \\
& R & & $>0.4$ & 0.2 & $>0.4$ & $<0.6$ & -- & $>1.2$ & $>1.2$ & $>0.1$ \\
L1157 & B & 4.0 & 0.3 & -- & $\ge3.0$ & -- & $\ge10.8$& 1.6 & 1.6 & 0.1 \\
& R & & 0.6 & -- & $\ge2.7$ & -- & $\ge4.6$ & 2.9 & 2.9 & 0.2 \\
IRAS4A1 & B &$<4.7$& $\ge1.0$ & 2.1 & 0.4 & $\le2.1$ & $\le0.4$ & $\ge4.0$ & $\ge4.0$ & $\ge0.3$ \\
& R & & $\ge3.5$ & 8.3 & 0.6 & $\le2.3$ & $\le0.2$ & $\ge14$ & $\ge14$ & $\ge1.1$ \\
IRAS4A2 & B & 4.7 &$\ge2.3$ & 10.4 & 0.5 & $\le4.6$ & $\le0.2$ & $\ge9.5$ & $\ge9.5$ & $\ge0.8$ \\
& R & &$\ge1.9$ & 12.8 & 1.2 & $\le6.9$ & $\le0.6$ & $\ge7.8$ & $\ge7.8$ & $\ge0.6$ \\
L1448-C & B & 11 & 2.1 & 14.2 &$\ge10.6$ & 6.9 & $\ge5.1$ & 9.7 & 9.7 & 0.8 \\
& R & & 2.9 & 16.7 &$\ge18.4$ & 5.9 & $\ge6.4$ & 13.8 & 13.8 & 1.1 \\
SerpM-S68N & B & 11 & $>0.4$ & 1.7 & $>0.7$ & $<4.3$ & -- & $>3.3$ & $>3.3$ & $>0.3$ \\
& R & & $<0.01$ & -- & $\ge0.2$ & -- & $>19.9$ & $<0.08$ & $<0.08$ & $<0.006$\\
SerpS-MM18a & B & 13 & $>2.5$ & 14.0 & $>3.4$ & $<5.7$ & -- & $>11.5$ & $>11.5$ & $>0.9$ \\
& R & & $>1.3$ & 7.6 & $>1.5$ & $<5.9$ & -- & $>5.9$ & $>5.9$ & $>0.5$ \\
IRAS2A1 & B & 47 & $>1.9$ & 24.9 & $>3.8$ & $<12.9$ & -- & $>6.7$ & $>6.7$ & $>0.5$ \\
\hline
\end{tabular}\\
$^a$ Beam-averaged CO, SO, and SiO column densities in the jet ($N_{\rm CO}$, $N_{\rm SO}$, $N_{\rm SiO}$) are derived from the CO, SO, and SiO line intensity integrated on the high-velocity range at the position of the blue-shifted and red-shifted SiO emission knots closest to the driving sources, B and R (see Table \ref{tab:fluxes} and the spectra in Fig. \ref{fig:spec1}). We assume LTE, optically thin emission at $T_{\rm K}$=100 K. The lower and upper limits reported in the table refer to the cases where the CO ($2-1$) and/or SiO ($5-4$) emission is (or could be) optically thick based on the $T_{\rm B}^{\rm SiO}$/$T_{\rm B}^{\rm CO}$ ratio (see Sect. \ref{sect:jet-abundances}).\\
$^b$ The abundances of SO and SiO are derived as $X_{\rm SO}$ = $X_{\rm CO}$ $\times$ $N_{\rm SO}$/$N_{\rm CO}$\, and $X_{\rm SiO}$ = $X_{\rm CO}$ $\times$ $N_{\rm SiO}$/$N_{\rm CO}$\, assuming $X_{\rm CO} = N_{\rm CO}/N_{\rm H_2} = 10^{-4}$.\\
$^c$ Mass-loss and momentum rates ($\dot{M}_{\rm jet}$, $\dot{P}_{\rm jet}$), and the jet mechanical luminosity ($L_{\rm jet}$) are derived from $N_{\rm CO}$\, assuming $V_{\rm jet}$ $= 100$ km\,s$^{-1}$.\\
}
\end{table*}
\section{Discussion}
\label{sect:discussion}
\subsection{Considering whether outflows and jets could be ubiquitous at the protostellar stage}
As summarized in Sect. \ref{sect:detection-rate}, Table \ref{tab:jet-occurrence}, and Figure \ref{fig:jet-occurrence}, outflow emission in CO ($2-1$) is detected in all the Class 0 and Class I protostars in our sample (21 and 3 sources, respectively), indicating that the outflow phenomenon is ubiquitous in the CALYPSO sample of protostars.
High-velocity collimated jets emitting in the SiO ($5-4$) line are detected in 67\% of the sample, which is a higher detection rate than previously found.
For example, \citet{gibb04} found that only 45\% of the Class 0 sources in their sample are associated with SiO jets, likely due to the lower angular resolution and sensitivity of their observations.
SO ($5_6-4_5$) jet and outflow emission is detected in 52\% of the Class 0 protostars and 67\% of the Class I sources, and for all of them, the spatio-kinematical distribution of the SO emission follows that of SiO (see the position-velocity diagrams in Fig. \ref{fig:PV-block1} and line spectra in Fig. \ref{fig:spec1}).
In five more sources (L1527, GF9-2, L1157, L1448-2A, SerpS-MM18b), that is, 24\% of the Class 0 protostars, the detected SO ($5_6-4_5$) emission is compact and elongated perpendicularly to the jet or outflow PA with a velocity gradient along the same direction (perpendicular to the jet or outflow).
This indicates that the SO emission does not probe ejection but, instead, could trace the inner flattened envelope, the disk, or the accretion shock at the disk-envelope interface \citep[e.g., ][]{sakai14a,maret20}. Unfortunately, with the exception of L1527, whose SO emission has been analyzed in detail by \citet{maret20}, for the other sources the SO emission is too weak to allow for a study of the gas kinematics to assess its origin. Follow-up observations at higher angular resolution and sensitivity would be crucial for achieving an understanding of the origin of SO ($5_6-4_5$) in these sources.
Therefore, based on our survey, we find that SiO emission unambiguously probes the jet, while SO may also be a probe of the inner envelope and disk.
Despite the small size of our sample, Fig. \ref{fig:jet-occurrence} indicates that the detection rate of jets increases with the source internal luminosity up to $\sim 80\%$ for sources with $L_{\rm int} >1$ L$_{\odot}$.
The internal luminosity, $L_{\rm int}$, is a probe of the mass accretion rate onto the source, $\dot{M}_{\rm acc}$, and the mass ejection rate is expected to be proportional to the mass accretion rate ($\dot{M}_{\rm jet} \sim 0.01-0.3 \dot{M}_{\rm acc}$, according to magnetohydrodynamical models of the jet launch and observations of Class II sources, e.g., \citealt{ferreira06,hartigan95}). Hence, the fact that the jet detection rate increases with $L_{\rm int}$\, confirms the correlation between accretion and ejection rates found for more evolved sources and suggests that the jets driven by the less-accreting and less-ejecting protostars in our sample may remain undetected at the sensitivity level of our observations.
Finally, it is interesting to note that high-velocity collimated emission in all three jet tracers (CO, SO, and SiO) is detected for the protostellar candidates, IRAS4B2 and SerpM-SMM4b, and only in CO for the candidate SerpS-MM18b. This strongly supports their identification as protostars.
\subsection{Jet velocity}
The median radial velocity of the SiO jets detected in the CALYPSO sample, as derived at the position of the innermost SiO knots, is about 30 km\,s$^{-1}$\, (see Fig. \ref{fig:distri-vrad}).
In contrast, \citet{2018A&A...609A..87N} find a median radial velocity of 70 km\,s$^{-1}$\, for the high-velocity components (HVC) seen in the [OI] line at 6300~\AA\, in a large sample of Class II sources. The difference in velocity between protostellar and Class II jets may depend on the different tracers (SiO and [OI], respectively). However, if we assume that both SiO and [OI] probe the collimated jet launched directly by the disk and not entrained gas,
this difference indicates that the velocity of the jet increases from the protostellar stage to the Class II stage by a factor of about $2$. Because of the selection bias of the HVC in Class II sources (an [OI] jet seen on the plane of the sky would not be considered a jet component), this number should be considered an upper limit. In the following, we assume that the CALYPSO sample probes a Class 0 population that will eventually form Class II objects with an IMF similar to that of the sample analyzed in \citet{2018A&A...609A..87N}. Adopting the model of \citet{bontemps96} to describe the growth of a protostar, and assuming that the median age of the Class 0 protostars in our CALYPSO sample is half of the duration of the Class 0 phase ($\sim 3 \times 10^4$~yr, according to this model), the median mass of Class 0 protostars is $0.3 M_{*}$, where $M_{*}$ is the mass of the final star. The Keplerian velocity at a given distance from the central object would then increase by a factor of $\sim 1.8$ between samples of Class 0 and Class II objects. According to disk wind models of the jet launch, the velocity of the jet scales with the Keplerian velocity at the launching radius \citep[see, e.g.,][]{ferreira06}. In this scenario, the increase in jet velocity with age is consistent with a jet launched from the same region of the disk around a central object of increasing mass.
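The factor of $\sim1.8$ quoted above is simply the Keplerian scaling $v_{\rm K} \propto \sqrt{M_*}$ at fixed launching radius, evaluated between the median Class 0 mass ($0.3\,M_*$) and the final stellar mass ($M_*$); a one-line arithmetic check:

```python
import math

# Keplerian speed v_K = sqrt(G M / r): at fixed launching radius r,
# the jet speed scales with the square root of the central mass.
mass_ratio = 1.0 / 0.3          # final mass over median Class 0 mass
factor = math.sqrt(mass_ratio)  # ~1.83, close to the observed increase of ~2
```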
Moreover, for at least 33\% of the detected SiO bipolar jets, the radial velocities of the two lobes differ by a factor of $1.4-2.1$ (and $7.8$ for SerpM-S68Nb). The occurrence and degree of velocity asymmetries inferred for the protostellar jets in the CALYPSO sample are in agreement with what was found for optical jets from T Tauri stars. For example, \citet{hirth94} find that about 53\% of their sample of 15 bipolar optical jets show a difference in velocity between the two jet lobes of a factor of $1.4-2.6$ \citep[see also, ][]{hirth97,melnikov09,podio11}.
\subsection{Jet collimation}
The CALYPSO maps and position-velocity diagrams of the detected jets, and the estimates of their widths (see Figs. \ref{fig:jets1}, \ref{fig:PV-block1}, \ref{fig:jet-width-main}, and \ref{fig:jet-width-hv}) show, on a statistical basis, that protostellar molecular flows exhibit an onion-like structure with nested components of different velocities that are highlighted by different lines: CO ($2-1$) probes slow and wide angle outflows ($\sim 8-35\degr$ opening angle), while the high-velocity jet is collimated and best traced by SiO ($\sim 10\degr$ opening angle) but is also seen in CO and SO when high-velocity components are selected. Interestingly, higher angular resolution observations toward the HH 212 Class 0 system show a similar onion-like structure on a smaller scale (<200~au) with SiO tracing a collimated jet launched from $\sim 0.3~$au, and SO tracing a broader and slower outflow, possibly associated with an MHD disk wind launched from a larger radius in the disk \citep{lee17b,2017A&A...607L...6T,lee18b}. In the latter studies, rotation signatures detected in both the jet and the outflow were used to derive the launching zone. We note, however, that our CALYPSO resolution is too low to uncover rotation signatures and derive the launching radius.
The jet width, $2R_{\rm jet}$, derived from the spatial profile of the SiO ($5-4$) line can be compared with the results obtained for atomic jets driven by Class I and Class II sources. The lowest dashed green curve in Figure \ref{fig:jet-width-main} shows the typical collimation inferred for Class I and Class II high-velocity atomic jets on the same scale probed by our observations, that is, $300-1500$ au, with an opening angle of $\sim 3\degr$ and a width of $\sim80$ au at $800$ au distance \citep[e.g., ][]{agraamboage11,dougados00,reipurth00,woitas02}. Figures \ref{fig:jet-width-main} and \ref{fig:jet-width_all} show that, except for IRAS4B1, SVS13B, and L1448-NB, most of the high-velocity SiO jets in our sample appear systematically broader than Class I and Class II atomic jets. This could be due to the lower resolution of the CALYPSO observations ($\sim 0\farcs4-1.1\arcsec$, i.e., $\sim120-480$ au at $d=293-436$ pc) compared to studies of Class I and II sources ($\sim 0\farcs1$, i.e., $14-46$ au at $d=140-460$ pc), as well as to projection effects and to the lower temperature regime probed by SiO emission with respect to the atomic lines, which makes SiO sensitive to emission from bow-shock wings further out than optical lines in atomic jets. Higher angular resolution observations are required to probe the collimation properties of SiO jets down to the $<100$ au scales probed for atomic jets from Class II sources. To date, jet width measurements on $<100$ au scales have been obtained only for two protostellar jets observed at a few au resolution with ALMA, that is, HH 212 \citep{lee17b} and B335 \citep{bjerkeli19}; these studies find that the jet width is comparable to, or even smaller than, what was found for Class II atomic jets \citep[see, e.g., the recent review by ][]{lee20}.
\subsection{Origins of gas phase SiO}
The SiO abundances inferred for the high-velocity jets in the CALYPSO sample are summarized in Table \ref{tab:jets-energetics} and Figure \ref{fig:jet-abundances}.
For eight out of the twelve SiO jets detected at $>10\sigma$ in the integrated maps, we infer a lower limit on the SiO abundance, which ranges from $>2.4 \times 10^{-7}$ up to $> 5 \times 10^{-6}$. These values are larger than those estimated by previous low-resolution observations \citep[e.g., ][]{gibb04}, and they require that from $>1\%$ up to $>10\%$ of the elemental silicon be released into the gas phase, assuming the solar silicon abundance, [Si/H]$_{\odot} \sim 3.5 \times 10^{-5}$ \citep{holweger01}.
Two types of scenarios have been invoked to account for the release of silicon into the gas phase, and the subsequent formation of SiO, in jets \citep[see, e.g., ][]{cabrit12}:
(i) shock processing of silicate grains in a dusty wind launched from the disk \citep[e.g., ][]{panoglou12} and
(ii) silicon release at the base of dust-free jets launched from within the dust-sublimation radius of silicates \citep[e.g., ][]{1991ApJ...373..254G,tabone20}.
In the following, we discuss these two scenarios by comparing the SiO abundances inferred for the jets in the CALYPSO sample with what is predicted by models of shocks and dust-free jets.
In the first scenario, SiO jets trace dusty material, either in a magneto-hydrodynamical disk wind launched from beyond the dust sublimation radius of silicates ($R_{\rm sub} \simeq 0.15\,{\rm au} \sqrt{L_{\rm bol}/L_{\odot}}$, i.e. $\sim 0.5$~au for a bolometric luminosity $L_{\rm bol} \sim 10$ $L_{\odot}$, \citealt{2016A&A...585A..74Y}) or envelope material entrained by the jet. Most of the silicon is initially locked in the grain cores but may be released in the gas phase in the shocks occurring along the jet. Models of C-type magnetized shocks show that due to ion-neutral decoupling, grain cores are sputtered and silicon is released into the gas phase \citep{1997A&A...321..293S}. Subsequent reactions with O$_2$ or OH produce SiO in the warm post-shock region. Stationary C-shock models predict an SiO abundance ($X_{\rm SiO} = N_{\rm SiO}/N_{\rm H_2}$) in the range of $8 \times 10^{-8} \le X_{\rm SiO} < 6 \times 10^{-7}$ for shock velocities of $20-50$ km\,s$^{-1}$\, and pre-shock densities of $n_{\rm H} = 10^4 - 10^6$ cm$^{-3}$ \citep{2008A&A...482..809G}. The upper edge of the predicted values is at the lower edge of the lower limits on $X_{\rm SiO}$\, inferred for the jets in the CALYPSO sample ($X_{\rm SiO}$\, from $>2.4 \times 10^{-7}$ up to $> 5 \times 10^{-6}$). Moreover, the timescale to sputter Si from grains and produce a large abundance of SiO ($> 10^{-7}$) in C-type shocks is $>100$ years, unless the pre-shock density is high ($n_{\rm H} = 10^6$ cm$^{-3}$). This timescale may be too long to account for the observed SiO abundances in the knots close to the source, which have short dynamical timescales ($< 100$ years).
For example, the SiO abundance in the L1157 jet is $> 10^{-6}$, which implies that at least $2\%$ of elemental silicon is released in the gas phase in the knot B located at a distance of $\sim 80$~au from the driving source (see Tables \ref{tab:fluxes} and \ref{tab:jets-energetics}), indicating that this large abundance is reached in $\sim 5$~yr if the jet travels at $\sim 100$ km\,s$^{-1}$\, \citep{podio16}. On the other hand, C-type shock models assuming that $5\%-10\%$ of the silicon is initially locked in the dust mantles in the form of SiO \citep{gusdorf08b}, or models including dust shattering and grain vaporization at high density ($n_{\rm H} \ge 10^{5}$ cm$^{-3}$, \citealt{guillet11}) enhance the release of silicon into the gas phase (up to $\sim 8\%$) and the formation of SiO (up to an abundance of $4 \times 10^{-7} - 10^{-6}$) on a timescale of $\le 10$ years.
Dust processing could efficiently produce SiO also in J-type shocks, though requiring higher shock velocities and high magnetization \citep{guillet09}. In both C-type and J-type shock models, the fraction of silicon released into the gas phase and converted into SiO depends crucially on the shock velocity and pre-shock density. However, whatever the considered processes and shock parameters (pre-shock density and shock velocity), the lower limits on the SiO abundance inferred for the jets in our sample (from $>2.4 \times 10^{-7}$ up to $> 5 \times 10^{-6}$) are at the upper edge or larger than the values predicted by shock models ($10^{-6}$ at most).
Alternatively, if SiO jets are launched within the dust-sublimation radius of silicates, the majority of silicon is released into the gas phase at the base of the jet. However, pioneering models of dust-free stellar winds have shown that the abundance of molecules, such as SiO and CO, can be severely reduced when a far ultra-violet (FUV) field is present for two reasons: (i) in the absence of dust, the FUV field can more easily penetrate the unscreened flow and photodissociate molecules; (ii) H$_2$ formation on grains, the starting point of molecule synthesis, is severely reduced \citep{1991ApJ...373..254G}. On the other hand, recent models of laminar dust-free disk winds show that if the jet is launched from the disk within the dust-sublimation radius, CO and SiO may be abundant if the mass-loss rate is $\gtrsim 10^{-6}$ M$_{\odot}$\,yr$^{-1}$\, and the temperature is $T \gtrsim 800$~K \citep{tabone20}. In the optimal case, SiO is the main carrier of elemental Si, with an SiO/CO ratio as high as $0.1$. For lower mass-loss rates, the abundance of SiO and CO is predicted to drop by several orders of magnitude. Our observations show that two of the SiO-rich jets (SerpM-SMM4b and L1448-C) also have high observed mass-loss rates, as derived from CO, in agreement with the predictions of dust-free disk winds. However, even jets with observed mass-loss rates $< 10^{-6}$ M$_{\odot}$\,yr$^{-1}$\, (namely, SerpM-S68Nb, IRAS4B1, SVS13B, L1448-NB, L1157, and SerpM-S68N) show large SiO abundances ($X_{\rm SiO}$\, from $> 5 \times 10^{-7}$ to $> 5 \times 10^{-6}$). This could be due to the presence of a non-vanishing fraction of surviving dust or to the impact of shocks, which are expected to increase the SiO abundance by compressing and heating the gas \citep{tabone20}.
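The dust-sublimation radius separating the two scenarios, $R_{\rm sub} \simeq 0.15\,{\rm au}\,\sqrt{L_{\rm bol}/L_{\odot}}$, is trivial to evaluate; the sketch below (function name ours) reproduces the $\sim0.5$~au quoted for $L_{\rm bol} \sim 10$ $L_{\odot}$.

```python
import math

def r_sub_au(l_bol_lsun):
    """Dust sublimation radius of silicates, R_sub ~ 0.15 au * sqrt(Lbol/Lsun)."""
    return 0.15 * math.sqrt(l_bol_lsun)

# r_sub_au(10.0) ~ 0.47 au: jets launched inside this radius start dust-free.
```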
In conclusion, our finding, based on a large sample of sources, that protostellar jets are SiO-rich supports the pioneering result of \citet{cabrit12}, who found an SiO abundance accounting for up to 40\% of the elemental silicon in the jet from HH 212. In this context, dust-free jets are a viable scenario to explain SiO-rich jets.
\subsection{Ejection and accretion properties}
\label{sect:disc-ejec-accr}
The estimated mass-loss rates, $\dot{M}_{\rm jet}$, and jet mechanical luminosities, $L_{\rm jet}$, are key to the understanding of the role of jets in the energy budget of the star-formation process. To date, these parameters have been estimated only for a handful of Class 0 protostellar jets \citep[e.g., ][and references therein]{lee20}.
In the following, we compare the mass-loss rates estimated for the Class 0 sources in the CALYPSO sample with the $\dot{M}_{\rm jet}$\, derived for Class II sources and with estimates of the mass accretion rates.
There is increasing observational evidence that accretion and ejection in young stars are episodic \citep[e.g., ][]{audard14,plunkett15b}. However, an empirical correlation of the time-averaged CO outflow force with the quiescent (current) accretion luminosity is observed in Class 0 sources \citep[e.g., ][]{bontemps96}. Similarly, a correlation between the time-averaged mass-loss rates of atomic jets and the accretion rates has been observed for T Tauri stars \citep[e.g., ][]{hartigan95,ellerbroek13}. Therefore, even if outbursts dominate the integrated momentum injected in outflows, it appears that the time-averaged results correlate with the quiescent level of accretion. The reason for such a behavior is unclear. Simulations proposed to explain the knotty structure of jets show that even if the mass-loss rate of the jet is constant, the knots can be produced by a small periodic variation of the ejection velocity \citep[e.g., ][]{raga90,lee04}.
Our goal is to use the empirical correlations between ejection and accretion rates as a proxy for investigating potential similarities or differences in the ejection mechanism with source age by making a comparison with similar correlations found for other samples.
Figure \ref{fig:jet-energetics} shows that the two-sided jet mass-loss rates, $\dot{M}_{\rm jet}$, inferred from high-velocity CO emission, range from $7 \times 10^{-8}$ M$_{\odot}$\,yr$^{-1}$\, up to $3 \times 10^{-6}$ M$_{\odot}$\,yr$^{-1}$\, for $L_{\rm int}$\,$\sim 1-50$ L$_{\odot}$. These values are in agreement with those found for a few other molecular jets from Class 0 protostars \citep[e.g., ][ and references therein]{lee20} and are larger by up to five orders of magnitude than the $\dot{M}_{\rm jet}$\, values estimated for atomic jets driven by Class II sources (from $\sim 10^{-11}$ to a few $10^{-8}$ M$_{\odot}$\,yr$^{-1}$, e.g., \citealt{hartigan95,coffey08,podio12}). This indicates that the ejection rate decreases from a few $10^{-6}$ to $10^{-11}$ M$_{\odot}$\,yr$^{-1}$\, as the source evolves from the Class 0 to the Class II stages and accretion proceeds at a slower pace.
Observational studies of Class II sources compare the jet mass-loss rate, inferred from the luminosity of atomic emission lines, with the mass accretion rate, derived from veiling or hydrogen emission lines \citep[e.g., ][]{hartigan95,muzerolle98c} and show that the ejection and accretion rates are correlated; in particular, $\dot{M}_{\rm jet}$/$\dot{M}_{\rm acc}$\, varies between 0.01 and 0.3 \citep{hartigan95,coffey08,cabrit07a,ellerbroek13}.
To compare the ejection properties of the Class 0 protostellar jets detected by CALYPSO with those of the Class II atomic jets, we derive the mass accretion rates of the protostars in our sample from their internal luminosity. Assuming that the source internal luminosity is due to the gravitational energy released by the accretion onto the protostar, that is, $L_{\rm int} = L_{\rm acc}$, the source mass accretion rate ($\dot{M}_{\rm acc}$) can be estimated as $\dot{M}_{\rm acc} = L_{\rm int} \, R_* / (G \, M_*)$.
We assume that the stellar radius is $R_* = 2$ R$_{\odot}$\, \citep{stahler88} and for the protostellar mass we assume a range of values $M_* = 0.05-0.25$ M$_{\odot}$\, \citep[e.g., ][]{yen17}. This range of masses encompasses the kinematic estimates obtained for a few Class 0 protostars, including three sources in our sample (L1157, IRAS4A2, L1448-C), from the fit of the rotation curves of their disks \citep[e.g., ][]{choi10,kwon15,lee20}. The estimated mass accretion rates are highly uncertain because they depend strongly on the protostellar properties, which are unknown for most of the sources in our sample, and because accretion may be episodic and characterized by accretion bursts. However, Fig. \ref{fig:jet-energetics} shows that assuming a low protostellar mass, $M_* = 0.05$ M$_{\odot}$, the estimated mass-loss rates are 10\%-50\% of the mass accretion rates for two-sided jets, while for the monopolar jet IRAS2A1 and the jets with one "CO-poor" lobe the mass-loss rate is 1\%-10\% of $\dot{M}_{\rm acc}$. The exception is SVS13B, where both jet lobes are "CO-poor", for which we find $\dot{M}_{\rm jet} <0.01 \dot{M}_{\rm acc}$. If we assume that the protostellar mass is $M_* = 0.25$ M$_{\odot}$, the inferred $\dot{M}_{\rm acc}$\, values are lower by a factor of 5. Therefore, for three jets (SerpM-SMM4b, IRAS4A1, and IRAS4A2), we find $\dot{M}_{\rm jet} \ge \dot{M}_{\rm acc}$, while for the rest of our sample we find $\dot{M}_{\rm jet} \sim 0.01-0.5 \dot{M}_{\rm acc}$.
Despite the large spread of $\dot{M}_{\rm jet}$\, values (by about one order of magnitude at a given $L_{\rm int}$), our estimates indicate that the jet mass-loss rate increases with the protostellar internal luminosity between $1$ and $50$ L$_{\odot}$, hence, with the mass accretion rate (between $\sim 1.3 \times 10^{-6}$ M$_{\odot}$\,yr$^{-1}$\, and $\sim 6 \times 10^{-5}$ M$_{\odot}$\,yr$^{-1}$, respectively, for $M_* = 0.05$ M$_{\odot}$). Moreover, the correlation between the mass accretion rate and the mass ejection rate holds from the early protostellar stage probed by our CALYPSO survey ($10^4$ years) to evolved T Tauri stars of 1 Myr \citep[e.g., ][]{hartigan95}.
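As a quick numerical cross-check of the values quoted above, the relation $\dot{M}_{\rm acc} = L_{\rm int} \, R_* / (G \, M_*)$ can be evaluated directly. The sketch below is illustrative only (SI constants; stellar parameters as assumed in the text):

```python
# Illustrative check of M_acc = L_int * R_* / (G * M_*), in SI units.
G = 6.674e-11        # m^3 kg^-1 s^-2
L_SUN = 3.828e26     # W
M_SUN = 1.989e30     # kg
R_SUN = 6.957e8      # m
YR = 3.156e7         # s

def m_acc(l_int_lsun, m_star_msun, r_star_rsun=2.0):
    """Mass accretion rate in M_sun/yr from the internal luminosity."""
    mdot = (l_int_lsun * L_SUN) * (r_star_rsun * R_SUN) / (G * m_star_msun * M_SUN)
    return mdot * YR / M_SUN

print(m_acc(1.0, 0.05))    # ~1.3e-6 M_sun/yr for L_int = 1 L_sun
print(m_acc(50.0, 0.05))   # ~6e-5 M_sun/yr for L_int = 50 L_sun
print(m_acc(1.0, 0.25) / m_acc(1.0, 0.05))  # M_* five times larger -> Mdot_acc a factor 5 lower
```

The output reproduces the $\sim 1.3 \times 10^{-6}$ and $\sim 6 \times 10^{-5}$ M$_{\odot}$\,yr$^{-1}$ values quoted in the text for $M_* = 0.05$ M$_{\odot}$.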
The total jet power, that is, the sum of the jet mechanical luminosity over the innermost knots along the blue and red lobes (B and R), $L_{\rm jet}$, is 10\%-50\% of the source internal luminosity, $L_{\rm int}$, for most of the jets in our sample (7 out of 12 jets), confirming the results of \citet{lee20}, who collected all the recent studies of six Class 0 protostellar jets from the literature.
Despite the uncertainties affecting our estimates of $L_{\rm jet}$\, mainly due to the uncertainty on the estimated CO column density, and on the assumed jet velocity, compression factor, and CO abundance, the correlation between $L_{\rm jet}$\, and $L_{\rm int}$\, suggests that the gravitational energy released by accretion onto the protostar could be efficiently extracted and converted into mechanical energy transported by the jet.
The exceptions are the jets where one lobe is "CO-poor", as in the case of SerpM-S68N and Nb and of L1448-NB, and the monopolar jet IRAS2A1, which exhibit lower-luminosity jets (i.e., $L_{\rm jet} \ge 0.01-0.1 \, L_{\rm int}$).
Finally, CO is not detected in both lobes of the SVS13B jet, therefore we infer an upper limit on its mechanical luminosity, $L_{\rm jet} <0.02$ L$_{\odot}$, i.e. $< 0.01$ $L_{\rm int}$. This value is much lower than what is derived for all the other jets in the sample, which could be explained by, for instance, an abnormally low CO abundance in this jet.
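The jet mechanical luminosity used in this comparison scales as $L_{\rm jet} = \frac{1}{2}\,\dot{M}_{\rm jet}\,V_{\rm jet}^2$ (as written explicitly in the Conclusions). A minimal sketch, with purely illustrative input values that are not fitted to any specific CALYPSO source:

```python
# Jet mechanical luminosity L_jet = 1/2 * Mdot_jet * V_jet^2, in L_sun.
M_SUN = 1.989e30   # kg
L_SUN = 3.828e26   # W
YR = 3.156e7       # s

def l_jet(mdot_msun_yr, v_jet_kms):
    """Jet power in L_sun from Mdot [M_sun/yr] and V_jet [km/s]."""
    mdot = mdot_msun_yr * M_SUN / YR           # kg/s
    return 0.5 * mdot * (v_jet_kms * 1e3) ** 2 / L_SUN

# Illustrative values: Mdot = 1e-6 M_sun/yr and V_jet = 100 km/s give
# L_jet ~ 0.8 L_sun, i.e. a few tens of percent of L_int for a
# protostar of a few L_sun.
print(l_jet(1e-6, 100.0))
```

The quadratic dependence on $V_{\rm jet}$ makes the assumed jet velocity one of the dominant uncertainties on $L_{\rm jet}$.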
\section{Conclusions}
\label{sect:conclusions}
In this paper, we present a statistical survey of the occurrence and properties of jets and outflows in a sample of 21 Class 0 protostars mainly located in Perseus, Taurus, and Serpens. Our analysis is based on IRAM-PdBI observations of CO ($2-1$), SO ($5_6-4_5$), and SiO ($5-4$) taken in the context of the CALYPSO Large Program.
The main results of our analysis may be summarized as follows:
\begin{itemize}
\item[-] The observed tracers show the following differentiation: SiO ($5-4$) probes the collimated jet; CO ($2-1$) traces wide-angle outflows at low velocity and the collimated jet at high velocities, where it is co-spatial with SiO; SO ($5_6-4_5$) is associated with the jet similarly to SiO in 52\% of the sources (e.g., IRAS4A1, IRAS4A2, L1448-C, SerpS-MM18a, IRAS2A1).
In 25\% of the sources (L1527, GF9-2, L1157, L1448-2A, SerpS-MM18b), the SO emission probes a compact circumstellar region with a small velocity gradient perpendicular to the jet and outflow axis ($|V - V_{\rm sys}| < 3$ km\,s$^{-1}$). This suggests that in these sources SO traces either the inner envelope, the disk, or the accretion shock at the interface between them, as in the case of L1527 \citep{sakai14a,maret20}.\\
\item[-] Collimated high-velocity jets traced by SiO ($5-4$) are detected in 67\% of the sources and 79\% of these also show jet emission in SO ($5_6-4_5$). The detection rate of jets increases with $L_{\rm int}$, which is a probe of the mass accretion rate onto the protostar.
This confirms the expected correlation between the mass accretion and the mass ejection rate (which, in turn, is proportional to the brightness of the emission lines). Hence, the non-detection of jet emission associated with the less-accreting sources could be an observational bias, and deeper observations could demonstrate that jets are ubiquitous in our sample.
We detect for the first time high-velocity collimated SiO and SO jets in IRAS4B1, L1448-NB, and SerpS-MM18a, and in two protostellar candidates (IRAS4B2, and SerpS-MM18b, with the latter only in CO), supporting their identification as Class 0 protostars.\\
\item[-]
Slow outflow emission in CO ($2-1$) is detected in 100\% of the 21 Class 0 protostars suggesting that ejection phenomena are ubiquitous at the protostellar stage. We report for the first time the detection of a CO outflow for SerpS-MM22 and SerpS-MM18b.\\
\item[-]
The median radial velocity of the detected SiO protostellar jets is $30$ km\,s$^{-1}$, that is, about two times smaller than the median radial velocity of atomic jets driven by Class II sources \citep{2018A&A...609A..87N}. Assuming that the velocity of the jet scales with the Keplerian velocity at the launching point, the increase of the jet velocity with age is consistent with a jet launched from the same region of the disk around a central object of increasing mass.
Moreover, at least 33\% of the detected SiO bipolar jets show a velocity asymmetry between the two lobes by a factor of $1.3-2.1,$ with the exception of SerpM-S68Nb, which shows a larger difference (by a factor of $7.8$). The occurrence and degree of velocity asymmetries inferred for the protostellar jets in the CALYPSO sample are in agreement with what was found for optical atomic jets from T Tauri stars \citep{hirth94}. The similarity in knot spacings and velocity asymmetries between Class 0 and Class II jets suggests that the jet launching mechanism in protostars of $10^4$ yr might be similar to that in Class II sources ($10^6$ yr).\\
\item[-]
We find that 50\% of the 12 SiO jets detected in our sample are non-straight, which might indicate precession or wiggling.\\
\item[-] The observed protostellar flows have an onion-like structure: SiO ($5-4$) emission (the "jet probe") is more collimated than SO ($5_6-4_5$) emission, which in turn is narrower than CO ($2-1$) (the "outflow probe"), with median opening angles of 10$^{\circ}$, 15$^{\circ}$, and 25$^{\circ}$, respectively. However, high-velocity CO emission is as collimated as SiO. This indicates that low-velocity CO probes entrained material in the outflow, while high-velocity CO traces the collimated jet.
At scales larger than $300$ au, most of the high-velocity SiO jets are broader ($\sim 4\degr-12\degr$ collimation) than Class I and Class II atomic jets ($\sim 3\degr$ collimation). This could be due to projection effects as well as to contamination by the bow-shock wings at the low temperature probed by SiO.\\
\item[-]
We find that SiO ($5-4$) is optically thick in 26\% of the inner jet knots, and possibly thick in another 56\%. At least 67\% (8/12) of the jets are SiO rich ($X_{\rm SiO}$\, goes from $> 2.4 \times 10^{-7}$ to $> 5 \times 10^{-6}$), which requires that $>1\%-10\%$ of silicon is released in the gas phase, confirming the pioneering result by \citet{cabrit12} for the HH 212 jet.
This is difficult to explain in a scenario where dusty material launched from outside the dust sublimation radius is processed in shocks, especially in the inner knots which have short dynamical timescales ($\le 10$ years) \citep[see, e.g., shock models by][]{gusdorf08a,gusdorf08b}. On the other hand, formation of SiO in dust-free jets can be a viable scenario to explain SiO-rich jets for mass-loss rates $\ge 10^{-6}$ M$_{\odot}$\,yr$^{-1}$.\\
\item[-] The mass-loss rates of the detected Class 0 molecular jets, $\dot{M}_{\rm jet}$, as derived from high-velocity CO emission, range from $7 \times 10^{-8}$ M$_{\odot}$\,yr$^{-1}$\, up to values of $\sim 3 \times 10^{-6}$ M$_{\odot}$\,yr$^{-1}$\, for internal luminosities of the driving protostars of $\sim 1-50$ L$_{\odot}$. These $\dot{M}_{\rm jet}$\, values are larger by up to five orders of magnitude than those measured for atomic jets driven by Class II sources (from $\sim 10^{-11}$ to a few $10^{-8}$ M$_{\odot}$\,yr$^{-1}$, \citealt{hartigan95,coffey08,podio12}). Moreover, despite the uncertainties affecting the estimates of the mass accretion rates for Class 0 protostars, due to the unknown protostellar mass and radius, we find that $\dot{M}_{\rm jet} \sim 0.1 - 0.5 \dot{M}_{\rm acc}$ for most of the jets, with the exception of the "CO-poor" and monopolar jets for which $\dot{M}_{\rm jet} \sim 0.01 - 0.1 \dot{M}_{\rm acc}$. These $\dot{M}_{\rm jet}$/$\dot{M}_{\rm acc}$\, ratios are similar to those found for atomic Class II jets ($\dot{M}_{\rm jet} \sim 0.01 - 0.3 \dot{M}_{\rm acc}$, \citealt{hartigan95,cabrit07a,coffey08,podio12}) and indicate that the correlation between ejection and accretion holds over the whole star-formation process, from protostellar objects of 10$^4$ years to pre-main sequence stars of 1 Myr.\\
\item[-] The total jet power ($L_{\rm jet} = 1/2 \times \dot{M}_{\rm jet} \times V_{\rm jet}^2$) is 10\%-50\% of the source internal luminosity for $\sim 60\%$ of the jets in the sample. This indicates that the gravitational energy released by accretion onto the protostar could be efficiently extracted and converted into mechanical energy in the jet. For "CO-poor" and monopolar jets the jet power is lower ($\ge 1\%-10\%$ of the internal luminosity, with the exception of SVS13B, for which $L_{\rm jet } < 1\% L_{\rm int}$).\\
\end{itemize}
\begin{acknowledgements}
We thank the IRAM staff for their support in carrying out the CALYPSO observations and the INSU “Action Spécifique ALMA” for their financial support to the CALYPSO collaboration. We are grateful to the anonymous referee for their instructive comments and suggestions.
This work was also supported by the PRIN-INAF 2016 "The Cradle of Life - GENESIS-SKA (General Conditions in Early Planetary Systems for the rise of life with SKA)", and the European MARIE SKŁODOWSKA-CURIE ACTIONS under the European Union's Horizon 2020 research and innovation program, for the Project “Astro-Chemistry Origins” (ACO), Grant No 811312. L.P. acknowledges the European Union FP7, GA No. 267251. B.T. acknowledges support from the research program Dutch Astrochemistry Network II with project number 614.001.751, which is (partly) financed by the Dutch Research Council (NWO).
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\label{sec:intro}
The {\it flux} of cosmic-ray (CR) electrons and positrons ($e^-$ and $e^+$) has been measured with unprecedented precision over more than four orders of magnitude of energy.
One of the most accurate measurements of the single CR $e^-$ and $e^+$ fluxes and of the inclusive ($e^+ + e^-$) flux is provided by
AMS-02 on board the International Space Station (ISS), between 0.1 GeV and 1 TeV, with errors reaching the few percent level
\citep{2014PhRvL.113l1101A,2014PhRvL.113l1102A,Aguilar:2014fea}.
The {\it Fermi} Large Area Telescope (LAT) has collected almost seven years of $e^+ + e^-$ events in the 7~GeV-2~TeV energy range \citep{2017PhRvD..95h2007A}.
CALET on the ISS, and HESS on the ground, are providing $e^+ + e^-$ data up to 3 TeV and 30 TeV energy, respectively \citep{PhysRevLett.119.181101, HESSICRC17, PhysRevLett.120.261102}.
The DAMPE Collaboration has recently reported the direct detection of a break at around 1 TeV in the $e^+ + e^-$ flux measured from 25 GeV to 4.6 TeV \citep{Ambrosi:2017wek}.
Many theoretical interpretations have been proposed for the AMS-02 lepton data, invoking sources of $e^{+}$ and/or $e^-$ in the Interstellar Medium (ISM),
from Pulsar Wind Nebulae (PWNe) and Supernova Remnants (SNRs)\cite{Kobayashi:2003kp,Pohl:2012xs, 2009APh....32..140G, Gaggero:2013nfa,Delahaye:2014osa,DiMauro:2014iia,Manconi:2016byt}, and also
in the context of annihilation and decay of dark matter particles \cite{Bergstrom:2013jra,DiMauro:2015jxa}.
In addition to the flux, the LAT team has also published the spectrum of upper limits on the $e^+ + e^-$ {\it dipole anisotropy} \cite{Abdollahi:2017kyf}.
Since the typical propagation length of TeV $e^\pm$ is smaller than $\sim 0.3$~kpc, $e^+$ and $e^-$ detected at TeV energies are most probably emitted from local sources, leaving a possible signature in the dipole anisotropy \cite{Manconi:2016byt}.
By contrast, nuclei suffer mainly from diffusion rather than energy losses, so the hadronic flux from local sources is typically spread out and lies below the cumulative contribution of all Galactic sources.
The contribution from the local source candidates is usually associated with high uncertainties, primarily connected to the properties of the accelerated and emitted $e^-$ and $e^+$.
Moreover, the completeness of current catalogs, such as those of SNRs, is assessed by means of the observed surface brightness (see e.g. \cite{2015MNRAS.454.1517G}), thus leaving open the possibility that nearby and
very old sources may contribute to the flux at Earth even if they are no longer visible at any energy of the electromagnetic band.
A strategy to constrain the source contributions of local known sources is to model their multi-wavelength emission and to connect it to the emitted CRs.
For example, the lepton emission from sources embedded in a magnetic field, such as $e^-$ from SNRs, can be connected with their synchrotron emission at {\it radio} frequencies
(see \cite{2010A&A...524A..51D,DiMauro:2014iia,Manconi:2016byt} and references therein).
In addition, the most recent experimental upper bounds on the dipole anisotropy could set further limits on the properties of local and dominant sources.
In the present paper we use this strategy to quantify the contribution of local known sources, in particular of two SNRs which are widely considered as the main candidates to contribute significantly to the high-energy part of the $e^-$ flux at Earth
(often measured cumulatively through $e^-+e^+$), namely Vela and Cygnus Loop, see e.g. \cite{Kobayashi:2003kp}.
For the first time, we present a multi-component model that explains the $e^+$ and $e^-+e^+$ fluxes from five experiments and in a wide energy range, and that is simultaneously compatible with the upper bounds on the dipole anisotropy and the radio emission from the most intense and closest SNRs.
The paper is structured as follows.
\reply{Our model for the cosmic-ray electrons from SNRs is outlined in Sec.~\ref{sec:model}.} In Sec.~\ref{sec:radio} the constraints imposed by radio data on our sample of local SNRs are presented. The constraints imposed by $e^-+e^+$ and dipole data are discussed in Sec.~\ref{sec:flux} and Sec.~\ref{sec:dipole}, respectively. A model combining the multi-wavelength data for local SNRs that explains the most recent flux and dipole data is presented in Sec.~\ref{sec:multiw}, before concluding in Sec.~\ref{sec:conc}.
\section{\label{sec:model}Cosmic-ray electrons from SNRs}
\reply{Cosmic-ray $e^\pm$} can be injected \reply{in the interstellar medium (ISM)} by shocked stellar
environments - SNRs as well as PWNe - according to the first order Fermi acceleration mechanism
(for a comprehensive review on the SNR paradigm for Galactic CRs see \cite{Blasi:2013rva} and references therein).
\reply{We focus here on SNRs.}
For a detailed treatment of the injection of $e^-$ by SNRs and their propagation in the Galaxy we refer to \cite{2010A&A...524A..51D, Manconi:2016byt}.
\reply{We summarize here the basics of our model, along with a new treatment for the injection of $e^-$ by SNRs and for the synchrotron radio emission from known SNRs. }
\subsection{Injection of cosmic-ray electrons from SNRs into the ISM}
The details of the release mechanism of $e^-$ from SNRs are poorly known and still under debate \cite{2010APh....33..160C, 2012MNRAS.427...91O, Blasi:2013rva, 2009MNRAS.396.1629G}, and could affect the properties of the escaping $e^-$, above all the energy spectrum.
\reply{We implement here two different models, the burst-like injection and the evolutionary model. }
\reply{The injection of $e^-$ accelerated by SNRs is commonly described through a \textit{burst-like} approximation \cite{2010A&A...524A..51D},
in which all the $e^-$ are released in the ISM at a time equal to the age of the source. Under this hypothesis,} the energy spectrum $Q(E)$ of accelerated $e^-$ can be described by the function
\begin{equation}
Q(E)= Q_{0} \left( \frac{E}{E_0}\right)^{- \gamma} \exp \left(-\frac{E}{E_c} \right),
\label{eq:Q_E}
\end{equation}
where $Q_{0}$ is in units of GeV$^{-1}$, $E_c$ is a cutoff energy and $E_0= 1$~GeV.
Given the injection spectrum in Eq.~\ref{eq:Q_E}, the total energy emitted in $e^-$ from an SNR (or in $e^\pm$ from a PWN) in units of GeV (or erg) can be obtained as (see \cite{2010A&A...524A..51D})
\begin{equation}
E_{\rm tot} = \int _{E_1} ^\infty dE \, E \,Q(E) \,,
\label{eq:Etot}
\end{equation}
where we fix $E_1 = 0.1$ GeV. The normalization of the spectrum in Eq.~\ref{eq:Q_E} can be constrained
from available catalog quantities for single sources \reply{(see Sec.~\ref{sec:synchro})}, or by using average population properties for the smooth galactic component \cite{2010A&A...524A..51D,DiMauro:2014iia,Manconi:2016byt}.
\reply{The burst-like assumption is considered appropriate at high energy, since $e^-$ of energy $E\gsim100$ GeV are believed to be released within a few kyr from the initial burst, and this timescale is much smaller than the age of the sources typically considered to explain the CR $e^-$ data at Earth \cite{2009MNRAS.396.1629G}.}
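The burst-like spectrum of Eq.~\ref{eq:Q_E} and the energy integral of Eq.~\ref{eq:Etot} can be evaluated numerically. The sketch below uses an arbitrary illustrative normalization $Q_0 = 10^{47}$~GeV$^{-1}$ and a logarithmic trapezoidal grid; for $\gamma = 2$ the integrand is $\propto E^{-1}e^{-E/E_c}$, so $E_{\rm tot}$ grows only logarithmically with the cutoff energy:

```python
import math

def q_e(e, q0, gamma, ec, e0=1.0):
    """Burst-like injection spectrum Q(E) [GeV^-1], Eq. eq:Q_E."""
    return q0 * (e / e0) ** (-gamma) * math.exp(-e / ec)

def e_tot(q0, gamma, ec, e1=0.1, e_max=None, n=20000):
    """Total injected energy [GeV]: integral of E*Q(E) from E1 (Eq. eq:Etot),
    evaluated with the trapezoidal rule on a logarithmic grid."""
    if e_max is None:
        e_max = 50.0 * ec   # the exponential cutoff makes the tail negligible
    lo, hi = math.log(e1), math.log(e_max)
    xs = [math.exp(lo + (hi - lo) * i / n) for i in range(n + 1)]
    # integrand E*Q(E), times the Jacobian E from dE = E dlnE:
    ys = [e * q_e(e, q0, gamma, ec) * e for e in xs]
    h = (hi - lo) / n
    return h * (0.5 * ys[0] + sum(ys[1:-1]) + 0.5 * ys[-1])

# Illustrative: gamma = 2, Ec = 10 TeV, Q0 = 1e47 GeV^-1.
print(e_tot(1e47, 2.0, 1e4))   # GeV (1 GeV = 1.602e-3 erg)
```

For $\gamma = 2$ this numerical integral agrees with the analytic result $Q_0\,E_1(E_1/E_c)$, where $E_1$ here denotes the exponential-integral function.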
\reply{In addition to the burst-like approximation, we also implement the {\em evolutionary model} for the escape of $e^-$ from SNRs as derived in Ref.~\cite{2012MNRAS.427...91O}.
The authors of Ref.~\cite{2012MNRAS.427...91O} assume analytical models for the temporal evolution of the shock radius and its velocity.
They also derive the timescales and the space-energy distributions for trapped and runaway $e^-$.
During the Sedov phase, the escape-limited maximum energy $E_{\rm m, esc}(T)$ below which CRs are still trapped in the SNR is defined as \cite{2012MNRAS.427...91O}:
\begin{equation}\label{eq:Emesc}
E_{\rm m, esc}(T)= E_{\rm knee} \left(\frac{T}{t_{\rm Sedov}}\right)^{-\alpha}
\end{equation}
where $E_{\rm knee}=10^{15.5}$~eV is the energy at the knee of the CR all-particle spectrum, $t_{\rm Sedov}$ is the start time of the Sedov phase, $\alpha$ describes the evolution of the maximum energy during the Sedov phase, and $T$ is the SNR age.
For energies smaller than $E_{\rm m, esc}(T)$, the $e^-$ are still trapped in the SNR, and their energy spectrum is described as:
\begin{align}\label{eq:QTrap}
Q_{\rm trap}(E, T)&= A \left( \frac{E_{\rm m, esc}(T)}{E_0}\right)^{- (\gamma + \beta/\alpha)} \left(\frac{E}{E_{\rm m, esc}(T)}\right)^{-\gamma} \exp \left(-\frac{E}{E_c} \right) \\
&= Q_{\rm 0, trap}(T) \left(\frac{E}{E_{0}}\right)^{-\gamma} \exp \left(-\frac{E}{E_c} \right) \label{eq:QTrap2}
\end{align}
where $A$ is a normalization factor, and $\beta$ describes the evolution of the electron number inside the SNR.
The $Q_{\rm 0, trap}(T)$ is obtained by recasting Eq.~\ref{eq:QTrap} using the explicit form of $E_{\rm m, esc}(T)$ in Eq.~\ref{eq:Emesc}.
The energy spectrum $Q_{\rm esc}(E)$ of runaway $e^-$ is instead described by:
\begin{equation}\label{eq:Qesc}
Q_{\rm esc}(E)= A \left( \frac{E}{E_0}\right)^{- (\gamma + \beta/\alpha)} \exp \left(-\frac{E}{E_c} \right)\quad.
\end{equation}
The $e^-$ at a given energy $E$ escape from the SNR at a time:
\begin{equation}\label{eq:Tesc}
T_{\rm esc}=t_{\rm Sedov} \left( \frac{E}{E_{\rm knee}}\right)^{-\frac{1}{\alpha}}\quad.
\end{equation}}
\reply{The idea of an escape-limited maximum energy $E_{\rm m, esc}(T)$, or equivalently of $T_{\rm esc}$ after which $e^-$ of a given energy can run away from the SNR,
determines the difference between the simpler burst-like approximation and this evolutionary escape model.
Focusing on specific sources, as for example the Vela YZ and Cygnus Loop SNRs ($T=$11.3~kyr and 20~kyr respectively, see below),
this escape model states that CR $e^-$ with energies $E<E_{\rm m, esc}=88$~GeV and $E<E_{\rm m, esc}=17$~GeV are still trapped in Vela YZ and Cygnus Loop, respectively.
Conversely, $e^-$ with energies $E>E_{\rm m, esc}$ have been released into the ISM.
Also, we note that the energy spectral index of the runaway $e^-$ is modified with respect to that of the trapped ones, as stated by Eqs. \ref{eq:QTrap} and \ref{eq:Qesc}.
We will study the consequences of these spectral modifications in the following Sections.
The specific values of $E_{\rm m, esc}(T)$, or equivalently the ages of the two sources, make the burst-like approximation a more suitable description for the older Cygnus Loop than for Vela YZ.
We fix $t_{\rm Sedov}=200$~yr, $\alpha=2.6$ and $\beta=0.6$, as in Ref.~\cite{2012MNRAS.427...91O}.
When not differently stated, our results are shown for the burst-like model.
}
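With the parameters adopted above ($t_{\rm Sedov}=200$~yr, $\alpha=2.6$, $E_{\rm knee}=10^{15.5}$~eV), Eqs.~\ref{eq:Emesc} and \ref{eq:Tesc} can be evaluated directly. The sketch below reproduces the escape-limited energies quoted for Vela YZ and Cygnus Loop to within the adopted source-age uncertainties (Cygnus Loop gives $\approx 20$~GeV for a nominal age of exactly 20~kyr):

```python
# Escape-limited maximum energy and escape time, Eqs. eq:Emesc / eq:Tesc.
E_KNEE = 10 ** 15.5 / 1e9   # GeV (10^15.5 eV)
T_SEDOV = 200.0             # yr, start of the Sedov phase
ALPHA = 2.6                 # evolution index of the maximum energy

def e_m_esc(age_yr):
    """Escape-limited maximum energy [GeV] at SNR age T."""
    return E_KNEE * (age_yr / T_SEDOV) ** (-ALPHA)

def t_esc(e_gev):
    """Escape time [yr] of e- of energy E (inverse of e_m_esc)."""
    return T_SEDOV * (e_gev / E_KNEE) ** (-1.0 / ALPHA)

print(e_m_esc(11.3e3))   # Vela YZ (11.3 kyr): ~88 GeV
print(e_m_esc(20e3))     # Cygnus Loop (20 kyr): ~20 GeV with these inputs
```

By construction `t_esc` inverts `e_m_esc`, so $e^-$ of energy $E_{\rm m, esc}(T)$ escape exactly at $T$.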
\subsection{Propagation in the Galaxy}
The propagation of $e^{-}$ and $e^{+}$ from their sources to the Earth has been treated as in \cite{2010A&A...524A..51D,DiMauro:2014iia,Manconi:2016byt},
\reply{of which we recall here some basic ingredients. We refer the reader to Ref.~\cite{Manconi:2016byt} for further details.
The cosmic-ray $e^{-}$ and $e^{+}$ number density $\psi = \psi(E, \mathbf{x}, t)\equiv dn/dE$ per unit volume and energy
obeys the transport equation:
\begin{equation}
\frac{\partial \psi}{\partial t} - \mathbf{\nabla} \cdot \left\lbrace K(E) \mathbf{\nabla} \psi \right\rbrace +
\frac{\partial }{\partial E} \left\lbrace \frac{dE}{dt} \psi \right\rbrace = q(E, \mathbf{x}, t)
\label{eq:diff}
\end{equation}
where $K(E)$ is the energy-dependent diffusion coefficient, $dE/dt\equiv b(E)$ accounts for the energy losses, and $q(E, \mathbf{x}, t)$ is the $e^-$ and $e^+$ source term. The electron flux $\Phi$ at the Earth is connected to the number density through $\Phi=v/(4\pi) \,\psi$.
}
We solve the transport equation in Eq.~\ref{eq:diff} in a semi-analytic model,
assuming a spatially uniform diffusion coefficient:
\begin{equation}
\label{eq:diff_coeff}
K(E)= \beta K_0 (\mathcal{R}/1 \text{GV})^\delta \simeq K_0 (E/1 \text{GeV})^{\delta}
\end{equation}
where $\beta=v/c$ (for relativistic $e^\pm$, as in this analysis, $\beta=1$)
and $\mathcal{R}$ is the particle rigidity.
We include $e^\pm$ energy losses by Inverse Compton scattering off the interstellar radiation field,
and synchrotron losses on the Galactic magnetic field.
A full-relativistic treatment of Inverse Compton losses has been implemented in the Klein-Nishina regime, according to Ref.~\cite{2010A&A...524A..51D}.
\reply{The black body approximation for the interstellar photon populations
at different wavelengths has been taken from \cite{2010A&A...524A..51D} (model M2 in their Table 2).
The Galactic magnetic field intensity
has been assumed $B=3.6\; \mu$G, as resulting from the sum (in quadrature) of the regular and turbulent components \citep{2007A&A...463..993S}.}
At the energies considered here, the energy losses dominate over diffusion effects.
Therefore, modifications of the diffusion coefficient are not expected to modify significantly our conclusions.
The propagation parameters are fixed according to the fits to CR data performed within a semi-analytical diffusion model in \cite{Kappl:2015bqa} (K15) and \cite{Genolini:2015cta} (G15)
(see also \cite{Manconi:2016byt}).
For the K15 model, $K_0=0.0967$ kpc$^2$/Myr and \reply{$\delta=0.408$, while for the G15 model $K_0=0.05$ kpc$^2$/Myr and $\delta=0.445$}.
The values found in these two papers are also compatible with the ones derived in \cite{Johannesson:2016rlh,Korsmeier:2016kha}.
\reply{Given our focus on single sources, we report here the explicit solutions of the time-dependent transport equation for the CR flux from a single source of $e^-$, which can be also found in several literature works (see e.g. \cite{PhysRevD.52.3265,Aharonian:1995zz,Mlyshev:2009twa, 2010A&A...524A..51D}).
In the burst-like approximation, the CR $e^-$ and $e^+$ density $\mathcal{\psi}(E,\mathbf{x})$ at a position $\mathbf{x}$ (in Galactic coordinates) and energy $E$, and considering an infinite diffusion halo, reads:
\begin{equation}\label{eq:singlesourcesolution}
\mathcal{\psi}(E,\mathbf{x}) = \frac{b(E_s)}{b(E)} \frac{1}{(\pi \lambda^2)^{\frac{3}{2}}} \exp\left({-\frac{|\mathbf{x} -\mathbf{x_{s}} |^2}{ \lambda^2}}\right)Q(E_s)
\end{equation}
where $b(E)$ is the energy loss function, $\mathbf{x_{s}}$ indicates the source position. $\lambda$ is the typical propagation scale length:
\begin{equation}
\label{eq:lambda}
\lambda^2= \lambda^2 (E, E_s) \equiv 4\int _{E} ^{E_s} dE' \frac{K(E')}{b(E')},
\end{equation}
where \replyy{$E_s(E)\equiv E_s(E;t,t_s)$} is the initial energy of $e^\pm$ that cool down to $E$ in a {\rm loss time} $\Delta \tau$:
\begin{equation}
\Delta \tau (E, E_s) \equiv \int_{E} ^{E_s} \frac{dE'}{b(E')} = t-t_{{\rm s}} \quad ,
\end{equation}
and $t_{{\rm s}}$ is the source age.
}
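As an illustration of Eq.~\ref{eq:lambda}, the sketch below evaluates $\lambda$ analytically for $K = K_0 E^\delta$ (K15 parameters) and a Thomson-limit loss rate $b(E) = b_0 E^2$ with $b_0 \sim 1.4\times 10^{-16}$~GeV$^{-1}$\,s$^{-1}$. The quadratic loss rate is an assumption of this sketch (the analysis uses full Klein-Nishina losses), so the numbers are indicative only:

```python
import math

# Illustrative inputs for the single-source solution (K15 diffusion).
# The Thomson-limit loss rate b(E) = b0*E^2 is an assumption of this
# sketch; the paper uses full Klein-Nishina losses.
K0, DELTA = 0.0967, 0.408          # kpc^2/Myr, diffusion index
B0 = 1.4e-16 * 3.156e13            # GeV^-1 Myr^-1 (from GeV^-1 s^-1)

def e_source(e, t_myr):
    """Initial energy E_s of e+- observed at E after losses over t_myr
    (solution of the Thomson cooling equation dE/dt = -b0*E^2)."""
    return e / (1.0 - B0 * e * t_myr)

def lambda_kpc(e, e_s):
    """Propagation scale lambda(E, E_s) of Eq. eq:lambda, analytic for
    K ~ E^delta and b ~ E^2."""
    integral = (K0 / B0) * (e ** (DELTA - 1.0) - e_s ** (DELTA - 1.0)) / (1.0 - DELTA)
    return math.sqrt(4.0 * integral)

# e+- observed at 1 TeV from a Vela-age (11.3 kyr) burst:
t = 11.3e-3                        # Myr
print(lambda_kpc(1e3, e_source(1e3, t)))   # ~0.3 kpc
```

The resulting scale of $\sim 0.3$~kpc for TeV $e^\pm$ from a Vela-age source is consistent with the propagation-length argument made in the Introduction.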
\reply{As for the evolutionary escape model in Ref.~\cite{2012MNRAS.427...91O} the $\mathcal{\psi_{\rm esc}}(E,\mathbf{x})$ of runaway CR $e^\pm$ at a position $\mathbf{x}$ and energy $E$, and considering an infinite diffusion halo is:
\begin{equation}\label{eq:sol_esc}
\mathcal{\psi}_{\rm esc}(E,\mathbf{x}) = \frac{b(E_s(E))}{b(E)} \frac{1}{(\pi \lambda^2)^{\frac{3}{2}}} \exp\left({-\frac{|\mathbf{x} -\mathbf{x_{s}} |^2}{ \lambda^2}}\right)Q_{\rm esc}(E_s(E))
\end{equation}
where $E_s$ is again the initial energy of $e^\pm$ that cool down to $E$, defined now as:
\begin{equation}
\int_{E} ^{E_s(E)} \frac{dE'}{b(E')} = t-T_{\rm esc}(E_s)\,,
\end{equation}
\replyy{and $T_{\rm esc}$ is given by Eq.~\ref{eq:Tesc}.}
\replyy{The energy spectrum $Q_{\rm esc}(E_s(E))$ is defined in Eq.~\ref{eq:Qesc}, and is obtained as \cite{2012MNRAS.427...91O}:
\begin{equation}
Q_{\rm esc}(E) = \int dt\int d \mathbf{x} \, q(E, \mathbf{x}, t)\,,
\end{equation}
and the solution reported in Eq.~\ref{eq:sol_esc} is given for a source term of the form $q(E, \mathbf{x}, t)= \delta(\mathbf{x}) \delta (t-T_{\rm esc}(E))\,Q_{\rm esc}(E)$.}
}
\subsection{The radio synchrotron emission from nearby SNRs}
\label{sec:synchro}
One of the key points of this paper is the inspection of selected nearby SNRs in terms of the available radio flux data, in order to obtain a better understanding of the $e^-$ and $e^+$ flux data.
Under the hypothesis that the radio emission from a source is due to synchrotron radiation from $e^-$ accelerated in the SNR and interacting with its magnetic field $B$, the very-high-energy $e^-$ and $e^+$ flux can be connected to the radio data: the normalization of the injection spectrum $Q_{0,\rm{SNR}}$ is related to the radio flux density $B^{\nu}_r(\nu)$ through:
\begin{equation}
\label{eq:Br}
Q_{0,\rm{SNR}} = 1.2 \cdot 10^{47} \text{GeV}^{-1} (0.79)^{\gamma} \frac{B^{\nu}_r(\nu)}{\text{Jy}}
\left[ \frac{d}{\text{kpc}}\right]^{2}
\left[ \frac{\nu}{\text{GHz}}\right]^{\frac{\gamma -1}{2}}
\left[ \frac{B}{100 \mu\text{G}}\right]^{-\frac{\gamma+1}{2}}.
\end{equation}
The derivation of this expression is extensively provided in \cite{2010A&A...524A..51D}, and it was successively used also in \cite{DiMauro:2014iia,Manconi:2016byt}.
The energy emitted at a given frequency is radiated at the energy loss rate \replyy{$b(E(\nu))$}
(see Eqs.~48-50 in \cite{2010A&A...524A..51D}), and one implicitly assumes that observation takes place within the
time-interval $E(\nu)/b(E(\nu))$ \replyy{($E(\nu)=h \nu$)} during which the $e^-$ radiate
after the burst, and that the flux is quasi constant within this time interval.
We note in Eq.\eqref{eq:Br} the well-known relation between the index of the $e^-$ distribution $\gamma$ and the radio index $\alpha_r= (\gamma -1)/2$.
\reply{The $Q_{0,\rm{SNR}}$ term in Eq. \ref{eq:Br} is implemented by Eq. \ref{eq:Q_E} in case of the burst-like approximation.
In case of the evolutionary escape model, we instead implement in Eq.~\ref{eq:Br} the $Q_{0,\rm{trap}}$ of Eq.~\ref{eq:QTrap2}. }
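Eq.~\ref{eq:Br} translates directly into a short routine. The input values in the example below are hypothetical (they are not the fitted Vela YZ parameters) and only meant to illustrate the scalings, in particular $Q_0 \propto d^2$ and the strong dependence on the SNR magnetic field:

```python
def q0_snr(b_r_jy, d_kpc, nu_ghz, b_mug, gamma):
    """Q0 [GeV^-1] from the radio flux density, Eq. eq:Br.
    b_r_jy: flux density [Jy]; d_kpc: distance [kpc];
    nu_ghz: frequency [GHz]; b_mug: SNR magnetic field [micro-G];
    gamma: electron spectral index (radio index alpha_r = (gamma-1)/2)."""
    return (1.2e47 * 0.79 ** gamma * b_r_jy * d_kpc ** 2
            * nu_ghz ** ((gamma - 1.0) / 2.0)
            * (b_mug / 100.0) ** (-(gamma + 1.0) / 2.0))

# Hypothetical inputs (NOT fitted values): a 1 kJy source at 1 GHz,
# d = 0.3 kpc, B = 36 micro-G, gamma = 2.2.
print(q0_snr(1000.0, 0.3, 1.0, 36.0, 2.2))
```

Note the steep $B^{-(\gamma+1)/2}$ dependence: the assumed SNR magnetic field is one of the dominant uncertainties on $Q_{0,\rm{SNR}}$.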
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{fig1left_near_SNRs_winkler_1kpc_noradioEc10TeV_democratic_gamma2_Etot7e47.pdf}
\includegraphics[width=0.5\textwidth]{fig1right_near_SNRs_winkler_1kpc_noradioEc10TeV_catalogQ0.pdf}
\caption{Electron flux at Earth from near SNRs in the Green catalog at $d<1$~kpc from the Earth.
Left: A common spectral index of $\gamma=2.0$ and a total energy released in $e^-$ of $E_{\rm tot}=7 \cdot 10^{47}$~erg has been assumed for each source.
Right: The spectral index and the $Q_0$ for each source are fixed according to the catalog data and Eq.\ref{eq:Br} for a single frequency.
All the curves are computed for $E_c=10$~TeV and K15 propagation model.
}\label{fig:nearsnr}
\end{figure}
Our search for the sources that can contribute most to the $e^-$ flux rests on the computation of the $e^-$ from
catalogued sources. We consider the sources in the Green SNR catalog \cite{Green:2014cea}, and find seven SNRs which are located at $d<1$ kpc from the Earth.
In order to illustrate the role of these SNRs, we compute their flux of $e^-$ at Earth. We first assume that they all inject $e^-$ in the ISM with the
common spectral index of $\gamma=2.0$ and with a total energy released in $e^-$ of $E_{\rm tot}=7 \cdot 10^{47}$~erg, as very often assumed in the literature (see e.g. \cite{Kobayashi:2003kp}).
The only catalogued parameters here are the distance and the age of the source.
The results are shown in the left panel of Fig.~\ref{fig:nearsnr}. Vela YZ turns out to be the most powerful source, followed by Cygnus Loop. Electrons from the other sources have fluxes smaller by more than one order of magnitude.
Indeed, the Green catalog \cite{Green:2014cea} also provides the spectral index and the radio properties for each source which, when implemented in Eq.~\ref{eq:Q_E}, lead to the fluxes shown in the right panel of Fig.~\ref{fig:nearsnr}.
This more realistic approach demonstrates that the only two powerful sources are indeed Vela YZ and Cygnus Loop,
while the other SNRs contribute with an $e^-$ flux at Earth which is at the percent level of the Vela YZ one.
We identify Vela YZ and Cygnus Loop as the candidates
expected to contribute most significantly to the high-energy tail of $e^{+}+e^{-}$ flux, given their distance, age and radio flux \cite{Kobayashi:2003kp,DiMauro:2014iia,Manconi:2016byt}.
As shown in the following, Vela Jr can emerge as a significant contributor to the $e^{+}+e^{-}$ flux in the TeV range when the leptonic model inferred in \cite{2008ApJ...678L..35K} is considered, given the high value for the cutoff of $E_c=25$~TeV and the low magnetic field ($12\mu$G).
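The single-source fluxes above follow from the standard burst-like Green-function solution found in the literature. The sketch below is illustrative only and is not the paper's exact implementation of Eqs.~\ref{eq:Q_E} and \ref{eq:Br}: the function name, the K15-like diffusion values ($K_0$, $\delta$) and the loss coefficient $b_0$ are all assumptions made for this example.

```python
import math

MYR_S = 3.156e13  # seconds per Myr

def burst_flux(E, d, T, Q0, gamma, E_c,
               K0=0.0967, delta=0.408, b0=1.4e-16):
    """Electron density at Earth from a burst-like source (illustrative).

    E, E_c in GeV; d in kpc; T (source age) in s; Q0 normalizes the
    injection spectrum Q(E) = Q0 * E**(-gamma) * exp(-E/E_c).
    K0 [kpc^2/Myr] and delta are K15-like values (assumed here);
    b(E) = b0 * E**2 [GeV/s] models synchrotron + IC losses.
    """
    denom = 1.0 - b0 * E * T
    if denom <= 0.0:
        return 0.0                 # electrons of this energy have cooled away
    E_s = E / denom                # energy at injection time
    # propagation scale: lambda^2 = 4 * int_E^{E_s} K(E')/b(E') dE'
    K0_s = K0 / MYR_S              # kpc^2 / s
    lam2 = (4.0 * (K0_s / b0)
            * (E**(delta - 1.0) - E_s**(delta - 1.0)) / (1.0 - delta))
    Q_s = Q0 * E_s**(-gamma) * math.exp(-E_s / E_c)
    jacobian = (E_s / E) ** 2      # = b(E_s)/b(E) for b proportional to E^2
    return Q_s * jacobian * math.exp(-d**2 / lam2) / (math.pi * lam2) ** 1.5
```

The exponential suppression in $d^2/\lambda^2$ and the cooling cutoff at $E = 1/(b_0 T)$ make nearby, middle-aged sources such as Vela YZ dominate the high-energy flux.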
\section{\label{sec:radio}Results on the SNR properties from radio data}
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{fig2left_Snu_velaYZfit.pdf}
\includegraphics[width=0.5\textwidth]{fig2right_Snu_cygnusfit.pdf}
\caption{A fit to the radio spectrum of Vela SNR (left panel) and Cygnus Loop (right panel) using Eq.~\eqref{eq:Br}.
The red line represents the best fit model to the data.
The integrated flux densities $B_r$ are taken from \cite{2001A&A...372..636A, 2004A&A...426..909U}.
}\label{fig:radio}
\end{figure}
With respect to previous analyses, where usually a single frequency was considered (see, e.g., \cite{DiMauro:2014iia,DiMauro:2017jpu}),
we use here the radio spectrum in the widest available range of frequencies:
from 85.7~MHz to 2700~MHz for Vela YZ \cite{2001A&A...372..636A} and from $22$~MHz to $4940$~MHz for Cygnus Loop \cite{2004A&A...426..909U}.
We fix the Vela YZ (Cygnus Loop) distance and age to be: $d=$ 0.293~kpc (0.54~kpc) and $T=$11.3~kyr (20~kyr) \cite{2003ApJ...596.1137D,2005AJ....129.2268B,1994PASJ...46L.101M,2004A&A...426..909U}, respectively.
The magnetic field of Galactic SNRs is often inferred from multi-wavelength analyses, and the values typically range from a few $\mu$G up to $10^3~\mu$G \cite{2012SSRv..166..231R}.
The magnetic field of Vela YZ is here fixed to $B=36~\mu$G, corresponding to the mean of the values inferred from X-ray data for the Y and Z regions \cite{Sushch:2013tna}, while for Cygnus Loop we adopt the best fit value $B=60~\mu$G of the hadronic model in the gamma-ray analysis of \cite{2011ApJ...741...44K}.
We invert Eq.~\ref{eq:Br} to fit $B^{\nu}_r(\nu)$ as a function of $\gamma$ and $Q_{0,\rm{SNR}}$ for all the
available frequencies $\nu$.
In Fig.~\ref{fig:radio} we display the results of the fit to the available radio data for both Vela YZ and Cygnus Loop.
We tune the injection spectrum of local SNRs in order to reproduce the radio data, since at this wavelength the $e^-$ are the main emitters.
It is worth noting that \reply{in the case of burst-like approximation} we work under the assumption that the electromagnetic emission we observe today from those SNRs reflects the properties of the $e^-$ population that has been released and injected in the ISM.
The best fit parameters are:
$\gamma_{\rm{Vela}}= 2.47\pm0.10$, $E_{\rm{tot}, \rm Vela}= (2.28\pm0.06)\cdot 10^{47}$ erg, $\gamma_{\rm{Cygnus}}= 2.04\pm0.04 $ and $E_{\rm{tot}, \rm Cygnus}= (1.18\pm 0.16)\cdot 10^{47}$ erg. The numbers for the Vela YZ are in agreement with the findings of \cite{Sushch:2013tna}.
The parameter space $E_{\rm{tot}}$ - $\gamma$ selected by the fit to the radio spectrum is reported in the left panel
of Fig.~\ref{fig:radioresults} for both Vela YZ and Cygnus Loop, and for $3\sigma$, $2\sigma$ and $1\sigma$ confidence levels.
This figure shows that radio data select narrow ranges for $\gamma$ and $E_{\rm{tot}}$.
For example, the $1\sigma$ contour for $\gamma_{\rm{Vela}}$ and $E_{\rm{tot, Vela}}$ lies within a few \% of the best fit.
Moreover, $E_{\rm{tot}}$ of the order of $10^{47}$ erg is in agreement with the usual expectations for the SNR energy budget, given the total energy of $\sim 10^{51}$~erg released by a SN explosion in the ISM \cite{2007Sci...315..825M} and a fraction transferred to $e^-$ of $\sim 10^{-5}-10^{-3}$ \cite{2009A&A...499..191T}.
We now evaluate the consequences of these results on the $e^++e^-$ flux.
In the right panel of Fig.~\ref{fig:radioresults} we plot the data on the $e^++e^-$ flux
along with the predictions for the flux from Vela YZ and Cygnus Loop obtained with the parameters selected within the $2\sigma$ contours in the left panel.
The $e^++e^-$ flux data have not been used in this analysis and are displayed in the figure for illustrative purposes.
The information in Fig.~\ref{fig:radioresults} is remarkable: the flux of $e^-$ from the closest SNRs as derived from a fit to radio data is slightly below the data on the inclusive flux.
The flux from Vela YZ and Cygnus Loop can skim the HESS data, when all the uncertainties are considered.
Under the assumption that all the radio emission is synchrotron radiation from $e^-$, our predictions indicate that the highest flux expected from these sources
can shape the high-energy tail of the $e^++e^-$ flux data.
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{fig3left_Etot_vela_cygnus_radio.pdf}
\includegraphics[width=0.5\textwidth]{fig3right_radioband_2sigma_def.pdf}
\caption{Results of the fit to the radio spectrum for Vela YZ (gray) and Cygnus Loop (magenta).
Left: Regions of the parameter space $E_{\rm{tot}}$, $\gamma$ selected by the fit to the radio spectrum.
The solid, dashed and long-dashed lines refer to the $3\sigma$, $2\sigma$ and $1\sigma$ contours, respectively, for each source.
Right: Prediction for the $ e^-$ flux from Vela YZ and Cygnus Loop using the values of $E_{\rm{tot}}$, $\gamma$
within $2\sigma$ from the best fit to the radio spectrum shown in the left panel.
The $e^++e^-$ {\it Fermi}-LAT, AMS-02, DAMPE, HESS and CALET data with their statistical and systematic errors are also shown.
}\label{fig:radioresults}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{fig_esc_left_Etot_vela_cygnus_radio_col.pdf}
\includegraphics[width=0.5\textwidth]{fig_esc_right_radioband_2sigma_col.pdf}
\caption{\reply{Results of the fit to the radio spectrum for Vela YZ (gray) and Cygnus Loop (magenta) for the evolutionary model of the injection of $e^-$ from SNRs in Ref.~\cite{2012MNRAS.427...91O}.
Left: Regions of the parameter space $E_{\rm{tot,trap}}$, $\gamma$ selected by the fit to the radio spectrum for Vela YZ (gray) and Cygnus Loop (magenta).
The derived regions for $E_{\rm{tot, esc}}$, $\gamma + \beta/\alpha$ are also reported for Vela YZ and Cygnus Loop.
The solid, dashed and long-dashed lines refer to the $3\sigma$, $2\sigma$ and $1\sigma$ contours, respectively, for each source.
Right: Prediction for the $ e^-$ flux from Vela YZ and Cygnus Loop using the values of $E_{\rm{tot, esc}}$, $\gamma+\beta/\alpha$
within $2\sigma$ from the best fit to the radio spectrum shown in the left panel.
The $e^++e^-$ data are shown as in Fig.~\ref{fig:radioresults}, right panel.
}}\label{fig:radioresults_esc}
\end{figure}
\reply{We have explored the effects of the evolutionary escape model on the interpretation of the radio spectrum of our two selected sources.
In this case, the radio data are fitted through Eq.~\ref{eq:Br} to tune the normalization and spectral index of the trapped $e^-$, namely the $Q_{0,\rm{trap}}$ and the index $\gamma$ of Eq.~\ref{eq:QTrap2}. The total energy of trapped $e^-$ is obtained as:
\begin{equation}
E_{\rm tot, trap} = \int _{E_1} ^{ E_{\rm m, esc}(T)} dE \, E \,Q_{\rm trap}(E) \,,
\label{eq:Etottrap}
\end{equation}
for each source.
The normalization $A$ and spectral index of the escaped electrons are then obtained from their relations with the trapped $e^-$, as derived in the evolutionary escape model of Ref.~\cite{2012MNRAS.427...91O}.
By comparing Eq.~\ref{eq:QTrap2} and Eq.~\ref{eq:Qesc}, we obtain:
\begin{equation}\label{eq:relation_trap_esc}
A=Q_{\rm 0,trap} E_{\rm knee}^{\beta/\alpha} \, \left( \frac{t_{\rm Sedov}}{T}\right)^{\beta}\quad,
\end{equation}
while the spectral index of $e^-$ in Eq.~\ref{eq:Qesc} is simply $\gamma+\beta/\alpha$.
The total energy of runaway $e^-$ for each source is then obtained as:
\begin{equation}
E_{\rm tot, esc} = \int _{E_{\rm m, esc}(T)} ^{ \infty} dE \, E \,Q_{\rm esc}(E) \,.
\label{eq:Etotesc}
\end{equation}
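For pure power-law spectra the two integrals above have closed forms. The sketch below assumes, for illustration only, $Q_{\rm trap}(E)=Q_{0,\rm trap}\,E^{-\gamma}$ and $Q_{\rm esc}(E)=A\,E^{-(\gamma+\beta/\alpha)}$; the function names and the split of $\beta$ and $\beta/\alpha$ into separate arguments are conventions of this example, not of the paper.

```python
def E_tot_trap(Q0_trap, gamma, E1, E_m_esc):
    """Eq. (Etottrap) for a pure power law Q_trap(E) = Q0_trap * E**(-gamma)
    (illustrative form; requires gamma != 2)."""
    p = 2.0 - gamma
    return Q0_trap * (E_m_esc**p - E1**p) / p

def A_esc(Q0_trap, E_knee, t_sedov, T, beta, beta_over_alpha):
    """Eq. (relation_trap_esc): normalization of the escaped spectrum."""
    return Q0_trap * E_knee**beta_over_alpha * (t_sedov / T)**beta

def E_tot_esc(A, gamma_esc, E_m_esc):
    """Eq. (Etotesc): the integral to infinity converges for gamma_esc > 2."""
    assert gamma_esc > 2.0
    return A * E_m_esc**(2.0 - gamma_esc) / (gamma_esc - 2.0)
```

Because $E_{\rm tot,esc}$ scales as $E_{\rm m,esc}^{2-\gamma_{\rm esc}}$, the different $E_{\rm m,esc}(T)$ of the two sources directly controls the gap between trapped and runaway budgets discussed below.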
In Fig.~\ref{fig:radioresults_esc} (left panel) the parameter space $E_{\rm{tot, trap}}$ - $\gamma$ ($E_{\rm{tot, esc}}$ - $\gamma+\beta/\alpha$) selected by the fit to the radio spectrum of Vela YZ and Cygnus Loop is shown for $3\sigma$, $2\sigma$ and $1\sigma$ confidence levels.
The selected intervals are narrow, and similar to the burst-like case for Vela YZ, see Fig.~\ref{fig:radioresults}.
For Cygnus Loop the total energy of trapped $e^-$ is reduced by less than a factor of two.
This is understood through Eq.~\ref{eq:Etottrap}. The upper limit of the integral is indeed $E_{\rm m, esc}(T)\sim 88~(17)$~GeV for Vela YZ (Cygnus Loop).
The derived constraints on the parameter space $E_{\rm{tot, esc}}$ - $\gamma + \beta/\alpha$ are also reported.
Each of the two regions shows a strong correlation.
The total energy $E_{\rm tot, esc}$ of runaway $e^-$ is more than one order of magnitude lower than $E_{\rm tot, trap}$ for Vela YZ. As for Cygnus Loop, the difference is a factor of $2-3$ for the best fit.
This is again understood by the different $E_{\rm m, esc}(T)$ of the two sources.
The consequences of these results on the $e^++e^-$ flux are reported in Fig.~\ref{fig:radioresults_esc} (right panel).
We plot again the data on the $e^++e^-$ flux,
along with the predictions for the flux of runaway $e^-$ from Vela YZ and Cygnus Loop.
The flux of CR $e^-$ at Earth is computed using Eq.~\ref{eq:sol_esc}, with
the parameters for the escaped $e^-$ selected within the $2\sigma$ contours in the left panel.
The flux from Vela YZ and Cygnus Loop is softer with respect to the burst-like approximation, reflecting the effect of the escape mechanism described in Eq.~\ref{eq:Qesc}.
Moreover, compared to the burst-like approximation, the presence of an escape-limited maximum energy $E_{\rm m, esc}(T)$ for each source depletes the flux at Earth for $E<E_{\rm m, esc}(T)$.
Considering all the uncertainties, under the evolutionary escape model the flux from Vela YZ and Cygnus Loop is predicted to contribute at most a few percent to the data on the inclusive flux at TeV energies, roughly a factor of two less than what is shown in Fig.~\ref{fig:radioresults}.
}
\section{\label{sec:flux}Results on the SNR properties from $e^++e^-$ flux data}
We now perform an analysis aimed at characterizing the $e^-$ emission from Vela YZ and Cygnus Loop SNRs through $e^++e^-$ (and $e^+$) flux data only.
We want to assess the power of $e^++e^-$ data on the source properties with respect to the information brought by radio (see Sect. \ref{sec:radio})
or the dipole anisotropy data (see Sect. \ref{sec:dipole}). We already know that these sources can contribute significantly to the $e^-$ flux (see \cite{DiMauro:2014iia,Manconi:2016byt,DiMauro:2017jpu}
and Fig.~\ref{fig:radioresults}). Therefore, we expect that the $e^++e^-$ flux data will bound the contribution from local sources. These results will be quantified by
bounds on $\gamma$ and $E_{\rm{tot}}$, parameters effectively connected with the injection physics and the number of particles per unit energy released in the ISM.
\\
In order to explain the $e^++e^-$ data over many energy decades we consider, in addition to $e^{-}$ from SNRs,
$e^{+}$ and $e^{-}$ produced by interactions of CRs on the ISM (secondary component) and by pair emission in PWNe \cite{Gaensler:2006ua, 2017SSRv..207..235B}.
We use here the model already employed in \cite{2010A&A...524A..51D,DiMauro:2014iia,Manconi:2016byt,DiMauro:2017jpu}. In particular, we refer to \cite{Manconi:2016byt}
for any detail. We only outline here the main characteristics of the different contributors.
The Galactic SNRs are divided into a {\it near} and a {\it far} population according to their distance $d$ from the Earth. As in Ref. \cite{Manconi:2016byt}, we set $d = 0.7$ kpc.
Since abundant data are available for sources near the Earth, we model them individually, picking their
$d$, $T$, $B_r^{\nu}(\nu)$ and $\gamma$ from the Green catalog \cite{Green:2014cea}.
Far SNRs, for which the distance from Earth is $>0.7$ kpc, are instead assumed to be smoothly distributed in the Galaxy according to
the spatial density profile in \cite{2015MNRAS.454.1517G}, and to inject $e^-$ in the ISM with an average $E_{\rm{tot}}$ and $\gamma$.
As for the local SNRs, Vela YZ and Cygnus Loop will be modeled with free $E_{\rm tot}$ and spectral index $\gamma$.
Instead, the contribution from Vela Jr is fixed to the leptonic model of \reply{\cite{2011ApJ...740L..51T}}.
In particular we choose the values of \reply{$d=0.750$~kpc, $t=3$~kyr, $B=12~\mu$G and $\gamma_{\rm{VelaJr}}=2.15$} \cite{2011ApJ...740L..51T,2008ApJ...678L..35K} and we compute $Q_{0,\rm{VelaJr}}$ by using Eq.~\ref{eq:Br}.
This choice is motivated by the very limited information available on its radio flux, and by the fact that Vela Jr mainly contributes to the $e^++e^-$ flux above 10~TeV, where the very few data points are not constraining.
Similarly to SNRs, the injection spectrum $Q_{ \rm PWN}(E)$ of $e^{-}$ and $e^{+}$ emitted by a PWN can be described by a power law with an exponential cut-off. The normalization of the PWN spectrum $Q_{0, \rm PWN}$ can be connected to the spin-down energy of the pulsar $W_0$ by:
\begin{equation}
\int_{E_{\rm min}}^{\infty}\,dE\,E\,Q_{ \rm PWN}(E)\,=\eta_{\rm PWN}\,W_0.
\label{eq:PWN}
\end{equation}
$W_0$ can thus be constrained from the measured pulsar properties, assuming that the whole energy lost is carried away by magnetic dipole radiation \cite{2010A&A...524A..51D}. The factor $\eta_{\rm PWN}$ represents the efficiency with which the spin-down energy of the pulsar is converted into $e^{-}$ and $e^{+}$ pairs,
and is expected to be at the few $\%$ level~\cite{DiMauro:2014iia,DiMauro:2015jxa,Abeysekara:2017old}.
Our PWN sample is taken, as in~\cite{DiMauro:2014iia,DiMauro:2015jxa,Manconi:2016byt}, from the ATNF catalog \cite{ATNFcat}, from which we extract the spin-down energy, age and distance of each known PWN. All PWNe share a common efficiency $\eta_{\rm PWN}$ and spectral index $\gamma_{\rm PWN}$, which enter our fits to the $e^+$ and $e^-$ data as free parameters. Since the release of accelerated $e^-$ and $e^+$ pairs in the ISM is estimated to occur $40-50$~kyr after the pulsar birth \cite{2011ASSP...21..624B}, we select only sources with $t_{\rm obs}>50$~kyr.
Secondary leptons originating from the scatterings of proton and helium CRs off the ISM are modeled following \cite{DiMauro:2015jxa}, using a free overall re-normalization factor $q$, which accounts for uncertainties in the flux of primary CRs and in the production cross sections.
We fit the $e^++e^-$ flux data from HESS, CALET, DAMPE, AMS-02 and {\it Fermi}-LAT, and the AMS-02 $e^+$ flux, with all the components described above.
We avoid strong biases from the solar modulation of the fluxes by considering AMS-02, CALET and {\it Fermi}-LAT $e^{+}+e^{-}$ and AMS-02 $e^{+}$ data at $E>10$~GeV.
We nevertheless include its effect in the force-field approximation, with the Fisk potential $\phi$ treated as a free parameter. We use different $\phi_i$ in the fits to AMS-02, {\it Fermi}-LAT and CALET data, since they cover different periods.
We take into account in the fit both the statistical and systematic uncertainties.
We also include the uncertainty in the absolute energy scale, taking $1.3\%$ for DAMPE \cite{Ambrosi:2017wek}, $5\%$ for CALET \cite{PhysRevLett.119.181101}, from $0\%$ at 10~GeV to $5\%$ at 1~TeV with a linear trend in $\log{E}$ for {\it Fermi}-LAT, and, for AMS-02, $2\%$ for $E=[10,290]$~GeV and $5\%$ outside this range, while for HESS we use the systematic band as reported in \cite{HESSICRC17}.
The fit is performed on the $e^++e^-$ and $e^+$ flux data with free parameters: $E_{\rm{tot},\rm{Vela}}$, $E_{\rm{tot},\rm{Cygnus}}$, $\gamma_{\rm{Vela}}$,
$\gamma_{\rm{Cygnus}}$, $E_{\rm{tot}}$, $\gamma$, $\eta_{\rm PWN}$, $\gamma_{\rm PWN}$, $q$, $\phi_i$.
We only impose priors on $\gamma_{\rm{Vela}}$ ([1.90-3.10]) and $\gamma_{\rm{Cygnus}}$ ([1.50-2.50]).
The best fit ($\chi_{\rm red}^2 = \chi^2/\rm{d.o.f.} = 0.5$) parameters are
$\gamma_{\rm{Vela}}= 2.9\pm0.1$ and $E_{\rm{tot},\rm{Vela}}=(2.4\pm0.2)\cdot10^{50}$~erg for Vela YZ,
$\gamma = 2.70\pm0.06$ and $E_{\rm{tot}}=(4.89\pm0.13)\cdot10^{47}$~erg for the smooth SNR population, and
$\eta_{\rm{PWN}}= 0.056\pm0.006$ and $\gamma_{\rm{PWN}} = 1.80\pm0.04$ for the PWNe.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{fig4_Etot_velacygnus_fluxfit_radioblind.pdf}
\caption{Regions of the parameter space $E_{\rm{tot}}$ - $\gamma$ selected by the fit to the $e^{+}+e^{-}$ and $e^{+}$ flux data for Vela YZ and Cygnus Loop.
The shaded regions denote the $E_{\rm{tot}}$, $\gamma$ values at a given number of $\sigma$ from the best fit for each source. The magenta region is for Cygnus Loop and $2\sigma$, while the
gray (light gray) regions are for Vela YZ and $2\sigma$ ($5\sigma$).
}\label{fig:fluxresults}
\end{figure}
The configurations within $2\sigma$ from the best fit for Vela YZ and Cygnus Loop free parameters are reported in Fig. \ref{fig:fluxresults}.
For Vela YZ we also report the $5\sigma$ region, which, as for Cygnus Loop, indeed opens up to an upper bound.
In the case of Vela YZ, we find that the 2$\sigma$ region from the best fit for $E_{\rm{tot},\rm{Vela}}$ and $\gamma_{\rm{Vela}}$ is narrow and the two parameters are strongly correlated.
The $E_{\rm{tot},\rm{Vela}}$ values selected by the $e^++e^-$ flux have no overlap (at $2\sigma$) with the ones constrained by the fit to radio flux data, and are systematically higher by at least one order of magnitude. The two regions fully overlap at $5 \sigma$ (see Fig.~\ref{fig:radioresults}).
If we perform the same analysis for DAMPE data alone, or for the combination of AMS-02, CALET and HESS data, the two regions fully overlap at $2 \sigma$.
\section{\label{sec:dipole}Results on the SNR properties from $e^++e^-$ dipole anisotropy data}
We now assess the power of the recent {\it Fermi}-LAT data on the $e^++e^-$ dipole anisotropy $\Delta_{e^++e^-}$ \cite{Abdollahi:2017kyf}.
This measurement has provided upper bounds on $\Delta_{e^++e^-}$ as a function of energy from 50 GeV up to about 1 TeV.
We compute the relevant single source dipole anisotropy for the sources in our model, following \cite{Manconi:2016byt}.
We remind here that the $\Delta_{e^++e^-}$ from a single source $s$ is given by:
\begin{equation}
\label{eq:eleposdipole}
\Delta(E)_{e^+ + e^-} = \frac{3 K(E)}{ c} \frac{2 d}{\lambda^2(E, E_{s})} \frac{\psi_{e^+ + e^-}^{s}(E)}{\psi_{e^+ + e^-}^{tot}(E)},
\end{equation}
where $d$ is the distance to the source,
$\lambda(E, E_{s})$ is the propagation scale defined in Eq.~\ref{eq:lambda},
$\psi_{e^+ + e^-}^{s}(E)$ is the $e^+ + e^-$ number density produced by the source $s$, and $\psi_{e^+ + e^-}^{tot}(E)$ is the total $e^+ + e^-$ number density obtained from the contributions of all the sources, both from isotropic smooth populations and from directional single sources.
This expression can be appropriately associated with a physical observable whenever the source $s$ can be considered as dominant.
When more than one source is considered, the total dipole anisotropy may be computed as \cite{Manconi:2016byt}:
\begin{equation}
\label{eq:dipolesources}
\Delta(n_{max}, E)= \frac{1}{\psi^{tot}(E)} \cdot \sum_i \frac{\mathbf{r}_i\cdot \mathbf{n}_{max}}{||\mathbf{r}_i||}\cdot \psi_i(E)\, \Delta_i(E).
\end{equation}
Here $\psi_i(E)$ is the number density of $e^-$ and/or $e^+$ emitted from each source $i$, $\mathbf{r}_i$ is the
source position in the sky and
$\mathbf{n}_{max}$ is the unit vector along the direction of the maximum flux intensity. The term $\psi^{tot}(E)=\sum_i \psi_i(E)$ is the total ($e^-$ and/or $e^+$) number density and includes the contribution from the discrete as well as all the isotropic sources.
The anisotropy from each single source is given by $\Delta_i = \frac{3 K(E)}{c} \frac{|\nabla \psi_i(E) |}{\psi_i(E)}$, where the gradient is performed with respect to each source position.
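Eqs.~\ref{eq:eleposdipole} and \ref{eq:dipolesources} can be combined in a few lines. The sketch below is illustrative (units are the caller's responsibility, $\mathbf{n}_{max}$ is taken as a unit vector, and each `delta_i` is assumed to be pre-computed, e.g. from Eq.~\ref{eq:eleposdipole} or from the gradient expression above); it is not the paper's actual pipeline.

```python
import math

def single_source_dipole(K, d, lam, psi_s, psi_tot, c):
    """Eq. (eleposdipole): dipole anisotropy of a single dominant source.
    Consistent units for K, c, d and lam are the caller's responsibility."""
    return (3.0 * K / c) * (2.0 * d / lam**2) * (psi_s / psi_tot)

def total_dipole(sources, n_max, psi_tot):
    """Eq. (dipolesources): sources is a list of (r_vec, psi_i, delta_i);
    n_max is assumed to be a unit vector."""
    tot = 0.0
    for r, psi_i, delta_i in sources:
        r_norm = math.sqrt(sum(x * x for x in r))
        cos_theta = sum(a * b for a, b in zip(r, n_max)) / r_norm
        tot += cos_theta * psi_i * delta_i
    return tot / psi_tot
```

When a single source dominates $\psi^{tot}$ and lies along $\mathbf{n}_{max}$, the total dipole reduces to its single-source value, which is why the Vela YZ term controls the combined prediction.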
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{fig5_anis_VelaCygnus_winkler_ALLDATA_noradio.pdf}
\caption{Dipole anisotropy predictions for Vela YZ and Cygnus Loop treated as single dominant sources (solid black and magenta lines, respectively),
and for all the sources combined together, shown as gray dot-dashed line (see text for details).
The upper limits for {\it Fermi}-LAT dipole anisotropy are shown for the two different methods in \cite{Abdollahi:2017kyf}.
}\label{fig:dipoleresults}
\end{figure}
We compute the $\Delta_{e^++e^-}$ for Vela YZ and Cygnus Loop for all the parameters selected by the fit to ${e^++e^-}$ flux data \replyy{described in Sec.~\ref{sec:flux}} (at $2\sigma$ from the best fit),
and reported in Fig. \ref{fig:fluxresults}. The maximum of $\Delta_{e^++e^-}$ in each energy bin is then plotted as a black (magenta) solid line in Fig.~\ref{fig:dipoleresults} for Vela YZ (Cygnus Loop).
We compare our predictions to the {\it Fermi}-LAT $\Delta_{e^++e^-}$ data (Bayesian Method 1 in \cite{Abdollahi:2017kyf})
above 100~GeV, to limit the effect from the solar wind \citep{Buesching:2008hr,STRAUSS20141015}.
For Vela YZ, the anisotropy overshoots the {\it Fermi}-LAT upper limits over the whole energy range. We can therefore infer that the {\it Fermi}-LAT data on the lepton dipole anisotropy provide a piece of information
independent of the flux data. This is one of the main results of this paper. The anisotropy amplitude data on {\it charged}
leptons now have the power to exclude configurations of the Vela YZ source spectrum
that are in principle compatible with the absolute flux data.
For Cygnus Loop the conclusions are weaker, since it shines at higher energies, where the {\it Fermi}-LAT upper bounds are less stringent.
In order to constrain the Cygnus Loop parameters one would need dipole data at least up to 10 TeV.
Since we are interested in the scenario in which the $\Delta_{e^++e^-}$ is maximal, we have checked for different effects that could lower our predictions.
In particular, we verified that the total dipole anisotropy arising from all the individual sources entering the predictions of the ${e^++e^-}$ and $e^+$ fluxes is also incompatible with the experimental upper limits.
This is because Vela YZ is always the dominant contributor of the $e^-$ flux.
We computed the total anisotropy according to Eq.~\ref{eq:dipolesources} resulting from:
the local SNRs Vela YZ, Cygnus Loop and Vela Jr, and all the ATNF catalog PWNe. The results, shown in \replyy{Fig. \ref{fig:dipoleresults} as a gray dot-dashed line}, have been obtained
by setting all the free parameters to their best fit to the ${e^++e^-}$ and $e^+$ flux data. With the flux data as the only constraint, Vela YZ turns out to dominate both the flux and the dipole predicted at Earth.
Moreover, we considered the potential effect of the guide magnetic field over the few hundred pc to the nearest sources, following what was done in \cite{2016PhRvL.117o1103A}.
The local magnetic field properties were inferred by IBEX data ($l=210.5^{\circ}, b=-57.1^{\circ}$) \cite{0004-637X-776-1-30} from the study of the emission of high energy neutral atoms.
As discussed in \cite{2016PhRvL.117o1103A}, the alignment of the CR dipole anisotropy with the total ordered magnetic field can modify the phase of the observed CR dipole and lower its amplitude.
We verified that projecting the dipole anisotropy of Vela YZ along the direction of the local magnetic field decreases the $\Delta_{e^++e^-}$ by a factor of roughly 2.
Since the maximal Vela YZ anisotropy in Fig.~\ref{fig:dipoleresults} overshoots the {\it Fermi}-LAT upper limits by more than a factor of 3 up to $500$ GeV, also considering this effect would not change our conclusions.
Therefore, the dipole anisotropy in the CR lepton arrival direction sets additional tight constraints to the Vela YZ injection spectrum.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{fig6_Etot_vela_fluxfit_radioblind_dipole_new.pdf}
\caption{Dipole anisotropy constraints to the Vela YZ source parameters.
The regions of the parameter space $E_{\rm{tot}}$, $\gamma$ selected by the fit to the $e^{+}+e^{-}$ and $e^{+}$ flux data for Vela YZ are reported with shaded regions as in Fig.\ref{fig:fluxresults}.
The hatched region denotes the configurations selected by $e^{+}+e^{-}$ and $e^{+}$ flux data and excluded by {\it Fermi}-LAT dipole anisotropy upper limits (Meth. 1) at $E>100$~GeV.
}\label{fig:dipoleconstraints}
\end{figure}
We now quantify the power of the dipole anisotropy to exclude configurations in the Vela YZ source parameters, otherwise compatible with the $e^{+}+e^{-}$ flux data.
We compute $\Delta_{e^++e^-}$ for all the configurations selected by the fit to the flux \replyy{described in Sec.~\ref{sec:flux}}.
Whenever our predictions overshoot one data point at $E>$~100~GeV, the $E_{\rm{tot},\rm{Vela}} - \gamma_{\rm{Vela}}$ pair is considered as excluded.
Very similar results are obtained when requiring two or more non-consecutive data points to be below the predictions,
or if we employ only the two highest energy data points.
The results are displayed as the hatched region in Fig.~\ref{fig:dipoleconstraints}.
The dipole anisotropy upper limits are not compatible with the configurations selected by the fit to the flux data at 2$\sigma$, and with a subset of the configurations at 5$\sigma$.
Indeed, the anisotropy data exclude higher values of $\gamma$, considered unlikely in acceleration models.
The {\it Fermi}-LAT data on $\Delta_{e^++e^-}$ thus provide valuable additional information on the
properties of Vela YZ, acting as a further physical observable for the understanding of the injection of $e^-$ in the ISM.
\begin{figure}[]
\includegraphics[width=0.5\textwidth]{fig7right_sum_winkler_radio_ALLDATA_Ec10TeV_noband.pdf}
\includegraphics[width=0.5\textwidth]{fig7_left_anis_VelaCygnus_winkler_ALLDATA_radio.pdf}
\caption{Results on the $e^++e^-$ flux (left) and on the corresponding dipole anisotropies (right) from the multi-wavelength fit to all the data.
Left: The contributions from secondary production (red dashed), PWNe (blue dot-dashed), Vela YZ (black dotted), Cygnus Loop (magenta dot-dot-dashed), Vela Jr (orange solid) and the far smooth distribution of SNRs (green dotted) are shown. The $e^++e^-$ {\it Fermi}-LAT, AMS-02, DAMPE, HESS and CALET data with their statistical and systematic errors are also shown.
Right: The maximal dipole anisotropy predicted for Vela YZ and Cygnus Loop as single dominant sources are reported with black solid and magenta dashed lines as in Fig.\ref{fig:dipoleresults}.
The total anisotropy resulting from the distribution of all the sources is shown with gray dot-dashed line.
The upper limits for {\it Fermi}-LAT dipole anisotropy are shown for the two different analysis methods in \cite{Abdollahi:2017kyf}.
}
\label{fig:mw}
\end{figure}
\section{\label{sec:multiw}Results from multi-wavelength analysis}
We now combine all the three observables explored in the previous sections.
Specifically, we compare the dipole anisotropy of Vela YZ and Cygnus Loop with the {\it Fermi}-LAT upper bounds, for the parameters of these sources selected by radio and $e^++e^-$ fluxes.
We perform new fits on the $e^++e^-$ and $e^+$ fluxes including the constraints for $E_{\rm{tot},\rm{Vela}}$, $\gamma_{\rm{Vela}}$, $E_{\rm{tot},\rm{Cygnus}}$, and $\gamma_{\rm{Cygnus}}$ derived from the fit to radio data. We minimize according to the following definition of the $\chi^2$:
\begin{equation}
\chi^2 = \sum^N_i \left(\frac{ \Phi^{\rm{model}}_i - \Phi^{\rm{data}}_i }{ \sigma^{\rm{data}}_i }\right)^2 + \sum^4_j \left(\frac{ \mathcal{P}^{\rm{model}}_j - \mathcal{P}^{\rm{data}}_j }{ \sigma^{\rm{data}}_{\mathcal{P},j} }\right)^2
\label{eq:chi}
\end{equation}
where the first term is the statistical term that takes into account the difference between the model $\Phi^{\rm{model}}$ and the $e^+ + e^-$ flux data at $1 \sigma$ ($\Phi^{\rm{data}}$ and $\sigma^{\rm{data}}$).
The second term runs over Vela YZ and Cygnus Loop $E_{\rm{tot}}$ and $\gamma$ and accounts for the deviation of these parameters in the model $\mathcal{P}^{\rm{model}}$ with respect to the best fit and $1\sigma$ error ($\mathcal{P}^{\rm{data}}$ and $\sigma^{\rm{data}}_{\mathcal{P}}$) as derived above in the fit to radio data.
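The minimization of Eq.~\ref{eq:chi} is a standard $\chi^2$ with Gaussian priors on the four SNR parameters. A minimal sketch (the function name and the example numbers are hypothetical, not the actual fit):

```python
def chi2_with_priors(model_flux, data_flux, data_err,
                     model_pars, prior_best, prior_err):
    """Eq. (chi): flux term plus Gaussian penalty terms tying the four SNR
    parameters (E_tot and gamma for Vela YZ and Cygnus Loop) to the
    best fit and 1-sigma errors of the radio analysis."""
    flux_term = sum(((m - d) / s) ** 2
                    for m, d, s in zip(model_flux, data_flux, data_err))
    prior_term = sum(((p - p0) / s0) ** 2
                     for p, p0, s0 in zip(model_pars, prior_best, prior_err))
    return flux_term + prior_term
```

The prior term penalizes departures of the SNR parameters from the radio fit in units of their $1\sigma$ errors, so the combined fit cannot drift to the radio-incompatible region preferred by the flux-only analysis.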
We find a very good agreement between $e^++e^-$ and radio data ($\chi_{\rm red}^2\approx0.70$), with $\gamma_{\rm{Vela}} = 2.39\pm0.15$, $E_{\rm{tot}, \rm Vela} = (2.3\pm0.2)\cdot 10^{47}$ erg, $\gamma_{\rm{Cygnus}} = 2.03\pm0.05$ and $E_{\rm{tot}, \rm Cygnus} = (1.25 \pm0.06)\cdot 10^{47}$ erg for the K15 propagation model. Using the G15 propagation model, the best fit parameters are extremely similar.
We illustrate in Fig.~\ref{fig:mw} the result of the best fit for all the components to the $e^++e^-$ flux.
We checked that all the predictions for the dipole anisotropy within $2\sigma$ from the best fit are below the {\it Fermi}-LAT upper bounds, as explicitly shown in Fig.~\ref{fig:mw}.
The $\gamma$ for the spatially smooth distribution of SNRs is $2.48/2.44$ for K15/G15, respectively.
We test different values for the cutoff energy of the smooth distribution and of the single SNRs. We find that the $\chi^2$ profile as a function of the cutoff energy is flat for $>10$ TeV, while it worsens at lower energies. The $95\%$ lower limit is at 8 TeV.
The putative $e^-$ injected by a radio-unconstrained Vela SNR (see Sec.~\ref{sec:flux}) are compensated in our framework
by the combination of $e^-$ produced by the Galactic smooth distribution of SNRs and all the PWNe.
\begin{figure}[]
\includegraphics[width=0.5\textwidth]{fig9_sum_winkler_radio_escape_ALLDATA_Ec10TeV_col}
\includegraphics[width=0.5\textwidth]{fig9_anis_VelaCygnus_winkler_ALLDATA_radio_escape_mod_col.pdf}
\caption{Same as Fig.~\ref{fig:mw} but using the evolutionary model of Ref.~\cite{2012MNRAS.427...91O} for the injection of $e^-$ by SNRs.}
\label{fig:mw_evol}
\end{figure}
\reply{In Fig.~\ref{fig:mw_evol} we report, on the same footing as Fig.~\ref{fig:mw}, the results obtained within the evolutionary escape model discussed in Sec.~\ref{sec:model}.
We find again a good fit ($\chi_{\rm red}^2\sim 0.87$), with $\gamma_{\rm{Vela}} + \beta/\alpha = 2.66\pm 0.14$ and $\gamma_{\rm{Cygnus}} = 2.27\pm0.06$ for the K15 propagation model. The parameters describing the smooth SNRs, the PWNe and the secondary component are compatible, within the errors, with the burst-like scenario.
Vela Jr has not been included in this analysis, since we found that in the evolutionary escape model its flux is suppressed, given its young age.
We notice that the fluxes from Vela YZ and Cygnus Loop are now both below the secondary component, which instead is almost unchanged.
This implies a slight increase of the PWNe contribution. We recall that also in this case the $e^+$ contribution is controlled by the AMS-02 $e^+$ flux data.
As illustrated in Fig.~\ref{fig:mw_evol} (right panel), also in this case all the predictions for the dipole anisotropy within $2\sigma$ from the best fit are below the {\it Fermi}-LAT upper bounds.
\replyy{In the multi-wavelength analysis, the Vela YZ and Cygnus Loop SNR parameters within both the burst-like and evolutionary models are not
constrained by the anisotropy data.}
With respect to the burst-like scenario in Fig.~\ref{fig:mw}, the predicted dipole anisotropies are decreased by more than a factor of 2 for TeV energies.
}
Remarkably, we find a model which is compatible with all the $e^++e^-$ flux data, the radio data for Vela YZ and Cygnus Loop, and with the anisotropy upper bounds.
\section{\label{sec:conc}Conclusions}
This paper proposes a \reply{new} multi-wavelength and multi-messenger approach aimed at improving the description of the local SNRs which
can contribute significantly to the measured $e^++e^-$ flux. The latter is now measured with unprecedented statistics over
several decades in energy.
We work here within a framework in which the leptons measured at Earth from GeV up to tens of TeV energies have a composite origin.
Specifically, $e^-$ are injected in the ISM by SNRs, and a symmetric source of $e^\pm$ is provided by PWNe. Additionally, a low-energy, asymmetric contribution of $e^+$ and $e^-$
arises from the spallation of CRs on the ISM. In understanding the $e^-$ flux data, local sources, those located a few hundred parsecs from the Earth, may play a crucial role.
The single, local SNRs that are found to be among the main contributors to the $e^-$ flux at $>10$~TeV are Vela YZ and Cygnus Loop.
For these two sources, we develop a dedicated analysis of the injection spectrum of accelerated $e^-$ in the ISM.
\reply{The injection of $e^-$ by SNRs into the ISM is treated following the burst-like approximation, as commonly assumed in the literature.
Moreover, we have implemented an evolutionary escape model for the $e^-$ injection, and for the first time we have investigated its consequences on both the synchrotron emission and the propagated CRs measured at the Earth.}
We investigate the {\it compatibility} of these models for the emission and propagation of $e^-$ and $e^+$ in the Galaxy using three physical observables:
\begin{itemize}
\item the {\it radio flux} at all the available frequencies from Vela YZ and Cygnus Loop SNRs,
\item the {\it $e^++e^-$ flux} from five experiments, from GeV up to tens of TeV energies,
\item the {\it $e^++e^-$ dipole anisotropy} upper limits from 50 GeV to about 1 TeV.
\end{itemize}
\noindent
We find that the {\it radio flux} for these nearby SNRs strongly constrains the total energy and the spectral index of the emitted $e^-$.
\reply{In the case of the evolutionary escape model, we derive constraints on the total energy and spectral index of both trapped and runaway $e^-$.}
\reply{As for the burst-like approximation}, the flux of $e^-$ from Vela YZ and Cygnus Loop as derived from a fit to radio data is slightly below the data on the inclusive flux.
It can graze the HESS data when all the uncertainties are considered.
Under the assumption that all the radio emission is synchrotron radiation from $e^-$, our predictions indicate that the highest flux expected from these sources
can shape the high-energy tail of the $e^++e^-$ flux data.
\reply{In the case of the evolutionary escape model, the flux of runaway $e^-$ from Vela YZ and Cygnus Loop is slightly lower, }
\replyy{and their contribution to the $e^++e^-$ flux data is subdominant with respect to the other model components.}
\noindent
We perform a radio-blind analysis by fitting all, and only, the most recent {\it $e^+ + e^-$ flux} data. The data select correlated values for the total energy and spectral index of Vela YZ,
and to a lesser extent of Cygnus Loop. The results for Vela YZ are compatible with the radio analysis within errors considered at the $5\sigma$ confidence level.
\noindent
\reply{As a further novelty}, we consider the upper limits on {\it $e^+ + e^-$ dipole anisotropy } as an \reply{additional} observable, and assess its power in constraining the Vela YZ and Cygnus Loop source properties.
\reply{This is achieved at the cost of no new free parameters.}
We find that the anisotropy overshoots {\it Fermi}-LAT upper limits on the whole spectrum when the Vela SNR parameters are left free to fit the $e^+ + e^-$ flux data \replyy{within a burst-like scenario}.
For Cygnus Loop the conclusions are weaker, since it shines at higher energies where the {\it Fermi}-LAT upper bounds are looser.
The results are very similar when all the single sources considered in the analysis (SNRs and PWNe) contribute to the anisotropy, which is
dominated by Vela YZ.
\replyy{For the first time, we show the severe constraints imposed by the most recent data on the $e^+ + e^-$ anisotropy, which opens the opportunity of
describing the most promising local sources of $e^-$ with {\it charged} lepton CRs.}
\noindent
We finally perform a multi-wavelength multi-messenger analysis by fitting
simultaneously the radio flux on Vela YZ and Cygnus Loop and the $e^+ + e^-$ flux, and checking the outputs against the $e^+ + e^-$ dipole anisotropy data.
Considering the proper systematic uncertainties on the energy scale of the different data sets, we can fit the $e^+ + e^-$ spectrum over many energy decades using these local SNRs, a smooth distribution of SNRs, PWNe and secondary production.
\replyy{In this case, the Vela YZ and Cygnus Loop SNRs parameters within both the burst-like and evolutionary models are not
constrained by the anisotropy data. }
Remarkably, we find a model which is compatible with all the $e^++e^-$ flux data, the radio data for Vela YZ and Cygnus Loop, and with the anisotropy upper bounds.
\begin{acknowledgments}
\reply{We warmly thank Y. Ohira for useful discussions.}
SM gratefully acknowledges support by the Academy of Science of Torino through the
{\it Angiola Agostinelli Gili} scholarship and the KIPAC Institute at SLAC for the kind hospitality.
MDM acknowledges support by the NASA {\it Fermi} Guest Investigator Program 2014 through the {\it Fermi} multi-year Large Program N. 81303 (P.I. E.~Charles) and by the NASA {\it Fermi} Guest Investigator Program 2016 through the {\it Fermi} one-year Program N. 91245 (P.I. M.~Di Mauro).
This work is supported by the ``Departments of Excellence 2018--2022'' Grant awarded by the Italian Ministry of Education, University and Research (MIUR) (L. 232/2016).
\end{acknowledgments}
\section{Introduction}
\label{sec:introduction}
Although recent neural models of language have made advances in learning syntactic behavior, research continues to suggest that inductive bias plays a key role in data efficiency and human-like syntactic generalization \cite{schijndel+al:2019,hu+al:2020}. Based on the long-held observation that language exhibits hierarchical structure, previous work has proposed coupling recurrent neural networks (RNNs) with differentiable stack data structures \cite{joulin+mikolov:2015,grefenstette+al:2015} to give them some of the computational power of pushdown automata (PDAs), the class of automata that recognize context-free languages (CFLs). However, previously proposed differentiable stack data structures only model deterministic stacks, which store only one version of the stack contents at a time, theoretically limiting the power of these stack RNNs to the deterministic~CFLs.
A sentence's syntactic structure often cannot be fully resolved until its conclusion (if ever), requiring a human listener to track multiple possibilities while hearing the sentence. Past work in psycholinguistics has suggested that models that keep multiple candidate parses in memory at once can explain human reading times better than models which assume harsher computational constraints. This ability also plays an important role in calculating expectations that facilitate more efficient language processing \cite{levy:2008}. Current neural language models do not track multiple parses, if they learn syntax generalizations at all \cite{futrell+al:2018,wilcox+al:2019,mccoy+al:2020}.
We propose a new differentiable stack data structure that explicitly models a nondeterministic PDA, adapting an algorithm by \citet{lang:1974} and reformulating it in terms of tensor operations. The algorithm is able to represent an exponential number of stack configurations at once using cubic time and quadratic space complexity. As with existing stack RNN architectures, we combine this data structure with an RNN controller, and we call the resulting model a Nondeterministic Stack RNN{} (NS-RNN).
We predict that nondeterminism can help language processing in two ways. First, it will improve trainability, since all possible sequences of stack operations contribute to the objective function, not just the sequence used by the current model. Second, it will improve expressivity, as it is able to model concurrent parses in ways that a deterministic stack cannot. We demonstrate these claims by comparing the NS-RNN{} to deterministic stack RNNs on formal language modeling tasks of varying complexity. To show that nondeterminism aids training, we show that the NS-RNN{} achieves lower cross-entropy, in fewer parameter updates, on some deterministic CFLs. To show that nondeterminism improves expressivity, we show that the NS-RNN{} achieves lower cross-entropy on nondeterministic CFLs, including the ``hardest context-free language" \cite{greibach:1973}, a language which is at least as difficult to parse as any other CFL and inherently requires nondeterminism. Our code is available at \url{https://github.com/bdusell/nondeterministic-stack-rnn}.
\section{Background and Motivation}
In all differentiable stack-augmented networks that we are aware of (including ours), a network called the \emph{controller}, which is some kind of RNN (typically an LSTM), is augmented with a differentiable stack, which has no parameters of its own. At each time step, the controller emits weights for various stack operations, which at minimum include push and pop. To maintain differentiability, the weights need to be continuous; different designs for the stack interpret fractionally-weighted operations differently. The stack then executes the fractional operations and produces a stack \emph{reading}, which is a vector that represents the top of the updated stack. The stack reading is used as an extra input to the next hidden state update.
Designs for differentiable stacks have proceeded generally along two lines. One approach, which we call \emph{superposition} \citep{joulin+mikolov:2015}, treats fractional weights as probabilities. The other, which we call \emph{stratification} \citep{sun+al:1995,grefenstette+al:2015}, treats fractional weights as ``thicknesses.''
\paragraph{Superposition}
In the model of \citet{joulin+mikolov:2015}, the controller emits at each time step a probability distribution over three stack operations: push a new vector, pop the top vector, and no-op. The stack simulates all three operations at once, setting each stack element to the weighted interpolation of the elements above, at, and below it in the previous time step, weighted by push, no-op, and pop probabilities respectively. Thus, each stack element is a superposition of possible values for that element. Because stack elements depend only on a fixed number of elements from the previous time step, the stack update can largely be parallelized. \Citet{yogatama+al:2018} developed an extension to this model that allows a variable number of pops per time step, up to a fixed limit $K$. \Citet{suzgun+:2019} also proposed a modification of the controller parameterization.
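As a concrete illustration, the interpolation step can be sketched as follows (a minimal NumPy sketch of the idea, with an illustrative fixed-depth stack; the function and variable names are ours, not those of the original implementation):

```python
import numpy as np

def superposition_step(stack, push_vec, a_push, a_noop, a_pop):
    """One update of a superposition stack (sketch after Joulin & Mikolov, 2015).
    stack: (depth, d) array, with row 0 on top. Each new row is a mixture of
    the row a push would place there, the row already there, and the row a
    pop would expose there, weighted by the three action probabilities."""
    depth, d = stack.shape
    above = np.vstack([push_vec[None, :], stack[:-1]])  # outcome of a push
    below = np.vstack([stack[1:], np.zeros((1, d))])    # outcome of a pop
    return a_push * above + a_noop * stack + a_pop * below
```

With $a_{\text{push}} = 1$ this shifts the whole stack down and writes the new vector on top; fractional weights blend the three outcomes elementwise, which is why the update parallelizes so easily.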
\paragraph{Stratification}
The model proposed by \citet{sun+al:1995} and later studied by \citet{grefenstette+al:2015} takes a different approach, assigning a \textit{strength} between 0 and 1 to each stack element. If the stack elements were the layers of a cake, then the strengths would represent the thickness of each layer. At each time step, the controller emits a push weight between 0 and 1 which determines the strength of a new vector pushed onto the stack, and a pop weight between 0 and 1 which determines how much to slice off the top of the stack. The stack reading is computed by examining the top layer of unit thickness and interpolating the vectors proportional to their strengths. This relies on $\min$ and $\max$ operations, which can have zero gradients. In practice, the model can get trapped in local optima and requires random restarts \cite{hao+al:2018}. This model also affords less opportunity for parallelization because of the interdependence of stack elements within the same time step. \Citet{hao+al:2018} proposed an extension that uses memory buffers to allow variable-length transductions.
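The thickness bookkeeping can be sketched as follows (our illustrative reading of the update, with scalar ``values'' standing in for vectors; not the authors' code):

```python
def strat_step(values, strengths, push_val, d_push, u_pop):
    """One update of a stratification stack (sketch after Grefenstette et
    al., 2015; top of stack is last). First slice u_pop of thickness off
    the top, then push push_val with strength d_push, then read the top
    unit of thickness."""
    new_strengths = []
    for i, s in enumerate(strengths):
        above = sum(strengths[i + 1:])          # thickness above element i
        new_strengths.append(max(0.0, s - max(0.0, u_pop - above)))
    values = values + [push_val]
    strengths = new_strengths + [d_push]
    reading = 0.0                               # interpolate the top unit
    for i, v in enumerate(values):
        above = sum(strengths[i + 1:])
        reading += min(strengths[i], max(0.0, 1.0 - above)) * v
    return values, strengths, reading
```

The $\min$/$\max$ operations in this sketch are exactly the ones noted above as sources of zero gradients.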
\paragraph{Nondeterminism}
In all the above models, the stack is essentially deterministic in design. In order to recognize a nondeterministic CFL like $\{ww^\text{R}\}$ from left to right, it must be possible, at each time step, for the stack to track all prefixes of the input string read so far. None of the foregoing models, to our knowledge, can represent a set of possibilities like this. Even for deterministic CFLs, this has consequences for trainability; at each time step, training can only update the model from the vantage point of a single stack configuration, making the model prone to getting stuck in local minima.
To overcome this weakness, we propose incorporating a nondeterministic stack, which affords the model a global view of the space of possible ways to use the stack. Our controller emits a probability distribution over stack operations, as in the superposition approach. However, whereas superposition only maintains the per-element marginal distributions over the stack elements, we propose to maintain the full distribution over the whole stack contents. We marginalize the distribution as late as possible, when the controller queries the stack for the current top stack symbol.
In the following sections, we explain our model and compare it against those of \citet{joulin+mikolov:2015} and \citet{grefenstette+al:2015}. Despite taking longer in wall-clock time to train, our model learns to solve the tasks optimally with a higher rate of success.
\section{Pushdown Automata}
In this section, we give a definition of nondeterministic PDAs (\S\ref{sec:pdadef}),
describe how to process strings with nondeterministic PDAs in cubic time (\S\ref{sec:lang}), and reformulate this algorithm in terms of tensor operations (\S\ref{sec:tensor}).
\subsection{Notation}
Let $\epsilon$ be the empty string. Let $\indicator{\phi}$ be $1$ when proposition $\phi$ is true, $0$ otherwise.
If $A$ is a matrix, let $A_{i:}$ and $A_{:j}$ be the $i$th row and $j$th column, respectively, and define analogous notation for tensors.
\subsection{Definition}
\label{sec:pdadef}
A \textit{weighted pushdown automaton (PDA)} is a tuple $M = (Q, \Sigma, \Gamma, \delta, q_0, \bot)$, where:
\begin{compactitem}
\item $Q$ is a finite set of states.
\item $\Sigma$ is a finite input alphabet.
\item $\Gamma$ is a finite stack alphabet.
\item $\delta \colon Q
\times \Gamma \times \Sigma \times Q \times \Gamma^\ast \rightarrow \mathbb{R}_{\geq 0}$ maps transitions, which we write as $\trans qaxry$, to weights.
\item $q_0 \in Q$ is the start state.
\item $\bot \in \Gamma$ is the initial stack symbol.
\end{compactitem}
In this paper, we do not allow non-scanning transitions (that is, those where $a = \epsilon$). Although this does not reduce the weak generative capacity of PDAs \citep{autebert+:1997}, it could affect their ability to learn; we leave exploration of non-scanning transitions for future work.
For simplicity, we will assume that all transitions have one of the three forms:
\begin{align*}
&\trans q a x r xy && \text{push $y$ on top of $x$} \\
&\trans q a x r y && \text{replace $x$ with $y$} \\
&\trans q a x r \epsilon && \text{pop $x$.}
\end{align*}
This also does not reduce the weak generative capacity of PDAs.
Given an input string $w \in \Sigma^\ast$ of length $n$, a \emph{configuration} is a triple $(i, q, \beta)$, where $i \in [0, n]$ is an input position indicating that all symbols up to and including $w_i$ have been scanned, $q \in Q$ is a state, and $\beta \in \Gamma^\ast$ is the content of the stack (written bottom to top). For all $i, q, r, \beta, x, y$, we say that $(i\mathord-1, q, \beta x)$ \emph{yields} $(i, r, \beta y)$ if $\transweight q{w_i}xry > 0$. A \emph{run} is a sequence of configurations starting with $(0, q_0, \bot)$ where each configuration (except the last) yields the next configuration.
Because our model does not use the PDA to accept or reject strings, we omit the usual definitions for the language accepted by a PDA. This is also why our definition lacks accept states.
As an example, consider the following PDA, for the language $\{ww^\text{R} \mid w \in \{\texttt{0}, \texttt{1}\}^\ast\}$:
\begin{align*}
M &= (Q, \Sigma, \Gamma, \delta, q_1, \bot) \\
Q &= \{q_1, q_2\} \\
\Sigma &= \{\texttt{0}, \texttt{1}\} \\
\Gamma &= \{\texttt{0}, \texttt{1}, \bot\}
\end{align*}
where $\delta$ contains the transitions
\begin{align*}
q_1, x &\xrightarrow{a} q_1, xa & x &\in \Gamma, a \in \Sigma \\
q_1, a &\xrightarrow{a} q_2, \epsilon & a &\in \Sigma \\
q_2, a &\xrightarrow{a} q_2, \epsilon & a &\in \Sigma.
\end{align*}
This PDA has a possible configuration with an empty stack ($\bot$) iff the input string read so far is of the form $ww^\text{R}$.
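This PDA can be simulated directly by tracking the full set of reachable configurations; the following brute-force sketch (ours, not part of the model) makes the nondeterminism explicit:

```python
def accepts_wwR(s):
    """Brute-force simulation of the example nondeterministic PDA for
    {ww^R}: track every reachable (state, stack) configuration. '_'
    stands for the bottom symbol; a configuration whose stack is just
    '_' exists iff the input read so far has the form ww^R. The
    configuration set can grow exponentially in general, which is what
    the cubic-time simulation described below avoids."""
    configs = {('q1', '_')}
    for a in s:
        next_configs = set()
        for q, stack in configs:
            if q == 'q1':
                next_configs.add(('q1', stack + a))       # q1,x -a-> q1,xa
                if stack[-1] == a:
                    next_configs.add(('q2', stack[:-1]))  # q1,a -a-> q2 (pop)
            elif stack[-1] == a:                          # q == 'q2'
                next_configs.add(('q2', stack[:-1]))      # q2,a -a-> q2 (pop)
        configs = next_configs
    return any(stack == '_' for _, stack in configs)
```

On \texttt{0110}, for example, the simulation carries both the ``keep pushing'' and ``guessed the middle'' branches forward until the final symbol resolves them.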
To make a weighted PDA probabilistic, we require that all transition weights be nonnegative and, for all $a, q, x$:
\begin{align*}
\displaystyle\sum_{r \in Q} \sum_{y \in \Gamma^\ast} \transweight qaxry &= 1.
\end{align*}
Whereas many definitions make the model generate symbols \citep{abney+al:1999}, our definition makes the PDA operations conditional on the input symbol $a$. The difference is not very important, because the RNN controller will eventually assume responsibility for reading and writing symbols, but our definition makes the shift to an RNN controller below slightly simpler.
\subsection{Recognition}
\label{sec:lang}
\citet{lang:1974} gives an algorithm for simulating all runs of a nondeterministic PDA, related to Earley's algorithm \citep{earley:1970}. At any point in time, there can be exponentially many possibilities for the contents of the stack. In spite of this, Lang's algorithm is able to represent the set of all possibilities using only quadratic space. As this set is regular, its representation can be thought of as a weighted finite automaton, which we call the \emph{stack WFA}, similar to the graph-structured stack used in GLR parsing \cite{tomita:1987}.
Figure~\ref{fig:stackops} depicts Lang's algorithm as a set of inference rules, similar to a deductive parser \citep{shieber+:1995,goodman:1999}, although the visual presentation is rather different. Each inference rule is drawn as a fragment of the stack WFA. If the transitions drawn with solid lines are present in the stack WFA, and the side conditions in the right column are met, then the transition drawn with a dashed line can be added to the stack WFA. The algorithm repeatedly applies inference rules to add states and transitions to the stack WFA; no states or transitions are ever deleted.
\begin{figure*}
\tikzset{state/.append style={rectangle,rounded corners,inner sep=3pt,anchor=base,execute at begin node={\strut}}}
\tikzset{x=2.25cm,baseline=0}
\renewcommand{\arraystretch}{4}
\begin{center}
\begin{tabular}{ccc}
Axiom &
\begin{tikzpicture}
\node[initial,state](q) at (0,0) {};
\node[state](r) at (1,0) {$0,q_0,\bot$};
\draw[dashed] (q) edge node {$\bot/1$} (r);
\end{tikzpicture}
& \\
Push &
\begin{tikzpicture}
\node[state](q1) at (1,0) {$j\mathord-1,q,x$};
\node[state](q2) at (2,0) {$j,r,y$};
\draw[dashed] (q1) edge node {$y / p$} (q2);
\end{tikzpicture} &
$p = \transweight q{w_j}xr{\bullet y}$
\\
Replace &
\begin{tikzpicture}
\node[state](q0) at (0,0) {$i,q,x$};
\node[state](q1) at (1,0) {$j\mathord-1,s,z$};
\node[state](q2) at (2,0) {$j,r,y$};
\draw (q0) edge node[below] {$z / p_1$} (q1);
\draw[dashed,bend left] (q0) edge node {$y / p_1 p$}(q2);
\end{tikzpicture} &
$p = \transweight s{w_j}zry$
\\
Pop &
\begin{tikzpicture}
\node[state](q0) at (0,0) {$i,q,x$};
\node[state](q1) at (1,0) {$k,t,y$};
\node[state](q2) at (2,0) {$j\mathord-1,s,z$};
\node[state](q3) at (3,0) {$j,r,y$};
\draw (q0) edge node[below] {$y / p_1$} (q1);
\draw (q1) edge node[below] {$z / p_2$} (q2);
\draw[dashed,bend left] (q0) edge node {$y / p_1 p_2 p$} (q3);
\end{tikzpicture} &
$p = \transweight s{w_j}zr\epsilon$
\end{tabular}
\end{center}
\caption{Lang's algorithm drawn as operations on the stack WFA. Solid edges indicate existing transitions; dashed edges indicate transitions that are added as a result of the stack operation.}
\label{fig:stackops}
\end{figure*}
\begin{figure*}
\tikzset{state/.append style={rectangle,rounded corners,inner sep=3pt,anchor=base,execute at begin node={\strut}}}
\tikzset{label/.style={anchor=base,execute at begin node={\strut}}}
\tikzset{x=2cm,baseline=0pt,node distance=2cm}
\begin{center}
\begin{tabular}{ll}
$j=0$ &
\begin{tikzpicture}
\node[initial,state] (start) {};
\node[accepting,state,right of=start] (0q1bot) {$0, q_1, \bot$};
\draw[dashed] (start) edge node {$\bot$} (0q1bot);
\end{tikzpicture}
\\[0.5cm]
$j=1$ &
\begin{tikzpicture}
\node[initial,state] (start) {};
\node[state,right of=start] (0q1bot) {$0, q_1, \bot$};
\draw (start) edge node {$\bot$} (0q1bot);
\node[accepting,state,right of=0q1bot](1q10) {$1, q_1, \texttt{0}$};
\draw[dashed] (0q1bot) edge node {$\texttt{0}$} (1q10);
\coordinate (p) at (12cm,0);
\node[label] at (1q10.base -| p) {$\trans{q_1}{\texttt{0}}{\bot}{q_1}{\texttt{0}}$};
\end{tikzpicture}
\\[0.5cm]
$j=2$ &
\begin{tikzpicture}
\node[initial,state] (start) {};
\node[state,right of=start] (0q1bot) {$0, q_1, \bot$};
\draw (start) edge node {$\bot$} (0q1bot);
\node[state,right of=0q1bot](1q10) {$1, q_1, \texttt{0}$};
\draw (0q1bot) edge node {$\texttt{0}$} (1q10);
\node[accepting,state,right of=1q10](2q11) {$2, q_1, \texttt{1}$};
\draw[dashed] (1q10) edge node {$\texttt{1}$} (2q11);
\coordinate (p) at (12cm,0);
\node[anchor=base] at (2q11.base -| p) {$\trans{q_1}{\texttt{1}}{\texttt{0}}{q_1}{\texttt{1}}$};
\end{tikzpicture}
\\[0.5cm]
$j=3$ &
\begin{tikzpicture}
\node[initial,state] (start) {};
\node[state,right of=start] (0q1bot) {$0, q_1, \bot$};
\draw (start) edge node {$\bot$} (0q1bot);
\node[state,right of=0q1bot](1q10) {$1, q_1, \texttt{0}$};
\draw (0q1bot) edge node {$\texttt{0}$} (1q10);
\node[state,right of=1q10](2q11) {$2, q_1, \texttt{1}$};
\draw (1q10) edge node {$\texttt{1}$} (2q11);
\node[accepting,state,right of=2q11](3q11) {$3, q_1, \texttt{1}$};
\draw[dashed] (2q11) edge node {$\texttt{1}$} (3q11);
\node[accepting,state,below=0.5cm of 3q11](3q20) {$3, q_2, \texttt{0}$};
\draw[dashed,out=-30,in=180] (0q1bot) edge node {$\texttt{0}$} (3q20);
\coordinate (p) at (12cm,0);
\node[label] at (3q11.base -| p) {$\trans{q_1}{\texttt{1}}{\texttt{1}}{q_1}{\texttt{1}}$};
\node[label] at (3q20.base -| p) {$\trans{q_1}{\texttt{1}}{\texttt{1}}{q_2}{\epsilon}$};
\end{tikzpicture}
\\[1.7cm]
$j=4$ &
\begin{tikzpicture}
\node[initial,state] (start) {};
\node[state,right of=start] (0q1bot) {$0, q_1, \bot$};
\draw (start) edge node {$\bot$} (0q1bot);
\node[state,right of=0q1bot](1q10) {$1, q_1, \texttt{0}$};
\draw (0q1bot) edge node {$\texttt{0}$} (1q10);
\node[state,right of=1q10](2q11) {$2, q_1, \texttt{1}$};
\draw (1q10) edge node {$\texttt{1}$} (2q11);
\node[state,right of=2q11](3q11) {$3, q_1, \texttt{1}$};
\draw (2q11) edge node {$\texttt{1}$} (3q11);
\node[state,below=0.5cm of 3q11](3q20) {$3, q_2, \texttt{0}$};
\draw[out=-30,in=180] (0q1bot) edge node {$\texttt{0}$} (3q20);
\node[accepting,state,right of=3q11](4q10) {$4, q_1, \texttt{0}$};
\draw[dashed] (3q11) edge node {$\texttt{0}$} (4q10);
\node[accepting,state,below=0.5cm of 4q10](4q2bot) {$4, q_2, \bot$};
\draw[dashed,every edge,rounded corners=5mm] (start) -- ($(start|-4q2bot)+(1.5,-0.75)$) to node {$\bot$} ($(4q2bot)+(-0.5,-0.75)$) -- (4q2bot);
\coordinate (p) at (12cm,0);
\node[label] at (4q10.base -| p) {$\trans{q_1}{\texttt{0}}{\texttt{1}}{q_1}{\texttt{0}}$};
\node[label] at (4q2bot.base -| p) {$\trans{q_2}{\texttt{0}}{\texttt{0}}{q_2}{\epsilon}$};
\end{tikzpicture}
\end{tabular}
\end{center}
\caption{Run of Lang's algorithm on our example PDA and the string $\texttt{0110}$. The PDA transitions used are shown at right.}
\label{fig:lang_example}
\end{figure*}
Each state of the stack WFA is of the form $(i, q, x)$, where $i$ is a position in the input string, $q$ is a PDA state, and $x$ is the top stack symbol. We briefly explain each of the inference rules:
\begin{asparadesc}
\item[Axiom] creates an initial state and pushes $\bot$ onto the stack.
\item[Push] pushes a $y$ on top of an $x$. Unlike Lang's original algorithm, this inference rule applies whether or not state $(j\mathord-1, q, x)$ is reachable.
\item[Replace] pops a $z$ and pushes a $y$, by backing up the $z$ transition (without deleting it) and adding a new $y$ transition.
\item[Pop] pops a $z$, by backing up the $z$ transition as well as the preceding $y$ transition (without deleting them) and adding a new $y$ transition.
\end{asparadesc}
The set of accept states of the stack WFA changes from time step to time step; at step $j$, the accept states are $\{(j, q, x) \mid q \in Q, x \in \Gamma\}$. The language recognized by the stack WFA at time $j$ is the set of possible stack contents at time $j$.
An example run of the algorithm is shown in Figure~\ref{fig:lang_example}, using our example PDA and the string $\texttt{0110}$. At time step $j=3$, the PDA reads $\texttt{1}$ and either pushes a $\texttt{1}$ (path ending in state $(3,q_1,\texttt{1})$) or pops a $\texttt{1}$ (path ending in state $(3,q_2,\texttt{0})$). The same happens at time step $j=4$, and the existence of a state with top stack symbol $\bot$ indicates that the string is of the form $ww^\text{R}$.
The total running time of the algorithm is proportional to the number of ways that the inference rules can be instantiated. Since the Pop rule contains three string positions ($i$, $j$, and $k$), the time complexity is $O(n^3)$. The total space requirement is characterized by the number of possible WFA transitions. Since transitions connect two states, each with a string position ($i$ and $j$), the space complexity is $O(n^2)$.
\subsection{Inner and Forward Weights}
\label{sec:tensor}
To implement this algorithm in a typical neural-network framework, we reformulate it in terms of tensor operations. We use the assumption that all transitions are scanning, although it would be possible to extend the model to handle non-scanning transitions using matrix inversions \citep{stolcke:1995}.
Define $\text{Act}(\Gamma) = \bullet\Gamma \cup \Gamma \cup \{\epsilon\}$ to be a set of possible stack actions: if $y \in \Gamma$, then $\bullet y$ means ``push $y$,'' $y$ means ``replace with $y$,'' and $\epsilon$ means ``pop.''
Given an input string $w$, we pack the transition weights of the PDA into a tensor $\Delta$ with dimensions $n \times |Q| \times |\Gamma| \times |Q| \times |\text{Act}(\Gamma)|$:
\begin{equation}
\begin{aligned}
\transtensor jqxr{\bullet y} &= \transweight q{w_j}{x}{r}{x y} \\
\transtensor jszry &= \transweight{s}{w_j}{z}{r}{y} \\
\transtensor jszr\epsilon &= \transweight{s}{w_j}{z}{r}{\epsilon}.
\end{aligned}
\label{eq:delta}
\end{equation}
We compute the transition weights of the stack WFA (except for the initial transition) as a tensor of \emph{inner weights} $\gamma$, with dimensions $n \times n \times |Q| \times |\Gamma| \times |Q| \times |\Gamma|$. Each element, which we write as $\gamma[i \xrightarrow{} j][q, x \xrightarrow{} r, y]$, is the weight of the stack WFA transition
\begin{center}
\begin{tikzpicture}
\tikzset{state/.append style={rectangle,rounded corners,inner sep=2pt}}
\node[state](q) at (0,0) {$i,q,x$};
\node[state](r) at (1in,0) {$j,r,y$};
\draw (q) edge node {$y$} (r);
\end{tikzpicture}
\end{center}
The equations defining $\gamma$ are shown in Figure \ref{fig:equations}. Because these equations are a recurrence relation, we cannot compute $\gamma$ all at once, but (for example) in order of increasing $j$.
\begin{figure*}
For $1 \leq i < j \leq n$,
\begin{equation*}
\begin{split}
&\gamma[i \xrightarrow{} j][q, x \xrightarrow{} r, y] = \\
&\qquad \begin{aligned}
&\mathds{1}[i=j\mathord-1] \; \transtensor jqxr{\bullet y} && \text{Push} \\
& + \sum_{s,z} \gamma[i \xrightarrow{} j\mathord-1][q, x \xrightarrow{} s, z] \; \transtensor jszry && \text{Replace} \\
& + \sum_{k=i+1}^{j-2} \sum_{t} \sum_{s,z} \gamma[i \xrightarrow{} k][q, x \xrightarrow{} t, y] \; \gamma[k \xrightarrow{} j\mathord-1][t, y \xrightarrow{} s, z] \; \transtensor jszr\epsilon && \text{Pop}
\end{aligned}
\end{split}
\end{equation*}
\caption{Equations for computing inner weights.}
\label{fig:equations}
\end{figure*}
Additionally, we compute a tensor $\alpha$ of \emph{forward weights} of the stack WFA. This tensor has dimensions $n \times |Q| \times |\Gamma|$, and its elements are defined by the recurrence
\begin{align*}
\alpha[1][r, y] &= \indicator{r = q_0 \wedge y = \bot} \\
\alpha[j][r, y] &=
\begin{multlined}[t]
\! \sum_{i=1}^{j-1} \sum_{q,x} \alpha[i][q, x] \, \gamma[i \xrightarrow{} j][q, x \xrightarrow{} r, y] \hspace{-6pt} \\
(2 \leq j \leq n).
\end{multlined}
\end{align*}
The weight $\alpha[j][r, y]$ is the total weight of reaching a configuration $(j, r, \beta y)$ for any $\beta$ from the initial configuration, and we can use $\alpha$ to compute the probability distribution over top stack symbols at time step $j$:
\begin{align*}
\tau^{(j)}(y) &= \frac{ \sum_r \alpha[j][r, y] }{ \sum_{y'} \sum_r \alpha[j][r, y'] }.
\end{align*}
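For small $n$, these recurrences can be implemented directly. The following NumPy sketch is our own, not the released implementation: it assumes a hypothetical packing of $\Delta$ into separate push/replace/pop arrays, uses the same 1-based time indexing as the recurrences (index 1 is the initial configuration), and computes $\gamma$, $\alpha$, and the reading distribution $\tau$:

```python
import numpy as np

def stack_weights(delta, n, nQ, nS, q0=0, bot=0):
    """Inner weights gamma[i, j, q, x, r, y] and forward weights
    alpha[j, r, y], following the recurrences in the text. delta[j]
    gives the transition weights applied at step j (an assumed packing):
      delta[j]['push'][q, x, r, y], delta[j]['repl'][q, x, r, y],
      delta[j]['pop'][q, x, r]."""
    gamma = np.zeros((n + 1, n + 1, nQ, nS, nQ, nS))
    for j in range(2, n + 1):
        for i in range(1, j):
            g = np.zeros((nQ, nS, nQ, nS))
            if i == j - 1:                                    # Push
                g += delta[j]['push']
            g += np.einsum('qxsz,szry->qxry',                 # Replace
                           gamma[i, j - 1], delta[j]['repl'])
            for k in range(i + 1, j - 1):                     # Pop
                g += np.einsum('qxty,tysz,szr->qxry',
                               gamma[i, k], gamma[k, j - 1],
                               delta[j]['pop'])
            gamma[i, j] = g
    alpha = np.zeros((n + 1, nQ, nS))
    alpha[1, q0, bot] = 1.0
    for j in range(2, n + 1):
        alpha[j] = np.einsum('iqx,iqxry->ry', alpha[1:j], gamma[1:j, j])
    # tau[j]: normalized distribution over top stack symbols at step j
    tau = alpha.sum(axis=1)
    tau = tau / np.maximum(tau.sum(axis=1, keepdims=True), 1e-30)
    return gamma, alpha, tau
```

In the real model these loops are batched and computed in the log semiring; this dense version is only meant to make the index bookkeeping of the Push, Replace, and Pop terms explicit.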
\section{Neural Pushdown Automata}
Now we couple the tensor formulation of Lang's algorithm for nondeterministic PDAs with an RNN controller.
\subsection{Model}
\label{sec:controllerinterface}
The controller can be any type of RNN; in our experiments, we used an LSTM. At each time step, it computes a hidden vector $\mathbf{h}^{(j)}$ with $d$ dimensions from the previous hidden vector, an input vector $\mathbf{x}^{(j)}$, and the distribution over current top stack symbols, $\tau^{(j)}$, defined above:
\begin{align*}
\textbf{h}^{(j)} &= R\left(\textbf{h}^{(j-1)}, \, \begin{bmatrix} \mathbf{x}^{(j)} \\ \tau^{(j)} \end{bmatrix} \right) \\
\intertext{where $R$ can be any RNN unit. This state is used to compute an output vector $\mathbf{y}^{(j)}$ as usual:}
\mathbf{y}^{(j)} &= \text{softmax}\left(\mathbf{A} \mathbf{h}^{(j)} + \mathbf{b}\right) \\
\intertext{where $\mathbf{A}$ and $\mathbf{b}$ are parameters with dimensions $|\Sigma| \times d$ and $|\Sigma|$, respectively. In addition, the state is used to compute a conditional distribution over actions, $\Delta[j]$:}
\mathbf{z}^{(j)}_{qxry} &= \exp\left(\mathbf{C}_{qxry:} \mathbf{h}^{(j)} + \mathbf{D}_{qxry}\right) \\
\transtensor jqxry &= \frac{\mathbf{z}^{(j)}_{qxry}}{\sum_{r',y'} \mathbf{z}^{(j)}_{qxr'y'}}
\end{align*}
where $\mathbf{C}$ and $\mathbf{D}$ are tensors of parameters with dimensions $|Q| \times |\Gamma| \times |Q| \times |\text{Act}(\Gamma)| \times d$ and $|Q| \times |\Gamma| \times |Q| \times |\text{Act}(\Gamma)|$, respectively. (This is just an affine transformation followed by a softmax over $r$ and $y$.)
These equations replace equations~(\ref{eq:delta}).
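In NumPy (rather than PyTorch), the action distribution amounts to the following; the function name is ours:

```python
import numpy as np

def action_distribution(h, C, D):
    """Compute Delta[j] from the controller state h: an affine map
    followed, for each (q, x), by a softmax over (r, y). Shapes:
    h (d,), C (|Q|, |Gamma|, |Q|, |Act|, d), D (|Q|, |Gamma|, |Q|, |Act|)."""
    z = np.exp(np.einsum('qxrad,d->qxra', C, h) + D)
    return z / z.sum(axis=(2, 3), keepdims=True)
```

Normalizing over the last two axes makes each $(q, x)$ slice a proper probability distribution, so the controller always parameterizes a probabilistic PDA.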
\subsection{Implementation}
We implemented the NS-RNN{} using PyTorch \citep{pytorch}, and doing so efficiently required a few crucial tricks. The first was a workaround to update the $\gamma$ and $\alpha$ tensors in-place in a way that was compatible with PyTorch's automatic differentiation; this was necessary to achieve the theoretical quadratic space complexity. The second was an efficient implementation of a differentiable \texttt{einsum} operation\footnote{\url{https://github.com/bdusell/semiring-einsum}} that supports the log semiring (as well as other semirings), which allowed us to implement the equations of Figure \ref{fig:equations} in a reasonably fast, memory-efficient way that avoids underflow. Our \texttt{einsum} implementation splits the operation into fixed-size blocks where the multiplication and summation of terms can be fully parallelized. This enforces a reasonable upper bound on memory usage while suffering only a slight decrease in speed compared to fully parallelizing the entire \texttt{einsum} operation.
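The log-semiring idea can be illustrated with a dense, max-shifted version (our sketch; the actual implementation instead splits the sum into fixed-size blocks as described above):

```python
import numpy as np

def log_einsum(eq, *ops):
    """einsum in the log semiring: + plays the role of *, and logsumexp
    the role of +. Shifting each operand by its maximum before
    exponentiating keeps the intermediate products from underflowing."""
    shifts = [op.max() for op in ops]
    result = np.einsum(eq, *(np.exp(op - s) for op, s in zip(ops, shifts)))
    return np.log(result) + sum(shifts)
```

A naive `np.log(np.einsum(eq, *map(np.exp, ops)))` would underflow to $-\infty$ for the very small probabilities that arise when many stack-operation weights are multiplied along a run.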
\section{Experiments}
In this section, we describe our experiments comparing our NS-RNN{} and three baseline language models on several formal languages.
\subsection{Tasks}
\begin{asparadesc}
\item[Marked reversal] The language of palindromes with an explicit middle marker, with strings of the form $w\texttt{\#}w^\text{R}$, where $w \in \{ \texttt{0}, \texttt{1} \}^{*}$. This task should be easily solvable by a model with a deterministic stack, as the model can push the string $w$ to the stack, change states upon reading $\texttt{\#}$, and predict $w^\text{R}$ by popping $w$ from the stack in reverse.
\item[Unmarked reversal] The language of (even-length) palindromes without a middle marker, with strings of the form $ww^\text{R}$, where $w \in \{ \texttt{0}, \texttt{1} \}^{*}$. When the length of $w$ can vary, a language model reading the string from left to right must use nondeterminism to guess where the boundary between $w$ and $w^\text{R}$ lies. At each position, it must either push the input symbol to the stack, or else guess that the middle point has been reached and start popping symbols from the stack. An optimal language model will interpolate among all possible split points to produce a final prediction.
\item[Padded reversal] Like the unmarked reversal language, but with a long stretch of repeated symbols in the middle, with strings of the form $wa^pw^\text{R}$, where $w \in \{ \texttt{0}, \texttt{1} \}^{*}$, $a \in \{ \texttt{0}, \texttt{1} \}$, and $p \geq 0$. The purpose of the padding is to confuse a language model attempting to guess where the middle of the palindrome is based on the content of the string. In the general case of unmarked reversal, a language model can disregard split points where a valid palindrome does not occur locally. Since all substrings of $a^p$ are palindromes, the language model must deal with a larger number of candidates simultaneously.
\item[Dyck language] The language $D_2$ of strings with two kinds of balanced brackets.
\item[Hardest CFL] Designed by \citet{greibach:1973} to be at least as difficult to parse as any other CFL:
\begin{equation*}
\begin{split}
L_0 &= \{ x_1 \texttt{,} y_1 \texttt{,} z_1 \texttt{;} \cdots x_n \texttt{,} y_n \texttt{,} z_n \texttt{;} \mid {} \\
&\qquad n \geq 0, \\
&\qquad y_1 \cdots y_n \in \texttt{\$}D_2, \\
&\qquad x_i, z_i \in \{\texttt{,}, \texttt{\$}, \texttt{(}, \texttt{)}, \texttt{[}, \texttt{]}\}^\ast \}.
\end{split}
\end{equation*}
Intuitively, $L_0$ contains strings formed by dividing a member of $\texttt{\$}D_2$ into pieces ($y_i$) and interleaving them with ``decoy'' pieces (substrings of $x_i$ and $z_i$). While processing the string, the machine has to nondeterministically guess whether each piece is genuine or a decoy. Greibach shows that for any CFL $L$, there is a string homomorphism $h$ such that a parser for $L_0$ can be run on $h(w)$ to find a parse for $w$. See Appendix~\ref{sec:hardest_cfl} for more information.
\end{asparadesc}
\subsection{Data}
For each task, we construct a probabilistic context-free grammar (PCFG) for the language (see Appendix \ref{sec:grammars} for the full grammars and their parameters). We then randomly sample a training set of 10,000 examples from the PCFG, filtering samples so that the length of a string is in the interval $[40, 80]$ (see Appendix \ref{sec:lengthsample} for our sampling method). The training set remains the same throughout the training process and is not re-sampled from epoch to epoch, since we want to test how well the model can infer the probability distribution from a finite sample.
We sample a validation set of 1,000 examples from the same distribution and a test set with string lengths varying from 40 to 100, with 100 examples per length. The validation set is randomized in each experiment, but for each task, the test set remains the same across all models and random restarts. For simplicity, we do not filter training samples from the validation or test sets, assuming that the chance of overlap is very small.
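As a toy illustration of length-filtered sampling, the sketch below draws from a small PCFG for the marked-reversal language and rejects strings outside $[40, 80]$ (the rule probabilities and naive rejection loop are ours for illustration, not the grammars or length-sampling method of the appendices):

```python
import random

# Toy PCFG for the marked-reversal language w#w^R (illustrative rule
# probabilities; see the appendices for the actual grammars and sampler).
RULES = {"S": [(0.4, ("0", "S", "0")), (0.4, ("1", "S", "1")), (0.2, ("#",))]}

def sample(sym="S", rng=random):
    """Recursively expand sym, choosing rules by their probabilities."""
    if sym not in RULES:
        return [sym]                 # terminal symbol
    probs, rhss = zip(*RULES[sym])
    rhs = rng.choices(rhss, weights=probs)[0]
    return [t for s in rhs for t in sample(s, rng)]

def sample_in_range(lo=40, hi=80, rng=random):
    # naive rejection sampling: resample until the length lands in [lo, hi]
    while True:
        w = sample(rng=rng)
        if lo <= len(w) <= hi:
            return w
```

Every accepted string is a marked palindrome of admissible length; the same pattern extends to the other tasks by swapping in their grammars.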
\subsection{Evaluation}
\label{sec:evaluation}
Since, in these languages, the next symbol cannot always be predicted deterministically from previous symbols, we do not use prediction accuracy as in previous work. Instead, we compute per-symbol cross-entropy on a set of strings $S$. Let $p$ be any distribution over strings; then:
\begin{align*}
H(S, p) &= \frac{\sum_{w \in S} -\log p(w)}{\sum_{w \in S} |w|}.
\end{align*}
We compute the cross-entropy for both the stack RNN and the distribution from which $S$ is sampled and report the difference. This can be seen as an approximation of the KL divergence of the stack RNN from the true distribution.
Technically, because the RNN models do not predict the end of the string, they estimate $p(w \mid |w|)$, not $p(w)$. However, they do not actually use any knowledge of the length, so it seems reasonable to compare the RNN's estimate of $p(w \mid |w|)$ with the true $p(w)$. (This is why, when we bin by length in Figure~\ref{fig:test}, some of the differences are negative.)
A benefit of using cross-entropy instead of prediction accuracy is that we can easily incorporate new tasks as long as they can be expressed as a PCFG. We do not, for example, need to define a language-dependent subsequence of symbols to evaluate on.
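The evaluation metric is straightforward to compute; a short sketch (function names are ours):

```python
import math

def per_symbol_cross_entropy(strings, logp):
    """H(S, p) from the text: total negative log-probability of the strings
    under p, divided by the total number of symbols."""
    return sum(-logp(w) for w in strings) / sum(len(w) for w in strings)

def ce_difference(strings, model_logp, source_logp):
    # the quantity we report: model cross-entropy minus source cross-entropy,
    # an estimate of the KL divergence of the model from the true distribution
    return (per_symbol_cross_entropy(strings, model_logp)
            - per_symbol_cross_entropy(strings, source_logp))
```

A perfect model yields a difference of zero on in-distribution data; the length-conditioning caveat above explains why binned differences can dip below zero.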
\subsection{Baselines}
We compare our NS-RNN{} against three baselines: an LSTM, the Stack LSTM of \citet{joulin+mikolov:2015} (``JM''), and the Stack LSTM of \citet{grefenstette+al:2015} (``Gref''). We deviate slightly from the original definitions of these models in order to standardize the controller-stack interface to the one defined in Section \ref{sec:controllerinterface}, and to isolate the effects of differences in the stack data structure, rather than the controller mechanism. For all three stack models, we use an LSTM controller whose initial hidden state is fixed to 0, and we use only one stack for the JM and Gref models. (In early experiments, we found that using multiple stacks did not make a meaningful difference in performance.) For JM, we include a bias term in the layers that compute the stack actions and network output. We do allow the no-op operation, and the stack reading consists of only the top stack cell. For Gref, we set the controller output~$\mathbf{o}'_t$ equal to the hidden state $\mathbf{h}_t$, so we compute the stack actions, pushed vector, and network output directly from the hidden state. We encode all input symbols as one-hot vectors; there are no embedding layers.
\subsection{Hyperparameters}
For all models, we use a single-layer LSTM with 20 hidden units. We selected this number because we found that an LSTM of this size could not completely solve the marked reversal task, indicating that the hidden state is a memory bottleneck. For each task, we perform a hyperparameter grid search for each model. We search for the initial learning rate, which has a large impact on performance, from the set $\{0.01, 0.005, 0.001, 0.0005\}$. For JM and Gref, we search for stack embedding sizes in $\{2, 20, 40\}$. We manually choose a small number of PDA states and stack symbol types for the NS-RNN{} for each task. For marked reversal, unmarked reversal, and Dyck, we use 2 states and 2 stack symbol types. For padded reversal, we use 3 states and 2 stack symbol types. For the hardest CFL, we use 3 states and 3 stack symbol types.
As noted by \citet{grefenstette+al:2015}, initialization can play a large role in whether a Stack LSTM converges on algorithmic behavior or becomes trapped in a local optimum. To mitigate this, for each hyperparameter setting in the grid search, we run five random restarts and select the hyperparameter setting with the lowest average difference in cross-entropy on the validation set. This gives us a picture not only of the model's performance, but of its rate of success. We initialize all fully-connected layers except for the recurrent LSTM layer with Xavier uniform initialization \citep{glorot+bengio:2010}, and all other parameters uniformly from $[-0.1, 0.1]$.
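The restart-averaged selection criterion amounts to the following one-liner (a sketch with a hypothetical results dictionary):

```python
def select_setting(results):
    """results maps each hyperparameter setting to the validation
    cross-entropy differences of its random restarts; pick the setting
    with the lowest average, as described above."""
    return min(results, key=lambda s: sum(results[s]) / len(results[s]))
```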
We train all models with Adam \citep{kingma+ba:2015} and clip gradients whose magnitude is above~5. We use mini-batches of size~10; to generate a batch, we first select a length and then sample~10 strings of that length. We train models until convergence, multiplying the learning rate by 0.9 after~5 epochs of no improvement in cross-entropy on the validation set, and stopping after 10 epochs of no improvement.
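The learning-rate schedule and stopping rule can be replayed on a sequence of validation losses; the sketch below is our own restatement of the rule, not the actual training code:

```python
def run_schedule(val_losses, lr0, decay=0.9, lr_patience=5, stop_patience=10):
    """Multiply the learning rate by `decay` after `lr_patience` consecutive
    epochs without validation improvement; stop after `stop_patience` such
    epochs. Returns (final learning rate, number of epochs run)."""
    lr, best, bad = lr0, float("inf"), 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, bad = loss, 0
        else:
            bad += 1
            if bad % lr_patience == 0:
                lr *= decay               # decay on every lr_patience-th bad epoch
            if bad >= stop_patience:
                return lr, epoch          # early stopping
    return lr, len(val_losses)
```

This is the same behavior as a reduce-on-plateau scheduler combined with patience-based early stopping.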
{
\definecolor{color0}{rgb}{0.12156862745098,0.466666666666667,0.705882352941177}
\definecolor{color1}{rgb}{1,0.498039215686275,0.0549019607843137}
\definecolor{color2}{rgb}{0.172549019607843,0.627450980392157,0.172549019607843}
\definecolor{color3}{rgb}{0.83921568627451,0.152941176470588,0.156862745098039}
\pgfplotsset{lines/.style={semithick}}
\pgfplotsset{line0/.style={lines, color0, mark=triangle*, mark options={rotate=30}}}
\pgfplotsset{line1/.style={lines, color1, mark=triangle*, mark options={rotate=120}}}
\pgfplotsset{line2/.style={lines, color2, mark=triangle*, mark options={rotate=210}}}
\pgfplotsset{line3/.style={lines, color3, mark=triangle*, mark options={rotate=300}}}
\tikzset{bars/.style={opacity=0.12}}
\pgfplotsset{every axis/.style={
width=3.5in,height=2.4in,
title style={yshift=-4.5ex},
legend cell align={left},
legend style={at={(0.5,-0.33)},anchor=north,draw=none,/tikz/every even column/.append style={column sep=0.4cm}},
legend columns=-1,
tick style={color=black},
tick align=outside,
tick pos=left,
ymin=0,ytick distance=0.1,
scaled ticks=false,
ticklabel style={/pgf/number format/fixed,/pgf/number format/precision=5},
ylabel style={at={(axis description cs:-0.15,0.5)}},
}}
\begin{figure*}
\begin{minipage}[t]{\columnwidth}
\centering
\pgfplotsset{every axis/.append style={
xmin=0,xmax=160,xtick distance=50,
mark repeat=32,
}}
\pgfplotsset{line0/.append style={mark phase=0}}
\pgfplotsset{line1/.append style={mark phase=8}}
\pgfplotsset{line2/.append style={mark phase=16}}
\pgfplotsset{line3/.append style={mark phase=24}}
\tikzset{linelabel/.style={black,inner sep=2pt,font={\footnotesize}}}
{\pgfplotsset{every axis/.append style={xticklabels={,,}}}
\scalebox{0.8}{\input{figures/train-marked-reversal.tex}} \\
\scalebox{0.8}{\input{figures/train-unmarked-reversal.tex}} \\
\scalebox{0.8}{\input{figures/train-padded-reversal.tex}} \\
\scalebox{0.8}{\input{figures/train-dyck.tex}} \\
}
\scalebox{0.8}{\input{figures/train-hardest-cfl.tex}}
\caption{Cross-entropy difference in nats between model and source distribution on validation set, as a function of training time. Lines are averages of five random restarts, and shaded regions are standard deviations. After a random restart converges, the value of its last epoch is used in the average for later epochs.}
\label{fig:train}
\end{minipage}%
\hspace{\columnsep}%
\begin{minipage}[t]{\columnwidth}
\centering
\pgfplotsset{every axis/.append style={
xmin=40, xmax=112,
xtick={40,60,80,100},
mark repeat=8,
}}
\pgfplotsset{line0/.append style={mark phase=0}}
\pgfplotsset{line1/.append style={mark phase=2}}
\pgfplotsset{line2/.append style={mark phase=4}}
\pgfplotsset{line3/.append style={mark phase=6}}
\tikzset{linelabel/.style={black,inner sep=2pt,font={\footnotesize}}}
{\pgfplotsset{every axis/.append style={xticklabels={,,}}}
\scalebox{0.8}{\input{figures/test-marked-reversal.tex}} \\
\scalebox{0.8}{\input{figures/test-unmarked-reversal.tex}} \\
\scalebox{0.8}{\input{figures/test-padded-reversal.tex}} \\
\scalebox{0.8}{\input{figures/test-dyck.tex}} \\
}
\scalebox{0.8}{\input{figures/test-hardest-cfl.tex}}
\caption{Cross-entropy difference in nats on the test set, binned by string length. Some models achieve a negative difference, for reasons explained in \S\ref{sec:evaluation}. Each line is the average of the same five random restarts shown in Figure~\ref{fig:train}.}
\label{fig:test}
\end{minipage}
\end{figure*}
}
\section{Results}
We show plots of the difference in cross-entropy on the validation set between each model and the source distribution in Figure \ref{fig:train}. For all tasks, stack-based models outperform the LSTM baseline, indicating that the tasks are effective benchmarks for differentiable stacks. For the marked reversal, unmarked reversal, and hardest CFL tasks, our model consistently achieves cross-entropy closer to the source distribution than any other model. Even for the marked reversal task, which can be solved deterministically, the NS-RNN{}, besides achieving lower cross-entropy on average, learns to solve the task in fewer updates and with much higher reliability across random restarts. In the case of the mildly nondeterministic unmarked reversal and highly nondeterministic hardest CFL tasks, the NS-RNN{} converges on the lowest validation cross-entropy. On the Dyck language, which is a deterministic task, all stack models converge quickly on the source distribution. We hypothesize that this is because the Dyck language represents a case where stack usage is locally advantageous everywhere, so it is particularly conducive to learning stack-like behavior. On the other hand, we note that our model struggles on padded reversal, in which stack-friendly signals are intentionally made very distant. Although the NS-RNN{} outperforms the LSTM baseline, the JM model solves the task most effectively, though still imperfectly.
In order to show how each model performs when evaluated on strings longer than those seen during training, in Figure \ref{fig:test}, we show cross-entropy on separately sampled test data as a function of string length. All test sets are identical across models and random restarts, and there are 100 samples per length. The NS-RNN{} consistently does well on string lengths it was trained on, but it is sometimes surpassed by other stack models on strings that are outside the distribution of lengths it was trained on. This suggests that the NS-RNN{} conforms more tightly to the real distribution seen during training.
\section{Conclusion}
We presented the NS-RNN{}, a neural language model with a differentiable stack that explicitly models nondeterminism. We showed that it offers improved trainability and modeling power over previous stack-based neural language models; the NS-RNN{} learns to solve some deterministic tasks more effectively than other stack-LSTMs, and achieves the best results on a challenging nondeterministic context-free language. However, we note that the NS-RNN{} struggled on a task where signals in the data were distant, and did not generalize to longer lengths as well as other stack-LSTMs; we hope to address these shortcomings in future work. We believe that the NS-RNN{} will prove to be a powerful tool for learning and modeling ambiguous syntax in natural language.
\section*{Acknowledgements}
This research was supported in part by a Google Faculty Research Award. We would like to thank Justin DeBenedetto and Darcey Riley for their helpful comments, and the Center for Research Computing at the University of Notre Dame for providing the computing infrastructure for our experiments.
\bibliographystyle{acl_natbib}
\section{Introduction}
\label{sec:introduction}
Although recent neural models of language have made advances in learning syntactic behavior, research continues to suggest that inductive bias plays a key role in data efficiency and human-like syntactic generalization \cite{schijndel+al:2019,hu+al:2020}. Based on the long-held observation that language exhibits hierarchical structure, previous work has proposed coupling recurrent neural networks (RNNs) with differentiable stack data structures \cite{joulin+mikolov:2015,grefenstette+al:2015} to give them some of the computational power of pushdown automata (PDAs), the class of automata that recognize context-free languages (CFLs). However, previously proposed differentiable stack data structures only model deterministic stacks, which store only one version of the stack contents at a time, theoretically limiting the power of these stack RNNs to the deterministic~CFLs.
A sentence's syntactic structure often cannot be fully resolved until its conclusion (if ever), requiring a human listener to track multiple possibilities while hearing the sentence. Past work in psycholinguistics has suggested that models that keep multiple candidate parses in memory at once can explain human reading times better than models which assume harsher computational constraints. This ability also plays an important role in calculating expectations that facilitate more efficient language processing \cite{levy:2008}. Current neural language models do not track multiple parses, if they learn syntax generalizations at all \cite{futrell+al:2018,wilcox+al:2019,mccoy+al:2020}.
We propose a new differentiable stack data structure that explicitly models a nondeterministic PDA, adapting an algorithm by \citet{lang:1974} and reformulating it in terms of tensor operations. The algorithm is able to represent an exponential number of stack configurations at once using cubic time and quadratic space complexity. As with existing stack RNN architectures, we combine this data structure with an RNN controller, and we call the resulting model a Nondeterministic Stack RNN{} (NS-RNN).
We predict that nondeterminism can help language processing in two ways. First, it will improve trainability, since all possible sequences of stack operations contribute to the objective function, not just the sequence used by the current model. Second, it will improve expressivity, as it is able to model concurrent parses in ways that a deterministic stack cannot. We demonstrate these claims by comparing the NS-RNN{} to deterministic stack RNNs on formal language modeling tasks of varying complexity. To show that nondeterminism aids training, we show that the NS-RNN{} achieves lower cross-entropy, in fewer parameter updates, on some deterministic CFLs. To show that nondeterminism improves expressivity, we show that the NS-RNN{} achieves lower cross-entropy on nondeterministic CFLs, including the ``hardest context-free language'' \cite{greibach:1973}, a language which is at least as difficult to parse as any other CFL and inherently requires nondeterminism. Our code is available at \url{https://github.com/bdusell/nondeterministic-stack-rnn}.
\section{Background and Motivation}
In all differentiable stack-augmented networks that we are aware of (including ours), a network called the \emph{controller}, which is some kind of RNN (typically an LSTM), is augmented with a differentiable stack, which has no parameters of its own. At each time step, the controller emits weights for various stack operations, which at minimum include push and pop. To maintain differentiability, the weights need to be continuous; different designs for the stack interpret fractionally-weighted operations differently. The stack then executes the fractional operations and produces a stack \emph{reading}, which is a vector that represents the top of the updated stack. The stack reading is used as an extra input to the next hidden state update.
Designs for differentiable stacks have proceeded generally along two lines. One approach, which we call \emph{superposition} \citep{joulin+mikolov:2015}, treats fractional weights as probabilities. The other, which we call \emph{stratification} \citep{sun+al:1995,grefenstette+al:2015}, treats fractional weights as ``thicknesses.''
\paragraph{Superposition}
In the model of \citet{joulin+mikolov:2015}, the controller emits at each time step a probability distribution over three stack operations: push a new vector, pop the top vector, and no-op. The stack simulates all three operations at once, setting each stack element to the weighted interpolation of the elements above, at, and below it in the previous time step, weighted by push, no-op, and pop probabilities respectively. Thus, each stack element is a superposition of possible values for that element. Because stack elements depend only on a fixed number of elements from the previous time step, the stack update can largely be parallelized. \Citet{yogatama+al:2018} developed an extension to this model that allows a variable number of pops per time step, up to a fixed limit $K$. \Citet{suzgun+:2019} also proposed a modification of the controller parameterization.
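The superposition update can be sketched as follows (our own simplification of the scheme just described, with a fixed maximum depth; the exact JM parameterization differs in details):

```python
import numpy as np

def superposition_update(stack, v, a_push, a_noop, a_pop):
    """One step of a superposition-style stack update: each cell becomes an
    interpolation of the cell above it (push), itself (no-op), and the cell
    below it (pop) from the previous time step; the new vector v enters at
    the top. stack: (depth, d) array; v: (d,) vector; weights sum to 1."""
    pushed = np.vstack([v[None, :], stack[:-1]])                   # shift down, v on top
    popped = np.vstack([stack[1:], np.zeros((1, stack.shape[1]))]) # shift up
    return a_push * pushed + a_noop * stack + a_pop * popped
```

Because each new cell depends on only three old cells, the whole update is a single vectorized operation, which is why this design parallelizes well.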
\paragraph{Stratification}
The model proposed by \citet{sun+al:1995} and later studied by \citet{grefenstette+al:2015} takes a different approach, assigning a \textit{strength} between 0 and 1 to each stack element. If the stack elements were the layers of a cake, then the strengths would represent the thickness of each layer. At each time step, the controller emits a push weight between 0 and 1 which determines the strength of a new vector pushed onto the stack, and a pop weight between 0 and 1 which determines how much to slice off the top of the stack. The stack reading is computed by examining the top layer of unit thickness and interpolating the vectors proportional to their strengths. This relies on $\min$ and $\max$ operations, which can have zero gradients. In practice, the model can get trapped in local optima and requires random restarts \cite{hao+al:2018}. This model also affords less opportunity for parallelization because of the interdependence of stack elements within the same time step. \Citet{hao+al:2018} proposed an extension that uses memory buffers to allow variable-length transductions.
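The reading step of a stratified stack can be sketched like this (a hypothetical simplification; it shows where the $\min$ operations with zero gradient come from):

```python
import numpy as np

def stratified_reading(vectors, strengths):
    """Stack reading for a strength-based stack: walk down from the top,
    taking min(strength, remaining budget) of each vector until a total
    thickness of 1 has been read. vectors: (depth, d); top of stack is last."""
    remaining = 1.0
    r = np.zeros(vectors.shape[1])
    for v, s in zip(vectors[::-1], strengths[::-1]):
        take = min(s, remaining)     # clipped interpolation weight
        r += take * v
        remaining -= take
        if remaining <= 0.0:
            break
    return r
```

Cells buried below the top unit of thickness receive weight zero, so gradients do not flow to them on that step.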
\paragraph{Nondeterminism}
In all the above models, the stack is essentially deterministic in design. In order to recognize a nondeterministic CFL like $\{ww^\text{R}\}$ from left to right, it must be possible, at each time step, for the stack to track all prefixes of the input string read so far. None of the foregoing models, to our knowledge, can represent a set of possibilities like this. Even for deterministic CFLs, this has consequences for trainability; at each time step, training can only update the model from the vantage point of a single stack configuration, making the model prone to getting stuck in local minima.
To overcome this weakness, we propose incorporating a nondeterministic stack, which affords the model a global view of the space of possible ways to use the stack. Our controller emits a probability distribution over stack operations, as in the superposition approach. However, whereas superposition only maintains the per-element marginal distributions over the stack elements, we propose to maintain the full distribution over the whole stack contents. We marginalize the distribution as late as possible, when the controller queries the stack for the current top stack symbol.
In the following sections, we explain our model and compare it against those of \citet{joulin+mikolov:2015} and \citet{grefenstette+al:2015}. Despite taking longer in wall-clock time to train, our model learns to solve the tasks optimally with a higher rate of success.
\section{Pushdown Automata}
In this section, we give a definition of nondeterministic PDAs (\S\ref{sec:pdadef}),
describe how to process strings with nondeterministic PDAs in cubic time (\S\ref{sec:lang}), and reformulate this algorithm in terms of tensor operations (\S\ref{sec:tensor}).
\subsection{Notation}
Let $\epsilon$ be the empty string. Let $\indicator{\phi}$ be $1$ when proposition $\phi$ is true, $0$ otherwise.
If $A$ is a matrix, let $A_{i:}$ and $A_{:j}$ be the $i$th row and $j$th column, respectively, and define analogous notation for tensors.
\subsection{Definition}
\label{sec:pdadef}
A \textit{weighted pushdown automaton (PDA)} is a tuple $M = (Q, \Sigma, \Gamma, \delta, q_0, \bot)$, where:
\begin{compactitem}
\item $Q$ is a finite set of states.
\item $\Sigma$ is a finite input alphabet.
\item $\Gamma$ is a finite stack alphabet.
\item $\delta \colon Q
\times \Gamma \times \Sigma \times Q \times \Gamma^\ast \rightarrow \mathbb{R}_{\geq 0}$ maps transitions, which we write as $\trans qaxry$, to weights.
\item $q_0 \in Q$ is the start state.
\item $\bot \in \Gamma$ is the initial stack symbol.
\end{compactitem}
In this paper, we do not allow non-scanning transitions (that is, those where $a = \epsilon$). Although this does not reduce the weak generative capacity of PDAs \citep{autebert+:1997}, it could affect their ability to learn; we leave exploration of non-scanning transitions for future work.
For simplicity, we will assume that all transitions have one of the three forms:
\begin{align*}
&\trans q a x r xy && \text{push $y$ on top of $x$} \\
&\trans q a x r y && \text{replace $x$ with $y$} \\
&\trans q a x r \epsilon && \text{pop $x$.}
\end{align*}
This also does not reduce the weak generative capacity of PDAs.
Given an input string $w \in \Sigma^\ast$ of length $n$, a \emph{configuration} is a triple $(i, q, \beta)$, where $i \in [0, n]$ is an input position indicating that all symbols up to and including $w_i$ have been scanned, $q \in Q$ is a state, and $\beta \in \Gamma^\ast$ is the content of the stack (written bottom to top). For all $i, q, r, \beta, x, y$, we say that $(i\mathord-1, q, \beta x)$ \emph{yields} $(i, r, \beta y)$ if $\transweight q{w_i}xry > 0$. A \emph{run} is a sequence of configurations starting with $(0, q_0, \bot)$ where each configuration (except the last) yields the next configuration.
Because our model does not use the PDA to accept or reject strings, we omit the usual definitions for the language accepted by a PDA. This is also why our definition lacks accept states.
As an example, consider the following PDA, for the language $\{ww^\text{R} \mid w \in \{\texttt{0}, \texttt{1}\}^\ast\}$:
\begin{align*}
M &= (Q, \Sigma, \Gamma, \delta, q_1, \bot) \\
Q &= \{q_1, q_2\} \\
\Sigma &= \{\texttt{0}, \texttt{1}\} \\
\Gamma &= \{\texttt{0}, \texttt{1}, \bot\}
\end{align*}
where $\delta$ contains the transitions
\begin{align*}
q_1, x &\xrightarrow{a} q_1, xa & x &\in \Gamma, a \in \Sigma \\
q_1, a &\xrightarrow{a} q_2, \epsilon & a &\in \Sigma \\
q_2, a &\xrightarrow{a} q_2, \epsilon & a &\in \Sigma.
\end{align*}
This PDA has a possible configuration whose stack contains only $\bot$ iff the input string read so far is of the form $ww^\text{R}$.
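For intuition, one can simulate this PDA directly by tracking the full set of configurations it could be in (a hypothetical sketch; this exhaustive simulation is exponential in general, which is what Lang's algorithm in \S\ref{sec:lang} avoids):

```python
BOT = "_"   # stands in for the bottom-of-stack symbol

def step(configs, a):
    """Apply all transitions of the example PDA to a set of (state, stack)
    configurations on input symbol a (stacks written bottom to top)."""
    out = set()
    for q, stack in configs:
        x = stack[-1]                         # top stack symbol
        if q == "q1":
            out.add(("q1", stack + a))        # push the symbol read
            if x == a:
                out.add(("q2", stack[:-1]))   # guess the midpoint and pop
        if q == "q2" and x == a:
            out.add(("q2", stack[:-1]))       # keep matching and popping
    return out

def has_empty_stack_config(w):
    configs = {("q1", BOT)}
    for a in w:
        configs = step(configs, a)
    return any(stack == BOT for _, stack in configs)
```

The predicate is true exactly when the input is an even-length palindrome, matching the claim above.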
To make a weighted PDA probabilistic, we require that all transition weights be nonnegative and, for all $a, q, x$:
\begin{align*}
\displaystyle\sum_{r \in Q} \sum_{y \in \Gamma^\ast} \transweight qaxry &= 1.
\end{align*}
Whereas many definitions make the model generate symbols \citep{abney+al:1999}, our definition makes the PDA operations conditional on the input symbol $a$. The difference is not very important, because the RNN controller will eventually assume responsibility for reading and writing symbols, but our definition makes the shift to an RNN controller below slightly simpler.
\subsection{Recognition}
\label{sec:lang}
\citet{lang:1974} gives an algorithm for simulating all runs of a nondeterministic PDA, related to Earley's algorithm \citep{earley:1970}. At any point in time, there can be exponentially many possibilities for the contents of the stack. In spite of this, Lang's algorithm is able to represent the set of all possibilities using only quadratic space. As this set is regular, its representation can be thought of as a weighted finite automaton, which we call the \emph{stack WFA}, similar to the graph-structured stack used in GLR parsing \cite{tomita:1987}.
Figure~\ref{fig:stackops} depicts Lang's algorithm as a set of inference rules, similar to a deductive parser \citep{shieber+:1995,goodman:1999}, although the visual presentation is rather different. Each inference rule is drawn as a fragment of the stack WFA. If the transitions drawn with solid lines are present in the stack WFA, and the side conditions in the right column are met, then the transition drawn with a dashed line can be added to the stack WFA. The algorithm repeatedly applies inference rules to add states and transitions to the stack WFA; no states or transitions are ever deleted.
\begin{figure*}
\tikzset{state/.append style={rectangle,rounded corners,inner sep=3pt,anchor=base,execute at begin node={\strut}}}
\tikzset{x=2.25cm,baseline=0}
\renewcommand{\arraystretch}{4}
\begin{center}
\begin{tabular}{ccc}
Axiom &
\begin{tikzpicture}
\node[initial,state](q) at (0,0) {};
\node[state](r) at (1,0) {$0,q_0,\bot$};
\draw[dashed] (q) edge node {$\bot/1$} (r);
\end{tikzpicture}
& \\
Push &
\begin{tikzpicture}
\node[state](q1) at (1,0) {$j\mathord-1,q,x$};
\node[state](q2) at (2,0) {$j,r,y$};
\draw[dashed] (q1) edge node {$y / p$} (q2);
\end{tikzpicture} &
$p = \transweight q{w_j}xr{\bullet y}$
\\
Replace &
\begin{tikzpicture}
\node[state](q0) at (0,0) {$i,q,x$};
\node[state](q1) at (1,0) {$j\mathord-1,s,z$};
\node[state](q2) at (2,0) {$j,r,y$};
\draw (q0) edge node[below] {$z / p_1$} (q1);
\draw[dashed,bend left] (q0) edge node {$y / p_1 p$}(q2);
\end{tikzpicture} &
$p = \transweight s{w_j}zry$
\\
Pop &
\begin{tikzpicture}
\node[state](q0) at (0,0) {$i,q,x$};
\node[state](q1) at (1,0) {$k,t,y$};
\node[state](q2) at (2,0) {$j\mathord-1,s,z$};
\node[state](q3) at (3,0) {$j,r,y$};
\draw (q0) edge node[below] {$y / p_1$} (q1);
\draw (q1) edge node[below] {$z / p_2$} (q2);
\draw[dashed,bend left] (q0) edge node {$y / p_1 p_2 p$} (q3);
\end{tikzpicture} &
$p = \transweight s{w_j}zr\epsilon$
\end{tabular}
\end{center}
\caption{Lang's algorithm drawn as operations on the stack WFA. Solid edges indicate existing transitions; dashed edges indicate transitions that are added as a result of the stack operation.}
\label{fig:stackops}
\end{figure*}
\begin{figure*}
\tikzset{state/.append style={rectangle,rounded corners,inner sep=3pt,anchor=base,execute at begin node={\strut}}}
\tikzset{label/.style={anchor=base,execute at begin node={\strut}}}
\tikzset{x=2cm,baseline=0pt,node distance=2cm}
\begin{center}
\begin{tabular}{ll}
$j=0$ &
\begin{tikzpicture}
\node[initial,state] (start) {};
\node[accepting,state,right of=start] (0q1bot) {$0, q_1, \bot$};
\draw[dashed] (start) edge node {$\bot$} (0q1bot);
\end{tikzpicture}
\\[0.5cm]
$j=1$ &
\begin{tikzpicture}
\node[initial,state] (start) {};
\node[state,right of=start] (0q1bot) {$0, q_1, \bot$};
\draw (start) edge node {$\bot$} (0q1bot);
\node[accepting,state,right of=0q1bot](1q10) {$1, q_1, \texttt{0}$};
\draw[dashed] (0q1bot) edge node {$\texttt{0}$} (1q10);
\coordinate (p) at (12cm,0);
\node[label] at (1q10.base -| p) {$\trans{q_1}{\texttt{0}}{\bot}{q_1}{\texttt{0}}$};
\end{tikzpicture}
\\[0.5cm]
$j=2$ &
\begin{tikzpicture}
\node[initial,state] (start) {};
\node[state,right of=start] (0q1bot) {$0, q_1, \bot$};
\draw (start) edge node {$\bot$} (0q1bot);
\node[state,right of=0q1bot](1q10) {$1, q_1, \texttt{0}$};
\draw (0q1bot) edge node {$\texttt{0}$} (1q10);
\node[accepting,state,right of=1q10](2q11) {$2, q_1, \texttt{1}$};
\draw[dashed] (1q10) edge node {$\texttt{1}$} (2q11);
\coordinate (p) at (12cm,0);
\node[anchor=base] at (2q11.base -| p) {$\trans{q_1}{\texttt{1}}{\texttt{0}}{q_1}{\texttt{1}}$};
\end{tikzpicture}
\\[0.5cm]
$j=3$ &
\begin{tikzpicture}
\node[initial,state] (start) {};
\node[state,right of=start] (0q1bot) {$0, q_1, \bot$};
\draw (start) edge node {$\bot$} (0q1bot);
\node[state,right of=0q1bot](1q10) {$1, q_1, \texttt{0}$};
\draw (0q1bot) edge node {$\texttt{0}$} (1q10);
\node[state,right of=1q10](2q11) {$2, q_1, \texttt{1}$};
\draw (1q10) edge node {$\texttt{1}$} (2q11);
\node[accepting,state,right of=2q11](3q11) {$3, q_1, \texttt{1}$};
\draw[dashed] (2q11) edge node {$\texttt{1}$} (3q11);
\node[accepting,state,below=0.5cm of 3q11](3q20) {$3, q_2, \texttt{0}$};
\draw[dashed,out=-30,in=180] (0q1bot) edge node {$\texttt{0}$} (3q20);
\coordinate (p) at (12cm,0);
\node[label] at (3q11.base -| p) {$\trans{q_1}{\texttt{1}}{\texttt{1}}{q_1}{\texttt{1}}$};
\node[label] at (3q20.base -| p) {$\trans{q_1}{\texttt{1}}{\texttt{1}}{q_2}{\epsilon}$};
\end{tikzpicture}
\\[1.7cm]
$j=4$ &
\begin{tikzpicture}
\node[initial,state] (start) {};
\node[state,right of=start] (0q1bot) {$0, q_1, \bot$};
\draw (start) edge node {$\bot$} (0q1bot);
\node[state,right of=0q1bot](1q10) {$1, q_1, \texttt{0}$};
\draw (0q1bot) edge node {$\texttt{0}$} (1q10);
\node[state,right of=1q10](2q11) {$2, q_1, \texttt{1}$};
\draw (1q10) edge node {$\texttt{1}$} (2q11);
\node[state,right of=2q11](3q11) {$3, q_1, \texttt{1}$};
\draw (2q11) edge node {$\texttt{1}$} (3q11);
\node[state,below=0.5cm of 3q11](3q20) {$3, q_2, \texttt{0}$};
\draw[out=-30,in=180] (0q1bot) edge node {$\texttt{0}$} (3q20);
\node[accepting,state,right of=3q11](4q10) {$4, q_1, \texttt{0}$};
\draw[dashed] (3q11) edge node {$\texttt{0}$} (4q10);
\node[accepting,state,below=0.5cm of 4q10](4q2bot) {$4, q_2, \bot$};
\draw[dashed,every edge,rounded corners=5mm] (start) -- ($(start|-4q2bot)+(1.5,-0.75)$) to node {$\bot$} ($(4q2bot)+(-0.5,-0.75)$) -- (4q2bot);
\coordinate (p) at (12cm,0);
\node[label] at (4q10.base -| p) {$\trans{q_1}{\texttt{0}}{\texttt{1}}{q_1}{\texttt{0}}$};
\node[label] at (4q2bot.base -| p) {$\trans{q_2}{\texttt{0}}{\texttt{0}}{q_2}{\epsilon}$};
\end{tikzpicture}
\end{tabular}
\end{center}
\caption{Run of Lang's algorithm on our example PDA and the string $\texttt{0110}$. The PDA transitions used are shown at right.}
\label{fig:lang_example}
\end{figure*}
Each state of the stack WFA is of the form $(i, q, x)$, where $i$ is a position in the input string, $q$ is a PDA state, and $x$ is the top stack symbol. We briefly explain each of the inference rules:
\begin{asparadesc}
\item[Axiom] creates an initial state and pushes $\bot$ onto the stack.
\item[Push] pushes a $y$ on top of an $x$. Unlike Lang's original algorithm, this inference rule applies whether or not state $(j\mathord-1, q, x)$ is reachable.
\item[Replace] pops a $z$ and pushes a $y$, by backing up the $z$ transition (without deleting it) and adding a new $y$ transition.
\item[Pop] pops a $z$, by backing up the $z$ transition as well as the preceding $y$ transition (without deleting them) and adding a new $y$ transition.
\end{asparadesc}
The set of accept states of the stack WFA changes from time step to time step; at step $j$, the accept states are $\{(j, q, x) \mid q \in Q, x \in \Gamma\}$. The language recognized by the stack WFA at time $j$ is the set of possible stack contents at time $j$.
An example run of the algorithm is shown in Figure~\ref{fig:lang_example}, using our example PDA and the string $\texttt{0110}$. At time step $j=3$, the PDA reads $\texttt{1}$ and either pushes a $\texttt{1}$ (path ending in state $(3,q_1,\texttt{1})$) or pops a $\texttt{1}$ (path ending in state $(3,q_2,\texttt{0})$). The same choice arises at time step $j=4$, and the existence of a state with top stack symbol $\bot$ indicates that the string is of the form $ww^\text{R}$.
The total running time of the algorithm is proportional to the number of ways that the inference rules can be instantiated. Since the Pop rule contains three string positions ($i$, $j$, and $k$), the time complexity is $O(n^3)$. The total space requirement is characterized by the number of possible WFA transitions. Since transitions connect two states, each with a string position ($i$ and $j$), the space complexity is $O(n^2)$.
\subsection{Inner and Forward Weights}
\label{sec:tensor}
To implement this algorithm in a typical neural-network framework, we reformulate it in terms of tensor operations. We use the assumption that all transitions are scanning, although it would be possible to extend the model to handle non-scanning transitions using matrix inversions \citep{stolcke:1995}.
Define $\text{Act}(\Gamma) = \bullet\Gamma \cup \Gamma \cup \{\epsilon\}$ to be a set of possible stack actions: if $y \in \Gamma$, then $\bullet y$ means ``push $y$,'' $y$ means ``replace with $y$,'' and $\epsilon$ means ``pop.''
Given an input string $w$, we pack the transition weights of the PDA into a tensor $\Delta$ with dimensions $n \times |Q| \times |\Gamma| \times |Q| \times |\text{Act}(\Gamma)|$:
\begin{equation}
\begin{aligned}
\transtensor jqxr{\bullet y} &= \transweight q{w_j}{x}{r}{x y} \\
\transtensor jszry &= \transweight{s}{w_j}{z}{r}{y} \\
\transtensor jszr\epsilon &= \transweight{s}{w_j}{z}{r}{\epsilon}.
\end{aligned}
\label{eq:delta}
\end{equation}
We compute the transition weights of the stack WFA (except for the initial transition) as a tensor of \emph{inner weights} $\gamma$, with dimensions $n \times n \times |Q| \times |\Gamma| \times |Q| \times |\Gamma|$. Each element, which we write as $\gamma[i \xrightarrow{} j][q, x \xrightarrow{} r, y]$, is the weight of the stack WFA transition
\begin{center}
\begin{tikzpicture}
\tikzset{state/.append style={rectangle,rounded corners,inner sep=2pt}}
\node[state](q) at (0,0) {$i,q,x$};
\node[state](r) at (1in,0) {$j,r,y$};
\draw (q) edge node {$y$} (r);
\end{tikzpicture}
\end{center}
The equations defining $\gamma$ are shown in Figure \ref{fig:equations}. Because these equations form a recurrence relation, we cannot compute $\gamma$ all at once, but must compute it incrementally, for example, in order of increasing $j$.
\begin{figure*}
For $1 \leq i < j \leq n$,
\begin{equation*}
\begin{split}
&\gamma[i \xrightarrow{} j][q, x \xrightarrow{} r, y] = \\
&\qquad \begin{aligned}
&\mathds{1}[i=j\mathord-1] \; \transtensor jqxr{\bullet y} && \text{Push} \\
& + \sum_{s,z} \gamma[i \xrightarrow{} j\mathord-1][q, x \xrightarrow{} s, z] \; \transtensor jszry && \text{Replace} \\
& + \sum_{k=i+1}^{j-2} \sum_{t} \sum_{s,z} \gamma[i \xrightarrow{} k][q, x \xrightarrow{} t, y] \; \gamma[k \xrightarrow{} j\mathord-1][t, y \xrightarrow{} s, z] \; \transtensor jszr\epsilon && \text{Pop}
\end{aligned}
\end{split}
\end{equation*}
\caption{Equations for computing inner weights.}
\label{fig:equations}
\end{figure*}
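To make the recurrence of Figure~\ref{fig:equations} concrete, the following is a minimal \texttt{numpy} sketch, not the actual implementation: the arrays \texttt{push}, \texttt{repl}, and \texttt{pop} are illustrative names for the three slices of $\Delta$ (push, replace, and pop actions), filled with random weights here, and positions are indexed from 1 so index 0 is unused.

```python
import numpy as np

# Hypothetical sizes: n input positions, Q PDA states, G stack symbols.
n, Q, G = 5, 2, 2
rng = np.random.default_rng(0)

# Illustrative slices of Delta[j] by action type (random placeholders):
# push[j, q, x, r, y] ~ Delta[j][q, x -> r, push y]
# repl[j, s, z, r, y] ~ Delta[j][s, z -> r, replace with y]
# pop[j, s, z, r]     ~ Delta[j][s, z -> r, pop]
push = rng.random((n + 1, Q, G, Q, G))
repl = rng.random((n + 1, Q, G, Q, G))
pop = rng.random((n + 1, Q, G, Q))

# Inner weights gamma[i, j, q, x, r, y]; entries with j <= i stay zero.
gamma = np.zeros((n + 1, n + 1, Q, G, Q, G))

for j in range(2, n + 1):
    for i in range(1, j):
        # Push: only from the immediately preceding position i = j-1.
        if i == j - 1:
            gamma[i, j] += push[j]
        # Replace: extend a transition ending at j-1 by relabeling its symbol.
        gamma[i, j] += np.einsum('qxsz,szry->qxry', gamma[i, j - 1], repl[j])
        # Pop: combine two inner spans (i -> k and k -> j-1) with a pop at j.
        for k in range(i + 1, j - 1):
            gamma[i, j] += np.einsum(
                'qxty,tysz,szr->qxry',
                gamma[i, k], gamma[k, j - 1], pop[j])
```

Computing $\gamma$ in order of increasing $j$, as above, respects the dependency structure of the recurrence; the five nested loop variables over positions collapse to three ($i$, $j$, $k$), matching the $O(n^3)$ time bound.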
Additionally, we compute a tensor $\alpha$ of \emph{forward weights} of the stack WFA. This tensor has dimensions $n \times |Q| \times |\Gamma|$, and its elements are defined by the recurrence
\begin{align*}
\alpha[1][r, y] &= \indicator{r = q_0 \wedge y = \bot} \\
\alpha[j][r, y] &=
\begin{multlined}[t]
\! \sum_{i=1}^{j-1} \sum_{q,x} \alpha[i][q, x] \, \gamma[i \xrightarrow{} j][q, x \xrightarrow{} r, y] \hspace{-6pt} \\
(2 \leq j \leq n).
\end{multlined}
\end{align*}
The weight $\alpha[j][r, y]$ is the total weight of reaching, from the initial configuration, a configuration $(r, j, \beta y)$ for any $\beta$, and we can use $\alpha$ to compute the probability distribution over top stack symbols at time step $j$:
\begin{align*}
\tau^{(j)}(y) &= \frac{ \sum_r \alpha[j][r, y] }{ \sum_{y'} \sum_r \alpha[j][r, y'] }.
\end{align*}
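The forward recurrence and the reading distribution $\tau^{(j)}$ can be sketched in a few lines of \texttt{numpy}. This is illustrative only: here $\gamma$ is a random placeholder rather than the tensor computed by the actual recurrence, state index 0 stands in for $q_0$, and stack-symbol index 0 stands in for $\bot$.

```python
import numpy as np

n, Q, G = 4, 2, 2
rng = np.random.default_rng(1)
# Inner weights gamma[i, j, q, x, r, y], assumed already computed
# (random placeholders here for illustration).
gamma = rng.random((n + 1, n + 1, Q, G, Q, G))

# Forward weights alpha[j, r, y]; base case: state q0 with bottom symbol.
alpha = np.zeros((n + 1, Q, G))
alpha[1, 0, 0] = 1.0

for j in range(2, n + 1):
    for i in range(1, j):
        # Sum over all predecessor states (q, x) of the stack WFA.
        alpha[j] += np.einsum('qx,qxry->ry', alpha[i], gamma[i, j])

def tau(j):
    # Normalized distribution over the top stack symbol at step j.
    weights = alpha[j].sum(axis=0)  # marginalize over PDA states r
    return weights / weights.sum()
```

Note that $\alpha[j]$ only needs $\gamma[i \rightarrow j]$ for $i < j$, so it can be computed alongside $\gamma$ in the same left-to-right pass.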
\section{Neural Pushdown Automata}
Now we couple the tensor formulation of Lang's algorithm for nondeterministic PDAs with an RNN controller.
\subsection{Model}
\label{sec:controllerinterface}
The controller can be any type of RNN; in our experiments, we use an LSTM. At each time step, it computes a hidden vector $\mathbf{h}^{(j)}$ with $d$ dimensions from the previous hidden vector, an input vector $\mathbf{x}^{(j)}$, and the distribution over current top stack symbols, $\tau^{(j)}$, defined above:
\begin{align*}
\mathbf{h}^{(j)} &= R\left(\mathbf{h}^{(j-1)}, \, \begin{bmatrix} \mathbf{x}^{(j)} \\ \tau^{(j)} \end{bmatrix} \right) \\
\intertext{where $R$ can be any RNN unit. This state is used to compute an output vector $\mathbf{y}^{(j)}$ as usual:}
\mathbf{y}^{(j)} &= \text{softmax}\left(\mathbf{A} \mathbf{h}^{(j)} + \mathbf{b}\right) \\
\intertext{where $\mathbf{A}$ and $\mathbf{b}$ are parameters with dimensions $|\Sigma| \times d$ and $|\Sigma|$, respectively. In addition, the state is used to compute a conditional distribution over actions, $\Delta[j]$:}
\mathbf{z}^{(j)}_{qxry} &= \exp\left(\mathbf{C}_{qxry:} \mathbf{h}^{(j)} + \mathbf{D}_{qxry}\right) \\
\transtensor jqxry &= \frac{\mathbf{z}^{(j)}_{qxry}}{\sum_{r',y'} \mathbf{z}^{(j)}_{qxr'y'}}
\end{align*}
where $\mathbf{C}$ and $\mathbf{D}$ are tensors of parameters with dimensions $|Q| \times |\Gamma| \times |Q| \times |\text{Act}(\Gamma)| \times d$ and $|Q| \times |\Gamma| \times |Q| \times |\text{Act}(\Gamma)|$, respectively. (This is just an affine transformation followed by a softmax over $r$ and $y$.)
These equations replace equation~(\ref{eq:delta}).
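The action-distribution computation, an affine transformation of $\mathbf{h}^{(j)}$ followed by a softmax over $(r, y)$ for each $(q, x)$, can be sketched as follows. This is a \texttt{numpy} illustration with randomly initialized parameters, not the trained model; the action count $|{\text{Act}(\Gamma)}| = 2|\Gamma| + 1$ counts pushes, replacements, and the pop.

```python
import numpy as np

Q, G, d = 2, 3, 20
A = 2 * G + 1  # |Act(Gamma)|: push y, replace with y, or pop
rng = np.random.default_rng(2)

# Hypothetical parameter tensors C, D and a hidden state h.
C = rng.normal(size=(Q, G, Q, A, d))
D = rng.normal(size=(Q, G, Q, A))
h = rng.normal(size=d)

# Affine transformation, then a softmax over (r, action) for each (q, x).
logits = np.einsum('qxrad,d->qxra', C, h) + D
z = np.exp(logits - logits.max(axis=(2, 3), keepdims=True))
Delta_j = z / z.sum(axis=(2, 3), keepdims=True)
```

Subtracting the per-$(q,x)$ maximum before exponentiating is the standard numerically stable softmax; it leaves the normalized result unchanged.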
\subsection{Implementation}
We implemented the NS-RNN{} using PyTorch \citep{pytorch}, and doing so efficiently required a few crucial tricks. The first was a workaround to update the $\gamma$ and $\alpha$ tensors in-place in a way that was compatible with PyTorch's automatic differentiation; this was necessary to achieve the theoretical quadratic space complexity. The second was an efficient implementation of a differentiable \texttt{einsum} operation\footnote{\url{https://github.com/bdusell/semiring-einsum}} that supports the log semiring (as well as other semirings), which allowed us to implement the equations of Figure \ref{fig:equations} in a reasonably fast, memory-efficient way that avoids underflow. Our \texttt{einsum} implementation splits the operation into fixed-size blocks where the multiplication and summation of terms can be fully parallelized. This enforces a reasonable upper bound on memory usage while suffering only a slight decrease in speed compared to fully parallelizing the entire \texttt{einsum} operation.
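The blocking trick can be illustrated on a single log-semiring contraction (a log-space matrix product) rather than general \texttt{einsum}; this sketch is not the cited library's API, and the function and helper names are ours.

```python
import numpy as np

def logsumexp(x, axis):
    # Numerically stable log-sum-exp along one axis.
    m = np.max(x, axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.sum(np.exp(x - m), axis=axis))

def blocked_log_matmul(A, B, block_size=64):
    # Log-semiring matrix product: C[i,j] = logsumexp_k (A[i,k] + B[k,j]).
    # The summed index k is processed in fixed-size blocks, so peak memory
    # stays bounded regardless of K, at a slight cost in speed.
    n, K = A.shape
    parts = []
    for s in range(0, K, block_size):
        a = A[:, s:s + block_size]   # (n, b)
        b = B[s:s + block_size, :]   # (b, m)
        parts.append(logsumexp(a[:, :, None] + b[None, :, :], axis=1))
    # Combine the per-block partial sums, still in log space.
    return logsumexp(np.stack(parts, axis=0), axis=0)

# Small demonstration against the unblocked computation.
rng = np.random.default_rng(3)
A = rng.normal(size=(3, 10))
B = rng.normal(size=(10, 4))
C = blocked_log_matmul(A, B, block_size=4)
C_ref = logsumexp(A[:, :, None] + B[None, :, :], axis=1)
```

Because log-sum-exp is associative over partitions of the summed index, the blocked result is exactly equal (up to floating-point rounding) to the fully parallel one.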
\section{Experiments}
In this section, we describe our experiments comparing our NS-RNN{} and three baseline language models on several formal languages.
\subsection{Tasks}
\begin{asparadesc}
\item[Marked reversal] The language of palindromes with an explicit middle marker, with strings of the form $w\texttt{\#}w^\text{R}$, where $w \in \{ \texttt{0}, \texttt{1} \}^{*}$. This task should be easily solvable by a model with a deterministic stack, as the model can push the string $w$ to the stack, change states upon reading $\texttt{\#}$, and predict $w^\text{R}$ by popping $w$ from the stack in reverse.
\item[Unmarked reversal] The language of (even-length) palindromes without a middle marker, with strings of the form $ww^\text{R}$, where $w \in \{ \texttt{0}, \texttt{1} \}^{*}$. When the length of $w$ can vary, a language model reading the string from left to right must use nondeterminism to guess where the boundary between $w$ and $w^\text{R}$ lies. At each position, it must either push the input symbol to the stack, or else guess that the middle point has been reached and start popping symbols from the stack. An optimal language model will interpolate among all possible split points to produce a final prediction.
\item[Padded reversal] Like the unmarked reversal language, but with a long stretch of repeated symbols in the middle, with strings of the form $wa^pw^\text{R}$, where $w \in \{ \texttt{0}, \texttt{1} \}^{*}$, $a \in \{ \texttt{0}, \texttt{1} \}$, and $p \geq 0$. The purpose of the padding is to confuse a language model attempting to guess where the middle of the palindrome is based on the content of the string. In the general case of unmarked reversal, a language model can disregard split points where a valid palindrome does not occur locally. Since all substrings of $a^p$ are palindromes, the language model must deal with a larger number of candidates simultaneously.
\item[Dyck language] The language $D_2$ of strings with two kinds of balanced brackets.
\item[Hardest CFL] Designed by \citet{greibach:1973} to be at least as difficult to parse as any other CFL:
\begin{equation*}
\begin{split}
L_0 &= \{ x_1 \texttt{,} y_1 \texttt{,} z_1 \texttt{;} \cdots x_n \texttt{,} y_n \texttt{,} z_n \texttt{;} \mid {} \\
&\qquad n \geq 0, \\
&\qquad y_1 \cdots y_n \in \texttt{\$}D_2, \\
&\qquad x_i, z_i \in \{\texttt{,}, \texttt{\$}, \texttt{(}, \texttt{)}, \texttt{[}, \texttt{]}\}^\ast \}.
\end{split}
\end{equation*}
Intuitively, $L_0$ contains strings formed by dividing a member of $\texttt{\$}D_2$ into pieces ($y_i$) and interleaving them with ``decoy'' pieces (substrings of $x_i$ and $z_i$). While processing the string, the machine has to nondeterministically guess whether each piece is genuine or a decoy. Greibach shows that for any CFL $L$, there is a string homomorphism $h$ such that a parser for $L_0$ can be run on $h(w)$ to find a parse for $w$. See Appendix~\ref{sec:hardest_cfl} for more information.
\end{asparadesc}
\subsection{Data}
For each task, we construct a probabilistic context-free grammar (PCFG) for the language (see Appendix \ref{sec:grammars} for the full grammars and their parameters). We then randomly sample a training set of 10,000 examples from the PCFG, filtering samples so that the length of a string is in the interval $[40, 80]$ (see Appendix \ref{sec:lengthsample} for our sampling method). The training set remains the same throughout the training process and is not re-sampled from epoch to epoch, since we want to test how well the model can infer the probability distribution from a finite sample.
We sample a validation set of 1,000 examples from the same distribution and a test set with string lengths varying from 40 to 100, with 100 examples per length. The validation set is randomized in each experiment, but for each task, the test set remains the same across all models and random restarts. For simplicity, we do not filter training samples from the validation or test sets, assuming that the chance of overlap is very small.
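A naive version of the length-filtered sampling can be sketched as rejection sampling; the paper's actual method is more refined (Appendix~\ref{sec:lengthsample}), and the toy sampler below merely stands in for a PCFG sampler.

```python
import random

def sample_with_length_filter(sample, lo=40, hi=80, max_tries=10000):
    # Rejection sampling: draw from the given sampler until the string
    # length falls in [lo, hi]. Illustrative only; a real PCFG sampler
    # would replace `sample`.
    for _ in range(max_tries):
        w = sample()
        if lo <= len(w) <= hi:
            return w
    raise RuntimeError("no sample in length range")

# Toy stand-in for a PCFG sampler with varying string lengths.
rng = random.Random(0)
toy = lambda: "0" * rng.randint(1, 100)
w = sample_with_length_filter(toy)
```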
\subsection{Evaluation}
\label{sec:evaluation}
Since, in these languages, the next symbol cannot always be predicted deterministically from previous symbols, we do not use prediction accuracy as in previous work. Instead, we compute per-symbol cross-entropy on a set of strings $S$. Let $p$ be any distribution over strings; then:
\begin{align*}
H(S, p) &= \frac{\sum_{w \in S} -\log p(w)}{\sum_{w \in S} |w|}.
\end{align*}
We compute the cross-entropy for both the stack RNN and the distribution from which $S$ is sampled and report the difference. This can be seen as an approximation of the KL divergence of the stack RNN from the true distribution.
Technically, because the RNN models do not predict the end of the string, they estimate $p(w \mid |w|)$, not $p(w)$. However, they do not actually use any knowledge of the length, so it seems reasonable to compare the RNN's estimate of $p(w \mid |w|)$ with the true $p(w)$. (This is why, when we bin by length in Figure~\ref{fig:test}, some of the differences are negative.)
A benefit of using cross-entropy instead of prediction accuracy is that we can easily incorporate new tasks as long as they are expressed as a PCFG. We do not, for example, need to define a language-dependent subsequence of symbols to evaluate on.
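The metric is simple enough to state in code. The following sketch uses a uniform model over binary strings (which assigns each string of length $|w|$ probability $2^{-|w|}$) purely as a worked example; its per-symbol cross-entropy is exactly $\log 2$ nats.

```python
import math

def per_symbol_cross_entropy(S, logp):
    # H(S, p) = (sum over strings of -log p(w)) / (total symbol count).
    # `logp` maps a string to its log-probability under a model.
    return sum(-logp(w) for w in S) / sum(len(w) for w in S)

# Worked example: a uniform model over binary strings of known length.
S = ["0110", "01", "111000"]
uniform = lambda w: len(w) * math.log(0.5)
H_model = per_symbol_cross_entropy(S, uniform)  # = log 2 nats
# The reported metric is the difference H(S, model) - H(S, source).
```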
\subsection{Baselines}
We compare our NS-RNN{} against three baselines: an LSTM, the Stack LSTM of \citet{joulin+mikolov:2015} (``JM''), and the Stack LSTM of \citet{grefenstette+al:2015} (``Gref''). We deviate slightly from the original definitions of these models in order to standardize the controller-stack interface to the one defined in Section \ref{sec:controllerinterface}, and to isolate the effects of differences in the stack data structure, rather than the controller mechanism. For all three stack models, we use an LSTM controller whose initial hidden state is fixed to 0, and we use only one stack for the JM and Gref models. (In early experiments, we found that using multiple stacks did not make a meaningful difference in performance.) For JM, we include a bias term in the layers that compute the stack actions and network output. We do allow the no-op operation, and the stack reading consists of only the top stack cell. For Gref, we set the controller output~$\mathbf{o}'_t$ equal to the hidden state $\mathbf{h}_t$, so we compute the stack actions, pushed vector, and network output directly from the hidden state. We encode all input symbols as one-hot vectors; there are no embedding layers.
\subsection{Hyperparameters}
For all models, we use a single-layer LSTM with 20 hidden units. We selected this number because we found that an LSTM of this size could not completely solve the marked reversal task, indicating that the hidden state is a memory bottleneck. For each task, we perform a hyperparameter grid search for each model. We search for the initial learning rate, which has a large impact on performance, from the set $\{0.01, 0.005, 0.001, 0.0005\}$. For JM and Gref, we search for stack embedding sizes in $\{2, 20, 40\}$. We manually choose a small number of PDA states and stack symbol types for the NS-RNN{} for each task. For marked reversal, unmarked reversal, and Dyck, we use 2 states and 2 stack symbol types. For padded reversal, we use 3 states and 2 stack symbol types. For the hardest CFL, we use 3 states and 3 stack symbol types.
As noted by \citet{grefenstette+al:2015}, initialization can play a large role in whether a Stack LSTM converges on algorithmic behavior or becomes trapped in a local optimum. To mitigate this, for each hyperparameter setting in the grid search, we run five random restarts and select the hyperparameter setting with the lowest average difference in cross-entropy on the validation set. This gives us a picture not only of the model's performance, but of its rate of success. We initialize all fully-connected layers except for the recurrent LSTM layer with Xavier uniform initialization \citep{glorot+bengio:2010}, and all other parameters uniformly from $[-0.1, 0.1]$.
We train all models with Adam \citep{kingma+ba:2015} and clip gradients whose magnitude is above~5. We use mini-batches of size~10; to generate a batch, we first select a length and then sample~10 strings of that length. We train models until convergence, multiplying the learning rate by 0.9 after~5 epochs of no improvement in cross-entropy on the validation set, and stopping after 10 epochs of no improvement.
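The learning-rate schedule and stopping criterion can be replayed in a few lines; the function below is our own illustrative sketch (not the training code), operating on a sequence of validation cross-entropies.

```python
def train_schedule(val_losses, lr0, decay=0.9, patience=5, stop=10):
    # Multiply the learning rate by `decay` after every `patience`
    # consecutive epochs without validation improvement, and stop
    # training after `stop` such epochs. Returns the learning rate
    # in effect at each epoch.
    lr, best, since = lr0, float("inf"), 0
    history = []
    for loss in val_losses:
        if loss < best:
            best, since = loss, 0
        else:
            since += 1
            if since % patience == 0:
                lr *= decay
        history.append(lr)
        if since >= stop:
            break
    return history

# One improving epoch followed by a plateau: two decays, then stopping.
history = train_schedule([1.0] + [2.0] * 12, 0.01)
```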
{
\definecolor{color0}{rgb}{0.12156862745098,0.466666666666667,0.705882352941177}
\definecolor{color1}{rgb}{1,0.498039215686275,0.0549019607843137}
\definecolor{color2}{rgb}{0.172549019607843,0.627450980392157,0.172549019607843}
\definecolor{color3}{rgb}{0.83921568627451,0.152941176470588,0.156862745098039}
\pgfplotsset{lines/.style={semithick}}
\pgfplotsset{line0/.style={lines, color0, mark=triangle*, mark options={rotate=30}}}
\pgfplotsset{line1/.style={lines, color1, mark=triangle*, mark options={rotate=120}}}
\pgfplotsset{line2/.style={lines, color2, mark=triangle*, mark options={rotate=210}}}
\pgfplotsset{line3/.style={lines, color3, mark=triangle*, mark options={rotate=300}}}
\tikzset{bars/.style={opacity=0.12}}
\pgfplotsset{every axis/.style={
width=3.5in,height=2.4in,
title style={yshift=-4.5ex},
legend cell align={left},
legend style={at={(0.5,-0.33)},anchor=north,draw=none,/tikz/every even column/.append style={column sep=0.4cm}},
legend columns=-1,
tick style={color=black},
tick align=outside,
tick pos=left,
ymin=0,ytick distance=0.1,
scaled ticks=false,
ticklabel style={/pgf/number format/fixed,/pgf/number format/precision=5},
ylabel style={at={(axis description cs:-0.15,0.5)}},
}}
\begin{figure*}
\begin{minipage}[t]{\columnwidth}
\centering
\pgfplotsset{every axis/.append style={
xmin=0,xmax=160,xtick distance=50,
mark repeat=32,
}}
\pgfplotsset{line0/.append style={mark phase=0}}
\pgfplotsset{line1/.append style={mark phase=8}}
\pgfplotsset{line2/.append style={mark phase=16}}
\pgfplotsset{line3/.append style={mark phase=24}}
\tikzset{linelabel/.style={black,inner sep=2pt,font={\footnotesize}}}
{\pgfplotsset{every axis/.append style={xticklabels={,,}}}
\scalebox{0.8}{\input{figures/train-marked-reversal.tex}} \\
\scalebox{0.8}{\input{figures/train-unmarked-reversal.tex}} \\
\scalebox{0.8}{\input{figures/train-padded-reversal.tex}} \\
\scalebox{0.8}{\input{figures/train-dyck.tex}} \\
}
\scalebox{0.8}{\input{figures/train-hardest-cfl.tex}}
\caption{Cross-entropy difference in nats between model and source distribution on validation set, as a function of training time. Lines are averages of five random restarts, and shaded regions are standard deviations. After a random restart converges, the value of its last epoch is used in the average for later epochs.}
\label{fig:train}
\end{minipage}%
\hspace{\columnsep}%
\begin{minipage}[t]{\columnwidth}
\centering
\pgfplotsset{every axis/.append style={
xmin=40, xmax=112,
xtick={40,60,80,100},
mark repeat=8,
}}
\pgfplotsset{line0/.append style={mark phase=0}}
\pgfplotsset{line1/.append style={mark phase=2}}
\pgfplotsset{line2/.append style={mark phase=4}}
\pgfplotsset{line3/.append style={mark phase=6}}
\tikzset{linelabel/.style={black,inner sep=2pt,font={\footnotesize}}}
{\pgfplotsset{every axis/.append style={xticklabels={,,}}}
\scalebox{0.8}{\input{figures/test-marked-reversal.tex}} \\
\scalebox{0.8}{\input{figures/test-unmarked-reversal.tex}} \\
\scalebox{0.8}{\input{figures/test-padded-reversal.tex}} \\
\scalebox{0.8}{\input{figures/test-dyck.tex}} \\
}
\scalebox{0.8}{\input{figures/test-hardest-cfl.tex}}
\caption{Cross-entropy difference in nats on the test set, binned by string length. Some models achieve a negative difference, for reasons explained in \S\ref{sec:evaluation}. Each line is the average of the same five random restarts shown in Figure~\ref{fig:train}.}
\label{fig:test}
\end{minipage}
\end{figure*}
}
\section{Results}
We show plots of the difference in cross-entropy on the validation set between each model and the source distribution in Figure \ref{fig:train}. For all tasks, stack-based models outperform the LSTM baseline, indicating that the tasks are effective benchmarks for differentiable stacks. For the marked reversal, unmarked reversal, and hardest CFL tasks, our model consistently achieves cross-entropy closer to the source distribution than any other model. Even for the marked reversal task, which can be solved deterministically, the NS-RNN{}, besides achieving lower cross-entropy on average, learns to solve the task in fewer updates and with much higher reliability across random restarts. In the case of the mildly nondeterministic unmarked reversal and highly nondeterministic hardest CFL tasks, the NS-RNN{} converges on the lowest validation cross-entropy. On the Dyck language, which is a deterministic task, all stack models converge quickly on the source distribution. We hypothesize that this is because the Dyck language represents a case where stack usage is locally advantageous everywhere, so it is particularly conducive to learning stack-like behavior. On the other hand, we note that our model struggles on padded reversal, in which stack-friendly signals are intentionally made very distant. Although the NS-RNN{} outperforms the LSTM baseline, the JM model solves the task most effectively, though still imperfectly.
In order to show how each model performs when evaluated on strings longer than those seen during training, in Figure \ref{fig:test}, we show cross-entropy on separately sampled test data as a function of string length. All test sets are identical across models and random restarts, and there are 100 samples per length. The NS-RNN{} consistently does well on string lengths it was trained on, but it is sometimes surpassed by other stack models on strings that are outside the distribution of lengths it was trained on. This suggests that the NS-RNN{} conforms more tightly to the real distribution seen during training.
\section{Conclusion}
We presented the NS-RNN{}, a neural language model with a differentiable stack that explicitly models nondeterminism. We showed that it offers improved trainability and modeling power over previous stack-based neural language models; the NS-RNN{} learns to solve some deterministic tasks more effectively than other stack-LSTMs, and achieves the best results on a challenging nondeterministic context-free language. However, we note that the NS-RNN{} struggled on a task where signals in the data were distant, and did not generalize to longer lengths as well as other stack-LSTMs; we hope to address these shortcomings in future work. We believe that the NS-RNN{} will prove to be a powerful tool for learning and modeling ambiguous syntax in natural language.
\section*{Acknowledgements}
This research was supported in part by a Google Faculty Research Award. We would like to thank Justin DeBenedetto and Darcey Riley for their helpful comments, and the Center for Research Computing at the University of Notre Dame for providing the computing infrastructure for our experiments.
\bibliographystyle{acl_natbib}